Real-time object detection using deep learning, Python, and OpenCV


In this video I demo real-time object detection using deep learning, Python, and OpenCV. The source code + tutorial can be found using the link above.
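For reference, here is a rough sketch of the kind of detection pipeline the tutorial builds (this is not the actual tutorial source; the model file names are the ones the downloadable code typically ships with, so adjust the paths to match your copy):

# Rough sketch of a MobileNet SSD detection loop with cv2.dnn on webcam frames.
from imutils.video import VideoStream
import imutils
import numpy as np
import time
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt.txt",
                               "MobileNetSSD_deploy.caffemodel")

vs = VideoStream(src=0).start()
time.sleep(2.0)  # camera warm-up

while True:
    frame = imutils.resize(vs.read(), width=400)
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.2:
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(frame, label, (startX, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()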
Comments
Author

Hi Adrian, big fan of your work.
I have to ask: in VideoStream(0), if in place of (0) I put an IP address like VideoStream('http..'), it gives me something like a "no shape" error.
What should I do to run an IP camera with your code? Please answer my question.
Thanks

DarkShadow
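A minimal sketch of one way to read an IP camera stream with OpenCV while guarding against empty frames (the "no shape" error usually means the frame came back as None because the stream could not be opened). The URL below is only a placeholder, not a real endpoint:

import cv2
import imutils

stream_url = "http://192.168.1.64:8080/video"  # placeholder MJPEG endpoint
cap = cv2.VideoCapture(stream_url)
if not cap.isOpened():
    raise RuntimeError("could not open the IP camera stream")

while True:
    grabbed, frame = cap.read()
    if not grabbed or frame is None:
        # a None frame is what later blows up with a "no shape" error
        break
    frame = imutils.resize(frame, width=400)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()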
Author

How do I add a voice feedback function to the code? For example, when an object is detected, voice feedback is given for the object that was detected.

iamapilotstar
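One possible way to add spoken feedback (not part of the original code) is the pyttsx3 text-to-speech library; the announce() helper below is a hypothetical name, and note that runAndWait() blocks the detection loop while it speaks:

import pyttsx3

engine = pyttsx3.init()
announced = set()  # avoid repeating the same label on every frame

def announce(label):
    # speak a detected class label once
    if label not in announced:
        announced.add(label)
        engine.say("I see a {}".format(label))
        engine.runAndWait()

# inside the detection loop, once the label is known:
# announce(CLASSES[idx])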
Author

Where is the txt file you typed at the beginning in the cmd? Do we need to create it ourselves? Thank you.

zhixianjin
Author

Hi Adrian, do you have a tutorial on measuring the height of a person in real time using OpenCV?

jericreyechavez
Author

How do I add more objects to the model? Suppose I want it to detect a mobile phone as well; how do I do that?

hitanshurami
Author

On running the source code (Python file) I received in my email, I got an error stating: ModuleNotFoundError: No module named 'imutils'

misbahshaikh
Author

Code please, I have my FYP mid-year presentation on the 25th :-(

alinamustafa
Author

Could you please help me rectify this error?

blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
AttributeError: module 'cv2.dnn' has no attribute 'blobFromImage'

afkniladri
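cv2.dnn.blobFromImage was only added in OpenCV 3.3, so this AttributeError usually means an older OpenCV build is installed. A quick way to check:

import cv2

print(cv2.__version__)                    # needs to be 3.3.0 or newer
print(hasattr(cv2.dnn, "blobFromImage"))  # True on a new enough build

If this prints an older version, upgrading to an OpenCV 3.3+ build restores the attribute.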
Author

usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
error: the following arguments are required: -p/--prototxt, -m/--model
How do I fix this?

techlappy
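That usage message just means the script was started without its two required switches. A minimal sketch of the kind of argument parsing that produces it, with an example invocation in the comment (the model file names are assumptions based on what the tutorial download typically contains, so adjust them to your copy):

# Example invocation (adjust file names/paths to your download):
#   python real_time_object_detection.py \
#       --prototxt MobileNetSSD_deploy.prototxt.txt \
#       --model MobileNetSSD_deploy.caffemodel
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
                help="path to the Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
                help="path to the pre-trained Caffe model weights")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
                help="minimum probability to filter weak detections")
args = vars(ap.parse_args())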
Author

I am using the PiCamera and I got an error like this:

[INFO] loading model...
[INFO] starting video stream...
Traceback (most recent call last):
  File "real_time_object_detection.py", line 48, in <module>
    frame = imutils.resize(frame, width=400)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]

KandelDeepak
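The truncated traceback ends at image.shape, which is the classic symptom of the stream returning None. A sketch of a more defensive Pi camera setup (this assumes the picamera[array] package is installed in the same virtualenv and that usePiCamera=True is passed to imutils' VideoStream):

from imutils.video import VideoStream
import imutils
import time
import cv2

vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # give the sensor time to warm up

while True:
    frame = vs.read()
    if frame is None:
        # a None frame is exactly what makes imutils.resize fail on .shape
        print("[WARN] empty frame - is the Pi camera enabled and connected?")
        break
    frame = imutils.resize(frame, width=400)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()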
Author

Hello Adrian, do you use TensorFlow?

opemipodurodola
Author

Hi Adrian, I got the following error while running the code:
usage: new.py [-h] -i IMAGE -p PROTOTXT -m MODEL [-c CONFIDENCE]
new.py: error: argument -i/--image is required

archanamuraleedharan
Author

Can I add a speaker as a feature?
For example, when an object is detected, the speaker would say what object it is.
Is it possible to do?
Thanks a lot!

darrenmagturo
Author

Hello Adrian, may I know if it's possible to use a custom video stream for it, other than the webcam? Thank you. It works on Windows.

thomaschung
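Reading from a video file instead of the webcam is possible; one option (an assumption, not code from the tutorial) is imutils' threaded FileVideoStream, where "example.mp4" below is just a placeholder path:

from imutils.video import FileVideoStream
import imutils
import time
import cv2

fvs = FileVideoStream("example.mp4").start()
time.sleep(1.0)  # let the reader thread fill its queue

while fvs.more():
    frame = fvs.read()
    if frame is None:
        break
    frame = imutils.resize(frame, width=400)
    # ...the same blobFromImage / net.forward() detection code goes here...
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
fvs.stop()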
Author

I got an error like this:
usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
error: argument -p/--prototxt is required

I'm not familiar with this stuff and I'm seriously stuck. Please help.

CodeBlazeKAT
Author

Hello Adrian, the link is not working!

lokeshkumar
Author

Hi Adrian, it gives me an error like:

Traceback (most recent call last):
  File line 5, in <module>
    "execution_count": null,
NameError: name 'null' is not defined

Can you help me solve this error, and how can I run it?
Thank you

kinalgoti
Author

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Assumed to be defined elsewhere in the script:
#   net         - OpenCV DNN face detector (loaded with cv2.dnn.readNet*)
#   model1      - Keras emotion classifier expecting 48x48 grayscale input
#   emotion     - list of emotion label strings
#   plot_images - helper that displays the cropped face

def detect(image):
    # run the DNN face detector on the full-colour frame
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), [104, 117, 123], False, False)
    net.setInput(blob)
    detections = net.forward()

    # grayscale copy for the emotion classifier (it expects a single channel)
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    frameWidth = image.shape[1]
    frameHeight = image.shape[0]

    for i in range(detections.shape[2]):
        # detections[0, 0, i] = [_, class_id, confidence, x1, y1, x2, y2],
        # with the box coordinates normalised to [0, 1]
        confidence = detections[0, 0, i, 2]
        if confidence > 0.6:
            # scale the normalised box back to pixel coordinates and draw it
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)

            try:
                # crop the face, resize to the classifier's 48x48 input, scale to [0, 1]
                face = gray[y1:y2, x1:x2]
                img = cv2.resize(face, (48, 48), interpolation=cv2.INTER_CUBIC) / 255.

                # per-class emotion probabilities from the classifier
                prediction = model1.predict_proba(img.reshape(1, 48, 48, 1))

                plot_images(img, 1)
                plt.show()

                # report the most likely emotion
                print("Emotion Detected:", emotion[int(np.argmax(prediction[0]))])

                # bar chart of all emotion probabilities
                emt = [prediction[0][j] for j in range(len(emotion))]
                indx = np.arange(len(emotion))
                plt.bar(indx, emt)
                plt.xticks(indx, emotion)
                plt.show()
            except Exception:
                print("Error during resize. Probably can't detect any face")

Could anyone help me get this code running properly? I am using the DNN face detector. The code above runs, but it creates two different bounding boxes: one around the face and another off to the side where there is no face. Is there some issue in the above? Somebody told me that with the DNN face detector I don't have to convert to grayscale, but my neural net accepts grayscale (a single channel), so that is why I converted.
Apart from that, what is this part doing? I am a bit new to this.

if confidence > 0.6:
    x1 = int(detections[0, 0, i, 3] * frameWidth)
    y1 = int(detections[0, 0, i, 4] * frameHeight)
    x2 = int(detections[0, 0, i, 5] * frameWidth)
    y2 = int(detections[0, 0, i, 6] * frameHeight)
    cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)

babypiro
Author

Not to sound stuffy, but unless you can bridge the deployment obstacles, the parlor tricks are meaningless. VirtualBox will not run 24/7, VMs are too big, and Docker, well, it's Docker. So the only way to use your OpenCV is with a dedicated Linux box.

AFuller