Emotion Detection in Python
How to detect emotions on your face using camera and Python
Welcome back, developers!
Today, in this brief story, I want to share with you another little project in Python. Before diving into the description of the project, I suggest you read all the articles about Python that I wrote in the list Py with me on my Medium profile, especially if you aren't yet comfortable with Python.
Are you ready to follow me in this beautiful journey? Let’s go!
Note: don’t forget to see the final result at the end of the article.
Today I want to share Emotion Detection using Python, a very fun project that helps you become more skilled in Python.
Remember to follow me on my other social media, so you'll be updated every time I publish something new about Python. Below you'll find a list of my other social media.
This project was made completely in Python. It's basically a script that opens the PC camera to detect your face and, through an
.xml file, recognizes which emotion is on your face.
The project was made using PyCharm as the IDE, but you can use whatever IDE you prefer, or even the terminal. You can download PyCharm at this link. It's important to understand that if you want to run this project you need to install the cv2 package; it doesn't matter whether you install it in PyCharm or directly on your machine.
Since the project uses two different packages, it's crucial that you install them both, otherwise the code won't run.
The packages imported in this project are: OpenCV and deepface.
OpenCV is the huge open-source library for computer vision, machine learning, and image processing and now it plays a major role in real-time operation which is very important in today’s systems.
In the project you'll see that I wrote
import cv2. For those who are not experienced in Python: OpenCV-Python is the library of Python bindings designed to solve computer vision problems, and cv2 (in old OpenCV versions the interface was named cv) is the name that the OpenCV developers chose when they created the binding generators.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users.
In order to install these packages in our project, follow these steps:
1. Go to the Python Packages tab.
2. A panel with a search box will appear; type opencv in the search box and click the Install button on the right of the panel.
3. Wait for the installation to finish.
4. Then, in the same search box, type deepface and install it the same way.
Once you've done that, you can easily run the code.
Brief description of components
The project runs your machine's camera to recognize your face and detect your emotion. It's pretty accurate: it draws a square around your face and puts a text label showing the detected emotion.
To create the square I used the
cv2.rectangle() method from
OpenCV.
cv2.rectangle() method is used to draw a rectangle on any image.
- Syntax: cv2.rectangle(image, start_point, end_point, color, thickness)
– image: the image on which the rectangle is to be drawn.
– start_point: the starting coordinates of the rectangle, represented as a tuple of two values, i.e. (X coordinate, Y coordinate).
– end_point: the ending coordinates of the rectangle, represented as a tuple of two values, i.e. (X coordinate, Y coordinate).
– color: the color of the rectangle's border line. For BGR, we pass a tuple, e.g. (255, 0, 0) for blue.
– thickness: the thickness of the rectangle border line in px. A thickness of -1 px fills the rectangle with the specified color.
- Return value: it returns the image with the rectangle drawn on it.
The OpenCV package provides a training method (see Cascade Classifier Training) as well as pretrained models, which can be loaded using the cv::CascadeClassifier::load method.
In this project I used the pretrained model
haarcascade_frontalface_default.xml, which detects faces.
There are a lot of trained models that you can add to your project to make it more accurate. See this link.
To analyze the emotion detected on the face, the script uses a try-except block, in order to raise an exception whenever the video fails to open:
Last but not least, remember to release the video and destroy all the windows:
This frees the camera for other applications and cleans up the windows your script opened.
Below I’ll put the complete code, now let’s see the final result.
Not too bad in my honest opinion. 🤣🤣🤣🤣
Note: to quit and stop the script, press 'q'.
Thanks for reading!! 🎉🎉🎉