MOODIFY – Suggestion of Songs on the basis of Facial Emotion Recognition Project

Moodify is a song recommender that suggests songs to the user according to their mood. ‘Moodify’ does the job, leaving the user free to get carried away with the music.

I/We, student(s) of B.Tech, hereby declare that the project entitled “MOODIFY (Suggestion of Songs on the basis of Facial Emotion Recognition)” is submitted to the Department of CSE in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in CSE. The roles of the team members involved in the project are listed below:

  • Training the model for facial emotion recognition.
  • Designing the algorithm for image segregation.
  • Designing the algorithm for the music player.
  • Designing the graphical user interface.
  • Testing the model.
  • Collecting data for the model and the music player.
  • Preprocessing the data and images.

Dataset:

The dataset we used is the “Cohn-Kanade” dataset.
The dataset is access-restricted, so we cannot redistribute it, but you can request it for download here:
http://www.consortium.ri.cmu.edu/ckagree/index.cgi
To read more about the dataset, refer to:
http://www.pitt.edu/~emotion/ck-spread.htm

Feature Extraction and Selection:

1. Lips
2. Eyes
3. Forehead
4. Nose

These facial regions are converted to a NumPy array and processed by the CNN layers, which extract and select the features; the model is then trained on them to produce the three mood classifications used below (Happy, Excited, Sad).
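To make this concrete, here is a minimal sketch of this kind of CNN in Keras, assuming 48x48 grayscale face crops and the three mood classes; the layer sizes are illustrative, not the project's exact architecture.

```python
from tensorflow.keras import layers, models

# Illustrative CNN: 48x48 grayscale face crops in, 3 mood classes out.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # Happy / Excited / Sad
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# X: NumPy array of face crops, y: one-hot mood labels
# model.fit(X, y, epochs=20, validation_split=0.2)
```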

How this project works:

  • First, open the application and choose the mode in which you want to listen to songs.
  • It then shows “YOUR MOOD, YOUR MUSIC”.
  • Press “OKAY” to capture the image.
  • Then press “c” to capture (see the capture sketch after this list).
  • If you seem Happy, it asks you to select your favorite genre.
  • If you seem Excited, it asks you to select your favorite genre.
  • If you seem Sad, it asks you to select your favorite genre.
CODE DESCRIPTION

  • Importing all required libraries.
  • Model initialization and building.
  • Splitting the data into training and test sets, and training the model.
  • Saving the model.
  • Loading a saved model.
  • Cropping and saving the image with OpenCV, loading it, and running the prediction (see the sketch after this list).
  • Suggesting songs in offline mode.
  • Suggesting songs online (YouTube).
  • The rest of the GUI.
  • Variable explorer.
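As a rough illustration of the save / load / predict items above, here is a sketch assuming a Keras model and OpenCV’s bundled Haar cascade; the file names and the 48x48 input size are assumptions, not the project’s exact values.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# model.save("moodify_fer.h5")             # done once, after training
model = load_model("moodify_fer.h5")       # load the saved model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("capture.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(img, 1.3, 5):
    face = cv2.resize(img[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1))
    mood = ["Happy", "Excited", "Sad"][int(np.argmax(probs))]
    print("You seem", mood)
```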

IPython Console

  • Importing Libraries
  • Model Training
  • Model Summary
  • Online Mode
  • Offline Mode

GUI

  • Splash Screen
  • Main Screen
  • Selection screen
  • Song screen: displays the songs; after you select one, it plays.

Summary

We successfully built a model for Facial Emotion Recognition (FER) and trained it to an average accuracy of over 75% across various test sets. We then built a desktop application that suggests songs on the basis of the user's facial expression, completing our project. This FER model can be widely used for purposes such as home automation, social media, and e-commerce, and we are motivated to take this project to the next level.


Detection of Currency Notes and Medicine Names for the Blind People Project

OBJECTIVE:

We have seen blind people facing many problems in our society, such as being handed fake currency notes, so we have come up with solutions to some of the problems they face. As they are blind, they are not able to read a medicine's name and always depend on another person for help, and some people take advantage of their disability and cheat them by taking extra money or giving them less. This Currency Notes Detection project aims to make them independent both financially and in terms of their medical needs.

METHODOLOGY:

To overcome these problems we have come up with an innovative idea that makes use of machine learning, image processing, OpenCV, text-to-speech, and OCR technologies to make blind people's lives more comfortable.
In this Currency Notes Detection project, a camera provides the input: pictures of medicines and currency notes. These images are manipulated using image processing and OpenCV. Once the processed image is obtained, it is cropped and thresholded; in the next stage the name of the medicine is extracted, and that text is converted into speech using text-to-speech technology.
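A minimal sketch of this medicine-name pipeline is below, assuming pytesseract for the OCR stage and pyttsx3 for offline text-to-speech; the source does not name the exact libraries, so these are illustrative choices.

```python
import cv2
import pytesseract
import pyttsx3

img = cv2.imread("medicine.jpg")                      # captured label photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu thresholding to boost label contrast before OCR
_, thresh = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(thresh).strip()    # extract the name
if text:
    engine = pyttsx3.init()                           # offline TTS engine
    engine.say("The medicine name is " + text)
    engine.runAndWait()
```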

Similarly, we take a picture of a currency note and, using image processing and machine learning, compare it with a predefined database of currency images that we have already prepared. The value of the note is then converted into text, and the text is converted into speech using text-to-speech technology.
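One plausible way to implement the comparison, sketched below, is ORB feature matching against a small folder of reference note images: the note whose reference yields the most feature matches wins. The file names and the matching technique itself are assumptions, as the source does not specify them; the recognized value would then feed the same text-to-speech step as above.

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

query = cv2.imread("note.jpg", cv2.IMREAD_GRAYSCALE)   # captured note
_, q_desc = orb.detectAndCompute(query, None)

# Hypothetical reference database: one image per denomination.
references = {"10": "ref_10.jpg", "50": "ref_50.jpg", "100": "ref_100.jpg"}

best_value, best_score = None, 0
for value, path in references.items():
    ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, r_desc = orb.detectAndCompute(ref, None)
    if q_desc is None or r_desc is None:
        continue
    score = len(bf.match(q_desc, r_desc))              # matched features
    if score > best_score:
        best_value, best_score = value, score

if best_value is not None:
    print("This looks like a", best_value, "rupee note")  # then speak it
```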

Block Diagram: (figure not included)

Technology Used:

  • Image processing: to extract the necessary information.
  • OpenCV: thresholding, color conversion, scanning, cropping, setting the grey level, and extracting contours.
  • Python 3: to set up the environment and interact with the devices.
  • OCR (Optical Character Recognition): to read the medicine name off the label.
  • Machine learning: a classifier trained on the currency image database to recognize note denominations.

Results

The Detection of Currency Notes and Medicine Names for the Blind People project helps a blind person detect currency notes and medicine names. With it, a blind person can take care of himself without the help of caretakers, making life easier and simpler. The talk-back feature helps them access the application easily, without complications.

  • This project helps blind people verify the currency they receive or hand over, so they are not cheated with the wrong notes. This makes them economically stable and strong.
  • Beyond currency detection, the project also helps blind people recognize the name of a tablet and know the dosage they need to take for it.

This Currency Notes Detection project helps blind persons both financially and from a health perspective, making their lives easier and giving them confidence.

Applications

  • Blind persons can recognize the correct currency without being cheated in any type of money transaction.
  • Blind persons need not depend on others to know which medicines to take at a particular time.

Advantages

  • The project runs on a mobile phone alone; there is no need to buy any extra hardware.
  • It is implemented with TalkBack on Android and VoiceOver on iOS, so blind people can easily access the application.
  • Easy to set up.
  • Open-source tools were used for this project.
  • Accessible to all devices irrespective of the OS.
  • Cheap and cost-efficient.

Disadvantages

  • It is very difficult to determine whether a note is fake when it is an exact copy of a real one.
  • For the medicine part, the photo must be taken of the side on which the medicine's name is printed.

Conclusion

This work shows how visually impaired people (blind persons) can protect themselves from being cheated in money transactions, and how to reduce their dependence on other people to take the right amount of medicine at the right time. Whenever the blind person takes an image with their phone camera, the image is compared with the dataset that was created.

If, after comparison, the accuracy is above the threshold value, the system gives spoken feedback stating the value of the currency. Similarly, for medicine detection, it extracts the name of the medicine and gives spoken feedback on how many times the person needs to take it, making this work an assistant for a blind person.

Future Scope

• By including a dataset of people's photos, the system could also identify the person a blind user meets.
• It could also be used to track the blind person using GPS.

Detecting Impersonators in Examination Centres using AI

 

Detecting impersonators in examination halls is important for a better examination-handling system that can reduce malpractice in examination centres. According to recent news reports, 56 JEE candidates who were potential impersonators were detected by a national testing agency. To solve this problem, an effective method requiring less manpower is needed.

With the advancement of machine learning and AI technology, this problem is easier to solve. In this project we develop an AI system in which images of students, along with their names and hall-ticket numbers, are pre-trained using the KDTree algorithm and the model is saved. Whenever a student enters the classroom, the student looks at the camera and enters; after the given time, or once the hall is full, the session is stored as a video file in which each face is tagged with the student's name and hall-ticket number. If the admin finds an “unknown” tag on any face, the admin can recheck and trace the impersonator.
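Here is a minimal sketch of the matching step, assuming the face_recognition library for 128-dimensional face embeddings and scikit-learn's KDTree for the nearest-neighbour lookup; the names, file paths, and the 0.6 distance threshold are illustrative, not the project's actual values.

```python
import numpy as np
import face_recognition
from sklearn.neighbors import KDTree

# Hypothetical enrolment data: images keyed by "name_hallticket" labels.
students = {"Alice_HT1234": "alice.jpg", "Bob_HT5678": "bob.jpg"}

encodings, labels = [], []
for label, path in students.items():
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_encodings(image)
    if faces:                          # use the first detected face
        encodings.append(faces[0])
        labels.append(label)

tree = KDTree(np.array(encodings))     # KD-tree over 128-D embeddings

def identify(frame, threshold=0.6):
    """Tag every face in a video frame with a label or 'unknown'."""
    for enc in face_recognition.face_encodings(frame):
        dist, idx = tree.query(enc.reshape(1, -1), k=1)
        yield labels[idx[0][0]] if dist[0][0] < threshold else "unknown"
```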

Problem statement:

Detecting impersonators in examination halls is important for reducing malpractice in examination centres; according to recent news reports, 56 JEE candidates who were potential impersonators were detected by a national testing agency.

Existing system:

The information given on the hall ticket is used to verify whether the student is an impersonator. Manual security checks are not perfect, and students can sometimes even swap the photo on the hall ticket.

Disadvantages:

Manual verification means checking each student personally, which is not feasible for every student.

Photos on hall tickets can be changed, and there is no method to verify them.

Proposed system:

  • In the proposed system, images of each student are collected first; each student's dataset consists of 50 images. These images are trained using the KDTree algorithm with image processing techniques, and the model is saved in the system. The model can then be used for automatic recognition of students in exam halls from live video or images.

Advantages:

  • Student verification is fast and accurate with minimal effort, and live verification reduces the impersonator problem.
  • Prediction and processing take little time, and prediction is done automatically using the trained model.
  • The trained model can track live video, automating the detection of students at exam centres and displaying them in the video.

SOFTWARE REQUIREMENTS:

  • Operating system: Windows XP/7/10
  • Coding language: Python
  • Development kit: Anaconda
  • Libraries: Keras, OpenCV
  • Dataset: any student dataset

Movie Character Recognition From Video And Images Project

Live tracking of characters in movies helps automate classification for user-friendly information management systems, such as online platforms where the characters of a movie can be viewed before watching it. At present a manual method is used, which this movie-character classification method can automate. The objective of this work is to collect a dataset of a movie's characters and train a model that captures the facial features of all the characters; the model is then saved for prediction.

For testing purposes, real-time live video can be used to track characters. The application also works on images: the user gives an image of trained movie characters as input and gets the image back with the character names on it as output. For training on the dataset, this project uses the KDTree algorithm, which takes images from a given folder, trains on each image, and saves the model as a dump file on the system. In the second stage, an input image or video is predicted with the trained model and the result is shown as a video or image.
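A minimal sketch of this second (prediction) stage is below, assuming the saved dump is a scikit-learn KNeighborsClassifier (which can use a KD-tree internally) trained on flattened 64x64 grayscale face crops; the file names and preprocessing are assumptions, not the project's exact pipeline.

```python
import cv2
import joblib

model = joblib.load("characters_kdtree.pkl")      # hypothetical dump file
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("movie_clip.mp4")          # or 0 for a live camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).flatten()
        name = model.predict([face])[0]           # nearest-neighbour label
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, str(name), (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Characters", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```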

Problem statement:

Manually classifying the characters of each movie is a time-consuming process, and a database has to be maintained.

Objective:

The objective of this project is to develop automatic classification of characters after training on the dataset. Once the model is created, it can be used at any time for prediction from images or video.

Existing system:

In the existing system, movie characters are managed in a database and displayed when required. In this process the database is critical, and the time taken for processing is high.

Disadvantages:

  • Processing takes longer, and the database must be managed and integrated with the required system whenever needed.
  • The method involves manual data collection, updating, and deletion.

Proposed system:

In the proposed system, a dataset of the respective movie's characters is collected first; each character's dataset consists of 50 images. These images are trained using the KDTree algorithm with image processing techniques, and the model is saved in the system. The model can then be used for automatic prediction of characters from live video or images.

Advantages:

  • Prediction and processing take little time, and prediction is done automatically using the trained model.
  • The trained model can track live video, automating the detection of characters and displaying them on screen.

SOFTWARE REQUIREMENTS:

  • Operating system: Windows XP/7/10
  • Coding language: Python
  • Development kit: Anaconda
  • Libraries: TensorFlow, Keras, OpenCV
  • Dataset: any movie dataset

Drowsiness Detection using OpenCV Project

Abstract:

The new kind of security system discussed in this project is based on machine learning and artificial intelligence. Passenger safety is the main concern of vehicle designers, and most accidents are caused by drowsy and fatigued driving. To provide better security and save passengers' lives, airbags were designed, but they help only after an accident occurs.

But the main problem is that we still see many accidents happen, and many people lose their lives. In this project we use the OpenCV library for image processing, taking the user's live video as input along with training data, to detect whether the person in the video is closing their eyes or showing any symptoms of drowsiness and fatigue; the application then verifies against the trained data, detects drowsiness, and raises an alarm to alert the driver.

Existing system:

There are various methods, such as detecting objects near the vehicle, front and rear cameras for detecting approaching vehicles, and airbag systems that can save lives after an accident occurs.

Disadvantages:

Most of the existing systems rely on external factors, inform the user about the problem, and save users only after an accident occurs; yet research shows that most accidents are due to driver faults such as drowsiness and falling asleep while driving.

Proposed system:

To deal with this problem and provide an effective system, a drowsiness detection system can be developed and placed inside any vehicle. It takes live video of the driver as input and compares it with the training data; if the driver shows any symptoms of drowsiness, the system automatically detects them and raises an alarm that alerts the driver and the other passengers.
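Below is a minimal sketch of such a detector using plain OpenCV, where closed eyes are approximated by the Haar eye detector failing to find eyes for several consecutive frames; this is a simplification for illustration, not the project's exact method.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_FRAMES_LIMIT = 15   # ~0.5 s at 30 fps before raising the alarm
closed = 0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_found = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]          # upper half of the face
        if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) > 0:
            eyes_found = True
    closed = 0 if eyes_found else closed + 1
    if closed >= CLOSED_FRAMES_LIMIT:
        cv2.putText(frame, "DROWSINESS ALERT!", (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 3)
        # a real system would also sound a buzzer / play an audio alarm here
    cv2.imshow("Drowsiness Monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```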

Advantages:

This method detects the problem before an accident occurs and informs the driver and other passengers by raising an alarm.

OpenCV-based machine learning techniques are used for the automatic detection of drowsiness.

SOFTWARE REQUIREMENTS: 

  • Operating system: Windows 7
  • Coding language: Python
  • Tools: Anaconda, Visual Studio Code
  • Libraries: OpenCV