Body Fitness Prediction using Random Forest Classifier Project

Purpose of the Project

To avoid several health issues, we should monitor our body fitness using gadgets such as smartwatches, oximeters, and BP machines, which let us track blood pressure, calories burnt, bone weight, and more. These devices exchange data with smart devices over the Bluetooth communication protocol. In this project, we import a dataset consisting of date, step count, mood, calories burned, hours of sleep, a boolean activity indicator, and weight in kg, split it into training and testing sets, and train a random forest classifier.

Existing problem

Body fitness prediction plays a key role in leading a healthy life. Fitness is a state of health and well-being, more specifically the ability to perform daily activities. Body fitness is generally achieved through proper nutrition, physical exercise, and rest; when these are neglected, we lose our body fitness, which leads to various chronic issues.

Proposed solution

Importing Dataset

Exploratory Data Analysis (e.g. inspecting the dataset dimensions with df.shape)

As outlined above, we import the dataset, explore it, split it into training and testing sets, and train a random forest classifier on it.

EXPERIMENTAL INVESTIGATIONS

Dataset:

We will use the body fitness prediction dataset which was retrieved from Kaggle.com.

  • Check whether there are associations between physical activity (step count), caloric expenditure, body weight, hours of sleep, and the self-perceived feeling of being active or inactive.
  • Compare caloric expenditure between the categories of mood and self-perceived activity (active and inactive)
  • Compare the hours of sleep between the categories of mood and self-perceived activity (active and inactive)
  • Compare body weight between categories of self-perceived activity (active and inactive)
  • Database: the database has 96 observations and 7 columns. Its quantitative variables are "number of steps" (step_count), "caloric expenditure" (calories_burned), "hours of sleep" (hours_of_sleep), and "body weight" (weight_kg). Its qualitative variables are "dates" (date), "mood" (mood), and "self-perceived activity", active or inactive (bool_of_active). The mood variable was encoded with the value 300 for "Happy", 200 for "Neutral", and 100 for "Sad"; self-perceived activity is recorded as active or inactive.
  • Contingency tables of the categorical variables will be presented.
  • A correlation matrix between variables will be presented.
  • Bar charts and violin plots will demonstrate the distribution of quantitative variables by category.
  • Scatter plots will be used to analyze possible linear relationships between two variables.
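
For illustration, a minimal EDA sketch along these lines is shown below. This is a sketch only: the local file name fitness.csv and the plotting choices are assumptions, while the column names follow the dataset description above.

```python
# Minimal EDA sketch for the fitness dataset; the file name is an assumption.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("fitness.csv")
print(df.shape)  # expected: (96, 7)

# Contingency table of the categorical variables
print(pd.crosstab(df["mood"], df["bool_of_active"]))

# Correlation matrix of the quantitative variables
quant = df[["step_count", "calories_burned", "hours_of_sleep", "weight_kg"]]
sns.heatmap(quant.corr(), annot=True, cmap="coolwarm")
plt.show()

# Violin plot: caloric expenditure by mood category
sns.violinplot(x="mood", y="calories_burned", data=df)
plt.show()
```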

RESULT

Output result

RANDOM FOREST CLASSIFIER

Random Forest Classifier

CORRELATION PLOT

Correlation Plot

FINAL RESULT:

Body fitness prediction Output

APPLICATIONS

Many different kinds of applications are used today to predict the fitness of human beings; this project is one such application.

TRAINING AND TESTING:

Splitting the data:

We use the train_test_split function from the sklearn.model_selection module for the training and testing split.

Dependent and Independent variables:

Independent variables are the variables on which the activity indicator depends.

The dependent variable is the variable predicted from the other variables' values.

The independent variables are mood, step_count, calories_burned, hours_of_sleep, and weight_kg.

The dependent variable is bool_of_active.

MODEL BUILDING:

We use a Random Forest classifier to predict body fitness because it gives accurate predictions.
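
A minimal sketch of the split and model-building steps described above follows. The column names match the dataset description; the file name, test size, and hyperparameters are assumptions.

```python
# Train/test split and Random Forest classifier; hyperparameters are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("fitness.csv")  # assumed file name

X = df[["mood", "step_count", "calories_burned", "hours_of_sleep", "weight_kg"]]
y = df["bool_of_active"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```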

CONCLUSION

We analyzed the body fitness data and used machine learning to predict the fitness of a human being. We used a Random Forest classifier and its variations to make predictions and compared their performance; among the variants compared, the XGBoost regressor had the lowest RMSE, making it a good choice for this problem.

Intelligent Customer Help Desk Python and Node-Red Project

Project Summary:

In this Intelligent Customer Help Desk project, we need to create a chatbot application that can answer questions that fall outside the scope of the predetermined question set.

This can be done using a chatbot that leverages the Smart Document Understanding feature of Watson Discovery.

Project Requirements:

IBM Cloud, IBM Watson, Python, Node-Red.

Project Scope:

In this Python and Node-RED project, we first need to create a website using HTML. Next, we create a chatbot with the help of IBM Watson Assistant and Watson Discovery.

Using Node-Red we need to build a web application that integrates all services and deploys the same on the IBM cloud.

This project will answer all user queries; if any question falls outside the scope of the predetermined question set, it will use the Smart Document Understanding feature of Watson Discovery, trained on which text in the owner's manual is important and which is not.
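
A hedged sketch of this fallback logic, using the ibm-watson Python SDK, is shown below. The service URLs, API key, assistant and project IDs, and the low-confidence threshold are all placeholders and assumptions, not values from this project.

```python
# Sketch: answer from Watson Assistant; fall back to Watson Discovery
# when the assistant has no confident answer. All IDs/URLs are placeholders.
from ibm_watson import AssistantV2, DiscoveryV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

auth = IAMAuthenticator("YOUR_API_KEY")

assistant = AssistantV2(version="2021-06-14", authenticator=auth)
assistant.set_service_url("YOUR_ASSISTANT_URL")

discovery = DiscoveryV2(version="2020-08-30", authenticator=auth)
discovery.set_service_url("YOUR_DISCOVERY_URL")

def answer(question: str) -> str:
    resp = assistant.message_stateless(
        assistant_id="YOUR_ASSISTANT_ID",
        input={"message_type": "text", "text": question},
    ).get_result()
    intents = resp["output"].get("intents", [])
    if intents and intents[0]["confidence"] > 0.5:  # assumed threshold
        return resp["output"]["generic"][0]["text"]
    # Out-of-scope question: search the SDU-trained owner's manual instead.
    hits = discovery.query(
        project_id="YOUR_PROJECT_ID", natural_language_query=question
    ).get_result()
    return hits["results"][0]["document_passages"][0]["passage_text"]
```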

This will improve the answers returned from the queries.

Class Scheduling System Python Project using Django Framework

Present issues:

  • No digital class management system
  • Fixed timetable which cannot be changed throughout the semester
  • Cannot swap classes easily
  • No publishing mechanism
  • No administrator
  • Students cannot access the present-day schedule

Proposed solution:

  • Dynamic mechanism to change weekly class schedules
  • Publish new schedule after changes
  • Fully manageable through administrator privileges
  • Secured using username and password credentials
  • Schedule accessible on the internet
  • Administrators can access the portal onsite only
  • The system can also be implemented in other departments

Architecture

  • Any machine can connect to the server
  • Administrators can access it only from the campus network
  • Students and faculty can access it as long as there is internet
  • Server-side will manage access and manipulation rights
  • The server will also publish a current schedule

Technologies

Django Framework:

  • Manages all 3 tiers (MVT: Model, View, Template) needed to run the web application.
  • The front tier (client) uses HTML and CSS rendered through Django templates.
  • The server side uses Python to implement the logic for managing model-based objects.
  • The server side also enforces security standards.
  • The back end contains a built-in database, accessible via a web address generated by the Django framework's built-in server.
  • The web application can be deployed once its construction is complete.
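
For illustration, a minimal sketch of the Model and View tiers for a schedule entry follows. The model name and fields are assumptions for illustration, not the project's actual schema.

```python
# models.py -- hypothetical schedule model; field names are illustrative.
from django.db import models

class ClassSlot(models.Model):
    course = models.CharField(max_length=100)
    faculty = models.CharField(max_length=100)
    day = models.CharField(max_length=9)   # e.g. "Monday"
    start_time = models.TimeField()
    end_time = models.TimeField()

# views.py -- students and faculty view the current published schedule.
from django.shortcuts import render

def current_schedule(request):
    slots = ClassSlot.objects.order_by("day", "start_time")
    return render(request, "schedule.html", {"slots": slots})
```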

Use Case Diagram

Class Scheduling System

Interface Diagram:

Interface Diagram

Output Screenshot:

LSTM based Automated Essay Scoring System Python Project using HTML, CSS, and Bootstrap

Introduction

Essays are a widely used tool to assess the capabilities of a candidate for a job or an educational institution. Writing an essay given a prompt requires comprehension of a given prompt, followed by analysis or argumentation of viewpoints expressed in the prompt, depending on the needs of the testing authority. They give a deep insight into the reasoning abilities and thought processes of the author, and hence are an integral part of standardized tests like the SAT, TOEFL, and GMAT.

With essays comes the need for personnel qualified enough to grade the essays appropriately and rank them on the basis of various testing criteria. Our project aims to automate this grading process with the aid of deep learning, in particular using Long Short-Term Memory (LSTM) networks, which are a special kind of RNN.

Automated Essay Scoring (AES) allows the instructor to assign scores easily to participants using a pre-trained deep learning model. The model is trained so that the scores it assigns agree with the instructor's previous scoring patterns, which requires a dataset containing the scores the instructor has given previously. AES uses Natural Language Processing, a branch of artificial intelligence that enables the trained model to understand and interpret human language, to assess essays written in human language.

Problem Definition

Given the growing number of candidates applying for standardized tests every year, finding a proportionate number of personnel to grade the essay component of these tests is an arduous task. These personnel must be skilled and capable of analyzing essays, scoring them according to the requirements of the institution, and discerning between the good and the excellent.

In addition to this, there are a lot of time constraints in grading multiple essays. This can prove to be cumbersome for a limited number of human essay graders. Having to grade several essays within a deadline can compromise the quality of grading done. Thus, there is a clear need to automate this process so that the institution carrying out the grading can focus on evaluating other aspects of the candidate’s profile.

The challenge was to create a web application to take in the essay and predict a score. We need to train a neural network model to predict the score of the essay in accordance with the rater. The model is to be made using LSTM.

Approach

In order to meet the need for automation of essay grading, we propose an application that provides an interface for users to choose an essay prompt of their choice and provide a response for the same. The user’s response is graded by the application within seconds and a score is displayed.

This application makes use of Natural Language Processing, which performs operations on the textual input, and an LSTM, which is trained on how to grade essays. It also uses the Word2Vec embedding technique to convert each essay into a vector so that the model can be trained. The application addresses the issue of time constraints: automated grading takes place within seconds, compared to manual grading, which requires minutes per essay. The net amount of time saved over a period of consistent use is vast, and the costs of maintaining human graders are saved as well.

The application gives an output from the pre-trained LSTM model. The model is trained using a dataset provided by Hewlett Foundation in 2012 for a competition on Kaggle.
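
A hedged sketch of this pipeline is shown below, using gensim's Word2Vec for illustration. The layer sizes, dropout rate, and the averaging of word vectors per essay are assumptions; the project's exact architecture may differ.

```python
# Sketch: essays -> averaged Word2Vec vectors -> LSTM regression of the score.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

# `tokenized_essays` is assumed: a list of token lists, one per essay.
w2v = Word2Vec(sentences=tokenized_essays, vector_size=300, min_count=1)

def essay_vector(tokens, model, size=300):
    """Average the Word2Vec vectors of a tokenized essay."""
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(size)

X = np.array([essay_vector(e, w2v) for e in tokenized_essays])
X = X.reshape(X.shape[0], 1, X.shape[1])  # (samples, timesteps, features)

model = Sequential([
    LSTM(300, input_shape=(1, 300)),
    Dropout(0.4),
    Dense(1, activation="relu"),          # regression output: the essay score
])
model.compile(loss="mse", optimizer="rmsprop")
# model.fit(X, scores, epochs=50, batch_size=64)  # `scores` from the dataset
```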

Web Application (Output)

The front end of the application was implemented using HTML, CSS, and Bootstrap. It provides the option for users to choose from a set of prompts and write an essay accordingly or to grade their own custom essay.

The landing page of the application:

Automated Essay Scoring System

Software Specifications

This application is developed primarily using Python, for the purposes of running the app. The model was built and trained on Jupyter Notebook. The front end of the application was designed with HTML, CSS, and Bootstrap. All the components of this application were integrated with the help of the Flask App, and the final project was deployed on IBM Cloud.

While training the model, the dataset was imported with the Pandas library (v1.3.0). NumPy v1.19.2 was used to handle array data structures. Natural Language ToolKit v3.6.2 was used to tokenize essays into English sentences and to remove stopwords so the sentences contain only relevant words. The RegEx (re) package v2.2.1 was used to remove unnecessary punctuation and symbols from the essays. Our model utilizes the Word2Vec technique to convert words to corresponding vectors; Word2Vec v0.11.1 was used for this. TensorFlow v2.5.0 was used to build the model, and scikit-learn v0.24.2 was used for data preprocessing.

To make use of the application, the user needs to have access to a stable internet connection and an operating system compatible with the latest versions of most browsers. In the absence of an internet connection, the application can be run locally. Still, the user needs to have the authorization to access the source code of our project for the same, which is not recommended for intellectual property purposes.

Future Scope

This application could be integrated and used by several testing institutions to meet their needs for essay grading. The model used could be trained with an increasing number of input essays to further improve its accuracy. The model could also be trained on giving a score on specific criteria of essay grading such as relevancy, linguistic and reasoning ability of the author. Research could be conducted on making the model faster. This technology could also be extended for use with languages other than the English language, effectively rendering it useful on a worldwide level.

Analysing Region Wise E-Commerce Data Using IBM Cognos Dashboard

Analysing E-Commerce Data Project Objectives 

  • Understand fundamental concepts and be able to work with IBM Cognos Analytics.
  • Gain a broad understanding of plotting different graphs.
  • Be able to create meaningful dashboards.

Project Flow

  • Users create multiple analysis graphs/charts.
  • A dashboard is built from the analysis charts.
  • The final dashboard is saved and visualized in IBM Cognos Analytics.

To accomplish this, we have to complete all the activities and tasks listed below:

  • Work with the dataset
  • Understand the dataset
  • Build a Data Module in Cognos Analytics

Understand The Dataset 

The data was sourced from Kaggle.

Let’s understand the data in the file we’re working with, i.e. US Superstore data.csv, and give a brief overview of what each feature represents or should represent:

  • Row ID – Unique ID for each entry.
  • Order ID – Unique ID for each order.
  • Order Date – Date on which the order was placed.
  • Ship Date – Date on which the order was shipped.
  • Ship Mode – Mode of shipping the order.
  • Customer ID – Unique ID for each Customer.
  • Customer Name – Name of the Customer.
  • Segment – Segment to which the Customer belongs.
  • Country – Country to which the Customer belongs.
  • City – City to which the Customer belongs.
  • State – State to which the Customer belongs.
  • Postal Code – Postal Code of the Customer.
  • Region – Region to which the Customer belongs.
  • Product ID – Unique ID for each Product.
  • Category – Category to which the product belongs.
  • Sub-Category – Sub-Category to which the product belongs.
  • Product Name – Name of the product.
  • Sales – Sales fetched.
  • Quantity – Quantity of the product sold.
  • Discount – Discount Given.
  • Profit – Profit fetched.

Build A Data Module In Cognos Analytics 

In Cognos Analytics, a Data Module serves as a data repository. It can be used to import external data from files on-premise, data sources, and cloud data sources. Multiple data sources can be shaped, blended, cleansed, and joined together to create a custom, reusable and shareable data module for use in dashboards and reports.

Visualization Of The Dataset 

In Cognos, we can create any number of visualizations. In the data exploration part, we will plot multiple data visualization graphs to get insights from our data, and once the explorations are done we will build our dashboard.

Once you’ve loaded all the CSV files into the data module, you can create the different explorations.

RESULT

Order Id by Region

Order ID by Quantity:

Order Id by Quantity

Sales and Profit by Year:

Sales and Profit by Year

Analysing Region Wise E-Commerce Data

Analysing Region Wise E-Commerce Data Using IBM Cognos Dashboard

CONCLUSION

From this Analysing E-Commerce Data project, we have successfully:

  • Created multiple analysis charts/graphs
  • Built a dashboard from the analysis charts
  • Saved and visualized the final dashboard in IBM Cognos Analytics

Intelligent Access Control for Safety Critical Areas Project using IoT Analytics and IBM Cloud Services

Purpose of the Project

  • Access control is done using a smart analytic device that verifies each person's entry.
  • The smart device verifies the persons entering the industry.
  • The details of each person are captured and uploaded to the cloud.
  • Using this IoT device, we can restrict the entry of unknown persons as well as persons who are not following the safety measures.

Existing Problem

The problem with the presently existing access-control devices is that they cannot identify whether persons are following the safety measures; they identify only the entry of the persons.

Proposed Solution

We can make use of IoT analytics in access control so that, during working hours in the industry, we can identify which persons are following the safety measures and which are not.

Also, with the usage of IoT, the details of each person are taken automatically, and we can restrict them.

Hardware/Software Designing

The Intelligent Access Control software design uses IBM Cloud services to create the Internet of Things platform. In the IoT platform, we create a virtual Raspberry Pi device; after creating it, we get the device credentials. We use these credentials in the Python program, and then we integrate the Node-RED platform with the IoT platform. With the help of MIT App Inventor, we designed the app and integrated it with Node-RED to observe the values.

Experiment Investigation

To complete our Intelligent Access Control project, we collected the required data from Google and research papers. After gaining the necessary knowledge, we worked according to our roles in the project. First we created the IBM Cloud account, then the Internet of Things Platform, after which we wrote Python code in IDLE to connect to the IBM IoT Platform. Next, we created the Node-RED services, which let us build visual flows. We connect Node-RED to the IBM IoT platform to receive the entry details captured by the device. From Node-RED we send the values to the MIT App, where we can view the details of the person.
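
For illustration, a minimal device-side sketch publishing an entry event to the IBM Watson IoT Platform with the wiotp-sdk package is shown below. The organization ID, device type/ID, token, and the event payload fields are placeholders and assumptions.

```python
# Sketch: a (virtual) Raspberry Pi publishing an access event to IBM IoT.
# All credentials and payload fields below are placeholders.
import wiotp.sdk.device

config = {
    "identity": {"orgId": "ORG_ID", "typeId": "raspberrypi", "deviceId": "access01"},
    "auth": {"token": "DEVICE_TOKEN"},
}
client = wiotp.sdk.device.DeviceClient(config=config)
client.connect()

event = {"person_id": "EMP-104", "safety_gear": True, "access": "granted"}
client.publishEvent(eventId="entry", msgFormat="json", data=event, qos=0)
client.disconnect()
```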

FLOWCHART

Flow Chart

MIT APP:

MIT App

ADVANTAGES & DISADVANTAGES

Advantages:

1) Increase ease of access for employers

2) Keep track of who comes and goes

3) Protect against unwanted visitors

4) Create a safe work environment

5) Reduce Theft and Accidents

6) Easy Monitoring

Disadvantages:

1) Access control systems can be hacked.

APPLICATIONS

1) Large Industries

2) In Airports

3) Government Sectors.

Employee Work Appreciation based on Customers Feedback Project using IBM Cognitive Services

PURPOSE OF THE PROJECT

The purpose of the Employee Work Appreciation based on Customers Feedback project is to appreciate an employee's work based on the feedback given by customers and employees. The feedback given by customers for a particular employee is analyzed, e.g. whether it is polite feedback, satisfied feedback, etc. Based on that, employees are given appreciation.

Block Diagram:

Block Diagram

Flow Chart Diagram:

Flow Chart Diagram

HARDWARE/SOFTWARE SOLUTION

1. IBM Cloud
2. IBM Watson Tone Analyzer
3. Node-RED
4. Create an employee database in the IBM Cloud and upload sample feedback JSON files for 4 employees.

EXPERIMENTAL INVESTIGATION

1. Choose a Project Idea:

Employee Work Appreciation based on Customers Feedback.

2. Conduct Background Research
3. Compose a Hypothesis:
Based on our study and the information gathered, we can decide how much appreciation an employee deserves.
4. Design your Experiment:
First, we need to collect employee reports in which feedback is given by the customers.
Next, we give those reports as input to the Tone Analyzer service, which predicts the emotion behind the feedback.
5. Draw Conclusions:
After building our model, we can know how well the employee is working and appreciate the employee's work based on analysis of customer feedback.
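
A hedged sketch of step 4 using the ibm-watson SDK's Tone Analyzer service follows. The API key, service URL, and the sample feedback string are placeholders, not values from this project.

```python
# Sketch: analyze the tone of one customer feedback entry.
# Credentials and the feedback string are placeholders.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("YOUR_SERVICE_URL")

feedback = "The staff member was extremely helpful and resolved my issue quickly."
result = tone_analyzer.tone(
    {"text": feedback}, content_type="application/json"
).get_result()
for tone in result["document_tone"]["tones"]:
    print(tone["tone_name"], tone["score"])  # e.g. Joy 0.86
```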

Result Screenshots:

Sentiment Analysis:

Sentiment Analysis

Cloudant Dashboard

APPLICATIONS

This Employee Work Appreciation application is used for deciding whether the employee’s work is up to the mark or not.

This system can also be used by employees to check whether they are receiving good or bad feedback from customers, so that they can improve their work.

Node-RED Flow:

Node-RED Flow

IBM Cloud databases

Input employee reports stored in the employee database

IBM Cloud databases

Output sentiment from the Tone Analyzer stored in the sentiment database.

E-Commerce Application Project using Python Django Framework

PROBLEM STATEMENT FOR E-COMMERCE WEBSITE

An E-Commerce Website selling a wide variety of products needs to be developed. Products must be grouped into categories based on their characteristics. Some of the broad categories include Electronics, Apparel, Books & Media.

For example, mobile phones and laptops come under the category Electronics, and T-shirts and pants come under the category Apparel.

The webpage should provide a search bar for the user to search for the products of his/her choice and should provide functionality for an admin to log in and modify the database.

The backend of the website should comprise a database to store:

1. The list of products available
2. The various categories of products available
3. The list of sellers available
4. Table of details of all the users who have purchased items.

The specifications of the various items in the database are given below.

A PRODUCT has the following requirements

– Each Product has the following attributes to identify it: Name, ID, Seller, Price, Colour, Number of Items Left
– Each product may have a number of SELLERS.
– Each Seller has a location, products he/she is selling, discount he/she is willing to offer on the products as well as the time of delivery.

The products are organized into CATEGORIES.

– Each Category has a name and an ID.
– Each Category may be further subdivided into more categories.

E.g., Electronics is a broad category that comprises a number of products such as Laptops, of which Dell Inspiron is one type.

The database must store data of the various USERS of the website

– Each user has a name, address, price to be paid, and ID of the product purchased.

The admin logs in to the PRODUCT database to add new products and to delete or modify existing entries.
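
A minimal sketch of how this schema might map to Django models follows. This is a sketch only; the field names and types are illustrative assumptions, not the project's actual schema.

```python
# models.py -- hypothetical mapping of the schema; names are illustrative.
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=100)
    # A category may be subdivided into more categories (e.g. Electronics -> Laptops).
    parent = models.ForeignKey("self", null=True, blank=True, on_delete=models.CASCADE)

class Seller(models.Model):
    name = models.CharField(max_length=100)
    location = models.CharField(max_length=200)

class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    colour = models.CharField(max_length=50)
    items_left = models.PositiveIntegerField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    sellers = models.ManyToManyField(Seller)  # each product may have several sellers
```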

Physical Design

The blocking factor for each of the tables is computed using the standard block size of 512 bytes. The blocking factor is rounded down to an integer because part of a tuple cannot be stored in one block of data storage; for example, with 512-byte blocks and a 60-byte tuple, the blocking factor is floor(512/60) = 8.

List of Entity Types

Goods – This table has details of all the Goods in the Database.

Seller – This table has the details of all the Sellers in the database.

Product – This table has the details of all products being sold.

Customer – This table has the details of all customers who have registered with the website.

Customer Items – This table has the shopping cart of all the customers.

Book – This table has the specifications of all books being sold.

Fashion – This table has the specifications of all fashion apparel being sold.

Media – This table has the specifications of all Media being sold.

Mobile – This table has the specifications of all Mobiles being sold.

TV – This table has the specifications of all TVs being sold.

Laptop – This table has the specifications of all Laptops being sold.

All Columns are NOT NULL unless explicitly mentioned

Relational Schema:

Airbnb User Bookings Prediction Project Synopsis

Airbnb User Bookings Synopsis

1. Objective of work

The main objective of this project is to predict where a new guest will book their first travel experience.

2. Motivation

This project helps Airbnb better predict demand and make informed decisions accordingly. Previously, a new user was overwhelmed by the various choices available for a perfect vacation or stay.

By predicting where a new user will book their first travel experience, the company can better inform its users by sharing personalized content with its community. This will drastically decrease the time to first booking, which will increase the company's output and help it gain popularity among its users and an edge over its competitors in the market.

3. Target Specifications if any

Predicting where a new guest books their first travel experience. 

4. Functional Partitioning of the project

4.1 Research and gaining knowledge

Undertaking various courses and familiarizing ourselves with the workflow of data science problems; exploring the Kaggle website and understanding kernels and datasets; learning the prerequisites: programming in Python and Pandas, along with machine learning algorithms and data visualization methods.

4.2 Frequent Discussions and Guidance

Frequent discussions with our mentor along with his guidance in the same will allow us to work in the right direction and take informed decisions.

4.3 Applying the knowledge gained

After much exposure to this field and gaining the knowledge, we will now apply our skills to real-life problems and contribute to society.

5. Methodology

5.1 Using the Kaggle platform

In the test set, we will predict for all new users whose first activities occur after 7/1/2014. In the sessions dataset, the data only dates back to 1/1/2014, while the users dataset dates back to 2010. We take the help of the Kaggle platform for testing out datasets, as it is not feasible to store a large dataset, say 1 TB, on a local machine.

5.2 Working on the dataset

Using the dataset, we study various patterns of users' first bookings after signing up with Airbnb from different countries. Next, we plot the observed and collected information. We can then apply various machine learning algorithms and calculate prediction scores. Finally, we choose the algorithm with the highest score to recommend, to users from a given country, the destinations frequently chosen by travelers from that region.
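
For illustration, a hedged sketch of this step is shown below. The chosen features, the model, and the local file name train_users.csv are assumptions based on the competition's public schema, not this project's final approach.

```python
# Sketch: predict country_destination for new users; features are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

users = pd.read_csv("train_users.csv")  # assumed local file name

# One-hot encode a few categorical features (a real solution would use many more).
X = pd.get_dummies(users[["gender", "signup_method", "language"]].fillna("unknown"))
y = users["country_destination"]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```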

5.3 Submitting our work on the Kaggle platform

The result can now finally be uploaded on the platform and be used by Airbnb to better connect with their users.

6. Tools required

6.1 Kaggle Kernels

Kaggle is a platform for doing and sharing Data Science. Kaggle Kernels are essentially Jupyter notebooks in the browser that can be run right before your eyes, all free of charge. The processing power for the notebook comes from servers in the cloud, not our local machine allowing us to experience Data Science and Machine Learning without burning through the laptop’s battery and space.

6.2 Dataset

Airbnb will be providing us with the dataset, which would contain:

  • a CSV file with the training set of users
  • a CSV file with the test set of users
  • a CSV file with the web sessions log for users
  • a CSV file with summary statistics of destination countries in this dataset and their locations
  • a CSV file with summary statistics of users’ age group, gender, and country of destination
  • a CSV file with the correct format for submitting our predictions

7. Work Schedule

(a) January

Enroll in and start the course on Machine Learning using Kaggle. Start recapitulating the basics of Python and its various libraries such as NumPy, pandas, etc.

(b) February

Finish the course and start analyzing the dataset

(c) March

Start coding and implementing various algorithms for the prediction

(d) April

Pick the final algorithm by trial and test and finish coding

(e) May

Appropriate documentation and upload our solution

Development of Speech Recognition AI Project with Python

Methodology

This section describes the working of the Speech Recognition Python project: the design and development of a speech recognition AI assistant in Python, with source code, report, and PPT, using NLP, PLP, and deep neural networks.

Speak – The assistant speaks the introduction and the output according to the command given. It uses the laptop microphone to hear input from the user, recognizes what the user said, matches it against the code words, and if anything matches it shows the output.

Wish Me – The assistant speaks the message included in the introduction, wishing the user a good morning, afternoon, or evening depending on the real-time scenario: morning from 04:00 to 11:59, afternoon from 12:00 to 17:59, and evening from 18:00 to 03:59.

Take Command – The assistant takes microphone (speech) input from the user and returns string output. It is subdivided into the parts described below. Listening – the assistant opens the microphone and tries to hear what the user wants to convey.

Recognizing – The assistant tries to recognize the input spoken by the user and then checks whether the recognized word is present in the code. If the input matches, it shows the output; otherwise it speaks "Say that again please", meaning the user should give the input again. If the word is correctly recognized, it follows the instructions assigned to it.

Wikipedia – If the word "Wikipedia" is recognized, the assistant searches Wikipedia according to the input given by the user. E.g. if we say "Narendra Modi Wikipedia", the assistant speaks "Searching Wikipedia Narendra Modi", then "According to Wikipedia…" followed by the details of that person. YouTube – If the word "YouTube" is recognized, it opens the default web browser at youtube.com.

Google – If the word "Google" is recognized, it opens the default web browser at google.com.

Train Information – If the phrase "Train info" is recognized, the assistant fetches the details of all trains from a CSV file and displays them on the terminal. Stack Overflow – If "Stack Overflow" is recognized, it opens the default web browser at stackoverflow.com.

Play Music – If "Play Music" is recognized, the assistant searches for .mp3 or .mp4 files in the default path provided by the programmer. E.g. if we say "Play Music", the assistant searches a path like "D:\\Non Critical\\songs\\Favourite Songs2" and plays that particular song. The Time – If "The Time" is recognized, the assistant checks the real time on the device and speaks it in "HH:MM:SS" form. E.g. if the time is 08:14:21 P.M., it speaks "Sir, the time is 20HH:14MM:21SS".

Open Code – If "Open Code" is recognized, the assistant searches for the .java or .py file in the default path provided by the programmer. E.g. if we say "Open Code", the assistant searches a path like "C:\\Users\\XYZ\\AppData\\Local\\Programs\\project.py" and opens the code. Stop – If "Stop" is recognized, it speaks "Quitting sir, thanks for your time" and the code terminates.

Code-Snippet

Speech Recognition Project Coding
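
Since the original snippet is shown only as a screenshot, here is a hedged, minimal sketch of the speak and take-command helpers described above. The pyttsx3 text-to-speech package and Google Web Speech recognition are assumptions; the project may use different backends.

```python
# Sketch of the speak / takeCommand helpers, assuming the SpeechRecognition
# and pyttsx3 packages are installed. Backends are illustrative choices.
import pyttsx3
import speech_recognition as sr

engine = pyttsx3.init()

def speak(text):
    """Read the given text aloud through the default audio device."""
    engine.say(text)
    engine.runAndWait()

def take_command():
    """Listen on the microphone and return recognized text, or '' on failure."""
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        audio = r.listen(source)
    try:
        return r.recognize_google(audio)  # online Google Web Speech recognition
    except sr.UnknownValueError:
        speak("Say that again please")
        return ""
```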

Algorithms used in Speech Recognition

  • NLP (Natural Language Processing) & tokenization
  • PLP
  • Deep neural networks
  • Discriminative training
  • WFST frameworks, etc.

The following must be installed:

1. sudo pip install SpeechRecognition
2. sudo apt-get install python-pyaudio python3-pyaudio, or pip install pyaudio

SpeechRecognition is the most important module in the project, as it provides the main functionality of converting speech into text.

Future Scope

This area of AI proves productive in every technical field. We have demonstrated how useful it is in various fields by building a small project that exhibits its use in areas such as railways and searching feeds. Just as computers came to play chess better than humans, speech recognition will soon be improved by computers as well. Importantly, that will yield significant insight into nature in general and the human mind in particular, so speech recognition is an important step in our investigation of natural laws. Our project can be used by railways and other hubs to display information using speech recognition.