CSE COMPUTER SCIENCE SOFTWARE IEEE PROJECTS
COMPUTER COURSE ONLINE
PYTHON LANGUAGE DEVELOPED PROJECT LIST
1) Deep Fake Detection of Images Using a Common Fake Feature Network (CFFN)
2) Traffic Violation Detection System
3) Facial Emotion Recognition Using a Convolutional Neural Network (CNN)
4) Workout Monitoring System Using OpenCV and MediaPipe
5) Underwater Image Enhancement Using Wavelet Fusion
6) Sudoku Solver Using Convolutional Neural Networks
7) Image Inpainting for Object Removal
8) Social Distance Monitoring
9) Deep Fake Detection Using a Siamese Network
10) Drowsiness and Yawn Detection
11) Signature Verification
12) Hand Cricket Game Using CNN and MediaPipe
13) Voice Based Image Caption Generator
14) Facial Identification Approach for People With and Without Face Masks During the COVID-19 Pandemic
15) Lung Cancer Detection Using Image Processing
16) Human Pose Estimation
17) Snapchat Filters Using OpenCV
18) Age and Gender Detection Using Convolutional Neural Networks
19) Speech to Sign Language Translator Using ML, DL, NLP and Morse Code in Python
20) Smart Attendance System
21) Image Based Product Shopping Search Engine
22) Drowsiness Detection Using Image Processing (Machine Learning...)
23) Sign Language Detection
24) Road Lane Detection Using Computer Vision
ABSTRACTS OF THE PROJECTS ARE BELOW
PROJECT-1: DEEPFAKE DETECTION OF IMAGES USING CFFN
ABSTRACT: DeepFake is a popular term that has taken the world by storm. It refers to a deep-learning technique that allows for the fabrication of photos, audio, and videos of a target source. While there are certain positive use-cases of this technology, it is largely misused: it is mainly used in spreading fake news, which could pose a threat to world security. In this project, we deal with the detection of DeepFakes using a deep learning technique, employing Siamese Neural Networks as they provide their own set of benefits compared to other neural network architectures. A Siamese Neural Network is a network that contains at least two identical subnetworks, identical referring to having the same set of parameters and weights. The detection follows a two-phase learning architecture: in the first phase we train a Common Fake Feature Network (CFFN), and in the second phase we build a classification network to classify the images as real or fake. Keywords: DeepFake, Siamese Network, Deep Learning, CFFN, Classification Network
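The decision stage described above can be sketched in miniature: a trained Siamese/CFFN pipeline reduces each image to an embedding vector, and a pair is judged by the distance between embeddings. The vectors and the threshold below are illustrative placeholders, not values from the actual network.

```python
import numpy as np

def embedding_distance(e1, e2):
    """Euclidean distance between two feature embeddings."""
    return float(np.linalg.norm(np.asarray(e1, float) - np.asarray(e2, float)))

def classify_pair(emb, reference, threshold=1.0):
    """Label an image 'real' if its embedding stays close to the reference
    (authentic) embedding, 'fake' otherwise. Threshold is a placeholder."""
    return "real" if embedding_distance(emb, reference) < threshold else "fake"

print(classify_pair([0.2, 0.4, 0.1], [0.2, 0.4, 0.1]))  # real
print(classify_pair([0.0, 0.0, 0.0], [3.0, 4.0, 0.0]))  # fake
```

In the actual system the embeddings would be produced by the shared-weight subnetworks of the Siamese model rather than supplied by hand.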
PROJECT-2: TRAFFIC VIOLATION DETECTION SYSTEM
ABSTRACT: The increasing number of cars in cities causes high volumes of traffic, which means traffic violations are becoming more critical in metropolitan cities around the world. This causes severe destruction of property and more accidents that may endanger people's lives. To solve this alarming problem and prevent such consequences, traffic violation detection systems are needed, through which proper traffic regulations can be enforced at all times and violators apprehended. Traffic accidents can be reduced by penalizing violators. Different countries have addressed the issue by installing surveillance systems to monitor traffic violations at every intersection. However, such systems are expensive and require well-installed infrastructure. They are convenient in developed countries due to existing infrastructure, whereas in underdeveloped countries the lack of budget and weak infrastructure make them unfeasible. Deploying an extensive number of traffic constables to monitor violations is not viable either. A traffic violation detection system must operate in real time, as the authorities track the roads at all times. Traffic enforcers will then be able to keep roads safe not only accurately but also efficiently, as the detection system spots violations faster than humans. The goal of the project is to automate traffic signal violation detection and make it easy for the traffic police department to monitor traffic and take action against the offending vehicle's owner in a fast and efficient way. Detecting and tracking vehicles and their activities accurately is the main priority of the system. In this project, speed detection, licence plate detection, helmet detection, and signal violation detection have been implemented.
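As a small illustration of the speed-detection component, vehicle speed can be estimated from pixel displacement between frames once the camera view is calibrated. The calibration constant and the speed limit below are assumed values for the sketch, not taken from the project.

```python
def estimate_speed_kmph(pixel_displacement, pixels_per_meter, seconds_elapsed):
    """Convert a vehicle's pixel displacement between two frames to km/h.
    pixels_per_meter is a per-camera calibration constant (assumed here)."""
    meters = pixel_displacement / pixels_per_meter
    mps = meters / seconds_elapsed
    return mps * 3.6  # m/s -> km/h

def is_speed_violation(speed_kmph, limit_kmph=60.0):
    """Flag a violation against an assumed speed limit."""
    return speed_kmph > limit_kmph

# A car moving 200 px in 1 s, with 10 px per meter, travels 20 m/s = 72 km/h
speed = estimate_speed_kmph(200, 10, 1.0)
print(speed, is_speed_violation(speed))  # 72.0 True
```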
PROJECT-3: FACIAL EMOTION RECOGNITION USING CNN
ABSTRACT: Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. The convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains. In our project, we have developed convolutional neural networks (CNNs) for a facial expression recognition task. The goal is to classify each facial image into one of the seven facial emotions. We trained CNN models of different depths on gray-scale images from the Kaggle website. To reduce overfitting of the models, we utilized different techniques including data augmentation. Finally, we trained and saved the model. This saved model is then fed into our web-based GUI application, which can detect facial emotions in real time through a webcam, through a direct image URL, or by manually uploading an image file from the local system.
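The final classification step of such a CNN can be illustrated in isolation: the network's seven raw outputs are passed through a softmax and mapped to an emotion label. The label order below follows the common FER-2013 convention and is an assumption; the actual project may order its classes differently.

```python
import numpy as np

# Assumed label order (FER-2013 convention); the project's order may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(logits):
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def predict_emotion(logits):
    """Map the 7 raw CNN outputs to the most probable emotion label."""
    return EMOTIONS[int(np.argmax(softmax(logits)))]

print(predict_emotion([0.1, 0.0, 0.2, 3.5, 0.3, 0.1, 1.0]))  # happy
```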
PROJECT-4: WORKOUT MONITORING SYSTEM USING OPENCV AND MEDIAPIPE
ABSTRACT: Exercise can help keep your body at a healthy weight, and it can also help a person age well. Monitoring your form during exercise is essential to meeting your fitness goals. Powered by computer vision and natural language processing algorithms, such technologies lead end-users through a number of workouts and give real-time feedback. The Workout Monitoring system makes sure you perform your exercises in the right way. It detects your body pose from a camera feed and tells you if you are doing the exercise right. It also keeps track of the repetition count to guide you through the whole workout effectively. The implementation is done using OpenCV and MediaPipe. MediaPipe offers many ML solutions such as Face Detection, Face Mesh, Hair Segmentation, Pose, etc. Of these, we use the Pose library to do pose estimation; the model used is the BlazePose model. The program runs through a GUI made using PyQt5.
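Rep counting with MediaPipe-style landmarks usually reduces to measuring the angle at a joint (for example, the elbow during a bicep curl) and counting threshold crossings. A minimal sketch of the angle computation, with made-up 2-D landmark coordinates standing in for MediaPipe's output:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b (in degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist for a bicep-curl counter."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Fully extended arm -> ~180 degrees; right angle -> 90 degrees
print(round(joint_angle((0, 0), (1, 0), (2, 0))))  # 180
print(round(joint_angle((0, 1), (0, 0), (1, 0))))  # 90
```

A rep is then counted each time the angle crosses from above an "extended" threshold to below a "flexed" one and back.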
PROJECT-5: UNDERWATER IMAGE ENHANCEMENT USING WAVELET FUSION
ABSTRACT: Underwater images are not always clear, since the image is taken in water and light behaves differently in water. In our project, we show how to remove the interference in the image and try to recover the exact colors of the object underwater using wavelet fusion and other functions in MATLAB. This project will help people perform object detection underwater even if the input image is not very clear, as images taken underwater are subject to dispersion and refraction of light, resulting in blurred images with very little detail.
PROJECT-6: SUDOKU SOLVER USING CONVOLUTIONAL NEURAL NETWORKS
ABSTRACT: Sudoku is a logic-based combinatorial number-placement puzzle and one of the most famous puzzle games ever. In traditional sudoku, the goal is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid. Sudoku puzzles are generally classified as easy, medium, or hard, with puzzles having more starting clues generally, but not always, being easier to solve. Occasionally the player might find it challenging to solve the sudoku and require a sudoku solver, which is capable of finding a solution for the given sudoku. A convolutional neural network (CNN) is an artificial neural network that is used to analyse pixel input in image recognition and processing. The sudoku solver can be implemented using a convolutional neural network and computer vision. This is done in the Python programming language using the Keras, TensorFlow, and OpenCV packages.
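In the pipeline above, the CNN reads the digits off the grid image; once the 9 × 9 grid is digitized, the solving step itself can be done with classic backtracking. A minimal solver sketch (0 marks an empty cell):

```python
def find_empty(grid):
    """Return (row, col) of the first empty cell, or None if full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def valid(grid, r, c, d):
    """Check that digit d may be placed at (r, c)."""
    if d in grid[r]:                               # row check
        return False
    if any(grid[i][c] == d for i in range(9)):     # column check
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)            # 3x3 subgrid check
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells in place by backtracking; returns True on success."""
    cell = find_empty(grid)
    if cell is None:
        return True
    r, c = cell
    for d in range(1, 10):
        if valid(grid, r, c, d):
            grid[r][c] = d
            if solve(grid):
                return True
            grid[r][c] = 0                         # undo and backtrack
    return False
```

In the full project, `grid` would be produced by the CNN digit recognizer from the warped puzzle image rather than typed in by hand.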
PROJECT-7: Image Inpainting for Object Removal
ABSTRACT: The task of rebuilding missing or masked areas of an image is known as image inpainting. Its core concept is to employ effective information from the undamaged section of the image to reconstruct the damaged parts using the surrounding pixels, while maintaining the image's authenticity and uniqueness. Its primary goal is to make the rebuilt image satisfy human vision standards, so that individuals who are unfamiliar with the original image would not notice the restoration. Restoring scratches in old photos and valuable literature, protection of cultural relics, robot vision, film and television special-effects production, object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering are just some of the applications of inpainting technology. Object removal is the task of removing unwanted objects from an image. We use image inpainting to eliminate the unwanted object and create a new image that appears similar to the original.
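OpenCV ships ready-made inpainting (`cv2.inpaint`, with the Telea and Navier-Stokes methods). As a toy stand-in that shows the underlying idea, masked pixels can be filled by repeatedly diffusing in their neighbours' values:

```python
import numpy as np

def inpaint_diffusion(img, mask, iters=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.
    img: 2-D float array; mask: boolean array, True where pixels are missing.
    A toy stand-in for cv2.inpaint; only masked pixels are ever changed."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()        # initial guess: mean of known pixels
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]            # update only the unknown region
    return out
```

On a smooth image this converges to a seamless fill; real object removal would also need a mask of the unwanted object, which the project obtains interactively or by detection.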
PROJECT-8: SOCIAL DISTANCE MONITORING
ABSTRACT: Social distancing measures are important to reduce the spread of COVID-19. In order to break the chain of spread, social distancing is strictly followed as a norm. This project demonstrates a system useful for monitoring public places like ATMs, malls, and hospitals for social distancing violations. With the help of the proposed system, it is possible to monitor whether individuals are maintaining social distance in the area under surveillance, and to alert them whenever there is any violation of the predefined limits. The proposed deep learning based system can be installed for coverage within a certain limited distance. The algorithm can be run on live CCTV camera images to perform the task. The simulated model uses deep learning algorithms with the OpenCV library to estimate the distance between people in the frame, and a YOLO model trained on the COCO dataset to identify people in the frame. The system has to be configured according to the location where it is installed. By running the algorithm, the number of violations is reported based on the distance and a set threshold. In two real-time test images, one and two violations were reported, respectively. Red boxes highlighting the violations are displayed along with the distance.
KEYWORDS: Deep learning, Social distancing, COVID-19, Person detection, YOLOv3, COCO dataset.
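Once the YOLO detector returns per-person positions, the violation check itself is a pairwise distance test. A sketch with made-up ground-plane coordinates and an assumed threshold:

```python
import numpy as np
from itertools import combinations

def find_violations(centroids, min_distance):
    """Return index pairs of people standing closer than min_distance.
    centroids: (x, y) positions, e.g. box centres from a person detector."""
    pts = np.asarray(centroids, dtype=float)
    return [(i, j) for i, j in combinations(range(len(pts)), 2)
            if np.linalg.norm(pts[i] - pts[j]) < min_distance]

people = [(0, 0), (1, 0), (10, 10)]            # detector output (placeholder)
print(find_violations(people, min_distance=2.0))  # [(0, 1)]
```

In the real system the distance unit comes from the per-location calibration the abstract mentions; the violating pairs are the ones drawn with red boxes.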
PROJECT-9: Deep Fake Detection Using a Siamese Network
ABSTRACT: Deep fakes are artificial media in which the data of another person is used to replace the data of a source image or video. This is done using strong and effective machine learning algorithms to manipulate or manufacture visual and audio content with a high potential for deception. Every day, millions of videos, images, and audio clips are shared through social media sites, owing to the increase in popularity and availability of gadgets incorporating high-end cameras. A considerable amount of such phony videos and sounds made by digital modification has recently become a major privacy concern; they can also be misused to create chaos in financial markets. Our project focuses on creating a robust meta-learning framework to obtain improved performance in detecting deepfakes irrespective of the number of data samples available, with the help of a Siamese network based architecture.
PROJECT-10: Drowsiness and Yawn Detection
ABSTRACT: In this project we have implemented drowsiness and yawn detection using face detection. Our main goal is to detect if the user is feeling drowsy or sleepy and alert them to wake up. Whenever the person opens his/her mouth, the system advises them to take in some fresh air, and if the user closes their eyes for more than a substantial period of time, it warns them to wake up. The motivation for this project comes from the numerous accidents that have occurred in recent times due to drowsy drivers, and from the lack of smart attendance systems for students attending online classes.
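Eye closure is commonly measured with the eye aspect ratio (EAR) over the six standard eye landmarks (Soukupová and Čech, 2016); it drops toward zero when the eye closes. The landmark coordinates and the 0.25 threshold below are illustrative values, not the project's calibration:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from the 6 standard eye landmarks: (v1 + v2) / (2 * h).
    Stays roughly constant while the eye is open, drops toward 0 on closure."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.05), (2, 0.05), (3, 0), (2, -0.05), (1, -0.05)]
print(eye_aspect_ratio(open_eye) > 0.25)    # True  (eye open)
print(eye_aspect_ratio(closed_eye) < 0.25)  # True  (eye closed)
```

The drowsiness alarm fires when the EAR stays below the threshold for a sustained number of frames; a similar mouth aspect ratio drives the yawn alert.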
PROJECT-11: SIGNATURE VERIFICATION
ABSTRACT: Signatures continue to be an important biometric for authenticating the identity of human beings. The fact that the signature is widely used as a means of personal identification creates the need for an automatic verification system. Verification can be performed either offline or online depending on the application. With modern computers, there is a need to develop fast algorithms for signature recognition. There are various approaches to signature recognition, with a lot of scope for research. Signature verification and recognition is a technology that can improve security in our day-to-day transactions in society. This project presents a novel approach for offline signature verification using unique structural features extracted from the signature's contour. In this project, signature verification using image processing is proposed, where signatures written on paper are obtained using a scanner or camera and presented in an image format. The project involves the design and development of an efficient signature identification system. The pattern recognition algorithm designed for this project is based on the general architecture of a signature identification system. This technique is suitable for various applications, such as bank transactions and passports, with good authentication results.
PROJECT-12: HAND CRICKET GAME USING CNN AND MEDIAPIPE
ABSTRACT: In the era of computer-based games, every other game is being computerized in one way or another. The project involves implementing a hand-gesture-based Hand Cricket game: once the player starts playing with his/her video on, it recognizes the hand gestures of the player and calculates the score until the player gets out. It is implemented in two approaches: one using our own dataset and a self-built Convolutional Neural Network, and the other using MediaPipe's inbuilt model, in Python using the OpenCV, Keras, and MediaPipe packages.
PROJECT-13: VOICE BASED IMAGE CAPTION GENERATOR
ABSTRACT: Image caption generation is a task that involves computer vision and natural language processing concepts to recognize the context of an image and describe it in a natural language like English. In our project we obtain the caption for an image from the image caption generator and convert that text into speech, so that it can be verbally communicated to blind people. The caption generator is implemented using a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory) network. The programming language used is Python.
PROJECT-14: FACIAL IDENTIFICATION APPROACH FOR PEOPLE WITH AND WITHOUT FACE MASK DURING THE COVID-19 PANDEMIC
ABSTRACT: COVID-19 has affected the whole world very badly. It has a huge impact on our everyday life, and this crisis is increasing day by day. It seems difficult to eradicate this virus completely any time soon. To counter this virus, face masks have become an integral part of our lives; these masks can stop the spread of this deadly virus and help control it. As we have started moving forward in this new normal world, the necessity of the face mask has increased. So here we are going to build a model that will be able to classify whether a person is wearing a mask or not. This model can be used in crowded areas like malls, bus stands, and other public places. To check that individuals are following this fundamental health standard, a detection procedure needs to be developed, and a face mask detector system is implemented for this purpose. Face mask detection aims to recognize whether an individual is wearing a mask. In this paper, we build a facial mask identifier for people with and without masks using TensorFlow, OpenCV, Keras, and various Python libraries.
PROJECT-15: LUNG CANCER DETECTION USING IMAGE PROCESSING
ABSTRACT: In the field of medicine, identification and treatment of cancer are considered among the biggest challenges in the treatment of chronic illness. The survival of patients depends on timely detection and cure. Experts use CT scan (Computed Tomography) images of patients to detect and classify nodules before proceeding with advanced treatment procedures. Present-day advances in artificial intelligence and machine learning based on deep learning models can be used to develop sophisticated Computer-Aided Diagnosis systems to detect cancerous nodules. The proposed system is based on Convolutional Neural Networks and categorizes nodules detected in CT scan images as malignant or benign. Image processing and neural networks have been extensively used in the detection and classification of cancerous nodules, and CNNs are particularly appropriate for the task of nodule detection and classification. CNNs have useful properties such as multi-level feature extraction; when convolution layers, subsampling (pooling) layers, and fully connected layers are combined into deep CNNs, classification accuracy increases. The proposed CNN model will be suitable for the early detection and classification of CT scan images containing nodules with good accuracy, using domain knowledge of lung CT scan images from the field of medicine together with neural networks.
Keywords: Cancer Detection, Image Processing, CNN Model, CT Scan.
PROJECT-16: HUMAN POSE ESTIMATION
ABSTRACT: Human posture recognition has fascinated many researchers because of its essential issues and wide range of applications. The applications in surveillance systems range from simple posture recognition to complex behaviour comprehension, which has led to significant development in techniques for human body posture representation and human pose estimation. This project discusses applications and the general framework of human body posture recognition. In this project we present an approach for body posture detection in OpenCV using MediaPipe technology. The project also emphasizes its advantages and disadvantages. The domain of human body posture recognition has been active for over two decades and has produced a substantial amount of literature. The report also discusses some different approaches for detecting human position, as well as their pros and cons.
PROJECT-17: SNAPCHAT FILTERS USING OPENCV
ABSTRACT: Snapchat is a popularly known application where cool face filters can be added to our pictures. This project puts forward a technique for applying face filters on the detected facial region in a video frame, taking input from the user to select the desired filter to apply on the face region. Applying face filters on a captured video frame is done by tracking and detecting faces using face detection algorithms called Haar Cascades. These make use of various features such as edge features, line features, and centre-surround features to locate similar patterns in images and hence identify objects. Results are obtained using Python code with OpenCV. Filters are design overlays added on top of your pictures; after you take a picture, you can select filters from different dropdown menus, adding color effects or different face filters to your picture. In the past few years, face recognition has received significant consideration and is appreciated as one of the most promising applications in the field of image analysis. Face detection is a substantial part of face recognition operations. Object detection is a computer technology, connected to image processing and computer vision, that deals with detecting instances of objects such as human faces, buildings, trees, cars, etc. The primary aim of face detection algorithms is to determine whether there is any face in an image. In recent times a lot of research has been proposed in the fields of face recognition and face detection to make them more advanced and accurate, but a revolution came when Viola and Jones introduced their real-time face detector, capable of detecting faces in real time with high accuracy. Face detection is the first and essential step of face recognition, used to detect faces in images. It is a part of object detection and can be used in many areas such as security, biometrics, law enforcement, entertainment, personal safety, etc.
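After Haar-cascade detection locates the face, applying a filter boils down to an alpha blend of the sticker image onto the detected region. A minimal sketch on a grayscale frame, with made-up coordinates standing in for the detector's output:

```python
import numpy as np

def overlay_filter(frame, sticker, alpha_mask, x, y):
    """Alpha-blend a sticker (e.g. glasses) onto frame at top-left (x, y).
    alpha_mask has values in [0, 1]; 1 = fully opaque sticker pixel."""
    h, w = sticker.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(float)
    a = alpha_mask[..., None] if sticker.ndim == 3 else alpha_mask
    blended = a * sticker + (1 - a) * roi     # per-pixel blend
    frame[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return frame

frame = np.zeros((10, 10), dtype=np.uint8)      # blank grayscale frame
sticker = np.full((2, 2), 200, dtype=np.uint8)  # tiny fully opaque "filter"
overlay_filter(frame, sticker, np.ones((2, 2)), x=3, y=4)
print(frame[4, 3])  # 200
```

In the real pipeline, (x, y) and the sticker size come from the Haar-cascade face box, and frames come from the webcam via OpenCV.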
PROJECT-18: AGE AND GENDER DETECTION USING CONVOLUTIONAL NEURAL NETWORKS
ABSTRACT: Age and gender prediction of unfiltered faces classifies unconstrained real-world facial images into predefined age and gender groups. Automatic age and gender classification has become relevant to an increasing number of applications, particularly since the rise of social platforms and social media. Significant improvements have been made in this research area due to its usefulness in intelligent real-world applications. However, traditional methods on the unfiltered benchmarks show their incompetency to handle large degrees of variation in those unconstrained images. More recently, Convolutional Neural Network (CNN) based methods have been extensively used for the classification task due to their excellent performance in facial analysis. In this work, we propose a novel end-to-end CNN approach to achieve robust age group and gender classification of unfiltered real-world faces. The two-level CNN architecture includes feature extraction and classification itself. The feature extraction stage extracts features corresponding to age and gender, while the classification stage classifies the face images into the correct age and gender groups. We have implemented a classification model for age and gender classification using the two-level CNN architecture with better accuracy. We evaluated our method on the UTKFace dataset for age and gender estimation and showed it to dramatically outperform current state-of-the-art methods.
PROJECT-19: SPEECH TO SIGN LANGUAGE TRANSLATOR
ABSTRACT: Sign language is a language that is often used by deaf people. It conveys a person's thoughts by making use of body language and manual communication. Communicating with deaf people is sometimes a tedious task because hearing people find it difficult to understand sign language, and this becomes more challenging because of the communication gap between hearing people and the specially challenged (deaf people, in the context of our project). In our project, live speech or an audio recording is taken as input, converted into text, and the appropriate Indian or British Sign Language images and GIFs are displayed. We use concepts from ML and DL. This project is a big advantage for those who are hearing impaired. Using this system, communication between hearing and deaf people becomes easier, and a deaf person can also enjoy all the things hearing people do, from daily interaction to accessing information. Morse code has also been used in this project.
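Since the abstract also mentions Morse code, the text-to-Morse step can be sketched directly from the standard letter table (letters only here; digits and punctuation are omitted for brevity):

```python
# International Morse code, letters only.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.",
    "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..",
    "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.",
    "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-",
    "Y": "-.--", "Z": "--..",
}

def to_morse(text):
    """Encode letters to Morse; letters separated by spaces, words by ' / '."""
    return " / ".join(" ".join(MORSE[ch] for ch in word)
                      for word in text.upper().split())

print(to_morse("sos"))  # ... --- ...
```

In the full system this would sit downstream of the speech-to-text stage, alongside the sign-language image/GIF display.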
PROJECT-20: SMART ATTENDANCE SYSTEM
ABSTRACT: An attendance system is used to track and monitor whether or not a student attends a class. There are various sorts of attendance systems, such as biometric-based, radio-frequency-card-based, face-recognition-based, and paper-based attendance systems. Comparing all of them, a face recognition based attendance system is safer and more efficient. A few research papers focus only on the recognition rate of students. In this project we focus on a face recognition based attendance system that achieves a low false positive rate by using a threshold on confidence, i.e., the Euclidean distance value, while recognizing unknown people and saving their pictures. In contrast with other Euclidean distance based algorithms such as Eigenfaces and Fisherfaces, the Local Binary Pattern Histogram (LBPH) algorithm is better: it is robust against monotonic greyscale changes. Scenarios such as the face recognition rate and the false positive rate, with and without using a threshold for identifying unknown people, are considered to evaluate our model. We also added an option to send electronic mail with the attendance list of students to specified addresses, such as teachers or higher officials.
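The unknown-person thresholding described above can be sketched independently of LBPH: match a face representation against the enrolled set and return "unknown" when even the best distance exceeds the confidence threshold. The embeddings and the threshold here are placeholders, not values from the trained recognizer:

```python
import numpy as np

def recognize(embedding, known, threshold=0.6):
    """Match a face representation against enrolled students.
    Returns the closest name, or 'unknown' if every Euclidean distance
    exceeds the threshold -- this is what keeps false positives low."""
    best_name, best_dist = "unknown", float("inf")
    for name, ref in known.items():
        d = float(np.linalg.norm(np.asarray(embedding, float) -
                                 np.asarray(ref, float)))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else "unknown"

students = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}  # enrolled (placeholder)
print(recognize([0.12, 0.88], students))  # alice
print(recognize([5.0, 5.0], students))    # unknown
```

Recognized names go onto the attendance list that the system e-mails out; "unknown" faces are saved for review instead.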
PROJECT-21: IMAGE BASED PRODUCT SHOPPING SEARCH
ABSTRACT: In today's world there is a lot of demand for user interfaces that facilitate users and make it easy for them to access things. Similarly, there is an increase in demand for image-based searching and collecting of information. We come across many e-commerce websites which do not support image-based product searching. So, we are developing an application for image-based product recognition and classification with a graphical user interface, where users can search for similar products by giving an input query image. Our work is mainly to help people who have dyslexia and find it difficult to read, by providing them a simple, user-friendly interface to buy and shop for products similar to the product they want. We worked on an image-based search engine which identifies the top ten images in the database that are similar to the input query image. The user can search for images similar to the input image, add them to a cart after entering the quantity of the product required, and finally buy them once the bill is generated. This project is an application developed on image processing. The techniques used include feature extraction, feature matching, and image distance calculation for similarity finding. The module used to carry this out is VGG16, available in Keras.
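The similarity search can be sketched with cosine similarity over feature vectors; in the full system the vectors would come from VGG16, whereas here they are tiny placeholders:

```python
import numpy as np

def top_k_similar(query, catalog, k=3):
    """Rank catalog feature vectors by cosine similarity to the query and
    return the indices of the k best matches (the 'top ten' in the project)."""
    q = np.asarray(query, dtype=float)
    cat = np.asarray(catalog, dtype=float)
    sims = cat @ q / (np.linalg.norm(cat, axis=1) * np.linalg.norm(q))
    return [int(i) for i in np.argsort(-sims)[:k]]

catalog = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]  # placeholder features
print(top_k_similar([1, 0, 0], catalog, k=2))  # [0, 1]
```

In production, the catalog vectors would be precomputed once per product image and only the query image run through VGG16 at search time.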
PROJECT-22: DROWSINESS DETECTION
ABSTRACT: Nowadays, more and more professions require long-term concentration. Drivers must keep a close eye on the road so they can react to sudden events immediately. Driver fatigue often becomes a direct cause of many traffic accidents. Therefore, there is a need to develop systems that detect a driver's poor psychophysical condition and notify him/her of it, which could significantly reduce the number of fatigue-related car accidents. However, the development of such systems encounters many difficulties related to fast and proper recognition of a driver's fatigue symptoms. One technical possibility for implementing driver drowsiness detection is the vision-based approach, and the technical aspects of using a vision system to detect driver drowsiness are also discussed. Drowsiness and fatigue of drivers are among the significant causes of road accidents; every year they increase the number of deaths and serious injuries globally. In this paper, a module for an Advanced Driver Assistance System (ADAS) is presented to reduce the number of accidents due to driver fatigue and hence increase transportation safety. The system performs automatic driver drowsiness detection based on visual information and artificial intelligence. We propose an algorithm to locate, track, and analyze both the driver's face and eyes, using a scientifically supported measure of drowsiness associated with slow eye closure.
PROJECT-23: SIGN LANGUAGE DETECTION
ABSTRACT: Computer vision has risen as a significant area of research these days. With the technological trend in man-machine interfaces and machine intelligence, these capabilities are used to make people's lives simpler. In our daily life, communication plays an important role, but it becomes very difficult in the case of deaf and mute people. The only way out is sign language, which is complicated for most people; sign language helps us build a bridge between hearing-impaired people and the rest of society. A number of SLR (sign language recognition) systems have been developed, but they are limited to sign gestures only. In this project we use an LSTM (long short-term memory) model to recognize signs and gestures, and OpenCV with a contour method to recognize numbers and gestures. We use an LSTM for detection because the gestures given are continuous and have to be recognized as a sequence of connected gestures. The approach is also based on splitting continuous signs into sub-units and modelling them with neural networks. An average accuracy of 92% was recorded on our own dataset.
PROJECT-24: Road Lane Detection using Computer Vision
ABSTRACT: Autonomous driving is set to revolutionize travel in the coming decade, and it needs a computer vision perception module to understand and navigate the environment. Road lane line detection systems are commonly used in autonomous vehicles. Among the complex and challenging tasks of future road vehicles is road lane detection, or road boundary detection. In this project, a vision-based lane detection approach capable of reaching real-time operation with robustness to lighting changes and shadows is presented using the OpenCV module available in Python. OpenCV is a library of programming functions mainly aimed at real-time computer vision. The system acquires a set of images as its input; the images were screenshots of the sample videos. After preprocessing the input images, the edges of the lanes are detected using a Canny edge detector. Then, the probabilistic Hough line transform is used to draw the lines on the lane. The output is shown in a GUI built using the Tkinter toolkit available in Python. The proposed lane detection system can be applied to painted roads as well as curved and straight roads in different weather conditions. This approach was tested, and the experimental results show that the proposed scheme is robust and fast enough for real-time requirements.
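The Hough voting idea behind OpenCV's `cv2.HoughLinesP` can be sketched in a few lines: each edge pixel (from the Canny step) votes for every (rho, theta) line it could lie on, and the accumulator peak gives the dominant lane line. A minimal version, run here on a synthetic edge image instead of a real Canny output:

```python
import numpy as np

def hough_peak(edges, n_theta=180):
    """Return (rho, theta_degrees) of the strongest line in a binary edge
    image via a minimal Hough transform -- a toy version of what
    cv2.HoughLinesP does after Canny in the full pipeline."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(n_theta))
    max_rho = int(np.hypot(*edges.shape)) + 1
    acc = np.zeros((2 * max_rho, n_theta), dtype=int)   # (rho, theta) votes
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(n_theta)] += 1    # one vote per theta
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return int(r - max_rho), int(t)

# Synthetic edge image containing a single vertical line at x = 10
img = np.zeros((40, 40), dtype=np.uint8)
img[:, 10] = 1
print(hough_peak(img))  # (10, 0)
```

The recovered (rho, theta) pair describes the line x·cosθ + y·sinθ = rho, which the GUI would then draw over the lane.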