Real-Time Hand Gesture Detection for Sign Language Recognition using Python
PROJECT ABSTRACT:
Sign languages, which have many regional variants, are used by millions of people with hearing impairments around the world to communicate in their daily lives. Automated translation of sign language is therefore seen as crucial for improving communication and inclusion for this group of people. However, several challenges make this a difficult research area.
One of the main challenges is the variability in sign language across regions, which makes it impractical to develop a standardized system. Nonetheless, sign language recognition technology has the potential to provide better services for the deaf community, bridging communication gaps and contributing to overall societal well-being.
The project “Real-Time Hand Gesture Detection for Sign Language Recognition using Python” aims to develop a system that recognizes sign language gestures and translates them into text in real time. The system uses computer vision techniques to detect and track hand movements, and machine learning algorithms to classify the gestures.
Our proposed system can detect five classes: “Ok”, “Open Hand”, “Peace”, “Thumb”, and “No Hand Detected”. The project is implemented in the Python programming language, with the OpenCV library handling the computer vision tasks. The system consists of two parts: the first uses an Xception-based model to classify images of hand gestures and predict the results, and the second uses OpenCV to perform detection in real time from a web camera, as sketched below.
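The following is a minimal sketch of the real-time portion, assuming the classifier was trained with Keras and saved as gesture_xception.h5; the file name, class order, and preprocessing are assumptions for illustration, not the project's actual code:

```python
# Hypothetical real-time loop: classify webcam frames with a trained
# Keras model (file name and class order are assumptions).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Class order must match the order used when the model was trained.
CLASSES = ["No Hand Detected", "Ok", "Open Hand", "Peace", "Thumb"]
model = load_model("gesture_xception.h5")  # assumed saved model

cap = cv2.VideoCapture(0)  # default web camera
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    # Convert BGR (OpenCV) to RGB and resize to Xception's 299x299 input;
    # the assumed saved model applies its own pixel rescaling internally.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(rgb, (299, 299)).astype("float32")
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    label = CLASSES[int(np.argmax(probs))]
    # Overlay the predicted class on the live frame.
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```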
The hand gesture recognition model was trained on a dataset of hand gesture images captured with a camera, and the Xception-based model achieved a training accuracy of 99.34% and a validation accuracy of 99.00%. The final system provides a user-friendly interface to facilitate communication between sign language users and non-sign-language users. It has the potential to improve communication and inclusivity for people with hearing or speech impairments in settings such as classrooms, workplaces, and public spaces.
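As a rough illustration of how such a classifier could be built, the sketch below fine-tunes an ImageNet-pretrained Xception backbone with Keras; the dataset paths, hyperparameters, and output file name are assumptions, not the project's reported configuration:

```python
# Hypothetical training sketch: transfer learning with Xception for the
# five gesture classes. Paths and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # Xception's native input resolution
NUM_CLASSES = 5        # Ok, Open Hand, Peace, Thumb, No Hand Detected

# Load gesture images from per-class subfolders (paths are assumed).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained Xception backbone without its classification head.
base = Xception(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # train only the new head initially

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects [-1, 1]
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("gesture_xception.h5")
```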
PROJECT OUTPUT VIDEO:
ALGORITHM / MODEL USED:
Xception Architecture.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
- System : Intel Core i3 processor.
- Hard Disk : 500 GB.
- Monitor : 15" LED.
- Input Devices : Keyboard, Mouse.
- RAM : 4 GB.
- Camera : Webcam.
SOFTWARE REQUIREMENTS:
- Operating System : Windows 10 / 11.
- Coding Language : Python 3.8.
- Web Framework : Flask (a serving sketch follows this list).
- Frontend : HTML, CSS, JavaScript.
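Given that the stack pairs Flask with an HTML/CSS/JavaScript frontend, one common way to deliver the live camera feed to the browser is an MJPEG stream served from a Flask route; this is an assumed pattern for illustration, and the route names and template are hypothetical:

```python
# Hypothetical Flask sketch: stream webcam frames to the browser as
# MJPEG. Route names and the template file are assumptions.
import cv2
from flask import Flask, Response, render_template

app = Flask(__name__)
cap = cv2.VideoCapture(0)  # default web camera

def generate_frames():
    # Yield each frame as a JPEG chunk in multipart/x-mixed-replace format.
    while True:
        grabbed, frame = cap.read()
        if not grabbed:
            break
        _, buf = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.tobytes() + b"\r\n")

@app.route("/")
def index():
    return render_template("index.html")  # assumed frontend template

@app.route("/video_feed")
def video_feed():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(debug=True)
```

An `<img src="/video_feed">` tag in the HTML page would then display the live stream.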