YOLOv5 SignSense: Empowering Deaf and Mute Communication through Gesture Recognition
DOI:
https://doi.org/10.5281/zenodo.8332158
Keywords:
Sign language, YOLOv5, real time
Abstract
In an era of advancing technology, addressing the communication barriers faced by individuals with speech and hearing challenges is paramount. This study presents an approach to facilitate seamless communication for those who rely on sign language. Not everyone learns sign language, however; learning it takes time and effort and can be discouraging. Everyday settings such as restaurants, stores, and supermarkets illustrate where this community would benefit: if business owners adopted technology that translates gestures into words or short sentences, the more accessible experience could attract additional customers and increase profit. Leveraging the capabilities of the YOLOv5 architecture, this project develops an AI system capable of real-time translation of sign language gestures into text. In the proposed pipeline, images of sign language gestures are captured with a webcam, then annotated and labelled to create a YOLO-format dataset. The implemented model has the potential to break down communication barriers and facilitate smoother interaction in diverse settings. The system was tested in real time and achieved strong accuracy with reduced computational cost.
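To make the described pipeline concrete, the snippet below is a minimal, illustrative sketch (not the authors' released code) of real-time detection with a custom-trained YOLOv5 model loaded through the Ultralytics torch.hub entry point. The weights file name sign_best.pt, and the use of OpenCV for webcam capture and on-screen display, are assumptions made for illustration only.

```python
# Illustrative sketch: real-time sign-language gesture detection with a custom YOLOv5 model.
# Assumes a hypothetical weights file "sign_best.pt" trained on a YOLO-format sign dataset,
# plus torch, opencv-python, and pandas installed.
import cv2
import torch

# Load the custom-trained YOLOv5 model via the Ultralytics hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="sign_best.pt")

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # YOLOv5 expects RGB input; OpenCV delivers BGR, so flip the channel order.
    results = model(frame[:, :, ::-1])
    detections = results.pandas().xyxy[0]  # xmin, ymin, xmax, ymax, confidence, name

    # Overlay each recognized gesture label on the frame as its textual translation.
    for _, det in detections.iterrows():
        x1, y1 = int(det["xmin"]), int(det["ymin"])
        x2, y2 = int(det["xmax"]), int(det["ymax"])
        label = f'{det["name"]} {det["confidence"]:.2f}'
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("SignSense", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In a sketch like this, the class name of each detection serves directly as the word or short phrase shown to the non-signing party, which is the translation behaviour the abstract describes.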