Deep Learning For Recognizing Gesture States
DOI: https://doi.org/10.5281/zenodo.8342173

Keywords: YOLO, hand, real-time pictures

Abstract
This research proposes a method, based on the YOLOv3 model, for recognizing and labeling abnormal hand states. The device scans the area for people holding relevant objects and transmits that information to a server for analysis and processing; when an abnormal hand state is detected, a smartphone app forwards the result image to staff in real time. The device's camera takes continuous snapshots of the hand to detect bandages, rings, and bleeding areas. After the network is optimized and the data is preprocessed, the accuracy of the algorithm can reach 99.7 percent. In addition, the model's reduced computational complexity lowers the load on the underlying hardware.
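The abstract does not include the authors' implementation, but the detection pipeline it describes follows the standard YOLOv3 inference pattern: the network emits many candidate boxes per frame, which are then reduced by confidence filtering and non-maximum suppression (NMS). The sketch below illustrates that post-processing step on synthetic detections; the class list (`bandage`, `ring`, `bleeding`) and the thresholds are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

# Hypothetical class list for this application (assumed, not from the paper).
CLASSES = ["bandage", "ring", "bleeding"]

def filter_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.4):
    """Confidence filtering plus greedy non-maximum suppression,
    the usual post-processing applied to raw YOLOv3 outputs.

    boxes:  (N, 4) array of [x1, y1, x2, y2] corners
    scores: (N,) array of confidences
    Returns the indices (into the original arrays) of kept boxes.
    """
    keep_mask = scores >= conf_thresh          # drop low-confidence candidates
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    idxs = np.flatnonzero(keep_mask)           # original indices of survivors

    order = np.argsort(scores)[::-1]           # highest confidence first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(int(idxs[i]))
        if order.size == 1:
            break
        # IoU of the top-scoring box with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Suppress boxes that overlap the kept box too strongly
        order = order[1:][iou <= iou_thresh]
    return kept

if __name__ == "__main__":
    boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
    scores = np.array([0.9, 0.8, 0.3])
    # Keeps box 0, suppresses heavily overlapping box 1, drops low-confidence box 2.
    print(filter_detections(boxes, scores))  # → [0]
```

In a real deployment the same filtering would run once per camera frame, after which only the surviving boxes (and their class labels) need to be drawn and sent onward, which is part of why the on-device cost stays low.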