Real-time model to recognize Sinhala sign language based on CNN-LSTM approach

dc.contributor.author Sandaruwan, A.M.L.S.
dc.contributor.author Laksiri, P.H.P.N.
dc.date.accessioned 2023-02-13T06:25:03Z
dc.date.available 2023-02-13T06:25:03Z
dc.date.issued 2023-01-18
dc.identifier.issn 1391-8796
dc.identifier.uri http://ir.lib.ruh.ac.lk/xmlui/handle/iruor/11045
dc.description.abstract Around the world, hearing-impaired and speech-impaired people use different sign languages to communicate with each other and with others. In Sri Lanka, they use Sinhala Sign Language (SSL). SSL consists of more than 2000 sign-based words covering the three basic parts of sign language: isolated (static) signs, continuous signs, and annotations. Apart from those who use it, most people find SSL difficult to understand, and as a result impaired people face difficulties in day-to-day communication. To address this difficulty, a prototype model was proposed to translate SSL signs to words in real time by capturing SSL hand gestures with the aid of video processing, MediaPipe, and Long Short-Term Memory (LSTM) techniques. As a starting point, the proposed model was developed to recognize selected static SSL signs. 250 mobile-phone-captured videos of the selected signs, performed by impaired persons, were used as inputs to the model. Thirty frames extracted from each input video were then used to extract right-hand, left-hand, and face landmarks. Finally, the extracted landmarks were fed into a well-trained Convolutional Neural Network model. This development reached an overall accuracy of over 65% for the selected static SSL gestures. The model will be further developed into a simple and efficient mobile application to convert isolated (static) signs, continuous signs, and annotations made by an impaired person. en_US
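
The abstract describes a landmark-based pipeline: sample 30 frames per video, extract right-hand, left-hand, and face landmarks with MediaPipe, and classify the resulting sequence with an LSTM. The following Python sketch illustrates such a pipeline under stated assumptions; it is not the authors' code, and the layer sizes, number of classes, sampling strategy, and feature dimension (21 landmarks per hand and 468 face landmarks from MediaPipe Holistic, 3 coordinates each) are illustrative choices.

# Minimal sketch of the described pipeline (assumptions noted above):
# sample 30 frames from a sign video, extract hand/face landmarks with
# MediaPipe Holistic, and classify the landmark sequence with an LSTM.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

SEQ_LEN = 30        # frames per video, as stated in the abstract
NUM_CLASSES = 5     # hypothetical number of selected static SSL signs

mp_holistic = mp.solutions.holistic

def extract_landmarks(results):
    """Flatten right-hand, left-hand, and face landmarks into one vector."""
    def flat(lms, count):
        if lms is None:
            return np.zeros(count * 3)
        return np.array([[p.x, p.y, p.z] for p in lms.landmark]).flatten()
    right = flat(results.right_hand_landmarks, 21)
    left = flat(results.left_hand_landmarks, 21)
    face = flat(results.face_landmarks, 468)
    return np.concatenate([right, left, face])   # 1530 values per frame

def video_to_sequence(path):
    """Read a video, sample SEQ_LEN frames, and return their landmark vectors."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    idx = np.linspace(0, len(frames) - 1, SEQ_LEN).astype(int)
    seq = []
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        for i in idx:
            rgb = cv2.cvtColor(frames[i], cv2.COLOR_BGR2RGB)
            seq.append(extract_landmarks(holistic.process(rgb)))
    return np.stack(seq)          # shape: (SEQ_LEN, 1530)

# Sequence classifier over the per-frame landmark vectors.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 1530)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

In this sketch each video yields a (30, 1530) array of landmark coordinates; stacking those arrays with one-hot sign labels would give the training tensors passed to model.fit.
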
dc.language.iso en en_US
dc.publisher Faculty of Science, University of Ruhuna, Matara, Sri Lanka en_US
dc.subject Sinhala Sign Language en_US
dc.subject Real-time translator en_US
dc.subject Neural Network en_US
dc.subject Video Processing en_US
dc.title Real-time model to recognize Sinhala sign language based on CNN-LSTM approach en_US
dc.type Article en_US

