
      Author Names
      S [5]
      Sajeewan [1]
      Samarakoon [1]
      Samarasinghe [2]
      Samaraweera [3]
      Sandaruwan [1]
      Sandeepanie [1]
      SB [1]
      SCM [1]
      Senananyake [1]
      Senavirathne [2]
      Shanaka [1]
      Sign language is the main communication medium among deaf and speech-impaired people, who use hand gestures to express their thoughts and emotions. In Sri Lanka, Sri Lankan Sign Language is considered the native sign language. Unfortunately, deaf and speech-impaired people in the Sri Lankan community are often ignored by society because of the language barrier. As a solution, this paper proposes a system that assists deaf and speech-impaired people by capturing their sign-based message via a camera, converting it into Sinhala text, and then into audio form. The main aim of this research is therefore to eliminate the communication gap and improve interaction between deaf and speech-impaired people and the wider community. Convolutional Neural Networks (CNNs) form the core technology of this research. The proposed CNN model, which consists of one convolution layer, one max-pooling layer, and two dense layers with ReLU and Softmax activation functions, automatically extracts the features of an input static gesture, recognizes it as one of 24 classes, and outputs the result in text form. A text-to-speech engine then generates the audio output in the Sinhala language. The model was trained more than 20 times and achieved an accuracy of 98.61%. The proposed model was implemented in Python using libraries such as OpenCV, Keras, and pickle. Keywords— Convolutional Neural Networks, Static gestures, Gesture recognition, HSV [1]
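      The architecture described in the abstract (one convolution layer, one max-pooling layer, and two dense layers with ReLU and Softmax, classifying 24 static-gesture classes) can be sketched in Keras roughly as follows. This is a minimal illustration, not the authors' implementation: the input size (64x64 grayscale), filter count (32), and dense-layer width (128) are assumptions, since the abstract does not state them.

      ```python
      # Hypothetical sketch of the CNN described in the abstract.
      # Assumed (not stated in the abstract): 64x64 grayscale input,
      # 32 conv filters, 128 hidden dense units.
      from tensorflow import keras
      from tensorflow.keras import layers

      def build_model(input_shape=(64, 64, 1), num_classes=24):
          model = keras.Sequential([
              layers.Input(shape=input_shape),
              layers.Conv2D(32, (3, 3), activation="relu"),    # one convolution layer
              layers.MaxPooling2D((2, 2)),                     # one max-pooling layer
              layers.Flatten(),
              layers.Dense(128, activation="relu"),            # first dense layer (ReLU)
              layers.Dense(num_classes, activation="softmax"), # second dense layer (Softmax)
          ])
          model.compile(optimizer="adam",
                        loss="categorical_crossentropy",
                        metrics=["accuracy"])
          return model
      ```

      With this layout the Softmax output gives a probability over the 24 gesture classes; the predicted class can then be mapped to its Sinhala text label and passed to a text-to-speech engine for the audio output.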
      Silva [1]
      Sirisuriya [1]
      Sirisuriya SCM [1]
      SJMDP [1]
      SN [1]
      Sourjah [1]
      SS [1]