Browsing Computing by Author
IR@KDU Home
INTERNATIONAL RESEARCH CONFERENCE ARTICLES (KDU IRC)
2019 IRC Articles
Computing
Now showing items 1-20 of 23
Author Name [Items]
S [5]
Sajeewan [1]
Samarakoon [1]
Samarasinghe [2]
Samaraweera [3]
Sandaruwan [1]
Sandeepanie [1]
SB [1]
SCM [1]
Senananyake [1]
Senavirathne [2]
Shanaka [1]
Sign language is the main communication medium among deaf and speech-impaired people, who use hand gestures to express their thoughts and emotions. In Sri Lanka, Sri Lankan Sign Language is considered the native sign language, but deaf and speech-impaired people are often marginalized by society due to the language barrier. As a solution to this problem, this paper proposes a system that assists deaf and speech-impaired people by capturing their sign-based message via a camera and converting it into Sinhala text, and further into audio form. The main aim of this research is to eliminate the communication gap and to improve interaction between deaf and speech-impaired people and the general public. Convolutional Neural Networks (CNNs) are the core technology of this research. The proposed CNN model, which consists of one convolution layer, one max-pooling layer, and two dense layers with ReLU and Softmax activation functions, automatically extracts the features of an input static gesture, recognizes it as one of 24 classes, and outputs the result in text form. A text-to-speech engine then generates the audio output in the Sinhala language. The model was trained more than 20 times and obtained an accuracy of 98.61%. The proposed model was implemented in Python using libraries such as OpenCV, Keras, and pickle. Keywords: Convolutional Neural Networks, Static gestures, Gesture recognition, HSV
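The abstract above describes the recognition pipeline: one convolution layer, one max-pooling layer, and two dense layers, with ReLU activations and a final Softmax over 24 sign classes. The paper implements this in Keras; purely as an illustration, a minimal NumPy sketch of that forward pass, with randomly initialized weights and assumed sizes (28×28 grayscale input, eight 3×3 filters, 64 hidden units — none of these sizes are stated in the abstract), might look like:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def conv2d_valid(img, kernel):
    """Naive single-channel 'valid' convolution (cross-correlation)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd edges trimmed)."""
    h2, w2 = x.shape[0] // 2, x.shape[1] // 2
    return x[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)

# Assumed input: a 28x28 preprocessed gesture image; 8 assumed 3x3 filters.
img = rng.random((28, 28))
kernels = rng.standard_normal((8, 3, 3)) * 0.1

# Convolution layer + ReLU, one feature map per filter -> (8, 26, 26).
feature_maps = np.stack([relu(conv2d_valid(img, k)) for k in kernels])

# Max-pooling layer -> (8, 13, 13).
pooled = np.stack([maxpool2(fm) for fm in feature_maps])

# Flatten, then two dense layers: ReLU hidden, Softmax over 24 sign classes.
flat = pooled.ravel()
W1 = rng.standard_normal((64, flat.size)) * 0.01
hidden = relu(W1 @ flat)
W2 = rng.standard_normal((24, 64)) * 0.1
probs = softmax(W2 @ hidden)

print(probs.shape)  # (24,) - one probability per gesture class
```

The class with the highest probability would be mapped to its Sinhala text label and passed to the text-to-speech engine. With trained weights in place of the random ones, `probs.argmax()` would select the recognized sign.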
Silva [1]
Sirisuriya [1]
Sirisuriya SCM [1]
SJMDP [1]
SN [1]
Sourjah [1]
SS [1]