
    American Sign Language Translator

    File: com012.pdf (369.4 KB)
    Date: 2019
    Authors: Premarathna, WCC; Arudchelvam, T
    Abstract
    Sign language is essential for people with hearing and speech impairments, who depend on it to interact with others; because most hearing people do not understand sign language, everyday communication remains a major barrier for them. This paper proposes an application that recognizes American Sign Language (ASL) signs using Python, OpenCV, TensorFlow, and Keras. The input images show the palm side of the hand and are loaded at runtime, and the method is designed for a single user at a time. Real-time images are first captured as training data and stored in a directory; feature extraction is then performed to identify which sign the user has articulated. A CNN (Convolutional Neural Network) model using a sequential classifier and the ReLU (Rectified Linear Unit) activation function is created and saved as a JSON file. Recognition compares key points from the input image against the images stored for each letter in the JSON model, and the result is produced from the matched key points. The system covers 41 ASL signs: the 26 English letters, the digits 0–9, and a few simple words. The model achieved 95% accuracy on input images captured at various angles and distances in a favourable environment.
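    The abstract's model (a Keras sequential CNN with ReLU activations, exported to JSON) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the layer sizes, the 64x64 grayscale input, and the file name asl_model.json are assumed; only the 41-class output follows from the abstract.

        # Minimal sketch of a Keras sequential CNN with ReLU activations, saved as JSON.
        # Layer sizes, input resolution, and file name are illustrative assumptions.
        from tensorflow.keras import layers, models

        NUM_CLASSES = 41  # 26 letters + digits 0-9 + a few simple words (per the abstract)

        model = models.Sequential([
            layers.Input(shape=(64, 64, 1)),               # palm-side hand image, grayscale (assumed size)
            layers.Conv2D(32, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])

        # Save the architecture as JSON; trained weights would be stored separately (e.g. HDF5).
        with open("asl_model.json", "w") as f:
            f.write(model.to_json())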
    URI: http://ir.kdu.ac.lk/handle/345/2262
    Collections
    • Computing [68]

    Library copyright © 2017 General Sir John Kotelawala Defence University, Sri Lanka
    Contact Us | Send Feedback