LEVERAGING ARTIFICIAL INTELLIGENCE FOR KENYAN SIGN LANGUAGE PRODUCTION TO SUPPORT DEAF LEARNERS
M. Ayere, L. Wanzare, J. Okutoyi, M. Kangahi, E. Maina
Maseno University (KENYA)
Language is a basic human right for expressing one's feelings and exchanging ideas. Deaf learners face a language barrier because they cannot hear or express themselves through speech; they use sign language, which most hearing people do not understand, and are therefore routinely excluded from day-to-day activities that involve speech. According to the Kenya Census report, approximately 2% (1 million) of the estimated 50 million Kenyans are deaf. The language barrier is further compounded by the scarcity of Kenyan Sign Language interpreters, most of whom are not well trained or certified, leading to misinterpretation of information that in turn compromises the quality of education for deaf learners. A technological solution can help overcome this language barrier.

This project seeks to develop an assistive Artificial Intelligence technology for Kenyan Sign Language (AI4KSL) that translates spoken English into Kenyan Sign Language (KSL) in real time, rendering the signs visually through virtual signing characters (avatars). The specific objectives are to: build a dataset of spoken English and video-recorded KSL; develop a prototype assistive AI technology that translates spoken and written English into KSL; and evaluate the assistive AI technology.

The project used mixed methods research within an experimental design. The population consisted of 70 undergraduate sign language teacher trainees, 184 teachers of hearing-impaired learners and 1,200 learners with hearing impairment. Stratified random sampling was used to select 21 sign language teacher trainees, 48 teachers of hearing-impaired learners and 400 learners with hearing impairment.

The project curated 6,000 English sentences across different topics following the KSL curriculum. Each sentence was signed at least three times with gloss representation, yielding about 20,000 videos, each of which is one signed sentence. The video clips were split using the Shotcut video editor, while the ELAN tool was used to segment the videos, demarcating the start and end of each sign. Transcription used the HamNoSys structure, covering the symmetry operator, non-manual features, hand shape (articulator), movement and palm orientation (manner of articulation), and location (place of articulation); a minimal data-structure sketch of one such entry is given below.

Sign language representation of the dataset used gloss-based pose representation in four steps: (1) automatic text-to-gloss translation, combining a rule-based word reordering and dropping component with neural machine translation; (2) video-to-pose conversion, extracting skeletal poses with a state-of-the-art pose estimation framework (MediaPipe); (3) pose generation, i.e. lookup of the corresponding poses followed by pose cropping, concatenation and smoothing; and (4) pose-to-avatar generation using DeepMotion and Ready Player Me. Illustrative sketches of steps (1) to (3) follow below.

So far, about 2,000 words have been transcribed in HamNoSys against a target of 4,000, and pose estimates (landmarks) have been generated for all videos. It is hoped that the AI4KSL innovation will break language barriers, improve learning outcomes and inclusion, and strengthen bilingual proficiency. This will lead to high-quality education, increased transition and completion rates, and improved communication for deaf learners in the community when the technology is scaled up.
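As a minimal sketch of how one HamNoSys-based transcription entry could be stored, the record below mirrors the parameters named above; the `SignTranscription` type and its field names are illustrative assumptions, not the project's actual schema (HamNoSys itself is a symbolic notation, kept here as a raw string).

```python
from dataclasses import dataclass

@dataclass
class SignTranscription:
    """One transcription entry for a signed gloss. Field names are
    illustrative and mirror the HamNoSys parameters listed in the
    abstract; a real HamNoSys transcription is a symbolic sequence."""
    gloss: str              # e.g. "SCHOOL"
    hamnosys: str           # raw HamNoSys symbol string
    symmetry: str           # symmetry operator (two-handed signs)
    handshape: str          # articulator
    location: str           # place of articulation
    movement: str           # manner of articulation
    palm_orientation: str   # manner of articulation
    non_manual: str = ""    # facial expression, mouthing, head movement
```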
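A toy version of the rule-based word-dropping component in step (1) might look as follows; the stop-word list is an invented illustration, and the reordering rules and the neural machine translation stage are not shown.

```python
# Toy rule-based text-to-gloss pass: drop English function words and
# uppercase the remainder, since glosses are conventionally written in
# capitals. The stop-word list is illustrative, not the project's.
STOP_WORDS = {"a", "an", "the", "is", "are", "am", "was", "were",
              "to", "of", "do", "does"}

def text_to_gloss(sentence: str) -> list[str]:
    tokens = [t.strip(".,?!").lower() for t in sentence.split()]
    content = [t for t in tokens if t and t not in STOP_WORDS]
    return [t.upper() for t in content]

print(text_to_gloss("The teacher is going to school."))
# ['TEACHER', 'GOING', 'SCHOOL']
```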
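For step (2), a minimal sketch of per-frame skeletal landmark extraction with MediaPipe's Holistic solution (the legacy `mp.solutions` API) could be:

```python
import cv2
import mediapipe as mp

def extract_pose_landmarks(video_path: str) -> list:
    """Run MediaPipe Holistic over every frame of a signed video and
    collect body-pose landmarks as (x, y, z) triples per frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.holistic.Holistic(model_complexity=1) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV decodes frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                frames.append([(lm.x, lm.y, lm.z)
                               for lm in results.pose_landmarks.landmark])
    cap.release()
    return frames
```

For sign language the hand and face channels matter as much as the body, so `results.left_hand_landmarks`, `results.right_hand_landmarks` and `results.face_landmarks` would be collected in the same way.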
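Step (3) reduces to a gloss-to-pose lookup followed by concatenation and smoothing; the sketch below assumes a hypothetical `pose_dict` mapping each gloss to a pre-recorded pose clip and uses a simple moving average, one plausible choice among many smoothing schemes.

```python
import numpy as np

def generate_pose_sequence(glosses, pose_dict, window=5):
    """Look up a pre-recorded pose clip (a (T_i, L, 3) array of T_i
    frames by L landmarks) for each gloss, concatenate the clips along
    time, and smooth every joint trajectory with a moving average to
    soften the transitions between glosses."""
    clips = [pose_dict[g] for g in glosses if g in pose_dict]
    seq = np.concatenate(clips, axis=0)        # (T, L, 3)
    kernel = np.ones(window) / window
    smoothed = np.empty_like(seq)
    for joint in range(seq.shape[1]):
        for coord in range(seq.shape[2]):
            smoothed[:, joint, coord] = np.convolve(
                seq[:, joint, coord], kernel, mode="same")
    return smoothed
```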

Keywords: Artificial Intelligence, Bridging Language Barriers, Building Datasets, Deaf Learners, Inclusion, Kenyan Sign Language (KSL).