Techniques are presented herein that support the efficient conversion of signs from one form of sign language to another while considering the cultural context (e.g., dialect) of a sign language. Aspects of the presented techniques support conversion between different sign language forms through a neural machine translation (NMT)-based architecture. Further aspects of the techniques may encompass a contextual frame sampler (which may employ a sign language image database to filter out noise and may sample frames from an input sign language video), an image normalizer (which may accept a sampled image frame as input and produce a skeletal structure of that frame as output), a translation layer (which may contain an NMT-based model and may comprise feature extraction, feature conversion, and feature generation capabilities), and a video generator (which may stitch the generated translated sign language output frames into a video). Under still further aspects of the techniques, such a conversion capability may be available during a video conference.
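The pipeline described above (frame sampling, skeletal normalization, NMT-based translation, and video stitching) might be organized along the following lines. This is a minimal illustrative sketch only: all class names, method signatures, and the placeholder logic inside them are assumptions introduced here for clarity, not details taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Frame:
    """Placeholder for one video frame; `pixels` stands in for image data."""
    index: int
    pixels: str

class FrameSampler:
    """Contextual frame sampler: samples frames from the input video and
    filters out noise (here, frames crudely tagged as 'noise')."""
    def __init__(self, stride: int = 2):
        self.stride = stride

    def sample(self, video: List[Frame]) -> List[Frame]:
        return [f for f in video[::self.stride] if f.pixels != "noise"]

class ImageNormalizer:
    """Image normalizer: reduces a sampled frame to a skeletal
    (pose-keypoint) representation."""
    def normalize(self, frame: Frame) -> Dict:
        return {"index": frame.index, "keypoints": f"skeleton({frame.pixels})"}

class TranslationLayer:
    """Translation layer: an NMT-style model with feature extraction,
    feature conversion, and feature generation stages (all mocked here)."""
    def translate(self, skeletons: List[Dict], target: str) -> List[Dict]:
        features = [s["keypoints"] for s in skeletons]       # feature extraction
        converted = [f"{target}:{f}" for f in features]      # feature conversion
        return [{"index": i, "keypoints": c}                 # feature generation
                for i, c in enumerate(converted)]

class VideoGenerator:
    """Video generator: stitches the translated frames into an ordered
    output sequence (standing in for an encoded video)."""
    def stitch(self, frames: List[Dict]) -> List[Dict]:
        return sorted(frames, key=lambda f: f["index"])

def convert_sign_video(video: List[Frame], target: str) -> List[Dict]:
    """End-to-end conversion from one sign language form to another."""
    sampled = FrameSampler().sample(video)
    skeletons = [ImageNormalizer().normalize(f) for f in sampled]
    translated = TranslationLayer().translate(skeletons, target)
    return VideoGenerator().stitch(translated)
```

In a real system the normalizer would run a pose-estimation model and the translation layer a trained sequence-to-sequence network; the sketch only fixes the data flow between the four stages named in the disclosure.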
This work is licensed under a Creative Commons Attribution 4.0 License.
Kopparam, Anusha; Jairaj, Ananthi; Abburi, Vinay Kumar; and M Allen, PhD, Donald, "USING NEURAL MACHINE TRANSLATION TO TRANSLATE BETWEEN DIFFERENT SIGN LANGUAGE FORMS DURING A VIDEO CONFERENCE", Technical Disclosure Commons (September 19, 2023).