Sign language recognition
Sign language recognition (commonly abbreviated SLR) is a computational task that involves recognizing signs and gestures from sign languages.[1] It is an important problem to solve, particularly in the digital world, as it helps bridge the communication gap faced by individuals with hearing impairments.
Solving the problem typically requires annotated color (RGB) video data; additional modalities such as depth maps and other sensor readings can also be useful.
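As an illustration of how such modalities can be combined, the sketch below performs a simple early fusion of RGB and depth frames into a single input tensor. The array shapes, normalization, and the early-fusion strategy are illustrative assumptions, not a standard from the literature.

```python
# Minimal sketch (assumed shapes): fusing RGB and depth frames by
# concatenating depth as a fourth channel (early fusion).
import numpy as np

def fuse_rgb_depth(rgb_frames: np.ndarray, depth_frames: np.ndarray) -> np.ndarray:
    """rgb_frames:   (T, H, W, 3) uint8 video clip
    depth_frames: (T, H, W)    float32 depth maps, same resolution
    returns:      (T, H, W, 4) float32 tensor for a recognition model
    """
    rgb = rgb_frames.astype(np.float32) / 255.0   # normalize color to [0, 1]
    depth = depth_frames[..., np.newaxis]          # add a channel axis
    depth = depth / (depth.max() + 1e-8)           # normalize depth to [0, 1]
    return np.concatenate([rgb, depth], axis=-1)   # stack as a 4-channel clip

clip = fuse_rgb_depth(np.zeros((16, 224, 224, 3), np.uint8),
                      np.ones((16, 224, 224), np.float32))
print(clip.shape)  # (16, 224, 224, 4)
```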
Isolated sign language recognition
Isolated sign language recognition (ISLR), also known as word-level SLR, is the task of recognizing individual signs or tokens, known as glosses, from a given segment of a signing video clip. It is commonly treated as a classification problem over pre-segmented clips, though real-time applications must also handle tasks such as video segmentation.
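A minimal sketch of this classification framing, written in PyTorch, is shown below. The architecture (a per-frame encoder followed by temporal average pooling and a gloss classifier) and all sizes are simplified assumptions, not a specific published model.

```python
# Minimal ISLR sketch: classify an isolated clip over a gloss vocabulary.
import torch
import torch.nn as nn

class IsolatedSignClassifier(nn.Module):
    def __init__(self, num_glosses: int, feat_dim: int = 512):
        super().__init__()
        # Per-frame feature extractor (a stand-in for a pretrained CNN).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_glosses)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.view(b * t, c, h, w))
        feats = feats.view(b, t, -1).mean(dim=1)  # average over time
        return self.classifier(feats)             # (batch, num_glosses)

model = IsolatedSignClassifier(num_glosses=1000)
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 1000])
```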
Continuous sign language recognition
Continuous sign language recognition (CSLR), also referred to as sign language transcription, involves predicting the full sequence of signs (or glosses) in a sign language video. This task is better suited to real-world transcription and is sometimes treated as an extension of ISLR, depending on the approach used.
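One common way to train a CSLR model without frame-level alignments is connectionist temporal classification (CTC), which the sketch below illustrates. The recurrent encoder, feature dimensions, and vocabulary size are illustrative assumptions rather than a fixed recipe.

```python
# Minimal CSLR sketch: CTC aligns an unsegmented frame sequence with a
# shorter gloss sequence without frame-level labels.
import torch
import torch.nn as nn

vocab_size = 1000          # gloss vocabulary (index 0 reserved for CTC blank)
encoder = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
head = nn.Linear(256, vocab_size)
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

frame_feats = torch.randn(2, 100, 512)         # (batch, frames, features)
out, _ = encoder(frame_feats)
log_probs = head(out).log_softmax(-1)          # (batch, frames, vocab)

targets = torch.randint(1, vocab_size, (2, 12))     # gloss indices per video
input_lengths = torch.full((2,), 100, dtype=torch.long)
target_lengths = torch.full((2,), 12, dtype=torch.long)

# CTCLoss expects (time, batch, vocab) log-probabilities.
loss = ctc_loss(log_probs.transpose(0, 1), targets,
                input_lengths, target_lengths)
loss.backward()
```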
Continuous sign language translation
Sign language translation (SLT) is the task of translating a sequence of signs (or glosses) into a corresponding spoken-language sentence. It is generally modeled as an extension of the CSLR problem.
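SLT is typically framed as sequence-to-sequence learning. The sketch below maps a gloss sequence to spoken-language tokens with a generic encoder-decoder Transformer; the vocabulary sizes, dimensions, and choice of a vanilla Transformer are illustrative assumptions.

```python
# Minimal SLT sketch: gloss indices in, spoken-language token logits out.
import torch
import torch.nn as nn

gloss_vocab, text_vocab, d_model = 1000, 8000, 256
gloss_embed = nn.Embedding(gloss_vocab, d_model)
text_embed = nn.Embedding(text_vocab, d_model)
transformer = nn.Transformer(d_model=d_model, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2,
                             batch_first=True)
generator = nn.Linear(d_model, text_vocab)

glosses = torch.randint(0, gloss_vocab, (2, 12))   # recognized gloss sequence
text_in = torch.randint(0, text_vocab, (2, 20))    # shifted target sentence

# Causal mask so each output token only attends to earlier tokens.
tgt_mask = nn.Transformer.generate_square_subsequent_mask(20)
hidden = transformer(gloss_embed(glosses), text_embed(text_in),
                     tgt_mask=tgt_mask)
logits = generator(hidden)                          # (2, 20, text_vocab)
print(logits.shape)
```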
References
- ^ Cooper, Helen; Holt, Brian; Bowden, Richard (2011). "Sign Language Recognition". Visual Analysis of Humans. Springer. pp. 539–562. doi:10.1007/978-0-85729-997-0_27. ISBN 978-0-85729-996-3. S2CID 1297591.