The ASLR project began in Spring 2009 as a joint effort between theorists at the University of Chicago and technologists at the Toyota Technical Institute to create an efficient, precise, and linguistically revealing video recognition system for American Sign Language. Starting from the ground up, the team elicited data on the complex movement patterns involved in American Sign Language fingerspelling. These data eventually led to statistical analyses of (1) durational variation between fingerspelled letters and (2) fingerspelling errors.

Collaborators on this project are currently working with Hidden Markov Model (HMM) approaches to video recognition and are using the pilot-study statistics to set several very crude baseline parameters. The project will soon elicit more fingerspelling data.
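To illustrate the general shape of an HMM approach to letter recognition, here is a minimal, hypothetical sketch: letters are hidden states, each frame of video is reduced to a single scalar feature, and Viterbi decoding recovers the most likely letter sequence. The letters, feature values, Gaussian emission models, and self-loop probability below are all illustrative assumptions, not project data or parameters; real systems would use high-dimensional video features and trained models.

```python
import math

# Hypothetical per-letter emission model: each letter emits a scalar
# feature drawn from a Gaussian with the mean below and a shared sigma.
# All numbers here are illustrative, not project statistics.
LETTER_MEANS = {"A": 0.1, "S": 0.5, "L": 0.9}
SIGMA = 0.15
SELF_LOOP = 0.8  # probability of staying in the same letter across frames


def log_emission(letter, x):
    """Log-likelihood of feature x under the letter's Gaussian."""
    mu = LETTER_MEANS[letter]
    return -0.5 * ((x - mu) / SIGMA) ** 2 - math.log(SIGMA * math.sqrt(2 * math.pi))


def viterbi(frames):
    """Return the most likely letter per frame for a list of features."""
    letters = list(LETTER_MEANS)
    switch = (1 - SELF_LOOP) / (len(letters) - 1)
    # Initialise with uniform priors over letters.
    score = {l: math.log(1 / len(letters)) + log_emission(l, frames[0])
             for l in letters}
    back = []
    for x in frames[1:]:
        new_score, ptr = {}, {}
        for l in letters:
            # Best predecessor state, accounting for self-loop vs switch cost.
            best_prev = max(
                letters,
                key=lambda p: score[p] + math.log(SELF_LOOP if p == l else switch),
            )
            new_score[l] = (
                score[best_prev]
                + math.log(SELF_LOOP if best_prev == l else switch)
                + log_emission(l, x)
            )
            ptr[l] = best_prev
        back.append(ptr)
        score = new_score
    # Trace the best path backwards.
    state = max(score, key=score.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))


# Frames whose features sit near A, then S, then L decode accordingly.
frames = [0.12, 0.09, 0.52, 0.48, 0.88, 0.91]
print(viterbi(frames))  # → ['A', 'A', 'S', 'S', 'L', 'L']
```

The self-loop probability is the simplest way an HMM encodes duration; the durational statistics mentioned above could inform more refined duration models.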

The ASLR project team has also begun to construct a database of all instances of fingerspelled letters available in the collected data. Such a database will enable users to compare any given visual snapshot of a fingerspelled letter with its distance from neighboring segments, its distance from errors, the word (and word type) in which it occurs, its signer, its speed, and a coder confidence label. Other investigations the ASLR project team hopes to pursue in the near future include coarticulation and signer-specific differences. If you are interested in collaborating on the ASLR project, especially if you have a background in database management or programming languages (neither is strictly required), visit our contact us page.
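One way to picture a record in such a database is as a small structured type whose fields mirror the attributes listed above. The field names, types, and example values here are hypothetical, purely to show the shape of the data:

```python
from dataclasses import dataclass


@dataclass
class FingerspelledLetter:
    """One snapshot of a fingerspelled letter. Fields mirror the
    attributes described above; names and types are hypothetical."""
    letter: str               # the fingerspelled letter, e.g. "A"
    word: str                 # the word in which it occurs
    word_type: str            # the word's type (e.g. name, loanword)
    signer_id: str            # anonymised signer identifier
    speed: float              # signing speed
    dist_to_neighbors: float  # distance from neighboring segments
    dist_to_errors: float     # distance from errors
    coder_confidence: str     # coder confidence label


# Example record with made-up values.
record = FingerspelledLetter(
    letter="A", word="ASLR", word_type="acronym", signer_id="s01",
    speed=5.2, dist_to_neighbors=0.8, dist_to_errors=3.0,
    coder_confidence="high",
)
print(record.letter, record.word)
```

Queries over such records (by signer, by word type, by confidence label) would support the comparisons the paragraph above describes.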

[Image: "ASLR" in sign language]