Yan Zhang, assistant professor in the School of Information at The University of Texas at Austin, is creating an interactive online game to help children with autism spectrum disorders (ASDs) communicate their emotions.
The game will help the State of Texas achieve a high-quality intervention program for children with ASDs and the research can be used to aid autism communities across the nation.
For families around the world, this new technology will, for the first time, allow much more meaningful communication with their autistic children.
ASDs are a group of neurodevelopmental and psychological disorders affecting one in 110 children in the United States.
Children with the disorder often have impaired social and communication skills and are unable to recognize and understand the cognitive and emotional states of others, including those conveyed through nonverbal cues such as facial expressions.
The goal of the research is to develop an early intervention tool, an interactive and adaptive game system, to help children with ASDs recognize and understand emotions expressed through facial expressions.
“Over the past year I’ve worked closely with numerous families who have a child with autism,” said Zhang. “They are so caught up in the difficulties of daily routines that they can’t even fathom the chance of getting to have a meaningful discourse with their child. This would be life-changing for them.”
Existing computer-based interventions for children with autism use static, response-based software. The game being created by Zhang and colleague J.K. Aggarwal of the Cockrell School of Engineering integrates adaptive and responsive components, along with real-time image capture via a webcam trained on the child's face.
These components help children with autism identify their own emotional responses onscreen, helping them to then communicate those emotions to others.
Currently available games also lack context, which is crucial for emotional communication. Zhang is incorporating meaningful, recognizable social scenarios into the game and embedding the child's facial expressions onto an avatar, giving the child a chance to immediately recognize his or her own facial reaction to a situation.
With this knowledge, children can not only see their own emotional response to a situation but also begin to recognize the emotional responses of others.
Children will directly interact with the game using both a keyboard and a mouse, while a webcam will capture their facial expressions. An expression analysis component will simultaneously track the facial movements captured by the webcam and analyze the expressions.
Based on the results of this analysis, a virtual character synthesis process will create an avatar, so that the child can see his or her facial expressions reflected on the avatar.
The results of the expression analysis can also be directly rendered to the interface component, so that the child can tell whether he or she is correctly mimicking the virtual avatar’s facial expressions and whether he or she is making a proper facial expression to match a particular social scenario.
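As a rough illustration of the analysis-to-feedback step described above, the matching logic might resemble the following sketch. This is not the project's actual implementation; the emotion labels, score format, confidence threshold, and function names are all assumptions made for the example.

```python
# Hypothetical sketch: map expression-analysis scores to an emotion label
# and check it against the emotion a social scenario calls for. Labels,
# thresholds, and names are illustrative, not the project's real system.

EMOTIONS = ("happy", "sad", "angry", "surprised", "neutral")

def classify_expression(scores):
    """Pick the emotion with the highest analysis score.

    `scores` is a dict like {"happy": 0.7, "sad": 0.1, ...}, as an
    expression-analysis component might produce per webcam frame.
    """
    return max(scores, key=scores.get)

def matches_scenario(scores, expected_emotion, min_confidence=0.5):
    """Return True if the child's strongest expression matches the
    emotion the scenario calls for, with enough confidence to count."""
    label = classify_expression(scores)
    return label == expected_emotion and scores[label] >= min_confidence

# Example: a birthday-party scenario expects a happy expression.
scores = {"happy": 0.72, "sad": 0.05, "angry": 0.03,
          "surprised": 0.15, "neutral": 0.05}
print(matches_scenario(scores, "happy"))  # True: confident happy match
```

In a full system, a result like this would drive the interface feedback, telling the child whether the expression fit the scenario or the avatar being mimicked.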
“Due to the wide spectrum of symptoms and behaviors within ASDs, effective solutions need to be able to adapt to the characteristics of each child,” said Zhang.
“Thus, we will also make the game adaptive to each child’s behavior. For example, if a child is playing the same game mode repeatedly, in a manner that is not increasing learning, the game system will adapt the level or mode accordingly to encourage learning.”
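The adaptive behavior Zhang describes could be sketched as a simple rule: if recent sessions all use the same mode and scores are not improving, suggest a change of level or mode. The threshold, data shapes, and function name below are illustrative assumptions, not the project's actual design.

```python
# Hypothetical sketch of the adaptivity rule: repeated play of one mode
# with no score improvement triggers a level/mode change. The repeat
# limit and (mode, score) history format are assumptions for the example.

def needs_adaptation(session_history, repeat_limit=3):
    """Return True if the last `repeat_limit` sessions used the same
    mode and the scores did not improve.

    `session_history` is a list of (mode, score) tuples, oldest first.
    """
    if len(session_history) < repeat_limit:
        return False
    recent = session_history[-repeat_limit:]
    same_mode = len({mode for mode, _ in recent}) == 1
    scores = [score for _, score in recent]
    no_improvement = all(b <= a for a, b in zip(scores, scores[1:]))
    return same_mode and no_improvement

# A child keeps replaying "mimic" mode with flat scores: time to adapt.
history = [("mimic", 40), ("mimic", 40), ("mimic", 38)]
print(needs_adaptation(history))  # True
```

A real system would likely weigh richer signals than raw scores, but the shape of the decision is the same: detect unproductive repetition, then vary the game to encourage learning.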
The project, entitled LIFEisGAME, is being conducted in collaboration with Verónica Orvalho from the University of Porto, Portugal.