Research in Developmental Disabilities 36 (2015) 396–403


Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders

Chien-Hsu Chen a,*, I-Jui Lee a, Ling-Yi Lin b

a Ergonomics and Interaction Design Lab, Department of Industrial Design, National Cheng Kung University, No. 1 University Road, Tainan, Taiwan
b Department of Occupational Therapy, National Cheng Kung University, No. 1 University Road, Tainan, Taiwan

* Corresponding author. Tel.: +886 6 2757575x54324. E-mail addresses: [email protected] (C.-H. Chen), [email protected] (I.-J. Lee), [email protected] (L.-Y. Lin).
http://dx.doi.org/10.1016/j.ridd.2014.10.015
0891-4222/© 2014 Elsevier Ltd. All rights reserved.

ARTICLE INFO

ABSTRACT

Article history:
Received 7 August 2014
Received in revised form 2 October 2014
Accepted 10 October 2014
Available online

Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that AR intervention can improve the appropriate recognition of and response to facial emotional expressions seen in the situational task.

© 2014 Elsevier Ltd. All rights reserved.

Keywords:
Augmented reality (AR)
Emotions
Self-facial modeling
Three-dimensional (3-D) facial expressions
3-D facial animation

1. Introduction

Autism spectrum disorders (ASD) are characterized by atypical patterns of behavior and impaired social communication (American Psychiatric Association, 2000; Krasny, Williams, Provencal, & Ozonoff, 2003). The challenge of social interaction for people with ASD is appropriately recognizing and understanding facial expressions that indicate emotions (Dawson, Webb, & McPartland, 2005; Ryan & Charragain, 2010; Williams, Gray, & Tonge, 2012). People with ASD have difficulty understanding the expressions and emotional states of other people and determining their intentions and thought processes, which results in an impaired ability to respond with appropriate expressions and to interact appropriately with their peers (Krasny et al., 2003). In addition, facial expression processing is atypical in people with ASD (Annaz, Karmiloff-Smith, Johnson, & Thomas, 2009); although some people with high-functioning autism (HFA) are relatively adept at social communication involving complex facial emotions, they have difficulty with nonverbal communication (Elder, Caterino, Chao, Shacknai, & De Simone, 2006).

Emotion recognition is among the skills most crucial to social interaction and developing empathy (Baron-Cohen, 2002). Relevant studies have described empathy as a lens through which people comprehend emotional expressions and respond appropriately (Sucksmith, Allison, Baron-Cohen, Chakrabarti, & Hoekstra, 2013). However, people with ASD have deficits that include being unable to view events from the perspective of other people and to respond with appropriate expressions (Baron-Cohen & Belmonte, 2005; Baron-Cohen, Leslie, & Frith, 1985). Research on emotional impairment in ASD has focused primarily on examining emotional recognition and understanding and on teaching facial expressions by labeling them on formatted photographs (Ashwin, Wheelwright, & Baron-Cohen, 2005;



Begeer, Rieffe, Terwogt, & Stockmann, 2006; Ben Shalom et al., 2006; Castelli, 2005; Wang, Dapretto, Hariri, Sigman, & Bookheimer, 2004). For example, various facial expressions in photos and videos have been used to develop the communication skills of people with ASD, which enables them to focus on the specific visual representations and facial cues from which the facial emotions of others can be determined (Blum-Dimaya, Reeve, Reeve, & Hoch, 2010).

Current intervention systems for people with ASD apply a third-person perspective to recognize and manipulate feelings based on the facial synthesis of 3-D characters; they support reusability of facial components and provide an avatar–user interaction model with real-time responses (Kientz, Goodwin, Hayes, & Abowd, 2013); for example, online games depict an imaginary world from a third-person perspective to represent the actions and statuses of other people. However, although the expressions of an avatar or cartoon character (Tseng & Do, 2010) facilitate learning about emotions, methods in which information is not presented from the perspective of the participants do not enable people with ASD to see the expressions on their own faces and thereby connect the expression with their thoughts (Young & Posselt, 2012). In addition, video self-modeling (VSM) has been used for social skills training; it involves participants watching a video of a person modeling a desired behavior and then imitating the behavior of the person in the video (Axe & Evans, 2012). However, using VSM as an intervention strategy for individuals with ASD does not provide immediate feedback on the facial states of the participants during a scenario. These systems simply record the events occurring during the scenario and the physical behaviors imitated by the participants; therefore, participants have difficulty obtaining self-facial expression instruction.

People with ASD experience difficulty in accessing self-facial expression treatment because training scenarios and real-time mood simulations in which people can pretend to feel various emotions are unavailable. Thus, emerging technologies such as augmented reality (AR) can be applied to teach learners to explore material from various perspectives (Asai, Kobayashi, & Kondo, 2005). Because these technologies have the potential to stimulate the senses of the user, they may be particularly useful for teaching subject matter that learners have difficulty experiencing in the real world (Chien, Chen, & Jeng, 2010; Shelton & Hedley, 2002) and for facilitating social interaction. In addition, unlike traditional learning content that provides only static text and facial images to describe an emotional expression, the AR instructional model can present the core learning content directly to participants with ASD and assist them in exploring self-facial expression. Therefore, we created an AR application that can be used to increase emotional expression recognition and social skills.

2. Methods

2.1. Participants

The three adolescent participants with ASD (Zhu, Lin, and Lai) were recruited through the Autism Association in Taiwan. The inclusion criteria specified that participants must have: (1) been clinically diagnosed with ASD, (2) no other specific disabilities, (3) a full-scale Intelligence Quotient (IQ) of more than 85, and (4) been clinically evaluated according to DSM-IV-TR criteria. All participants were fluent in Mandarin Chinese or Taiwanese and had no delays in cognitive development.
The participants' sensory abilities were within the normal range. However, they all had poor social and communication skills and rarely understood how to respond with appropriate facial expressions to other people's emotions. The participants ranged in age from 10 to 13 years (n = 3; mean age: 12 years, 2 months). The mean (SD) full-scale IQ, verbal IQ, and performance IQ scores were 101 (9.07), 100 (8.73), and 101 (1.73), respectively. Assessments of the participants' intelligence, sensory abilities, and social and communication skills were based on multiple information sources, such as parental interviews, teachers' reports, verbal IQ scores (Wechsler Intelligence Scale for Children), and levels of functional language and social adaptation (based on clinical observations or behavior and adaptation scales). The male:female ratio was 2:1. All participants had a medical disability identification card issued by medical institutions in Taiwan and had received counseling in special education schools and institutes in Taiwan. The participants' vision and hearing were normal. In addition, the National Cheng Kung University Hospital internal review board gave ethical clearance for the study (B-BR103-028-T). Parental consent forms were obtained before the participants were enrolled in the study. All participants signed a youth consent form.

2.2. Developing the augmented-reality-based self-facial modeling learning system

The 3-D facial models of virtual characters were designed to fit the heads of all the participants, and six facial expressions that communicated basic emotions (happiness, sadness, fear, disgust, surprise, and anger) were developed for this study according to the Facial Action Coding System (FACS) (Hamm, Kohler, Gur, & Verma, 2011). To create the 3-D head models, we used frontal and side-view pictures of the face of each participant to generate facial skin on the models using Facial Studio 3.2 (Di-O-Matic, 2014), which provides the user with more than 500 controls over the head-creation process, and animated the models using 3ds Max 2012 (Autodesk, San Rafael, CA). Three 3-D head avatars (two male and one female) were created and used in this study. After we built the models, we used the Unity game engine, which enables customized modeling, rigging, and animation and can be integrated with 3-D modeling and animation software pipelines, such as 3ds Max, provided the pipelines support exporting in standard formats, e.g., FBX and OBJ sequences. Finally, the AR system was built using Qualcomm AR (QCAR) in Unity on the Vuforia™ platform (Qualcomm, Inc., San Diego, CA), which enables rapid and accurate natural-feature tracking of textured planar objects. Developing an AR environment with this software was straightforward because virtual content could be overlaid on printed pages and the participants could view their 3-D emotional expressions and perceived emotional states reflected on the LCD monitor (Fig. 1).
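The export step of this pipeline can be sketched in code. The following is only a hypothetical sanity check, not part of the authors' pipeline: it assumes the six expression heads were exported as OBJ files with made-up names (head_happiness.obj, and so on) into an assumed folder named exported_heads, and it uses the trimesh library to confirm that every file loads and that all expressions share one topology so they can be animated consistently on one head model.

```python
# Hypothetical check of exported expression meshes (illustration only; the
# study's assets were created in Facial Studio / 3ds Max and imported into
# Unity, not processed with this script).
from pathlib import Path

import trimesh  # pip install trimesh

EXPRESSIONS = ["happiness", "sadness", "fear", "disgust", "surprise", "anger"]


def check_expression_meshes(folder: str) -> None:
    """Load each exported OBJ head and confirm the six meshes share one topology."""
    reference = None
    for name in EXPRESSIONS:
        path = Path(folder) / f"head_{name}.obj"  # assumed file naming
        mesh = trimesh.load(path, force="mesh")
        shape = (len(mesh.vertices), len(mesh.faces))
        print(f"{name:9s}: {shape[0]} vertices, {shape[1]} faces")
        if reference is None:
            reference = shape
        elif shape != reference:
            raise ValueError(f"'{name}' does not match the reference topology")


if __name__ == "__main__":
    check_expression_meshes("exported_heads")  # assumed export folder
```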


Fig. 1. ARSFM development process.

2.3. Setting

In this study, an AR system used to teach crucial developmental abilities to adolescents with ASD was designed using an augmented mirror through which users could see themselves with virtual 3-D facial expressions. To begin the test, a therapist recounted a short story and showed participants 4 or 5 scene illustrations. The participants answered each test question after observing the scene illustrations, selected an appropriate mask to wear that corresponded with the scenes, and looked at the 3-D AR facial expressions overlaid on their faces. The AR system used in this study was an AR-based self-facial modeling (ARSFM) learning system that shows the six basic 3-D facial expressions mentioned in the previous subsection based on the facial features of the participants. The AR system ran the application in the background. When a participant wore one of the masks, a life-size virtual 3-D head with a face was shown on his or her body to represent the emotional expression appropriate for the scenario.
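To make the augmented-mirror idea concrete, here is a deliberately simplified sketch that stands in for the Unity/Vuforia implementation described above: it mirrors the webcam image, detects the user's face with OpenCV's bundled Haar cascade, and blends a pre-rendered expression image over the face region so the user sees the expression on their own reflection. The OpenCV approach, the happy_face.png asset, and the blending weights are all assumptions made for illustration, not the study's code.

```python
# Simplified "augmented mirror" (illustration only; the study used Unity with
# Vuforia natural-feature tracking and life-size 3-D head models instead).
import cv2  # pip install opencv-python

# Pre-rendered expression image, e.g. a frame exported from a 3-D head model
# (hypothetical asset path).
expression = cv2.imread("happy_face.png")

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # the web camera facing the participant
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image, as in a real mirror
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        overlay = cv2.resize(expression, (w, h))
        roi = frame[y:y + h, x:x + w]
        # Blend the rendered expression over the live face region.
        frame[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.4, overlay, 0.6, 0)
    cv2.imshow("augmented mirror", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

In the actual system, the overlay is a tracked, life-size 3-D head rendered by the game engine rather than a flat image, but the underlying loop of capture, track, and overlay is the same.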


Fig. 2. The physical setup: a large LCD monitor, a camera facing the user, a mask worn by the participant, and the augmented 3-D head model of an emotional facial expression.


Fig. 3. A participant imitates the visual feedback provided by the 3-D facial model to make facial expressions that correspond to actions (the boy is simulating the facial action of opening his mouth).

Sessions were conducted in a quiet 3 m × 6 m room of the day-treatment center at the school. The room contained a table and chairs, an Intel Core i7 personal computer, a 52-inch LCD monitor, and an autofocus Logitech C920 web camera (Fig. 2). The ARSFM learning system was set up in the day-treatment room at the school. We met with the participants and an accompanying therapist for 1.2 h each week. The participants sat one meter in front of the LCD monitor, and a tripod-mounted video camera and six facial expression masks were placed on the desktop. When the test began, the therapist recounted a short story and showed illustrations, asking the participants to choose a mask depicting the emotional expression that corresponded to the situation. The ARSFM system superimposed the 3-D virtual animation of the facial expression on the participant's face to reflect the emotions that should be expressed in the situation. Thus, AR and self-facial modeling were used to enable participants with ASD to observe their own emotions and facial expressions (Fig. 3).

2.4. Phases, sessions, and experimental conditions

This study used a multiple baseline design across participants to demonstrate stimulus control (Backman & Harris, 1999). All sessions were conducted by a certified occupational therapist with more than 3 years of experience working with children with ASD.


She provided all participants with instructions on using the ARSFM learning system. The experiment consisted of three phases: (a) the baseline phase, which involved collecting baseline data on the participants; (b) the intervention phase, in which the ARSFM learning system was used for 1.5 months to train the participants in social skills and to obtain the performance data used in the assessment; and (c) the follow-up phase, which occurred 2 weeks after the intervention phase was completed and was used to assess the post-training performance of the participants.

2.4.1. Baseline phase

In the baseline phase, (a) the therapist asked the participants, after they had read the scenario and looked at the corresponding illustrations, to determine the emotion described in each question. The entire scenario was played back using Microsoft™ PowerPoint 2010 on a desktop personal computer. (b) The participants selected one of the six basic emotional facial expression pictures and one of the six emotion adjectives to answer each question. In this phase, we assessed the ability of the participants to determine which emotion was expressed by which facial expression picture in the story and what emotions they felt when they saw each facial expression picture. The answer for each situation mirrored the corresponding emotional expressions; correct and incorrect answers were identified and recorded, and the correct-response rate was determined.

2.4.2. Intervention phase

In the intervention phase, the ARSFM learning system enabled the participants to understand the contexts and to express their emotions. In the first session of the intervention phase, (a) the therapist instructed the participants on how to operate the system and perceive cues to ensure that they felt comfortable using the AR technology. The instruction time was 25–30 min. (b) The participants began the experimental sessions by reading the scenario script and looking at the corresponding illustrations on the monitor screen, and then (c) selecting one of the six basic emotional masks to wear on their face. A therapist assessed each question to test and evaluate the participants' learning performance. When an answer was incorrect, the therapist, to enrich the training process and increase motivation to engage in training, asked the participant to determine what each mask represented and why the appropriate mask should be selected in the scenario.

2.4.3. Follow-up phase

Follow-up began two weeks after the intervention phase to determine whether the participants had retained the skills they had acquired. During this phase, the participants did not use the ARSFM system; they determined the emotion described in each situation in the story after reading the scenario script and looking at the corresponding pictures. Performance was assessed using the baseline phase procedure.

2.5. Measurement materials

In this study, we used the six basic emotions of happiness, sadness, fear, disgust, surprise, and anger (Ekman, 2005) for the AR-based 3-D modeling of self-facial expressions. The emotions were combined with scenarios that the participants had experienced. The practical content depicted in the scenes was designed to train adolescents with ASD. Each emotion was associated with a short story. The 20 stories were consistent in length and difficulty. All the participants received the same stories at each session. Each short story lasted 3 min, and each session lasted 60 min.
All of the stories followed the same content-creation rules and were discussed with a special education expert and the participants' teachers. The content concerned social communication that commonly occurs in each participant's daily life. The scenario content was intended primarily to depict emotional concerns related to the six basic emotions. The selected stories were approved by five educational experts. The stories used for the intervention phase differed from those used for the baseline and follow-up phases. Two questions were asked per story (see Appendix 1 for a sample), and no prompting was given.

3. Results

Experimental data on Zhu, Lin, and Lai in each phase were analyzed. The baseline phase consisted of 3 sessions for Zhu, 5 sessions for Lin, and 7 sessions for Lai. The intervention phase consisted of 7 sessions for all participants. The follow-up phase consisted of 8 sessions for Zhu, 6 sessions for Lin, and 4 sessions for Lai. In the baseline phase, the participants could not easily determine the emotions that the six facial expressions represented; specifically, they frequently confused fear and disgust. Moreover, they could not understand the events or appropriately recognize and respond to the facial expressions. Although the participants could choose the correct adjectives to describe emotions, they could not identify the facial expressions that corresponded to the emotions. During the intervention phase, learning and practicing with the ARSFM enabled the participants to compare each 3-D facial model feature enthusiastically and actively, thereby improving their social skills and their ability to differentiate emotional facial expressions.

Fig. 4 shows the mean correct assessment rates of the three participants after using the ARSFM learning system. The curves indicate that the correct assessment rates of the participants improved after training and that the participants retained in the follow-up phase the emotional expression and social skills that they had learned in the intervention phase. During the baseline phase (three sessions), the mean correct assessment rate for Zhu was approximately 20%. During the intervention phase (seven sessions), that rate rose to 96.43%. During the follow-up phase (eight sessions), it was 81.25%. The mean correct assessment rate for Lin was approximately 27% during the baseline phase (five sessions). It increased to 92.14% during the intervention phase (seven sessions) and was 80.83% during the follow-up phase (six sessions).


Fig. 4. Correct assessment rates of the participants during three testing phases.

During the baseline phase (seven sessions), the mean correct assessment rate for Lai was approximately 38.75%. During the intervention phase (seven sessions), that rate increased to 92.85%. During the follow-up phase (four sessions), it was 80.75% (Fig. 4). The Kolmogorov–Smirnov test (Siegel & Castellan, 1988) was used to analyze the data from the three phases. The mean difference in performance level between the baseline and intervention phases was significant (p < .05) for all participants. In addition, the mean difference in performance level between the baseline and follow-up phases was significant (p < .05).
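For readers who want to reproduce this kind of comparison, the sketch below shows one plausible reading of the analysis: per-session correct rates are summarized by phase, and the baseline distribution is compared with the intervention and follow-up distributions using a two-sample Kolmogorov–Smirnov test. The session values are invented placeholders, not the study's data.

```python
# Illustrative per-phase summary and Kolmogorov-Smirnov comparison
# (hypothetical session-level correct rates for one participant).
from statistics import mean

from scipy.stats import ks_2samp  # pip install scipy

baseline = [20.0, 25.0, 15.0]                                  # 3 sessions
intervention = [90.0, 95.0, 100.0, 95.0, 100.0, 95.0, 100.0]   # 7 sessions
follow_up = [85.0, 80.0, 80.0, 75.0, 85.0, 80.0, 85.0, 80.0]   # 8 sessions

for label, phase in [("baseline", baseline),
                     ("intervention", intervention),
                     ("follow-up", follow_up)]:
    print(f"{label:12s} mean correct rate: {mean(phase):.2f}%")

# Compare the baseline distribution with the other two phases.
for label, phase in [("intervention", intervention), ("follow-up", follow_up)]:
    stat, p = ks_2samp(baseline, phase)
    print(f"baseline vs. {label}: D = {stat:.3f}, p = {p:.4f}")
```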


Fig. 5. A participant receives visual feedback from the 3-D facial model by observing the screen and comparing various facial expressions on each mask by grasping them in his hands.

4. Discussion

We found that using the ARSFM learning system facilitated the social skills training of adolescents with ASD and enabled them to recognize various social signals in everyday content. We developed 20 stories to assess the participants' perception of their own facial emotions and their perception of the intentions of others. In addition, we assessed the effectiveness of the ARSFM system in conveying social skills content by using a multiple baseline design across three subjects.

The ARSFM system enabled the participants to use AR technology to practice recognizing facial emotions. AR allows users to see the real world supplemented with virtual elements in real time; it seamlessly merges real-world environments with computer-generated objects (Azuma, 1997) and enables users to experience and interact naturally with those objects (Chen, Duh, & Ke, 2004). The ARSFM system enables people with ASD to observe and become aware of 3-D facial emotion features from any perspective and to actively explore and manipulate content (Dünser & Hornecker, 2007). In addition, when people with ASD interact with educational content through AR, they have control over the manner in which the information is delivered, which enables them to manipulate and observe facial images from various angles and compare the differences (Fig. 5).

Our findings provide useful information for clinicians and educators, who should consider using more lifelike stimulus materials and novel applications. We believe that this will enable them to create treatment content that allows patients with ASD to more easily reflect on their own feelings and status and to become more aware of different situations; otherwise, treatment outcomes may be difficult to transfer to everyday life. In addition, AR technology provides low-cost tools that quickly reflect participants' facial status and attract their attention. Furthermore, a corresponding story helps them respond appropriately in real situations and, like actors, in fictional scenarios intended to mimic real life. AR technology does not require expensive equipment: a web camera connected to a computer, which is easy to set up and use in school, can support a stable and efficacious ARSFM system. However, constructing the 3-D model of each participant's face takes time. In addition, people with ASD find it difficult to sustain selective attention during therapy sessions, but an ARSFM system can help them remain focused by increasing their sustained and selective attention and eliciting positive emotions during therapy. AR technology can transform the manner in which therapists teach and participants learn, and can enliven the learning environment of adolescents with ASD. Furthermore, after the treatment, parents of the participants with ASD consistently reported that their children had improved their social skills and tried more frequently to express their own feelings. Through repeated ARSFM training, adolescents with ASD can more accurately recognize and more appropriately respond to the emotional facial expressions they see in everyday social situations. This augmented experience can increase the ability of people with ASD to understand others' emotions and improve the emotional expression and social skills of adolescents with ASD. Future studies should include experiments involving more participants with ASD and should more thoroughly investigate AR technology.

Appendix 1

Sample story:

Narrative sentence: My dear grandfather passed away last night. We lived together for a long time when I was a kid. He always took care of me, and we had many delightful memories. I loved him very much. Today he is gone. Everyone sat huddled on the side of the room, eyes brimming with tears, and no one spoke. What was I feeling?


Photograph retrieved from the Japanese movie Departures on the Internet (http://zh.wikipedia.org/wiki/%E9%80%81%E8%A1%8C%E8%80%85%EF%BC%9A%E7%A6%AE%E5%84%80%E5%B8%AB%E7%9A%84%E6%A8%82%E7%AB%A0).

References

American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: American Psychiatric Association.
Annaz, D., Karmiloff-Smith, A., Johnson, M. H., & Thomas, M. S. (2009). A cross-syndrome study of the development of holistic face recognition in children with autism, Down syndrome, and Williams syndrome. Journal of Experimental Child Psychology, 102(4), 456–486. http://dx.doi.org/10.1016/j.jecp.2008.11.005
Asai, K., Kobayashi, H., & Kondo, T. (2005). Augmented instructions—A fusion of augmented reality and printed learning materials. Proceedings of the fifth IEEE international conference on advanced learning technologies (ICALT'05) (pp. 213–215). http://dx.doi.org/10.1109/ICALT.2005.71
Ashwin, C., Wheelwright, S., & Baron-Cohen, S. (2005). Laterality biases to chimeric faces in Asperger syndrome: What is 'right' about face-processing? Journal of Autism and Developmental Disorders, 35(2), 183–196.
Axe, J. B., & Evans, C. J. (2012). Using video modeling to teach children with PDD-NOS to respond to facial expressions. Research in Autism Spectrum Disorders, 6(3), 1176–1185. http://dx.doi.org/10.1016/j.rasd.2012.03.007
Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4), 355–385.
Backman, C. L., & Harris, S. R. (1999). Case studies, single-subject research, and N of 1 randomized trials: Comparisons and contrasts. American Journal of Physical Medicine & Rehabilitation, 78(2), 170–176.
Baron-Cohen, S. (2002). The extreme male brain theory of autism. Trends in Cognitive Sciences, 6(6), 248–254.
Baron-Cohen, S., & Belmonte, M. K. (2005). Autism: A window onto the development of the social and the analytic brain. Annual Review of Neuroscience, 28, 109–126.
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a theory of mind? Cognition, 21(1), 37–46.
Begeer, S., Rieffe, C., Terwogt, M. M., & Stockmann, L. (2006). Attention to facial emotion expressions in children with autism. Autism, 10(1), 37–51. http://dx.doi.org/10.1177/1362361306057862
Ben Shalom, D., Mostofsky, S. H., Hazlett, R. L., Goldberg, M. C., Landa, R. J., Faran, Y., et al. (2006). Normal physiological emotions but differences in expression of conscious feelings in children with high-functioning autism. Journal of Autism and Developmental Disorders, 36(3), 395–400. http://dx.doi.org/10.1007/s10803-006-0077-2
Blum-Dimaya, A., Reeve, S. A., Reeve, K. F., & Hoch, H. (2010). Teaching children with autism to play a video game using activity schedules and game-embedded simultaneous video modeling. Education and Treatment of Children, 33(3), 351–370. http://dx.doi.org/10.1353/etc.0.0103
Castelli, F. (2005). Understanding emotions from standardized facial expressions in autism and normal development. Autism, 9(4), 428–449. http://dx.doi.org/10.1177/1362361305056082
Chen, C.-H., Duh, H. B. L., & Ke, H.-T. (2004). Visualizing a method of configuration design using augmented reality. Proceedings of the 7th International Conference on Work with Computing Systems (WWCS 2004) (pp. 691–693).
Chien, C.-H., Chen, C.-H., & Jeng, T.-S. (2010). An interactive augmented reality system for learning anatomy structure. Proceedings of the international multiconference of engineers and computer scientists (IMECS 2010).
Dünser, A., & Hornecker, E. (2007). An observational study of children interacting with an augmented story book. In Proceedings of Edutainment 2007. Hong Kong: CUHK.
Dawson, G., Webb, S. J., & McPartland, J. (2005). Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies. Developmental Neuropsychology, 27(3), 403–424. http://dx.doi.org/10.1207/s15326942dn2703_6
Di-O-Matic, Inc. (2014). Facial Studio (Windows Edition). Retrieved from: http://www.di-o-matic.com/products/Software/FacialStudio/#page=overview
Ekman, P. (2005). Basic emotions. In T. Dalgleish & T. Power (Eds.), The handbook of cognition and emotion (pp. 45–60). West Essex, UK: John Wiley & Sons Ltd.
Elder, L. M., Caterino, L. C., Chao, J., Shacknai, D., & De Simone, G. (2006). The efficacy of social skills treatment for children with Asperger syndrome. Education and Treatment of Children, 29(4), 635–663.
Hamm, J., Kohler, C. G., Gur, R. C., & Verma, R. (2011). Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders. Journal of Neuroscience Methods, 200(2), 237–256. http://dx.doi.org/10.1016/j.jneumeth.2011.06.023
Kientz, J. A., Goodwin, M., Hayes, G. R., & Abowd, G. D. (2013). Interactive technologies for autism. In Synthesis lectures on assistive, rehabilitative, and health-preserving technologies. Morgan & Claypool.
Krasny, L., Williams, B. J., Provencal, S., & Ozonoff, S. (2003). Social skills interventions for the autism spectrum: Essential ingredients and a model curriculum. Child and Adolescent Psychiatric Clinics of North America, 12(1), 107–122.
Ryan, C., & Charragain, C. N. (2010). Teaching emotion recognition skills to children with autism. Journal of Autism and Developmental Disorders, 40(12), 1505–1511. http://dx.doi.org/10.1007/s10803-010-1009-8
Shelton, B. E., & Hedley, N. R. (2002). Using augmented reality for teaching Earth–Sun relationships to undergraduate geography students. In The first IEEE international augmented reality toolkit workshop. Darmstadt, Germany.
Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York: McGraw-Hill.
Sucksmith, E., Allison, C., Baron-Cohen, S., Chakrabarti, B., & Hoekstra, R. A. (2013). Empathy and emotion recognition in people with autism, first-degree relatives, and controls. Neuropsychologia, 51(1), 98–105. http://dx.doi.org/10.1016/j.neuropsychologia.2012.11.013
Tseng, R.-Y., & Do, E. Y.-L. (2010). Facial expression wonderland (FEW): A novel design prototype of information and computer technology (ICT) for children with autism spectrum disorder (ASD). Proceedings of the 1st ACM International Health Informatics Symposium (pp. 464–468). http://dx.doi.org/10.1145/1882992.1883064
Wang, A. T., Dapretto, M., Hariri, A. R., Sigman, M., & Bookheimer, S. Y. (2004). Neural correlates of facial affect processing in children and adolescents with autism spectrum disorder. Journal of the American Academy of Child and Adolescent Psychiatry, 43(4), 481–490. http://dx.doi.org/10.1097/00004583-20040400000015
Williams, B. T., Gray, K. M., & Tonge, B. J. (2012). Teaching emotion recognition skills to young children with autism: A randomised controlled trial of an emotion training programme. Journal of Child Psychology and Psychiatry, 53(12), 1268–1276. http://dx.doi.org/10.1111/j.1469-7610.2012.02593.x
Young, R. L., & Posselt, M. (2012). Using the Transporters DVD as a learning tool for children with autism spectrum disorders (ASD). Journal of Autism and Developmental Disorders, 42(6), 984–991. http://dx.doi.org/10.1007/s10803-011-1328-4
