AVAST – Social Tutoring

Autonomous Virtual Assistant for Social Tutoring

Humans are highly social creatures, constantly engaging in complex and dynamic interactions with one another. To do this successfully, the ability to read and respond to the social cues of other individuals is essential, as is the ability to understand and manage one’s own emotions, thoughts and behaviours. Most people develop social competency innately through their interactions with others; however, some need additional support and explicit instruction to acquire these skills. At Flinders University we are developing software to assist individuals with social difficulties, such as those with autism spectrum disorders or hearing impairments, including training for their families and for the children and grandchildren of people affected by conditions such as dementia, stroke, Alzheimer’s disease and Parkinson’s disease.

The Social Tutor software utilises our Head X technology to incorporate virtual humans who model appropriate social cues and guide learners through various activities aimed at improving their social competency. Head X is capable of realistic speech and facial expressions, and can be presented in a range of ‘personas’ with unique appearances and voices. Combined with automated assessment of student knowledge, this is intended to give learners an opportunity to develop their social skills with a variety of ‘people’ in an engaging, dynamic and non-judgemental environment before applying those skills in real-world situations. The Social Tutor is not intended to replace or reduce social interaction with peers; rather, it aims to provide a stepping stone between theoretical knowledge and real-world social interaction. The software can be used independently by the learner on a daily basis, reinforcing concepts learned at school or in therapy sessions and supporting improved educational outcomes.

Current Activities, Products and Directions

The initial AVAST system was developed with two lessons, one on bullying and one on conversational skills. These were evaluated with a group of medium- to high-functioning children with autism spectrum disorders who were recruited through AutismSA (Milne et al., 2009).

Head X Whiteboard software is also being developed, aimed at allowing non-programmers to write interactive lessons utilising Head X; it has also been trialled for an unrelated Teaching Head application. This software allows activities, lesson sequences, and Head X behaviour and customisation to be described in basic XML. The Java-based software interprets the XML lesson descriptions and, using basic built-in automated assessment, presents the learner with a set of tasks to choose from based on their prior interaction with the system and the curriculum they are working with. Lesson creators can control task content and prerequisites, set minimum achievement requirements, and mark tasks as ‘core’ or ‘extra’. Learners complete tasks at their own pace and are presented with new, more challenging tasks as their knowledge increases. This software can be used to develop lesson sequences for any topic, and is not restricted to social skills education.
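To illustrate the approach, a lesson description in this style might look something like the sketch below. The element and attribute names here are invented for illustration only and are not the actual Head X Whiteboard schema:

```xml
<!-- Hypothetical sketch of an XML lesson description: all element and
     attribute names are illustrative, not the real Head X Whiteboard format. -->
<lesson topic="conversation-basics">
  <!-- Select which Head X persona delivers the lesson. -->
  <head persona="sam" voice="child-friendly"/>

  <!-- A 'core' task with a minimum achievement requirement. -->
  <task id="greetings" type="core" minScore="3">
    <prompt>How do you say hello to a new classmate?</prompt>
    <choice correct="true">Smile, look at them and say hello.</choice>
    <choice>Keep looking at the floor and say nothing.</choice>
  </task>

  <!-- An 'extra' task gated behind completion of the core task. -->
  <task id="turn-taking" type="extra" requires="greetings">
    <prompt>What should you do while your friend is talking?</prompt>
    <choice correct="true">Wait until they finish, then reply.</choice>
    <choice>Talk over them about your favourite topic.</choice>
  </task>
</lesson>
```

A prerequisite attribute such as the hypothetical `requires` is one way the Java interpreter could gate more challenging tasks behind completed core ones, matching the progression described above.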

Autism-focussed lesson sequences and tasks for the Social Tutor are currently being developed using the Head X Whiteboard software described above. This initial set of lessons is targeted at children in mainstream primary schools who have an existing diagnosis on the autism spectrum. Lessons extracted from validated social skills curricula are being implemented as short, modular lesson units. Once lesson implementation is complete, the Social Tutor will be evaluated for its effectiveness as a teaching tool in a three-week trial, with long-term effects tested after two and four months. If this trial proves as successful as the pilot, additional sets of lessons will be developed for a wider audience, including older children with autism spectrum disorders (ASD) and individuals with Attention Deficit Hyperactivity Disorder (ADHD).

Hearing-impaired children are also expected to benefit from the Social Tutor: some of these lessons will be directly applicable to children with hearing impairments, and DeafCanDo and Novita are keen to design further lessons specifically targeted at children who are to receive cochlear implants. The need for different lessons can be exemplified by considering eye gaze: a child with ASD will tend to look down and avoid your eyes, a child with ADHD may look around everywhere but where you want them to, whilst a hearing-impaired child will likely have the habit of staring at your mouth to pick up lip-reading cues and must be taught to attend to other aspects of expression and gestural communication.

Related Research

Related research focuses on getting computers, robots and intelligent agents such as our thinking heads to operate and interact in a way that reflects how people unconsciously recognise, display and adapt to the emotional levels and gestural information that characterise every social interaction or conversation. In addition, we are using Brain Computer Interface technology based on EEG to explore the emotional impact of the teaching head and the user's learning and attention, and to open up the possibility of using this technology to influence the lesson flow and teaching paradigm used.

Recent Publications

Book chapters

Anderson, T.A., Chen, Z., Wen, Y., Milne, M.K., Atyabi, A., Treharne, K., Matsumoto, T., Jia, X., Luerssen, M.H., Lewis, T.W., et al., 2012. Thinking Head MulSeMedia: A Storytelling Environment for Embodied Language Learning. In Multiple Sensorial Media Advances and Applications: New Developments in MulSeMedia. Hershey, Pennsylvania: IGI Global, pp. 182-203.

Milne, M.K., Luerssen, M.H., Lewis, T.W., Leibbrandt, R.E., & Powers, D.M., 2011. Designing and Evaluating Interactive Agents as Social Skills Tutors for Children with Autism Spectrum Disorder. In Conversational Agents and Natural Language Interaction: Techniques and Effective Practices. Hershey, USA: IGI Global, pp. 23-48.

Milne, M.K., Luerssen, M.H., Leibbrandt, R.E., Lewis, T.W., & Powers, D.M., 2011. Embodied Conversational Agents for Education in Autism. In A Comprehensive Book on Autism Spectrum Disorders. Rijeka, Croatia: InTech, pp. 387-412.

Refereed conference papers

Milne, M.K., Luerssen, M.H., Lewis, T.W., Leibbrandt, R.E., & Powers, D.M., 2010. Development of a Virtual Agent Based Social Tutor for Children with Autism Spectrum Disorders. Proceedings of the International Joint Conference on Neural Networks 2010, 1555-1563.

Powers, D.M., Luerssen, M.H., Lewis, T.W., Leibbrandt, R.E., Milne, M.K., Pashalis, J., & Treharne, K., 2010. MANA for the Ageing. Proceedings of the 2010 Workshop on Companionable Dialogue Systems, ACL 2010, 7-12.

Milne, M.K., Powers, D.M., & Leibbrandt, R.E., 2009. Development of a software-based social tutor for children with autism spectrum disorders. Conference proceedings : Australian Conference on Computer Human Interaction (OZCHI 2009), 265-268.


Current Products

  • Head X – Free customisable virtual head developed under the ARC Thinking Systems SRI, "From Talking Heads to Thinking Heads". Free download for Windows systems (also tested on Mac virtual machines)

  • Head X Whiteboard – Prototype XML-based tool to allow lesson development by non-programmers

  • AVAST Social Tutor – Pilot version, with two lessons, and current version, with a growing number of curriculum-based lessons

Recent Successes

  • Burnham, D. K., Dale, R., Stevens, C. J., Powers, D. M., Davis, C. W., Buchholz, J. M., Kuratate, T., Kim, J., Paine, G. C., Kitamura, C. M., Wagner, M., Moeller, S., Black, A. W., Schultz, T. and Bothe, H. H. (2006-2010). From Talking Heads to Thinking Heads: A Research Platform for Human Communication. ARC Thinking Systems: $3,400,000.
  • Burnham, D., Cox, F., Butcher, A., Fletcher, J., Wagner, M., Epps, J., Ingram, J., Arciuli, J., Togneri, R., Rose, P., Kemp, N., Cutler, A., Dale, R., Kuratate, T., Powers, D., Cassidy, S., Grayden, D., Loakes, D., Bennamoun, M., Lewis, T., Goecke, R., Best, C., Bird, S., Ambikairajah, E., Hajek, J., Ishihara, S., Kinoshita, Y., Tran, D., Chetty, G. and Onslow, M. (2010). The Big Australian Speech Corpus: An audio-visual speech corpus of Australian English. ARC LIEF: $650,000.



  • Dr Richard Leibbrandt
    – Research Fellow
  • Dr Trent Lewis
    – Research Fellow
  • Marissa Milne
    – Postgraduate Researcher
  • Dr Martin Luerssen
    – Research Fellow
  • Prof. David Powers (Contact Person)
    – Principal Investigator
  • Dr Pammi Raghavendra
    – Associate