I work as part of a multidisciplinary team in the Brain Signal Lab (BSL) investigating the underlying neurological mechanisms of various neurological diseases. This work has mainly used electroencephalography (EEG) to capture brain activity, methods from signal processing to reveal and visualise brain processes, and machine learning to automatically classify brain patterns. My contribution has mainly been to the signal processing, visualisation and machine learning, but with training as a cognitive scientist I also contribute to the biological and cognitive aspects of the work.
My other research interests lie in the domain of audio-visual speech processing, predominantly by machines, but also in investigating how humans perceive and integrate speech from different modalities in order to create more elegant computational algorithms. My PhD thesis, completed in 2005, involved research in dynamic weighting, classifier combination, confidence estimation and combination, and the use of linguistic distinctive features as units for the fusion of audio-visual speech. This work has now broadened to include other areas of multimodal classification of speech, speaker, emotion and cognitive state, primarily as part of the Centre for Knowledge and Interaction Technologies (CKIT) and the AI Lab, but also in the BSL, where it incorporates physiological measures.
My main post-doctoral research was on the Thinking Head project, a multi-institutional project in which I was responsible for developing the audio-visual sensing capacities of the Head. I am now also involved in a Flinders spin-out company, Clevertar, which commercialises "virtual pal" technology, including the AnnaCares suite of products. Clevertar has been the recipient of a Commercialisation Australia Grant, named in the Smart 100, and winner of three awards at the Tech23.2014 awards.