Artificial Intelligence Laboratory
Within the Centre for Knowledge and Interaction Technologies (KIT), the Flinders University Artificial Intelligence and Language Technology Laboratory (AILab) focuses on those aspects of Artificial General Intelligence that have to do with language, learning and cognition. Our capabilities and the technologies we develop and deploy are broadly related to the School of Computer Science, Engineering and Mathematics' research concentrations:
- Centre for Knowledge and Interaction Technologies
- Flinders Medical Device Research Institute
- Centre for Maritime Electronics, Control & Imaging
- Flinders Mathematical Sciences Laboratory
A key focus of the AI/LT Lab is learning: learning the language and culture of our society and the world (including grounded semantics and ontology). We take a highly interdisciplinary approach, seeking both to model and develop neurologically plausible psycholinguistic theories and to engineer commercially viable interface technologies. Learning human language goes beyond understanding speech, parsing sentences or disambiguating the multiple senses of words; it includes understanding body language, gesture, facial expression and a wide variety of emotions, as well as research in AudioVisual Speech Recognition. Learning about the world means understanding what is out there and how it relates to us. This represents the original philosophical concept of Ontology, the term Powers adopted in the late 1970s and throughout the 1980s as a generalization of Semantics that emphasized the need to connect meanings with the real world rather than just chase words around a dictionary; the concept was popularized by Harnad as Symbol Grounding in the 1990s. Semantics and Ontology without Grounding are only a shadow of reality and are insufficient for understanding, according to Powers, Harnad and a growing number of Cognitive Scientists.
Robots, physical and simulated, play a major role in our attempts to learn a grounded syntax and semantics in which the computer/system/robot really understands what is being talked about. Both grammar and meaning are learned by children in an unsupervised way, by learning patterns in context, and we emulate this with our AI systems. The use of multimodal sensors, including touch, vision and sound, enables a number of interesting enhancements to the way information is communicated to a computer. For example, combining camera and microphone input allows lip-reading to be used to enhance speech recognition under noisy conditions. In addition, this opens the door to picking up additional expressional and emotional content, to tracking where a speaker is looking when talking, and conversely to synthesizing appropriate acoustic and facial expressions and looking at the objects being talked about. This is a major focus of our Thinking and Teaching Head projects, in which area we hold two current ARC grants and have a commercialized spin-off company/product: CleverMe/Clevertar. This research involves interdisciplinary collaboration both extramurally, including with institutions in the US, Germany and China, and within Flinders, including the Flinders Educational Futures Research Institute.
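The lab's actual audiovisual recognition pipelines are not detailed here, but the idea of letting lip-reading compensate for noisy audio can be sketched as a simple late-fusion rule: weight the audio classifier's output by an estimate of the signal-to-noise ratio, so the visual stream dominates when the audio degrades. The function, the linear SNR-to-weight mapping and the example probabilities below are all illustrative assumptions, not the lab's method.

```python
import numpy as np

def late_fusion(audio_probs, visual_probs, audio_snr_db,
                snr_floor=-5.0, snr_ceil=20.0):
    """Combine audio and visual class posteriors for one utterance.

    The audio stream's weight shrinks as the estimated SNR drops, so
    lip-reading contributes more under noisy conditions. The linear
    SNR-to-weight mapping here is a hypothetical choice for illustration.
    """
    # Map SNR (dB) to an audio weight in [0, 1].
    w = np.clip((audio_snr_db - snr_floor) / (snr_ceil - snr_floor), 0.0, 1.0)
    fused = w * np.asarray(audio_probs) + (1.0 - w) * np.asarray(visual_probs)
    return fused / fused.sum()  # renormalise to a probability distribution

# Clean audio: the fused decision follows the confident audio stream.
clean = late_fusion([0.7, 0.2, 0.1], [0.4, 0.4, 0.2], audio_snr_db=20.0)
# Very noisy audio: the fused decision follows the visual stream.
noisy = late_fusion([0.34, 0.33, 0.33], [0.1, 0.8, 0.1], audio_snr_db=-5.0)
```

Real systems typically learn the stream weights or fuse features rather than posteriors, but the sketch shows why adding the visual channel helps precisely when the acoustic channel is least reliable.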
Biometric signals are another source of information: they can be used as inputs in their own right, and they can also be correlated with, and thus used to learn and validate, theories and models of language, learning and emotion. The signal processing and learning expertise developed for speech and language is also being applied to developing new techniques in biomedical signal processing, in particular the processing of EEG in real-world conditions. Our Brain Computer Interface work has two facets:
- allowing us to understand more of what is going on in a person's brain (including their emotional state and their level of skill acquisition or situation awareness), and
- allowing a person to interact with a computer or control devices like a wheelchair or other vehicle, exploiting both conscious intentions and unconscious reactions.
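A common first step in EEG-based interfaces of this kind is extracting the power in standard frequency bands (for example the mu rhythm, which changes with imagined movement). As a minimal illustration only, and not a description of the lab's BCI, band power can be estimated from a periodogram; the function name and the synthetic 10 Hz signal below are assumptions for the sketch.

```python
import numpy as np

def band_power(eeg, fs, band):
    """Average power of a single EEG channel within a frequency band (Hz),
    estimated from the squared magnitude spectrum (simple periodogram)."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[mask].mean()

# Synthetic 2-second recording at 256 Hz: a 10 Hz mu rhythm plus noise.
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

mu = band_power(eeg, fs, (8, 12))     # mu band, prominent here
beta = band_power(eeg, fs, (18, 26))  # beta band, mostly noise here
```

A controller might map such band-power features to wheelchair commands via a trained classifier; robust real-world use additionally requires artifact rejection and per-user calibration.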
Our EEG research is an interdisciplinary collaboration with the Centre for Neuroscience, the School of Medicine's Epilepsy Laboratory and Human EEG Unit, and our joint Brain Systems Lab. We currently hold two joint ARC Discovery grants for this work. We also cooperate with researchers at the University of Southern Queensland in a number of related initiatives.
Artificial Intelligence and Language Technology at Flinders has expertise and capabilities in the following areas:
- Assistive & Educational Technologies
- Autonomous Vehicles, Intelligent Robots & Adhoc/Resilient Mobile Communications
- Cognitive & Behavioural Science & Cognitive Neuroscience
- Computational Psycholinguistics & Cognitive Linguistics
- Emotion, Expression & Interaction
- Human Factors and Evaluation
- Information Retrieval & Visualisation
- Language & Learning Technology
- Medical Signals & Imaging & Brain Computer Interface
- AudioVisual Speech and Language Processing
- Talking Thinking Teaching Heads & Embodied Conversational Agents
- Vision/Image Processing
For More Information...
For more information on these projects, or if you're interested in joining the group, or enquiring about courses or scholarships, please contact Professor David Powers.