Project Details
Description
Background and Motivations: Children with Autism spectrum disorder (ASD) have serious social communication and cognitive difficulties and often exhibit atypical, awkward expressions. It is well known that early screening of autistic children (i.e. aged 1-3 years) is crucial to therapeutic success, but it is a nontrivial task, for two main reasons: (1) most parents have little knowledge of ASD, whose symptoms are not very noticeable at an early stage, and (2) current early ASD screening relies mainly on manual observation, which depends heavily on the experience of a clinician. As a result, autistic children may not be treated in time and may miss the optimal treatment window. Recent cognitive and neurological research has shown that autistic children often produce emotional sentences that are inconsistent across different expressions and, as a clinical phenotype, show greater difficulty in producing well-coordinated facial-speech synchrony [3]. Automatically detecting such cross-modal emotional inconsistency therefore offers a promising new avenue for early ASD screening, which, to the best of our knowledge, has yet to be explored in the literature. This project therefore aims to detect inconsistency between facial expressions (i.e. facial action, eye gaze, and lip motion) and speech production in autistic children, thereby enabling their early screening.
Problem Definition and Challenges: The problem addressed in this project is: given a set of pairs of emotional facial expressions and speech productions from both autistic and neurotypical (NT) children, develop a Cross-modal Emotional Inconsistency Detection (Cm-EID) approach that detects such emotional inconsistency towards the early screening of autistic children. This is meaningful but challenging work because children express their emotions through complex and idiosyncratic combinations of the visual and acoustic modalities. Specifically, the challenges lie in: (1) substantial data heterogeneity in terms of data distribution, types, sizes, and dimensions; (2) complex spatial-temporal dependency and ambiguous correlation across heterogeneous data sources; (3) large within-modal variations of the same expressions; and (4) the very limited amount of abnormal data.
Novelty of This Project: Automatic detection of cross-modal emotional inconsistency for the early screening of autistic children has yet to be studied in the literature, although such inconsistency, when found by manual observation, has proved feasible for ASD screening. This project will propose a novel deep Cm-EID approach, in which the major issues to be addressed include: (i) meticulously collecting multisensory emotional data from children with ASD; (ii) designing a model that makes emotional features from different modalities comparable; (iii) designing and developing the Cm-EID approach; (iv) addressing the class imbalance problem in Cm-EID; and (v) evaluating and validating the proposed approach in a practical environment. Through this project, a cross-modal similarity metric, which can serve as an ASD index, will be developed to measure such inconsistency. The normal range of this index will then be specified for NT children, whereby the early screening of autistic children can be achieved.
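The project description does not specify the form of the cross-modal similarity metric. As a minimal illustrative sketch only, one plausible instantiation compares time-aligned facial and speech emotion embeddings by cosine similarity and flags recordings whose score falls well below an NT reference norm; all function names, the embedding representation, and the thresholding rule below are assumptions, not the project's actual design:

```python
import numpy as np

def cross_modal_consistency(face_emb: np.ndarray, speech_emb: np.ndarray) -> float:
    """Mean cosine similarity between time-aligned facial and speech
    emotion embeddings of shape [T, D]; higher means more consistent.

    Illustrative only -- the project's real metric, features, and
    alignment procedure are not specified in the description above.
    """
    # Normalise each per-segment embedding to unit length.
    f = face_emb / np.linalg.norm(face_emb, axis=1, keepdims=True)
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    # Per-segment cosine similarity, averaged over the recording.
    return float(np.mean(np.sum(f * s, axis=1)))

def flag_inconsistency(score: float, nt_mean: float, nt_std: float,
                       k: float = 2.0) -> bool:
    """Flag a recording whose consistency score lies more than k
    standard deviations below the neurotypical (NT) reference norm
    (k = 2.0 is an arbitrary illustrative choice)."""
    return score < nt_mean - k * nt_std
```

Under this sketch, the "normal values of the index for NT children" would correspond to the (nt_mean, nt_std) statistics estimated from the NT cohort.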
Long-term Significance: The approach proposed in this project works in a non-contact way and is therefore very useful in situations where the sensors and special equipment normally used in ASD diagnosis are unavailable. That is, this innovative non-invasive method can make remote screening of autistic children a reality and, in the long run, raise parents' awareness of ASD. Furthermore, the results of this project will provide a promising way to support less-experienced practitioners, e.g. junior clinicians, in accomplishing early screening of autistic children. All of this will benefit the ASD community. In addition, the findings and research results of this project will also promote the development of machine learning and image/video processing in a variety of medical applications, e.g. Asperger's syndrome and Parkinson's disease.
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/10/21 → 30/09/24 |