Comutti - A Research Project Dedicated to Finding Smart Ways of Using Technology for a Better Tomorrow for Everyone, Everywhere. (COMUTTI)
Condition
Autism Spectrum Disorder
Status
Recruiting
Phase
Not Applicable
Locations
Italy
Study Type
Interventional
Intervention
Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule
Audio signal dataset creation and validation; machine learning analysis; empirical evaluations
Sponsored by
IRCCS Eugenio Medea
About this trial
This is an interventional basic science trial for Autism Spectrum Disorder, focused on non-verbal and minimally-verbal autistic children
Eligibility Criteria
Inclusion Criteria:
- having a clinical diagnosis of autism spectrum disorder according to DSM-5 criteria
- using fewer than 10 words
Exclusion Criteria:
- using any stimulant or non-stimulant medication affecting the central nervous system
- having an identified genetic disorder
- having vision or hearing problems
- suffering from chronic or acute medical illness
Sites / Locations
- Scientific Institute, IRCCS Eugenio Medea (Recruiting)
Arms of the Study
Arm 1
Arm Type
Experimental
Arm Label
Experimental: audio signal dataset creation and machine learning analysis
Arm Description
Experimental: audio signal dataset creation and processing; machine learning analysis, empirical evaluations
Outcomes
Primary Outcome Measures
Frequency of audio signal samples and their associated labels
Frequency (measured in number per hour) of audio signal samples (sounds and verbalizations) produced by each participant, recorded during hospital stays in various contexts (e.g., during educational interventions and / or in moments of unstructured play) and labeled as self-talk, delight, dysregulation, frustration, request, or social exchange.
A small, wireless recorder (Sony TX800 Digital Voice Recorder TX Series) will be attached to the participant's clothing using strong magnets. The adults (caregivers and / or operators) will then associate the sounds produced by the child with an affective state and / or the probable meaning of the vocalization (the labels) through a web app.
Participant-specific harmonic features derived from the audio signal samples
Temporal and spectral audio features (i.e., pitch-related features, formant features, energy-related features, timing features, articulation features) extracted from the samples and then used for supervised and unsupervised machine learning analysis.
The collected audio signal samples will be segmented around the temporal locations of the labels and associated with the temporally adjacent labels (affective states or probable meanings of vocalizations). Audio harmonic features (temporal/phonetic characteristics) will then be identified for each participant using supervised/unsupervised machine learning analysis of the audio signal samples. Through this process, participant-specific patterns corresponding to specific communication purposes or emotional states will be identified.
Accuracy of machine learning prediction
The classification accuracy of the machine learning analysis, i.e., the number of correct predictions divided by the total number of predictions, evaluated on a retained (held-out) test set of recorded audio signal samples.
This outcome measure will estimate the usability/utility of the developed tool for vocalization interpretation, based on a machine learning analysis of the recorded audio signal samples.
Secondary Outcome Measures
Full Information
NCT ID
NCT05149144
First Posted
November 25, 2021
Last Updated
March 17, 2023
Sponsor
IRCCS Eugenio Medea
Collaborators
Politecnico di Milano, Massachusetts Institute of Technology
1. Study Identification
Unique Protocol Identification Number
NCT05149144
Brief Title
Comutti - A Research Project Dedicated to Finding Smart Ways of Using Technology for a Better Tomorrow for Everyone, Everywhere.
Acronym
COMUTTI
Official Title
Comutti - A Research Project Dedicated to Finding Smart Ways of Using Technology for a Better Tomorrow for Everyone, Everywhere.
Study Type
Interventional
2. Study Status
Record Verification Date
March 2023
Overall Recruitment Status
Recruiting
Study Start Date
July 27, 2021 (Actual)
Primary Completion Date
December 31, 2023 (Anticipated)
Study Completion Date
December 31, 2023 (Anticipated)
3. Sponsor/Collaborators
Responsible Party, by Official Title
Sponsor
Name of the Sponsor
IRCCS Eugenio Medea
Collaborators
Politecnico di Milano, Massachusetts Institute of Technology
4. Oversight
Studies a U.S. FDA-regulated Drug Product
No
Studies a U.S. FDA-regulated Device Product
No
Data Monitoring Committee
No
5. Study Description
Brief Summary
According to the World Health Organization, one in 160 children worldwide has an ASD. Around 25% to 30% of these children are unable to use verbal language to communicate (non-verbal ASD) or are minimally verbal, i.e., use fewer than 10 words (mv-ASD). The ability to communicate is a crucial life skill, and difficulties with communication can have a range of negative consequences, such as poorer quality of life and behavioural difficulties. Communication interventions generally aim to improve children's ability to communicate either through speech or by supplementing speech with other means (e.g., sign language, pictures, or AAC - Augmentative and Alternative Communication - tools). Individuals with non-verbal ASD or mv-ASD often communicate with people through vocalizations that in some cases have a self-consistent phonetic association with concepts (e.g., "ba" to mean "bathroom") or are onomatopoeic expressions (e.g., "woof" to refer to a dog). In most cases, however, vocalizations sound arbitrary; even though they vary in tone, pitch, and duration, it is extremely difficult to interpret the intended message or the emotional or physical state they convey. This creates a barrier between persons with ASD and the rest of the world that generates stress and frustration. Only caregivers with a long-term acquaintance with the subjects are able to decode such wordless sounds and assign unique meanings to them.
This project aims to define algorithms, methods, and technologies that identify the communicative intent of vocal expressions generated by children with mv-ASD, and to create tools that help people who are not familiar with these individuals understand them during spontaneous conversations.
6. Conditions and Keywords
Primary Disease or Condition Being Studied in the Trial, or the Focus of the Study
Autism Spectrum Disorder
Keywords
non-verbal autistic children, minimally-verbal autistic children
7. Study Design
Primary Purpose
Basic Science
Study Phase
Not Applicable
Interventional Study Model
Single Group Assignment
Masking
None (Open Label)
Allocation
N/A
Enrollment
25 (Anticipated)
8. Arms, Groups, and Interventions
Arm Title
Experimental: audio signal dataset creation and machine learning analysis
Arm Type
Experimental
Arm Description
Experimental: audio signal dataset creation and processing; machine learning analysis, empirical evaluations
Intervention Type
Diagnostic Test
Intervention Name(s)
Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule
Intervention Description
Clinical evaluation of participants by means of the Autism Diagnostic Observation Schedule
Intervention Type
Behavioral
Intervention Name(s)
Audio signal dataset creation and validation; machine learning analysis; empirical evaluations
Intervention Description
The project tests and adapts the technology developed at MIT for vocalization collection and labeling, and contributes to data gathering among Italian subjects (and its quality validation) in order to create a multi-cultural dataset and enable cross-cultural studies and analyses. Next, the focus is placed on analyzing the harmonic features of the audio in the vocalizations of the dataset, to identify recurring individual features and patterns corresponding to specific communication purposes or emotional states. Supervised and unsupervised machine learning approaches are developed, and different machine learning algorithms are compared to identify the most accurate ones for the project goal. Last, an exploratory evaluation of the vocalization-understanding machine learning model is conducted to test the usability and utility of the tool for vocalization interpretation.
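As an illustration of the planned algorithm comparison, the following is a minimal sketch, assuming a feature matrix with one row of acoustic features per vocalization and one label per row. The placeholder data, the scikit-learn toolkit, and the two models shown are assumptions for illustration, not the project's actual choices.

```python
# Minimal sketch of a supervised-classifier comparison; all data here is
# synthetic placeholder data, and the models are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # placeholder acoustic feature matrix
y = rng.integers(0, 6, size=200)  # placeholder labels (6 affective classes)

models = {
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

An unsupervised analysis (e.g., clustering each participant's vocalizations) would follow the same pattern, fitting on unlabeled feature vectors instead.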
9. Outcomes
Primary Outcome Measure Information:
Title
Frequency of audio signal samples and their associated labels
Description
Frequency (measured in number per hour) of audio signal samples (sounds and verbalizations) produced by each participant, recorded during hospital stays in various contexts (e.g., during educational interventions and / or in moments of unstructured play) and labeled as self-talk, delight, dysregulation, frustration, request, or social exchange.
A small, wireless recorder (Sony TX800 Digital Voice Recorder TX Series) will be attached to the participant's clothing using strong magnets. The adults (caregivers and / or operators) will then associate the sounds produced by the child with an affective state and / or the probable meaning of the vocalization (the labels) through a web app. A hypothetical sketch of such a label record is given after this outcome measure.
Time Frame
immediately after the intervention
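To make the labeling workflow concrete, here is a hypothetical sketch of the kind of record such a web app might submit for each vocalization; all field names and values are illustrative assumptions, not the project's actual schema.

```python
# Hypothetical label record; the schema is an assumption for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class VocalizationLabel:
    participant_id: str  # pseudonymous participant code
    timestamp_s: float   # seconds from the start of the recording
    label: str           # self-talk, delight, dysregulation, frustration,
                         # request, or social exchange
    annotator: str       # "caregiver" or "operator"

record = VocalizationLabel("P01", 1832.4, "request", "caregiver")
print(json.dumps(asdict(record)))  # serialized as the app might send it
```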
Title
Participant-specific harmonic features derived from the audio signal samples
Description
Temporal and spectral audio features (i.e., pitch-related features, formant features, energy-related features, timing features, articulation features) extracted from the samples and then used for supervised and unsupervised machine learning analysis.
The collected audio signal samples will be segmented around the temporal locations of the labels and associated with the temporally adjacent labels (affective states or probable meanings of vocalizations). Audio harmonic features (temporal/phonetic characteristics) will then be identified for each participant using supervised/unsupervised machine learning analysis of the audio signal samples. Through this process, participant-specific patterns corresponding to specific communication purposes or emotional states will be identified. A minimal sketch of this segmentation and feature-extraction step is given after this outcome measure.
Time Frame
immediately after the intervention
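As an illustration, the following sketch segments a recording around one label timestamp and computes a few pitch-, energy-, and spectrum-related features. The librosa toolkit, window length, and pitch range are assumptions for illustration (the protocol does not name a toolkit), and formant estimation would require a dedicated tool such as Praat, which is not shown.

```python
# Illustrative segmentation and feature extraction around a label timestamp.
import numpy as np
import librosa

def features_around_label(path: str, t_label: float, window_s: float = 1.0):
    y, sr = librosa.load(path, sr=None)           # load at native sample rate
    lo = max(0, int((t_label - window_s) * sr))   # segment boundaries around
    hi = min(len(y), int((t_label + window_s) * sr))  # the label timestamp
    seg = y[lo:hi]

    # Pitch-related: fundamental frequency via probabilistic YIN
    f0, voiced, _ = librosa.pyin(seg, fmin=80, fmax=600, sr=sr)
    # Energy-related: root-mean-square frame energy
    rms = librosa.feature.rms(y=seg)[0]
    # Spectral: centroid as a coarse brightness measure
    centroid = librosa.feature.spectral_centroid(y=seg, sr=sr)[0]

    return {
        "f0_mean_hz": float(np.nanmean(f0)),      # NaN frames are unvoiced
        "voiced_fraction": float(np.mean(voiced)),
        "rms_mean": float(rms.mean()),
        "centroid_mean_hz": float(centroid.mean()),
        "duration_s": (hi - lo) / sr,
    }
```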
Title
Accuracy of machine learning prediction
Description
The classification accuracy of the machine learning analysis, i.e., the number of correct predictions divided by the total number of predictions, evaluated on a retained (held-out) test set of recorded audio signal samples.
This outcome measure will estimate the usability/utility of the developed tool for vocalization interpretation, based on a machine learning analysis of the recorded audio signal samples. A minimal sketch of the accuracy computation is given after this outcome measure.
Time Frame
immediately after the intervention
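For clarity, here is a minimal sketch of the primary accuracy computation on a retained test set; the synthetic data, split ratio, and classifier are assumptions for illustration.

```python
# Accuracy = correct predictions / total predictions on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the labeled vocalization features
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))  # fraction correct on test set
```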
10. Eligibility
Sex
All
Minimum Age & Unit of Time
2 Years
Maximum Age & Unit of Time
10 Years
Accepts Healthy Volunteers
No
Eligibility Criteria
Inclusion Criteria:
- having a clinical diagnosis of autism spectrum disorder according to DSM-5 criteria
- using fewer than 10 words
Exclusion Criteria:
- using any stimulant or non-stimulant medication affecting the central nervous system
- having an identified genetic disorder
- having vision or hearing problems
- suffering from chronic or acute medical illness
11. Contacts and Locations
Central Contact Person:
First Name & Middle Initial & Last Name or Official Title & Degree
Alessandro Crippa, Ph.D.
Phone
+39 031877593
Email
alessandro.crippa@lanostrafamiglia.it
Overall Study Officials:
First Name & Middle Initial & Last Name & Degree
Alessandro Crippa, Ph.D.
Organizational Affiliation
IRCCS Eugenio Medea
Official's Role
Principal Investigator
Facility Information:
Facility Name
Scientific Institute, IRCCS Eugenio Medea
City
Bosisio Parini
State/Province
Lecco
ZIP/Postal Code
23842
Country
Italy
Individual Site Status
Recruiting
Facility Contact:
First Name & Middle Initial & Last Name & Degree
Mariaelena Colombo
Phone
+39 031877357
Email
mariaelena.colombo@lanostrafamiglia.it
12. IPD Sharing Statement
Plan to Share IPD
No