Vocal Emotion Communication With Cochlear Implants

Primary Purpose

Cochlear Hearing Loss

Status
Recruiting
Phase
Early Phase 1
Locations
United States
Study Type
Interventional
Intervention
Perception of acoustic cues to emotion
Production of acoustic cues to emotion
Sponsored by
Father Flanagan's Boys' Home

About this trial

This is an interventional basic science trial for Cochlear Hearing Loss focused on measuring Emotional prosody, Acoustic cues, Perception, Production, Children, Development, Cochlear Implant, Hearing Loss

Eligibility Criteria

6 Years - 80 Years (Child, Adult, Older Adult), All Sexes, Accepts Healthy Volunteers

Inclusion Criteria:

  • Prelingually deaf children with cochlear implants
  • Postlingually deaf adults with cochlear implants
  • Normally hearing children
  • Normally hearing adults

Exclusion Criteria:

  • Non-native speakers of American English
  • Prelingually deaf individuals who receive cochlear implants after age 12
  • Adults unable to pass a basic cognitive screen

Sites / Locations

  • Arizona State University
  • Boys Town National Research Hospital (Recruiting)

Arms of the Study

Arm 1

Arm Type

Experimental

Arm Label

Vocal emotion communication by children and adults with cochlear implants or normal hearing

Arm Description

Participants will be native speakers of American English and include pediatric cochlear implant recipients with unilateral or bilateral devices aged 6-19 years, children with normal hearing aged 6-19 years, postlingually deaf adults with cochlear implants, and adults with normal hearing. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.

Outcomes

Primary Outcome Measures

Vocal emotion recognition accuracy
Percent correct scores in vocal emotion recognition
Vocal emotion recognition sensitivity
Sensitivity (d's) in vocal emotion recognition
Voice pitch (fundamental frequency) of vocal productions
Voice pitch (Hz) measured from acoustic analyses of recorded speech
Intensity of vocal productions
Intensity (decibel units) measured from acoustic analyses of recorded speech
Duration of vocal productions
Duration (1/speaking rate) measured from acoustic analyses of recorded speech
Recognition of recorded speech emotions by listeners -- percent correct scores
Accuracy (percent correct scores) in listeners' ability to identify the emotions recorded in participants' speech
Recognition of recorded speech emotions by listeners -- d' values (sensitivity measure)
Sensitivity (d's based on hit rates and false alarm rates) in listeners' ability to identify the emotions recorded in participants' speech
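Several of the outcomes above are scored as percent correct and as d', a sensitivity measure derived from hit and false-alarm rates. A minimal scoring sketch is shown below; the function names, example counts, and the log-linear correction for extreme rates are illustrative assumptions, not taken from the protocol.

```python
# Minimal sketch (not from the protocol): scoring emotion recognition as
# percent correct and as d' for one emotion category.
# d' = z(hit rate) - z(false-alarm rate); a log-linear correction keeps
# rates of 0 or 1 from producing infinite z-scores.
from scipy.stats import norm

def percent_correct(n_correct: int, n_trials: int) -> float:
    """Overall accuracy as a percentage."""
    return 100.0 * n_correct / n_trials

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity (d') from hit and false-alarm counts, log-linear corrected."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: "happy" correctly identified on 42 of 50 happy trials and
# incorrectly reported on 10 of 50 non-happy trials.
print(percent_correct(42, 50))   # 84.0
print(d_prime(42, 8, 10, 40))    # approximately 1.79
```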

Secondary Outcome Measures

Reaction times (seconds) for vocal emotion identification
Time between the end of the stimulus recording and the response (button press)

Full Information

First Posted
July 26, 2022
Last Updated
June 7, 2023
Sponsor
Father Flanagan's Boys' Home
Collaborators
Arizona State University, House Institute Foundation, University of Nebraska, National Institute on Deafness and Other Communication Disorders (NIDCD), Office of Behavioral and Social Sciences Research (OBSSR)

1. Study Identification

Unique Protocol Identification Number
NCT05486637
Brief Title
Vocal Emotion Communication With Cochlear Implants
Official Title
Perception and Production of Emotional Prosody With Cochlear Implants
Study Type
Interventional

2. Study Status

Record Verification Date
June 2023
Overall Recruitment Status
Recruiting
Study Start Date
July 1, 2022 (Actual)
Primary Completion Date
June 30, 2027 (Anticipated)
Study Completion Date
June 30, 2027 (Anticipated)

3. Sponsor/Collaborators

Responsible Party, by Official Title
Principal Investigator
Name of the Sponsor
Father Flanagan's Boys' Home
Collaborators
Arizona State University, House Institute Foundation, University of Nebraska, National Institute on Deafness and Other Communication Disorders (NIDCD), Office of Behavioral and Social Sciences Research (OBSSR)

4. Oversight

Studies a U.S. FDA-regulated Drug Product
No
Studies a U.S. FDA-regulated Device Product
No
Product Manufactured in and Exported from the U.S.
Yes
Data Monitoring Committee
Yes

5. Study Description

Brief Summary
Patients with hearing loss who use cochlear implants (CIs) show significant deficits, and strong unexplained intersubject variability, in their perception and production of emotions in speech. This project will investigate the hypothesis that "cue-weighting", or how patients utilize the different acoustic cues to emotion, accounts for significant variance in emotional communication with CIs. The studies will focus on children with CIs, but parallel measures will be made in postlingually deaf adults with CIs, so that the results benefit social communication by CI patients across the lifespan by informing the development of technological innovations and improved clinical protocols.
Detailed Description
Emotion communication is a fundamental part of spoken language. For patients with hearing loss who use cochlear implants (CIs), detecting emotions in speech poses a significant challenge. Deficits in vocal emotion perception observed in both children and adults with CIs have been linked with poor self-reported quality of life. For young children, learning to identify others' emotions and express one's own emotions is a fundamental aspect of social development. Yet, little is known about the mechanisms and factors that shape vocal emotion communication by children with CIs. Primary cues to vocal emotions (voice characteristics such as pitch) are degraded in CI hearing, but secondary cues such as duration and intensity remain accessible to patients. It is proposed that individual CI users' auditory experience with their device plays an important role in how they utilize these different cues and map them to corresponding emotions. In previous studies, the Principal Investigator (PI) and the PI's team conducted foundational research that provided valuable information about key predictors of vocal emotion perception and production by pediatric CI recipients. The work proposed here will use novel methodologies to investigate how the specific acoustic cues used in emotion recognition by CI patients change with increasing device experience (Aim 1) and how the specific cues emphasized in vocal emotion productions by CI patients change with increasing device experience (Aim 2). Studies will include both a cross-sectional and a longitudinal approach. The team's long-term goal is to improve emotional communication by CI users. The overall objectives of this application are to address critical gaps in knowledge by elucidating how cue-utilization (the reliance on different acoustic cues) for vocal emotion perception (Aim 1) and production (Aim 2) is shaped by CI experience. The knowledge gained from these studies will provide the evidence base to support the development of clinical protocols that support emotional communication by pediatric CI recipients, and will thus benefit quality of life for CI users. The hypotheses to be tested are: [H1] that cue-weighting accounts significantly for inter-subject variations in vocal emotion identification by CI users; [H2] that optimization of cue-weighting patterns is the mechanism by which predictors such as the duration of device experience and age at implantation benefit vocal emotion identification; and [H3] that in children with CIs, the ability to utilize voice pitch cues to emotion, together with early auditory experience (e.g., age at implantation and/or presence of usable hearing at birth), accounts significantly for inter-subject variation in emotional productions. The two Specific Aims will test these hypotheses while taking into account other factors such as cognitive and socioeconomic status, theory of mind, and psychophysical sensitivity to individual prosodic cues. This is a prospective design involving human subjects who are children and adults. The participants will perform two kinds of tasks: 1) listening tasks, in which participants listen to speech or nonspeech sounds and make a judgment about each sound, interacting with a software program on a computer screen; and 2) speaking tasks, in which participants will read aloud a list of simple sentences in a happy way and a sad way, or converse with a member of the research team, retelling a picture book story or describing an activity of their choosing.
Participants' speech will be recorded, analyzed for its acoustics, and also used as stimuli for listening tasks. In addition to these tasks, participants will also be invited to perform tests of cognition, vocabulary, and theory of mind. Participants will not be assigned to groups, and no control group will be assigned, in any of the Aims. In parallel with cochlear implant patients, the team will test normally hearing listeners spanning a similar age range to provide information on how the intact auditory system processes emotional cues in speech in perception and in production. Effects of patient factors such as their hearing history, experience with their cochlear implant, and cognition will be investigated using regression-based models. All patients will be invited to participate in all studies, with no assignment, until the sample size target is met for the particular study. The order of tests will be randomized as appropriate to avoid order effects.
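The paragraph above mentions regression-based models relating patient factors to the outcome measures. As a loose illustration only, the sketch below fits an ordinary least-squares model predicting emotion recognition sensitivity (d') from two hypothetical predictors; the variable names, data values, and the choice of statsmodels are assumptions, not the study's actual analysis plan.

```python
# Minimal sketch (illustrative only): a regression-based analysis of how
# patient factors might relate to vocal emotion recognition. The predictor
# names, data values, and use of statsmodels are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data for pediatric CI recipients.
data = pd.DataFrame({
    "dprime": [1.2, 1.8, 0.9, 2.1, 1.5, 2.4],               # emotion recognition sensitivity
    "age_at_implantation": [1.5, 1.0, 3.0, 0.8, 2.5, 1.2],   # years
    "device_experience": [6, 9, 4, 12, 7, 14],               # years of CI use
})

# Ordinary least squares: does device experience predict d' after
# accounting for age at implantation?
model = smf.ols("dprime ~ age_at_implantation + device_experience", data=data).fit()
print(model.summary())
```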

6. Conditions and Keywords

Primary Disease or Condition Being Studied in the Trial, or the Focus of the Study
Cochlear Hearing Loss
Keywords
Emotional prosody, Acoustic cues, Perception, Production, Children, Development, Cochlear Implant, Hearing Loss

7. Study Design

Primary Purpose
Basic Science
Study Phase
Early Phase 1
Interventional Study Model
Single Group Assignment
Model Description
This is a prospective design involving human subjects who are children and adults with cochlear implants or with normal hearing. Participants will perform two kinds of tasks: 1) listening tasks, in which participants listen to speech or nonspeech sounds and make a judgment regarding the emotion in the sound; and 2) speaking tasks, in which participants will read aloud a list of simple sentences in a happy way and a sad way, or converse with a member of the research team, retelling a picture book story or describing an activity of their choosing. Participants' speech will be recorded, analyzed for its acoustics, and also used as stimuli for listening tasks. Child participants will also be invited to perform tests of cognition, vocabulary, and theory of mind. All participants will be invited to participate in all studies, with no assignment. The order of tests will be randomized as appropriate to avoid order effects.
Masking
None (Open Label)
Masking Description
No masking is involved.
Allocation
N/A
Enrollment
255 (Anticipated)

8. Arms, Groups, and Interventions

Arm Title
Vocal emotion communication by children and adults with cochlear implants or normal hearing
Arm Type
Experimental
Arm Description
Participants will be native speakers of American English and include pediatric cochlear implant recipients with unilateral or bilateral devices aged 6-19 years, children with normal hearing aged 6-19 years, postlingually deaf adults with cochlear implants, and adults with normal hearing. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.
Intervention Type
Behavioral
Intervention Name(s)
Perception of acoustic cues to emotion
Intervention Description
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team aims to collect perceptual data to build a statistical model to test the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue-optimization.
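One way to picture the cue-weighting construct behind this intervention is a trial-by-trial regression of a listener's emotion judgments on the acoustic cues of each stimulus, with the fitted coefficients acting as cue weights. The sketch below is a simplified two-emotion illustration on simulated data with assumed variable names; it is not the study's actual statistical model.

```python
# Minimal sketch (illustrative assumptions): estimating a listener's "cue
# weights" by regressing emotion judgments on per-stimulus acoustic cues.
# Variable names, the two-emotion simplification, and the use of statsmodels
# are assumptions, not the study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical z-scored cues for each stimulus: voice pitch, duration, intensity.
stimuli = pd.DataFrame({
    "f0_z": rng.normal(size=n_trials),
    "duration_z": rng.normal(size=n_trials),
    "intensity_z": rng.normal(size=n_trials),
})

# Simulated listener who relies mostly on pitch and a little on intensity.
logit_p = 2.0 * stimuli["f0_z"] + 0.5 * stimuli["intensity_z"]
stimuli["said_happy"] = (rng.random(n_trials) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression: fitted coefficients index how heavily each cue
# is weighted in this listener's happy/sad judgments.
weights = smf.logit("said_happy ~ f0_z + duration_z + intensity_z", data=stimuli).fit(disp=False)
print(weights.params)
```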
Intervention Type
Behavioral
Intervention Name(s)
Production of acoustic cues to emotion
Intervention Description
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.
Primary Outcome Measure Information:
Title
Vocal emotion recognition accuracy
Description
Percent correct scores in vocal emotion recognition
Time Frame
Years 1-5
Title
Vocal emotion recognition sensitivity
Description
Sensitivity (d's) in vocal emotion recognition
Time Frame
Years 1-5
Title
Voice pitch (fundamental frequency) of vocal productions
Description
Voice pitch (Hz) measured from acoustic analyses of recorded speech
Time Frame
Years 1-5
Title
Intensity of vocal productions
Description
Intensity (decibel units) measured from acoustic analyses of recorded speech
Time Frame
Years 1-5
Title
Duration of vocal productions
Description
Duration (1/speaking rate) measured from acoustic analyses of recorded speech (a sketch of such an acoustic analysis appears after the primary outcome list below)
Time Frame
Years 1-5
Title
Recognition of recorded speech emotions by listeners -- percent correct scores
Description
Accuracy (percent correct scores) in listeners' ability to identify the emotions recorded in participants' speech
Time Frame
Years 1-5
Title
Recognition of recorded speech emotions by listeners -- d' values (sensitivity measure)
Description
Sensitivity (d's based on hit rates and false alarm rates) in listeners' ability to identify the emotions recorded in participants' speech
Time Frame
Years 1-5
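A rough sketch of how the acoustic outcome measures above (voice pitch in Hz, intensity in dB, and duration in seconds) could be extracted from a recorded utterance follows; the protocol does not name an analysis toolkit, so the use of librosa and all parameter choices here are assumptions.

```python
# Minimal sketch (illustrative assumptions): extracting mean voice pitch (Hz),
# mean intensity (dB), and duration (s) from a recorded utterance with librosa.
import numpy as np
import librosa

def prosodic_measures(wav_path: str):
    y, sr = librosa.load(wav_path, sr=None)   # keep the original sample rate

    # Fundamental frequency (voice pitch) via the pYIN tracker; NaN where unvoiced.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    mean_f0_hz = float(np.nanmean(f0))

    # Intensity: frame-wise RMS converted to decibels (relative, not SPL-calibrated).
    rms = librosa.feature.rms(y=y)[0]
    mean_intensity_db = float(np.mean(20.0 * np.log10(rms + 1e-10)))

    # Utterance duration in seconds (the inverse of speaking rate for fixed text).
    duration_s = len(y) / sr

    return mean_f0_hz, mean_intensity_db, duration_s

# Example use on a hypothetical recording:
# f0, intensity, dur = prosodic_measures("participant01_happy_sentence03.wav")
```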
Secondary Outcome Measure Information:
Title
Reaction times (seconds) for vocal emotion identification
Description
Time between the end of the stimulus recording and the response (button press)
Time Frame
Years 1-5

10. Eligibility

Sex
All
Minimum Age & Unit of Time
6 Years
Maximum Age & Unit of Time
80 Years
Accepts Healthy Volunteers
Yes
Eligibility Criteria
Inclusion Criteria:
  • Prelingually deaf children with cochlear implants
  • Postlingually deaf adults with cochlear implants
  • Normally hearing children
  • Normally hearing adults
Exclusion Criteria:
  • Non-native speakers of American English
  • Prelingually deaf individuals who receive cochlear implants after age 12
  • Adults unable to pass a basic cognitive screen
Central Contact Person:
First Name & Middle Initial & Last Name or Official Title & Degree
Monita Chatterjee, Ph.D.
Phone
531-355-5069
Email
monita.chatterjee@boystown.org
First Name & Middle Initial & Last Name or Official Title & Degree
Dawna E Lewis, Ph.D.
Phone
531-355-6607
Email
dawna.lewis@boystown.org
Overall Study Officials:
First Name & Middle Initial & Last Name & Degree
Monita Chatterjee, Ph.D.
Organizational Affiliation
Father Flanagan's Boys' Home
Official's Role
Principal Investigator
Facility Information:
Facility Name
Arizona State University
City
Tempe
State/Province
Arizona
ZIP/Postal Code
85287
Country
United States
Individual Site Status
Not yet recruiting
Facility Contact:
First Name & Middle Initial & Last Name & Degree
Xin Luo, Ph.D.
Phone
480-965-9251
Email
xinluo@asu.edu
Facility Name
Boys Town National Research Hospital
City
Omaha
State/Province
Nebraska
ZIP/Postal Code
68131
Country
United States
Individual Site Status
Recruiting
Facility Contact:
First Name & Middle Initial & Last Name & Degree
Monita Chatterjee, Ph.D.
Phone
531-355-5069
Email
monita.chatterjee@boystown.org
First Name & Middle Initial & Last Name & Degree
Dawna E Lewis, Ph.D.
Phone
531-355-6607
Email
dawna.lewis@boystown.org
First Name & Middle Initial & Last Name & Degree
Sophie E Ambrose, Ph.D.

12. IPD Sharing Statement

Plan to Share IPD
Yes
IPD Sharing Plan Description
The team plans to share, as appropriate, relevant information such as participant age, device (if a cochlear implant user), age at implantation, and outcome measures, excluding all protected health information (PHI).
IPD Sharing Time Frame
When specific studies are completed and published, data will be shared within 6 months post-publication.
IPD Sharing Access Criteria
Data will be shared via Boys Town's Open Science Framework
Citations:
PubMed Identifier
32149924
Citation
Barrett KC, Chatterjee M, Caldwell MT, Deroche MLD, Jiradejvong P, Kulkarni AM, Limb CJ. Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear. 2020 Sep/Oct;41(5):1372-1382. doi: 10.1097/AUD.0000000000000862.
Results Reference
background
PubMed Identifier
31632320
Citation
Chatterjee M, Kulkarni AM, Siddiqui RM, Christensen JA, Hozan M, Sis JL, Damm SA. Acoustics of Emotional Prosody Produced by Prelingually Deaf Children With Cochlear Implants. Front Psychol. 2019 Sep 30;10:2190. doi: 10.3389/fpsyg.2019.02190. eCollection 2019.
Results Reference
background
PubMed Identifier
31589545
Citation
Damm SA, Sis JL, Kulkarni AM, Chatterjee M. How Vocal Emotions Produced by Children With Cochlear Implants Are Perceived by Their Hearing Peers. J Speech Lang Hear Res. 2019 Oct 25;62(10):3728-3740. doi: 10.1044/2019_JSLHR-S-18-0497. Epub 2019 Oct 7.
Results Reference
background
PubMed Identifier
25448167
Citation
Chatterjee M, Zion DJ, Deroche ML, Burianek BA, Limb CJ, Goren AP, Kulkarni AM, Christensen JA. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers. Hear Res. 2015 Apr;322:151-62. doi: 10.1016/j.heares.2014.10.003. Epub 2014 Oct 16.
Results Reference
background
