In recent years, AI-enhanced ambient listening tools have transformed the way clinicians use vocal biomarkers for cognitive and behavioral health assessments. As these technologies expand, clinicians are exploring use cases across a wider range of languages, populations, and conditions. Looking ahead to 2025, expect vocal biomarkers to play a more essential role in disease detection, offering valuable insights while saving time for both doctors and patients.
As a clinical decision support tool (CDST), vocal biomarkers can provide timely information, usually at the point of care, to help inform decisions about a patient’s care. Here are five of the most relevant trends worth watching in the vocal biomarker space.
Growth of ambient listening
Ambient listening tools gained popularity among clinicians in 2024 and will continue to do so in 2025. These apps enhance digital vocal biomarker tools by picking up on everyday conversations between patients and providers, saving time for patients and doctors alike.
For example, instead of reciting a prewritten script, the patient can have a normal, unscripted conversation in a clinical setting recorded by the ambient listening app. The conversation can yield the same caliber of vocal biomarker data as the script while also giving the provider more time to ask follow-up questions — which can yield important insights, too.
Uploading an AI-generated transcript of this dialogue into a patient’s EMR lets providers simultaneously capture the data needed for cognitive and behavioral health assessments. The ambient listening assessment as a CDST is eligible for insurance reimbursement under Current Procedural Terminology (CPT) codes.
Adapting models to a variety of languages
Using English speakers as test subjects has yielded promising commercial applications for vocal biomarkers as a CDST — for other English speakers. The logical next step: developing a robust dataset in as many languages as possible.
Already, models in multiple languages have demonstrated strong correlations between vocal biomarkers and diseases including mild cognitive impairment and Alzheimer’s disease, among others. English, Japanese, and Spanish models are already in commercial use. Look for more languages to come to market in 2025 and beyond.
As more languages become available to more patients, the technology will have a broader reach across different populations and socioeconomic statuses.
Using vocal biomarkers to detect more diseases
Currently, there are models for vocal biomarkers as CDSTs for Huntington’s disease, Parkinson’s disease, mild cognitive impairment, and Alzheimer’s disease. In behavioral health, current applications include anxiety and depression. In 2025, look for post-traumatic stress disorder (PTSD) and multiple sclerosis (MS) as the next frontier for vocal biomarker models.
Having more powerful screening tools for each of these conditions has the potential to save lives. And because one audio recording can serve as a vocal biomarker screening for all of the above conditions, the screenings can effectively be performed simultaneously, at no additional time cost to the doctor or patient.
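The "one recording, many screenings" idea can be sketched in code. The following is a hypothetical illustration only, not Canary Speech's actual pipeline: it assumes a single recording has already been reduced to a vector of acoustic features (the feature names, weights, and threshold here are invented), and that each condition has its own scoring model over those shared features.

```python
# Hypothetical sketch: score one shared acoustic feature vector against
# several condition-specific models in a single pass. All names, weights,
# and thresholds are invented for illustration; a real system would use
# validated, learned models.
from typing import Dict

# Acoustic features extracted once from a single recording (invented values).
features = {"pitch_variability": 0.42, "pause_rate": 0.31, "speech_rate": 0.77}

# One simple linear scoring model per condition (invented weights).
MODELS: Dict[str, Dict[str, float]] = {
    "mild_cognitive_impairment": {"pitch_variability": 0.2, "pause_rate": 0.9, "speech_rate": -0.4},
    "depression": {"pitch_variability": -0.6, "pause_rate": 0.5, "speech_rate": -0.3},
    "parkinsons": {"pitch_variability": 0.8, "pause_rate": 0.2, "speech_rate": -0.5},
}
THRESHOLD = 0.3  # invented flagging threshold

def screen_all(feats: Dict[str, float]) -> Dict[str, bool]:
    """Score the same feature vector against every model at once."""
    results = {}
    for condition, weights in MODELS.items():
        score = sum(weights[name] * feats[name] for name in weights)
        results[condition] = score >= THRESHOLD  # flag for clinician follow-up
    return results

flags = screen_all(features)
```

Because feature extraction happens once and each model reads the same vector, adding a new condition is just another entry in the model table, which is the practical reason the screenings add no extra time for the doctor or patient.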
Broader commercial applications
In Europe, individuals buying insurance or financial products online are required to undergo cognitive and behavioral health evaluation. Ambient listening tools can monitor a call center and use vocal biomarkers to flag potential cognitive and behavioral health issues before a major purchase is approved, a powerful safeguard for assuring the mental and behavioral competence of potential clients.
In America, the practice has yet to take hold, but the potential is obvious. In many commercial settings, it makes sense that both the buyer and seller want the buyer to be of sound mind. Expect vocal biomarker technology to fortify efforts to provide consumer protections anywhere call centers exist.
Applying vocal biomarker technology to childhood disease
Current applications focus on detecting vocal biomarkers in adults. By turning to childhood diseases, clinicians could extend screening for a variety of behavioral and cognitive conditions to a new, large segment of the population.
Many childhood conditions, such as ADHD and autism, are difficult to diagnose and validate for referral at the primary care level. Early referrals can significantly improve the likelihood of a long-term positive outcome.
About Henry O’Connell
Henry O’Connell is the CEO and co-founder of Canary Speech, the leading AI-powered health tech company that uses real-time vocal biomarkers to screen for mental health and neurological disorders. Henry has more than twenty years of experience leading technology companies, both private and public, and has served on the boards of directors of several technology companies in the U.S. and internationally. His medical diagnostic and technology experience includes work at Hewlett-Packard, Gibson, and the National Institutes of Health in neurological research.