
How Will Artificial Intelligence Reshape Speech-Language Pathology Services and Practice in the Future?

June 18, 2025 | Articles

Following the lead of Joe Duffy (2016), I started asking graduate students in my motor speech disorders classes to imagine (a) what their clinical practice will look like in 10 years and (b) how technology will affect clinical practice. Invariably, these discussions turn to artificial intelligence (AI), often as some version of the question "Will robots replace speech-language pathologists (SLPs)?" I view these discussions as an opportunity to nudge the mindset from one of professional vulnerability to one of empowerment.

AI is a statistical approach that harnesses the power of data to create tools that can enhance our clinical practice and promote the health and success of our clients. Although Siri and Alexa cannot yet differentially diagnose dysarthria subtypes, rapid advances in technology already influence our clinical practice. SLPs will need to understand AI clinical tools, know their limitations, and employ them judiciously as they become available.

It is impossible to predict the many ways in which AI will affect our discipline. Here, we present one view of how AI may reshape speech-language pathology services, focusing on individuals with neurodegenerative disease, and we discuss the broader implications for clinical practice. We identify four interrelated categories of AI that may change our profession, recognizing that the lines between some of these categories are blurred.

1. More efficient clinical documentation

AI is already with us. Automatic speech recognition (ASR) is an AI tool that turns smartphones and tablets into personal assistants whose accuracy improves the more we use them. ASR machine-learning algorithms are trained on tens of thousands of hours of speech produced by members of the general public. The extension of these tools to health care has the potential to free clinicians’ time by automating time-consuming clinical tasks such as documentation. In the not-too-distant future, clinicians may be able to verbally annotate their case notes, and an ASR algorithm will decode and parse the notes into the correct form fields. One can envision that this application of AI will free SLPs to spend additional time with patients.
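The "decode and parse" step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration: it assumes the ASR step has already produced a transcript of the dictated note, and the field names (goal, activity, accuracy, plan) are invented for the example, not drawn from any real documentation system.

```python
import re

# Hypothetical form fields an SLP might dictate; the ASR step is assumed
# to have already turned the spoken note into this transcript string.
FIELDS = ["goal", "activity", "accuracy", "plan"]

def parse_dictation(transcript: str) -> dict:
    """Split a dictated note into form fields keyed by spoken field names."""
    names = "|".join(FIELDS)
    # Match "field name: content" up to the next field name or end of string.
    pattern = re.compile(
        r"\b(?P<name>%s)\b\s*[:,]\s*(?P<body>.*?)(?=\b(?:%s)\b\s*[:,]|$)"
        % (names, names),
        re.IGNORECASE | re.DOTALL,
    )
    fields = {}
    for m in pattern.finditer(transcript):
        fields[m.group("name").lower()] = m.group("body").strip(" ,.")
    return fields

note = ("goal: produce /r/ in single words, "
        "accuracy: 80 percent over 20 trials, "
        "plan: advance to phrase level")
print(parse_dictation(note))
```

A production system would of course need to handle free-form phrasing and recognition errors; the point is only that once speech becomes text, routing it into structured documentation is a tractable parsing problem.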

2. Assistive technologies for clients

Intelligent assistive technologies are algorithms and devices that compensate for communicative and cognitive impairments. For example, imagine ASR algorithms that are trained to recognize a small, closed set of words produced by someone with reduced intelligibility. As long as the person can produce a word fairly consistently, the ASR system may be able to recognize it. The system can then display the word as text for the communication partner to read or, using synthesized speech, say it aloud. Another example of intelligent assistive technology is voice banking; one use of this technology is to preserve the voices of clients who know that they will eventually lose the ability to communicate verbally. Clients periodically record speech samples via a web-based platform (for example, https://vocalid.ai/). These "historical" samples may then be used to develop personalized assistive technologies or even customized synthesized voice outputs. These outputs may more closely reflect the speaker's natural voice, capturing an individual's unique identity and adding richness to the communicative exchange.
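The closed-set recognition idea can be illustrated with a toy template matcher. This is a simplified sketch, not any real ASR system: each vocabulary word is represented by a stored feature vector (standing in for acoustic features such as MFCCs averaged over the individual speaker's recordings), and an utterance is matched by cosine similarity, with a confidence threshold so the system declines rather than guesses. All vectors and thresholds here are made up for illustration.

```python
import math

# Toy closed-set vocabulary: each word has a stored feature template.
# In practice these would be acoustic features learned from several
# recordings by the individual speaker, not three-number vectors.
TEMPLATES = {
    "yes":  [0.9, 0.1, 0.3],
    "no":   [0.1, 0.8, 0.2],
    "help": [0.4, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(utterance, threshold=0.85):
    """Return the closed-set word whose template best matches, or None."""
    best_word, best_score = None, 0.0
    for word, template in TEMPLATES.items():
        score = cosine(utterance, template)
        if score > best_score:
            best_word, best_score = word, score
    return best_word if best_score >= threshold else None

print(recognize([0.85, 0.15, 0.25]))  # near the "yes" template
```

Because the vocabulary is small and speaker-specific, consistency matters more than intelligibility: the utterance only has to resemble the speaker's own template, not a general model of the word.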

3. Objective assessment

The clinical examination conducted by an SLP is both a science and an art that is honed and refined through years of experience. The examination/evaluation process will benefit from the addition of tools permitting validated objective measures of speech and language. Historically, objective measures required manual coding and measuring—procedures that were most feasibly conducted in research labs. Technological advances support the transition of these manual approaches to automated methods that can provide objective speech and language measures quickly (for example, https://thelearningcorp.com/constant-therapy). For example, soon SLPs may be able to remotely and reliably assess language and articulation through a simple picture description task administered on the client’s smartphone. Such tools will permit more frequent assessments, remote therapy administration, and finer-grained resolution of features on continuous scales as well as mitigate the reliability issues that plague subjective evaluations.
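As a concrete (and deliberately minimal) illustration of automated objective measures, the sketch below computes two transcript-level metrics from a picture-description sample: speaking rate in words per minute and type-token ratio as a simple index of lexical diversity. Real tools would also extract acoustic features and use validated normative data; the sample sentence and duration here are invented.

```python
def speech_metrics(transcript: str, duration_seconds: float) -> dict:
    """Compute simple objective measures from a transcribed speech sample."""
    words = transcript.lower().split()
    types = set(words)
    return {
        "total_words": len(words),
        # Speaking rate: words produced per minute of speaking time.
        "words_per_minute": round(len(words) / (duration_seconds / 60), 1),
        # Type-token ratio: unique words / total words, a coarse index
        # of lexical diversity (sensitive to sample length in practice).
        "type_token_ratio": round(len(types) / len(words), 2),
    }

sample = "the boy is reaching for the cookie jar while the stool tips over"
print(speech_metrics(sample, duration_seconds=8.0))
```

Even measures this simple gain clinical value once they are automated: they can be collected at every session, tracked on a continuous scale, and compared across time without the reliability problems of subjective rating.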

4. Personalized practice

Personalized medicine, or individual medicine, refers to diagnosing and treating patients in ways that are of maximal benefit, based on (a) their genetic, behavioral, social, cultural, and economic profiles and (b) treatment response patterns. SLPs will be able to move more quickly into personalized practice as objective assessment data are integrated into the clinical arena. Large-scale databases that combine outcomes, client characteristics, and SLP treatment decisions will enable new algorithms for personalizing treatment. For instance, adaptive learning algorithms can be used to maximize client progress efficiently and identify performance plateaus. We expect that individualized objective data will eventually be required by third-party payers, electronic health records, and professional outcome measure databases.
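One small piece of the adaptive-learning idea, plateau identification, can be sketched directly: fit a least-squares slope to the most recent session-by-session accuracy scores and flag a plateau when the trend flattens. The window size, threshold, and score sequences below are illustrative assumptions, not clinically validated parameters.

```python
def plateaued(scores, window=5, min_slope=0.5):
    """Return True if the last `window` scores show little improvement.

    Fits a least-squares slope (accuracy points gained per session) to
    the most recent `window` scores and compares it to `min_slope`.
    """
    if len(scores) < window:
        return False  # not enough sessions to judge a trend
    recent = scores[-window:]
    n = len(recent)
    xbar = (n - 1) / 2
    ybar = sum(recent) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(recent))
    den = sum((i - xbar) ** 2 for i in range(n))
    slope = num / den  # accuracy points gained per session
    return slope < min_slope

improving = [40, 48, 55, 63, 70, 78]        # steady gains: keep going
stalled   = [40, 55, 70, 79, 80, 80, 79, 80]  # flat tail: time to adapt
print(plateaued(improving), plateaued(stalled))
```

A flagged plateau would not dictate a decision on its own; it would prompt the clinician to consider advancing the target, changing the task, or re-examining the treatment plan.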

What does all of this mean for SLP clinicians?

AI will reshape our practice in three critical ways: 

  1. We will become more efficient and spend more time with clients because AI will help with documentation.
  2. We will have new tools at our disposal to evaluate clients and to improve communicative outcomes.
  3. We will play a larger and more integral role in the early identification of disorders and diseases. Speech and language symptoms already serve as a window into brain health; acoustic and cognitive–linguistic metrics add sensitivity to these models (for example, http://auralanalytics.com). As effective treatments or cures are developed for neurodegenerative diseases (and other disease or disorder categories), the composition of our caseloads will shift, and it will be important to use clinically interpretable objective measures that are sensitive to changes secondary to disease and/or intervention.

References

Duffy, J. R. (2016). Motor speech disorders: Where will we be in 10 years? Seminars in Speech and Language, 37(3), 219–224. https://doi.org/10.1055/s-0036-1584154
