
AI, Language Equity and English-Speaking Privilege

  • Rachel Barton
  • Mar 26
  • 10 min read

Some thoughts from a plane, somewhere over the North Sea



Rachel Barton, 24th March 2026


Knowing is not Understanding

I have spent the last few years enthusiastically advocating for the responsible use of generative AI in Speech and Language Therapy (SLT). I have run training sessions, delivered keynotes, and built resources to help speech and language therapists use these tools safely and effectively. Part of that advocacy has always included an honest account of limitations, including the fact that AI tools perform better in English than in less widely represented languages.


I knew that; I have said it many times. But knowing something and genuinely understanding it are two different things.


I am currently on a plane halfway between Copenhagen and London, returning from the Danish Association of Speech and Language Therapists conference in Nyborg where I delivered a workshop: AI in Speech and Language Therapy: Opportunities, Challenges and Responsibility.


Sitting with European SLT colleagues yesterday, I listened as Danish therapists described their day-to-day experience of using AI tools in clinical work. The outputs that large language models such as ChatGPT produce in Danish are noticeably less accurate and less clinically reliable than those they produce in English - so much so that some have adopted a workaround: using AI in English and then translating back into Danish.


Hearing it described as an everyday clinical limitation rather than a theoretical issue was both humbling and grounding. I had been naming this issue without understanding the true weight of it and my own privilege came into sharper focus: I am an English speaker, working in English, with mainly English-speaking clients. The tools work well for me. I had not been giving the challenges of non-English AI use the weight they deserve.


What's more, every conversation I had in Denmark was in English. Despite my attempts over the last few months to learn Danish, and my mildly impressive 920-day Duolingo streak (I try to learn some of the language of whichever country I'm visiting next), my ability to converse in anything other than English is extremely limited. In part this is my own fault for not persisting with one language, but it is also because, wherever I go, people generally speak English. The world adjusts itself to meet my language needs, so the motivation to truly persist with a second language is sadly lacking. In itself this is not a new epiphany, but the fact that this privilege extends to AI use is.

 

The Problem with "Work in English and Translate Back"


The workaround some clinicians have adopted, prompting in English and translating the response back, is not without challenges:

  • Cognitive load - working in a second language uses additional mental resources and executive function. For therapists who are already at capacity, this is an added pressure.

  • Loss of clinical nuance - some concepts and cultural references do not translate directly and so may be lost in the process.

  • Professional identity - there is something undermining about a tool that carries the message, however subtly, that you will get better results if you work in someone else's language. That does not sit well in a profession that values diversity in language and communication.

  • Equity of access - clinicians who are less proficient in English are at a greater disadvantage. A widening gap between English-speaking and non-English-speaking clinicians, driven by unequal access to effective AI tools, could ultimately affect the quality of service each SLT is able to provide.


Why AI Tools Work Better in English


Large language models like ChatGPT, Claude and Gemini are trained on vast quantities of text data, and English dominates that data by a huge margin. The quality of a model's reasoning and accuracy in any given language is broadly proportional to how well represented that language is in its training data. Danish, like most of the world's languages, is vastly underrepresented in that data. This is even more the case for languages that are spoken but do not have a standard written form.


The result is outputs that are weaker in clinical reasoning and have a higher risk of hallucinations (false information that sounds plausible). Failure to spot these errors could have dangerous consequences in clinical contexts.


The Problem Extends Beyond ChatGPT


The issue does not stop with general-purpose tools such as ChatGPT. SLT is starting to see a growing number of specialist AI technologies, including speech recognition systems, automated phonetic transcription tools, voice analysis tools and language sample analysis software.


Although some tools are being developed in languages such as Spanish, Korean, French, Japanese, Hindi, Portuguese and German, many of the best-resourced and most clinically mature tools have been developed first in English, and much of the research base has relied on English-language datasets (Georgiou, 2025).


For a tool to work in another language such as Danish, it needs to be trained on Danish-language datasets. Labelled datasets of disordered speech in non-English languages are lacking. Data like this takes time and resources to build. Sadly, this is not where the financial incentives are for many large tech companies. Advances in technology are largely driven by profit, and that means a tech landscape that is still extremely English-language centric. The pace and quality of development across different languages is deeply uneven. If that inequity continues, AI could reinforce and worsen existing language hierarchies.

 

Language Is Only Part of the Equity Problem


Language is not the only inequity in these tools. A 2025 paper by Georgiou highlights substantially higher error rates in automated speech recognition systems for Black speakers. For SLTs, whose work depends on accurately hearing and interpreting speech, a tool that hears some voices less well than others is not fit for purpose.


A recent Stanford-led study tested 15 leading multimodal language models on core SLT tasks including disorder diagnosis and automatic speech recognition. None came close to an acceptable threshold for clinical use and all performed better on male speakers than female (Patel et al., 2025).


AI tools are not equally good at hearing everyone. Their performance can vary by accent, dialect, age and gender, depending on whose voices were most represented in the data they were trained on. The communities most likely to be poorly served by these tools may well be the same communities that already face the greatest barriers to accessing SLT services.


There is valuable work being done to address these inequities. Richard Cave, a UK speech and language therapist and co-Director of the Centre for Digital Language Inclusion (CDLI), has spent years working in AI and non-standard speech recognition. The CDLI provides open-source datasets of non-standard speech, with a particular focus on African languages, alongside trained speech recognition models designed to better recognise diverse speech patterns. These resources are freely available to researchers and developers worldwide (CDLI, 2026). This kind of work begins to shift the balance, but it also highlights how much further there is to go.


Language equity in AI also extends beyond spoken and written majority languages, raising important questions about signed languages, multimodal communication and who gets included in digital innovation at all. In reality, AI is a double-edged sword, with the potential to either reduce existing health inequalities or to deepen them. We need to position ourselves to advocate for the former.

 

Whose Responsibility Is This?


The question I keep coming back to is: who is responsible for changing this?

Big tech bears primary responsibility for the design choices that have created and reinforced English-language dominance in AI. Because these systems are trained on data drawn largely from the internet, and because English dominates that data, the imbalance is built in from the start. The landscape is also shaped by commercial priorities. Companies such as Google, OpenAI, Microsoft and Anthropic are in a relentless race to outdo one another in capability and market share, often drawing focus away from the needs of minority language users.


In addition to the CDLI, there are other exceptions worth noting. Mozilla's Common Voice project is a great example of what a different set of priorities looks like. As a non-profit initiative, it has spent years crowdsourcing voice recordings across hundreds of languages, explicitly addressing the fact that most existing speech datasets underrepresent women, people with accents, and speakers of less widely spoken languages. The resulting datasets are released free of charge under a public domain licence, so that researchers and developers anywhere in the world can use them (Mozilla, 2024). It is exactly the kind of open, equity-driven infrastructure that the field needs more of.


Researchers and developers within the SLT world also have a responsibility to build the evidence base and data infrastructure needed for non-English languages. That work requires funding, recognition and a clear understanding that this is not a niche concern or an optional extra. It is central to whether AI becomes a force for greater equity or a tool that exacerbates existing divides.


Professional bodies cannot be passive here. Naming risks in guidance is a starting point. The profession then needs active advocacy for inclusive design, multilingual representation, and for AI development that serves all the populations we work with, not just the English-speaking ones.


English-speaking SLTs who benefit from the current system also have a responsibility here. We may not be excluded ourselves, but it falls to us to notice who is, and to advocate for more inclusive systems.


For me personally, that means going further in what I teach. I have always included language limitations in my training, but not nearly enough. The clinical experiences of the Danish SLTs I spoke with yesterday deserve more than a passing mention. So do the SLTs working in Romanian, Welsh, Urdu and dozens of other languages whose clinical realities I have not needed to navigate.

 

The European Response

On 6 March 2026, I had the privilege of speaking at the launch of the European Speech and Language Therapy Association's (ESLA) position paper on AI, Shaping the Future of Speech and Language Therapy with Artificial Intelligence. ESLA represents 37 member associations across 34 countries and over 40,000 SLT professionals.


The organisation has put equity and inclusion at the centre of its AI vision. The paper explicitly recognises that AI tools may improve services for multilingual and culturally diverse populations, but also that unequal access to AI-supported services could widen existing health inequalities if not proactively addressed. It calls for AI tools to be co-designed with professionals and people with lived experience, and for SLT expertise to be embedded in AI health strategies and digital health governance at a European level (ESLA, 2026).


When I first read the paper, I was struck by the centrality of this principle. After spending an afternoon and evening with ESLA colleagues from Denmark, Sweden and Norway, I now understand its urgency more fully.

 

When Tech Companies Come Calling


There is another part of this story that the profession needs to grapple with. Increasingly, start-ups and tech companies are approaching speech and language therapists for advice. On one level, that is encouraging - it suggests that developers recognise that these tools cannot be built well without clinical input, and that SLTs have valuable expertise to offer.


My experience is that this creates another tension. Many SLTs do not have spare time to offer unpaid advice and support, particularly if they are self-employed or already working at capacity. Clinical expertise is needed to make these tools safe, useful and relevant. If companies want to draw on that expertise, they need to think seriously about how it is recognised and resourced.


At the same time, there is a risk in not engaging at all. If speech and language therapists are absent from these conversations, tools may be developed without the right questions being asked about language diversity, equity, privacy, clinical safety or whether the product is genuinely useful. Clinicians cannot give unlimited time away for free, but development should not be left entirely in the hands of people who do not understand clinical work.


In addition, not every speech and language therapist will know what questions to ask a technology company. Having clinical expertise does not automatically mean having confidence about technology-related data governance, bias, validation, regulation or implementation. Yet these are the issues that can determine whether a tool is helpful, harmful or exclusionary.


Importantly, none of this should happen without people with lived experience being involved. If a company is building a tool intended for people with communication difficulties, then those people and their families should be part of the design process from the start. Co-design is not just ethically right - it leads to better questions, better priorities and ultimately better tools.


What is needed is collective action. Rather than relying on a small number of motivated clinicians to absorb repeated requests, the profession may need shared resources, clearer expectations and a wider culture of contribution, so that the burden does not fall on the same few people repeatedly.


As a starting point, here are some questions to ask if you are approached by a tech company for your clinical expertise:

  • What problem is this tool trying to solve, and is it a real clinical need?

  • Who has helped design it, and have service users or people with lived experience been involved from the start?

  • Which languages, dialects and populations has it been trained and tested on?

  • What evidence is there that it is accurate, clinically useful and safe?

  • How have bias, privacy, regulation and clinical safety been addressed?

  • How will my time and expertise be recognised and compensated?


If SLTs are going to help shape this future, that work needs to be valued, distributed fairly and supported.

 

New Possibilities Need Equity

I do remain genuinely optimistic about what AI can offer SLT. The possibilities for reducing administrative burden, supporting early identification, personalising interventions and extending the reach of services are exciting.


The SLT profession now has a choice about how it engages with AI. We can adopt it as it stands, allowing tech companies large and small to dictate the parameters, or we can advocate collectively for development that reflects the full linguistic and cultural diversity of the populations we serve.



My thanks to the SLTs in Nyborg who shared their experiences so openly. One of the things I love most about training is that I always learn something new in the process. This time it came with a side order of humility.



Continue the Conversation:

I'd love to hear about your experiences using AI in SLT. Connect with me via:


 

AI Acknowledgement

I used Claude AI and ChatGPT as thought partners for this piece – sharing my reflections and exploring a range of perspectives. I used them to help me refine structure, test phrasing, improve clarity and source references. I used Claude to write image prompts and ChatGPT to create the images. The ideas, reflections, critique and final editorial decisions are my own, and I take full responsibility for the final content. This is, I think, a reasonable example of what responsible generative AI use in professional writing can look like.

 

References

CDLI (2026) Centre for Digital Language Inclusion. Available at: https://www.cdl-inclusion.com/ (Accessed: 25 March 2026).


ESLA (2026) Shaping the Future of Speech and Language Therapy with Artificial Intelligence. Published 6 March 2026. Available at: https://eslaeurope.eu/whatson/esla-ai-position-paper/ (Accessed: 25 March 2026).


Georgiou, G.P. (2025) 'Transforming speech-language pathology with AI: opportunities, challenges, and ethical guidelines', Healthcare, 13(19), p. 2460. Available at: https://doi.org/10.3390/healthcare13192460 (Accessed: 25 March 2026).


Mozilla (2024) Common Voice. Available at: https://commonvoice.mozilla.org (Accessed: 25 March 2026).


Patel, F., Nguyen, D.Q., Truong, S.T., Vaynshtok, J., Koyejo, S. and Haber, N. (2025) The Sound of Syntax: Finetuning and Comprehensive Evaluation of Language Models for Speech Pathology. arXiv preprint arXiv:2509.16765. Available at: https://doi.org/10.48550/arXiv.2509.16765 (Accessed: 25 March 2026).

 
