Image: Jamillah Knowles & Reset.Tech Australia / Better Images of AI / People on phones (portrait) / CC-BY 4.0
In all the discussion about how artificial intelligence will change society, an important question is being overlooked: what does it mean that we are more willing to believe a computer can communicate than a person who does so in an atypical way? However promising AI may be, we continue to dismiss human intellect that does not meet our expectations of what intelligence looks like. Unless we confront the ableism in our collective understanding of intelligence and work with the disability community to shape AI, emerging technologies will cause damage that more critical reflection could have prevented.
Since OpenAI’s ChatGPT came out in November 2022, technologists, journalists, and the general public have been caught up in a wave of predictions and hype about what this new technology means for humanity. Media outlets have printed article after article explaining the profound effects of AI on society. Despite all this discussion, we have forgone any real reckoning with what intelligence is and have instead hastily accepted the claim that machines could have it. As a result, we continue to talk about AI, including large language models like ChatGPT, without seriously grappling with the risks and realities. To truly invest in creating and using AI for the benefit of humanity, we must fully include people with disabilities as experts in this work – because we are somehow still less willing to embrace the intelligence and humanity of disabled people than that of a machine.
There is a long history of equating communication with intelligence, often in ways that disadvantage marginalized groups. Dialects and slang, especially when racialized, are often dismissed as “incorrect” language and taken to mean that the speaker is less intelligent and less educated. Speech differences and impediments are read as markers of intellect, or the lack of it, as when stuttering is interpreted as a sign that a person’s thoughts are not yet fully formed. Spelling errors, grammatical errors, and non-standard writing, even when they are the result of a specific disability such as dyslexia, are often seen as evidence of a less intelligent writer and used as a reason to dismiss the ideas presented. Most egregiously, people who cannot speak are routinely assumed to have no thoughts to communicate, and people who use augmentative and alternative communication (AAC) instead of speech are often treated with ridicule and suspicion.
Given this history, our willingness to accept ChatGPT’s production of writing that meets certain expectations as a sign of intelligence says much more about how we define intelligence than it does about technological progress. Intelligence is not a single human ability, and the term has been interpreted in different ways throughout history. However, the development of IQ tests in the late 19th and early 20th centuries furthered the understanding of intelligence as a measurable trait described by a numerical score on a standardized assessment. These tests, and their inventors, created a version of intelligence that was easy to measure and compare, but also skewed by racial, cultural, and socioeconomic biases – one that could be used to justify discrimination against groups of people whose scores labeled them as “flawed.”
Like IQ tests, ChatGPT and other large language models emphasize the form of intelligence over its function. These AI tools seem smart because they are effective imitations of what we expect “intelligent” communication to look like. However, parroting content that matches our ideas about how “smart” people write is not the same as actual understanding based on a deep engagement with ideas. In their attempts to produce intelligent-sounding writing, large language models routinely fabricate details and generate outright misinformation, demonstrating the risks of assuming a model is intelligent because it performs well on a single narrow task, such as producing text.
What all this means is that not only are we failing to develop AI that is truly intelligent, but we are also failing to learn from history. Time and time again, people with disabilities have been excluded, oppressed, and even eradicated, sometimes with the help of technologies that promise to ‘normalize’ them. Yet disability is an integral part of humanity. Building a more inclusive world and advancing the ability of people with disabilities to enjoy fundamental rights benefits society in ways that technology alone will never achieve. But more than that, people with disabilities are uniquely able to explain the difference between being human and being seen as human – something that is deeply relevant to AI. Much of the work needed to ensure that AI is safe and useful for all of society, and not just for the few people who have the power to shape its development and use, will require us to think critically about how and when we delegate responsibility for high-stakes decisions to automated processes, including AI systems.
To do that, we need to be realistic about how AI and other algorithmic technologies produce outputs that we would accept as evidence of thoughtful work if a human had produced them. But sometimes it also means taking a step back and reexamining all the ways we currently exclude certain people from having a say in decisions. Creating AI that is safe, ethical, and beneficial to humanity requires us to reckon with the arbitrary ways we have defined intelligence. Who better to lead the way than those who know firsthand the consequences of such definitions?
The use and regulation of AI is more than a technical problem in need of technical solutions. These tools are created and used by humans, and they reflect human values and biases. Too often, they are evidence of whose perspectives we consider well-reasoned, and of how quick we are to cede power to those whose communications are the most polished, regardless of the substance of the ideas they convey. Addressing the harm we are already seeing from AI cannot be done with software and legal codes alone. To use algorithmic technologies responsibly and mitigate the negative impacts of AI-fueled disinformation, we must address the ableism embedded in our collective assumptions about what it means to be intelligent, communicative, and human.