[ huh-loo-suh-neyt ]
verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual.
Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
The word strike had a high-profile role in the news narrative of the year, which included several prominent and lengthy labor strikes by screenwriters, actors, auto workers, healthcare professionals, service workers, and others.
Rizz was the year’s most durable—and, on Dictionary.com, most-searched—slang term. Popularized by streaming star Kai Cenat, it refers to attractiveness, charm, or skill in flirtation that allows one to easily attract romantic partners. It is thought to be taken from the middle part of the word charisma.
The evolution of woke and related terms like wokeism continues, with wokeism in particular emerging as a lightning rod and signifier of broad political opposition. We saw a massive 2,300% increase in pageviews for wokeism in 2023.
This year’s unprecedented legal activity in the context of U.S. government and politics was reflected in multiple significant search spikes, including for the terms indicted (300% increase), arraignment (198% increase), and exculpatory (15% increase).
This year’s devastating wildfires in Maui, in Canada, and in many other parts of the world were some of the latest examples of how climate change is contributing to extreme weather events and to a new potency in the terms we use to refer to them.
The terms on our Word of the Year short list reflect the role of Dictionary.com as a living resource for making sense of the terminology of complicated and rapidly changing times. These five terms represent the intersection of language with some of the year’s most significant events and trends.
This year, we saw an average 62% increase in year-over-year dictionary lookups for AI-related words, like chatbot, GPT, generative AI, LLM, and others.
According to Google search data, new all-time highs in searches for variations on AI and artificial intelligence were reached in 2023—an 89% year-over-year increase. We expect searches and dictionary lookups for AI to continue to trend.
In 2023, we saw dictionary lookups for hallucinate increase 46% over the previous year, alongside a comparable increase in the noun form hallucination.
We added this sense of hallucinate to our dictionary just this year. If this is the first time you’re learning about it, be prepared to start encountering the word—and what it refers to—with increasing frequency. Like AI itself, the word hallucinate is on an upward trajectory.
AI has already begun to fill our language—and, as a result, this dictionary—with terms that refer to its functional aspects, such as chatbot and LLM, along with new senses of words like prompt.
Hallucinate is particularly notable among the terms that AI has popularized because it refers not to an aspect of how AI functions but to one of the ways it can malfunction. In this way, it’s akin to other cautionary tech terms, like spam and virus, that are now entrenched in our language.
This is just one of the reasons that our lexicographers expect the word to stay relevant, at least for the near future.
Our choice of hallucinate as the 2023 Word of the Year represents our confident projection that AI will prove to be one of the most consequential developments of our lifetime. Data and lexicographical considerations aside, hallucinate seems fitting for a time in history in which new technologies can feel like the stuff of dreams or fiction—especially when they produce fictions of their own.
Awareness of the AI sense of hallucinate is extremely recent among non-insiders, but use of the term in the context of computer science is older than you might expect. One of its first documented uses (in noun form) comes from a 1971 research paper on training computers to accurately “read” handwriting and output it. The term, including the verb form, began to appear in the context of machine learning and AI by the 1990s.
Hallucinate ultimately derives from the Latin word ālūcinārī, meaning “to dream” or “to wander mentally.”
This evolution of the word’s meaning—with a new, figurative use growing out of its original, literal sense—is what linguists call metaphorical extension. This is extremely common, especially in terms that emerge for new technologies (the aforementioned spam and virus are also examples).
AI isn’t magic, nor is it “thinking.” The language models behind chatbots work by predicting, word after word, whichever continuation is statistically most likely given the text they were trained on, and the most likely continuation is not always the same as what’s true or factual. Hence, the tendency to “hallucinate.”
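To make that idea concrete, here is a deliberately tiny sketch in Python of the statistical principle at work: a toy bigram model that always emits whichever word most often followed the current one in its training text. Everything in it is invented for illustration, including the training snippet with its made-up sentence about Atlantis and the generate helper; real language models are vastly more sophisticated, but they share the core property that likelihood, not truth, drives the output.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then always emit the most frequent continuation.
# Nothing below checks whether a continuation is true -- only likely.
training_text = (
    "the capital of france is paris . "
    "the capital of atlantis is poseidonia . "  # a fiction in the training data
    "the capital of france is paris ."
)

follows = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def generate(word, max_words=6):
    """Greedily emit the statistically most likely next word at each step."""
    output = [word]
    for _ in range(max_words):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most likely, not most true
        output.append(word)
    return " ".join(output)

print(generate("the"))          # "the capital of france is paris ."
print(generate("atlantis", 3))  # "atlantis is paris ." -- a fluent falsehood
```

Note what happens in the second call: because “is” was most often followed by “paris” in the training text, the model confidently stitches likely fragments into the false claim “atlantis is paris,” a toy-scale analogue of a hallucination.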
Currently, AI engineers and researchers can’t simply “look under the hood” to determine why a particular instance of “hallucination” occurred; the output is the result of a long chain of complex operations that proceed from the model’s “training.” But our understanding of these workings is likely to change dramatically in the next few years.
Based on current understanding, though, many AI researchers and ethicists object to the use of hallucinate in the context of AI’s erroneous output, for a few different reasons.
Saying that AI hallucinates—as opposed to “produces output errors,” for example—can have the effect of anthropomorphizing it (ascribing human traits to it). According to some AI researchers, this can overstate AI’s capabilities while understating its limitations. Relatedly, some point out that using the word hallucinate can serve to help AI companies skirt responsibility for such errors—ignoring the fact that humans engineered the process. Some also caution that using hallucinate in this way could perpetuate mental health stigmas associated with the word.
Still, simple and straightforward alternative terms can face difficulty entering the discourse once a word—especially one as evocative as hallucinate—has taken hold.
AI itself has already begun to impact our work as a dictionary, providing both challenges and opportunities. The proliferation of AI-generated text adds new considerations to the lexicography process, including the need to determine whether online examples of a word’s use derive from machine-written sources. But it’s also very useful to observe how generative machine learning uses words, because such uses are a reflection of statistically likely (and human-written) use cases.
All of these data points underscore a story—of 2023 as the year we all began to witness, wonder at, and worry about how many aspects of life are impacted by AI.
The sudden arrival of OpenAI’s ChatGPT late last year stunned not only its tech rivals but just about everyone who tried it. Chatbot capabilities have forced educators to grapple with what it means to teach and learn in a world where essays can write themselves—often with perfect grammar but sometimes, as we all quickly learned, far-from-perfect facts. Now, AI-generated writing and its errors—often called hallucinations—have begun to reach beyond the margins of student writing, leaking into news feeds, search engine results, and even court cases.
As chatbots turned public attention toward AI like never before, we became more acutely aware of the race to harness AI’s unprecedented potential in countless areas, raising both hopes and alarms.
The monthslong strike by Hollywood screenwriters was motivated in part by concerns over AI-generated writing. Authors and artists have filed multiple copyright-focused lawsuits against AI companies. Calls to regulate AI—by independent watchdogs, the Biden administration, and others—have cited concerns including potential job loss, the human biases that can get baked into AI, and the doubts that AI-generated materials can create, particularly in the context of topics already targeted by disinformation campaigns.
Up-to-the-minute corpus data from thousands of publications reveals an 85% year-over-year increase in the use of hallucinate in digital media in 2023, reflecting a sustained interest in AI’s new prominence.
by Nick Norlen and Grant Barrett
When we look back on 2023 from whatever surreal future it forks into, we’ll remember it as the year that at least this much became clear: AI will forever change how we work, learn, create, interact with (mis)information, and think about ourselves.
December 12, 2023
Illustration by Nigel Sussman, 2023.
Contributions:
Jamie Brandon
John McCain
Brett Plemons
Kory Stamper
To determine the 2023 Word of the Year, we, the humans here at Dictionary.com, gave ourselves a prompt: Using lexicography and data science, choose a single word that best represents, at this moment, AI’s many profound ramifications for the future of language and life.
Like the output of many prompts, the result may not be what you were expecting.
The Dictionary.com Word of the Year is hallucinate.