AI: An Orwellian Future?
In George Orwell’s Nineteen Eighty-Four, heresy and unorthodoxy, as defined by the totalitarian Ingsoc regime, are the enemies of the state. Simply appearing as though you may be harbouring heretical thoughts is a crime punishable by death, let alone communicating them out loud. In Oceania, the nightmarishly dystopian setting for the novel, the totalitarian regime is working towards establishing the ultimate tool of subjugation: Newspeak. Newspeak is a language defined by grammatical simplification and limited vocabulary; its intention is to make a heretical thought literally unthinkable, ‘at least so far as thought is dependent on words’ (George Orwell, 1984). English, as we know it, would be replaced by Newspeak, and with it would go the ability to form thoughts related to personal identity, abstract concepts, and critical arguments in general.
Orwell’s experiences fighting fascists in the Spanish Civil War and his time serving as a British imperial police officer in Burma, paired with his gifts as a perspicacious social commentator, led to his fascination with language and how it may be weaponised to manipulate thought and obscure reality. Orwell’s construction of Oceania is arguably a vessel through which, among other themes of surveillance and repression, he explores the dangers of language perversion and manipulation. Orwell uses narrative praxis to bring these themes to life, without which his observations of the mechanisms of the totalitarian machine would not have penetrated modern political discourse some seven decades on.
What makes 1984 such a seminal and important novel is that, despite being published more than seventy-five years ago, it only seems to grow in relevance. In an age where technology accelerates societal change, Orwell’s words grow heavier as we lurch from one political crisis to another, each page feeling as prescient as the last. Now, with the advent of artificial intelligence and large language models (LLMs), we have perhaps reached the greatest litmus test for Orwell’s views on linguistic manipulation. Against a digital milieu already dominated by misinformation on social media and socio-algorithmic engineering, the various functions of AI converge to represent something of an inflection point for our linguistic futures. Are the ways in which we communicate bound to change in profoundly Orwellian ways? What can Newspeak teach us about the importance of language in an increasingly unpredictable world?
Before diving straight in, it’s worth first laying out how Newspeak presents itself. The core tenets of Newspeak include grammatical simplification, staccato rhythm, lexical erasure, elimination of semantic deviation, and the invention of ideological words. Without delving too deep, below are some of the most pertinent examples.
Grammatical Simplification
Any word in the language can be used as a verb, noun, adjective, or adverb, allowing ‘think’ to replace ‘thought’ and serve as both verb and noun.
Comparatives and superlatives are formed with simple suffixes and prefixes. As such, ‘good’ becomes ‘gooder’, which is followed by ‘goodest’. In the negative, ‘ungood’ takes the place of ‘bad’, ‘plusungood’ of ‘very bad’, and ‘doubleplusungood’ of ‘terrible’ or ‘worst’.
Lexical Erasure
With ‘good’, ‘gooder’, and ‘goodest’ as the format for comparatives and superlatives, a swathe of other words becomes redundant. An experience can no longer be ‘unsatisfactory’ or ‘torturous’, nor can someone be labelled ‘manipulative’ or ‘evil’.
Contractions and Staccato Rhythm
Newspeak is designed to be spoken in short, clipped words, reducing it to mere sounds which amount to simple, party-conforming statements. Words are also shortened to conceal meaning, with compound words such as ‘Minitrue’ for Ministry of Truth and ‘Ingsoc’ for English Socialism.
So, how is our language changing and how may it change in the future?
New Words and Algospeak
It comes as no surprise that most new words, or neologisms as they are referred to in linguistic circles, that enter our vocabulary begin online. What’s interesting is that a significant proportion of these appear to be short, monosyllabic words – often abbreviations or portmanteaus. To name just a few, these include terms such as ‘rizz’, ‘cap’, ‘cope’, ‘aura’, and ‘vibe’. While the latter two were already commonly used words, their use has dramatically increased in recent years.
Statements like ‘the vibes are off’ are perfectly functional and can be useful for expressing vague emotions that are hard to articulate. However, these words are often substitutes for more accurate and nuanced descriptors. In simplifying our language, we dilute our capacity for introspection. If someone has aura, what is it that actually gives them such an aura? And if someone has rizz, are they charming? Quick? Witty? Intelligent? When we shorten a word, it tends to lose some of its meaning in a way not unlike how Newspeak operates.
Contributing a great deal to this simplified lexicon is ‘algospeak’, a linguistic phenomenon that has arisen in response to online algorithms which allegedly filter out and censor content on social media platforms like TikTok. Algorithms serve an important function in filtering out content that is potentially sensitive or harmful for younger users, but this leads to words like ‘suicide’, ‘sex’, and ‘rape’ being censored even when the content is educational or harmless. To get around this censorship, users get creative with re-spellings of such words. Notable examples include ‘seggs’ for ‘sex’, ‘SA’ for ‘sexual assault’, and ‘unalive’ for ‘kill’ or ‘suicide’. The latter term bears an uncanny resemblance to Newspeak, in which the equivalents would be ‘unperson’ or ‘vaporised’. Algospeak has now become a rather vast euphemistic vocabulary consisting of misspellings and emojis.
On one hand, algospeak represents a form of resistance to censorship. It could be viewed as a kind of linguistic subterfuge that demonstrates the power of language to adapt and take new forms to convey messages undetected by online surveillance. Some may even argue that it allows young people to consume content with less emotionally triggering language. But what we are observing is our language being controlled in real time. When discussions around sexual assault, war, and suicide are cloaked in euphemisms, we risk desensitising audiences to the gravity of these topics and narrowing and simplifying the scope of online discourse.
Of course, pushing users into adopting algospeak naturally brings with it the potential for manipulation. For instance, it may be used as a tool to enforce puritanical ideologies or maintain conservative orthodoxies. In his article, ‘Language, Power, and Ideology: Orwell’s Newspeak’, Riaz Laghari explains that in some countries, ‘algorithmic curation functions as decentralized Newspeak: meanings are constrained through visibility rather than vocabulary’.
Selective algorithmic filtering and unintentional self-censorship on important issues amount to a hermeneutical injustice, whereby a gap or absence of shared knowledge precludes an individual from coherently conceptualising or articulating their suffering. The first widely recognised instance of this occurred in the 1970s, when a woman was sexually harassed by a male colleague but, since there was no term for sexual harassment, was unable to receive support, legally or otherwise. Algospeak, and algorithms more broadly, represent a new-age hermeneutical injustice, in the sense that they could be preventing users from receiving information on topics that might allow them to protect or liberate themselves.
Writing Prompts and Staccato Structures
A diminishing vocabulary represents a form of lexical erasure, but our lexicon isn’t the only thing that’s changing – there are changes happening on a grammatical and syntactical level, too.
Anyone who has written an email recently will be familiar with prompts from Outlook and plugins like Grammarly. These AI writing assistants tend to optimise for brevity and succinct language, and in an age where attention is our greatest currency, reducing redundant words and getting to the point can be useful. Yet the marginal gains achieved from brevity bring with them the danger of oversimplifying our language. What might feel like a useful, or at worst harmless, suggestion looks different when viewed through a broader lens, where mass use of AI writing assistants could gradually strip context and nuance from our writing. After all, part of what makes us human is our ability to imbue our language with extra flavour, context, and emotion. The additional words we add to sentences are crucial in achieving this, whether Outlook views them as redundant or not.
Optimising for shortened attention spans has also caused much of our written content to be structured in a staccato style. Short paragraphs and a proliferation of headings are commonplace on social media, with skimmability and readability a top priority for marketers and LLMs alike. Vertical posts composed of single-sentence paragraphs are the calling card of LinkedIn users desperate not to lose their audience’s attention.
In stripping down our language to a more functional level, we are stripping down our ability to convey meaning, emotion, and context. And in simplifying our language, we are also simplifying the way we think.
Atrophying Critical Thinking Skills
Writing is inextricably linked to thinking. When we try to articulate ourselves effectively, we are forced to deliberate over our thoughts and feelings, consider the veracity and impact of our arguments and, if we are not sufficiently prepared to communicate what we want to say, conduct additional research to better inform ourselves.
But why do all that when the easy option is right there? Enter, ChatGPT.
A study by MIT suggests that ChatGPT is harming students’ critical thinking skills. The researchers found that the group using an LLM, when compared with groups that used only Google or relied solely on their own brains, ‘consistently underperformed at neural, linguistic, and behavioural levels.’ The reason is simple: when we ask ChatGPT to write something for us, we’re not doing it ourselves, and therefore not reaping the cognitive benefits that come with it. The increased use of ChatGPT (other LLMs are available) by students to write essays has raised serious questions in academic settings—a separate debate entirely—but perhaps more alarming is its use in professional settings, where the use of AI doesn’t just lack repercussions but is actively celebrated as a tool to replace human endeavour.
‘Summarise this for me.’
ChatGPT doesn’t just take away the onus for cognition in terms of output; it hugely simplifies the process of learning and gathering information as well. AI summaries often do a great job of taking existing work and collating the most relevant points into bullet points or a short paragraph. It saves us time and energy to have AI do our research for us rather than trudge our way through multiple websites.
But while using AI to quickly collate a list of the best seafood restaurants nearby can hardly be viewed as a bad thing (although perhaps there’s an irony in the fact that the environmental cost of AI search includes the pollution of marine habitats), it is a different issue when it comes to using it as an educational tool. Since most people have a sense of self-pride, ‘explain this to me like I’m five’ is not something you’d usually ask another person. But since there are no social consequences when conversing with Claude or ChatGPT (and our brains are hardwired to choose the path of least resistance), we often revert to being spoon-fed information. In doing so, we unwittingly allow our critical thinking skills to deteriorate as this rapidly advancing technology feeds off the decaying remains of our enfeebled Wall-E-esque frontal lobes.
Then, to bring it full circle, there’s the issue of the feedback loop whereby writers, rather than starting from nothing, are asked to edit AI-written content and, in doing so, subconsciously adopt its linguistic mannerisms and become more machine-like themselves. This iterative process may lead to a literary landscape dominated by homogenised writing styles.
The overall outcome: a world dominated by individuals whose cognitive and linguistic abilities fall short of those of the generations before them.
Disinformation
Disinformation was the modus operandi in 1984. In 2026, AI’s capacity for exponential output makes it an unavoidable part of the social media experience. This, paired with our quasi-oedipal relationship with technology and the decline in cognition, creates fertile ground for manipulation.
The term ‘fake news’ is often deployed in a manner that resembles Newspeak’s ‘blackwhite’: the ability to believe that black is white when asked to do so. This is an act of doublethink, the ability to hold contrasting beliefs simultaneously, such as ‘War is Peace’ and ‘Freedom is Slavery’. Similarly, fake news isn’t just a term for lies or disinformation; it’s a phrase infused with political strategy, designed to undermine the truth and discredit the source of the information regardless of how irrefutable it may seem. Fake news resembles an algorithmic filtering instruction, ordering the processing system (in this instance, someone’s mind) to simply ignore certain information in unequivocal terms; there is no deliberation, more a reflex than anything. To simply brandish something as fake news is far more effective than engaging in balanced debate, where one would be required to use language that provokes reflection and consideration. Fake news and blackwhite are dangerous in part because simple language appeals to our most primitive states, preying on fear, shame, and disgust. When we strip our language to its most basic state, we appeal to our least civilised qualities.
Then there’s the case of Elon Musk, who trained his AI chatbot, Grok, to speak effusively about him. Grok has been known to tell users that Elon Musk is ‘strikingly handsome’. Perhaps even his most ardent supporters will have a hard time believing this, but it shows that these tools can be manipulated in favour of an agenda. Other instances include Russian and Chinese chatbots found to be peddling state-sponsored propaganda.
Disinformation is not unique to the age of social media. But the scale and speed of its dissemination by an incalculable number of bots, influencers, and social media accounts mean we are now in an age of semantic warfare. There are two sides to this argument, though, because if language can be weaponised, it can also be used as an act of resistance.
Linguistic Relativity
‘He who controls the past controls the future. He who controls the present controls the past.’ (1984)
The Sapir-Whorf hypothesis, or linguistic relativity, asserts that words aren’t mere conduits for our thoughts and feelings; they actively influence our perception of reality. In 1984, Newspeak ensures orthodoxy is the only possible outcome by eliminating the possibility of conceptualising alternative realities. In essence, when the building blocks of one’s language can only stack up into pre-arranged formations with a limited scope of possibilities, orthodoxy is the natural endpoint of any pondering or pontification. Whorf’s assessment that ‘we dissect nature along lines laid down by our native languages’ echoes Orwell’s Newspeak.
An example of how our native language shapes our perceptions might be found in ‘untranslatable’ words. In Japanese, wabi-sabi (侘寂) refers to embracing the beauty of imperfection, impermanence, and the natural transience and cyclical nature of life. In Scots dialect, the word ‘dreich’ doesn’t just describe the weather as overcast or grey but conveys a very particular kind of environment which saps your energy and colours your outlook. Both terms convey meaning that non-natives would find hard to conceptualise. Both also reflect something about their culture: Shinto and Confucianist influences in Japan, and the dry, self-deprecating nature of Scottish humour.
Modern philosophers and philologists tend to favour a weaker version of linguistic relativity, in which language contributes to shaping thought rather than determining it. As Riaz Laghari states, ‘The Principles of Newspeak can be read as both a linguistic thought experiment and a proto-discursive critique’ rather than a declaration that language is the key to changing behaviour. And as Eve Clark, a professor of linguistics at Stanford University, puts it, ‘one of the things that [language] does is allow you to put that information about anything belonging to that category and retrieve it easily, and therefore also talk about it to other people’. Clark’s assertion echoes a sentiment shared by Orwell, the proponents of linguistic relativity, and others: that language takes subconscious, abstract thoughts and makes them intelligible agents of change. After all, the laws and regulations by which our society abides are strictly defined by language. A slight deviation in terminology or grammar can significantly impact the way the law is interpreted and thus enforced. The parameters we have set on our society are scaffolded by the language we use.
As such, artificial intelligence represents a threat to our epistemic agency. As our vocabulary diminishes, our linguistic skills atrophy, and the truth of our language is obfuscated, undermined, and simplified, AI is fundamentally changing how information and knowledge are produced, interpreted, and perceived. If language structures, and indeed constitutes, our epistemological framework, then changing language has profound implications for how our beliefs are formed.
Conclusion
In 1984, subjugation through language manipulation is viewed as the final step in the consolidation of power. For if one creates a language impenetrable to the creation of dissenting or unorthodox thoughts, the individual is no longer a threat but a vassal capable only of making calculations pre-determined by the Party. Orwell’s dystopian depictions of a totalitarian superstate are thankfully far from being realised, but the reason we return to the themes of 1984 is because they remain as relevant as ever.
Yet, if you were to ask a linguist whether Newspeak could be legitimately implemented in our world, the straight answer would simply be ‘no’. That’s because it defies one of the enduring truths about language: it is gloriously malleable. Languages constantly evolve and take new forms. Language can never be fully constrained because human beings are creative and will always find new ways of changing the meaning of things. Even under the most oppressive regimes, language has remained a tool of resistance as well as oppression. The song ‘Wade in the Water’ helped enslaved African Americans escape to freedom by including hidden instructions to enter rivers and streams to avoid dogs and slave catchers.
The temptation from some has been to use this to dismiss Orwell’s thought experiment as naïve. But to do so would be to miss the point. 1984 is not a cautionary tale about a specific, predicted future but about the very real, demonstrable power of language to shape thought and reality in the consolidation of power. Indeed, Newspeak is not fully realised in the book either: the final, perfected version of the dictionary is set to arrive in the yet-to-be-released eleventh edition.
The internet, now vaster than ever with the introduction of AI, represents arguably the greatest change to our epistemological framework in our existence as a species. If reality is that which we perceive it to be, and knowledge and language the building blocks that shape perception, then those who wield the greatest control over these frameworks hold the power to shape reality in a direction that is very favourable to them.
It must be acknowledged at this point that, like all technology, AI isn’t inherently good or bad; it’s simply a tool. The responsibility lies with those who wield such powerful technology, and it is incumbent on its proprietors to use it for the betterment of humanity. Its potential for good or bad is also a responsibility for regulators at a government level. Indeed, some safeguards are already in place: AI is trained using filters to exclude hate speech, restrict potentially harmful content, and omit private and confidential information. It might even come as a surprise to hear that OpenAI began as a not-for-profit organisation intended to democratise knowledge curation, a cause we could all get behind.
These noble intentions didn’t last long, though. OpenAI is now an unequivocally for-profit organisation. Its primary goal is to maximise profits (although it has yet to do so), and just a couple of weeks ago Sam Altman stated at BlackRock’s 2026 Infrastructure Summit in Washington that he sees a future ‘where intelligence is a utility, like electricity or water, and people buy it from us on a metre’. Maybe another well-meaning entrepreneur will come along and get us back on track, but at present the future of machine learning is guided by the unquenchable thirst for profit of shareholders suckling at the teat of one of the market’s biggest cash cows.