In this post, experts from Queen Mary, University of London discuss the impact of OpenAI's ChatGPT language model and the ethical implications of AI. They highlight concerns about privacy, intellectual property, misinformation, and power hierarchies.
This post is by Diego de Merich, Senior Lecturer in Politics and International Relations at Queen Mary, University of London.
Since November 2022, when OpenAI’s most advanced large language model (LLM), ChatGPT, was released to the wider public, academia (like most other sectors of society) has been gripped by debates about artificial intelligence and what it means for the future of life and work as we know it.
From our more mundane considerations (how it might impact student assessments, classroom content, or our own research) to the more cutting-edge applications (as decision-assistance in military operations), we find ourselves at a pivotal moment in the advancement of these technologies.
I recently had a chance to sit down with our expert on all things socio-technological, SPIR’s own Dr Elke Schwarz. Elke is currently on research leave at the Käte Hamburger Center for Apocalyptic and Post-Apocalyptic Studies (CAPAS) where she has been contemplating this topic and others.
On one level, she thinks that the debates around AI have been overblown or badly focused. Yet on another level, she does think there are aspects of these new technologies which should be considered more carefully, politically and ethically.
As a political and moral theorist, Elke is most interested in the ways that we deliberate about the ethical implications of emergent technologies. As a scholar of IR, she has written widely on lethal autonomous weapons systems and the ways in which we risk losing fundamental human agency in the pursuit of efficiency, or in order to avoid having to take challenging decisions.
I spoke with Elke about artificial intelligence, and about the social and political cues that she is keeping an eye out for at this time:
We often talk about technological advances as a ‘step-change’. Why has generative AI been described as significantly more than a mere ‘next step’?
Generative AI is, for many, a game changer because it has the capacity to ‘generate’ new synthetic output from a vast amount of input data – whether that is images, videos or texts. Traditional, non-generative AI is used for descriptive, predictive or prescriptive tasks.
This is still true for generative AI, but it also produces output that generates new material from the material it is trained on. The fact that it is trained on enormous amounts of data, usually scraped from internet sources, is what makes this generative output possible.
So, the scale of data is a step change, and the generation of new ‘original’ output is a step change. I deliberately put ‘original’ in quotation marks here, as the process is one of recombination and common pattern prediction, not necessarily one of producing truly original material. Although one might suggest that this is a contested philosophical question.
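To make ‘recombination and common pattern prediction’ concrete, here is a minimal, purely illustrative sketch in Python (the tiny corpus and all values are invented, and real LLMs are vastly more sophisticated): a toy bigram model that emits text it has never seen verbatim simply by chaining together word transitions observed in its training data.

```python
import random
from collections import defaultdict

# A tiny, invented "training corpus" – purely for illustration.
corpus = (
    "the model predicts the next word . "
    "the model recombines patterns from its training data . "
    "patterns from the data shape the next word ."
).split()

# Learn which words follow which (a bigram table: word -> observed next words).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generate" new text by repeatedly picking a plausible next word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
# The result need not appear anywhere in the corpus verbatim, yet every single
# transition in it was lifted from the training text – recombination, not magic.
```

The ‘new’ sentence is original only in the weak sense that its exact sequence may never have been written down before; every step in it is a pattern borrowed from the training material.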
It is also a step change because we have little choice but to anthropomorphise the output as something human-like. The AI thus becomes ‘like us’ in that sense, bestowed with that most human of attributes, ‘creativity’, which has been cause for much confusion, consternation and the conviction that this is the start of artificial consciousness.
As a scholar of politics and ethics, what are you most interested in about these early stages of the roll-out of AI tools?
Well, we are now well into the fifth decade of gradual AI roll-out, but you are right: these are the relatively early days of a much more ubiquitous presence of AI in all our lives, and the integration of AI into daily life has accelerated significantly in the past decade.
As a political theorist, I am interested in two dimensions: first, how does this technology amplify, cement, or produce hierarchies of power? And second, how can we best think about the ethics of such technologies, and what responsibilities arise?
And there are many issues that one should pay attention to. Generative AI, for example, is rife with ethical problems at every stage of the product cycle (and we mustn’t forget that it IS a product, despite all allusions to ChatGPT being something more than software).
Consider, for example, the input stage. Ethical issues here relate to the source of the data. Where has it been taken from? Have the people who produced the data given their consent? It is likely that LLMs and other large-scale generative AI systems scrape a lot of data – texts, images, video – from the internet. Often the artists whose images are used to create Midjourney images have not given consent to the use of their work.
Similarly, do I know if my published work has been used to train and optimise the output of ChatGPT? And do I know for what purpose it might be used and monetised? It is a question of privacy and of intellectual property.
A second issue arising at the input stage is the question of who does the shadow labour of labelling the images and texts that help train the GPT or image models, and under what conditions. Here we already know that OpenAI, for example, uses Kenyan workers who are paid less than US$2 an hour to identify which text content is toxic and needs to be labelled appropriately.
As Billy Perrigo at TIME magazine shows, this included texts pulled from the darkest recesses of the internet, which these Kenyan workers had to process and label. For a company that is valued in the billions, this is a shocking and deeply unethical practice.
Then we come to the output stage. ChatGPT is known to just make stuff up. Remember, it is a statistical pattern-recognition tool, so it does not have anything close to understanding. But it sounds incredibly plausible in the stories it produces. And if these stories are about humans, they can be very harmful.
ChatGPT has, for example, made up a sexual harassment scandal which named a particular, real-world law professor as the accused. This is completely false: there was never any accusation against this professor.
Generative AI has also been implicated in driving a man in Belgium to suicide. The man had reportedly developed a relationship with a GPT chatbot – ELIZA – over several weeks. The chatbot is reported to have expressed jealousy about the man’s wife and children and suggested to the already anxious man that “we will live together in paradise”. Shortly thereafter the man took his own life.
And then, of course, we have possible malicious outputs, whereby a bad actor with ill intent could, technically, misappropriate GPT to find ways to subvert security practices or to produce biological or chemical weapons at scale.
An illustration of such an AI use – which really does serve as a warning here – comes from the study of harmful biological agents. Or it could be misused for the dissemination of misinformation at scale – perhaps the most pressing danger right now.
So these are just the most egregious examples of actual harms identified. There are so many more which we cannot yet fully foresee, but which we can assume, simply based on the logic of how these systems work and how we interact with them.
Beyond that are the second- and third-order effects that relate to putting artists and creative writers, lawyers and many other professionals out of business; the effects of swamping the internet with mediocre and homogenising output; the possible subversion of our relationship with information finding, with trust and ultimately with truth; and the possible atrophy in our interpersonal and communication skills, the effects of which are not yet clear.
Or simply the effects of building our human world around digital logics (such as songs becoming shorter to accommodate the revenue model of Spotify and the attention-span produced by TikTok).
In 1954, Roald Dahl wrote a very beautiful, prescient story – The Great Automatic Grammatizator – about a world in which automated machines produce most texts, which thematised some of these effects. The entire text has subtle barbs of critique, but that last line really is killer.
Within a few short months, ChatGPT became the single fastest-growing application in human history. What sorts of applications of these technologies do you foresee in your own life and work? Have you used any of these tools, and what is your impression of their promise?
At this stage, I don’t foresee using ChatGPT or similar tools for my work. It is simply too unreliable to be useful for any purpose, and I really like honing my skills to express my ideas and to comprehend complex texts myself, or at least to try to.
Everyone reads and understands texts differently, for example, and that, in turn, can produce new and unexpected outcomes in conversation with others. This is what Hannah Arendt calls natality, and it is a vital dimension for the exercise of politics proper.
I have played around with ChatGPT, with various prompts about various scholarly texts, and the output was at times extremely trivial and superficial and at times outright wrong. I asked, for example, for an academic article on military AI ethics. ChatGPT suggested an article with a very plausible title, written by two plausible authors – both exist and both write in the space of military AI ethics. It gave an abstract and a link to a Springer website for the article.
So far so good … except, it was all entirely made up. The article does not exist, the authors have never co-authored anything together, the abstract was also bs and the link was to an entirely different, completely unrelated article on a Springer Verlag page.
In the technology world this is called ‘hallucinating’, although I am not sure the term is appropriate. ChatGPT is a probabilistic engine designed to produce words and sentences based on a prediction as to what is most plausible. So, LLMs like ChatGPT are like extremely fancy autofill engines that have been trained on an extraordinarily large amount of text – typically scraped from internet sources. So it just does what it is told … it makes pattern associations, but it cannot understand the context. So as a research tool or text-synopsis engine, for my purposes, it is not useful at all.
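To illustrate the ‘fancy autofill’ point, here is a minimal sketch in Python with invented frequency counts (nothing here reflects any real model or corpus): candidate next words are ranked purely by how often they followed the prompt in the training text, so a frequent-but-false continuation can beat a true one – the mechanism has no notion of facts.

```python
# Invented counts, purely illustrative: a next-word predictor optimises for
# statistical plausibility, not for truth.
continuation_counts = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 540,     # common in casual text, but factually wrong
        "canberra": 460,   # correct, yet (in this toy corpus) less frequent
        "beautiful": 120,
    },
}

def autofill(prompt):
    """Return the statistically most plausible next word for a known prompt."""
    candidates = continuation_counts[prompt]
    total = sum(candidates.values())
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    for word, count in ranked:
        print(f"{word!r}: p = {count / total:.2f}")
    return ranked[0][0]  # greedy choice: the most frequent continuation wins

print(autofill(("the", "capital", "of", "australia", "is")))
# Prints 'sydney' – fluent, plausible and wrong. Real LLMs work over subword
# tokens with far richer statistics, but the objective is still plausibility.
```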
I would, at this point, also caution all students readily availing themselves of this technology – you are essentially shooting yourself in the foot by not taking seriously that learning how to write is a crucial component of the academic journey. Also, you run the risk of having to fact-check the text, which might take a lot longer than just writing the assignment yourself.
And what keeps you up at night or worries you about the technologies?
What really irks me about the present-day hype around generative AI is two things: one, the urgency with which it is proclaimed to change human life, and with that the mandate that we must all get on board – indeed, if my LinkedIn feed is to be believed, if we don’t already use ChatGPT for everything we will become fossils within the decade.
This ‘inevitability narrative’ is exhausting. There is nothing inevitable about this technology, nor is it in any way clear how GPT, Midjourney, Bard, Bing, or any AI for that matter, is to better humanity, as is often claimed.
In fact, although it is often hailed as the handmaid of a transcendence to a better humanity, as something that can bring about enormous good, this good is rarely, if ever, specified. What is this ‘good’? ‘Good’ for whom?
The answer I get to this question usually revolves around two things. The first is medical breakthroughs – and here I concede that enormous gains could be made if the political and social will is there to allow these breakthroughs to translate into better healthcare.
But the answer most often given is that we will be able to be so much more productive, do more things faster. This usually means output. A productivity or profit motive cannot serve as a convincing ethical motivation.
In a recent New Yorker article, the sci-fi writer Ted Chiang phrased the moral question thus: “if you imagine AI as a semi-autonomous software programme that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse?”
The second issue that I am concerned with is what AI does to our relationship with other humans. My sense is that it strongly undermines the fabric necessary to relate to and to place trust in one another. It fosters a culture of suspicion and a culture of distance. It fosters homogeneity rather than difference. It is likely to kill our sense of openness and wonder about and toward other humans, which is already strained.
It also strikes me that we are de-skilling ourselves in ways that are crucial for human interaction and relation – specifically in reading texts, understanding texts and being able to communicate our thoughts, simple and complex, to others in order to be heard and understood.
The more we leave these tasks to technologies, the less we will be able to express ourselves properly. It is a form of severely diminished agency. I worry whether we are flexible enough to learn how to adjust to this in a non-conflictual way. My next book will be about this very question: how can we be political and ethical in this kind of environment?
And of course, what keeps me up at night and what I consistently push against is the use of AI for lethal purposes. The normalisation of systematic kill-decisions with AI strikes me (and many others) as a dystopian nightmare to be avoided at all costs.
What do you make of the more alarmist suggestions that these technologies could either ‘take over’ or ‘destroy humanity’? Realistic possibility or the stuff of 1980s sci-fi movies?
The Terminator scenario is, in my view, quite an unhelpful spectre. There is a plausible danger that an AI system might be used in such a way that it triggers the use of nuclear weapons and that this might lead to wide-scale death and destruction.
I think it is also plausible that AI might be used by nefarious actors to produce toxic compounds which are used against certain communities with such widespread effects that it jeopardises a significant proportion of humanity.
There are a number of scenarios in which AI is used or misused in a way that could spell mass violence or mass harm. As the STS scholar Sheila Jasanoff says, “the potential for harm lodges not solely in the inanimate components of technological systems but in the myriad of ways that people interact with objects”.
What is really worrying is that these dystopian narratives shift attention away from the real and present harms AI already causes to certain communities and individuals. The harms that stem from biases – whether these are representational harms (i.e. what is visible and therefore matters) or allocative harms (i.e. who gets what) – are already affecting people’s lives, whether that is in the use of AI for welfare, visa applications, job applications, credit applications, medicine or warfare.
This is just the tip of the broader AI iceberg when we consider the enormous environmental impact produced simply by training a large language model – the amount of energy, water and resources needed to do so, all of which have heavy material effects. And then there is the impact AI systems already have on our relationships with others, our ability to trust information, our ability to relate.
So, while I think that the existential risk discussion is one that should take place, it is not one that should take sole centre stage and relegate all other, real harms and existential threats to certain communities to the margins.
My real hope is that AI will turn out to be a very hot, but fleeting, fire. It will leave enormous destruction in its wake, but it is not likely to be sustainable as the only pathway for the ostensible betterment of human life.
Elke was recently awarded a Leverhulme Research Fellowship, and her new research will focus on the politics of Apocalyptic AI.
Wondering about the myriad ways that AI and similar digital technologies might impact your life (or about their wider impact on politics and society)? Elke and a number of colleagues in Queen Mary's School of Politics and International Relations have recently contributed to a free, three-week short course on this very topic.
You can explore a number of fascinating clips, readings, activities and thought experiments relating to the question ‘Can Machines be Good or Evil?’. You’ll have a chance to apply the theories and concepts of global ethics to your thinking about one of the most significant technological moments in human history: