Major digital ethical issues facing the world today

Unearth the ethical complexities surrounding technology and AI, from inappropriate AI behaviour to technology's impact on human relationships.

With digital technology woven so intricately into our daily lives, it’s easy to forget just how much data we generate and share about ourselves. Every time we click on a news article, browse an online store or engage with a social media post, organisations build profiles of us based on our interests and preferences.

Do ethical issues surround technology itself, or misplaced faith in it? 

It’s important to remember what technology is in the simplest of terms: the application of scientific knowledge to achieve a specific goal. Consequently, technology itself doesn’t possess moral or ethical qualities – the ethical issues arise from the role that human beings have in creating, training and using it.

In fact, many of them stem from us wrongly attributing fundamentally human qualities to AI – like intent, empathy, emotion, and consciousness. Many others stem from underestimating what can happen when technology is developed without being kept strictly in check. Often, the consequences are unintended but nonetheless dire.

Major ethical issues at a glance 

When algorithms are trained, is the input data screened and moderated?

In March 2016, Microsoft launched Tay, an AI chatbot that would “get smarter” as more users interacted with it. Within a day, it had to be taken offline because it was tweeting controversial, racist, and sexist content.

Microsoft promptly released a written apology on its blog, and it’s safe to assume that Tay’s developers didn’t design the chatbot to spew hateful vitriol online. Likewise, the creators of ChatGPT didn’t intend for it to generate bomb-making instructions, write more convincing scam content, or help students submit heavily plagiarised assignments.

When a tool is designed to learn autonomously, with minimal human oversight, there is always a risk of it being flooded with potentially dangerous data. This content may already be present online, or web users may deliberately inundate the tool with inappropriate content in real time. One of the supposed selling points of AI is its autonomy and the speed at which it learns, but as the examples above show, this can come at a heavy cost.

Many AI creators attempt to minimise risk by employing moderators to flag and label inappropriate content. However, this opens a whole new set of ethical concerns.

This moderation often involves workers having to trawl through disturbing content from the very darkest corners of the web – and even with moderation processes in place, dangerous and illegal content will inevitably slip through the net. The more independent a tool’s learning methods are, the higher the risk of this content reaching the output stage. 

Read a Q&A with Dr Elke Schwarz from Queen Mary University of London on the impact of ChatGPT's language model:

Take me to the interview 🡪

How can developers go about defining and limiting unethical prompts? 

Another selling point of AI is how limitless it is in terms of the prompts that users can enter, and how instantaneous the results are.

Want to see what the Alien franchise would look like as a musical production? Or what Freddie Mercury would look like if he were still alive today? How about a children’s story with an alligator as a central character?

All of this content can be created with a single prompt and the click of a button – and for most people, it’s harmless entertainment.

However, where does one draw the line between harmless experimentation and deliberately fuelling misinformation? It’s just as easy to generate images of world leaders taking bribes from corrupt figures. It’s just as easy to use deepfake technology and edit celebrities’ faces onto pornographic images.

By extension, users can create fake revenge porn. There’s even a case of ChatGPT generating a false sexual harassment accusation against a high-profile US lawyer. It’s interesting to see how the media tackles these cases, using words like ‘lying’ and ‘making up’, as if an algorithm or software can have that intent. 

Generative AI technologies cannot reliably provide accurate information – they simply process massive volumes of human-created content, identify patterns within it, and learn to mimic the content fed into them.
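
Purely as an illustration of that point, here is a deliberately tiny, hypothetical sketch – a bigram word model in Python, nothing like the scale or sophistication of any real product – that does nothing except mimic the word patterns in the text it was fed:

```python
# A toy bigram "model": it counts which word follows which in its training
# text and then samples from those counts. It has no concept of truth or
# intent, only of patterns - so it can fluently recombine its inputs into
# statements nobody ever made.
import random
from collections import defaultdict

training_text = (
    "the lawyer was cleared of all charges . "
    "the official was accused of misconduct . "
    "the report was cleared for release ."
).split()

# The model's entire "knowledge": which words were seen following which.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def generate(start: str, max_words: int = 8) -> str:
    """Produce text by repeatedly picking a word seen after the current one."""
    output = [start]
    for _ in range(max_words):
        options = next_words.get(output[-1])
        if not options:
            break
        output.append(random.choice(options))
    return " ".join(output)

print(generate("the"))
# Possible output: "the lawyer was accused of misconduct ." -
# fluent, pattern-based, and entirely unfounded.
```

Run it a few times and it will occasionally produce fluent sentences that no one ever wrote and that are simply untrue; scaled up enormously, that same pattern-mimicking principle is why fluent output is never a guarantee of accurate output.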

Most AI platforms have systems for refusing prompts they deem unethical. However, almost as quickly as these safeguards are put in place, the internet fills with guides for bypassing them.
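
To see why rewording alone can defeat simple restrictions, consider a purely hypothetical, deliberately naive safeguard (not how any named platform actually works) that refuses prompts by matching them against a blocklist:

```python
# A hypothetical, oversimplified safeguard: refuse any prompt that contains
# a blocked phrase verbatim. The phrases below are made up for illustration.
BLOCKED_PHRASES = ["fake photo of a politician", "remove the watermark"]

def is_refused(prompt: str) -> bool:
    """Return True if the prompt contains a blocked phrase verbatim."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_refused("Generate a fake photo of a politician taking a bribe"))
# True - caught by the blocklist

print(is_refused("Generate a realistic 'satirical' image of a world leader "
                 "accepting an envelope of cash"))
# False - the same request, reworded, slips straight through
```

Real platforms use far more sophisticated classifiers than this, but the underlying cat-and-mouse dynamic – users rephrasing requests until they slip past whatever the safeguard checks for – looks much the same.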

How do we avoid placing the wrong kind of trust in digital technology? 

For many sectors, AI has proven to be a massive asset when it comes to decision-making and serving a very specific purpose. It is arguably instrumental to the success of companies like Netflix, Spotify and Amazon, thanks to its ability to identify patterns in user behaviour and make tailored recommendations based on viewing, listening and purchase history.

If the algorithm gets things wrong, the stakes are low. Users can generally hide suggestions that don’t interest them, or flag something as irrelevant. The technology will then use this information to refine its learning and outputs. 

However, in many instances, the stakes are much higher. Military forces are already deploying autonomous machines that identify and neutralise hostile threats. Many of these devices are unmanned.

Advocates for their usage argue that they’re able to analyse surroundings quickly and make split-second decisions in combat. But when we leave robots to make decisions on which lives to take, what does that say about how much we value human life?  

With guerrilla fighters wearing the same clothing as ordinary civilians, the risk of error on the part of AI is much higher – and since a machine doesn’t have the human qualities of empathy, conscience and a nuanced understanding of situations, it will simply go ahead and kill. The stakes are quite literally a matter of life and death.

While most people have no trouble distinguishing AI technology from human interaction, the trend of people attempting to substitute machines for human relationships has become disturbing.

In Belgium, a man began using Eliza – an AI chatbot released by Chai Research – as a confidant. It subsequently encouraged him to kill himself, and he did indeed take his own life. AI girlfriends are becoming increasingly popular in Japan, and many who turn to them cite an inability to find fulfilment in human relationships.

Even more alarming was the trend of users downloading Replika, an AI companion bot, behaving abusively towards it, and sharing their interactions online.

While some turn to these technologies to compensate for a lack of human interaction, others use them as emotional punching bags. What’s clear in every instance, however, is that these technologies hardly ever improve the situation.

In fact, there’s a distinct pattern of users becoming emotionally dependent on these technologies, feeling greater social isolation, and developing unrealistic expectations of human relationships. 

Once again, technology itself is not capable of intending to inflict emotional damage on an individual – but with these ethical concerns becoming increasingly prevalent, where should the accountability lie?  

Some raise moral objections to this technology being developed in the first place, whereas others argue that the fault lies in how it is marketed. Replika is marketed as “the AI companion who cares […] always here to listen and talk”. Anima is marketed similarly, as a “virtual AI friend” – and with these claims in mind, it’s easy to see how the lines between human and machine relationships are becoming increasingly blurred.

Others would argue that, out of respect for users’ personal choices, we should trust them to take responsibility for their own wellbeing rather than holding developers accountable. This dilemma has been the focus of lively discussion for decades.

Where does a Global Ethics course fit into all this? 

The topics above barely scratch the surface of what global ethics tackles, and there are countless further avenues to explore. What we’ve covered in this article may seem pessimistic, but it’s important to note that AI can absolutely have a positive impact on the world when applied mindfully.

Tools like Woebot provide easy access to cognitive behavioural therapy, and there is a lot of talk about AI tools successfully combatting loneliness – but a heightened awareness of these ethical issues is essential for the proper use and design of these technologies. 

It can be overwhelming to know where to start when it comes to familiarising yourself with a subject so broad – and this is where our Global Ethics and Digital Technologies short course comes in.

Over three weeks, you’ll dive deep into the moral and ethical implications of a digital world. Each week covers a new module, which you'll explore through fascinating reading material, clippings and videos. You'll also gain unique insights from expert tutors specialising in politics, international relations and global ethics.

This short course is available online – so you can access your study materials from any internet-connected device and fit your learning around your own schedule. It’s also free, so cost needn’t be a barrier to developing your knowledge.

Expand your horizons and broaden your understanding of the world around you. By equipping yourself with this knowledge, you can influence the way that technology is developed and applied in the future – ultimately contributing to a fairer, healthier world. 

Learn more about our free short course in Global Ethics and Digital Technologies:

Show me short course details 🡪

