Artificial intelligence systems must be ethically evaluated

Image of coding screen alongside the ethics symbol (Saisha Agarwal)

It’s the middle of the night, and you’re trying to finish your schoolwork. Your fingers pound the keys of your computer while your exhausted mind searches for answers. A year ago, you would have had to scroll endlessly through millions of Google results. Now, with the rapid growth of artificial intelligence (AI), you have access to intelligent systems that can fulfill this task for you. As AI expands its presence in our lives, it is important to assess its ethicality in today’s society.

With the arrival of popular AI systems such as ChatGPT and Hotpot AI, public interest in and usage of AI have increased significantly. New AI applications appear almost daily. For example, only four months after the AI research laboratory OpenAI launched its interactive chatbot ChatGPT, it unveiled GPT-4, its most capable large language model to date, which accepts both text and image inputs.

Graph of AI revenue from global enterprise companies, in millions of dollars (Statista.com)

This rapid expansion is so alarming that modern-day pioneers in the tech industry, including Elon Musk and Steve Wozniak, have signed an open letter calling for a six-month pause on new AI experiments with the goal of “protecting humanity” from the “profound risks to society” AI systems may create.

Targeting the absence of planning and management behind the release of these advanced AI models, the letter criticizes the production of systems that even their creators can’t “understand, predict, or reliably control.” This rapid, unchecked growth raises the question of whether these systems are ethical.

This rise of AI technology has created an absence of ethical and moral oversight. Since the birth of civilization, humanity has controlled its many facets of production; from factories to local governments, humans have formulated systems to govern how things are run and how they should function. Like these other systems in society, AI should also be moderated and controlled so that our society can retain its moral compass.

In philosophy, the “body without organs” is a concept developed by the postmodern French philosopher Gilles Deleuze. It describes a body or entity operating freely, without organizational structures or elemental parts. Deleuze used the term to describe the political and economic structures of the French society of his time, which operated with little accountability; today, it aptly describes rapidly growing AI systems operating without ethical or moral oversight.

These systems, rid of any ethical or moral control, frighten me. As a member of society, my perception of the world has changed. In a world where so much is rapidly being digitized through AI, I question why moral safeguards aren’t progressing at the same pace.

The notion of AI operating without moral oversight also raises concerns about how these systems affect my own life. Even without AI, underlying biases exist against women and people of color, like me. Past AI tools and programs have already revealed discrimination based on gender, identity, and race.

The biases found in humanity’s previous experiments with AI are disheartening. For example, in 2014, Amazon, one of the most powerful global technology companies, started building AI programs to review job applicants’ resumes.

Image of Amazon company building (REUTERS/Pascal Rossignol)

While this system seemed like an economical tool that could save Amazon thousands of dollars in recruiting costs, it had a major flaw: it was biased against women.

The tool was trained to assess applications by studying resumes submitted to the company over the span of a decade. Most of these resumes came from men, which led the system to favor male candidates.

The AI eventually began to penalize resumes containing words such as “women’s” (for example, “women’s breast cancer awareness group member”). Similarly, graduates of historically women’s colleges and universities were ranked lower in the system’s hiring process.
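A minimal sketch of how this kind of bias can arise, using entirely fabricated data and a generic text classifier (this is an illustration of the mechanism, not Amazon’s actual system):

```python
# Toy illustration, NOT Amazon's actual system: a text classifier
# trained on fabricated, skewed "historical" hiring data absorbs
# the bias present in that data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated resumes and outcomes (1 = hired, 0 = rejected).
# The outcomes reflect a male-dominated hiring history, not merit.
resumes = [
    "software engineer java chess club captain",        # hired
    "developer python hackathon winner",                # hired
    "engineer cpp robotics team lead",                  # hired
    "software engineer python womens coding society",   # rejected
    "developer java womens chess club member",          # rejected
    "engineer python volunteer mentor",                 # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Because "womens" appears only in historically rejected resumes,
# the model learns a negative weight for it: no one coded the bias,
# the data taught it.
idx = vectorizer.vocabulary_["womens"]
print("learned weight for 'womens':", model.coef_[0][idx])
```

Even with only six made-up resumes, the learned weight for “womens” comes out negative, echoing in miniature what Reuters later reported at scale.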

It took the company about a year to realize the system’s bias and disband the project. The flaw was eventually revealed by Reuters in 2018, in a report outlining the damaging repercussions of AI systems operating without any moral control.

As a woman hoping to pursue a career in the media industry one day, where only 27% of executive positions are held by women, I find these patterns among AI hiring systems unfair; they deepen the gap in the discussion of gender inequality in the workplace. As these systems continue to grow and be used without any moral control, individuals already at a disadvantage in their career prospects will continue to be the guinea pigs in the tech industry’s race for artificial control.

This is not the only scenario in which AI systems have raised ethical concerns. Facial recognition AI software has developed racial biases in the past, even leading to the false arrests of people of color. Some systems have shown tendencies to give false feedback and information in order to elicit more positive responses from humans. This rise in unsupervised AI systems continues to fuel systemic issues in society, making them even more unethical.

A live demonstration using AI facial recognition at the Horizon Robotics exhibit at the Las Vegas Convention Center in Las Vegas on Jan. 10, 2019. (Getty Images/David McNew/AFP)

Some system developers argue that AI bias can be prevented simply by motivating creators to remain unbiased while building their software. However, AI bias most often emerges organically from the data and objectives a system is trained on. Even without external biases from its creator, an AI tool can latch onto proxy patterns, signals that stand in for gender or race, in order to function more “efficiently,” as the sketch below shows.
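Here is a second hedged sketch, again with fabricated numbers: even when the protected attribute is deliberately excluded from a model’s inputs, a correlated proxy feature can let the bias back in.

```python
# Toy sketch with fabricated numbers: removing the "sensitive" column
# does not remove bias when a correlated proxy feature remains.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, attended_womens_college]
# Gender itself is excluded, but the second column acts as a proxy.
X = [
    [5, 0], [7, 0], [6, 0], [4, 0],   # historically hired
    [6, 1], [8, 1], [5, 1], [7, 1],   # historically rejected
]
y = [1, 1, 1, 1, 0, 0, 0, 0]          # biased historical outcomes

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience differ only in the proxy,
# yet the model scores them very differently.
print(model.predict_proba([[6, 0]])[0][1])  # high "hire" probability
print(model.predict_proba([[6, 1]])[0][1])  # low "hire" probability
```

The point is not that anyone intended this outcome; the model simply found the most “efficient” pattern in the data it was given.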

In fact, this very virtue of “efficiency” is the fundamental intent behind the creation of AI systems. Humans build AI systems to help themselves function more efficiently in society. But while these systems illuminate prospects of “efficiency” and “human advancement,” failing to assess their ethicality and morality in the context of modern society poses a stark danger to the users they impact. AI developers may retain their hunger for efficiency, but they must spend more of their time reflecting on the multifaceted behaviors and traits of the systems they create.

System developers can find further reason to assess their systems by reflecting on philosophy. In Simulacra and Simulation, a philosophical treatise, the French philosopher and sociologist Jean Baudrillard describes society as a simulation that masks reality through images and symbols, including those of technology and media.

According to Baudrillard, “it is dangerous to unmask images [which include non-organic aspects of society, such as technology], since they dissimulate the fact there is nothing behind them.”

Image of a virtual simulation (iStock/Getty Images)

For Baudrillard, looking beyond the surface of this rapid technological expansion raises vital questions that are necessary for assessing the risks its existence poses to society.

While “unmasking” the ethicality of AI may bring forth dangerous outcomes, looking beneath the polished appearance of this technological expansion helps us channel some energy into the moral and ethical repercussions of AI advancement. This is imperative amid the rapid process of extending AI programs to the public.

It should be acknowledged, though, that necessary change is being made. Researchers at universities and labs across the globe are working to analyze AI’s impact on human life. For example, Stanford University’s Human-Centered Artificial Intelligence (HAI) institute works toward advancing “AI research, education, policy, and practice to improve the human condition.” While these efforts are a beneficial component of the growth of AI systems, research on AI’s possible societal repercussions should not be limited to academics and research groups. For real development to occur, AI companies themselves must make research and moral examination a key part of distributing systems to the public.

While technological expansion will likely continue to fuel progress in human societies, it is imperative that this advancement also be examined more deeply through ethical and moral lenses.