The ethics of artificial intelligence (AI)

Most people have probably read news articles about the risks of AI and how it may or may not involve killer robots that take over the human race. And while that risk is mostly fiction, there are very real ethical concerns about the current use of AI. 

In this article, we’ll touch on issues like data bias, legal fault, and consumer privacy, as well as the regulations that may or may not exist in the AI space.

In this article we’ll cover:

  1. What is artificial intelligence?
  2. The history of artificial intelligence
  3. The ethics of AI
  4. How to rethink AI ethics
  5. Ethics and AI evolution

What is artificial intelligence?

AI is a specific branch of computer science that attempts to replicate the inner workings of the human mind through complex computer programs. These machines can perform more complicated tasks than many other applications, including tasks that may require logic or decision-making ability. Some kinds of AI can revise their algorithms from the data they take in and so improve their effectiveness; others still require the work of a developer to improve. 
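
To make that last point concrete, here is a minimal sketch in Python of what “revising itself from data” can look like: a toy model with one adjustable parameter that improves its predictions as it processes examples. The dataset, learning rate, and loop count are invented for illustration; real AI systems are vastly more complex.

```python
# A toy "learning from data" loop: the model adjusts its single parameter
# (weight) based on the error it makes on each example it takes in.
# Purely illustrative; not a real AI system.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, desired output)

weight = 0.0           # the model's one adjustable parameter
learning_rate = 0.05   # how strongly each example nudges the weight

for _ in range(200):               # revisit the data many times
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # revise the parameter from the data

print(f"learned weight = {weight:.2f}")  # ends up near 2, matching the pattern in the data
```

The important idea is only that the program’s behavior changes as a result of the data it sees, rather than because a developer rewrote it.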

Learn more about artificial intelligence.

The history of AI

To understand the current ethical issues with AI, it’s helpful to know about the history of AI and how it was developed. After all, the idea of “fake intelligence” dates back centuries, and even the idea of robots or “automatons” goes back to the ancient Greeks. 

The development of modern AI started as scientists and philosophers in the early 1900s asked the crucial question: is it possible to create an artificial brain? The public quickly became obsessed, as the idea of “thinking machines” appeared in books and plays all over the world. 

In the 1950s, AI research took off. But even the pioneers of those AI studies had no idea how quickly AI would develop. Especially in the 1980s and onward, AI research took enormous steps forward, hitting landmarks that experts didn’t think were possible to hit within the next hundred years. 

Now we’re at a crucial stage in the development of AI: it has become so popular so fast that regulations and legal requirements haven’t caught up. Meanwhile, billions of dollars are still being poured into AI research today, and average consumers interact with AI every day (whether through smartphones or search engines).

Learn more about the history of AI.

The ethics of AI

There are some deep ethical questions around the modern and future use of AI – and some of them may not be exactly what you think. While there is certainly the question of whether AI will replace human workers and put people out of work, there are other pressing questions. Many of these revolve around transparency, data longevity, and the lack of legal precedent or regulations.

Ethical issues

If asked, most experts will point to five major areas of concern when it comes to the ethical use of AI. Most of those areas revolve around how AI and humanity interact, or what effect AI could have on the future of consumers. The five areas of concern are:

  • Consumer privacy
  • Surveillance
  • Transparency
  • Bias
  • Automation

Consumer privacy

This area of ethics may not be what your average person thinks about. That said, it's an increasingly pressing concern for experts, since the data collected by and for AI has the ability to outlast the companies that use it, or the humans to whom the data belongs in the first place. 

According to the International Covenant on Civil and Political Rights, which the United States ratified in 1992, American citizens have a reasonable right to privacy. However, both the government and many major businesses collect private consumer data regardless, leading to the ethical dilemma behind privacy and AI. After all, AI needs data in order to exist – the data trains it, and it continues to take in data as it performs its given tasks. 

The biggest concerns around consumer privacy and AI include:

  1. Data repurposing: Data originally collected on a person for one purpose can be stored and later used for a different purpose, without that person’s knowledge.
  2. Data persistence: Data collected on a person may be stored far longer than that person’s lifetime, since data storage costs are low.
  3. Data spillover: Data collected on a person who gave consent may also capture people nearby or people who regularly interact with that person, none of whom consented (for example, photos or videos with other people in the background, or call logs that trace both parties).

Transparency

With the rapid development of different AI technologies, two growing concerns are the lack of transparency in how the programs work and the lack of traceability of which AI runs on which software.

The traceability factor of different AI technologies is an IT security risk for individuals and companies alike. So many programs run AI now – to gather data or improve the customer experience – that an unauthorized AI program can end up on a computer it shouldn’t be on, giving the program access to data it shouldn’t have. For example, downloading unsanctioned programs onto a work laptop can expose confidential data to AI. Neither the company’s IT team nor the company that owns the AI will necessarily be able to trace how and where that data got leaked, making it a security risk.

Transparency, on the other hand, must accomplish two things. First, it must make sure that AI is explainable, so that people outside the original programming team can test the models and confirm that they make sense and work properly. Second, it must enable outsiders like consumers and regulators to understand why and how the AI works, to determine whether it’s unbiased or needs to be regulated. 
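
As a loose illustration of the first point, explainability, here is a minimal Python sketch (using scikit-learn, with invented feature names and toy data) of one common approach: training an inherently interpretable model and surfacing which inputs push its decisions up or down so reviewers outside the original team can inspect them.

```python
# A minimal explainability sketch: train an interpretable model and expose
# its learned weights. Feature names and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "age", "prior_defaults"]

# Toy applicants: [income (thousands), age, number of prior defaults]; label 1 = approved.
X = [
    [55.0, 34, 0],
    [23.0, 22, 2],
    [78.0, 45, 0],
    [30.0, 29, 1],
    [65.0, 52, 0],
    [18.0, 24, 3],
]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Printing the weights lets reviewers see which factors push a decision up or down.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {weight:+.3f}")
```

Simple, inspectable models like this are only one piece of transparency, but they show the kind of visibility consumers and regulators would need for more complex systems as well.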

Bias

There is a common misconception about AI: because it is a machine, it can’t be biased. The assumption is that humans make flawed decisions because of societal and personal biases, but machines can make fair decisions free of bias.

Unfortunately, that’s simply not true. AI is only as unbiased as the data used to train it. Since many companies purchase their data from data warehouses and don’t check it for accuracy and bias before using it, there are many kinds of AI in the world that operate with the same biases as humans. 

There are two major types of bias in AI:

  1. Data bias: When the training data used to create the AI contains flaws or bias, causing the resulting program to have the same biases.
  2. Societal bias: When biases present in society end up programmed into the AI – whether because the AI itself learns them through independent data gathering, or because of an oversight on the part of the programmer. 

An example of AI bias is how inaccurate many facial recognition programs are when it comes to people of color – particularly black women. A Harvard study found that some facial recognition programs were up to 35% less accurate at categorizing dark-skinned women than light-skinned men.
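
As a rough illustration of how a gap like that gets quantified, the Python sketch below computes accuracy separately for each demographic group over a handful of hypothetical test results; the group labels and numbers are made up for the example, not figures from the study.

```python
# Measure accuracy per demographic group to surface disparities.
# The records below are hypothetical evaluation results.
from collections import defaultdict

# (group, was the model's prediction correct?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    accuracy = correct[group] / totals[group]
    print(f"{group}: {accuracy:.0%} accurate")
# A large gap between groups signals bias in the training data or the model itself.
```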

That’s a bad enough problem on its own, but worse is the fact that these flawed AI programs are already utilized by companies, law enforcement, and the government. In fact, many federal agencies are planning to expand their use of facial recognition for law enforcement or security purposes. With such flawed technology so widely adopted, is it any wonder that racial and gender disparities aren’t getting better? 

Automation

With the prevalence of AI, most people have probably wondered if their jobs would eventually be given to a program to do instead of a human. And while the good news is that projections show that AI is likely to create more jobs than it destroys, there are still ethical questions around this shift. 

One such ethical consideration: the jobs removed are likely to be those done by relatively “low-skilled” workers, such as data entry and assembly, while the jobs created are comparatively “higher-skilled,” revolving around the creation, maintenance, and reporting of AI systems. 

Thus, to ensure the workforce can keep up with these changes, companies will have to consider vocational training for employees who would otherwise be laid off. Otherwise, AI-driven automation will still cause the devastation to the workforce that people fear it may. 

How to rethink AI ethics

By now we’ve discussed the ethical issues with AI at length. So what can we do to rethink AI and make it more ethical to use, especially as we continue to progress toward more advanced AI?

There are four key areas that need attention: regulations on the national and international level, legal precedent, data responsibility, and AI trust.

Regulations

As of 2021, there are no regulations at the national or international level specifically around AI. However, in April 2021, the European Commission proposed the “AI Act” to regulate AI systems deemed an “unacceptable risk.” And in the United States, the Director of the White House Office of Science and Technology Policy co-authored an op-ed in Wired discussing why the US needs a bill of rights for an AI-powered world.

Article 22 of the European Union’s (EU) General Data Protection Regulation loosely applies to AI in regards to privacy, and states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling…”

As AI continues to evolve in the future, it’s imperative that legislation and regulation be introduced to keep the rights of the consumer in mind. Some of these rights include:

  • Informed consent
  • Freely-given consent
  • Being able to opt-out of data collection
  • Limited data collection
  • Data deletion if requested
  • Description of the use of data in AI processing

Legal precedent

Because there are so few regulations around AI, the question of legal fault gets murky. Who is at fault when something goes wrong and someone gets hurt? The company that uses the AI? The developer who created it? Or (if there is human supervision) the human supervisor?

A highly publicized case in which a self-driving car operated by Uber hit and killed a pedestrian in 2018 set the precedent of fault falling on the human supervisor. Prosecutors found no basis for “criminal liability” on the part of Uber, which faced no legal repercussions in the case; fault instead fell on the safety driver, who was not paying attention at the time of the crash. 

However, each time AI malfunctions, fault will need to be decided on a case-by-case basis, as each will have its own complexities. John Villasenor proposes in his Brookings research article that AI regulations should function much like product liability laws, assigning fault based on different levels of negligence and harm.

Data responsibility

Along with stronger legal regulations, companies need to prioritize data responsibility to support the ethical use of AI. This comes in three major parts: data privacy, bias, and sourcing.

  1. Data privacy: This means committing to the privacy of the individuals in the data collected. That comes from having high security standards, removing data on request, notifying people when their data is collected and used, and maintaining transparency standards for collection and use. 
  2. Data bias: Ensuring the data used to train your AI is free of bias means intentionally checking it for bias, intentional or unintentional (a minimal sketch of such a check follows this list). If you’re training a facial recognition program, you must use diversified data that includes people of different ethnicities, countries of origin, and genders. Otherwise, the AI will skew toward whatever data was used most often.
  3. Data sourcing: This means ensuring that you know where the data came from and that the people who collected the data, and the people it was collected from, were treated fairly and compensated. 
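
As mentioned in point 2 above, here is a minimal Python sketch of what a representation check on training data might look like before the data is used to train a model. The attribute name, threshold, and records are hypothetical placeholders, not a prescribed standard.

```python
# Check a training set for underrepresented groups before training on it.
from collections import Counter

def representation_report(records, attribute="ethnicity", min_share=0.20):
    """Print each group's share of the data and flag groups below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flagged = []
    for group, n in counts.items():
        share = n / total
        print(f"{group}: {n} records ({share:.0%})")
        if share < min_share:
            flagged.append(group)
    return flagged

# Hypothetical records for a facial recognition training set.
training_data = [
    {"ethnicity": "group_a"}, {"ethnicity": "group_a"},
    {"ethnicity": "group_a"}, {"ethnicity": "group_a"},
    {"ethnicity": "group_b"}, {"ethnicity": "group_b"},
    {"ethnicity": "group_c"},
]

underrepresented = representation_report(training_data)
print("Underrepresented groups:", underrepresented)
```

A check like this doesn’t prove the data is unbiased, but it catches the obvious skew that would otherwise be baked into the resulting model.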

Ethics and AI evolution

As you can see, there are many ethical questions surrounding AI, even now. But what about as AI evolves in the future? There are other AI risks we didn’t discuss, such as the possibility of AI becoming sentient, or doing harm in its attempts to accomplish its goal.

The path toward less risky, more ethical AI isn’t a short one. But it is an important one. As AI is adopted by more businesses and used by more consumers, it becomes more important than ever that companies consider the ethics of how they use AI, and that regulators seriously evaluate the risks.