Ethical AI: What is it (and is it Possible)?
As companies continue to expand the use cases of AI, being able to define ‘good’ and ‘bad’ outcomes is critical.
Artificial Intelligence (AI), Machine Learning, Big Data: over the last few years, these technological buzzwords have moved beyond their original confines in niche academic and development circles, and into the forefront of many companies’ strategic plans and operations.
The reasons are clear: businesses that can use human-built algorithms to autonomously analyze, learn from and then act on their growing troves of data are able to provide a vast array of services and products that would previously have seemed impossible.
From Google being able to offer suggestions based on your evolving search history, location and email conversations, to a website chatbot trying to answer your question on how to use a piece of software: AI is revolutionizing the way humans interact with technology, and the potential uses and outcomes that result from it.
However, AI is also perhaps the most controversial dimension of computer technology, due to its very nature: allowing an algorithm to make calculations and then produce outcomes that can affect human life raises a wide range of serious ethical considerations. It’s precisely these ethical questions – and the resulting outcomes – that Ethical AI attempts to address and navigate.
First, it’s important to define what we mean by ‘Ethics’. Ethics refers to a broad and complex collection of concepts that try to define and conceptualize ideas of morality. The concept tries to find ways for humans to understand and think about what is good or bad, right or wrong, just or unjust, and other questions related to how humans view the world and interact with each other.
But for the purposes of this article, Ethics can more simply be understood as the study of how to define a ‘good’ outcome from a human decision or action.
Ethics and Technology
The relationship between technology and ethics has a long and often strained history. Ever since humans first conceived of computers and robots, the ethical questions surrounding these technologies have been guiding and shaping how they evolve, which in turn has shaped how we view what is ethical or not.
For example, take Google’s Gmail. Prior to Gmail, the idea that a third party could have access to your emails’ content would have seemed unethical to many. However, Google took steps to show that email content was only being analyzed by non-human algorithms in order to make better suggestions to users. It also provided a seamless, integrated user experience and interface, and established credibility and trust when it came to its users’ data security. Through this, Google was able to change what we consider ethical when it comes to email confidentiality.
These types of trade-offs, and the ability of technology to shape human ethics, are part of what Ethical AI tries to study.
But for developers and businesses seeking to find a more humanistic approach to developing technology, Ethical AI above all means creating a moral framework in which technology can be created and used ethically.
Ethical AI: A Moral Framework for the Data Age
There is a growing school of thought within IT that identifies the need for a workable Ethical AI framework as the biggest challenge for businesses that use the technology.
As technology becomes increasingly autonomous, ensuring that humans are able to prevent algorithms from producing unethical outcomes is becoming increasingly important for many reasons: from staying compliant with national and global regulations, to maintaining customer trust, loyalty and safety.
However, this is no easy feat. Companies have lost millions of dollars and squandered immense resources developing software that had to be scrapped because it produced unethical outcomes. For example, Amazon had to abandon the development of recruiting software because its developers were unable to prevent the algorithms from discriminating against women. Microsoft also famously had to pull its Tay chatbot out of operation in 2016 after it quickly began producing racist and offensive language.
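Failures like these can sometimes be caught before deployment with a basic fairness audit. As a minimal sketch (the function names, data and outputs below are hypothetical illustrations, not taken from any of the systems mentioned), a demographic parity check compares how often a model selects candidates from different groups:

```python
# Minimal demographic-parity audit (hypothetical data).
def selection_rate(outcomes):
    """Fraction of candidates the model recommended (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def parity_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high > 0 else 1.0

# Hypothetical model outputs for two applicant groups.
group_men = [1, 1, 1, 0, 1, 1, 0, 1]    # selection rate 0.75
group_women = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

ratio = parity_ratio(group_men, group_women)
print(f"parity ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33
if ratio < 0.8:
    print("Potential disparate impact: review the model before deployment.")
```

A check this simple cannot prove a model is fair, but it can surface glaring disparities early, before a product reaches users or regulators.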
Ultimately, it will take an enormous, global and collaborative effort on the part of industry pioneers, developers, businesses and regulatory authorities to establish clear, actionable frameworks that align AI with human interests and laws, protect those interests and laws from the encroachment of AI, and still allow humans to enjoy its benefits.
While this is still a long way off, it’s an important goal to keep in mind for all players in the tech industry. As we continue to build new software, it’s essential to truly ask and understand: does it really have a good outcome?