Putting AI Ethics into Practice


Posted on January 27, 2020

Working Bottom-Up, Not Just Top-Down

[Image: a robot's outstretched hand]

Even as interest in and discourse around AI ethics has grown, the field has come in for criticism. Critics argue that while AI ethics may be important, it remains stuck in abstractions, without real-world guidelines for action. Others contend that companies have begun to engage in “ethics washing,” i.e. pledging to work towards ethical AI without taking the requisite action. The field has also been criticized for not going far enough to ensure social justice.

There is some merit to these criticisms. AI ethics is a vital discourse because it helps frame the values we want the development and use of AI to embody, such as fairness and accountability. There is some agreement about what these principles are, although they can be interpreted differently. However, there is still a great deal of uncertainty about how to put them into practice, for several reasons. For one, “artificial intelligence” is an umbrella term used to describe a wide range of technologies, from Ex Machina’s Ava the robot to deep neural networks that can learn to play Go, and much else besides. For another, AI is a general-purpose technology that spans a large range of applications. As a result, the problem of applying AI ethics can seem too vast to be tractable, given the many types and use cases of AI.

Furthermore, many ethics principles, such as “fairness” and “accountability”, are extremely broad. Let us say, for example, that we want AI to be fair. What does this mean in practice? Is it fair if an AI makes a decision that privileges certain groups at the expense of others? Is it fair if an AI makes a decision without a certain type of information? “Fairness” in and of itself is hard to get a grip on because it is so expansive a concept that it is difficult to know where to even start. Here is one way to make the problem more tractable, and therefore more conducive to solutions: work bottom-up from specific applications of AI instead of only top-down from “artificial intelligence” in general.
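To see concretely how “fairness” splits into different operational definitions, here is a minimal sketch in Python (all data and function names are hypothetical, purely for illustration). It contrasts two common readings of fairness for a yes/no decision system: demographic parity (both groups receive positive decisions at similar rates) and equal opportunity (qualified people in both groups are approved at similar rates). The same set of decisions can satisfy one and violate the other, which is precisely why the abstract principle underdetermines what to do.

```python
# A minimal sketch, assuming binary decisions and two groups "A" and "B".
# The data below is made up purely to illustrate that the two metrics can disagree.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups A and B."""
    rate = lambda g: sum(d for d, grp in zip(decisions, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(decisions, groups, labels):
    """Absolute difference in approval rates among truly qualified cases (label == 1)."""
    def tpr(g):
        qualified = [d for d, grp, y in zip(decisions, groups, labels) if grp == g and y == 1]
        return sum(qualified) / len(qualified)
    return abs(tpr("A") - tpr("B"))

decisions = [1, 0, 0, 0, 1, 1, 0, 0]                    # the system's yes/no decisions
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]    # group membership
labels    = [1, 1, 0, 0, 1, 1, 1, 1]                    # ground-truth "qualified" flags

print(demographic_parity_gap(decisions, groups))        # 0.25 -> approval rates differ by group
print(equal_opportunity_gap(decisions, groups, labels)) # 0.0  -> qualified cases treated equally
```

By one definition this hypothetical system is unfair; by the other it is perfectly fair. Choosing between such definitions is not a purely technical decision, which is part of the case for grounding it in a specific domain.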

AI is seldom used in and of itself; it is always used in relation to a particular application, whether that is healthcare diagnostics, financial credit scoring, transport, or pest detection. Each of these sectors has its own existing codes of conduct – expectations, regulations, and laws governing how human actors in the sector are expected to behave. AI ethics should engage with these norms when figuring out how to integrate artificial intelligence into a given sector. It has been noted that working through specific, real-world examples can enable more productive reasoning about AI ethics than starting from overarching principles.

Let’s look more closely at healthcare decisions as an example. Here are some norms from medical ethics: we would like a doctor to make healthcare decisions fairly, which includes evaluating all the evidence and acting without malicious intent towards the patient. We can ask questions such as: at what level of expertise do we trust doctors to make healthcare decisions? By which ethical guidelines and regulations do we hold doctors accountable for their decisions? If a doctor makes a wrong decision that endangers a patient, what are the procedures for investigating the doctor? How do we expect doctors to explain their decisions? Now apply similar questions to AIs. At what level of ability or sophistication would we trust an AI to take part in diagnostic decisions? Who should be held accountable for an AI’s decisions – the developer of the AI, the hospital using it, or the doctor who relied on it in their decision-making? Note how we have gone from ethical values (fairness, acting without malice) to more specific, tractable questions that we can answer by engaging with the ethical concerns that already exist in the application domain.

A parallel to this domain-specific approach can be drawn with nuclear technology, another general-purpose technology. The ethics of nuclear power, which concerns itself with environmental sustainability among other questions, differs from the ethics of nuclear weaponry, which considers how to limit the use of such weapons.
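As a sketch of what translating one of these questions into practice might look like (all names and fields here are hypothetical, not a description of any real hospital system), one modest step towards accountability is simply recording, for every AI-assisted diagnosis, which model version produced the recommendation, which clinician reviewed it, and whether the clinician agreed. That is exactly the kind of trail an investigation into a wrong decision would need.

```python
# A minimal sketch of an accountability record for an AI-assisted diagnosis.
# The field names and the "sign-off" workflow are assumptions made for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DiagnosisRecord:
    patient_id: str           # pseudonymous identifier
    model_version: str        # which AI model produced the recommendation
    model_output: str         # the AI's suggested diagnosis
    model_confidence: float   # the AI's reported confidence, 0.0-1.0
    reviewing_clinician: str  # who signed off on the final decision
    final_diagnosis: str      # what was actually recorded for the patient
    clinician_agreed: bool    # did the clinician follow the AI's suggestion?
    timestamp: str            # when the decision was recorded

record = DiagnosisRecord(
    patient_id="anon-0042",
    model_version="retina-screen-v1.3",
    model_output="diabetic retinopathy, moderate",
    model_confidence=0.82,
    reviewing_clinician="dr-lee",
    final_diagnosis="diabetic retinopathy, moderate",
    clinician_agreed=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

A log of such records does not by itself settle who is accountable, but it turns the abstract question into concrete ones: who reviews disagreements between clinician and model, and who audits the log.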

Of course, we cannot simply transpose questions about human agents onto AI. There will be additional ethical questions about “Human + AI” functions, including how machines can influence human decision-making. For instance, if a healthcare diagnostics AI reaches a different conclusion from the doctor, how should the doctor weigh their own judgement against the AI’s output?

Nonetheless, policymakers and others working on AI ethics should engage more closely with the existing sectoral norms of the domain of application. The ethical issues that arise will also vary by domain. At the very least, this approach gives us clearer standards and expectations for what codes of conduct should look like: abstract concepts such as “fairness” and “accountability” can be translated into more specific benchmarks, turning somewhat intractable concepts into concrete problems that are easier to get a grip on.

To study this question further, researchers can ask whether an application-based approach to AI ethics does in fact improve the implementation of AI ethics principles. This is a difficult question to study at present: the creation of AI ethics frameworks is a relatively recent phenomenon, and the regulatory landscape surrounding AI is still quite sparse, so it is perhaps too soon to evaluate the impact of such frameworks empirically. However, this question may be taken up in the future.
