“Ethics in AI” is probably one of the most searched topics on Google in the field of artificial intelligence. And when you type those three keywords into the famous search engine, the word “issue” appears seven times on the first page of results. Ethics is indeed often associated with the semantic field of “concern”. Why is that? To illustrate this interesting question, let me offer the following analogy: ethics is to the development of AI what sustainability is to a commercial project, a challenge seen as an issue rather than an opportunity. If we look at the sustainability challenge, many have fallen into the greenwashing pitfall: they have merely paid lip service to the idea of sustainability instead of embracing this new perspective as a real springboard for a paradigm shift in our economy and society.
What about AI ethics? Is it more than a sugar-coated trendy hot topic?
FIRST THINGS FIRST: WHAT IS AI? WHAT IS ETHICS?
Before diving into ethical considerations, we should first agree on a definition of AI. In the collective consciousness, the term AI is made of facts and fears intertwined with the imaginary and with science fiction, creating a sort of myth that only remotely resembles what AI really is. The difficulty of apprehending the notion of AI is an obstacle to understanding the various forms that AI can take in practice, which does not contribute to a climate of trust. In the context of this blog entry, we will narrow the umbrella term “AI” down to machine learning.
The second thing we need to address is the notion of “ethics”. There is no single way to define ethics. Among the many definitions, we will retain the following one: ethics is “a set of moral principles that govern a person's behavior or the conducting of an activity”.
Putting aside what is to be considered moral or not, this definition nicely underlines the idea that moral principles are the root of behaviors and decisions. In other words, the heart of the ethical approach is to tackle the moral issues that can be raised by the emergence of AI.
WHY SHOULD WE CARE ABOUT ETHICS IN AI?
One reason ethics matters is that an active and intentional reflection on it contributes to building a trustworthy relationship with AI. Another is that the interactions a machine can have with humans, although not specific to AI, touch on many different issues (security, autonomy, risk, responsibility, etc.), making a clear ethical reflection necessary. Ethics matters because such a reflection enables us to anticipate the possible issues mentioned above.
If we can agree on the necessity of considering ethics, we could still wonder why it matters more in the AI field than in any other IT field. It is because a machine learning algorithm can be led to make decisions or take actions that are contrary to the ethics of its creator, independently of her/his will. This “independence” factor, which induces a high probability of “unconscious drifts”, is really specific to AI and makes ethics a crucial point in the whole setup of an AI project.
THE ETHICS GUIDELINES FOR TRUSTWORTHY ARTIFICIAL INTELLIGENCE: A FIRST STEP
A little over a year ago now, the European Commission, with the help of a group of experts (the High-Level Expert Group on Artificial Intelligence), put in place guidelines aimed at enabling the emergence of trustworthy AI. Although the ethical reflection on artificial intelligence is far from complete, and there is still a long way to go, the Commission's groundwork provides a starting point for a reflection on ethics and AI. The group identified three components necessary for trustworthy AI: lawfulness, ethics and robustness. The European Commission's approach is holistic and sees those three aspects as one. But what is to be considered under each of those layers? For the sake of this article, we will focus only on ethics.
The European Guidelines suggest four ethical principles to consider when designing, implementing and deploying an AI application:
- The principle of respect for human autonomy
- The principle of prevention of harm
- The principle of fairness
- The principle of explicability
Principle of respect for human autonomy: According to this principle, people interacting with AI must be able to maintain full self-determination. AI systems should therefore be designed to support, complement and augment human capabilities.
Principle of harm prevention: The design of an AI system should ensure that it does not adversely affect human beings, either physically or morally. This deserves even more consideration when an AI system operates within an asymmetrical relationship (employee-employer; citizen-state; judge-defendant; etc.).
Principle of fairness: An AI system must be fair, whether it is being developed or deployed. This implies that the system must be free from unfair bias (causing discrimination or stigmatization).
Principle of explicability: Transparency should be the key word in the conceptualization of an AI product or service. This implies communicating not only the capabilities of the AI but also the objectives pursued, the decisions taken, and so on.
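To give the fairness principle a concrete flavor, here is one simple sanity check a data scientist might run on a trained model: comparing positive-outcome rates across groups (a rough “demographic parity” check). This is a minimal, hypothetical sketch, not part of the European Guidelines; the group labels and the 0.1 tolerance are illustrative assumptions.

```python
# Minimal demographic-parity check: compare the rate of positive
# predictions across groups. Group names and the 0.1 tolerance are
# illustrative choices, not values prescribed by any guideline.

def positive_rate(predictions, groups, group):
    """Share of positive predictions (1) within one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: model predictions (1 = favourable outcome) per applicant.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)  # 0.75 for "a" vs 0.25 for "b"
if gap > 0.1:  # the tolerance is a project-level choice
    print(f"Warning: parity gap of {gap:.2f} exceeds tolerance")
```

Such a check is of course only a starting point; a gap can have legitimate causes, and fairness ultimately requires the kind of multidisciplinary discussion described below rather than a single metric.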
“ETHICS AND AI”: FROM THEORY TO APPLICATION
One of the big questions left unanswered by the High-Level Expert Group on Artificial Intelligence of the European Commission is how to go from principle to action. I can imagine some of you thinking: “why bother with all this questioning when the European Guidelines seem to be crystal clear?” But are they?
Ralph Waldo Emerson once said: “An ounce of action is worth a ton of theory”. The European Guidelines are necessary in order to understand the issues at stake, and in this respect they do a good job! The expert group recently launched a platform (still a prototype) that aims at helping companies deal with ethics questions in the context of AI. Nonetheless, all those efforts lack grounding in reality. The complexity of reality calls for more than theoretical knowledge; it calls for concrete actions (and not necessarily a long list of actions written in a supposed “AI lingo”). Simple, concrete actions could be enough to start with. But what could be the first stone to lay?
I would suggest, as we did at B12, starting with “the talk”. I have to admit that discussing ethics in our everyday job is counter-intuitive. We do not learn this at school, and we are not recruited for our ethical abilities. But we all have matters we stand for, values we live by, and choices we would rather not make; in other words, we all have personal ethics. Gathering in small groups to discuss the AI you would stand for, the values you would like to live by as a data scientist, and the choices you would like to be prepared for as an AI professional is the first stone to lay in order to go from theory to action. Several tools can help you initiate the conversation. MIT has worked on AI Blindspot Cards aimed at enabling conversations “that can help uncover potential blindspots during planning, building, and deploying of AI systems”.
At B12, we are currently working on the development of a framework for ethical reflection. This framework encompasses the different topics and questions that should be covered when working on an AI system. “What should be considered representative data?”; “Did you inform the user of the possible biases of your algorithm?”; “Will the AI solution be explainable to the end user?”; … These are but a few of the questions we thought necessary to ask ourselves.
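A question like “what should be considered representative data?” can also be given a first, rough, quantitative handle before any model is trained. The sketch below is a hypothetical helper, not part of the B12 framework; the 5% floor is an illustrative assumption, and what counts as “diverse enough” remains a judgment call for the team.

```python
from collections import Counter

def under_represented(labels, min_share=0.05):
    """Return the groups whose share of the dataset falls below min_share.
    The 5% default floor is illustrative, not an official threshold."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(g for g, c in counts.items() if c / total < min_share)

# Toy example: the gender column of a training set with a 2% / 98% split.
genders = ["f"] * 2 + ["m"] * 98
print(under_represented(genders))  # → ['f']
```

A flag raised by such a helper is not an answer but a conversation starter: it turns “is this database diverse enough?” from an abstract worry into a concrete item on the project's agenda.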
Another tangible and practical step: cooperation. Reaching ethical AI goes hand in hand with a multidisciplinary collaboration. The key to an effective implementation of an ethical reflection lies in the understanding that potential solutions are not held by one side of the AI development process, but are instead held by a larger group of specialists who are responsible for collectively thinking of AI from their different areas of expertise.
One of the main obstacles to this cooperation is the “language gap” that can exist between disciplines. In order to tackle this challenge of “translation”, there must be regular exchanges between the different actors throughout the building of an AI application. Though data scientists might be at the center of the process, developers, designers, customer relations managers, jurists and so on all have their say. Another way to phrase this is to proactively make room at the discussion table. Where you would usually seat one data scientist, two developers and one project manager, what about seating another actor? For example, if your AI system aims at enhancing the efficiency of a working team, why not bring in your Human Resources Manager? If your AI system is meant to help gynecologists in their day-to-day practice, did you pay attention to having at least one woman on the team?
Through cooperation, everyone’s expertise comes into play, as well as everyone’s individuality. What does this have to do with AI? We have all probably heard the expression “garbage in, garbage out”, which relates to the quality of the data used to “feed” an AI algorithm. If what you feed into your AI system matters, how you build it matters too, as well as who builds it. The richness and diversity of the individuals who participate in the design, implementation, testing and usage of AI systems play an important part in cooperation and therefore in the ethical reflection. When we consider diversity in terms of gender, background, color, interests, etc., things like unfairness, bias, harm and human autonomy are no longer theoretical issues, but concrete ones.
This is one of the reasons why at B12 we insist on having an eclectic team with a variety of skills. Here, the data scientists and I have found a way to understand each other by debating practical cases that we encounter in our projects or topical legal questions in our sector (Can I use this database? Is this database diverse enough? What biases should we expect when using this data?). At B12, we firmly believe that each point of view matters and that we have something to learn from each other. That is why we highly value cooperation and collective intelligence (more about it here: https://www.b12-consulting.com/how-did-collective-intelligence-help-us-in-our-management-of-the-COVID-19-crisis.html).
More than that, we believe that cooperation will be the junction between AI and ethics that will lead to achieving an ethical AI.
“ETHICS AND AI”: AN ISSUE OR AN OPPORTUNITY?
Machine learning systems, or AI systems as we call them here, have the potential to go beyond the intended purpose of their creator, for good and for bad. This is the very reason why AI ethics matters. But saying so does not magically make it easy to consider in practice, which is why I wanted to underline some practical ways to give a concrete dimension to ethics in the context of AI: discussing and cooperating on AI ethics issues is as important as the AI technology you are building, and should therefore become an integral part of an AI project flow.
At its very core, this article aimed to point out how ethics gives us a real opportunity to challenge our perspective: an occasion to go beyond theory and build an AI ecosystem that takes into account the diversity and complexity of our society. Is ethics more than a sugar-coated trendy hot topic? Yes! Ethics in the field of AI is an opportunity to build AI systems that make sense today for tomorrow.
At B12, we want to seize this opportunity. What about you?