
WTF Is Ethical AI? A Primer

Last night, while revising this article, I was trying to decide how I should introduce this topic. First, there is the question of artificial intelligence (AI) itself. It is an extensive area of research, and one of its applications is machine learning. It has become the industry norm to refer to machine learning as “AI” in the general sense, which is not strictly accurate. In this article we will use AI to refer to the research field and all of its applications, including machine learning.

Then there is the question of ethics. I mean, ethics - if you look it up on Wikipedia, you will see that ethics as a concern predates philosophical reflection (although its early meaning was much closer to “virtue” or “excellence”). As you might expect, the discussion over its definition alone is enormous. Ethical issues crop up in other branches of software probably at least as frequently as they do in my own field. As Charles Humble and Heather Joslyn recently noted, “Bad software ruins lives, even without AI”.

When it comes to “AI ethics”, the typical association is with Asimov’s famous Three Laws of Robotics, which is unfortunate because they are based on a huge misconception. Perhaps we can find a better definition via a recent US news story: “Portman, Heinrich Urge National Science Foundation To Prioritize Safety and Ethics in Artificial Intelligence Research, Innovation”. The press release is about a letter that the two senators sent to the National Science Foundation (NSF), which - political and scientific merits apart - contains the following definition:

“Broadly, AI safety refers to technical efforts to improve AI systems in order to reduce their dangers, and AI ethics refers to quantitative analysis of AI systems to address matters ranging from fairness to potential discrimination.”

There were no specifics about which dangers might be related to AI safety, although the image that comes to mind is Terminator’s Skynet. However, AI’s potential for discrimination came up several times in the letter. That is absolutely understandable, especially considering the current socio-political climate in the US. At the same time, the repeated use of “discrimination” made it look like “AI ethics” was being reduced to “AI bias”. That is a hazardous reduction since, whilst AI bias is undoubtedly an issue, many other aspects of our daily lives impacted by AI can be seen through an ethical lens.

How can I help you help me help you?

Earlier this year I had the opportunity to watch a presentation titled “L’éthique ou comment protéger l’IA de l’Humain pour protéger l’Humain de l’IA” - which can be loosely translated as “Ethics, or how to protect AI from humans in order to protect humans from AI”. In this presentation, Prof. Amal El Fallah-Seghrouchni - a world-class AI researcher and member of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) - divides the domains impacted by AI into:

  • Societies (more specifically, citing the urge for control that comes with all the available data and what can be inferred from it);
  • Ecosystems (increased use of computational resources leading to increased energy consumption); and
  • “Human life and the human spirit” (new ways of acting and thinking, human interactions, and consequently the cognitive decision-making process).

AI impacts myriad aspects of each of these domains - some easier to identify than others. If we take into account the massive use of user data by AI-powered systems, for example, it is reasonable to conclude that AI can directly impact everything privacy-related. Also, considering the nature of most AI-powered decision-making processes, transparency can be a problem - we will talk about this later.

Other impacted aspects that can easily be inferred from a cause-and-consequence analysis are related to responsibility and accountability. Let’s say that a vehicle factory decides to use AI to create a more cost-effective manufacturing process. The AI-powered software concludes that a certain part of the vehicle can be assembled differently. If it doesn’t work out, and the new process actually costs more than the original one, the software vendor or development team will probably be blamed, there will be a rollback, the factory will lose some money, someone might get fired - but ultimately life goes on.

Now - imagine that we are talking about an AI-powered judicial system, in which software replaces the jury. As dystopian as it might sound, we don’t really need one huge software program to do that; the existing legal system (and the decisions made within it) is already influenced by the data and algorithms used in localized systems and other automated decision-making. If someone is wrongly deemed guilty as a consequence of a bug (and, as you know, no software is bug-free) - who is to blame? What are the consequences for all parties involved? And if we assume that transparency is a problem, how can we evaluate justice?

Analyzing the impact of AI computation on our ecosystems might not have been so easy in the recent past, but the global chip crisis and increasing criticism of the massive use of resources by cryptocurrency mining allow us to draw a very clear parallel. AI modelling is computationally expensive. Processing more data requires more resources and more energy - we can’t escape this. Many AI workloads run in the cloud, but cloud data centres are also huge consumers of energy, and whilst the major cloud providers have pledged to reach net-zero carbon, they are not there yet and may never be. And that is just data centres, which are a relatively easy problem to fix. As our smartphones grow more powerful we need more energy and bigger batteries - a need fueled by ever more complex, resource-hungry apps. We are at a point where we already talk about green software engineering, as Holly Cummins’ recent article for WTF attests. Unlimited clean energy is still a utopia. How ethical is charging our phones two or three times a day, just so we can keep using our very expensive data-processing apps (AI-powered or not)?

And then we have the not-so-obvious aspects, such as those related to the human psyche: we already know that digital addiction is a real condition. So - how much should we expect the number of digital addicts to change due to the use of AI in targeted advertising, for example? Is it ethical to design products that use AI and psychology to deliberately try to hook consumers, as people like Nir Eyal advocate? And what about our relationships with other people once the still-new digital dating model becomes AI-powered? To what extent can delegating our reasoning affect the rest of our lives? And if you choose to do so - who is actually in charge?

Looking at these questions from a higher level leads us to two different conclusions. The first one - as mentioned before - is that AI ethics is not only about AI bias. The bias exists, of course, and is a problem, since it affects anything related to decision-making (especially analytical decision-making). But if we examine all the domains impacted by AI in terms of what is right or wrong, there are still many questions to answer and problems to solve.

The second one - as proposed by Prof. Fallah-Seghrouchni - is that we need to perceive ethics as a systematic study of behaviour based on a global and evolving framework of values, independent principles, and actions. Without that, we can’t provide answers in a responsible manner or even know whether we can adopt anything AI-powered in the first place. In other words, ethics must be seen as a dynamic basis for the assessment, orientation, and use of AI-powered technologies.

In practical terms, it means that adopting AI ethically also means looking at each question we expect to answer using AI, and considering how it impacts everything else. This is not something we can fully do even as humans - there’s a whole discussion here on morals and ethics, and our diverse, often culturally-informed, notions of right and wrong. As humans we make decisions and are held responsible for them by our conscience, by societal norms, or ultimately through the law. But we also know that, at least given our current state of technology, it is impossible to completely replicate the human reasoning process as a system. So, if it is hard for us to reach a “globally right” conclusion, it will be impossible for any software to do so. But that doesn’t mean we can’t start addressing the subject.

AI bias, or the problem of “this is my world”

We have mentioned this topic at least twice now, so let’s address it: what does it mean to have a bias when we talk about AI? Let’s first define what we consider “bias”. We can find a lot just by googling the term, but I’m particularly fond of this article. The author lists 20 types of bias that can influence (or compromise) our cognitive process - or, if you prefer, our decision-making process. 

We can immediately relate a few of these biases to our context. For example, let’s take the “availability heuristic”, which involves overestimating the importance of the information available. That is a classic “don’t” from econometrics: if you give all data the same importance when using a purely analytical process, you end up falling into a “clustering illusion” bias. That is when you find spurious correlations, such as the temperature in Miami appearing to depend on the birth rate of kangaroos in Australia. It gets a bit more complicated with machine learning, but the principle is the same: if you (i) don’t choose the appropriate learning algorithm for your problem and (ii) feed in too much data indiscriminately, you can expect your system not to give you the right answers. The same analogy can be used for a lot of the other biases listed in the article.
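To make the clustering illusion concrete, here is a minimal, hypothetical Python sketch - the series names and numbers are invented, not real data. Two completely independent random walks can still show a strong correlation by pure chance, which is exactly the kind of “pattern” a naive analysis will happily discover.

```python
# A minimal sketch of the "clustering illusion": two unrelated random walks
# can still show a strong correlation by pure chance. Purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=7)

# Pretend one series is "temperature in Miami" and the other is
# "kangaroo birth rate in Australia" - both are just random walks.
miami_temperature = np.cumsum(rng.normal(size=500))
kangaroo_births = np.cumsum(rng.normal(size=500))

# Pearson correlation between the two unrelated series.
correlation = np.corrcoef(miami_temperature, kangaroo_births)[0, 1]
print(f"Correlation between unrelated series: {correlation:.2f}")
# Re-run with different seeds: strong positive or negative correlations show
# up surprisingly often, even though there is no causal link whatsoever.
```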

Other bias types that may influence your algorithm are related to “selective perception” and “stereotyping”. You are probably familiar with the “is this a chihuahua or a muffin?” problem (which is very real, by the way). In machine learning, selective perception can also result from training a learning algorithm on “the right data” - a set of data that already confirms your analysis. Using thousands of pictures of the same two or three people to train a facial recognition algorithm won’t teach the algorithm to recognize faces with characteristics too distinct from the original models. That is an obvious bias. But what if your dataset is biased and you don’t even know about it?
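One modest, practical first step is simply to look at who is actually in your training data before you train anything. The sketch below assumes a hypothetical metadata record per training image; the group labels and counts are invented for illustration.

```python
# A hedged sketch of a very basic dataset audit: before training, count how
# each (hypothetical) demographic group is represented. Illustrative only.
from collections import Counter

# Imagine each training image carries a metadata record like this one.
training_metadata = [
    {"person_id": 1, "group": "group_a"},
    {"person_id": 2, "group": "group_a"},
    {"person_id": 3, "group": "group_a"},
    {"person_id": 4, "group": "group_b"},
    # ... thousands more records in a real dataset
]

group_counts = Counter(record["group"] for record in training_metadata)
total = sum(group_counts.values())

for group, count in group_counts.items():
    print(f"{group}: {count} samples ({count / total:.0%} of the dataset)")
# A heavily skewed distribution here is an early warning sign that the
# trained model may generalize poorly to under-represented groups.
```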

There are many articles on the problems of biased datasets or algorithms in automated decision-making. Staying with facial recognition algorithms, their failure rate varies considerably depending on factors such as gender or ethnicity. That happens because the data used to train the algorithm was biased - the majority of faces used were, say, white, or male, or both. So - if your software fails to recognize a person, and this failure (i) can be traced to a biased dataset and (ii) has other repercussions for one or more individuals (emotional distress, economic loss) - who is responsible, and what should the consequences be? If there aren’t laws in place to regulate the data provider in the first place (which may very well be a different company providing a service to the software developer), where does the responsibility chain stop?
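This kind of disparity is at least measurable. As a hedged illustration (the groups and outcomes below are made up), breaking a model’s failure rate down per group is a simple audit that overall accuracy alone will never surface.

```python
# A minimal sketch of a per-group error audit, assuming you already have
# ground-truth labels and model predictions for a labelled test set.
# The data below is invented; in practice it comes from your evaluation runs.
test_results = [
    # (group, correctly_recognized)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

failure_rates = {}
for group in {g for g, _ in test_results}:
    outcomes = [ok for g, ok in test_results if g == group]
    failure_rates[group] = 1 - sum(outcomes) / len(outcomes)

for group, rate in sorted(failure_rates.items()):
    print(f"{group}: failure rate {rate:.0%}")
# A large gap between groups (here 25% vs 75%) is exactly the kind of
# disparity the facial recognition studies report - and something an
# evaluation that only reports overall accuracy will hide.
```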

There are, of course, a lot of other types of bias to consider. An algorithm may be biased if its hard-coded reasoning is based on a cultural or local norm or rule (e.g., the legal drinking age in different countries). We already know that it’s impossible to be perfectly ethical (unless you have actually solved the whole human-reasoning-as-a-bot problem, patent pending). But as developers, engineers, managers, or executives, you’ve probably already arrived at the real question: can we measure how ethical our software is?
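Before we try to answer that, it is worth making the drinking-age example concrete. The snippet below is deliberately trivial and entirely hypothetical, but the same pattern shows up in far subtler forms inside real rule-based and feature-engineering code.

```python
# A tiny, hypothetical illustration of a hard-coded cultural norm:
# the first rule silently assumes one jurisdiction's legal drinking age.
LEGAL_DRINKING_AGE = 21  # true in the US, but not in most other countries

def may_be_served_alcohol(age: int) -> bool:
    return age >= LEGAL_DRINKING_AGE

# A less biased version at least makes the assumption explicit and configurable.
DRINKING_AGE_BY_COUNTRY = {"US": 21, "FR": 18, "BR": 18, "JP": 20}

def may_be_served_alcohol_in(age: int, country_code: str) -> bool:
    return age >= DRINKING_AGE_BY_COUNTRY.get(country_code, 21)
```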

Understanding the ethical rule

There is a lot of research related to AI ethics, ranging from how to implement ethical principles to actual ethics guidelines for AI. Curiously enough, in this context Asimov’s aforementioned Three Laws are not ethical guidelines - in fact, they may even be considered unethical: complex ethical problems don’t really have a simple yes-or-no answer. The Three Laws are often used to convey a sense of trust - robots can be trusted not to harm any humans as long as they follow these laws. But that is a matter of safety - following the Three Laws doesn’t make all decisions inherently good or bad.

Establishing ethical guidelines for AI is hard. Last year AI ethics researcher Thilo Hagendorff published a comprehensive evaluation of guidelines used in the development of AI systems. In this paper, Hagendorff compared 22 different guidelines currently in use, examining to what extent ethical principles are implemented in AI systems (incidentally, the author also examined the existence of biases among the authors of these guidelines). The conclusion was straightforward: AI ethics is failing in many cases, mostly due to the lack of a reinforcement mechanism:

“In practice, AI ethics is often considered as extraneous, as surplus or some kind of “add-on” to technical concerns, as an unbinding framework that is imposed from institutions “outside” of the technical community. Distributed responsibility in conjunction with a lack of knowledge about long-term or broader societal technological consequences causes software developers to lack a feeling of accountability or a view of the moral significance of their work.”

The author also states that these considerations have two consequences for AI ethics (paraphrased): 

i. Stronger focus on technological details in the field of AI and machine learning is required to close the gap between ethics and technical discourse.
ii. AI ethics should turn away from the description of purely technological phenomena to focus more strongly on social and personality-related aspects. AI ethics then deals less with AI as such, and more with ways of deviation or distancing oneself from problematic routines of action, uncovering blind spots in knowledge, and gaining individual self-responsibility.

If you think these two consequences are somewhat contradictory, you are right. In order to actually measure and understand how ethical your system is, you need to find a way to technically implement abstract values. But for that to happen, you need to understand what is really happening within your AI-powered system (I won’t go into the matter here, but it suffices to know that “explainable AI” is a big research trend for a reason). At the same time, the more you focus on the abstract values, the more you distance yourself from the technology part. Although he does not explain how to solve this dilemma, Hagendorff recognizes that “finding the balance between the two approaches” is a challenge for AI ethics.
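As one small, concrete bridge between the two approaches, here is a hedged sketch (synthetic data, scikit-learn) of permutation feature importance, one of the simpler techniques from the explainable-AI toolbox mentioned above. It won’t make a model “ethical”, but it illustrates what “understanding what is really happening” can look like in code.

```python
# A hedged sketch of one basic explainability technique: permutation feature
# importance on a toy model trained on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for feature_index, importance in enumerate(result.importances_mean):
    print(f"feature_{feature_index}: importance {importance:.3f}")
# If a sensitive attribute (or a proxy for one) turns out to carry a high
# importance, that is a concrete, technical signal worth investigating.
```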

This, of course, leads us to the practical problem at hand: if we want to know if our AI system is ethical (or at least how ethical it is), we need to adhere to a set of guidelines - but also understand that they aren’t (and probably won’t be for some time) all-inclusive. There is a balance to be achieved, and recognizing that is a very difficult first step on its own.

New romantic cyborgs

Up to this point, all we know is that (i) AI ethics involves a whole universe of considerations and problems, and (ii) regardless of which aspect we choose to tackle, that’s probably only the tip of the iceberg. And, huge as it is, we are still looking at the problem from the system perspective alone - how the direct application or implementation of AI impacts us. You might be under the impression that this is a one-way street, as in “what the software is doing to us, humans” - but unfortunately, that’s not the case.

We already understand that the human psyche can be impacted by AI. This impact can be either direct or indirect. A direct impact may be exemplified by the response from an e-learning system when you get the third question wrong in a row: seeing the same “you are wrong” message can be demotivating. The indirect impact comes from AI-powered algorithms used in dating apps, for example (as we discussed above). 

However, our relationship with AI-powered systems - the interaction per se - can also be perceived as ethical or not, depending on which abstract aspects we take into account. That means the AI experience itself is subject to ethical analysis - follow all the guidelines, and you might still have a problem if your system does not interact with the user in an ethical manner, or vice versa. With dating apps, if you understand the user-input parameters well enough to “rig the game”, is it OK to do so? And back to accountability - if your actions have a reprehensible outcome, to what extent should the AI also be blamed? Or, in other words: is the AI unethical if it is susceptible to unethical interactions? That is an old discussion, actually, if you consider every judicial battle involving “terms of use” - but the problem is still there.

And what of people who do not have access to smartphones, high-speed internet, or any modern communication technologies? If our behaviour changes as a result of AI usage, are these people outcasts now? Is our software ethical if it doesn’t deal with such scenarios? Are we (exposed and changed by AI) being ethical if we push our new ways to be the “new default”?

Facts do not cease to exist because they are ignored

At this point you might be feeling scared of even going near AI. Please - don’t be. “Ethical AI” is much, much bigger than accounting for isolated problems. We don’t have a perfect solution. The topic is deep, the problems are many, and the impact on our society is tremendous. But we don’t need to banish everything AI as if we were in a technological dark age. We do, however, need to understand that “AI ethics” is more than a hype marketing term or the new tech version of the neighborhood-friendly “hey, we recycle”.

Before we start buying and using AI-powered systems (or building them ourselves) just because “they are the future”, we need to understand a lot of things - including the ethics involved. We are talking about something that influences how we think, decide, and - ultimately - live. Consequences, responsibility, accountability are all part of the package. This is not supposed to be scary, but it doesn’t change the fact that the discussion is already out there. So, let me ask you: how ethical is your AI?

