Ethical vs Unethical? Or Ethical vs Inadvertent?

Written by Anne Currie | Feb 6, 2018 1:23:27 PM

Next month I’ll be co-hosting the first QCon Tech Ethics track in London with Gareth Rushgrove. We’ll hear speakers from many different areas of technology talking about the ethical issues they face. We’ll discuss ethical (and unethical) algorithms, codes of ethics, machine learning, AI, hiring, education and diversity. We could talk about much more but it’s only one day! In this post, I’m going to try to get my own head around the idea of practical ethics in tech. Is it possible?

Let’s Talk About Talking About Ethics

There’s a risk in discussing ethics that we just run around the room yelling STOP!!! Strangely, that’s seldom an effective persuasion technique.

It’s the nature of humans to create new things. That’s good. We should invent. I don’t think the purpose of ethics is to apply a panic-stricken brake to technological progress.

But, if not to brake, what are ethics for?

Morals vs Ethics

I’ve spent a great deal of time recently asking techies their opinion on ethics, and it’s interesting how often they mention the trolley problem. I’m beginning to suspect that that thought experiment tells us everything we need to know about morality and ethics in tech, but not in the way you might think.

The Ubiquitous Trolley

Unless you live inside a sealed room connected to the outside world only via a convoluted message-passing system, you’ve probably heard of the Trolley Problem. The premise is that you’re standing by a railway track with a runaway trolley (basically, a train) coming towards you. If you don’t act, it will kill the five people lounging on the track ahead. Alternatively, you can pull a lever and divert the trolley onto a quieter track with only one person hanging around on it. What do you do? Take no action and let five people die, or take action and be responsible for the death of one?

If you’ve ever discussed this problem, you’ll have noticed there’s no consistent agreement. Different people feel strongly about, and can morally justify, either answer. There may statistically be some commonality of view based on location, religion or personality, but there’s no right answer here. You’re just arguing over the lesser of two evils - usually vociferously.

Forget Morality, Try Maintenance

If morality fails to agree even on the least bad of two clear options, is ethics better? What would tech ethics tell us to do?

I would argue that practical and effective tech ethics would be about avoiding the Trolley Problem in the first place. What might a technically ethical solution look like?

  • Maintaining and enhancing the rolling stock and tracks to reduce or eliminate the chance of a runaway trolley?
  • Darwin Awards aside, putting up signs or alarms to discourage inattentive folk from hanging out on railway tracks?

You might argue, “That’s not the point of the trolley problem, Anne. It’s a kind of moral Kobayashi Maru!”

My contention, however, is that that’s a waste of my time. I don’t believe tech ethics is about navigating a moral maze. Let’s leave that as an exercise for philosophy graduates. We’re pragmatists, and I suspect technical ethics is essentially pragmatic.

I’d define tech ethics as considering how preventable problems, such as disadvantage or injury to individuals or groups, might result from any system, and taking realistic steps to avoid them.

I believe effective tech ethics is about deliberately choosing to invest from an early stage in planning and processes to avoid the moral tech problems that we’re only just becoming aware of. It’s about deliberateness or intentionality from the very start.

A Practical Example Please?

Let’s consider a similar problem in the field of security.

Last year, I interviewed the well-known author of “Building Microservices”, Sam Newman, about building complex systems securely. He commented that one of the main differences between security and other forms of functionality is that security has to be thought through up front and designed in from the start. You can’t just feel your way.

That’s because:

  1. It’s hard to retrofit security to many system designs, particularly modern ones.
  2. You want to stop security failures from happening rather than just learn from them. Any major security failure, like losing personal data, could be very damaging to the business and its customers. According to Sam, you need to deliberately plan up front to avoid it.

So to do a good job, security is something you have to consider at the design stage. That made me think. Are ethics the same? Do they also need to be planned from the start?

Security and ethics do have elements in common:

  • It might be hard to retrofit an ethical behavioural change later (complying with the new European Union GDPR privacy rules has cost Microsoft more money than any previous legal requirement).
  • For many reasons, including bad publicity, you may want to prevent unexpected unethical behaviour by your system rather than merely fix it post hoc.

So, there’s some basis for thinking that ethical issues, like security ones, might need to be considered at the earliest design stages of a product rather than left as an afterthought.

However, there is one way in which security differs wildly from ethics: we mostly know what we need to do to be secure. There are agreed behaviours that the industry learned from being insecure in the past. In ethics, we don't always have that basis yet.

Stepping back, do we have any genuine examples of ethical practice where standards or agreed behaviours already exist?

Are There Any Ethical Examples?

Accessibility is something we usually think about early in most new designs. As a result, most websites are reasonably accessible (at the very least, zoom works well nowadays, which I appreciate as someone with visual impairments).

We didn’t do this to be nice. US government purchasing rules and Google SEO, for example, strongly incentivised us to be accessible. Standards were defined by the W3C. We knew what to do, and we knew there were benefits to complying with the commonly agreed behaviours.

The results of agreed standards and clear commercial benefits were the same as if we’d chosen to be accessible merely to help folk like me. By thinking practically about accessibility up-front we did a better job of it. I suspect this is an example of effective tech ethics in action.

Are we perfect at accessibility? No. But I'm with Voltaire on this one: "the perfect is the enemy of the good". As long as we put processes in place to improve, we are on the right track.

Intentionality and Ethics

When a website is hacked and user data is stolen, it might be because an evil company has unethically chosen to save money by failing to keep their servers patched. More often, however, it is inadvertent: they just didn't have good patch processes in place. The problem is more likely to be lack of attention or ignorance than lack of ethics.

The same is true for more general ethical failures. Bitcoin is a terrible user of dirty energy (at current rates, Bitcoin mining will consume as much energy as Italy by the end of the year). This probably wasn't deliberate - it was inadvertent. There's no need for cryptocurrencies to be eco-disastrous; Bitcoin just wasn't designed not to be. So it is, and it'll be very difficult to fix retrospectively. New cryptocurrencies that anticipate a legal or social backlash, or that want to be more ethical, may design in renewable energy use up front, for example.

So What is Tech Ethics?

I suspect Technical Ethics is just the application of informed thought and care at the start of projects, ideally based on standards (when they become available). It is about deliberate choices and intentionality. In the same way that Sam Newman thinks that secure systems are ones where security has been a key intention from the start, ethical products will be ones where basic ethical standards have been a deliberate choice at the design stage. Like security, ethics may be expensive to retrofit.

So what areas might we ethical techies want to consider at the design stage, along with accessibility and security? Here are a few ideas for things that are not yet always legally enforced but might well be soon:

  • Algorithmic fairness (could the algorithms or training data unfairly disadvantage or harm people? See the sketch after this list)
  • Low carbon footprint
  • Low pollution/waste/plastics footprint
  • Not encouraging criminality, exclusion or other antisocial behaviour
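
To make the first of those concrete, here’s a minimal, hypothetical sketch (in Python) of the kind of design-stage check I have in mind for algorithmic fairness: measure the gap in positive outcomes between two groups and flag it when it gets too wide. The group names, data and threshold are all invented for illustration; real fairness work is far more involved than this.

```python
# Hypothetical design-stage fairness check: compare positive-outcome rates
# between two groups (a simple "demographic parity" gap). All names, data
# and the threshold below are made up for illustration only.
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group_label, positive_outcome_bool) tuples."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions, group_a, group_b):
    rates = positive_rate_by_group(decisions)
    return abs(rates[group_a] - rates[group_b])

# Entirely fabricated loan-approval decisions from an imagined model.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

THRESHOLD = 0.10  # an agreed, project-specific limit on the acceptable gap
gap = demographic_parity_gap(decisions, "group_a", "group_b")
print(f"Positive-outcome rate gap: {gap:.2f}")
if gap > THRESHOLD:
    print("Fairness check failed: review the model and the training data")
```

The details of the check don’t matter; the point is that it exists, runs automatically, and was agreed at the design stage rather than bolted on after a scandal.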

Thinking about them at the design stage, and doing something about them while it's still relatively easy, is being technically ethical, but it's also being smart. Young consumers and software engineers are becoming increasingly aware of the ethical footprint of the products they buy or work on. In the future, legislation is also likely to get heavier - especially in Europe.

We're still, however, at an early stage. I expect ethical standards to emerge for us to adhere to, but we are not there yet. The EU are trying to define and impose complex standards using, for example, their new privacy and algorithmic fairness law, the GDPR. I applaud the sentiment, but standards don't tend to work that way. We'll get useful standards when companies who are doing ethics successfully start publishing their decisions and processes for the rest of us to adopt - "rough consensus and working code" as the IETF might put it. In tech ethics let's aim for rough consensus and practical actions.

So what's the action for the average dev? It's to think, try and write. Think about how ethics might apply to your products. Try some things out that might make your products more socially good, and then write about what happened (the good, the bad and the ugly) so that rough consensus and practical actions can emerge. At some point someone will codify what works, but at the moment you are contributing to that standard rather than following it.

Hopefully within 12 months we'll have some rough guidelines in place and we'll continue to evolve them. You are a key part of the process. Good luck!

Read more about our work in The Cloud Native Attitude.