
Is Privacy Unethical?

It was sci-fi author William Gibson who first pointed out that the future is already here; it just isn’t evenly distributed. That comment applies with particular force to privacy, surveillance, and trust. Right now, you may be able to keep a secret, but will that always be true? More importantly, should it be?

Keeping an eye on things

According to a 2019 report by Comparitech, London is the most watched city in Europe and the sixth most surveilled in the world. There’s at least one camera for every 14 Londoners, and by 2025 cctv.co.uk, a closed-circuit television system supplier in Britain, expects the number of fixed recording devices in the capital to top 1 million.

But that’s nothing compared to China. In the past decade, its internal security spending (mostly devoted to citizen monitoring) increased tenfold. It now significantly outstrips the country’s external defence budget. 

We’re all being watched! Apart from the obvious (dazzle makeup and big hats), is there anything we can do about it?

Hang on! Before we come up with some cunning headgear-based camouflage scheme, do most people actually care? 

In the UK, there’s not much public complaint about how much we’re observed, and this is Britain! We complain about everything! In China, outside of Hong Kong, again there’s little noise. Why? In part, because observation could serve two purposes the public are not averse to: social scoring and increased security. But does it?

The eyes have it

China's social credit system is a form of scoring that affects citizens’ lives in a variety of ways, from public shaming of jaywalkers to whether individuals are allowed to use public transport. There isn't one single rating. There are loads of them and they’re often generated from data collected by businesses as well as the state (not just video, plenty of financial information goes into the mix too). However, most are tied in some way to the government. 

The use of social scoring in China varies by region, according to Samantha Hoffman, fellow at the Australian Strategic Policy Institute. What is considered bad behaviour, she says, can range from “not paying fines when you're deemed fully able to, misbehaving on a train, standing up a taxi, or driving through a red light.”

In one example, each resident of the Chinese city of Rongcheng is initially allocated 1,000 civic points; the authorities deduct points for frowned-upon activities, like traffic violations, and award them for social goods, like donating to charity.
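To make the mechanics concrete, here is a toy sketch of how such a ledger might behave. Only the 1,000-point starting balance comes from reports on Rongcheng; the specific deductions, awards, and the transport-ban threshold below are invented purely for illustration.

```python
# Toy model of a Rongcheng-style civic points ledger.
# Only the 1,000-point starting balance is reported fact; every other
# number here is a made-up placeholder, not the real scheme.

STARTING_POINTS = 1000
TRANSPORT_BAN_THRESHOLD = 900  # hypothetical cut-off

ADJUSTMENTS = {
    "traffic_violation": -50,  # frowned-upon activity (invented value)
    "charity_donation": 30,    # social good (invented value)
}

class Citizen:
    def __init__(self, name: str):
        self.name = name
        self.points = STARTING_POINTS

    def record(self, event: str) -> None:
        """Apply the points adjustment for a recorded behaviour."""
        self.points += ADJUSTMENTS[event]

    def may_use_public_transport(self) -> bool:
        # The article notes scores can gate access to public transport.
        return self.points >= TRANSPORT_BAN_THRESHOLD

citizen = Citizen("Wei")
for _ in range(3):
    citizen.record("traffic_violation")
print(citizen.points)                      # 850
print(citizen.may_use_public_transport())  # False
```

The arithmetic is trivial; the contentious part is who chooses the adjustment table and the threshold.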

To most Western individuals, a high level of surveillance coupled with automatic penalties seems like a bad thing. In Asian countries, however, it has improved governments’ response to the coronavirus outbreak and saved a lot of lives, particularly of poor and vulnerable folk who are the pandemic’s primary victims. 

For me, the idea that my government would stop me using the train because it reckoned I had a poor citizenship rating seems unthinkable. However, in mainland China, such scoring is often popular. Perhaps banning antisocial folk from buses seems slightly less dystopian if you’re actually on the bus. 

Stranger danger?

The main argument the Chinese state uses in favour of observation is that it creates stability and social cohesion in the country’s sprawling new cities. If you’re building “a London every year”, most of the first inhabitants will be strangers with no way to decide who they can trust. With social credit, their government has provided a state-sanctioned, automated version of interpersonal trust that merely relies on your acceptance of the state’s judgement. Let’s hope it was right.

Amazon uses the same rationale for its own version of scoring: customer reviews enable you to buy confidently from unfamiliar sellers all over the world, based on enough opinions from other folk you don’t know. Amazon successfully monetised high-scale, statistically based trust, and it is a significant benefit to consumers, as long as it isn’t fake. Unfortunately, a large number of the reviews on Amazon are fake, and huge numbers of the goods sold (even by Amazon itself) are counterfeit.
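The statistical machinery behind that kind of trust is mundane. A damped (Bayesian) average is the textbook way rating systems stop a handful of opinions, genuine or fake, from dominating; the sketch below is that generic technique, not Amazon’s actual, proprietary ranking formula.

```python
def damped_rating(ratings, prior_mean=3.0, prior_weight=10):
    """Bayesian average: blend observed ratings with a prior, so a
    seller with three 5-star reviews can't outrank one with thousands.
    prior_mean and prior_weight are illustrative values, not Amazon's."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

print(damped_rating([5, 5, 5]))           # ~3.46: too few opinions to trust yet
print(damped_rating([5, 4, 5, 5] * 500))  # ~4.74: the crowd has spoken
```

It also shows why fake reviews are so corrosive: the formula weighs every opinion equally, so bought five-star ratings move the score just as surely as genuine ones.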

Scoring, however, is not the only mechanism for handling credibility. Blockchain fans take a radically different approach: they want to deliver a future where human trust is no longer required, because algorithms manage it for you. Blockchain’s most common use, cryptocurrency, is popular with both criminals and radical libertarians. Greener humans, however, hate the appalling carbon footprint of proof-of-work mining: it turns out trust is expensive to replace, at least without a central authority.
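The cost argument is easy to demonstrate. Proof-of-work, the consensus mechanism behind Bitcoin, replaces a trusted authority with a brute-force lottery; a minimal sketch, with a toy difficulty, looks like this.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce until the block's SHA-256 hash starts with
    `difficulty` zero hex digits. Every failed attempt is wasted
    electricity, which is where the carbon footprint comes from."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Even this toy difficulty takes tens of thousands of attempts on
# average; the real Bitcoin network performs on the order of 10^20
# hashes per second, around the clock.
print(mine("pay Bob 5 coins", difficulty=4))
```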

Watching your words

We’ve talked about your actions (as viewed by cameras), your financial history, and reviews from customers as things you might be judged on. What about your speech? 

In the past, much user activity on the internet was outside the reach of the law thanks to another aspect of privacy: anonymity. If you incite folk to insurrection on Twitter, Twitter isn’t legally responsible for it (and I can see the argument for that). And if you are exercising your right to anonymity, you may also be hard to hold to account. After the US Capitol riot, that situation may not continue.

Is it better to preserve the privacy of people potentially committing crimes or to have big tech decide what gets posted online? Should we let crimes go unpunished in order to preserve both free speech and extreme privacy in the form of anonymity? 

Maybe it’s a price worth paying; your answer may depend on which country, and which laws, we’re talking about. In countries where the laws are both fair and fairly applied, upholding them is generally considered to be a good thing. You may argue there are no countries like that, and therefore there is always a place for anonymous protest. When and where are free speech and anonymity still worth it? Right now, that’s a question mostly being asked and answered by big technology companies.

Trust in the future?

What will the next generation of trust look like? Will we enhance the accuracy of interpersonal judgement through more data, perhaps by adding brain-scanning or lie-detection AI (also in use in China)—all of which reduce privacy even further? Or will we try to remove the need for human, or state, judgement by using tech like blockchain?

In reality, the privacy versus human-based trust debate isn’t even that straightforward. 

Surveillance technology is buggy. It is particularly likely to produce incorrect, and usually discriminatory, results for minority groups and women because of poor testing and weak training data. This is not merely a question of ethics: in the UK, the EU, and the US, existing anti-discrimination legislation makes it illegal to penalise folk based on minority status. However, up until now, tech (and its users, such as police forces) may have been getting away with it.
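Detecting that kind of failure is not technically hard, which makes the legal exposure all the starker. The simplest audit just compares false-positive rates across demographic groups; the numbers below are invented to show the shape of the check.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, flagged_by_system, actually_guilty).
    Returns, per group, how often innocent people were wrongly flagged."""
    wrongly_flagged = defaultdict(int)
    innocent = defaultdict(int)
    for group, flagged, guilty in records:
        if not guilty:
            innocent[group] += 1
            if flagged:
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / innocent[g] for g in innocent}

# Invented data: a system trained mostly on one group will typically
# show a much higher error rate on everyone else.
sample = (
    [("majority", False, False)] * 95 + [("majority", True, False)] * 5 +
    [("minority", False, False)] * 70 + [("minority", True, False)] * 30
)
print(false_positive_rate_by_group(sample))
# {'majority': 0.05, 'minority': 0.3}
```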

For a variety of reasons, including lack of awareness and of money to hire lawyers, victims haven’t been taking scoring-algorithm companies to court. However, there are signs that the honeymoon period could be coming to an end. Companies that use machine learning badly, and break the law doing so, may increasingly find themselves sued.

Privacy versus ethics

If everything were transparent to everyone, we would live in a Panopticon. 

It’s worth pointing out that when the 18th-century philosopher Jeremy Bentham came up with the idea of a system in which many people are watched by a single observer, without knowing when they are being watched, he intended it for use in prisons. However, the Asian experience of 2020 suggests a Panopticon world might be a safer place. “Live Free or Die!” say the residents of New Hampshire, but do you agree?

When states grab data about us, we get social stability and crime reduction, but also potentially repression and, unless we can fix it, discrimination. When companies grab it, we get more personalised products and medicine, plus emotional, political, and cognitive manipulation. We also get discrimination again, unless we can fix it.

Right now, consumers and citizens seem content with (or at least willing to live with) the payoffs from loss of privacy, otherwise Facebook wouldn’t be as popular as it is. Multiple religions are also founded on the idea of an omniscient deity who sees all our sins and holds us to account, and they’ve always been surprisingly popular.

Christians suggest we’re better behaved if we believe our every act is observed by someone in authority. Those same folk might argue that social scoring is God’s job, but if you’re an atheist you believe that position is vacant. Should the state step into the role of judge—as China has? Or should Facebook? Or no-one? Or everyone? 

Who watches the watchers?

Surveillance is power:

  • Governments have pushed for it in 2020 in exchange for public safety.
  • Amazon, Facebook, and Google offer convenience in exchange for your data.

On the other hand:

  • Apple has pushed firmly the other way, using techniques like differential privacy (sketched after this list) and the secure enclave, as well as its privacy “nutrition labels” for apps.
  • The popularity of ad blockers also implies that at least some people care about their own internet privacy. Or maybe they just don’t like ads.
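
For the curious, the oldest differential-privacy technique, randomized response, fits in a dozen lines. Apple’s production systems are far more sophisticated than this; the sketch only shows the core idea, that you can learn about a population without being able to trust, or blame, any individual answer.

```python
import random

def randomized_response(truthful_answer: bool) -> bool:
    """Flip a coin: heads, answer honestly; tails, answer at random.
    Any single reply is deniable, yet the population rate survives."""
    if random.random() < 0.5:
        return truthful_answer
    return random.random() < 0.5

def estimate_true_rate(reported_yes_fraction: float) -> float:
    # reported = 0.5 * true + 0.25, so invert the noise:
    return (reported_yes_fraction - 0.25) / 0.5

# Simulate 100,000 users, 30% of whom have some sensitive attribute.
answers = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
reported = sum(answers) / len(answers)
print(round(estimate_true_rate(reported), 3))  # roughly 0.3, with no one exposed
```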

Google has said it will stop personalised ad tracking, but it isn’t at all clear that’s part of a drive for improved privacy. It could be a move to consolidate the company’s own position.

2020 has made things explicit. In future, we are increasingly likely to face the choice between what the West has championed in the past (individual privacy and freedom) and the preference of Asian nations (the stability of society). 

I can accept the argument that privacy is unethical when it is traded off against the safety of society, but it shouldn’t be a one-way street. Corporations and governments benefit from watching us, but we don’t have the same level of supervision over their behaviour or the equations they’re using to manage us. Amsterdam and Helsinki have made an excellent move in the right direction with the introduction of a register of algorithms. This is important because even if you believe the entity watching you has the best of motives, its algorithms don’t. They just have bugs.
