WTF Is Cloud Native

How to Build Tech You Won’t Regret

If tech is building the future, do we ever stop to think about what sort of future we’re building? We focus on moving fast, breaking stuff and continuously delivering, but do we take time to consider who could be excluded from what we’re making? 

Maybe the tool we’re making won’t be used in the way we’re intending. Or maybe it will be used by far more people than we could possibly imagine—for instance, what could happen if it scaled to the 2.80 billion monthly active users of Facebook?

Building technology responsibly isn’t just about security, resiliency and uptime. It’s also about environmental impact. And privacy. Most importantly, we have to consider our social impact, and what we are asking our users to give their consent to—wittingly or unwittingly.

As tech becomes increasingly personal — in our homes, our work, our cars, and even our bodies — our responsibility as its creators must also increase. And as the tech industry continues to face a huge talent gap, we have more job security than most to speak up and ask questions. We should use that privilege. Everyone within an organisation owns the consequences of what we’re building and even what we’re choosing to connect to.

In this piece, we share some simple reflections, mechanisms, and Design Thinking techniques to help you incorporate ethical considerations into your sprint cycles.

Responsible tech starts with a commitment.

When Kim Crayton, the anti-racist economist, was looking for an organisational strategy to scale belonging and psychological safety in the knowledge economy, she developed four Guiding Principles:

  • Tech is not neutral. Nor is it apolitical.
  • Intention without strategy is chaos.
  • Lack of inclusion is a risk- and crisis-management issue.
  • Prioritise the most vulnerable.

If we as teams and tech organisations establish these axioms as a foundation, would it change the way we look at what we’re building on top of it? How often are we prioritising those experiencing the most risk? In order to really create technology that not only can but should scale, Crayton says we have to learn to become comfortable with being uncomfortable.

And we certainly have to work alongside users, or at least find the teammates closest to them—our colleagues in sales, customer success, developer experience—and bring them to the strategy table. Don’t rely solely on your current users; test and build alongside a much broader group, intentionally designing products for the wider population that could eventually become your users.

Of course the easiest way to follow these guidelines is by having a diverse, equitable, and inclusive team to mitigate the risk of building something no one will use — or that will be used in a way you hadn’t even considered. Starting with Crayton’s principles is a great way to remember the power and risk that comes with building technology.

Responsible tech considers its consequences.

Doteveryone, a former UK-based responsible technology think tank, created a useful technique called consequence scanning. This is an Agile practice for product teams to continuously make sure that what an organisation is building aligns with its culture and values. It’s intended to be run during initial conception of a product, roadmap planning, and feature creation. 

You start by answering the following about your product:

  • What are the intended and unintended consequences of this product or feature?
  • What are the positive consequences we want to focus on? 
  • What are the consequences we want to mitigate?

We often spend a lot of time setting goals and maintaining a backlog of intended features. If we use behaviour-driven development, we also dedicate time to those intended consequences and the user experience; the unintended consequences, however, tend to elude us. Doteveryone has found that technology usually shares one or more of these six unintended consequences:

  • Lack of digital understanding: Unclear business models and policies lead to a lack of insight into what you are “consenting” to. People willingly gave their DNA to free ancestry tools to find long-lost relatives, only for it to be revealed that police departments were using that data to investigate crimes. General Data Protection Regulation (GDPR) notices now pop up on European users’ screens, but most of us quickly clear that distraction away without learning any more about how those cookies will be used. And because women and people of colour are underrepresented on IT teams, a lack of understanding results in men building health apps that don’t include period trackers, or virtual backgrounds that behead images of users with darker skin tones.
  • Unintended users and use cases: People will always find a way to use your app in a new and unexpected way. What3words was built as a simpler alternative to longitude and latitude, but is being used to organise social distancing at Black Lives Matter protests. And some human beings will always find new ways to hurt other people, ranging from harassment all the way up to state-sponsored election interference. 
  • Weak reliability, security, monitoring, and support: How will you monitor a site or service, as it scales, for unexpected or unplanned problems, such as YouTube becoming an unchecked pipeline for extremism? Who is validating the security and stability of the track-and-trace apps governments created for the COVID-19 pandemic? Why do they crash? Will governments remind us to delete them later?
  • Change in behaviours and norms: These changes go from emojis becoming their own language, to screen addiction, to my four-year-old trying to swipe the TV screen to change the channel. The use of contactless mobile payments may keep germy currency out of our hands, but how does it change our spending habits?
  • Displacement: This ranges from technological unemployment—self-checkout machines and ATMs replacing cashiers, and chatbots substituting for customer service reps—to enabling people to meet, study, and work from home.
  • Negative impact on the planet: The IT industry contributes between 10 and 12% of carbon emissions. Are we taking measures like turning off video autoplay to help cut our own products’ contribution? Are we even measuring our own footprints? Does the environmental impact factor into where we host our website?

We tend to label consequences as negative but, as you can see above, not all of them are. They just often weren’t the intended outcome. It’s important to brainstorm possible consequences and to make a plan to monitor, measure, and potentially remedy them.

Wellcome Data Labs’ Aoife Spengeman has developed a complementary workshop in pursuit of ethical product development, applying it to evaluate the lab’s own algorithms. This process categorises unintended consequences as:

  • Use cases: The product is used in a way it was intended to be used by both the creators and users. 
  • Stress cases: The product is used in the way it was intended to be used, but it has unintended consequences for users. These are what are usually dubbed “edge cases”. 
  • Abuse cases: The product is deliberately used by someone in a way that it wasn’t designed to be used. 
  • Misuse cases: The product is unintentionally used by someone in a way that it wasn’t designed to be used.

Wellcome Data Labs used this exercise to evaluate a machine-learning tool developed to help internal teams gain insights into how the Wellcome-funded research is cited, aimed at measuring the research’s impact on the public. 

The team uncovered a major risk: users could misunderstand how the tool works and make assumptions about its accuracy, comprehensiveness, and the significance of its output, which could, among other things, decrease diversity, inclusion, and funding for some projects.

For example, papers written in the Global North are disproportionately cited. Without Wellcome providing that context, this stress case could cause funders using the tool to perpetuate these systemic inequities, only funding the most cited projects. Furthermore, the tool is more accurate for English language and science-centric journals. A user could misuse the tool, believing that research in other languages or disciplines is less influential, when in fact it's just less likely to be picked up by the algorithm. 

The outcome of this exercise was to work to make sure users have a better understanding of how the algorithms work and their limitations.

As Spengeman writes, “Nobody can prevent the worst from happening, [but] we can do our best to imagine where things can go wrong and do what we can to mitigate the risks.”

Responsible tech looks to minimise harm.

Open source has inherent diversity, equity, and inclusion problems. Stack Overflow’s 2020 survey found that “developers who are men are more likely to want specific new features, while developers who are women are more likely to want to change norms for communication on our site,” a request that’s frequently paired with the terms “toxic” and “rude.” The 2017 GitHub survey, of which just 3% of respondents were women and 1% were nonbinary, found half of all respondents had “witnessed bad behavior,” which ranged from rudeness, name-calling, and stereotyping, all the way to stalking and outright harassment.

In addition, the open-source community is distributed and made up predominantly of volunteers, or of contributors external to the maintaining organisations, which makes it inherently much harder to control. Successful open-source projects can also grow to a point where unintended consequences and use cases become the norm—because when you reach that Facebook-sized user base, there are no edge cases.

Product designer Kat Fukui, speaking at a QCon event, described her then-employer GitHub as “the biggest platform for developers to connect and collaborate.”

But that wasn’t what it was built for. It began by fulfilling a technical need, not a human one.

“It wasn’t really the original intention for GitHub to be a social network but here we are, and it really is because a lot of conversation and human interaction happens around building code,” Fukui said.

As GitHub scaled to become the largest host of source code in the world, however, its focus pivoted and users demanded collaborative features.

Fukui was on GitHub’s community and safety team, which over the years has grown around these responsibilities:

  • Making sure the communities are healthy.
  • Doing feature reviews to make sure they don’t introduce new abuse vectors. 
  • Fixing technical debt.
  • Documenting and amplifying the team’s own work so unintended consequences aren’t repeated.

The team was built because, as Fukui said, “When GitHub was founded 10 years ago, we weren’t necessarily thinking about the ways that code-collaboration software could be used to harm other people.”

So now the job of this cross-functional team is to get creative and figure out the ways any feature could be used to harm someone.

According to Fukui, “Building user safety into the foundation of technology is everyone’s responsibility. Whether you’re a manager, an individual contributor, designer, researcher, [or] engineer, it is everyone’s responsibility... because every platform or feature can and will be abused.”

For every feature review, we have to ask: How could this feature be used to harm someone?

And of course it’s not just about the human impetus to create a safe space. If you don’t have the resources in place to act quickly when abuse is reported, you will lose your users.

Fukui’s team applied the Agile practice of user stories. User stories are very short descriptions of a feature told from the perspective of a user: “As user type X, I want Y to happen, for Z reason.”

On the GitHub community team, user stories are leveraged to identify stress cases. Sara Wachter-Boettcher expands on creating user stories for stress cases in her book Technically Wrong, specifically to help create empathy for how users feel in frightening situations, like escaping abuse.

Fukui says the intentional use of the term “stress case” humanises edge cases.

“Even if it happens rarely, stressful edge cases have a larger negative impact and you can quickly lose trust a lot faster, especially if it happens publicly,” she said. 

She gave the example of a user story for a stress case that came up when users requested private places to chat within GitHub. As we know, direct messages can amplify harassment.

In this user story, Fukui illustrated a user trying to escape harassing DMs from an abusive relationship. The team traditionally asks the following questions, answered here for this particular stress case:

  • What problems are they experiencing? It’s really easy to create “sock puppet accounts” that are for the sole purpose of spam or abuse.
  • How are they feeling? Powerless, and fearing for their own personal safety and even that of those around them.
  • What does success look like? Support has to have the tools to minimise the impact of abuse at scale and users have to have the power to block or turn off DMs. 

User stories aren’t just great for creating user empathy, finding feature gaps, and aligning an organisation around feature releases; they also bring in different viewpoints and specialised knowledge.

User stories also act as a point of validation. If you are, for example, working on other features that could open up new vectors of abuse, you can reference this user story in your future decision making.

You can then knit your user stories together to build a great technical foundation in the form of safety principles or guidelines. Or you can start with those, like this blog post did, and then hold all user stories up to those standards.

Like all things within Agile lifecycles, it’s essential to constantly reflect, review, and iterate.

