If tech is building the future, do we ever stop to think about what sort of future we’re building? We focus on moving fast, breaking stuff and continuously delivering, but do we take time to consider who could be excluded from what we’re making?
Maybe the tool we’re making won’t be used in the way we’re intending. Or maybe it will be used by far more people than we could possibly imagine—for instance, what could happen if it scaled to the 2.80 billion monthly active users of Facebook?
Building technology responsibly isn’t just about security, resiliency and uptime. It’s also about environmental impact. And privacy. Most importantly, we have to consider our social impact, and what we are asking our users to give their consent to—wittingly or unwittingly.
As tech becomes increasingly personal — in our homes, our work, our cars, and even our bodies — then our responsibility as its creators must also increase. And as the tech industry continues to face a huge talent gap, we have more job security than most to be able to speak up and ask questions. We should use that privilege. Everyone within an organisation owns the consequences of what we’re building and even what we’re choosing to connect to.
In this piece, we share some simple reflections, mechanisms, and Design Thinking techniques for incorporating ethical considerations into your sprint cycles.
When Kim Crayton, the anti-racist economist, was looking for an organisational strategy to scale belonging and psychological safety in the knowledge economy, she developed four Guiding Principles:
If we as teams and tech organisations establish these axioms as a foundation, would it change the way we look at what we’re building on top of it? How often are we prioritising those experiencing the most risk? In order to really create technology that not only can but should scale, Crayton says we have to learn to become comfortable with being uncomfortable.
And we certainly have to work alongside users, or at least find the teammates closest to them—our colleagues in sales, customer success, and developer experience—and bring them to the strategy table. Don’t rely solely on your current users: test and build alongside a much larger test base, designing products intentionally for a broader population that could eventually become your users.
Of course the easiest way to follow these guidelines is by having a diverse, equitable, and inclusive team to mitigate the risk of building something no one will use — or that will be used in a way you hadn’t even considered. Starting with Crayton’s principles is a great way to remember the power and risk that comes with building technology.
Doteveryone, a former UK-based responsible technology think tank, created a useful technique called consequence scanning. This is an Agile practice for product teams to continuously make sure that what an organisation is building aligns with its culture and values. It’s intended to be run during initial conception of a product, roadmap planning, and feature creation.
You start by answering the following about your product:
We often spend a lot of time setting goals and maintaining a backlog of intended features. If we use behaviour-driven development, we also dedicate time to those intended consequences and the user experience; the unintended consequences, however, elude us. Doteveryone has found that technology usually shares one or more of these six unintended consequences:
We tend to treat consequences as negative, but, as you can see above, not all of them are. They just often weren’t the intended outcome. It’s important to brainstorm possible consequences and to make a plan to monitor, measure, and potentially remedy them.
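As a rough illustration, here is a minimal sketch in Python of how a team might log the output of a consequence-scanning session so that every consequence carries its own monitoring, measurement, and remedy plan. The structure and field names are our own assumptions for illustration, not part of Doteveryone’s kit:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Sentiment(Enum):
    POSITIVE = auto()
    NEGATIVE = auto()
    NEUTRAL = auto()

@dataclass
class Consequence:
    """One outcome surfaced during a consequence-scanning session."""
    description: str
    intended: bool        # was this outcome part of the plan?
    sentiment: Sentiment  # not every unintended consequence is negative
    monitoring_plan: str  # how the team will watch for it
    measurement: str      # which signal tells us it is actually happening
    remedy: str           # what the team will do if it materialises

# Hypothetical backlog entry, for illustration only
scan_log = [
    Consequence(
        description="Power users script the public API to automate bulk actions",
        intended=False,
        sentiment=Sentiment.NEUTRAL,
        monitoring_plan="Review API usage patterns at each roadmap planning session",
        measurement="Share of write requests coming from automated clients",
        remedy="Publish rate limits and an officially supported automation path",
    ),
]
```

Keeping entries like these in the backlog alongside intended features makes it harder for unintended consequences to slip out of view between roadmap planning sessions.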
Wellcome Data Labs’ Aoife Spengeman has developed a complementary workshop in pursuit of ethical product development, applying it to evaluating the lab’s own algorithms. This process categorises unintended consequences as:
Wellcome Data Labs used this exercise to evaluate a machine-learning tool developed to help internal teams gain insights into how Wellcome-funded research is cited, with the aim of measuring the research’s impact on the public.
The team uncovered a major risk: users could misunderstand how the tool works and make assumptions about its accuracy, comprehensiveness, and the significance of its output, which could, among other things, decrease diversity, inclusion, and funding for some projects.
For example, papers written in the Global North are disproportionately cited. Without Wellcome providing that context, this stress case could cause funders using the tool to perpetuate these systemic inequities, only funding the most cited projects. Furthermore, the tool is more accurate for English language and science-centric journals. A user could misuse the tool, believing that research in other languages or disciplines is less influential, when in fact it's just less likely to be picked up by the algorithm.
The outcome of this exercise was to work towards making sure users have a better understanding of how the algorithms work and what their limitations are.
As Spengeman writes, “Nobody can prevent the worst from happening, we can do our best to imagine where things can go wrong and do what we can to mitigate the risks.”
Open source has inherent diversity, equity, and inclusion problems. Stack Overflow’s 2020 survey found that “developers who are men are more likely to want specific new features, while developers who are women are more likely to want to change norms for communication on our site,” a request that’s frequently paired with the terms “toxic” and “rude.” The 2017 GitHub survey, in which just 3% of respondents were women and 1% nonbinary, found that half of all respondents had “witnessed bad behavior,” which ranged from rudeness, name-calling, and stereotyping all the way to stalking and outright harassment.
In addition, the open-source community is distributed and made up predominantly of volunteers or contributors external to the maintaining organisations, which makes it inherently much harder to control. Also, successful open-source projects can grow to a point where unintended consequences and use cases become the norm—because when you reach that Facebook-sized user base, there are no edge cases.
At a QCon event, product designer Kat Fukui described her then-employer GitHub as “the biggest platform for developers to connect and collaborate.”
But that wasn’t what it was built for. It began as a way to fulfill a technical need, not a human one.
“It wasn’t really the original intention for GitHub to be a social network but here we are, and it really is because a lot of conversation and human interaction happens around building code,” Fukui said.
As GitHub scaled to become the largest host of source code in the world, however, the focus pivoted as users demanded collaborative features.
Fukui was on GitHub’s community and safety team, which over the years has grown around these responsibilities:
The team was built because, as Fukui said, “When GitHub was founded 10 years ago, we weren’t necessarily thinking about the ways that code-collaboration software could be used to harm other people.”
So now the job of this cross-functional team is to get creative and figure out the ways any feature could be used to harm someone.
According to Fukui, “Building user safety into the foundation of technology is everyone’s responsibility. Whether you’re a manager, an individual contributor, designer, researcher, [or] engineer, it is everyone’s responsibility... because every platform or feature can and will be abused.”
For every feature review, we have to ask: How could this feature be used to harm someone?
And of course it’s not just about the human impetus to create a safe space. If you don’t have the resources in place to act quickly when abuse is reported, you will lose your users.
Fukui’s team applied the Agile practice of user stories. User stories are very short descriptions of a feature told from the perspective of a user: “As user type x, I want y to happen, for z reason.”
On the GitHub community team, user stories are leveraged to identify stress cases. In her book Technically Wrong, Sara Wachter-Boettcher expands on writing user stories for stress cases, specifically to help build empathy for how users feel in scary situations, like escaping abuse.
Fukui says the intentional use of the term “stress case” humanises edge cases.
“Even if it happens rarely, stressful edge cases have a larger negative impact and you can quickly lose trust a lot faster, especially if it happens publicly,” she said.
She gave the example of a user story for a stress case that came up when users requested private places to chat within GitHub. As we know, direct messages can become a vector for harassment.
In this user story, Fukui illustrated a user trying to escape harassing DMs from an abusive relationship. The team then asks the following questions, with answers for this particular use case:
User stories aren’t just great for creating user empathy, finding feature gaps, and aligning an organisation around feature releases; they also bring in different viewpoints and specialised knowledge.
User stories also act as a point of validation. If you are, for example, working on other features that could open up to vectors of abuse, you can reference this user story in your future decision making.
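To make this concrete, here is a minimal sketch in Python of how stress cases might be captured in the same structure as ordinary user stories so they can be pulled up in future feature reviews. The record shape and the example wording are illustrative assumptions, not GitHub’s actual tooling or Fukui’s exact story:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserStory:
    """'As <user type>, I want <outcome> to happen, for <reasoning>.'"""
    user_type: str
    outcome: str
    reasoning: str
    # Stress cases are user stories too, flagged so they resurface
    # whenever a related feature is reviewed.
    is_stress_case: bool = False
    abuse_vectors: List[str] = field(default_factory=list)  # how the feature could harm someone
    mitigations: List[str] = field(default_factory=list)    # what the team commits to building

# Hypothetical stress case, loosely based on the DM scenario above
dm_stress_case = UserStory(
    user_type="a user receiving harassing direct messages from an abusive relationship",
    outcome="to block the sender and control who can contact me privately",
    reasoning="so I can keep collaborating without being exposed to further abuse",
    is_stress_case=True,
    abuse_vectors=["private chat concentrates harassment out of public view"],
    mitigations=["blocking", "reporting", "restricting DMs to trusted collaborators"],
)
```

Because a stress case is just a flagged user story, any new feature that touches the same surface can be checked against its abuse vectors and mitigations before it ships.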
You can then knit your user stories together to build a great technical foundation in the form of safety principles or guidelines. Or you can start with those, like this blog post did, and then hold all user stories up to those standards.
Like all things within Agile lifecycles, it’s essential to constantly reflect, review, and iterate.