In Part 2 we dived into Eliezer Yudkowsky’s theory of Coherent Extrapolated Volition and examined the component parts of the idea that “our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”
Part of the journey towards applying that to a business is taking a point of view that allows us to see businesses as “living organisms that have their own sense of direction”. If we do that, we can understand a business’s values, desires and volition, and approach strategic planning in a way that maximises its chances of “doing what it wants to do”. Let’s explore that idea.
The Business as an entity (or: what if Nike were a person?)
How might we personify (not anthropomorphise) the company we work for in a way that would allow us to take steps towards understanding and extrapolating its volition? Such an endeavour starts with a quick detour via the idea of evolutionary purpose. Here’s Brian Robertson, founder of Holacracy, speaking to Frédéric Laloux in Reinventing Organisations:
“It’s us humans that can tune into the organisation’s evolutionary purpose; but the key is about separating identity and figuring out “What is this organisation’s calling?” Not “what do we want to use this organisation for, as property?” but rather “What is this life, this living system’s creative potential?” That’s what we mean by evolutionary purpose: the deepest creative potential to bring something new to life, to contribute something energetically, valuably to the world… it’s that creative impulse or potential that we want to tune into, independent from what we want ourselves.”
With that in mind, I don’t think it’s too much of a stretch (nor is it particularly uncommon) to imagine the business as an entity that could be thought of as “behaving” as though it has its own volition—a desire to act that’s not too dissimilar to the way that you and I choose our courses of action based on what Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex, describes as a “competence to control all [our] degrees of freedom, in ways that are aligned with our beliefs, values and goals…”
Seth sees competence to control as being “implemented by the brain not by any single region where ‘volition’ resides, but by a network of processes distributed over many regions in the brain.” Essentially, the singular feeling of being in control of a decision relating to a specific course of action is created by a chain of events and processes over which you appear to have increasing degrees of control as the point of action draws nearer. Seth, again, says we can think of this network as implementing three processes: “an early ‘what’ process, specifying which action to make, a subsequent ‘when’ process determining the timing of the action, and a late-breaking ‘whether’ process, which allows for its last-minute cancellation or inhibition”.
We can, therefore, view ourselves (as the staff of the business) as being in collective control of the what, when and whether. Assuming that is the case, it is our responsibility to ensure that we are sufficiently aware of—and signed up to—the beliefs, values and goals of the business to ensure that the action it takes is aligned with those things: with its evolutionary purpose.
We’ve explored some of these ideas around the importance of values and organisations previously on the CS blog in the context of remote organisations, but it is a topic worth revisiting. Nike is one example of an organisation that understands this. Its mission statement is to “Bring innovation and inspiration to every athlete* in the world”, with “if you have a body, you’re an athlete” being the asterisk, and one of the values that sits behind it is to “build a more diverse, inclusive team that reflects the athletes* and communities where we live, work and play.”
That all sounds great, but talking about values is always easier than following them, particularly when the stakes are high. When Colin Kaepernick took the knee during the national anthem at NFL games, Nike didn’t drop him, despite plenty of criticism, including from then-President Trump and people burning their Nike shoes on social media. Later, Nike ran an advert featuring Kaepernick and the slogan “Believe in something, even if it means sacrificing everything”, demonstrating a thorough understanding and alignment of the beliefs, values and goals of the organisation with those of the people who are responsible for what it should do, when it should do it and whether it should do it. I wish I could claim that Nike slogan pun was intentional.
It would have been easy to avoid controversy (of one kind, at least) by dropping an athlete who was no longer playing professional American football and claiming neutrality. Instead, the organisation acted according to its stated beliefs, goals and values, saw its stock price climb by 5% and, in the days following the release of the aforementioned advert, sales rise by 31%.
Initial dynamic over strategy
In the CEV essay, Yudkowsky sets out some “guiding principles” for the application of an initial dynamic, which are also useful when applying something like CEV to strategic planning. I’ll do my best to translate them to this analogy but, once again, please read the paper in its entirety and make your own interpretations and judgements.
1. Defend humans, the future of humankind, and human nature.
In the context of friendly AI, this deals with existential risk and the nature of being a human. It concerns the weighting of current and future life—the potentially trillions of humans that might exist in the future against the relative handful that are alive now—and the duty of those people to behave in a manner that allows those future generations to revel in humanity’s theoretical cosmic endowment. It does not, despite what some people currently building electric cars and sending them into space appear to believe, suggest that those theoretical people should be prioritised at the expense of those in the present.
Instead, it suggests that any initial dynamic should be as fulfilling and scrutable for those in the present—those who work for the business now—as it is for those who come in the future. Those of us in the present carry “the seed complexity”, and if we’re miserable, how can we expect those who come after us not to be? A sure-fire way of guaranteeing the future doesn’t exist is to make the present a nightmare, especially in the context of a company’s success and continued existence.
Perhaps we could suggest this principle becomes: “Defend human nature and your humans, and the future of your business will be assured.”
2. Encapsulate moral growth.
We have addressed this issue already, but “We need to extrapolate our moral growth far enough into the virtual future to discover our gasp of horrified realisation when we realise that the common modern practice of X is a crime against humanity.” You shouldn’t have to try particularly hard, as the owner of an oil company, to imagine a future in which the idea of sucking liquefied dinosaurs out of the ground and burning them for the purposes of locomotion would be viewed, in retrospect, as entirely unconscionable. Indeed, is it too hard to imagine people looking at photographs of a butcher’s shop and wondering what on earth the people of 2023 were doing hanging corpses in the windows of shops? And I say these things as both a petrolhead and an omnivore!
3. Humankind should not spend the rest of eternity desperately wishing the programmers had done something differently.
Again, we can wind this one in a bit: we don’t have to worry too much about eternity. But this is about making sure that whatever initial dynamic we settle on is changeable and adaptable and—as in the first rule—acceptable both to those in the present and those who might exist in future. “If we scream, the rules change; if we predictably scream later, the rules change now.”
4. Avoid hijacking the future of mankind.
I’ll hand over the description of this to Yudkowsky with one small modification: “~~Friendly AI programmers~~ Strategists are ordinary schmucks and do not deserve, a priori*, to cast a vote larger than anyone else. We are talking about people stuck with a job that is frankly absurd, doing their best not to be jerks.”
*relating to or denoting reasoning or knowledge which proceeds from theoretical deduction rather than from observation or experience.
5. Avoid creating a motive for modern-day humans to fight over the initial dynamic.
Whatever initial dynamic you set must encapsulate the needs of all those who are involved in its actualisation. This is clearly a tough one, since something that tries to serve everyone rarely serves anyone, but it is possible to define something that is open to interpretation without being vague. It comes down to the initial dynamic being clearly able to function as a way of letting direction be set by the company’s changing evolutionary purpose, rather than setting a direction in stone, a priori.
6. Keep humankind ultimately in charge of its own destiny.
See above.
7. Help people.
Customers, users, staff, your community, planet Earth. Don’t be evil.
The conclusion we can draw, and the thing we need to remember, is that the initial dynamic is as much about a healthy organisational culture as it is about having a strategy. One cannot work without the other.
Next time: Building the house
In the final blog in this series we’ll look at some things you can do to adhere to the guiding principles established in this part.
We’ll look, in some detail, at:
- How to view an organisation as an entity and understand its evolutionary purpose.
- How to avoid resistance to changes that need to be made to final goals.