
How Your CI/CD Pipeline Reflects Your Organisation’s Structure

 

For a long time, engineers have noticed that system architectures bear a strong resemblance to the structures of the organisations that design and build them. 

In a 1968 paper, the computer scientist Melvin Conway observed that organisations which design systems are constrained to produce designs which are copies of the communication structures of those organisations. That’s known as Conway’s Law, and while it applies to a broad range of systems and organisations, let’s explore here how it applies to the ways in which organisations deliver software to production.

The steps and procedures performed after pushing code to a repository are often seen as a pipeline, in which a flow of changesets is verified on its way to its final destination: a production system. The way your pipeline is structured, whether deliberately designed or not, can reveal some interesting things about the communication channels and organisation of the teams involved in the process.

What your Continuous Integration steps can tell you

Continuous Integration (CI) is about constantly integrating small changes into the existing codebase while maintaining the quality and integrity of the overall product. Whether you build large applications or small service components, a good CI pipeline should verify as quickly as possible that a patch works as intended, and should give feedback to the right team when a changeset doesn’t.
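As a minimal sketch of what such a fast feedback stage could look like, the hypothetical script below runs a couple of checks in order, from fastest to slowest, and stops at the first failure so the author of the change gets an unambiguous signal straight away. The `ruff` and `pytest` commands are only placeholders; substitute whatever checks your project actually uses.

```python
# ci_check.py: a minimal sketch of a fast CI verification stage (hypothetical).
import subprocess
import sys

# Ordered from fastest to slowest, so the most common failures surface first.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("unit tests", ["pytest", "-q", "tests/unit"]),
]

def main() -> int:
    for name, command in CHECKS:
        print(f"--- running {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: report the broken step and stop, so the team that
            # owns the change gets feedback in minutes rather than hours.
            print(f"--- {name} failed (exit code {result.returncode})")
            return result.returncode
    print("--- all checks passed, the changeset is ready to merge")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```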

At this point, if the steps include any sort of human intervention, that intervention will slow the process down, and the overall flow of changes will come in bigger batches. For instance, another team member may have to peer-review the code, or run a final verification test, manual or scripted, to approve and merge the change. In smaller, co-located teams, discussions about code design tend to happen more informally: over the desks, during the coffee break, or in front of a whiteboard.

The branching and merging strategy used by the team also influences when and how often code gets integrated and tested as a whole. Development branches delay integration, but they can be a useful protection when teams are large and individuals are not supposed to commit directly to the main line. That’s the case in open-source projects, where the pool of contributors is open-ended, people don’t know each other well, and communication is mostly asynchronous. In-house teams may use the same approach for changes that cross team boundaries.

For a single, small team, though, I’d question the benefits. It’s all about the right balance of ownership between individuals and codebases, and perhaps about a degree of trust between team members.

How you test software mirrors your team’s roles

Software testing is not a trivial subject. Testability depends on the design of good interfaces (functions, classes, HTTP APIs, GUIs, and so on). If one group writes the application code and another group writes the test code (integration, functional, or at any other level), responsibility for keeping the application design sound is split between the groups or, worse, falls to no one. It’s a bug in the design of the development teams.
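As a toy illustration of that point, consider the hypothetical function and tests below: a small, well-defined interface is cheap to test, and keeping the tests right next to the application code keeps a single team responsible for both the design and its verification.

```python
# pricing.py: a hypothetical example of a small, well-defined interface,
# with its tests kept alongside it and owned by the same team.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The tests exercise the interface directly; run them with `pytest pricing.py`.
def test_applies_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```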

Where components are not covered by automated tests, manual verification slows development even when it is done by ‘efficient’ individuals. The overall product suffers in the long term, because application code that is hard to test usually lacks good modularity and clear interfaces, and is therefore hard to change, refactor, and enhance.

This becomes even more pronounced in large applications, or in smaller components with complex interactions. In those cases, delivering a software change can become a real enterprise endeavour.

Maybe the team is constantly under pressure to deliver features and doesn’t have enough time to spend on tests. Maybe it’s focused on story points and velocity, where writing tests has little impact on the metrics observed by managers. If individuals are not rewarded for writing reliable software, accountability for the growing systems will decrease as complexity goes up.

Continuous Delivery processes need less coordination

Teams developing mobile apps or software libraries usually release new versions at a certain cadence, so end users don’t get annoyed by constant software updates. Teams working on server-side software, however, don’t have this concern: small updates can be rolled out at any time, with little or no impact on users.

In organisations with a number of development teams, where each team can release updates independently at any time, with little external coordination, as in a modern Continuous Delivery (CD) process, there’s a good chance the deployments involve a fair degree of automation, to the point where the team itself can drive the procedure.
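To make that concrete, here is a hypothetical sketch of what a self-service deployment might look like for a team running on Kubernetes: no ticket, no hand-over, just a script the team owns and runs (or wires into its pipeline). The deployment name `myapp` and the registry address are assumptions made up for the example, not a recommendation.

```python
# deploy.py: a hypothetical, minimal self-service deployment script.
# Assumes a Kubernetes deployment called "myapp"; adapt names and registry to taste.
import subprocess
import sys

def run(command: list[str]) -> None:
    print(f"--- {' '.join(command)}")
    subprocess.run(command, check=True)  # stop immediately if a step fails

def deploy(image_tag: str) -> None:
    # Point the running deployment at the new image...
    run(["kubectl", "set", "image", "deployment/myapp",
         f"myapp=registry.example.com/myapp:{image_tag}"])
    # ...and wait for the rollout to complete, failing loudly if it doesn't.
    run(["kubectl", "rollout", "status", "deployment/myapp"])

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "latest")
```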

On the other hand, if coordination with members of operations teams is required days or weeks in advance, chances are that deployments are risky manual procedures involving a large group of people. They probably require management approval, follow-up from all the teams involved, and some application re-testing.

Building a better delivery process

It’s striking how Conway’s Law can be observed in all kinds of organisations that design and build systems. In software delivery, the connection between how responsibility is divided among the teams delivering software and the resulting design of the CI/CD pipeline is quite evident, which suggests that improvements in this area need to cover more than technology choices.

With these observations in mind, managers and technical leaders can use different team structures and collaboration channels to enable more flexible software architectures and build more efficient delivery processes.

For more, request our free eBook, The Cloud Native Attitude.

