Extent of Automation Caveat

Sacrifice automation only when it impedes the experts

While Dan North's recent Skills Matter talk on Where Is The Quality In Agile was as consummate as ever, the automation discussion afterwards was exhilarating. There has been a lot of chatter this year on whether automation can be safely sacrificed to reduce cycle time, and Dan surprisingly used a traditional project management tool to make a compelling argument for reducing automation.

The automation discussion has been rumbling along since James Shore said that acceptance testing tools "cost more than they're worth. There's a lot of value in getting concrete examples from real customers and business experts; not so much value in using natural language tools". Steve Freeman described how Forward prefer Blue-Green Continuous Deployment and server traffic monitoring over automated tests to verify new features. On the other hand, Ron Jeffries described his discomfort via Shuhari when he pointed out that "acceptance tests are at shu level and the bottom half of ha level. When your team is at or near the ri level, you'll have enough other rigour in place to safely drop them".

In the past, working with teams of varying agile and technical maturity meant that jettisoning automation practices felt fraught with danger, but what made Dan's argument so persuasive was his use of a risk impact/probability graph to measure the merits of automation in a given context.

Risk Impact vs. Risk Probability
By suggesting some low-probability, low-impact (green) situations where automation can arguably impede progress, and some high-probability, high-impact (red) situations where automation remains mandatory, Dan outlined a simple method to rationally appraise where the value of automation truly lies. This is a significant discovery that will aid any team assessing its working practices, suggesting that:

Extent of Automation = Evaluation of Risk
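
To make that appraisal concrete, here is a minimal sketch of how a team might score automation candidates against the impact/probability graph. The 1-5 scales, the green/red thresholds, and the example candidates are my own illustrative assumptions, not anything prescribed in Dan's talk.

```python
# A minimal sketch of risk-based appraisal of automation candidates.
# Scales, thresholds, and example candidates are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    probability: int  # likelihood of a defect slipping through without automation, 1 (low) to 5 (high)
    impact: int       # cost of that defect if it does slip through, 1 (low) to 5 (high)

    def risk(self) -> int:
        return self.probability * self.impact


def appraise(candidate: Candidate) -> str:
    """Map a candidate onto the green/red regions of the impact/probability graph."""
    score = candidate.risk()
    if score >= 15:
        return "red: automation remains mandatory"
    if score <= 4:
        return "green: automation may impede progress"
    return "amber: judgement call"


candidates = [
    Candidate("acceptance tests for payment processing", probability=4, impact=5),
    Candidate("UI tests for an internal prototype", probability=2, impact=1),
]

for c in candidates:
    print(f"{c.name} -> {appraise(c)}")
```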

However, this model rests on the same assumption as the above scenarios, one that instinctively leaves me averse to change: that the practitioner is an expert.

Far too often, I have observed the misguided abandonment of well-established team rules in the name of efficiency. For example, one team installed a local Git repository in front of an overloaded department-wide CI server to reduce their own wait time when checking code in. This solution immediately grated against The Theory Of Constraints and, as a side-effect, introduced Feature Branching by stealth. The team was not malicious in intent; rather, they were operating at the bottom half of the shu level and the upper half of ha, or to be blunt they were simply too immature as a team to make such a radical change to their automation practices. They should have investigated global optimisations to the department-wide CI server instead.

James has some excellent advice in The Art Of Agile Development and The Road To Mastery, saying that "rules can't anticipate all situations. When established conventions thwart your attempts to succeed, it's time to break the rules. First, you need to understand them and their reasons for existing. That comes with experience… embrace your ideals, but ground them in pragmatism". For me, this is the missing piece in the puzzle – that we need to gauge expertise as well as risk when assessing automation practices.

Extent of Automation = Evaluation of Risk and Team Mastery
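
A second sketch shows how mastery might enter that decision alongside risk. Again, the Mastery scale and the rule that only a ri-level team relaxes automation on low-risk work are illustrative assumptions of mine, not a prescribed formula.

```python
# A sketch that weighs team mastery as well as risk. The Mastery enum and
# the decision rule are illustrative assumptions, not a prescribed formula.

from enum import Enum


class Mastery(Enum):
    SHU = 1  # follows the rules
    HA = 2   # bends the rules
    RI = 3   # transcends the rules


def extent_of_automation(probability: int, impact: int, mastery: Mastery) -> str:
    """Low-risk work can shed automation, but only for a ri-level team."""
    low_risk = probability * impact <= 4  # same illustrative threshold as before
    if low_risk and mastery is Mastery.RI:
        return "consider dropping this automation"
    return "keep (or re-introduce) this automation"


print(extent_of_automation(probability=2, impact=1, mastery=Mastery.HA))
# -> "keep (or re-introduce) this automation": the risk is low, but the team is not yet at ri
```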

As experienced teams attain ri status and learn how best to break their own rules, automation will be gradually discarded in favour of a select few practices such as Continuous Delivery. However, the Extent of Automation Caveat is that whenever risk increases or team expertise degrades, the team will have to reflect upon their principles and possibly re-introduce stricter automation practices.