Accountability Sinks and Cries of Despair
Musings after reading Dan Davies' amazing 'The Unaccountability Machine'
Finally finished reading Dan Davies' ‘The Unaccountability Machine’ and my mind is whirring. Wondering whether many of our organisational control functions, such as Health and Safety, are little more than accountability sinks.
Where decision making has been delegated to the ‘outcome of a process’.
And wondering too whether shock events and awful trends in mental health and suicide and employee engagement and presenteeism are a cry of despair that our systems are not delivering what we need.
I’ve captured my rambling musings here, specifically touching on accountability sinks, the law of requisite variety, and systems and cries of despair.
Do let me know your thoughts.
As a note of caution, Davies is an economist and the book delves deeply into cybernetics. I know nothing about these fields, but I am an advocate for the importance of diversity and learning from other disciplines. That said, I am fully accountable (see what I did 😊) for any errors or misunderstandings. Please do point them out to help me learn.
Accountability Sinks
A decision with no real owner had been created because it was the outcome of a process. This process of there being ‘nobody to blame’ is the definition of what constitutes an accountability sink. (Davies)
I could be accused on occasion of being vocal about my frustration with health and safety professionals who drive blind compliance and police adherence to process above all else. Davies has given me some helpful distinctions and frames to better understand my frustration.
His argument is that over the last century, accountability has atrophied, leading to a fundamental change in the relationship between decision makers and those ‘decided upon’. This atrophy is characterised by the way our institutions are run – not by human beings responding to individual circumstances, but by processes and systems operating on standardised sets of information.
He says, ‘The fundamental law of accountability is ‘the extent to which you are able to change a decision is precisely the extent to which you can be accountable for it, and vice versa.’’
When you are unable to vary a decision, you have in effect an accountability sink. Organisations create these ‘sinks’ by implementing rules and processes to be followed at any and all cost. Of course, rules and processes can be efficient and effective, but when there is no process or way of making an exception based on feedback that sits outside the assumptions made in the creation of the rules, then no-one is accountable.
We see this in safety a lot, and I imagine in other organisational ‘control functions’, where we create these generalised rules and police people to follow them blindly without ensuring we have in place the mechanisms to allow people to make exceptions based on the feedback they are receiving from the system.
There was a moment where I was literally cheering as Dan described the experience of ‘sinks’. It reminded me of a recent frustrating experience that I’m sure we can all relate to.
A parcel had not been delivered and I’d received a notice saying delivery had been attempted and that a postcard had been left with a reference number and what to do next. Except I’d been in the whole day: no delivery had been attempted and no postcard left. After two days of using the app to try to get the parcel re-delivered, I set about finding a human being to speak to. Finding a number to call was challenging in itself – as if the system did not want me to speak to a human. Eventually I ended up speaking to this absolute gem of a human being.
Who told me (very nicely and very apologetically) he could do two things: prioritise redelivery or get it sent elsewhere for me to collect. Could I speak to his manager to complain about the fact no delivery had been attempted? No – the procedures didn’t allow for that. Was there anything else I could do? No. Those were the only options the protocols allowed.
After two further failed delivery attempts and more phone calls, the parcel finally arrived. No-one was accountable for the failure; the protocols had been followed, and I was frustrated and annoyed but with no way of doing anything with that.
A perfect accountability sink, and I imagine exactly what the organisation had intended, possibly because the assumptions on which the procedures were built did not account for somebody not in fact attempting delivery when they said they had, and/or not leaving a postcard with details of what to do.
The decision makers did not have to own up to any failure or deal with any upset; the decided upon had to live with the situation with no recourse or any semblance of accountability. And the gem of a human being made his living being gracious with people in the absence of accountability.
Accountability sinks must also be soul destroying for those we’re asking to blindly follow and enforce the rules.
I recall a story of a front-line housing officer who had a tenant suffering from poor mental health and addiction issues. The tenant had on a few occasions set fire to his flat because he loved chips. He’d put the deep fryer on, fall asleep, and start or nearly start a fire.
He was on the brink of being evicted.
This angel of a frontline worker, breaking all protocol, purchased (I imagine from their own money) an air fryer for the tenant. Problem solved – safe (and healthy) chips and no fire risk.
What might the world look like if we allowed and trusted human beings to regulate the system by deviating from protocols when feedback said this was appropriate?
The Law of Requisite Variety and the weighting of information
We must consider accountability sinks in the context of the law of requisite variety and the weighting of information. Accountability sinks have resulted from the ‘managerial revolution’, where we have proceduralised the world of work to such an extent that we’ve lost the requisite variety to deal with the complexity of what the system is exhibiting.
Decision-making systems break down when the variety of the environment doesn’t match the means of regulation; the system will drift out of control.
I’ve come across the ‘law of requisite variety’ before, but not in the context of cybernetics, and this was pretty eye-opening to me. Fundamentally, anything that aims to be a ‘regulator’ of a system must have at least as much variety as that system. At a very simple level, a train can go backwards and forwards, so needs only a single handle to regulate direction. But a car needs a steering wheel to regulate the variety of direction options available to it.
As the world becomes more complex, how many of our systems have the requisite variety needed to match the complexity of the systems they regulate? Do we have the requisite variety to regulate systems for emergence? Having requisite variety is inextricably linked with the information available to the regulator of the system.
According to Davies, if a manager doesn’t have information handling capacity at least as great as the complexity of the thing they’re in charge of, control is not possible and eventually the system will become unregulated.
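To make the law more concrete for myself, here is a tiny toy sketch (my own illustration, not from the book; the disturbance types and numbers are entirely made up). A regulator that can only counter some of the kinds of disturbance its environment throws at it lets the system drift; give it as much variety as the environment and the drift is absorbed.

```python
import random

# A toy illustration of the law of requisite variety (my own sketch, not from Davies;
# the disturbance types and numbers are made up). The environment throws six kinds of
# disturbance at the system. Every disturbance the regulator cannot counter pushes an
# 'essential variable' (think: safety margin, service quality) further off target.

DISTURBANCES = ["late", "lost", "damaged", "wrong_address", "no_card_left", "not_attempted"]

def run(regulator_responses, days=100):
    drift = 0  # how far the system has moved from where we want it to be
    for _ in range(days):
        event = random.choice(DISTURBANCES)
        if event in regulator_responses:
            drift = max(0, drift - 1)  # a matching response absorbs the disturbance
        else:
            drift += 1                 # no matching response: the disturbance accumulates
    return drift

random.seed(1)
print("drift with insufficient variety:", run({"late", "lost", "damaged"}))  # regulator variety < environment variety
print("drift with requisite variety:   ", run(set(DISTURBANCES)))            # regulator variety matches the environment
```

With only three responses the drift keeps accumulating over time; with the full set of responses every disturbance is absorbed and the drift stays at zero.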
But the issue is we both reduce and weight the information coming in that we use to control many of our organisational systems, and this process of reduction and weighting is also a decision about what is not taken into account.
(As a sidenote, Davies speaks a lot to the reliance on shareholder value as the most valued information in our current systems and the consequences of this, which I’m not touching on here but found incredibly insightful).
Davies argues that the consulting industry is largely a response to the need for more information in the system: consultants are hired to help match the information to the complexity of the system. But consultants’ ability to provide this information can be compromised, as consultants need to win more work, so may not give accurate information or bad news for fear it will lead to less work in the future.
Regarding our health and safety decision-making systems, this has made me think deeply about whether we have the requisite variety to regulate the current complexity of our systems.
On one level I wonder if health and safety has not become an accountability sink, where no-one is accountable and policies and procedures are relied on to make decisions. This keeps the lawyers happy and the organisations protected, and when things fail, we blame the front-line individuals who deviated, likely with good intention, in the face of the feedback the system was giving them.
I suspect much of the current thinking on safety and human performance, such as work as imagined, prescribed and done, or organisational drift, could be viewed in the light of individuals who have (or don’t have) sufficient requisite variety deviating from prescribed ways of working based on feedback from the system. How could we better ensure requisite variety and then create systems that allow for deviation based on feedback from the system?
I wonder too about what information we weight and control in making health and safety decisions. Despite all the rhetoric, how much have we really moved away from simplistic lagging-indicator metrics as the primary information we use to tell us about the state of the system? How much have we really moved towards considering stories, narratives and metaphors as valid information? Or towards considering risk scenarios and hazard combinations, rather than isolated, piecemeal consideration of the risks posed by individual hazards?
On Systems and Cries of Despair
I realise I’ve used the term system without defining it, and to in some way atone for that, I want to end with some reflections on what I learned from Davies’ book about this. He draws on the work of Stafford Beer, who I had never heard of before and am now completely fascinated by.
Stafford Beer, 1990. Photo: Universitätsarchiv St.Gallen, CC BY-SA 4.0.
Beer coined the term ‘the purpose of a system is what it does’ (POSIWID). As Davies says, the only reason we may think that the purpose of an oil company is to destroy human life on the planet Earth is POSIWID – that is what they do. It’s the inevitable outcome of the system as it is currently functioning.
A genuinely complex system is one in which you cannot hope to get full or perfect information. Rather than try to use a mixture of partial information, preconceived theories or guesswork to understand the intricacies of the system, accept that the system will keep its secrets and observe its behaviour. Watch and see what happens. Treat it as a black box: systems don’t make mistakes, they don’t have desires, they don’t conspire – they are working within the structures that make the outcomes they produce inevitable.
For a system to be viable, five elements are needed according to Stafford Beer’s Viable Systems Model. These are now massively summarised here to make my final point.
System 1: Operations. The part of the system that does things. E.g., a platoon.
System 2: Regulatory. This part is about preventing clashes and managing conflict. E.g., a quartermaster to assign equipment, a battle plan, etc.
System 3: Optimisation and integration (here and now). Directing the management of each individual operation. E.g., the battlefield commander. This is where you find management jobs. This is about achieving a purpose.
System 4: Intelligence (there and then). This part of the system looks at the environment beyond the one System 1 interacts with – looking at the future and ensuring feasibility into the future.
System 5: Philosophy or Identity. This part of the system is concerned with the identity-creating function of the system; it is also about managing excess variety.
So, there are these five elements of a viable system (and to repeat, the above is a dangerous oversimplification). They are not all nicely and neatly packaged, nor do they follow company diagrams or job descriptions. Many systems are nested, and it’s possible, for example, that one person or function could fulfil more than one element.
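Purely to check my own understanding, here is a rough sketch of how these five elements might be written down as a simple data structure, including the nesting (the battlefield examples echo the ones above; this is my own illustration, not a formal model from Beer or Davies):

```python
from dataclasses import dataclass, field

# A rough sketch of Beer's Viable Systems Model as a data structure - purely to check
# my own understanding, not a formal model. Note that viable systems nest: each
# operational unit in System 1 can itself be a whole viable system.

@dataclass
class ViableSystem:
    identity: str      # System 5: philosophy / identity
    intelligence: str  # System 4: the 'there and then', scanning the wider environment
    optimisation: str  # System 3: the 'here and now', directing individual operations
    regulation: str    # System 2: preventing clashes between operations
    operations: list["ViableSystem"] = field(default_factory=list)  # System 1: the units that do things

platoon = ViableSystem(
    identity="hold the ridge",
    intelligence="scouts watching the flank",
    optimisation="platoon commander",
    regulation="quartermaster assigning equipment",
)

battalion = ViableSystem(
    identity="win the campaign",
    intelligence="intelligence officers looking ahead",
    optimisation="battlefield commander",
    regulation="battle plan and staff officers",
    operations=[platoon],  # nesting: the platoon is itself a viable system
)
```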
Obviously, how information flows will dramatically impact the viability of the system, and it’s very important to have flows and channels of communication, particularly ones that link operations to the higher-level function (System 5: Philosophy and Identity). Because when this information flow is working, it can deliver ‘red handle’ signals that all is not well.
But, as we’ve seen earlier, we have cut off these communication channels to create more easily managed information channels, created accountability sinks where decisions are deferred to policies and procedures, and lost the requisite variety needed to regulate, adapt to, manage or hold the complexity of the systems we’re now operating in.
And the predominant thing I was left with from Davies’ book is that, in the face of these accountability sinks and the lack of requisite variety needed to respond to and regulate complex systems, maybe the intent or identity of our macro systems has become so wrong and off-tune that they are just not fulfilling what we as humans need.
But the communication channels have been cut off and the system isn’t responding to this. And we have all these accountability sinks where no-one can respond appropriately to the feedback they are getting.
So, maybe, just maybe, these ‘shock’ events and election results and terrible employee engagement scores and presenteeism and mental health crises and suicide levels are a replacement communication channel that is crying in despair. Like an existential scream into an abyss where no-one will respond or be accountable. That is saying the current system doesn’t work. The purpose (remember POSIWID) isn’t working for me.
And we should stop looking inside the black box and start responding to this communication channel – to this cry of despair – and notice all the red flags and handles it is communicating to us, and begin to be accountable for facing into this world.
More information
I first came across Dan Davies when he wrote about my book Catastrophe and Systemic Change, in a piece called ‘substacking tolerances’.
His reflections on it fascinated me, especially around certification.
Certification is an information-saving technology. It attenuates information; rather than checking the exact properties of a building material and doing a load of engineering calculations, you look at the certificate that it’s fit for a particular purpose.
But in a lot of contexts, you have to assume that the information which has been attenuated is as unfavourable as it could possibly be. Because lots of systems set up incentives which ensure that rules will be barely obeyed to the letter, which causes stacked-tolerance problems if the rest of the system is based on the assumption that they will be obeyed in spirit with a margin of safety.
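To make the stacked-tolerance point concrete, here is a small worked example of my own (the parts, numbers and opening size are entirely made up, not from Davies): if a design assumes the rules will be obeyed ‘in spirit’ with a margin, but every part only barely meets the letter of its certificate, the assumed margin disappears once the parts stack up.

```python
# A toy stacked-tolerance example (my own illustration, with made-up numbers).
# Ten components each have a nominal length of 100 mm with a permitted tolerance
# of +/- 0.5 mm, and the assembly is designed to fit a 1002 mm opening.

nominal_mm = 100.0
tolerance_mm = 0.5
n_parts = 10
opening_mm = 1002.0

# If parts cluster around nominal (rules obeyed 'in spirit'), the stack fits easily.
typical_stack = n_parts * nominal_mm
print(typical_stack, "mm vs opening", opening_mm, "mm -> fits:", typical_stack <= opening_mm)

# If every part sits at the unfavourable edge of its certificate (rules barely obeyed
# 'to the letter'), the same certified parts no longer fit.
worst_case_stack = n_parts * (nominal_mm + tolerance_mm)
print(worst_case_stack, "mm vs opening", opening_mm, "mm -> fits:", worst_case_stack <= opening_mm)
```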
For a review of The Unaccountability Machine, see this one by Diane Coyle (another economist, and also the editor of Catastrophe).
Thanks so much for reading! Would love to know your thoughts
Thanks so much for this Gill - as I think I said, I found your book on Grenfell extremely useful in helping me think about how health and safety systems work, and how they seem to be set up to create decisions with no owners. Which is a huge problem because the nature of these things is that they are set up to manage the average case - which is why they break down so often in the context of safety, which by definition is all about cases a long way from the norm!