By Jonathan West 

‘Hear no evil, see no evil’ was submitted by its authors Professor Eileen Munro and Dr Sheila Fish in September 2015.

There appear to be the following oversights in the submission.

High Reliability Organisations

One of the major suggestions in the paper is to examine how “High Reliability Organisations” in industries such as aviation and nuclear power achieve their levels of safety. Techniques mentioned include looking beyond the immediate cause of any error, and encouraging staff to highlight potential problems for action before a major failure arises. It is all very well looking at how these organisations achieve these levels of reliability, but the report’s analysis entirely neglects to examine why these industries decide such high reliability is necessary.

The answer, of course, is that a catastrophic failure (a passenger aircraft crashing or a major radiation leak at a nuclear power station) is almost impossible to cover up and often has a catastrophic effect on the victims of the accident, who will likely include personnel as well as innocent bystanders. The failure can also have a significant adverse impact on the company. The cost of achieving such levels of reliability can be high, and no organisation will consider that cost to be justifiable unless the cost to the organisation of failure is even greater.

Even where the cost of failure to an organisation is very great, it is only incurred if knowledge of the failure becomes public. Therefore, if a failure occurs but can be covered up, there is a huge temptation to do so, either by active cover-up of a known problem, or by avoiding implementing procedures that would encourage the identification of problems in the first place.

It is therefore not sufficient to point to other industries and suggest that organisations which supervise children should adopt “high reliability” techniques unless you are willing in some fashion to recreate the conditions which cause other safety critical industries to adopt “high reliability” techniques. This essentially means that a catastrophic failure (in the form of abuse within an institution that goes unrecognised for a long time) must be catastrophic to the organisation and its senior leadership, and not merely to the victims of the abuse. Only when the costs to the organisation of failure outweigh the costs of prevention will there be the incentive to take effective preventive measures.

The obvious approach is to legislate so as to support staff in reporting a concern about child protection on reasonable grounds, and to deliver a consequence to the person and/or the organisation’s leadership for failing to do so. A system like this is operating in the vast majority of countries on every continent – but not in England, Wales or Scotland.

This conclusion is carefully avoided by Munro and Fish. The “Organisational factors” section on page 6, if anything, goes in the opposite direction when it points to the need to “create a culture that understands the ambiguity of the behaviour so that innocent people’s reputations are not tainted by false reports”. DfE research in 2010 (DFE-RR192, ISBN 978-1-78105-065-1) found that 2% of referrals to the LADO were false (malicious).

The Munro and Fish paper goes on to say that High Reliability Organisations “encourage an open culture where people can discuss difficult judgements and report mistakes so that organisations can learn”. This becomes impossible where there is pressure to keep thresholds high in order to ensure that “innocent people’s reputations are not tainted by false reports”.

Ineffective actions

The paper includes the following (p21, under “Confirmation bias”):

“A second example is that the perceived lack of evidence contributed to the limited (and ineffective) actions taken by the Scouts in response to concerns about Larkins in Case Study One, such as the one cited earlier when a youth worker reported seeing the message: ‘Hey, I love you, but you should go home tonight so we don’t get caught’ (transcript day 2, lines 22-24). The youth worker was told by his line manager: ‘You don’t need to report that because it is third-hand information, and you don’t know definitely that the text messages came from Steve.’”

This is taken to be an example of confirmation bias as described in the previous paragraphs of the paper. Having decided there was an innocent explanation for previous reports concerning Larkins’ behaviour, the organisation treated all further reports in the light of that conclusion and made them fit. The possibility that the organisation (the Scouts in this case) might have an interest in not finding out about abuse (so that it need not be publicly reported, with the resultant adverse publicity) is not addressed in the report.

That is not to say that this conflict of interest is consciously acknowledged by the organisations at the time, but rather that it contributes to cognitive biases of the organisation which discourage effective reporting.

Low level concerns

The executive summary contains the following, expanded on later in the report.

“Balancing risks: Policies and actions that protect children can also create dangers. Workers who are fearful of being wrongly suspected of abuse may keep their distance from children and not provide the nurturing, healthy relationships that children need to have with adults. Organisations have to reach some conclusion as to what level of concern should be reported. Making it compulsory to report even a low level of concern will identify more cases of abuse but at the cost of including numerous non-abusive cases. Efforts therefore need to be made to create a culture that understands the ambiguity of the behaviour so that innocent people’s reputations are not tainted by false reports.”

This has two errors. First, the research of Ben Mathews suggests that a well-designed system of mandatory reporting does not result in numerous spurious cases being reported. Second, if there is a desire to act before any serious harm has come to a child, then low-level or ambiguous behaviour needs to be reported, either so that a pattern can be discerned from multiple reports about the same person, or so that the person can be warned about his behaviour and, in being made aware that he is being observed, is hopefully deterred from proceeding from grooming to actual abuse, if that was in fact his intention.

Even the high-reliability organisations that Munro likes to cite are not immune from forgetting about the need to address low-level concerns before they result in catastrophic failure. Professor Richard Feynman served on the Rogers Commission into the loss of the space shuttle Challenger, whose proximate cause was a failure of “O-ring” seals in the solid rocket boosters. He had this to say about the underlying causes of the loss; in doing so he made himself unpopular with the other members of the commission, and his conclusions were relegated to an appendix, “Personal observations on the reliability of the Shuttle” (http://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt).

“The phenomenon of accepting for flight, seals that had shown erosion and blow-by in previous flights, is very clear. …. But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. … In spite of these variations from case to case, officials behaved as if they understood it, giving apparently logical arguments to each other often depending on the “success” of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted, there was “a safety factor of three.” …  The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.”

Even high-reliability organisations struggle to maintain good behaviour towards the discovery and elimination of error when the scope remains for it to be covered up. In England, Wales and Scotland, Regulated Activities remain Petri dishes for abuse, because these settings lack the foundation of law on which to construct a functioning child protection system.

A .pdf of Jonathan West’s response is here.

26/10/15