Theory, Philosophy and Justification for Root Cause Analysis

The move to conduct root cause analysis is largely motivated by a growing recognition that the complexity of health care and health care delivery drives the incidence of adverse events to uncomfortably and unacceptably high levels (Brennan et al., 1991). Consistent with this, the National Patient Safety Foundation (NPSF) maintains as its philosophy that:


Most errors result from faulty systems rather than human error, e.g., poorly designed processes that put people in situations where errors are more likely to be made. Those people are in essence “set up” to make errors for which they are not truly responsible.


Root cause analysis is a set of processes by which the underlying causes of adverse outcomes may be identified, with the goal of preventing the recurrence of such events. There are many different processes by which root cause analyses are performed; the engineering and industrial risk management literature is rife with arguments for and against the different approaches. It is not the purpose of the present discussion to explore those differences. Comments pertinent to root cause analyses performed outside the health care industry will not distinguish among such approaches, but will address as much as possible the areas of commonality.


Root Cause Analysis in Health Care. One area of undisputed agreement is the observation that without strong support from upper management, root cause analyses will be performed in a perfunctory manner, with the singular purpose of meeting JCAHO regulatory requirements. To be effective, it must be accepted throughout the organization that the result of any given root cause analysis will be used for improvement, not for the assignation of blame. This is in keeping with the basic philosophy and tenets of continuous improvement in any area of endeavor. Because root cause analysis has been established for some two decades in industries other than health care, however, the level of acceptance by management and personnel in those industries is much greater than can reasonably be expected in health care organizations. Similarly, the value of this analytic procedure is already accepted in other industries and in government, being part and parcel of policies of the Departments of Energy and Transportation, the Nuclear Regulatory Commission, and other agencies. In the health care industry, root cause analysis is for the most part still viewed as yet another regulatory requirement, one that is neither value-added nor inexpensive.


As a consequence, there is resistance to the performance of root cause analyses, resistance to learning how to perform them, and a lack of support at all levels for their effective use. Unfamiliarity with the pertinent literature from other industries compounds this systemic attitude against root cause analysis, an attitude that is generally passive-aggressive but at times actively aggressive. Regrettably, a passing familiarity with that literature will in fact increase this resistance, for two reasons:


  1. Among health care administrators, the fact that it is not uncommon to spend substantial sums of money on a single root cause analysis raises the question of cost-effectiveness.
  2. Among health care providers, the emphasis on human error in the root cause analysis literature of other industries raises the specter of blame, personal financial liability and the National Practitioner Databank, the last having no equivalent in other industries. Non-practitioners appear to have a tendency to underestimate the real impact of Databank reporting, as well as practitioners’ emotional reactions to the possibility of such reporting.

In sum, even if the risk manager and/or continuous improvement personnel at a given health care facility are convinced of the value of appropriately performed root cause analyses, there are very difficult obstacles to their effective and accepted performance.

Clearly, education throughout the health care organization is the optimal means of addressing these problems. There are critical philosophical differences between error reduction in other industries and in the health care industry. These differences are not universal, but they are very common. It has been our experience, in discussions with root cause analysis experts from other industries, that these differences are usually not appreciated, and in fact are at times considered antithetical to an understanding of how an effective root cause analysis should be approached and conducted. Significantly and similarly, we have seen no awareness of these differences in the literature pertaining to medical applications of root cause analysis. These philosophical differences affect both the process and the outcome of root cause analyses. We have identified three basic philosophical differences:


  • issues of blame, responsibility, and emphasis upon human error;
  • contributing versus causative factors; and
  • degree of efficacy of corrective action or solutions.

It is significant to note at this juncture that the experts with whom we spent the greatest amount of time discussing these and related issues were representatives of firms offering software designed to facilitate the root cause analysis process. Where experts’ opinions are reported in the following paragraphs, it is largely their responses that are reflected. Regarding the first of these differences, we offer an assertion made by a prominent root cause analysis expert outside the health care arena: “All sentinel events are the result of human errors that queue up in a particular sequence.” The writer of that assertion has guaranteed that any health care provider who reads it will adamantly oppose any effort to institute root cause analytic processes, and has thereby undermined any provider, any hospital counsel and any risk manager who is trying to gain the trust of his or her provider staff in such an endeavor. Whether the assertion is accurate is irrelevant to the fact of its extremely negative emotional impact.


That such comments are not uncommon in the root cause analysis literature means that very careful educational groundwork must be laid before health care personnel are even encouraged to read that literature; reality is not necessarily beneficial if the recipient has not been adequately prepared to deal with it.


Going further, in any industry the product of a root cause analysis may fuel litigation over a sentinel event if a plaintiff secures it. Personal liability, however, is a far greater risk in the health care industry than in other industries, and issues of personal fear are correspondingly more prominent. Regarding the validity of the above assertion, it is interesting to note that Lucian L. Leape, MD, one of the foremost proponents of root cause analysis in medicine, articulates his views thus:


“Errors must be accepted as evidence of systems flaws, not character flaws” (Leape, 1994, 1997).


In the area of risk management in general (not limited to health care), James Reason asserts, “Indeed it could be argued that for certain complex, automated, well-defended systems, such as nuclear power plants, chemical process plants, modern commercial aircraft and various medical activities (emphasis added), organizational accidents are really the only kind left to happen. Such systems are largely proof against single failures, either human or technical…. Perhaps the greatest risk to these high technology systems is the insidious accumulation of latent failures, hidden behind computerized, ‘intelligent’ interfaces, obscured by layers of management, or lost in the interstices between various specialized departments” (Reason, 1994).


Cook and Woods (1994) present four distinct reasons that failures or accidents are attributed to human error, especially in “complex systems,” when in fact this largely constitutes a misattribution. Moray (1994) asserts that “…the systems of which humans are a part call forth errors from humans, not the other way around.”


The foremost experts in risk management, within and outside the health care industry, emphasize system failures and system-driven errors over direct human error, and the philosophy guiding the process of root cause analysis, be it manual or automated, should reflect this emphasis. In our research into root cause analysis in the aviation, aerospace, transportation, electronics, security and energy industries, we found a nearly ubiquitous underlying assumption that causative factors had to be one of the following (a scheme modeled in the sketch after this list):


  • necessary and sufficient,
  • necessary but not sufficient, or
  • irrelevant.
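
To make the restrictiveness of that tripartite scheme concrete, the following minimal sketch (our own illustration in Python, with hypothetical names; it is not drawn from any of the surveyed tools) encodes the assumption. Note what is missing: there is no category for a merely contributing factor, one that is neither necessary nor sufficient by itself yet raises risk in combination with others.

    from enum import Enum

    class CausalStatus(Enum):
        """The tripartite classification assumed in much of the
        root cause analysis literature outside health care."""
        NECESSARY_AND_SUFFICIENT = "necessary and sufficient"
        NECESSARY_NOT_SUFFICIENT = "necessary but not sufficient"
        IRRELEVANT = "irrelevant"

    def classify(necessary: bool, sufficient: bool) -> CausalStatus:
        """Force every candidate factor into one of the three slots.
        A factor that merely contributes -- neither necessary nor
        sufficient on its own -- has no slot and falls to IRRELEVANT."""
        if necessary and sufficient:
            return CausalStatus.NECESSARY_AND_SUFFICIENT
        if necessary:
            return CausalStatus.NECESSARY_NOT_SUFFICIENT
        return CausalStatus.IRRELEVANT

    # One of Reason's "banal" factors: insignificant alone, devastating
    # in combination. The scheme discards it.
    print(classify(necessary=False, sufficient=False))  # CausalStatus.IRRELEVANT

As the next paragraph argues, it is precisely these discarded contributing factors that the human-behavior and risk-management literature identifies as the raw material of organizational accidents.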

Of note is the fact that such rigidity in the rejection of contributing factors is directly contrary to views expressed by the most recognized experts in the fields of human behavior and risk management (Grandjean, 1980; Norman, 1981, 1988; Reason, 1990). As Reason eloquently describes, “… a detailed examination of the causes of these accidents reveals the insidious concatenation of often relatively banal factors, hardly significant in themselves, but devastating in their combination” (Reason, 1994). A partial solution to an identified root cause is therefore worth consideration and implementation.

It also appears to be assumed that any root cause either can be “corrected” or is “non-correctable,” though the exact terminology varies among consultant writers. We would challenge this assumption not only in the health care arena but in all areas of application. The difficulty appears to reside in the recognized requirement to monitor the results of any corrective action implemented. With sentinel events, we are generally discussing very low frequency occurrences, which means that rate of occurrence may be a relatively meaningless metric. Every occurrence is critical, is sentinel, and anything less than a complete correction is less than adequate; that is, it is perceived by certain of these consultants, and possibly by both internal and external customers, to be a failure.


This perception, however, runs counter to the underlying philosophy and guiding principles of continuous improvement: improvements are incremental and ongoing; perfection is targeted, but never attained.
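
The arithmetic of rare events underscores why occurrence rates are a weak success metric for sentinel events. The following back-of-envelope sketch (our own illustration with hypothetical rates; the normal approximation to the Poisson is crude at such low counts) estimates how long two periods of observation must run before even a halving of an event rate is statistically distinguishable from chance:

    # Approximate years of observation needed, before and after a
    # corrective action, to distinguish two Poisson event rates.
    # Standard two-sample comparison under a normal approximation:
    #   T = (z_alpha + z_power)^2 * (r1 + r2) / (r1 - r2)^2

    def years_to_detect(rate_before: float, rate_after: float,
                        z_alpha: float = 1.96,  # two-sided 5% significance
                        z_power: float = 0.84   # roughly 80% power
                        ) -> float:
        diff = rate_before - rate_after
        return ((z_alpha + z_power) ** 2 * (rate_before + rate_after)) / diff ** 2

    # Hypothetical: a sentinel event occurring twice a year, halved to once a year.
    print(f"{years_to_detect(2.0, 1.0):.0f} years of monitoring per period")
    # Prints: 24 years of monitoring per period

On such time scales, counting recurrences cannot certify that a corrective action has succeeded; each occurrence must be examined on its own terms, and incremental improvement accepted as the realistic goal.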


Regrettably, sentinel events will continue to occur at certain “acceptable” levels of incidence, even though for most sentinel events that result in an actual adverse outcome, even one instance is indeed unacceptable. Our goal is to progressively reduce the frequency of all classes of adverse events, knowing that many will not be eliminated. This does not necessarily constitute a failure. We would argue that this holds both within and outside the health care industry.


Table of References

Brennan, T.A., Leape, L.L., Laird, N.M., Hebert, L., Localio, A.R., Lawthers, A.G., Newhouse, J.P., Weiler, P.C., & Hiatt, H.H. (1991). Incidence of adverse events and negligence in hospitalized patients: Results of the Harvard Medical Practice Study I. New England Journal of Medicine, 324, 370-376.

Cook, R.I. & Woods, D.D. (1994). Operating at the sharp end. In M.S. Bogner (Ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Grandjean, E. (1980). Fitting the Task to the Man. London: Taylor and Francis.

Healthcare Risk Management. (1998, July). Sentinel event policy changed, but it’s still a ‘lawsuit kit’ for attorneys.

The Joint Commission (1996). Conducting a Root Cause Analysis in Response to a Sentinel Event.

Leape, L.L., Brennan, T.A., Laird, N.M., Lawthers, A.G., Localio, A.R., Barnes, B.A., Hebert, L., Newhouse, J.P., Weiler, P.C., & Hiatt, H.H. (1991). The nature of adverse events in hospitalized patients: Results of the Harvard Medical Practice Study II. New England Journal of Medicine, 324, 377-384.

Leape, L.L. (1994). Preventability of medical injury. In M.S. Bogner (Ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Moray, N. (1994). Error reduction as a systems problem. In M.S. Bogner (Ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Norman, D.A. (1981). Categorization of action slips. Psychological Review, 88, 1-15.

Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Prager, L.O. (1998, March 16). Keeping clinical errors out of criminal courts. American Medical News, 41(11), 1.

Reason, J.T. (1990). Human Error. Cambridge, England: Cambridge University Press.

Reason, J.T. (1994). Foreword. In M.S. Bogner (Ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
