A disaster on the scale of the BP oil spill is unlikely to happen to the average electrical contractor, which seems to leave little reason to perform catastrophe analysis. Even the experts know that existing models are imperfect. After all, how can we predict the unpredictable?
Risk-analysis experts admit that it is impossible to completely validate any model or simulation, because every model must simplify a complex system. In the case of natural phenomena, such as hurricanes, statisticians gather historical data and look for patterns that might predict the conditions likely to produce future storms. Assumptions are always made, and unknown factors affect the model's accuracy.
The interest in predicting risk has been driven by recent catastrophes, such as 9/11, Hurricane Katrina and the BP spill. Organizations such as the Center for Catastrophic Risk Management at the University of California, Berkeley (ccrm.berkeley.edu), are attempting to analyze the causes of disasters as well as their potential cost to society. This is similar to the process used by actuaries working for insurance companies, sureties and financial institutions as they set premiums and fees for their services. Because the effects of catastrophes can be so overwhelming (loss of life, cost and recovery time), the primary goal is the prevention of future occurrences.
When you attempt to analyze risk in your business, you may not use a sophisticated mathematical model, but you do weigh the cost of an investment in preventing a loss against the cost of that loss and the likelihood that it will occur. In other words, you are playing the odds. If you don't spend the money and time to create a no-fail system, you are taking a chance on a negative impact later. Often, cost-versus-benefit calculations are done by the bean counters or left to the insurance and bonding professionals.
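At its core, playing the odds this way is an expected-value calculation: multiply the cost of a loss by its likelihood, then compare the result to the price of preventing it. A minimal sketch of that arithmetic follows; all figures are invented for illustration, not drawn from any real contractor's books.

```python
# Expected-value sketch of the prevention-vs.-loss trade-off.
# All dollar amounts and probabilities below are hypothetical.

def expected_loss(probability: float, cost_if_it_happens: float) -> float:
    """Expected annual cost of a loss event: likelihood times impact."""
    return probability * cost_if_it_happens

prevention_cost = 25_000       # annual cost of a safeguard
loss_cost = 400_000            # cost to the business if the failure occurs
annual_probability = 0.05      # estimated 5 percent chance per year

risk = expected_loss(annual_probability, loss_cost)

print(f"Expected annual loss: ${risk:,.0f}")
if prevention_cost < risk:
    print("The safeguard costs less than the expected loss: buy it.")
else:
    # This is the "good enough" standard: the odds say take the chance.
    print("The safeguard costs more than the expected loss: play the odds.")
```

Here the expected loss is $20,000, so a pure odds-playing analysis would skip the $25,000 safeguard; the column's later point is that this formula ignores rare but company-ending events.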
For most businesses, the final standard is a form of “good enough”—you spend enough resources to protect your people and other assets against the things that are most likely to happen. For the rare events, you take your chances because it isn’t possible to have a no-fail system.
Engineer Kenneth Brill might disagree. The Uptime Institute (www.uptimeinstitute.org), which he founded, helps data centers increase "uptime" by preventing failures in their systems. In a Forbes.com online commentary ("The Real Cause of BP's Oil Spill," May 26, 2010), Brill asserts that engineers are trained to design things to "mostly work," which implies a tolerance for failure. Balancing the risk of failure against the additional cost of perfect (never-fail) performance leads to a good-enough standard. When a catastrophe occurs, however, the focus shifts: that tolerance for a chance of failure may have affected the future of the company, not just its profits. In his opinion, some failures are intolerable.
Is that realistic? The expectation of failure appears to be hard-wired into human beings: A medical student achieves an A on a final exam without answering every single question correctly. Businesses prioritize safety, but expect some injuries. We expect to see typos in paperwork or errors in estimates.
Perfectionism seems arrogant and neurotic. Because “only God is perfect,” Jewish tapestry weavers deliberately left an error in each finished product. “Nobody’s perfect” is a common excuse for poor performance. Stockholders and boards of directors resist extreme expenditures for “unnecessary” redundancy. Customers fail to understand the effect of “no-fail” standards on cost and pressure contractors to cut corners. Perhaps worst of all, the pressure on employees to produce more work in less time leads to the perception that safety and quality are less important than profitability.
Some companies even reward failure as a learning tool, as in the case of the Southwest Airlines employee who cost the company more than a million dollars after losing a customer’s pedigreed dog. So, the cost of failure is merely a cost of doing business, and each company sets its own standard of good enough by measuring cost against risk.
Now, consider the good-enough standard from the customer side. Does the surgical patient think a 90 percent chance of survival is “good enough,” even though it’s an A for the surgeon? Does the B lawyer’s client want a trust document that is 80 percent accurate? Do you expect your television to work only 70 percent of the time, even though that is a passing grade?
The same is true of job site performance by electrical contractors. What is the real cost to your company of a 10 percent failure rate, and what is the cost to reduce that percentage? Unless your company has survived a real crisis caused by a nobody’s-perfect standard, you might not be willing to alter your risk versus benefit formula if it affects your bottom line. Next month we’ll see why raising the standard might be worth considering.
NORBERG-JOHNSON is a former subcontractor and past president of two national construction associations. She may be reached at firstname.lastname@example.org.