One of the reasons given by the Japanese government for the delay in generating plume forecasts is that their model system, SPEEDI, was hardwired to receive radiation data at the Fukushima nuclear plant site. When the sensors failed in the massive power outage, the data link broke and the model could not run. Under this breathtakingly fragile modus operandi, severing the tight coupling between data ingestion and the model apparently required much effort, which prevented timely delivery of forecasts of where the radioactive plume might drift. When the simulations were finally run in a “stand-alone” mode using hypothetical, estimated source terms, the government was so skeptical of the veracity of the results that it delayed releasing them widely. (More details on the release of plume forecasts by the Japanese government, chronologically: here, here and here.)
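The design flaw above can be illustrated with a minimal sketch. This is not SPEEDI's actual architecture; the names and the release-rate value are hypothetical, chosen only to show the pattern: when the live data link fails, the model degrades to an assumed source term rather than refusing to run at all.

```python
# Hypothetical sketch (not SPEEDI's actual design): decoupling data
# ingestion from the forecast model, so a sensor failure degrades the
# forecast instead of preventing it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceTerm:
    release_rate_bq_per_s: float  # assumed radionuclide release rate
    provenance: str               # "measured" or "hypothetical"

def read_sensors() -> Optional[SourceTerm]:
    """Stands in for the live data link; returns None when the link is down."""
    return None  # simulate the power-outage scenario

def forecast(source: SourceTerm) -> str:
    """Stands in for the dispersion model; here it just reports its input."""
    return f"plume forecast from {source.provenance} source term"

# The key design point: ingestion failure lowers confidence in the
# result, but the model still runs with a fallback source term.
source = read_sensors() or SourceTerm(1.0e12, "hypothetical")
print(forecast(source))  # → plume forecast from hypothetical source term
```

The forecast here carries a provenance tag, so downstream consumers can see that it rests on an assumed rather than measured source term — which speaks directly to the government's later skepticism about the stand-alone results.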
The Economist has an insightful piece that lets us frame this failure of operational response in the larger context of brittle extensions of technology. Feats of technological prowess like complex power plants or challenging deep-sea drilling operations can fail catastrophically. What ensues is profound destruction, and “there is no ameliorative technology on a par with that which has failed.”
“…situational awareness is invaluable. Steven Chu, America’s energy secretary, was reportedly shocked to find that the only source of information from the Deepwater Horizon’s blowout preventer was a single gauge. So he should have been. Sensor systems for getting information out of containment vessels, off sea floors and from all sorts of other out-of-the-way places should be deployed widely and in redundant ways. They should also be kept independent of the related systems used for control; you want them to work even if—especially if—the control system does not.”
Delays in assessing or predicting outcomes at the time of a catastrophe are avoidable. One way to ensure that enough redundant measures are in place is to have regulators evaluate “safety cases” that industry games out and for which it demonstrates a response plan.
“Better still if the companies make not just a case for safety, but also a case for their ability to react when things do go wrong, and they find themselves in the uncharted space between the spines of well developed technology. It really does help to think about the unthinkable.”