Today the Japan Meteorological Agency was criticized for not publicly releasing plume forecasts it had conducted at the behest of the IAEA, and was admonished by Chief Cabinet Secretary Edano for its failure to do so. Earlier, the Cabinet Office’s Nuclear Safety Commission (which runs the plume model SPEEDI) had said on March 16 that its model simulations were not being carried out because of disruptions due to the disaster. But on March 23 it released a SPEEDI plume prediction that had in fact been run a week earlier, on March 16. Thereafter it ceased the public release of maps, citing concerns about the low accuracy of the predictions. As a Japan Meteorological Agency official put it: “We don’t know whether the IAEA basic data the agency uses for the forecasts really fit the actual situation. If the government releases two different sets of data, it may cause disorder in the society.” (from Daily Yomiuri)
It is a common stance among government agencies to treat plume predictions cautiously, especially where health effects are concerned. Many believe that interpreting plume forecasts requires more nuance than the public can be expected to apply, and so agencies are hesitant to release plume maps. Instead, they establish an evacuation zone, often informed by plume predictions, and then enforce the perimeter of that zone. They feel that a simple, consistent message is the best way to communicate with the public.
The crisis in Japan has turned this perception on its head. When the public has not been given access to the data it craves, it has taken to creating its own. Witness the explosion of crowd-sourcing sites that have sprung up to aggregate radiation measurements taken by concerned individuals and other sources. (See an overview of these sites here.) There are also myriad websites drawing together information from the international science community (government and academic) to help interpret the evolving situation in Japan. (See the Nature blog, as an example.) In this climate, it is difficult to control the flow of information. Indeed, agencies like the Department of Homeland Security are looking to social-networking communications as a valuable signal of the public’s concerns and interests, in order to best respond to their anxiety in a crisis.
While a visiting fellow at Stanford’s Center for International Security and Cooperation, I served as an external evaluator for the TOPOFF2 exercises. Held every two years, these are intense, realistic, scenario-driven tests of the ability of top officials to manage an evolving incident, complete with a CNN-like simulated news cycle to which the officials must respond. During TOPOFF2, different agencies presented their plume model predictions to the officials as a basis for difficult decisions throughout the simulated crisis. This approach created confusion: the forecasts were frequently at odds, because different models, with different built-in assumptions, were used.
However, research has shown that utilizing multiple plume models, or slight variations of a single model, can yield a more realistic representation of the expected variability of the plume path. (For example, here is our recent paper on Japan coastal releases.) Embracing such an ensemble framework in operational plume prediction does require coordination among agencies and a recognition of each agency's strengths and weaknesses in this arena. But it is well worth the effort to attain more predictive skill from the aggregated model results. And given the public's quest for information, agencies should open up their activities and communicate about their prediction processes. For the public has demonstrated that it is ready to peer inside the black box.
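To make the ensemble idea concrete, here is a minimal, purely illustrative sketch of the "slight variations of a single model" approach. It is not SPEEDI or any agency model: it uses a textbook steady-state Gaussian plume formula with crude linear dispersion growth, and all parameter names, values, and perturbation ranges are assumptions chosen for illustration. Perturbing uncertain inputs (wind speed, dispersion coefficients) across ensemble members turns a single concentration number into a spread, which conveys how much the forecast could vary.

```python
import math
import random

def gaussian_plume(x, y, q=1.0, u=5.0, h=50.0, a=0.08, c=0.06):
    """Ground-level concentration from a steady elevated release.

    Textbook Gaussian plume with sigma_y = a*x, sigma_z = c*x
    (a crude linear dispersion-growth assumption, for illustration only).
    x: downwind distance (m), y: crosswind offset (m),
    q: emission rate, u: wind speed (m/s), h: release height (m).
    """
    sigma_y, sigma_z = a * x, c * x
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-h ** 2 / (2 * sigma_z ** 2)))

def ensemble_concentrations(n=200, seed=42):
    """Run the same model n times with perturbed uncertain inputs.

    Each member draws its own wind speed and dispersion coefficients,
    mimicking the spread one would get from multiple models or
    perturbed initial conditions. Ranges are illustrative assumptions.
    """
    rng = random.Random(seed)
    members = []
    for _ in range(n):
        u = rng.uniform(3.0, 8.0)    # wind speed (m/s)
        a = rng.uniform(0.06, 0.12)  # lateral dispersion coefficient
        c = rng.uniform(0.04, 0.09)  # vertical dispersion coefficient
        members.append(gaussian_plume(x=2000.0, y=100.0, u=u, a=a, c=c))
    return members

vals = ensemble_concentrations()
mean = sum(vals) / len(vals)
lo, hi = min(vals), max(vals)
# Reporting the [lo, hi] range alongside the mean communicates
# forecast uncertainty instead of a single, falsely precise number.
```

The point of the sketch is the framing, not the physics: a single deterministic run hides its sensitivity to uncertain inputs, while the ensemble's min–max range (or percentiles, in a real system) gives decision-makers an honest envelope of possible outcomes.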