‘Project Athena’ wraps up 6 months of sims, produces more than 900 TB of data
by Gregory Scott Jones
Few areas of science are currently hotter than climate. Attempts to understand humankind’s impact on our planet are occupying the front pages of countless newspapers and the minds of an increasingly informed public.
Climate change is likewise occupying a good percentage of processors on some of the world’s most powerful computing systems. Via simulation, researchers are beginning to comprehend the nuances of Earth’s climate and in turn helping policymakers understand the risks and prepare for a myriad of future scenarios.
As sophisticated as today’s climate models are, however, one critical component continues to hamper their effectiveness: clouds. It turns out those white puffs of condensed water hovering overhead are computationally complex, requiring resources that even today’s most powerful supercomputers are hard pressed to furnish.
A recent massive attempt to resolve the effects of clouds, Project Athena: High Resolution Global Climate Simulations, just wrapped up a 6-month intensive experimentation period that consumed 70 million CPU-hours on Athena, a Cray XT4 system currently ranked the 36th-fastest computer in the world. Athena resides at the National Institute for Computational Sciences (NICS), a National Science Foundation (NSF)–funded research center managed by the University of Tennessee and housed at Oak Ridge National Laboratory; it is the smaller sister to Kraken, a Cray XT5 ranked as the world’s fourth-fastest computer and likewise located at NICS.
Figure 1: Animation from the NICAM model simulation of 21 May–31 August 2009, showing cloudiness (based on outgoing long-wave radiation) in shades of gray and precipitation rate in rainbow colors, based on hourly data from the simulation. Thicker clouds are shaded in brighter gray, and the colors range from green (precipitation rate less than 1 mm/day) to yellow and orange (1–16 mm/day) to red (16–64 mm/day) and magenta (more than 64 mm/day). The animation begins zoomed in over India and the Bay of Bengal, showing that tropical cyclone Aila, which in reality made landfall near Calcutta, killing dozens of Indian and Bangladeshi citizens and displacing more than 100,000 people from their homes, was predicted very accurately in the simulation.
Besides explicitly modeling cloud systems, Project Athena featured hundreds of weather-prediction model simulations that sought to replicate the climate of the late 20th century and predict the impact of CO2 emissions on Earth’s climate in the final decades of the 21st century.
The Project Athena effort, which represented one of the most intensive and ambitious climate simulation projects in history, was funded by NSF and was the result of a partnership of climate research organizations from the United States (NICS, the Center for Ocean-Land-Atmosphere Studies [COLA]), Europe (the European Centre for Medium-Range Weather Forecasts [ECMWF]), and Japan (the University of Tokyo and the Japan Agency for Marine-Earth Science and Technology [JAMSTEC]).
“The ultimate goal of these simulations was to explore the possibility of revolutionizing climate and weather prediction, taking advantage of a large computing resource,” said principal investigator and COLA Director Jim Kinter.
According to Kinter, clouds are too computationally complex to have been accurately included as a part of the global climate system in past climate models. They were instead treated in bulk, their behavior largely estimated using approximations called parameterizations. These parameterizations may be the primary constraint preventing climate models from evolving from good to great, said Kinter.
Two suites of simulations, one using ECMWF’s operational weather prediction code and the other using the University of Tokyo’s and JAMSTEC’s NICAM, a code that represents the global atmosphere at cloud-system-resolving scales, were run to test one hypothesis: poor resolution of climate system features and approximated clouds in a climate model negatively influence the accuracy of the model; high resolution and explicit clouds, or clouds modeled in greater detail, will enhance it.
Having the entire Athena system for 6 months allowed the team to study numerous phenomena at unprecedented scales, as fine as 7 kilometers in the case of the cloud-system-resolving model. For comparison, the National Weather Service currently uses a 35-kilometer model for global weather prediction. The hope entering the project was that successfully resolving these smaller scales would yield more realistic forecasting of atmospheric circulation and precipitation, enhancing seasonal predictions and more accurately capturing how the distribution and intensity of extreme precipitation and tropical cyclones change with a changing climate.
In total the cloud-system-resolving NICAM model simulations consisted of eight northern hemisphere summers (May 21–August 31). But did the use of explicit clouds rather than approximated ones improve the model? Yes and no.
Figure 2: Tropical cyclone intensity distribution, expressed as fractional frequency, as a function of maximum surface wind speed. The black bars show the observed distribution; the colored bars show distributions from the ECMWF IFS model simulations. The inset shows an expanded view of the tail of the distribution. T1279, T511, and T159 denote spectral truncations corresponding to horizontal grid spacings of approximately 16, 39, and 125 km, respectively.
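For readers unfamiliar with spectral notation, the Tn labels in Figure 2 can be converted to an approximate grid spacing with a standard rule of thumb: a triangular truncation at wavenumber N resolves waves down to 2N points around the equator, so the spacing is roughly Earth's circumference divided by 2N. The sketch below applies that rule (the 40,075-km equatorial circumference is a standard value, not a figure from the article) and recovers spacings close to the quoted 16, 39, and 125 km:

```python
# Rule of thumb: a spectral truncation T_N resolves waves with 2*N points
# around the equator, so the implied grid spacing is circumference / (2*N).
EARTH_CIRCUMFERENCE_KM = 40_075  # equatorial circumference

def grid_spacing_km(truncation: int) -> float:
    """Approximate horizontal grid spacing (km) for truncation T<truncation>."""
    return EARTH_CIRCUMFERENCE_KM / (2 * truncation)

for n in (1279, 511, 159):
    print(f"T{n}: ~{grid_spacing_km(n):.0f} km")
```

The small differences from the caption's numbers come from rounding and from the particular Gaussian grid each model pairs with its truncation.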
The explicit cloud model produced excellent simulations of individual tropical cyclones and did very well on mean precipitation outside the tropics. However, when it came to the average precipitation in the deep tropics, the model produced “serious errors,” said Kinter, noting that tropical thunderstorms were modeled especially poorly. The problem with the tropical precipitation simulation in NICAM, said Kinter, was most likely the grid size, despite the enhanced resolution. “We probably need something like a 1-kilometer grid,” he said, noting that it will be some time before that is even remotely possible for global models: given the geometry of the grid and numerical stability factors, the team would need the equivalent of 512 Athenas for 6 months to do the same amount of simulation at 1-kilometer grid spacing.
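The 512-Athena figure is consistent with a standard cost-scaling argument, sketched below as a back-of-envelope reading rather than the team's exact accounting: halving the horizontal grid spacing roughly quadruples the number of grid columns and, via the numerical stability (CFL) limit Kinter alludes to, halves the allowable timestep, so each halving multiplies the cost by about 8.

```python
# Back-of-envelope cost scaling for an explicit global atmospheric model:
# halving the grid spacing gives 4x as many grid columns and, via the CFL
# stability limit, requires a timestep half as long, so each halving
# multiplies the compute cost by roughly 4 * 2 = 8.

def cost_multiplier(halvings: int) -> int:
    """Relative compute cost after repeatedly halving the grid spacing."""
    return 8 ** halvings

# Three successive halvings refine NICAM's 7-km grid to 7 / 2**3 = 0.875 km,
# close to the 1-km target, at 8**3 = 512x the cost -- i.e., the equivalent
# of 512 Athenas running for the same 6-month campaign.
print(cost_multiplier(3))  # 512
print(7 / 2**3)            # 0.875
```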
Needless to say it could be a while before tropical precipitation is modeled to the team’s liking. In the meantime the Project Athena team made the most of its 6 months, especially in the arena of simulating global atmospheric circulation, which plays a key role in weather prediction. The simulations using ECMWF’s code, which included all of the major variables that describe the global atmosphere, gave the team a good idea of the strength of its model and further demonstrated and quantified the need for enhanced resolution in predictive climate models in general.
All of the simulations were run at four different resolutions: 125, 39, 16, and 10 kilometers. For each grid size, the team ran both individual 13-month simulations for each of the 48 years from November 1960 to December 2007 and continuous 48-year simulations. Furthermore, to gauge future climate scenarios shaped by rising CO2, Project Athena simulated the last 30 years of both the 20th and 21st centuries, which confirmed some long-held fears.
For example, assuming CO2 levels continue to rise, the model showed definite decreases in snow cover at high altitudes, which could lead to droughts in certain parts of the world. It also confirmed the need for more refined models, because the outcomes of the higher-resolution runs (16-kilometer grid spacing) seemed more accurate than those at lower resolution (125-kilometer grid spacing). As an example Kinter cited the 2003 European heat wave: was it a freak incident or something likely to happen again in the future? The higher-resolution model showed that such an event would not necessarily become the norm, but it indicated a much greater likelihood than the lower-resolution model did; the higher resolution has also proven more accurate on factors that affect European climate, said Kinter.
Team members from COLA, ECMWF, and JAMSTEC/University of Tokyo met in June 2010 at a workshop at ECMWF, said Kinter, and outlined about a half dozen papers to be extracted from the research that they expect to begin appearing in 2011. Furthermore, he said, there has been a large demand for the resulting data, which the team is now working to make public.
“Project Athena also has demonstrated that dedicated supercomputing resources—including the computational capability, data storage, processing, and archival—with the support of a dedicated staff of computer system experts, such as we received from the team at NICS, can greatly increase scientific productivity on large projects. In our case we achieved at least a factor of 4 increase in productivity,” said Kinter.
Ultimately, Project Athena has provided the climate community with a treasure trove of data and laid the foundation for future climate experiments and simulations. The full benefits of the effort have yet to be measured, but there is little doubt that it will be seen as a huge stepping stone on the path to truly understanding Earth’s climate.
“Project Athena has succeeded in showing that increasing the spatial resolution of models of the global climate system to levels that are currently only available for short-term weather forecasting can significantly increase the fidelity of climate simulations,” said Kinter. “Many features of the climate are better represented in high-resolution simulations, including obvious things like tropical cyclones and the distribution of snowfall and not-so-obvious things like the extent, duration, and frequency of drought. The implications of this experiment for how we simulate and predict the climate in the future are very important.”
The team included scientists from COLA (D. Achuthavarier, J. Adams, E. Altshuler, B. Cash, P. Dirmeyer, B. Doty, B. Huang, E. Jin, J. Kinter, L. Marx, J. Manganello, and C. Stan, with considerable technical support from T. Wakefield), ECMWF (M. Hamrud, T. Jung, M. Miller, T. Palmer, P. Towers, and N. Wedi), JAMSTEC (C. Kodama, H. Tomita, and Y. Yamada), and the University of Tokyo (M. Satoh); computational support from NICS (P. Andrews, T. Baer, M. Ezell, C. Halloy, D. John, B. Loftis, and K. Wong); and technical support from Cray (P. Johnsen and P. Nyberg).