The National Institute for Computational Sciences

University of Tennessee SC12 Lecture Schedule

A list of speakers who will give talks in the University of Tennessee booth at SC12.

An Overview of High Performance Computing and Future Requirements

Jack Dongarra
University of Tennessee Distinguished Professor
Monday November 12, 7:30pm
Jack Dongarra holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sidney Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; and in 2011 he was the recipient of the IEEE IPDPS 2011 Charles Babbage Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering.

In this talk we examine how high performance computing has changed over the last 10 years and look toward the trends of the future. Recent changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques. But the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. In this talk we will focus on the redesign of software to fit multicore architectures.

GPU Computing Now and Tomorrow

Steve Scott
Chief Technology Officer, NVIDIA Tesla Business Unit
Tuesday November 13, 11:00am
Dr. Steve Scott is Chief Technology Officer of the Tesla business unit at NVIDIA, where he is responsible for the evolution of NVIDIA's GPU computing roadmap. Prior to joining NVIDIA in August 2011, Steve spent 19 years at Cray, where he architected multiple systems and routers, and served as CTO from 2004. Steve holds thirty-one US patents and has served on numerous advisory boards and program committees. He was the recipient of the 2005 ACM Maurice Wilkes Award and the 2005 IEEE Seymour Cray Computer Engineering Award. He received his PhD in computer architecture in 1992 from the University of Wisconsin at Madison, where he was a Wisconsin Alumni Research Foundation and Hertz Foundation Fellow.

GPU computing has made tremendous strides over the past decade, evolving from powerful, yet difficult-to-program GPGPUs, to the much more general and easily programmed computational accelerators of today. With the debut of the Titan system at ORNL, GPUs now power the world's largest open science supercomputer, and have been widely adopted across a range of scientific and technical disciplines. This talk will discuss the motivation behind GPU computing, the current landscape, and how GPU computing is likely to evolve over the coming decade.

Toward a Distributed Cyberinfrastructure Ecosystem

John Towns
XSEDE PI and Project Director
University of Illinois
Tuesday November 13, 1:30pm
John Towns is Director of Collaborative Cyberinfrastructure Programs at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. He is also PI and Project Director for the Extreme Science and Engineering Discovery Environment (XSEDE) project. Towns plays significant roles in the deployment and operation of high-end resources and Grid-related projects and is also PI on awards for various resources operated at NCSA. His background is in computational astrophysics utilizing a variety of computational architectures, with a focus on application performance analysis. At NCSA, he provides leadership and direction in the support of an array of computational science and engineering research projects making use of advanced computing resources and services. He earned M.S. degrees in Physics and Astronomy from the University of Illinois and a B.S. in Physics from the University of Missouri-Rolla.

The state of the art of digital systems supporting the pursuit of research has been a rapidly evolving target for many years. Still, the community is reaching an inflection point in transitioning from a focus on individual resources and services to a more integrated view of how services, resources, and support are coordinated in creating an environment that enhances the productivity of researchers. A natural consequence of this is the need to support the integration of resources and services delivered by providers across a spectrum of administrative domains. I will briefly introduce the XSEDE project and make some observations and comments on the need for the type of distributed cyberinfrastructure ecosystem that XSEDE is developing in order to facilitate research.

High Performance and Parallel Computing at Intel

Joe Curley
Director of Marketing, Technical Computing Group
Wednesday November 14, 11:00am
Joseph (Joe) Curley serves Intel® Corporation as director of marketing for the Technical Computing Group. The Technical Computing Group marketing team manages marketing for high-performance computing (HPC), workstation segments and product lines, as well as future Intel® Many Integrated Core (Intel® MIC) products. Joe joined Intel in 2007 to manage planning activities that led up to the announcement of the Intel® MIC Architecture in May of 2010. Prior to joining Intel, Joe worked at Dell, Inc., leading the Dell Precision Workstation and consumer/small business desktop products, as well as holding a series of engineering roles. He began his career at graphics pioneer Tseng Labs.

This talk provides an overview of Intel's evolving products in technical computing, as well as the company's partnership with NICS and the University of Tennessee in developing these products.

Disaster Mitigation and HPCC Requirements

Henry McDonald
University of Tennessee Chattanooga/SimCenter Enterprises
Distinguished Professor
Wednesday November 14, 3:30pm
Dr. Henry McDonald is currently a Distinguished Professor and holds the Chair of Computational Engineering at the University of Tennessee in Chattanooga. Dr. McDonald graduated from the University of Glasgow in Scotland with a D.Sc. in Aerospace Engineering and started his career at United Technologies Research Center, where he concentrated on fluid mechanics and what eventually became known as Computational Fluid Dynamics. Dr. McDonald followed this by forming a small R&D company in Connecticut in 1976. Subsequently, Dr. McDonald held a number of academic posts at Pennsylvania State University and Mississippi State before accepting an appointment at NASA as Center Director of NASA Ames Research Center from 1996 to 2002. Dr. McDonald subsequently joined the University of Tennessee at Chattanooga. Dr. McDonald is a member of the National Academy of Engineering, a Fellow of the Royal Academy of Engineering, a Fellow and Honorary Member of the American Society of Mechanical Engineers, an Honorary Fellow of the American Institute of Aeronautics and Astronautics, and a Fellow of the Royal Aeronautical Society.

In the event of a toxic release in an urban environment, situational awareness is critical to expedite evacuation and reduce the risk to first responders. Major computing facilities are a great asset that could be used to make reasonable and timely projections of where the local wind and weather conditions might take the resulting toxic plume and how the traffic might be routed to aid the safe movement of people out of harm's way. Timeliness is vital in these circumstances, and the talk will discuss the problem and current progress in providing detailed plans to manage the event using current and projected HPCC systems.

Advanced Computational Infrastructure: Continuing and Advancing the Support for Open, World Class Computational Science; 30 Years of NSF Leadership

Barry Schneider
Program Director for Cyberinfrastructure and XSEDE Program Manager
National Science Foundation
Thursday November 15, 10:30am
For almost 30 years, the NSF has provided the open scientific community with high performance computing resources of many types. Recently, the program has been expanded to include a broader array of hardware resources, and it now has a number of programs to develop and sustain software of broad use to the scientific community. I will present an overview of the currently available resources and programs and present some examples of interesting scientific problems that have been enabled by NSF support of ACI.

Application Accelerators in Computational Science: Challenges and Opportunities

Greg Peterson
Director of the National Institute for Computational Sciences
University of Tennessee
Thursday November 15, 1:30pm
Greg Peterson is a Professor of Electrical Engineering and Computer Science and Director of the National Institute for Computational Sciences at The University of Tennessee. He is also the PI for the NSF Kraken supercomputer and Interim co-PI and Director of Operations for the Extreme Science and Engineering Discovery Environment (XSEDE) project. He earned his doctorate, master's, and bachelor's degrees in Electrical Engineering, as well as master's and bachelor's degrees in Computer Science, all from Washington University in St. Louis.

With the performance of serial threads of execution stagnating, a variety of emerging architectural approaches now present intriguing opportunities for next-generation HPC platforms. In particular, application accelerators now provide a significant, and growing, proportion of the computational work performed on supercomputers. As exascale computing becomes a reality, application accelerators will be critical components. Challenges and opportunities abound with power and energy efficiency, reliability and availability, and cost.