Supercomputing Could Revolutionize Pharmaceutical Research and Development
Supercomputing may play an important role in radically changing the pharmaceutical industry by enabling investigations of polypharmacology, the ability of drugs to interact with multiple targets in the body. That is the conclusion of an article recently published in the journal Molecular Simulation by a team of researchers from the University of Tennessee, Knoxville.
The article, titled "Polypharmacology and supercomputer-based docking: opportunities and challenges," conveys the potential impact of a paradigm shift in drug research and development. Through virtual high-throughput screening (docking) of drug candidates using high-performance computing (HPC), the cost of producing pharmaceuticals could be reduced by avoiding late-stage failures, many existing drugs could be repurposed and redesigned to more effectively address medical conditions, and potential side effects of medications could be better understood.
Computational scientist Sally Ellingson, lead author of the article, spoke on the subject last year at the SC13 supercomputing conference in Denver. A related poster that she and colleagues created on the role of protein dynamics in computational docking won first place in the Graduate Clinical Science Division of the poster competition on May 13 at the University of Kentucky (UK) Markey Cancer Center's annual Markey Research Day.
While the poster examines the use of virtual docking to discover drugs that could bind to locations on enzymes and alleviate symptoms of many cancers, the article goes into depth on topics such as polypharmacology's pros and cons, drug repurposing, network-based computational tools for identifying drug targets, and the challenges of using virtual docking to systematically study the effects of large libraries of drug compounds against a wide variety of macromolecular targets.
The model that drug companies follow today involves developing a drug for a single target, because designing a drug for multiple targets is so complex.
The universe of pharmaceutical possibilities consisting of the protein targets and the small medicinal molecules that interact in the body is so enormous that experimentation is not a feasible means of exploring the drug discovery space and polypharmacology. However, today's available computational power and virtual docking technologies are promising with respect to providing a means of delivering the necessary data in a reasonable amount of time for analysis.
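A back-of-the-envelope estimate makes the scale concrete. The numbers below are illustrative assumptions for the sake of the example, not figures from the article:

```python
# Illustrative estimate of the docking search space; all numbers are
# assumptions chosen only to convey the order of magnitude involved.
compounds = 1_000_000        # size of a typical virtual compound library
targets = 10_000             # rough count of candidate protein targets
pairs = compounds * targets  # one docking calculation per drug-target pair

seconds_per_docking = 10     # assumed cost of one docking on one CPU core
cores = 100_000              # assumed core count of a large HPC system

core_seconds = pairs * seconds_per_docking
wall_clock_days = core_seconds / cores / 86_400

print(f"{pairs:.1e} dockings, roughly {wall_clock_days:.0f} days on {cores:,} cores")
```

Ten billion dockings would take millennia of bench work but, under these assumptions, fit into days on a supercomputer, which is the crux of the article's argument.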
Docking in the context of drug discovery pertains to drug–protein interaction and predicting how a molecule will bind to another to form a stable complex. A scoring function is used to predict the strength of association, or 'binding affinity,' between the molecules, and thus determines whether the pharmaceutical is likely to be effective.
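As a toy illustration of what a scoring function does, the sketch below sums a Lennard-Jones-style pairwise energy over all ligand-protein atom pairs, so that a lower (more negative) score indicates a stronger predicted binding. This is a deliberately bare-bones stand-in, not the empirical scoring function AutoDock Vina actually uses:

```python
import math

def pair_energy(r, sigma=3.5, epsilon=0.2):
    """Lennard-Jones-style energy for one atom pair at distance r (angstroms)."""
    s6 = (sigma / r) ** 6
    return 4 * epsilon * (s6 * s6 - s6)  # repulsive minus attractive term

def score(ligand_atoms, protein_atoms):
    """Sum pairwise energies over all ligand-protein atom pairs.

    Atoms are (x, y, z) coordinates; a lower score means a stronger
    predicted binding affinity for this ligand pose.
    """
    total = 0.0
    for lig in ligand_atoms:
        for prot in protein_atoms:
            total += pair_energy(math.dist(lig, prot))
    return total

# Two candidate poses of a one-atom "ligand" near a two-atom "protein":
protein = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
close_pose = [(2.0, 3.5, 0.0)]   # near the energy minimum of both atoms
far_pose = [(2.0, 20.0, 0.0)]    # far away, almost no interaction

print(score(close_pose, protein), score(far_pose, protein))
```

In a real docking code the score also accounts for hydrogen bonding, torsional strain, and desolvation, and the search samples many poses per ligand; the principle of ranking poses by a computed energy is the same.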
In conducting the research detailed in the article, Ellingson, her Ph.D. advisor, UT-Knoxville Assistant Professor Jerome Baudry, and her co-advisor, UT-Knoxville Professor Jeremy Smith, used supercomputers to build and test virtual docking technologies such as VinaMPI, which is based on the open-source tool AutoDock Vina. They were able to perform more than four million dockings in just a couple of hours.
Ellingson explains that many of the calculations were done on the U.S. Department of Energy's Titan supercomputer at Oak Ridge National Laboratory (ORNL), and that development of the program VinaMPI, which was used to run the calculations, was performed on the now-decommissioned Kraken through National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) project number TG-MCA08X032. Kraken was managed by the National Institute for Computational Sciences (NICS) and housed at ORNL.
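Running millions of independent dockings on a machine like Titan or Kraken is, at its core, a task-partitioning problem: each docking needs no data from the others. The sketch below, plain Python rather than VinaMPI's actual MPI code, shows one simple static scheme, assigning task indices to worker ranks round-robin so each rank can process its share independently:

```python
def partition_round_robin(num_tasks, num_ranks):
    """Assign docking-task indices to worker ranks round-robin.

    Because each docking is independent, ranks need no communication
    beyond collecting results at the end of the run.
    """
    return {rank: list(range(rank, num_tasks, num_ranks))
            for rank in range(num_ranks)}

# e.g. 10 ligand-receptor docking tasks spread over 3 worker ranks
assignment = partition_round_robin(10, 3)
for rank, tasks in assignment.items():
    print(f"rank {rank} docks tasks {tasks}")
```

In an MPI program each rank would compute only its own task list from its rank number; a production code might instead balance load by estimated task cost, since ligands with more rotatable bonds take longer to dock.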
"Supercomputers provide us with a virtual laboratory that enables us to test the interactions between massive libraries of potential pharmaceuticals and proteins that would not be possible otherwise," Ellingson says. "In the future, we hope to be able to dock all possible drugs into the entire set of proteins with an individual's genetic variations in order to develop personalized treatment plans."
After completing her Ph.D. and this most recent research, Ellingson moved directly into a research faculty position, skipping postdoctoral work: the Cancer Research Informatics Shared Resource Facility at the newly National Cancer Institute-designated UK Markey Cancer Center and the University of Kentucky Center for Computational Science were seeking someone with HPC experience to develop their analysis pipelines.
"I plan to combine my previous experience with molecular simulations and drug discovery with current work in cancer informatics to study personalized medicines for cancer treatment, still in collaboration with UT, ORNL, and NICS," Ellingson says.
Ellingson describes her background this way: "After receiving B.S. degrees in computer science and mathematical sciences, I knew I wanted to do something applied in graduate school to work on exciting and important problems. I visited ORNL while interviewing for a computational biology fellowship, SCALE-IT NSF IGERT [Integrative Graduate Education and Research Traineeship] through UT, when I saw my first supercomputer—Kraken, which was number three among the most powerful computers in the world at the time, I believe—and I knew I wanted to work on it. I took every opportunity through course and extra training work, including participating in the annual supercomputing conference through the Broader Engagement program, to be involved in high-performance computing. I found my home during graduate school at the Center for Molecular Biophysics, where I could use my technical skills to investigate ways of harnessing massive computational power to improve the drug discovery process. My successes in graduate school led me to a full-time faculty position at the University of Kentucky, where I can apply my technical skills with high-performance computing to a broader scope of computational biology and bioinformatics tools with a focus in cancer research."
Additional information about Ellingson's Ph.D. advisors: Jerome Baudry is affiliated with the Department of Biochemistry and Cell and Molecular Biology, UT-Knoxville; the Center for Molecular Biophysics, ORNL; and the Institute of Biomedical Engineering, UT-Knoxville. Jeremy Smith is director and UT-ORNL Governor's Chair at the Center for Molecular Biophysics.
The team is now using the Titan supercomputer to continue this research effort on a massive scale, Baudry noted.
Scott Gibson, science writer, NICS
Article posting date: 9 June 2014
About JICS and NICS: The Joint Institute for Computational Sciences (JICS) was established by the University of Tennessee and Oak Ridge National Laboratory (ORNL) to advance scientific discovery and state-of-the-art engineering, and to further knowledge of computational modeling and simulation. JICS realizes its vision by taking full advantage of petascale-and-beyond computers housed at ORNL and by educating a new generation of scientists and engineers well versed in the application of computational modeling and simulation for solving the most challenging scientific and engineering problems. JICS runs the National Institute for Computational Sciences (NICS), which had the distinction of deploying and managing the Kraken supercomputer. NICS is a leading academic supercomputing center and a major partner in the National Science Foundation's eXtreme Science and Engineering Discovery Environment, known as XSEDE. In November 2012, JICS sited the Beacon system, which set a record for power efficiency and captured the number one position on the Green500 list of the most energy-efficient computers.