The National Institute for Computational Sciences

E&O Opportunities

XSEDE

IBM, Red Hat Collaborate to Accelerate Hybrid Cloud Adoption

Tue, 03/21/2017 - 15:22

IBM and Red Hat, Inc. today announced a strategic collaboration designed to help enterprises benefit from the OpenStack platform's speed and economics while more easily extending their existing Red Hat virtualized and cloud workloads to the IBM Private Cloud. As part of this new collaboration, IBM has become a Red Hat Certified Cloud and Service Provider, giving clients greater confidence that they can use Red Hat OpenStack Platform and Red Hat Ceph Storage on IBM Private Cloud when the offering launches for general availability at the end of March 2017. Additionally, as part of the agreement, Red Hat Cloud Access will be available for IBM Cloud by the end of Q2 2017, allowing Red Hat customers to move eligible, unused Red Hat Enterprise Linux subscriptions from their data center to a public, virtualized cloud environment in IBM Cloud Data Centers worldwide. This enables companies to better preserve and extend their Red Hat software investments while providing the global scale and efficiency of IBM Cloud. "Our collaboration with IBM is aimed at helping enterprise customers more quickly and easily embrace hybrid cloud," said Radhesh Balakrishnan, General Manager of OpenStack, Red Hat. "Now, customers who don't have in-house expertise to manage an OpenStack infrastructure can more confidently consume Red Hat OpenStack Platform and Red Hat Ceph Storage on IBM Private Cloud." Learn more at https://www.enterprisetech.com/2017/03/21/ibm-red-hat-collaborate-accelerate-hybrid-cloud-adoption/

Afnan Abdul Rehman 2017-03-21T19:22:40Z

Women at SC Awarded the CENIC 2017 Innovations in Networking Award

Tue, 03/21/2017 - 15:21

In recognition of work to expand the diversity of the SCinet volunteer staff and to provide professional development opportunities to highly qualified women in the field of networking, the Women in IT Networking at SC (WINS) program has been selected by CENIC as a recipient of the 2017 Innovations in Networking Award for Experimental Applications. Project members being recognized include Wendy Huntoon (KINBER), Marla Meehl (UCAR), and Kate Petersen Mace, Lauren Rotman, and Jason Zurawski (ESnet). This powerful collaboration fosters gender diversity in the field of technology, a critical need. By funding women IT professionals to participate in SCinet and to attend the Supercomputing Conference, the program allows the next generation of technology leaders to gain critical skills. “Until you roll your sleeves up and dig into building and operating SCinet, which is an amazingly robust, high-bandwidth network that exists for just two weeks, it’s hard to imagine just how tough it is — and how rewarding it is,” said Inder Monga, Director of ESnet, the Department of Energy’s Energy Sciences Network. “Many of our ESnet engineers have been members of the SCinet team over the years, bringing back valuable skills in network operations, project management, teamwork, and on-the-spot problem-solving. Our support of WINS is one way of contributing back to the conference and the community’s growth and success.” Learn more at https://www.hpcwire.com/off-the-wire/women-sc-awarded-cenic-2017-innovations-networking-award/

Afnan Abdul Rehman 2017-03-21T19:21:44Z

DOE Office of Science Would Have to Grapple with $900 Million Cut Under Trump Budget

Tue, 03/21/2017 - 15:15

The Trump administration outlined dramatic cuts for nearly every federal agency in order to pay for a $54 billion increase in Department of Defense spending. Those rollbacks would include a 20 percent annual reduction at the Department of Energy (DOE) Office of Science, which would almost certainly put the agency’s pre-exascale and exascale programs in jeopardy. The $900 million Office of Science cut is apt to throw the US HPC research community into disarray, given that this agency is tasked with purchasing and maintaining the largest supercomputers in the nation that support open science research. It does the majority of this work under the Advanced Scientific Computing Research (ASCR) program, which encompasses the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab in California and the Leadership Computing Facilities at Oak Ridge National Lab in Tennessee and Argonne National Laboratory in Illinois. According to a report in Quartz, former Argonne director Peter Littlewood summed it up like this: “It will cut science off at the knees,” he said, and he was specifically worried about exascale computing research. “It’s quite dangerous for the US to disable the science engine,” he noted. Learn more at https://www.top500.org/news/doe-office-of-science-would-have-to-grapple-with-900-million-cut-under-trump-budget/

Afnan Abdul Rehman 2017-03-21T19:15:48Z

Like Flash, 3D XPoint Enters the Datacenter as Cache

Tue, 03/21/2017 - 15:11

In the datacenter, flash memory took off first as a caching layer, hanging off the PCI-Express bus, between processors (with their cache memories and main memory) and the ridiculously slow disk drives in the systems. It wasn’t until the price of flash came way down and the capacities of flash cards and drives went up that companies could think about going completely to flash for some, much less all, of their workloads. So it will be with Intel’s Optane 3D XPoint non-volatile memory, which Intel is starting to roll out in its initial datacenter-class SSDs and will eventually deliver in DIMM, U.2 drive, and possibly M.2 form factors for servers, just as it did with 2D and 3D NAND flash drives so many years ago. This time, though, the performance and cost of 3D XPoint will be a lot closer to those of DRAM, and, significantly, it will be addressable as memory in the systems. This is going to change the way architects design systems and the way programmers drive them as both try to find a better balance of components to move more data in and out of systems faster and more predictably. We think, as does Intel, that certain systems where latency for reads and writes is critical will have Optane sprinkled into various memory tiers. We do not think that 3D XPoint will be a replacement for the much more capacious and much less expensive flash cards, SSDs, and sticks, which probably have much better serial, as opposed to random, read and write performance. Learn more at https://www.nextplatform.com/2017/03/20/like-flash-3d-xpoint-enters-datacenter-cache/
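To make the tiering idea concrete, here is a minimal Python sketch of a small, fast cache tier sitting in front of a larger, slower capacity tier, loosely analogous to Optane fronting NAND flash. The class and tier names, and the LRU policy, are assumptions made for illustration; this is not how Intel or any vendor actually implements tiering.

```python
# Illustrative only: a tiny LRU read cache standing in for a fast "Optane-like"
# tier in front of a larger, slower "NAND-like" capacity tier. Names and the
# eviction policy are invented for this sketch, not taken from any product.
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, fast_slots, capacity_store):
        self.fast_slots = fast_slots          # how many blocks the fast tier holds
        self.fast_tier = OrderedDict()        # LRU-ordered fast tier
        self.capacity_store = capacity_store  # dict standing in for the slow tier
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.fast_tier:
            self.fast_tier.move_to_end(key)   # refresh LRU position on a hit
            self.hits += 1
            return self.fast_tier[key]
        self.misses += 1
        value = self.capacity_store[key]      # slow path: fetch from capacity tier
        self.fast_tier[key] = value           # promote the block into the fast tier
        if len(self.fast_tier) > self.fast_slots:
            self.fast_tier.popitem(last=False)  # evict the least recently used block
        return value

# A skewed workload mostly re-reads a small hot set, so the fast tier absorbs
# the bulk of the reads even though it holds only a fraction of the data.
store = {f"block{i}": f"data{i}" for i in range(1000)}
cache = TieredReadCache(fast_slots=32, capacity_store=store)
for _ in range(100):
    for i in range(16):
        cache.read(f"block{i}")
print(f"hits={cache.hits} misses={cache.misses}")
```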

Afnan Abdul Rehman 2017-03-21T19:11:29Z

Supermicro Launches Intel Optane SSD Optimized Platforms

Tue, 03/21/2017 - 15:08

Super Micro Computer, Inc., a leader in compute, storage and networking technologies including green computing, expands the industry’s broadest portfolio of Supermicro NVMe Flash server and storage systems with support for the Intel Optane SSD DC P4800X, the world’s most responsive data center SSD. Supermicro’s NVMe SSD systems with Intel Optane SSDs for the Data Center enable breakthrough performance compared to traditional NAND-based SSDs. The Intel Optane SSDs for the data center are the first breakthrough that begins to blur the line between memory and storage, enabling customers to do more per server, or extend memory working sets to enable new usages and discoveries. The PCI-E compliant expansion card delivers an industry-leading combination of 2 times better latency performance, up to more than 3 times higher endurance, and up to 3 times higher write throughput than NVMe NAND SSDs. Optane is supported across Supermicro’s complete product line, including the BigTwin, SuperBlade, Simply Double Storage and Ultra servers supporting the current and next-generation Intel Xeon processors. These innovative solutions enable a new high-performance storage tier that combines the attributes of memory and storage, ideal for Financial Services, Cloud, HPC, Storage and overall Enterprise applications. Learn more at https://www.hpcwire.com/off-the-wire/supermicro-launches-intel-optane-ssd-optimized-platforms/

Afnan Abdul Rehman 2017-03-21T19:08:37Z

SDSC Summer Institute 2017

Tue, 03/14/2017 - 16:22

The San Diego Supercomputer Center Summer Institute is a week-long workshop held at the University of California, San Diego that focuses on a broad spectrum of introductory-to-intermediate topics in High Performance Computing and Data Science. The program is aimed at researchers in academia and industry, especially in domains not traditionally engaged in supercomputing, who have problems that cannot typically be solved using local computing resources. This year’s Summer Institute continues SDSC’s strategy of bringing High Performance Computing to the Long Tail of Science, i.e. providing resources to a larger number of modest-sized computational research projects that represent, in aggregate, a tremendous amount of scientific progress. The purpose of the Summer Institute is to give the attendees an overview of topics in High Performance Computing and Data Science and accelerate their learning process through highly interactive classes with hands-on tutorials on the Comet Supercomputer. Learn more at http://si17.sdsc.edu/

Afnan Abdul Rehman 2017-03-14T20:22:43Z

Student Program Practice & Experience in Advanced Research Computing

Tue, 03/14/2017 - 16:21

PEARC17 will offer a dynamic student program and diversity efforts, bringing together researchers, students, and prospective users from under-represented groups and new disciplines. PEARC17 provides students with a range of opportunities in the Technical Program and in targeted student activities. For all student attendees, PEARC17 will include a one-day intensive collaborative modeling and analysis challenge; a session on careers in modeling and large data analytics; a mentorship program; and opportunities for students to volunteer to assist with conference activities. Participation from traditionally under-represented communities including women, minorities, and people with disabilities, is strongly encouraged. Learn more at http://www.pearc.org/student-program

Afnan Abdul Rehman 2017-03-14T20:21:54Z

Calculations on Supercomputers Help Reveal the Physics of the Universe

Tue, 03/14/2017 - 16:20

On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before. Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified. The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales. Read more at https://www.hpcwire.com/off-the-wire/calculations-supercomputers-help-reveal-physics-universe/

Afnan Abdul Rehman 2017-03-14T20:20:38Z

Call for Papers: Workshop On Performance and Scalability of Storage Systems (WOPSSS) - Deadline: March 31, 2017

Tue, 03/14/2017 - 16:17

The Workshop On Performance and Scalability of Storage Systems (WOPSSS) aims to present state-of-the-art research, innovative ideas, and experience focused on the design and implementation of HPC storage systems in both the academic and industrial worlds, with a special interest in their performance analysis. The arrival of new storage technologies, and scales unseen in previous practice, leads to a significant loss of performance predictability. This will leave storage system designers, application developers and the storage community at large in the difficult situation of not being able to precisely detect bottlenecks, evaluate the room for improvement, or estimate how well applications match a given storage architecture. WOPSSS intends to encourage discussion of these issues through submissions from researchers and practitioners in both the academic and industrial worlds. All accepted papers will be published in the proceedings by Springer. Extended versions of the best papers will be published in the ACM SIGOPS journal (http://www.sigops.org/osr.html). Papers must be submitted via EasyChair. Read more at https://isc-hpc-io.org/

Afnan Abdul Rehman 2017-03-14T20:17:57Z

Call for Papers: The International Conference for High Performance Computing, Networking, Storage and Analysis (SC17) - Deadline: March 27, 2017

Tue, 03/14/2017 - 16:16

The SC17 Conference Committee is now accepting submissions for technical papers. The Technical Papers Program at SC is the leading venue for presenting the highest-quality original research, from the foundations of HPC to its emerging frontiers. The Conference Committee solicits submissions of excellent scientific merit that introduce new ideas to the field and stimulate future trends on topics such as applications, systems, parallel algorithms, data analytics and performance modeling. SC also welcomes submissions that make significant contributions to the “state-of-the-practice” by providing compelling insights on best practices for provisioning, using and enhancing high-performance computing systems, services, and facilities. The SC conference series is dedicated to promoting equality and diversity and recognizes the role that this has in ensuring the success of the conference series. We welcome submissions from all sectors of society.  SC17 is committed to providing an inclusive conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion. Read more at https://www.hpcwire.com/off-the-wire/sc17-technical-paper-submissions-now-open/

Afnan Abdul Rehman 2017-03-14T20:16:25Z

Call for Applications: Women in IT Networking at SC17 (WINS) - Deadline: March 24, 2017

Tue, 03/14/2017 - 16:14

SCinet at SC17 has issued its Call for Applications for the Women in IT Networking (WINS) program. Created each year for the SC conference, SCinet brings to life a very high-capacity network that supports the revolutionary applications and experiments that are a hallmark of the SC conference. SCinet will link the convention center to research and commercial networks around the world. SCinet provides an ideal “apprenticeship” opportunity for engineers and technologists looking for direct access to the most cutting-edge network hardware and software, while working side-by-side with the world’s leading network and software engineers, and the top network technology vendors. Read more at http://insidehpc.com/2017/03/call-applications-women-networking-sc17-wins/

Afnan Abdul Rehman 2017-03-14T20:14:59Z

Simons Foundation’s Flatiron Institute to Repurpose SDSC’s ‘Gordon’ Supercomputer

Tue, 03/14/2017 - 16:12

The San Diego Supercomputer Center (SDSC) at the University of California San Diego and the  Simons Foundation’s Flatiron Institute in New York have reached an agreement under which the majority of SDSC’s data-intensive Gordon supercomputer will be used by Simons for ongoing research following completion of the system’s tenure as a National Science Foundation (NSF) resource on March 31. Under the agreement, SDSC will provide high-performance computing (HPC) resources and services on Gordon for the Flatiron Institute to conduct computationally-based research in astrophysics, biology, condensed matter physics, materials science, and other domains. The two-year agreement, with an option to renew for a third year, takes effect April 1, 2017. Under the agreement, the Flatiron Institute will have annual access to at least 90 percent of Gordon’s system capacity. SDSC will retain the rest for use by other organizations including UC San Diego's Center for Astrophysics & Space Sciences (CASS), as well as SDSC’s OpenTopography project and various projects within the Center for Applied Internet Data Analysis (CAIDA), which is based at SDSC. Learn more at http://www.sdsc.edu/News%20Items/PR20170314_Gordon.html

Afnan Abdul Rehman 2017-03-14T20:12:34Z

AMD Collaborates with Microsoft to Advance Open Source Cloud Hardware

Tue, 03/14/2017 - 16:11

At the 2017 Open Compute Project U.S. Summit, AMD (NASDAQ: AMD) announced its collaboration with Microsoft to incorporate the cloud delivery features of AMD's next-generation "Naples" processor into Microsoft's Project Olympus -- Microsoft's next-generation hyperscale cloud hardware design and a new model for open source hardware development with the OCP community. Because Microsoft contributed the Project Olympus design much earlier in the cycle than many OCP projects, AMD was able to engage early in the design process and foster a deep collaboration around the strategic integration of AMD's upcoming "Naples" processor. The performance, scalability and efficiency found at the core of Project Olympus and AMD's "Naples" processor mean the updated cloud hardware design can adapt to meet the application demands of global datacenter customers. "Next quarter AMD will bring hardware innovation back into the datacenter and server markets with our high-performance 'Naples' x86 CPU, that was designed with the needs of cloud providers, enterprise OEMs and customers in mind," said Scott Aylor, corporate vice president of enterprise systems, AMD. "Today we are proud to continue our support for the Open Compute Project by announcing our collaboration on Microsoft's Project Olympus." Learn more at https://www.enterprisetech.com/2017/03/09/amd-collaborates-microsoft-advance-open-source-cloud-hardware/

Afnan Abdul Rehman 2017-03-14T20:11:33Z

How AMD’s Naples X86 Server Chip Stacks Up to Intel’s Xeons

Tue, 03/14/2017 - 16:10

Ever so slowly, and not so fast as to give competitor Intel too much information about what it is up to, but just fast enough to build interest in the years of engineering smarts that have gone into its forthcoming “Naples” X86 server processor, AMD is lifting the veil on the product that will bring it back into the datacenter and that will bring direct competition to the Xeon platform that dominates modern computing infrastructure. It has been a bit of a rolling-thunder revelation of information about the Zen core used in the “Naples” server chip, the brand name of which has not been released as yet and which will probably not be Opteron as past server chips have been, and in the “Summit Ridge” Ryzen desktop processors. AMD talked quite a bit about the Zen architecture at last year’s Hot Chips conference, and showed a demo of the Ryzen desktop part coincident with the Intel Developer Forum. Now, as the Open Compute Project Summit spearheaded by Facebook is underway in Silicon Valley, AMD is telling us a little more about the Naples chip, which is the first server processor based on the Zen architecture and the first real server chip we have seen out of AMD since the sixteen-core “Abu Dhabi” Opteron 6300s came out in November 2012. (We might call the follow-on to the Opterons “Zazen,” in honor of Zen meditation and because it is very likely impossible to fight a trademark on this term, but no one knows what it will be called outside of AMD as yet.) Learn more at https://www.nextplatform.com/2017/03/08/amds-naples-x86-server-chip-stacks-intels-xeons/

Afnan Abdul Rehman 2017-03-14T20:10:36Z

Nvidia Debuts HGX-1 for Cloud; Announces Fujitsu AI Deal

Tue, 03/14/2017 - 16:09

On Monday Nvidia announced a major deal with Fujitsu to help build an AI supercomputer for RIKEN using 24 DGX-1 servers. Midweek at the Open Compute Project (OCP) Summit in Santa Clara, Calif., the GPU technology leader unveiled blueprints for a new open source Tesla P100-based accelerator – HGX-1 – developed for clouds with Microsoft under its Project Olympus. (We’ll make an educated guess that the D in DGX-1 stands for Deep Learning and the H in HGX-1 for Hyperscale.) At roughly the same time, Facebook introduced Big Basin, the successor to its Big Sur GPU server, which also uses Nvidia P100s (in a similar 8-way configuration, which we’ll get into in a moment). And in the embedded world, Nvidia announced the Jetson TX2, billed as a “drop-in supercomputer,” with an ARM-based CPU supporting Pascal graphics. That’s a productive week by any standard, and there are multiple threads to follow here. Most of the activity was driven by artificial intelligence/deep learning’s continued drive into upper-end HPC and the cloud. Nvidia has been striving to leverage its GPU strength in traditional scientific computing as well as in AI/DL, whose applications often require lower precision (32-, 16-, and even 8-bit) computation. Learn more at https://www.hpcwire.com/2017/03/09/nvidia-debuts-hgx-1-cloud-announces-fujitsu-ai-deal/
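As a rough, generic illustration of that reduced-precision point (and not a demonstration of Nvidia's actual mixed-precision libraries), the NumPy sketch below casts the same weights from 32-bit to 16-bit floats and compares the memory footprint and the numerical error of a matrix-vector product. The sizes and values are arbitrary assumptions.

```python
# Generic reduced-precision illustration: the same matrix-vector product in
# fp32 and fp16. Halving the precision halves the memory for the weights at
# the cost of a small numerical error, which many deep learning workloads
# tolerate. This is plain NumPy, not a vendor mixed-precision library.
import numpy as np

rng = np.random.default_rng(0)
weights32 = rng.standard_normal((1024, 1024)).astype(np.float32)
inputs32 = rng.standard_normal(1024).astype(np.float32)

weights16 = weights32.astype(np.float16)      # cast the same parameters to fp16
inputs16 = inputs32.astype(np.float16)

out32 = weights32 @ inputs32
out16 = (weights16 @ inputs16).astype(np.float32)

rel_err = np.abs(out32 - out16).mean() / np.abs(out32).mean()
print(f"weights: {weights32.nbytes // 1024} KiB in fp32 vs "
      f"{weights16.nbytes // 1024} KiB in fp16")
print(f"mean relative error of the fp16 result: {rel_err:.2e}")
```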

Afnan Abdul Rehman 2017-03-14T20:09:23Z

High Performance Computing Summer Institute at UC San Diego

Tue, 03/07/2017 - 15:37

Applications are open for the San Diego Supercomputer Center Summer Institute 2017 at UC San Diego from July 31 to August 4, 2017. The Summer Institute provides attendees with an overview of topics in High Performance Computing and Data Science to accelerate their learning process through highly interactive classes with hands-on tutorials on the Comet Supercomputer. The program is aimed at researchers in academia and industry, especially in domains not traditionally engaged in supercomputing. Participants only need to have basic programming experience and should be comfortable working in a Linux environment. Check out all classes, apply now, or sign up for a reminder at http://si17.sdsc.edu

Afnan Abdul Rehman 2017-03-07T20:37:19Z

SC17 Experiencing Robust Exhibitor Participation

Thu, 03/02/2017 - 12:47

Even though SC17 is still more than eight months away, the SC17 Exhibits Committee is reporting a significant positive exhibitor response to participating in the Exhibition in Denver this coming November. In fact, the exhibition has already eclipsed the total SC16 exhibitor count and booth space. “As HPC’s importance only continues to grow, the most important HPC decision makers from both research and industry from all corners of the world recognize that attending SC is essential to their success,” said Bronis R. de Supinski, SC17 Exhibits Chair from Lawrence Livermore National Laboratory. “We also spend considerable time and resources on adding new components to the exhibition that enhance both the attendee and the exhibitor experience.” According to de Supinski, items being discussed for SC17 include more social media outreach for real-time exhibit floor updates, as well as informal meeting or networking points that are not tied to a particular booth. Further, the SC17 Emerging Technologies booth will be incorporated into the exhibit floor. Learn more at https://www.hpcwire.com/off-the-wire/sc17-experiencing-robust-exhibitor-participation/

Afnan Abdul Rehman 2017-03-02T17:47:28Z

Apply Now for the SC17 Student Cluster Competition - Deadline: April 7, 2017

Thu, 03/02/2017 - 12:44

SC17 is excited to hold another nail-biting Student Cluster Competition, or SCC, now in its eleventh year, as an opportunity to showcase student expertise in a friendly yet spirited competition. Held as part of SC17’s Students@SC, the Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community. Over the years, the competition has drawn teams from around the United States and around the world. The Student Cluster Competition is an HPC multi-disciplinary experience integrated within the HPC community’s biggest gathering, the Supercomputing Conference. The competition is a microcosm of a modern HPC center that teaches and inspires students to pursue careers in the field. It demonstrates the breadth of skills, technologies and science that it takes to build, maintain and utilize a supercomputer. In this real-time, non-stop, 48-hour challenge, teams of undergraduate and/or high school students assemble a small cluster on the exhibit floor and race to complete a real-world workload across a series of applications and impress HPC industry judges. Learn more at http://insidehpc.com/2017/02/apply-now-sc17-student-cluster-competition/

Afnan Abdul Rehman 2017-03-02T17:44:21Z

Scientists develop new high-precision method for analyzing and comparing functioning and structure of complex networks

Thu, 03/02/2017 - 12:37

Researchers at the Universitat Politècnica de Catalunya (UPC) and the University of Barcelona (UB) published a paper in Nature Communications presenting a scientific method for identifying, comparing and precisely determining objective differences between large complex networks. The new method makes it possible, for example, to compare and differentiate the functioning of brain networks in drug addicts and healthy individuals, thus advancing the study of the symptoms and effects of addiction on the brain. The method can also be used to more effectively analyze the functioning of critical complex systems, such as power distribution networks, airport connections and even social networks like Facebook and Twitter. Researcher Cristina Masoller explained the advantages of the new approach: "Imagine you have a power distribution system consisting of two interconnected networks, each with the same number of links, and one network loses a link because of a breakdown. With the methods we've had up until now, it's only been possible to determine the difference due to that missing link. With our method, we can also determine the precise location of the lost link and its importance in relation to the system—that is, whether its absence will significantly hinder the distribution of power." Read more at https://phys.org/news/2017-02-scientists-high-precision-method-analysing-functioning.html
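The power-grid example in the quote can be illustrated with generic graph tools. The NetworkX sketch below uses a synthetic network and plain edge betweenness, not the method described in the Nature Communications paper: it locates a missing link by differencing edge sets and gauges how much that link mattered to the network.

```python
# Generic illustration (not the UPC/UB method): compare a "healthy" network with
# a copy that has lost one link, locate the missing link, and gauge its
# importance via edge betweenness and the change in average path length.
import networkx as nx

healthy = nx.connected_watts_strogatz_graph(n=50, k=4, p=0.1, seed=42)

failed_link = next(iter(healthy.edges()))     # pretend this link fails
degraded = healthy.copy()
degraded.remove_edge(*failed_link)

# Locate the lost link by differencing the edge sets of the two networks.
missing = set(healthy.edges()) - set(degraded.edges())
print("missing link(s):", missing)

# How important was it? Edge betweenness measures how much shortest-path
# "traffic" the link carried in the healthy network.
betweenness = nx.edge_betweenness_centrality(healthy)
b = betweenness.get(failed_link, betweenness.get(failed_link[::-1]))
print("edge betweenness of the failed link:", round(b, 4))

# And how much does connectivity degrade without it?
if nx.is_connected(degraded):
    before = nx.average_shortest_path_length(healthy)
    after = nx.average_shortest_path_length(degraded)
    print(f"average shortest path length: {before:.3f} -> {after:.3f}")
else:
    print("removing the link disconnected the network entirely")
```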

Afnan Abdul Rehman 2017-03-02T17:37:07Z

AMD reveals Radeon Vega's final name, infuses Bethesda games with Vulkan

Thu, 03/02/2017 - 12:36

The expected showdown between Radeon Vega GPUs and the GeForce GTX 1080 Ti at GDC on Tuesday won’t be a showdown after all. New technical details about AMD’s hotly anticipated enthusiast-class graphics cards were almost nowhere to be found during the company’s “Capsaicin & Cream” livestream—though Radeon head Raja Koduri did reveal that the brand name for Vega GPUs will indeed be “Radeon RX Vega,” rather than RX 490 or RX 580. What a tease. Koduri also showcased a brief Deus Ex: Mankind Divided demo that suggested that Vega’s high-bandwidth cache controller can increase average and minimum frame rates by 50 and 100 percent, respectively, in memory-limited games. Impressive! Another quick demo with AMD’s TressFX technology revealed that Vega’s rapid packed math feature could double compute rates, letting the demo render twice as many hair strands as a Vega system with RPM disabled. Most notable is a new technology deal with Bethesda, the publisher of Doom, Fallout, The Elder Scrolls, Dishonored, and more. While partnerships between graphics companies and developers have typically involved just a single, specific game, AMD’s deal with Bethesda spans multiple games across a range of series. The crux is primarily to implement Vulkan, the open DirectX 12 alternative that rose from the ashes of AMD’s Mantle technology, as well as “the computing and graphics power of AMD Ryzen CPUs [and] Radeon GPUs.” Read more at http://www.pcworld.com/article/3174806/gaming/amd-radeon-infuses-bethesda-games-with-vulkan-cozies-up-to-a-geforce-now-rival.html

Afnan Abdul Rehman 2017-03-02T17:36:02Z
