HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

NCSA Releases 2017 Blue Waters Project Annual Report

Thu, 11/09/2017 - 08:47

URBANA, Ill., Nov. 9, 2017 — The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign today released the 2017 Blue Waters Project Annual Report. For the project’s fourth annual report, research teams were invited to present highlights from their research that leveraged Blue Waters, the National Science Foundation’s (NSF) most powerful system for sustained computation and data analysis. Spanning economics to engineering and geoscience to space science, Blue Waters has accelerated research and impact across an enormous range of science and engineering disciplines throughout the more than four-year history covered by the report series. This year is no different.

“To date, the NSF Blue Waters Project has provided over 20 billion core-hour equivalents to science, engineering and research projects, supported not just by the NSF, but also by NIH, NASA, DOE, NOAA, and other funders. Without Blue Waters, these funded investigations might not even be possible,” said Blue Waters Director and Principal Investigator, Dr. William “Bill” Kramer. “In this year’s report, we are using a ‘badge’ to show the projects that are Data-intensive (39 projects), GPU-accelerated (34), Large Scale greater than 1,000 nodes (65), Memory-intensive (18), Only on Blue Waters (27), Multi-physics/multi-scale (47), Machine learning (9), Communication-intensive (32) and Industry (5). This shows the breadth and depth of the uses world-class science is making of Blue Waters.”

“I continue to be amazed by the vast range of creative, limit-pushing research that scientists submit to this publication year after year. With the support of the National Science Foundation and the University of Illinois, the National Center for Supercomputing Applications’ Blue Waters Project continues to empower scientists to make discoveries that have immense impact in a diverse range of fields, spark new understanding of our world, and open new avenues for future research,” said Dr. William “Bill” Gropp, Director of NCSA.

The annual report also highlights the Blue Waters Project’s strong focus on education and outreach. Blue Waters provides the equivalent of 60 million core-hours of the system’s computational capacity each year for educational projects, including seminars, courseware development, courses, workshops, institutes, internships, and fellowships. To date, there have been more than 200 approved education, outreach, and training projects from organizations across the country. These allocations have directly benefitted over 3,700 individuals in learning about different aspects of computational and data-enabled science and engineering at more than 160 institutions, including 41 institutions in EPSCoR jurisdictions and at 14 Minority Serving Institutions.

The Blue Waters Annual Report highlights how the project is helping other domain specialists reach petascale sustained performance, specifically through its recently expanded Petascale Application Improvement Discovery (PAID) program, through which the Project provided millions of dollars to science teams and computational and data experts to improve the performance of applications.

Gropp continued, “Even more remarkable breakthroughs will be forthcoming as NCSA continues to partner with scientists around the nation to change the world as we know it.”

This year’s annual report features 130 research abstracts from various allocation types, categorized by space science; computer science; geoscience; physics and engineering; biology, chemistry, and health; and social science, economics, and humanities.

Read the original release in full here: http://www.ncsa.illinois.edu/news/story/ncsa_releases_2017_blue_waters_project_annual_report_detailing_innovative_r

Source: NCSA

The post NCSA Releases 2017 Blue Waters Project Annual Report appeared first on HPCwire.

Atos Launches Next Generation Servers for Enterprise AI

Thu, 11/09/2017 - 08:25

PARIS, Nov. 9, 2017 — Atos, a global leader in digital transformation, launches BullSequana S, its new range of ultra-scalable servers enabling businesses to take full advantage of AI. With their unique architecture, developed in-house by Atos, BullSequana S enterprise servers are optimized for Machine Learning, business‐critical computing applications and in-memory environments.

In order to utilize the extensive capabilities of AI, businesses require an infrastructure with extreme performance. BullSequana S tackles this challenge with its unique combination of powerful processors (CPUs) and graphics processing units (GPUs). The server’s proven modular architecture gives customers the flexibility and agility to add Machine Learning and AI capacity to existing enterprise workloads by introducing GPUs. Within a single server, GPU, storage and compute modules can be mixed to build a tailor-made system that is readily available for all workloads worldwide.

An ultra-scalable server for both classical use cases and AI

BullSequana S combines the most advanced Intel Xeon Scalable processors – codenamed Skylake – with an innovative architecture designed by Atos’ R&D teams. It helps reduce infrastructure costs while improving application performance thanks to ultra-scalability – from 2 to 32 CPUs – innovative high-capacity storage, and booster capabilities such as GPUs and, potentially, other technologies such as FPGAs in future developments.

“Atos is a prominent global SAP partner delivering highly performant and scalable solutions for deployments of SAP HANA. We have been working together to accelerate SAP HANA deployments by providing a full range of SAP HANA applications certified up to 16TB. The new BullSequana S server range developed by Atos is one of the most scalable platforms in the market, optimized for critical deployments of SAP HANA. It is expected to open new additional collaboration areas between SAP and Atos around artificial intelligence and machine learning,” said Dr. Jörg Gehring, senior vice president and global head of SAP HANA Technology Innovation Networks.

BullSequana S – to reach extreme performance whilst optimizing investment:

  • Up to 32 processors, 896 cores, and 32 GPUs in a single server, delivering outstanding performance and supporting long-term investment protection as capacities evolve smoothly according to business needs.
  • With up to 48TB RAM and 64TB NV-RAM in a single server, real-time analytics of enterprise production databases run much faster than on a conventional computer thanks to in-memory technology, while ensuring both security and high quality of service.
  • With up to 2PB internal data storage, BullSequana S efficiently supports data lake and virtualization environments.

Availability

The first BullSequana S machines, manufactured at Atos’ factory in France, are available worldwide from today.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around € 12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

The post Atos Launches Next Generation Servers for Enterprise AI appeared first on HPCwire.

DDN’s Massively Scalable Storage Empowers Pawsey Supercomputing Centre to Speed Scientific Discoveries

Thu, 11/09/2017 - 08:19

SANTA CLARA, Calif., Nov. 9, 2017 — DataDirect Networks (DDN) today announced that Pawsey Supercomputing Centre in Western Australia has deployed a pair of DDN GRIDScaler parallel file system appliances with 5PB of storage, as well as an additional 2PB of DDN capacity, to support diverse research, simulations and visualizations in radio astronomy, renewable energy and geosciences, among several other scientific disciplines. At Pawsey, DDN’s GRIDScaler delivers the performance and stability needed to address 50 large data collections and contribute towards scientific outcomes for some of the thousand scientists who benefit from Pawsey services.

The prevalence of Pawsey’s data-intensive projects creates a constant need to store massive media files, videos, images, text and metadata that must be stitched together, accessed, searched, shared and archived. “Scientists come to us with lots of speculative technologies with different protocols and access methods, so flexibility is key,” said Neil Stringfellow, executive director of the Pawsey Supercomputing Centre. “We need to adapt to changing research requirements, whether we need to support Big Data analytics, HPC processing and global data sharing as well as connect with some new service or technology.”

Pawsey plays a pivotal role in the trailblazing Square Kilometer Array (SKA) project, which focuses on building a next-generation radio telescope that will be more sensitive and powerful than today’s most advanced telescopes, to survey the universe with incredible depth and speed. The center also uses DDN storage to support the game-changing Desert Fireball Network (DFN) project, which uses cameras to track fireballs as they shoot across the Australian desert night sky, aiding in the discovery and retrieval of newly fallen meteorites.

To keep pace with data that is expected to grow by 15PB annually to support the SKA precursor projects alone, Pawsey needed a massively scalable yet highly reliable storage platform. DDN integrated storage with various features, including IBM’s Spectrum Scale parallel file system, tiering choices, replication, data protection and data management, to support various front-end data access services that required high-speed storage connections over Ethernet. Additionally, DDN’s leadership role in establishing an IBM Spectrum Scale user group in Western Australia has proven instrumental in engaging the broader scientific and technical community in benefiting from file system advancements for academic, research and industrial applications.

With DDN underpinning its research projects, Pawsey continues to push research boundaries. Pawsey recently made headlines for supporting a research team in surveying the entire Southern sky as part of the Galactic and Extragalactic All-Sky Murchison Widefield Array survey (GLEAM). GLEAM researchers have stored and processed more than 600TB of radio astronomy data to produce the world’s first radio-color panoramic view of the universe and have cataloged more than 300,000 radio galaxies from the extensive sky survey.

For the Pawsey Supercomputing Centre, the future holds the promise of additional scientific breakthroughs and constant spikes in HPC and storage needs, which DDN is able to meet. In terms of computing power and storage requirements, the SKA project alone is predicted to surpass the world’s most demanding research to date, including the Large Hadron Collider.

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.

Source: DDN

The post DDN’s Massively Scalable Storage Empowers Pawsey Supercomputing Centre to Speed Scientific Discoveries appeared first on HPCwire.

CoolIT Systems Launches Rack DCLC AHx10 Heat Exchange Module

Thu, 11/09/2017 - 08:12

CALGARY, Alberta, Nov. 9, 2017 — CoolIT Systems (CoolIT), a world leader in energy efficient liquid cooling solutions for HPC, Cloud, and Enterprise markets, has expanded its Rack DCLC product line with the release of the AHx10 Heat Exchange Module. This latest member of the CoolIT lineup is a rack mounted Liquid-to-Air heat exchanger that enables dramatic increases in server density without the need for any facility water.

The AHx10 strengthens CoolIT Systems’ extensive range of Rack DCLC heat exchange solutions featuring centralized pumping architecture. Customers can now easily deploy high density liquid cooled servers inside their existing data centers without the requirement for facility liquid being brought to the rack. The AHx10 mounts directly in a standard rack, supporting a cold aisle to hot aisle configuration. The standard 5U system manages 7kW at 25°C ambient air temperature and can be expanded to 6U or 7U configurations (via the available expansion kit), scaling capacity up to 10kW of heat load. CoolIT will officially launch the AHx10 at the Supercomputing Conference 2017 (SC17) in Denver, Colorado (booth 1601).

“Data centers struggling to manage the heat from high density racks now have a solution with the AHx10 that requires no facility modification,” said CoolIT Systems VP of Product Marketing, Pat McGinn. “The plug and play characteristics of the AHx10 make it a highly flexible, simple to install solution that delivers the great benefits of liquid cooling immediately.”

The AHx10 supports front-to-back air flow management and is compatible with CoolIT Manifold Modules and Server Modules. It boasts redundant pumping as well as an integrated monitoring and control system with remote access.

The AHx10 is the perfect solution to manage cooling for HPC racks within an existing data center. SC17 attendees can learn more about the solution by visiting CoolIT at booth 1601. To set up an appointment, contact Lauren Macready at lauren.macready@coolitsystems.com.

About CoolIT Systems

CoolIT Systems is a world leader in energy efficient liquid cooling technology for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution. For more information about CoolIT Systems and its technology, visit www.coolitsystems.com.

About Supercomputing Conference (SC17)

Established in 1988, the annual SC conference continues to grow steadily in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall. SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. This diversity is one of the conference’s main strengths, making it a yearly “must attend” forum for stakeholders throughout the technical computing community. For more information, visit http://sc17.supercomputing.org/.

Source: CoolIT Systems

The post CoolIT Systems Launches Rack DCLC AHx10 Heat Exchange Module appeared first on HPCwire.

Equus Launches SDX Platforms at SC17

Thu, 11/09/2017 - 07:59

Nov. 9, 2017 — Equus Compute Solutions, one of America’s largest manufacturers of custom computer hardware systems, has launched its SDX Platforms line of white box custom servers and storage solutions for cost-optimized software defined infrastructures. The new SDX Platforms will be shown at SC17 in Denver, CO, November 13-16, 2017, in both the Equus Booth #252 and the Intel Partner Pavilion.

The Equus SDX Platforms are Intel Xeon Scalable CPU-based white box 1U, 2U and 4U servers and storage chassis that provide high performance, densely integrated, cost-effective hardware configurations. Each of the new SDX Platforms can be customized to support such applications as software defined storage, virtualization, containers, hyper-converged scale-out, content delivery networks, hybrid cloud services, Apache Hadoop big data analysis, machine learning, and surveillance storage. SDX Platforms for hyper-converged infrastructure, hybrid cloud services, and GPU-based deep learning will be on display in the Equus and Intel booths.

“We have worked closely with our customers’ architecture and engineering teams to design specific hardware platforms that are cost-optimized and deliver maximum performance for the customer’s specific workloads,” said Costa Hasapopoulos, Equus President. “Our SDX Platforms enable software-defined infrastructures in customer datacenters, dramatically lowering IT costs, and make life easier through the cost-effective deployment of hyper-converged scale-out, private, and hybrid cloud frameworks.”  

As a world-renowned custom computer manufacturer, Equus offers extremely competitive prices – with superior logistics and support – to a wide range of customers, including resellers, content delivery networks, cloud service providers, software vendors, and OEMs. Equus offers solutions for software defined infrastructures – storage, virtualization and management – that offer a level of customization options for these solutions that is unavailable from legacy vendors. In addition, Equus dramatically reduces life-cycle costs by enabling customers to support systems using white box server management and in-house resources, rather than costly outside vendors.

Source: Equus Compute Solutions

The post Equus Launches SDX Platforms at SC17 appeared first on HPCwire.

Gidel First to Market with Application Support Package for Intel’s HLS

Wed, 11/08/2017 - 15:15

SANTA CLARA, Calif., and OR-AKIVA, Israel, Nov. 8, 2017 – Gidel, a technology leader in high performance accelerators utilizing FPGAs, today announced the availability of development tools that take advantage of Intel’s HLS, producing a speed increase of 5x over prior development options.

Intel’s High Level Synthesis (HLS) compiler turns untimed C++ into low-level Register Transfer Level (RTL) FPGA code. Gidel’s development tools map board resources to application needs and provide the glue between the host computer and the FPGA logic by building an Application Support Package (ASP). Gidel’s tools give software developers access to HLS, and simplify the integration of new IP that may utilize HLS into existing designs.

Standard HLS does not provide system middleware and board support, but simply accelerates FPGA code development; HLS is intended for traditional FPGA designers. Gidel’s development tools grant software developers easy access to the same level of design for FPGA as OpenCL by tailoring the ASP(s), using C++ as the programming language. Developers of all types can now work faster and more efficiently, with the freedom to mix C++ and HDL as most appropriate to the application. For more complex needs, Gidel’s tools allow several applications to be accelerated at the same time on the same FPGA by splitting resources between them; each such partition may be developed from entirely separate HLS code.
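
As a rough, illustrative sketch only (this is not Gidel or Intel code; vendor-specific attributes, the top-level component designation, and the host-to-FPGA glue generated by Gidel’s ASP are all omitted), the kind of untimed C++ kernel an HLS flow consumes can be as plain as the fixed-coefficient FIR filter below, which can be compiled and verified on the host exactly as written before synthesis.

    // Illustrative only: generic untimed C++ of the sort an HLS compiler can
    // lower to RTL. Vendor pragmas and the host <-> FPGA interface that Gidel's
    // ASP would supply are deliberately left out of this sketch.
    #include <array>
    #include <cstdio>

    constexpr int TAPS = 8;

    // Fixed-coefficient FIR filter over one input window. The loop describes
    // *what* to compute; the HLS tool decides how to pipeline it in hardware.
    int fir(const std::array<int, TAPS>& coeff,
            const std::array<int, TAPS>& window) {
        int acc = 0;
        for (int t = 0; t < TAPS; ++t)
            acc += coeff[t] * window[t];
        return acc;
    }

    int main() {
        // Host-side functional check of the same C++ source, run before synthesis.
        std::array<int, TAPS> coeff{1, 2, 3, 4, 4, 3, 2, 1};
        std::array<int, TAPS> window{1, 1, 1, 1, 1, 1, 1, 1};
        std::printf("fir output: %d\n", fir(coeff, window)); // expect 20
        return 0;
    }

The same source can then be handed to the HLS compiler, which schedules and pipelines the loop into RTL, while tools such as Gidel’s supply the board-specific ASP and host interface around it.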

Gidel has already concluded several successful case studies in FPGA development utilizing its development tools in combination with HLS. “HLS allows for a tremendous speed increase in FPGA development,” explains Reuven Weintraub, Founder and CTO of Gidel. “Utilizing Gidel’s development tools makes it easy for both software designers and HDL designers to use HLS, according to application needs.”

Gidel development tools are optimized for maximum system performance and effective ease-of-use. The generated API maps the relevant user’s variables directly into the FPGA design. The on-board DRAM can be split into multiple logical memories accessed in parallel by the users’ FPGA code as most optimized to the application need. When the FPGA is used to support multiple applications, each application’s API enables it to access only its own variables, thus keeping the system safe from the hardest bugs to find and fix.

Gidel is one of a limited number of FPGA companies selected by Intel to participate in their HLS early access program, and is the first company to announce currently available development tools for HLS.

Visit Gidel in booth 1242 at SC17 in Denver, Colorado (Nov 13-16) for an in-depth presentation, or hear Gidel’s talk on “Intel HLS for FPGA with Gidel Development Tools” at the Intel Nerve Center (booth 1203) on Tuesday, November 14, from 2-3pm.

About Gidel

For 25 years, Gidel has been a technology leader in high performance, innovative, FPGA-based accelerators. Gidel’s reconfigurable platforms and development tools have been used for optimal application tailoring and for reducing the time and cost of project development. Gidel’s dedicated support and its products’ performance, ease-of-use, and long-life cycles have been well appreciated by satisfied customers in diverse markets who continuously use Gidel’s products, generation after generation. For more information visit www.gidel.com.

Source: Gidel

The post Gidel First to Market with Application Support Package for Intel’s HLS appeared first on HPCwire.

Spectra Logic to Present at Denver’s SC17 on the Future of Tape Technology

Wed, 11/08/2017 - 11:47

BOULDER, Colo., Nov. 8, 2017 — Spectra Logic CTO Matt Starr will present on the future of Spectra tape technology within an Exhibitor Forum session open to all registered attendees of Supercomputing 2017 (SC17). The major high-performance computing (HPC) conference is being held November 12-17, 2017, at the Colorado Convention Center in Denver. On the heels of several LTO-8 announcements made recently, Starr will discuss why tape library systems are the most cost-effective high-performance research storage solutions available, as well as how the current tape technology roadmap makes tape an ideal storage medium for data-intensive environments.

Title: “Spectra Logic Delivers a New Paradigm in Tape Library Deployment”
Date: Wednesday, November 15, 2017
Time:  3:30-4:00 p.m. MST
Location: SC17, Colorado Convention Center, 503-504

“Tape libraries are the best storage platform for high performance environments, given the solid tape technology roadmap, longevity of tape media for long-term storage needs and overall low cost for larger archives,” said Matt Starr, Spectra Logic’s CTO. “Spectra Logic’s tape library family offers unique features and interfaces, and is backed by excellent support options specifically designed for our HPC customers.”

Additionally, Hyperion Research senior research vice president Steve Conway and CEO Earl Joseph, along with Spectra Logic CSO Brian Grainger, will host an educational breakfast seminar for SC17 attendees and Denver-area HPC end users during the conference. Click here for more information and to register for the event.

Title: “Best Practices Breakfast Seminar: Does Moving Data to the Cloud Make Sense?”
Date: Wednesday, November 15, 2017
Time:  7:30 a.m. – 9:30 a.m. MST
Location: University of Colorado Denver—Terrace Room

“HPC end users today are faced with a cloud-related dilemma as they map out their storage infrastructures,” said Steve Conway, senior research vice president, Hyperion Research. “The CIO level recognizes the fast adoption and cost value of cloud storage, while the IT manager understands the benefits of retaining a copy of data onsite in a hybrid cloud environment. This seminar will feature the latest market research and best practices for customer network environments to help IT managers choose the best strategy for their HPC storage environment.”

Visit Spectra Logic in booth #863 at SC17.

About SC17

SC17 is the premier international conference showcasing the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. The annual event, created and sponsored by the ACM (Association for Computing Machinery) and the IEEE Computer Society, attracts HPC professionals and educators from around the globe to participate in its complete technical education program, workshops, tutorials, world-class exhibit area, demonstrations and opportunities for hands-on learning. Visit http://sc17.supercomputing.org for more information.

Source: SC17

The post Spectra Logic to Present at Denver’s SC17 on the Future of Tape Technology appeared first on HPCwire.

Co-Design Center Develops Next-Gen Simulation Tools

Wed, 11/08/2017 - 10:20

The exascale architectures on the horizon for supercomputing pose intricate challenges to the developers of large-scale scientific applications. Running efficiently at exascale will not only require porting and optimizing code but also rethinking the design of core algorithms to take advantage of the greater parallelism and more efficient calculation capabilities that exascale systems will offer.

With the aim of efficiently exploiting exascale resources, the Center for Efficient Exascale Discretizations (CEED) of the Exascale Computing Project (ECP) is working closely with a wide variety of application scientists, vendors, and software technology developers to create highly optimized discretization libraries and next-generation mini apps based on what are called advanced high-order finite element methods.

CEED is one of five ECP co-design centers established to enable developers of software technology, hardware technology, and computational science applications to forge multidisciplinary collaborations.

The co-design centers are targeting the most important computational algorithms and patterns of communication in scientific research, referred to as scientific application motifs. By taking advantage of sophisticated mathematical algorithms and advanced hardware such as GPUs, for example, the high-order methods being developed in CEED are delivering improved simulation quality without an increase in computational time.

The computational motif addressed by CEED pertains to discretization, the process of dividing a large simulation into smaller components (finite elements) in preparation for computer analysis. Computer modeling and simulation of physical phenomena such as fluid flow, vibration, and heat is central to understanding the behavior of real-world systems such as solar, wind, combustion, and nuclear fusion and fission. Meshes (or grids) that represent the domain of interest (a wind farm, for example) are created for finite element analysis, in which the equations governing the physics are approximated by polynomials on each element.

CEED’s focus is on efficient implementations of high-order polynomial approximations on unstructured meshes. These methods require advanced mathematical theory and algorithms, but can provide high-fidelity approximations of the fields inside each element, leading to higher-quality solutions with fewer unknown coefficients to be determined. The resulting smaller linear systems need less data movement, a critical performance factor on today’s high-performance computers and on future exascale systems. This allows the CEED algorithms to better exploit the hardware and deliver a significant performance gain over conventional low-order methods, said CEED director Tzanio Kolev of Lawrence Livermore National Laboratory (LLNL).
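
To make that trade-off concrete, the short, self-contained C++ sketch below (illustrative only; it is not CEED, MFEM, or Nek5000 code, and real CEED kernels operate on multi-dimensional elements) interpolates a smooth field on the reference interval [-1, 1] with Lagrange polynomials of increasing order and prints the maximum error. The rapid error decay per added coefficient is the property that high-order discretizations exploit.

    // Minimal sketch: high-order Lagrange interpolation of a smooth "field"
    // on the reference element [-1, 1]. Higher polynomial order yields far
    // more accuracy per unknown coefficient for smooth solutions.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // i-th Lagrange basis function for the node set xs, evaluated at x.
    double lagrange(const std::vector<double>& xs, int i, double x) {
        double v = 1.0;
        for (int j = 0; j < (int)xs.size(); ++j)
            if (j != i) v *= (x - xs[j]) / (xs[i] - xs[j]);
        return v;
    }

    int main() {
        const double pi = std::acos(-1.0);
        auto f = [](double x) { return std::sin(3.0 * x); };   // smooth field
        for (int p = 2; p <= 12; p += 2) {                      // polynomial order
            std::vector<double> xs(p + 1);                      // p+1 nodes = p+1 unknowns
            for (int i = 0; i <= p; ++i)
                xs[i] = -std::cos(pi * i / p);                  // Chebyshev-Lobatto nodes
            double err = 0.0;                                   // max error on a fine grid
            for (int k = 0; k <= 1000; ++k) {
                double x = -1.0 + 2.0 * k / 1000.0, u = 0.0;
                for (int i = 0; i <= p; ++i)
                    u += f(xs[i]) * lagrange(xs, i, x);
                err = std::max(err, std::fabs(u - f(x)));
            }
            std::printf("order %2d (%2d unknowns): max error %.2e\n", p, p + 1, err);
        }
        return 0;
    }

Production high-order codes apply the same principle to 2D and 3D elements with far more sophisticated, hardware-tuned kernels; those kernels are what the CEED libraries, mini apps, and bake-off problems discussed below target.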

Members of the Center for Efficient Exascale Discretizations (CEED) team gathered in August at Lawrence Livermore National Laboratory for the center’s first meeting with representatives of projects, vendors, and industry associated with the Exascale Computing Project. CEED plans to have the meeting annually. (Image credit: CEED)

To serve as an integration hub for the ECP efforts, CEED is organized around four interconnected research-and-development thrusts focused on its customers: applications, hardware, software, and finite elements. The finite elements thrust ties together and coordinates the efforts of the other three.

The center is assisting the applications groups through the development of discretization libraries that extract maximal performance out of new hardware as it is deployed and provide this performance to multiple applications without duplication of development effort. Members of the CEED team are designated as liaisons to selected applications, ensuring that the center accounts for their needs and that algorithmic improvements are integrated into applications when the new machines are delivered.

“CEED helps ECP applications by providing them with leading-edge simulation algorithms that can extract much more of the performance from exascale hardware than what’s currently available,” Kolev said.

The first wave of ECP application projects with which CEED has been actively collaborating includes ExaSMR, a coupled Monte Carlo neutronics and fluid flow simulation tool of small modular nuclear reactors, and MARBL, a next-generation multiphysics code at LLNL.

“A discretization library can help a wide variety of applications to bridge the gap to exascale hardware,” Kolev said. “Once we’ve made the connection and the library has been integrated into an application, we can quickly deliver new benefits, such as improved kernels [the core components of numerical algorithms], in the future by upgrading the application to an updated version of the library.”

In addition to libraries of highly performant kernels, key products of the CEED project are new and improved mini apps designed to highlight performance critical paths and provide simple examples of meaningful high-order computations.

The mini apps can be used for interactions with vendors and software technologies projects, and for procurement activities. Two of the mini apps that CEED has developed for those purposes are called Nekbone and Laghos, which are part of the recently released ECP Proxy Applications Suite 1.0. Nekbone and Laghos represent subcomponents of interest from ExaSMR and MARBL.

The Nekbone mini app has a long history, including use in a recent procurement to represent incompressible flow simulations with implicit time stepping. Under CEED, Nekbone is being updated to run on GPUs, an important component of the Titan supercomputer and the upcoming Summit and Sierra machines.

In contrast, Laghos is a new mini app developed in CEED as a proxy for compressible flow simulations with explicit time stepping. “Laghos consists of a particular class of algorithms that pertains to compressible flow with unstructured moving meshes that represents interactions of materials and shock waves as extreme densities and pressures,” Kolev said. “This is the first time we’ve had a really good mini app that captures high-order discretizations for these types of problems. Laghos is important for the activities at many of the National Nuclear Security Administration labs, and we have a lot of interest already from vendors who want to optimize the mini app.”

(The video of a simulation below is an example of the type of high-order calculations that CEED addresses. Shown is a Lagrangian compressible hydrodynamics simulation of triple-point shock interaction in axisymmetric coordinates. The new Laghos mini app is the first proxy for these types of problems. For more details and additional examples of CEED-related simulations, see CEED Publications and Outreach, BLAST: High-Order Finite Element Hydrodynamics, NEK5000, and MFEM. Simulation movie credit: CEED)

Kolev described CEED’s interactions with vendors as a two-way connection. The center is working closely with them to ensure its discretization libraries will run efficiently on the hardware in development. At the same time, CEED is providing feedback to the vendors concerning hardware changes that can improve the performance of high-order algorithms.

“We represent many physicists and applications scientists when we interact with vendors,” Kolev said. “We learn from them how to make low-level optimizations and discover which memory models and programming models are good for our libraries’ algorithms. But we also advocate to the vendors that they should consider certain types of algorithms in their designs.”

When exascale becomes a reality, CEED wants high-order application developers to feel comfortable that they have full support for all tools that are part of their simulation pipeline.

“That means we’re thinking not only about discretization but also meshing, visualization, and the solvers that work with high order at exascale,” Kolev said. “We are collaborating with the teams in the ECP’s Software Technology focus area to develop new mathematical algorithms and expand the tools they are developing to meet the needs of CEED’s computational motif.”

CEED has also proposed important lower-level benchmarking problems (known as bake-off problems) to compare different high-order approaches and engage with the external high-order community via GitHub.

“These problems are designed to exercise really hot kernels that are central to the performance of high-order methods,” Kolev said. “This is an activity we use internally to push each other and learn from each other and make sure we are delivering to the users the best possible performance based on our collective experience. Engaging with the community is a win-win for everyone, and we are already starting to do this: a project in Germany picked up our benchmark problems from GitHub, and ran some very interesting tests. The engagement is ultimately going to benefit the applications because we are going to use the kernels that stem from such interactions in our libraries.”

CEED is a research partnership of two US Department of Energy labs and five universities. The partners are Lawrence Livermore National Laboratory; Argonne National Laboratory; the University of Illinois Urbana-Champaign; Virginia Tech; the University of Tennessee, Knoxville; the University of Colorado, Boulder; and the Rensselaer Polytechnic Institute. The center held its first project meeting at LLNL in August and plans to hold the meeting annually. The meeting brought together more than 50 researchers from ECP projects, vendors, and industry.

Link to the original article on the ECP web site: https://www.exascaleproject.org/co-design-center-develops-next-generation-simulation-libraries-and-mini-apps/

The post Co-Design Center Develops Next-Gen Simulation Tools appeared first on HPCwire.

Rescale and X-ISS to Provide Services for Migrating from On-Premise Systems to Cloud HPC

Wed, 11/08/2017 - 10:04

SAN FRANCISCO, Nov. 8, 2017 — Rescale, a leading provider of enterprise big compute and cloud high performance computing (HPC), today announced a partnership with X-ISS, an HPC systems integration firm focused on on-premise HPC management solutions, to additionally offer professional services to enterprises migrating HPC workloads to the cloud.

Rescale’s ScaleX platform provides a managed HPC cloud environment by seamlessly connecting to public cloud providers, bare metal providers, or a customer’s hardware system whether on or off-premise, to enable a centrally-managed hybrid compute system. Similarly, X-ISS provides deployment, management, and analytics for on-premise clusters. The partnership will enable X-ISS’ on-premise customers to take advantage of HPC in the cloud. As a Rescale partner, X-ISS will perform integration services with Rescale’s ScaleX platform, including hybrid integration, private networking implementation, and integration of custom applications, amongst others.

Tyler Smith, Rescale’s Head of Partnerships, says that the arrangement is perfect for large enterprises: “As cloud HPC adoption accelerates, our partnership with X-ISS will make it easier for the enterprise to leverage cloud without disrupting workloads on existing on-premise infrastructure. Bursting to cloud or migrating certain workloads offers a more elastic and scalable HPC environment, and Rescale working with X-ISS will aim to provide the optimal solution.”

Deepak Khosla at X-ISS also praised the partnership, saying, “At X-ISS, we are very excited to partner with Rescale to eliminate the compute ceilings and queuing wait times that our customers sometimes deal with. Together, X-ISS and Rescale will provide a comprehensive HPC solution and services that will delight our joint customers.”

About Rescale

Rescale is a global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

About X-ISS

X-ISS has been providing cross-platform management and analytics solutions for the High Performance Computing (HPC) and Big Data industry for over 15 years. With a reputation for the highest levels of quality and customer satisfaction, X-ISS is able to help customers design, integrate, and manage HPC cluster systems, as well as monitor, report, and deliver analytics that are important to HPC users. These solutions include ProjectHPC for vendor neutral cluster installation and configuration services; ManagedHPC for full-service remote cluster management services; CloudHPC for helping bring the Cloud elasticity to on-premise clusters; and DecisionHPC for insightful business analytics that allow customers to keep their clusters operating at maximum efficiency. For more information about X-ISS Services, visit www.x-iss.com.

Source: Rescale

The post Rescale and X-ISS to Provide Services for Migrating from On-Premise Systems to Cloud HPC appeared first on HPCwire.

Ethernet Alliance SC17 Demo Highlights Rich Solutions Portfolio For HPC

Wed, 11/08/2017 - 09:57

BEAVERTON, Ore., Nov. 8, 2017 — The Ethernet Alliance, a global consortium dedicated to the continued success and advancement of Ethernet technologies, today released details of its SC17 multivendor interoperability demo. Showcasing technologies ranging from 25 Gigabit (25G) to 400 Gigabit (400G) Ethernet, the display highlights the Ethernet ecosystem’s unified efforts to provide high-performance computing (HPC) designers with forward-looking solutions for creating custom-built HPC environments for today and tomorrow. The Ethernet Alliance demo can be experienced at booth 842 at the Colorado Convention Center in Denver, Colo., from November 13 – 17, 2017. Please follow #EASC17 on Twitter for Ethernet Alliance news and events from the SC17 expo floor.

“Ethernet continues to provide diverse choices and options for the HPC community. From component and cabling vendors, to server and switch makers, to test and measurement equipment providers, the Ethernet ecosystem is working in tandem to produce a rich portfolio of solutions that can be leveraged for custom-built HPC environments. And with its journey drawing to a close, the technologies supporting 400G are emerging, opening the path to a new generation of hyperscale computers,” said John D’Ambrosia, chairman, Ethernet Alliance; and senior principal engineer, Huawei. “The Ethernet Alliance SC17 demo shows the next Ethernet era is here, bringing with it technologies supporting today’s supercomputing needs, and laying out a roadmap for not just tomorrow, but tomorrow’s tomorrow.”

Ethernet remains a commanding presence in the supercomputing TOP500, representing some 40 percent of systems named to the list. As the overall performance of Ethernet continues to expand, it is moving toward servers supporting 10 Gigabit (10G), 25G, 40 Gigabit (40G), 50 Gigabit (50G), and 100 Gigabit (100G) speeds. The hyperscale architectures enabling high performance at a low cost-per-bit threshold offer HPC designers the flexibility and scalability necessary to construct custom-fit solutions while enjoying the robust reliability and advantages that Ethernet delivers.

The Ethernet Alliance SC17 demo reflects the commitment of the whole of the Ethernet ecosystem to providing the HPC market with an array of best-in-class solutions, from the component and system level, to test and measurement, and beyond. Incorporating copper and optical equipment encompassing servers, switches, NIC adapters, cabling, interconnects, and test and measurement solutions, the demo simulates a real-world HPC environment supporting Ethernet speeds of 1 Gigabit (1G) up to 100G; a variety of interfaces including 25/100G; and 50G optical and electrical signaling. Additionally, the display provides an advanced look at 400G, with a fully realized demonstration that includes network traffic generation, monitoring, analysis, and testing.

The organization’s multivendor interoperability SC17 exhibit features participants from across the Ethernet landscape. Among Ethernet Alliance member companies taking part in this year’s demo are Anritsu Corporation (TSE: 6754); Amphenol Corporation (NYSE: APH); Hitachi Ltd. (TSE: 6501); Intel Corporation (NASDAQ: INTC); Mellanox Technologies, Ltd. (NASDAQ: MLNX); Teledyne LeCroy, Inc. (NYSE: TDY); and TE Connectivity, Ltd. (NYSE: TEL).

Ethernet’s continued relevance and value to HPC will be explored in The Ethernet Portfolio for HPC SC17 Birds of a Feather session on Thursday, November 16, 2017, at 12:15pm MST. Led by Ethernet Alliance chair John D’Ambrosia, panelists Ran Almog, product manager, Mellanox Technologies; and Nathan Tracy, technologist and manager of industry standards, TE Connectivity, will explore key Ethernet solutions enabling development and deployment of the next generation of computing, networking, and storage.

To view the Ethernet Alliance live demo, please visit SC17 booth number 842. For more information about the Ethernet Alliance, visit http://www.ethernetalliance.org, follow @EthernetAllianc on Twitter, visit its Facebook page, or join the EA LinkedIn group.

About the Ethernet Alliance

The Ethernet Alliance is a global consortium that includes system and component vendors, industry experts, and university and government professionals who are committed to the continued success and expansion of Ethernet technology. The Ethernet Alliance takes Ethernet standards to market by supporting activities that span from incubation of new Ethernet technologies to interoperability demonstrations and education.

Source: Ethernet Alliance

The post Ethernet Alliance SC17 Demo Highlights Rich Solutions Portfolio For HPC appeared first on HPCwire.

Complete ISC 2018 Contributed Program Now Open for Submission

Wed, 11/08/2017 - 09:50

FRANKFURT, Germany, Nov. 8, 2017 — The ISC High Performance 2018 contributed program is now open for submissions. We welcome submissions for the following sessions – birds of a feather, research posters, project posters and the PhD forum. ISC 2018 also calls on regional and international STEM undergraduate and graduate students interested in the field of high performance computing (HPC) to volunteer as helpers at the five-day conference.

It is the HPC community’s active participation in the above-mentioned programs that ultimately creates a productive and distinct contributed program. In return, contributors will enjoy sharing their ideas, knowledge and interests in a dynamic setting and also have the chance to meet a wide network of people representing various organizations in the HPC space. ISC 2018 expects to attract 3,500 attendees.

The next conference will be held from June 24 – 28, 2018, in Frankfurt, Germany, and will continue its tradition as the largest HPC conference and exhibition in Europe. It is well attended by academicians, industry leaders and end users from around the world. The ISC exhibition attracts around 150 organizations, including supercomputing, storage and network vendors, as well as universities, research centers, laboratories and international projects.

Please take note of these important deadlines:

  • PhD Forum – submission deadline: Wednesday, February 21, 2018; notification of acceptance: Thursday, April 19, 2018
  • Birds of a Feather – submission deadline: Wednesday, February 28, 2018; notification of acceptance: Wednesday, March 28, 2018
  • Research Poster – submission deadline: Friday, March 9, 2018; notification of acceptance: Friday, April 6, 2018
  • Student Volunteer – submission deadline: Thursday, March 15, 2018; notification of acceptance: Thursday, March 29, 2018
  • Project Poster – submission deadline: Friday, March 16, 2018; notification of acceptance: Friday, April 6, 2018

Birds of a Feather

These informal BoF sessions often bring together like-minded people to discuss current HPC topics, network and share ideas. Each session will be allocated 60 minutes to address a different topic and is led by one or more individuals with expertise in the area. If you are interested in hosting a BoF, please refer to the topics and submission guidelines on the ISC website.

The BoFs Committee will review all submitted BoF proposals based on their originality, significance, quality, clarity, and diversity with respect to the overall diversity goals of the conference. Each proposal will be evaluated by at least three reviewers.

The BoF sessions will be held from Monday, June 25 through Wednesday, June 26, 2018.

PhD Forum

The PhD forum is a great platform for PhD students to present research results in a setting that sparks scientific exchange and lively discussions. It consists of two parts: a set of back-to-back lightning talks followed by a poster presentation. The lightning talks are meant to give a quick to-the-point presentation of research objectives and early results, while the posters will provide more in-depth information as a starting point for deeper discussions.

Submitted proposals will be reviewed by the ISC 2018 PhD Forum Program Committee. An award and travel funding are available to students. Detailed information is offered here. The PhD Forum will be held on Monday, June 25, 2018.

Research Posters

The ISC research poster session is an excellent opportunity to present your latest research results, projects and innovations to a global audience including your HPC peers. Poster authors will have the opportunity to give short presentations on their posters, and to informally present them to the attendees during lunch and coffee breaks. Research posters are intended to cover all areas of interest as listed in the call for research papers. Visit the website for the full submission process. The ISC organizers will sponsor the call for research posters with awards for outstanding research posters: the ISC Research Poster Awards.

All submitted posters will be double-blind reviewed by at least three reviewers. Poster authors will give short presentations on their posters on Tuesday, June 26, and will also have the opportunity to informally present them to the attendees during lunch and coffee breaks. The accepted research posters will be displayed from Tuesday, June 26 through Wednesday, June 27, 2018.

Project Posters 

Held for the second time, the project poster session allows submitters to share their research ideas and projects at ISC. The session provides scientists with a platform for information exchange, whereas for the attendees it is an opportunity to gain an overview of new developments, HPC research and engineering activities. In contrast to the dissemination of research results in the research poster sessions, project posters enable participants to present ongoing and emerging projects by sharing fundamental ideas, methodology and preliminary work. Researchers with upcoming and recently funded projects are encouraged to submit a poster of their work. Those with novel project ideas are invited to present as well.

All accepted project posters will be displayed from Monday, June 25 through Wednesday, June 27, 2018 in the exhibition hall in a prominent spot. During the coffee breaks on Tuesday and Wednesday afternoons, ISC attendees will have the opportunity to meet the authors at their posters to discuss their projects.

ISC Student Volunteer Program

If you are a student pursuing an undergraduate or graduate degree in computer science, or any STEM-related fields, and high performance computing (HPC) is on your radar, volunteering at ISC 2018 can help steer your future career in the right direction.

The organizers are looking for enthusiastic and reliable young people to help them run the conference. In return they offer you the opportunity to attend the tutorials, conference sessions, and workshops for free. They also encourage you to use the event to build your professional network. The conference after-hours experience is also fun. You will have the opportunity to intermingle with your peers from other educational institutions and international backgrounds. Such connections can go a long way as you develop your career. Visit the website to find out more.

Source: ISC

The post Complete ISC 2018 Contributed Program Now Open for Submission appeared first on HPCwire.

ACM Recognizes 2017 Distinguished Members as Pioneering Innovators that Are Advancing the Digital Age

Wed, 11/08/2017 - 09:02

NEW YORK, Nov. 8, 2017 — ACM, the Association for Computing Machinery, has named 43 Distinguished Members for outstanding contributions to the field. As a group, the 2017 Distinguished Members are responsible for an extraordinary array of achievements, reflecting the many distinct areas of research and practice in the computing and information technology fields.

“Computing technology is becoming an increasingly dominant force in our daily lives and is transforming society at every level,” explains ACM President Vicki L. Hanson. “In naming a new roster of Distinguished Members each year, ACM underscores that the innovations which improve our lives do not come about by accident, but rather are the result of the hard work, inspiration and creativity of leading professionals in the field. We honor the 2017 class of ACM Distinguished Members for the essential role their accomplishments play in how we live and work.”

The 2017 ACM Distinguished Members work at leading universities, corporations and research institutions around the world, including Australia, Belgium, Canada, France, Hong Kong, Italy, The Netherlands, Portugal, Qatar, Singapore, South Africa, South Korea, and the United States. These innovators have made contributions in a wide range of technical areas including accessibility, computational geometry, cryptography, computer security, computer science education, data structures, healthcare technologies, human-computer interaction, nanoscale computing, robotics, and software engineering —to name a few.

The ACM Distinguished Member program recognizes up to 10 percent of ACM worldwide membership based on professional experience as well as significant achievements in the computing field.

2017 ACM DISTINGUISHED MEMBERS

For Educational Contributions to Computing:

  • Gail Chapman, Exploring Computer Science
  • James H. Cross II, Auburn University
  • Cay S. Horstmann, San Jose State University
  • Renée A. McCauley, College of Charleston
  • Judithe Sheard, Monash University

For Engineering Contributions to Computing:

  • Sharad Agarwal, Microsoft AI & Research
  • Ashish Kundu, IBM T.J. Watson Research Center
  • Sam H. Noh, Ulsan National Institute of Science & Technology
  • Theo Schlossnagle, Circonus, Inc.

For Contributions to Computing:

  • Kirk W. Cameron, Virginia Tech
  • Matt Huenerfauth, Rochester Institute of Technology
  • Wessel Kraaij, Leiden University & TNO

For Scientific Contributions to Computing:

  • David Atienza Alonso, École Polytechnique Fédérale de Lausanne (EPFL)
  • Srinivas Aluru, Georgia Institute of Technology
  • Sihem Amer-Yahia, Centre National de la Recherche Scientifique (CNRS)
  • Winslow Burleson, New York University
  • Jian-Nong Cao, Hong Kong Polytechnic University
  • Siu-Wing Cheng, The Hong Kong University of Science and Technology
  • Christopher W. Clifton, Purdue University
  • Myra B. Cohen, University of Nebraska-Lincoln
  • Ian Goldberg, University of Waterloo
  • Jimmy Xiangji Huang, University of Toronto
  • Joaquim Armando Pires Jorge, INESC-ID / Técnico / Universidade de Lisboa
  • James B. D. Joshi, University of Pittsburgh
  • Vijay Kumar, University of Missouri-Kansas City
  • Hai “Helen” Li, Duke University
  • Qiaozhu Mei, University of Michigan
  • Mohamed F. Mokbel, Qatar Computing Research Institute / University of Minnesota
  • Meredith Ringel Morris, Microsoft Research
  • John Owens, University of California, Davis
  • Lynne E. Parker, University of Tennessee, Knoxville
  • Mauro Pezzè, Università della Svizzera italiana and Università degli studi di Milano Bicocca
  • Lucian Popa, IBM Research-Almaden
  • Hridesh Rajan, Iowa State University
  • Kui Ren, University at Buffalo, the State University of New York
  • Ken Salem, University of Waterloo
  • Jean Vanderdonckt, Université catholique de Louvain, Belgium
  • Willem C. Visser, Stellenbosch University, South Africa
  • Rebecca N. Wright, Rutgers University
  • Cathy H. Wu, University of Delaware
  • Dong Yu, Tencent
  • Roger Zimmermann, National University of Singapore
  • Thomas Zimmermann, Microsoft Research

About ACM

ACM, the Association for Computing Machinery www.acm.org, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About the ACM Recognition Program

The ACM Fellows program, initiated in 1993, celebrates the exceptional contributions of the leading members in the computing field. These individuals have helped to enlighten researchers, developers, practitioners and end users of information technology throughout the world. The ACM Distinguished Member program, initiated in 2006, recognizes those members with at least 15 years of professional experience who have made significant accomplishments or achieved a significant impact on the computing field. The ACM Senior Member program, also initiated in 2006, includes members with at least 10 years of professional experience who have demonstrated performance that sets them apart from their peers through technical leadership, technical contributions and professional contributions. The new ACM Fellows, Distinguished Members, and Senior Members join a list of eminent colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

Source: ACM

The post ACM Recognizes 2017 Distinguished Members as Pioneering Innovators that Are Advancing the Digital Age appeared first on HPCwire.

Cavium ThunderX2 Motherboard Specification for Microsoft’s Project Olympus Contributed to the Open Compute Project

Wed, 11/08/2017 - 08:54

LONDON, Nov. 8, 2017 — Cavium, Inc. (NASDAQ: CAVM), announced today that they are collaborating with Microsoft to contribute the ThunderX2 motherboard specification to the Open Compute Project (OCP) as part of the design specification for Microsoft’s Project Olympus. The contribution enables the adoption and iteration by members of OCP, a global community of technology leaders who are reimagining hardware to make it more efficient, flexible, and scalable.

The ThunderX2 product family is Cavium’s second generation 64-bit ARMv8-A server processor SoCs for Data Center, Cloud and High-Performance Computing applications. The family integrates fully out-of-order high performance custom cores supporting single and dual socket configurations. ThunderX2 is optimized to drive high computational performance delivering outstanding memory bandwidth and memory capacity. The new line of ThunderX2 processors includes multiple workload optimized SKUs for both scale up and scale out applications and is fully compliant with ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards. It is also widely supported by industry leading OS, Hypervisor and SW tool and application vendors.

Cavium and Microsoft originally announced their collaboration at the OCP U.S. Summit in March 2017, where the two companies demonstrated cloud service workloads developed for Microsoft’s internal use running on a ThunderX2-based server platform. During the DCD: Zettastructure summit today, the companies released the detailed specification of the ThunderX2 server motherboard for Microsoft’s Project Olympus, including the block diagram, management sub-system, power management, FPGA card support, IO connectors, and physical specifications.

“Cavium is pleased to collaborate with Microsoft on contributing the world’s first dual-socket ARM server motherboard design to the Open Compute Project,” said Gopal Hegde, VP/GM, Data Center Processor Group at Cavium. “ThunderX2 delivers best-in-class compute, memory and IO performance to the most demanding Data Center workloads, and this contribution will enable interested server OEMs and ODMs to quickly design and proliferate ThunderX2-based Project Olympus platforms.”

Kushagra Vaid, GM, Azure Hardware Infrastructure, Microsoft Corp. said, “We designed Microsoft’s Project Olympus with the ability to accommodate a variety of workloads and processor architectures. We’ve been closely collaborating with Cavium to integrate ThunderX2 into Microsoft’s Project Olympus design, and to drive innovation within the ARM ecosystem especially for workloads that benefit from high-throughput computing. The completion and contribution of our Project Olympus specification shows our continued commitment to the Open Compute Project and community developed innovation.”

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

The post Cavium ThunderX2 Motherboard Specification for Microsoft’s Project Olympus Contributed to the Open Compute Project appeared first on HPCwire.

TACC Key Partner in Dell Medical School’s Data Science Initiative for Women’s Health

Wed, 11/08/2017 - 08:40

Nov. 8, 2017 — The Texas Advanced Computing Center (TACC) at The University of Texas at Austin today announced a new partnership with the Dell Medical School’s Women’s Health research team to be spearheaded by Kelly Gaither, who joins DMS as an associate professor. Gaither will remain in her position as director of Visualization and a senior research scientist at TACC and split her time between the two appointments.

Gaither has been TACC’s director of Visualization for 16 years and is a prominent leader in scientific visualization and data analytics for the open science community. In her new role, she will help the medical school develop an innovative data-driven infrastructure that will include data collections, data mining, machine learning, statistics, visualization, and computational and statistical models. These methodologies will help physicians make real-time decisions and convey the results to patients, other physicians, the lay public, and policy makers.  

“Kelly and the experience she brings from TACC are invaluable,” said Radek Bukowski, associate chair for Discovery and Investigation in the Department of Women’s Health and professor of Obstetrics and Gynecology at the Dell Medical School. “It’s very clear that the Dell Medical School and TACC have a lot of overlapping interests and complementary skillsets. Health care providers recognize that they need to change the way things are done in an increasingly data-driven society. Technology and data science are crucial in finding causes for a variety of health outcomes.”

The Dell Medical School at The University of Texas at Austin welcomed its first class in June 2016. Dell Seton Medical Center at The University of Texas, the new teaching hospital, opened in May 2017.

In the current health care environment, many clinicians make decisions that are based on data, but a vast majority of collected data is not used and very often the data may not be optimal, complete, or entirely accurate. Clinicians would benefit from a rigorous data-driven infrastructure that would allow them to make decisions and discover causal factors for a variety of different health-related issues, including stillbirth, preterm birth, and neonatal death. The collaboration as planned would also help lead to new care delivery models and could be applied to other conditions such as maternal mortality.  

“TACC will play a large role in providing the infrastructure and the resources for computation and data storage, but we will also provide intellectual capital,” Gaither said. “TACC has the type of people with skillsets that the medical school needs right now. I think, long-term, we’re going to play a big partnership role because in many ways we need each other.”

TACC research associates Tomislav Urban and Dave Semeraro will work with Gaither to contribute expertise in data extraction and visualization. TACC’s data, visualization, life sciences, and high performance computing groups will also be in close collaboration with the Dell Medical School.

Every year, an estimated 15 million babies are born preterm (before 37 completed weeks of gestation), and this number is rising globally and in the U.S., according to the World Health Organization.

“We have a large collection of data on women’s health and we’re going to run statistical models to see if we can find initial indicators that contribute to preterm birth in the U.S.,” Gaither said. “We will start by investigating causality for stillbirth because mathematically, it has a well-defined outcome, but more importantly, it is emotionally devastating, and the long-term cost both physically and emotionally is significant.”

Along with data science and scientific visualization, TACC will make some of the world’s best supercomputers available to researchers and students at the Dell Medical School.

“Our medical school is interested in generating a new kind of doctor, one that is technologically savvy, one that is very plugged into data sources and technology,” Gaither said. “We’ve talked about creating a new culture of students that use different kinds of technology to change the way they think about medicine and to aid the decision-making process. It could revolutionize the way that healthcare is conducted, administered and conveyed.”

According to Bukowski, Austin is a unique environment for this kind of initiative to take place. “The University of Texas at Austin has excellent expertise in computer science, mathematics and data science,” he said. “The new Dell Medical School has a mission of improving health in our community and throughout the Central Texas region, and Austin’s population is committed to improving healthcare.”

The Dell Medical School Data Science Group aims to hire five faculty and five non-faculty members with a variety of data science skills. Gaither is the first hire.

“Data science is important to medicine—we’re not getting the information we could from what we collect,” Bukowski said. “We are spending a huge amount of money for diagnostic tests and therapeutic intervention that may not translate into improvement in life expectancy and the quality of life. That’s where we come in with this initiative.”

Source: TACC

The post TACC Key Partner in Dell Medical School’s Data Science Initiative for Women’s Health appeared first on HPCwire.

DDN Partners with Hewlett Packard Enterprise to Deliver HPC Storage Solutions

Wed, 11/08/2017 - 08:21

SANTA CLARA, Calif., Nov. 8, 2017 — DataDirect Networks (DDN), a world leader in storage solutions, has entered into a partnership with global high-performance computing (HPC) leader Hewlett Packard Enterprise (HPE) to integrate DDN’s parallel file system storage and flash storage cache technology with HPE’s HPC platforms. The focus of the partnership is to accelerate and simplify customers’ workflows in technical computing, artificial intelligence and machine learning environments.

Enhanced versions of the EXAScaler and GRIDScaler parallel file system solutions, and the latest release of IME, a software-defined scale-out NVMe data caching solution, will be tightly integrated with HPE servers and the HPE Data Management Framework (DMF) software, enabling optimized workflow solutions with large-scale data management, protection, and disaster recovery capabilities. They will also provide an ideal complement to HPE’s Apollo portfolio, aimed at high-performance computing workloads in complex computing environments.

“Congratulations to DDN and HPE on this expanded collaboration, which seeks to maximize data throughput and data protection,” said Rich Loft, director of the technology development division at the National Center for Atmospheric Research (NCAR) – and a user of an integrated DDN and HPE solution as the foundation of the Cheyenne supercomputer.  “Both of these characteristics are important to the data workflows of the atmospheric and related sciences.”

“With this partnership, two trusted leaders in the high-performance computing market have come together to deliver high-value solutions as well as a wealth of technical field expertise for customers with data-intensive needs,” said Paul Bloch, president, DDN. “In this hybrid age of hard drive-based parallel file systems, web/cloud and flash solutions, customers are demanding truly scalable storage systems that deliver deeper insight into their datasets. They want smarter productivity, better TCO, and best-in-class data management and protection. DDN’s and HPE’s integrated solutions will provide them with just that.”

DDN has been a trusted market leader for storage and parallel file system implementations at scale for nearly twenty years. The integrated offerings from DDN and HPE combine compute and storage in the fastest, most scalable and most reliable way possible.

“At HPE we’re committed to providing best practice options for our customers in the rapidly growing markets for high-performance computing, artificial intelligence and machine learning,” said Bill Mannel, vice president and general manager for HPC and AI Segment Solutions, HPE. “HPE and DDN have collaborated on many successful deployments in a variety of leading-edge HPC environments. Bringing these capabilities to the broader community of HPC users based on this partnership will accelerate the time to results and value that our customers see from their compute and storage investment.” 

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Partners with Hewlett Packard Enterprise to Deliver HPC Storage Solutions appeared first on HPCwire.

DARPA Selects Northrop Grumman to Support Technical Innovations in Computer Processing Development

Wed, 11/08/2017 - 07:56

LINTHICUM, Md., Nov. 8, 2017– The Defense Advanced Research Projects Agency (DARPA) selected Northrop Grumman Corporation (NYSE: NOC) to collaborate on the development of a graph processor chip that aims to vastly enhance efficiencies and capabilities of today’s top processors.

As a part of DARPA’s newly instated Hierarchical Identify Verify Exploit (HIVE) program, Northrop Grumman will work with five other entities to implement and evaluate real-time performance of various graph algorithms in a newly developed HIVE chip. HIVE seeks to create and integrate technologies that will potentially lead to the development of a generic graph processor, responsible for quickly analyzing large data sets to determine correlations and dependencies that could not be discovered before.

This type of development could be critical in cybersecurity, intelligence integration and network analysis, especially as it relates to the Department of Defense (DoD). Northrop Grumman’s contributions to HIVE will assess the potential for graph analytics to resolve DoD processing challenges while also developing a better understanding of how the analytics are currently used in DoD systems.

“The goal of the HIVE program is to enable our customers to make better decisions with the copious amounts of data generated every day. This program endeavors to produce technology breakthroughs that will transform cognitive systems and advance analytics for many years,” said Vern Boyle, vice president, cyber and advanced processing, Northrop Grumman Mission Systems.

Northrop Grumman will work to solve the problem facing today’s top processors. Currently, there are few programming models and generalized processor architectures that can effectively support the irregular memory accesses and fine grained concurrency requirements of static and dynamic/streaming graph analytics, while also providing accelerated run-time support. The ability to quickly identify commonalities, patterns and dependencies in order to predict outcomes is vital due to the high volume and variety of data being generated every day.

The HIVE program will address three key technical areas: graph analytic processors, graph analytics toolkits and system evaluation. As the single performer selected for system evaluation, Northrop Grumman will identify and develop static and streaming graph analytics to address five problem areas: anomaly detection, domain-specific search, dependency mapping, N-x contingency analysis and causal modeling of events. This project will focus on identifying new uses for graph analytics that have not been included in previous research due to processing, power or size constraints.

Northrop Grumman is a leading global security company providing innovative systems, products and solutions in autonomous systems, cyber, C4ISR, strike, and logistics and modernization to customers worldwide. Please visit news.northropgrumman.com and follow us on Twitter, @NGCNews, for more information.

Source: Northrop Grumman

The post DARPA Selects Northrop Grumman to Support Technical Innovations in Computer Processing Development appeared first on HPCwire.

Deep Learning for Science: A Q&A with NERSC’s Prabhat

Tue, 11/07/2017 - 17:27

Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.

These tools are now poised to help scientists contend with some of the most challenging data analytics problems in a number of domains. For example, extreme weather events pose great potential risks to ecosystems, infrastructure and human health. Analyzing extreme weather data from satellites and weather stations and characterizing changes in extremes in simulations is an important task. Similarly, upcoming astronomical sky surveys will obtain measurements of tens of billions of galaxies, enabling precision measurements of the parameters that describe the nature of dark energy. But in each case, analyzing the mountains of resulting data poses a daunting challenge.

Prabhat, NERSC

A growing number of scientists are already employing HPC systems for data analytics, and many are now beginning to apply deep learning and other types of machine learning to their large datasets. Toward this end, in 2016 the U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC) expanded its support for deep learning and began forming hands-on collaborations with scientists and industry. NERSC users from science domains such as geosciences, high energy physics, earth systems modeling, fusion and astrophysics are now working with NERSC staff, software tools and services to explore how deep learning can improve their ability to solve challenging science problems.

In this Q&A with Prabhat, who leads the Data and Analytics Services Group at NERSC, he talks about the history of deep learning and machine learning and the unique challenges of applying these data analytics tools to science. Prabhat is also an author on two related technical papers being presented at SC17, “Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data” and “Galactos: Computing the 3-pt Anisotropic Correlation for 2 Billion Galaxies,” and is conducting two deep learning roundtables in the DOE Booth (#613) at SC17. He is also giving a plenary talk on deep learning for science on Sunday, November 12 at the Intel HPC Developer Conference held in conjunction with SC17.

How do you define deep learning, and how does it differ from machine learning?

At the Department of Energy, we tackle inference problems across numerous domains. Given a noisy observation, you would like to infer properties of the object of interest. The discipline of statistics is ideally suited to solve inference problems. The discipline of Machine Learning lies at the intersection of statistics and computer science, wherein core statistical methods were employed by computer scientists to solve applied problems in computer vision and speech recognition. Machine learning has been around for more than 40 years, and there have been a number of different techniques that have fallen in and out of favor: linear regression, k-means, support vector machines and random forests. Neural networks have always been part of machine learning – they were developed at MIT starting in the 1960s – there was the major development of the back-propagation algorithm in the mid-1980s, but they never really picked up until 2012. That is when the new flavor of neural networks – that is, deep learning – really gained prominence and finally started working. So the way I think of deep learning is as a subset of machine learning, which in turn is closely related to the field of statistics, and all of them have to do with solving inference problems of one kind or another.

What technological changes occurred that enabled deep learning to finally start working?

Three important trends have happened over the last 20 years or so. First, thanks to the internet, “big Data,” or large archives of labeled and unlabeled datasets, has become readily accessible. Second, thanks to Moore’s Law, computers have become extremely powerful. A laptop featuring a GPU and a CPU is more capable than supercomputers from previous decades. These two trends were prerequisites for enabling the third wave of modern neural nets, deep learning, to take off. The basic machinery and algorithms have been in existence for three decades, but it is only the unique confluence of large datasets and massive computational horsepower that enabled us to explore the expressive capabilities of Deep Networks.

What are some of the leading types of deep learning methods used today for scientific applications?

As we’ve gone about systematically exploring the application of deep learning to scientific problems over the last four years, what we have found is that there are two dominant architectures that are relevant to science problems. The first is called the convolutional network. This architecture is widely applicable because a lot of the data that we obtain from experimental and observational sources (telescopes and microscopes) and from simulations tends to be in the form of a grid or an image. Similar to commodity cameras, we have 2D images, but we also typically deal with 3D, 4D and multi-channel images. Supervised pattern classification is a common task shared across commercial and scientific use cases; applications include face detection, face recognition, object detection and object classification.
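As a rough illustration of the kind of convolutional classifier described here, the minimal sketch below uses Python with the Keras API; the input shape, the four classes and the random training patches are illustrative placeholders rather than any actual NERSC dataset or model.

import numpy as np
from tensorflow.keras import layers, models

# Hypothetical setup: classify 128x128, 3-channel grid patches into 4 categories.
num_classes = 4
inputs = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-ins for labeled training patches.
x_train = np.random.rand(32, 128, 128, 3).astype("float32")
y_train = np.random.randint(0, num_classes, size=32)
model.fit(x_train, y_train, epochs=1, batch_size=8)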

The second approach is more sophisticated and has to do with the recurrent neural network: the long short-term memory (LSTM) architecture. In commercial applications, LSTMs are used for translating speech by learning the sequence-to-sequence mapping between one language and another. In our science cases, we also have sequence-to-sequence mapping problems, such as gene sequencing, for example, or in earth systems modeling, where you are tracking storms in space and time. There are also problems in neuroscience that take recordings from the brain and use LSTM to predict speech. So broadly those two flavors of architectures – convolutional networks and LSTMs – are the dominant deep learning methodologies for science today.
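A similarly minimal recurrent sketch, again in Keras, maps an input sequence to a label at every time step; the sequence length, feature count and random data are placeholders, not a real gene-sequencing or storm-tracking workload.

import numpy as np
from tensorflow.keras import layers, models

# Hypothetical setup: 50-step sequences with 8 features, 3 possible labels per step.
timesteps, features, num_labels = 50, 8, 3
inputs = layers.Input(shape=(timesteps, features))
x = layers.LSTM(64, return_sequences=True)(inputs)   # one output per time step
outputs = layers.TimeDistributed(
    layers.Dense(num_labels, activation="softmax"))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-ins for a labeled sequence dataset.
x_train = np.random.rand(16, timesteps, features).astype("float32")
y_train = np.random.randint(0, num_labels, size=(16, timesteps))
model.fit(x_train, y_train, epochs=1, batch_size=4)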

In recent years, we have also explored auto-encoder architectures, which can be used for unsupervised clustering of datasets. We have had some success in applying such methods for analysis of galaxy images in astronomy, and Daya Bay sensor data for neutrino discovery. The latest trend in deep learning is the generative adversarial network (GAN). This architecture can be used for creating synthetic data. You can feed in examples from a certain domain, say cosmology images or Large Hadron Collider (LHC) images, and the network will essentially learn a process that can explain these images. Then you can ask that same network to produce more synthetic data that is consistent with other images it has seen. We have empirical evidence that you can use GANs to produce synthetic cosmology or synthetic LHC data without resorting to expensive computational simulations.
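For the unsupervised case, a bare-bones auto-encoder can be sketched as follows; the 784-dimensional input and the random data are arbitrary placeholders, and the learned code would feed a downstream clustering step.

import numpy as np
from tensorflow.keras import layers, models

input_dim, code_dim = 784, 32
inputs = layers.Input(shape=(input_dim,))
code = layers.Dense(code_dim, activation="relu")(inputs)        # compressed representation
decoded = layers.Dense(input_dim, activation="sigmoid")(code)   # reconstruction

autoencoder = models.Model(inputs, decoded)
encoder = models.Model(inputs, code)        # reuse the encoder to extract features
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=16)   # learn to reconstruct the inputs
features = encoder.predict(x)                    # low-dimensional features for clustering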

What is driving NERSC’s growing deep learning efforts, and how did you come to lead these efforts?

I have a long-standing interest in image processing and computer vision. During my undergrad at IIT Delhi, and grad studies at Brown, I was intrigued by object recognition problems, which seemed to be fairly hard to solve. There was incremental progress in the field through the 1990s and 2000s, and then suddenly in 2012 and 2013 you see this breakthrough performance in solving real problems on real datasets. At that point, the MANTISSA collaboration – a research project originally begun when I was part of Berkeley Lab’s Computational Research Division – was exploring similar pattern detection problems, and it was natural for us to explore whether deep learning could be applied to science problems. We spent the next three to four years exploring applications in earth systems modeling, neuroscience, astronomy and high energy physics.

When a new method/technology comes along, one has to make a judgment call on how long you want to wait before investing time and energy in exploring the possibilities. I think the DAS group at NERSC was one of the early adopters. We recognized the importance of this technique and demonstrated that it could work for science. In the experimental and observational data community, there are a lot of examples of domain scientists who have been struggling with pattern recognition problems for a long time. And now the broader science community is waking up to the possibilities of machine learning to help them solve these problems.

What is NERSC’s current strategy for bringing deep learning capabilities to its users?

Since NERSC is a DOE Office of Science national user facility, we listen to our users, track their emerging requirements and respond to their needs. Our users are telling us that they would like to explore machine learning/deep learning and see what it can do for them. We currently have about 70 users who are actively using deep learning software at NERSC, and we want to make sure that our software, hardware, policies and documentation are all up to speed. Over the past two years, we have worked with the vendor community and identified a few popular deep learning frameworks (TensorFlow, Caffe, Theano and Torch) and have deployed them on Cori. In addition to making the software available, we have documentation and case studies in place. We also have in-depth collaborations in about a dozen areas where NERSC staff, mostly from the DAS group, have worked with scientists to help them explore the application of deep learning. And we are forming strategic relationships with commercial vendors and other research partners in the community to explore the frontier of deep learning for science.

Do certain areas of scientific research lend themselves more than others to applying deep learning?

Right now our success stories span research sponsored by several DOE Office of Science program offices, including BER, HEP and NP. In earth systems modeling, we have shown that convolutional architectures can extract extreme weather patterns in large simulation datasets. In cosmology, we have shown that CNNs can predict cosmological constants, and GANs can potentially be used to supplement existing cosmology simulations. In astronomy, the Celeste project has effectively used auto-encoders for modeling galaxy shapes. In high energy physics, we are using convolutional architectures for discriminating between different models of particle physics and exploring LSTM architectures for particle tracking. We’ve also shown that deep learning can be used for clustering and classifying various event types at the Daya Bay experiment.

So the big takeaway here is that for the tasks involving pattern classification, regression and creating fast simulators, deep learning seems to do a good job – IF you can find training data. That’s the big catch – if you have labeled data, you can employ deep learning. But it can be a challenge to find training data in some domain sciences.

Looking ahead, what are some of the challenges in developing deep learning tools for science and applying them to research projects at NERSC and other scientific supercomputing facilities?

We can see a range of short-term and long-term challenges in deep learning for science. The short-term challenges are mostly pragmatic issues pertaining to development, enhancement and deployment of tools. These include handling complex data: scientific data tends to be very diverse (compared to images and speech); we are working with 2D, 3D, even 4D data, and the datasets can be sparse or dense and defined over regular or irregular grids. Deep learning frameworks will need to account for this diversity going forward. Performance and scaling are also barriers. Our current networks can take several days to converge on O(10) GB datasets, but several scientific domains would like to apply deep learning to 10TB-100TB datasets. Thankfully, this problem is right up our alley at HPC centers.

Another important challenge faced by domain scientists is hyper-parameter tuning: Which network architecture do you start with? How do you choose an optimization algorithm? How do you get the network to converge? Unfortunately, only a few deep learning experts know how to address this problem; we need automated strategies/tools. Finally, once scientific communities realize that deep learning can work for them and that access to labeled datasets is the key barrier to entry, they will need to self-organize and conduct labeling campaigns.

The longer-term challenges for deep learning in science are harder, by definition, and include a lack of theory, interpretability, uncertainty quantification and the need for a formal protocol. I believe it’s very early days in the application of deep learning to scientific problems. There’s a lot of low-hanging fruit in publishing easy papers that demonstrate state-of-the-art accuracy for classification, regression and clustering problems. But in order to ensure that the domain science community truly embraces the power of deep learning methods, we have to keep the longer term, harder challenges in mind.

About the Author

Kathy Kincade is a science & technology writer and editor with the Berkeley Lab Computing Sciences Communications Group.

The post Deep Learning for Science: A Q&A with NERSC’s Prabhat appeared first on HPCwire.

SC17: AI and Machine Learning are Central to Computational Attack on Cancer

Tue, 11/07/2017 - 08:48

Enlisting computational technologies in the war on cancer isn’t new, but it has taken on an increasingly decisive role. At SC17, Eric Stahlberg, director of the HPC Initiative at Frederick National Laboratory for Cancer Research in the Data Science and Information Technology Program, and two colleagues will lead the third Computational Approaches for Cancer workshop, being held Friday, Nov. 17, at SC17.

It is hard to overstate the importance of computation in today’s pursuit of precision medicine. Given the diversity and size of datasets, it’s also not surprising that the “new kids” on the HPC cancer-fighting block – AI and deep learning/machine learning – are also becoming the big kids on the block, promising to significantly accelerate efforts to understand and integrate biomedical data to develop and inform new treatments.

Eric Stahlberg

In this Q&A, Stahlberg discusses the goals of the workshop, the growing importance of AI/deep learning in biomedical research, how programs such as the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C) are progressing, the need for algorithm assurance and portability, as well as ongoing needs where HPC technology has perhaps fallen short. The co-organizers of the workshop include Patricia Kovatch, Associate Dean for Scientific Computing at the Icahn School of Medicine at Mount Sinai and the Co-Director for the Master of Science in Biomedical Informatics, and Thomas Barr, Imaging Manager, Biomedical Imaging Team, Research Institute at Nationwide Children’s Hospital.

HPCwire: Maybe set the framework of the workshop with an overview of its goals. What are you and your fellow participants trying to achieve and what will be the output?

Eric Stahlberg: Great question. Cancer is an extremely complex disease – with hundreds of distinct classifications. The scale of the challenge is such that it requires a team approach to make progress. Now in its third year, the workshop continues to provide a venue that brings together communities and individuals from all interests and backgrounds to work together to impact cancer with HPC and computational methods. The workshops continue to be organized to help share information and updates on new capabilities, technologies and opportunities involving computational approaches, with an eye on seeing new collaborative efforts develop around cancer.

The first year of the workshop in 2015 was somewhat remarkable. The HPC community attending SC has had a long history of supporting the cancer research community in many ways, yet an opportunity to bring the community together had not yet materialized. The original intent was simply to provide a venue for those with an interest in cancer to share ideas, bring focus to potential priorities and look ahead as to what might be possible. The timing was incredible, with the launching of the National Strategic Computing Initiative in July opening up a whole new realm of possibilities and ideas.

By the time of the workshop last year, many of these possibilities started to gain traction – the Cancer Moonshot Initiative providing a strong and motivating context to get started, accelerate and make progress rapidly. Many new efforts were just getting off the ground, creating huge potential for collaboration – with established efforts employing computational approaches blending with new initiatives being launched as part of the Cancer Moonshot.

The workshop this year continues to build on the direction and success of the first two workshops. Similar to the first two workshops, speakers are being invited to help inform on opportunities for large scale computing in cancer research. This year’s workshop will feature Dr. Shannon Hughes from the NCI Division of Cancer Biology delivering a keynote presentation highlighting many efforts at the NCI where HPC can make a difference, particularly in the area of cancer systems biology. In addition, this year’s workshop brings a special emphasis to the role of machine learning in cancer – a tremendously exciting area, while also providing an opportunity to update the HPC community on the progress of collaborative efforts of the NCI working with the Department of Energy.

The response to the call for papers this year was also very strong – reflecting the rapidly growing role of HPC in accelerating cancer research. In addition to a report that compiles summaries of the workshop contributions, highlights key issues, and identifies new areas of exploration and collaboration, the organizers are anticipating a special journal issue where these contributions can also be shared in full.

HPCwire: In the fight against cancer, where has computational technology had the greatest impact so far and what kinds of computational infrastructure have been the key enablers? Where has its application been disappointing?

Stahlberg: Computational technology has been part of the cancer fight for many years, providing many meaningful contributions along the way to advance understanding, provide insight, and deliver new diagnostic capabilities. One can find computational technology at work in many areas including imaging systems, in computational models used in cancer drug discovery, in assembling, mapping and analyzing genomes that enable molecular level understanding, even in the information systems used to manage and share information about cancer patients.

The question of where computational technology has been most disappointing in the fight against cancer is also difficult to answer, given the breadth of areas where it has been employed. However, given the near-term critical need to enable greater access to clinical data including patient history and outcomes, an area where great promise remains is in the use of computational technologies that make it possible for more patient information to be brought together more fully, safely and securely to accelerate progress in the direction of clinical impact.

HPCwire: What are the key computational gaps in the fight against cancer today? How do you see them being addressed and what particular technologies, life science and computational, are expected to have the greatest impact?

Stahlberg: The computational landscape for cancer is changing so rapidly that the key gaps continue to also change. Not too long ago, there was great concern about the amount of data being generated and whether this data could all be used effectively. Within just a few years, the mindset has changed significantly, where the challenge is now focusing on bringing what data we have available together and recognizing that even then, the complexity of the disease demands even more [data] as we aim for more precision in diagnosis, prognosis, and associated treatments.

With that said, computational gaps exist in nearly every domain and area of the cancer fight. At the clinical level, computation holds promise to help bring together, even virtually, the large amounts of existing data locked away in organizational silos. At the clinical level, there are important gaps in the data available for cancer patients pre and post treatment, a gap that may well be filled by both better data integration capabilities as well as mobile health monitoring. Understanding disease progression and response also presents a gap as well as an associated opportunity for computing and life science to work together to find efficient ways to monitor patient progress, track and monitor the disease at the cellular level, and create profiles of how different treatments impact the disease over time in the laboratory and in the clinic.

It is clear that technologies in two areas will be key in the near term – machine learning and technologies that enable algorithm portability.

In the near term, machine learning is expected to play a very significant role, particularly as the amount of data generated is expected to grow. We have already seen progress in automating feature identification as well as delivering approaches for predictive models for complex data. As the amount of available, reliable cancer data across all scales increases, the opportunity for machine learning to accelerate insight and leverage these new levels of information will continue to grow tremendously.

A second area of impact, related to the first, is in technologies for algorithm assurance and portability. As computing technology has become increasingly integrated within instruments and diagnostics at the point of acquisition, exploding the volume of data collected, the need grows tremendously to move algorithms closer to the point of acquisition to enable processing before transport. The need for consistency and repeatability in the scientific research process requires portability and assurance of the analysis workflows. Portability and assurance of implementation are also important keys to eventual success in a clinical setting.

Efforts in delivering portable workflows through containers are also demonstrating great promise in moving the compute to the data, and are providing an initial means of overcoming existing organizational barriers to data access.

Extending a bit further, technologies that enable portability of algorithms and of trained predictive models will also become keys to future success for HPC in cancer research. As new ways to encapsulate knowledge in the form of a trained neural network, parameterized set of equations, or other forms of predictive models, having reliable, portable knowledge will be a key factor to share insight and build the collective body of knowledge needed to accelerate cancer research.

HPCwire: While we are talking about infrastructure could you provide a picture of the HPC resources that NIH/NCI have, are they sufficient as is, and what the plans are for expanding them?

Biowulf phase three

Stahlberg: The HPC resource map for the NIH and NCI, like that of many large organizations, ranges from small servers to one of the largest systems created to support biological and health computation. There has been wonderful recent growth in the HPC resources available to NIH and NCI investigators, as part of a major NIH investment. The Biowulf team has done a fantastic job in raising the level of computing to now include 90,000+ processors. This configuration includes an expanded role of heterogeneous technologies in the form of GPUs, and presents a new level of computing capability available to the NIH and to NCI investigators.

In addition, NCI supports a large number of investigators through its multiple grant programs, where large scale HPC resources supported by NSF and others are being put to use in the war on cancer. While the specific details on the magnitude of computing this represents are not immediately available, the breadth of this level of support is expected to be quite substantial. At the annual meeting earlier this year for CASC (Coalition for Academic Scientific Computing), when asked which centers were supporting NCI-funded investigators, nearly every attendee raised their hand.

Looking ahead, it would be difficult to make the case that even this level of HPC resources will be sufficient as is, knowing the dramatic increases in the amount of data being generated currently, being forecast for the future, and the overall deepening complexity of cancer as new insights are revealed on an ongoing basis. With the emphasis on precision medicine and, in the case of NCI, precision oncology, new opportunities to accelerate research and insight using HPC are quickly emerging.

Looking to the future of HPC and large scale computing in cancer research was one of the many factors supporting the new collaborative effort between the NCI and DOE. With the collaboration now having wrapped up its first year, new insights are being provided here that will be merged with additional insights and information from the many existing efforts, to help inform future planning for HPC resources in the context of emerging Exascale computing capabilities and emerging HPC technologies.

HPCwire: Interdisciplinary expertise is increasingly important in medicine. One persistent issue has been the relative lack of computational expertise among clinicians and life science researchers. To what extent is this changing and what steps are needed to raise computational expertise among this group?

Stahlberg: Medicine has long been a team effort, drawing from many disciplines and abilities to deliver care to the patient. There have been long-established centers in computational and mathematical aspects of medicine. One such example is the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research, which has been at the forefront of computational applications of medicine for more than twenty-five years. The difference today is the breadth of disciplines that are working together, and the depth of demand for computational scientists, as sources and volumes of available data in medicine and life sciences have exploded. The apparent shortage of computational expertise among clinicians and life sciences researchers is largely a result of the rapid rate of change in these areas, where the workforce and training have yet to catch up to the accelerating pace of technology and data-driven innovation.

Fortunately, many have recognized the need and opportunity for cross-disciplinary experience in computational and data sciences to enable ongoing advances in medicine. This appreciation has led to many new academic and training programs supported by NIH and NCI, as well as many catalyzed by health organizations and universities themselves, that will help fill future demand.

Collaborative opportunities between computational scientists and the medical research community are helping fill the immediate needs. One such example is the Joint Design of Advanced Computing Solutions for Cancer, a collaboration between the National Cancer Institute and the Department of Energy that brings world-class computational scientists together with world-class cancer scientists in shared efforts to advance mission aims in both cancer and Exascale computing by pushing the limits of each together.

More organizations, seeing similar opportunities for cross-disciplinary collaboration in medicine, will certainly be needed to address the near-term demand while existing computational and data science programs adapt to embrace the medical and life sciences, and new programs begin to deliver the cross-trained, interdisciplinary workforce for the future.

HPCwire: Deep learning and Artificial Intelligence are the big buzzwords in advanced scale computing today. Indeed, the CANDLE program efforts to learn how to apply deep learning in areas such as simulation (RAS effort), pre-clinical therapy evaluation, and outcome data mining are good examples. How do you see deep learning and AI being used near-term and long-term in the war on cancer?

Stahlberg: While AI has a long history of application in medicine and life sciences, the opportunities for deep learning based AI in the war on cancer are just starting to be developed. As you mention, the application of deep learning in the JDACS4C pilots involving molecular simulation, pre-clinical treatment prediction, and outcome modeling are just developing the frontier of how this technology can be applied to effect and accelerate the war on cancer.  The CANDLE Exascale Computing project, led by Argonne National Laboratory, was formed out of the recognition that AI and deep learning in particular was intrinsic to each pilot, and had broad potential application across the cancer research space. The specific areas being explored by the three pilot efforts as part of the CANDLE project provide some insight into how deep learning and AI can be expected to have future impact in the war on cancer.

The pilot collaboration on cancer surveillance (pilot 3), led by investigators from the NCI Division of Cancer Control and Population Science and Oak Ridge National Laboratory, is demonstrating how deep learning can be applied to extract information from complex data, such as biomarker information from electronic pathology reports. Similar capabilities have been shown to be possible with the processing of image information. Joined with automation, in the near term, deep learning can be expected to deepen and broaden the available insight about the cancer patient population in ways not otherwise possible.

The pilot collaboration on RAS-related cancers (pilot 2), led by investigators from Frederick National Laboratory and Lawrence Livermore National Laboratory, follows in this direction, applying deep learning to extract and correlate features of potential interest from complex molecular interaction data.

The pilot collaboration on predictive cancer models (pilot 1), led by investigators from Frederick National Laboratory and Argonne National Laboratory, is using deep learning-based AI in a different manner: to develop predictive models of tumor response. While still very early, the potential use of deep learning for the development of predictive models in cancer is very exciting, opening doors to many new avenues to develop a ‘cancer learning system’ that will join data, prediction, and feedback in a learning loop that holds potential to revolutionize how we prevent, detect, diagnose, and treat cancer.

In the era that combines scale of big data, exascale computing and deep learning, new levels of understanding about the data are also possible and extremely valuable. Led by scientists at Los Alamos National Laboratory, Uncertainty Quantification, or UQ, is also an important element of the NCI collaboration with the DOE. Providing key insights into limits of the data and limits of the models, the information provided by UQ is helping to inform new approaches and priorities to improve both the robustness of the data and the models being employed.

These are just a few of the near-term areas where deep learning and AI are anticipated to have an impact. Looking long-term, the role of these technologies is difficult to forecast, but in drawing parallels from other disciplines, some specific areas begin to emerge.

First, for making predictions on complex systems such as cancer, ensemble and multi-model approaches are likely to be increasingly required to build consensus among likely outcomes across a range of initial conditions and parameters. Deep learning is likely to be used both to represent the complex systems being modeled and to inform the selections and choices to be involved in the ensembles. In a second future scenario, data-driven deep learning models may also form a future basis for portably representing knowledge about cancer, particularly in recognition of the complexity of the data and the ongoing need to maintain data provenance and security. Deep learning models may be readily developed with locally accessible datasets, then shared with the community without sharing the actual data.

As a third future scenario, in translation to critical use scenarios, as core algorithms for research or central applications in the clinic, deep learning models provide a means for maintaining consistency, validation and verification in translation from research to clinical setting.

Anton 1 supercomputer specialized for life sciences modeling and simulation

HPCwire: One area that has perhaps been disappointing is predictive biology, and not just in cancer. Efforts to start with first principles, such as is done in building modern jetliners, or even with experimental data from the elucidation of various pathways, to ‘build’ drugs have had mixed results. Leaving aside things like structure scoring (docking, etc.), where is predictive biology headed in terms of fighting cancer, what’s the sense around needed technology requirements, for example specialized supercomputers such as Anton 1 and 2, and what’s the sense of needed basic knowledge to plug into predictive models?

Stahlberg: Biology has been a challenge for computational prediction given the overall complexity of the system and the role that subtle changes and effects can have in the overall outcome. The potential disappointment that may be assigned to predictive biology is most likely relative – relative to what has been demonstrated in other disciplines such as transportation.

This is what makes the current era so very promising for accelerating progress on cancer and predictive biology. A sustained effort employing the lessons learned from other industries, where it is now increasingly possible to make the critical observations of biology at the fundamental level of the cell, combined with the computing capabilities that are rapidly becoming available, sets the stage for transforming predictive biology in a manner observed in parallel industries. Two elements of that transformation highlight the future direction.

First, the future for predictive biology is likely to be multi-scale, both in time and space, where models for subsystems are developed, integrated and accelerated computationally to support and inform predictions across multiple scales and unique biological environments, and ultimately for increasingly precise predictions for defined groups of individuals.  Given the multi-scale nature of biology itself, the direction is not too surprising. The challenge is in getting there.

One of the compelling features for deep learning in the biological domain is in its flexibility and applicability across the range of scales of interest. While not a substitute for fundamental understanding, deep learning enables a first step to support a predictive perspective for the complexity of data available in biology. This first step enables active learning approaches, where data is used to develop predictive models that are progressively improved with new data and biological insight obtained from experimentation aimed at reducing uncertainty around the prediction.

A critical area of need already identified is for more longitudinal observations and data, with which to gain both greater insight into outcomes for patients and greater insight into the incremental changes of biological state over time at all scales. In the near term, by starting with the data we currently have available, advances will be made to help inform on the data expected to be required for improved predictions, whereby insights will be gained to define the information truly needed for confident predictions.

The role of specialized technologies will be critical, particularly in the context of predictive models, as the size, complexity and number of subsystems are studied and explored to align predictions across scales. These specialized technologies will lead the forefront of efficient implementations of predictive models, increasing the speed and reducing the costs required to study and inform decisions for increasingly precise and predictive oncology.

Brief Bio:
Eric Stahlberg is director of the HPC Initiative at Frederick National Laboratory for Cancer Research in the Data Science and Information Technology Program. In this role he also leads HPC strategy and exploratory computing efforts for the National Cancer Institute Center for Biomedical Informatics and Information Technology (CBIIT). Dr. Stahlberg also spearheads collaborative efforts between the National Cancer Institute and the US Department of Energy, including the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C), the CANcer Distributed Learning Environment (CANDLE), and Accelerating Therapeutics for Opportunities in Medicine (ATOM). Prior to joining Frederick National Laboratory, he directed an innovative undergraduate program in computational science, led efforts in workforce development, led HPC initiatives in bioinformatics, and led multiple state and nationally funded projects.

The post SC17: AI and Machine Learning are Central to Computational Attack on Cancer appeared first on HPCwire.

AWS Announces Availability of C5 Instances for Amazon EC2

Tue, 11/07/2017 - 07:57

SEATTLE, Nov. 7, 2017 — Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ:AMZN), announced the availability of C5 instances, the next generation of compute optimized instances for Amazon Elastic Compute Cloud (Amazon EC2). Designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding, C5 instances feature 3.0 GHz Intel Xeon Scalable processors (Skylake-SP), up to 72 vCPUs, and 144 GiB of memory—twice the vCPUs and memory of previous generation C4 instances—providing the best price-performance of any Amazon EC2 instance. To get started with C5 instances, visit: https://aws.amazon.com/ec2/instance-types/c5/.

Optimized to deliver the right combination of CPU, memory, storage, and networking capacity for a wide range of workloads, all latest generation Amazon EC2 instance families—including C5—feature AWS hardware acceleration that delivers consistent, high performance, low latency networking and storage resources. C5 instances provide networking through the Elastic Network Adapter (ENA), a scalable network interface built by AWS to provide direct access to its networking hardware. Additional dedicated hardware and network bandwidth for Amazon Elastic Block Store (Amazon EBS) enables C5 instances to offer high performance storage through the scalable NVM Express (NVMe) interface. C5 instances introduce a new, lightweight hypervisor that allows applications to use practically all of the compute and memory resources of a server, delivering reduced cost and even better performance. C5 instances are available in six sizes—with the four smallest instance sizes offering substantially more Amazon EBS and network bandwidth than the previous generation of compute optimized instances.

“Customers have been happily using Amazon EC2’s unmatched selection of instances for more than 11 years, yet they’ll always take higher and more consistent performance if it could be offered in a cost-effective way. One of the challenges in taking this next step is how to leverage the cost efficiency of virtualization while consuming hardly any overhead for it,” said Matt Garman, Vice President, Amazon EC2, AWS. “We’ve been working on an innovative way to do this that comes to fruition with Amazon EC2 C5 instances. Equipped with our new cloud-optimized hypervisor, C5 instances set a new standard for consistent, high-performance cloud computing, eliminating practically any virtualization overhead through custom AWS hardware, and delivering a 25 percent improvement in compute price-performance over C4 instances—with some customers reporting improvements of well over 50 percent.”

Netflix is the world’s leading internet television network with 104 million members in over 190 countries enjoying more than 125 million hours of TV shows and movies per day. “In our testing, we saw significant performance improvement on Amazon EC2 C5, with up to a 140 percent performance improvement in industry standard CPU benchmarks over C4,” said Amer Ather, Cloud Performance Architect at Netflix. “The 15 percent price reduction in C5 will deliver a compelling price-performance improvement over C4.”

iPromote provides digital advertising solutions to 40,000 small and medium-sized businesses (SMBs). “iPromote processes billions of ad serving bid transactions every day,” said Matt Silva, COO at iPromote. “During testing, C5 instances improved our application’s request execution time by over 50 percent and significantly improved our network performance overall.”

Grail is a life sciences company whose mission is to detect cancer early, when it can be cured. “Our platform processes a huge amount of DNA sequencing data to detect faint tumor DNA signals in a sea of background noise,” said Cos Nicolaou, Head of Technology at Grail. “We are eager to migrate onto the AVX-512 enabled c5.18xlarge instance size. With this change, we expect to decrease the processing time of some of our key workloads by more than 30 percent.”

Alces Flight Compute makes it easy for researchers to spin up High Performance Computing (HPC) clusters of any size on AWS. “With the support for AVX-512, the new c5.18xlarge instance provides a 200 percent improvement in FLOPS compared to the largest C4 instance,” said Wil Mayers, Director of Research and Development for Alces. “This will reduce the execution time of the scientific models that our customers run on the Alces Flight platform. The larger c5.18xlarge size with 72 vCPUs reduces the number of instances in the cluster, and has a direct benefit for our user base on both price and performance dimensions.”

Rescale enables customers in the aerospace, automotive, life sciences and energy sectors to run utility supercomputers using AWS. “C5 fully supports NVMe and is ideal for the I/O intensive HPC workloads seen on Rescale’s ScaleX® platform,” Ryan Kaneshiro, Chief Architect at Rescale, said. “C5’s higher clock speed and AVX-512 instruction set will allow our customers to run their CAE simulations significantly faster than on C4 instances.”

Customers can purchase Amazon EC2 C5 instances as On-Demand, Reserved, or Spot instances. C5 instances are generally available today in the US East (N. Virginia), US West (Oregon), and EU (Ireland) regions, with support for additional regions coming soon. They are available in six sizes with 2, 4, 8, 16, 36, and 72 vCPUs.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid, and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

Source: Amazon

The post AWS Announces Availability of C5 Instances for Amazon EC2 appeared first on HPCwire.

Cosmos Code Helps Probe Space Oddities

Tue, 11/07/2017 - 07:52

Nov. 7, 2017 — Black holes make for a great space mystery. They’re so massive that nothing, not even light, can escape a black hole once it gets close enough. One enduring puzzle for scientists is that there is evidence of powerful jets of electrons and protons that shoot out of the top and bottom of some black holes. Yet no one knows how these jets form.

Computer code called Cosmos now fuels supercomputer simulations of black hole jets and is starting to reveal the mysteries of black holes and other space oddities.

Image caption: Cosmos simulates wide-ranging astrophysical phenomena. Shown here is a multi-physics simulation of an Active Galactic Nucleus (AGN) jet colliding with and triggering star formation within an intergalactic gas cloud (red indicates jet material, blue is neutral hydrogen [H I] gas, and green is cold, molecular hydrogen [H2] gas). Credit: Chris Fragile

“Cosmos, the root of the name, came from the fact that the code was originally designed to do cosmology. It’s morphed into doing a broad range of astrophysics,” explained Chris Fragile, a professor in the Physics and Astronomy Department of the College of Charleston. Fragile helped develop the Cosmos code in 2005 while working as a post-doctoral researcher at Lawrence Livermore National Laboratory (LLNL), along with Steven Murray (LLNL) and Peter Anninos (LLNL).

Fragile pointed out that Cosmos gives astrophysicists an advantage because it has stayed at the forefront of general relativistic magnetohydrodynamics (MHD). MHD simulations, which model magnetized, electrically conducting fluids such as black hole jets, add a layer of understanding but are notoriously demanding for even the fastest supercomputers.

“The other area that Cosmos has always had some advantage in as well is that it has a lot of physics packages in it,” continued Fragile. “This was Peter Anninos’ initial motivation, in that he wanted one computational tool where he could put in everything he had worked on over the years.” Fragile listed some of those packages, which include chemistry, nuclear burning, Newtonian gravity, relativistic gravity, and even radiation and radiative cooling. “It’s a fairly unique combination,” Fragile said.

The current iteration of the code is CosmosDG, which utilizes discontinuous Galerkin methods. “You take the physical domain that you want to simulate,” explained Fragile, “and you break it up into a bunch of little, tiny computational cells, or zones. You’re basically solving the equations of fluid dynamics in each of those zones.” CosmosDG has allowed a much higher order of accuracy than ever before, according to results published in the Astrophysical Journal in August 2017.
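As a rough illustration of the zone-based picture Fragile describes, and only that, the toy C program below splits a one-dimensional domain into zones and updates each zone from the fluxes at its two faces. It solves plain linear advection with a first-order upwind flux; the real CosmosDG code solves the general relativistic MHD equations with high-order discontinuous Galerkin elements, so every name and number here is an illustrative stand-in.

/* Minimal sketch: split a 1D domain into zones and update each zone from
 * the fluxes at its interfaces. Linear advection with an upwind flux is
 * used as a stand-in for the far more complex relativistic MHD equations
 * and discontinuous Galerkin elements of CosmosDG.                        */
#include <stdio.h>

#define NZONES 200                          /* number of computational zones */

int main(void) {
    double u[NZONES], flux[NZONES + 1];
    const double a  = 1.0;                  /* advection speed */
    const double dx = 1.0 / NZONES;         /* zone width */
    const double dt = 0.5 * dx / a;         /* CFL-limited time step */

    /* Initial condition: a square pulse in part of the domain. */
    for (int i = 0; i < NZONES; i++)
        u[i] = (i > NZONES / 4 && i < NZONES / 2) ? 1.0 : 0.0;

    for (int step = 0; step < 200; step++) {
        /* Upwind flux at each zone interface (periodic boundaries). */
        for (int i = 0; i <= NZONES; i++) {
            int left = (i == 0) ? NZONES - 1 : i - 1;
            flux[i] = a * u[left];
        }
        /* Each zone is updated only from the fluxes at its two faces. */
        for (int i = 0; i < NZONES; i++)
            u[i] -= dt / dx * (flux[i + 1] - flux[i]);
    }

    for (int i = 0; i < NZONES; i++)
        printf("%g %g\n", i * dx, u[i]);
    return 0;
}

Where this toy code stores a single average value per zone, a discontinuous Galerkin method such as CosmosDG keeps a small polynomial expansion of the solution inside each zone, which is where the extra orders of accuracy per zone come from.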

“We were able to demonstrate that we achieved many orders of magnitude more accurate solutions in that same number of computational zones,” stated Fragile. “So, particularly in scenarios where you need very accurate solutions, CosmosDG may be a way to get that with less computational expense than we would have had to use with previous methods.”

XSEDE ECSS Helps Cosmos Develop

Since 2008, the Texas Advanced Computing Center (TACC) has provided computational resources for the development of the Cosmos code—about 6.5 million supercomputer core hours on the Ranger system and 3.6 million core hours on the Stampede system. XSEDE, the eXtreme Science and Engineering Discovery Environment funded by the National Science Foundation, awarded Fragile’s group the allocation.

“I can’t praise enough how meaningful the XSEDE resources are,” Fragile said. “The science that I do wouldn’t be possible without resources like that. That’s a scale of resources that certainly a small institution like mine could never support. The fact that we have these national-level resources enables a huge amount of science that just wouldn’t get done otherwise.”

And the fact is that busy scientists can sometimes use a hand with their code. In addition to access, XSEDE also provides a pool of experts through the Extended Collaborative Support Services (ECSS) effort to help researchers take full advantage of some of the world’s most powerful supercomputers.

Fragile has recently enlisted the help of XSEDE ECSS to optimize the CosmosDG code for Stampede2, a supercomputer capable of 18 petaflops and the flagship of TACC at The University of Texas at Austin. Stampede2 features 4,200 Knights Landing (KNL) nodes and 1,736 Intel Xeon Skylake nodes.

Taking Advantage of Knights Landing and Stampede2

The manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance, according to Damon McDougall, a research associate at TACC and at the Institute for Computational Engineering and Sciences, UT Austin. Each Stampede2 KNL node has 68 cores, with four hardware threads per core, for 272 hardware threads per node. That is a lot of moving pieces to coordinate.

“This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”

Through ECSS, McDougall has helped Fragile optimize CosmosDG for Stampede2. “We promote a certain type of parallelism, called hybrid parallelism, where you might mix Message Passing Interface (MPI) protocols, which is a way of passing messages between compute nodes, and OpenMP, which is a way of communicating on a single compute node,” McDougall said. “Mixing those two parallel paradigms is something that we encourage for these types of architectures. That’s the type of advice we can help give and help scientists to implement on Stampede2 through the ECSS program.”

“By reducing how much communication you need to do,” Fragile said, “that’s one of the ideas of where the gains are going to come from on Stampede2. But it does mean a bit of work for legacy codes like ours that were not built to use OpenMP. We’re having to retrofit our code to include some OpenMP calls. That’s one of the things Damon has been helping us with, to make this transition as smooth as possible.”
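For readers unfamiliar with the pattern, a minimal hybrid MPI + OpenMP sketch in C is shown below. It is not taken from CosmosDG; the array, its size, and the reduction are stand-ins chosen only to show where the OpenMP threading and the MPI communication each sit.

/* Hybrid MPI + OpenMP sketch: one MPI rank per node (or socket) handles
 * communication between nodes, while OpenMP threads share the work on
 * each node. Compile with, e.g.:  mpicc -fopenmp hybrid.c -o hybrid      */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NLOCAL 1000000   /* zones owned by each MPI rank (illustrative) */

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Ask MPI for thread support so OpenMP and MPI can coexist safely. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    static double u[NLOCAL];
    double local_sum = 0.0, global_sum = 0.0;

    /* On-node work: OpenMP threads update this rank's zones in parallel. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < NLOCAL; i++) {
        u[i] = (double)(rank * NLOCAL + i);  /* stand-in for a zone update */
        local_sum += u[i];
    }

    /* Off-node work: only MPI moves data between nodes. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global_sum=%g\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}

The division of labor is the point Fragile makes above: OpenMP threads share work within a node without any message passing, so the MPI traffic between nodes, the expensive part, is reduced.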

McDougall described the ECSS work so far with CosmosDG as “very nascent and ongoing,” with much of the initial effort spent sleuthing out memory-allocation ‘hot spots’ where the code slows down.

“One of the things that Damon McDougall has really been helpful with is helping us make the codes more efficient and helping us use the XSEDE resources more efficiently so that we can do even more science with the level of resources that we’re being provided,” Fragile added.

Black Hole Wobble

Some of the science Fragile and colleagues have already done with the help of the Cosmos code concerns accretion, the fall of molecular gas and space debris into a black hole. Accretion is what powers a black hole’s jets. “One of the things I guess I’m most famous for is studying accretion disks where the disk is tilted,” explained Fragile.

Black holes spin, and so does the disk of gas and debris that surrounds them and falls in. The two, however, can spin about different axes of rotation. “We were the first people to study cases where the axis of rotation of the disk is not aligned with the axis of rotation of the black hole,” Fragile said. General relativity shows that a rotating body can exert a torque on other rotating bodies whose axes are not aligned with its own.

Fragile’s simulations showed that such a tilted disk wobbles, a motion called precession, under the torque from the spinning black hole. “The really interesting thing is that over the last five years or so, observers—the people who actually use telescopes to study black hole systems—have seen evidence that the disks might actually be doing this precession that we first showed in our simulations,” Fragile said.
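For context, the leading-order rate of this frame-dragging (Lense-Thirring) precession induced by a spinning black hole on misaligned material orbiting at radius $r$ is a standard general-relativistic result rather than something reported in the article:

\[
  \Omega_{\mathrm{LT}} \simeq \frac{2GJ}{c^{2}r^{3}},
  \qquad J = a\,\frac{GM^{2}}{c}, \quad 0 \le a \le 1,
\]

where $M$ is the black hole’s mass, $J$ its angular momentum, and $a$ its dimensionless spin. The steep $r^{-3}$ falloff means the torque, and hence the precession, is strongest for material closest to the black hole.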

Fragile and colleagues use the Cosmos code to study other space oddities such as tidal disruption events, which happen when a molecular cloud or star passes close enough to a black hole to be shredded by it. Other examples include Minkowski’s Object, where Cosmos simulations support observations that a black hole jet collides with a molecular cloud to trigger star formation.

Golden Age of Astronomy and Computing

“We’re living in a golden age of astronomy,” Fragile said, referring to the wealth of knowledge generated by space telescopes from Hubble to the upcoming James Webb Space Telescope, by ground-based telescopes such as Keck, and more.

Computing has helped support the success of astronomy, Fragile said. “What we do in modern-day astronomy couldn’t be done without computers,” he concluded. “The simulations that I do are two-fold. They’re to help us better understand the complex physics behind astrophysical phenomena. But they’re also to help us interpret and predict observations that either have been, can be, or will be made in astronomy.”

Source: Texas Advanced Computing Center

The post Cosmos Code Helps Probe Space Oddities appeared first on HPCwire.
