HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Linguists Use HPC to Develop Emergency-Response Translator

Sun, 12/03/2017 - 14:58

We live on a planet of more than seven billion people who speak more than 7,000 languages. Most of these are “low-resource” languages for which there is a dearth of human translators and no automated translation capability. This presents a big challenge in emergency situations where information must be collected and communicated rapidly across languages.

To address this problem, linguists at Ohio State University are using the Ohio Supercomputer Center’s Owens Cluster to develop a general grammar acquisition technology.

This graph displays an algorithm that explores the space of possible probabilistic grammars and maps out the regions of this space that have the highest probability of generating understandable sentences. (Source: OSC)

The research is part of an initiative called Low Resource Languages for Emergent Incidents (LORELEI) that is funded through the Defense Advanced Research Projects Agency (DARPA). LORELEI aims to support emergent missions, e.g., humanitarian assistance/disaster relief, peacekeeping or infectious disease response by “providing situational awareness by identifying elements of information in foreign language and English sources, such as topics, names, events, sentiment and relationships.”

The Ohio State group is using high-performance computing and Bayesian methods to develop a grammar acquisition algorithm that can discover the rules of lesser-known languages.

“We need to get resources to direct disaster relief and part of that is translating news text, knowing names of cities, what’s happening in those areas,” said William Schuler, Ph.D., a linguistics professor at The Ohio State University, who is leading the project. “It’s figuring out what has happened rapidly, and that can involve automatically processing incident language.”

Schuler’s team is using Bayesian methods to discover a given language’s grammar and build a model capable of generating grammatically valid output.

“The computational requirements for learning grammar from statistics are tremendous, which is why we need a supercomputer,” Schuler said. “And it seems to be yielding positive results, which is exciting.”
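The flavor of this kind of Bayesian grammar learning can be sketched in miniature: given counts of candidate rewrite rules gathered from parses of observed sentences, a Dirichlet prior yields smoothed posterior probabilities for each rule. The rule names and counts below are hypothetical, purely for illustration; the actual algorithm searches a vastly larger space of probabilistic grammars, which is what demands supercomputer-scale resources.

```python
from collections import Counter

def posterior_rule_probs(rule_counts, alpha=1.0):
    """Posterior mean probabilities for PCFG rules under a symmetric
    Dirichlet prior: (count + alpha) / (total + alpha * n_rules),
    normalized per left-hand-side nonterminal."""
    by_lhs = {}
    for (lhs, rhs), n in rule_counts.items():
        by_lhs.setdefault(lhs, Counter())[rhs] = n
    probs = {}
    for lhs, rhss in by_lhs.items():
        total = sum(rhss.values()) + alpha * len(rhss)
        for rhs, n in rhss.items():
            probs[(lhs, rhs)] = (n + alpha) / total
    return probs

# Toy counts from hypothetical parses of observed sentences
counts = {("S", ("NP", "VP")): 8, ("S", ("VP",)): 2,
          ("NP", ("Det", "N")): 6, ("NP", ("N",)): 4}
probs = posterior_rule_probs(counts)
print(round(probs[("S", ("NP", "VP"))], 3))  # 0.75
```

With the prior's pseudo-counts, rules never seen in the data still retain nonzero probability, which is what lets the learner keep exploring grammar hypotheses rather than committing to early parses.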

The team originally used CPU-only servers but is now using GPUs in order to model a larger number of grammar categories. The goal is to have a model that can be trained on a target language in an emergency response situation, so speed is critical. In August, the team ran two simulated disaster scenarios in seven days using 60 GPU nodes (one Nvidia P100 GPU per node), but a real-world situation with more realistic configurations would demand even greater computational power, according to one of the researchers.


The post Linguists Use HPC to Develop Emergency-Response Translator appeared first on HPCwire.

HDF5-1.8.20 Release Now Available

Fri, 12/01/2017 - 08:54

Dec. 1, 2017 — The HDF5-1.8.20 release is now available. It can be obtained from The HDF Group Support Download page: https://support.hdfgroup.org/downloads/

It can also be obtained directly from: https://support.hdfgroup.org/HDF5/release/obtain518.html

HDF5-1.8.20 is a minor release with a few new features and changes. Important changes include:

  • An issue was fixed where H5Zfilter_avail did not find available plugins.
  • Improvements were made to the h5repack, h5ls, h5dump, h5diff, and h5import utilities.

    In particular, please be aware that the behavior of the h5repack utility changed.

    A parameter was added to the “UD=” option of h5repack to allow the user-defined filter flag to be changed to either H5Z_FLAG_MANDATORY (0) or H5Z_FLAG_OPTIONAL (1).

    An example of the command with the new parameter is shown here:
    h5repack -f UD=307,0,1,9 h5repack_layout.h5 out.h5repack_layout.h5

    Previously, the command would have been:
    h5repack -f UD=307,1,9 h5repack_layout.h5 out.h5repack_layout.h5
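    Reading the new “UD=” value is easier with the fields named. The helper below is not part of the HDF5 tools; it simply splits the option string according to the field order implied by the examples above (filter id, filter flag, number of client-data values, then the values themselves).

    ```python
    def parse_ud_option(ud):
        """Split an h5repack 'UD=' value into named fields.
        Field order assumed from the release-note examples:
        filter_id, filter_flag (0 = mandatory, 1 = optional),
        cd_nelmts, then cd_nelmts client-data values."""
        parts = [int(x) for x in ud.split(",")]
        filter_id, flag, nelmts = parts[0], parts[1], parts[2]
        values = parts[3:3 + nelmts]
        return {"filter_id": filter_id,
                "flag": "H5Z_FLAG_MANDATORY" if flag == 0 else "H5Z_FLAG_OPTIONAL",
                "cd_values": values}

    print(parse_ud_option("307,0,1,9"))
    # {'filter_id': 307, 'flag': 'H5Z_FLAG_MANDATORY', 'cd_values': [9]}
    ```

    So the example command applies user-defined filter 307 as a mandatory filter with a single client-data value of 9.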

  • Support for the NAG compiler was added.
  • Many C++ APIs were added or modified to better reflect the HDF5 model.

This release contains many other changes that are not listed here. Please be sure to read the Release Notes for a comprehensive list of new features and changes.

Changes that affect maintainers of HDF5-dependent applications are listed on the HDF5 Software Changes from Release to Release page.

Future Changes to Supported Compilers

Please be aware that the minimum supported CMake version will be changing in the next release to CMake 3.8 or greater.

Source: The HDF Group


UT Dallas Researcher Uses Supercomputers to Explore Nanopill Design

Fri, 12/01/2017 - 08:37

Dec. 1, 2017 — Imagine a microscopic gold pill that could travel to a specific location in your body and deliver a drug just where it is needed. This is the promise of plasmonic nanovesicles.

These minute capsules can navigate the bloodstream and, when hit with a quick pulse of laser light, change shape to release their contents. The capsule can then exit the body, leaving behind only the delivered payload.

This on-demand, light-triggered drug release method could transform medicine, especially the treatment of cancer. Clinicians are beginning to test plasmonic nanovesicles on head and neck tumors. They can also help efforts to study the nervous system in real-time and provide insights into how the brain works.

However, like many aspects of nanotechnology, the devil is in the details. Much remains unknown about the specific behavior of these nanoparticles – for instance, the wavelengths of light they respond to and how best to engineer them.

Writing in the October 2017 issue of Advanced Optical Materials, Zhenpeng Qin, an assistant professor of Mechanical Engineering and Bioengineering at the University of Texas at Dallas, his team, and collaborators from the University of Reims (Dr. Jaona Randrianalisoa), reported the results of computational investigations into the collective optical properties of complex plasmonic vesicles.

They used the Stampede and Lonestar supercomputers at the Texas Advanced Computing Center, as well as systems at the ROMEO Computing Center at the University of Reims Champagne-Ardenne and the San Diego Supercomputer Center (through the Extreme Science and Engineering Discovery Environment) to perform large-scale virtual experiments of light-struck vesicles.

“A lot of people make nanoparticles and observe them using electron microscopy,” Qin said. “But the computations give us a unique angle to the problem. They provide an improved understanding of the fundamental interactions and insights so we can better design these particles for specific applications.”

Striking Biomedical Gold

Gold nanoparticles are one promising example of a plasmonic nanomaterial. Unlike normal substances, plasmonic nanoparticles (typically made of noble metals) have unusual scattering, absorbance, and coupling properties due to their geometries and electromagnetic characteristics. One consequence of this is that they interact strongly with light and can be heated by visible and ultraviolet light, even at a distance, leading to structural changes in the particles, from melting to expansion to fragmentation.

Gold nanoparticle-coated liposomes — spherical sacs enclosing a watery core that can be used to carry drugs or other substances into the tissues — have been demonstrated as promising agents for light-induced content release. But these nanoparticles need to be able to clear the body through the renal system, which limits the size of the nanoparticles to less than a few nanometers.

The specific shape of the nanoparticle — for instance, how close together the individual gold molecules are, how large the core is, and the size, shape, density and surface conditions of the nanoparticle — determines how, and how well, the nanoparticle functions and how it can be manipulated.

Qin has turned his attention in recent years to the dynamics of clusters of small gold nanoparticles with liposome cores, and their applications in both diagnostic and therapeutic areas.

“If we put the nanoparticles around a nano-vesicle, we can use laser light to pop open the vesicle and release molecules of interest,” he explained. “We have the capability to assemble a different number of particles around a vesicle by coating the vesicle in a layer of very small particles. How can we design this structure? It’s a quite interesting and complex problem. How do the nanoparticles interact with each other – how far apart are they, how many are there?”

Simulations Provide Fundamental and Practical Insights

To gain insights into the ways plasmonic nanoparticles work and how they can be optimally designed, Qin and colleagues use computer simulation in addition to laboratory experiments.

In their recent study, Qin and his team simulated various liposome core sizes, gold nanoparticle coating sizes, a wide range of coating densities, and random versus uniform coating organizations. The coatings include several hundred individual gold particles, which behave collectively.

“It is very simple to simulate one particle. You can do it on an ordinary computer, but we’re one of the first to look into a complex vesicle,” Randrianalisoa said. “It is really exciting to observe how aggregates of nanoparticles surrounding the lipid core collectively modify the optical response of the system.”

The team used the discrete dipole approximation (DDA) computation method in order to make predictions of the optical absorption features of the gold-coated liposome systems. DDA allows one to compute the scattering of radiation by particles of arbitrary shape and organization. The method has the advantage of allowing the team to design new complex shapes and structures and to determine quantitatively what their optical absorption features will be.
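The coupled-dipole idea behind DDA can be illustrated in a few lines: each dipole responds to the incident field plus the fields of all other dipoles, giving a linear system to solve. The sketch below uses the quasi-static limit, a scalar polarizability, and just two dipoles with made-up parameters; production DDA codes (and the study described here) use far larger dipole arrays, tensor polarizabilities, and fully retarded interactions.

```python
import numpy as np

def dda_dipole_moments(positions, alpha, E0):
    """Quasi-static discrete-dipole solve. Each dipole obeys
        p_i = alpha * (E0 + sum_{j != i} A_ij p_j),
    rearranged into the linear system (I/alpha - A) p = E0_stacked.
    Gaussian units; alpha is a scalar polarizability (a simplifying
    assumption for this sketch)."""
    n = len(positions)
    A = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[i] - positions[j]
            d = np.linalg.norm(r)
            rh = r / d
            # static dipole-field kernel: (3 rh rh^T - I) / d^3
            A[3*i:3*i+3, 3*j:3*j+3] = (3 * np.outer(rh, rh) - np.eye(3)) / d**3
    M = np.eye(3 * n) / alpha - A
    b = np.tile(E0, n)
    return np.linalg.solve(M, b).reshape(n, 3)

# Two dipoles separated along z, field polarized along z:
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
p = dda_dipole_moments(pos, alpha=0.1, E0=np.array([0.0, 0.0, 1.0]))
```

For a pair aligned with the field, the mutual coupling enhances each moment above the isolated-dipole value alpha*E0 — a tiny version of the collective behavior that makes the spacing of the gold nanoparticles matter so much.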

The researchers found that the gold nanoparticles that make up the outer surface have to be sufficiently close together, or even overlapping, to absorb sufficient light for the delivery system to be effective. They identified an intermediate range of optical conditions referred to as the “black gold regime,” where the tightly packed gold nanoparticles respond to light at all wavelengths, which can be highly useful for a range of applications.

“We’d like to develop particles that interact with light in the near-infrared range – with wavelengths of around 700 to 900 nanometers — so they have a deeper penetration into the tissue,” Qin explained.

They anticipate that this study will provide design guidelines for nano-engineers and will have a significant impact on the further development of complex plasmonic nanostructures and vesicles for biomedical applications.

[In a separate study published in ACS Sensors in October 2017, Qin and collaborators showed the effectiveness of gold nanoparticles for assays that detect infectious diseases and other biological and chemical targets.]

Inspired by recent developments in optogenetics, which uses light to control cells (typically neurons) in living tissues, Qin and his team plan to use the technology to develop a versatile optically-triggered system to perform real-time studies of brain activity and behavior.

He hopes the fast release feature of the new technique will provide sufficient speed to study neuronal communication in neuroscience research.

“There are a lot of opportunities for using computations to understand fundamental interactions and mechanisms that we can’t measure,” Qin said. “That can feed back into our experimental research so we can better advance these different techniques to help people.”

Source: Aaron Dubrow, TACC


NCSA SPIN Intern Daniel Johnson Develops Open Source HPC Python Package

Thu, 11/30/2017 - 19:05

Nov. 30 — At the National Center for Supercomputing Applications (NCSA), undergraduate SPIN (Students Pushing INnovation) intern Daniel Johnson joined NCSA’s Gravity Group to study Albert Einstein’s theory of general relativity, specifically numerical relativity. Johnson has used the Einstein Toolkit, an open-source numerical relativity code, on the Blue Waters supercomputer to numerically solve Einstein’s equations and study the collision of black holes and the emission of gravitational waves from these astrophysical events. During his SPIN internship, he developed an open-source Python package to streamline these numerical analyses in high-performance computing (HPC) environments.

This image shows the collision of two black holes with masses of 28 to 36 and 21 to 28 times the mass of the sun, respectively. The spheres at the center represent the event horizons of the black holes. The size of the black holes has been increased by a factor of 4 to enhance visibility. Elevation and color of the surface give an indication of the strength of the gravitational field at that point. Orange is strongest, dark blue is weakest. The collision happened between 1.1 to 2.2 billion light years from Earth, and was observed from a direction near the Eridanus constellation. The mass equivalent of 3 suns was converted to gravitational radiation and radiated into space. Authors: Numerical simulation: R. Haas, E. Huerta (University of Illinois); Visualization: R. Haas (University of Illinois)

Just this month, Johnson’s paper “Python Open Source Waveform Extractor (POWER): An open source, Python package to monitor and post-process numerical relativity simulations” was accepted by Classical and Quantum Gravity, a remarkable feat for an undergraduate student, and one that will benefit the numerical relativity community.

“With a long history of developing scientific software and tools, NCSA provides a rich environment where HPC experts, faculty, and students can work together to solve some of the most challenging problems facing us today,” said NCSA Director Bill Gropp. “This is a great example of what young researchers can accomplish when immersed in an exciting and supportive research program.”

“It’s very gratifying to have an undergraduate’s work published in a world-leading journal in relativity,” said Eliu Huerta, research scientist at NCSA who mentored Johnson. “Given the lack of open source tools to post-process the data products of these simulations in HPC environments, there was an opportunity for Daniel to create something that could be very useful to the numerical relativity community at large.”

When Albert Einstein published his theory of general relativity in 1915, he probably couldn’t have imagined its transformative impact across science domains, from mathematics to computer science and cosmology. Einstein would have been pleased to foresee that the advent of HPC would eventually allow detailed numerical studies of his theory, providing key insights into the physics of colliding neutron stars and black holes—the sources of gravitational waves that the LIGO (Laser Interferometer Gravitational-Wave Observatory) detectors observed for the first time on September 14, 2015, and which are now becoming routinely detected by ground-based gravitational wave detectors.

Johnson developed an open-source Python package that streamlines the monitoring and post-processing of the data products of large-scale numerical relativity campaigns in HPC environments. This, in turn, gives researchers an end-to-end infrastructure within the Einstein Toolkit in which they can submit, monitor, and post-process numerical relativity simulations.

“This whole project came to be when we were trying to compute numerical relativity waveforms from a large dataset we had created with the Einstein Toolkit on Blue Waters,” said Johnson. “We realized it would be much more efficient if we could post-process numerical relativity simulations directly on Blue Waters without having to move the massive amounts of data to another environment. I was able to adapt existing code and functions to Python, and now we can post-process huge amounts of data more efficiently than before.”
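POWER’s exact internals are not shown here, but a standard step in this kind of waveform post-processing is recovering the gravitational-wave strain h from the Weyl scalar psi4 (which satisfies psi4 = d²h/dt²) by fixed-frequency integration in the Fourier domain. The sketch below, with synthetic data standing in for simulation output, illustrates the technique rather than POWER’s implementation.

```python
import numpy as np

def strain_from_psi4(t, psi4, f0):
    """Fixed-frequency integration (FFI): since psi4 = d^2 h / dt^2,
    in the Fourier domain h(f) = -psi4(f) / (2*pi*f)^2. Frequencies
    below the cutoff f0 are clamped to avoid amplifying low-frequency
    noise, a standard trick in numerical-relativity post-processing."""
    dt = t[1] - t[0]
    f = np.fft.fftfreq(len(t), dt)
    w = 2 * np.pi * np.maximum(np.abs(f), f0)
    h_f = -np.fft.fft(psi4) / w**2
    return np.fft.ifft(h_f)

# Synthetic check: for h = cos(2*pi*f1*t), psi4 = -(2*pi*f1)^2 * h
t = np.linspace(0, 10, 4096, endpoint=False)
f1 = 1.0
psi4 = -(2 * np.pi * f1) ** 2 * np.cos(2 * np.pi * f1 * t)
h = strain_from_psi4(t, psi4, f0=0.1)
```

Running this entirely on the machine that produced psi4, as the quote above describes, avoids moving the raw time series off the system before reducing it to waveforms.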

“It’s become clear that open source software is a key component in most research today, and increasingly, researchers are coming to realize that it can be an intellectual contribution on its own, separate from any one research result, as can be seen in the fact that Classical and Quantum Gravity accepted this software paper,” said Daniel S. Katz, assistant director of Science Software and Applications at NCSA. “Young researchers like Daniel are pushing the limits of what open source software can do and then getting credit for their software, which is a victory for the entire science community.”

The software Johnson developed, POWER, is an open-source Python package of great use to the larger numerical relativity community. It will lay the groundwork for future research and simulations on high performance computing systems across the globe.

“The SPIN program allowed me to combine my interests in physics and computer science,” said author and SPIN Intern Daniel Johnson. “The SPIN program provided me an avenue to learn about computational astrophysics, which is what I now plan to study in graduate school.”

About the National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About NCSA’S Students Pushing INnovation (SPIN) Program

NCSA has a history of nurturing innovative concepts, and some of the best ideas have come from highly motivated, creative undergraduate students. The SPIN (Students Pushing INnovation) internship program was launched to provide University of Illinois undergraduates with the opportunity to apply their skills to real challenges in high-performance computing, software, data analysis and visualization, cybersecurity, and other areas of interest to NCSA.

Source: NCSA


HPE In-Memory Platform Comes to COSMOS

Thu, 11/30/2017 - 17:04

Hewlett Packard Enterprise is on a mission to accelerate space research. In August, it sent the first commercial-off-the-shelf HPC system into space for testing aboard the International Space Station (ISS), and this week the company revealed Stephen Hawking’s Centre for Theoretical Cosmology (COSMOS) would be using one of its systems to probe deeper into the mysteries of space and time.

On Tuesday, HPE and the Faculty of Mathematics at the University of Cambridge announced COSMOS will leverage the new HPE Superdome Flex in-memory computing platform to process massive data sets that represent 14 billion years of history. The cosmologists are “searching for tiny signatures in huge datasets to find clues that will unlock the secrets of the early Universe and of black holes,” said Professor Hawking.

The HPE Superdome Flex, which HPE bills as “the world’s most scalable and modular in-memory computing platform,” incorporates in-memory technology gained in the SGI acquisition. The Intel Skylake-based system is designed to scale from 4 to 32 sockets and supports 768 GB to 48 TB of shared memory in a single system. Although the exact configuration of the new supercomputer wasn’t disclosed, project leads say the shared-memory platform will enable the COSMOS group to process massive data sets much faster and will also ease the programming burden.

“In a fast-moving field we have the twofold challenge of analyzing larger data sets while matching their increasing precision with our theoretical models,” said Professor Paul Shellard, director of the Centre for Theoretical Cosmology and head of the COSMOS group in an official statement. “In-memory computing allows us to ingest all of this data and act on it immediately, trying out new ideas, new algorithms. It accelerates time to solution and equips us with a powerful tool to probe the big questions about the origin of our Universe.”

In order to expand and hone their scientific understanding of the cosmos, the team first forms theories about the nature of the universe, then creates precise simulations that are used to make predictions, and then tests those predictions against data from new sources, such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies. A large in-memory computing system makes it possible to analyze the data through visualization and in real-time while the simulation is running.

More than 50 researchers will leverage the Cosmos system for a diverse range of fields. In addition to the COSMOS work, the Faculty of Mathematics at the University of Cambridge will use the in-memory computer to solve problems ranging from environmental issues to medical imaging and even experimental biophysics, for example, using light-sheet microscopy to study the dynamics of early embryonic development.

“Curiosity is essential to being human,” said Hawking in a video describing the collaboration (see below). “From the dawn of humanity we’ve looked up at the stars and wondered about the Universe around us. My COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the big bang up to today, fourteen billion years later.”

Professor Hawking, who led the founding of the COSMOS supercomputing facility in 1997, was on the forefront of transforming cosmology from a largely speculative endeavor to a quantitative and predictive science. He concludes, “Without supercomputers, we would just be philosophers.”


AccelStor Empowers All-Flash Array to Advance Genomics Data Analysis

Thu, 11/30/2017 - 12:16

FREMONT, Calif., Nov. 30, 2017 — AccelStor, the software-defined all-flash array provider, will be revealing its upcoming NeoSapphire all-NVMe flash array model, targeting high performance computing (HPC) applications for scientists, researchers and engineers. The high performance, outstanding throughput and low latency of the NeoSapphire all-NVMe flash array make it a strong storage solution for HPC applications such as genomics, computational chemistry and engineering simulation. This storage solution has been proven in Atgenomix’s enterprise-ready bioinformatics environment. The company offers a reliable data platform where researchers can analyze genomics data quickly and easily with any tools they need.

Because of the vast amount of data that genome sequencing produces, Atgenomix adopted the NeoSapphire all-flash array to quickly scale the storage needed for the algorithms used in genomics analysis. Atgenomix leverages Apache Spark as a cluster compute platform for data analysis in genomics. Such HPC applications require high network speed, fast storage, large amounts of memory, very high compute capability, or all of these. Some HPC systems comprise numerous servers with powerful CPUs and internal hard disks. This approach may offer enough compute capability, but I/O bottlenecks remain and drag down the overall time to results. With AccelStor’s NeoSapphire storage solutions, Atgenomix and its clients are able to process genome data faster and more cost-effectively.

To remove bottlenecks in data-backed scientific projects and other HPC applications, AccelStor’s FlexiRemap software technology accelerates dynamic data queuing for compute nodes. This approach removes I/O bottlenecks and responds rapidly to the varying demands of HPC workloads without requiring additional, costly compute nodes. The most I/O-intensive HPC workloads are those in which the full course of a simulation must be compared continually against previous checkpoints; the data lifecycle of a genome study in a clinical trial is a good example.

“Atgenomix SeqsLab enterprise-ready cluster computing delivers accelerated performance with multiple orders of magnitude in whole genome and exome analysis. Integrating with AccelStor NeoSapphire high-performance parallel storage further increases 3x acceleration in data processing. Together we can bring to our customers one complete, reliable, scalable genomics informatics platform with unmatched performance,” wrote Allen Chang, CEO of Atgenomix.

General Manager of AccelStor USA, Luc Yu, said, “Storage performance and IT infrastructure have become a clear competitive advantage for enterprises and organizations. AccelStor promises to revolutionize HPC storage solutions across myriad industries. With the AccelStor NeoSapphire series, enterprises and scientists will be able to deliver research results rapidly, maximize the amount of data processed and find greater insights.”

About AccelStor, Inc.

AccelStor is accelerating the paradigm shift from conventional disk arrays to modern all-flash storage. AccelStor’s NeoSapphire all-flash arrays, powered by FlexiRemap software technology, deliver sustained high IOPS for business-critical applications. With streamlined storage management, multi-protocol support, and front-access, hot-swappable solid-state drives, the NeoSapphire series promises to resolve performance bottlenecks for I/O-intensive applications like artificial intelligence, IoT, data center, virtualization, high-performance computing, database, media processing, fintech and gaming. For more information about AccelStor and NeoSapphire, please visit www.accelstor.com.

Source: AccelStor, Inc.


Shantenu Jha Named Chair of Brookhaven Lab’s Center for Data-Driven Discovery

Thu, 11/30/2017 - 11:59

UPTON, N.Y., Nov. 30, 2017 — Computational scientist Shantenu Jha has been named the inaugural chair of the Center for Data-Driven Discovery(C3D) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, effective October 1. Part of the Computational Science Initiative(CSI), C3D is driving the integration of domain, computational, and data science expertise across Brookhaven Lab’s science programs and facilities, with the goal of accelerating and expanding scientific discovery. Outside the Lab, C3D is serving as a focal point for the recruitment of future data scientists and collaboration with other institutions.

Shantenu Jha

Jha holds a joint appointment with Rutgers University, where he is an associate professor in the Department of Electrical and Computer Engineering and principal investigator of the Research in Advanced Distributed Cyberinfrastructure and Applications Laboratory (RADICAL). He also leads a project called RADICAL-Cybertools, a suite of building blocks for the middleware (the software layer between the computing platform and application programs) that supports large-scale science and engineering applications.

“Brookhaven hosts four DOE Office of Science User Facilities—the Accelerator Test Facility, Center for Functional Nanomaterials, National Synchrotron Light Source II, and Relativistic Heavy Ion Collider—and participates in leading roles for at least two more facilities,” said Jha. “Further, there are unprecedented collaborative opportunities that CSI provides with other Lab divisions and nearby high-tech, pharmaceutical, and biotech companies. Leading C3D, which will be at the nexus of these collaborations and the intersection of high-performance computing and data science, is truly a unique opportunity.”

Jha’s research interests lie at the intersection of high-performance and distributed computing, computational science, and cyberinfrastructure (computing and data storage systems, visualization environments, and other computing infrastructure linked by high-speed networks). He has experience collaborating with scientists from multiple domains, including the molecular and earth sciences and high-energy physics.

In his new role, Jha will work with domain science researchers, computational scientists, applied mathematicians, computer scientists and engineers. Together, they will develop, deploy, and operate novel solutions for data management, analysis, and interpretation that accelerate discovery in science and industry and enhance national security. These solutions include methods, tools, and services—such as machine-learning algorithms, programming models, visual analytics techniques, and data-sharing platforms. Initially, his team will focus on scalable software systems; distributed computing systems, applications, and middleware; and extreme-scale computing for health care and precision medicine. Partnerships with other national laboratories, colleges and universities, research institutions, and industry will play a critical role in these efforts.

“Shantenu’s leading research in infrastructures for streaming data analysis and novel approaches to high-performance workflows and workload management systems—as well as his expertise in application areas such as materials science and health care—ideally position him for the role of C3D chair,” said CSI Director Kerstin Kleese van Dam. “We are excited to work with him to realize our vision for C3D.”

Prior to joining Rutgers in 2011, Jha was the director of cyberinfrastructure research and development at Louisiana State University’s Center for Computation and Technology. He was also a visiting faculty member in the School of Informatics at the University of Edinburgh and a visiting scientist at the Centre for Computational Science at University College London.

Jha is the recipient of the National Science Foundation’s Faculty Early Career Development Program (CAREER) award, several best paper awards at supercomputing conferences, a Rutgers Board of Trustees Research Fellowship for Scholarly Excellence, and the inaugural Rutgers Chancellor’s Award for Excellence in Research (the highest award for research contributions bestowed on Rutgers faculty). He serves on many program committees—including those for the annual Supercomputing Conference, the Platform for Advanced Scientific Computing Conference, the International Symposium on Cluster, Cloud and Grid Computing, and the International Parallel and Distributed Processing Symposium—and has presented his research at invited talks and keynotes around the world. He holds a PhD and master’s degree in computer science from Syracuse University and a master’s degree in physics from the Indian Institute of Technology Delhi.

Source: Brookhaven Lab


Microsemi Announces Libero SoC PolarFire v2.0 for Designing With its Mid-Range FPGAs

Thu, 11/30/2017 - 10:02

ALISO VIEJO, Calif., Nov. 30, 2017 — Microsemi Corporation (Nasdaq: MSCC), a leading provider of semiconductor solutions differentiated by power, security, reliability and performance, today announced the availability of Libero system-on-chip (SoC) PolarFire version 2.0, a comprehensive design software tool suite used for development with the company’s lowest power, cost-optimized mid-range PolarFire field programmable gate arrays (FPGAs) and supporting all PolarFire FPGA family devices and packages.

Microsemi’s Libero SoC PolarFire Design Suite provides a complete design environment for customers working on designs requiring high-speed transceivers and memories with low power consumption. It enables high productivity with its comprehensive, easy-to-learn, easy-to-adopt development tools and gives customers a design launching point with key quick-start demonstration designs for rapid evaluation and prototyping. Several full design files for Libero SoC PolarFire targeting the company’s complementary PolarFire Evaluation Kit are also available, including a JESD204B interface, a PCI Express (PCIe) endpoint, 10GBASE-R Ethernet, a digital signal processing (DSP) finite impulse response (FIR) filter and a multi-rate transceiver demonstration, with additional reference designs planned over the coming months.

“Our Libero SoC PolarFire v2.0 release supports all of the PolarFire product family’s devices and packages, enabling customers to further leverage the high-performance capabilities of our lowest power, cost-optimized mid-range FPGAs for their designs,” said Jim Davis, vice president of software engineering at Microsemi. “Feature enhancements to best-in-class debug tool SmartDebug provide the ability to evaluate transceiver performance while modifying transceiver lane signal integrity parameters on the fly, and to evaluate the channel noise of the transceiver receiver through the eye monitor. In addition, the demonstration mode allows customers to evaluate SmartDebug features without connecting to a hardware board—a capability unique to Microsemi FPGAs.”

The enhanced design suite also delivers significant runtime improvements for SmartPower, with a 4x speedup in invocation time and almost instantaneous switching between different views. In addition, Libero SoC PolarFire v2.0 introduces a brand-new SmartDesign canvas with higher-quality, faster net display and easier design navigation.

While Microsemi’s PolarFire FPGAs are ideal for a wide variety of applications within the communications, industrial, and aerospace and defense markets, the new software provides new capabilities for high-speed applications, offering particular suitability for access networks, wireless infrastructure, and the defense and Industry 4.0 markets. Application examples include wireline access, network edge, wireless heterogeneous networks, wireless backhaul, smart optical modules, video broadcasting, encryption and root of trust, secure wireless communications, radar and electronic warfare (EW), aircraft networking, actuation and control.

With the release of Libero SoC PolarFire v2.0, Microsemi has added support for PolarFire MPF100, MPF200, MPF300 and MPF500 devices for all package options, enabling customers to design with all members of the PolarFire family. It also adds the MPF300TS-FCG484 (STD) device to Libero Gold License and introduces the MPF100T device supported by the free Libero Silver License.

Microsemi’s PolarFire FPGA devices provide cost-effective bandwidth processing capabilities with the lowest power footprint. They feature 12.7 Gbps transceivers, offer up to 50 percent lower power than competing mid-range FPGAs, and include hardened PCIe controller cores with both endpoint and root port modes available, as well as low power transceivers. The company’s complementary PolarFire Evaluation Kit is a comprehensive platform for evaluating its PolarFire FPGAs, which includes a PCIe edge connector with four lanes and a demonstration design. The kit features a high-pin-count (HPC) FPGA mezzanine card (FMC), a single full-duplex lane of surface mount assemblies (SMAs), PCIe x4 fingers, dual Gigabit Ethernet RJ45 and a small form-factor pluggable (SFP) module.


Microsemi’s Libero SoC PolarFire v2.0 software toolset is now available for download from Microsemi’s website at https://www.microsemi.com/products/fpga-soc/design-resources/design-software/libero-soc-polarfire#downloads and its PolarFire FPGA devices are available for engineering sample ordering with standard lead times. For more information, visit https://www.microsemi.com/products/fpga-soc/design-resources/design-software/libero-soc-polarfire and www.microsemi.com/polarfire or email sales.support@microsemi.com.

About PolarFire FPGAs

Microsemi’s new cost-optimized PolarFire FPGAs deliver the industry’s lowest power at mid-range densities with exceptional security and reliability. The product family features 12.7 Gbps transceivers and offers up to 50 percent lower power than competing FPGAs. Densities span from 100K to 500K logic elements (LEs) and are ideal for a wide range of applications within the wireline access network and cellular infrastructure, defense and commercial aviation markets, as well as Industry 4.0, which includes the industrial automation and internet of things (IoT) markets.

PolarFire FPGAs’ transceivers can support multiple serial protocols, making the products ideal for communications applications with 10Gbps Ethernet, CPRI, JESD204B, Interlaken and PCIe. In addition, the ability to implement serial gigabit Ethernet (SGMII) on GPIO enables numerous 1Gbps Ethernet links to be supported. PolarFire FPGAs also contain the most hardened security intellectual property (IP) to protect customer designs, data and supply chain. The non-volatile PolarFire product family consumes 10 times less static power than competitive devices and features an even lower standby power referred to as Flash*Freeze. For more information, visit www.microsemi.com/polarfire.

About Microsemi

Microsemi Corporation (Nasdaq: MSCC) offers a comprehensive portfolio of semiconductor and system solutions for aerospace & defense, communications, data center and industrial markets. Products include high-performance and radiation-hardened analog mixed-signal integrated circuits, FPGAs, SoCs and ASICs; power management products; timing and synchronization devices and precise time solutions, setting the world’s standard for time; voice processing devices; RF solutions; discrete components; enterprise storage and communication solutions, security technologies and scalable anti-tamper products; Ethernet solutions; Power-over-Ethernet ICs and midspans; as well as custom design capabilities and services. Microsemi is headquartered in Aliso Viejo, California and has approximately 4,800 employees globally. Learn more at www.microsemi.com.

Source: Microsemi

The post Microsemi Announces Libero SoC PolarFire v2.0 for Designing With its Mid-Range FPGAs appeared first on HPCwire.

EXDCI Opens Call for Workshops for the European HPC Summit Week 2018

Thu, 11/30/2017 - 08:43

Nov. 30, 2017 — EXDCI is pleased to announce an open call for workshops for HPC stakeholders (institutions, service providers, users, communities, projects, vendors and consultants) to shape and contribute to the European HPC Summit Week 2018 (EHPCSW18), which will take place from 28 May to 1 June in Ljubljana, Slovenia.

This call for workshops is addressed to all participants interested in including a session or workshop in EHPCSW18. The procedure is to send an expression of interest for a session/workshop and agree on a joint programme for the week. If you are an HPC project, initiative or company and you want to include a workshop or session in the European HPC Summit Week, please send the attached document before 18 December 2017.

PRACEdays18 is the central event of the European HPC Summit Week, and is hosted by PRACE’s Slovenian member ULFME – University of Ljubljana, Faculty of Mechanical Engineering. The conference will bring together experts from academia and industry who will present their advancements in HPC-supported science and engineering. PRACE has also opened a call for contributions and posters for PRACEdays18 within the EHPCSW18 week.

For more information on the timeline and the submission process, please follow this link: https://exdci.eu/newsroom/news/exdci-opens-call-workshops-european-hpc-summit-week-2018

Source: EXDCI

The post EXDCI Opens Call for Workshops for the European HPC Summit Week 2018 appeared first on HPCwire.

Two Quantum Simulators Cross 50-Qubit Threshold

Wed, 11/29/2017 - 18:00

2017 has been quite the year for quantum computing progress, with D-Wave continuing to build out its quantum annealing approach and Google, Microsoft, IBM and, more recently, Intel making steady advances toward the threshold Google has dubbed quantum supremacy: the point at which quantum machines will be able to solve select problems that are outside the purview of their classical counterparts.

Along with efforts to build general quantum computers, researchers are also developing quantum simulators, which enable the study of quantum systems that are too complex to model with conventional supercomputers. Today two independent teams of researchers published papers in the journal Nature describing their work creating the largest quantum simulators yet at over 50 qubits. These projects mark major milestones as previously quantum simulators have been limited to around a dozen qubits.

In one of the two studies, researchers from the University of Maryland (UMD) and the National Institute of Standards and Technology (NIST) created a trapped-ion device composed of 53 individual ytterbium atoms (ions), held in place by electric fields.

“Each ion qubit is a stable atomic clock that can be perfectly replicated,” said UMD team lead Christopher Monroe, who is also the co-founder and chief scientist at the startup IonQ Inc. “They are effectively wired together with external laser beams. This means that the same device can be reprogrammed and reconfigured, from the outside, to adapt to any type of quantum simulation or future quantum computer application that comes up.”

In a separate paper, published in the same issue of Nature, a group of physicists from MIT and Harvard University reported a new way to manipulate quantum bits of matter using finely tuned lasers to generate, control and “read” a 51-atom array.

“Our method provides a way of exploring many-body phenomena on a programmable quantum simulator and could enable realizations of new quantum algorithms,” the authors write.

Potential applications for the new quantum simulator include optimization problems such as the traveling salesman problem, variations of which are used in DNA sequencing, materials science and data processing.
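For a sense of why such optimization problems attract quantum and heuristic approaches: a classical brute-force solver must examine every ordering of cities, a count that grows factorially with the number of cities. A minimal sketch, with a made-up four-city instance for illustration:

```python
import itertools
import math

def tour_length(order, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(pts):
    """Exhaustively check all (n-1)! tours, with city 0 fixed as the start."""
    rest = range(1, len(pts))
    best = min(itertools.permutations(rest),
               key=lambda perm: tour_length((0,) + perm, pts))
    return (0,) + best

# Four cities on a unit square; the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = brute_force_tsp(pts)
```

At 53 cities the same search would need 52! tour evaluations, which is why exact enumeration stops being feasible almost immediately.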

Further reading

MIT News: http://news.mit.edu/2017/scientists-demonstrate-one-largest-quantum-simulators-yet-51-atoms-1129

Joint Quantum Institute press release: http://jqi.umd.edu/news/quantum-simulators-wield-control-over-more-50-qubits

Feature image caption: Artist’s depiction of quantum simulation. Lasers manipulate an array of over 50 atomic qubits in order to study the dynamics of quantum magnetism (credit: E. Edwards/JQI).

The post Two Quantum Simulators Cross 50-Qubit Threshold appeared first on HPCwire.

Ireland Ranks Number One in Top500 Systems per Capita

Wed, 11/29/2017 - 14:57

Nov. 29, 2017 — The 9th Irish Supercomputer List was released today featuring two new world-class supercomputers. This is the first time that Ireland has four computers ranked on the Top500 list of the fastest supercomputers on Earth. Ireland is now ranked number one globally in terms of number of Top-500 supercomputers per capita (stats here). In terms of performance per capita, Ireland is ranked 4th globally (stats here). These new supercomputers boost the Irish High Performance Computing capacity by nearly one third, up from 3.01 to 4.42 Pflop/s. Ireland has ranked on the Top500 list 33 times over a history of 23 years with a total of 20 supercomputers (full history here). Over half of these rankings (19) and supercomputers (12) have been in the last 6 years, representing Ireland’s increasing pace of High Performance Computing investment. The new entrants, from two undisclosed software and web services companies, feature at spots 423 and 454 on the 50th Top500 Supercomputer List, with Linpack Rmax scores of 635 and 603 TFlop/s respectively.

Even without the benefit of Ireland’s admittedly small population (which does help the above rankings), Ireland still ranks admirably. In terms of Top500 installations, Ireland ranks 9th globally, tied with Australia, India and Saudi Arabia, and 18th in the world in terms of supercomputing performance.

The Irish Supercomputer List now ranks 30 machines (2 new, 1 upgraded), with a total of more than 207,000 CPU cores and over 106,000 accelerator cores.

Source: Irish Supercomputer List

The post Ireland Ranks Number One in Top500 Systems per Capita appeared first on HPCwire.

ORNL-Designed Algorithm Leverages Titan to Create High-Performing Deep Neural Networks

Wed, 11/29/2017 - 14:40

Nov. 29, 2017 — Deep neural networks—a form of artificial intelligence—have demonstrated mastery of tasks once thought uniquely human. Their triumphs have ranged from identifying animals in images, to recognizing human speech, to winning complex strategy games, among other successes.

Now, researchers are eager to apply this computational technique—commonly referred to as deep learning—to some of science’s most persistent mysteries. But because scientific data often looks much different from the data used for animal photos and speech, developing the right artificial neural network can feel like an impossible guessing game for nonexperts. To expand the benefits of deep learning for science, researchers need new tools to build high-performing neural networks that don’t require specialized knowledge.

Using the Titan supercomputer, a research team led by Robert Patton of the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) has developed an evolutionary algorithm capable of generating custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems. Better yet, by leveraging the GPU computing power of the Cray XK7 Titan—the leadership-class machine managed by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL—these auto-generated networks can be produced quickly, in a matter of hours as opposed to the months needed using conventional methods.

The research team’s algorithm, called MENNDL (Multinode Evolutionary Neural Networks for Deep Learning), is designed to evaluate, evolve, and optimize neural networks for unique datasets. Scaled across Titan’s 18,688 GPUs, MENNDL can test and train thousands of potential networks for a science problem simultaneously, eliminating poor performers and averaging high performers until an optimal network emerges. The process eliminates much of the time-intensive, trial-and-error tuning traditionally required of machine learning experts.

“There’s no clear set of instructions scientists can follow to tweak networks to work for their problem,” said research scientist Steven Young, a member of ORNL’s Nature Inspired Machine Learning team. “With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them, while they focus on their data and ensuring the problem is well-posed.”

Pinning down parameters

Inspired by the brain’s web of neurons, deep neural networks are a relatively old concept in neuroscience and computing, first popularized by two University of Chicago researchers in the 1940s. But because of limits in computing power, it wasn’t until recently that researchers had success in training machines to independently interpret data.

Today’s neural networks can consist of thousands or millions of simple computational units—the “neurons”—arranged in stacked layers, like the rows of figures spaced across a foosball table. During one common form of training, a network is assigned a task (e.g., to find photos with cats) and fed a set of labeled data (e.g., photos of cats and photos without cats). As the network pushes the data through each successive layer, it makes correlations between visual patterns and predefined labels, assigning values to specific features (e.g., whiskers and paws). These values contribute to the weights that define the network’s model parameters. During training, the weights are continually adjusted until the final output matches the targeted goal. Once the network learns to perform from training data, it can then be tested against unlabeled data.
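At toy scale, that adjust-the-weights-until-the-output-matches loop can be sketched with a single logistic "neuron"; the data, learning rate, and one-unit network here are illustrative stand-ins, not anything from the ORNL work:

```python
import math
import random

def train(data, lr=0.5, epochs=200):
    """Train a single logistic unit on labeled 2-feature data.

    Each step pushes an example through the unit (forward pass), compares
    the output to the label, and nudges the weights to shrink the error --
    the continual weight adjustment described above.
    """
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, label in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = pred - label                 # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(params, x):
    """Classify a point with the trained unit (1 if above the boundary)."""
    w, b = params
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy labeled data: points with x0 + x1 > ~1 are class 1, others class 0.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.9), 1),
        ((0.3, 0.1), 0), ((0.8, 0.9), 1), ((0.1, 0.4), 0)]
params = train(data)
```

After training, the unit can be tested against points it never saw, mirroring the train-then-test split described above.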

Although many parameters of a neural network are determined during the training process, initial model configurations must be set manually. These starting points, known as hyperparameters, include variables like the order, type, and number of layers in a network.

Finding the optimal set of hyperparameters can be the key to efficiently applying deep learning to an unusual dataset. “You have to experimentally adjust these parameters because there’s no book you can look in and say, ‘These are exactly what your hyperparameters should be,’” Young said. “What we did is use this evolutionary algorithm on Titan to find the best hyperparameters for varying types of datasets.”
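The evolutionary search Young describes can be sketched as a simple genetic loop over a toy hyperparameter space. Everything here is a stand-in: the two-variable space, the invented optimum of 4 layers at width 64, and above all the fitness function, which in MENNDL comes from actually training each candidate network on a compute node.

```python
import random

# Toy stand-ins for the hyperparameters mentioned above (number of layers,
# layer width); real spaces also cover the order and type of layers.
SPACE = {"layers": range(1, 9), "width": range(8, 257)}

def fitness(genome):
    """Stand-in score: closeness to an invented optimum (4 layers, width 64)."""
    return -abs(genome["layers"] - 4) - abs(genome["width"] - 64) / 32

def mutate(genome):
    """Resample one hyperparameter of a copied genome at random."""
    child = dict(genome)
    key = random.choice(list(SPACE))
    child[key] = random.choice(list(SPACE[key]))
    return child

def evolve(pop_size=24, generations=50, seed=1):
    """Eliminate poor performers each generation; refill from high performers."""
    random.seed(seed)
    pop = [{k: random.choice(list(v)) for k, v in SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # drop the bottom half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The loop structure (evaluate, cull, mutate survivors, repeat) is the evolutionary idea; MENNDL's contribution is doing the expensive evaluation step for thousands of candidates in parallel.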

Unlocking that potential, however, required some creative software engineering by Patton’s team. MENNDL homes in on a neural network’s optimal hyperparameters by assigning a neural network to each Titan node. The team designed MENNDL to use a deep learning framework called Caffe to carry out the computation, relying on the parallel computing Message Passing Interface standard to divide and distribute data among nodes. As Titan works through individual networks, new data is fed to the system’s nodes asynchronously, meaning once a node completes a task, it’s quickly assigned a new task independent of the other nodes’ status. This ensures that the 27-petaflop Titan stays busy combing through possible configurations.

“Designing the algorithm to really work at that scale was one of the challenges,” Young said. “To really leverage the machine, we set up MENNDL to generate a queue of individual networks to send to the nodes for evaluation as soon as computing power becomes available.”
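A thread-based sketch of that queue, with threads standing in for Titan's nodes and a fabricated per-network "training" cost; this illustrates the asynchronous scheme, not MENNDL's actual MPI implementation:

```python
import concurrent.futures
import queue
import random
import time

def evaluate(network_id):
    """Stand-in for training one candidate network; cost varies per network."""
    time.sleep(random.uniform(0.001, 0.005))
    return network_id, random.random()           # fabricated fitness score

def run_queue(num_networks=40, num_workers=4):
    """Each worker pulls the next network the moment it finishes the last,
    independent of the other workers' status, so no 'node' sits idle."""
    work = queue.Queue()
    for nid in range(num_networks):
        work.put(nid)
    results = []

    def worker():
        while True:
            try:
                nid = work.get_nowait()
            except queue.Empty:
                return                           # queue drained; worker exits
            results.append(evaluate(nid))        # list.append is thread-safe

    with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as pool:
        for _ in range(num_workers):
            pool.submit(worker)
    return results                               # pool waits for all workers

scores = run_queue()
```

Because each worker grabs new work as soon as it is free, slow evaluations never stall the fast ones, which is the property that keeps all of Titan's GPUs busy.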

To demonstrate MENNDL’s versatility, the team applied the algorithm to several datasets, training networks to identify sub-cellular structures for medical research, classify satellite images with clouds, and categorize high-energy physics data. The results matched or exceeded the performance of networks designed by experts.

Networking neutrinos

One science domain in which MENNDL is already proving its value is neutrino physics. Neutrinos, ghost-like particles that pass through your body at a rate of trillions per second, could play a major role in explaining the formation of the early universe and the nature of matter—if only scientists knew more about them.

Large detectors at DOE’s Fermi National Accelerator Laboratory (Fermilab) use high-intensity beams to study elusive neutrino reactions with ordinary matter. The devices capture a large sample of neutrino interactions that can be transformed into basic images through a process called “reconstruction.” Like a slow-motion replay at a sporting event, these reconstructions can help physicists better understand neutrino behavior.

“They almost look like a picture of the interaction,” said Gabriel Perdue, an associate scientist at Fermilab.

Perdue leads an effort to integrate neural networks into the classification and analysis of detector data. The work could improve the efficiency of some measurements, help physicists understand how certain they can be about their analyses, and lead to new avenues of inquiry.

Teaming up with Patton’s team under a 2016 Director’s Discretionary application on Titan, Fermilab researchers produced a competitive classification network in support of a neutrino scattering experiment called MINERvA (Main Injector Experiment for v-A). The task, known as vertex reconstruction, required a network to analyze images and precisely identify the location where neutrinos interact with the detector—a challenge for events that produce many particles.

In only 24 hours, MENNDL produced optimized networks that outperformed handcrafted networks—an achievement that would have taken months for Fermilab researchers. To identify the high-performing network, MENNDL evaluated approximately 500,000 neural networks. The training data consisted of 800,000 images of neutrino events, steadily processed on 18,000 of Titan’s nodes.

“You need something like MENNDL to explore this effectively infinite space of possible networks, but you want to do it efficiently,” Perdue said. “What Titan does is bring the time to solution down to something practical.”

Having recently been awarded another allocation under the Advanced Scientific Computing Research Leadership Computing Challenge program, Perdue’s team is building on its deep learning success by applying MENNDL to additional high-energy physics datasets to generate optimized algorithms. In addition to improved physics measurements, the results could provide insight into how and why machines learn.

“We’re just getting started,” Perdue said. “I think we’ll learn really interesting things about how deep learning works, and we’ll also have better networks to do our physics. The reason we’re going through all this work is because we’re getting better performance, and there’s real potential to get more.”

AI meets exascale

When Titan debuted 5 years ago, its GPU-accelerated architecture boosted traditional modeling and simulation to new levels of detail. Since then, GPUs, which excel at carrying out hundreds of calculations simultaneously, have become the go-to processor for deep learning. That fortuitous development made Titan a powerful tool for exploring artificial intelligence at supercomputer scales.

With the OLCF’s next leadership-class system, Summit, set to come online in 2018, deep learning researchers expect to take this blossoming technology even further. Summit builds on the GPU revolution pioneered by Titan and is expected to deliver more than five times the performance of its predecessor. The IBM system will contain more than 27,000 of Nvidia’s newest Volta GPUs in addition to more than 9,000 IBM Power9 CPUs. Furthermore, because deep learning requires less mathematical precision than other types of scientific computing, Summit could potentially deliver exascale-level performance for deep learning problems—the equivalent of a billion billion calculations per second.

“That means we’ll be able to evaluate larger networks much faster and evolve many more generations of networks in less time,” Young said.

In addition to preparing for new hardware, Patton’s team continues to develop MENNDL and explore other types of experimental techniques, including neuromorphic computing, another biologically inspired computing concept.

“One thing we’re looking at going forward is evolving deep learning networks from stacked layers to graphs of layers that can split and then merge later,” Young said. “These networks with branches excel at analyzing things at multiple scales, such as a closeup photograph in comparison to a wide-angle shot. When you have 20,000 GPUs available, you can actually start to think about a problem like that.”

Source: ORNL

The post ORNL-Designed Algorithm Leverages Titan to Create High-Performing Deep Neural Networks appeared first on HPCwire.

High-Performance Computing Cuts Particle Collision Data Prep Time

Wed, 11/29/2017 - 14:34

Nov. 29, 2017 — For the first time, scientists have used high-performance computing (HPC) to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

The demonstration project used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC), a high-performance computing center at Lawrence Berkeley National Laboratory in California, to reconstruct multiple datasets collected by the STAR detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at Brookhaven National Laboratory in New York. By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of “physics-ready” data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.

“The reason why this is really fantastic,” said Brookhaven physicist Jérôme Lauret, who manages STAR’s computing needs, “is that these high-performance computing resources are elastic. You can call to reserve a large allotment of computing power when you need it—for example, just before a big conference when physicists are in a rush to present new results.” According to Lauret, preparing raw data for analysis typically takes many months, making it nearly impossible to provide such short-term responsiveness. “But with HPC, perhaps you could condense that many months production time into a week. That would really empower the scientists!”

The accomplishment showcases the synergistic capabilities of RHIC and NERSC—U.S. Department of Energy (DOE) Office of Science User Facilities located at DOE-run national laboratories on opposite coasts—connected by one of the most extensive high-performance data-sharing networks in the world, DOE’s Energy Sciences Network (ESnet), another DOE Office of Science User Facility.

“This is a key usage model of high-performance computing for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC.

Billions of data points

To make physics discoveries at RHIC, scientists must sort through hundreds of millions of collisions between ions accelerated to very high energy. STAR, a sophisticated, house-sized electronic instrument, records the subatomic debris streaming from these particle smashups. In the most energetic events, many thousands of particles strike detector components, producing firework-like displays of colorful particle tracks. But to figure out what these complex signals mean, and what they can tell us about the intriguing form of matter created in RHIC’s collisions, scientists need detailed descriptions of all the particles and the conditions under which they were produced. They must also compare huge statistical samples from many different types of collision events.

Cataloging that information requires sophisticated algorithms and pattern recognition software to combine signals from the various readout electronics, and a seamless way to match that data with records of collision conditions. All the information must then be packaged in a way that physicists can use for their analyses.

Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at Brookhaven. High-throughput computing (HTC) clusters crunch the data, event-by-event, and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.

But the challenge of keeping up with the data has grown with RHIC’s ever-improving collision rates and as new detector components have been added. In recent years, STAR’s annual raw data sets have reached billions of events with data sizes in the multi-Petabyte range. So the STAR computing team investigated the use of external resources to meet the demand for timely access to physics-ready data.

To read the full article, follow this link: https://www.bnl.gov/newsroom/news.php?a=212581

Source: Brookhaven National Laboratory

The post High-Performance Computing Cuts Particle Collision Data Prep Time appeared first on HPCwire.

Enabling Scientific Discovery through HPC at the University of Hull

Wed, 11/29/2017 - 14:20

HULL, United Kingdom, Nov. 28, 2017 — ClusterVision, Europe’s dedicated high performance computing (HPC) solutions provider, and the University of Hull, one of the United Kingdom’s leading research universities, have published a joint case study today detailing the university’s research on its state-of-the-art HPC cluster, Viper.

HPC enables faster and more robust scientific discovery by significantly enhancing data processing capability. The University of Hull has appropriately named its cluster Viper to reflect this speed and power. For Alex Sheardown, a PhD student at the university’s E.A. Milne Centre, HPC is vital to his research in astrophysics. His study of the growth and composition of galaxy clusters demands a great deal of processing power, and with Viper he is finally able to produce high-resolution density shots for his research.

Read up on his research, the university’s investments in HPC, and how ClusterVision exceeded the needs of the university in this case study.

You can view, read, and download the case study here.

Source: ClusterVision

The post Enabling Scientific Discovery through HPC at the University of Hull appeared first on HPCwire.

New Director Named at Los Alamos National Laboratory

Wed, 11/29/2017 - 14:09

LOS ALAMOS, New Mexico, Nov. 28, 2017 — Dr. Terry Wallace has been appointed Director of Los Alamos National Laboratory (LANL) and President of Los Alamos National Security, LLC (LANS), the company that manages and operates the Laboratory for the National Nuclear Security Administration (NNSA). The appointments were announced today by Norman J. Pattiz and Barbara E. Rusinko, Chair and Vice Chair of the Los Alamos National Security (LANS) Board of Governors, and are effective January 1, 2018.

Terry Wallace

“Dr. Wallace’s unique skills, experience and national security expertise make him the right person to lead Los Alamos in service to the country,” said Pattiz. “Terry’s expertise in forensic seismology, a highly specialized discipline, makes him an acknowledged international authority on the detection and quantification of nuclear tests.”

Wallace, age 61, will succeed Dr. Charlie McMillan, who announced in September his plans to retire from the Laboratory by the end of the year. Wallace becomes the 11th Director in the Laboratory’s nearly 75-year history.

Presently, Wallace serves as Principal Associate Director for Global Security (PADGS), and leads Laboratory programs with a focus on applying scientific and engineering capabilities to address national and global security threats, in particular, nuclear threats.

Dr. Wallace served as Principal Associate Director for Science, Technology, and Engineering (PADSTE) from 2006 to 2011 and as Associate Director of Strategic Research from 2005 to 2006. In those positions, he integrated the expertise from all basic science programs and five expansive science and engineering organizations to support LANL’s nuclear-weapons, threat-reduction, and national-security missions.

Wallace was selected following a search and selection process conducted by members of the LANS Board.

“I am honored and humbled to be leading Los Alamos National Laboratory,” said Wallace. “Our Laboratory’s mission has never been more important than it is today. As Director, I am determined to extend, if not strengthen, our 75-year legacy of scientific excellence in support of our national interests well into the future.”

Dr. Wallace holds Ph.D. and M.S. degrees in geophysics from California Institute of Technology and B.S. degrees in geophysics and mathematics from New Mexico Institute of Mining and Technology.

Wallace will oversee a budget of approximately $2.5 billion, employees and contractors numbering nearly 12,000, and a 36-square-mile site of scientific laboratories, nuclear facilities, experimental capabilities, administration buildings, and utilities.

Pattiz praised outgoing Director McMillan’s dedication and 35 years of service to Los Alamos, Lawrence Livermore and LANS: “Charlie McMillan has led Los Alamos National Laboratory with a rare combination of commitment, intelligence and hard work. We believe he has put this iconic institution in a strong position to continue serving the country for many years to come.”

Additional background

Career Details

Wallace first worked at Los Alamos Scientific Laboratory as an undergraduate student in 1975, and returned to the Laboratory in 2003.

Before returning to the Laboratory, Wallace spent 20 years as a professor at the University of Arizona, with appointments to both the Geoscience Department and the Applied Mathematics Program. His scholarly work has earned him recognition as a leader within the worldwide geological community; he was awarded the American Geophysical Union’s prestigious Macelwane Medal, and has the rare honor of having a mineral named after him by the International Mineralogical Association Commission on New Minerals, Nomenclature and Classification.

Wallace is a Fellow of the American Geophysical Union (AGU). He has served as President of the Seismological Society of America and Chairman of the Incorporated Research Institutions for Seismology (IRIS). He is co-author of the most widely used seismology textbook, “Modern Global Seismology,” and has authored more than 100 peer-reviewed articles on various aspects of seismology. Wallace chaired the National Academy of Sciences Committee on Seismology and Geodynamics for six years, and was a member of the Board on Earth Sciences and Resources.


Wallace currently resides in Los Alamos. He has been married to Dr. Michelle Hall for over 29 years and they have a son, David, and two grandchildren. He was raised in Los Alamos and is a 1974 graduate of Los Alamos High School.

Dr. Wallace is the son of the late Terry Wallace, Sr. and the late Jeanette Wallace and is a second-generation Laboratory employee.

About Los Alamos National Laboratory

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy’s National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

Source: Los Alamos National Laboratory

The post New Director Named at Los Alamos National Laboratory appeared first on HPCwire.

NCSA Paves a New Way for Using Geopolymers

Wed, 11/29/2017 - 08:25

Nov. 29, 2017 — “It was a perfect recipe,” said Dr. Seid Koric, Technical Director for Economic and Societal Impact at the National Center for Supercomputing Applications (NCSA) and Research Associate Professor in the Department of Mechanical Science and Engineering at the University of Illinois. Koric, this year’s winner of the Top Supercomputing Achievement award in the annual HPCwire Editors’ Choice Awards, teamed up with NCSA Faculty Fellow and PI, Professor Ange-Therese Akono, geopolymers expert Professor Waltraud “Trudy” Kriven and NCSA research scientist Dr. Erman Guleryuz. Their goal is to understand the impact of nanoporosity on stiffness and strength of geopolymers via molecular dynamics and finite element modeling.

Professor Akono sees a great need for geopolymers to address the issue of sustainable and affordable housing. “One of the challenges in affordable housing is finding materials alone for suitable conditions. Geopolymers represent cost effective alternatives. Because the chemistry of geopolymers is so versatile, we can cast geopolymers by using local solutions, with less of a carbon footprint than concrete and in less time,” said Professor Akono.

Geopolymer composites are a novel class of inorganic, nano-porous polymeric hybrids known for their high heat tolerance and anti-corrosive qualities, with potential for high strength and a high strength-to-weight ratio. An inherent challenge of their novelty is the lack of long-term data. “We’re still inventing the futures of geopolymers,” said Professor Kriven, winner of the Mueller Award for her twenty-year work on geopolymers.

Additionally, what makes them of particular interest to industry is their versatility and efficiency compared to cement. Beyond housing, Kriven sees potential for geopolymers in renewable energy storage, the military, road repair, emergency housing and levees, and as a more environmentally friendly substitute for concrete.

Akono set out to use finite element analysis and molecular dynamics at extreme scales to investigate processing-microstructure-property relationships in inorganic geopolymer cements, from the nanometer scale up to the macroscopic scale, combining numerical modeling on NCSA’s Blue Waters supercomputer with the results of multi-scale experiments.

“We want to understand the basic behavior of the geopolymer matrix,” said Akono, “and we needed a supercomputer to carry it out and measure the response of a material from nano to macro level. Blue Waters provided great resources to bridge the gap with computing power.”

They used Blue Waters to produce a 3D framework that can be used to design strong geopolymer composites with a wide range of applications, including advanced low-emitting construction materials, recycling of type F fly ash, low-level radioactive waste encapsulation, fire- and corrosion-resistant coatings and thermal barrier coatings. “Parallel processing was key to this project,” said Koric, “and so was memory.” Blue Waters has more memory and faster data storage than any other open system in the world. Koric and Guleryuz helped write the Blue Waters allocation proposal for time on the supercomputer, which led to this work being presented at four conferences and submitted to a journal. Less than a year after they began their research, Akono’s group wrote a joint proposal for funding by the National Science Foundation (NSF). Their work, “Multi-scale and Multi-physics Modeling of Na-PS Geopolymer Cement Composites,” was awarded funding in September 2017.

Looking to the future, Koric says he hopes to apply the success of this collaboration to NCSA’s industry partners. “One more thread that we haven’t tried yet is to introduce our industry partners to this material, for concrete and construction materials.”

Source: NCSA

The post NCSA Paves a New Way for Using Geopolymers appeared first on HPCwire.

Simulations Predict that Antarctic Volcanic Ash Can Disrupt Air Traffic in Vast Areas of the Southern Hemisphere

Tue, 11/28/2017 - 15:23

BARCELONA, Nov. 28, 2017 — Simulations performed by Barcelona Supercomputing Center in collaboration with the Institut de Ciències de la Terra Jaume Almera – CSIC demonstrated that Antarctic volcanoes might pose a higher threat than previously considered. A study focused on the potential impacts of ash dispersal and fallout from Deception Island highlights how ash clouds entrapped in circumpolar upper-level winds have the potential to reach lower latitudes and disrupt Southern Hemisphere air traffic. The study has been published today in the Nature group journal, Scientific Reports.

Image courtesy of BSC

The research is based on different sets of simulations, considering different meteorological scenarios and eruption characteristics. These simulations demonstrated that ash from high-latitude volcanoes such as Deception Island is likely to encircle the globe even in the case of moderate eruptions, and could reach tropical latitudes, a vast part of the Atlantic coast of South America, South Africa and/or South Oceania. Thus, a wider dispersion of volcanic particles than previously believed can have significant consequences for aviation safety in these areas.

The experiments were conducted with BSC’s NMMB-MONARCH-ASH meteorological and atmospheric dispersion model at regional and global scales. One of the aims of the study is to raise awareness of the need to perform dedicated hazard assessments to better manage air traffic in case of an eruption. Several volcanic events in recent years, including Eyjafjallajökull (Iceland, 2010), Grímsvötn (Iceland, 2011) and Cordón Caulle (Chile, 2011), have led to large economic losses for the aviation industry and its stakeholders.

The paper concludes that, in specific circumstances, volcanic ash from Antarctic volcanoes can disrupt air traffic not only in the vicinity, but as far away as South Africa (6,400 km) and on flight routes connecting Africa with South America and Australia.

About volcanoes in Antarctica

Of the tens of volcanoes located in Antarctica, at least nine (Berlin, Buckle Island, Deception Island, Erebus, Hudson Mountains, Melbourne, Penguin Island, Takahe, and The Pleiades) are known to be active, and five of them, all stratovolcanoes, have reported frequent volcanic activity in historical times. Deception Island is an active composite volcano with several tens of eruptions in the last 10,000 years.

Located at the spreading center of the Bransfield Strait marginal basin, Deception Island consists of a horseshoe-shaped composite volcanic system truncated by the formation of a collapse caldera, represented as a sea-flooded depression known as Port Foster. Tephra deposits from Deception and neighboring islands reveal over 30 post-caldera Holocene eruptions. However, it is inferred that a considerably higher number of eruptions has actually occurred. Indeed, over 50 relatively well-preserved craters and eruptive vents, scattered across the island, can be reconstructed and mapped.

The eruption record on Deception Island since the 19th century reveals periods of high activity (1818–1828, 1906–1912), followed by decades of dormancy (e.g., 1912–1967). The unrest episodes recorded in 1992, 1999 and 2014–2015 demonstrate that the volcanic system is still active and could be a cause of concern in the future.

During the most recent explosive eruptions, which occurred in 1967, 1969 and 1970, ash fall and lahars destroyed or severely damaged the scientific bases operating on the island at the time.


About NMMB-MONARCH-ASH

NMMB-MONARCH-ASH is a novel online meteorological and atmospheric transport model that simulates the emission, transport and deposition of tephra (ash) particles released by volcanic eruptions. The model predicts ash cloud trajectories, concentration at relevant flight levels, and deposit thickness for both regional and global domains.

Reference paper: A. Geyer, A. Martí, S. Giralt and A. Folch, “Potential ash impact from Antarctic volcanoes: Insights from Deception Island’s most recent eruption,” Scientific Reports, 28 November 2017. www.nature.com/articles/s41598-017-16630-9

Simulations’ videos: https://www.bsc.es/ashvideos

About BSC

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specialises in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). BSC also manages the Spanish Supercomputing Network (RES).

It was created in 2005 and is a consortium formed by the Spanish Government Ministry of Economy, Industry and Competitiveness (60%), the Catalan Government Department of Enterprise and Knowledge (30%) and the Universitat Politècnica de Catalunya (UPC) (10%).

Source: Barcelona Supercomputing Center

The post Simulations Predict that Antarctic Volcanic Ash Can Disrupt Air Traffic in Vast Areas of the Southern Hemisphere appeared first on HPCwire.

HPE Partners with COSMOS Research Group and the Cambridge Faculty of Mathematics

Tue, 11/28/2017 - 14:56

MADRID, Spain, Nov. 28, 2017 — Hewlett Packard Enterprise and the Faculty of Mathematics at the University of Cambridge today announced a collaboration to accelerate new discoveries in the mathematical sciences. This includes partnering with Stephen Hawking’s Centre for Theoretical Cosmology (COSMOS) to understand the origins and structure of the universe. Leveraging the HPE Superdome Flex in-memory computing platform, the COSMOS group will search for clues hiding in massive data sets—spanning 14 billion years of information—that could unlock the secrets of the early universe and black holes.

“Our COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the Big Bang up to today,” said Professor Hawking, the Tsui Wong-Avery Director of Research in Cambridge’s Department of Applied Mathematics and Theoretical Physics. “The recent discovery of gravitational waves offers amazing insights about black holes and the whole Universe. With exciting new data like this, we need flexible and powerful computer systems to keep ahead so we can test our theories and explore new concepts in fundamental physics.”

In 1997, a consortium of leading U.K. cosmologists brought together by Professor Hawking founded the COSMOS supercomputing facility to support research in cosmology, astrophysics and particle physics using shared in-memory computing. Access to new data sets transformed cosmology from speculative theory to quantitative science.

“The influx of new data about the most extreme events in our Universe has led to dramatic progress in cosmology and relativity,” said Professor Paul Shellard, Director of the Centre for Theoretical Cosmology and head of the COSMOS group. “In a fast-moving field we have the twofold challenge of analyzing larger data sets while matching their increasing precision with our theoretical models. In-memory computing allows us to ingest all of this data and act on it immediately, trying out new ideas, new algorithms. It accelerates time to solution and equips us with a powerful tool to probe the big questions about the origin of our Universe.”

The latest supercomputer supporting the work of the faculty, which combines an HPE Superdome Flex with an HPE Apollo supercomputer and Intel Xeon Phi systems, will enable COSMOS to confront cosmological theory with data from the known universe—and incorporate data from new sources, such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies. This computational power helps them search for tiny signatures in huge data sets that could unlock the secrets of the universe.

HPE Superdome Flex leverages the principles of Memory-Driven Computing, the architecture central to HPE’s vision for the future of computing, featuring a pool of memory accessed by compute resources over a high-speed data interconnect. The shared memory and single system design of HPE Superdome Flex enables researchers to solve complex, data-intensive problems holistically and reduces the burden on code developers, enabling users to find answers more quickly.

“The in-memory computing capability of HPE Superdome Flex is uniquely suited to meet the needs of the COSMOS research group,” said Randy Meyer, vice president and general manager, Synergy & Mission Critical Servers, Hewlett Packard Enterprise. “The platform will enable the research team to analyze huge data sets in real time. This means they will be able to find answers faster.”

The supercomputer and its in-memory platform will support not only the COSMOS work but also research in a diverse range of fields across the Faculty of Mathematics, from environmental sciences to medical imaging. The importance of access to computational tools—and the ability to optimize them using local expertise—has been recognized in research projects related to the formation of extra-solar planetary systems, statistical linguistics and brain injuries.

“We are pleased to be partnering with HPE by now offering these unique computing capabilities across the whole Cambridge Faculty of Mathematics,” said Professor Nigel Peake, Head of the Cambridge Department of Applied Mathematics and Theoretical Physics. “High performance computing has become the third pillar of research and we look forward to new developments across the mathematical sciences in areas as diverse as ocean modeling, medical imaging and the physics of soft matter.”

Professor Ray Goldstein, Cambridge’s Schlumberger Professor of Complex Physical Systems, heads a research group using light-sheet microscopy to study the dynamics of morphological transformations occurring in early embryonic development. He is enthusiastic about future opportunities: “The new HPC system will transform our ability to understand these types of processes and to develop quantitative theories for them. It is also a wonderful opportunity to educate researchers about the exciting overlap between high performance computing and experimental biophysics.”

HPE Superdome Flex

The HPE Superdome Flex is the world’s most scalable and modular in-memory computing platform. Designed leveraging principles of Memory-Driven Computing, HPE Superdome Flex can scale from 4 to 32 sockets and 768GB to 48TB of shared memory in a single system, delivering unmatched compute power for the most demanding applications.

The HPE Superdome Flex is now available. For more information, please visit the Superdome Flex page here.

About Hewlett Packard Enterprise
Hewlett Packard Enterprise is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

About University of Cambridge, Faculty of Mathematics
The Faculty consists of the Department of Applied Mathematics and Theoretical Physics and its sister Department of Pure Mathematics and Mathematical Statistics, which together form one of the largest and strongest mathematics faculties in Europe. Located in the award-winning Centre for Mathematical Sciences (see www.maths.cam.ac.uk), the Faculty has about 400 staff members (including PhD students) and over 800 undergraduate and postgraduate students enrolled in Parts I to III of the Mathematical Tripos.

About Cosmos Group
The Centre for Theoretical Cosmology (CTC) was established by Professor Stephen Hawking in 2007 within the Department of Applied Mathematics and Theoretical Physics. It exists to advance the scientific understanding of our Universe, developing and testing mathematical theories for cosmology and black holes. CTC is one of the largest research groups within DAMTP, also supporting postdoctoral fellowships, academic programmes and topical workshops (see www.ctc.cam.ac.uk).

Source: HPE

The post HPE Partners with COSMOS Research Group and the Cambridge Faculty of Mathematics appeared first on HPCwire.

UMass Amherst Computer Scientist and International Team Offer Theoretical Solution to 36-Year-Old Computation Problem

Tue, 11/28/2017 - 14:47

AMHERST, Mass., Nov. 28, 2017 – University of Massachusetts Amherst computer science researcher Barna Saha and colleagues at MIT and elsewhere are reporting a theoretical solution to a 36-year-old problem in RNA folding prediction, which is widely used in biology for understanding genome sequences.

The authors presented preliminary results in an extended abstract at the Foundations of Computer Science conference in New Brunswick, N.J. Their final article will appear as one of the conference’s selected papers in an upcoming special issue, expected in 2019, of the SIAM Journal on Computing.

As Saha, an expert in algorithms, explains, computational approaches for finding the secondary structure of RNA molecules are used extensively in bioinformatics applications. Knowing more about RNA structure may reveal clues to its role in the origin and evolution of life on Earth, but experimental approaches are difficult, expensive and time-consuming. Computational methods can be helpful and, when integrated with experimental data, can add to knowledge, she adds.

Among the early researchers to take on the problem were structural chemist Ruth Nussinov in Israel and microbiologist Ann Jacobson in Stony Brook, N.Y., who in 1980 published an algorithm for predicting the secondary structure of single-strand RNA, an accomplishment that “has been in the heart of many further developments in this basic problem” ever since, Saha says. “Over the past 36 years, the cubic running time for their algorithm has not been improved,” she adds, which led many to believe that it is the best running time possible for this problem.

Cubic running time refers to how long the required calculations take, Saha explains: the time grows as the cube of the length of the RNA base-pair string entered as data. For example, if the string has 1,000 base pairs, the running time for Nussinov and Jacobson’s algorithm scales as 1,000 cubed, or 1,000 x 1,000 x 1,000.
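That cubic bound comes from a dynamic program over all substrings. As a rough illustration (not the authors’ code, and a simplified base-pair-maximization variant of the Nussinov–Jacobson recurrence rather than a full energy model), the three nested loops below are exactly what makes the cost grow as the cube of the string length:

```python
# Simplified Nussinov-style dynamic program: dp[i][j] holds the
# maximum number of complementary base pairs in rna[i..j]. Three
# nested loops over string positions give the O(n^3) running time.
def nussinov_pairs(rna, min_loop=1):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(rna)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):       # subsequence length
        for i in range(n - span):             # start position
            j = i + span
            best = dp[i][j - 1]               # case: j is unpaired
            for k in range(i, j - min_loop):  # case: j pairs with k
                if (rna[k], rna[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))  # → 3 (G-C, G-C, G-U)
```

Doubling the string length multiplies the work of this recurrence by roughly eight, which is why long genome sequences make the exponent matter so much.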

However, Saha and colleagues Virginia Vassilevska Williams at MIT, Karl Bringmann at the Max Planck Institute, Saarbrücken, and Fabrizio Grandoni at the Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Switzerland, now say they have shown theoretically that there is a faster, subcubic algorithm possible for RNA folding computations.

Saha says, “Our algorithm is the first one that takes the Nussinov and Jacobson model and improves on it. We show that you can solve the problem faster than cubic running time.” She and colleagues developed a new, faster algorithm for a special kind of matrix multiplication, with which they reduced the exponent of the running time from 3 to 2.82 (that is, from the cube of the base-pair string’s length to roughly its 2.82nd power). “It may not be the fastest yet, there might be room for improvement,” she says. “This is the first work that breaks the barrier.”
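To get a feel for what shaving the exponent from 3 down to 2.82 buys, here is a back-of-the-envelope comparison (illustrative only; it ignores the constant factors, which can differ substantially between the two algorithms):

```python
# Ratio of n**3 to n**2.82 work for growing string lengths n.
# The advantage of the lower exponent widens as inputs get longer.
for n in (1_000, 10_000, 100_000):
    print(n, round(n**3 / n**2.82, 1))  # speedup factor = n**0.18
```

For a 1,000-base string the ratio is only about 3.5x, but it grows without bound: at 100,000 bases it is already roughly 8x, and asymptotically the subcubic algorithm always wins.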

Vassilevska Williams adds, “Before our algorithm, it seemed quite plausible that there is a cubic barrier for RNA-folding. In fact, with co-authors I tried to prove this. We failed – the cubic barrier didn’t seem to follow from any known hypotheses. This failure later helped us break the cubic barrier – if a problem is not hard, it’s likely easy. One of the most fascinating things about our solution is that it is more general than RNA-folding. It breaks the cubic barrier for several other problems, and could potentially lead to further breakthroughs in our understanding for fundamental problems such as shortest paths in networks, pattern recognition and so on.”

Earlier this year, Saha was awarded a five-year Faculty Early Career Development (CAREER) grant from the National Science Foundation, its highest award in support of junior faculty, which supported her work on this project. She says that in future papers, she plans to show that running time can be improved further if the researcher is willing to allow the algorithm to yield “slightly suboptimal folding structures,” that is, the solution will be very close but not precisely correct.

Source: UMass Amherst

The post UMass Amherst Computer Scientist and International Team Offer Theoretical Solution to 36-Year-Old Computation Problem appeared first on HPCwire.

SC17 Cluster Competition: Who Won and Why? Results Analyzed and Over-Analyzed

Tue, 11/28/2017 - 14:08

Everyone by now knows that Nanyang Technological University of Singapore (NTU) took home the Highest LINPACK Award and the Overall Championship from the recently concluded SC17 Student Cluster Competition.

We also already know how the teams did in the Highest LINPACK and Highest HPCG competitions, with Nanyang grabbing bragging rights for both benchmarks.

Now it’s time to dive into the results, see how the other teams did, and figure out what’s what. Let’s walk through the application and task results and see what happened.

The Interview: All of the teams did pretty well on the interview portion, a pressure-packed part of the competition in which HPC subject matter experts grill the students on everything from how they configured their cluster to detailed questions about each of the benchmarks and applications. There’s no hiding from their probing questions.

Nanyang had the highest interview score, notching an almost perfect 97%, but they were closely followed by Team Texas and Tsinghua, who tied for second with 96%.

Team Chicago Fusion (IIT/MHS/SHS) deserves an honorable mention for only being 3% off the winning mark on the interview portion of the scoring.

All of the teams did well in this area, as you can tell by the average/median score of 93%.

The ‘mystery application’ is an app that students only learn about when they’re at the competition. There’s no preparing for it; it’s like suddenly being told in a basketball game that for one quarter, the hoop height will be increased to 15 feet or decreased to five.

The mystery app for 2017 was MPAS-A, an application developed by Los Alamos National Laboratory and the National Center for Atmospheric Research to build weather simulations. Students were given the task of modeling what would happen to the rest of the atmosphere if excess carbon were sequestered in Antarctica.

This is Team Chicago Fusion’s best application – they nailed it and left it for dead with a score of 100%. Nanyang almost scored the bullseye with a score of 99% and Tsinghua was an eyelash behind, posting a score of 98%. NTHU finished just out of the money with a 97% score.

As you can see by the high median score, most of the teams were bunched up on the good side of the average – meaning that most teams scored well on this application, with a few outliers on the low side.

The next task up is the Reproducibility exercise. This is where the teams take a paper that was submitted for SC16 and try to reproduce the results – either proving the paper is valid, or…well, not so valid.

The paper this year has an intriguing title, “The Vectorization of the Tersoff Multi-Body Potential: An Exercise in Performance Portability”, and shows how to use a vectorization scheme to achieve high cross-platform (CPU and accelerator) performance.

Student teams have to use the artifact section of the paper to reproduce the results and either prove or disprove the paper, then submit a report detailing and justifying how they arrived at their conclusion.

Nanyang posted another win, building on their lead over the rest of the pack. Team Texas took home second place, only six points behind Nanyang. NEU makes the winners’ board for the first time in the competition with their third-place showing.

Team Chicago Fusion gets an Honorable Mention for their score of 82%, just a couple of points away from second and third place, while Team Illinois Urbana-Champaign and Taiwan’s NTHU finish in a virtual tie, with 80% and 79% (and some change) respectively.

The rest of the teams had at least some trouble with this task, as witnessed by the median being significantly higher than the mean score – an indication that several teams encountered difficulties completing it. But, hey, who said this was going to be easy?

Speaking of things that are difficult, how about that MrBayes? This year, the students were using MrBayes to examine how viruses transmitted by white flies are impacting cassava production in Africa.

This wasn’t an easy application for most of the teams. While Tsinghua pulled down a 99% score, closely trailed by Nanyang with 98%, the average score on this app was only 67% and the median was 64%.

This was a great app for NEU, however, with their 96% score putting them in the winners’ circle. Team Chicago Fusion was just a few fractions of a point behind NEU, nabbing the Honorable Mention.

The most difficult application in this edition of the cluster competition looks to be Born, a seismic imaging app used to identify oil and gas reserves. It’s not that this was necessarily the most complicated or difficult-to-understand application; it’s that it was so damned time-consuming. And it’s the time-consuming nature of Born that separated the teams in the final accounting.

The teams had to try to process 1,136 Born “shots.” Each shot is independent of the others, which makes for an embarrassingly parallel application – great, right? Well, no. Running on CPUs alone, each Born shot takes somewhere between two and three hours. Ouch.
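Independent shots with no communication between them are the textbook case for a simple task pool. A minimal sketch of the pattern (hypothetical: `process_shot` is a stand-in for the real, hours-long Born kernel, not actual competition code):

```python
# Farm independent Born "shots" out to a pool of worker processes.
# Because shots share no state, throughput scales with however many
# processors (or cloud nodes) you can throw at the queue.
from concurrent.futures import ProcessPoolExecutor

def process_shot(shot_id):
    # Placeholder computation standing in for hours of seismic math.
    return shot_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(process_shot, range(1136)))
    print(len(results))  # 1136: every shot accounted for
```

The catch, as the teams discovered, is that perfect parallelism doesn’t help when each task takes hours and the pool of workers is small; only more (or much faster) workers shrink the wall-clock time.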

Several of the teams decided to use their cloud budget and run a bunch of Born instances in the cloud. While this was a good idea, the teams didn’t have enough cloud capacity to run all that much Born – particularly since each shot took so long to complete.

The best approach was to port Born onto GPUs, as four or five teams proved. The top teams on our leaderboard all ported Born and realized great dividends. Tsinghua completed the entire 1,136 slate of datasets and posted a score of 99%. Nanyang also completed all of the datasets and took home second place with their score of 90%. NTHU was a nanometer behind and grabbed third place. USTC gets an honorable mention for posting a score of 83%.

The rest of the teams didn’t do so hot on this one. The average score was a competition-low 63%, with the median score at 55%. This was a tough mountain to climb, and if you didn’t port Born to GPUs, you didn’t have a chance to complete all of the datasets, even if you devoted your entire cluster to it.

Looking at the final stats, Nanyang was the clear winner with an astounding 95 out of a possible 100 points. NTHU and Tsinghua finished very close together, with NTHU nabbing second place by fractions of a percent. Team Peking, a relative newbie with this being their second appearance in the competition, takes home an honorable mention.

These teams finished above the rest of the pack by a respectable margin, as shown by the average score of 70% and the median score of 71%. But, at the end of the day, all of the teams were winners. Everyone showed up, no one gave up, and everyone learned a lot (including me, in fact).

So that’s another student cluster competition in the books. If you’ve missed any of our coverage, you can catch up on it using the following links.

For an intro into the high-stakes world of student cluster competitions, look here.

If you want to see what kind of hardware the teams are driving, here are their configs.

If you want to see the applications and tasks for this year’s event, click your clicker here.

To meet this year’s teams via our video interviews, click here for the American teams, here for the European teams, and here for the Asian teams.

One final note: the bettors in our betting pool were woefully uninformed. The ‘smart money’ was pretty dumb this year, given that the winning Nanyang team was placed as a 35-1 underdog. Wow, if this was a real pool, anyone betting on Nanyang would have really cleaned up!

We’ll be back with more Student Cluster Competition features, more competitions, and even better coverage in 2018. Stay tuned.

The post SC17 Cluster Competition: Who Won and Why? Results Analyzed and Over-Analyzed appeared first on HPCwire.