Feed aggregator

Shantenu Jha Named Chair of Brookhaven Lab’s Center for Data-Driven Discovery

HPC Wire - Thu, 11/30/2017 - 11:59

UPTON, N.Y., Nov. 30, 2017 — Computational scientist Shantenu Jha has been named the inaugural chair of the Center for Data-Driven Discovery (C3D) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, effective October 1. Part of the Computational Science Initiative (CSI), C3D is driving the integration of domain, computational, and data science expertise across Brookhaven Lab’s science programs and facilities, with the goal of accelerating and expanding scientific discovery. Outside the Lab, C3D is serving as a focal point for the recruitment of future data scientists and collaboration with other institutions.

Shantenu Jha

Jha holds a joint appointment with Rutgers University, where he is an associate professor in the Department of Electrical and Computer Engineering and principal investigator of the Research in Advanced Distributed Cyberinfrastructure and Applications Laboratory (RADICAL). He also leads RADICAL-Cybertools, a suite of building blocks for middleware (the software layer between the computing platform and application programs) that supports large-scale science and engineering applications.

“Brookhaven hosts four DOE Office of Science User Facilities—the Accelerator Test Facility, Center for Functional Nanomaterials, National Synchrotron Light Source II, and Relativistic Heavy Ion Collider—and participates in leading roles for at least two more facilities,” said Jha. “Further, there are unprecedented collaborative opportunities that CSI provides with other Lab divisions and nearby high-tech, pharmaceutical, and biotech companies. Leading C3D, which will be at the nexus of these collaborations and the intersection of high-performance computing and data science, is a truly unique opportunity.”

Jha’s research interests lie at the intersection of high-performance and distributed computing, computational science, and cyberinfrastructure (computing and data storage systems, visualization environments, and other computing infrastructure linked by high-speed networks). He has experience collaborating with scientists from multiple domains, including the molecular and earth sciences and high-energy physics.

In his new role, Jha will work with domain science researchers, computational scientists, applied mathematicians, computer scientists and engineers. Together, they will develop, deploy, and operate novel solutions for data management, analysis, and interpretation that accelerate discovery in science and industry and enhance national security. These solutions include methods, tools, and services—such as machine-learning algorithms, programming models, visual analytics techniques, and data-sharing platforms. Initially, his team will focus on scalable software systems; distributed computing systems, applications, and middleware; and extreme-scale computing for health care and precision medicine. Partnerships with other national laboratories, colleges and universities, research institutions, and industry will play a critical role in these efforts.

“Shantenu’s leading research in infrastructures for streaming data analysis and novel approaches to high-performance workflows and workload management systems—as well as his expertise in application areas such as materials science and health care—ideally position him for the role of C3D chair,” said CSI Director Kerstin Kleese van Dam. “We are excited to work with him to realize our vision for C3D.”

Prior to joining Rutgers in 2011, Jha was the director of cyberinfrastructure research and development at Louisiana State University’s Center for Computation and Technology. He was also a visiting faculty member in the School of Informatics at the University of Edinburgh and a visiting scientist at the Centre for Computational Science at University College London.

Jha is the recipient of the National Science Foundation’s Faculty Early Career Development Program (CAREER) award, several best paper awards at supercomputing conferences, a Rutgers Board of Trustees Research Fellowship for Scholarly Excellence, and the Inaugural Rutgers Chancellor’s Award for Excellence in Research (the highest award for research contributions that is bestowed to Rutgers faculty). He serves on many program committees—including those for the annual SuperComputing Conference, Platform for Advanced Scientific Computing Conference, International Symposium on Cluster, Cloud and Grid Computing, and International Parallel and Distributed Processing Symposium—and has presented his research at invited talks and keynotes around the world. He holds a PhD and master’s degree in computer science from Syracuse University and a master’s degree in physics from the Indian Institute of Technology Delhi.

Source: Brookhaven Lab

The post Shantenu Jha Named Chair of Brookhaven Lab’s Center for Data-Driven Discovery appeared first on HPCwire.

Microsemi Announces Libero SoC PolarFire v2.0 for Designing With its Mid-Range FPGAs

HPC Wire - Thu, 11/30/2017 - 10:02

ALISO VIEJO, Calif., Nov. 30, 2017 — Microsemi Corporation (Nasdaq: MSCC), a leading provider of semiconductor solutions differentiated by power, security, reliability and performance, today announced the availability of Libero system-on-chip (SoC) PolarFire version 2.0, a comprehensive design software tool suite used to develop the company’s lowest power, cost-optimized mid-range PolarFire field programmable gate arrays (FPGAs). The release supports all PolarFire FPGA family devices and packages.

Microsemi’s Libero SoC PolarFire Design Suite provides a complete design environment for customers working on designs requiring high-speed transceivers and memories with low power consumption. It enables high productivity with comprehensive, easy-to-learn development tools and gives customers a design launching point with quick-start demonstration designs for rapid evaluation and prototyping. Several full design files for Libero SoC PolarFire targeting the company’s complementary PolarFire Evaluation Kit are also available, including a JESD204B interface, PCI Express (PCIe) endpoint, 10GBASE-R Ethernet, digital signal processing (DSP) finite impulse response (FIR) filter and multi-rate transceiver demonstration, with additional reference designs planned over the coming months.

“Our Libero SoC PolarFire v2.0 release supports all of the PolarFire product family’s devices and packages, enabling customers to further leverage the high-performance capabilities of our lowest power, cost-optimized mid-range FPGAs for their designs,” said Jim Davis, vice president of software engineering at Microsemi. “Feature enhancements to best-in-class debug tool SmartDebug provide the ability to evaluate transceiver performance while modifying transceiver lane signal integrity parameters on the fly, and to evaluate the channel noise of the transceiver receiver through the eye monitor. In addition, the demonstration mode allows customers to evaluate SmartDebug features without connecting to a hardware board—a capability unique to Microsemi FPGAs.”

The enhanced design suite also includes significant runtime improvements for SmartPower, with a 4x speedup in invocation time and almost instantaneous switching between different views. In addition, Libero SoC PolarFire v2.0 introduces a brand-new SmartDesign canvas with higher-quality, faster net display and easier design navigation.

While Microsemi’s PolarFire FPGAs are ideal for a wide variety of applications within the communications, industrial, and aerospace and defense markets, the new software provides new capabilities for high-speed applications, offering particular suitability for access networks, wireless infrastructure, and the defense and Industry 4.0 markets. Application examples include wireline access, network edge, wireless heterogeneous networks, wireless backhaul, smart optical modules, video broadcasting, encryption and root of trust, secure wireless communications, radar and electronic warfare (EW), aircraft networking, and actuation and control.

With the release of Libero SoC PolarFire v2.0, Microsemi has added support for PolarFire MPF100, MPF200, MPF300 and MPF500 devices for all package options, enabling customers to design with all members of the PolarFire family. It also adds the MPF300TS-FCG484 (STD) device to Libero Gold License and introduces the MPF100T device supported by the free Libero Silver License.

Microsemi’s PolarFire FPGA devices provide cost-effective bandwidth processing capabilities with the lowest power footprint. They feature 12.7 Gbps transceivers, offer up to 50 percent lower power than competing mid-range FPGAs, and include hardened PCIe controller cores with both endpoint and root port modes available, as well as low power transceivers. The company’s complementary PolarFire Evaluation Kit is a comprehensive platform for evaluating its PolarFire FPGAs which includes a PCIe edge connector with four lanes and a demonstration design. The kit features a high-pin-count (HPC) FPGA mezzanine card (FMC), a single full-duplex lane of surface mount assemblies (SMAs), PCIe x4 fingers, dual Gigabit Ethernet RJ45 and a small form-factor pluggable (SFP) module.


Microsemi’s Libero SoC PolarFire v2.0 software toolset is now available for download from Microsemi’s website at https://www.microsemi.com/products/fpga-soc/design-resources/design-software/libero-soc-polarfire#downloads and its PolarFire FPGA devices are available for engineering sample ordering with standard lead times. For more information, visit https://www.microsemi.com/products/fpga-soc/design-resources/design-software/libero-soc-polarfire and www.microsemi.com/polarfire or email sales.support@microsemi.com.

About PolarFire FPGAs

Microsemi’s new cost-optimized PolarFire FPGAs deliver the industry’s lowest power at mid-range densities with exceptional security and reliability. The product family features 12.7 Gbps transceivers and offers up to 50 percent lower power than competing FPGAs. Densities span 100K to 500K logic elements (LEs), making the devices ideal for a wide range of applications within the wireline access network and cellular infrastructure, defense and commercial aviation markets, as well as Industry 4.0, which includes the industrial automation and internet of things (IoT) markets.

PolarFire FPGAs’ transceivers can support multiple serial protocols, making the products ideal for communications applications with 10Gbps Ethernet, CPRI, JESD204B, Interlaken and PCIe. In addition, the ability to implement serial gigabit Ethernet (SGMII) on GPIO enables numerous 1Gbps Ethernet links to be supported. PolarFire FPGAs also contain the most hardened security intellectual property (IP) to protect customer designs, data and supply chain. The non-volatile PolarFire product family consumes 10 times less static power than competitive devices and features an even lower standby power referred to as Flash*Freeze. For more information, visit www.microsemi.com/polarfire.

About Microsemi

Microsemi Corporation (Nasdaq: MSCC) offers a comprehensive portfolio of semiconductor and system solutions for aerospace & defense, communications, data center and industrial markets. Products include high-performance and radiation-hardened analog mixed-signal integrated circuits, FPGAs, SoCs and ASICs; power management products; timing and synchronization devices and precise time solutions, setting the world’s standard for time; voice processing devices; RF solutions; discrete components; enterprise storage and communication solutions, security technologies and scalable anti-tamper products; Ethernet solutions; Power-over-Ethernet ICs and midspans; as well as custom design capabilities and services. Microsemi is headquartered in Aliso Viejo, California and has approximately 4,800 employees globally. Learn more at www.microsemi.com.

Source: Microsemi

The post Microsemi Announces Libero SoC PolarFire v2.0 for Designing With its Mid-Range FPGAs appeared first on HPCwire.

EXDCI Opens Call for Workshops for the European HPC Summit Week 2018

HPC Wire - Thu, 11/30/2017 - 08:43

Nov. 30, 2017 — EXDCI is pleased to announce an open call for workshops for HPC stakeholders (institutions, service providers, users, communities, projects, vendors and consultants) to shape and contribute to the European HPC Summit Week 2018 (EHPCSW18), which will take place from 28 May to 1 June in Ljubljana, Slovenia.

This call for workshops is addressed to all participants interested in including a session or workshop in EHPCSW18. To take part, send an expression of interest for a session or workshop; the organizers will then agree on a joint programme for the week. If you are an HPC project, initiative or company and want to include a workshop or session in the European HPC Summit Week, please send the attached document before 18 December 2017.

PRACEdays18 is the central event of the European HPC Summit Week and is hosted by PRACE’s Slovenian Member ULFME – the University of Ljubljana, Faculty of Mechanical Engineering. The conference will bring together experts from academia and industry who will present their advances in HPC-supported science and engineering. PRACE has also opened a call for contributions and posters for PRACEdays18 within the EHPCSW18 week.

For more information on the timeline and the submission process, please follow this link: https://exdci.eu/newsroom/news/exdci-opens-call-workshops-european-hpc-summit-week-2018

Source: EXDCI

The post EXDCI Opens Call for Workshops for the European HPC Summit Week 2018 appeared first on HPCwire.

Two Quantum Simulators Cross 50-Qubit Threshold

HPC Wire - Wed, 11/29/2017 - 18:00

2017 has been quite the year for quantum computing progress. D-Wave continued to build out its quantum annealing approach, while Google, Microsoft, IBM and, more recently, Intel made steady advances toward the threshold that Google has dubbed quantum supremacy: the point at which quantum machines will be able to solve select problems that are outside the purview of their classical counterparts.

Along with efforts to build general quantum computers, researchers are also developing quantum simulators, which enable the study of quantum systems that are too complex to model with conventional supercomputers. Today two independent teams of researchers published papers in the journal Nature describing their work creating the largest quantum simulators yet, at over 50 qubits each. These projects mark major milestones, as quantum simulators had previously been limited to around a dozen qubits.

In one of the two studies, researchers from the University of Maryland (UMD) and the National Institute of Standards and Technology (NIST) created a trapped-ion device composed of 53 individual ytterbium atoms (ions) held in place by electric fields.

“Each ion qubit is a stable atomic clock that can be perfectly replicated,” said UMD team lead Christopher Monroe, who is also the co-founder and chief scientist at the startup IonQ Inc. “They are effectively wired together with external laser beams. This means that the same device can be reprogrammed and reconfigured, from the outside, to adapt to any type of quantum simulation or future quantum computer application that comes up.”

In a separate paper, published in the same issue of Nature, a group of physicists from MIT and Harvard University reported a new way to manipulate quantum bits of matter using finely tuned lasers to generate, control and “read” a 51-atom array.

“Our method provides a way of exploring many-body phenomena on a programmable quantum simulator and could enable realizations of new quantum algorithms,” the authors write.

Potential applications for the new quantum simulator include optimization problems such as the traveling salesman problem, variations of which are used in DNA sequencing, materials science and data processing.
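As a toy illustration of the traveling salesman problem mentioned above, the sketch below exhaustively searches every tour over a handful of invented city coordinates. (The coordinates and city names are made up for the demo; they are not from the research.)

```python
# Brute-force traveling salesman: try every ordering of the cities and
# keep the shortest closed tour. City coordinates are invented for the demo.
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def tour_length(order):
    # Total distance of a closed tour visiting the cities in this order.
    points = [cities[c] for c in order]
    return sum(dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

best = min(permutations(cities), key=tour_length)
```

The search space grows factorially with the number of cities, which is why exhaustive search fails at realistic sizes and why heuristic and, potentially, quantum approaches to such optimization problems matter.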

Further reading

MIT News: http://news.mit.edu/2017/scientists-demonstrate-one-largest-quantum-simulators-yet-51-atoms-1129

Joint Quantum Institute press release: http://jqi.umd.edu/news/quantum-simulators-wield-control-over-more-50-qubits

Feature image caption: Artist’s depiction of quantum simulation. Lasers manipulate an array of over 50 atomic qubits in order to study the dynamics of quantum magnetism (credit: E. Edwards/JQI).

The post Two Quantum Simulators Cross 50-Qubit Threshold appeared first on HPCwire.

Ireland Ranks Number One in Top500 Systems per Capita

HPC Wire - Wed, 11/29/2017 - 14:57

Nov. 29, 2017 — The 9th Irish Supercomputer List was released today, featuring two new world-class supercomputers. This is the first time that Ireland has four computers ranked on the Top500 list of the fastest supercomputers on Earth. Ireland is now ranked number one globally in terms of number of Top500 supercomputers per capita, and fourth globally in terms of performance per capita. These new supercomputers boost Irish high performance computing capacity by nearly one third, up from 3.01 to 4.42 Pflop/s. Ireland has ranked on the Top500 list 33 times over a history of 23 years, with a total of 20 supercomputers. Over half of these rankings (19) and supercomputers (12) have come in the last 6 years, reflecting Ireland’s increasing pace of high performance computing investment. The new entrants, from two undisclosed software and web services companies, sit at spots 423 and 454 on the 50th Top500 list, with Linpack Rmax scores of 635 and 603 TFlop/s respectively.
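For concreteness, the per-capita figures reduce to simple arithmetic. The system count and aggregate performance come from the list above; the population figure is a rough 2017 estimate and an assumption of ours, not from the source:

```python
# Back-of-envelope per-capita metrics for Ireland's Top500 standing.
population = 4.8e6          # approximate population of Ireland, 2017 (assumption)
top500_systems = 4          # Irish systems on the current Top500 list
performance_pflops = 4.42   # aggregate Irish HPC capacity, Pflop/s

systems_per_million = top500_systems / (population / 1e6)      # ~0.83
pflops_per_million = performance_pflops / (population / 1e6)   # ~0.92
```

Comparing these ratios against other countries' figures is how the per-capita rankings above are derived.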

Even setting aside Ireland’s admittedly low population (which does help the above rankings), the country still ranks admirably: 9th globally in Top500 installations, tied with Australia, India and Saudi Arabia, and 18th in the world in terms of supercomputing performance.

The Irish Supercomputer List now ranks 30 machines (2 new, 1 upgraded), with a total of more than 207,000 CPU cores and over 106,000 accelerator cores.

Source: Irish Supercomputer List

The post Ireland Ranks Number One in Top500 Systems per Capita appeared first on HPCwire.

ORNL-Designed Algorithm Leverages Titan to Create High-Performing Deep Neural Networks

HPC Wire - Wed, 11/29/2017 - 14:40

Nov. 29, 2017 — Deep neural networks—a form of artificial intelligence—have demonstrated mastery of tasks once thought uniquely human. Their triumphs have ranged from identifying animals in images, to recognizing human speech, to winning complex strategy games, among other successes.

Now, researchers are eager to apply this computational technique—commonly referred to as deep learning—to some of science’s most persistent mysteries. But because scientific data often looks much different from the data used for animal photos and speech, developing the right artificial neural network can feel like an impossible guessing game for nonexperts. To expand the benefits of deep learning for science, researchers need new tools to build high-performing neural networks that don’t require specialized knowledge.

Using the Titan supercomputer, a research team led by Robert Patton of the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) has developed an evolutionary algorithm capable of generating custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems. Better yet, by leveraging the GPU computing power of the Cray XK7 Titan—the leadership-class machine managed by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL—these auto-generated networks can be produced quickly, in a matter of hours as opposed to the months needed using conventional methods.

The research team’s algorithm, called MENNDL (Multinode Evolutionary Neural Networks for Deep Learning), is designed to evaluate, evolve, and optimize neural networks for unique datasets. Scaled across Titan’s 18,688 GPUs, MENNDL can test and train thousands of potential networks for a science problem simultaneously, eliminating poor performers and averaging high performers until an optimal network emerges. The process eliminates much of the time-intensive, trial-and-error tuning traditionally required of machine learning experts.
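The evolve-and-cull loop described above can be sketched in miniature. This is a hedged illustration, not MENNDL’s actual code: the hyperparameter space, mutation rule, and fitness function are all invented stand-ins, and in a real run each candidate would be trained on a compute node and scored on real data.

```python
import random

random.seed(0)  # make this toy run reproducible

def random_config():
    # A candidate network described only by its hyperparameters.
    return {"layers": random.randint(2, 8),
            "units": random.choice([32, 64, 128, 256])}

def fitness(cfg):
    # Invented stand-in score; a real run would train the network and
    # return its validation accuracy. Best possible value here is 0.0.
    return -abs(cfg["layers"] - 5) - abs(cfg["units"] - 128) / 64.0

def mutate(cfg):
    # Produce an offspring by perturbing one hyperparameter.
    child = dict(cfg)
    if random.random() < 0.5:
        child["layers"] = min(8, max(2, child["layers"] + random.choice([-1, 1])))
    else:
        child["units"] = random.choice([32, 64, 128, 256])
    return child

population = [random_config() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                        # eliminate poor performers
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring                 # next generation

best = max(population, key=fitness)
```

Scaled up, each fitness evaluation is a full training run on its own GPU node, which is why thousands of candidates can be tested simultaneously.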

“There’s no clear set of instructions scientists can follow to tweak networks to work for their problem,” said research scientist Steven Young, a member of ORNL’s Nature Inspired Machine Learning team. “With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them, while they focus on their data and ensuring the problem is well-posed.”

Pinning down parameters

Inspired by the brain’s web of neurons, deep neural networks are a relatively old concept in neuroscience and computing, first popularized by two University of Chicago researchers in the 1940s. But because of limits in computing power, it wasn’t until recently that researchers had success in training machines to independently interpret data.

Today’s neural networks can consist of thousands or millions of simple computational units—the “neurons”—arranged in stacked layers, like the rows of figures spaced across a foosball table. During one common form of training, a network is assigned a task (e.g., to find photos with cats) and fed a set of labeled data (e.g., photos of cats and photos without cats). As the network pushes the data through each successive layer, it makes correlations between visual patterns and predefined labels, assigning values to specific features (e.g., whiskers and paws). These values contribute to the weights that define the network’s model parameters. During training, the weights are continually adjusted until the final output matches the targeted goal. Once the network learns to perform from training data, it can then be tested against unlabeled data.
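The weight-adjustment loop described above can be illustrated with a single artificial neuron. The data, learning rate, and task (label 1 when the second coordinate exceeds the first) are made up for the demo:

```python
import math

# Four labeled training points; label 1 when the second coordinate is larger.
data = [([0.2, 0.9], 1), ([0.8, 0.1], 0), ([0.1, 0.7], 1), ([0.9, 0.3], 0)]
w, b = [0.0, 0.0], 0.0   # the "network's" weights, adjusted during training
lr = 1.0                 # learning rate

def predict(x):
    # One neuron: weighted sum of the inputs passed through a sigmoid.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for epoch in range(500):
    for x, label in data:
        err = predict(x) - label   # how far the output is from the target
        w[0] -= lr * err * x[0]    # nudge each weight to shrink the error
        w[1] -= lr * err * x[1]
        b -= lr * err
```

After training, the neuron classifies the training points correctly; testing against unlabeled data would use the same `predict` call.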

Although many parameters of a neural network are determined during the training process, initial model configurations must be set manually. These starting points, known as hyperparameters, include variables like the order, type, and number of layers in a network.

Finding the optimal set of hyperparameters can be the key to efficiently applying deep learning to an unusual dataset. “You have to experimentally adjust these parameters because there’s no book you can look in and say, ‘These are exactly what your hyperparameters should be,’” Young said. “What we did is use this evolutionary algorithm on Titan to find the best hyperparameters for varying types of datasets.”

Unlocking that potential, however, required some creative software engineering by Patton’s team. MENNDL homes in on a neural network’s optimal hyperparameters by assigning a neural network to each Titan node. The team designed MENNDL to use a deep learning framework called Caffe to carry out the computation, relying on the Message Passing Interface (MPI) parallel computing standard to divide and distribute data among nodes. As Titan works through individual networks, new data is fed to the system’s nodes asynchronously, meaning once a node completes a task, it’s quickly assigned a new task independent of the other nodes’ status. This ensures that the 27-petaflop Titan stays busy combing through possible configurations.

“Designing the algorithm to really work at that scale was one of the challenges,” Young said. “To really leverage the machine, we set up MENNDL to generate a queue of individual networks to send to the nodes for evaluation as soon as computing power becomes available.”
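The asynchronous work-queue pattern Young describes can be sketched as follows. MENNDL uses MPI across Titan’s nodes; in this toy version a small thread pool stands in for the nodes, and an invented scoring function stands in for network evaluation:

```python
import concurrent.futures

def evaluate(network_id):
    # Stand-in for training and scoring one candidate network on a node.
    return network_id, (network_id * 37) % 100   # deterministic fake score

candidates = list(range(16))   # queue of networks awaiting evaluation
results = []

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(evaluate, c) for c in candidates]
    # as_completed yields each result as soon as its worker finishes,
    # independent of the other workers' status, so no worker sits idle.
    for fut in concurrent.futures.as_completed(futures):
        results.append(fut.result())

best_id, best_score = max(results, key=lambda r: r[1])
```

The key property is the same one Young highlights: workers pull new tasks the moment they free up, keeping the whole machine busy.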

To demonstrate MENNDL’s versatility, the team applied the algorithm to several datasets, training networks to identify sub-cellular structures for medical research, classify satellite images with clouds, and categorize high-energy physics data. The results matched or exceeded the performance of networks designed by experts.

Networking neutrinos

One science domain in which MENNDL is already proving its value is neutrino physics. Neutrinos, ghost-like particles that pass through your body at a rate of trillions per second, could play a major role in explaining the formation of the early universe and the nature of matter—if only scientists knew more about them.

Large detectors at DOE’s Fermi National Accelerator Laboratory (Fermilab) use high-intensity beams to study elusive neutrino reactions with ordinary matter. The devices capture a large sample of neutrino interactions that can be transformed into basic images through a process called “reconstruction.” Like a slow-motion replay at a sporting event, these reconstructions can help physicists better understand neutrino behavior.

“They almost look like a picture of the interaction,” said Gabriel Perdue, an associate scientist at Fermilab.

Perdue leads an effort to integrate neural networks into the classification and analysis of detector data. The work could improve the efficiency of some measurements, help physicists understand how certain they can be about their analyses, and lead to new avenues of inquiry.

Teaming up with Patton’s team under a 2016 Director’s Discretionary allocation on Titan, Fermilab researchers produced a competitive classification network in support of a neutrino scattering experiment called MINERvA (Main Injector Experiment for ν-A). The task, known as vertex reconstruction, required a network to analyze images and precisely identify the location where neutrinos interact with the detector—a challenge for events that produce many particles.

In only 24 hours, MENNDL produced optimized networks that outperformed handcrafted networks—an achievement that would have taken months for Fermilab researchers. To identify the high-performing network, MENNDL evaluated approximately 500,000 neural networks. The training data consisted of 800,000 images of neutrino events, steadily processed on 18,000 of Titan’s nodes.

“You need something like MENNDL to explore this effectively infinite space of possible networks, but you want to do it efficiently,” Perdue said. “What Titan does is bring the time to solution down to something practical.”

Having recently been awarded another allocation under the Advanced Scientific Computing Research Leadership Computing Challenge program, Perdue’s team is building off its deep learning success by applying MENNDL to additional high-energy physics datasets to generate optimized algorithms. In addition to improved physics measurements, the results could provide insight into how and why machines learn.

“We’re just getting started,” Perdue said. “I think we’ll learn really interesting things about how deep learning works, and we’ll also have better networks to do our physics. The reason we’re going through all this work is because we’re getting better performance, and there’s real potential to get more.”

AI meets exascale

When Titan debuted 5 years ago, its GPU-accelerated architecture boosted traditional modeling and simulation to new levels of detail. Since then, GPUs, which excel at carrying out hundreds of calculations simultaneously, have become the go-to processor for deep learning. That fortuitous development made Titan a powerful tool for exploring artificial intelligence at supercomputer scales.

With the OLCF’s next leadership-class system, Summit, set to come online in 2018, deep learning researchers expect to take this blossoming technology even further. Summit builds on the GPU revolution pioneered by Titan and is expected to deliver more than five times the performance of its predecessor. The IBM system will contain more than 27,000 of Nvidia’s newest Volta GPUs in addition to more than 9,000 IBM Power9 CPUs. Furthermore, because deep learning requires less mathematical precision than other types of scientific computing, Summit could potentially deliver exascale-level performance for deep learning problems—the equivalent of a billion billion calculations per second.

“That means we’ll be able to evaluate larger networks much faster and evolve many more generations of networks in less time,” Young said.

In addition to preparing for new hardware, Patton’s team continues to develop MENNDL and explore other types of experimental techniques, including neuromorphic computing, another biologically inspired computing concept.

“One thing we’re looking at going forward is evolving deep learning networks from stacked layers to graphs of layers that can split and then merge later,” Young said. “These networks with branches excel at analyzing things at multiple scales, such as a closeup photograph in comparison to a wide-angle shot. When you have 20,000 GPUs available, you can actually start to think about a problem like that.”

Source: ORNL

The post ORNL-Designed Algorithm Leverages Titan to Create High-Performing Deep Neural Networks appeared first on HPCwire.

High-Performance Computing Cuts Particle Collision Data Prep Time

HPC Wire - Wed, 11/29/2017 - 14:34

Nov. 29, 2017 — For the first time, scientists have used high-performance computing (HPC) to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

The demonstration project used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC), a high-performance computing center at Lawrence Berkeley National Laboratory in California, to reconstruct multiple datasets collected by the STAR detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at Brookhaven National Laboratory in New York. By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of “physics-ready” data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.
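The pattern of fanning raw events out to many simultaneous workers can be sketched as below. This is illustrative only: the toy events and the averaging step are invented stand-ins for STAR’s actual reconstruction software, and a small thread pool stands in for the allotted supercomputing cores.

```python
from multiprocessing.dummy import Pool  # thread-backed Pool with the same API

def reconstruct(event):
    # Stand-in for converting one raw detector event to physics-ready form.
    return sum(event) / len(event)

# Invented "raw events": each is just a short list of numbers.
raw_events = [[i, i + 1, i + 2] for i in range(1000)]

with Pool(4) as pool:
    # Workers pull events from the shared input and process them in parallel.
    physics_ready = pool.map(reconstruct, raw_events)
```

Because each event is reconstructed independently, throughput scales almost linearly with the number of workers, which is what makes an elastic HPC allocation so effective for this workload.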

“The reason why this is really fantastic,” said Brookhaven physicist Jérôme Lauret, who manages STAR’s computing needs, “is that these high-performance computing resources are elastic. You can call to reserve a large allotment of computing power when you need it—for example, just before a big conference when physicists are in a rush to present new results.” According to Lauret, preparing raw data for analysis typically takes many months, making it nearly impossible to provide such short-term responsiveness. “But with HPC, perhaps you could condense that many months production time into a week. That would really empower the scientists!”

The accomplishment showcases the synergistic capabilities of RHIC and NERSC—U.S. Department of Energy (DOE) Office of Science User Facilities located at DOE-run national laboratories on opposite coasts—connected by one of the most extensive high-performance data-sharing networks in the world, DOE’s Energy Sciences Network (ESnet), another DOE Office of Science User Facility.

“This is a key usage model of high-performance computing for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC.

Billions of data points

To make physics discoveries at RHIC, scientists must sort through hundreds of millions of collisions between ions accelerated to very high energy. STAR, a sophisticated, house-sized electronic instrument, records the subatomic debris streaming from these particle smashups. In the most energetic events, many thousands of particles strike detector components, producing firework-like displays of colorful particle tracks. But to figure out what these complex signals mean, and what they can tell us about the intriguing form of matter created in RHIC’s collisions, scientists need detailed descriptions of all the particles and the conditions under which they were produced. They must also compare huge statistical samples from many different types of collision events.

Cataloging that information requires sophisticated algorithms and pattern recognition software to combine signals from the various readout electronics, and a seamless way to match that data with records of collision conditions. All the information must then be packaged in a way that physicists can use for their analyses.

Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at Brookhaven. High-throughput computing (HTC) clusters crunch the data, event-by-event, and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.
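The event-by-event pattern described above is what makes the reconstruction workload a natural fit for high-throughput computing: every collision can be processed independently. A purely illustrative sketch (hypothetical function and field names, not STAR's actual software) of that pattern:

```python
# Purely illustrative: each collision event is reconstructed on its own,
# so an HTC scheduler can farm events (or files of events) out to nodes.
def reconstruct(raw_event, run_conditions):
    """Toy stand-in for signal combination and pattern recognition:
    package the event's signals with the recorded collision conditions."""
    return {"n_signals": len(raw_event), "conditions": run_conditions}

def process_run(raw_events, run_conditions):
    # Independent iterations: trivially parallel across a cluster.
    return [reconstruct(event, run_conditions) for event in raw_events]
```

Because no iteration depends on another, throughput scales with the number of nodes, which is the defining property of high-throughput (as opposed to tightly coupled) computing.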

But the challenge of keeping up with the data has grown with RHIC’s ever-improving collision rates and as new detector components have been added. In recent years, STAR’s annual raw data sets have reached billions of events with data sizes in the multi-petabyte range. So the STAR computing team investigated the use of external resources to meet the demand for timely access to physics-ready data.

To read the full article, follow this link: https://www.bnl.gov/newsroom/news.php?a=212581

Source: Brookhaven National Laboratory

The post High-Performance Computing Cuts Particle Collision Data Prep Time appeared first on HPCwire.

Enabling Scientific Discovery through HPC at the University of Hull

HPC Wire - Wed, 11/29/2017 - 14:20

HULL, United Kingdom, Nov. 28, 2017 — ClusterVision, Europe’s dedicated high performance computing (HPC) solutions provider, and the University of Hull, one of the United Kingdom’s leading research universities, have published a joint case study today detailing the university’s research on their state-of-the-art HPC cluster, Viper.

HPC enables faster and more robust scientific discovery by significantly enhancing data processing capability. The University of Hull has appropriately named their cluster Viper to reflect this speed and power. For Alex Sheardown, a PhD student at the university’s E.A. Milne Centre, HPC is vital to his research in astrophysics. His work on the growth and composition of galaxy clusters demands a great deal of processing power, and with Viper he is finally able to produce high-resolution density snapshots for his research.

Read up on his research, the university’s investments in HPC, and how ClusterVision exceeded the needs of the university in this case study.

You can view, read, and download the case study here.

Source: ClusterVision

The post Enabling Scientific Discovery through HPC at the University of Hull appeared first on HPCwire.

New Director Named at Los Alamos National Laboratory

HPC Wire - Wed, 11/29/2017 - 14:09

LOS ALAMOS, New Mexico, Nov. 28, 2017 — Dr. Terry Wallace has been appointed Director of Los Alamos National Laboratory (LANL) and President of Los Alamos National Security, LLC (LANS), the company that manages and operates the Laboratory for the National Nuclear Security Administration (NNSA). The appointments were announced today by Norman J. Pattiz and Barbara E. Rusinko, Chair and Vice Chair of the Los Alamos National Security (LANS) Board of Governors, and are effective January 1, 2018.

Terry Wallace

“Dr. Wallace’s unique skills, experience and national security expertise make him the right person to lead Los Alamos in service to the country,” said Pattiz. “Terry’s expertise in forensic seismology, a highly specialized discipline, makes him an acknowledged international authority on the detection and quantification of nuclear tests.”

Wallace, age 61, will succeed Dr. Charlie McMillan, who announced in September his plans to retire from the Laboratory by the end of the year. Wallace becomes the 11th Director in the Laboratory’s nearly 75-year history.

Presently, Wallace serves as Principal Associate Director for Global Security (PADGS), and leads Laboratory programs with a focus on applying scientific and engineering capabilities to address national and global security threats, in particular, nuclear threats.

Dr. Wallace served as Principal Associate Director for Science, Technology, and Engineering (PADSTE) from 2006 to 2011 and as Associate Director of Strategic Research from 2005 to 2006. In those positions, he integrated the expertise from all basic science programs and five expansive science and engineering organizations to support LANL’s nuclear-weapons, threat-reduction, and national-security missions.

Wallace was selected following a search and selection process conducted by members of the LANS Board.

“I am honored and humbled to be leading Los Alamos National Laboratory,” said Wallace. “Our Laboratory’s mission has never been more important than it is today. As Director, I am determined to extend, if not strengthen, our 75-year legacy of scientific excellence in support of our national interests well into the future.”

Dr. Wallace holds Ph.D. and M.S. degrees in geophysics from California Institute of Technology and B.S. degrees in geophysics and mathematics from New Mexico Institute of Mining and Technology.

Wallace will oversee a budget of approximately $2.5 billion, employees and contractors numbering nearly 12,000, and a 36-square-mile site of scientific laboratories, nuclear facilities, experimental capabilities, administration buildings, and utilities.

Pattiz praised outgoing Director McMillan’s dedication and 35 years of service to Los Alamos, Lawrence Livermore and LANS: “Charlie McMillan has led Los Alamos National Laboratory with a rare combination of commitment, intelligence and hard work. We believe he has put this iconic institution in a strong position to continue serving the country for many years to come.”

Additional background

Career Details

Wallace first worked at Los Alamos Scientific Laboratory as an undergraduate student in 1975, and returned to the Laboratory in 2003.

Before returning to the Laboratory, Wallace spent 20 years as a professor at the University of Arizona, with appointments to both the Geoscience Department and the Applied Mathematics Program. His scholarly work has earned him recognition as a leader within the worldwide geological community: he was awarded the American Geophysical Union’s prestigious Macelwane Medal, and he has the rare honor of having a mineral named after him by the International Mineralogical Association Commission on New Minerals, Nomenclature and Classification.

Wallace is a Fellow of the American Geophysical Union (AGU). He has served as President of the Seismological Society of America and as Chairman of the Incorporated Research Institutions for Seismology (IRIS). He is the co-author of the most widely used seismology textbook, “Modern Global Seismology,” and has authored more than 100 peer-reviewed articles on various aspects of seismology. Wallace chaired the National Academy of Sciences Committee on Seismology and Geodynamics for six years, and was a member of the Board on Earth Sciences and Resources.


Wallace currently resides in Los Alamos. He has been married to Dr. Michelle Hall for over 29 years and they have a son, David, and two grandchildren. He was raised in Los Alamos and is a 1974 graduate of Los Alamos High School.

Dr. Wallace is the son of the late Terry Wallace, Sr. and the late Jeanette Wallace and is a second-generation Laboratory employee.

About Los Alamos National Laboratory

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy’s National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

Source: Los Alamos National Laboratory

The post New Director Named at Los Alamos National Laboratory appeared first on HPCwire.

Mines Ethics Bowl team qualifies for national finals

Colorado School of Mines - Wed, 11/29/2017 - 13:00

Students from Colorado School of Mines are going to the National Intercollegiate Ethics Bowl for the third year in a row.

The Mines Ethics Bowl team beat out student groups from six other schools to win the Rocky Mountain Regional Ethics Bowl on Nov. 11 in Lincoln, Nebraska.  

Mines and the second-place team from Macalester College will move on to the national competition, set to coincide with the 2018 Association for Practical and Professional Ethics Annual Meeting in Chicago in March. Also competing in the Rocky Mountain Regional were Colorado State University, Metropolitan State University of Denver, University of Denver, University of Colorado Denver and Simpson College. 

Earlier this year, Mines placed in the top 20 at the 2017 nationals. The school began fielding Ethics Bowl teams four years ago.

“I'm unbelievably impressed with this year's team – we have only one returning member from last year, so everyone pulled it together really quickly. Not only that, we faced serious competition from high-caliber liberal arts colleges,” said Sandy Woodson, teaching professor of humanities, arts and social sciences and Ethics Bowl coach. “Again, Mines students rise to the occasion, performing under pressure with intelligence and poise.”

Making up the team headed to the 2018 nationals are: Meghan Anderson (electrical engineering); Parker Bolstad (environmental engineering); Amara Hazlewood (chemical engineering); Blake Jones (chemical engineering); Nia Watts (computer science); and Daisy White (geophysics).

In Ethics Bowl, teams of three to five students face off to argue and defend moral assessments of the most complex ethical issues facing today’s society. Teams are judged on their ability to demonstrate understanding of the facts, articulate ethical principles, present an effective argument and respond effectively to challenges from the opposing team and judges. 

Around Labor Day, the teams received 15 cases, with brief narratives outlining some of the issues raised by each case. At regionals, 10 of those 15 cases were debated, but no team knew in advance which would be chosen. This year, the cases addressed in competition included the Dakota Access Pipeline, the U.S. Electoral College, the rise of fake news and the ethics of 13 Reasons Why, a TV show about teen suicide.

“As engineers, we benefit from being very logical and that shows in our presentations,” said Bolstad, a junior who was also on the 2016-2017 squad. “While we’re not always the most philosophy-driven, the community members who are judges at regionals appreciate the logic.” 

The Mines team will get the cases for nationals in early January, giving them about three months to prepare their arguments, he said.

“We came in this year with a better understanding of how to utilize what we’re good at,” Bolstad said. “We use this term, ‘Don’t name drop.’ There’s a lot of terminology we can use – the categorical imperative and Kant – but we try not to because that’s not our strength. It’s an Ethics Bowl, not a Philosophy Bowl.”

“I’m excited to take what we learned last year and try to apply that and see if we can do a little better this year,” he said.

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Kee celebrates 40 years of research at Mines

Colorado School of Mines - Wed, 11/29/2017 - 10:51

Friends, family, colleagues and current and former students from as far away as Saudi Arabia and Germany came together Nov. 11 to celebrate Colorado School of Mines professor Bob Kee.

More than 80 guests gathered at Red Rocks Amphitheatre in honor of Kee’s 40 years of research at Mines and his 70th birthday. Guests shared stories, photos and research posters and then toasted Kee and sang “Happy Birthday.”

Kee holds the George R. Brown Distinguished Chair in Mechanical Engineering. His research focuses primarily on the modeling and simulation of chemically reacting fluid flow. Applications are generally in the area of clean energy, including fuel cells, photovoltaics and advanced combustion.

The distinguished guest list included Bob Dibble (KAUST), Linda Petzold (University of California, Santa Barbara), Jim Miller (Argonne National Laboratory), Olaf Deutschmann (Karlsruhe Institute of Technology), Uwe Riedel (DLR – Institute of Combustion Technology), Joe Shepherd (Caltech), Scott Barnett (Northwestern University), Wenhua Yang (Shell Global Solutions) and Kevin Walters (EtaGen, Inc.).

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

NCSA Paves a New Way for Using Geopolymers

HPC Wire - Wed, 11/29/2017 - 08:25

Nov. 29, 2017 — “It was a perfect recipe,” said Dr. Seid Koric, Technical Director for Economic and Societal Impact at the National Center for Supercomputing Applications (NCSA) and Research Associate Professor in the Department of Mechanical Science and Engineering at the University of Illinois. Koric, this year’s winner of the Top Supercomputing Achievement award in the annual HPCwire Editors’ Choice Awards, teamed up with NCSA Faculty Fellow and principal investigator Professor Ange-Therese Akono, geopolymers expert Professor Waltraud “Trudy” Kriven and NCSA research scientist Dr. Erman Guleryuz. Their goal is to understand the impact of nanoporosity on the stiffness and strength of geopolymers via molecular dynamics and finite element modeling.

Professor Akono sees a great need for geopolymers to address the issue of sustainable and affordable housing. “One of the challenges in affordable housing is finding materials alone for suitable conditions. Geopolymers represent cost effective alternatives. Because the chemistry of geopolymers is so versatile, we can cast geopolymers by using local solutions, with less of a carbon footprint than concrete and in less time,” said professor Akono.

Geopolymer composites are a novel class of inorganic, nano-porous polymeric hybrids known for their high heat tolerance and anti-corrosive qualities, with a potential for high strength and a high strength-to-weight ratio. An inherent challenge of their novelty is the lack of long-term data. “We’re still inventing the futures of geopolymers,” said Professor Kriven, winner of the Mueller Award for her twenty-year work on geopolymers.

Additionally, what makes them of particular interest to industry is their versatility and efficiency compared to cement. Beyond housing, Kriven sees potential for geopolymers in renewable energy storage, military applications, road repair, emergency housing and levees, and as a more environmentally friendly substitute for concrete.

Akono set out to use finite element analysis and molecular dynamics at extreme scales on NCSA’s Blue Waters supercomputer to investigate processing-microstructure-property relationships in inorganic geopolymer cements, combining numerical modeling with results of multi-scale experiments to span length scales from the nanometer up to the macroscopic.

“We want to understand the basic behavior of the geopolymer matrix,” said Akono, “and we needed a supercomputer to carry it out and measure the response of a material from nano to macro level. Blue Waters provided great resources to bridge the gap with computing power.”

They used Blue Waters to produce a 3D framework that can be used to design strong geopolymer composites with a wide range of applications, including advanced low-emitting construction materials, recycling of type F fly ash, low-level radioactive waste encapsulation, fire- and corrosion-resistant coatings and thermal barrier coatings. “Parallel processing was key to this project,” said Koric, “and so was memory.” Blue Waters has more memory and faster data storage than any other open system in the world. Koric and Guleryuz helped write the Blue Waters allocation proposal for time on the supercomputer, which led to this work being presented at four conferences and in a journal submission. Less than a year since they began their research, Akono’s group wrote a joint proposal for funding by the National Science Foundation (NSF). Their work, “Multi-scale and Multi-physics Modeling of Na-PS Geopolymer Cement Composites,” was awarded funding in September 2017.

Looking to the future, Koric says he hopes to apply the success of this collaboration to NCSA’s industry partners. “One more thread that we haven’t tried yet is to introduce our industry partners to this material, for concrete and construction applications.”

Source: NCSA

The post NCSA Paves a New Way for Using Geopolymers appeared first on HPCwire.

Simulations Predict that Antarctic Volcanic Ash Can Disrupt Air Traffic in Vast Areas of the Southern Hemisphere

HPC Wire - Tue, 11/28/2017 - 15:23

BARCELONA, Nov. 28, 2017 — Simulations performed by the Barcelona Supercomputing Center in collaboration with the Institut de Ciències de la Terra Jaume Almera – CSIC demonstrated that Antarctic volcanoes might pose a higher threat than previously considered. A study focused on the potential impacts of ash dispersal and fallout from Deception Island highlights how ash clouds trapped in circumpolar upper-level winds have the potential to reach lower latitudes and disrupt air traffic in the Southern Hemisphere. The study was published today in the Nature group journal Scientific Reports.

Image courtesy of BSC

The research is based on several sets of simulations covering different meteorological scenarios and different eruption characteristics. These simulations demonstrated that ash from high-latitude volcanoes such as Deception Island is likely to encircle the globe even in the case of moderate eruptions, reaching as far as tropical latitudes, a vast part of the Atlantic coast of South America, South Africa and parts of southern Oceania. Such a wider dispersion of volcanic particles than previously believed could have significant consequences for aviation safety in these areas.

The experiments were conducted with BSC’s NMMB-MONARCH-ASH meteorological and atmospheric dispersion model at regional and global scales. One of the aims of the study is to raise awareness of the need to perform dedicated hazard assessments to better manage air traffic in case of an eruption. Several volcanic events in recent years, including Eyjafjallajökull (Iceland, 2010), Grímsvötn (Iceland, 2011) and Cordón Caulle (Chile, 2011), have led to large economic losses for the aviation industry and its stakeholders.

The paper concludes that, in specific circumstances, volcanic ash from Antarctic volcanoes can disrupt air traffic not only in the vicinity, but as far away as South Africa (6,400 km) and on flight routes connecting Africa with South America and Australia.

About volcanoes in Antarctica

Of the tens of volcanoes located in Antarctica, at least nine (Berlin, Buckle Island, Deception Island, Erebus, Hudson Mountains, Melbourne, Penguin Island, Takahe, and The Pleiades) are known to be active, and five of them, all stratovolcanoes, have shown frequent volcanic activity in historical times. Deception Island is an active composite volcano with several tens of eruptions in the last 10,000 years.

Located at the spreading center of the Bransfield Strait marginal basin, Deception Island consists of a horseshoe-shaped composite volcanic system truncated by the formation of a collapse caldera, represented as a sea-flooded depression known as Port Foster. Tephra deposits from Deception and neighboring islands reveal over 30 post-caldera Holocene eruptions. However, a considerably higher number of eruptions is inferred to have actually occurred. Indeed, over 50 relatively well-preserved craters and eruptive vents, scattered across the island, can be reconstructed and mapped.

The eruption record of Deception Island since the 19th century reveals periods of high activity (1818–1828, 1906–1912), followed by decades of dormancy (e.g., 1912–1967). The unrest episodes recorded in 1992, 1999 and 2014–2015 demonstrate that the volcanic system is still active and could be a cause of concern in the future.

During the most recent explosive eruptions, in 1967, 1969 and 1970, ash fall and lahars destroyed or severely damaged the scientific bases operating on the island at the time.


About NMMB-MONARCH-ASH

NMMB-MONARCH-ASH is a novel online meteorological and atmospheric transport model that simulates the emission, transport and deposition of tephra (ash) particles released by volcanic eruptions. The model predicts ash cloud trajectories, concentrations at relevant flight levels, and deposit thickness for both regional and global domains.

Reference paper: A. Geyer, A. Martí, S. Giralt, A. Folch. “Potential ash impact from Antarctic volcanoes: Insights from Deception Island’s most recent eruption”. Scientific Reports, 28 November 2017. www.nature.com/articles/s41598-017-16630-9

Simulations’ videos: https://www.bsc.es/ashvideos

About BSC

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specialises in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). BSC also manages the Spanish Supercomputing Network (RES).

It was created in 2005 and is a consortium formed by the Spanish Government Ministry of Economy, Industry and Competitiveness (60%), the Catalan Government Department of Enterprise and Knowledge (30%) and the Universitat Politècnica de Catalunya (UPC) (10%).

Source: Barcelona Supercomputing Center

The post Simulations Predict that Antarctic Volcanic Ash Can Disrupt Air Traffic in Vast Areas of the Southern Hemisphere appeared first on HPCwire.

HPE Partners with COSMOS Research Group and the Cambridge Faculty of Mathematics

HPC Wire - Tue, 11/28/2017 - 14:56

MADRID, Spain, Nov. 28, 2017 — Hewlett Packard Enterprise and the Faculty of Mathematics at the University of Cambridge today announced a collaboration to accelerate new discoveries in the mathematical sciences. This includes partnering with Stephen Hawking’s Centre for Theoretical Cosmology (COSMOS) to understand the origins and structure of the universe. Leveraging the HPE Superdome Flex in-memory computing platform, the COSMOS group will search for clues hiding in massive data sets—spanning 14 billion years of information—that could unlock the secrets of the early universe and black holes.

“Our COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the Big Bang up to today,” said Professor Hawking, the Tsui Wong-Avery Director of Research in Cambridge’s Department of Applied Mathematics and Theoretical Physics. “The recent discovery of gravitational waves offers amazing insights about black holes and the whole Universe. With exciting new data like this, we need flexible and powerful computer systems to keep ahead so we can test our theories and explore new concepts in fundamental physics.”

In 1997, a consortium of leading U.K. cosmologists brought together by Professor Hawking founded the COSMOS supercomputing facility to support research in cosmology, astrophysics and particle physics using shared in-memory computing. Access to new data sets transformed cosmology from speculative theory to quantitative science.

“The influx of new data about the most extreme events in our Universe has led to dramatic progress in cosmology and relativity,” said Professor Paul Shellard, Director of the Centre for Theoretical Cosmology and head of the COSMOS group. “In a fast-moving field we have the twofold challenge of analyzing larger data sets while matching their increasing precision with our theoretical models. In-memory computing allows us to ingest all of this data and act on it immediately, trying out new ideas, new algorithms. It accelerates time to solution and equips us with a powerful tool to probe the big questions about the origin of our Universe.”

The latest supercomputer supporting the work of the faculty, which combines an HPE Superdome Flex with an HPE Apollo supercomputer and Intel Xeon Phi systems, will enable COSMOS to confront cosmological theory with data from the known universe—and incorporate data from new sources, such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies. This computational power helps them search for tiny signatures in huge data sets that could unlock the secrets of the universe.

HPE Superdome Flex leverages the principles of Memory-Driven Computing, the architecture central to HPE’s vision for the future of computing, featuring a pool of memory accessed by compute resources over a high-speed data interconnect. The shared memory and single system design of HPE Superdome Flex enables researchers to solve complex, data-intensive problems holistically and reduces the burden on code developers, enabling users to find answers more quickly.

“The in-memory computing capability of HPE Superdome Flex is uniquely suited to meet the needs of the COSMOS research group,” said Randy Meyer, vice president and general manager, Synergy & Mission Critical Servers, Hewlett Packard Enterprise. “The platform will enable the research team to analyze huge data sets in real time. This means they will be able to find answers faster.”

The supercomputer and its in-memory platform will support not only the COSMOS work but also research in a diverse range of fields across the Faculty of Mathematics, from environmental sciences to medical imaging. The importance of access to computational tools—and the ability to optimize them using local expertise—has been recognized in research projects related to the formation of extra-solar planetary systems, statistical linguistics and brain injuries.

“We are pleased to be partnering with HPE by now offering these unique computing capabilities across the whole Cambridge Faculty of Mathematics,” said Professor Nigel Peake, Head of the Cambridge Department of Applied Mathematics and Theoretical Physics. “High performance computing has become the third pillar of research and we look forward to new developments across the mathematical sciences in areas as diverse as ocean modeling, medical imaging and the physics of soft matter.”

Professor Ray Goldstein, Cambridge’s Schlumberger Professor of Complex Physical Systems, heads a research group using light-sheet microscopy to study the dynamics of morphological transformations occurring in early embryonic development. He is enthusiastic about future opportunities: “The new HPC system will transform our ability to understand these types of processes and to develop quantitative theories for them. It is also a wonderful opportunity to educate researchers about the exciting overlap between high performance computing and experimental biophysics.”

HPE Superdome Flex

The HPE Superdome Flex is the world’s most scalable and modular in-memory computing platform. Designed leveraging principles of Memory-Driven Computing, HPE Superdome Flex can scale from 4 to 32 sockets and 768GB to 48TB of shared memory in a single system, delivering unmatched compute power for the most demanding applications.

The HPE Superdome Flex is now available. For more information, please visit the Superdome Flex page here.

About Hewlett Packard Enterprise
Hewlett Packard Enterprise is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

About University of Cambridge, Faculty of Mathematics
This consists of the Department of Applied Mathematics and Theoretical Physics and its sister Department of Pure Mathematics and Mathematical Statistics, which together form one of the largest and strongest mathematics faculties in Europe. Located in the award-winning Centre for Mathematical Sciences (see www.maths.cam.ac.uk), there are about 400 staff members (including PhD students) and over 800 undergraduate and postgraduate students enrolled in Parts I to III of the Mathematical Tripos.

About Cosmos Group
The Centre for Theoretical Cosmology (CTC) was established by Professor Stephen Hawking in 2007 within the Department of Applied Mathematics and Theoretical Physics. It exists to advance the scientific understanding of our Universe, developing and testing mathematical theories for cosmology and black holes. CTC is one of the largest research groups within DAMTP, also supporting postdoctoral fellowships, academic programmes and topical workshops (see www.ctc.cam.ac.uk).

Source: HPE

The post HPE Partners with COSMOS Research Group and the Cambridge Faculty of Mathematics appeared first on HPCwire.

UMass Amherst Computer Scientist and International Team Offer Theoretical Solution to 36-Year-Old Computation Problem

HPC Wire - Tue, 11/28/2017 - 14:47

AMHERST, Mass., Nov. 28, 2017 – University of Massachusetts Amherst computer science researcher Barna Saha, with colleagues at MIT and elsewhere, is reporting the theoretical solution to a 36-year-old problem in RNA folding prediction, which is widely used in biology for understanding genome sequences.

The authors presented preliminary results in an extended abstract at the Foundations of Computer Science conference in New Brunswick, N.J. Their final article will appear as one of the conference’s selected papers in an upcoming special issue, expected in 2019, of the SIAM Journal on Computing.

As Saha, an expert in algorithms, explains, computational approaches to find the secondary structure of RNA molecules are used extensively in bioinformatics applications. Knowing more about RNA structure may reveal clues to its role in the origin and evolution of life on earth, but experimental approaches are difficult, expensive and time-consuming. Computational methods can be helpful, and when integrated with experimental data can add to knowledge, she adds.

Among the early researchers to take on the problem were structural chemist Ruth Nussinov in Israel and microbiologist Ann Jacobson in Stony Brook, N.Y., who in 1980 published an algorithm for predicting the secondary structure of single-strand RNA, an accomplishment that “has been in the heart of many further developments in this basic problem” ever since, Saha says. “Over the past 36 years, the cubic running time for their algorithm has not been improved,” she adds, which led many to believe that it is the best running time possible for this problem.

Cubic running time refers to how long a computer takes to do the required calculations, Saha explains: it grows as the cube of the length of the RNA base-pair string entered as data. For example, if the string has 1,000 base pairs, the running time for Nussinov and Jacobson’s algorithm scales as 1,000 cubed, or 1,000 x 1,000 x 1,000 = one billion steps.
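The three nested loops behind the cubic bound are easy to see in a minimal sketch of a Nussinov-style dynamic program (the function name and the simple pair-counting score are illustrative; real folding programs add minimum loop lengths and thermodynamic parameters):

```python
def nussinov_max_pairs(seq):
    """Maximum number of nested complementary base pairs for an RNA
    string, computed in O(n^3) time and O(n^2) space."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n < 2:
        return 0
    dp = [[0] * n for _ in range(n)]  # dp[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])   # i or j unpaired
            if (seq[i], seq[j]) in pairs:            # i pairs with j
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):                # bifurcation: this
                best = max(best, dp[i][k] + dp[k + 1][j])  # loop makes it cubic
            dp[i][j] = best
    return dp[0][n - 1]
```

For each of the O(n^2) subsequences (i, j), the inner bifurcation loop over k does O(n) work, which is where the cubic total comes from.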

However, Saha and colleagues Virginia Vassilevska Williams at MIT, Karl Bringmann at the Max Planck Institute, Saarbrücken, and Fabrizio Grandoni at the Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Switzerland, now say they have shown theoretically that there is a faster, subcubic algorithm possible for RNA folding computations.

Saha says, “Our algorithm is the first one that takes the Nussinov and Jacobson model and improves on it. We show that you can solve the problem faster than cubic running time.” She and her colleagues developed a new, faster algorithm for a special kind of matrix multiplication, which they used to reduce the exponent in the running time from 3 to 2.82 (that is, from the cube of the base pair string’s length to roughly its 2.82nd power). “It may not be the fastest yet, there might be room for improvement,” she says. “This is the first work that breaks the barrier.”
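A quick back-of-the-envelope calculation (illustrative only, not from the paper) shows what shaving the exponent from 3 to 2.82 buys: the theoretical speedup factor is n to the 0.18, so it keeps growing with the input size.

```python
# Ratio of n^3 to n^2.82 operations: the theoretical speedup grows as n**0.18.
for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7}: theoretical speedup ~ {n ** 0.18:.1f}x")
```

For a 1,000-base-pair string the saving is modest (about 3.5x), but asymptotically it is unbounded, which is what makes breaking the cubic barrier significant.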

Vassilevska adds, “Before our algorithm, it seemed quite plausible that there is a cubic barrier for RNA-folding. In fact, with co-authors I tried to prove this. We failed – the cubic barrier didn’t seem to follow from any known hypotheses. This failure later helped us break the cubic barrier – if a problem is not hard, it’s likely easy. One of the most fascinating things about our solution is that it is more general than RNA-folding. It breaks the cubic barrier for several other problems, and could potentially lead to further breakthroughs in our understanding for fundamental problems such as shortest paths in networks, pattern recognition and so on.”

Earlier this year, Saha was awarded a five-year Faculty Early Career Development (CAREER) grant from the National Science Foundation, its highest award in support of junior faculty, which supported her work on this project. She says that in future papers, she plans to show that running time can be improved further if the researcher is willing to allow the algorithm to yield “slightly suboptimal folding structures,” that is, the solution will be very close but not precisely correct.

Source: UMass Amherst

The post UMass Amherst Computer Scientist and International Team Offer Theoretical Solution to 36-Year-Old Computation Problem appeared first on HPCwire.

SC17 Cluster Competition: Who Won and Why? Results Analyzed and Over-Analyzed

HPC Wire - Tue, 11/28/2017 - 14:08

Everyone by now knows that Nanyang Technological University of Singapore (NTU) took home the Highest LINPACK Award and the Overall Championship from the recently concluded SC17 Student Cluster Competition.

We also already know how the teams did in the Highest LINPACK and Highest HPCG competitions, with Nanyang grabbing bragging rights for both benchmarks.

Now it’s time to dive into the results, see how the other teams did, and figure out what’s what. Let’s walk through the application and task results and see what happened.

The Interview: All of the teams did pretty well on the interview portion of the competition. This is a pressure-packed part of the competition. HPC subject matter experts grill the students on everything from how they configured their cluster to detailed questions about each of the benchmarks and applications. There’s no hiding from their probing questions.

Nanyang had the highest interview score, notching an almost perfect 97%, but they were closely followed by Team Texas and Tsinghua, who tied for second with 96%.

Team Chicago Fusion (IIT/MHS/SHS) deserves an honorable mention for only being 3% off the winning mark on the interview portion of the scoring.

All of the teams did well in this area, as you can tell by the average/median score of 93%.

The ‘mystery application’ is an app that students learn about only when they’re at the competition. There’s no preparing for it; it’s like suddenly being told in a basketball game that for one quarter the hoop height will be increased to 15 feet or decreased to five.

The mystery app for 2017 is MPAS-A, an application developed by Los Alamos National Laboratory and the National Center for Atmospheric Research to build weather simulations. Students were given the task of modeling what would happen to the rest of the atmosphere if excess carbon were sequestered in Antarctica.

This is Team Chicago Fusion’s best application – they nailed it and left it for dead with a score of 100%. Nanyang almost scored the bullseye with a score of 99% and Tsinghua was an eyelash behind, posting a score of 98%. NTHU finished just out of the money with a 97% score.

As you can see by the high median score, most of the teams were bunched up on the good side of the average – meaning that most teams scored well on this application, with a few outliers on the low side.

The next task up is the Reproducibility exercise. This is where the teams take a paper that was submitted for SC16 and try to reproduce the results – either proving the paper is valid, or…well, not so valid.

The paper this year has an intriguing title, “The Vectorization of the Tersoff Multi-Body Potential: An Exercise in Performance Portability”, and shows how to use a vectorization scheme to achieve high cross-platform (CPU and accelerator) performance.

Student teams have to use the artifact section of the paper to reproduce the results and either prove or disprove the paper, then submit a report detailing and justifying how they arrived at their conclusion.

Nanyang posted another win, building on their lead over the rest of the pack. Team Texas took home second place, only six points behind Nanyang. NEU makes the winners’ board for the first time in the competition with their third-place showing.

Team Chicago Fusion gets an Honorable Mention for their score of 82%, just a couple of points away from second and third place, while Team Illinois Urbana-Champaign and Taiwan’s NTHU finished in a virtual tie, at 80% and 79% (and some change), respectively.

The rest of the teams had at least some trouble with this task, as witnessed by the median being significantly higher than the mean score: several teams encountered difficulties completing it. But, hey, who said this was going to be easy?

Speaking of things that are difficult, how about that MrBayes? This year, the students were using MrBayes to examine how viruses transmitted by whiteflies are impacting cassava production in Africa.

This wasn’t an easy application for most of the teams. While Tsinghua pulled down a 99% score, closely trailed by Nanyang with 98%, the average score on this app was only 67% and the median was 64%.

This was a great app for NEU, however, with their 96% score putting them in the winners’ circle. Team Chicago Fusion was just a few fractions of a point behind NEU, nabbing the Honorable Mention.

The most difficult application in this edition of the cluster competition looks to be Born, a seismic imaging app used to identify oil and gas reserves. It’s not that this was necessarily the most complicated or difficult-to-understand application; it’s that it was so damned time-consuming. And it’s the time-consuming nature of Born that separated the teams in the final accounting.

The teams had to try to process 1,136 Born “shots.” Each shot is independent of the others, which makes for an embarrassingly parallel application – great, right? Well, no. Running on CPUs alone, each Born shot takes somewhere between two and three hours. Ouch.
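Since each shot is independent, the work maps cleanly onto a pool of workers. Here is a minimal sketch of that pattern in Python; `process_shot` is a hypothetical stand-in for the real Born kernel (which the teams were given and we are not), so the “result” computed here is fake.

```python
# Sketch of farming out independent Born "shots" across CPU workers.
# process_shot is a placeholder: the real kernel runs hours of seismic
# migration per shot, which is exactly why GPUs or cloud capacity mattered.
from multiprocessing import Pool

def process_shot(shot_id):
    # Placeholder computation standing in for the real seismic kernel.
    return shot_id, shot_id * shot_id

if __name__ == "__main__":
    shot_ids = range(1136)              # the 1,136 independent shots
    with Pool() as pool:                # one worker per CPU core by default
        results = dict(pool.map(process_shot, shot_ids))
    print(len(results))                 # all shots accounted for
```

The pattern scales with however many workers you can throw at it, which is why the winning move, as the article notes, was porting the per-shot kernel to GPUs rather than grinding through it on CPUs.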

Several of the teams decided to use their cloud budget and run a bunch of Born instances in the cloud. While this was a good idea, the teams didn’t have enough cloud capacity to run all that much Born – particularly since each shot took so long to complete.

The best approach was to port Born onto GPUs, as four or five teams proved. The top teams on our leaderboard all ported Born and reaped great dividends. Tsinghua completed the entire slate of 1,136 shots and posted a score of 99%. Nanyang also completed all of the shots and took home second place with their score of 90%. NTHU was a nanometer behind and grabbed third place. USTC gets an honorable mention for posting a score of 83%.

The rest of the teams didn’t do so hot on this one. The average score was a competition-low 63%, with the median score at 55%. This was a tough mountain to climb, and if you didn’t port Born over to GPUs, you didn’t have a chance to complete all of the shots, even if you devoted your entire cluster to the task.

Looking at the final stats, Nanyang was the clear winner with an astounding 95 out of a possible 100 points. NTHU and Tsinghua finished very close together, with NTHU nabbing second place by fractions of a percent. Team Peking, a relative newbie with this being their second appearance in the competition, takes home an honorable mention.

These teams finished above the rest of the pack by a respectable margin, as shown by the average score of 70% and the median score of 71%. But, at the end of the day, all of the teams were winners. Everyone showed up, no one gave up, and everyone learned a lot (including me, in fact).

So that’s another student cluster competition in the books. If you’ve missed any of our coverage, you can catch up on it using the following links.

For an intro into the high-stakes world of student cluster competitions, look here.

If you want to see what kind of hardware the teams are driving, here are their configs.

If you want to see the applications and tasks for this year’s event, click your clicker here.

To meet this year’s teams via our video interviews, click here for the American teams, here for the European teams, and here for the Asian teams.

One final note: the bettors in our betting pool were woefully uninformed. The ‘smart money’ was pretty dumb this year, given that the winning Nanyang team was placed as a 35-1 underdog. Wow, if this was a real pool, anyone betting on Nanyang would have really cleaned up!

We’ll be back with more Student Cluster Competition features, more competitions, and even better coverage in 2018. Stay tuned.

The post SC17 Cluster Competition: Who Won and Why? Results Analyzed and Over-Analyzed appeared first on HPCwire.

Vintage Cray Supercomputer Rolls Up to Auction

HPC Wire - Tue, 11/28/2017 - 14:08

Where do you go to scratch your itch for a vintage Cray? Why, eBay, of course.

Our search wizards were up early this morning and spotted a listing for a “Vintage Cray C90/J916 Super Computer” which includes the 48-foot Ellis & Watts trailer that the supercomputer is mounted to. [Note: this is a J90 series system, not the earlier C90, as the included images confirm.]

Source: eBay auction item

Codenamed “Jedi” during development, the Cray J90 series was first sold by Cray Research in 1994. It was an entry-level, air-cooled vector processor supercomputer that evolved from the Cray Y-MP EL minisupercomputer. It is compatible with Y-MP software and runs the same UNICOS operating system, Cray’s version of Unix.

As Wikipedia notes, “the J90 supported up to 32 CMOS processors with a 10 ns (100 MHz) clock.” The J916 is the 16 processor model. There was also the J98 with up to eight processors, and the J932 with up to 32 processors. Fully configured with 4 GB of main memory and up to 48 GB/s of memory bandwidth, the J90 offered “considerably less performance than the contemporary Cray T90,” but was “a strong competitor to other technical computers in its price range,” according to the Wikipedia entry.

The seller reports that the unit, which comes equipped with cooling systems (that “need some restoring work”), is untested and requires a 480v connection to hook up the trailer.

Source: eBay auction item

Currently, one person has bid $3,999 on the auction. Shipping is listed at $3,000 with free local pickup in San Jose, California.

If you want a piece of Cray history in time for the holidays but aren’t looking to spend quite that much, there’s other Cray memorabilia to choose from, like this vintage Cray Research champagne glass (buy it now: $27.50), a Cray Y-MP C90 bomber jacket direct from Wisconsin (buy it now: $99), or, if it’s hardware you crave, a Cray X-MP memory board (buy it now: $146).

As for the “portable” Cray J90, the eBay lister doesn’t say much about its provenance, stating only, “Pickup from company been storing for many years. Please send me an email for more info.”

The post Vintage Cray Supercomputer Rolls Up to Auction appeared first on HPCwire.

Globus downtime

University of Colorado Boulder - Tue, 11/28/2017 - 10:41
Categories: Partner News

