HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Technical Program Chair David Keyes Announces Changes for SC18

Thu, 02/15/2018 - 08:44

Feb. 15, 2018 — SC18 Technical Program Chair David Keyes today announced major changes to the program and planning for SC18. Keyes outlined the changes in an article, which is included in full below.

Important News from SC18 Technical Program Chair David Keyes

David Keyes

How do we make the best even better? It’s an important question to ask as a team of more than 400 volunteers undertakes to create a world-class SC18 technical program. It is a daunting task to live up to the 30-year history of distinction SC has carved for itself.

No other HPC conference delivers such a broad diversity of topics and depth of insight, and it’s a thrill to be helming such an international effort.

As we seek to achieve even more with our technical program, you’ll see some exciting changes built into the planning for SC18 in Dallas this November.

With the help and support of our accomplished team of diverse and talented chairs who hail from industry, nonprofit organizations, laboratories and academia, we have determined to:

–Build on SC’s program quality reputation by strengthening the already rigorous double-blind review process for technical paper submissions, resulting in a new two-stage submission process and some revised submission dates worth noting;

–Add the option of reproducibility supplements to Workshops and Posters;

–Include new proceedings venues for Workshops and the SciViz Showcase;

–Call on technical submissions to emphasize the theme of “convergence” among our four charter areas of exploration—High Performance Computing, Networking, Storage and Analysis;

–Consolidate all five poster categories into a single exhibit space;

–Offer career-themed Fireside Chats;

–Move tutorial materials away from the USB-stick distribution method to code-enabled online content distribution onsite, ensuring fully up-to-date materials;

–Adapt physical meeting spaces to better accommodate registrants for technical programs.

We welcome the opportunity to hear your comments and questions. If you have insights to share, please send them to techprogram@info.supercomputing.org.

Source: David Keyes, SC18


Fluid HPC: How Extreme-Scale Computing Should Respond to Meltdown and Spectre

Thu, 02/15/2018 - 08:05

The Meltdown and Spectre vulnerabilities are proving difficult to fix, and initial experiments suggest security patches will cause significant performance penalties to HPC applications. Even as these patches are rolled out to current HPC platforms, it might be helpful to explore how future HPC systems could be better insulated from CPU or operating system security flaws that could cause massive disruptions. Surprisingly, most of the core concepts to build supercomputers that are resistant to a wide range of threats have already been invented and deployed in HPC systems over the past 20 years. Combining these technologies, concepts, and approaches not only would improve cybersecurity but also would have broader benefits for improving HPC performance, developing scientific software, adopting advanced hardware such as neuromorphic chips, and building easy-to-deploy data and analysis services. This new form of “Fluid HPC” would do more than solve current vulnerabilities. As an enabling technology, Fluid HPC would be transformative, dramatically improving extreme-scale code development in the same way that virtual machine and container technologies made cloud computing possible and built a new industry.

In today’s extreme-scale platforms, compute nodes are essentially embedded computing devices that are given to a specific user during a job and then cleaned up and provided to the next user and job. This “space-sharing” model, where the supercomputer is divided up and shared by doling out whole nodes to users, has been common for decades. Several non-HPC research projects over the years have explored providing whole nodes, as raw hardware, to applications. In fact, the cloud computing industry uses software stacks to support this “bare-metal provisioning” model, and Ethernet switch vendors have also embraced the functionality required to support this model. Several classic supercomputers, such as the Cray T3D and the IBM Blue Gene/P, provided nodes to users in a lightweight and fluid manner. By carefully separating the management of compute node hardware from the software executed on those nodes, an out-of-band control system can provide many benefits, from improved cybersecurity to shorter Exascale Computing Project (ECP) software development cycles.

Updating HPC architectures and system software to provide Fluid HPC must be done carefully. In some places, changes to the core management infrastructure are needed. However, many of the component technologies were invented more than a decade ago or simply need updating. Three key architectural modifications are required.

  1. HPC storage services and parallel I/O systems must be updated to use modern, token-based authentication. For many years, web-based services have used standardized technologies like OAuth to provide safe access to sensitive data, such as medical and financial records. Such technologies are at the core of many single-sign-on services that we use for official business processes. These token-based methods allow clients to connect to storage services and read and write data by presenting the appropriate token, rather than, for example, relying on client-side credentials and access from restricted network ports. Some data services, such as Globus, MongoDB, and Spark, have already shifted to allow token-based authentication. As a side effect, this update to HPC infrastructure would permit DOE research teams to fluidly and easily configure new storage and data services, both locally and remotely, without needing special administration privileges. In the same way that a website such as OpenTable.com can accept Facebook or Google user credentials, an ECP data team could create a new service that easily accepted NERSC or ALCF credentials. Moving to modern token-based authentication will improve cybersecurity, too; compromised compute nodes would not be able to read another user’s data. Rather, they would have access only to the areas for which an authentication token had been provided by the out-of-band system management layer (a minimal sketch of this token-presenting pattern appears after this list).
  2. HPC interconnects must be updated to integrate technology from software-defined networking (SDN). OpenFlow, an SDN standard, is already implemented in many commercial Ethernet switches. SDN allows massive cloud computing providers such as Google, Facebook, Amazon, and Microsoft to manage and separate traffic within a data center, preventing proprietary data from flowing past nodes that could be maliciously snooping. A compromised node must be prevented from snooping other traffic or spoofing other nodes. Essentially, SDN decouples the control plane and data movement from the physical and logical configuration. Updating the HPC interconnect to use SDN technologies would improve cybersecurity and also keep errant HPC programs from interfering or conflicting with other jobs. With SDN technology, a confused MPI process would not be able to send data to another user’s node, because the software-defined network for the user, configured by the external system management layer, would not route the traffic to unconfirmed destinations.
  3. Compute nodes must be efficiently reinitialized, clearing local state between user jobs. Many HPC platforms were designed to support rebooting and recycling compute nodes between jobs. Decades ago, netbooting Beowulf clusters was common. By quickly reinitializing a node and carefully clearing previous memory state, data from one job cannot be leaked to another. Without this step, a privilege-escalation vulnerability would let a user read data left on the node by the previous job and leave behind malware to watch future jobs. Restarting nodes before each job improves system reliability, too. While rebooting sounds simple, guaranteeing that RAM and even NVRAM are clean between reboots might require advanced techniques. Fortunately, several CPU companies have been adding memory encryption engines, and NVRAM producers have added similar features; purging the ephemeral encryption key is equivalent to clearing memory. This feature is used to instantly wipe modern smartphones, such as Apple’s iPhone. Wiping state between users can provide significant improvements to security and productivity.
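To make the token-based pattern in item 1 concrete, here is a minimal Python sketch. The token-service and storage-gateway URLs, the scope strings, and the JSON field names are hypothetical placeholders rather than any real NERSC, ALCF, or Globus API; the point is simply that a client reads and writes data by presenting a bearer token issued out of band, instead of relying on node-local credentials or restricted network ports.

```python
import requests

# Hypothetical endpoints: stand-ins for whatever token service and
# storage gateway a Fluid HPC site would actually expose.
TOKEN_SERVICE = "https://auth.example-hpc-center.org/oauth/token"
STORAGE_GATEWAY = "https://storage.example-hpc-center.org/api/v1"


def fetch_job_token(client_id: str, client_secret: str, scope: str) -> str:
    """Obtain a short-lived bearer token scoped to one job's data areas.

    In the Fluid HPC model this exchange would be performed by the
    out-of-band system management layer, not by code on the compute node.
    """
    resp = requests.post(
        TOKEN_SERVICE,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,  # e.g. "read:/project/run42 write:/scratch/run42"
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def read_dataset(token: str, path: str) -> bytes:
    """Read data by presenting the token; no client-side credentials needed."""
    resp = requests.get(
        f"{STORAGE_GATEWAY}/objects/{path}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.content


if __name__ == "__main__":
    token = fetch_job_token("job-12345", "secret-from-management-layer",
                            "read:/project/run42")
    data = read_dataset(token, "project/run42/inputs.h5")
    print(f"fetched {len(data)} bytes")
```

A compromised compute node holding only such a token could reach only the paths named in its scope, which is exactly the containment property described above.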

These three foundational architectural improvements to create a Fluid HPC system must be connected into an improved external system management layer. That layer would “wire up” the software-defined network for the user’s job, hand out storage system authentication tokens, and push a customized operating system or software stack onto the bare-metal provisioned hardware. Modern cloud-based data centers and their software communities have engineered a wide range of technologies to fluidly manage and deploy platforms and applications. The concepts and technologies in projects such as OpenStack, Kubernetes, Mesos, and Docker Swarm can be leveraged for extreme-scale computing without hindering performance. In fact, experimental testbeds such as the Chameleon cluster at the University of Chicago and the Texas Advanced Computing Center have already put some of these concepts into practice and would be an ideal location to test and develop a prototype of Fluid HPC.
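As a rough illustration of how such an external management layer might sequence its work, the following Python sketch strings the three actions together as stubs. The class and function names are invented for this example and do not correspond to OpenStack, Kubernetes, or any real scheduler API; a production implementation would call out to the site's SDN controller, token service, and provisioning system at each step.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class JobRequest:
    """What a user hands to the out-of-band management layer."""
    user: str
    nodes: List[str]          # bare-metal nodes assigned by the scheduler
    image: str                # user-chosen OS / software stack image
    storage_scopes: List[str] = field(default_factory=list)


def wire_software_defined_network(job: JobRequest) -> None:
    # Install flow rules so job nodes can talk only to each other and to
    # the storage gateways named in the job's scopes (item 2 above).
    print(f"[SDN] isolating {len(job.nodes)} nodes for {job.user}")


def issue_storage_tokens(job: JobRequest) -> Dict[str, str]:
    # Hand out short-lived bearer tokens for exactly the requested
    # scopes (item 1 above).
    print(f"[auth] issuing tokens for scopes {job.storage_scopes}")
    return {scope: f"token-for-{scope}" for scope in job.storage_scopes}


def provision_nodes(job: JobRequest) -> None:
    # Push the user's OS / software stack onto the bare-metal nodes after
    # clearing any state left by the previous job (item 3 above).
    for node in job.nodes:
        print(f"[boot] wiping and imaging {node} with {job.image}")


def launch(job: JobRequest) -> None:
    """The 'wire up, hand out, push' sequence described in the article."""
    wire_software_defined_network(job)
    tokens = issue_storage_tokens(job)
    provision_nodes(job)
    print(f"[run] starting {job.user}'s job with {len(tokens)} storage tokens")


if __name__ == "__main__":
    launch(JobRequest(user="alice",
                      nodes=[f"nid{n:05d}" for n in range(4)],
                      image="docker:ecp/astro-sim:2018.02",
                      storage_scopes=["read:/project/astro",
                                      "write:/scratch/alice"]))
```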

These architectural changes make HPC platforms programmable again. The software-defined everything movement is fundamentally about programmable infrastructure. Retooling our systems to enable Fluid HPC with what is essentially a collection of previously discovered concepts, rebuilt with today’s technology, will make our supercomputers programmable in new ways and have a dramatic impact on HPC software development.

  1. Meltdown and Spectre would cause no performance degradation on Fluid HPC systems. In Fluid HPC, compute nodes are managed as embedded systems. Nodes are given completely to users, exactly as many hero programmers have been requesting for years. The security perimeter around an embedded system leverages different cybersecurity techniques. The CPU flaws that gave us Meltdown and Spectre can be isolated by using the surrounding control system, rather than adding performance-squandering patches to the node. Overall cybersecurity will improve by discarding the weak protections in compute nodes and building security into the infrastructure instead.
  2. Extreme-scale platforms would immediately become the world’s largest software testbeds. Currently, testing new memory management techniques or advanced data and analysis services is nearly impossible on today’s large DOE platforms. Without the advanced controls and out-of-band management provided by Fluid HPC, system operators have no practical method to manage experimental software on production systems. Furthermore, without token-based authentication to storage systems and careful network management to prevent accidental or mischievous malformed network data, new low-level components can cause system instability. By addressing these issues with Fluid HPC, the world’s largest platforms could be immediately used to test and develop novel computer science research and completely new software stacks on a per job basis.
  3. Extreme-scale software development would be easier and faster. For the same reason that the broader software development world is clamoring to use container technologies such as Docker to make software easier to write and deploy, giving HPC code developers Fluid HPC systems would be a disruptive improvement to software development. Coders could quickly test-deploy any change to the software stack on a per-job basis. They could even use machine learning to automatically explore and tune software stacks and parameters. They could ship those software stack modifications across the ocean in an instant, to be tried by collaborators running code on other Fluid HPC systems. Easy performance regression testing would be possible. The ECP community could package software simply. We can even imagine running Amazon-style lambda functions on HPC infrastructure. In short, the HPC community would develop software just as the rest of the world does.
  4. The HPC community could easily develop and deploy new experimental data and analysis services. Deploying an experimental data service or file system is extremely difficult. Currently, there are no common, practical methods for developers to submit a job to a set of file servers with attached storage in order to create a new parallel I/O system, and then give permission to compute jobs to connect and use the service. Likewise, HPC operators cannot easily test-deploy new versions of storage services against particular user applications. With the Fluid HPC model, however, a user could instantly create a memcached-based storage service, MongoDB, or Spark cluster on a few thousand compute nodes. Fluid HPC would make the infrastructure programmable; the impediments users now face deploying big data applications on big iron would be eliminated.
  5. Fluid HPC would enable novel, improved HPC architectures. With intelligent and programmable system management layers, modern authentication, software-defined networks, and dynamic software stacks provided by the basic platform, new types of accelerators—from neuromorphic to FPGAs—could be quickly added to Fluid HPC platforms. These new devices could be integrated as a set of disaggregated network-attached resources or attached to CPUs without needing to support multiuser and kernel protections. For example, neuromorphic accelerators could be quickly added without the need to support memory protection or multiuser interfaces. Furthermore, the low-level software stack could jettison the unneeded protection layers, permission checks, and security policies in the node operating system.

It is time for the HPC community to redesign how we manage and deploy software and operate extreme-scale platforms. Computer science concepts are often rediscovered or modernized years after being initially prototyped. Many classic concepts can be recombined and improved with technologies already deployed in the world’s largest data centers to enable Fluid HPC. In exchange, users would receive improved flexibility and faster software development—a supercomputer that not only runs programs but is programmable. Users would have choices and could adapt their code to any software stack or big data service that meets their needs. System operators would be able to improve security, isolation, and the rollout of new software components. Fluid HPC would enable the convergence of HPC and big data infrastructures and radically improve the environments for HPC software development. Furthermore, if Moore’s law is indeed slowing and a technology to replace CMOS is not ready, the extreme flexibility of Fluid HPC would speed the integration of novel architectures while also improving cybersecurity.

It’s hard to thank Meltdown and Spectre for kicking the HPC community into action, but we should nevertheless take the opportunity to aggressively pursue Fluid HPC and reshape our software tools and management strategies.

Acknowledgments: I thank Micah Beck, Andrew Chien, Ian Foster, Bill Gropp, Kamil Iskra, Kate Keahey, Arthur Barney Maccabe, Marc Snir, Swann Perarnau, Dan Reed, and Rob Ross for providing feedback and brainstorming on this topic.

About the Author

Pete Beckman

Pete Beckman is the co-director of the Northwestern University / Argonne Institute for Science and Engineering and designs, builds, and deploys software and hardware for advanced computing systems. When Pete was the director of the Argonne Leadership Computing Facility, he led the team that deployed the world’s largest supercomputer for open science research. He has also designed and built massive distributed computing systems. As chief architect for the TeraGrid, Pete oversaw the team that built the world’s most powerful Grid computing system for linking production HPC centers for the National Science Foundation. He coordinates the collaborative research activities in extreme-scale computing between the US Department of Energy (DOE) and Japan’s Ministry of Education, Science, and Technology and leads the operating system and run-time software research project for Argo, a DOE Exascale Computing Project. As founder and leader of the Waggle project for smart sensors and edge computing, he is designing the hardware platform and software architecture used by the Chicago Array of Things project to deploy hundreds of sensors in cities including Chicago, Portland, Seattle, Syracuse, and Detroit. Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985).


DOE Gets New Office of Cybersecurity, Energy Security, and Emergency Response

Wed, 02/14/2018 - 14:35

WASHINGTON, D.C., Feb. 14 – Today, U.S. Secretary of Energy Rick Perry is establishing a new Office of Cybersecurity, Energy Security, and Emergency Response (CESER) at the U.S. Department of Energy (DOE). $96 million in funding for the office was included in President Trump’s FY19 budget request to bolster DOE’s efforts in cybersecurity and energy security.

The CESER office will be led by an Assistant Secretary who will focus on energy infrastructure security, support the expanded national security responsibilities assigned to the Department, and report to the Under Secretary of Energy.

“DOE plays a vital role in protecting our nation’s energy infrastructure from cyber threats, physical attack and natural disaster, and as Secretary, I have no higher priority,” said Secretary Perry. “This new office best positions the Department to address the emerging threats of tomorrow while protecting the reliable flow of energy to Americans today.”

The creation of the CESER office will elevate the Department’s focus on energy infrastructure protection and will enable more coordinated preparedness and response to natural and man-made threats.

Source: US Department of Energy


Intel Touts Silicon Spin Qubits for Quantum Computing

Wed, 02/14/2018 - 13:54

The debate around what makes a good qubit and how best to manufacture them is sprawling, with many insistent voices favoring one approach or another. Referencing a paper published today in Nature, Intel has offered a quick take on the promise of silicon spin qubits, one of two approaches to quantum computing that Intel is exploring.

Silicon spin qubits, which leverage the spin of a single electron on a silicon device to perform quantum calculations, offer several advantages over their more familiar superconducting counterparts, Intel contends. They are physically smaller, are expected to have longer coherence times, should scale well, and can likely be fabricated using familiar processes.

“Intel has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers sourced specifically for the production of spin-qubit test chips. Fabricated in the same facility as Intel’s advanced transistor technologies, Intel is now testing the initial wafers. Within a couple of months, Intel expects to be producing many wafers per week, each with thousands of small qubit arrays,” according to the Intel news brief posted online today.

Intel has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers, like this one. (Credit: Walden Kirsch/Intel)

The topic isn’t exactly new; the use of quantum dots for qubits has long been studied. The new Nature paper, A programmable two-qubit quantum processor in silicon, demonstrates a way to overcome some of the cross-talk obstacles that arise when using quantum dots.

Abstract excerpt: “[W]e overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Josza algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 percent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.”

Intel emphasizes that silicon spin qubits can operate at higher temperatures than superconducting qubits (1 kelvin as opposed to 20 millikelvin). “This could drastically reduce the complexity of the system required to operate the chips by allowing the integration of control electronics much closer to the processor. Intel and academic research partner QuTech are exploring higher temperature operation of spin qubits with interesting results up to 1 kelvin, or 50x warmer than the temperatures at which superconducting qubits operate. The team is planning to share the results at the American Physical Society (APS) meeting in March.”

A link to a simple but neat video explaining programming on a silicon chip is below.

Link to Nature paper: https://www.nature.com/articles/nature25766

Link to full Intel release: https://newsroom.intel.com/news/intel-sees-promise-silicon-spin-qubits-quantum-computing/


PNNL, OHSU Create Joint Research Co-Laboratory to Advance Precision Medicine

Wed, 02/14/2018 - 13:30

PORTLAND, Ore., Feb. 14, 2018 — Pacific Northwest National Laboratory and OHSU today announced a collaboration to improve patient care by focusing research on highly complex sets of biomedical data, and on the tools to interpret them.

The OHSU-PNNL Precision Medicine Innovation Co-Laboratory, called PMedIC, will provide a comprehensive ecosystem for scientists to utilize integrated ‘omics, data science and imaging technologies in their research in order to advance precision medicine — an approach to disease treatment that takes into account individual variability in genes, environment and lifestyle for each person.

“This effort brings together the unique and complementary strengths of Oregon’s only academic medical center, which has a reputation for innovative clinical trial designs, and a national laboratory with an international reputation for basic science and technology development in support of biological applications,” said OHSU President Joe Robertson. “Together, OHSU and PNNL will be able to solve complex problems in biomedical research that neither institution could solve alone.”

“The leading biomedical research and clinical work performed at OHSU pairs well with PNNL’s world-class expertise in data science and mass spectrometry analyses of proteins and genes,” said PNNL Director Steven Ashby. “By combining our complementary capabilities, we will make fundamental discoveries and accelerate our ability to tailor healthcare to individual patients.”

The co-laboratory will strengthen and expand the scope of existing interactions between OHSU and PNNL that already include cancer, regulation of cardiovascular function, immunology and infection, and brain function, and it will add new collaborations in areas from metabolism to exposure science. The collaboration brings together the two institutions’ strengths in data science, imaging and integrated ‘omics, which explores how genes, proteins and various metabolic products interact. The term arises from research that explores the function of key biological components within the context of the entire cell — genomics for genes, proteomics for proteins, and so on.

“PNNL has a reputation for excellence in the technical skill sets required for precision medicine, specifically advanced ‘omic’ platforms that measure the body’s key molecules — genes, proteins and metabolites — and the advanced data analysis methods to interpret these measurements,” said Karin Rodland, director of biomedical partnerships at PNNL. “Pairing these capabilities with the outstanding biomedical research environment and innovative clinical trials at OHSU will advance the field of precision medicine and lead to improved patient outcomes.”

In the long term, OHSU and PNNL aim to foster a generation of biomedical researchers fluent in all the aspects of the science underlying precision medicine, from clinical trials to molecular and computational biology to bioengineering and technology development — a new generation of scientists skilled in translating basic science discoveries to clinical care.

“Just as we have many neuroscientists focused on brain science at OHSU, we have many researchers taking different approaches on the path toward the goal of precision medicine,” said Mary Heinricher, associate dean of basic research, OHSU School of Medicine. “I believe one of the greatest opportunities of this collaboration is for OHSU graduate students and post-docs to have exposure to new technologies and collaborations with PNNL. This will help train the next generation of scientists.”

OHSU and PNNL first collaborated in 2015, when they formed the OHSU-PNNL Northwest Co-Laboratory for Integrated ‘Omics and were designated a national Metabolomics Center for the Undiagnosed Diseases Network, an NIH-funded national network designed to identify underlying mechanisms of very rare diseases. In 2017, the two organizations partnered on a National Cancer Institute-sponsored effort to become a Proteogenomic Translational Research Center focused on a complex form of leukemia.

Source: PNNL


NCSA Researchers Create Reliable Tool for Long-Term Crop Prediction in the U.S. Corn Belt

Wed, 02/14/2018 - 13:21

Feb. 14, 2018 — With the help of the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Blue Waters Professor Kaiyu Guan and NCSA postdoctoral fellow Bin Peng implemented and evaluated a new maize growth model. The CLM-APSIM model combines superior features of both the Community Land Model (CLM) and the Agricultural Production Systems sIMulator (APSIM), creating one of the most reliable tools for long-term crop prediction in the U.S. Corn Belt. Peng and Guan recently published their paper, “Improving maize growth processes in the community land model: Implementation and evaluation,” in the journal Agricultural and Forest Meteorology. This work is an outstanding example of the convergence of simulation and data science that is a driving factor in the National Strategic Computing Initiative announced by the White House in 2015.

Conceptual diagram of phenological stages in the original CLM, APSIM and CLM-APSIM models, with unique features of the CLM-APSIM crop model highlighted. Note that the stage durations in this diagram are not proportional to real stage lengths and are presented for illustrative purposes only. Image courtesy of NCSA.

“One class of crop models is agronomy-based and the other is embedded in climate models or earth system models. They are developed for different purposes and applied at different scales,” says Guan. “Because each has its own strengths and weaknesses, our idea is to combine the strengths of both types of models to make a new crop model with improved prediction performance.” Additionally, what makes the new CLM-APSIM model unique is the more detailed phenology stages, an explicit implementation of the impacts of various abiotic environmental stresses (including nitrogen, water, temperature and heat stresses) on maize phenology and carbon allocation, as well as an explicit simulation of grain number.

With support from the NCSA Blue Waters project (funded by the National Science Foundation and Illinois), NASA and the USDA National Institute of Food and Agriculture (NIFA) Foundational Program, Peng and Guan created the prototype for CLM-APSIM. “We built this new tool to bridge these two types of crop models combining their strengths and eliminating the weaknesses.”

The team is currently conducting a high-resolution regional simulation over the contiguous United States to simulate corn yield at each planting corner. “There are hundreds of thousands of grids, and we run this model over each grid for 30 years in historical simulation and even more for future projection simulation,” said Peng. “Currently it takes us several minutes to calculate one model-year simulation over a single grid. The only way to do this in a timely manner is to use parallel computing with thousands of cores on Blue Waters.”
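The decomposition Peng describes is naturally parallel: each grid cell's multi-decade simulation is independent of every other cell's. The sketch below, which assumes mpi4py and uses a placeholder run_clm_apsim function rather than the actual model driver, shows the basic pattern of spreading hundreds of thousands of cells across many cores.

```python
from mpi4py import MPI


def run_clm_apsim(cell_id: int, start_year: int, end_year: int) -> float:
    """Placeholder for one grid-cell simulation; returns a mock yield.

    In the real workflow this would invoke the coupled CLM-APSIM model
    for a single grid cell over the requested years.
    """
    return float(cell_id % 17)  # stand-in result


def main() -> None:
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n_cells = 100_000                      # "hundreds of thousands of grids"
    my_cells = range(rank, n_cells, size)  # round-robin assignment to ranks

    local_yields = [run_clm_apsim(c, 1984, 2013) for c in my_cells]

    # Gather per-rank results on rank 0, e.g. to write a national yield map.
    all_yields = comm.gather(local_yields, root=0)
    if rank == 0:
        total = sum(len(chunk) for chunk in all_yields)
        print(f"simulated {total} grid cells on {size} ranks")


if __name__ == "__main__":
    main()
```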

Peng and Guan examined the results of this tool at seven locations across the U.S. Corn Belt, revealing that, compared with the earlier CLM4.5 model, CLM-APSIM more accurately simulated phenology, leaf area index, and canopy height; surface fluxes including gross primary production, net ecosystem exchange, latent heat, and sensible heat; and especially biomass partitioning and maize yield. The CLM-APSIM model also corrected a serious deficiency in the original CLM model, which underestimated aboveground biomass and overestimated the Harvest Index, producing seemingly reasonable yield estimates through the wrong mechanisms.

Additionally, results from a 13-year simulation (2001-2013) at three sites located in Mead, NE, (US-Ne1, Ne2 and Ne3) show that the CLM-APSIM model can more accurately reproduce maize yield responses to growing season climate (temperature and precipitation) than the original CLM4.5 when benchmarked with the site-based observations and USDA county-level survey statistics.

“We can simulate the past, because we already have the weather datasets, but looking into the next 50 years, how can we understand the effect of climate change? Furthermore, how can we understand what farmers can do to improve and mitigate the climate change impact and improve the yield?” Guan said.

Their hope is to integrate satellite data into the model, similar to that of weather forecasting. “The ultimate goal is to not only have a model, but to forecast in real-time, the crop yields and to project the crop yields decades into the future,” said Guan. “With this technology, we want to not only simulate all the corn in the county of Champaign, Illinois, but everywhere in the U.S. and at a global scale.”

From here, Peng and Guan plan to expand this tool to include other staple crops, such as wheat, rice and soybeans. They expect to complete a soybean simulation model for the entire United States within the next year.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About the Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: NCSA


Physics Data Processing at NERSC Dramatically Cuts Reconstruction Time

Wed, 02/14/2018 - 13:15

Feb. 14, 2018 — In a recent demonstration project, physicists from Brookhaven National Laboratory (BNL) and Lawrence Berkeley National Laboratory (Berkeley Lab) used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) to reconstruct data collected from a nuclear physics experiment, an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

The researchers reconstructed multiple datasets collected by the STAR (Solenoidal Tracker At RHIC) detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at BNL. By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed raw data into “physics-ready” data at the petabyte scale in a fraction of the time it would have taken using in-house high-throughput computing resources—even with a two-way transcontinental journey via ESnet, the Department of Energy’s high-speed, high-performance data-sharing network that is managed by Berkeley Lab.

Preparing raw data for analysis typically takes many months, making it nearly impossible to provide such short-term responsiveness, according to Jérôme Lauret, a senior scientist at BNL and co-author on a paper outlining this work that was published in the Journal of Physics.

“This is a key usage model of high performance computing (HPC) for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC and co-author on the Journal of Physics paper.

Billions of Data Points

The STAR experiment is a leader in the study of strongly interacting QCD matter that is generated in energetic heavy ion collisions. STAR consists of a large, complex set of detector systems that measure the thousands of particles produced in each collision event. Detailed analyses of billions of such collisions have enabled STAR scientists to make fundamental discoveries and measure the properties of the quark-gluon plasma. Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at BNL. High-throughput computing clusters crunch the data event by event and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.

In recent years, however, STAR datasets have reached billions of events, with data volumes at the multi-petabyte scale. The raw data signals collected by the detector electronics are processed using sophisticated pattern recognition algorithms to generate the higher-level datasets that are used for physics analysis. So the STAR computing team investigated the use of external resources to meet the demand for timely access to physics-ready data, ultimately turning to NERSC. Among other things, NERSC operates the PDSF cluster for the HEP/NP experiment community, which represents the second largest compute cluster available to the STAR collaboration.

A Processing Framework

Unlike the high-throughput computers at the RACF and PDSF, which analyze events one by one, HPC resources like those at NERSC break large problems into smaller tasks that can run in parallel. So the challenge was to parallelize the processing of STAR event data in a way that can scale out to run on large amounts of data with reproducible results.

The processing framework run at NERSC was built upon several core features. Shifter, a Linux container system developed at NERSC, provided a simple solution to the difficult problem of porting complex software to new computing systems while preserving its expected behavior. Scalability was achieved by eliminating bottlenecks in accessing both the event data and the experiment databases that record environmental changes—voltage, temperature, pressure and other detector conditions—during data taking. To do this, the workload was broken up into data chunks, sized to run on a single node, on which a snapshot of the STAR database could also be stored. Each node was then self-sufficient, allowing the work to automatically expand out to as many nodes as were available without any direct intervention.
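A short Python sketch of the chunking idea follows. The file names, chunk size, database snapshot path, and container image are illustrative stand-ins, not the actual STAR production configuration; the point is that each work unit bundles everything a node needs so it can run without touching shared services.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NodeWorkUnit:
    """Everything one compute node needs to run self-sufficiently."""
    raw_files: List[str]      # chunk of raw event files
    db_snapshot: str          # local copy of the conditions database
    container_image: str      # container image with the reconstruction software


def build_work_units(raw_files: List[str],
                     files_per_node: int,
                     db_snapshot: str,
                     image: str) -> List[NodeWorkUnit]:
    """Split the dataset into chunks sized to fit on a single node."""
    units = []
    for i in range(0, len(raw_files), files_per_node):
        units.append(NodeWorkUnit(raw_files=raw_files[i:i + files_per_node],
                                  db_snapshot=db_snapshot,
                                  container_image=image))
    return units


if __name__ == "__main__":
    # Illustrative inputs only.
    files = [f"st_physics_run{n:07d}.daq" for n in range(1, 2049)]
    units = build_work_units(files,
                             files_per_node=32,
                             db_snapshot="/local/star_db_snapshot.sqlite",
                             image="docker:star/reco:example")
    print(f"{len(units)} self-sufficient node work units prepared")
```

Because every unit carries its own database snapshot, scaling out is just a matter of dispatching more units to more nodes.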

“Several technologies developed in-house at NERSC allowed us to build a highly fault-tolerant, multi-step, data-processing pipeline that could scale to a practically unlimited number of nodes, with the potential to dramatically cut the time it takes to process data for many experiments,” noted Mustafa Mustafa, a Berkeley Lab physicist who helped design the system.

Another challenge in migrating the task of raw data reconstruction to an HPC environment was getting the data from BNL in New York to NERSC in California and back. Both the input and output datasets are huge. The team started small with a proof-of-principle experiment—just a few hundred jobs—to see how their new workflow programs would perform. Colleagues at RACF, NERSC and ESnet—including Damian Hazen of NERSC and Eli Dart of ESnet—helped identify hardware issues and optimize the data transfer and the end-to-end workflow.

After fine-tuning their methods based on the initial tests, the team started scaling up, initially using 6,400 computing cores on Cori; in their most recent test they utilized 25,600 cores. The end-to-end efficiency of the entire process—the fraction of time the program was actually running (not sitting idle, waiting for computing resources), multiplied by the efficiency of using the allotted supercomputing slots and getting useful output all the way back to BNL—was 98 percent.
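To illustrate what an end-to-end figure of 98 percent implies, the toy calculation below multiplies assumed component efficiencies (the individual numbers are invented for illustration and were not reported by the team): every stage has to stay very close to 100 percent for the product to remain that high.

```python
# Illustrative component efficiencies (assumed values, not STAR's measurements):
cpu_busy_fraction = 0.995   # time the reconstruction was actually running
slot_utilization = 0.99     # fraction of allotted Cori slots doing useful work
transfer_success = 0.995    # useful output delivered back to BNL over ESnet

end_to_end = cpu_busy_fraction * slot_utilization * transfer_success
print(f"end-to-end efficiency ~ {end_to_end:.1%}")   # ~ 98.0%
```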

“This was a very successful large-scale data processing run on NERSC HPC,” said Jan Balewski, a member of the data science engagement group at NERSC who worked on this project. “One that we can look to as a reference as we actively test alternative approaches to support scaling up the computing campaigns at NERSC by multiple physics experiments.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.

Source: NERSC


Brookhaven Ramps Up Computing for National Security Effort

Wed, 02/14/2018 - 10:41

Last week, Dan Coats, the U.S. Director of National Intelligence, warned the Senate Intelligence Committee that Russia was likely to meddle in the 2018 mid-term U.S. elections, much as it stands accused of doing in the 2016 Presidential election. A week earlier, a report surfaced in a Russian media outlet that a group of Russian nuclear scientists had been arrested for using a government supercomputer to mine crypto-currency.

These very public episodes of computer misdeeds are a small portion of a growing and largely hidden iceberg of computer-related dangers with the potential to harm society. There are, of course, many active efforts to mitigate the ‘hacker’ onslaught as well as to use computational capabilities for U.S. national security purposes. Now, a formal effort is ramping up at Brookhaven National Laboratory (BNL).

Last fall, Adolfy Hoisie, then at Pacific Northwest National Laboratory, was tapped to join Brookhaven’s expanding computing research effort and to become chair of the new Computing for National Security (CNS) Department. Since then, Hoisie has been quickly drawing up the roadmap for the new effort, which is charged with researching and developing novel technologies and applications for solving computing challenges in the national security arena.

The new CNS department is a recent addition to Brookhaven’s roughly three-year-old Computational Science Initiative (CSI), which is intended to further computational capability and research at Brookhaven with a distinct emphasis on data science. Brookhaven is probably best known for its high-energy physics research. Most recently, its National Synchrotron Light Source II has been grabbing attention; when completed, it will be the brightest in the world and accommodate 60 to 70 beamlines. Brookhaven also houses RHIC (Relativistic Heavy Ion Collider), which among other things is currently looking for the missing spin of the proton.

Adolfy Hoisie, Chair, Computing for National Security (CNS) Department, Brookhaven National Laboratory

Not surprisingly, the synchrotron, RHIC, and a variety of other experimental instruments at Brookhaven produce a lot of data. “We have the second largest scientific data archive in the U.S. and the fourth largest in the world,” said Hoisie, founding chairman of the CNS department. “On an annual basis, data to the tune of 35 petabytes are being ingested, 37 petabytes are being exported, and 400 petabytes of data analyzed. [What’s more] given the scientific community nature of this work, a lot of this data needs to be accessed at high bandwidth in and out of the experimental facilities and the Lab’s storage systems.”

Dealing with that mountain of experimental data is the main computational challenge at Brookhaven and Hoisie noted the CNS mission is ‘highly synergistic’ with those efforts.

“A large spectrum, if not a preponderance of applications, inspired by national security challenges, are in actual fact data sciences problems. It is speed of collection from various sources, whether the volume or velocity of data, the quality of data, analysis of data, which sets performance bounds for [security-related] applications. Just like data being streamed from a detector on an x-ray beam, data that is being streamed from a UAV (unmanned aerial vehicle) also has the challenges of too much data being generated and not enough bandwidth-to-the-ground in order for it to become actionable information and then make it back to the flying vehicle,” he said.

“The methodologies for data analysis, including machine learning and deep learning, required for national security concerns are very much synergistic with the challenges in data sciences. The spectrum of applications of interest to my department includes intelligence apps, cybersecurity, non-proliferation activities including international aspects of that, supply chain security, and a number of computational aspects of security of the computing infrastructure.”

Hoisie is no stranger to HPC or to building focused HPC research organizations. He joined Brookhaven from PNNL where he was the Director of the Advanced Computing, Mathematics, and Data Division, and the founding director of the Center for Advanced Technology Evaluation (CENATE). He plans to significantly expand the breadth, depth, and reach of the technologies and applications considered, with a focus on the full technology pipeline (basic research through devices, board, systems, to algorithms and applications).

Brookhaven, of course, already has substantial computational resources, a big chunk of which is co-located with the new synchrotron and dedicated to it. Predictably, I/O and storage are a particularly thorny issue, and Hoisie noted Brookhaven has a large assortment of storage solutions and devices “from novel solutions all the way to discs and tapes of many generations that require computational resources in order to operate and do the data management.”

Brookhaven Light Source II

Currently, a second effort is underway to centralize and expand the remaining computational infrastructure. The new CNS department, along with much of CSI, will be located in the new center.

“The first floor of the old synchrotron (National Synchrotron Light Source I) is being refurbished to a modern machine room through a DOE sponsored project. The approximate size of the area is 50,000 square feet. Significant power will be added to house the existing large scale computing and storage systems, and provide for the ability to grow in the future commensurate with the computing aspirations of Brookhaven. The new facility will also include computing Lab space for high accuracy and resolution measurement of computing technologies from device to systems, and to house computing systems “beyond Moore’s law” that will likely require special operating conditions,” said Hoisie.

Brookhaven has a diverse portfolio of ongoing research, some of which will be tapped by CNS. “For example, there’s a significant scientific emphasis in materials design. That includes a center for nanomaterials, developing methodologies for material design and actual development of materials. We are trying to enmesh this expertise in materials with that in computing to tackle the challenges of computing at the device level,” Hoisie said.

Hoisie’s group will also look at emerging technologies such as quantum computing. “That’s an area of major interest. We are looking at not only creating the appropriate facilities for siting quantum computing, such as the infrastructure for deep cooling and whatnot, but also looking at very significantly expanding the range of applications that are suitable for quantum computing. On that we have active discussions with IBM and others. You know, quantum computing is a little bit of a work in progress. I know I am stating the obvious but a lot depends on expanding significantly the range of applications to which quantum computing is applicable. We too often say, yes, quantum computing is very good for quantum chemistry or studying quantum effects in all kinds of processes, and cryptography, but there are many other areas we are trying to explore.”

Industry collaboration is an important part of the plan. In fact, noted Hoisie, “CSI, for example, is partly endowed by a New York State grant and part of the rules of engagement related to the grant and the management structure of Brookhaven [requires] development of a bona fide, high quality, high bandwidth interaction with regional powerhouses in computing including IBM. So we have quite a few ongoing in-depth discussions with potential partners that we hope soon to materialize to tackle together specific technologies.”

Throughout his computing career, Hoisie has developed fruitful collaborations with technology providers such as IBM, AMD, Nvidia, and Data Vortex, to name a few. He expects to do the same now.

The modeling and simulation (ModSim) workshop series he helped organize and run will also continue, through both his ongoing leadership and the participation of his new group. “The series of ModSim meetings will continue. Although I am not on the West Coast now, we decided to organize them, for continuity, in Seattle at the University of Washington. These are events in which we are going to showcase technologies and applications, including those of national security interest, and how ModSim is going to help. We’ve refreshed the committee to expand its base. That will continue as an interagency-funded operation that involves DOE, NSF, and a number of sectors from DoD,” Hoisie said.

Obviously these are still early days for the Computing for National Security initiative. A limited number of projects are still taking shape and there are few details available. That said, Hoisie has high expectations:

“We have very significant plans to grow this department. The goal is to bring this Computing for National Security department, which is small at the moment, to the level of a high quality, and the emphasis is on the highest possible quality, of a top-notch national laboratory division level effort.

“This is the way in which we conducted HPC research for decades in my groups. There is the highest quality staff that we hire. There is active integration across the spectrum from technology and systems to the system software to applications and algorithms. And there is a healthy mixture of applied mathematics and computer science and domain sciences that are all contributing to the team effort. And there is a pipeline that we are interested in at all stages: as the technology matures you get more and more into areas that are related to computer science and mathematics and algorithm development and end up in tech development arena. These technologies materialize into boards, devices, systems and then into very large scale supercomputers that offer efficient solutions for solving science or national security problems. We absolutely plan to follow this way.”


OLCF-Developed Visualization Tool Offers Customization and Faster Rendering

Wed, 02/14/2018 - 07:15

Feb. 14, 2018 — At the home of America’s most powerful supercomputer, the Oak Ridge Leadership Computing Facility (OLCF), researchers often simulate millions or billions of dynamic atoms to study complex problems in science and energy. The OLCF is a US Department of Energy Office of Science User Facility located at Oak Ridge National Laboratory.

SIGHT visualization from a project led by University of Virginia’s Leonid Zhigilei to explore how lasers transform metal surfaces. Image courtesy of OLCF.

Finding fast, user-friendly ways to organize and analyze all this data is the job of the OLCF Advanced Data and Workflow Group and computer scientists like Benjamín Hernández, who has developed a new visualization tool called SIGHT for OLCF users.

“The amount of data users deal with is huge, and we want them to be able to easily visualize their datasets remotely and in real-time to see what they are simulating,” Hernández said. “We also want to provide ‘cinematic rendering’ to enhance the visual perception of visualizations.”

Through scientific visualizations, researchers can better compare experimental and computational data. Using a type of scientific visualization known as exploratory visualization, researchers can interactively manipulate 3D renderings of their data to make new connections between atomic structure and physical properties, thereby improving the effectiveness of the visualization.

However, as scientific data grow in complexity, so too do memory requirements—especially for exploratory visualization. To provide an easy-to-use, remote visualization tool for users, Hernández developed SIGHT, an exploratory visualization tool customized for OLCF user projects and directly deployed on OLCF systems.

As opposed to traditional visualization in which images are often rendered during post-processing, exploratory visualization can enable researchers to improve models before starting a simulation; make previously unseen connections in data that can inform modeling and simulation; and more accurately interpret computational results based on experimental data.

Hernández incrementally developed the exploratory visualization tool SIGHT by working with a few teams of OLCF users to fold in the specific features they needed for their projects.

To study how lasers transform metal surfaces to create complex, multiscale roughness and drive the ejection of nanoparticles, a team led by materials scientist Leonid Zhigilei of the University of Virginia used Titan to simulate more than 2 billion atoms over thousands of time steps.

“The initial attempts to visualize the atomic configurations were very time-consuming and involved cutting the system into several pieces and reassembling the images produced for different pieces,” Zhigilei said. “SIGHT, however, enabled the researchers at the University of Virginia to take a quick look at the whole system, monitor the evolution of the system over time, and identify the most interesting regions of the system that require additional detailed analysis.”

SIGHT provides high-fidelity “cinematic” rendering that adds visual effects—such as shadowing, ambient occlusion, and photorealistic illumination—that can reveal hidden structures within an atomistic dataset. SIGHT also includes a variety of tools, such as a sectional view that enables researchers to see inside the chunks of atoms forming the dataset. From there, the research team can further explore sections of interest and use SIGHT’s remote capabilities to present results on different screens, from mobile devices to OLCF’s Powerwall, EVEREST.

SIGHT also reduces the amount of setup work users must wade through upfront. General-purpose, out-of-the-box visualization tools come with many customization options and an underlying data structure that lets them run on a variety of systems, but that generality slows performance by taking up more memory.

Hernández has so far deployed SIGHT on Rhea, OLCF’s 512-node Linux cluster for data processing, and DGX-1, an NVIDIA artificial intelligence supercomputer with eight GPUs. On both Rhea and DGX-1, SIGHT takes advantage of high-memory nodes and optimizes CPU and GPU rendering through the OSPRay CPU-rendering and NVIDIA OptiX GPU-rendering libraries.

Keeping data concentrated on one or a few nodes is critical for exploratory visualization, which is possible on machines like Rhea and DGX-1.

“With traditional visualization tools, you might be able to perform exploratory visualization to some extent on a single node at low interactive rates, but as the amount of data increases, visualization tools must use more nodes and the interactive rates go down even further,” Hernández said.

To model fundamental energy processes in the cell, another user project team led by chemist Abhishek Singharoy of Arizona State University simulated 100 million atoms of a membrane that aids in the production of adenosine triphosphate, a molecule that stores and transports energy in the cell. Using Rhea, collaborator Noah Trebesch from Emad Tajkhorshid’s laboratory at the University of Illinois at Urbana-Champaign then used SIGHT to extend molecular visualization of biological systems to billions of atoms with a model of a piece of the endoplasmic reticulum called the Terasaki ramp.

Compared with a typical desktop computer with a commodity GPU that a researcher might use to run SIGHT at their home university or institution, using SIGHT for remote visualization on Rhea enabled particle counts and frame rates several orders of magnitude higher. With 1 terabyte of memory available in a single Rhea node, SIGHT used the OSPRay backend to reveal over 4 billion particles, with memory still available for larger counts.
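A back-of-the-envelope estimate helps explain the headroom. Assuming a compact particle record of roughly 16 bytes (three 4-byte floats for position plus one 4-byte attribute; SIGHT's actual in-memory layout may differ), 4 billion particles occupy only a few percent of a 1 TB node, leaving room for acceleration structures and framebuffers.

```python
# Rough memory estimate for an in-memory particle dataset.
# Bytes per particle are assumed for illustration only.
bytes_per_particle = 3 * 4 + 4
particles = 4_000_000_000       # "over 4 billion particles"
node_memory = 1 * 1024**4       # 1 TB Rhea node, treated here as 1 TiB

footprint = particles * bytes_per_particle
print(f"particle data: {footprint / 1024**3:.0f} GiB "
      f"of {node_memory / 1024**3:.0f} GiB available")   # ~60 GiB of 1024 GiB
```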

Furthermore, running SIGHT on the DGX-1 system with the NVIDIA OptiX backend resulted in frame rates up to 10 times faster than a typical desktop computer and almost 5 times faster than a Rhea node.

Anticipating the arrival of Summit later this year, Hernández is conducting tests on how remote interactive visualization workloads can be deployed on the OLCF’s next-generation supercomputer.

Source: Oak Ridge National Laboratory


Hampleton Partners Advises High-Performance Computing Company CPU 24/7 In Sale To IAV

Tue, 02/13/2018 - 12:43

LONDON, POTSDAM, and BERLIN, Feb. 13, 2018 — Hampleton Partners, the international mergers and acquisitions and corporate finance advisory firm for technology companies, has advised CPU 24/7 GmbH, a cutting-edge provider of HPC managed services, on its acquisition by IAV GmbH, a leading engineering services firm for the automotive industry.

This acquisition equips IAV with unparalleled experience in CAE and HPC applications. CPU 24/7’s singular knowledge in the configuration, provision and maintenance of supercomputing systems will give IAV the ability to offer its automotive clients hosted HPC services, outsourcing resource-intensive CAE tasks that would otherwise be uneconomic or impossible.

“We are very happy we found such a good home for our company”, says Dr. Matthias Reyer, Co-Founder at CPU 24/7. “By combining CPU 24/7’s strength in CAE and HPC applications with IAV’s leading automotive engineering services, the company is perfectly positioned to continue providing their clients with the best services available on the market today.”

Hampleton Director Axel Brill, transaction lead and IT services and enterprise software expert, comments: “It has been a distinct pleasure to advise the founders of CPU 24/7 on this exciting new chapter in the life of their company with IAV. Their unique knowledge of, but also passion and vision for the industry, are plain to see in the successful company they have built. By providing such highly-configured HPC solutions, CPU 24/7 has not only been able to deliver market-leading solutions to the industry giants, but has also democratised access to mission-critical CAE applications for SMEs as well.”

The engagement was led by Axel Brill with Frank Berger and Nicholas Milligan rounding out the transaction team for Hampleton.

About Hampleton

Hampleton Partners is at the forefront of international mergers and acquisitions advisory for companies with technology at their core. Hampleton’s experienced deal makers have built, bought and sold over 100 fast-growing tech businesses and provide hands-on expertise and unrivalled international advice to tech entrepreneurs and the companies who are looking to accelerate growth and maximise value.

About CPU 24/7

CPU 24/7 provides unparalleled know-how in the field of high-performance computing for computer-aided engineering applications. Built on a fully-configurable cloud infrastructure, CPU 24/7 allows OEMs, IT services and engineering services firms to eliminate capex and expertise restraints imposed by the high-precision technical requirements of CAE applications. The Potsdam-based company offers fully-customised and managed HPC clusters to enterprise-grade as well as SME customers. By eliminating the costs and effort of in-housing technical computations, CPU 24/7 offers its customers faster time-to-market and superior ROI on projects.

About IAV

IAV is one of the world’s leading providers of engineering services to the automotive industry, employing over 6,500 members of staff. The company has been developing innovative concepts and technologies for future vehicles for more than 30 years. Core competencies include production-ready solutions in all fields of electronics, powertrain and vehicle development. Clients include all of the world’s premier automobile manufacturers and suppliers. In addition to development centres in Berlin, Gifhorn and Chemnitz/Stollberg, IAV also operates from other locations in Germany, Europe, Asia as well as North and South America.

Source: Hampleton

The post Hampleton Partners Advises High-Performance Computing Company CPU 24/7 In Sale To IAV appeared first on HPCwire.

Cloud Chip Company Tachyum Receives Venture Capital Investment from IPM

Tue, 02/13/2018 - 11:34

CAMPBELL, Calif., February 13, 2018 – Silicon Valley startup Tachyum Inc. announced today it has received an outside investment from IPM Growth, the venture capital division of  InfraPartners Management LLP (“IPM LLP”), a global multi-asset class fund management and advisory firm.

IPM Growth focuses on addressing the scarcity of scale-up capital for innovative companies with global ambitions that are developing disruptive technologies to tackle the most pressing global issues in the energy, mobility, and security sectors. IPM’s investments drive the commercialization of its portfolio companies’ solutions while offering investors with “smart money” a platform for obtaining access to this underserved and opportunistic sector. IPM’s global footprint – from Europe to Asia Pacific – and Tachyum’s base in the U.S. provide the reach necessary to bring Tachyum’s unique approach to advancing computer processing technology to global markets.

“Tachyum’s founders have a proven track record at previous companies of solving problems caused by device physics and in delivering disruptive products to market,” said Adrian Vycital, Managing Partner of IPM who joins Tachyum’s board of directors as part of the investment. “We recognize that one of the obstacles in developing ambitious projects in a number of fields is the lack of access to high-performance processing chips needed to tackle the ever-increasing number and complexity of algorithms demanded of today’s computing problems, as well as the high energy consumption of the rapidly expanding data center market.  From our base in Europe, we see that the EU is currently losing the competitiveness of having access to our own proprietary technology in that respect, and part of our investment thesis for Tachyum is aimed at addressing that demand and helping to drive the R&D projects of the future forward, faster.”

Tachyum will unlock unprecedented performance, power efficiency and cost advantages to solve the world’s most complex problems in data centers globally. By exploiting device physics, Tachyum will deliver a product family of “Cloud Chips” that offer unprecedented power savings, performance and value to help overcome the energy demands of exponential growth in data center capacity.

“IPM’s investment is giving Tachyum a significant boost in our ability to bring our Cloud Chip to life and to address the sea change in semiconductor development threatening to topple industries that depend on processing performance and density,” said Tachyum CEO Dr. Radoslav “Rado” Danilak. “Their established presence in Europe and Asia will help us to more readily deliver radical efficiency improvements that the enterprise and hyperscale markets around the world require in order to sustain and grow.”

Tachyum’s principals hold deep experience in semiconductors, solid-state storage, machine learning, and other sophisticated technologies. Danilak – who has more than 100 granted patents – and the founding team have advanced the state of the art of computing power, speed, and intelligence for more than 100 years combined.

Tachyum’s Cloud Chip, under development, is expected to reduce data center compute and storage hardware capital expenditure by 3x, total cost of ownership by 4x and hardware power consumption by 10-15x, and reduce rack space by over 90 percent.

About Tachyum

Named for the Greek “tachy,” meaning speed, combined with “-um,” indicating an element, Tachyum emerged from stealth mode in 2017 to engineer disruptive intelligent information processing products. Tachyum’s founders have a track record of solving problems caused by device physics to deliver transformational products to market. For more information visit: http://tachyum.com.

About IPM

InfraPartners Management LLP (“IPM”) is a global fund management and advisory company, established in 2014 by a team of experienced investment professionals. IPM is headquartered in London and Bratislava with a presence in Korea, China, Turkey, and Guernsey. IPM specializes in alternative asset classes, including infrastructure, venture capital, private equity, real estate and commodities. At IPM we aim to create a positive economic impact and long-term value for our clients, the companies we invest in, and the communities in which we work.  IPM’s team has previously worked on a number of award-winning transactions on behalf of global financial institutions, being responsible for various industrial and geographical ‘firsts’, and having deployed billions of capital across their careers. InfraPartners Management LLP is authorized and regulated by the United Kingdom Financial Conduct Authority (FCA) to advise on and arrange investments. For more information visit: http://www.ipmllp.com.

Source: Tachyum

The post Cloud Chip Company Tachyum Receives Venture Capital Investment from IPM appeared first on HPCwire.

Gen-Z Consortium Announces the Public Release of Its Core Specification 1.0

Tue, 02/13/2018 - 11:21

BEAVERTON, Ore., Feb. 13, 2018 — The Gen-Z Consortium, an organization developing an open standard interconnect designed to provide high-speed, low-latency, memory-semantic access to data and devices, today shared that the Gen-Z Core Specification 1.0 is publicly available on its website.

The Gen-Z Core Specification 1.0 enables silicon providers and IP developers to begin developing products that enable Gen-Z technology solutions. Gen-Z’s memory-centric, standards-based approach focuses on providing an open, reliable, flexible, secure, and high-performance architecture for storing and analyzing the enormous amounts of data flowing from the edge into the data center.

“Our membership has grown significantly throughout 2017, now totaling more than fifty members, and we are proud of the hard work that has culminated with the release of our first Core Specification,” said Gen-Z President, Kurtis Bowman. “We anticipate great things in 2018 as silicon developers begin implementing Gen-Z technology into their offerings and the ecosystem continues to grow.”

Gen-Z technology supports a wide range of new storage class memory media and acceleration devices, features new hybrid and memory-centric computing technologies, and uses a highly efficient, performance-optimized solution stack. Its memory media independence and high bandwidth coupled with low latency enables advanced workloads and technologies for end-to-end secure connectivity from node level to rack scale.

Learn more and download the Gen-Z Core Specification 1.0 on our website.

About Gen-Z Consortium

Gen-Z is an open systems interconnect designed to provide memory semantic access to data and devices via direct-attached, switched or fabric topologies. The Gen-Z Consortium is made up of leading computer industry companies dedicated to creating and commercializing a new data access technology. The Consortium’s 12 initial members were: AMD, ARM, Broadcom, Cray, Dell EMC, Hewlett Packard Enterprise, Huawei, IDT, Micron, Samsung, SK hynix, and Xilinx with that list expanding as reflected on our Member List.

Source: Gen-Z Consortium

The post Gen-Z Consortium Announces the Public Release of Its Core Specification 1.0 appeared first on HPCwire.

NCSA Allocates Over $2.4 Million in New Blue Waters Supercomputer Awards to Illinois Researchers

Tue, 02/13/2018 - 08:24

Feb. 13, 2018 — Fifteen research teams at the University of Illinois at Urbana-Champaign have been allocated computation time on the National Center for Supercomputing Applications (NCSA) sustained-petascale Blue Waters supercomputer after applying in the Fall of 2017. These allocations range from 75,000 to 582,000 node-hours of compute time over either six months or one year, and altogether total nearly four million node-hours (128 million core-hour equivalents), valued at $2.48 million. The research pursuits of these teams are incredibly diverse, ranging from studies of very small HIV capsids to massive binary star mergers.

Blue Waters, one of the world’s most powerful supercomputers, is capable of sustaining 1.3 quadrillion calculations every second and at peak speed can reach a rate of 13.3 petaflops (calculations per second). Its massive scale and balanced architecture help scientists and scholars alike tackle projects that could not be addressed with other computing systems.

NCSA’s Blue Waters project provides University of Illinois faculty and staff a valuable resource to perform groundbreaking work in computational science and is integral to Illinois’ mission to foster discovery and innovation. To date, the project has allocated a total of 64.6 million node-hours, a $40 million equivalent, to Illinois-based researchers through the Illinois General Allocations program. The system and the University’s robust HPC community present a unique opportunity for U of I faculty and researchers, with about 4 percent of Blue Waters’ capacity allocated annually to projects at the University through a campuswide peer-review process.
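
The quoted figures are internally consistent; a short sanity check (assuming the 32 cores of a Blue Waters XE compute node) recovers the conversion factors:

    # Sanity-check the allocation figures quoted above (illustrative arithmetic only).
    node_hours = 4_000_000            # "nearly four million node-hours"
    core_hours = 128_000_000          # "128 million core-hour equivalents"
    value_usd = 2_480_000             # "$2.48 million"

    print(core_hours / node_hours)    # 32.0 -> cores per XE node
    rate = value_usd / node_hours
    print(round(rate, 2))             # ~0.62 USD per node-hour

    # The same rate applied to the cumulative 64.6 million node-hours:
    print(round(64_600_000 * rate / 1e6, 1))   # ~40.1, i.e., the "$40 million equivalent"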

The next round of proposals will be due March 15, 2018. To learn how you could receive an allocation to accelerate your research, visit https://bluewaters.ncsa.illinois.edu/illinois-allocations.

FALL 2017 ILLINOIS ALLOCATIONS

General Awards

Tandy Warnow (Computer Science): Improved Methods for Phylogenomics, Proteomics, and Metagenomics

Aleksei Aksimentiev (Physics): Epigenetic Regulation of Chromatin Structure and Dynamics

Ryan Sriver (Atmospheric Sciences): The response of tropical cyclone activity to global warming in the Community Earth System Model (CESM)

Eliu Huerta Escudero (Astronomy): Simulations of Compact Object Mergers in support of NCSA’s LIGO commitment

Juan Perilla and Jodi Hadden (Beckman Institute): Unveiling the functions of the HIV-1 and hepatitis B virus capsids through the computational microscope

Brad Sutton (Bioengineering): HPC-based computational imaging for high-resolution, quantitative magnetic resonance imaging

Nancy Makri (Chemistry): Quantum-Classical Path Integral Simulation of Charge Transfer Reactions

Stuart Shapiro, Milton Ruiz, and Antonios Tsokaros (Physics): Studies In Theoretical Astrophysics and General Relativity

Narayana Aluru (Mechanical Engineering): Systematic thermodynamically consistent structural-based coarse graining of room temperature ionic liquids

Matthew West (Mechanical Engineering), Nicole Riemer (Atmospheric Sciences) and Jeffrey H. Curtis (Atmospheric Sciences): Verification of a global aerosol model using a 3D particle-resolved model (WRF-PartMC)

Gustavo Caetano-Anolles (Crop Sciences): How function shapes dynamics in evolution of protein molecules

Benjamin Hooberman (Physics): Employing deep learning for particle identification at the Large Hadron Collider

Brian Thomas (Mechanical Engineering): Multiphysics Modeling of Steel Continuous Casting

Hsi-Yu Schive (NCSA) and Matthew Turk (Information Sciences): Ultra-high Resolution Astrophysical Simulations with GAMER

Exploratory Awards

Seid Koric (NCSA): Enhancing Alya Multiphysics Code with SWMP Solver and Solving Large Scale Ill-Conditioned Problems

Andrew Ferguson (Materials Science and Engineering): In Silico Vaccine Design through Empirical Fitness Landscapes and Population Dynamics

Oluwasanmi Koyejo (Computer Science): Dynamic Brain Network Estimation

William Gropp (NCSA), Erin Molloy (Computer Science) and Tandy Warnow (Computer Science): Designing scalable algorithms for constructing large phylogenetic trees (almost without alignments) on supercomputers

Iwona Jasiuk (Mechanical Engineering): Hierarchical Modeling of Novel Metal-Carbon Materials

Mark Bell (Math): Minimal Mixing: Exploring the Veering Automata

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About Blue Waters

Blue Waters is one of the most powerful supercomputers in the world. Located at the University of Illinois, it can complete more than 1 quadrillion calculations per second on a sustained basis and more than 13 times that at peak speed. The peak speed is almost 3 million times faster than the average laptop. Blue Waters is supported by the National Science Foundation and the University of Illinois; the National Center for Supercomputing Applications (NCSA) manages the Blue Waters project and provides expertise to help scientists and engineers take full advantage of the system for their research.

Source: NCSA

The post NCSA Allocates Over $2.4 Million in New Blue Waters Supercomputer Awards to Illinois Researchers appeared first on HPCwire.

AI Cloud Competition Heats Up: Google’s TPUs, Amazon Building AI Chip

Mon, 02/12/2018 - 20:48

Competition in the white-hot AI (and public cloud) market pits Google against Amazon this week, with Google offering AI hardware on its cloud platform intended to make it easier, faster and cheaper to train and run machine learning/deep learning systems, while Amazon is reportedly developing its own AI chip portfolio. It’s the latest in a series of processor-related moves that the two companies, along with Microsoft Azure, IBM Cloud and other public cloud services providers, have made in recent months to position themselves as AI becomes increasingly integrated into our business and home lives.

Google is making Cloud TPU (Tensor Processing Units) accelerators available starting today on the Google Cloud Platform (GCP), an offering the company said will help get machine learning (ML) models trained and running more quickly.

Cloud TPUs are Google-designed hardware accelerators built to speed and scale up ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU delivers up to 180 teraflops of floating-point performance and 64 GB of memory on a single board.

“Instead of waiting for a job to schedule on a shared compute cluster, you can have interactive, exclusive access to a network-attached Cloud TPU via a Google Compute Engine VM that you control and can customize,” said John Barrus, product manager for Cloud TPUs, Google Cloud, and Zak Stone, product manager for TensorFlow and Cloud TPUs, Google Brain Team, in a jointly written blog post. “Rather than waiting days or weeks to train a business-critical ML model, you can train several variants of the same model overnight on a fleet of Cloud TPUs and deploy the most accurate trained model in production the next day.”

Meanwhile, Reuters reports that Amazon two months ago paid $90 million for home security camera maker Blink and its energy efficient chip technology, according to unnamed sources.

“The deal’s rationale and price tag, previously unreported, underscore how Amazon aims to do more than sell another popular camera, as analysts had thought,” Reuters reported. “The online retailer is exploring chips exclusive to Blink that could lower production costs and lengthen the battery life of other gadgets, starting with Amazon’s Cloud Cam and potentially extending to its family of Echo speakers, one of the people said.”

According to the report, Amazon seeks to strengthen its ties to consumers via in-house devices. And while Amazon’s Cloud Cam and Echo need a plug-in power source, Blink claims its cameras can last two years on two AA lithium batteries.

Amazon declined to comment on the acquisition’s terms or strategy.

In addition, a published report from The Information states that Amazon is developing its own AI chip designed to work on the Echo and other hardware powered by Amazon’s Alexa virtual assistant. The chip reportedly will help its voice-enabled products handle tasks more efficiently by enabling processing to take place locally at the edge, by the device, rather than in AWS.

HPCwire reported last October that the surging demand for HPC and AI compute power has been shrinking the time gap between the introduction of high-end GPUs, primarily developed by Nvidia and adoption by cloud vendors. “With the Nvidia V100 launch ink still drying and other big cloud vendors still working on Pascal generation rollouts, Amazon Web Services has become the first cloud giant to offer the Tesla Volta GPUs, beating out competitors Google and Microsoft,” HPCwire reported. “Google had been the first of the big three to offer P100 GPUs, but now we learn that Amazon is skipping Pascal entirely and going directly to Volta with the launch of V100-backed P3 instances that include up to eight GPUs connected by NVLink.”

As for Google’s Cloud TPUs, the company said it is simplifying ML training by providing high-level TensorFlow APIs, along with open-sourced reference Cloud TPU model implementations. Using a single Cloud TPU, the authors said, ResNet-50 (and other popular models for image classification) can be trained “to the expected accuracy on the ImageNet benchmark challenge in less than a day” for less than $200.

Barrus and Stone also said customers will be able to use Cloud TPUs either alone or connected via “an ultra-fast, dedicated network to form multi-petaflop ML supercomputers that we call ‘TPU pods.’” Customers who start now with Cloud TPUs, they said, will benefit from time-to-accuracy improvements when TPU pods are introduced later this year. “As we announced at NIPS 2017, both ResNet-50 and Transformer training times drop from the better part of a day to under 30 minutes on a full TPU pod, no code changes required.”
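
For readers curious what “exclusive access to a network-attached Cloud TPU” looks like in code, the sketch below is a minimal illustration using the present-day tf.distribute API (Google’s 2018-era offering was exposed through a TPUEstimator interface); the TPU name and the choice of model are placeholders, not details from the announcement:

    import tensorflow as tf

    # Placeholder TPU name; on GCP this resolves to the network-attached Cloud TPU
    # reachable from the user's Compute Engine VM.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-cloud-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():
        # Stand-in image classifier; in practice one of Google's open-source
        # reference Cloud TPU implementations (e.g., ResNet-50) would be used.
        model = tf.keras.applications.ResNet50(weights=None, classes=1000)
        model.compile(optimizer="sgd",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # train_ds would be a tf.data.Dataset streaming ImageNet records from Cloud Storage:
    # model.fit(train_ds, epochs=90)

At the $6.50-per-Cloud-TPU-hour rate quoted later in this article, a full 24-hour run works out to roughly $156, consistent with the “less than a day” for “less than $200” claim.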

“We made a decision to focus our deep learning research on the cloud for many reasons,” said Alfred Spector, CTO at investment management firm Two Sigma, “but mostly to gain access to the latest machine learning infrastructure. Google Cloud TPUs are an example of innovative, rapidly evolving technology to support deep learning, and we found that moving TensorFlow workloads to TPUs has boosted our productivity by greatly reducing both the complexity of programming new models and the time required to train them. Using Cloud TPUs instead of clusters of other accelerators has allowed us to focus on building our models without being distracted by the need to manage the complexity of cluster communication patterns.”

On-demand transportation  company Lyft also said it’s impressed with the speed of Google Cloud TPUs. “What could normally take days can now take hours,” said Anantha Kancherla, head of software, self-driving Level 5, Lyft. “Deep learning is fast becoming the backbone of the software running self-driving cars. The results get better with more data, and there are major breakthroughs coming in algorithms every week. In this world, Cloud TPUs help us move quickly by incorporating the latest navigation-related data from our fleet of vehicles and the latest algorithmic advances from the research community.”

Barrus and Stone also highlighted the usual public cloud advantages that Cloud TPUs offer. “Instead of committing the capital, time and expertise required to design, install and maintain an on-site ML computing cluster with specialized power, cooling, networking and storage requirements,” they said, “you can benefit from large-scale, tightly-integrated ML infrastructure that has been heavily optimized at Google over many years.”

Google said Cloud TPUs are available in limited quantities today and usage is billed by the second at the rate of $6.50 USD / Cloud TPU / hour.

This article first appeared in our sister publication, EnterpriseTech.

The post AI Cloud Competition Heats Up: Google’s TPUs, Amazon Building AI Chip appeared first on HPCwire.

Russian Nuclear Engineers Caught Cryptomining on Lab Supercomputer

Mon, 02/12/2018 - 20:25

Nuclear scientists working at the All-Russian Research Institute of Experimental Physics (RFNC-VNIIEF) have been arrested for using lab supercomputing resources to mine crypto-currency, according to a report in Russia’s Interfax News Agency. Located at the Federal Nuclear Center in the Russian city of Sarov, the site is home to a 1 petaflops (peak) supercomputer, installed in 2011. Due to the organization’s high secrecy level the supercomputer is not publicly ranked, although it’s purported to have a Linpack score of 780 teraflops.

A 2005 graphic from Lawrence Livermore’s Science and Technology Review showing Russia’s Nuclear Cities Initiative, active in three of Russia’s ten closed nuclear cities—Sarov, Seversk, Snezhinsk—with plans to work with a fourth, Zheleznogorsk. Source.

The scientists’ plans were foiled when they attempted to connect the classified nuclear resource to the internet. The facility’s security team was alerted of the breach and the involved parties were turned over to the FSB, Russia’s principal security agency.

“There was an attempt at unauthorized use of office computing capacities for personal purposes, including for so-called mining,” Tatyana Zalesskaya, head of the research institute press service, told Interfax on Friday.

“Their activities were stopped in time,” she added. “The bungling miners have been detained by the competent authorities. As far as we are aware, a criminal case has been opened.”

The Interfax article did not specify how many individuals are being detained or their names.

The closed city of Sarov is where the USSR’s first nuclear bomb was produced, leading to the test of “First Lightning” on August 29, 1949. The city is overseen by Rosatom, Russia’s nuclear energy corporation, which, its website attests, “produces supercomputers and software as well as different nuclear and non-nuclear innovative products” and is the largest electricity-generating company in Russia.

If you’ve ever wondered whether US lab supercomputers would be used to mine for cryptocurrency, the topic was addressed in a Reddit AMA conducted with the staff of Livermore Computing (LC) at the Lawrence Livermore National Laboratory (LLNL) last year.

Here’s their response:

DOE supercomputers are government resources for national missions. Bitcoin mining would be a misuse of government funds.

In general, though, it’s fun to think about how you could use lots of supercomputing power for Bitcoin mining, but even our machines aren’t big enough to break the system. The number of machines mining bitcoin worldwide has been estimated to have a hash rate many thousands of times faster than all the Top 500 machines combined, so we wouldn’t be able to decide to break the blockchain by ourselves (https://www.forbes.com/sites/peterdetwiler/2016/07/21/mining-bitcoins-is-a-surprisingly-energy-intensive-endeavor/2/#6f0cae8a30f3). Also, mining bitcoins requires a lot of power, and it’s been estimated that even if you used our Sequoia system to mine bitcoin, you’d only make $40/day (https://bitcoinmagazine.com/articles/government-bans-professor-mining-bitcoin-supercomputer-1402002877/). The amount we pay every day to power the machine is a lot more than that. So even if it were legal to mine bitcoins with DOE supercomputers, there’d be no point. The most successful machines for mining bitcoins use low-power custom ASICs built specifically for hashing, and they’ll be more cost-effective than a general purpose CPU or GPU system any day.
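
The arithmetic behind that conclusion is easy to reproduce. The sketch below uses deliberately invented placeholder values for the machine and network hash rates, the block reward and the exchange rate (all of which change constantly), but the formula is the same one behind estimates like the $40-a-day figure cited in the quote:

    # Back-of-the-envelope bitcoin mining revenue for a fixed hash rate.
    # All numeric inputs are illustrative placeholders, not market data.
    machine_hashrate = 1e12      # hashes/s a large CPU/GPU system might manage in software
    network_hashrate = 2e19      # hashes/s for the entire bitcoin network
    blocks_per_day = 144         # one block roughly every 10 minutes
    block_reward_btc = 12.5
    btc_price_usd = 10_000

    share = machine_hashrate / network_hashrate
    revenue_per_day = share * blocks_per_day * block_reward_btc * btc_price_usd
    print(f"${revenue_per_day:.2f} per day")   # well under a dollar, against a megawatt-scale power bill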

Privately networked government supercomputers are a hard target, but government websites this week proved vulnerable to cryptomining attacks. As reported by the BBC, hackers exploited a number of government websites, including that of the U.K. Information Commissioner’s Office (ICO), using malware called Coinhive to turn visitors’ compute cycles into crypto-cash. According to British security researcher Scott Helme, more than 4,000 websites around the world were infected with the malicious program, which mines the anonymous cryptocurrency Monero by hijacking visitors’ computers.

The post Russian Nuclear Engineers Caught Cryptomining on Lab Supercomputer appeared first on HPCwire.

The Food Industry’s Next Journey — from Mars to Exascale

Mon, 02/12/2018 - 12:38

Mars, the world’s leading chocolate company and one of the largest food manufacturers, has a unique perspective on the impact that exascale computing will have on the food industry.

Creating a Safer and More Sustainable Food Supply Chain

“The food industry needs to address several grand challenges by developing innovative and sustainable solutions at the intersection of food, agriculture and health. Leveraging the power of technology will be critical on this journey. Exascale, for example, is going to be a radical enabler for helping the food, nutrition and agriculture sectors to evolve and possibly even revolutionize themselves to address these grand challenges,” said Harold Schmitz, chief science officer for Mars and director of the Mars Advanced Research Institute. Schmitz is a member of the US Department of Energy’s Exascale Computing Project Industry Council, a group of external advisors from some of the most prominent companies in the United States.

The Exascale Computing Project represents the next frontier in computing. An exascale ecosystem, expected in the 2021 time frame, will provide computational and data analysis performance at least 50 times more powerful than the fastest supercomputers in use today, and will maximize the benefits of high-performance computing (HPC) for many industries. In the case of the food industry, exascale will offer new solutions that can improve food manufacturing practices, yielding safer and more healthful products, more efficient industrial processes and a reduced carbon footprint.

“The power of exascale has the potential to advance the work of a first-of-its-kind effort led by Mars and the IBM Research – Almaden Lab, called the Consortium for Sequencing the Food Supply Chain,” Schmitz said. The consortium is centered on surveillance, risk assessment, and diagnoses of food-borne pathogens, and it is one of the few efforts in the world using the best tools of genomics, biology, and chemistry to understand nutrition, public health, and food safety.

“Although food safety has progressed immensely over the last hundred years—most notably through improvements in shelf life and the addition of macronutrients for preventive health—it remains a major challenge for food manufacturers,” Schmitz said. One in six Americans suffers a food-borne illness each year, and 3,000 of those affected die, according to the US Centers for Disease Control. Across the globe, almost 1 in 10 people fall ill every year from eating contaminated food and 420,000 perish as a result, reports the World Health Organization.

Increased industry and regulatory attention on pathogens such as Salmonella, Campylobacter, Listeria and aflatoxin has led to breakthroughs that make our food safer, but more must be done. Scientists need a method by which they can understand the pathogens in various contexts, including the microbial community, the microbiome and the broader food chain. Going one step further, they need a method that enables them to anticipate how the pathogen would behave in real scenarios, such as: a field where crops are grown and harvested; during travel on various transportation channels; or in factory environments where ingredients are processed.

“The consortium aims to revolutionize our understanding of how to predict pathogen outbreaks and discover what environments stimulate pathogens to behave badly, or what microbial environments are able to keep pathogen outbreaks under control,” Schmitz said. “In essence, we want to sequence the genome of the food supply chain and then use data analytics to understand its microbial community. We’re working at the intersection of HPC and the field of systems biology. In this case, the system is the food supply chain, from farm to fork”

Mars has used genome sequencing to progress its efforts to improve the supply-chain sustainability of one of its key ingredients: cocoa. It is a low-yield crop grown primarily in countries that lack the scientific and technological resources to modernize it.

“We realized we needed to give the most talented agricultural scientists a tool box to make the cocoa crop sustainable,” Schmitz said. That tool box is the genome. So, from 2008 to 2010, Mars, IBM, and the US Department of Agriculture Research Service and several other collaborators sequenced the genome of Theobroma cacao, an economically important tropical fruiting tree that is the source of chocolate.

“Analyzing genomic data allowed us to understand how diverse genotypes of cacao perform in different environments. This information is then used to breed superior varieties, with increased yields, quality and stress tolerance,” said Jim Kennedy, computational science leader at the Mars Advanced Research Institute. “We also use data analytics to understand how genetic and environmental factors contribute to pest and disease losses.  This information is used to develop environmentally friendly strategies to improve crop health.”

“Since our breakthrough on Theobroma cacao, we’ve already seen great improvements in cocoa,” Schmitz said. “When exascale comes online it will introduce food and agriculture data scientists to an exciting new world of opportunity.”

He explained that exascale will provide food data scientists with an unprecedented level of computing power to probe molecular food chemistry in a manner akin to how the pharmaceutical industry uses technology to study protein molecular dynamics.

“Modeling, simulation and data analytics with exascale will inform food design in a way that the empirical method, or trial and error, never could,” Schmitz said. “There is possibility for this to help unlock some of the biggest food and nutritional challenges that we face today.”

Designing More Efficient Manufacturing Processes

The HPC teams at Mars, which partner with DOE National Laboratories to bolster their computational science efforts, use modeling and simulation and data analytics to optimize not only the company’s supply-chain processes but also its design manufacturing processes. The teams employ tools such as computational fluid dynamics, discrete element method, and multiphysics-type deterministic models in HPC.

“We’re applying multiphysics models to better understand some of our essential processes such as extrusion,” Kennedy said. Extrusion is a fundamental process in which product ingredients are fed into a barrel and forced through a screw. The functions of mixing, sterilization, or cooking may take place in the barrel. Mars products such as confection candy, toffee candy, and pet food undergo extrusion.

“If we’re designing a new extrusion process, we’ll use modeling to optimize the design,” Kennedy said. “In the past, we would over-engineer and end up with an extruder that was one-and-a-half times bigger than what we needed. Modeling enables us to understand what the design parameters should be before we cut steel and build anything. But we’ve learned we need more computing power and speed, like what exascale will provide, to handle the complexity of our processes.”

Reducing the Greenhouse Gas Footprint

Exascale will enable the food industry to pioneer more efficient manufacturing processes that use less energy, in turn lessening its environmental impact.

“The food and agriculture sectors are among the largest contributors to climate change and the loss of biodiversity,” Schmitz said. “The energy required in global agriculture, the greenhouse gases emitted, and the vast amount of land used are all contributors. The good news is that the advancements in HPC and the eventual arrival of exascale computing will enable the industry to better use data science advances to improve its environmental and ecological footprint.”

Spreading the Use of Data Science

“The advent of exascale will help spread the use of data science more widely,” Kennedy said. At present, most companies are facing a shortage of data scientists while the need for digitization is expanding. At the same time, companies are trying to automate some of the tasks that would normally require a data scientist, such as cleaning, normalizing, or preprocessing data for analysis, simulation, or modeling.

“Exascale will make it possible for computers to run through scenarios faster and provide the end-user with data output in language that non-experts can understand,” Kennedy said. “Then they can go about slicing and dicing the data to prepare it for simulation. I think exascale will bring that capability to the masses so that they can directly work with their data and gain the insights and ask the questions they need for their research.”

Mars recently confirmed a collaboration agreement with the Joint Institute for Computational Sciences, an Institute of the University of Tennessee and Oak Ridge National Lab. The business plans to leverage the DOE computational infrastructure to find solutions for some of its most complex challenges and opportunities.

The post The Food Industry’s Next Journey — from Mars to Exascale appeared first on HPCwire.

NEC and Tohoku University Succeed in AI-Based New Material Development

Mon, 02/12/2018 - 11:35

TOKYO, Feb 12, 2018 — NEC Corporation and Tohoku University have applied new technologies developed by NEC, which use AI to predict the characteristics of unknown materials, to the joint development of cutting-edge thermoelectric conversion technology known as a thermoelectric (TE) device(1) using spin current(2), achieving a roughly 100-fold improvement in thermoelectric conversion efficiency over the course of approximately one year.

These new technologies consist of a group of AI technologies for material development which incorporate the various types of knowledge required for material development, and technology for the batch acquisition of the large amounts of material data that AI needs to learn material properties.

The AI technologies for material development use heterogeneous mixture learning technology(3) and a number of machine learning technologies specific to material development(4), (5) independently developed by NEC. These technologies were combined with the technology for batch acquisition of material data, which enabled the acquisition and evaluation of data on more than 1,000 types of materials with different compositions and resulted in much more accurate AI learning.

NEC and Tohoku University applied a development technique combining these technologies to the development of a spin-current thermoelectric conversion device, and demonstrated that it is possible to enhance thermoelectric conversion efficiency by designing an actual material in line with AI-derived new material design guidelines.

In the future, NEC and Tohoku University will develop more sophisticated AI-based technologies for predicting the physical properties of new materials and conduct further research and development aimed at the practical application of spin-current thermoelectric conversion device technology and the realization of IoT devices that continue working for years without a power supply.

Background

In recent years, cases in which machine learning technologies and analysis technologies based on informatics are applied to large amounts of data to extract hidden information and predict future trends are becoming increasingly common in a wide range of fields.

In the realm of natural sciences, there has long been research into the use of informatics in the search for new substances and materials and, especially in the biomedical, pharmaceutical and chemical fields, advances in so-called combinatorial technologies(6), which involve acquiring data through comprehensive investigation of search objects, have resulted in the extensive use of informatics in projects such as the human genome project.

Similarly, in fields that deal with solid state materials such as metals, semiconductors and oxides, techniques that make full use of the advantages of machine learning and analysis technologies from the viewpoint of shortening the duration of research and development and reducing costs are attracting attention in recent years as materials informatics (MI). Harnessing knowledge gained over many years of basic research in the natural sciences field using informatics, NEC began conducting research into analysis technologies specifically for solid state materials data more than five years ago. However, it was challenging because it took time and effort to produce and evaluate data in this field, making it difficult to obtain enough groups of data of sufficient quality required for the application of MI.

The initiatives of NEC and Tohoku University have now resulted in NEC’s independent development of AI technologies for material development that effectively analyze material data, and technology for the batch acquisition of data about many different types of solid state materials. NEC and Tohoku University have also applied a material development cycle incorporating these two types of technologies to the development of thermoelectric conversion materials using spin currents and considerably shortened the duration of material development.

Features of the new technologies

1. Development of AI technologies for material development
A material development cycle using MI handles an enormous amount of material data, so the data processing, classification, and evaluation previously performed painstakingly by specialist researchers can no longer keep up. In addition, the number of candidate materials in the development process is generally far greater than the amount of data that can be obtained, so the process for searching these candidates must be chosen more efficiently than in the past. To address these issues, NEC developed the following AI technologies for material development, each corresponding to a different characteristic of actual material data.

  • AI technology for combinatorial data analysis: Machine learning technology for processing and classifying, at high speed, the large amounts of data acquired using combinatorial techniques. Knowledge of physics and materials science is partially incorporated into existing machine learning algorithms to realize high-precision, high-speed data processing(4).
  • AI technology for physical property modeling: Machine learning technology for evaluating large amounts of material data. Inductive modeling of physical properties using heterogeneous mixture learning, which combines high readability (white-box models) with high-precision prediction, lets researchers and AI deepen their understanding of the data in concert. This technology plays a central role in extracting the candidate material parameters that characterize the physical-property model.(3)
  • AI technology for materials screening: Machine learning technology for efficiently searching for a target material among a very large number of candidates. When selecting (screening) materials with reference to the parameters that characterize the physical-property model, the technology performs high-speed searches of very high-dimensional spaces, which was difficult with existing Bayesian Optimization(7). It does so by applying a branch-and-bound-type algorithm that, drawing on Combinatorial Game Theory, looks several moves ahead in the search.(5) A generic toy version of this iterative screening cycle is sketched after this list.
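
The sketch below is a generic toy version of such a screening cycle, not NEC’s heterogeneous mixture learning or its branch-and-bound search: a plain linear surrogate stands in for the white-box property model, an invented property function stands in for the combinatorial experiments, and the loop simply alternates fitting, ranking and “measuring” the most promising candidates.

    import numpy as np

    # Toy materials-screening loop: fit an interpretable surrogate on measured
    # compositions, rank unmeasured candidates, then "measure" the best batch.
    # The underlying property function is invented purely for illustration.
    rng = np.random.default_rng(0)

    def measure(x):  # stand-in for one combinatorial synthesis/evaluation run
        return (2.0 * x[:, 0] - 1.5 * x[:, 1] + 0.5 * x[:, 2]
                + 0.05 * rng.standard_normal(len(x)))

    candidates = rng.random((1000, 3))              # 1,000 candidate compositions
    measured = list(rng.choice(1000, size=20, replace=False))

    for _ in range(5):
        X = candidates[measured]
        y = measure(X)
        A = np.c_[X, np.ones(len(X))]               # white-box linear surrogate
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = np.c_[candidates, np.ones(len(candidates))] @ coef
        pred[measured] = -np.inf                    # never re-propose measured points
        batch = np.argsort(pred)[-10:]              # next compositions to synthesize
        measured.extend(batch.tolist())

    print("best predicted composition:", candidates[int(np.argmax(pred))])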

2. Establishment of combinatorial data acquisition platform (batch evaluation and acquisition of large amounts of data) 

NEC and Tohoku University developed a combinatorial data acquisition platform that combines theoretical calculation data acquired through material property simulation with test data acquired through proprietary thin film materials preparation/characteristics evaluation technologies, to create enough data groups of sufficient quality.

In one example of composition-spread materials, thin-film material with more than 1,000 different compositions was produced on a single substrate in one film-preparation process. Electrode pads and other features appropriate to the intended measurement are patterned onto the evaluation sample, and experimental data can be acquired efficiently through an automatic evaluation system that was also developed independently. The quality and quantity of data are vastly improved compared with previous techniques, which produced and evaluated a single material in each experiment.

In the material physical-property simulation, various calculation techniques, such as first-principles calculations, are used according to the purpose, and theoretical calculations are performed to ensure correspondence with experimental data. Various physical-property parameter groups, ranging from general properties such as electrical resistance and thermal conductivity to detailed properties relating to electronic state, are acquired as theoretical calculation data.

3. Application to the development of a spin-current thermoelectric conversion device

A spin-current thermoelectric conversion device recovers waste heat that exists widely in society by converting it into electricity and will, therefore, enable the countless IoT devices that will be installed in the future to continue working for many years even without a power source.

The main issue with spin-current thermoelectric conversion devices is that their thermoelectric conversion efficiency has not yet reached a practically applicable level. When the recently established material development cycle combining the combinatorial data acquisition platform and the AI technologies for material development was applied on a trial basis to the search for a new platinum (Pt) alloy, it led to several discoveries, including that the Pt alloy was a magnet and that spin polarization of the Pt itself within the alloy is important.

Evaluation of the characteristics of a new alloy, CoPtN, designed according to AI-derived knowledge to enhance the spin polarization of Pt, confirmed a thermoelectric conversion efficiency around 100 times higher than that of the previous Pt alloy. This level is also much closer to the output of commercially available thermoelectric conversion elements that use semiconductors. It was also demonstrated that development time could be shortened significantly, to around one year.

“We will continue to further expand the search for materials using AI in the future, focusing on further improvement in the thermoelectric conversion efficiency of spin-current thermoelectric conversion elements and the development of new low-cost materials,” said Eiji Saitoh, Professor, Tohoku University.

“We aim for the early practical application of spin-current thermoelectric conversion elements as a power source technology for IoT devices that will continue to work for years without a battery or other power supply, and as a low-cost, high-performance electronic cooling technology,” said Soichi Tsumura, General Manager, NEC IoT Devices Research Laboratories.

These results were achieved as part of the Exploratory Research for Advanced Technology (ERATO) “SAITOH Spin Quantum Rectification Project” (Research Director: Eiji Saitoh, Professor, Tohoku University; Research Period: 2014 – 2020 fiscal year) of the Japan Science and Technology Agency (JST) and the Promoting Individual Research to Nurture the Seeds of Future Innovation and Organizing Unique, Innovative Network (PRESTO) “Development of Basic Technologies for Advanced Materials Informatics through Comprehensive Integration among Theoretical, Experimental, Computational and Data-Centric Sciences” (Researcher: Yuma Iwasaki, NEC Corporation; Research Period: 2017 -2021 fiscal year) of JST.

NEC plans to exhibit and showcase these research findings at nano tech 2018 – The 17th International Nanotechnology Exhibition & Conference from February 14 (Wed)-16 (Fri), 2018 at Tokyo Big Sight, Japan.

(1) A thermoelectric device is a device which converts thermal energy directly into electricity and vice versa.
(2) The spin current is a flow of a magnetic property of an electron, so-called “spin.”
(3) http://jpn.nec.com/ai/analyze/pattern.html
(4) “Comparison of dissimilarity measures for cluster analysis of X-ray diffraction data from combinatorial libraries” Y. Iwasaki et al., nature partner journal Computational Materials 3, 4 (2017).
(5) Proceedings of the 78th JSAP Autumn Meeting 5p-C18-4 (2017) R. Sawada et al.
(6) The combinatorial approach involves comprehensively investigating predicted combinations to gain an understanding of the overall trend of search candidates.
(7) Bayesian Optimization algorithms are stochastic optimization techniques that are used to search for the maximum or minimum value based on observed facts.

About NEC Corporation

NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross utilize the company’s experience and global resources, NEC’s advanced technologies meet the complex and ever-changing needs of its customers. NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society. For more information, visit NEC at http://www.nec.com.

Source: NEC Corporation

The post NEC and Tohoku University Succeed in AI-Based New Material Development appeared first on HPCwire.

Scholarship Applications and Sponsorships Open for Tapia 2018

Mon, 02/12/2018 - 11:26

Feb. 12, 2018 — Scholarship applications and Sponsorships for the 2018 ACM Richard Tapia Celebration of Diversity in Computing are now open. The Tapia Conference provides conference travel scholarships for students (community college/undergraduate/graduate), post-docs and a limited number for faculty at colleges/universities in the U.S and U.S. Territories.

Scholarships include conference registration, meals during the conference, hotel accommodations, and a reimbursable travel stipend. Tapia scholarships are generously funded by government and industry organizations. The Tapia Conference is unable to provide scholarships for individuals studying/working at foreign colleges/universities. Scholarships are reviewed by over 90 professional volunteers in industry and academia.

Tapia 2018 Scholarship Application Submission Deadlines: 
  • Program Submitters (Posters and Doctoral Consortium): March 28, 2018
  • General Tapia Scholarship Applicants: March 30, 2018 

To learn more or to start an application, follow this link.

Source: Tapia Conference

The post Scholarship Applications and Sponsorships Open for Tapia 2018 appeared first on HPCwire.

TACC, DOD Engage in Four-Year Transformational Design Project

Mon, 02/12/2018 - 11:05

Feb. 12, 2018 — The Texas Advanced Computing Center (TACC) at The University of Texas at Austin today announced that it has partnered with the U.S. Department of Defense (DOD) to provide researchers with access to advanced computing resources as part of an effort to develop novel computational approaches for complex manufacturing and design problems.

TRADES, which stands for TRAnsformative DESign, is a program within the DOD Defense Advanced Research Projects Agency (DARPA). The essence of the program is to synthesize components of complex mechanical platforms (e.g., ground vehicles, ships, or aircraft and spacecraft) that leverage advanced materials and manufacturing methods such as direct digital manufacturing. Balancing the freedom of shaping with material distribution requires rethinking the relationship between computers and users, as the number of design possibilities exceeds both human capacity and current state-of-the-art systems. One of the major thrusts of this program is to understand how massive amounts of compute power could fundamentally change the way we approach design problems and associated computational tasks.

The program is a 48-month effort addressing two Technical Areas (TAs). In TA1, performer teams will explore and develop new mathematical and computational foundations that can transform the traditional design process. Ten teams from industry, academia, and government are evaluating competing approaches in TA1. TRADES leadership will evaluate these approaches against a set of challenge projects that will be released over the life of the effort.

TA2 is for the provision of a software integration platform for TA1, creating a common resource for collaboration and sharing of prototype implementations; this technical area is awarded to TACC.

When DOD deploys a new system in the field, total system costs include not only the cost to produce each article, but also the R&D costs that went into developing it and the costs of maintaining and operating that system over its functional life. Bringing advanced computation to the manufacturing and design processes allows designers to evaluate new designs from a physics perspective to answer questions like: How do we balance shaping versus material distribution? How can we create in-field spares and compensate for dramatically different material properties (e.g., create a polymer component to replace a metal part)? What design spaces open up with advanced materials and additive manufacturing processes? How do we effectively convey these new design options to a designer?

TACC’s approach will be to provide a comprehensive platform that researchers with varying technical backgrounds can use to easily construct new environments in support of developing computational methods, creating and using data visualizations, and analyzing large or complex experimental data sets. TACC cyberinfrastructure tools and services are built upon open source software whenever possible.

The technologies TACC will provide in support of the TRADES program feature:

  • High performance computing
  • Rapidly configurable cloud computing
  • Flexible storage
  • Diverse user interfaces
  • Existing training

“TACC has constructed an unparalleled cyberinfrastructure for engineering and science researchers,” said Dan Stanzione, executive director of TACC. “Applying this with DARPA in advanced manufacturing will show how higher education computing technologies can transform U.S. manufacturing and provide a competitive edge.”

Source: Faith Singer-Villalobos, TACC

The post TACC, DOD Engage in Four-Year Transformational Design Project appeared first on HPCwire.

Optalysys Optical Co-processor Hits Milestone with GENESYS Project

Mon, 02/12/2018 - 10:57

Optalysys, a U.K. company seeking to commercialize optical co-processor technology, today announced completion of its Genetic Search System (GENESYS) project, conducted with the prestigious Earlham Institute (EI). Citing a dramatic power saving and performance speedup in computing a traditional genomics alignment problem, Optalysys says the work demonstrates the effectiveness and maturity of its optical processing technology, which the company promotes as a post-Moore’s Law alternative.

Broadly, Optalysys technology involves passing light through layers of tiny LCDs using a low power laser. Data is encoded by applying varying voltages to the LCDs which changes their optical density and modulates the resulting waveform. The net result is analogue encoding of numerical data onto the light. The emerging waveforms diffract from the LCD lattice and interact in ways that essentially perform Fourier transforms. By arranging LCDs it’s possible to perform a variety of functions.

Optalysys argues that because its optical processing approach is naturally parallel it eliminates the intense data management required to perform parallel processing on traditional (Von Neumann) processors and that it uses dramatically less energy and is faster (see link to short video on the technology below).
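
The underlying mathematics is ordinary Fourier-domain correlation, the operation the optical hardware evaluates in free space rather than in silicon. A toy digital version (an illustration of the principle only, not Optalysys’s or EI’s actual pipeline) locates a short query in a one-hot-encoded DNA string:

    import numpy as np

    # Cross-correlation via Fourier transforms: the digital analogue of the
    # matching operation an optical correlator performs. Toy example only.
    ALPHABET = "ACGT"

    def one_hot(seq):
        return np.array([[c == b for b in ALPHABET] for c in seq], dtype=float)

    database = "ACGTTTACGGATTACAGGATTACAACGT"
    query = "GATTACA"

    D, Q = one_hot(database), one_hot(query)
    n = len(database)

    # score[i] = number of bases that match when the query is placed at offset i
    score = np.zeros(n)
    for ch in range(4):
        score += np.real(np.fft.ifft(np.fft.fft(D[:, ch], n) *
                                     np.conj(np.fft.fft(Q[:, ch], n))))

    best = int(np.argmax(np.round(score)))
    print(best, database[best:best + len(query)])   # an exact hit scores len(query)

In the optical system the Fourier transforms come essentially for free from diffraction, which is the source of the energy savings the company reports.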

Optalysys System

The benchmark GENESYS project aligned metagenomics reads sequenced from the Human Microbiome Project Mock Community (a well characterized microbial community) against a database consisting of 20 bacterial genomes totaling 64 million base pairs. “The optical system exceeded the original targets delivering a 90 percent energy efficiency saving compared to the same test run on EI’s HPC cluster, with an accuracy comparable to the highly sensitive nucleotide form of BLAST, BLASTn (part of a family of Basic Local Alignment Search Tools used to compare query sequences with a library or database of sequences),” reported Optalysys.

Technology from the GENESYS project is launching in February 2018 as a cloud-based platform to a closed beta program of a select group of genomic institutes including EI, the University of Manchester, Oregon State University, and Zealquest Scientific Technology Co. in cooperation with the Shanghai Bioinformatics Center, Chinese Academy of Science.

Optalysys says the GENESYS technology holds promise for long-read sequence alignment, where speed improvements of several orders of magnitude over conventional software algorithms are possible – as well as in Deep Learning, specifically convolutional neural networks (CNNs).

“The collaboration with EI has been a great success,” said Nick New, founder and CEO of Optalysys in today’s release. “We have demonstrated the technology at several international conferences including Advances in Genome Biology and Technology, Plant and Animal Genome Conference and Genome 10K/Genome Science, to an overwhelmingly enthusiastic response. We are looking forward to continuing our strong relationship with EI through the beta program and beyond.”

“Genomic institutes are being faced with analyzing more and more data, and it is really exciting that new technologies like the Optalysys optical processing platform can support bioinformaticians processing data accurately, at a low cost and at high speeds,” said Daniel Mapleson, Analysis Pipelines Project Leader at EI, who was the EI lead for the genomics tests during the project.

The GENESYS project was funded in part by U.K. Innovate, a government agency. Jon Mitchener, innovation lead for emerging technologies, Innovate UK, said, “GENESYS is a project that not only exemplifies the R&D done (combining novel optical HPC and deep learning AI techniques with a really important health-screening application), but also the way that support from Innovate UK can help accelerate the move from a prototype towards commercial product launch for a high-potential British SME like Optalysys.”

The post Optalysys Optical Co-processor Hits Milestone with GENESYS Project appeared first on HPCwire.
