HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Researchers Use Supercomputer Simulations to Study Brain Damage and Space Shuttle Materials

Wed, 01/24/2018 - 11:39

Jan. 24, 2018 — Explosions produce unique patterns of injury seldom seen outside combat. They have the potential to cause life-threatening injuries and take a particular toll on the brain.

Ashfaq Adnan, an associate professor of mechanical engineering at The University of Texas at Arlington (UTA), and his postdoctoral associate Yuan Ting Wu published research findings in Nature Scientific Reports in July 2017 revealing how battlefield blasts may cause bubbles in the brain’s perineuronal nets which, in turn, may collapse and damage neurons.

“This study reveals that if a blast-like event affects the brain under certain circumstances, the mechanical forces could damage the perineuronal net located adjacent to the neurons, which could lead to damage of the neurons themselves,” Adnan said. “It is important to prove this concept so that future research may address how to prevent cavitation damage and better protect our soldiers.”

Cavitation is the development of bubbles, much like those that form around a ship’s spinning propellers. Existing scans cannot detect whether cavitation bubbles form inside the brain due to blasts or how these blasts affect a person’s individual neurons, the brain cells responsible for processing and transmitting information.

Adnan’s research used supercomputer-powered molecular dynamics simulations to study structural damage in the perineuronal net (PNN) region of the brain. He then determined the point at which mechanical forces may damage the PNN or injure the neurons.

Adnan’s research was supported by a grant through the Office of Naval Research’s Warfighter Performance Department and UTA.

Modeling the Effects of Bomb Blasts

(a) Neurons surrounded by the extracellular matrix (ECM) in the central nervous system; the region of the ECM in the immediate vicinity of neurons is called the perineuronal net (PNN), whose components are shown in the magnified view. (b–d) Schematic of a bubble before, during and after collapse. Image courtesy of TACC.

Understanding the details of the substructure of the PNN requires extremely high-resolution modeling, which was enabled by more than 1 million compute hours on the Stampede supercomputer at the Texas Advanced Computing Center (TACC). Adnan and his team were able to access TACC resources through a unique initiative, called the University of Texas Research Cyberinfrastructure (UTRC), which gives researchers from the state’s 14 public universities and health centers access to TACC’s systems and staff expertise.

The team ran 36 sets of simulations, each modeling the interactions of more than a million atoms, and used thousands of computer processors simultaneously.

“The study suggests that when a shock wave enters the brain, the wave can reach the atomistic scale and interact with the water molecules, biomolecules and even the ions,” Adnan said. “The science spans from the atomistic scale to the macroscopic scale — nine orders of magnitude larger. At different length scales, we have different physics and time-frames that we have to capture and we can’t ignore one over the other. So, we have to model this complicated system in the most detailed way possible to see what’s going on.”
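For readers who want a concrete picture of what one step of such a simulation involves, the sketch below is a deliberately tiny, generic molecular dynamics example in Python: velocity-Verlet integration of a handful of Lennard-Jones particles in reduced units. It is not the UTA team’s model or code (production runs use specialized MD packages, millions of atoms and explicit shock loading); it only illustrates the force-then-update loop that gets repeated across enormous atom counts on a machine like Stampede.

    # Minimal, illustrative molecular dynamics sketch (not the UTA/TACC production code):
    # velocity-Verlet integration of a few Lennard-Jones particles in reduced units.
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces and potential energy."""
        forces = np.zeros_like(pos)
        energy = 0.0
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[i] - pos[j]
                r2 = np.dot(rij, rij)
                inv_r6 = (sigma * sigma / r2) ** 3
                energy += 4 * eps * (inv_r6 ** 2 - inv_r6)
                # Force on atom i from atom j, directed along rij
                f = 24 * eps * (2 * inv_r6 ** 2 - inv_r6) / r2 * rij
                forces[i] += f
                forces[j] -= f
        return forces, energy

    def run_md(pos, vel, mass=1.0, dt=0.002, steps=200):
        """Velocity-Verlet time integration: the loop repeated billions of times in real runs."""
        f, _ = lj_forces(pos)
        for _ in range(steps):
            pos += vel * dt + 0.5 * (f / mass) * dt ** 2
            f_new, _ = lj_forces(pos)
            vel += 0.5 * (f + f_new) / mass * dt
            f = f_new
        return pos, vel

    # Tiny demo: a slightly perturbed 3x3x3 cube of atoms (real runs use >1 million atoms).
    rng = np.random.default_rng(0)
    grid = 1.2 * np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)], float)
    pos, vel = run_md(grid + 0.01 * rng.standard_normal(grid.shape), np.zeros_like(grid))
    print("mean atom speed after 200 steps:", np.linalg.norm(vel, axis=1).mean())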

The team focused on the damage in hyaluronan, which is the net’s main structural component. Their results show that the localized supersonic forces created by an asymmetrical bubble collapse may generate a phenomenon known as a “water hammer” – a powerful pressure wave – which can break the hyaluronan. The research improves current knowledge and understanding of the connection between damage to the perineuronal net and neurodegenerative disorders.

“Dr. Adnan’s recently published findings offer important insight into how the brain is affected in combat scenarios,” said Duane Dimos, UTA vice president for research. “Understanding the effects of blast injuries on the brain and knowing that cavitation occurs is an important step toward finding better ways to prevent traumatic brain injuries on the battlefield.”

Studying Space Shuttle Materials with Supercomputers

Parallel to his brain research, Adnan works on ways to develop strong ceramic-based materials for advanced structural applications, notably for space shuttle reentry vehicles.

His computational designs of novel multiphase ceramic-ceramic and ceramic-metal materials are helping researchers better understand these materials so that new, improved ones can be created.

In January 2018, Adnan and his PhD student Md. Riaz Kayser published a paper in the Journal of the American Ceramic Society, in collaboration with experimentalists from Missouri University of Science and Technology, describing a molecular study of the mechanical properties of ZrB2 (zirconium diboride) and a ZrC-ZrB2 (zirconium carbide-zirconium diboride) nanocomposite.

“These materials belong to a class of refractory ceramics called Ultra-High-Temperature Ceramics, or UHTCs, one of only a few material systems that can be used for hypersonic vehicles,” Adnan said. “The vehicles go at such high speed that they need to survive temperatures above 3,600 degrees Fahrenheit, and most materials will just melt. UHTCs are the only materials that can survive under extreme conditions.”

Though hard and heat-resistant, these metal-ceramic hybrids are brittle. In the Columbia shuttle disaster of 2003, a ceramic tile broke off and the material underneath melted, leading to the loss of the vehicle. Adnan’s overall goal is to improve the properties of these materials so they don’t shatter easily.

“We revealed through our study that the conventional wisdom, that if you put a nanoparticle in the system you’d always get better results, is not necessarily guaranteed,” he explained. “What we observed is that grain-boundary materials at the nanoscale are weaker than any other part of the material. As such, the presence of nanoparticles doesn’t improve their strength. The paper is about finding the fundamental reason behind why nano-reinforcement isn’t always very effective. We need to design our manufacturing process to get the best out of the nanoparticle infusion in ceramic materials.”

Though this line of research seems a far cry from simulations of brain-damaging bomb blasts, it is actually much more similar than it first appears.

“My interest is in the behavior of materials at the atomic scale. The tools that I use are the same; it’s just the applications that are different,” Adnan said. “We have the experience in our group and among our collaborators that allows us to be highly diversified and multidisciplinary.”

The research is supported by a grant from the National Science Foundation’s Materials Engineering and Processing program.

Source: Aaron Dubrow and Jeremy Agor, TACC


HPC Systems, Netlist, and Nyriad to Accelerate the Adoption of Persistent Memory and GPUs for Storage

Wed, 01/24/2018 - 11:26

SAN JOSE, Calif., Jan. 24, 2018 — Netlist, Inc. (NASDAQ: NLST), Nyriad and HPC Systems today announced their joint collaboration to deliver superior throughput, IOPS and resilience for high performance Lustre storage solutions for large and small files targeted at large-scale cluster computing.

HPC Systems is a leading expert in configuring Lustre solutions for scientific workloads. In collaboration with Netlist and Nyriad, it brings to market next-generation solutions that achieve superior performance and resilience at a lower cost than previous solutions based on RAID controllers. Netlist’s NVvault non-volatile NVDIMM, combined with Nyriad’s GPU-accelerated storage, enables larger, more parallel Lustre OSD nodes to achieve higher throughput and higher IOPS at scale.

Teppei Ono, President of HPC Systems, said, “This collaboration is one of the exciting revolutions in high performance computing, where larger scale solutions result in a leap in efficiency. With Nyriad’s NSULATE technology, we get higher performance and resilience the more drives we add to a node. At larger storage scales with higher throughput, we can dispense with RAID controllers and use NVDIMMs from Netlist to achieve millions of IOPS for large and small file transactions.”

Mario Martinez, Netlist Senior Director of Marketing, said, “The CPU and RAID controller have become major obstacles to efficiently scaling parallel storage.  With the team at HPC Systems and Nyriad, we introduce a new architecture based on our NVvault NVDIMM to bypass these obstacles altogether and deliver an innovative and superior performing solution.”

Nyriad Chief Technology Officer Alex St. John stated, “Netlist’s NVvault combined with using the GPU for storage processing gives us the maximum possible IOPS and a solution for overcoming the performance challenges introduced by the Meltdown and Spectre patches to the operating system. Through the utilization of NVDIMMs to transfer data directly to the GPU for storage processing, the Linux kernel obstacles are bypassed and we can deliver enhanced performance and resilience.”

To find out more about the HPC Systems, Netlist and Nyriad collaboration, please visit Netlist’s booth at the Persistent Memory Summit in San Jose, Calif., on Wednesday, January 24, held during the SNIA Annual Membership Symposium.

Netlist’s NVvault DDR4 is an NVDIMM-N that provides data acceleration and protection in a JEDEC standard DDR4 interface. It is designed to be integrated into industry standard server or storage solutions. NVvault is a persistent memory technology that has been widely adopted by industry standard servers and storage systems. By combining the high performance of DDR4 DRAM with the non-volatility of NAND flash, NVvault improves the performance and data preservation found in storage virtualization, RAID, cache protection, and data logging applications requiring high throughput.

Nyriad’s NSULATE addresses these bottlenecks by replacing RAID controllers with GPUs for all Linux storage applications. This enables the GPUs to perform double duty as both I/O controllers and compute accelerators in the same integrated solution. The combination of Netlist NV Memory with NSULATE produces the best of both worlds: the lowest-latency IOPS achievable by any storage solution, combined with maximum data resilience, security, throughput and efficiency in the same architecture.
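The underlying idea, that parity and erasure codes are just arithmetic over data blocks and can therefore be computed by whatever processor offers the most throughput, can be illustrated with a toy example. The Python sketch below shows single-parity (RAID-5-style XOR) protection and recovery; it is a hypothetical illustration only, not Nyriad’s NSULATE algorithm, which uses far more sophisticated GPU-accelerated erasure coding across many drives.

    # Toy single-parity (XOR) erasure-coding sketch (illustrative only; not Nyriad's
    # NSULATE implementation, which uses GPU-accelerated erasure codes across many drives).
    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together (the parity computation)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [b"ABCD", b"EFGH", b"IJKL"]       # three data "drives"
    parity = xor_blocks(data)                # parity written to a fourth "drive"

    lost = data.pop(1)                       # simulate losing one drive
    recovered = xor_blocks(data + [parity])  # XOR of survivors and parity rebuilds it
    assert recovered == lost
    print("recovered block:", recovered)     # b'EFGH'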

About HPC Systems

HPC Systems Inc. is a leading system integrator of High Performance Computing (HPC) solutions. Since its inception in 2006, the company has quickly established itself as a technology and performance leader in the Japanese small-to-mid-range HPC market. The company plans further growth and development in world-class HPC cloud solutions to support customers’ research and technological development worldwide.

About Netlist

Netlist is a leading provider of high-performance modular memory subsystems serving customers in diverse industries that require superior memory performance to empower critical business decisions. Flagship products NVvault and EXPRESSvault enable customers to accelerate data running through their servers and storage and reliably protect enterprise-level cache, metadata and log data by providing near instantaneous recovery in the event of a system failure or power outage. HybriDIMM, Netlist’s next-generation storage class memory product, addresses the growing need for real-time analytics in Big Data applications and in-memory databases. Netlist holds a portfolio of patents, many seminal, in the areas of hybrid memory, storage class memory, rank multiplication and load reduction. Netlist is part of the Russell Microcap Index.  To learn more, visit www.netlist.com.

About Nyriad

Nyriad is a New Zealand-based exascale computing company specializing in advanced data storage solutions for big data and high-performance computing. Born out of its consulting work on the Square Kilometre Array project, the company was forced to rethink the relationship between storage, processing and bandwidth to achieve a breakthrough in system stability and performance capable of processing and storing over 160Tb/s of radio antenna data in real time, within a power budget impossible to meet with any modern IT solution.

Source: Netlist, Inc.


Asetek to Liquid Cool Fujitsu Supercomputer at Tohoku University

Wed, 01/24/2018 - 10:54

OSLO, Norway, Jan. 24, 2018 — Asetek today announced a new order from OEM partner Fujitsu for the Institute of Fluid Science at Tohoku University in Japan. The supercomputing system will consist of multiple computational sub-systems using the latest liquid-cooled Fujitsu PRIMERGY x86 servers, and is planned to deliver a peak theoretical performance exceeding 2.7 petaflops.

The order has a value of USD 420,000 with delivery to be completed in Q1 2018.

About Asetek

Asetek is a global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK.OL). For more information, visit www.asetek.com

Source: Asetek


NCSA 2018 Blue Waters Webinar Series Begins January 24

Wed, 01/24/2018 - 08:11

Jan. 24, 2018 — The National Center for Supercomputing Applications’ (NCSA) popular Blue Waters Webinar series kicks off its 2018 offerings on Wednesday, January 24. The free NCSA Blue Waters Webinars are designed to increase high performance computing (HPC) knowledge and skills among the research community.

All webinars are broadcast on YouTube and recorded for viewing anytime. Registered participants will receive the link to the live broadcast and will be able to ask questions during the presentation using NCSA’s Blue Waters Slack channel. All webinars are held on Wednesdays and begin at 10 a.m. Central Time; most run about one hour.

Upcoming webinars include:

  • January 24 — NumFOCUS: An approach to sustaining major scientific software projects by Andy Terrel, President of NumFOCUS
    • A discussion about NumFOCUS, a non-profit focused on open-source scientific computing.
  • January 31 — Blue Waters Overview by Blue Waters Project Office
    • Members of the Blue Waters staff will provide an overview of the Blue Waters resources and services, and guidelines for submitting requests for allocations.
  • February 7 — Machine Learning by Aaron Saxton, NCSA
    • The webinar will walk through an image feature detection pipeline on the Blue Waters supercomputer.
  • February 14 — Software Sustainability by Daniel S. Katz, NCSA
    • A discussion of the work of six WSSSPE workshops that have brought together communities to focus on software sustainability in high performance computing.
  • February 28 — Analysis and Visualization with yt by Matt Turk, NCSA

Additional webinars will continue to be added throughout the year. Suggestions for topics and offers to present content are welcome. Please send your suggestions to bw-eot@ncsa.illinois.edu.

Additional details about the NCSA Blue Waters Webinars and a registration form are available at https://bluewaters.ncsa.illinois.edu/webinars.

About Blue Waters

Blue Waters is one of the most powerful supercomputers in the world. Located at the University of Illinois, it can complete more than 1 quadrillion calculations per second on a sustained basis and more than 13 times that at peak speed. The peak speed is almost 3 million times faster than the average laptop. Blue Waters is supported by the National Science Foundation and the University of Illinois; the National Center for Supercomputing Applications (NCSA) manages the Blue Waters project and provides expertise to help scientists and engineers take full advantage of the system for their research.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA


Eni Takes the Lead in Industrial Supercomputing

Tue, 01/23/2018 - 22:24

No sooner had one system, used by BP, been declared the most powerful supercomputer in the industrial sphere than it was displaced. Last Thursday (Jan. 18) it was Italian energy company Eni’s turn to take the lead with the launch of its latest petawhopper. At 18.6 petaflops (peak), the new cluster, HPC4, becomes the world’s most powerful commercial system (that we know of) and quadruples the company’s computing capacity to an aggregate peak performance of 22.4 petaflops.

With the debut of HPC4, the baton for industrial HPC leadership is passed from BP to Eni. In December, BP boosted its top supercomputer’s processing speed to 9 petaflops. Total was the previous record-holder with the 6.7-petaflop “Pangea” system, introduced in 2016.

“HPC4 is an important achievement in Eni’s digitalisation process,” commented Eni CEO Claudio Descalzi. The new machine will host the ecosystem of algorithms developed by Eni to support its activities in the exploration and production sector. In the risky and competitive energy exploration space, operating the latest HPC tech is essential for high-accuracy, high-resolution seismic imaging, geological modelling and reservoir simulation.

“These technologies will enable us, on the one hand, to accelerate and make the entire upstream process more efficient and accurate, reducing risks in the exploration phase and, at the same time, giving us a significant technological advantage, but also to increase the level of reliability, technical integrity and operability of all our productive plants, while minimising operational risks, with benefits both in terms of safety and environmental impact,” said Descalzi.

If HPC4 were given a run at the current Top500 list, there’s a good chance it would crack the top ten, breaking new ground for a commercially-owned supercomputer. By the time the new system is benchmarked for the June list, a top 20 position is a safe bet.

HPC4 joins the Lenovo-built HPC3 cluster, which ranks 51st on the current Top500 list with 2.6 petaflops Linpack (3.8 petaflops peak). Designed by Hewlett Packard Enterprise (HPE), the new cluster encompasses 1,600 ProLiant DL380 nodes linked with EDR InfiniBand. Each node is equipped with two Intel 24-core Skylake processors and two Nvidia Tesla P100 GPU accelerators. It also includes a 15-petabyte storage subsystem. The system has a purported price tag of $25 million, according to a Bloomberg report.
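A back-of-the-envelope calculation shows how a configuration like that reaches the quoted peak. The figures below are assumptions for illustration (roughly 4.7 teraflops of FP64 peak per Tesla P100 and on the order of 2 teraflops for a dual-socket 24-core Skylake node), not numbers supplied by Eni or HPE, but they land close to the 18.6-petaflop mark.

    # Back-of-the-envelope peak estimate for HPC4 (illustrative assumptions, not Eni/HPE figures).
    nodes = 1600
    gpus_per_node = 2
    tflops_per_p100 = 4.7      # assumed FP64 peak per Tesla P100
    tflops_per_cpu_pair = 2.2  # assumed FP64 peak for two 24-core Skylake CPUs

    peak_pflops = nodes * (gpus_per_node * tflops_per_p100 + tflops_per_cpu_pair) / 1000
    print(f"estimated peak: {peak_pflops:.1f} petaflops")   # ~18.6 petaflops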

Eni Green Data Center

Both systems will be housed inside Eni’s Green Data Center, located in Ferrera Erbognone in Pavia, Italy. Built on a former rice paddy, the Green Data Center opened in 2013 to host all of Eni’s HPC architecture and its business applications.

From the earliest days of supercomputing, the oil and gas industry has led the commercial sector in its demand for HPC. Energy companies were the first to cross the petascale threshold for industry and now they are looking to the exascale horizon.

With competitive pressures mounting to process ever-increasing amounts of data to accelerate cycle times in the upstream process, Eni has its eye on exascale. “With HPC4 we are tracing the path for the use of exascale supercomputers in the energy sector that could revolutionise the way in which oil & gas activities are managed,” said CEO Descalzi.

Based in Italy, Eni is active in 73 countries and has more than 32,000 employees. The company’s exploration and production (E&P) division, which is focused on finding and producing oil and gas, has a presence in 42 countries.


CESNET, GÉANT Deploy 300 Gbps Wavelength in R&E Community

Tue, 01/23/2018 - 16:08

PRAGUE, Jan. 23, 2018 — CESNET, in the R&E community’s first deployment of Ciena’s Waveserver Ai platform, has interconnected GÉANT’s high-capacity routers using a 300 Gbps alien wavelength over 530 km of CESNET’s network with the Czech Light Open Line System. This accomplishment shows how the research and education (R&E) community can establish reliable all-optical transmission that transits multiple networks to support the high-performance computing (HPC) networks, data storage and transfer capabilities, and collaborative working environments that are crucial to Europe’s modern academic and scientific organizations.

This solution has made it possible for CESNET to improve GÉANT network resiliency through additional network meshing by combining spectrum sharing of the CESNET network with DCI technology from Ciena. By deploying a 300 Gbps wavelength, CESNET and Ciena demonstrate how R&E institutions can set up international capacity to run exceptionally bandwidth-intensive services and applications such as HPC with future exascale supercomputers, large data set transfers and data storage migration. GÉANT is now carrying live traffic over an unprecedented wavelength capacity of 300 Gbps across CESNET’s existing fibre, originally engineered as a 10 Gbps compensated network, between the Prague and Vienna nodes located in two neighboring countries.

With national research and education network (NREN) traffic growing more than 70 percent each year, the high-density Waveserver Ai stackable interconnect platform, equipped with industry-leading WaveLogic Ai coherent optics, allows CESNET to deliver to GÉANT a solution that supports high-capacity applications while reducing power, footprint and cost per bit. GÉANT is also reducing network cost by spectrum sharing with NRENs while improving network resiliency by using network meshing as a safeguard against unpredictable spikes in traffic. Additionally, alien wavelength implementation enables the sharing of dark fiber infrastructure within the NREN community, ensuring these organizations are prepared for future growth by allowing them to quickly scale capacity up or down based on network demand.

“Our network is designed to support the transfer of extremely large data sets and dedicated photonic services among geographically dispersed locations, especially due to unique concepts used for network development, for example Nothing in Line (NIL). The 300 Gbps wavelength is seamlessly running in parallel with precise time and ultra-stable optical frequency transmissions,” said Josef Vojtěch, head of the optical networks department at CESNET. “Together with Ciena and GÉANT, we are ensuring Europe’s R&E community can reliably connect anywhere, at any time, with any capacity.”

“CESNET, an active representative of NRENs in Europe, is evaluating the newest technologies to take a critical step toward future-proofing data networks for educational and research communities throughout the region,” said Rod Wilson, Chief Technologist for Research of Ciena. “Waveserver Ai is designed to address evolving density and power requirements for ultra-high-capacity interconnect applications of this nature.”


Source: CESNET


Google Showcases 2017 AI Research Highlights

Tue, 01/23/2018 - 16:04

Looking for a good snapshot of the state of AI research? Cloud giant Google recently reviewed its 2017 AI research and application highlights in a two-part blog. While hardly comprehensive, it’s a worthwhile, fast read for AI watchers. Few companies are as actively involved in AI as Google, so a bit of chest thumping seems warranted. The work is also full of links to supporting resources (papers, videos & sound clips, other blogs).

“In Part 1 of this blog post, we shared some of our work in 2017 related to our broader research, from designing new machine learning algorithms and techniques to understanding them, as well as sharing data, software, and hardware with the community. In this [second] post, we’ll dive into the research we do in some specific domains such as healthcare, robotics, creativity, fairness and inclusion, as well as share a little more about us,” wrote Jeff Dean, Google senior fellow and member of the Google Brain Team.

Here’s a snippet of one section on machine learning:

“The use of machine learning to replace traditional heuristics in computer systems also greatly interests us. We have shown how to use reinforcement learning to make placement decisions for mapping computational graphs onto a set of computational devices that are better than human experts. With other colleagues in Google Research, we have shown in “The Case for Learned Index Structures” that neural networks can be both faster and much smaller than traditional data structures such as B-trees, hash tables, and Bloom filters. We believe that we are just scratching the surface in terms of the use of machine learning in core computer systems, as outlined in a NIPS workshop talk on Machine Learning for Systems and Systems for Machine Learning.”
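The learned-index idea in that excerpt is easy to sketch: rather than walking a B-tree, a model predicts where a key should sit in a sorted array, and a small, bounded local search corrects any error. The toy Python version below (a hypothetical linear-model index, not Google’s implementation) shows the mechanism.

    # Toy "learned index": a linear model predicts a key's position in a sorted array,
    # then a bounded local search corrects the prediction. Illustrative only; not the
    # recursive model index described in "The Case for Learned Index Structures".
    import bisect
    import numpy as np

    class LearnedIndex:
        def __init__(self, keys):
            self.keys = np.sort(np.asarray(keys))
            positions = np.arange(len(self.keys))
            # Least-squares fit: position ~ slope * key + intercept
            self.slope, self.intercept = np.polyfit(self.keys, positions, 1)
            predicted = self.slope * self.keys + self.intercept
            # Worst-case prediction error bounds the local search window
            self.max_err = int(np.ceil(np.max(np.abs(predicted - positions)))) + 1

        def lookup(self, key):
            guess = int(self.slope * key + self.intercept)
            lo = max(0, guess - self.max_err)
            hi = min(len(self.keys), guess + self.max_err + 1)
            i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
            if i < len(self.keys) and self.keys[i] == key:
                return i
            return None

    keys = np.random.default_rng(1).integers(0, 1_000_000, size=10_000)
    index = LearnedIndex(keys)
    probe = int(index.keys[1234])
    assert index.keys[index.lookup(probe)] == probe

In the paper, the single linear model is replaced by a hierarchy of learned models whose worst-case prediction error bounds the final search window, which is where the claimed speed and space advantages over B-trees come from.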

It’s interesting to watch as the cloud giant has transformed from mostly a technology consumer, at least in hardware, into a vast technology invention machine, even in the processor area. Google still buys a lot, but it invents a lot too, as noted here.

“We provided design input to Google’s Platforms team and they designed and produced our first generation Tensor Processing Unit (TPU): a single-chip ASIC designed to accelerate inference for deep learning models (inference is the use of an already-trained neural network, and is distinct from training). This first-generation TPU has been deployed in our data centers for three years, and it has been used to power deep learning models on every Google Search query, for Google Translate, for understanding images in Google Photos, for the AlphaGo matches against Lee Sedol and Ke Jie, and for many other research and product uses. In June, we published a paper at ISCA 2017, showing that this first-generation TPU was 15X – 30X faster than its contemporary GPU or CPU counterparts, with performance/Watt about 30X – 80X better.”

Avoiding bias is another ongoing challenge, and not surprisingly an area where Google is active:

“As ML plays an increasing role in technology, considerations of inclusivity and fairness grow in importance. The Brain team and PAIR have been working hard to make progress in these areas. We’ve published on how to avoid discrimination in ML systems via causal reasoning, the importance of geodiversity in open datasets, and posted an analysis of an open dataset to understand diversity and cultural differences. We’ve also been working closely with the Partnership on AI, a cross-industry initiative, to help make sure that fairness and inclusion are promoted as goals for all ML practitioners.”

You get the idea. There’s a fair bit more in the blog, and much of the value is in reviewing the attached (via links) supporting work. Yes, Google indulges in a bit of bragging here, but then again, why not.

Link to blog: https://research.googleblog.com


DoD Adds 14 Petaflops of Computing Power with Seven New Systems

Tue, 01/23/2018 - 10:22

Jan. 23 — New HPC systems at the Air Force Research Laboratory and Navy DoD Supercomputing Research Centers will provide an additional 14 petaflops of computational capability.

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) completed its fiscal year 2017 investment in supercomputing capability supporting the DoD Science and Technology (S&T), Test and Evaluation (T&E), and Acquisition Engineering communities. The acquisition consists of seven supercomputing systems with corresponding hardware and software maintenance services. At 14 petaflops, this procurement will increase the DoD HPCMP’s aggregate supercomputing capability to 47 petaflops. These systems significantly enhance the Program’s capability to support the Department of Defense’s most demanding computational challenges.

The new supercomputers will be installed at the Air Force Research Laboratory (AFRL) and Navy DoD Supercomputing Resource Centers (DSRCs), and will serve users from all of the services and agencies of the Department.

The AFRL DSRC in Dayton, Ohio, will receive four HPE SGI 8600 systems containing Intel Xeon Platinum 8168 (Skylake) processors. The architectures of the four systems are as follows:

– A single system of 56,448 Intel Platinum Skylake compute cores and 24 NVIDIA Tesla P100 General-Purpose Graphics Processing Units (GPGPUs), 244 terabytes of memory, and 9.2 petabytes of usable storage.

– A single system of 13,824 Intel Platinum Skylake compute cores, 58 terabytes of memory, and 1.6 petabytes of usable storage.

– Two systems, each consisting of 6,912 Intel Platinum Skylake compute cores, 30 terabytes of memory, and 1.0 petabytes of usable storage.

The Navy DSRC at Stennis Space Center, Mississippi, will receive three HPE SGI 8600 systems containing Intel Xeon Platinum 8168 (Skylake) processors. The architectures of the three systems are as follows:

– Two systems, each consisting of 35,328 Intel Platinum Skylake compute cores, 16 NVIDIA Tesla P100 GPGPUs, 154 terabytes of memory, and 5.6 petabytes of usable storage.

– A single system consisting of 7,104 Intel Platinum Skylake compute cores, four NVIDIA Tesla P100 GPGPUs, 32 terabytes of memory, and 1.0 petabytes of usable storage.
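As a rough consistency check on the quoted capability, the seven systems total 161,856 Skylake cores plus 60 Tesla P100s; with an assumed per-core FP64 peak of about 86 gigaflops (2.7 GHz x 32 flops per cycle, ignoring AVX-512 frequency reductions) and an assumed 4.7 teraflops per P100, the aggregate comes out near 14 petaflops. The figures in the snippet below are illustrative assumptions, not HPCMP’s own accounting.

    # Rough consistency check of the ~14 petaflops figure (illustrative assumptions only).
    core_counts = [56_448, 13_824, 6_912, 6_912, 35_328, 35_328, 7_104]  # the seven systems
    p100_count = 24 + 16 + 16 + 4                                        # Tesla P100 GPGPUs

    flops_per_core = 2.7e9 * 32   # assumed: 2.7 GHz x 32 FP64 flops/cycle with AVX-512
    flops_per_p100 = 4.7e12       # assumed FP64 peak per Tesla P100

    peak = sum(core_counts) * flops_per_core + p100_count * flops_per_p100
    print(f"{sum(core_counts):,} cores, {p100_count} GPUs "
          f"-> roughly {peak / 1e15:.1f} petaflops peak")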

The systems are expected to enter production service in the second half of calendar year 2018.

About the DOD High Performance Computing Modernization Program (HPCMP)

The HPCMP provides the Department of Defense supercomputing capabilities, high-speed network communications and computational science expertise that enable DOD scientists and engineers to conduct a wide range of focused research and development, test and evaluation, and acquisition engineering activities. This partnership puts advanced technology in the hands of U.S. forces more quickly, less expensively, and with greater certainty of success. Today, the HPCMP provides a comprehensive advanced computing environment for the DOD that includes unique expertise in software development and system design, powerful high performance computing systems, and a premier wide-area research network. The HPCMP is managed on behalf of the Department of Defense by the U.S. Army Engineer Research and Development Center located in Vicksburg, Mississippi. For more information, visit our website at: https://www.hpc.mil.

Source: DOD High Performance Computing Modernization Program


U of I Researcher Recognized with ACM Fellowship for Contributions to Parallel Programming

Tue, 01/23/2018 - 08:22

Jan. 23, 2018 — Laxmikant “Sanjay” Kale, a professor of Computer Science at the University of Illinois at Urbana-Champaign and an NCSA Faculty Affiliate, was named to the 2017 class of fellows of the Association for Computing Machinery (ACM), the scientific computing community’s largest society.

At Illinois, Kale has pioneered an effort to integrate adaptive runtime systems into parallel programming, leading to collaboration and the development of scalable applications across industries, from biophysics to quantum chemistry and even astronomy. Kale also leads the Parallel Programming Laboratory at the University of Illinois.

The ACM Fellows Program, the organization’s most prestigious honor, recognizes the top 1 percent of ACM members for outstanding accomplishments in computing and information technology. In Kale’s case, the honor reflects the adaptive runtime systems for parallel computing that he and his group have pioneered and implemented in the Charm++ parallel programming framework, maintained at the University of Illinois at Urbana-Champaign.

“We co-developed several science and engineering applications using Charm++, which allowed us to validate and improve the Adaptive Runtime techniques we were developing in our research in the context of full applications,” said Kale. “The application codes developed include NAMD (biophysics), OpenAtom (quantum chemistry/materials modeling), ChaNGa (astronomy), EpiSimdemics (simulation of epidemics), etc. These are highly scalable codes that run from small clusters to supercomputers, including Blue Waters, on hundreds of thousands of processor cores.”

This adaptive runtime system allows code to run much more efficiently than before, keeping an ever-vigilant digital eye on individual processors and how they are processing data, reducing idle time and ultimately shortening processing time.

“Our approach allows parallel programmers to write code without worrying about where (i.e. on which processor) the code will execute, or which data will be on what processor,” explained Kale. “The runtime system continuously watches the program behavior and moves data and code-execution among processors so as to automatically improve performance, for example, via dynamic load balancing. This approach especially helps development of complex or sophisticated parallel algorithms.”
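The over-decomposition idea Kale describes can be made concrete with a small sketch. In the toy Python scheduler below, which is a conceptual illustration only and not the Charm++ runtime or its API, work is split into many more migratable objects than processors, and a measurement-based balancer reassigns the objects so no processor carries a disproportionate load.

    # Conceptual sketch of over-decomposition plus measurement-based load balancing.
    # This illustrates the idea only; it is not the Charm++ runtime or its API.
    import random

    def greedy_rebalance(object_costs, num_procs):
        """Place each migratable object on the currently least-loaded processor, heaviest first."""
        loads = [0.0] * num_procs
        assignment = {}
        for obj, cost in sorted(object_costs.items(), key=lambda kv: -kv[1]):
            proc = min(range(num_procs), key=loads.__getitem__)
            assignment[obj] = proc
            loads[proc] += cost
        return assignment, loads

    random.seed(0)
    num_procs = 8
    # Over-decompose: far more work objects than processors, with "measured" per-object
    # costs standing in for what the runtime would observe during execution.
    object_costs = {f"obj{i}": random.uniform(0.5, 5.0) for i in range(64)}

    naive = [0.0] * num_procs
    for i, cost in enumerate(object_costs.values()):   # naive round-robin placement
        naive[i % num_procs] += cost
    _, balanced = greedy_rebalance(object_costs, num_procs)

    avg = sum(object_costs.values()) / num_procs
    print(f"round-robin imbalance (max/avg load): {max(naive) / avg:.2f}")
    print(f"rebalanced  imbalance (max/avg load): {max(balanced) / avg:.2f}")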

Kale’s continued work will thus enable the further expansion of parallel algorithms for high performance computing, which should see broader use as time goes on.

“The credit for my success and for this award certainly goes to generations of my students who worked on various aspects of adaptive runtime systems,” Kale concluded.

Sanjay Kale is also a Professor of Computer Science at the University of Illinois at Urbana-Champaign, and a faculty affiliate at the National Center for Supercomputing Applications. More information on the ACM Fellow program can be found here.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About NCSA’s Faculty Affiliate Program

NCSA’s Faculty Affiliate Program provides an opportunity for faculty and researchers at the University of Illinois at Urbana-Champaign to catalyze and develop long-term research collaborations between Illinois departments, research units, and NCSA.

Source: NCSA


ClusterVision Announces Completion of GPU Cluster System for ASTRON

Mon, 01/22/2018 - 08:29

Jan. 22, 2018 — ClusterVision, Europe’s dedicated specialist in high performance computing solutions, has announced the successful completion of a new high performance computing GPU cluster system for the Netherlands Institute for Radio Astronomy (ASTRON). The 2 PFLOPS installation, codenamed ARTS, will be used to assist the institute’s Westerbork Synthesis Radio Telescope with analysing and deciphering large pulsar flashes.

As part of the APERTIF project, ASTRON has installed new high-speed cameras on the telescopes to better capture the pulsar flashes. To process the large amounts of data these cameras will capture, the institute was looking for a state-of-the-art HPC solution that would satisfy all of its needs. ClusterVision designed a GPU-based cluster able to process all this data. By employing a large number of GPU nodes, data from the telescopes can be processed much faster and far more precisely.

Furthermore, by utilising the deep learning capabilities a GPU cluster brings to the table, the telescopes will be able to detect pulsar flashes with much greater accuracy through self-learning. In the past, ASTRON scientists had to manually detect and input pulsar patterns. With deep learning, ARTS does it for them.
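As a rough illustration of what such a learned detector involves, the sketch below trains a tiny one-dimensional convolutional classifier to separate noisy time series containing an injected pulse from ones that do not. It is a generic PyTorch toy, not ASTRON’s ARTS pipeline or models, and the synthetic data merely stands in for the dedispersed time series a real system would process.

    # Toy 1-D convolutional "pulse detector" (generic PyTorch sketch, not ASTRON's ARTS pipeline).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_batch(n=64, length=256):
        """Half the samples are pure noise; half contain a short injected pulse (label 1)."""
        x = torch.randn(n, 1, length)
        y = torch.zeros(n, dtype=torch.long)
        for i in range(n // 2):
            start = int(torch.randint(0, length - 8, (1,)))
            x[i, 0, start:start + 8] += 3.0
            y[i] = 1
        return x, y

    model = nn.Sequential(
        nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
        nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        nn.Linear(8, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):                 # tiny training loop
        x, y = make_batch()
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    x_test, y_test = make_batch()
    accuracy = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
    print(f"toy detection accuracy: {accuracy:.2f}")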

As one of the country’s most advanced HPC installations and the most powerful GPU-based supercomputer in the Netherlands, ARTS has attracted widespread attention from the media. Public television news programme EenVandaag will feature the ARTS cluster and its use cases in its Saturday episode. See how HPC and ClusterVision have enabled ASTRON to advance its scientific discovery on 20 January 2018 at 18:15 (CEST) on NPO1.

About ASTRON

ASTRON, part of the Netherlands Organisation for Scientific Research (NWO), is the Netherlands Institute for Radio Astronomy. ASTRON’s goal is to make discoveries in radio astronomy happen. ASTRON provides front-line observing capabilities for its own astronomers in-house, and for the wider national/ international community. The institute expects this strategy to regularly result in astronomical discoveries that significantly influence our understanding of the content, the structure and the evolution of the Universe.

Source: ClusterVision


CesgaHack GPU Hackathon to Return in March

Fri, 01/19/2018 - 09:58

A CORUÑA, Spain, Jan. 19, 2018 — The Galicia Supercomputing Center (CESGA) and Appentra Solutions will host the second edition of the GPU Hackathon at the Galicia Supercomputing Center, in Santiago de Compostela, from March 5 to 9, this time with an international scope and English as the event language.

CesgaHack18 aims to help scientists and developers accelerate the execution of their scientific simulation applications using the hardware, software and a team of expert mentors in optimization, parallelization and execution of simulation programs.

At the end of the event, participants will either have a new GPU-accelerated version or a roadmap to reach that goal. The winning team will receive a GeForce GTX 1080Ti, offered by NVIDIA.

Participants will be able to create success stories, as happened last year with the team working on a model for tsunami predictions at the University of Málaga, which, with the collaboration of Appentra and Fernanda Foertter (ORNL), published a research paper presented at SC17 in Denver, the most important event in the field of high performance computing (HPC).

The Hackathon will be divided into a presentation day for all those who want to attend, and 5 intensive days to optimize the scientific simulation application of each participating team.

The deadline to participate is February 11, and participation is free of charge.

CESGA and Appentra encourage scientific projects to participate and obtain this necessary individualized training that will help them move forward and save time in their applications. More science and less coding!

 

About CESGA

Fundación Pública Galega Centro Tecnolóxico de Supercomputación de Galicia (CESGA) is the centre of computing, high performance communications systems, and advanced services of the Galician Scientific Community, the University academic system, and the National Scientific Research Council (CSIC).

About Appentra

A technology-based spin-off company of the University of Coruña established in 2012, Appentra provides top quality software tools allowing for an extensive use of High Performance Computing (HPC) techniques in all application areas of engineering, science and industry. Appentra’s target clients are companies and organizations that run frequently updated compute-intensive applications in markets like aerospace, automotive, civil engineering, biomedicine or chemistry.

Source: Appentra


Mellanox Announces Quarterly and Annual Results

Fri, 01/19/2018 - 08:54

SUNNYVALE, Calif. & YOKNEAM, Israel, Jan. 19, 2018 — Mellanox Technologies, Ltd. (NASDAQ: MLNX) has announced financial results for its fourth quarter and full year 2017, ended December 31, 2017.

“We are pleased to achieve record quarterly and full year revenues,” said Eyal Waldman, President and CEO of Mellanox Technologies. “2017 represented a year of investment and product transitions for Mellanox. Fourth quarter Ethernet revenues increased 11 percent sequentially, due to expanding customer adoption of our 25 gigabit per second and above Ethernet products across all geographies. We are encouraged by the acceleration of our 25 gigabit per second and above Ethernet switch business, which grew 41 percent sequentially, with broad based growth across OEM, hyperscale, tier-2, cloud, financial services and channel customers. During the fourth quarter, InfiniBand revenues grew 2 percent sequentially, driven by growth from our high-performance computing and artificial intelligence customers. For the full fiscal 2017, our revenues from the high performance computing market grew 13 percent year over year. Our 2017 results demonstrate the successful execution of our multi-year revenue diversification strategy, and our leadership position in 25 gigabit per second and above Ethernet adapters.”

Fourth Quarter 2017 – Highlights

  • Revenues were $237.6 million in the fourth quarter, and $863.9 million in fiscal year 2017.
  • GAAP gross margins were 64.1 percent in the fourth quarter, and 65.2 percent in fiscal year 2017.
  • Non-GAAP gross margins were 68.8 percent in the fourth quarter, and 70.4 percent in fiscal year 2017.
  • GAAP operating loss was $(6.7) million, or (2.8) percent of revenue, in the fourth quarter, and was $(17.1) million, or (2.0) percent of revenue, in fiscal year 2017.
  • Non-GAAP operating income was $38.0 million, or 16.0 percent of revenue, in the fourth quarter, and $118.7 million, or 13.7 percent of revenue, in fiscal year 2017.
  • GAAP net loss was $(2.6) million in the fourth quarter, and was $(19.4) million in fiscal year 2017.
  • Non-GAAP net income was $42.9 million in the fourth quarter, and $116.6 million in fiscal year 2017.
  • GAAP net loss per diluted share was $(0.05) in the fourth quarter, and $(0.39) in fiscal year 2017.
  • Non-GAAP net income per diluted share was $0.82 in the fourth quarter, and $2.28 in fiscal year 2017.
  • $66.9 million in cash was provided by operating activities during the fourth quarter.
  • $161.3 million in cash was provided by operating activities during fiscal year 2017.
  • Cash and investments totaled $273.8 million at December 31, 2017.

Mr. Waldman continued, “As we enter 2018, we expect to build on our momentum in Ethernet and InfiniBand. With the recent release of our BlueField system-on-chip, and the future introduction of our 200 gigabit per second InfiniBand and Ethernet products, Mellanox is well positioned to begin reaping the benefits from prior investments. Looking ahead, we anticipate seeing acceleration of revenue growth, while delivering on our commitment to more efficiently manage costs and achieve fiscal 2018 non-GAAP operating margins of 18 to 19 percent. We continue to drive improvements in profitability and identify further efficiencies that can be realized as our prior investments begin to yield positive results and we transition towards new product introductions in 2018 and beyond.”

First Quarter 2018 Outlook

We currently project:

  • Quarterly revenues of $222 million to $232 million
  • Non-GAAP gross margins of 68.5 percent to 69.5 percent
  • Non-GAAP operating expenses of $120 million to $122 million
  • Share-based compensation expense of $16.3 million to $16.8 million
  • Non-GAAP diluted share count of 52.4 million to 52.9 million

Full Year 2018 Outlook

We currently project:

  • Revenues of $970 million to $990 million
  • Non-GAAP gross margins of 68.0 percent to 69.0 percent
  • Non-GAAP operating margin of 18.0 percent to 19.0 percent
  • Non-GAAP operating margin of more than 20.0 percent exiting 2018

Recent Mellanox Press Release Highlights

  • January 16, 2018: Mellanox ConnectX®-5 Ethernet Adapter Wins Linley Group Analyst Choice Award for Best Networking Chip
  • January 9, 2018: Mellanox Discontinuing 1550nm Silicon Photonics Development Activities
  • January 4, 2018: Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers
  • December 18, 2017: Meituan.com Selects Mellanox Interconnect Solutions to Accelerate its Artificial Intelligence, Big Data and Cloud Data Centers
  • December 12, 2017: Mellanox Interconnect Solutions Accelerate Tencent Cloud High-Performance Computing and Artificial Intelligence Infrastructure
  • December 4, 2017: Mellanox and NEC Partner to Deliver Innovative High-Performance and Artificial Intelligence Platforms
  • November 14, 2017: Mellanox Propels NetApp to New Heights with 100Gb/s InfiniBand Connectivity
  • November 13, 2017: Deployment Collaboration with Lenovo will Power Canada’s Largest Supercomputer Centre with Leading Performance, Scalability for High Performance Computing Applications
  • November 13, 2017: Mellanox InfiniBand Solutions to Accelerate the World’s Next Fastest Supercomputers
  • November 13, 2017: Mellanox InfiniBand to Accelerate Japan’s Fastest Supercomputer for Artificial Intelligence Applications
  • November 13, 2017: InfiniBand Accelerates 77 Percent of New High-Performance Computing Systems on TOP500 Supercomputer List

Conference Call

Mellanox will hold its fourth quarter and fiscal year 2017 financial results conference call today, at 2 p.m. Pacific Time, to discuss the company’s financial results. To listen to the call, dial 1-800-459-5343, or for investors outside the U.S., +1-203-518-9553, approximately 10 minutes prior to the start time.

The Mellanox financial results conference call will be available via live webcast on the investor relations section of the Mellanox website at: http://ir.mellanox.com. Access the webcast 15 minutes prior to the start of the call to download and install any necessary audio software. A replay of the webcast will also be available on the Mellanox website.

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox


Tabor Communications and nGage Events Announce ‘Advanced Scale Forum 2018’

Fri, 01/19/2018 - 06:00

SAN DIEGO, Calif., Jan. 19, 2018 — Tabor Communications and nGage Events today announced the renaming of their 2018 summit. The summit, formerly known as Leverage Big Data + EnterpriseHPC, will now be known as Advanced Scale Forum and is scheduled for May 6-8, 2018, at the Hyatt Regency Lost Pines Resort & Spa in Austin, TX.

As with previous events, the summit will focus on bridging the challenges that CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions face as they work to build systems and applications that require increasing amounts of performance and throughput. Focus areas will include artificial intelligence/deep and machine learning, IoT (edge to core), cloud, HPC, blockchain, big data, and hybrid IT.

By uniting the Leverage Big Data and EnterpriseHPC conferences in 2017, Tabor Communications and nGage Events expanded the opportunity for conference attendees to collaborate with industry experts across disciplines to maximize their performance as well as their ROI. Now, the new name better reflects the scope of what the conference will offer attendees at its 2018 event.

This power forum serves as a platform for enterprise executives to share, discover, and explore the ways in which high performance computing (HPC) technologies can transform their businesses. Through 1:1 meetings, boardroom sessions, networking and entertainment, the summit unites industry leaders, solution providers, and end users to promote collaboration in developing industry themes and cultivating future success.

“Streaming analytics and high-performance computing loom large in the future of enterprises which are realizing the scaling limitations of their legacy environments,” said Tom Tabor, CEO of Tabor Communications. “As organizations develop analytic models that require increasing levels of compute, throughput and storage, there is a growing need to understand how businesses can leverage high performance computing architectures that can meet the increasing demands being put on their infrastructure.”

“In renaming and reshaping our event,” Tabor continues, “we look to support the leaders who are trying to navigate their scaling challenges, and connect them with others who are finding new and novel ways to succeed.”

The Advanced Scale Forum 2018 summit brings together industry leaders who are tackling these streaming and high-performance challenges and are responsible for driving the new vision forward. Attendees of this invitation-only summit will interact with leaders across industries who face similar technical challenges, with the aim of building dialogue and sharing solutions and approaches to delivering both system and software performance in this emerging era of computing.

The summit will be co-chaired by EnterpriseTech Managing Editor, Doug Black, and Datanami Managing Editor, Alex Woodie.

ATTENDING THE SUMMIT

This is an invitation-only hosted summit that is fully paid for qualified attendees, including flight, hotel, meals and summit badge. Targets of the summit include CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions. To apply for an invitation to this exclusive event, please fill out the qualification form at the following link: Hosted Attendee Interest Form

SUMMIT SPONSORS

Current sponsors for the summit include Arcadia Data, Attunity, Cray, HDF Group, Intel, Kinetica, Lawrence Livermore National Lab, MemSQL, NetApp, Penguin Computing, Quantum, Striim, System Fabric Works, Talking Data, & WekaIO, with more to be announced. For sponsorship opportunities, please contact us at summit@advancedscaleforum.com.

The summit is hosted by Datanami, EnterpriseTech and HPCwire through a partnership between Tabor Communications and nGage Events, the leader in host-based, invitation-only business events.


UCSD, AIST Forge Tighter Alliance with AI-Focused MOU

Thu, 01/18/2018 - 17:00

The rich history of collaboration between UC San Diego and AIST in Japan is getting richer. The organizations entered into a five-year memorandum of understanding on January 10. The MOU represents the continuation of a 15-year relationship between UCSD and Japan’s National Institute of Advanced Industrial Science and Technology (AIST) that goes back to 2002 with the establishment of the Pacific Rim Application and Grid Middleware Assembly (PRAGMA).

With the upcoming spring launch of AIST’s AI Bridging Cloud Infrastructure (ABCI), the deployment of the GPU-powered CHASE-CI machine learning infrastructure (see our coverage here), COMET’s GPU expansion, and the announcement of UCSD’s Data Science Institute, it’s easy to understand the enthusiasm for the opportunities afforded by the MOU, which builds on a shared history and mutual interests and activities around cutting-edge developments in supercomputing, AI and deep learning.

The MOU covers research, education, and application of scientific knowledge in AI and, more broadly, data-intensive science and robotics. Target activities include the organization of workshops between the U.S. and Japan; exchange of faculty, scholars and researchers between the two campuses; collaborative infrastructure projects between UC’s Pacific Research Platform (PRP) and AIST’s AI Bridging Cloud Infrastructure (ABCI); and the use of ABCI for collaborative research projects.

UCSD’s soon-to-be-launched Data Science Institute will also play a role. The institute, made possible thanks to a $75 million endowment from Taner Halicioglu (the largest ever by a UC San Diego alumnus), will be physically colocated with the San Diego Supercomputer Center (SDSC).

Phil Papadopoulos, chief technology officer of SDSC, and Satoshi Sekiguchi, vice president of AIST, at the UCSD campus signing ceremony on Jan. 10, 2018

Leaders from both groups took part in the signing ceremony and shared remarks. In addition to the two respective project leads, Satoshi Sekiguchi, vice president of AIST, and Phil Papadopoulos, chief technology officer of SDSC, we heard from Michael Norman, director, SDSC; Larry Smarr, director, Calit2; Jeff Elman, UCSD Distinguished Professor of Cognitive Science and one of the co-directors of the new Data Science Institute; and Jason Haga of AIST, speaking on behalf of AIST President Ryoji Chubachi.

Papadopoulos recounted how the groups had developed close ties under PRAGMA and are well-aligned due to their mutual interest in deep learning, GPU-based computation, big data, and very high speed networks. “With all these things happening every day at UCSD at the Data Sciences Institute, the Pacific Research Platform (PRP) out of Calit2, the GPU expansions on COMET, CHASE-CI which is a distributed deep learning platform that is just being built on top of the PRP, it made sense that this really should be a UCSD-wide agreement. AIST is really a terrific organization and is collaborative by nature in the global sense of the word.”

Satoshi Sekiguchi, vice president of AIST, shared similar sentiments and an appreciation of the extended research family. “UCSD’s strength in application and infrastructure areas aligns with AIST’s primary research interest of IT platforms and AI accelerations. These activities also align very well with the Pacific Research Platform that Larry Smarr and Tom DeFanti have been leading.”

At AIST and at the Department of Information Technology and Human Factors, where Sekiguchi serves as director general, one of the key messages on artificial intelligence research is embedding AI in the real world. “AI should be deployed in the physical space to help solve the real problems in life such as in the manufacturing industries, health care and so on and we wish to contribute to the private sectors to help them realize development of AI technologies,” said Sekiguchi. To this end, AIST has established partnerships with several well-known companies, including NEC, Panasonic, and Toyota Industries.

Sekiguchi also expressed his appreciation for the hard work that made it possible for this MOU to come together in only three months. “The short MOU negotiations happened because of our years of friendly relationships. For example, when the Calit2 building opened, they kindly offered us an office to accommodate the AIST research staff and to collaborate continuously together on the PRAGMA program and beyond that,” said Sekiguchi.

SDSC Director Michael Norman praised AIST as a world leader in developing HPC systems and applications in AI, deep learning for science and society. He referred to the ABCI system that is currently being developed with nearly 5,000 GPUs as “the mother of all GPU clusters.”

“This will be one of the most powerful systems for the areas of AI and deep learning. And so at a very practical level this MOU with UCSD will allow UCSD to have a front row seat to this bold experiment in the future of computing and we will be able to participate in it with a bidirectional visitor exchange program. Through this MOU we hope to broaden UCSD’s interactions with the scientists and engineers at AIST across the organization, building on our long-standing relationship in computing,” said Norman.

Programmatic synergies between the two groups are numerous and include energy and the environment, materials and chemistry, life sciences and biotechnology, information technology, and electronics and manufacturing.

Larry Smarr, director of Calit2, emphasized the diverse nature of the joint MOU as well as the complementarity between the university and AIST. In 2002, when Calit2 had the largest information technology research grant from NSF in the country to build the OptIPuter, AIST was a formal international partner to that grant from the beginning. This resulted in a long history of high speed optical networking between the institutions. Smarr stated that one of the goals of the MOU will be to set up a 10-100 gigabit per second link directly into AIST from UCSD to accommodate the next phase of artificial intelligence and deep learning on massive amounts of data.

Smarr is co-PI on CHASE-CI (the Cognitive Hardware and Software Ecosystem Cyberinfrastructure), the NSF-funded GPU cloud being built on top of the Pacific Research Platform. “This framework allows for investigators here with the variety of big data including cognitive science to make use of what is essentially the broadest set of architectures to support machine learning anywhere in the world,” said Smarr.

Jeff Elman, one of the co-directors of the new Data Science Institute along with Rajesh Gupta, spoke of the possibilities afforded by the MOU in relation to the new institute and the shared focus on being a force for good in the world. He also emphasized the cross-disciplinary nature of the collaboration.

“The institute has both a research mission in terms of stimulating and supporting research, innovation, but also an educational mission, in terms of training students, post-docs and also interacting with training opportunities from partners and here’s where I see really exciting opportunities with AIST,” said Elman.

“We are entering and in fact have entered an era where the kinds of data that we now have available surpass, I think, the scope of our imaginations to grapple with both in terms of scope, the range of things we can now quantify and measure and the magnitude, the scale, from the nano to the peta, and now there’s an exa and a zetta,” Elman continued. “These data have tremendous potential on the one hand to help us understand phenomena that are global in nature or micro or nano in nature, not only to understand but also to guide action because I think ultimately science and technology are about understanding the world so that one can change it to intervene when there are harmful things but also to benefit and make improvements. Reading AIST’s mission statement clearly the focus on technology for the social good is something that you value and it is clearly a very important part of the ethos of this campus and of the new institute.”

The final set of remarks were delivered by Jason Haga, senior research scientist in the Information Technology Research Institute of AIST, on behalf of AIST President Dr. Ryoji Chubachi. “[As part of this MOU] we will create joint projects between AIST and UCSD using our new ABCI infrastructure to help establish the largest collaboration platform based on AI. Both institutions will aim to build a cyberinfrastructure that enables mutual access to big data accumulated both in the U.S. and Japan. Furthermore we will expand these activities to other institutions in the U.S. as well as Asia to create a larger global network. I would like to conclude by wishing that our collaboration will lead the way in U.S.-Japan innovation in the future.”

From left to right: Jeff Elman, co-director of UCSD Data Science Institute; Michael Norman, director, SDSC; Larry Smarr, director, Calit2; Satoshi Sekiguchi, vice president of AIST; Jason Haga, senior research scientist in the Information Technology Research Institute of AIST; and Phil Papadopoulos, chief technology officer of SDSC

The post UCSD, AIST Forge Tighter Alliance with AI-Focused MOU appeared first on HPCwire.

IBM Reports 2017 Fourth-Quarter and Full-Year Results

Thu, 01/18/2018 - 17:00

ARMONK, NY, Jan. 18, 2018 — IBM (NYSE:IBM) today announced fourth-quarter and full-year 2017 earnings results.

“Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president and chief executive officer. “During 2017, we strengthened our position as the leading enterprise cloud provider and established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

“Over the past several years we have invested aggressively in technology and our people to reposition IBM,” said James Kavanaugh, IBM senior vice president and chief financial officer. “2018 will be all about reinforcing IBM’s leadership position in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

Strategic Imperatives Revenue

Fourth-quarter cloud revenues increased 30 percent to $5.5 billion (up 27 percent adjusting for currency). Cloud revenue over the last 12 months was $17.0 billion, including $9.3 billion delivered as-a-service and $7.8 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions. The annual exit run rate for as-a-service revenue increased to $10.3 billion from $8.6 billion in the fourth quarter of 2016. In the quarter, revenues from analytics increased 9 percent (up 6 percent adjusting for currency). Revenues from mobile increased 23 percent (up 21 percent adjusting for currency) and revenues from security increased 132 percent (up 127 percent adjusting for currency).

Full-Year 2018 Expectations

The company will discuss 2018 expectations during today’s quarterly earnings conference call.

Cash Flow and Balance Sheet

In the fourth quarter, the company generated net cash from operating activities of $5.7 billion, or $7.8 billion excluding Global Financing receivables. IBM’s free cash flow was $6.8 billion. IBM returned $1.4 billion in dividends and $0.7 billion of gross share repurchases to shareholders. At the end of December 2017, IBM had $3.8 billion remaining in the current share repurchase authorization.

The company generated full-year free cash flow of $13.0 billion, excluding Global Financing receivables. The company returned $9.8 billion to shareholders through $5.5 billion in dividends and $4.3 billion of gross share repurchases.

IBM ended the fourth quarter of 2017 with $12.6 billion of cash on hand. Debt totaled $46.8 billion, including Global Financing debt of $31.4 billion. The balance sheet remains strong and is well positioned over the long term.

Segment Results for Fourth Quarter

  • Cognitive Solutions (includes solutions software and transaction processing software) — revenues of $5.4 billion, up 3 percent (flat adjusting for currency), driven by security and transaction processing software.
  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.2 billion, up 1 percent (down 2 percent adjusting for currency). Strategic imperatives revenue grew 9 percent led by the cloud practice, mobile and analytics.
  • Technology Services & Cloud Platforms (includes infrastructure services, technical support services and integration software) — revenues of $9.2 billion, down 1 percent (down 4 percent adjusting for currency). Strategic imperatives revenue grew 15 percent, driven by hybrid cloud services, security and mobile.
  • Systems (includes systems hardware and operating systems software) — revenues of $3.3 billion, up 32 percent (up 28 percent adjusting for currency) driven by growth in IBM Z, Power Systems and storage.
  • Global Financing (includes financing and used equipment sales) — revenues of $450 million, up 1 percent (down 2 percent adjusting for currency).

Tax Rate

The enactment of the Tax Cuts and Jobs Act in December 2017 resulted in a one-time charge of $5.5 billion in the fourth quarter. The charge encompasses several elements, including a tax on accumulated overseas profits and the revaluation of deferred tax assets and liabilities. As a result, IBM’s reported GAAP tax rate, which includes the one-time charge, was 124 percent for the fourth quarter, and 49 percent for the full year. IBM’s operating (non-GAAP) tax rate, which excludes the one-time charge, was 6 percent for the fourth quarter; and 7 percent for the full year, which includes the effect of discrete tax benefits in the first and second quarters. Without discrete tax items, the full-year operating (non-GAAP) tax rate was 12 percent, at the low end of the company’s previously estimated range.
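
A rough back-of-envelope check shows how a one-time charge of this size can push a reported tax rate above 100 percent. The sketch below assumes the $5.5 billion charge is the only difference between the GAAP and operating (non-GAAP) rates and that both rates are measured against the same pre-tax income base; the implied pre-tax income is derived for illustration only and is not an IBM-reported figure.

    # Back-of-envelope illustration using only the Q4 figures quoted above.
    # Assumption: the $5.5B one-time charge is the sole difference between the
    # GAAP and operating (non-GAAP) tax rates, and both share one pre-tax base.
    one_time_charge = 5.5                      # $ billions
    gaap_rate, operating_rate = 1.24, 0.06     # 124% and 6% for Q4 2017
    # gaap_rate = operating_rate + one_time_charge / pretax_income
    pretax_income = one_time_charge / (gaap_rate - operating_rate)
    tax_expense = gaap_rate * pretax_income
    print(f"implied Q4 pre-tax income: ~${pretax_income:.1f}B")   # about $4.7B
    print(f"implied Q4 GAAP tax expense: ~${tax_expense:.1f}B")   # exceeds pre-tax income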

Full-Year Results

  • Full-year GAAP EPS from continuing operations of $6.14 – includes a one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year operating (non-GAAP) EPS of $13.80 – excludes the one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year revenue of $79.1 billion, down 1 percent

Link to full announcement.

Source: IBM

The post IBM Reports 2017 Fourth-Quarter and Full-Year Results appeared first on HPCwire.

PASC18 Announces Keynote Speaker, Extends Paper Deadline to Jan. 21

Thu, 01/18/2018 - 13:23

Jan. 18, 2018 — The PASC18 Organizing Team is pleased to announce a keynote presentation by Marina Becoulet from CEA, and that the deadline for paper submissions has been extended to January 21, 2018.

Marina Becoulet. Image courtesy of PASC18.

PASC18 keynote presentation: Challenges in the First Principles Modelling of Magneto Hydro Dynamic Instabilities and their Control in Magnetic Fusion Devices

The main goal of the International Thermonuclear Experimental Reactor (ITER) project is the demonstration of the feasibility of future clean energy sources based on nuclear fusion in magnetically confined plasma. In the era of ITER construction, fusion plasma theory and modelling provide not only a deep understanding of a specific phenomenon, but moreover, modelling-based design is critical for ensuring active plasma control.

The most computationally demanding aspect of the project is first principles fusion plasma modelling, which relies on fluid models – such as Magneto Hydro Dynamics (MHD) – or, increasingly often, on kinetic models. The challenge stems from the complexity of the 3D magnetic topology, the large difference in time scales from Alfvénic (10⁻⁷ s) to the confinement time (hundreds of seconds), the large difference in space scales from micro-instabilities (mm) to the machine size (a few meters), and, most importantly, from the strongly non-linear nature of plasma instabilities, which need to be avoided or controlled.
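
To put that scale separation in perspective, a quick order-of-magnitude estimate can be made from the figures quoted above; the representative values (100 seconds for confinement, a 3-meter machine) are assumptions chosen only for illustration.

    # Order-of-magnitude span of the scales described above. The representative
    # values below (100 s, 3 m) are illustrative assumptions, not ITER parameters.
    import math
    t_alfven, t_confinement = 1e-7, 1e2   # seconds
    l_micro, l_machine = 1e-3, 3.0        # meters
    print(f"time scales span  ~{math.log10(t_confinement / t_alfven):.0f} orders of magnitude")
    print(f"space scales span ~{math.log10(l_machine / l_micro):.1f} orders of magnitude")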

The current status of first principles non-linear modelling of MHD instabilities and active methods of their control in existing machines and ITER will be presented, focusing particularly on the strong synergy between experiment, fusion plasma theory, numerical modelling and computer science in guaranteeing the success of the ITER project.

About the presenter

Marina Becoulet is a Senior Research Physicist in the Institute of Research in Magnetic Fusion at the French Atomic Energy Commission (CEA/IRFM). She is also a Research Director and an International expert of CEA, specializing in theory and modelling of magnetic fusion plasmas, in particular non-linear MHD phenomena. After graduating from Moscow State University (Physics Department, Plasma Physics Division) in 1981, she obtained a PhD in Physics and Mathematics from the Institute of Applied Mathematics, Russian Academy of Science (1985). She worked at the Russian Academy of Science in Moscow, on the Joint European Torus in the UK, and since 1998 has been employed at CEA/IRFM, France.

Call for submissions reminder: deadlines are rapidly approaching!

The deadline for paper submissions has been extended to Sunday, January 21, 2018.

PASC18 upcoming submission deadlines:

  • Papers: January 21, 2018
  • Posters: February 4, 2018

Submit your contributions through the online submissions portal.

Full submission guidelines are available at: pasc18.pasc-conference.org/submission/submissions-portal/

PASC18 Scientific Committee: pasc18.pasc-conference.org/about/organization

Further information on the conference and submission possibilities are available at: pasc18.pasc-conference.org/

Source: PASC18

The post PASC18 Announces Keynote Speaker, Extends Paper Deadline to Jan. 21 appeared first on HPCwire.

Supercomputer Simulations Enable 10-Minute Updates of Rain and Flood Predictions

Thu, 01/18/2018 - 11:36

Jan. 18, 2018 — Using the power of Japan’s K computer, scientists from the RIKEN Advanced Institute for Computational Science and collaborators have shown that incorporating satellite data at frequent intervals—ten minutes in the case of this study—into weather prediction models can significantly improve the rainfall predictions of the models and allow more precise predictions of the rapid development of a typhoon.

Weather prediction models attempt to predict future weather by running simulations based on current conditions taken from various sources of data. However, the inherently complex nature of these systems, coupled with the limited precision and timeliness of the data, makes it difficult to produce accurate predictions, especially for phenomena such as sudden precipitation.

As a means to improve models, scientists are using powerful supercomputers to run simulations based on more frequently updated and accurate data. The team led by Takemasa Miyoshi of AICS decided to work with data from Himawari-8, a geostationary satellite that began operating in 2015. Its instruments can scan the entire area it covers every ten minutes in both visible and infrared light, at a resolution of up to 500 meters, and the data is provided to meteorological agencies. Infrared measurements are useful for indirectly gauging rainfall, as they make it possible to see where clouds are located and at what altitude.

For one study, they looked at the behavior of Typhoon Soudelor (known in the Philippines as Hanna), a category 5 storm that caused extensive damage in the Pacific region in late July and early August 2015. In a second study, they investigated the use of the improved data for predictions of heavy rainfall that occurred in the Kanto region of Japan in September 2015. These articles were published in Monthly Weather Review and Journal of Geophysical Research: Atmospheres.

For the study on Typhoon Soudelor, the researchers adopted a recently developed weather model called SCALE-LETKF—running an ensemble of 50 simulations—and incorporated infrared measurements from the satellite every ten minutes, comparing the performance of the model against the actual data from the 2015 tropical storm. They found that, compared to models not using the assimilated data, the new simulation more accurately forecast the rapid development of the storm. They also tried assimilating data less frequently, updating the model every 30 minutes rather than every ten minutes, and the model did not perform as well, indicating that the frequency of assimilation is an important element of the improvement.
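
The article does not include code, but the core idea of an ensemble analysis step can be sketched generically. The toy example below is a stochastic ensemble Kalman filter update, not the SCALE-LETKF algorithm itself (which uses a local ensemble transform formulation on vastly larger state and observation spaces); every dimension, the observation operator, and the error statistics are illustrative assumptions.

    # Minimal, conceptual sketch of an ensemble data-assimilation analysis step,
    # in the spirit of what happens at each 10-minute assimilation cycle.
    # NOT the SCALE-LETKF code; sizes, H, R and y below are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_members = 100, 20, 50   # toy sizes; the real system is far larger

    # Forecast ensemble: each column is one member's state after a short forecast
    Xf = rng.normal(size=(n_state, n_members))

    # Linear observation operator mapping model state to observed quantities
    # (e.g., simulated infrared brightness temperatures); assumed for simplicity
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(n_obs) * (n_state // n_obs)] = 1.0

    R = 0.5 * np.eye(n_obs)                   # observation-error covariance (assumed)
    y = rng.normal(size=n_obs)                # the newly arrived satellite observations

    # Ensemble perturbations and the ensemble-estimated Kalman gain
    xf_mean = Xf.mean(axis=1, keepdims=True)
    A = (Xf - xf_mean) / np.sqrt(n_members - 1)
    HA = H @ A
    K = A @ HA.T @ np.linalg.inv(HA @ HA.T + R)   # K = Pf H^T (H Pf H^T + R)^-1

    # Stochastic EnKF: nudge each member toward its own perturbed observation
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_members).T
    Xa = Xf + K @ (Y - H @ Xf)                # analysis ensemble starts the next forecast

    print("forecast spread:", Xf.std(), "analysis spread:", Xa.std())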

To perform the research on disastrous precipitation, the group examined data from heavy rainfall that occurred in the Kanto region in 2015. Compared to models without data assimilation from the Himawari-8 satellite, the simulations more accurately predicted the heavy, concentrated rain that took place, and came closer to predicting the situation where an overflowing river led to severe flooding.

According to Miyoshi, “It is gratifying to see that supercomputers, along with new satellite data, will allow us to create simulations that will be better at predicting sudden precipitation and other dangerous weather phenomena, which cause enormous damage and may become more frequent due to climate change. We plan to apply this new method to other weather events to make sure that the results are truly robust.”

Source: RIKEN Advanced Institute for Computational Science

The post Supercomputer Simulations Enable 10-Minute Updates of Rain and Flood Predictions appeared first on HPCwire.

ALCF Now Accepting Proposals for Data Science and Machine Learning Projects for Aurora ESP

Thu, 01/18/2018 - 11:25

Jan. 18, 2018 — The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, is now accepting proposals for data science and machine learning projects for its Aurora Early Science Program (ESP). The deadline to apply is April 8, 2018.

Now slated to be the nation’s first exascale system when it is delivered in 2021, Aurora will be capable of performing a quintillion calculations per second. The system is expected to have more than 50,000 nodes and more than 5 petabytes of total memory, including high-bandwidth memory.
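
Dividing those headline targets gives a rough sense of the per-node budget. This is simple arithmetic on the publicly stated figures, not a disclosed node specification, and the actual configuration may differ.

    # Rough per-node arithmetic from the Aurora targets quoted above; the real
    # node count and per-node performance were not final, so treat as illustrative.
    peak_flops = 1e18          # one quintillion calculations per second
    nodes = 50_000             # "more than 50,000 nodes"
    total_memory_bytes = 5e15  # "more than 5 petabytes"
    print(f"~{peak_flops / nodes / 1e12:.0f} teraflops per node")
    print(f"~{total_memory_bytes / nodes / 1e9:.0f} GB of memory per node")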

The shift in plans for Aurora, which was initially scheduled to be a 180 petaflops supercomputer, is central to a new paradigm for scientific computing at the ALCF—the expansion from traditional simulation-based research to include data science and machine learning approaches. Aurora’s revolutionary architecture will provide advanced capabilities in the three pillars of simulation, data, and learning that will power a new era of scientific discovery and innovation.

The Aurora ESP, which kicked off with 10 projects in 2017, is expanding to align with this new paradigm for leadership computing. Currently underway, the 10 original projects will serve as simulation-based projects for the new system.

This call for proposals will bring in 10 additional projects in the areas of data science (e.g., data analytics, data-intensive computing, advanced statistical analyses) and machine learning (e.g., deep learning, neural networks). Proposals for crosscutting projects that involve simulation, data, and learning are also encouraged.

Aurora ESP projects will prepare key applications for the architecture and scale of the exascale supercomputer and solidify libraries and infrastructure to pave the way for other applications to run on the system.

The program provides an exciting opportunity to be among the first researchers in the world to run calculations and workflows on an exascale system. With substantial allocations of pre-production time on Aurora, ESP teams will be able to pursue scientific computing campaigns that are not possible on today’s leadership-class supercomputers.

For more information, visit the Aurora ESP webpage and the Proposal Instructions webpage.

For hands-on assistance in preparing for ESP proposals, the ALCF is hosting the Simulation, Data, and Learning Workshop from February 27–March 1, 2018. To register, visit the workshop webpage.

Source: ALCF

The post ALCF Now Accepting Proposals for Data Science and Machine Learning Projects for Aurora ESP appeared first on HPCwire.

Groundbreaking Conference Examines How AI Transforms Our World

Thu, 01/18/2018 - 10:35

NEW YORK, Jan. 18, 2018 — ACM, the Association for Computing Machinery; AAAI, the Association for the Advancement of Artificial Intelligence; and SIGAI, the ACM Special Interest Group on Artificial Intelligence have joined forces to organize a new conference on Artificial Intelligence, Ethics and Society (AIES). The conference aims to launch a multi-disciplinary and multi-stakeholder effort to address the challenges of AI ethics within a societal context. Conference participants include experts in various disciplines such as computing, ethics, philosophy, economics, psychology, law and politics. The inaugural AIES conference is planned for February 1-3 in New Orleans.

“The public is both fascinated and mystified about how AI will shape our future,” explains AIES Co-chair Francesca Rossi, IBM Research and University of Padova. “But no one discipline can begin to answer these questions alone. We’ve brought together some of the world’s leading experts to imagine how AI will transform our future and how we can ensure that these technologies best serve humanity.”

Conference organizers encouraged the submission of research papers on a range of topics including building ethical AI systems, the impact of AI on the workforce, AI and the law, and the societal impact of AI. Out of 200 submissions, only 61 papers have been selected and will be presented during the conference.

The program of AIES 2018 also includes invited talks by leading scientists, panel discussions on AI ethics standards and the future of AI, and the presentation of the leading professional and student research papers on AI. Co-chairs include Francesca Rossi, a computer scientist and former president of the International Joint Conference on Artificial Intelligence; Jason Furman, a Harvard economist and former Chairman of the Council of Economic Advisers (CEA); Huw Price, a philosopher and Academic Director of the Leverhulme Centre for the Future of Intelligence; and Gary Marchant, Regent’s Professor of Law and Director of the Center for Law, Science and Innovation at Arizona State University.

AIES 2018 HIGHLIGHTS

INVITED TALKS

The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics

Iyad Rahwan and Edmond Awad, Massachusetts Institute of Technology

Rahwan and Awad describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game they developed enabled them to gather 40 million decisions from 3 million people in 200 countries and territories. They report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. They also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. Rahwan and Awad discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.

AI, Civil Rights and Civil Liberties: Can Law Keep Pace with Technology?

Carol Rose, American Civil Liberties Union

At the dawn of this era of human-machine interaction, human beings have an opportunity to shape fundamentally the ways in which machine learning will expand or contract the human experience, both individually and collectively. As efforts to develop guiding ethical principles and legal constructs for human-machine interaction move forward, how do we address not only what we do with AI, but also the question of who gets to decide and how? Are guiding principles of ‘Liberty and Justice for All’ still relevant? Does a new era require new models of open leadership and collaboration around law, ethics, and AI?

AI Decisions, Risk, and Ethics: Beyond Value Alignment

Patrick Lin, California Polytechnic State University

When we think about the values AI should have in order to make right decisions and avoid wrong ones, there’s a large but hidden third category to consider: decisions that are not-wrong but also not-right. This is the grey space of judgment calls, and just having good values might not help as much as you’d think here. Autonomous cars are used as the case study, and lessons are offered for broader AI, such as the ethical dilemmas that can arise in everyday scenarios like lane positioning and navigation—and not just in extreme crash scenarios. This is the space where one good value might conflict with another good value, and there’s no “right” answer or even broad consensus on an answer.

The Great AI/Robot Jobs Scare: reality of automation fear redux 

Richard Freeman, Harvard University

This talk will consider the impact of AI/robots on employment, wages and the future of work more broadly. Freeman argues that we should focus on policies that make AI and robotics technology broadly inclusive, in terms of both consumption and ownership, so that billions of people can benefit from higher productivity and get on the path to the coming age of intolerable abundance.

PANELS

What Will Artificial Intelligence Bring?

Brent Venable, Tulane University (Moderator); Paula Boddington, Oxford University; Wendell Wallach, Yale University; Jason Furman, Harvard University; and Peter Stone, UT Austin

World class researchers from different disciplines and best-selling authors will elaborate on the impact of AI on modern society and will answer questions. This panel is open to the public.

Prioritizing Ethical Considerations in Intelligent and Autonomous Systems: Who Sets the Standards? 

Takashi Egawa, NEC Corporation; Simson L. Garfinkel, USACM; John C. Havens, IEEE (moderator); Annette Reilly, IEEE; and Francesca Rossi, IBM and University of Padova

For intelligent and autonomous technologies, safety standards and standardization projects are providing detailed guidelines and requirements to help organizations institute new levels of transparency, accountability and traceability. The panelists will explore how we can build trust and maximize innovation while avoiding negative unintended consequences.

BEST PAPER AWARD (sponsored by the Partnership on AI)

Shared between the following two papers:

Transparency and Explanation in Deep Reinforcement Learning Neural Networks

Rahul Iyer, InSite Applications; Yuezhang Li, Google; Huao Li, University of Pittsburgh; Michael Lewis, Facebook; Ramitha Sundar, Carnegie Mellon; and Katia Sycara, Carnegie Mellon

For AI systems to be accepted and trusted, users should be able to understand the system’s reasoning process and form coherent explanations of its decisions and actions. This paper presents a novel and general method for visualizing the internal states of deep reinforcement learning models, enabling the formation of explanations that are intelligible to humans.

An AI Race: Rhetoric and Risks 

Stephen Cave, Leverhulme Centre for the Future of Intelligence, Cambridge University; and Seán Ó hÉigeartaigh, Centre for the Study of Existential Risk, Cambridge University

The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of AI. This paper assesses the potential risks of the AI race narrative, explores the role of the research community in responding to these risks, and discusses alternative ways to develop AI in a collaborative and responsible way.

For a complete list of research papers and posters which will be presented at the AIES Conference, visit http://www.aies-conference.com/accepted-papers/. The proceedings of the conference will be published in the AAAI and ACM Digital Libraries.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post Groundbreaking Conference Examines How AI Transforms Our World appeared first on HPCwire.

New Blueprint for Converging HPC, Big Data

Thu, 01/18/2018 - 10:31

After five annual workshops on Big Data and Extreme-Scale Computing (BDEC), a group of international HPC heavyweights including Jack Dongarra (University of Tennessee), Satoshi Matsuoka (Tokyo Institute of Technology), William Gropp (National Center for Supercomputing Applications), and Thomas Schulthess (Swiss National Supercomputing Centre), among others, has issued a comprehensive Big Data and Extreme-Scale Computing Pathways to Convergence Report. Not surprisingly it’s a large work not easily plumbed in a single sitting.

Convergence – harmonizing computational infrastructures to accommodate HPC and big data – isn’t a new topic. Recently, big data’s close cousin, machine learning, has become part of the discussion. Moreover, the accompanying rise of cyberinfrastructure as a dominant force in science computing has complicated convergence efforts.

The central premise of this study is that a ‘data-driven’ upheaval is exacerbating divisions – technical, cultural, political, economic – in the cyberecosystem of science. The report tackles in some depth a narrower slice of the problem. Big data, say the authors, has caused or worsened two ‘paradigm splits’: one between traditional HPC and high-end data analysis (HDA), and another between the ‘stateless networks’ and ‘stateful services’ provided by end systems. The report lays out a roadmap for mending these fissures.


This snippet from the report’s executive summary does a nice job of summing up the challenge:

“Looking toward the future of cyberinfrastructure for science and engineering through the lens of these two bifurcations made it clear to the BDEC community that, in the era of Big Data, the most critical problems involve the logistics of wide-area, multistage workflows—the diverse patterns of when, where, and how data is to be produced, transformed, shared, and analyzed. Consequently, the challenges involved in codesigning software infrastructure for science have to be reframed to fully take account of the diversity of workflow patterns that different application communities want to create. For the HPC community, all the imposing design and development issues of creating an exascale-capable software stack remain; but the supercomputers that need this stack must now be viewed as the nodes (perhaps the most important nodes) in the very large network of computing resources required to process and explore rivers of data flooding in from multiple sources.”
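
To make the “multistage workflow” idea concrete, here is a toy sketch of a produce-transform-analyze pipeline in which data is reduced near its source before a centralized stage aggregates it. The stage names, data shapes, and reduction rule are assumptions made purely for illustration and are not taken from the report.

    # Toy sketch of a wide-area, multistage workflow: data is produced at an
    # edge source, reduced/transformed near the source, then analyzed at a
    # centralized facility. All names and shapes are illustrative assumptions.
    import numpy as np

    def produce(n_batches=5, batch_size=10_000, rng=np.random.default_rng(1)):
        """Edge stage: stream raw instrument readings in batches."""
        for _ in range(n_batches):
            yield rng.normal(size=batch_size)

    def transform(batches, keep=100):
        """Near-source stage: reduce each raw batch before shipping it onward."""
        for batch in batches:
            yield np.sort(batch)[-keep:]          # e.g., keep only the strongest signals

    def analyze(batches):
        """Centralized (HPC) stage: aggregate the reduced batches."""
        reduced = np.concatenate(list(batches))
        return reduced.mean(), reduced.max()

    mean_top, max_top = analyze(transform(produce()))
    print(f"mean of retained signals: {mean_top:.3f}, max: {max_top:.3f}")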

There’s a lot to digest here, including a fair amount of technical guidance. Issued at the end of 2017, the report is the result of workshops held in the U.S. (2013), Japan (2014), Spain (2015), Germany (2016), and China (2017); it grew out of prior efforts of the International Exascale Software Project (IESP). Descriptions and results of the five workshops (agendas, white papers, presentations, attendee lists) are available at the BDEC site (http://www.exascale.org/bdec/).

Jack Dongarra

Commenting on the work, Dongarra said, “Computing is at a profound inflection point, economically and technically. The end of Dennard scaling and its implications for continuing semiconductor-design advances, the shift to mobile and cloud computing, the explosive growth of scientific, business, government, and consumer data and opportunities for data analytics and machine learning, and the continuing need for more-powerful computing systems to advance science and engineering are the context for the debate over the future of exascale computing and big data analysis.”

The broad hope is that the ideas presented in the report will guide community efforts. Dongarra emphasized, “High-end data analytics (big data) and high-end computing (exascale) are both essential elements of an integrated computing research-and-development agenda; neither should be sacrificed or minimized to advance the other.” Shown below are typical differences in the BDEC software ecosystem.

Typical differences in the BDEC software ecosystem (HPC versus HDA).

There’s too much in the report to adequately cover here. Here are the report’s summary recommendations:

“Our major, global recommendation is to address the basic problem of the two paradigm splits: the HPC/HDA software ecosystem split and the wide area data logistics split. For this to be achieved, there is a need for new standards that will govern the interoperability between data and compute, based on a new, common and open Distributed Services Platform (DSP), that offers programmable access to shared processing, storage and communication resources, and that can serve as a universal foundation for the component interoperability that novel services and applications will require.

“We make five recommendations for decentralized edge and peripheral ecosystems:

  • Converge on a new hourglass architecture for a Common Distributed Service Platform (DSP).
  • Target workflow patterns for improved data logistics.
  • Design cloud stream processing capabilities for HPC.
  • Promote a scalable approach to Content Delivery/Distribution Networks.
  • Develop software libraries for common intermediate processing tasks.

“We make five actionable conclusions for centralized facilities:

  • Energy is an overarching challenge for sustainability.
  • Data reduction is a fundamental pattern.
  • Radically improved resource management is required.
  • Both centralized and decentralized systems share many common software challenges and opportunities:
      (a) Leverage HPC math libraries for HDA.
      (b) More efforts for numerical library standards.
      (c) New standards for shared memory parallel processing.
      (d) Interoperability between programming models and data formats.
  • Machine learning is becoming an important component of scientific workloads, and HPC architectures must be adapted to accommodate this evolution.”

Link to BDEC Report: http://www.exascale.org/bdec/

The post New Blueprint for Converging HPC, Big Data appeared first on HPCwire.
