HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nvidia Focuses Its Cloud Containers on HPC Applications

Tue, 11/14/2017 - 21:30

Having migrated its top-of-the-line datacenter GPU to the largest cloud vendors, Nvidia is now touting its Volta architecture for a range of scientific computing tasks as well as AI development, adding a container registry designed to deploy its GPU cloud for everything from visualization to drug discovery.

Nvidia CEO Jensen Huang, SC17 in Denver

In its drive to expand access to its Volta architecture, Nvidia announced the availability of its Tesla V100 GPU on Microsoft Azure. Azure is the latest cloud service to join the chipmaker’s growing list of public and private cloud service providers, along with server makers. Most are offering the “GPU accelerated” services for AI development projects such as training deep learning models that require more processing cores and access to big data.

Moving beyond the AI market, Nvidia on Monday (Nov. 13) unveiled a container registry designed to ease deployment of HPC applications in the cloud. The container registry for scientific computing applications and visualization tools would connect researchers with the most widely used GPU-optimized HPC software, the company said during this week’s SC17 conference in Denver.

Last month, the company introduced deep learning applications and AI frameworks in its Nvidia GPU Cloud (NGC) container registry. The AI container registry was rolled out on Amazon Web Services’ Elastic Compute Cloud instances running on Tesla V100 GPUs.

The HPC application containers announced this week include a long list of third-party scientific applications. HPC visualization containers are available in beta on the GPU cloud.

As GPU processing moves wholesale to the cloud and datacenters, easing application deployment was the next logical step as Nvidia extends its reach beyond AI development to scientific computing. (The company notes that the 2017 Nobel Prize winners in chemistry and physics used its CUDA parallel computing platform and API model. Nvidia’s Volta architecture includes more than 5,000 CUDA cores.)

HPC containers are designed to package the libraries and dependencies needed to run scientific applications on top of container infrastructure such as Docker Engine. The cloud container registry for delivering HPC applications uses Nvidia’s Docker distribution to run visualizations and other tasks in GPU-accelerated clouds. The service is available now.
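
In practice, deploying one of these containers comes down to authenticating against the registry and launching the image through Nvidia's Docker runtime so the GPUs are visible inside. The sketch below uses the Docker SDK for Python; the image name and tag are illustrative placeholders rather than confirmed NGC catalog entries.

    # Minimal sketch of pulling and running a GPU-accelerated container from a
    # registry such as Nvidia's NGC (nvcr.io). Image name/tag are hypothetical.
    import docker

    client = docker.from_env()

    # NGC conventionally uses "$oauthtoken" as the username and an API key as
    # the password; substitute a real key before running.
    client.login(username="$oauthtoken", password="<NGC_API_KEY>",
                 registry="nvcr.io")

    # Launch with the Nvidia runtime (requires nvidia-docker on the host) so
    # the container can see the GPUs; here we just run nvidia-smi as a check.
    output = client.containers.run(
        "nvcr.io/hpc/example-app:latest",   # hypothetical image
        command="nvidia-smi",
        runtime="nvidia",
        remove=True,
    )
    print(output.decode())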

Underpinning these scientific workloads in the cloud is the Volta architecture, asserts Nvidia CEO Jensen Huang. “Volta has now enabled every researcher in the world to access…the most advanced high-performance computer in the world at the lowest possible price,” Huang claimed during SC17. “You can rent yourself a supercomputer for three dollars” per hour.

The other part of the GPU equation is the software stack and keeping it optimized. To that end, Nvidia has placed software components in the GPU cloud via its container registry. The containerized software stack can then be downloaded from Nvidia’s cloud and datacenter partners.

Emphasizing Nvidia’s drive to make GPU processing more accessible, Huang concluded: “In the final analysis, it’s got to be simple.”

Nvidia also took advantage of the SC17 launch pad to announce it is building a new supercomputer to enable high-performance workflows inside the company. SaturnV with Volta continues the tradition of the DGX-1 SaturnV that the company announced last year at SC16, but swaps out the Pascal-based P100s for Volta-based V100s. Nvidia is also greatly expanding the system: from 124 nodes to 660 nodes. Once complete in early 2018, it will offer 40 petaflops of peak double-precision floating point performance, Nvidia said. An early version of the system appeared on the 50th Top500 list (revealed Monday), delivering 1.07 Linpack petaflops on 30 DGX-1 nodes, sufficient for a 149th ranking. That system, installed at Nvidia headquarters in San Jose, Calif., also secured the fourth-highest spot on the Green500 listing.
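
Those headline numbers hang together arithmetically. As a rough plausibility check, assuming eight V100s per DGX-1 node and roughly 7.8 teraflops of peak FP64 per GPU (figures not stated in the article itself):

    # Back-of-the-envelope check of SaturnV's quoted 40 PF peak FP64.
    # Assumptions (not from the article): 8 V100 GPUs per DGX-1 node,
    # ~7.8 TFLOPS peak double precision per V100.
    gpus_per_node = 8
    fp64_tflops_per_gpu = 7.8
    nodes = 660

    peak_pflops = nodes * gpus_per_node * fp64_tflops_per_gpu / 1000.0
    print(f"Estimated peak: {peak_pflops:.1f} PFLOPS")  # ~41 PFLOPS, consistent with "40 petaflops"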

“This is one of the fastest and greenest supercomputers in the world and we use it for our high-performance computing software stack development,” said Huang.

“I believe this is the future of software development,” he continued. “Until now, most of our software engineers coded on their laptop, they compiled it and ran regression tests in the datacenter. Now we have to have our own supercomputing infrastructure. I believe every single industry, every single company will eventually have to have high performance computing infrastructures, opening up the opportunity for the HPC industry.”

–Tiffany Trader contributed to this report.

The post Nvidia Focuses Its Cloud Containers on HPC Applications appeared first on HPCwire.

SkyScale Announces Cloud Solutions Partnership with Rescale and Availability of NVIDIA’s Tesla V100 GPUs on its Cloud Platform

Tue, 11/14/2017 - 10:44

DENVER, Nov. 14, 2017 — SkyScale, a leader in accelerated cloud solutions, today simultaneously announced its partnership with Rescale, the foremost cloud simulation and HPC solution provider, and availability of NVIDIA Tesla V100 GPU accelerators on its GPU as a Service cloud platform. Enhancing SkyScale’s dedicated, cloud-based ultra-fast multi-GPU platforms for deep learning and HPC applications, the Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle the most difficult deep learning and HPC challenges.

SkyScale’s V100 offerings are dedicated bare-metal systems with four, eight, or sixteen V100 GPUs in a single node, which can also be clustered. Interconnect technologies include PCIe and NVIDIA NVLink™ high-speed interconnects, running up to 300 GB/s; five times the bandwidth of PCIe.

“There has been a lot of anticipation for the V100 processor, and justifiably so,” said Tim Miller, president of SkyScale. “The performance of the NVIDIA V100 with NVLink™ truly represents a leap forward over previous generations, particularly for AI and HPC. SkyScale is proud to be among the very first to offer such advanced GPU performance as a service in dedicated systems, which are generally faster and more secure than virtualized machines.”

Partnership with Rescale

“Customers can access our systems a couple of different ways,” remarked Jim Ison, V.P. of Sales for SkyScale. “They can rent our ever-expanding, fully-provisioned and dedicated systems for a week or longer through our website, www.SkyScale.com, and these same systems are now accessible by the hour through our on-demand partnership with Rescale (www.Rescale.com).”

“We are excited to have added SkyScale’s dedicated, ultra-fast V100 and P100 multi-GPU nodes to our ScaleX platform,” said Gabriel Broner, VP and GM of HPC at Rescale. “In turn, our partnership brings SkyScale the benefits of on-demand elasticity and access flexibility to their non-virtualized systems, along with industry-leading software applications for deep learning and scientific and engineering simulations.”

For more information on SkyScale’s GPU as a Service, please visit www.SkyScale.com or call (888). Visit SkyScale at SC17 in Denver, CO, from November 13th-16th in Booth #2049.

About SkyScale

SkyScale is a world-class provider of cloud-based, ultra-fast multi-GPU platforms for HPC and the fastest deep learning performance available as a service anywhere in the world, and hosts cutting-edge and highly secure private clouds: all in industry-leading datacenters featuring unmatched reliability and physical and cyber security. Related services include solution consulting, simulation and tuning. SkyScale is an NVIDIA Authorized Cloud Service Provider. Visit www.SkyScale.com.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit http://www.rescale.com.

Source: SkyScale

The post SkyScale Announces Cloud Solutions Partnership with Rescale and Availability of NVIDIA’s Tesla V100 GPUs on its Cloud Platform appeared first on HPCwire.

Nyriad Names HPC Systems as Japanese Distributor for GPU Storage Solution NSULATE at SC17

Tue, 11/14/2017 - 10:37

DENVER, Nov. 14, 2017 — Exascale computing company Nyriad today named HPC Systems as distribution partner for NSULATE to enable GPU-based storage solutions in Japan.

HPC Systems Inc. is a leading system integrator of High Performance Computing (HPC). The company has quickly established itself as a technology and performance leader and is now expanding its product offering to include GPU-accelerated storage solutions.

Nyriad’s NSULATE software enables GPUs to function as storage controllers, replacing RAID controllers and enabling fast, extremely resilient, low power HPC storage solutions.

HPC Systems President Teppei Ono said, “For years we have had to deal with the problems and limitations of RAID. Now we don’t have to make a tradeoff between performance, integrity, and cost. With NSULATE we can have all three, by swapping the RAID card for a GPU.”

NSULATE enables a GPU to accelerate storage processing, performing functions traditionally handled by CPUs and RAID controllers at high speed in GPU memory. It computes cryptographic checksums and high-parity erasure codes, enabling instant recovery from hundreds of simultaneous device failures, which is key for exascale storage.
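
NSULATE's actual GPU-side implementation is proprietary, but the underlying idea of pairing per-block checksums with parity that can rebuild lost data can be illustrated in a few lines. The sketch below uses a single XOR parity block, which tolerates only one device failure rather than the hundreds NSULATE targets; it is a conceptual illustration, not Nyriad's algorithm.

    # Illustrative only: per-block cryptographic checksums plus one XOR parity
    # block. Real high-parity erasure coding (many parity devices) is far more
    # involved; this just shows the recover-and-verify concept.
    import hashlib
    from functools import reduce

    def checksum(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Four equally sized "device" blocks plus one parity block.
    data_blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    checksums = [checksum(b) for b in data_blocks]
    parity = reduce(xor_blocks, data_blocks)

    # Simulate losing block 2, then rebuild it from the survivors plus parity.
    lost = 2
    survivors = [b for i, b in enumerate(data_blocks) if i != lost]
    rebuilt = reduce(xor_blocks, survivors, parity)

    assert rebuilt == data_blocks[lost]
    assert checksum(rebuilt) == checksums[lost]  # integrity verified
    print("Block", lost, "recovered and verified")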

Nyriad Chief Executive and Co-Founder Matthew Simmons stated, “We are thrilled to have HPC Systems as our distribution partner in Japan. They are respected leaders in the field, so it was a logical choice for us.”

About Nyriad

Nyriad is a New Zealand-based exascale computing company specialising in advanced data storage solutions for big data and high performance computing. Born out of its consulting work on the Square Kilometre Array Project, the company was forced to rethink the relationship between storage, processing and bandwidth to achieve a breakthrough in system stability and performance capable of processing and storing over 160Tb/s of radio antenna data in real time, within a power budget that would be impossible to meet with any modern IT solution.

About HPC Systems

HPC Systems Inc. is a leading system integrator of High Performance Computing (HPC) solutions. Since its inception in 2006, the company has quickly established itself as a technology and performance leader in the Japanese small-to-mid-range HPC market. The company plans for further growth and development in world-class HPC cloud solutions to support its customers’ research and technological development worldwide.

Source: Nyriad

The post Nyriad Names HPC Systems as Japanese Distributor for GPU Storage Solution NSULATE at SC17 appeared first on HPCwire.

Sandia, Boston University Win Award for Using Machine Learning to Detect Issues

Tue, 11/14/2017 - 10:32

ALBUQUERQUE, N.M., Nov. 14, 2017 — A team of computer scientists and engineers from Sandia National Laboratories and Boston University recently received a prestigious award at the International Supercomputing conference for their paper on automatically diagnosing problems in supercomputers.

The research, which is in the early stages, could lead to real-time diagnoses that would inform supercomputer operators of any problems and could even autonomously fix the issues, said Jim Brandt, a Sandia computer scientist and author on the paper.

Supercomputers are used for everything from forecasting the weather and cancer research to ensuring U.S. nuclear weapons are safe and reliable without underground testing. As supercomputers get more complex, more interconnected parts and processes can go wrong, said Brandt.

Physical parts can break, previous programs could leave “zombie processes” running that gum up the works, network traffic can cause a bottleneck or a computer code revision could cause issues. These kinds of problems can lead to programs not running to completion and ultimately wasted supercomputer time, Brandt added.

Selecting artificial anomalies and monitoring metrics

Brandt and Vitus Leung, another Sandia computer scientist and paper author, came up with a suite of issues they have encountered in their years of supercomputing experience. Together with researchers from Boston University, they wrote code to re-create the problems or anomalies. Then they ran a variety of programs with and without the anomaly codes on two supercomputers — one at Sandia and a public cloud system that Boston University helps operate.

While the programs were running, the researchers collected lots of data on the process. They monitored how much energy, processor power and memory was being used by each node. Monitoring more than 700 criteria each second with Sandia’s high-performance monitoring system uses less than 0.005 percent of the processing power of Sandia’s supercomputer. The cloud system monitored fewer criteria less frequently but still generated lots of data.

With the vast amounts of monitoring data that can be collected from current supercomputers, it’s hard for a person to look at it and pinpoint the warning signs of a particular issue. However, this is exactly where machine learning excels, said Leung.

Training a supercomputer to diagnose itself

Machine learning is a broad collection of computer algorithms that can find patterns without being explicitly programmed on the important features. The team trained several machine learning algorithms to detect anomalies by comparing data from normal program runs and those with anomalies.

Then they tested the trained algorithms to determine which technique was best at diagnosing the anomalies. One technique, called Random Forest, was particularly adept at analyzing vast quantities of monitoring data, deciding which metrics were important, then determining if the supercomputer was being affected by an anomaly.

To speed up the analysis process, the team calculated various statistics for each metric. Statistical values, such as the average, fifth percentile and 95th percentile, as well as more complex measures of noisiness, trends over time and symmetry, help suggest abnormal behavior and thus potential warning signs. Calculating these values doesn’t take much computer power and they helped streamline the rest of the analysis.
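
The paper's exact metrics and model settings aren't given here, but the general recipe described above (summarize each run's telemetry into per-metric statistics, then train a random forest to separate normal from anomalous runs) can be sketched as follows, with made-up metric counts and synthetic data purely for illustration:

    # Sketch of the described pipeline: collapse each run's telemetry into
    # per-metric statistics, then train a random forest classifier to label
    # runs as normal vs. anomalous. Data and dimensions are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def summarize(run_telemetry: np.ndarray) -> np.ndarray:
        """Collapse a (timesteps x metrics) array into per-metric statistics."""
        return np.concatenate([
            run_telemetry.mean(axis=0),
            np.percentile(run_telemetry, 5, axis=0),
            np.percentile(run_telemetry, 95, axis=0),
            run_telemetry.std(axis=0),      # a simple "noisiness" measure
        ])

    # Synthetic training set: 200 runs, 600 samples of 10 metrics each;
    # every fourth run is "anomalous" and gets a shifted distribution.
    features, labels = [], []
    for i in range(200):
        run = rng.normal(size=(600, 10))
        anomalous = (i % 4 == 0)
        if anomalous:
            run[:, :3] += 2.0
        features.append(summarize(run))
        labels.append(int(anomalous))

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.array(features), np.array(labels))
    print("Training accuracy:", clf.score(np.array(features), np.array(labels)))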

Once the machine learning algorithm is trained, it uses less than 1 percent of the system’s processing power to analyze the data and detect issues.

“I am not an expert in machine learning, I’m just using it as a tool. I’m more interested in figuring out how to take monitoring data to detect problems with the machine. I hope to collaborate with some machine learning experts here at Sandia as we continue to work on this problem,” said Leung.

Leung said the team is continuing this work with more artificial anomalies and more useful programs. Other future work includes validating the diagnostic techniques on real anomalies discovered during normal runs, said Brandt.

Due to the low computational cost of running the machine learning algorithm, these diagnostics could be used in real time, which also will need to be tested. Brandt hopes that someday these diagnostics could inform users and system operations staff of anomalies as they occur, or even autonomously take action to fix or work around the issue.

This work was funded by the National Nuclear Security Administration’s Advanced Simulation and Computing program and the Department of Energy’s Scientific Discovery through Advanced Computing program.

About Sandia National Laboratories

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

The post Sandia, Boston University Win Award for Using Machine Learning to Detect Issues appeared first on HPCwire.

Aldec to Showcase FPGA-Based Algorithm Accelerators at SC17

Tue, 11/14/2017 - 10:14

DENVER, Nov. 14, 2017 — Aldec Inc. unveils its FPGA-based accelerators, HES-HPC, at SC17, held November 12-16, 2017, in Denver, Colorado. Utilizing the latest and most powerful Xilinx UltraScale and UltraScale+ devices with the highest DSP slice counts, the accelerators are ideal for on-premises environments running low-latency applications and algorithms used in Genome Short Reads Alignment, Cloud Computing, Data Mining, High-Frequency Trading and Encryption Code Breaking.

The scalable HES-HPC platforms support PCIe x8 / PCIe x16, include on-board memories, and are available in multiple configurations, including:

  • 1U or 2U rack built with 7x low profile, PCIe x16 cards with UltraScale+ XCVU9P (~2.5M Logic Cells, 6,840 DSPs), 3x QDR-II+ 144Mb memory, 2x QSFP28 connectors up to 131 Gb/s
  • Desktop System with 1x UltraScale Kintex XCKU115 (~1.5M Logic Cells, 5,520 DSPs) connected to 1x Zynq XC7Z100 as host,  2x 16GB DDR4, 4x 576Mb RLDRAM-3, 2x QSFP+ connectors, USB3.0
  • Single Board with 1x UltraScale Virtex XCVU440 connected to  1x Zynq XC7Z100 as host, 2x 16GB DDR4 and 2x 576Mb RLDRAM-3 memories, 4x FMC and 1x QSFP+ connectors.

“In order to ease the host-to-FPGA transmission, we provide the PCIe-to-AXI hardware infrastructure and C/C++ software API with link speed of 2+GB/s, and it can be used when writing high-level applications for Linux or Windows,” said Louie De Luna, Director of Marketing. “We also provide the Hes.Asic.Proto software package with necessary drivers and utilities for programming and communication with the board.”

Aldec Demonstrations @ Booth#254:

  • Data Encryption Standard (DES) Code Breaker – a reference design for a DES Brute Force Code Breaker with a total key space of 2^56 combinations, a max input clock frequency of 175MHz and 175M combinations/sec per DES instance. The total hack time for 6144 DES instances is ~20h using HES-HPC with 6 Xilinx UltraScale US-440 chips (a quick arithmetic check of this figure follows the list).
  • Vibe Motion Detection – a reference design based on ViBe Background Subtraction algorithm and HES-HPC FPGA-based Accelerator running @1920×1080, 30fps. The image processing background subtraction techniques are utilized to transform and detect moving objects in recorded video. HES-HPC platform provides performance enhancement by utilizing extreme parallel processing capabilities of FPGAs to execute computationally intensive image transformations.
  • Genome Short Reads Alignment – a high-performance reconfigurable FPGA accelerator engine for Renelife ReneGENE, offered on HES-HPC for accurate and ultra-fast big data mapping and alignment of DNA short reads from Next Generation Sequencing (NGS) platforms. AccuRA demonstrates a speedup of more than 1,500x compared to standard heuristic aligners on the market, such as BFAST running on an 8-core 3.5 GHz AMD FX processor with 16 GB of system memory.
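
As a quick sanity check on the ~20-hour figure quoted for the DES demo, using only the numbers Aldec gives above:

    # Exhaustive DES key search time from the quoted figures.
    total_keys = 2 ** 56
    instances = 6144
    keys_per_sec_per_instance = 175e6

    seconds = total_keys / (instances * keys_per_sec_per_instance)
    print(f"{seconds / 3600:.1f} hours")  # ~18.6 hours, in line with the quoted ~20h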

About Aldec

Aldec Inc., headquartered in Henderson, Nevada, is an industry leader in Electronic Design Verification and offers a patented technology suite including: RTL Design, RTL Simulators, Hardware-Assisted Verification, SoC and ASIC Prototyping, Design Rule Checking, IP Cores, Embedded, Requirements Lifecycle Management, DO-254 Functional Verification and Military/Aerospace solutions. www.aldec.com

Source: Aldec

The post Aldec to Showcase FPGA-Based Algorithm Accelerators at SC17 appeared first on HPCwire.

Colorado Engineering, Inc. Unveils First Dual Intel Stratix 10 FPGA at SC17

Tue, 11/14/2017 - 09:31

COLORADO SPRINGS, Colo., Nov. 14, 2017 — With a legacy of delivering leading-edge custom engineering innovations for industry and military applications, Colorado Engineering, Inc. (CEI) unveiled at Supercomputing 2017 (SC17) in Denver, CO (Booth 1976) the industry’s fastest FPGA accelerator card—the WARP II eXtreme High Performance Compute Node.

With its dual Intel Stratix 10 FPGAs, the WARP II PCIe card delivers up to 136GB DDR4 per FPGA, 100GbE, and programmable OpenCL support allowing it to address today’s most challenging data-intense computing problems found in high-density datacenter and cloud service applications.

A Powerful Real-World Solution 

“WARP II represents the most advanced off-the-shelf, PCIe, FPGA compute acceleration card in production,” says Larry Scally, PhD., President and Chief Technology Officer for CEI. “It delivers the power required to turn big data into actionable intelligence. High performance computing applications like machine learning and cognitive computing can all benefit from the performance gains our WARP II delivers.”

Performance Features Abound 

In addition to the dual implementation of Intel’s Stratix® 10 FPGAs, the WARP II supports up to 136GB DDR4 per FPGA and 20 TFLOPS of peak performance (10 TFLOPS per FPGA), making it the most efficient, high performance acceleration card in CEI’s long history.

The WARP II features a myriad of technological advances, including:

  •     4x RDIMM – 264GB Max.
  •     QSFP+ 40/100GbE
  •     PEX PCIe x16 Gen 3
  •     139GB/s FPGA-to-FPGA
  •     Advanced SolidWorks Heatsink Design
  •     8GB Discrete DDR4 RAM
  •     Intel Max 10 FPGA
  •     Freescale K61 Microcontroller
  •     GPU-sized PCIe Form Factor

Designed with a Holistic Approach to Engineering
CEI has long embraced a philosophy of cross-training their engineers. In this way engineers do not simply focus on digital design, RF design, software or firmware design—they focus on the entire process and deliverable.

“Our engineers are involved in all disciplines of engineering,” summarizes Scally. “As a result, the way they treat cost, size, weight, power and performance proves far superior to a product developed by someone with, say, a software emphasis, who will produce a product that is a software solution. We don’t have that problem. Our engineers optimize design by taking this unique holistic approach to product development. The results speak for themselves!”

CEI’s WARP II is currently available and can be ordered directly from CEI at (719) 388-8582 or sales(at)coloradoengineering(dot)com.

About Colorado Engineering, Inc. 

Founded in 2003 with engineering and production facilities located in Colorado Springs, Colorado, Colorado Engineering, Inc. (CEI) supplies Commercial-Off-The-Shelf (COTS) high performance computing hardware and software, as well as tailored solutions, working directly with government agencies and for commercial prime contractors, offering quick turn, innovative solutions with lower cost and higher quality while minimizing risk. CEI leads or partners on new radar systems, software and hardware projects, including leading edge efforts in MOSA applications, reconfigurable RF, THz EM propagation modeling, and associated remote sensing system development.

Colorado Engineering Inc. is certified as a women’s business enterprise by the Women’s Business Enterprise National Council (WBENC), the nation’s largest third-party certifier of the businesses owned and operated by women in the U.S.

A Platinum member of Intel’s FPGA Design Solutions Network, CEI can deliver custom hardware, software, sensor, or mechanical design support for the most demanding computing challenges.

Source: Colorado Engineering, Inc.

The post Colorado Engineering, Inc. Unveils First Dual Intel Stratix 10 FPGA at SC17 appeared first on HPCwire.

One Stop Systems Announces Composable Infrastructure Solutions at SC17

Tue, 11/14/2017 - 09:06

DENVER, Nov. 14, 2017 — One Stop Systems (OSS), a leading provider of ultra-dense high performance computing (HPC) systems for a multitude of HPC applications, will exhibit several HPC composable infrastructure solutions with partners at SC17 today. The solutions are ideal for data scientists requiring flexible HPC infrastructure across multiple nodes. By adding NVIDIA GPU and NVMe expansion, customers can add unlimited flexibility to their HPC architecture by decoupling the latest innovations in CPU capabilities, NVIDIA GPU performance and NVMe storage into a system called “composable HPC infrastructure.”

“OSS continues to provide the newest solutions to our customers and composable infrastructure is the latest and greatest,” said Steve Cooper, CEO of OSS. “Composable infrastructure using expansion systems allows large numbers of NVIDIA GPUs on the same PCIe or network fabric for use by any node in the datacenter. This flexibility is invaluable for AI, deep learning, RTM, Monte Carlo and image processing applications that benefit from peer-to-peer communication with moderate CPU interaction. Servers, GPUs and storage upgrade on different schedules from the various vendors so composable infrastructure decouples HPC components allowing upgrading at different times, spreading the capital expenditures over many fiscal periods.”

“OSS recently announced new systems that use NVIDIA Tesla V100 GPUs, the most advanced data center GPUs ever built to accelerate AI, HPC, and graphics,” said Paresh Kharya, Group Product Marketing Manager at NVIDIA. “One Stop Systems’ customers can now harness the power of our Volta architecture in their composable infrastructure systems.”

Composable infrastructure allows customers to utilize any number of CPU nodes to dynamically map the optimum number of NVIDIA® Tesla® V100 GPU accelerators and NVMe storage resources to each node required to complete a specific task. When the task completes, the resources return to the cluster pool so they can be mapped to the next set of nodes to run the next task. The composable infrastructure demos in the OSS booth utilize One Stop Systems expansion hardware and composable infrastructure software solutions from Liqid and Dolphin to provide the dynamic reallocation of GPU and NVMe resources.
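
Liqid and Dolphin each expose their own management interfaces; purely as a conceptual illustration of the compose-and-release cycle described above (not any vendor's actual API), attaching pooled devices to a node for the duration of a task might look like this:

    # Conceptual sketch of composable infrastructure: GPUs and NVMe drives live
    # in a shared pool, are attached to a node for one task, then returned.
    # This is not Liqid's or Dolphin's API, only the general idea.
    from contextlib import contextmanager

    class ResourcePool:
        def __init__(self, gpus, nvme_drives):
            self.free_gpus = list(gpus)
            self.free_nvme = list(nvme_drives)

        @contextmanager
        def compose(self, node, n_gpus, n_nvme):
            gpus = [self.free_gpus.pop() for _ in range(n_gpus)]
            nvme = [self.free_nvme.pop() for _ in range(n_nvme)]
            print(f"{node}: attached {gpus} and {nvme}")
            try:
                yield gpus, nvme
            finally:
                # Task finished: resources go back to the cluster pool.
                self.free_gpus.extend(gpus)
                self.free_nvme.extend(nvme)
                print(f"{node}: released resources back to the pool")

    pool = ResourcePool(gpus=[f"gpu{i}" for i in range(16)],
                        nvme_drives=[f"nvme{i}" for i in range(8)])

    with pool.compose("node-01", n_gpus=8, n_nvme=2):
        pass  # run the GPU/NVMe-heavy task here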

“We’re excited to be working with One Stop Systems’ expansion technology because it provides industry-leading GPU and NVMe hardware density, increasing resource availability across the data center,” said Jay Breakstone, CEO of Liqid. “Liqid’s technology platform allows users to manage, scale out, configure and even automate physical, bare-metal server systems. With the ability to treat GPUs as a disaggregated, shared resource for the first time, scaled via OSS expansion systems, composable solutions from Liqid deliver the infrastructure to meet today’s most taxing HPC challenges, such as peer-to-peer transfers and memory access for AI and machine learning.”

“Dolphin is pleased to partner with OSS to provide customers with the highest performance hardware and software,” said Hugo Kohmann, CEO of Dolphin. “The Dolphin eXpressWare SmartIO software offers a flexible way to enable PCIe IO devices such as NVMe, FPGAs and GPUs to be accessed within a PCIe Network. Devices can be borrowed over the PCIe network without any software overhead at the performance of PCI Express.”

Composable infrastructure is also available as a cloud solution. Data scientists can rent the latest technology composable infrastructure systems and software using operational expenditure budgets rather than capital equipment budgets. One Stop Systems is partnering with SkyScale to provide its composable infrastructure solutions in the cloud.

“Utilizing One Stop Systems hardware, SkyScale offers flexible composable infrastructure solutions in the Cloud,” said Tim Miller, President of SkyScale. “We already offer users cutting edge technology for GPU computing with unprecedented customization. By adding composable infrastructure solutions, we’re increasing the level of customization for our customers.”

Visitors to SC17 in Denver, Colorado can view these composable infrastructure demos in One Stop Systems’ booth #2049. Customers can order the hardware utilized in these demos from One Stop Systems’ highly-trained sales engineers.

About One Stop Systems

One Stop Systems designs and manufactures ultra-dense high performance computing (HPC) systems for deep learning, oil and gas exploration, financial trading, media and entertainment, defense and traditional HPC applications requiring the fastest and most efficient data processing. By utilizing the power of the latest GPU accelerators and NVMe flash cards, our systems stay on the cutting edge of the latest technologies. We have a reputation as innovators in hyperconverged and composable infrastructure solutions using the latest technology and design equipment to operate with the highest efficiency. Now OSS offers these exceptional systems to customers for lease or purchase. OSS continuously works to meet our customers’ greater needs.

Source: One Stop Systems

The post One Stop Systems Announces Composable Infrastructure Solutions at SC17 appeared first on HPCwire.

Human Genetics Centre at University of Oxford Deploys Univa Solutions

Tue, 11/14/2017 - 08:51

CHICAGO, Nov. 14, 2017 — Univa, a leading innovator of workload management products, today announced its Univa Grid Engine distributed resource management system is powering the Wellcome Centre for Human Genetics’ (WHG) high performance computing (HPC) environment.

WHG is a research institute within the Nuffield Department of Medicine at the University of Oxford. The Centre is an international leader in genetics, genomics, statistics and structural biology with more than 400 researchers and 70 administrative and support personnel. WHG’s mission is to advance the understanding of genetically-related conditions through a broad range of multi-disciplinary research.

To support its research community, the Centre operates a shared HPC cluster comprising over 4,000 InfiniBand-connected, high-memory compute cores and 4PB of high-performance, parallel storage running 250 applications. WHG’s previous open source scheduler lacked practical software support and did not address the increasing use of containerized machine learning applications. To plan for growth and accommodate mixed workload types (serial-batch, array, MPI, container, Spark) on the same shared cluster, the Centre evaluated a variety of open source and commercial options. The review committee selected Univa Grid Engine as the replacement, citing its modern scheduler, expert technical support and the minimal user re-training required.

“The conversion from the previous scheduler to Univa Grid Engine was virtually painless. Our users are happy that their hard-won knowledge continues to be relevant, significant scheduler bugs and vulnerabilities were fixed, and we also save on our own precious system administration time,” said Dr. Robert Esnouf, Head of Research Computing Core, Wellcome Centre for Human Genetics. “We can now plan for significant future growth with Univa as a key component of our infrastructure offering.”

The transition to Univa Grid Engine also provided WHG with new capabilities such as GPU-aware scheduling, DRMAA2 and container support, placing WHG in a position to embrace emerging research techniques and support a wider range of research. To learn more about how WHG expanded workloads for its life-science research, download the detailed case study.

About Wellcome Centre for Human Genetics

The Wellcome Centre for Human Genetics is a large interdisciplinary research centre comprising 400 scientists in 45 research groups, within the University of Oxford. It is one of the leading institutes, globally, in human genetics. Since its founding 21 years ago, the WHG has played a pioneering role in the progress and success of human disease genetics. The Centre’s focus is the development and implementation of novel approaches to exploit human genetics and uncover disease biology so as to improve healthcare.

About Univa Corporation

Univa is the largest independent provider of workload management products that optimize performance of applications, services and containers. Univa enables enterprises to modernize their scaled compute resources and run mixed workloads across on-premise, cloud, and hybrid infrastructures. Over 2 million computer cores are currently managed by Univa products in industries such as life sciences, manufacturing, oil and gas, transportation and financial services. Univa is headquartered in Chicago, with offices in Canada and Germany. For more information, please visit www.univa.com.

Source: Univa Corporation

The post Human Genetics Centre at University of Oxford Deploys Univa Solutions appeared first on HPCwire.

HPE Launches ARM-based Apollo System for HPC, AI

Tue, 11/14/2017 - 08:12

HPE doubled down on its memory-driven computing vision while expanding its processor portfolio with the announcement yesterday of the company’s first ARM-based high performance computing system (not counting the ARM-based Moonshot, which targeted the datacenter and didn’t pan out), along with other purpose-built solutions designed to help enterprises adopt HPC and AI applications.

HPE’s new Apollo 70 system uses Cavium’s 64-bit ARMv8-A ThunderX2 Server Processor and is designed for memory-intensive HPC workloads, with up to 33 percent more memory bandwidth than industry standard servers, according to Bill Mannel, HPE’s vice president and general manager, HPC and AI Segment Solutions. He said the Cavium chip reverses recent trends in which “pretty much all of the characteristics of memory – Gbytes per core, bytes per FLOP – have been declining.”

HPE describes the Apollo 70 as a dense, scalable platform that supports standard HPE provisioning, cluster management and performance software. It provides access to Red Hat Enterprise Linux, SUSE Linux Enterprise Server for ARM, and Mellanox InfiniBand and Ethernet fabrics.

“Enterprises are looking for ways to leverage HPC and AI in their business processes to gain faster insights and intelligence for competitive advantage,” said Steve Conway, SVP of Research, Hyperion Research. “HPE’s next-generation and new Apollo systems will facilitate that adoption by providing easier integration and management while delivering extreme density to reduce data center footprint and extend the range of HPC and AI use cases.”

HPE also introduced the Apollo 4510 Gen10 System, built for object storage and, according to HPE, delivering one of the highest storage capacities in any standard depth 4U server, with up to 600TB per system and 16 percent more cores than the previous generation of the product.

“The platform is ideal for customers looking to optimize the retention and placement of massive amounts of data, using object storage as an active archive with immediate access to structured and unstructured data,” HPE said. The system supports NVMe cards that can be used as a Scality RING metadata cache enabling 100 percent of bulk drive bays for object data storage.

HPE’s new Apollo 2000 Gen10 System is a multi-server platform with a “plug and play” system configuration designed for customers with limited data center space who need to support enterprise HPC and deep learning applications.

The system has a scale-out architecture and a shared infrastructure. It supports NVIDIA Tesla V100 GPU accelerators for deep learning training and inference, and uses Intelligent System Tuning to accelerate application performance. It also includes proprietary HPE firmware security features, such as the HPE iLO5 server management and Silicon Root of Trust.

HPE’s Bill Mannel

HPE’s StoreEver LTO-8 Tape is designed to provide an added layer of protection against cybercrime and ransomware attacks with offline and off-premises data protection. HPE called it a long-term retention solution that allows customers to offload primary storage while reducing the cost of storing data over time. It has up to 30 terabytes of capacity per tape cartridge, double the capacity in the same data center footprint as the previous generation of tape, making it suitable for HPC data centers, HPE said. The system stores up to 300 petabytes of data in the HPE T950 tape library and 1.6 exabytes of data in the HPE TFinity tape libraries. Full-height drives offer up to 360MB/s transfer rate speeds, a 20 percent increase from the LTO-7 generation, according to HPE.
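
Those per-cartridge and per-library figures imply the cartridge counts below; this is straightforward arithmetic from the numbers HPE quotes (the 30 TB per-cartridge figure is LTO-8's compressed capacity):

    # Cartridge counts implied by the quoted library capacities.
    cartridge_tb = 30            # LTO-8 compressed capacity per cartridge
    t950_pb = 300
    tfinity_eb = 1.6

    print(t950_pb * 1000 / cartridge_tb)          # ~10,000 cartridges in a T950
    print(tfinity_eb * 1_000_000 / cartridge_tb)  # ~53,000 cartridges in a TFinity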

According to Hyperion Research (formerly a group within industry analyst firm IDC), HPE is the HPC market leader, with 36.4 percent market share and a 4.4 percent share gain in CYQ2. The company, along with other systems providers, has attempted to expand its market reach by offering systems designed to accelerate adoption of advanced-scale and HPC-class technologies in the enterprise. But Mannel admitted that these attempts, by HPE and its competitors, have met with mixed results.

The problem, he said, is in the nature and accessibility of the lower end of the HPC market, for which pre-packaged solutions are best suited. The upper portions of the HPC market are organizations with advanced computing and sophisticated staff resources, some of which even run their own HPC stacks and cluster managers. The middle market tends to buy packaged solutions on an a la carte basis, Mannel said, tying the elements together themselves into an integrated HPC capability.

He said it is the lower portion of the market – start-ups or companies with 50-100 employees that are new to HPC and to AI – that has the strongest need for help with pre-built solutions.

“But can I say any of us have been successful at that?” Mannel said. “I’d say not as much as we’d like to be… I think all the companies, including ourselves, have a challenge getting to those customers. The format is right, they want these kinds of turnkey solutions. But how we get to them is a challenge.”

HPE said the Apollo 2000 and Apollo 4510 are available today. The StoreEver LTO-8 Tape will be available next month and the Apollo 70 in 2018.

The post HPE Launches ARM-based Apollo System for HPC, AI appeared first on HPCwire.

ESnet Renews, Upgrades Transatlantic Network Connections

Tue, 11/14/2017 - 07:49

Nov. 14, 2017 — Three years after ESnet first deployed its own transatlantic networking connection, the project is now being upgraded to four 100 gigabits-per-second links. These links give researchers at America’s national laboratories and universities ultra-fast access to scientific data from the Large Hadron Collider (LHC) and other research sites in Europe.

The original configuration that went into service in December 2014 consisted of three 100 Gbps and one 40 Gbps links. Since December 2014, the LHC traffic being carried by ESnet alone has grown nearly 1600%, from 1.7 Petabytes per month in January 2015, to nearly 30 Petabytes per month in August 2017.

The four new connections link peering points in New York City and London, Boston and Amsterdam, New York and London, and Washington, D.C. and CERN in Switzerland. The contracts are with three different telecom carriers.

“Our initial approach was to build in redundancy in terms of both infrastructure and vendors and the past three years proved the validity of that idea,” said ESnet Director Inder Monga. “So, we stuck with those design principles while upgrading the fourth link to 100G.”

Overall goals of the new agreements accomplished:

  • Increase in overall capacity to meet projected demand
  • Reduction in the overall cost
  • Increase in the diversity of the cable systems providing ESnet circuits, and
  • Maintain as much R&E network community transatlantic cable diversity as possible, including that of the Advanced North Atlantic Collaboration.

Another new component is a collaboration with Indiana University funded by the National Science Foundation with its Networks for European, American and African Research (NEAAR) award within the International Research Network Connections (IRNC) program. The goal of NEAAR is to make science data from Africa, such as that collected by the Square Kilometer Array, and Europe, like data from CERN’s Large Hadron Collider, available to a broader research community.

With the upgrade, the total transatlantic capacity for Research and Education networks  is now 800 Gbps, continuing the close collaboration between the seven partners providing transatlantic connectivity under the broader umbrella of the Global Network Architecture Initiative (GNA).

Source: ESnet

The post ESnet Renews, Upgrades Transatlantic Network Connections appeared first on HPCwire.

CoolIT Systems Announces Liquid Cooled Intel Buchanan Pass Server

Tue, 11/14/2017 - 07:16

CALGARY, Alberta, Nov. 14, 2017 — CoolIT Systems (CoolIT), a global leader in energy efficient liquid cooling solutions for HPC, Cloud and Hyperscale markets, today announced a liquid cooling solution to support Intel Compute Module HNS2600BPB (Buchanan Pass).

CoolIT has developed an innovative Rack DCLC coldplate solution, featuring its patented Split-Flow design, for the Intel Buchanan Pass. The liquid cooling solution for this 2U, four-node server manages heat from the dual Intel Xeon Scalable processors (Skylake), voltage regulators, and memory. A sample of the DCLC-enabled Buchanan Pass server will be showcased by CoolIT during SC17 from November 13-16 in Denver, Colorado (booth 1601).

“When Intel approached us to support this server with a high efficiency liquid cooling solution, our team was excited to accept the challenge,” says CoolIT Systems VP of Product Marketing, Patrick McGinn. “This tightly integrated solution creates a very dense, energy saving platform that showcases how liquid cooling can be implemented without sacrificing serviceability.”

CoolIT’s Rack DCLC technology has quickly become the leading choice for tier 1 server OEMs around the world. With the highest-performing coldplates on the market, HPC data centers are using CoolIT technology to enable higher-performance servers, increase rack density, and lower their total cost of ownership.

About CoolIT Systems

CoolIT Systems, Inc. is a world leader in energy efficient liquid cooling technology for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution. For more information about CoolIT Systems and its technology, visit www.coolitsystems.com.

About Supercomputing Conference (SC17)

Established in 1988, the annual SC conference continues to grow steadily in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall. SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. This diversity is one of the conference’s main strengths, making it a yearly “must attend” forum for stakeholders throughout the technical computing community. For more information, visit http://sc17.supercomputing.org/.

Source: CoolIT Systems

The post CoolIT Systems Announces Liquid Cooled Intel Buchanan Pass Server appeared first on HPCwire.

Netlist, Nyriad and TYAN to Accelerate the Adoption of NVDIMMs and GPUs for Storage

Mon, 11/13/2017 - 10:49

DENVER, Nov. 13, 2017 — Netlist, Inc. (NASDAQ: NLST), Nyriad and TYAN today announced a solution to support Netlist NVvault non-volatile memory for cache acceleration in Nyriad’s graphics processing unit (GPU)-accelerated storage platform, NSULATE on a TYAN Thunder server.

By adopting Netlist’s NVvault DDR4 NVDIMM-N non-volatile memory, Nyriad NSULATE-based storage systems can be configured to achieve millions of IOPS, sustaining high throughput while also enabling levels of storage resilience and integrity impossible with traditional central processing unit (CPU)- or redundant array of independent disks (RAID)-based solutions.

The Netlist and Nyriad technologies will be showcased on a TYAN Thunder HX FT77D-B7109 dual root complex 4U 8GPU server configured with Netlist’s NVvault at the SuperComputing 2017 Conference Exhibition taking place in Denver, CO from November 13-16. Additional information on the demonstration will be available at Netlist’s booth #2069 and TYAN’s booth #1269.

C.K. Hong, Netlist Chief Executive Officer, said, “NVvault, which is part of our storage-class memory family of solutions, is vital to Nyriad’s NSULATE accelerated and resilient storage-processing architecture. When combined with TYAN’s latest server targeted at big data and high-performance computing applications, we have created a game changing platform to drive improved IOPS (input/output operations per second), security, scale, performance and total storage array cost per terabyte. The solution enables NVvault to bring substantial performance benefits to end user applications such as big-data analytics by storing data in a way that is directly accessible to high performance GPUs.”

Nyriad Chief Executive Officer Matthew Simmons stated, “Processing and storing large volumes of data has become so I/O (input/output) intensive that traditional storage and network fabrics can’t cope with the volume of information that needs to be processed and stored in real-time. However, GPUs have become the dominant solution for modern high-performance computing, big-data and machine learning applications.  Our collaboration with Netlist and TYAN has broken this bottleneck and will enable major leaps in exascale storage performance and efficiency.”

Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit stated, “For many years, TYAN has met the ongoing challenge to provide efficient and powerful products that can support demanding applications in many areas, including the storage and high-performance computer space. Towards this goal, we are working with Netlist and Nyriad to define a new kind of computing solution to address vastly larger data sets and analytics, offering huge performance gains for customers worldwide.”

Netlist’s NVvault DDR4 is an NVDIMM-N that provides data acceleration and protection in a JEDEC standard DDR4 interface. It is designed to be integrated into industry standard server or storage solutions.  NVvault is a persistent memory technology that has been widely adopted by industry standard servers and storage systems.  By combining the high performance of DDR4 DRAM with the non-volatility of NAND Flash, NVvault improves the performance and data preservation found in storage virtualization, RAID, cache protection, and data logging applications requiring high-throughput.

Nyriad’s NSULATE solves these problems by replacing RAID controllers with GPUs for all Linux storage applications. This enables the GPUs to perform double duty as both I/O controllers and compute accelerators in the same integrated solution. The combination of Netlist NV Memory with NSULATE produces the best of both worlds, with the lowest-latency IOPS achievable by any storage solution combined with maximum data resilience, security, throughput and efficiency in the same architecture.

The first next-generation solutions based on the Netlist and Nyriad technology are expected to appear in the market from leading industry partners early next year.

About Netlist

Netlist is a leading provider of high-performance modular memory subsystems serving customers in diverse industries that require superior memory performance to empower critical business decisions. Flagship products NVvault and EXPRESSvault enable customers to accelerate data running through their servers and storage and reliably protect enterprise-level cache, metadata and log data by providing near instantaneous recovery in the event of a system failure or power outage. HybriDIMM, Netlist’s next-generation storage class memory product, addresses the growing need for real-time analytics in Big Data applications and in-memory databases. Netlist holds a portfolio of patents, many seminal, in the areas of hybrid memory, storage class memory, rank multiplication and load reduction. Netlist is part of the Russell Microcap Index.  To learn more, visit www.netlist.com.

About Nyriad

Nyriad is a New Zealand-based exascale computing company specializing in advanced data storage solutions for big data and high-performance computing. Born out of its consulting work on the Square Kilometre Array Project, the company was forced to rethink the relationship between storage, processing and bandwidth to achieve a breakthrough in system stability and performance capable of processing and storing over 160Tb/s of radio antenna data in real time, within a power budget that would be impossible to meet with any modern IT solution.

About TYAN

TYAN, as a leading server brand of MiTAC Computing Technology Corporation under the MiTAC Group (TSE:3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, System Integrators and Resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly-integrated, and reliable products for a wide range of applications such as server appliances and solutions for HPC, hyper-scale/data center, server storage and security appliance markets. For more information, visit MiTAC’s website at http://www.mic-holdings.com or TYAN’s website at http://www.tyan.com.

Source: Netlist

The post Netlist, Nyriad and TYAN to Accelerate the Adoption of NVDIMMs and GPUs for Storage appeared first on HPCwire.

Red Hat Introduces Arm Server Support for Red Hat Enterprise Linux

Mon, 11/13/2017 - 10:31

Nov. 13, 2017 — Today marks a milestone for Red Hat Enterprise Linux with the addition of a new architecture to its list of fully supported platforms. Red Hat Enterprise Linux for ARM is a part of its multi-architecture strategy and the culmination of a multi-year collaboration with the upstream community and its silicon and hardware partners.

The Arm ecosystem has emerged over the last several years with server-optimized SoC (system on chip) products that are designed for cloud and hyperscale, telco and edge computing, as well as high-performance computing applications. Arm SoC designs take advantage of advances in CPU technology, system-level hardware, and packaging to offer additional choices to customers looking for tightly integrated hardware solutions.

Red Hat took a pragmatic approach to Arm servers by helping to drive open standards and develop communities of customers, partners and a broad ecosystem. Its goal was to develop a single operating platform across multiple 64-bit ARMv8-A server-class SoCs from various suppliers, while using the same sources to build user functionality and a consistent feature set that enables customers to deploy across a range of server implementations while maintaining application compatibility.

In 2015, Red Hat introduced a Development Preview of the operating system to silicon partners, such as Cavium and Qualcomm, and OEM partners, like HPE, that designed and built systems based on a 64-bit Arm architecture. A great example of this collaboration was the advanced technology demonstration by HPE, Cavium, and Red Hat at the International Supercomputing conference in June 2017. That prototype solution became part of HPE’s Apollo 70 system, announced today. If you are attending SuperComputing17 this week, stop by Red Hat’s booth (#1763) to learn more about this new system.

Red Hat’s focus is to provide software support for multiple architectures powered by a single operating platform – Red Hat Enterprise Linux – driven by open innovation. Red Hat Enterprise Linux 7.4 for ARM, the first commercial release for this architecture, provides customers who have been planning to run their workloads on Arm, as well as software and hardware partners that require a stable operating environment for continued development, with a proven and more secure enterprise-grade platform. Red Hat plans to continue working with the ecosystem to expand the reach of Red Hat Enterprise Linux 7.4 for ARM.

In addition to Red Hat Enterprise Linux, Red Hat is also shipping Red Hat Software Collections 3, Red Hat Developer Toolset 7 and single-host KVM virtualization (as an unsupported Development Preview) for this architecture.

To learn more about Red Hat Enterprise Linux 7.4 for ARM, see the release notes at https://access.redhat.com/articles/3158541

Source: Red Hat

The post Red Hat Introduces Arm Server Support for Red Hat Enterprise Linux appeared first on HPCwire.

Penguin Computing Announces Intel Xeon Scalable Processor Availability for On-Demand HPC Cloud

Mon, 11/13/2017 - 09:04

FREMONT, Calif., Nov. 13, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced that more than 11,500 cores of the latest Intel Xeon Scalable processor (codenamed Skylake-SP) will be available in December 2017 on the Penguin Computing On-Demand (POD) HPC cloud. The new POD HPC cloud compute resources use Intel Xeon Gold 6148 processors and a cluster-wide Intel Omni-Path Architecture low-latency fabric, and are integrated with Penguin Computing Scyld Cloud Workstation for web-based, remote desktop access into the public HPC cloud service.

“As an HPC cloud provider, we know that it is critical to provide our customers with the latest processor technologies,” said Victor Gregorio, Senior Vice President, Cloud Services, Penguin Computing. “The latest Intel Xeon Scalable processor expansion will provide an ideal compute environment for MPI workloads that can leverage thousands of cores for computation. We have significant customer demand for POD HPC cloud in applicable areas like high-resolution weather forecasting and computational fluid dynamics, including solutions from software partners like ANSYS, Flow Science and CD-adapco.”

“Intel offers a balanced portfolio of HPC optimized components like the Intel Xeon Scalable processor and Intel Omni-Path Architecture, which provides the foundation for researchers and innovators to drive new discoveries and build new products faster than ever before,” said Trish Damkroger, Vice President of technical computing at Intel. “Penguin Computing On-Demand provides an easy and flexible path to access the latest technology so more users can realize the benefits of HPC.”

Scientists and engineers at every company are trying to innovate faster while holding down costs. Modeling and simulation are the backbone of these efforts. Customers may wish to run simulations at scale, or many different permutations simultaneously but may require more computing resources than are readily available in-house. The POD HPC cloud offers organizations a flexible, cost effective approach to meeting these requirements.

The Intel Xeon Scalable processor provides increased performance, a unified stack optimized for key workloads including data analytics, and integrated technologies including networking, acceleration and storage. The processor’s increased performance is realized through innovations including Intel® AVX-512 extensions that can deliver up to 2x the FLOPS per clock cycle, which is especially important for HPC, data analytics and hardware-enhanced security/cryptography workloads. Along with numerous acceleration refinements, the new processor offers integrated 100 Gb/s Intel® Omni-Path Architecture fabric options. With these improvements, the Intel Xeon Scalable Platinum 8180 processor yielded an increase of up to 8.2x more double precision GFLOPS/sec when compared to the Intel Xeon processor E5-2690 (codenamed Sandy Bridge) common in the server installed base, and a 2.27x increase over the previous-generation Intel Xeon processor E5-2699 v4 (codenamed Broadwell).
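
As a rough illustration of where such figures come from, peak double-precision throughput is commonly estimated as cores x clock x FLOPS per cycle. The per-cycle and clock values below are standard published specs for the parts named (two AVX-512 FMA units on Skylake-SP versus 256-bit AVX on Sandy Bridge), not numbers taken from the announcement, and they ignore AVX frequency offsets:

    # Rough peak FP64 estimate: cores x clock (GHz) x FLOPS per cycle.
    def peak_gflops(cores, ghz, flops_per_cycle):
        return cores * ghz * flops_per_cycle

    xeon_gold_6148 = peak_gflops(20, 2.4, 32)  # Skylake-SP, as deployed on POD
    xeon_e5_2690 = peak_gflops(8, 2.9, 8)      # Sandy Bridge comparison point

    print(f"Gold 6148 peak: {xeon_gold_6148:.0f} GFLOPS")
    print(f"E5-2690 peak:   {xeon_e5_2690:.0f} GFLOPS")
    print(f"Ratio: {xeon_gold_6148 / xeon_e5_2690:.1f}x")  # same ballpark as the quoted 8.2x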

The doubling of cores in the publicly available POD HPC cloud resources in 2017 was preceded by a 50 percent increase in capacity in 2016. As customer demand continues to increase, POD HPC cloud will continue to grow using the most current technologies to deliver the actionable insights that organizations require.

Visit Penguin Computing at Booth 1801 during SC17 in Denver.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high-performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service, Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets.

Source: Penguin Computing

The post Penguin Computing Announces Intel Xeon Scalable Processor Availability for On-Demand HPC Cloud appeared first on HPCwire.

Cavium and Leading Partners to Showcase ThunderX2 Arm-Based Server Platforms and FastLinQ Ethernet Adapters for HPC at SC17

Mon, 11/13/2017 - 08:52

SAN JOSE, Calif. and DENVER, Nov. 13, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking, will showcase various ThunderX2 Arm-based server platforms for high performance computing at this year’s Supercomputing (SC17) conference taking place in the Colorado Convention Center in Denver, Colorado from November 13th to 16th.

The ThunderX2 server SoC integrates fully out-of-order, high-performance custom cores and supports single- and dual-socket configurations. ThunderX2 is optimized for high computational performance, delivering outstanding memory bandwidth and memory capacity. The new line of ThunderX2 processors includes multiple SKUs for both scale-up and scale-out applications and is fully compliant with the Armv8-A architecture specifications as well as the Arm Server Base System Architecture and Arm Server Base Boot Requirements standards.

The ThunderX2 SoC family is supported by a comprehensive software ecosystem ranging from platform-level systems management and firmware to commercial operating systems, development environments and applications. Cavium has actively engaged in server industry standards groups such as UEFI and has delivered numerous reference platforms to a broad array of community and corporate partners. Cavium has also demonstrated its leadership in the open source software community, driving upstream kernel enablement and toolchain optimization, actively contributing to Linaro’s Enterprise and Networking Groups, investing in key Linux Foundation projects such as DPDK, OpenHPC, OPNFV and Xen, and sponsoring the FreeBSD Foundation’s Armv8 server implementation.

SC17 Show Highlights and Product Demonstrations

Cavium’s executive leaders and technology experts will be available to discuss the company’s ThunderX2 processor technology, platforms, roadmap and HPC target solutions while demonstrating a range of platforms and configurations. Many of Cavium’s key partners will also be present with demonstrations that include system implementation, system software, tools and applications.  In addition to the ThunderX2 based ODM and OEM platforms and Cavium’s FastLinQ Ethernet Adapters, the following product demonstrations will be on display on the show floor and at Cavium’s booth #349.

  • Cavium ThunderX2 – 64-bit Armv8-based SoC family that significantly increases performance, memory bandwidth and memory capacity. We will be demonstrating various applications running on ThunderX2 in both single and dual socket configurations. Cavium’s systems partners Bull/Atos (Booth #1925), Cray (Booth #625), Gigabyte (Booth #2151), HPE (Booth #925), and Penguin (Booth #1801) will be showcasing HPC platforms based on ThunderX2. Cavium’s software partners will be demonstrating a variety of software tools and applications optimized for ThunderX2. In addition, there will be a full rack of ThunderX2-based systems showcased in HPE’s Comanche collaboration booth #494.
  • Cavium FastLinQ – 10/25/40/50/100Gb Ethernet adapters that enable the highest level of application performance with the industry’s only Universal RDMA capability, supporting RoCE v1, RoCE v2 and iWARP concurrently. With the explosion of data, there is a critical need for fast and intelligent I/O throughout the data center. Cavium FastLinQ products enable machine learning, data analytics and NVMe over Fabrics storage while maximizing system performance.

The following additional presentations by Cavium will cover ThunderX2 updates, the Arm ecosystem and end-user optimizations focused on HPC.

  • On Monday, November 13, 2017 at 3:30 pm, Surya Hotha, Director of Product Marketing for Cavium’s Datacenter Processor Group, will present ThunderX2 in HPC applications at the third annual Arm SC HPC User Forum.
  • On Tuesday, November 14, 2017 at 10:30 am, Giri Chukkapalli, Distinguished Engineer, will present a ThunderX2 technology overview at the Red Hat Theater.
  • On Tuesday, November 14, 2017 at 2:30 pm, Varun Shah, Product Marketing Manager for Cavium’s Datacenter Processor Group, will present ThunderX2 advantages for the HPC market at the Exhibitor Forum.
  • On Tuesday, November 14, 2017 at 2:30 pm, Giri Chukkapalli, Distinguished Engineer, will present a ThunderX2 technology overview at the HPE Theater.
  • On Wednesday, November 15, 2017 at 2:30 pm, Cavium experts will present at the SUSE booth.

To schedule a meeting at SC17, please send an email to sales@cavium.com and enter SC17 Meeting Request in the subject line.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com.

Source: Cavium

The post Cavium and Leading Partners to Showcase ThunderX2 Arm-Based Server Platforms and FastLinQ Ethernet Adapters for HPC at SC17 appeared first on HPCwire.

Oak Ridge National Laboratory Acquires Atos Quantum Learning Machine

Mon, 11/13/2017 - 08:31

PARIS and IRVING, Tex., Nov. 13, 2017 — Atos, a global leader in digital transformation, today announces a new contract with US-based Oak Ridge National Laboratory (ORNL) for a 30-Qubit Atos Quantum Learning Machine (QLM), the world’s highest-performing quantum simulator.

Designed by the ‘Atos Quantum’ laboratory, the first major quantum industry program in Europe, the Atos QLM combines an ultra-compact machine with a universal programming language. The appliance enables researchers and engineers to develop and test today the quantum applications and algorithms of tomorrow.

As the Department of Energy’s largest multi-program science and energy laboratory, ORNL employs almost 5,000 people, including scientists and engineers in more than 100 disciplines. The Atos QLM-30 installed at ORNL, which processes up to 30 quantum bits (qubits) in memory, was operational within hours thanks to Atos’ fast-start process. Set up as a stand-alone appliance, the Atos QLM can run on premises, ensuring the confidentiality of clients’ research programs and data.
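For context, a full state-vector simulation of n qubits must hold 2^n complex amplitudes in memory, so the footprint doubles with every added qubit. The short sketch below is a back-of-the-envelope illustration of that scaling (not Atos software), assuming 16 bytes per double-precision complex amplitude; it shows why qubit counts in the 30-40 range mark the practical ceiling for in-memory simulation on a single machine.

    /* Illustrative sketch: memory needed for a full state-vector simulation.
     * An n-qubit state holds 2^n complex amplitudes at 16 bytes each
     * (double-precision real and imaginary parts).
     */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        for (int qubits = 25; qubits <= 40; qubits += 5) {
            uint64_t amplitudes = 1ULL << qubits;           /* 2^n amplitudes */
            double gib = amplitudes * 16.0 / (1ULL << 30);  /* bytes -> GiB   */
            printf("%2d qubits: %14llu amplitudes, %9.1f GiB\n",
                   qubits, (unsigned long long)amplitudes, gib);
        }
        return 0;  /* 30 qubits -> 16 GiB; 40 qubits -> 16,384 GiB */
    }

At 30 qubits the state vector already occupies 16 GiB, and every additional qubit doubles that, which is why appliance memory capacity, rather than raw compute, typically bounds how many qubits such a simulator can handle.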

Dr. Travis Humble, director of ORNL’s Quantum Computing Institute, says:

“At ORNL, we are preparing for the next-generation of high-performance computing by investigating unique technologies such as quantum computing.

We are researching how quantum computing can provide new methods for advancing scientific applications important to the Department of Energy.

Our researchers focus on applications in the physical sciences, such as chemistry, materials science, and biology, as well as the applied and data sciences. Numerical simulation helps to guide the development of these scientific applications and supports understanding of program correctness. The Atos Quantum Learning Machine provides a unique platform for testing new quantum programming ideas.”

Thierry Breton, CEO and Chairman of Atos, adds:

“We are glad to accompany Oak Ridge National Laboratory from the outset in what is likely to be the next major technological evolution. Thanks to our Atos Quantum Learning Machine, designed by our quantum lab supported by an internationally renowned Scientific Council, researchers from the Department of Energy will benefit from a simulation environment which will enable them to develop quantum algorithms to prepare for the major accelerations to come.”

In the coming years, quantum computing should be able to tackle the explosion of data brought about by Big Data and the Internet of Things. Thanks to targeted computing acceleration capabilities, based in particular on the exascale-class Bull Sequana supercomputer, quantum computing should also foster developments in deep learning, algorithms and artificial intelligence for domains as varied as pharmaceuticals and new materials. To move forward on these issues, Atos plans to set up several partnerships with research centers and universities around the world.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around € 12 billion. European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

The post Oak Ridge National Laboratory Acquires Atos Quantum Learning Machine appeared first on HPCwire.

DDN Announces New Solutions and Next Generation Monitoring Tools

Mon, 11/13/2017 - 08:26

DENVER and SANTA CLARA, Calif., Nov. 13, 2017 — DataDirect Networks (DDN) today announced new high-performance computing (HPC) storage solutions and capabilities, which it will feature this week at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC17) in Denver, Colorado. The new solutions include an entry-level burst buffer appliance (IME140) for cost-effective I/O acceleration, next-generation monitoring software (DDN Insight), and the company’s new declustered RAID solution (SFA Declustered RAID, or “DCR”) for increased data protection in massive storage pools. DDN also announced recent HPC customer wins in some of the world’s largest supercomputing centers.

“Modern HPC workflows require new levels of performance, flexibility and reliability to turn data and ideas into value,” said John Abbott, founder and research VP, 451 Research. “With its long-standing HPC storage heritage, DDN is strongly positioned with closely integrated components that can deliver extreme I/O performance, comprehensive monitoring at scale and new levels of data protection.”

New DDN Solutions and Next Generation Monitoring Tools

HPC and data-intensive enterprise environments are facing new pressures that stem from higher application diversity and sophistication along with steep growth in the volume of active datasets. These trends present a tough challenge to today’s filesystems in delivering the performance and economics to match business needs and compute capability. In addition, as rotational drive capacities grow, the risk of data loss increases due to longer drive rebuild times. DDN’s latest technology innovations deliver the enhanced performance, flexibility and management simplicity needed to solve these challenges and to accelerate large-scale workflows for greater operational efficiency and ROI.

  • DDN IME140 
    DDN has expanded its IME product line with the new IME140, which makes IME scale-out flash accessible to more organizations at lower cost. The IME140 supports extreme file performance in a small 1U flash data appliance. Each appliance can deliver more than 11GB/s write and 20GB/s read throughput and more than 1M file IOPS (read and write). Starting with a resilient solution as small as four units, the IME140 allows organizations to cost-effectively scale performance independent of the amount of capacity required. Traditional parallel file systems often cannot keep pace with the mixed I/O requirements of modern workloads and fail to deliver the potential of flash. The IME software implements a faster, leaner data path that delivers the low latencies and high throughput of NVMe to applications. The IME140 1U building block allows organizations to intelligently apply fast flash where it is needed, while maintaining cost-effective capacity on HDD within the file system.
  • DDN Insight 
    DDN Insight is DDN’s next-generation monitoring software.  Easy to deploy, DDN Insight allows customers to monitor the most challenging environments at scale, across multiple file systems and storage appliances. With DDN Insight customers can quickly identify and address hot spots, bottlenecks and misbehaving applications. Tightly integrated with SFA, EXAScaler, GRIDScaler and IME, DDN Insight delivers an intuitive way for customers to comprehensively monitor their complete DDN-based ecosystem.

Availability

The DDN IME140 will ship in volume in the first quarter of 2018. The SFA DCR is shipping today with the SFA14KX, and DDN Insight monitoring software is integrated and shipping today with DDN’s SFA, EXAScaler, GRIDScaler and IME solutions.

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Announces New Solutions and Next Generation Monitoring Tools appeared first on HPCwire.

CoolIT Systems Showcases Newest Datacenter Liquid Cooling Innovations for OEM and Enterprise Customers at SC17

Mon, 11/13/2017 - 08:04

DENVER, Nov. 13, 2017 — CoolIT Systems (CoolIT), a global leader in energy efficient liquid cooling solutions for HPC, Cloud and Hyperscale markets, returns to the highly-anticipated Supercomputing Conference 2017 (SC17) in Denver, Colorado for the sixth consecutive year with its latest Rack DCLC and Closed-Loop DCLC innovations for data centers and servers.

As the most popular integration partner for OEM server manufacturers, CoolIT will showcase liquid-enabled servers from Intel, Dell EMC, HPE, and Huawei. Combined with the broadest range of heat exchangers and supporting liquid infrastructure, CoolIT and its OEM partners are delivering the most complete and robust liquid cooling solutions to the HPC market. CoolIT OEM solutions being shown at booth 1601 include:

  • Intel Buchanan Pass – CoolIT is pleased to announce the liquid-enabled Buchanan Pass server with coldplates managing heat from the processor, voltage regulator, and memory.
  • Dell EMC PowerEdge C6420 – this liquid-enabled server will be on display at the Dell EMC booth (#913) within a fully populated rack, including stainless steel Manifold Modules and the best-in-class CHx80 Heat Exchange Module. With factory-installed liquid cooling, this server is purpose-built for high performance and hyperscale workloads.
  • HPE Apollo 2000 Gen9 System – optimized with Rack DCLC to significantly enhance overall data center performance and efficiency.
  • HPE Apollo Trade and Match Server Solution – optimized with Closed-Loop DCLC to increase density, decrease TCO and take advantage of enhanced performance to capitalize on High Frequency Trading trends.
  • STULZ Micro Data Center – combining CoolIT’s Rack DCLC with STULZ’ world-renowned mission critical air cooling products to create a single enclosed solution for managing high-density compute requirements.

Debuting at SC17 are two industry-first Heat Exchange Modules:

  • Rack DCLC AHx10, CoolIT’s new Liquid-to-Air CDU that delivers the benefits of rack level liquid cooling without the requirement for facility water. The standard 5U system manages 7kW at 25°C ambient air temperature and is expandable to 6U or 7U configurations (via the available expansion kit) to scale capacity up to 10kW of heat load.
  • Rack DCLC AHx2, CoolIT’s new Liquid-to-Air heat exchanger that allows OEMs and system integrators to thermally test DCLC-enabled servers during the factory burn-in process without liquid cooling infrastructure.

CoolIT will also showcase its Liquid-to-Liquid heat exchangers, including the stand-alone Rack DCLC CHx650 and the 4U Rack DCLC CHx80, which provides 80-100kW cooling capacity with N+1 reliability to manage the most challenging, high-density HPC racks.

For the first time, CoolIT will showcase its advanced Rack DCLC Command2 Control System for Heat Exchange Modules. Attendees can experience the plug-and-play functionality of Command2, including built-in autonomous controls and sophisticated safety features.

The latest CPU and GPU coldplate assemblies to support CoolIT’s passive Rack DCLC platform will be displayed, including the RX1 for Intel Xeon Scalable Processor Family (Skylake), the GP1 for NVIDIA Tesla P100 and GP2 for NVIDIA Tesla V100. Additionally, CoolIT’s full coverage MX1, MX2 and MX3 memory cooling coldplates will be featured.

CoolIT will highlight customer installations including:

  • Canadian Hydrogen Intensity Mapping Experiment (CHIME), the world’s largest low-frequency radio telescope. Deployed inside a containerized environment, CoolIT’s liquid cooled system consists of 256 rack-mounted General Technics GT0180 custom 4U servers housed in 26 racks managed by Rack DCLC CHx40 Heat Exchange Modules. With liquid cooled Intel Xeon Processor E5-2620 v3 CPUs and dual AMD FirePro S9300x2 GPUs, the system runs at significantly lower operating temperatures with improved performance and power efficiency.
  • Poznan Supercomputing and Networking Center (PSNC). The PSNC “Eagle” cluster uses 1,232 liquid cooled Huawei CH121 servers to increase density and reduce energy consumption. PSNC was able to deploy this new cluster within their existing data center without having to invest in additional air cooling infrastructure. The heated liquid is also being reused for local heating needs.

In partnership with STULZ, CoolIT will host an SC17 Exhibitor Forum presentation on high-density Chip-to-Atmosphere data center cooling solutions on Thursday, November 16 at 11:00 am. CoolIT encourages all attendees to join the Chip-to-Atmosphere: Providing Safe and Effective Cooling for High-Density, High-Performance Data Center Environments presentation in room 503-504. During the session, David Meadows, Director of Industry, Standards and Technology at STULZ Air Technology Systems, Inc., and Geoff Lyon, CEO and CTO at CoolIT Systems, will discuss the efficiency gains and performance enhancements made possible by liquid cooling solutions.

“Liquid cooling in the data center continues to grow in adoption and delivers more compelling ROIs. Our collaboration with OEM partners such as Dell EMC, HPE, Intel and STULZ provides further evidence that the future of the data center is destined for liquid cooling,” said Geoff Lyon, CEO and CTO at CoolIT Systems.

To learn more about how CoolIT’s products and solutions maximize data center performance and efficiency, visit booth 1601 at SC17. Executives and technical staff will be on site to guide attendees through new product showcases and live demos. To set up an appointment, contact Lauren Macready at marketing@coolitsystems.com.

About CoolIT Systems

CoolIT Systems, Inc. is a world leader in energy efficient liquid cooling technology for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution. For more information about CoolIT Systems and its technology, visit www.coolitsystems.com.

About Supercomputing Conference (SC17) 

Established in 1988, the annual SC conference continues to grow steadily in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall. SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. This diversity is one of the conference’s main strengths, making it a yearly “must attend” forum for stakeholders throughout the technical computing community. For more information, visit http://sc17.supercomputing.org/.

Source: CoolIT Systems

The post CoolIT Systems Showcases Newest Datacenter Liquid Cooling Innovations for OEM and Enterprise Customers at SC17 appeared first on HPCwire.

Flipping the Flops and Reading the Top500 Tea Leaves

Mon, 11/13/2017 - 07:58

The 50th edition of the Top500 list, the biannual publication of the world’s fastest supercomputers based on public Linpack benchmarking results, was released from SC17 in Denver, Colorado, this morning, and once again China is in the spotlight, having taken what is, on the surface at least, a definitive lead in multiple dimensions. China now claims the most systems, the biggest flops share and the number one machine for 10 consecutive lists. It’s a coup-level achievement to pull off in five years, disrupting 20 years of US dominance on the Top500, but reading deeper into the Top500 tea leaves reveals a more nuanced analysis that has as much to do with China’s benchmarking chops as it does its supercomputing flops.

PEZY-SC2 chip at ISC 2017

Before we thread that needle, let’s take a moment to review the movement at the top of the list. There are no new list entrants in the top ten and no change in the top three, but the upgraded ZettaScaler-2.2 “Gyoukou” stuck its landing for a fourth place ranking. Vaulting 65 spots, the supersized Gyoukou combines Xeons and PEZY-SC2 accelerators to achieve 19.14 petaflops, up from 1.68 petaflops on the previous list. The Top500 authors point out that the system’s 19,860,000 cores represent the highest level of concurrency ever recorded on the Top500 rankings.

Gyoukou also had the honor of being the fifth greenest supercomputer. Fellow ZettaScaler systems Shoubu system B, Suiren2 and Sakura placed first, second and third, respectively. Nvidia’s DGX SaturnV Volta system, installed at Nvidia headquarters in San Jose, Calif., was the fourth greenest supercomputer.

Figures: Nov. 2017 Green500 top five; Nov. 2017 Top500 top 10

Another upgraded machine, Trinity, moved up three positions to seventh place thanks to a recent infusion of Intel Knights Landing Xeon Phi processors that raised its Linpack score from 8.10 petaflops to 14.14 petaflops. Trinity is a Cray XC40 supercomputer operated by Los Alamos National Laboratory and Sandia National Laboratories.

China still has a firm grip on the top of the list with 93-petaflops Sunway TaihuLight and 33.86-petaflops Tianhe-2, the number one and number two systems respectively, which together provide the new list with 15 percent of its flops. Piz Daint, the Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS), remains the third fastest system with 19.6 petaflops. With Gyoukou in fourth position, the fastest US system, Titan, slips another notch to fifth place, leaving the United States without a claim to any of the top four rankings. Benchmarked at 17.59 petaflops, the five-year-old Cray XK7 system installed at the Department of Energy’s Oak Ridge National Laboratory captured the top spot for one list iteration before being knocked off its perch in June 2013 by China’s Tianhe-2. This is the first time in the list’s 24-year history that the US has not held at least a number four ranking.

Although China has enjoyed number one bragging rights for nearly four years, this is the first list that it also dominates by both system count and aggregate performance share. China has the most installed systems: 202, compared with 159 on the last list, while the US is in second place with 144, down from 169 six months ago (Japan ranks third with 35, followed by Germany with 20, France with 18, and the UK with 15). Aggregate performance is similar: China holds 35.3 percent of list flops, and the US is second with 29.8 percent (then Japan with 10.8 percent, Germany with 4.5 percent, the UK with 3.8 percent and France with 3.6 percent).

Based on these metrics, undoubtedly some publications will proclaim China’s supercomputing supremacy, but that would be premature. When China expanded its Top500 toehold by a factor of three at SC15, Intersect360 Research CEO Addison Snell remarked that it wasn’t so much that China discovered supercomputing as it discovered the Top500 list. This observation continues to hold water.

An examination of the new systems China is adding to the list indicates concerted efforts by Chinese vendors Inspur, Lenovo, Sugon and more recently Huawei to benchmark loosely coupled Web/cloud systems, which are not true HPC machines. To wit, 68 out of the 96 systems that China introduced onto the latest list utilize 10G networking and none are deployed at research sites. The benchmarking of Internet and telecom systems for Top500 glory is not new. You can see similar fingerprints on the list (current and historical) from HPE and IBM, but China has doubled down. For comparison’s sake, the US put 19 new systems on the list and eight of those rely on 10G networking.

Top500 development over time – countries by performance share. US is red; China is dark blue.

Not only has the Linpacking of non-HPC systems inflated China’s list presence, it’s changed the networking demographics as the number of Ethernet-based machines climbs steadily. As the Top500 authors note, Gigabit Ethernet now connects 228 systems with 204 systems using 10G interfaces. InfiniBand technology is now found on 163 systems, down from 178 systems six months ago, and is the second most-used internal system interconnect technology.

Snell provided additional perspective: “What we’re seeing is a concerted effort to list systems in China, particularly from China-based system vendors. The submission rules allow for what is essentially benchmarking by proxy. If Linpack is run and verified on one system, the result can be assumed for other systems of the same (or greater) configuration, so it’s possible to put together concerted efforts to list more systems, whether out of a desire to show apparent market share, or simply for national pride.”

Discussions of list purity and benchmarking by proxy aside, the High Performance Linpack or any one-dimensional metric has limited usefulness across today’s broad mix of HPC applications. This truth, well understood in HPC circles, is not always appreciated outside the community or among government stakeholders who want “something to show” for public investment.

“Actual system effectiveness is getting more difficult to compare, as the industry swings back toward specialized hardware,” Snell commented. “Just because one architecture outperforms another on one benchmark doesn’t make it the best choice for all workloads. This is particularly challenging for mixed-workload research environments trying to serve multiple domains. 88 percent of all HPC users say they will need to support multiple architectures for the next few years, running applications on the most appropriate systems for their requirements.”

Chip technology (Source: Top500)

There has been stagnation on the list for several iterations and turnover is historically low. Neither Summit nor Sierra (the US CORAL machines, projected to achieve ~180 petaflops) nor the upgraded Tianhe-2A (projected 94.97 petaflops peak) made the cut for the 50th list as had been speculated. While HPC is seeing a time of increased architectural diversity at the system and processor level, the current list is less diverse by some measures. To wit, of the 136 new systems on the list, Intel is foundational to all of them (36 of these utilize accelerators*). So no new Power, no new AMD (it’s still early for EPYC) and nothing from ARM yet. In total 471 systems, or 94.2 percent, are now using Intel processors, up a notch from 92.8 percent six months ago. The share of IBM Power processors is at 14 systems, down from 21 systems in June. There are five AMD-based systems remaining on the list, down from seven one year ago.

Nvidia’s new SaturnV Volta system.

In the US, IBM Power9 systems Summit and Sierra are on track for 2018 installation at Oak Ridge and Livermore labs (respectively), and multiple other exascale-focused systems are in play in China, Europe and Japan, showcasing a new wave of architectural diversity. We expect there will be more exciting supercomputing trends to report on from ISC 2018 in Frankfurt.

*Breakdown of the 36 new accelerated systems: 29 have P100s (one with NVLink, an HPE SGI system at number 292 (Japan)), one internal Nvidia V100 Volta system (#149, SaturnV Volta); one K80-based system (#267, Lenovo); two Sugon-built P40 systems (#161, #300), and three PEZY systems (#260, #277, #308). Further, out of the 36, only the internal Nvidia machine is US-based. 30 are Chinese (by Lenovo, Inspur, Sugon); the remaining five are Japanese (by NTT, HPE, PEZY).

The post Flipping the Flops and Reading the Top500 Tea Leaves appeared first on HPCwire.

Ellexus Releases I/O Profiling Tool Suites Based on the Arm Architecture

Mon, 11/13/2017 - 07:38

CAMBRIDGE, England, Nov. 13, 2017 — Ellexus, the I/O profiling company, has released versions of its flagship products Breeze, Healthcheck and Mistral, all based on the Armv8-A architecture. The move comes as part of the company’s strategy to provide cross-platform support that gives engineers a uniform tooling experience across different hardware platforms.

Accompanying the release, Ellexus is also announcing that its tools will be integrated with Arm Forge and Arm Performance Reports, market-leading tools for debugging, profiling and optimizing high performance applications, previously known as Allinea.

The integration takes advantage of a custom metrics API in the Arm tools, allowing third parties to plug into them and enable contextual analysis of more targeted performance metrics. The integration with Arm tools will provide an even more comprehensive suite of I/O profiling tools at a time when optimization has never been so important.

Unlike other profiling tools, Ellexus’ technology can be run continuously at scale. The reports generated give enough information to make every engineer an I/O expert. These tools will help organizations to deploy an I/O profiling solution as part of software qualification, as a live monitoring tool, or as a way to understand and prevent I/O problems from returning.

Ellexus Mistral is designed to run in real time on a cluster, identifying rogue jobs before they can cause a problem. In contrast, Ellexus Breeze provides an extremely detailed profile of a job or application, providing dependency analysis that makes cloud migration or migration to a different architecture easy. Ellexus’ latest tool, Healthcheck, produces a simple I/O report that tells the user what their application is doing wrong and why, giving all users the power to optimise I/O for the cluster.
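To give a rough sense of what continuous, lightweight I/O monitoring involves, the sketch below is a generic illustration only (not Ellexus code or its methodology): it samples the per-process I/O counters Linux exposes in /proc/<pid>/io, computes a write rate over a short interval and flags a process that exceeds a hypothetical threshold, the kind of signal a real-time tool can act on before a rogue job overwhelms shared storage.

    /* Generic sketch of threshold-based I/O monitoring on Linux.
     * The PID and MB/s threshold are hypothetical command-line inputs.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Return write_bytes for a PID from /proc/<pid>/io, or -1 on error. */
    static long long write_bytes(int pid)
    {
        char path[64];
        snprintf(path, sizeof(path), "/proc/%d/io", pid);
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;

        char line[128];
        long long bytes = -1;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "write_bytes:", 12) == 0) {
                bytes = atoll(line + 12);
                break;
            }
        }
        fclose(f);
        return bytes;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <MB/s threshold>\n", argv[0]);
            return 1;
        }
        int pid = atoi(argv[1]);
        double threshold = atof(argv[2]);

        long long before = write_bytes(pid);
        sleep(5);                                /* sampling interval */
        long long after = write_bytes(pid);
        if (before < 0 || after < 0) {
            fprintf(stderr, "could not read /proc/%d/io\n", pid);
            return 1;
        }

        double mb_per_s = (after - before) / 5.0 / (1024.0 * 1024.0);
        printf("pid %d wrote %.1f MB/s\n", pid, mb_per_s);
        if (mb_per_s > threshold)
            printf("pid %d exceeds the %.1f MB/s limit: flag as rogue\n",
                   pid, threshold);
        return 0;
    }

A production monitor obviously goes much further (per-file attribution, MPI-rank awareness, integration with the scheduler), but the underlying loop of sampling counters and comparing rates against policy is the same idea.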

Ellexus Mistral, Breeze and Healthcheck add a comprehensive layer of I/O profiling information to what is already on offer from the Arm tool suite, and can drill down to which files have been accessed. They provide additional monitoring for IT managers and dev ops engineers, in particular those who run continuous integration and testing frameworks.

Tim Whitfield, vice president and general manager, Technology Services Group, Arm, said: “Arm is always looking for ways to further optimize our high-performance application estate, and as we continue to scale up and out this has never been more important. Arm and Ellexus are continuing a deep collaboration in this space to provide a comprehensive tools suite for HPC.”

On the decision to release versions based on Arm, Dr Rosemary Francis, CEO of Ellexus, said, “As the high-performance computing industry targets new compute architectures and cloud infrastructures, it’s never been more important to optimise the way programs access large data sets. Bad I/O patterns can harm shared storage and will limit application performance, wasting millions in lost engineering time.

“We are extremely excited to announce the integration of our tools with the Arm tool suite. Together we will be able to help more organisations to get the most out of their compute clusters.”

About Ellexus

Ellexus is an I/O profiling company. From a detailed analysis of one application or workflow pipeline to whole-cluster, lightweight monitoring and reporting, it provides solutions that solve all I/O profiling needs.

Source: Ellexus

The post Ellexus Releases I/O Profiling Tool Suites Based on the Arm Architecture appeared first on HPCwire.
