HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Physicists Win Supercomputing Time to Study Fusion and the Cosmos

Tue, 12/12/2017 - 13:10

Dec. 12, 2017 — More than 210 million core hours on two of the most powerful supercomputers in the nation have been won by two teams led by researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). The highly competitive awards from the DOE Office of Science’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program will accelerate the development of nuclear fusion as a clean and abundant source of energy for generating electricity and will advance understanding of the high-energy-density (HED) plasmas found in stars and other astrophysical objects.

A single core hour represents the use of one computer core, or processor, for one hour. A laptop computer with only one processor would take some 24,000 years to run 210 million core hours.
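
For scale, the arithmetic behind that comparison is simple; the snippet below is an illustrative back-of-the-envelope check (not part of the award announcement) that reproduces the roughly 24,000-year figure.

```python
# Back-of-the-envelope check of the core-hour comparison (illustrative only).
core_hours = 210_000_000            # combined INCITE award, in core hours
hours_per_year = 24 * 365           # one core running around the clock

years_on_one_core = core_hours / hours_per_year
print(f"{years_on_one_core:,.0f} years")   # ~23,973 years, i.e. "some 24,000 years"
```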

“Extremely important and beneficial”

“These awards are extremely important and beneficial,” said Michael Zarnstorff, deputy director for research at PPPL. “They give us access to leadership-class highest-performance computers for highly complex calculations. This is key for advancing our theoretical modeling and understanding.” Leadership-class computing systems are high-end computers that are among the most advanced in the world for solving scientific and engineering problems.

The allocations include more than 160 million core hours for physicist C.S. Chang and his team, marking the first year of a renewable three-year award. The first-year hours are distributed over two machines: 100 million core hours on Titan, the most powerful U.S. supercomputer, which can perform some 27 quadrillion (10^15) calculations per second at the Oak Ridge Leadership Computing Facility (OLCF); and 61.5 million core hours on Theta, which completes some 10 quadrillion calculations a second at the Argonne Leadership Computing Facility (ALCF). Both sites are DOE Office of Science User Facilities.

The second award provides 50 million core hours on Titan for a team led by Amitava Bhattacharjee, head of the Theory Department at PPPL, and William Fox to study HED plasmas produced by lasers.

Chang’s group, which consists of colleagues at PPPL and other institutions, will use the time to run the XGC code developed by PPPL and nationwide partners. The team is exploring the dazzlingly complex edge of fusion plasmas, with Chang as lead principal investigator of the partnership center for High-fidelity Boundary Plasma Simulation — a program supported by the DOE Office of Science’s Scientific Discovery through Advanced Computing (SciDAC). The edge is critical to the performance of the plasma that fuels fusion reactions.

Fusion — the fusing of light elements

Fusion is the fusing of light elements that most stars use to generate massive amounts of energy – and that scientists are trying to replicate on Earth for a virtually inexhaustible supply of energy. Plasma – the fourth state of matter that makes up nearly all the visible universe – is the fuel they would use to create fusion reactions.

The XGC code will perform double-duty to investigate developments at the edge of hot, charged fusion plasma. The program will simulate the transition from low- to high-confinement of the edge of fusion plasmas contained inside magnetic fields in doughnut-shaped fusion devices called tokamaks. Also simulated will be the width of the heat load that will strike the divertor, the component of the tokamak that will expel waste heat and particles from future fusion reactors based on magnetic confinement such as ITER, the international tokamak under construction in France to demonstrate the practicality of fusion power.

The simulations will build on knowledge that Chang gained in the previous SciDAC project cycle. “We’re just getting started,” Chang said. “In the new SciDAC project we need to understand the different types of transition that are thought to occur in the plasma, and the physics behind the width of the heat load, which can damage the divertor in future facilities such as ITER if the load is too narrow and concentrated.”

Advancing progress in understanding HED plasmas

The Bhattacharjee-Fox award, the second and final part of a two-year project, will advance progress in the team’s understanding of the dynamics of magnetic fields in HED plasmas. “The simulations will be immensely beneficial in designing and understanding the results of experiments carried out at the University of Rochester and the National Ignition Facility at Lawrence Livermore National Laboratory,” Bhattacharjee said.

The project explores the magnetic reconnection and shocks that occur in HED plasmas, producing enormous energy in processes such as solar flares, cosmic rays and geomagnetic storms. Magnetic reconnection takes place when the magnetic field lines in plasma converge and break apart, converting magnetic energy into explosive particle energy. Shocks appear when the flows in the plasma exceed the speed of sound, and are a powerful process for accelerating charged particles.

To study the process, the team fires high-power lasers at tiny spots of foil, creating plasma bubbles with magnetic fields that collide to form shocks and come together to create reconnection. “Our group has recently made important progress on the properties of shocks and novel mechanisms of magnetic reconnection in laser-driven HED plasmas,” Bhattacharjee said. “This could not be done without INCITE support.”

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: PPPL

ACM Recognizes 2017 Fellows for Advancing Technology in the Digital Age

Tue, 12/12/2017 - 10:20

NEW YORK, Dec. 12, 2017 — ACM, the Association for Computing Machinery, has named 54 members ACM Fellows for major contributions in areas including database theory, design automation, information retrieval, multimedia computing and network security. The accomplishments of the 2017 ACM Fellows lead to transformations in science and society. Their achievements play a crucial role in the global economy, as well as how we live and work every day.

“To be selected as a Fellow is to join our most renowned member grade and an elite group that represents less than 1 percent of ACM’s overall membership,” explains ACM President Vicki L. Hanson. “The Fellows program allows us to shine a light on landmark contributions to computing, as well as the men and women whose tireless efforts, dedication, and inspiration are responsible for groundbreaking work that improves our lives in so many ways.”

Underscoring ACM’s global reach, the 2017 Fellows hail from universities, companies and research centers in China, Denmark, Germany, Hong Kong, Switzerland, the United Kingdom and the United States.

The 2017 Fellows have been cited for numerous contributions in areas including artificial intelligence, big data, computer architecture, computer graphics, high performance computing, human-computer interaction, sensor networks, wireless networking and theoretical computer science.

ACM will formally recognize its 2017 Fellows at the annual Awards Banquet, to be held in San Francisco on June 23, 2018. Additional information about the 2017 ACM Fellows, the awards event, as well as previous ACM Fellows and award winners, is available at http://awards.acm.org/.

2017 ACM Fellows

Lars Birkedal
Aarhus University
For contributions to the semantic and logical foundations of compilers and program verification systems

Edouard Bugnion
EPFL
For contributions to virtual machines

Margaret Burnett
Oregon State University
For contributions to end-user software engineering, understanding gender biases in software, and broadening participation in computing

Shih-Fu Chang
Columbia University
For contributions to large-scale multimedia content recognition and multimedia information retrieval

Edith Cohen
Google Research
For contributions to the design of efficient algorithms for networking and big data

Dorin Comaniciu
Siemens Healthcare
For contributions to machine intelligence, diagnostic imaging, image-guided interventions, and computer vision

Susan M. Dray
Dray & Associates
For co-founding ACM SIGCHI and disseminating exemplary user experience design and evaluation practices worldwide

Edward A. Fox
Virginia Tech
For contributions in information retrieval and digital libraries

Richard M. Fujimoto
Georgia Institute of Technology
For contributions to parallel and distributed discrete event simulation

Shafi Goldwasser  
Massachusetts Institute of Technology
For transformative work that laid the complexity-theoretic foundations for the science of cryptography

Carla P. Gomes  
Cornell University
For establishing the field of computational sustainability, and for foundational contributions to artificial intelligence

Martin Grohe 
RWTH Aachen University
For contributions to logic in computer science, database theory, algorithms, and computational complexity

Aarti Gupta 
Princeton University
For contributions to system analysis and verification techniques and their transfer to industrial practice

Venkatesan Guruswami
Carnegie Mellon University
For contributions to algorithmic coding theory, pseudorandomness and the complexity of approximate optimization

Dan Gusfield
University of California, Davis
For contributions to combinatorial optimization and to algorithmic computational biology

Gregory D. Hager
Johns Hopkins University
For contributions to vision-based robotics and to computer-enhanced interventional medicine

Steven Michael Hand
Google
For contributions to virtual machines and cloud computing

Mor Harchol-Balter 
Carnegie Mellon University
For contributions to performance modeling and analysis of distributed computing systems

Laxmikant Kale 
University of Illinois at Urbana-Champaign
For development of new parallel programming techniques and their deployment in high performance computing applications

Michael Kass
NVIDIA
For contributions to computer vision and computer graphics, particularly optimization and simulation

Angelos Dennis Keromytis
DARPA
For contributions to the theory and practice of systems and network security

Carl Kesselman 
University of Southern California
For contributions to high-performance computing, distributed systems, and scientific data management

Edward Knightly 
Rice University
For contributions to multi-user wireless LANs, wireless networks for underserved regions, and cross-layer wireless networking

Craig Knoblock 
University of Southern California
For contributions to artificial intelligence, semantic web, and semantic data integration

Insup Lee
University of Pennsylvania
For theoretical and practical contributions to compositional real-time scheduling and runtime verification

Wenke Lee
Georgia Institute of Technology
For contributions to systems and network security, intrusion and anomaly detection, and malware analysis

Li Erran Li
Uber Advanced Technologies Group
For contributions to the design and analysis of wireless networks, improving architectures, throughput, and analytics

Gabriel H. Loh
Advanced Micro Devices, Inc.
For contributions to die-stacking technologies in computer architecture

Tomás Lozano-Pérez
Massachusetts Institute of Technology
For contributions to robotics, motion planning, geometric algorithms, and their applications

Clifford A. Lynch
Coalition for Networked Information
For contributions to library automation, information retrieval, scholarly communication, and information policy

Yi Ma
University of California, Berkeley
For contributions to theory and application of low-dimensional models for computer vision and pattern recognition

Andrew K. McCallum
University of Massachusetts at Amherst
For contributions to machine learning with structured data, and innovations in scientific communication

Silvio Micali
Massachusetts Institute of Technology
For transformative work that laid the complexity-theoretic foundations for the science of cryptography

Andreas Moshovos 
University of Toronto
For contributions to high-performance architecture including memory dependence prediction and snooping coherence

Gail C. Murphy
The University of British Columbia
For contributions to recommenders for software engineering and to program comprehension

Onur Mutlu
ETH Zurich
For contributions to computer architecture research, especially in memory systems

Nuria Oliver
Vodafone/Data-Pop Alliance
For contributions in probabilistic multimodal models of human behavior and uses in intelligent, interactive systems

Balaji Prabhakar 
Stanford University
For developing algorithms and systems for large-scale data center networks and societal networks

Tal Rabin
IBM Research
For contributions to foundations of cryptography, including multi-party computations, signatures, and threshold and proactive protocol design

K. K. Ramakrishnan
University of California, Riverside 
For contributions to congestion control, operating system support for networks and virtual private networks

Ravi Ramamoorthi
University of California San Diego 
For contributions to computer graphics rendering and physics-based computer vision

Yvonne Rogers  
University College London
For contributions to human-computer interaction and the design of human-centered technology

Yong Rui  
Lenovo Group
For contributions to image, video and multimedia analysis, understanding and retrieval

Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
For contributions to the theory and practice of machine learning

Steven M. Seitz
University of Washington, Seattle
For contributions to computer vision and computer graphics

Michael Sipser
Massachusetts Institute of Technology
For contributions to computational complexity, particularly randomized computation and circuit complexity

Anand Sivasubramaniam
Penn State University
For contributions to power management of datacenters and high-end computer systems

Mani B. Srivastava
University of California, Los Angeles
For contributions to sensor networks, mobile personal sensing, and cyber-physical systems

Alexander Vardy
University of California San Diego
For contributions to the theory and practice of error-correcting codes and their study in complexity theory

Geoffrey M. Voelker
University of California San Diego
For contributions to empirical measurement and analysis in systems, networking and security

Martin D. F. Wong
University of Illinois at Urbana-Champaign
For contributions to the algorithmic aspects of electronic design automation (EDA)

Qiang Yang
Hong Kong University of Science and Technology
For contributions to artificial intelligence and data mining

ChengXiang Zhai
University of Illinois at Urbana-Champaign
For contributions to information retrieval and text data mining

Aidong Zhang
State University of New York at Buffalo
For contributions to bioinformatics and data mining

About ACM

ACM, the Association for Computing Machinery (www.acm.org) is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence.  ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About the ACM Fellows Program

The ACM Fellows Program (http://awards.acm.org/fellow/), initiated in 1993, celebrates the exceptional contributions of the leading members in the computing field. These individuals have helped to enlighten researchers, developers, practitioners and end users of information technology throughout the world. The new ACM Fellows join a distinguished list of colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

Source: ACM

Tencent Cloud Adopts Mellanox Interconnect Solutions for HPC and AI Cloud Offering

Tue, 12/12/2017 - 09:33

SUNNYVALE, Calif. & YOKNEAM, Israel, Dec. 12, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that Tencent Cloud has adopted Mellanox interconnect solutions for its high-performance computing (HPC) and artificial intelligence (AI) public cloud offering. Tencent Cloud is a secure, reliable and high-performance public cloud service that integrates Tencent’s infrastructure capabilities with the advantages of a massive-user platform and ecosystem.

The Tencent Cloud infrastructure leverages Mellanox Ethernet and InfiniBand adapters, switches and cables to deliver advanced public cloud services. By taking advantage of Mellanox RDMA, in-network computing and other interconnect acceleration engines, Tencent Cloud can now offer high-performance computing services, as required by its users, to develop advanced applications and offer new services.

“Tencent Cloud is utilizing Mellanox interconnect and applications acceleration technology to help companies develop their next generation products and offer new and intelligent services,” said Wang Huixing, vice president of Tencent Cloud. “We are excited to work with Mellanox to integrate its world-leading interconnect technologies into our public cloud offerings, and plan to continue to scale our infrastructure product lines to meet the growing needs of our customers.”

“We are proud to partner with Tencent Cloud, who is leveraging our advanced interconnect technology to help build a leading high-performance computing and artificial intelligence-based public cloud infrastructure,” said Amir Prescher, senior vice president of business development at Mellanox Technologies. “Through Tencent Cloud, companies will benefit from Mellanox’s technology to build new products and services that can leverage faster and more efficient data analysis. We look forward to continuing to work with Tencent and expanding the use of Mellanox solutions in its cloud offering.”

Mellanox interconnect solutions deliver the highest efficiency for high-performance computing, artificial intelligence, cloud, storage, and other applications.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

About Tencent Cloud

Tencent Cloud is a leading global cloud service provider. A product of Tencent, Tencent Cloud was built with the expertise of teams who created innovative services like QQ, WeChat and Qzone. Tencent Cloud provides integrated cloud services such as IaaS, PaaS and SaaS, and is a one-stop service for enterprises seeking to adopt public cloud, hybrid cloud, private cloud and cloud-based financial services. It is also a pioneer in cutting edge web technologies such as Cloud Image, facial recognition, big data analytics, machine learning, audio/video technology and security protection. Tencent Cloud delivers integrated industry solutions for gaming, finance, e-commerce, tourism, online-to-offline services, governments, healthcare, online education, and smart hardware. It also provides general solutions with different functions, including online video, website set-up, hybrid cloud, big data, the WeChat eco-system and more. For more information, please visit https://www.qcloud.com/.

Source: Mellanox

ESnet Now Moving More Than 1 Petabyte/wk

Tue, 12/12/2017 - 09:20

Optimizing ESnet (Energy Sciences Network), the world’s fastest network for science, is an ongoing process. Recently a two-year collaboration by ESnet users – the Petascale DTN Project – achieved its ambitious goal of sustaining data transfers above the target rate of 1 petabyte per week. ESnet is managed by Lawrence Berkeley National Laboratory for the Department of Energy.

During the past two years ESnet engineers have been working with staff at DOE labs to fine-tune the specially configured systems called data transfer nodes (DTNs) that move data in and out of the National Energy Research Scientific Computing Center (NERSC) at LBNL and the leadership computing facilities at Argonne National Laboratory and Oak Ridge National Laboratory. A good article describing the ESnet project (ESnet’s Petascale DTN project speeds up data transfers between leading HPC centers) was posted yesterday on Phys.org.

A variety of software and hardware upgrades and expansion were required to achieve the speedup. Here are two examples taken from the article:

  • At NERSC, the DTN project resulted in adding eight more nodes, tripling the number, in order to achieve enough internal bandwidth to meet the project’s goals. “It’s a fairly complicated thing to do,” said Damian Hazen, head of NERSC’s Storage Systems Group. “It involves adding infrastructure and tuning as we connected our border routers to internal routers to the switches connected to the DTNs. Then we needed to install the software, get rid of some bugs and tune the entire system for optimal performance.”
  • Oak Ridge Leadership Computing Facility now has 28 transfer nodes in production on 40-Gigabit Ethernet. The nodes are deployed under a new model—a diskless boot—which makes it easy for OLCF staff to move resources around, reallocating as needed to respond to users’ needs. “The Petascale DTN project basically helped us increase the ‘horsepower under the hood’ of network services we provide and make them more resilient,” said Jason Anderson, an HPC UNIX/storage systems administrator at OLCF. “For example, we recently moved 12TB of science data from OLCF to NCSA in less than 30 minutes. That’s fast!”

The Petascale DTN collaboration also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real-world science data sets. The number of sites with this base capability is also expanding, with Brookhaven National Laboratory in New York now testing its transfer capabilities with encouraging results. Future plans include bringing the NSF-funded San Diego Supercomputer Center and other big data sites into the mix.
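
Those throughput figures follow from straightforward unit conversions. The sketch below is an illustrative calculation only, not project data; the roughly 15 Gbps quoted above presumably includes headroom over the raw payload rate for real-world data sets and protocol overhead.

```python
# Unit conversions behind the Petascale DTN throughput figures (illustrative only).
PB = 10**15                      # bytes in a petabyte (decimal)
TB = 10**12                      # bytes in a terabyte (decimal)
WEEK_SECONDS = 7 * 24 * 3600

# Sustained bit rate needed to move 1 PB of payload in one week
pb_per_week_gbps = (PB * 8) / WEEK_SECONDS / 1e9
print(f"1 PB/week ≈ {pb_per_week_gbps:.1f} Gbps sustained")   # ≈ 13.2 Gbps of raw payload

# OLCF example quoted above: 12 TB moved to NCSA in under 30 minutes
olcf_gbps = (12 * TB * 8) / (30 * 60) / 1e9
print(f"12 TB in 30 minutes ≈ {olcf_gbps:.0f} Gbps")          # ≈ 53 Gbps
```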

Performance measurements from November 2017 at the end of the Petascale DTN project. All of the sites met or exceed project goals. Credit: Eli Dart, ESnet

“This increase in data transfer capability benefits projects across the DOE mission science portfolio,” said Eli Dart, an ESnet network engineer and leader of the project. “HPC facilities are central to many collaborations, and they are becoming more important to more scientists as data rates and volumes increase. The ability to move data in and out of HPC facilities at scale is critical to the success of an ever-growing set of projects.”

Link to full Phys.org article: https://phys.org/news/2017-12-esnet-petascale-dtn-hpc-centers.html

Quantum Unveils Scale-out NAS for High-Value and Data-Intensive Workloads

Tue, 12/12/2017 - 08:51

SAN JOSE, Calif., Dec. 12, 2017 — Quantum Corp. (NYSE: QTM) today announced Xcellis Scale-out NAS, the industry’s first workflow storage appliance to provide the management capabilities and robust features of enterprise scale-out NAS with the cost-effective scaling organizations need to address modern data growth. It delivers greater than 3X the performance of competitive enterprise NAS offerings and, with integrated storage tiering, an end-to-end solution can cost as little as 1/10 that of alternative enterprise NAS solutions with the same capacity and performance. This combination makes Xcellis Scale-out NAS unique in comprehensively addressing the needs of high-value data environments where the organization’s revenue and products are all built around data.

Unified Unstructured Data at Scale 

Many IoT, media and entertainment, life sciences, manufacturing, video surveillance and enterprise high-performance computing (HPC) environments are outgrowing traditional enterprise NAS. Users have typically turned to scale-out NAS over the past decade as an alternative but are finding that scaling capacity, integrating cloud strategies and sharing data are afterthoughts or not even possible with the solutions they’ve adopted. Unlike enterprise IT workloads, data in high-value workload environments is constantly growing on every axis — ingest, processing, analysis, distribution, archive. These environments require storage solutions with the management and features of enterprise NAS, but which can also cost-effectively scale performance and capacity. Leveraging Quantum’s industry-leading StorNext® parallel file system and data management platform, Xcellis Scale-out NAS offers industry-leading performance, scalability and management benefits for organizations with high-value workloads:

  • Cost-Effective Scaling of Performance and Capacity: Clusters can scale performance and capacity together or independently to reach hundreds of petabytes in capacity and terabytes per second in performance. A single client (SMB, NFS or high-performance client) can achieve over 3X the performance of competitive scale-out NAS offerings with multiple clients scaling a single cluster’s bandwidth to over a terabyte per second. In addition, an end-to-end solution with Xcellis has been shown to manage petabytes of data in a simplified workflow incorporating tape or cloud that provides greater performance than leading NAS-only alternatives for as little as a tenth of the cost.
  • Advanced Features and Flexible Management: With simple installation and setup, a modern administrative single-screen interface provides in-depth monitoring, alerting and management functions as well as rapid scanning and search capabilities that tame large data repositories. Xcellis Scale-out NAS is designed to integrate with the highest performance Ethernet networks through SMB and NFS interfaces and offers the flexibility to also support high-performance block storage in the same converged solution.
  • Lifecycle, Location and Cost Management: Xcellis Scale-out NAS leverages more than 15 years of data management experience built into StorNext. Xcellis data management provides automatic tiering between SSD, disk, tape, object storage and public cloud. Copies can be created for content distribution, collaboration, data protection and disaster recovery.

Artificial Intelligence With Xcellis 

Xcellis Scale-out NAS is the industry’s only NAS solution with integrated artificial intelligence (AI) capabilities that enable customers to create more value from new and existing data. It can actively interrogate data across multiple axes to uncover events, objects, faces, words and sentiments, automatically generating custom metadata that unlocks new possibilities for using stored assets.

Availability

Xcellis Scale-out NAS will be generally available this month with entry configurations and those leveraging tiering starting at under $100 per terabyte (raw).

About Quantum 

Quantum is a leading expert in scale-out tiered storage, archive and data protection. The company’s StorNext platform powers modern high-performance workflows, enabling seamless, real-time collaboration and keeping content readily accessible for future use and remonetization. More than 100,000 customers have trusted Quantum to address their most demanding content workflow needs, including large government agencies, broadcasters, research institutions and commercial enterprises. With Quantum, customers have the end-to-end storage platform they need to manage assets from ingest through finishing and into delivery and long-term preservation. See how at www.quantum.com/customerstories.

Source: Quantum

Dell EMC, Alces Flight to Create Hybrid HPC Solution for the University of Liverpool

Mon, 12/11/2017 - 14:58

BICESTER, UK, December 6, 2017 — Built by Dell EMC and Alces Flight for the University of Liverpool, the new solution will use Amazon Web Services (AWS) to provide students and researchers greater flexibility in research by combining on-premises and cloud resources into a hybrid high performance computing (HPC) solution.

“We are pleased to be working with Dell EMC and Alces Flight on this new venture,” said Cliff Addison, head of Advanced Research Computing at the University of Liverpool. “The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we’re setting the bar even higher for what can be achieved in HPC.”

“Universities are competing in an increasingly demanding environment and the need to differentiate and offer a best-in-class experience is vital. The collaboration between ourselves and Alces Flight helps the University of Liverpool offer a significant computing resource to their students and faculty,” said Peter Barnes, VP Infrastructure Solutions Group, Dell EMC UK and Ireland. “Our Dell PowerEdge 14G servers provide the highly scalable compute platform that is instrumental in this. High-Performance Computing is already being used around the world to make significant scientific breakthroughs and today’s launch will hopefully be the catalyst for more.”

“AWS is making High-Performance Computing (HPC) more accessible and more scalable than ever before,” said Brendan Bouffler, Global Research Computing at Amazon Web Services. “Using AWS, scientists at Liverpool University will have instant access to facilities most of their peers at other institutions stand in a queue for and that means a much faster turn-around between experiments, speeding up their time to the next exciting discovery. It’s humbling to see what this community is able to achieve when traditional constraints of IT are removed.”

“We want researchers and students at the University of Liverpool to be able to run HPC workloads anywhere and at any time,” said Wil Mayers, Technical Director of Alces Flight. “By being able to provide a unified HPC environment that incorporates both Dell EMC on-premises hardware and AWS, we can provide users with a high-quality, consistent experience. Our hope is that this results in further collaborative engagements that push hybrid HPC forward, allowing users to have quick access to some of the best hardware and cloud resources available.”

The collaborative solution from Dell EMC and Alces Flight on AWS is the first of its kind for the University of Liverpool. A fully managed on-premises HPC cluster and cloud-based HPC account have been architected for students and researchers to achieve access to the computational resources required with as little delay as possible.

About the University of Liverpool, Department of Computing Services

The University of Liverpool is one of the UK’s leading research institutions with an annual turnover of £480 million, including £102 million for research. Ranked in the top 1% of higher education institutions worldwide, Liverpool is a member of the prestigious Russell Group of the U.K.’s leading research universities.

About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries – including 98 percent of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.

About Alces Flight

Alces Flight is made up of a team of specialists in High Performance Computing (HPC) software for scientists, engineers and researchers. Based in the U.K., Alces designs, builds and supports environments to help users make efficient use of the compute and storage resources available to them. Our products are designed to support both existing users with software that is familiar to them and help first-time users to discover, learn and develop their HPC skills.

Source: Alces Flight Limited

HPC-as-a-Service Finds Toehold in Iceland

Mon, 12/11/2017 - 14:20

While high-demand workloads (e.g., bitcoin mining) can overheat data center cooling capabilities, at least one data center infrastructure provider has announced an HPC-as-a-service offering that features 100 percent free and zero-carbon cooling.

Verne Global, a company seemingly intent on converting Iceland into a gigantic, renewably powered data center facility, has announced hpcDIRECT, a scalable, bare metal service designed to support power-intensive high performance computing applications. Finding initial markets in the financial services, manufacturing (particularly automotive) and scientific research verticals, hpcDIRECT is powered by the island country’s abundant supply of hydroelectric, geothermal and, to a lesser degree, wind energy that the company says delivers 60 percent savings on power costs.

The launch follows Verne’s late-October announcement that it had completed a 12.4-Tbps upgrade to the Greenland Connect subsea cable system with three 100-Gbps connections to New York, lowering latency and delivering, according to Verne, up to 90 percent lower network costs.

hpcDIRECT is a response to customers who “want to consume their HPC in both the traditional sense, where we provide them with colocation capability, power, space, cooling – the traditional method a data center operator provides services to a customer – and then also to provide a next layer in the technology stack…the hardware and the orchestration of that hardware,” Dominic Ward, managing director at Verne, told EnterpriseTech.

He said hpcDIRECT is available with no upfront charges and can be provisioned to customers’ size and configuration requirements. hpcDIRECT clusters are built using the latest architectures available, including Intel Xeon (Skylake) servers, connected with Mellanox EDR InfiniBand.

“By leveraging low-cost, reliable, and 100 percent renewable power at its Keflavik campus, the company holds a rather unique position compared to other providers in the industry that offer services similar to hpcDIRECT,” said Teddy Miller, associate analyst at industry watcher 451 Research. “Verne Global’s new product will appeal particularly to enterprises with corporate sustainability mandates or initiatives. The recent completion of an upgrade to Tele Greenland’s Greenland Connect subsea cable system should also significantly lower latency and network costs between the Keflavik campus and New York City. Verne Global may be small compared to other players in the space, but what it offers its customers is cheap, green and increasingly well-connected.”

Verne said hpcDIRECT is accessible via a range of options, from incremental additions to augment existing high performance computing, to supporting massive processing requirements with petaflops of compute. “This flexibility makes it an ideal solution for applications such as computer-aided engineering, genomic sequencing, molecular modelling, grid computing, artificial intelligence and machine learning,” the company said.

Demand drivers in financial services for colocation data center services begin with the industry’s move away from capital expenditures (capex) on the balance sheet and toward more efficient opex alternatives. Ward said banks and hedge fund companies typically run compute-intensive inter- and intra-day risk applications; “they often have a core level of compute that they want to augment with a more flexible and scalable solution.”

In the automotive sector, companies typically have CFD and crash simulation software running at high utilization 24/7/365. A service like hpcDIRECT, Ward said, “enables them to increment the compute resource they have for high performance applications on a steady basis.” This avoids time-consuming and costly procurement cycles, he said. “We’re able to provide them an augmentation, or a complete replacement, for [their high-performance] resources and step into the demand profile that fits their compute demands.”

Fujitsu Develops WAN Acceleration Technology Utilizing FPGA Accelerators

Mon, 12/11/2017 - 10:37

TOKYO, Dec. 11, 2017 — Fujitsu Laboratories Ltd. today announced the development of WAN acceleration technology that can deliver transfer speeds up to 40Gbps for migration of large volumes of data between clouds, using servers equipped with field-programmable gate arrays (FPGAs).

Connections in wide area networks (WANs) between clouds are moving from 1Gbps lines to 10Gbps lines, but with the recent advance of digital technology, including IoT and AI, there is an even greater demand for high-speed data transfers as huge volumes of data are collected in the cloud. Until now, the effective transfer speed of WAN connections has been raised using techniques that reduce the volume of data, such as compression and deduplication. However, with WAN lines of 10Gbps there are enormous volumes of data to be processed, and existing WAN acceleration technologies usable in cloud servers have not been able to sufficiently raise the effective transfer rate.

Fujitsu Laboratories has now developed WAN acceleration technology capable of real-time operation even at speeds of 10Gbps or higher. The technology mounts dedicated computational units, specialized for processing such as feature value calculation and compression, onto an FPGA installed in a server, and enables highly parallel operation of those units by supplying data at the appropriate times based on the predicted completion of each computation.

In a test environment where this technology was deployed on servers that use FPGAs, and where the servers were connected with 10Gbps lines, Fujitsu Laboratories confirmed that this technology achieved effective transfer rates of up to 40Gbps, the highest performance in the industry. With this technology, it has become possible to transfer data at high-speeds between clouds, including data sharing and backups, enabling the creation of next-generation cloud services that share and utilize large volumes of data across a variety of companies and locations.

Fujitsu Laboratories aims to deploy this technology, capable of use in cloud environments, as an application loaded on an FPGA-equipped server. It is continuing evaluations in practical environments with the goal of commercializing this technology during fiscal 2018.

Fujitsu Laboratories will announce details of this technology at the 2017 International Conference on Field-Programmable Technology (FPT 2017), an international conference to be held in Melbourne, Australia on December 11-13.

Development Background

As the cloud has grown in recent years, there has been a movement to increase data and server management and maintenance efficiency by migrating data (i.e., internal documents, design data, and email) that had been managed on internal servers to the cloud. In addition, as shown by the spread in the use of digital technology such as IoT and AI, there are high expectations for the ways that work and business will be transformed by the analysis and use of large volumes of data, including camera images from factories and other on-site locations, and log data from devices. Given this, there has been explosive growth in the volume of data passing through WAN lines between clouds, spurring a need for next-generation WAN acceleration technology capable of huge data transfers at high-speed between clouds.

Issues

WAN acceleration technologies improve effective transfer speeds by reducing the volume of data through compression or deduplication of the data to be transferred. When transferring data at even higher speeds using 10Gbps network lines, the volume of data needing to be processed is so great that compression and deduplication processing in the server becomes the bottleneck. Therefore, in order to improve real-time operation, there is a need either for CPUs that can operate at higher speeds, or for WAN acceleration technology with faster processing speeds.

About the Newly Developed Technology

Fujitsu Laboratories has now developed WAN acceleration technology that can achieve real-time operation usable in the cloud even with speeds of 10Gbps or more, using server-mounted FPGAs as accelerators. Efficient operations with WAN acceleration technology are accomplished by using an FPGA to process a portion of the processing for which the computation is heavy and for which it is difficult to improve processing speed in the CPU, when performing compression or deduplication for WAN acceleration processing, and by efficiently connecting the CPU with the FPGA accelerator. Details of the technology are as follows.

1. FPGA parallelization technology using highly parallel dedicated computational units

Fujitsu Laboratories has developed FPGA parallelization technology that can significantly reduce the processing time required for data compression and deduplication. It deploys dedicated computational units specialized for data partitioning, feature value calculation, and lossless compression processing in an FPGA in a highly parallel configuration, and it enables highly parallel operation of those units by delivering data at the appropriate times based on predictions of the completion of each calculation.

2. Technology to optimize the flow of processing between CPU and FPGA

Previously, in determining whether to apply lossless compression to data based on the identification of duplication in that data, it was necessary to read the data twice, before and after the duplication identification was executed on the FPGA, increasing overhead and preventing the system from delivering sufficient performance. Now this technology consolidates the processing handoff onto the FPGA, handling both the preprocessing for duplication identification and the compression processing there, and uses a processing sequence in which the duplication identification results control how the compression results are reflected on the CPU. This reduces the overhead between the CPU and FPGA from reloading the input data and from control exchanges, cutting the waiting time caused by the handoff of data and control and delivering efficient coordinated operation of the CPU and FPGA accelerator.
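
For readers unfamiliar with how deduplication and compression combine in a WAN accelerator, the sketch below illustrates the general idea in plain Python: data is partitioned into chunks, a feature value (here a SHA-256 hash) identifies duplicates, and only previously unseen chunks are losslessly compressed before transfer. This is a conceptual, CPU-only illustration under assumed details (fixed 64 KB chunks, hash-based fingerprints, the wan_send helper); Fujitsu's actual system offloads these stages to dedicated FPGA computational units as described above.

```python
# Conceptual CPU-only sketch of deduplication + lossless compression.
# Fujitsu's system runs these stages on dedicated FPGA units; this only shows the idea.
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024   # fixed-size chunking for simplicity (an assumption)

def wan_send(data: bytes, seen: set) -> list:
    """Split data into chunks; send a reference for duplicates, compressed bytes otherwise."""
    messages = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        feature = hashlib.sha256(chunk).digest()      # "feature value" identifying the chunk
        if feature in seen:
            messages.append(("dup", feature))          # duplicate: send only the fingerprint
        else:
            seen.add(feature)
            messages.append(("new", feature, zlib.compress(chunk)))  # compress new data
    return messages

# Example: repeating a transfer of identical data sends only tiny references.
seen_chunks = set()
payload = b"sensor log line\n" * 100_000
first = wan_send(payload, seen_chunks)
second = wan_send(payload, seen_chunks)
print(sum(len(m[2]) for m in first if m[0] == "new"), "compressed bytes on first transfer")
print(sum(1 for m in second if m[0] == "dup"), "chunks sent as references on repeat transfer")
```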

Effects

Fujitsu Laboratories deployed this newly developed technology in servers installed with FPGAs, confirming acceleration of approximately thirty times the performance of CPU processing alone. Fujitsu Laboratories evaluated the transfer speed for a high volume of data in a test environment where the servers were connected with 10Gbps connections, and in a test simulating the regular backup of data, including documents and video, confirmed that this technology achieved transfer speeds of up to 40Gbps, an industry record. This technology has significantly improved data transfer efficiency over WAN connections, enabling high-speed data transfers between clouds, such as data sharing and backups, and making possible the creation of next-generation cloud services that share and use large volumes of data between a variety of companies and locations.

Future Plans

Fujitsu Laboratories will continue to evaluate this technology in practical environments, deploying this technology in virtual appliances that can be used in cloud environments. Fujitsu Laboratories aims to make this technology available as a product of Fujitsu Limited during fiscal 2018.

About Fujitsu Laboratories

Founded in 1968 as a wholly owned subsidiary of Fujitsu Limited, Fujitsu Laboratories Ltd. is one of the premier research centers in the world. With a global network of laboratories in Japan, China, the United States and Europe, the organization conducts a wide range of basic and applied research in the areas of Next-generation Services, Computer Servers, Networks, Electronic Devices and Advanced Materials. For more information, please see: http://www.fujitsu.com/jp/group/labs/en/.

About Fujitsu Ltd

Fujitsu is a leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

Source: Fujitsu Ltd

HPC Iron, Soft, Data, People – It Takes an Ecosystem!

Mon, 12/11/2017 - 09:53

Cutting edge advanced computing hardware (aka big iron) does not stand by itself. These computers are the pinnacle of a myriad of technologies that must be carefully woven together by people to create the computational capabilities that are used to deliver insights into the behaviors of complex systems. This collection of technologies and people has been called the High Performance Computing (HPC) ecosystem. This is an appropriate metaphor because it evokes the complicated nature of the interdependent elements needed to deliver first of a kind computing systems.

The idea of the HPC ecosystem has been around for years and most recently appeared in one of the objectives for the National Strategic Computing Initiative (NSCI). The 4th objective calls for “Increasing the capacity and capability of an enduring national HPC ecosystem.” This leads to the questions of what makes up the HPC ecosystem and why it is so important. Perhaps the more important question is, why does the United States need to be careful about letting its HPC ecosystem diminish?

The heart of the HPC ecosystem is clearly the “big humming boxes” that contain the advanced computing hardware. The rows upon rows of cabinets are the focal point of the electronic components, operating software, and application programs that provide the capabilities that produce the results used to create new scientific and engineering insights that are the real purpose of the HPC ecosystem. However, it is misleading to think that any one computer at any one time is sufficient to make up an ecosystem. Rather, the HPC ecosystem requires a continuous pipeline of computer hardware and software. It is that continuous flow of developing technologies that keeps HPC progressing on the cutting edge.

The hardware element of the pipeline includes systems and components that are under development, but are not currently available. This includes the basic research that will create the scientific discoveries that enable new approaches to computer designs. The ongoing demand for “cutting edge” systems is important to keep system and component designers pushing the performance envelope. The pipeline also includes the currently installed highest performance systems. These are the systems that are being tested and optimized. Every time a system like this is installed, technology surprises are found that must be identified and accommodated. The hardware pipeline also includes systems on the trailing edge. At this point, the computer hardware is quite stable and allows a focus on developing and optimizing modeling and simulation applications.

One of the greatest challenges of maintaining the HPC ecosystem is recognizing that there are significant financial commitments needed to keep the pipeline filled. There are many examples of organizations that believed that buying a single big computer would make them part of the ecosystem. In those cases, they were right, but only temporarily. Being part of the HPC ecosystem requires being committed to buying the next cutting-edge system based on the lessons learned from the last system.

Another critical element of the HPC ecosystem is software. This generally falls into two categories – software needed to operate the computer (also called middleware or the “stack”) and software that provides insights into end user questions (called applications). Middleware plays the critical role of managing the operations of the hardware systems and enabling the execution of applications software. Middleware includes computer operating systems, file systems and network controllers. This type of software also includes compilers that translate application programs into the machine language that will be executed on hardware. There are quite a number of other pieces of middleware software that include libraries of commonly needed functions, programming tools, performance monitors, and debuggers.

Applications software span a wide range and are as varied as the problems users want to address through computation. Some applications are quick “throwaway” (prototype) attempts to explore potential ways in which computers may be used to address a problem. Other applications software is written, sometimes with different solution methods, to simulate physical behaviors of complex systems. This software will sometimes last for decades and will be progressively improved. An important aspect of these types of applications is the experimental validation data that provide confidence that the results can be trusted. For this type of applications software, setting up the problem that can include finite element mesh generation, populating that mesh with material properties and launching the execution are important parts of the ecosystem. Other elements of usability of application software include the computers, software, and displays that allow users to visualize and explore simulation results.

Data is yet another essential element of the HPC ecosystem. Data is the lifeblood that flows through the ecosystem to keep it doing useful things. The HPC ecosystem includes systems that hold and move data from one element to another. Hardware aspects of the data system include memory, storage devices, and networking. Software device drivers and file systems are also needed to keep track of the data. With the growing trend to add machine learning and artificial intelligence to the HPC ecosystem, its ability to process and productively use data is becoming increasingly significant.

Finally, and most importantly, trained and highly skilled people are an essential part of the HPC ecosystem. Just like computing systems, these people make up a “pipeline” that starts in elementary school and continues through undergraduate and then advanced degrees. Attracting and educating these people in computing technologies is critical. Another important part of the people pipeline of the HPC ecosystem are the jobs offered by academia, national labs, government, and industry. These professional experiences provide the opportunities needed to practice and hone HPC skills.

The origins of the United States’ HPC ecosystem dates back to the decision by the U.S. Army Research Lab to procure an electronic computer to calculate ballistic tables for its artillery during World War II (i.e. ENIAC). That event led to finding and training the people, who in many cases were women, to program and operate the computer. The ENIAC was just the start of the nation’s significant investment in hardware, middleware software, and applications. However, just because the United States was the first does not mean that it was alone. Europe and Japan also have robust HPC ecosystems for years and most recently China has determinedly set out to create one of their own.

The United States and other countries made the necessary investments in their HPC ecosystems because they understood the strategic advantages that staying at the cutting edge of computing provides. These well-documented advantages apply to many areas that include: national security, discovery science, economic competitiveness, energy security and curing diseases.

The challenge of maintaining the HPC ecosystem is that, just like a natural ecosystem, the HPC version can be threatened by becoming too narrow and lacking diversity. This applies to the hardware, middleware, and applications software. Betting on just a few types of technologies can be disastrous if one approach fails. Diversity also means having and using a healthy range of systems, spanning from the highest-performance cutting-edge machines to widely deployed mid- and low-end production systems. Another aspect of diversity is the range of applications that can productively use advanced computing resources.

Perhaps the greatest challenge to an ecosystem is complacency and assuming that it, and the necessary people, will always be there. This can take the form of an attitude that it is good enough to become an HPC technology follower and acceptable to purchase HPC systems and services from other nations. Once an HPC ecosystem has been lost, it is not clear if it can be regained. A robust HPC ecosystem can last for decades, through many “half lives” of hardware. A healthy ecosystem puts countries in a leadership position, which means the ability to influence HPC technologies in ways that best serve their strategic goals. Happily, the 4th NSCI objective signals that the United States understands these challenges and the importance of maintaining a healthy HPC ecosystem.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

MareNostrum 4 Chosen as ‘Most Beautiful Data Center’

Mon, 12/11/2017 - 09:28

BARCELONA, Dec. 11, 2017 — The MareNostrum 4 supercomputer has been chosen as the winner of the Most Beautiful Data Center in the World prize, hosted by Datacenter Dynamics (DCD).

There are 15 prizes in different categories besides the prize for the most beautiful data center, which is decided by popular vote. MareNostrum 4 competed with such impressive facilities as the Switch Pyramid in Michigan, the Bahnhof Pionen in Stockholm and the Norwegian Green Mountain. The BSC supercomputer prevailed thanks to its singular location inside the chapel of Torre Girona, on the North Campus of the Universitat Politècnica de Catalunya (UPC).

The awards ceremony took place on December 7 in London, where both Mateo Valero, BSC Director, and Sergi Girona, Operations Department Director, received the prize.

About MareNostrum 4

MareNostrum is the generic name used by BSC to refer to the different upgrades of its most emblematic supercomputer, the most powerful in Spain.  The first version was installed in 2005, and the fourth version is currently in operation.

MareNostrum 4 began operations last July and, according to the latest edition of the Top500 list, ranks 16th among the highest-performing supercomputers. Currently, MareNostrum provides 11.1 petaflops of processing power – that is, the capacity to perform 11.1 x 10^15 operations per second – to scientific production and innovation. This capacity will soon be increased through the installation of new clusters, featuring emerging technologies, that are currently being developed in the USA and Japan.

Aside from being the most beautiful, MareNostrum has been dubbed the most interesting supercomputer in the world because of the heterogeneous architecture it will include once installation is complete. Its total speed will be 13.7 petaflops, its main memory is 390 terabytes, and it can store 14 petabytes (14 million gigabytes) of data. A high-speed network connects all the components of the supercomputer to one another.

MareNostrum 4 has been funded by the Spanish Government’s Ministry of Economy, Industry and Competitiveness and was awarded through public tender to IBM, which integrated its own technologies with those developed by Lenovo, Intel and Fujitsu into a single machine.

About Barcelona Supercomputing Center

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specialises in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

Source: Barcelona Supercomputing Center


PSSC Labs Launches PowerWulf HPC Clusters with Pre-Configured Intel Data Center Blocks

Mon, 12/11/2017 - 08:54

LAKE FOREST, Calif., Dec. 11, 2017 – PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced its PowerWulf HPC clusters are now available with Intel’s new Xeon Scalable Processors and Intel’s Omni-Path HPC Fabric to deliver the performance needed to tackle cutting edge computing tasks including real-time analytics, virtualized infrastructure and high-performance computing.

PowerWulf clusters are built with Intel Data Center Blocks to ensure a truly turnkey solution that addresses customer integration challenges. Today’s customer data centers require unique server solutions that run complex, business-critical workloads. Intel Data Center Blocks configurations are purpose-built with all-Intel technology and optimized to address the needs of specific market segments. These fully validated blocks deliver the performance, reliability and quality that customers want and can trust to handle their demanding cloud, HPC and business-critical workloads.

PSSC Labs PowerWulf HPC Clusters are available as config-to-order (CTO) to meet the specific needs of a customer. Key features of these solutions include:

  • Pre-configured and fully validated blocks with the latest Intel HPC technology
  • Powered by the Intel Xeon Scalable processor family, delivering an overall performance increase of up to 1.65x over the previous generation and up to 5x higher Online Transaction Processing (OLTP) warehouse workload performance versus the current install base
  • Choice of operating system: Red Hat, SUSE, or CentOS Linux
  • Multiple models with different support options
  • Intel Fabric Suite 10.5.1, Lustre 2.10
  • Intel Omni-Path Host Fabric Interface (Intel OP HFI) Adapter 100 Series and FDR/EDR InfiniBand fabric
  • Intel Data Center SATA and NVMe Solid State Drives (SSDs)

“Intel’s integrated and fully validated Data Center Blocks enable PSSC Labs to deliver a more efficient, turnkey approach and to reduce time to market, complexity and the costs of system design, validation and integration,” said Alex Lesser, EVP of PSSC Labs. “Partnering with Intel allows us to offer our customers the latest hardware options in our line of custom turnkey PowerWulf HPC clusters for a variety of applications across government, academic and commercial environments.”

PowerWulf HPC clusters also feature PSSC Labs CBeST Cluster Management Toolkit (Complete Beowulf Software Toolkit) to deliver a preconfigured solution with all the necessary hardware, network settings and cluster management software prior to shipping. With its component structure, CBeST is the most flexible cluster management software package available.

Every PowerWulf HPC Cluster includes a three-year unlimited phone/email support package (additional years of support are available), with all support provided by PSSC Labs’ US-based team of engineers. PSSC Labs is an Intel HPC Data Center Specialist and has been a Platinum Provider with Intel since 2009. For more information, see http://www.pssclabs.com/solutions/hpc-cluster/

 

Source: PSSC Labs


Intel® Omni-Path Architecture and Intel® Xeon® Scalable Processor Family Enable Breakthrough Science on 13.7 petaFLOPS MareNostrum 4

Mon, 12/11/2017 - 08:49

In publicly and privately funded computational science research, dollars (or Euros in this case) follow FLOPS. And when you’re one of the leading computing centers in Europe with a reputation around the world of highly reliable, leading edge technology resources, you look for the best in supercomputing in order to continue supporting breakthrough research. Thus, Barcelona Supercomputing Center (BSC) is driven to build leading supercomputing clusters for its research clients in the public and private sectors.

MareNostrum 4 is nestled within the Torre Girona chapel

“We have the privilege of users coming back to us each year to run their projects,” said Sergi Girona, BSC’s Operations Department Director. “They return because we reliably provide the technology and services they need year after year, and because our systems are of the highest level.” Supported by the Spanish and Catalan governments and funded by the Ministry of Economy and Competitiveness with €34 million in 2015, BSC sought to take its MareNostrum 3 system to the next generation of computing capabilities. It specified multiple clusters, both for the general computational needs of ongoing research and for the development of next-generation codes based on emerging supercomputing technologies and tools for the exascale computing era. It fell to IBM, which partnered with Fujitsu and Lenovo, to design and build MareNostrum 4.

MareNostrum 4 is a multi-cluster system whose main cluster and data storage are interconnected by the Intel® Omni-Path Architecture (Intel® OPA) fabric. A general-purpose compute cluster with 3,456 nodes of the Intel® Xeon® Scalable processor family provides up to 11.1 petaFLOPS of computational capacity. A smaller cluster delivering up to 0.5 petaFLOPS is built on the Intel® Xeon Phi processor 7250. A third small cluster of up to 1.5 petaFLOPS will include Power9* processors and Nvidia GPUs, and a fourth, made of ARM v8 processors, will provide another 0.5 petaFLOPS of performance. An IBM storage array rounds out the system, and all clusters are interconnected with the storage subsystem. MareNostrum 4 is designed to be twelve times faster than its predecessor.
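As a rough cross-check of these figures, the per-cluster peak numbers can simply be added up. The sketch below uses the rounded values quoted in this article, so the small gap to the 13.7 petaFLOPS total reported elsewhere in this piece is just rounding of the individual cluster figures.

```python
# Back-of-the-envelope check of MareNostrum 4's aggregate peak performance,
# summing the rounded per-cluster figures quoted in this article (petaFLOPS).
clusters = {
    "general-purpose (Xeon Scalable)": 11.1,
    "Xeon Phi 7250 cluster": 0.5,
    "Power9 + Nvidia GPU cluster": 1.5,
    "ARM v8 cluster": 0.5,
}

total_pflops = sum(clusters.values())
print(f"sum of rounded cluster figures: {total_pflops:.1f} petaFLOPS")
# Prints ~13.6 petaFLOPS versus the quoted 13.7 total; the difference comes
# from rounding of the individual cluster figures.

# One petaFLOPS is 1e15 floating-point operations per second.
print(f"aggregate rate: {total_pflops * 1e15:.2e} operations per second")
```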

Spain’s 13.7 petaFLOPS supercomputer contributes to the Partnership for Advanced Computing in Europe (PRACE) and supports the Spanish Supercomputing Network (RES).

“From my point of view,” stated Girona, “Intel had, at the time of the procurement, the best processor for general purpose systems. Intel is very good on specific domains, and they continue to innovate in other domains. That is why we chose Intel processors for the general-purpose cluster and Intel Xeon Phi Processor for one of the emerging technology clusters, on which we can explore new code development.” The system was in production by July 2017 and placed at number 13 in the June 2017 Top500 list and number 16 on the November 2017 list.

“The Barcelona Supercomputing Center team is committed to maximizing MareNostrum in any way we can,” concluded Girona. “But MareNostrum is not about us. Our purpose at BSC is to help others. We are successful when the scientists and engineers using MareNostrum’s computing power get all the data they need to further their discoveries. It is always rewarding to know we help others to further cutting-edge scientific exploration.”



TACC Works with C-DAC, India to Organize Workshop on Software Challenges in Supercomputing

Fri, 12/08/2017 - 11:17

Dec. 8, 2017 — The Texas Advanced Computing Center (TACC) in the U.S. – a world leader in supercomputing – is collaborating with the Centre for Development of Advanced Computing (C-DAC) in India to host a workshop on the “Software Challenges to Exascale Computing (SCEC17)” on December 17th, 2017, from 9 AM to 7 PM at the Hotel Royal Orchid, in Jaipur. The main goal of this workshop is to foster international collaborations in the area of software for the current and next generation supercomputing systems.

At the workshop, exciting talks on advanced software engineering and supercomputing will be delivered by world leaders from the National Science Foundation in the U.S. (https://nsf.gov/), leading academic institutions in India, Japan and the U.S., R&D organizations, and industry. In line with the 2015 “National Strategic Computing Initiative (NSCI)” of the U.S. government, and the “Skill India” campaign of the Government of India, the workshop includes training on using supercomputing resources to solve problems of high societal impact, like earthquake simulation studies and drug discovery efforts. (Additional details on the workshop can be found at: https://scecforum.github.io/)

“I am delighted to collaborate with our colleagues at C-DAC and contribute towards developing a skilled workforce and a strong community in the area of high-level software tools for supercomputing platforms,” said Dr. Ritu Arora, the SCEC17 workshop chair and a scientist at TACC. “Without a concerted effort in this area, it will be hard to lower the adoption barriers to supercomputing and to make it accessible to the masses, especially the non-traditional users of the supercomputers.”

Intel and Nvidia, two key industry players in the supercomputing sector, are generously supporting the workshop. The workshop will provide a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next-generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop will serve as a stepping-stone towards innovations in the future.

About TACC: The Texas Advanced Computing Center (TACC) at the University of Texas at Austin is one of the leading advanced computing research centers in the world. TACC provides comprehensive advanced computing resources and support services to researchers across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. TACC staff members conduct research and development in applications and algorithms, computing systems design/architecture, and programming tools and environments.

About C-DAC: The Centre for Development of Advanced Computing (C-DAC) is the premier R&D organization of the Ministry of Electronics and Information Technology (MeitY) for carrying out R&D in IT, electronics and associated areas. C-DAC’s different areas of work originated at different times, many of them emerging as new opportunities were identified.

Source: TACC


NVIDIA Introduces TITAN V GPU

Fri, 12/08/2017 - 11:03

LONG BEACH, Calif., Dec. 8, 2017 — NVIDIA today introduced TITAN V, a powerful GPU for the PC, driven by the NVIDIA Volta GPU architecture.

Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

NVIDIA Supercomputing GPU Architecture, Now for the PC

TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor that is at the center of the GPU. It doubles the energy efficiency of the previous generation Pascal design, enabling dramatic boosts in performance in the same power envelope.

New Tensor Cores designed specifically for deep learning deliver up to 9x higher peak teraflops. With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads with a mix of computation and addressing calculations. Its new combined L1 data cache and shared memory unit significantly improve performance while also simplifying programming.
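For developers who want to see what this means in practice, the following is a minimal, illustrative sketch (assuming PyTorch and a Volta-class GPU; it is not an NVIDIA-provided example). Half-precision matrix multiplies of suitable shapes are typically dispatched by the underlying cuBLAS library to the Tensor Cores, while reductions can be promoted back to FP32.

```python
# Minimal mixed-precision sketch; assumes PyTorch with CUDA on a Volta-class GPU.
# FP16 GEMMs of suitable shapes are generally eligible for Tensor Core execution.
import torch

device = torch.device("cuda")  # requires a CUDA-capable GPU

a = torch.randn(4096, 4096, device=device, dtype=torch.float16)
b = torch.randn(4096, 4096, device=device, dtype=torch.float16)

c = a @ b                       # half-precision matrix multiply (GEMM)
result = c.float().mean()       # promote to FP32 for the reduction
print(result.item())
```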

Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customized for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilization.

Free AI Software on NVIDIA GPU Cloud

TITAN V’s power is ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.

Users of TITAN V can gain immediate access to the latest GPU-optimized AI, deep learning and HPC software by signing up at no charge for an NVIDIA GPU Cloud account. This container registry includes NVIDIA-optimized deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualization tools and the NVIDIA TensorRT inferencing optimizer.

Immediate Availability

TITAN V is available to purchase today for $2,999 from the NVIDIA store in participating countries.

About NVIDIA

NVIDIA’s (NASDAQ:NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


Data Vortex Technologies Partners with Providentia Worldwide to Develop Novel Solutions

Thu, 12/07/2017 - 16:18

AUSTIN, December 6 – This November, proprietary network company Data Vortex Technologies formalized a partnership with Providentia Worldwide, LLC. Providentia is a technologies and solutions consulting venture which bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep experience in enterprise and hyperscale environments of Providentia Worldwide founders, Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex with traditional systems.

“Providentia Worldwide is excited to see the Data Vortex network in new areas which can benefit from their fine-grained, egalitarian efficiencies. Messaging middleware, data and compute intensive appliances, real-time predictive analytics and anomaly detection, and parallel application computing are promising focus areas for Data Vortex Technologies,” says Quick. “Disrupting entrenched enterprise deployment designs is never easy, but when the gains are large enough, the effort is well worth it. Providentia Worldwide sees the potential to dramatically improve performance and capabilities in these areas, causing a sea-change in how fine-grained network problems are solved going forward.”

The senior technical teams of Data Vortex and Providentia are working on demonstrating the capabilities and performance of popular open source applications on the proprietary Data Vortex Network. The goal is to bring unprecedented performance increases to spaces that are often unaffected by traditional advancements in supercomputing. “This is a necessary step for us – the Providentia partnership is adding breadth to the Data Vortex effort,” says Data Vortex President, Carolyn Coke Reed Devany. “Up to this point we have deployed HPC systems, built with commodity servers connected with Data Vortex switches and VICs [Vortex Interconnect Cards], to federal and academic customers. The company is now offering a network solution that will allow customers to connect an array of different devices to address their most challenging data movement needs.”

Source: Data Vortex Technologies; Providentia Worldwide


The Stanford Living Heart Project Wins Prestigious HPC Awards During SC17

Thu, 12/07/2017 - 16:15

Dec. 7 — During SC’17, the 30th International Conference for High Performance Computing, Networking, Storage and Analysis in Denver, in November, UberCloud – on behalf of the Stanford Living Heart Project – received three HPC awards. It started at the preceding Intel HPC Developer Conference when the Living Heart Project (LHP), presented by Burak Yenier from UberCloud, won a best paper award. During SC’17 on Monday, the Stanford LHP Team received the HPCwire Editors’ Choice Award for Best Use of HPC in the Cloud. And finally, on Tuesday, the team won the Hyperion (former IDC) Award for Innovation Excellence, elected by the Steering Committee of the HPC User Forum.

The Stanford LHP project dealt with simulating cardiac arrhythmia, which can be an undesirable and potentially lethal side effect of drugs. During this condition, the electrical activity of the heart turns chaotic, degrading its pumping function and thus diminishing the circulation of blood through the body. Some kinds of cardiac arrhythmia, if not treated with a defibrillator, cause death within minutes.

Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. This means better and safer drugs reaching the market to improve patients’ lives.

Figure 1: Evolution of the electrical activity for the baseline case (no drug) and after application of the drug Quinidine. The electrical propagation turns chaotic after the drug is applied, showing the high risk of Quinidine to produce arrhythmias.

“The Living Heart Project team, led by researchers from the Living Matter Laboratory at Stanford University, is proud and humbled by being elected from HPCwire’s editors for the Best Use of HPC in the Cloud, and from the 29 renowned members of the HPC User Forum Steering Committee for the 2017 Hyperion Innovation Excellence Award”, said Wolfgang Gentzsch from The UberCloud. “And we are deeply grateful for all the support from Hewlett Packard Enterprise and Intel (the sponsors), Dassault Systemes SIMULIA (for Abaqus 2017), Advania (providing HPC Cloud resources), and the UberCloud tech team for containerizing Abaqus and integrating all software and hardware components into one seamless solution stack.” 

Figure 2: Electrocardiograms: tracing for a healthy, baseline case, versus the arrhythmic development after applying the drug Sotalol.

A computational model that can rapidly and inexpensively assess the response to new drug compounds is of great interest to pharmaceutical companies, doctors, and patients. Such a tool would increase the number of successful drugs that reach the market while decreasing the cost and time to develop them, and thus help hundreds of thousands of patients in the future. However, creating a suitable model requires a multiscale approach that is computationally expensive: the electrical activity of cells is modeled in high detail and resolved simultaneously across the entire heart. Because of the fast dynamics involved, the required spatial and temporal resolutions are highly demanding. For more details about the Stanford Living Heart Project, please read the previous HPCwire article HERE.
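To see why those resolution requirements translate into supercomputer-scale work, the following back-of-the-envelope estimate counts state-variable updates for a whole-heart electrophysiology run. Every number in it is an illustrative assumption, not the Stanford team's actual discretization.

```python
# Illustrative cost estimate for a whole-heart electrophysiology simulation.
# All values are assumptions for illustration, not the Living Heart Project's
# actual mesh or time step.
heart_volume_cm3 = 750.0      # assumed tissue volume in cm^3
dx_mm = 0.3                   # assumed spatial resolution in mm
dt_ms = 0.01                  # assumed time step in ms (fast ion-channel dynamics)
simulated_s = 5.0             # assumed simulated window in seconds
states_per_node = 40          # assumed ODE state variables per mesh node

nodes = heart_volume_cm3 * 1e3 / dx_mm**3     # mm^3 of tissue / mm^3 per node
steps = simulated_s * 1e3 / dt_ms             # number of time steps
updates = nodes * steps * states_per_node     # state-variable updates overall

print(f"mesh nodes    ~ {nodes:.2e}")
print(f"time steps    ~ {steps:.2e}")
print(f"state updates ~ {updates:.2e}")
# With these assumptions the count already lands around 10^14-10^15 updates,
# each involving many floating-point operations, hence the need for HPC.
```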

About UberCloud

UberCloud is the online Community, Marketplace, and Software Container Factory where engineers, scientists, and their service providers discover, try, and buy ubiquitous high-performance computing power and Software-as-a-Service from Cloud resource providers and application software vendors around the world. UberCloud’s unique high-performance software container technology simplifies software packageability and portability, enables ease of access and instant use of engineering SaaS solutions, and maintains scalability across multiple compute nodes. Please visit www.TheUberCloud.com or contact us at www.TheUberCloud.com/help/.

Source: UberCloud


Supermicro Announces Scale-Up SuperServer Certified for SAP HANA

Thu, 12/07/2017 - 12:31

SAN JOSE, Calif., Dec. 7, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, and an SAP global technology partner, today announced that its latest 2U 4-Socket SuperServer (2049U-TR4), supporting the highest-performance Intel Xeon Scalable processors, maximum memory and all-flash SSD storage, has been certified for operating the SAP HANA platform. SuperServer 2049U-TR4 for SAP HANA supports customers by offering a unique scale-up single-node system based on a well-defined hardware specification designed to meet the most demanding performance requirements of SAP HANA in-memory technology.

“Combining our capabilities in delivering high-performance, high-efficiency server technology, innovation, end-to-end green computing solutions to the data center, and cloud computing with the in-memory computing capabilities of SAP HANA, Supermicro SuperServer 2049U-TR4 for SAP HANA offers customers a pre-assembled, pre-installed, pre-configured, standardized and highly optimized solution for mission-critical database and applications running on SAP HANA,” said Charles Liang, President and CEO of Supermicro. “The SAP HANA certification is a vital addition to our solution portfolio further enabling Supermicro to provision and service innovative new mission-critical solutions for the most demanding enterprise customer requirements.”

Supermicro is collaborating with SAP to bring its rich portfolio of open cloud-scale computing solutions to enterprise customers looking to transition from traditional high-cost proprietary systems to open, cost-optimized, software-defined architectures. To support this collaboration, Supermicro has recently joined the SAP global technology partner program.

SAP HANA combines database, data processing, and application platform capabilities in-memory. The platform provides libraries for predictive, planning, text processing, spatial and business analytics. By providing advanced capabilities, such as predictive text analytics, spatial processing and data virtualization on the same architecture, it further simplifies application development and processing across big-data sources and structures. This makes SAP HANA a highly suitable platform for building and deploying next-generation, real-time applications and analytics.
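As a concrete illustration of the application side, here is a minimal sketch using the SAP HANA Python client (hdbcli). The host, port, credentials and table are hypothetical placeholders and are not part of the Supermicro announcement.

```python
# Minimal sketch: querying SAP HANA from Python via the hdbcli client.
# Host, port, credentials and the SALES table are hypothetical placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",   # hypothetical host
    port=30015,                   # a typical SQL port; depends on the instance
    user="DEMO_USER",
    password="********",
)

cursor = conn.cursor()
cursor.execute(
    "SELECT CUSTOMER_ID, SUM(AMOUNT) FROM SALES GROUP BY CUSTOMER_ID"
)
for customer_id, total in cursor.fetchall():
    print(customer_id, total)

cursor.close()
conn.close()
```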

The new SAP-certified solution complements existing solutions from Supermicro for SAP NetWeaver technology platform and helps support customers’ transition to SAP HANA and SAP S/4HANA. In fact, Supermicro has certified its complete portfolio of server and storage solutions to support the SAP NetWeaver technology platform running on Linux. Designed for enterprises that require the highest operational efficiency and maximum performance, all these Supermicro SuperServer solutions are ready for SAP applications based on the NetWeaver technology platform such as SAP ECC, SAP BW and SAP CRM, either as application or database server in a two- or three-tier SAP configuration.

Supermicro plans to continue expanding its portfolio of SAP HANA certified systems including an 8-socket scale-up solution based on the SuperServer 7089P-TR4 and a 4-socket solution based on its SuperBlade in the first half of 2018.

About Super Micro Computer, Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced Server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide.

Source: Super Micro Computer, Inc.


CMU, PSC and Pitt to Build Brain Data Repository

Thu, 12/07/2017 - 11:12

Dec. 7, 2017 — Researchers with Carnegie Mellon’s Molecular Biosensor and Imaging Center (MBIC), the Pittsburgh Supercomputing Center (PSC) and the University of Pittsburgh’s Center for Biological Imaging (CBI) will help to usher in an era of open data research in neuroscience by building a confocal fluorescence microscopy data repository. The data archive will give researchers easy, searchable access to petabytes of existing data.

The project is funded by a $5 million, five-year grant from the National Institutes of Health’s (NIH’s) National Institute of Mental Health (MH114793) and is part of the federal BRAIN initiative.

“This grant is a testament to the fact that Pittsburgh is a leader in the fields of neuroscience, imaging and computer science,” said Marcel Bruchez, MBIC director, professor of biological sciences and chemistry at Carnegie Mellon and co-principal investigator of the grant. “By merging these disciplines, we will create a tool that helps the entire neuroscience community advance our understanding of the brain at an even faster pace.”

New imaging tools and technologies, like large-volume confocal fluorescence microscopy, have greatly accelerated neuroscience research in the past five years by allowing researchers to image large regions of the brain at such a high level of resolution that they can zoom in to the level of a single neuron or synapse, or zoom out to the level of the whole brain. These images, however, contain such a large amount of data that only a small part of one brain’s worth of data can be accessed at a time using a standard desktop computer. Additionally, images are often collected in different ways — at different resolutions, using different methodologies and different orientations. Comparing and combining data from multiple whole brains and datasets requires the power of supercomputing.
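To make the scale problem concrete: a whole-brain confocal volume can run to terabytes, so desktop tools are forced into chunked, on-demand access rather than loading the dataset outright. A minimal sketch follows; the file name, volume shape and data type are hypothetical.

```python
# Minimal sketch: reading a small region of a very large imaging volume
# without loading it into memory. File name, shape and dtype are hypothetical.
import numpy as np

# A 10,000 x 20,000 x 15,000 voxel volume of 16-bit intensities would be
# roughly 6 TB of raw data, far beyond the RAM of a standard desktop.
shape = (10_000, 20_000, 15_000)
vol = np.memmap("whole_brain.raw", dtype=np.uint16, mode="r", shape=shape)

# Pull only a 512-voxel cube around a region of interest from disk.
z, y, x = 5_000, 9_000, 7_000
roi = np.asarray(vol[z:z + 512, y:y + 512, x:x + 512])
print(roi.shape, roi.mean())
```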

“PSC has a long experience with handling massive datasets for its users, as well as a deep skillset in processing microscopic images with high-performance computing,” said Alex Ropelewski, director of PSC’s Biomedical Applications Group and a co-principal investigator in the NIH grant. “This partnership with MBIC and CBI was a natural step in the ongoing collaborations between the institutions.”

The Pittsburgh-based team will bring together MBIC and CBI’s expertise in cell imaging and microscopy and pair it with the PSC’s long history of experience in biomedical supercomputing to create a system called the Brain Imaging Archive. Researchers will be able to submit their whole brain images, along with metadata about the images, to the archive. There the data will be indexed into a searchable system that can be accessed using the internet. Researchers can search the system to find existing data that will help them narrow down their research targets, making research much more efficient.

About PSC

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

Source: PSC


System Fabric Works and ThinkParQ Partner for Parallel File System

Thu, 12/07/2017 - 09:59

AUSTIN, Tex., and KAISERSLAUTERN, Germany, Dec. 7, 2017 — Today System Fabric Works announces its support and integration of the BeeGFS file system with the latest NetApp E-Series All Flash and HDD storage systems, which makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of System Fabric Works’ (SFW) Converged Infrastructure solutions for high-performance Enterprise Computing, Data Analytics and Machine Learning.

“We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Kevin Moran, President and CEO, System Fabric Works. “Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS, engineered with NetApp E-Series, a choice of InfiniBand, Omni-Path, RDMA over Ethernet and NVMe over Fabrics for targeted performance and 99.9999 reliability utilizing customer-chosen clustered servers and clients and SFW’s services for architecture, integration, acceptance and on-going support services.”

SFW’s solutions can utilize each of these networking topologies for optimal BeeGFS performance and 99.9999% reliability, delivered as full turnkey deployments adapted to customer-chosen clustered servers and clients. SFW provides architecture, integration, acceptance and ongoing support services.

BeeGFS, delivered by ThinkParQ, is the leading parallel cluster file system, designed specifically for I/O-intensive workloads in performance-critical environments. With a strong focus on performance and high flexibility, including converged environments where storage servers are also used for computing, BeeGFS helps customers worldwide increase their productivity by delivering results faster and by enabling analysis methods that were not possible without its specific advantages.

Designed for very easy installation and management, BeeGFS transparently spreads user data across multiple servers, so users can simply scale performance and capacity to the desired level by increasing the number of servers and disks in the system, seamlessly from small clusters up to enterprise-class systems with thousands of nodes. BeeGFS, which is available as open source, powers the storage of hundreds of scientific and industry customer sites worldwide.

Sven Breuner, CEO of ThinkParQ, stated, “The long experience and solid track record of System Fabric Works in the field of enterprise storage makes us proud of this new partnership. Together, we can now deliver perfectly tailored solutions that meet and exceed customer expectations, no matter whether the customer needs a traditional spinning disk system for high capacity, an all-flash system for maximum performance or a cost-effective hybrid solution with pools of spinning disks and flash drives together in the same file system.”

Due to its performance-tuned design and various optimized features, BeeGFS is ideal for the demanding, high-performance, high-throughput workloads found in technical computing for modeling and simulation, product engineering, life sciences, deep learning, predictive analytics, media, financial services, and many more.

With the new storage pools feature in BeeGFS v7, users can now pin their current project to the latest NetApp E-Series All Flash SSD pool to get the full performance of an all-flash system, while the rest of the data resides on spinning disks, where it can also be accessed directly – all within the same namespace and thus completely transparently to applications.

SFW BeeGFS solutions can be based on x86_64 and ARM64 ISAs, support multiple networks with dynamic failover, provide fault tolerance with built-in replication, and come with additional file system integrity and storage reliability features. Another compelling part of the solution offerings is BeeOND (BeeGFS On Demand), which allows on-the-fly creation of temporary parallel file system instances on the internal SSDs of compute nodes on a per-job basis for burst buffering. Graphical monitoring and an additional command-line interface provide easy management for any kind of environment.

SFW BeeGFS high-performance storage solutions, with architectural design, implementation and ongoing support services, are immediately available from System Fabric Works.

About ThinkParQ

ThinkParQ was founded as a spin-off from the Fraunhofer Center for High Performance Computing by the key people behind BeeGFS to bring fast, robust, scalable storage to market. ThinkParQ is responsible for support, provides consulting, organizes and attends events, and works together with system integrators to create turn-key solutions. ThinkParQ and Fraunhofer internally cooperate closely to deliver high quality support services and to drive further development and optimization of BeeGFS for tomorrow’s performance-critical systems. Visit www.thinkparq.com to learn more about the company.

About System Fabric Works

System Fabric Works (“SFW”), based in Austin, TX, specializes in delivering engineering, integration and strategic consulting services to organizations that seek to implement high performance computing and storage systems, low latency fabrics and the necessary related software. Derived from its 15 years of experience, SFW also offers custom integration and deployment of commodity servers and storage systems at many levels of performance, scale and cost effectiveness that are not available from mainstream suppliers. SFW personnel are widely recognized experts in the fields of high performance computing, networking and storage systems particularly with respect to OpenFabrics Software, InfiniBand, Ethernet and energy saving, efficient computing technologies such as RDMA. Detailed information describing SFW’s areas of expertise and corporate capabilities can be found at www.systemfabricworks.com.

Source: System Fabric Works


Call for Sessions and Registration Now Open for 14th Annual OpenFabrics Alliance Workshop

Thu, 12/07/2017 - 09:06

BEAVERTON, Ore., Dec. 6, 2017 — The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 14th annual OFA Workshop, taking place April 9-13, 2018, in Boulder, CO. The OFA Workshop is a premier means of fostering collaboration among those who develop fabrics, deploy fabrics and create applications that rely on fabrics. It is the only event of its kind where fabric developers and users can discuss emerging fabric technologies, collaborate on future industry requirements, and address problems that exist today. In support of advancing open networking communities, the OFA is proud to announce that Promoter Member Los Alamos National Laboratory, a strong supporter of collaborative development of fabric technologies, will underwrite a portion of the Workshop. For more information about the OFA Workshop and to find support opportunities, visit the event website.

Call for Sessions

The OFA Workshop 2018 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high performance networking issues. Sessions are designed to educate attendees on current development opportunities, troubleshooting techniques, and disruptive technologies affecting the deployment of high performance computing environments. The OFA Workshop places a high value on collaboration and exchanges among participants. In keeping with the theme of collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.

The deadline to submit session proposals is February 16, 2018, at 5:00 p.m. PST. For a list of recommended session topics, formats and submission instructions download the official OFA Workshop 2018 Call for Sessions flyer.

Registration

Early bird registration is now open for all participants of the OFA Workshop 2018. For more information on event registration and lodging, visit the OFA Workshop 2018 Registration webpage.

Dates: April 9-13, 2018

Location: Embassy Suites by Hilton Boulder, CO

Registration Site: http://bit.ly/OFA2018REG

Registration Fee: $695 (Early Bird to March 19, 2018), $815 (Regular)

Lodging: Embassy Suites room discounts available until 6:00 p.m. MDT on Monday, March 19, 2018, or until room block is filled.

About the OpenFabrics Alliance

The OpenFabrics Alliance (OFA) is a 501(c) (6) non-profit company that develops, tests, licenses and distributes the OpenFabrics Software (OFS) – multi-platform, high performance, low-latency and energy-efficient open-source RDMA software. OpenFabrics Software is used in business, operational, research and scientific infrastructures that require fast fabrics/networks, efficient storage and low-latency computing. OFS is free and is included in major Linux distributions, as well as Microsoft Windows Server 2012. In addition to developing and supporting this RDMA software, the Alliance delivers training, workshops and interoperability testing to ensure all releases meet multivendor enterprise requirements for security, reliability and efficiency. For more information about the OFA, visit www.openfabrics.org.

Source: OpenFabrics Alliance

