HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Geospatial Data Research Leverages GPUs

Thu, 08/17/2017 - 16:23

MapD Technologies, the GPU-accelerated database specialist, said it is working with university researchers on leveraging graphics processors to advance geospatial analytics.

The San Francisco-based company is collaborating with the Center for Geographic Analysis at Harvard University. The research center is a member of a university-industry consortium backed by the National Science Foundation called the Spatiotemporal Innovation Center.

Center researchers will use MapD’s GPU-based tools to analyze and visualize billions of rows of geospatial data in search of insights into “natural and social phenomena,” including hydrological models used for water management and public safety, the partners said Wednesday (Aug. 16).

The geospatial analytics effort also will focus on detailed weather forecasts and field observations of streams and reservoirs. These data will be used to predict water flow, saturation and flooding, which are becoming increasingly common as extreme weather events grow more frequent. The partners said they expect to generate billions of hydrological predictions.

Part of the impetus for the project is the enormous computing demands associated with the current U.S. National Water Model, which incorporates data from about 7,000 river gauges along with weather models and other geospatial data. The model makes hourly predictions on water flows from 2.7 million stream outlets and 1,260 reservoirs.
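As a rough scale check (an illustration, not a figure from the announcement): hourly forecasts for 2.7 million stream outlets amount to roughly 2.7 million × 24 ≈ 65 million predictions per day, so retrospective or multi-scenario analyses reach billions of rows within a few weeks of model output.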

The National Water Model is a forecasting tool that helps forecasters predict when and where flooding can be expected.

The predictions are delivered as “preprocessed visuals,” MapD noted, but are not currently available as an interactive application due to computing limits. “It is hoped that GPU-based analytics will support faster visualization of data sets like the U.S. National Water Model and enrich them with data such as flood or drought vulnerabilities, local population densities, emergency response availability and even social media sentiment about water policies,” MapD noted in a statement announcing the collaboration.

Center officials explained that they previously relied on data preparation and CPU-based computing resources to churn through large spatio-temporal data sets that were then integrated with external sources. “We hope to explore whether a GPU-based platform will enable testing hypotheses as we think of them, using a fraction of the computing resources at a much lower cost,” added Ben Lewis, geospatial technology manager at the Center for Geographic Analysis.

The Harvard Center will leverage MapD’s open source database and visualization libraries along with its Immerse visual analytics client. In exchange, the university researchers will look to add new geospatial features to the company’s platform while extending support for open geospatial standards.

MapD’s analytics approach leverages graphics processors to accelerate SQL queries and visualizations of large data sets. Aggressive startups such as Kinetica and MapD are pushing the boundaries of GPU processing and in-memory techniques to develop real-time analytics platforms for faster SQL queries. Along with geospatial data, these companies are also focusing on deep learning and other applications.

MapD’s roots are in university research. Founded in 2013, it was spun off from MIT’s Computer Science and Artificial Intelligence Laboratory. Its seed investors include GPU specialist Nvidia and In-Q-Tel, the CIA’s venture capital arm.


Intel, NERSC and University Partners Launch New Big Data Center

Thu, 08/17/2017 - 16:05

A collaboration between the Department of Energy’s National Energy Research Scientific Computing Center (NERSC), Intel and five Intel Parallel Computing Centers (IPCCs) has resulted in a new Big Data Center (BDC) that will both work on code modernization and tackle real science challenges.

According to Prabhat, BDC Director and Group Lead for the NERSC Data and Analytics Services team, “The goal of the BDC is to solve DOE’s leading data-intensive science problems at scale on the NERSC Cori supercomputer. The BDC, in collaboration with Intel and the IPCCs, will test whether current HPC systems can support data-intensive workloads that require analysis of datasets exceeding 100 terabytes on 100,000 cores or more. The BDC will optimize and scale the production data analytics and management stack on Cori.

All BDC projects will run on the NERSC Cori supercomputer. Courtesy of NERSC.

“Our first task will be to identify capability applications in the DOE data science community, articulate analytics requirements and then develop scalable algorithms,” Prabhat continued. “The key is in developing algorithms in the context of the production stack. Our multi-disciplinary team consisting of performance optimization and scaling experts is well positioned to enable capability applications on Cori. All the optimizations done at the BDC will be open source and made available to peer HPC centers as well as the broader HPC and data analytics communities.”

Quincey Koziol, BDC co-director and principal data architect at NERSC, noted, “While data analytics is undoubtedly the rage at this point in time, scalable analytics fundamentally relies on a robust data management infrastructure. We will be working on examining the performance of parallel I/O as exercised through modern data analytics languages (Python, R, Julia) and machine learning/deep learning libraries.”

Joseph Curley, director for Intel’s code modernization efforts, states, “We’ve combined the BDC goal of providing software stacks and access to HPC machines where the data driven methods can be developed with our IPCC program. We married the two ideas together by combining research members of the community with a program that we have for outreach.

“The objective of the Big Data Center (BDC) comes from a common desire in the industry to have software stacks that can help the NERSC user base, using data driven methods, to solve their largest problems at scale on the Cori supercomputer. So one of our main goals is to be able to use the supercomputer hardware to its fullest capability. Some underlying objectives at BDC are to build and harden the data analytics frameworks in the software stack so that developers and data scientists can use the Cori supercomputer in a productive way to get insights from their data. Our work with NERSC and the IPCCs will involve code modernization at scale as well as creating the software environment and software stack needed to meet these needs.”

The five IPCCs that are part of the BDC program are the University of California-Berkeley, the University of California-Davis, New York University (NYU), Oxford University and the University of Liverpool. Their initial BDC work includes the following research:

  • The University of California-Berkeley team is working on the Celeste project. Celeste aims to develop highly scalable inference methods for extracting a unified catalog of objects in the visible universe from all available astronomy data.
  • The University of California-Davis group is developing computational mechanics techniques to extract patterns from climate simulation data. These techniques build on information theory to achieve unsupervised pattern discovery.
  • The New York University (NYU) team is working on extending deep learning to operate on irregular, graph-structured data. The techniques are being applied to problems in high-energy physics.
  • The Oxford University group is developing a new class of methods called probabilistic programming and applying them to challenging pattern and anomaly detection problems in high-energy physics. The work combines probabilistic programming with deep learning to train large networks on Cori.
  • The University of Liverpool team is working on developing topological methods to analyze climate datasets. The techniques are being used to extract stable, low-dimensional manifolds, and robust pattern descriptors for weather patterns.

The BDC work will also benefit the larger data analysis and HPC communities. Curley states, “In our IPCC program, we encourage system users to discover methods of solving problems on HPC systems, document what they did, and teach others how to follow their methods. This creates a beneficial cycle of new research, hardening the machine, developing new software stacks leading to research that is more productive—what we call a virtuous cycle.

“We are excited about the IPCCs working in conjunction with the BDC because there will be people working on problems we never could have anticipated and advancing human knowledge in ways we never could have guessed. If you can combine this with the BDC at NERSC that has a large machine like Cori and a diverse user group, you end up creating networks of knowledge and creating scientific results that are unpredictably wonderful.”

About the Author

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.

Feature image: Berkeley Lab Shyh Wang Hall, home of NERSC


Google Releases Deeplearn.js to Further Democratize Machine Learning

Thu, 08/17/2017 - 13:39

Spreading the use of machine learning tools is one of the goals of Google’s PAIR (People + AI Research) initiative, which was introduced in early July. Last week the cloud giant released deeplearn.js as part of that initiative. Deeplearn.js is an open source, WebGL-accelerated JavaScript library for machine learning that runs entirely in the browser.

Writing on the Google Research blog last Friday, software engineers Nikhil Thorat and Daniel Smilkov noted, “There are many reasons to bring machine learning into the browser. A client-side ML library can be a platform for interactive explanations, for rapid prototyping and visualization, and even for offline computation. And if nothing else, the browser is one of the world’s most popular programming platforms.”

One can envision that the power of AI programming in a browser, connected to specialized backend HPC architectures such as Google Tensor Cloud, would be a useful tool.


Thorat and Smilkov wrote, “While web machine learning libraries have existed for years (e.g., Andrej Karpathy’s convnetjs) they have been limited by the speed of Javascript, or have been restricted to inference rather than training (e.g., TensorFire). By contrast, deeplearn.js offers a significant speedup by exploiting WebGL to perform computations on the GPU, along with the ability to do full backpropagation.”

According to the blog, the API mimics the structure of TensorFlow and NumPy, with a delayed execution model for training (like TensorFlow) and an immediate execution model for inference (like NumPy).
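To make the immediate, NumPy-like style concrete, here is a minimal sketch written against the API shape of the initial deeplearn.js release; the names used (NDArrayMathGPU, Array1D, Scalar, math.scope) are drawn from that early release and should be read as an illustration rather than a definitive reference.

```typescript
// Minimal sketch, assuming the initial deeplearn.js API surface
// (NDArrayMathGPU, Array1D, Scalar, math.scope); later releases may differ.
import {Array1D, NDArrayMathGPU, Scalar} from 'deeplearn';

const math = new NDArrayMathGPU();  // operations execute on the GPU via WebGL shaders

const a = Array1D.new([1, 2, 3]);   // small vector uploaded to a GPU texture
const b = Scalar.new(2);            // scalar constant

// math.scope() tracks intermediate GPU memory so it can be released when the scope ends
math.scope(() => {
  const result = math.add(a, b);    // element-wise add, computed on the GPU
  console.log(result.getValues());  // Float32Array [3, 4, 5], copied back to the CPU
});
```

The delayed, TensorFlow-like training path used a separate graph-and-session style API in those early releases and is not shown here.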

“We have also implemented versions of some of the most commonly-used TensorFlow operations. With the release of deeplearn.js, we will be providing tools to export weights from TensorFlow checkpoints, which will allow authors to import them into web pages for deeplearn.js inference,” wrote Thorat and Smilkov.

“Our vision is that this library will significantly increase visibility and engagement with machine learning, giving developers access to powerful tools while simultaneously providing the everyday user with a way to interact with them. We’re looking forward to collaborating with the open source community to drive this vision forward.”

It will be interesting to follow other tool releases under the PAIR umbrella.

Link to blog: https://research.googleblog.com/2017/08/harness-power-of-machine-learning-in.html#gpluscomments


Spoiler Alert: Glimpse Next Week’s Solar Eclipse Via Simulation from TACC, SDSC, and NASA

Thu, 08/17/2017 - 11:50

Can’t wait to see next week’s solar eclipse? You can at least catch glimpses of what scientists expect it will look like. A team from Predictive Science Inc. (PSI), based in San Diego, working with Stampede2 at the Texas Advanced Computing Center (TACC), Comet at the San Diego Supercomputer Center (SDSC), and NASA’s Pleiades supercomputer, has produced a stunning simulation of the event.

According to an article posted on the TACC web site today (Spoiler Alert: Computer Simulations Provide Preview Of Solar Eclipse, written by Aaron Dubrow), PSI took the opportunity of the approaching eclipse to prove its forecasting chops with support from NASA, the Air Force Office of Scientific Research and the National Science Foundation.

The work started on July 28. “[T]hey began a large-scale simulation of the Sun’s surface in preparation for a prediction of what the solar corona — the aura of plasma that surrounds the sun and extends millions of kilometers into space — will look like during the eclipse,” wrote Dubrow. The images accompanying the article are breathtaking, as shown below, and include a simulation.

The image above on the left shows a digitally processed version of the polarized brightness using a “Wavelet” filter to bring out the details in the image. The image on the right shows traces of selected magnetic field lines from the model.

Jon Linker, president and senior research scientist of PSI, is quoted: “Advanced computational resources are crucial to developing detailed physical models of the solar corona and solar wind. The growth in the power of these resources in recent years has fueled an increase in not only the resolution of these models, but the sophistication of the way the models treat the underlying physical processes as well.”

Time on Stampede2 and Comet was provided by the Extreme Science and Engineering Discovery Environment (XSEDE), a collection of integrated advanced digital resources available to U.S. researchers. The research team completed their initial forecasts on July 31, 2017, and published their final predictions using newer magnetic field data on their website on August 15, 2017. They will present their results in a series of presentations at the Solar Physics Division (SPD) meeting of the American Astronomical Society (AAS), Aug. 22-24.

Link to the article by Dubrow: https://www.tacc.utexas.edu/-/spoiler-alert-computer-simulations-provide-preview-of-solar-eclipse


COMSNETS 2018 Issues Call for Papers

Thu, 08/17/2017 - 10:28

Aug. 17, 2017 — The Tenth International Conference on COMmunication Systems and NETworkS (COMSNETS) will be held in Bangalore, India, during January 3-7, 2018. COMSNETS is a premier international conference dedicated to advances in Networking and Communications Systems. The conference shall be organised under the auspices of IEEE, IEEE ComSoc, and ACM in co-operation with SIGCOMM and SIGMOBILE. The conference is a yearly event for a world-class gathering of researchers from academia and industry, practitioners, and business leaders, providing a forum for discussing cutting edge research, and directions for new innovative business and technology.

The conference will include a highly selective technical program consisting of submitted papers, a small set of invited papers on important and timely topics from well-known leaders in the field, and a poster session of work in progress.

Focused workshops and panel discussions will be held on emerging topics to allow for a lively exchange of ideas. International business and government leaders will be invited to share their perspectives, and will complement the technical program.

Topics of Interest

The topics of interest for the technical program include (but are not limited to) the following:

5G and wireless broadband networks
Technologies for 6-100 GHz spectrum
Visible light communications
Heterogeneous networks (HetNets)
Cognitive radio and white-space networking
Economics of networks
Energy-efficient communications
Cloud computing
Enterprise, data center, and storage-area networks
Internet architecture and protocols, Internet science and emergent behavior
Mobility and location management
Mobile Sensing
Traffic analysis and engineering
Internet of Things (IoT)
Caching & content delivery systems
Information/Content centric networks (ICN)
Network management and operations
Network security and privacy
Trusted computing
Network science
Online social networks
Overlay communications, content distribution
Wireless adhoc and sensor networks
Systems and networks for smarter energy and sustainability
Vehicular communications
Smart Grid communications and networking
Machine Learning and AI in Networking
Big Data Analytics in Networking, including IoT Analytics
Cognition and Cognitive Computing in Networking

Important Dates and Deadlines
Abstract submission:    11th September, 2017 at 11:59 pm EST
Paper submission:    18th September, 2017 at 11:59 pm EST
Notification of Acceptance:    3rd November, 2017
Camera-Ready Submission:    26th November, 2017
Main Conference:    4th – 6th January 2018
Workshops:    3rd & 7th January 2018

Conference Highlights

Keynote Talks
Invited Talks
Paper  Sessions
Poster Session
Panel Discussions
Graduate Student Forum
Demo & Exhibits Sessions
Mobile India 2018
ASSET
Co-located Workshops — WACI, Intelligent Transportation Systems, Social Networking, NetHealth

2018 Keynote Speakers

Lixia Zhang, UCLA, USA (http://web.cs.ucla.edu/~lixia/)
Krishna Gummadi (https://people.mpi-sws.org/~gummadi/)
Dhananjay Gore, Qualcomm R&D, India (https://www.linkedin.com/in/dhananjay-gore-b2b19b1/?ppe=1)

2018 Invited Speakers

Kirill Kogan, IMDEA Labs, Spain (http://people.networks.imdea.org/~kirill_kogan/)
Binbin Chen, ADSC, Singapore (https://adsc.illinois.edu/people/binbin-chen)
Samir R. Das, Univ. of Stony Brook, USA (http://www3.cs.stonybrook.edu/~samir/)
Kaushik Roy Chowdhury, Northeastern University, USA (http://krc.coe.neu.edu/)
Mahesh K Marina, University of Edinburgh, UK (http://homepages.inf.ed.ac.uk/mmarina/)
Biplab Sikdar, NUS, Singapore (https://www.ece.nus.edu.sg/staff/bio/biplab.html)
Prasenjit Dey, IBM, IRL (https://www.linkedin.com/in/prasenjit-dey-4082763/?ppe=1)
Georg Carle, TU Munich, Germany (https://www.net.in.tum.de/members/carle/)
Raghunath Nambiar, Cisco Systems, USA (https://blogs.cisco.com/author/raghunathnambiar)
Lipika Dey, TCS Innovation Labs (https://www.linkedin.com/in/lipika-dey-3381713/?ppe=1)

Source: COMSNETS


Federal S&E Obligations to Universities Fell Two Percent Between 2014 and 2015

Thu, 08/17/2017 - 10:06

Aug. 17, 2017 — In Fiscal Year (FY) 2015, federal agencies obligated $30.5 billion to 1,016 academic institutions for science and engineering (S&E) activities, a 2 percent decrease in current dollars from the $31.1 billion in obligations to 1,003 academic institutions in FY 2014.

These statistics are from the Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions (Federal S&E Support Survey) from the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).

Three federal agencies — the Department of Health and Human Services (HHS), NSF and the Department of Defense (DOD) — provided 85 percent of all federally funded academic S&E obligations in FY 2015.

HHS and DOD decreased obligations between FY 2014 and FY 2015, with HHS reporting the largest decrease among funding agencies ($500 million, or 3 percent). NSF increased obligations in FY 2015 by $200 million (4 percent).

The Johns Hopkins University (including its Applied Physics Laboratory) continued to be the leading academic recipient of federal S&E obligations, with $1.6 billion in FY 2015.

S&E obligations to minority-serving institutions (MSIs) were $783 million, 3 percent of the total S&E obligations to universities and colleges in FY 2015. MSIs include historically black colleges and universities, high-Hispanic-enrollment (HHE) institutions, and tribal colleges and universities.

Between FY 2014 and FY 2015, obligations to MSIs increased by 1 percent ($11.5 million), the third straight yearly increase.

The top 20 MSIs ranked by federal academic S&E support accounted for 56 percent of the academic S&E total for MSIs in FY 2015. New Mexico State University, an HHE, was the leading MSI recipient of federal S&E obligations, receiving $48.8 million in FY 2015, of which 84 percent was for research and development (R&D). New Mexico State University received 62 percent of its S&E total from three agencies: DOD ($11.6 million), NSF ($9.6 million) and NASA ($9.0 million).

During FY 2015, federal agencies obligated $5.8 billion to 1,024 nonprofit institutions, a 5 percent decrease from the $6.1 billion reported in FY 2014. Massachusetts General Hospital received the most federal funding among nonprofits in FY 2015, with HHS providing 97 percent.

For more information, including data tables, read the report.

Source: NSF


PRACE MOOC: Discover and Learn More about Powerful Supercomputers

Thu, 08/17/2017 - 10:04

Aug. 17, 2017 — How does a Supercomputer work? What are parallel computers and what is parallel computing? How does computer simulation help science and industry to boost their research results? – these questions and many more will be answered and explained by the experts from the Edinburgh Parallel Computing Centre (EPCC) at The University of Edinburgh in collaboration with SURFsara from the Netherlands. This free online supercomputing course is developed by the Partnership for Advanced Computing in Europe (PRACE), EPCC and SURFsara.

Are you interested? Join the course!

Starting date 28 August 2017.

Register at: www.futurelearn.com/courses/supercomputing.

This course is open to all. Discover and learn more about Supercomputers and their crucial impact on society, science and industry. These powerful calculating machines give scientists and engineers a tool to study the natural world, conducting virtual experiments that are impossible in the real world.

Our course experts will introduce you to what Supercomputers are, how they are used and how we can exploit their full computational potential to make scientific breakthroughs. Be part of this exciting community! Read more about the PRACE MOOC course in cooperation with FutureLearn at: www.futurelearn.com/courses/supercomputing.

Source: PRACE


NCSA Researcher Awarded SIGHPC/Intel Computational Data and Science Fellowship

Thu, 08/17/2017 - 10:02

URBANA, Ill., Aug 17, 2017 — Santiago Nuñez-Corrales, a graduate researcher at the National Center for Supercomputing Applications (NCSA) and Ph.D. student in Informatics at the University of Illinois at Urbana-Champaign, was one of twelve recipients awarded the SIGHPC/Intel Computational and Data Science Fellowship for 2017.

The fellowship, funded by Intel, was established to increase the diversity of graduate students pursuing degrees in data science and computational science. The fellowship provides $15,000 for students to study anywhere in the world, and travel support to attend the annual supercomputing conference, better known as SC17, this year in Denver, Colorado.

Santiago will bring to SC17 nearly a decade of experience in computer science.

In 2007, Santiago graduated from the Costa Rica Institute of Technology in computer science and engineering, with a growing interest in the intersection of computing and science. Fast forward three years to 2010, when Santiago met Professor Erik Jakobsson from NCSA and the Beckman Institute at the University of Illinois.

“Professor Jakobsson was visiting Costa Rica for a seminar that helped get started what would become our first generation of local computational scientists at the Costa Rica Institute of Technology,” Nuñez-Corrales said.

They talked about Fokker-Planck equations, stochastic methods, and six years later, Santiago came to the University of Illinois to start his Ph.D in Informatics under the direction of professor Les Gasser.

Gasser, a professor in both computer science and informatics at Illinois, is a faculty fellow at NCSA. Gasser’s project, “Simulating Social Systems at Scale,” demonstrates new approaches to building very large computer models of social phenomena such as social change, the emergence of organizations, or the evolution of language and information.

Santiago’s work focuses on software development for Agent-Based Models (ABMs) geared towards social simulation using Blue Waters. Together, their goal is to develop cyberinfrastructure resources that facilitate the design of experiments based on social theories, embodied in ABMs. Santiago’s work with Gasser also landed him the opportunity to intern with the Midwest Big Data Hub, based at NCSA.

Gasser also nominated Santiago for the fellowship.

“Research like this requires the ability to draw together knowledge from many disciplines including simulation, statistical physics, stochastic computing, and domain issues,” Gasser said. “Santiago has the fluency in all of these arenas to be able to synthesize novel solutions that push the state of the art, and this had a direct impact on his success.”

“The fellowship brings with it so many benefits,” said Nuñez-Corrales. “Attending SC17 is important to keep up with development in the HPC community, and the financial support is critical to continuing my degree, as I have no funding from my home country.”

One thing he adamantly acknowledges is that his successes have been a group effort.

“This award forces me to reflect on the collection of people that laid out opportunities that I could access,” said Nuñez-Corrales. “None of us are self-made insofar that opportunities exist that were devised by others. I am in particular grateful to Professor Les Gasser and Professor Eric Jakobsson for their guidance and support, as well as to NCSA, the Informatics Ph.D Program and Illinois for creating a space where success is possible, and certainly to the ACM and Intel committee for the SIGHPC/Intel Fellowship.”

For more information about opportunities at NCSA, please visit our website at http://www.ncsa.illinois.edu/.

Source: NCSA


Oak Ridge ‘Summit’ Facility Completes This Month, Says Design Firm

Thu, 08/17/2017 - 09:57

OAK RIDGE, Tenn., Aug. 17, 2017 — Oak Ridge National Laboratory (ORNL) is moving equipment into a new high-performance computing center this month which is anticipated to become one of the world’s premier resources for open science computing.

The IBM-sourced computer and file storage system, named Summit, is expected to deliver up to 200 petaflops (quadrillions of calculations per second), about twice the performance of the world’s current leader. Within a facility designed by Heery International to accommodate the system’s intense power, cooling and security needs, the Summit system will likely be one of the world’s fastest and most capable high-performance computing resources.
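For scale (a back-of-the-envelope comparison, not a figure from the announcement): the top system on the June 2017 Top500 list delivers roughly 93 petaflops on the Linpack benchmark, so a 200-petaflop Summit would indeed come in at a bit more than twice the current leader.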

Heery International provided full architectural and engineering design services for the reconfigured 10,000 square foot on-campus facility which includes a new 20 megawatt power and cooling plant as well as an expanded central energy plant for the campus.

“There were a lot of considerations when designing the facilities for Summit,” explained George Wellborn, Heery Project Architect. “We are essentially harnessing a small city’s worth of power into one room. We had to ensure the confined space was adaptable for the power and cooling that is needed to run this next generation supercomputer.”

The design of the Leadership Computing Facility furthers ORNL’s commitment to research in science, energy, and technology. Summit will be used to tackle national challenges in the areas of sustainable energy solutions, safe nuclear energy systems, advanced materials, sustainable transportation, and detailed knowledge of atomic-level structure and dynamics of materials.

ORNL and Heery International’s relationship dates back to 2001, when the firm designed ORNL’s 388,000-square-foot East Campus Complex. Heery has since partnered with the National Laboratory on numerous projects, including the DOE’s Multi-Program Research Facility, a 218,000-square-foot, cutting-edge, large-scale secure science and technology center. Throughout its 16-year working relationship with ORNL, Heery has continued to push the envelope of what can be made possible at the Oak Ridge campus. The firm has since been retained by ORNL as the designer for facility upgrades associated with the OLCF-5 system, scheduled for installation in 2021. The OLCF-5 system is expected to deliver up to 1 exaflop (1,000 petaflops) of computational power. Exascale computing is a major scientific milestone and long-range goal for ORNL and the Department of Energy.

A full-service architecture, engineering, interior design, construction management, and program management firm, Heery has 17 offices throughout the U.S. Founded in 1952, the firm consistently ranks annually among the top professional services firms by Engineering News-Record and Building Design + Construction and has over 75 industry design awards. The firm’s unique culture, known as The Heery Way, is integrated into each project and reflects a passion for the built environment and staying true to the client’s vision. (www.heery.com)

Source: Heery


ASRock Rack Launches C3000 Server Motherboards

Wed, 08/16/2017 - 13:01

TAIPEI, Taiwan, Aug. 16, 2017 — From home offices to server rooms, demands of accessing data are rising at light speed while budgets are tightly controlled. The new launch of ASRock Rack’s C3000 server motherboards is going to raise the bar for performance and capacity at a reasonable cost. It will benefit users who are looking for affordable NAS and storage solutions.

This time, ASRock Rack is releasing three types of server motherboards featuring the Next Generation Intel Atom C3000 Processor. Compared with the last generation, performance has been bumped up by a factor of 2.3. In addition, I/O flexibility is enhanced by PCIe 3.0, SATA3 and USB 3.0, delivering faster speeds and better flexibility. Last but not least, power consumption is extremely low while performance is much improved.

ASRock Rack’s C3000 collection is going to focus on mini-ITX and uATX. The mini-ITX model targets home-based power users or small business owners who need inexpensive storage. As the predecessor performed well in the market, this upgraded model is expected to knock it out of the park soon. Meanwhile, the uATX model is aimed at mid-size to large enterprise users with more extreme storage demands. It can be effortlessly deployed with a 1U/2U rackmount chassis for cold storage or a 10-bay NAS chassis.

All models will be available at the end of September. Please check out www.asrockrack.com for more information.

 

“ASRock Rack” is the official trademark of ASRock Rack, Inc.; all coverage must retain the original brand name without modification. All other brands, names and trademarks are the property of their respective owners.

About ASRock Rack

ASRock Rack Inc., established in 2013, specializes in providing high-performance and high-efficiency server technology in the fields of Cloud Computing, Enterprise IT, HPC and Datacenter. We adopted the design concept of “Creativity, Consideration, Cost-effectiveness” from ASRock, and the company devotes passion to think out-of-the-box in the Server Industry. Leveraged by ASRock’s growing momentum and distribution channels, this young and vibrant company targets the booming market of Cloud Computing, and is committed to serve the market with user-friendly, eco-friendly and do-it-yourself Server technology, featuring flexible and reliable products.

Source: ASRock Rack


Dell EMC will Build OzStar – Swinburne’s New Supercomputer to Study Gravity

Wed, 08/16/2017 - 11:55

Dell EMC announced yesterday it is building a new supercomputer – the OzStar – for the Swinburne University of Technology (Australia) in support of the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav). The OzGrav project was first announced last September. The OzStar supercomputer will be based on Dell EMC PowerEdge R740 nodes, will deliver more than one petaflops of capability, and is expected to be completed in September.

OzGrav will use the new machine in support of efforts to understand the extreme physics of black holes and warped space-time. Among other projects, OzGrav will process data from LIGO (Laser Interferometer Gravitational Wave Observatory) gravitational wave detectors and the Square Kilometre Array (SKA) radio telescope project with facilities built in Australia and South Africa.

The OzStar’s architecture will leverage advanced Intel (Xeon V5) and Nvidia (P100) technology and feature three building blocks: Dell EMC 14th Generation PowerEdge R740 Servers; Dell EMC H-Series Networking Fabric; and Dell EMC HPC Storage with Intel Lustre filesystem. The networking fabric is Intel Omni Path Architecture and will provide “86.4 Terabits per second of aggregate network bandwidth at 0.9 µs latency” according to Dell EMC. As is typical in such contracts, Dell EMC will provide support.
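The aggregate bandwidth figure is consistent with a simple back-of-the-envelope check, assuming each of the nine Dell EMC H1048 switches listed in the spec snapshot below is a 48-port, 100 Gb/s Omni-Path edge switch counted bidirectionally (an assumption; the announcement does not break the figure down): 9 switches × 48 ports × 100 Gb/s × 2 directions = 86.4 Tb/s.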

Dell EMC PowerEdge R740

Here’s a snapshot of OzStar’s specs as provided by Dell EMC:

  • 4,410 x86 cores at 2.3Ghz across 107 standard compute and eight data crunching nodes
  • 230 x NVIDIA Tesla P100 12GB GPUs (one per CPU socket)
  • 272 Intel Xeon Phi cores at 1.6Ghz across 4 C6320p KNL nodes
  • A high-speed, low latency network fabric able to move data across each building block at over 100Gbps with various features to ensure reliability and traffic flow (9 Dell EMC H1048 switches), as well as GPU-Direct capability
  • 5PiB of usable storage via the Lustre file system at 60GB/s throughput, on the Dell EMC PowerVault MD3060e storage with R740 controllers (10 Dell EMC PowerEdge R740 + 16 MD3060e and 1 MD3240)
Matthew Bailes, OzGrav Director

“While Einstein predicted the existence of gravitational waves, it took one hundred years for technology to advance to the point they could be detected,” said Professor Matthew Bailes, director of OzGrav, Swinburne University of Technology. “Discoveries this significant don’t occur every day and we have now opened a new window on the Universe. This machine will be a tremendous boost to our brand-new field of science and will be used by astrophysicists at our partner nodes as well as internationally.”

“This combination of Dell EMC technologies will deliver the incredibly high computing power required to move and analyze data sets that are literally astronomical in size,” said Andrew Underwood, Dell EMC’s ANZ high performance computing lead, who collaborated with Swinburne on the supercomputer design.

The NSF-funded LIGO project first successfully detected gravitational waves in 2015. Those waves were caused by the collision of two modest size black holes spiraling into one another (see HPCwire article, Gravitational Waves Detected! Historic LIGO Success Strikes Chord with Larry Smarr). LIGO has since detected two more events opening up a whole new way to examine the universe.

According to today’s announcement, up to 35% of the supercomputer’s time will be spent on OzGrav research related to gravitational waves. The supercomputer will also continue to incorporate the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR), operating as a national facility for the astronomy community funded under the federal National Collaborative Research Infrastructure Scheme (NCRIS) in cooperation with Astronomy Australia Limited (AAL). In addition, the supercomputer will underpin the research goals of Swinburne staff and students across multiple disciplines, including molecular dynamics, nanophotonics, advanced chemistry and atomic optics.

OzStar replaces the “green” machines that have served Swinburne for the last decade and seeks to further reduce Swinburne’s carbon footprint, minimizing CO2 emissions through careful attention to heating and cooling and a very high performance-per-watt ratio.

OzGrav is funded by the Australian Government through the Australian Research Council Centres of Excellence funding scheme and is a partnership between Swinburne University (host of OzGrav headquarters), the Australian National University, Monash University, University of Adelaide, University of Melbourne, and University of Western Australia, along with other collaborating organisations in Australia and overseas.


Out-of-this-World Science Project to be Featured as Keynote at SC17

Wed, 08/16/2017 - 10:18

DENVER, Aug. 16, 2017 — Professor Philip Diamond, Director General of the international Square Kilometer Array (SKA) project, will be the keynote speaker at SC17, the International Conference for High Performance Computing, Networking, Storage and Analysis in Denver, Nov. 12-17.

SKA is an international collaboration to build the world’s largest radio telescope that will change our understanding of space as we know it.

Professor Diamond, accompanied by Dr. Rosie Bolton, SKA Regional Centre Project Scientist, will take SC17 attendees around the globe and out into the deepest reaches of the observable universe as they describe the SKA’s international partnership that will map and study the entire sky in greater detail than ever before.

When completed, the SKA telescope will be at the forefront of scientific research, from looking at how the very first stars and galaxies formed just after the big bang to helping scientists understand the evolution of the universe and the nature of the mysterious force known as dark energy.

As Director-General of the SKA, Prof. Diamond coordinated the global effort to establish, and now oversees, 12 international engineering consortia, bringing together over 100 companies and research institutes and 600+ experts in 20 countries to design the SKA.

“The SKA is one of the most ambitious science enterprises of our times,” said Bernd Mohr, SC17 Conference Chair from Juelich Supercomputing Centre. “Professor Diamond and Dr. Bolton’s visually stunning and information-rich presentation will captivate as they describe one of the largest scientific endeavors in history, incorporating the world’s largest scientific instrument – a global science project of unprecedented size and scale and a prime example of our conference’s HPC connects theme.”

Thousands of antennas distributed across two continents will generate petabytes—quadrillions of bytes—and eventually exabytes (one exabyte is a quintillion bytes) of data, making the SKA a bleeding-edge project in the era of big data and extreme-scale computing.

With its unique ability to pick up smaller and fainter objects in the sky than any other radio telescope, the SKA will enable astronomers to make discoveries in areas as diverse as the formation of Earth-like planets, the detection of gravitational distortions of space-time in our galaxy, the origin of cosmic magnetic fields, and the understanding of the formation and growth of black holes.

“Like Wi-Fi and the World Wide Web before them, some of these innovations will trickle down to society and be applied in other fields,” said Diamond. ”For instance, spin-offs in areas linked to the SKA’s computing activities could benefit other power-efficient systems that need to process large volumes of data in remote areas from geographically dispersed sources.”

As SKA Regional Centre Project Scientist, Dr. Bolton is looking at how to distribute hundreds of petabytes of data products per year to thousands of scientists around the world and also determine best practices based on how the scientific community will interact with that data.

The SKA is already driving technology development in collaboration with industry to act as a testbed of emerging technologies for potential future market applications. Potential areas of innovation include data management techniques; data mining and analytics; and imaging algorithms, remote visualization, and pattern matching—all of which can impact areas such as medicine, transportation, and security. The project is also committed to outreach, education, and training in developing countries.

Professor Philip Diamond

Professor Philip Diamond is the Director-General of the SKA (Square Kilometre Array). He was appointed to this position in October 2012, and is responsible for the team designing and ultimately constructing the SKA. Professor Diamond’s many research interests include studies of star birth and death, galactic and extra-galactic supernovae and discs of gas rotating around super-massive black-holes at the centres of galaxies. He has published more than 300 research papers on astronomy.

Dr. Rosie Bolton

Dr. Bolton acts as the SKA Regional Centre Project Scientist as well as Project Scientist for the international engineering consortium designing the High Performance Computers that will generate some 300 petabytes of data per year of initial science data products for the SKA. Dr. Bolton is also looking at how to distribute those data products to the user community of thousands of scientists around the world and how they will interact with the data.

About SC17

SC17 is the premier international conference showcasing the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. The annual event, created and sponsored by the ACM (Association for Computing Machinery) and the IEEE Computer Society, attracts HPC professionals and educators from around the globe to participate in its complete technical education program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning.

Source: SC17


HPC Researcher Moves to Georgia Tech

Wed, 08/16/2017 - 10:01

Aug. 16, 2017 — Vivek Sarkar is moving from his position as E.D. Butcher Chair in Engineering at Rice University in Houston, Texas to join the School of Computer Science in the College of Computing at Georgia Institute of Technology, in Atlanta, Georgia. Effective August 16, 2017, he will hold the Stephen Fleming Chair for Telecommunications in the College of Computing at Georgia Tech. “Our goal in hiring Vivek,” said Zvi Galil, the John P. Imlay Jr. Dean of Computing at Georgia Tech, “is to further bolster the College of Computing at Georgia Tech as the premier academic institution for research in software for future systems in the 21st century, both by creating technologies that will be broadly adopted by industry and by introducing new pedagogies for parallel software that will influence on-campus and on-line teaching across the globe.”

At Rice, Sarkar’s lab, which is also known as the Habanero Extreme Scale Software Research Project, included more than 20 PhD students, postdoctoral researchers, and research scientists working on advancing foundations of programming technologies and HPC software for a wide range of parallel and distributed platforms. Some of these foundations have already influenced industry standards for parallelism including the doacross construct in OpenMP 4.5, the task blocks library for C++, Java’s Phaser library, and the Open Community Runtime (OCR) system. Sarkar served as chair of the Department of Computer Science at Rice during 2013 – 2016. He is also PI of the DARPA-funded Pliny project on “big code” analytics led by Rice, and instructor for a new online specialization in Coursera on Parallel, Concurrent, and Distributed Programming.

Prior to joining Rice in 2007, Sarkar was Senior Manager of Programming Technologies at IBM Research. His research projects at IBM included the X10 parallel programming language, the Jikes Research Virtual Machine for the Java language, the ASTI optimizer used in IBM’s XL Fortran product compilers, and the PTRAN automatic parallelization system. He became a member of the IBM Academy of Technology in 1995, and was inducted as an ACM Fellow in 2008. Sarkar has been serving as a member of the US Department of Energy’s Advanced Scientific Computing Advisory Committee (ASCAC) since 2009, and on CRA’s Board of Directors since 2015. He will also hold a joint faculty position at Oak Ridge National Laboratory starting this month.   “It is great to have Vivek on board with a joint faculty appointment in our Computing and Computational Sciences Directorate (CCSD) to further strengthen the ongoing partnerships between ORNL and Georgia Tech,” said Jeff Nichols, who leads CCSD at ORNL.

Sarkar said that the approaching end-game to Moore’s Law signals a “new era of combined software-hardware research challenges” with an increased focus on leveraging heterogeneous parallelism, heterogeneous memories, and customized computing. He will be an Associate Director for Georgia Tech’s Center for Research into Novel Computing Hierarchies (CRNCH). “Georgia Tech has a long tradition of pushing the envelope in computer architecture innovations. Vivek’s intellectual interests in software for novel hardware should help amplify our research directions in CRNCH,” said Tom Conte, CRNCH Director and Professor of Computer Science and Electrical & Computer Engineering at Georgia Tech, and Co-Chair of IEEE’s Rebooting Computing initiative.

Sarkar earned his Ph.D. from Stanford University in 1987, and has authored or coauthored more than 300 papers in the areas of parallel computing and programming systems. His mentors include his Ph.D. advisor, John Hennessy, and his hiring manager at IBM Research, Fran Allen.


Microsoft Bolsters Azure With Cloud HPC Deal

Tue, 08/15/2017 - 11:57

Microsoft has acquired cloud computing software vendor Cycle Computing in a move designed to bring orchestration tools along with high-end computing access capabilities to the cloud.

Terms of the acquisition were not disclosed. According to the web site Crunchbase, a Microsoft investment arm previously provided Cycle Computing with “non-equity assistance.”

Microsoft said Tuesday (Aug. 15) the deal would advance its “big compute” workloads effort, which provides the “on-demand power and infrastructure necessary to run massive workloads at scale without the overhead,” Jason Zander, Microsoft’s corporate vice president for Azure, noted in a blog post.

The company is betting the cloud orchestration deal will help it capitalize on AI, deep learning, Internet of Things and other compute-intensive jobs that are expected to increase demand for running large workloads at scale.

The partners said Tuesday (Aug. 15) the deal would combine Cycle Computing’s orchestration technology for managing Linux and Windows computing and data workloads with Microsoft’s Azure cloud computing infrastructure.

Based in Stamford, Conn., Cycle Computing was launched in 2005 by co-founders Rob Futrick and Jason Stowe. “We had the rare opportunity to invent and lead a product category, Cloud HPC,” Cycle Computing CEO Stowe noted in a separate blog post announcing the deal.

Zander said the acquisition would combine Azure public cloud infrastructure such as GPU processing capabilities and support for Infiniband networking with Cycle Computing’s orchestration and HPC heritage. The combination would “enhance our support of Linux HPC workloads and make it easier to extend on-premise workloads to the cloud,” Zander asserted. Microsoft is the only major cloud provider currently supporting Infiniband.

The “big computing” deal would also seek to leverage Microsoft’s recently announced plans to add GPU instances based on Nvidia’s Pascal generation of graphics processors to its Azure cloud. Azure currently includes Nvidia M60 and K80 GPU instances, and the cloud vendor said it would add P40- and P100-based virtual machines later this year.

Google, IBM and other public cloud providers have also moved to deploy Nvidia’s Pascal GPUs in their cloud infrastructure as customers run compute-intensive workloads.

Those upgrades target AI and deep learning workloads as well as HPC workloads such as DNA sequencing and Monte Carlo simulations. Those capabilities mesh with Cycle Computing’s current customer base that includes biomedical researchers and financial firms that use its cloud HPC software and orchestration tools to access cloud-based computing resources.

Cycle Computing, which in the past has had strong ties to Microsoft’s cloud rival, Amazon Web Services, has also played a key role in transforming HPC technology as a provider of software that links users to the cloud.

“HPC workloads are always changing and perhaps the definition of the HPC along with it, but I think what’s really happening is the customer demographics are changing,” Tim Carroll, head of ecosystem development and sales, told HPCwire earlier this year. “It’s a customer demographic defined not by the software or the system, but the answer.”

 


Microsoft Acquires Cycle Computing for Big Computing in the Cloud

Tue, 08/15/2017 - 10:31

Aug. 15, 2017 — Today, Microsoft announced its acquisition of Cycle Computing to offer its customers High-Performance Computing (HPC) and other Big Computing capabilities in the cloud. Jason Stowe, CEO of Cycle Computing, announced the acquisition in the following blog post:

When Rachel, Rob, Doug, and I started Cycle twelve years ago on an $8,000 credit card bill, customers needed large up-front investments to access Big Compute. We set out to fix that, to accelerate the pace of innovation by changing the way the world accesses computing. Since then, our products have helped customers fight cancer & other diseases, design faster rockets, build better hard drives, create better solar panels, and manage risk for peoples’ retirements.

We’ve had an amazing experience bootstrapping Cycle Computing without VC funding, building products that will manage 1 Billion core-hours this year, growing at 2.7x every 12 months, with a customer base that spends $50-100 million annually on cloud infrastructure. Today we couldn’t be happier to announce that we’re joining Microsoft to accelerate HPC cloud adoption.

To our existing customers: We couldn’t have done any of this without you, and we’re excited to continue supporting you and the amazing work you do. Together you form an impressive community of innovative users that span Global 2000 Manufacturing, Big 5 Life Insurance, Big 10 Pharma & Biotech, Big 10 Media & Entertainment, Big 10 Financial Services & Hedge Funds, startups, and government agencies. We will continue to make CycleCloud the leading Big Compute and Cloud HPC software, but now even bigger and better than before.

To my fellow Cyclers: You’ve been an integral part of a special team of bright, hard-working people that engineer great products and get things done with integrity. Customers frequently say that they love working with you and the products you build. I couldn’t be prouder. It has been an honor working alongside each of you, and I look forward to the great things we will accomplish together at Microsoft.

To the extended Cycle family, our advisors, supporters, vendors, ISVs, partners: As a team, we had the rare opportunity to invent and lead a product category, Cloud HPC, drive customer adoption, push the boundaries of what can be done, and execute our vision of changing the world for the better. Thank you all for helping Cyclers push things forward; your collaboration means so much to us.

Now, we see amazing opportunities in joining forces with Microsoft. Its global cloud footprint and unique hybrid offering is built with enterprises in mind, and its Big Compute/HPC team has already delivered pivotal technologies such as InfiniBand and next generation GPUs. The Cycle team can’t wait to combine CycleCloud’s technology for managing Linux and Windows compute & data workloads, with Microsoft Azure’s Big Compute infrastructure roadmap and global market reach.

In short, we’re psyched to be joining the Azure team precisely because they share our vision of bringing Big Compute to the world: to solve our customers’, and frequently humanity’s most challenging problems through the use of cloud HPC.

Source: Jason Stowe, Cycle Computing


HPE Ships Supercomputer to Space Station, Final Destination Mars

Mon, 08/14/2017 - 15:37

With a manned mission to Mars on the horizon, the demand for space-based supercomputing is at hand. Today HPE and NASA sent the first off-the-shelf HPC system into space aboard the SpaceX Dragon Spacecraft to explore if such a system, equipped with purpose-built software from HPE, can operate successfully under harsh environmental conditions that include radiation, solar flares, and unstable electrical power.

Currently, ruggedizing space-bound computers is a years-long process, so by the time they blast off they are three to four generations behind the state of the art. HPE has designed its new system software to mitigate environmentally induced errors using real-time adaptive throttling techniques. If successful, the approach would mean that space travelers need not put their computers through the extensive hardening process and could benefit from the latest technologies.

After this morning’s launch from NASA’s Kennedy Space Center (Merritt Island, Florida), the Spaceborne Computer is headed to the International Space Station (ISS) for one year, which is about how long it takes to get to Mars.

“Our vision is to have a general purpose HPC supercomputer on board the space craft,” said Dr. Mark Fernandez, leading payload engineer for the project. “Today, all of the experiments must send the data to earth over the precious network bandwidth and this opens up the opportunity to what we’ve been talking a lot about lately, which is bring the compute to the data rather than bring the data to the compute.”

As one considers the latency and bandwidth issues of space travel, the advantage of on-board HPC is clear. The average round trip signal as you get close to Mars is 26 minutes. With this delay, it’s hard to have a conversation over this network much less carry out complex computational tasks. “When you need on the spot computation, for simulation, analytics, artificial intelligence, the answers tends to get a bit too long to come by if you rely on earth so more and more as you travel further and further out you need to carry more compute power with you – this is our belief,” said Dr. Eng Lim Goh, VP, Chief Technology Officer of SGI at HPE and one of the inventors of the approach.

Ultimately, HPE is positioning itself to provide The Machine, its Memory-Driven Computing system, for Mars exploration.

Sending people to Mars creates enormous computing demands. They will need to be “guided by a computer capable of performing extraordinary tasks,” writes Kirk Bresniker, Chief Architect, Hewlett Packard Labs. These include:

  • Monitoring onboard systems the way a smart city would monitor itself—anticipating and addressing problems before they threaten the mission.
  • Tracking minute-by-minute changes in astronaut health—monitoring vitals and personalizing treatments to fit the exact need in the exact moment.
  • Coordinating every terrestrial, deep space, Martian orbital and rover sensor available, so crew and craft can react to changing conditions in real time.
  • And, perhaps most importantly… combining these data sets to find the hidden correlations that can keep a mission and crew alive.

“Memory-Driven Computing will help us efficiently and effectively tackle the big data challenges of our day, and make it possible for us to—one day—send humans to Mars,” asserts Goh. “But even if we expect Memory-Driven Computing to become the standard for supercomputing in space we need to start somewhere.”

To that end, the phase one Spaceborne Computer includes two x86 HPE Apollo 40-class two-socket systems, powered by Broadwell processors. These were the latest-generation Xeons available when NASA froze the configuration in March, ahead of shipment.

The InfiniBand-connected Linux cluster will be housed in a standard NASA dimension locker, equipped with standard Ethernet cables, standard 110 volt AC connectors and NASA-approved cooling technology. The rack design means the system can be easily swapped out for an upgraded model. No modifications were made to the main components, but HPE created a custom water-cooled enclosure that taps into a cooling loop on the space station, leveraging the free ambient cooling of space.

During its year in Earth orbit, the computer will run three HPC benchmarks, each targeting a different kind of computational workload: the compute- and power-hungry Linpack, the data-intensive HPCG, and NASA's own NAS Parallel Benchmarks suite.

“We selected these for relevance, to be as realistic as possible for NASA and space related work,” said Goh.
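
Linpack, in particular, stresses dense double-precision linear algebra. A quick, hypothetical way to probe the sustained floating-point rate of a node (an illustration only; this is not HPL and not HPE's test harness) is to time a large matrix multiply and count the arithmetic:

    # Not the real HPL benchmark -- just a quick sustained-GFLOPS probe via DGEMM.
    import time
    import numpy as np

    n = 4096                                   # matrix dimension (assumed; adjust to fit memory)
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b                                  # dense double-precision matrix multiply
    elapsed = time.perf_counter() - start

    flops = 2 * n**3                           # multiply-add count for an n x n GEMM
    print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s sustained (single run, no warm-up)")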

HPE designed the entire experiment so that testing can run autonomously. “It doesn’t require the astronauts to be system engineers,” said Goh. “They just need to plug the system in and turn it on and the experiments will run automatically.”

The tests will generate approximately 5 megabytes of data per day, which will be sent to HPE for analysis. There is also an uplink capability that would give cleared HPE team members limited access to the system, but the plan is to run autonomously apart from the regular data downloads, which will be compared against results from a control machine in Chippewa Falls, Wisconsin.

Through its SGI acquisition, HPE has a relationship with NASA that extends back 30 years. The Spaceborne Computer's “Apollo 40” compute nodes are the same class as those used in NASA's flagship Pleiades supercomputer, an SGI ICE X machine ranked number 15 on the current Top500 list.

The post HPE Ships Supercomputer to Space Station, Final Destination Mars appeared first on HPCwire.

AMD EPYC Video Takes Aim at Intel’s Broadwell

Mon, 08/14/2017 - 12:11

Let the benchmarking begin. Last week, AMD posted a YouTube video in which one of its EPYC-based systems outperformed a ‘comparable’ Intel Broadwell-based system on the STREAM benchmark and on a test case running ANSYS’s CFD application, Fluent. The intent was to showcase the new AMD chip’s (EPYC) strength on memory-bound HPC applications.

In the video, presenter Joshua Mora, senior manager of field applications engineering at AMD, touts EPYC's memory controller and the memory bandwidth it delivers. AMD has high hopes for its new EPYC line, both in head-to-head competition with Intel and in potentially creating a single-socket market (see HPCwire article, AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor). Intel, of course, has been busy with its own introductions (see HPCwire article, Intel Unveils Xeon Scalable Processors). AMD EPYC will ultimately have to compete with Intel's Skylake and IBM's Power9 chips.

The tested Intel system featured Xeon E5-2699 v4 processors (22 cores) and the AMD system featured EPYC 7601 (32 cores). Both were dual socket systems. “It is two clusters, tightly coupled with high speed, low latency InfiniBand interconnect running Windows OS,” according to Mora.

The AMD system was roughly 2X better on the STREAM benchmark, which is intended to measure sustainable memory bandwidth. The dual-socket Intel system ran at roughly 116 GB/s while the AMD system ran at roughly 266 GB/s. AMD says STREAM performance is a good proxy for a range of HPC applications. Intel would no doubt offer a different view of the systems' setup and the comparability of the results.
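
For context, STREAM reports sustainable bandwidth from four simple vector kernels run over arrays far larger than cache; the headline triad kernel computes a = b + scalar * c. A rough, single-process NumPy analogue is sketched below (hypothetical; the official benchmark is threaded C code and will report higher numbers, and the byte count follows STREAM's two-reads-plus-one-write convention rather than what NumPy actually moves):

    # Rough single-process analogue of the STREAM triad kernel (not the official benchmark).
    import time
    import numpy as np

    n = 20_000_000                       # array length (assumed; large enough to exceed caches)
    b = np.random.rand(n)
    c = np.random.rand(n)
    scalar = 3.0
    a = np.empty_like(b)

    start = time.perf_counter()
    np.multiply(c, scalar, out=a)        # a = scalar * c
    np.add(a, b, out=a)                  # a = b + scalar * c
    elapsed = time.perf_counter() - start

    bytes_counted = 3 * n * 8            # STREAM convention: read b, read c, write a
    print(f"~{bytes_counted / elapsed / 1e9:.1f} GB/s triad bandwidth (single process)")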

AMD was roughly 78 percent faster running the Fluent simulation, which was a 14 million cell simulation of various aerodynamic effects on a jet. Mora cited AMD’s greater number of cores as well as its memory bandwidth as factors.

It seems clear AMD is ramping up its effort to win chunks of the datacenter and HPC landscape following its absence from those markets in recent years.

At the time of EPYC's launch, Scott Aylor, AMD corporate VP and GM of enterprise solutions business, said “It's not enough to come back with one product, you've got to come back with a product cadence that moves as the market moves. So not only are we coming back with EPYC, we're also [discussing follow-on products] so when customers move with us today on EPYC they know they have a safe home and a migration path with Rome.” AMD has committed to socket compatibility between the EPYC 7000 line and Rome, the code name for the next scheduled generation of AMD datacenter processors.

Based on the Zen core, EPYC is a line of system on a chip (SoC) devices designed with enhanced memory bandwidth and fast interconnect in mind. AMD also introduced a one-socket device, optimized for many workloads, which AMD says will invigorate a viable one-socket server market. With EPYC, “we can build a no compromise one-socket offering that will allow us to cover up to 50 percent of the two-socket market that is today held by the [Intel Broadwell] E5-2650 and below,” said Aylor.

Intel is the giant here, and not standing still. It will be interesting to watch the competition. EPYC seems to be a serious contender, though with a lot of mindshare ground to make up.

The post AMD EPYC Video Takes Aim at Intel’s Broadwell appeared first on HPCwire.

Mellanox to Present at Upcoming Investor Conference

Mon, 08/14/2017 - 09:38

SUNNYVALE, Calif. & YOKNEAM, Israel, Aug. 14, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of end-to-end interconnect solutions for servers and storage systems, today announced that it will present at the following conference during the third quarter of 2017:

  • Deutsche Bank Technology Conference in Las Vegas, Nevada, Wednesday, Sept. 13th at 3:20 p.m., Pacific Daylight Time.

When available, a webcast of the live event, as well as a replay, will be available on the company’s investor relations website at: http://ir.mellanox.com.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox to Present at Upcoming Investor Conference appeared first on HPCwire.

AI: Simplifying and Scaling for Faster Innovation

Mon, 08/14/2017 - 01:05

The AI revolution has already begun, right?

In some ways it has. Deep learning applications have already bested humans in complex games, including chess, Jeopardy, Go, and poker, and in practical tasks, such as image and speech recognition. They are also impacting our everyday lives, introducing human-like capabilities into personal digital assistants, online preference engines, fraud detection systems and more.

However, these solutions were developed primarily by organizations with deep pockets, deep expertise and high-end computing resources.[1] For the AI revolution to move into the mainstream, cost and complexity must be reduced, so smaller organizations can afford to develop, train, and deploy powerful deep learning applications.

It’s a tough challenge. Interest in AI is high, technologies are in flux and no one can reliably predict what those technologies will look like even five years from now. How do you simplify and drive down costs in such an inherently complex and changing environment?

Intel has a strategy, and it involves software as much as hardware. It also involves HPC.

Optimized Software Building Blocks that are Flexible—and Fast

Figure 1. Intel provides highly-optimized software tools, libraries, and frameworks to simplify the development of fast and scalable AI applications.

Most of today’s deep learning algorithms were not designed to scale on modern computing systems. Intel has been addressing those limitations by working with researchers, vendors and the open-source community to parallelize and vectorize core software components for Intel® Xeon® and Intel® Xeon Phi™ processors.

The optimized tools, libraries, and frameworks often provide order-of-magnitude or greater performance gains, potentially reducing the cost and complexity of the required hardware infrastructure. They also integrate more easily into standards-based environments, so new AI developers have less to learn, deployment is simpler and costs are lower.
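
Much of that gain comes from handing inner loops to vectorized, threaded math libraries rather than executing them element by element. The generic sketch below is illustrative, not Intel-specific; NumPy simply dispatches np.dot to whatever optimized BLAS is installed (such as MKL, when present):

    # Why optimized building blocks matter: naive loop vs. a BLAS-backed call.
    import time
    import numpy as np

    x = np.random.rand(2_000_000)
    w = np.random.rand(2_000_000)

    # Naive interpreted loop: one element at a time.
    start = time.perf_counter()
    acc = 0.0
    for i in range(len(x)):
        acc += x[i] * w[i]
    naive_s = time.perf_counter() - start

    # Vectorized dot product: dispatched to an optimized BLAS library.
    start = time.perf_counter()
    acc_vec = np.dot(x, w)
    vec_s = time.perf_counter() - start

    print(f"naive loop: {naive_s:.3f}s, np.dot: {vec_s:.4f}s, speedup ~{naive_s / vec_s:.0f}x")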

Bring AI and HPC Together to Unleash Broad and Deep Innovation

Figure 2. Intel® HPC Orchestrator simplifies the design, deployment, and use of HPC clusters, and includes optimized development and runtime support for AI applications.

Optimized software development tools help, but deep learning applications are compute-intensive, data sets are growing exponentially, and time-to-results can be key to success. HPC offers a path to scaling compute power and data capacity to address these requirements.

However, combining AI and HPC brings additional challenges. AI and HPC have grown up in relative isolation, and there is currently limited overlap in expertise between the two areas. Intel is working with both communities to provide a better and more open foundation for mutual development.

Intel is also working to extend the benefits of AI and HPC to a broader audience. One example of this effort is Intel® HPC Orchestrator, an extended version of OpenHPC that provides a complete, integrated system software stack for HPC-class computing. Intel HPC Orchestrator will help the HPC ecosystem deliver value to customers more quickly by eliminating the complex and duplicated work of creating, testing, and validating a system software stack.

Figure 3. The Intel® Scalable System Framework brings hardware and software together to support the next generation of AI applications on simpler, more affordable, and massively scalable HPC clusters.

Intel has already integrated its AI-optimized software building blocks into Intel HPC Orchestrator to provide better development and runtime environments for AI applications. Work has also been done to optimize other core components, such as MPI, to provide higher performance and better scaling for the data- and compute-intensive demands of deep learning.
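
On the MPI side, the core pattern behind scaling deep learning training across nodes is an allreduce that averages gradients every step. The sketch below is a minimal, hypothetical mpi4py version of that pattern, not Intel's implementation; the array stands in for a model's flattened gradients.

    # Hypothetical data-parallel gradient averaging via MPI allreduce.
    # Run with something like: mpirun -n 4 python allreduce_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank computes local gradients on its own data shard (dummy values here).
    local_grad = np.random.rand(1_000_000)

    # Sum across all ranks, then average; every rank receives the same result.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= size

    if rank == 0:
        print(f"averaged gradients across {size} ranks; first element = {global_grad[0]:.4f}")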

Powerful Hardware to Run It All

Of course, AI software can only be as powerful as the hardware that runs it. Intel is delivering disruptive new capabilities in its processors, and supporting them with synchronized advances in workstation and server platforms. Intel engineers are also integrating these advances—along with Intel HPC Orchestrator—into the Intel® Scalable System Framework (Intel SSF), a reference architecture for HPC clusters that are simpler, more affordable, more scalable, and designed to handle the full range of HPC and AI workloads. It’s a platform for the future of AI.

Click on a link to learn more about the benefits Intel SSF brings to AI at each layer of the solution stack: overview, compute, memory, fabric, storage.

 

[1] For example, Libratus, the application that beat five of the world’s top poker players, was created by a team at Carnegie Mellon University and relied on 600-nodes of a University of Pittsburgh supercomputer for overnight processing during the competition.

The post AI: Simplifying and Scaling for Faster Innovation appeared first on HPCwire.

YT Project Awarded NSF Grant to Expand to Multiple New Science Domains

Fri, 08/11/2017 - 09:39

URBANA, Ill., Aug. 11, 2017 — The yt Project, an open science environment created to address astrophysical questions through analysis and visualization, has been awarded a $1.6 million grant from the National Science Foundation (NSF) to continue developing the software. The grant will enable yt to expand beyond astrophysics and begin to support other domains, including weather, geophysics and seismology, molecular dynamics, and observational astronomy. It will also support the development of curricula for Data Carpentry, easing the onramp for scientists new to data from these domains.

The yt project is led by Matt Turk along with Nathan Goldbaum, Kacper Kowalik, and Meagan Lang at the National Center for Supercomputing Applications (NCSA) at the University of Illinois' Urbana campus, in collaboration with Ben Holtzman at Columbia University in the City of New York and Leigh Orf at the University of Wisconsin-Madison. It is an open source, community-driven project working to produce an integrated science environment for collaboratively asking and answering questions about simulations of astrophysical phenomena, and for applying analysis and visualization to many different problems within the field. Built on an ecosystem of packages from the scientific software community, yt is committed to open science principles and emphasizes a helpful community of users and developers. Many theoretical astrophysics researchers use yt as a key component of every stage of their computational workflow, from debugging to data exploration to the preparation of results for publication.

yt has been used for astrophysics projects as diverse as mass accretion onto the first stars in the Universe, outflows from compact objects and supernovae, and the star formation history of galaxies. It has been used to analyze and visualize some of the largest simulations ever conducted, and visualizations generated by yt have been featured in planetarium shows such as Solar Superstorms, created by the Advanced Visualization Lab at NCSA.

“I’m delighted and honored by this grant, and we hope it will enable us to build, sustain and grow the thriving open science community around yt, and share the increase in productivity and discovery made possible by yt in astrophysics with researchers across the physical sciences,” said Principal Investigator Matt Turk.

This NSF SI2-SSI award is expected to run from October 2017 through September 2022. A copy of the grant proposal may be found here.

Source: NCSA

The post YT Project Awarded NSF Grant to Expand to Multiple New Science Domains appeared first on HPCwire.
