HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Microsoft Extends Hybrid Cloud Push with Avere Deal

Wed, 01/03/2018 - 23:00

Microsoft continued its foray into the high-end cloud storage sector with a deal this week to acquire hybrid cloud data storage and management vendor Avere Systems.

The deal announced on Wednesday (Jan. 3) follows Microsoft’s acquisition last August of Cycle Computing to bolster its “big compute” initiatives on the Azure cloud. Microsoft said the Cycle and Avere deals are part of its strategy to bring high-end computing to hybrid cloud deployments. (Note that Cycle and Avere had entered into a technology partnership a year and a half ago, so there was some history here already).

Terms of the deal for Pittsburgh-based Avere Systems were not disclosed.

Avere’s scalable cloud storage platform dubbed FXT Edge Filers targets enterprises trying to integrate applications requiring file systems into the cloud. Along with data storage access, the platform helps scale computing and storage depending on application requirements.

Jason Zander, vice president of Microsoft Azure, said the acquisition gives the public cloud vendor a combination of file system and caching technologies. Avere works with animation studios that run compute-intensive workloads, and the deal is expected to give Microsoft entrée into the media and entertainment sector.

“By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure,” Zander asserted in a blog post announcing the deal.

Avere Systems CEO Ron Bianchini added that the acquisition would expand the reach of its data storage technology from datacenters and public clouds to hybrid cloud storage and “cloud-bursting environments.”

Along with the entertainment sector, Avere Systems’ customers include the Library of Congress, Johns Hopkins University and automated test equipment manufacturer Teradyne. Its customer base also includes financial services and oil and gas customers along with the education, healthcare and manufacturing sectors.

Last year, Avere Systems announced an investment round that included Microsoft cloud rival Google. It also disclosed partnerships with private cloud storage vendors, including support for Dell EMC’s Elastic Cloud Storage platform. The partners said the software-defined object storage approach would allow private cloud customers to consolidate content archives and file storage systems in a central repository.

Meanwhile, Microsoft’s August 2017 acquisition of Cycle Computing combined the startup’s orchestration technology for managing Linux and Windows computing and data workloads with Microsoft’s Azure cloud computing infrastructure.

Observers praised Microsoft’s acquisition of Avere Systems, noting that its storage technology could help Microsoft boost its Azure revenues by allowing customers to use the public cloud while keeping some data on-premises.


Hyperion Sets Agenda for HPC User Forum in Europe

Wed, 01/03/2018 - 12:46

Regional HPC strategies, including the perennial jostling for sway among Europe, the U.S., and Japan, will highlight the HPC User Forum, Europe, scheduled for March 6-7 in France. Hyperion Research, organizer of the event and administrator of the HPC User Forum, just released the preliminary agenda. Besides global competition, AI, industrial HPC use, and a dinner talk on quantum computing are all on the docket.

“Because of China’s rapid rise in the Top500 rankings and the Chinese government’s well-funded plans to deploy the first exascale system, many HPC observers see future supercomputing leadership as a two-horse race between relative newcomer China and long-time leader the United States. In reality, it’s at least a four-horse race that includes two other serious contenders, Europe and Japan,” says Steve Conway, SVP research, Hyperion.

Steve Conway, Hyperion SVP

“That’s one of the things we’ll highlight at the March HPC User Forum meeting: the global nature of the push to advance the state-of-the-art in supercomputing. We’ll have senior officials from Europe, Japan and the U.S.  We haven’t secured a speaker from China, but we’d welcome one. Once you abandon the idea that leadership is limited to Linpack performance, it becomes clear that each of these four contenders is likely to be the future supercomputing leader in some important respects and to advance the state-of-the-art in supercomputing. I’m talking about the 2023-2024 era, when we’re likely to see productive exascale supercomputers from multiple parts of the world, rather than just smaller-scale prototypes and early machines.”

Here are a few agenda highlights:

  • European HPC Strategy, Thomas Skordas or Leonardo Flores, European Commission
  • Japan’s Flagship 2020 Project, Shig Okaya, Flagship 2020/RIKEN
  • View from America, Dimitri Kuznesov and Barbara Helland, DOE
  • HPC-based AI/Deep Learning in the Commercial World, Arno Kolster, Providentia Worldwide
  • Worldwide Study on HPC Centers and Industrial Users, Steve Conway, Hyperion
  • Exascale Computing Project Update, Doug Kothe

According to Conway, the U.S. has a hefty lead in processors and accelerators, but he expects gains for ARM-based designs and indigenous Chinese processors. “When it comes to highly scalable system and application software, the U.S. and Europe stand out, with each excelling in different scientific domains. Europe is also very strong in scalable software for industrial applications. The U.S., Japan and Europe have the largest, most experienced HPC user communities but China is gaining ground quickly. Historically, Japan has shown the ability to mount herculean efforts and jump to the head of the pack,” he says.

So, the race is on. Interestingly, there’s still a fair amount of international collaboration. Roughly a year ago, France’s CEA and Japan’s RIKEN announced they would join forces to advance the ARM ecosystem. CEA and Teratec are co-hosting the March User Forum meeting. There are also initiatives such as the International Exascale Software Project. The exascale race promises to be more multi-dimensional than just a Linpack bake-off, says Conway.

This meeting, held at the Très Grand Centre de Calcul du CEA (TGCC) on the Teratec campus in Bruyères-le-Châtel, France (close to Paris), will be the first HPC User Forum run by Hyperion since it became an independent company in December. Registration for the conference is free. For more information about registering: http://www.hpcuserforum.com


TACC Supercomputers Help Researchers Design Patient-Specific Cancer Models

Wed, 01/03/2018 - 11:20

Jan. 3, 2018 — Attempts to eradicate cancer are often compared to a “moonshot” — the successful effort that sent the first astronauts to the moon.

But imagine if, instead of Newton’s second law of motion, which describes the relationship between an object’s mass and the amount of force needed to accelerate it, we only had reams of data related to throwing various objects into the air.

This, says Thomas Yankeelov, approximates the current state of cancer research: data-rich, but lacking governing laws and models.

The solution, he believes, is not to mine large quantities of patient data, as some insist, but to mathematize cancer: to uncover the fundamental formulas that represent how cancer, in its many varied forms, behaves.

Model of tumor growth in a rat brain before radiation treatment (left) and after one session of radiotherapy (right). The different colors represent tumor cell concentration, with red being the highest. The treatment reduced the tumor mass substantially (Lima et al. 2017, Hormuth et al. 2015).

“We’re trying to build models that describe how tumors grow and respond to therapy,” said Yankeelov, director of the Center for Computational Oncology at The University of Texas at Austin (UT Austin) and director of Cancer Imaging Research in the LIVESTRONG Cancer Institutes of the Dell Medical School. “The models have parameters in them that are agnostic, and we try to make them very specific by populating them with measurements from individual patients.”

The Center for Computational Oncology (part of the broader Institute for Computational Engineering and Sciences, or ICES) is developing complex computer models and analytic tools to predict how cancer will progress in a specific individual, based on their unique biological characteristics.

In December 2017, writing in Computer Methods in Applied Mechanics and Engineering, Yankeelov and collaborators at UT Austin and the Technical University of Munich showed that they can predict how brain tumors (gliomas) will grow and respond to X-ray radiation therapy with much greater accuracy than previous models. They did so by including factors like the mechanical forces acting on the cells and the tumor’s cellular heterogeneity. The paper continues research first described in the Journal of The Royal Society Interface in April 2017.

“We’re at the phase now where we’re trying to recapitulate experimental data so we have confidence that our model is capturing the key factors,” he said.

To develop and implement their mathematically complex models, the group uses the advanced computing resources at the Texas Advanced Computing Center (TACC). TACC’s supercomputers enable researchers to solve bigger problems than they otherwise could and reach solutions far faster than with a single computer or campus cluster.

According to ICES Director J. Tinsley Oden, mathematical models of the invasion and growth of tumors in living tissue have been “smoldering in the literature for a decade,” and in the last few years, significant advances have been made.

“We’re making genuine progress to predict the growth and decline of cancer and reactions to various therapies,” said Oden, a member of the National Academy of Engineering.

Model Selection and Testing

Over the years, many different mathematical models of tumor growth have been proposed, but determining which is most accurate at predicting cancer progression is a challenge.

In October 2016, writing in Mathematical Models and Methods in Applied Sciences, the team used a study of cancer in rats to test 13 leading tumor growth models to determine which could predict key quantities of interest relevant to survival, and the effects of various therapies.

They applied the principle of Occam’s razor, which says that where two explanations for an occurrence exist, the simpler one is usually better. They implemented this principle through the development and application of something they call the “Occam Plausibility Algorithm,” which selects the most plausible model for a given dataset and determines if the model is a valid tool for predicting tumor growth and morphology.
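
To make the model-selection idea concrete, here is a minimal, purely illustrative Python sketch (not the published Occam Plausibility Algorithm): it fits two hypothetical tumor-growth models, exponential and logistic, to synthetic volume measurements and picks the one with the better parsimony-penalized fit score, with AIC standing in for the plausibility criterion. The data, candidate models and scoring rule are all assumptions made for illustration.

    # Illustrative only: rank candidate tumor-growth models by a
    # parsimony-penalized fit score (AIC) on synthetic measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential(t, v0, r):
        return v0 * np.exp(r * t)

    def logistic(t, v0, r, k):
        return k / (1.0 + (k / v0 - 1.0) * np.exp(-r * t))

    t = np.linspace(0, 10, 11)                       # measurement days
    rng = np.random.default_rng(0)
    v_obs = logistic(t, 1.0, 0.8, 20.0) + rng.normal(0, 0.5, t.size)  # synthetic "data"

    def aic(model, p0):
        popt, _ = curve_fit(model, t, v_obs, p0=p0, maxfev=10000)
        rss = np.sum((v_obs - model(t, *popt)) ** 2)
        n, k = t.size, len(popt)
        return n * np.log(rss / n) + 2 * k           # lower = better fit plus fewer parameters

    scores = {"exponential": aic(exponential, [1.0, 0.5]),
              "logistic": aic(logistic, [1.0, 0.5, 15.0])}
    print("selected model:", min(scores, key=scores.get))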

The method was able to predict how large the rat tumors would grow to within 5 to 10 percent of their final mass.

“We have examples where we can gather data from lab animals or human subjects and make startlingly accurate depictions about the growth of cancer and the reaction to various therapies, like radiation and chemotherapy,” Oden said.

The team analyzes patient-specific data from magnetic resonance imaging (MRI), positron emission tomography (PET), x-ray computed tomography (CT), biopsies and other factors, in order to develop their computational model.

Each factor involved in the tumor response — whether it is the speed with which chemotherapeutic drugs reach the tissue or the degree to which cells signal each other to grow — is characterized by a mathematical equation that captures its essence.

“You put mathematical models on a computer and tune them and adapt them and learn more,” Oden said. “It is, in a way, an approach that goes back to Aristotle, but it accesses the most modern levels of computing and computational science.”

The group tries to model biological behavior at the tissue, cellular and cell signaling levels. Some of their models involve 10 species of tumor cells and include elements like cell connective tissue, nutrients and factors related to the development of new blood vessels. They have to solve partial differential equations for each of these elements and then intelligently couple them to all the other equations.
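
For a flavor of what one pair of those coupled equations can look like, here is a deliberately tiny, illustrative Python sketch (not the ICES group’s code): a one-dimensional reaction-diffusion model in which tumor cell density grows logistically, modulated by a diffusing nutrient field that the cells consume. All parameter values and the form of the coupling are assumptions for illustration.

    # Illustrative only: 1-D tumor cell density coupled to a nutrient field,
    # advanced with an explicit finite-difference scheme.
    import numpy as np

    nx, dx, dt, steps = 100, 0.1, 0.001, 2000
    D_c, D_n = 0.01, 0.05          # diffusion coefficients: cells, nutrient
    rho, cap, lam = 1.0, 1.0, 0.5  # growth rate, carrying capacity, consumption rate

    c = np.zeros(nx); c[nx // 2] = 0.5   # small tumor seed in the middle
    n = np.ones(nx)                      # nutrient initially uniform

    def laplacian(u):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        return lap                       # endpoints held fixed for simplicity

    for _ in range(steps):
        growth = rho * n * c * (1 - c / cap)             # nutrient-modulated logistic growth
        c = c + dt * (D_c * laplacian(c) + growth)
        n = n + dt * (D_n * laplacian(n) - lam * c * n)  # cells consume nutrient

    print("tumor burden:", round(c.sum() * dx, 3), "min nutrient:", round(n.min(), 3))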

“This is one of the most complicated projects in computational science. But you can do anything with a supercomputer,” Oden said. “There’s a cascading list of models at different scales that talk to each other. Ultimately, we’re going to need to learn to calibrate each and compute their interactions with each other.”

From Computer to Clinic

The research team at UT Austin — which comprises 30 faculty, students, and postdocs — doesn’t only develop mathematical and computer models. Some researchers work with cell samples in vitro; some do pre-clinical work in mice and rats. And recently, the group has begun a clinical study to predict, after one treatment, how an individual’s cancer will progress, and use that prediction to plan the future course of treatment.

At Vanderbilt University, Yankeelov’s previous institution, his group was able to predict with 87 percent accuracy whether a breast cancer patient would respond positively to treatment after just one cycle of therapy. They are trying to reproduce those results in a community setting and extend their models by adding new factors that describe how the tumor evolves.

The combination of mathematical modeling and high-performance computing may be the only way to overcome the complexity of cancer, which is not one disease but more than a hundred, each with numerous sub-types.

“There are not enough resources or patients to sort this problem out because there are too many variables. It would take until the end of time,” Yankeelov said. “But if you have a model that can recapitulate how tumors grow and respond to therapy, then it becomes a classic engineering optimization problem. ‘I have this much drug and this much time. What’s the best way to give it to minimize the number of tumor cells for the longest amount of time?'”

Computing at TACC has helped Yankeelov accelerate his research. “We can solve problems in a few minutes that would take us 3 weeks to do using the resources at our old institution,” he said. “It’s phenomenal.”

According to Oden and Yankeelov, very few research groups besides the UT Austin team are trying to sync clinical and experimental work with computational modeling and state-of-the-art computing resources.

“There’s a new horizon here, a more challenging future ahead where you go back to basic science and make concrete predictions about health and well-being from first principles,” Oden said.

Said Yankeelov: “The idea of taking each patient as an individual to populate these models to make a specific prediction for them and someday be able to take their model and then try on a computer a whole bunch of therapies on them to optimize their individual therapy — that’s the ultimate goal and I don’t know how you can do that without mathematizing the problem.”

The research is supported by the National Science Foundation, the U.S. Department of Energy, the National Council of Technological and Scientific Development, the Cancer Prevention Research Institute of Texas and the National Cancer Institute.

Source: Aaron Dubrow, TACC

Microsoft to Acquire Avere Systems

Wed, 01/03/2018 - 07:17

Jan. 3, 2018 — Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market is more critical than ever.

High performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples where Avere has helped scale datacenter performance and capacity, and optimize infrastructure placement.

Microsoft says that by bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure.

Source: Microsoft


ASC18 Competition Timeline Released

Tue, 01/02/2018 - 07:14

Jan. 2, 2018 — The 2018 ASC Student Supercomputer Challenge (ASC18) has released its competition timeline (official website: http://www.asc-events.org/ASC18/index.php), with a starting date set for January 16, 2018. Making the announcement, the ASC18 organizers also promised that the upcoming competition would include a well-known AI application in one of its challenges, a tantalizing clue to the dozens of entrants from China, America, Europe and around the world who are expected to attend.

The timeline sets January 15, 2018, as the deadline for registration, before which all entrants must submit an application on the ASC website. Each team entering the competition must include one mentor and five undergraduate students, with multiple teams from a single university allowed. Competition will begin with the preliminary round, set to kick off on January 16 and conclude on March 20. Within this timeframe, participating teams must submit their written solutions and application optimizations according to ASC requirements. Each entry will be scored by the judging panel, with the top 20 teams advancing to the final round. For five days, from May 5 to May 9, these teams will compete face-to-face for the ASC18 Grand Prix.

But while releasing the ASC18 timeline, the competition’s organizers have yet to reveal several key details, hoping to spark the curiosity of young supercomputing enthusiasts around the world. The organizing committee noted ASC18 will follow past precedent in using an ultra-large-scale supercomputer system as the competition platform. But no indication was given as to whether this would be the Gordon Bell Prize-winning Sunway TaihuLight featured in last year’s competition or another system altogether. Organizers have likewise not yet revealed the location of the upcoming competition.

Whatever surprises organizers have in store, ASC18 is sure to make waves as it follows past competitions in breaking new ground. The inaugural ASC in 2012 was the first event of its kind to use a world-class supercomputer system as its competition platform, while in 2014 ASC introduced Tianhe-2, then the world’s fastest supercomputer.

ASC has also given participating teams the chance to engage in major international science projects. In 2015, the competition partnered with SKA, the world’s largest radio telescope project. In 2016, ASC used the Gordon Bell Prize-winning numerical simulation of high-resolution waves to present teams with a challenge relating to driverless vehicles.

With its ever-increasing challenges and unmatched opportunities for participants, the ASC has won praise from numerous leading names in the supercomputing field. “The US’ SC is like a marathon, testing participants’ hard work and perseverance; Germany’s ISC is a sprint, testing innovation and adaptability. China’s ASC is a combination of both,” said OrionX partner and HPCwire correspondent Dan Olds, who has covered all three major supercomputing challenges.

Jack Dongarra, founder of TOP500, a ranking of the world’s most powerful supercomputer systems, and researcher at the Oak Ridge National Laboratory and University of Tennessee, has described ASC as having “by far the most intense competition” of any student supercomputer contest he has witnessed.

About ASC

Sponsored by China, ASC is one of the world’s three major supercomputer contests, alongside the US’ SC and Germany’s ISC. Held annually since 2012, the competition is devoted to cultivating young talent in the field of supercomputing.

Source: ASC


HPCAC Announces the 9th Swiss Annual HPC Conference in Collaboration With HPCXXL User Group

Fri, 12/22/2017 - 08:18

LUGANO, Switzerland, Dec. 22, 2017 — The HPC Advisory Council (HPCAC), a leading for-community-benefit, member-based organization dedicated to advancing high-performance computing (HPC) research, education and collaboration through global outreach, has announced the Swiss Annual HPC Conference taking place in Lugano, Switzerland, April 9-12, 2018, hosted by the Swiss National Supercomputing Centre (CSCS). For the first time, the conference will be held in concert with the Winter HPCXXL User Group meeting.

The conference agenda focuses on HPC, Cloud, Containers, Deep Learning, Exascale, OpenStack and more. It draws peers, members and non-members alike from all over the world for three days of immersive talks, tutorials and networking. It also attracts contributed talks from renowned subject matter experts (SMEs).

The first three days organized by HPCAC will focus on the intersection of private and public research and collaborative enablement with a combination of invited talks featuring industry visionaries with sponsor-led usage, innovation and future technology sessions. The fourth day will be entirely dedicated to NDA site updates from HPCXXL members.

Open submissions, currently running through January 31, 2018, enlist experts who define the topic and session type, which can include best-practices talks, workshops, live hands-on tutorials and more. A select volunteer committee drawn from the council’s 400-plus members reviews all of the conference sessions to ensure relevance and interest, helping bring these expert specialists together to network, share their latest findings, and explore the newest technology, techniques, trends and more.

“The conference provides a variety of resources and access to experts to learn and teach and is inclusive of everyone. While we focus on educating, sharing and learning, it’s really about inspiring others,” said Gilad Shainer, HPC Advisory Council Chairman. “The open forum format and consistent delivery of expert content from leaders on the bleeding edge of HPC and forays into AI, for example, is what draws the highly diverse, multi-disciplined global community and curious to Lugano as attendees, as SMEs, as conference sponsors and to HPCAC as members.”

“We are very excited to organize a joint conference here in Lugano, bringing together the communities of HPCAC and HPCXXL,” said Hussein Harake, HPC system manager, CSCS. “We believe that such a collaboration will offer a unique opportunity for HPC professionals to discuss and share their knowledge and experiences.”

“We are enthusiastic about combining our efforts and offering our members broader content while continuing to have frank NDA conversations during the HPCXXL day,” said Michael Stephan, president, HPCXXL.

Registration is required to attend the Swiss Annual HPC Conference which charges a nominal fee of CHF 180 and includes breaks and lunch during the conference along with a special group outing. HPCXXL User Group attendees must also register for the members meeting which charges a separate fee of CHF 90 for break and lunch services during the one day session. Organizers now offer streamlined registration on a single site for all attendees, including sponsors and their delegates, and accommodates registration for one, three and four day schedules. Registration is open through April 1, 2018. Participants can take advantage of the early bird pricing of CHF 90 by registering before February 11, 2018. Deadline for submitting session proposals is January 31, 2018. Additional details and forms can be found on the HPCAC website.

Additional HPCAC Conferences

In addition to the combined four-day 2018 Swiss Conference and HPCXXL User Group event, the HPCAC’s premier European conference, the council hosts multiple annual conferences throughout the world. The 2018 schedule leads off with the two-day U.S.-based Stanford Conference in February, followed by Australia’s two-day conference planned for August, with one-day sessions in Spain in September and China in October to close out the year. The council also supports two major student-focused competitions each year. The annual ISC-HPCAC Student Cluster Competition (SCC), held in partnership with the ISC Group, officially kicked off in November with the reveal of nine of twelve international teams competing next June during the ISC High Performance Conference and Exhibition in Frankfurt, Germany. Launching in May, the annual RDMA competition is a six-month-long programming competition between student teams throughout China that culminates in October with the winning teams revealed at the annual China Conference. Sponsor opportunities are available to support all of these critical education initiatives. More information on becoming a member, conference dates, locations, sponsorships and student competitions is available on the HPC Advisory Council website.

About the HPC Advisory Council

Founded in 2008, the non-profit HPC Advisory Council is an international organization with over 400 members committed to education and outreach. Members share expertise, lead special interest groups and have access to the HPCAC technology center to explore opportunities and evangelize the benefits of high performance computing technologies, applications and future development. The HPC Advisory Council hosts multiple international conferences and STEM challenges including the RDMA Student Competition in China and the Student Cluster Competition at the annual ISC High Performance conferences. Membership is free of charge and obligation. More information: www.hpcadvisorycouncil.com.

Source: HPCAC


HSA Foundation China Regional Committee Wraps Up 2nd Annual Symposium

Thu, 12/21/2017 - 07:56

BEIJING, China, Dec. 21, 2017 — The China Regional Committee (CRC) of the Heterogeneous System Architecture (HSA) Foundation has successfully concluded its 2nd Symposium in Beijing. The CRC was formed earlier this year; its mandate is to enhance the awareness of heterogeneous computing and promote the adoption of standards such as Heterogeneous System Architecture (HSA) in China.

More than 40 representatives of the CRC members and related companies, research institutes and universities throughout China attended the conference. HSA Foundation President Dr. John Glossner also participated in this landmark meeting, which exchanged ideas on topics including interfaces and specifications for the next generation of heterogeneous computing, vector parallel computing models, system security and protection, artificial intelligence, software-defined radio, Network-on-Chip (NoC), and programming of commercial HSA chips. The meeting was co-organized by the China Electronics Standardization Institute (CESI) and the HSA Foundation’s CRC, and sponsored by Huaxia General Processor Technologies.

Last year the HSA Foundation held its first Global Summit in Beijing. The CRC has actively carried out various work in conjunction with CESI for the development of global heterogeneous computing standards with a China focus.

At the meeting, each CRC working group shared its progress and insights on related key technologies:

  • Application & System Evaluation Working Group – “The application situation and development trend of artificial intelligence in China and typical rigid demands and key indicators of artificial intelligence” – presented by State Grid;
  • Virtual ISA Working Group – “Artificial intelligence instruction set design for heterogeneous computing and exploratory research of HSAIL artificial intelligence extended subset” – presented by Dr. Jun Han, Fudan University;
  • Interconnect Working Group – “Latest research results on network-on-chip in the heterogeneous computing SoCs, and the next step verification and standardization work arrangements” – presented by Dr. Zhiyi Yu, Sun Yat-sen University;
  • Compilation & Runtime LIB Working Group – “The latest research trends in vector computing models and related programming models, and basic recommendations for facilitating integration into HSA system architectures” – presented by Dr. Lei Wang, Huaxia General Processor Technologies;
  • System Architecture Working Group – “Using HSA to systematically address the basic views of software-defined communications, software-defined radio, heterogeneous multi-core chip architecture and application development” – presented by Wanting Tian, Sanechips Technology;
  • Security & Protection Working Group – “Research work and principles on adapting heterogeneous computing for security protection” – presented by Shaowei Chen, Nationz Technologies.

The CRC has been adding members since the first CRC Symposium in May; new members include Huaqiao University, Hunan University, Jimei University, Tsinghua University, Xiamen University, Xiamen University of Technology and Zhejiang University.

Dr. John Glossner, HSA Foundation President, said: “The HSA Foundation CRC has been laying the groundwork for standardization progress in heterogeneous computing standards in China for almost a year. It is focused on supporting the needs of HSA Foundation members in China and helping to fulfill the mission of the Foundation, which is to make heterogeneous programming universally easier.”

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation


Fast Forward: Five HPC Predictions for 2018

Thu, 12/21/2017 - 07:00

What’s on your list of high (and low) lights for 2017? Volta V100’s arrival on the heels of the P100? Appearance, albeit late in the year, of IBM’s Power9? Exascale Computing Project leadership shuffling? AMD’s return from the dead in the data center? Scandal at PEZY? Aurora’s stumble? Trump? There’s lots to choose from.

Whether you’re thinking ‘good riddance’ or ‘stay a little longer’ about 2017 (it feels like a year where there’s not a lot in between), it’s probably best to focus on the future. Here are a few 2018 predictions mostly accenting the positive; indeed there is quite a bit to be positive about amid ever-present dark clouds. Along the way there are a few observations about 2017, and links to HPCwire coverage of note.

1. Big Blue Gets Its Mojo Back

Let’s be candid. Since dumping its x86 business, IBM has endured a bumpy ride. Building a new ecosystem has a definite “what were we thinking” level of difficulty. That doesn’t mean it can’t be done. OpenPOWER and IBM have done much that’s right, but getting to payoff is a costly, painful struggle. Power8 systems, despite the much-praised Power instruction set and scattered public support from systems builders and hyperscalers, mostly fizzled; some would argue it was swamped by timing and anticipation of Power9. Pricing may also have played a role.

Three events now seem poised to reenergize IBM and OpenPOWER.

  • First is arrival of the Power9 processor in December. It’s being promoted as a from-the-ground-up AI-optimized chip able to leverage all kinds of accelerator (FPGA, GPU, etc.), high-speed interconnect (NVLink, OpenCAPI, etc.), and high-memory-bandwidth technology. It’s available in IBM’s AC922 server, based on the same architecture as the Department of Energy CORAL supercomputers, Summit and Sierra. The Power9 wait is over.
  • Second, the Aurora project being led by Intel has been delayed. True, it is now scheduled to be the first U.S. exascale machine, deployed in 2021 at Argonne National Laboratory, but it clearly missed its mark as one of the scheduled pre-exascale machines. There’s also an open question as to which processor will be used for Aurora. And 2021 still seems quite distant. Overall Aurora’s trouble is IBM’s serendipity.
  • Third, expectations are high the IBM Summit machine will be stood up and tested in time to top the next Top500 list in June 2018. It’s expected to hit 150-200 petaflops peak performance. That would be a huge boost for IBM and its advanced Power architecture from a public awareness perspective. China has dominated the recent lists (ten consecutive ‘wins’) and the top-performing U.S. machine, Titan, fell from fourth to fifth in November. BTW, Titan is powered by AMD Opteron processors.
IBM Power9 AC922 rendering

IBM will attack the market in force with its ‘Summit-based’ servers. It will also likely get stronger buy-in from the OpenPOWER community, most of whom must still support Intel systems. Power9 system price points are also expected to be more attractive. Finally, with U.S. national competitiveness juices bubbling – amplified by Trump’s ‘America First’ mantra – the current U.S. administration is likely to talk up the IBM Top500 Summit achievement.

Bottom line: Big Blue will start making early hay in the server market (HPC and otherwise) after what must seem like a very long growing season. Time to start the harvest. Also, let’s not forget IBM is a giant in computing generally, with a growing cloud business, a rapidly advancing quantum computing research and commercial program, a neuromorphic chip and research effort, and an extensive portfolio of storage, software, mainframe, and services offerings. (For me, the jury is still out on Watson.) Big Blue is getting its mojo back.

Links to relevant articles:

IBM Begins Power9 Rollout with Backing from DOE, Google

For IBM/OpenPOWER: Success in 2017 = (Volume) Sales

Flipping the Flops and Reading the Top500 Tea Leaves 


2. AMD’s Data Center Revival Looks Good – Don’t Blow It!

AMD has had many lives and it’s crossed swords with Intel over x86 matters (technology and markets) with regularity. Sometimes enough is enough. The company largely abandoned the data center a few years ago for a number of reasons. David versus Goliath doesn’t always end well for David. This year AMD has plunged back in and its bet is a big one that encompasses solid technology, price performance, and as of SC17, considerable commitment from some important systems makers to support the EPYC processor line.

“It’s not enough to come back with one product, you’ve got to come back with a product cadence that moves as the market moves. So not only are we coming back with EPYC, we’re also [discussing follow-on products] so when customers move with us today on EPYC they know they have a safe home and a migration path with Rome,” said Scott Aylor, AMD corporate VP and GM of enterprise solutions business, at the time of the launch.

Lower cost is clearly part of the strategy and AMD has been touting cost-performance comparisons. A portion of the EPYC line has been designed for single-socket servers, which are nearly extinct in the data center these days. AMD argues that around 50 percent of the market buys two-socket solutions because there was no alternative; now, says AMD, there is. In fact, Microsoft Azure recently announced an instance based on a single-socket EPYC solution.

To meet a broad range of applications, AMD is tiering products in 32-, 24-, and 16-core ranges. The top end is aimed at scale-out and HPC workloads. Indeed, AMD showcased the ‘Project 47’ supercomputer at SIGGRAPH over the summer, based on the EPYC 7601 and AMD Radeon Instinct MI25 GPUs. A full 20-server rack of P47 systems achieves 30.05 gigaflops per watt in single-precision performance, but is less impressive on double-precision arithmetic.
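
As a back-of-envelope check on what that efficiency figure implies (assuming roughly 1 petaflops of single-precision performance per 20-node P47 rack, as the linked coverage below suggests), the rack’s power draw works out to roughly 33 kW:

    # Assumption: ~1 petaflops single-precision per 20-node P47 rack.
    rack_flops = 1.0e15              # flops, single precision (assumed)
    flops_per_watt = 30.05e9         # AMD's quoted efficiency figure
    print(f"implied rack power: {rack_flops / flops_per_watt / 1e3:.1f} kW")  # ~33.3 kW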

Bottom line: AMD is back, at least for now, charging into the data center.

Links to relevant articles:

AMD Showcases Growing Portfolio of EPYC and Radeon-based Systems at SC17

AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor

AMD Stuffs a Petaflops of Machine Intelligence into 20-Node Rack

Amazon Debuts New AMD-based GPU Instances for Graphics Acceleration


3. The Quantum Computing Haze will Thicken Not Thin

Ok, I admit it. Quantum computing pretty much baffles me. Despite the mountain of rhetoric surrounding quantum computing, I suspect I am not alone. Universal Quantum Computers. Quantum Adiabatic Annealing Computers. An expanding zoo of qubit types. Qudits. Quantum simulation on classical computers. Quantum Supremacy. Google, Microsoft, IBM, D-Wave, a handful of academic and national lab quantum computing programs. Something called the Chicago Quantum Exchange under David Awschalom, associated with UChicago, Argonne, Fermilab, and located in the Institute for Molecular Engineering.

Feynman would chuckle. The saying is ‘where there’s smoke there’s fire’ and while that’s true enough, the plentiful smoke around quantum computing today is awfully hard to see through. Obviously there is something important going on but how important or when it will be important (let alone mainstream) is very unclear.

An IBM cryostat wired for a prototype 50 qubit system. (PRNewsfoto/IBM)

Philip Ball’s recent Nature piece on quantum supremacy (Race for quantum supremacy hits theoretical quagmire, Nature, 11/14/17) is both informative and entertaining. Quantum supremacy is the stage at which the capabilities of a quantum computer exceed those of any available classical computer. Of course the latter keep advancing.

Ball wrote, “Computer scientists and engineers are rather more phlegmatic about the notion of quantum supremacy than excited commentators who foresee an impending quantum takeover of information technology. They see it not as an abrupt boundary but as a symbolic gesture: a conceptual tool on which to peg a discussion of the differences between the two methods of computation. And, perhaps, a neat advertising slogan.”

Actually there’s a fair amount of good literature on quantum computing. Just a few of the current challenges include size (how many qubits) of current machines, needed error correction, nascent software, decoherence, exotic machines – think supercooled superconductors as an example – optimum qubit types… You get the idea.

Bottom line: The haze surrounding quantum computing’s future won’t lift for a few more years. Maybe a few specific quantum communication applications will emerge sooner.

Links to relevant articles:

Microsoft Wants to Speed Quantum Development

House Subcommittee Tackles US Competitiveness in Quantum Computing

Intel Delivers 17-Qubit Quantum Chip to European Research Partner

IBM Breaks Ground for Complex Quantum Chemistry

Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

IBM Launches Commercial Quantum Network with Samsung, ORNL


4. AI will Continue Sucking the Air Out of the Room

Maybe this is a good thing. AI writ large is blanketing the computer landscape. Its language is everywhere, dominating the marketing landscape. Every vendor, it seems, has an AI box or service or chip(s). More interesting is what’s happening in developing and using ‘AI’ technology. The CANcer Distributed Learning Environment (CANDLE) project – tasked with developing deep learning tools for the war on cancer and putting them to use – is a good example; it has released the early version of its infrastructure on GitHub. This includes algorithms, frameworks, and all manner of relevant tools.

CANDLE has already developed a model able to predict tumor response to drug pairs for a particular cancer type with 93 percent accuracy. The data sets are huge and machine learning is the only way to chew through them to build models. It’s working on a model to handle triplet drug combos. “There will be drugs I predict in clinical trials based on the results that we achieve this year,” Rick Stevens, one of the PIs on CANDLE and a senior researcher at Argonne National Laboratory, told HPCwire at SC17.

There’s a wealth of new (and some rather old) data analytics technology to support AI. New frameworks. Advancing accelerator technology. The rise of mixed-precision machines – Japan’s plan for a 130-petaflops (half-precision) supercomputer by early 2018, called ABCI for AI Bridging Cloud Infrastructure, is a good high-end example. There’s too much to cover here beyond saying AI is a game changer on its own for many applications and will also prove to be incredibly powerful in speeding up traditional floating-point-intensive HPC applications such as molecular modeling.

In a memo to employees this week, Intel CEO Brian Krzanich wrote, “It’s almost impossible to perfectly predict the future, but if there’s one thing about the future I am 100 percent sure of, it is the role of data. Anything that produces data, anything that requires a lot of computing.” AI computing will be an important part of nearly all computing going forward.

Bottom line: Brace for more AI.

Links to relevant articles:

Japan Plans Super-Efficient AI Supercomputer

AI Speeds Astrophysics Image Analysis by 10,000x

Cray Brings AI and HPC Together on Flagship Supers

Nvidia CEO Predicts AI ‘Cambrian Explosion’

Intel Unveils Deep Learning Framework, BigDL


5. The HPC Identity Crisis will Continue in Force (Does it Matter?)

Ok, a better phrasing is what constitutes HPC today and do we even know how many HPC workers there are? We talk about this inside HPCwire all the time. The blending (broadening) of HPC with big data/AI computing is one element. Simple redefinition by fiat is another. Various constituents offer differing perspectives.

“When someone says HPC it means something really specific to traditional HPC folks; it’s tightly coupled, we’ve got some sort of low latency interconnect, parallel file systems, designed to run high performance, highly scalable custom applications. But today, this has changed. HPC has come to mean pretty much any form of scientific computing and as a result, its breadth has grown in terms of what kind of applications we need to support.” – Gregory Kurtzer, Singularity (HPC container software).

Hyperion Research pegs the number of HPC sites in the U.S. at 759 (academic, government, commercial) and suggests there could be around 120,000 HPCers in the U.S. and perhaps a quarter of a million worldwide.

Making sense of the collision between traditional HPC and big data (and finding ways to harmonize the two) has been a hot topic at least since 2015, when it was identified as an objective in the National Strategic Computing Initiative. There’s even been a series of five international workshops (in the US, Japan, the EU and China) on Big Data and Extreme-scale Computing (BDEC), and Jack Dongarra and colleagues working on the project have just issued a report, Pathways to Convergence: Towards a Shaping Strategy for a Future Software and Data Ecosystem for Scientific Inquiry. HPCwire will dig into the report’s findings at a subsequent time.

The point here is that change is overwhelming how HPC is looked at and what it is considered to be. HPC census and market sizing is an ongoing challenge. One astute industry observer noted:

“The idea of framing out the real HPC TAM (total available market) is an interesting one.  If I live in a big DoE facility and run code on the Titan HPC, I know I am an HPC guy. But if I am a car part designer that subs to GM, who uses Autodesk for visualization for the design of a driver’s side mirror, I may not think of myself as such (I sure as hell will not attend  SC17) .

“That and the fact that I saw so many vendors at SC that have products that address some of the less technically aggressive aspects of HPC (i.e. tape storage) that really aren’t HPC specific but that can be relevant to HPC users. So it’s hard to say what the TAM is because reaching out to customers who may be HPC, but don’t move in the HPC world per se is complicated at best.

“Even worse, figuring out how to count marketing dollars that reaches some indeterminant percentage of a loosely defined HPC market is fraught with intrigue.”

Bottom line: The HPC Who-am-I? pathos will continue in 2018 but preoccupation with delivering AI will mute some of the debate.


6. Lesser but Still Interesting 2018/2017 Glimpses

Doug Kothe, ECP director

The container craze will continue because it solves a real problem. ECP, now led by Doug Kothe, will shift into its next gear as the first U.S. pre-exascale machines are stood up. Forget the doubters – the Nvidia juggernaut will keep rolling, though perhaps there won’t be another V100-like blockbuster introduced in 2018. Intel’s impressive Skylake chip line arrived and is in systems everywhere. Vendors’ infatuation with selling so-called easier-to-deploy HPC solutions into the enterprise – think vertical solutions – will fade; they’ve tried selling these but mostly without success for many reasons.

ARM will continue its march into new markets. This topic doesn’t rise to greater prominence here because we still need to see more systems on line, whether at the very high end such as the post-K computer, or sales of ARM server systems such as HPE’s recently introduced Apollo 70, the company’s first ARM-based HPC server. The earlier ARM-based Moonshot offering fared poorly.

Unexpected scandal marred the end of the year with the arrest of PEZY founder, president and CEO Motoaki Saito and another PEZY employee, Daisuke Suzuki, on suspicion of defrauding a government institution of 431 million yen (~$3.8 million). It is unsettling; HPC seems reasonably free of such misbehavior. Maybe that’s my misperception.

It was sad to see what amounts to the end of the line for SPARC with Oracle’s discontinuance of development efforts and related layoffs.

On a positive note: There’s a new book from Thomas Sterling, professor of electrical engineering and director of the Center for Research in Extreme Scale Technologies, Indiana University – High Performance Computing: Modern Systems and Practices, co-written with colleagues Matthew Anderson and Maciej Brodowicz. It’s available now (link to publisher: https://www.elsevier.com/books/high-performance-computing/sterling/978-0-12-420158-3?start_rank=1&sortby=sortByDateDesc&imprintname=Morgan%20Kaufmann).

As always, there was a fair amount of personnel shuffling this year. Diane Bryant left Intel and joined Google. AI pioneer Andrew Ng left his post at Baidu. Intel lured GPU designer Raja Koduri from AMD; he was SVP and chief architect of the Radeon Technologies Group. Meg Whitman is stepping down as CEO of HPE – the company she helped bring into existence by overseeing the split-up of HP – and will be succeeded by Antonio Neri.

Obviously there is so much more to talk about. The HPC world is a vibrant, fascinating place, and a tremendous force in science and society today.

Happy holidays and a hopeful new year to all. On to 2018!


Samsung Now Mass Producing Industry’s First 2nd-Generation, 10-Nanometer Class DRAM

Wed, 12/20/2017 - 15:02

SEOUL, South Korea, Dec. 20, 2017 — Samsung Electronics Co., Ltd., a world leader in advanced memory technology, announced today that it has begun mass producing the industry’s first 2nd-generation 10-nanometer-class (1y-nm), 8-gigabit (Gb) DDR4 DRAM. For use in a wide range of next-generation computing systems, the new 8Gb DDR4 features the highest performance and energy efficiency for an 8Gb DRAM chip, as well as the smallest dimensions.

Samsung 1y-nm 8Gb DDR4 (Photo: Business Wire)

“By developing innovative technologies in DRAM circuit design and process, we have broken through what has been a major barrier for DRAM scalability,” said Gyoyoung Jin, president of Memory Business at Samsung Electronics. “Through a rapid ramp-up of the 2nd-generation 10nm-class DRAM, we will expand our overall 10nm-class DRAM production more aggressively, in order to accommodate strong market demand and continue to strengthen our business competitiveness.”

Samsung’s 2nd-generation 10nm-class 8Gb DDR4 features an approximate 30 percent productivity gain over the company’s 1st-generation 10nm-class 8Gb DDR4. In addition, the new 8Gb DDR4’s performance levels and energy efficiency have been improved about 10 and 15 percent respectively, thanks to the use of an advanced, proprietary circuit design technology. The new 8Gb DDR4 can operate at 3,600 megabits per second (Mbps) per pin, compared to 3,200 Mbps of the company’s 1x-nm 8Gb DDR4.
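
To put the per-pin rates in context, a standard 64-bit DDR4 channel’s theoretical peak bandwidth scales directly with that figure; the short sketch below works out the numbers (standard DDR4 arithmetic, not a Samsung-specific claim beyond the quoted per-pin rates):

    # Peak bandwidth of a 64-bit DDR4 channel at each quoted per-pin rate.
    for mbps_per_pin in (3200, 3600):
        bus_width_bits = 64                             # standard non-ECC module width
        gb_per_s = mbps_per_pin * bus_width_bits / 8 / 1000
        print(f"{mbps_per_pin} Mbps/pin -> {gb_per_s:.1f} GB/s per channel")  # 25.6, 28.8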

To enable these achievements, Samsung has applied new technologies, without the use of an EUV process. The innovation here includes use of a high-sensitivity cell data sensing system and a progressive “air spacer” scheme.

In the cells of Samsung’s 2nd-generation 10nm-class DRAM, a newly devised data sensing system enables a more accurate determination of the data stored in each cell, which leads to a significant increase in the level of circuit integration and manufacturing productivity.

The new 10nm-class DRAM also makes use of a unique air spacer that has been placed around its bit lines to dramatically decrease parasitic capacitance. Use of the air spacer enables not only a higher level of scaling, but also rapid cell operation.

With these advancements, Samsung is now accelerating its plans for much faster introductions of next-generation DRAM chips and systems, including DDR5, HBM3, LPDDR5 and GDDR6, for use in enterprise servers, mobile devices, supercomputers, HPC systems and high-speed graphics cards.

Samsung has finished validating its 2nd-generation 10nm-class DDR4 modules with CPU manufacturers, and next plans to work closely with its global IT customers in the development of more efficient next-generation computing systems.

In addition, the world’s leading DRAM producer expects to not only rapidly increase the production volume of the 2nd-generation 10nm-class DRAM lineups, but also to manufacture more of its mainstream 1st-generation 10nm-class DRAM, which together will meet the growing demands for DRAM in premium electronic systems worldwide.

Source: Samsung


NCSA Faculty Fellow Makes Breakthrough in Protein Prediction Using Deep Learning

Wed, 12/20/2017 - 14:59

Dec. 20, 2017 — Jian Peng, NCSA Faculty Fellow and Assistant Professor in the Department of Computer Science at Illinois, and graduate student Yang Liu, also of the Department of Computer Science, have made a major breakthrough in protein structure prediction using deep learning, with data processed by NCSA’s Blue Waters supercomputer; the work was published in the journal Cell Systems.

Peng’s research explores a more accurate function for evaluating predicted protein structures through his development of the deep learning tool DeepContact. DeepContact automatically leverages local information and multiple features to discover patterns in contact map space and embeds this knowledge within the neural network. Furthermore, in subsequent prediction of new proteins, DeepContact uses what it has learned about structure and contact map space to impute missing contacts and remove spurious predictions, leading to significantly more accurate inference of residue-residue contacts.

Essentially, this tool converts hard-to-interpret coupling scores into probabilities, moving the field toward a consistent process to assess contact prediction across diverse proteins.
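
The following short Python sketch illustrates the general idea in miniature (it is not the published DeepContact architecture, which learns its features with a convolutional neural network): raw, unbounded residue-residue coupling scores are combined with the average score in each pair’s local neighborhood of the coupling map and squashed through a logistic function to yield contact probabilities. The weights below are made-up placeholders.

    # Illustrative only: convert raw coupling scores into contact probabilities
    # using each residue pair's local context in the coupling map.
    import numpy as np

    rng = np.random.default_rng(1)
    n_res = 60                                     # number of residues (toy example)
    coupling = rng.normal(0, 1, (n_res, n_res))
    coupling = (coupling + coupling.T) / 2         # symmetric pairwise scores

    def local_mean(m, radius=2):
        """Mean score in a (2*radius+1)^2 window around each residue pair."""
        padded = np.pad(m, radius, mode="edge")
        out = np.zeros_like(m)
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                out += padded[radius + di:radius + di + m.shape[0],
                              radius + dj:radius + dj + m.shape[1]]
        return out / (2 * radius + 1) ** 2

    w_pair, w_context, bias = 1.5, 0.8, -2.0       # hypothetical learned weights
    logits = w_pair * coupling + w_context * local_mean(coupling) + bias
    contact_prob = 1.0 / (1.0 + np.exp(-logits))   # probabilities in (0, 1)
    print(contact_prob.shape, float(contact_prob.min()), float(contact_prob.max()))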

Applying the existing protein structure prediction algorithms and sampling techniques generates a massive dataset that is then processed and scaled up by the Blue Waters supercomputer. Based on this dataset, Peng hopes to develop a new structure motif-based deep neural network to assess the structural quality of predictions and to strengthen existing structure prediction algorithms.

Peng’s team, iFold, was top-ranked at the 12th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP12) last year. “We greatly improved the prediction accuracy for protein residue contact,” said Peng. “We believe that the improved contact prediction will further help us get closer to the ultimate goal of protein folding.” When proteins coil and fold into specific three-dimensional shapes, they are able to perform their biological function; when proteins misfold, they malfunction, resulting in diseases like Alzheimer’s disease. Peng’s research will use DeepContact to improve models for protein folding, which will facilitate a paradigm shift in protein structure prediction.

Peng plans to collaborate with NCSA affiliate Dr. Matthew Turk, using NCSA’s high-performance CPU and GPU resources and expanding on more efficient distributed implementations to accelerate both structure generation and training of deep neural networks.

Earlier this year, NCSA was awarded a $2.7 million grant from the National Science Foundation for deep learning research, which included Peng as a co-PI.

About the National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About the Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: NCSA


Intel to Take More Risks, Declares CEO in Memo

Wed, 12/20/2017 - 14:08

In a year-end call-to-arms memo sent to all Intel employees, CEO Brian Krzanich declared that “the new normal” at Intel will be change and risk-taking. According to a published story by CNBC, Krzanich said the company faces “an exciting challenge” in strategic, or “new growth,” markets (connected devices, AI, autonomous driving) where other companies have forged ahead of the chip giant.

Krzanich said Intel is on the path to transitioning from being a client computing (PC)-first business.

“We’re just inches away from being a 50/50 company, meaning that half our revenue comes from the PC and half from new growth markets,” he wrote. “In many of these new markets we are definitely the underdog. That’s an exciting challenge – it requires that we develop and use new, different muscles.

“The new normal for Intel is that we are going to take more risks,” he continued. “The new normal is that we will continue to make bold moves and try new things. We’ll make mistakes. Bold doesn’t always mean right or perfect. The new normal is that we’ll get good at trying new things, determining what works and moving forward.”

Driving Krzanich’s vision is his determination that Intel become a data driven company – i.e., focused on delivering technologies that support data-intensive workloads.

Brian Krzanich

“It’s almost impossible to perfectly predict the future, but if there’s one thing about the future I am 100 percent sure of, it is the role of data,” he said. “Data is becoming the most valuable asset for any company. That’s why our growth strategy is centered on data: memory, FPGAs, IOT, artificial intelligence, autonomous driving. Anything that produces data, anything that requires a lot of computing, the vision is, we’re there. I believe almost everything that impacts our lives—whether it’s healthcare or driving, retail or government—it will all be touched by our technology over the next 5 to 10 years. The world will run on Intel silicon.”

For a historical reference to the change Intel is undergoing now, he drew a parallel to the company’s wrenching decision to abandon the DRAM business and move into the processor business in the mid-1980s, not long after Intel hired Krzanich.

“…I watched as Intel made a massive shift. It required downsizing, new investments, and a lot of change. Yet in December 1997—20 years ago this month—Time magazine named then-Intel CEO Andy Grove its Man of the Year,” he said. “Under his leadership, Intel had transformed from embattled memory maker to the world’s leading microprocessor company and a leader of the digital revolution. Two decades later, Intel is again reinventing itself…”

Krzanich took over the top spot at Intel in 2013. Since then, the company has engaged in a series of acquisitions that has broadened its portfolio and market range, including Mobileye (self-driving car technology), Movidius (computer vision), Nervana (AI-specific processors) and Altera (FPGAs).

“We have the opportunity to look to the future and embrace a new Intel,” Krzanich said, “a different, fast-paced, global enterprise that’s adapting and growing.”

The post Intel to Take More Risks, Declares CEO in Memo appeared first on HPCwire.

XSEDE Systems Stampede1 and Comet Help Sample Protein Folding in Bone Regeneration Study

Wed, 12/20/2017 - 08:51

Dec. 20, 2017 — Some secrets to repair our skeletons might be found in the silky webs of spiders, according to recent experiments guided by supercomputers. Scientists involved say their results will help understand the details of osteoregeneration, or how bones regenerate.

A study found that genes could be activated in human stem cells that initiate biomineralization, a key step in bone formation. Scientists achieved these results with engineered silk derived from the dragline of golden orb weaver spider webs, which they combined with silica. The study appeared in September 2017 in the journal Advanced Functional Materials and is the result of a combined effort by three institutions: Tufts University, the Massachusetts Institute of Technology and Nottingham Trent University.

XSEDE supercomputers Stampede at TACC and Comet at SDSC helped study authors simulate the head piece domain of the cell membrane protein receptor integrin in solution, based on molecular dynamics modeling. (Davoud Ebrahimi)

Study authors used the supercomputers Stampede1 at the Texas Advanced Computing Center (TACC) and Comet at the San Diego Supercomputer Center (SDSC) at the University of California San Diego through an allocation from XSEDE, the eXtreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The supercomputers helped scientists model how the cell membrane protein receptor called integrin folds and activates the intracellular pathways that lead to bone formation. The research will help larger efforts to cure bone growth diseases such as osteoporosis or calcific aortic valve disease.

“This work demonstrates a direct link between silk-silica-based biomaterials and intracellular pathways leading to osteogenesis,” said study co-author Zaira Martín-Moldes, a post-doctoral scholar at the Kaplan Lab at Tufts University. She researches the development of new biomaterials based on silk. “The hybrid material promoted the differentiation of human mesenchymal stem cells, the progenitor cells from the bone marrow, to osteoblasts as an indicator of osteogenesis, or bone-like tissue formation,” Martín-Moldes said.

“Silk has been shown to be a suitable scaffold for tissue regeneration, due to its outstanding mechanical properties,” Martín-Moldes explained. It’s biodegradable. It’s biocompatible. And it’s fine-tunable through bioengineering modifications. The experimental team at Tufts University modified the genetic sequence of silk from golden orb weaver spiders (Nephila clavipes) and fused it with the silica-promoting R5 peptide, derived from the silaffin gene of the diatom Cylindrotheca fusiformis.

The bone formation study targeted biomineralization, a critical process in materials biology. “We would love to generate a model that helps us predict and modulate these responses both in terms of preventing the mineralization and also to promote it,” Martín-Moldes said.

“High performance supercomputing simulations are utilized along with experimental approaches to develop a model for the integrin activation, which is the first step in the bone formation process,” said study co-author Davoud Ebrahimi, a postdoctoral associate at the Laboratory for Atomistic and Molecular Mechanics of the Massachusetts Institute of Technology.

Integrin embeds itself in the cell membrane and mediates signals between the inside and the outside of cells. In its dormant state, the head unit sticking out of the membrane is bent over like a nodding sleeper. This inactive state prevents cellular adhesion. In its activated state, the head unit straightens out and is available for chemical binding at its exposed ligand region.

“Sampling different states of the conformation of integrins in contact with silicified or non-silicified surfaces could predict activation of the pathway,” Ebrahimi explained. Sampling the folding of proteins remains a classic, computationally expensive problem, despite recent, large-scale efforts to develop new algorithms.

The derived silk–silica chimera they studied weighed in at a hefty 40 kilodaltons or so. “In this research, what we did in order to reduce the computational costs, we have only modeled the head piece of the protein, which is getting in contact with the surface that we’re modeling,” Ebrahimi said. “But again, it’s a big system to simulate and can’t be done on an ordinary system or ordinary computers.”

The computational team at MIT used the molecular dynamics package Gromacs, chemical simulation software available on both the Stampede1 and Comet supercomputing systems. “We could perform those large simulations by having access to XSEDE computational clusters,” he said.
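
As a rough illustration of the kind of post-processing such trajectories invite, the sketch below uses the open-source MDAnalysis library to walk through a Gromacs trajectory and track how compact the protein is in each frame. The file names and the use of radius of gyration as a bent-versus-extended proxy are assumptions for illustration, not the analysis performed in the published study.

    # Illustrative only: file names and the radius-of-gyration metric are
    # assumptions, not the analysis used in the published study.
    import MDAnalysis as mda

    # Typical Gromacs outputs: a structure file and a compressed trajectory
    u = mda.Universe("integrin_headpiece.gro", "integrin_headpiece.xtc")
    headpiece = u.select_atoms("protein")

    compactness = []
    for ts in u.trajectory:
        # A smaller radius of gyration suggests the bent, inactive conformation;
        # larger values suggest the extended, activated state.
        compactness.append((ts.time, headpiece.radius_of_gyration()))

    for time_ps, rgyr in compactness[:5]:
        print(f"t = {time_ps:8.1f} ps   Rg = {rgyr:6.2f} Å")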

“I have a very long-standing positive experience using XSEDE resources,” said Ebrahimi. “I’ve been using them for almost 10 years now for my projects during my graduate and post-doctoral experiences. And the staff at XSEDE are really helpful if you encounter any problems. If you need software that should be installed and it’s not available, they help and guide you through the process of doing your research. I remember exchanging a lot of emails the first time I was trying to use the clusters, and I was not so familiar. I got a lot of help from XSEDE resources and people at XSEDE. I really appreciate the time and effort that they put in order to solve computational problems that we usually encounter during our simulation,” Ebrahimi reflected.

Computation combined with experimentation helped advance work in developing a model of osteoregeneration. “We propose a mechanism in our work,” explained Martín-Moldes, “that starts with the silica-silk surface activating a specific cell membrane protein receptor, in this case integrin αVβ3.” She said this activation triggers a cascade in the cell through three mitogen-activated protein kinase (MAPK) pathways, the main one being the c-Jun N-terminal kinase (JNK) cascade.

She added that other factors are also involved in this process such as Runx2, the main transcription factor related to osteogenesis. According to the study, the control system did not show any response, and neither did the blockage of integrin using an antibody, confirming its involvement in this process. “Another important outcome was the correlation between the amount of silica deposited in the film and the level of induction of the genes that we analyzed,” Martín-Moldes said. “These factors also provide an important feature to control in future material design for bone-forming biomaterials.”

“We are doing a basic research here with our silk-silica systems,” Martín-Moldes explained. “But we are helping in building the pathway to generate biomaterials that could be used in the future. The mineralization is a critical process. The final goal is to develop these models that help design the biomaterials to optimize the bone regeneration process, when the bone is required to regenerate or to minimize it when we need to reduce the bone formation.”

These results help advance the research and are useful in larger efforts to help cure and treat bone diseases. “We could help in curing diseases related to bone formation, such as calcific aortic valve disease or osteoporosis, for which we need to know the pathway to control the amount of bone formed, to either reduce or increase it,” Ebrahimi said.

“Intracellular Pathways Involved in Bone Regeneration Triggered by Recombinant Silk–Silica Chimeras,” DOI: 10.1002/adfm.201702570, appeared September 2017 in the journal Advanced Functional Materials. The National Institutes of Health funded the study, and the National Science Foundation through XSEDE provided computational resources. The study authors are Zaira Martín-Moldes, Nina Dinjaski, David L. Kaplan of Tufts University; Davoud Ebrahimi and Markus J. Buehler of the Massachusetts Institute of Technology; Robyn Plowright and Carole C. Perry of Nottingham Trent University.

Source: Jorge Salazar, TACC

The post XSEDE Systems Stampede1 and Comet Help Sample Protein Folding in Bone Regeneration Study appeared first on HPCwire.

HPC Advisory Council and Stanford University Announce 8th HPC and AI Conference

Wed, 12/20/2017 - 08:26

SUNNYVALE, Calif., Dec. 20, 2017 — The HPC Advisory Council (HPCAC), a leading for-community-benefit, member-based organization dedicated to advancing high-performance computing (HPC) research, education and collaboration through global outreach, and Stanford University today announced that the 2018 HPCAC-Stanford Conference will take place February 20-21, 2018, at Stanford University’s Munger Conference Center/Paul Brest Hall. The annual California-based conference draws world-class experts for two days of thought leadership talks and immersive tutorials focusing on emerging trends, with extensive coverage of AI, Data Sciences, HPC, Machine Learning and more.

February 2018 will mark eight years of the council’s collaboration with Stanford University’s High Performance Computing Center (HPCC). The Silicon Valley location, Stanford’s intimate venue and the draw of savvy participants have made the Stanford Conference a go-to annual forum where HPC and AI users and vendors showcase their latest developments and offerings. The council has a consistent history of delivering content-rich agendas and expert speakers and of attracting broad participation from across the highly diverse, distributed, multi-disciplined community and beyond. Focused on topics of great societal impact and responsibility, the conference combines invited talks featuring industry visionaries with sponsor-led sessions on usage, innovation and future technology. It also attracts contributed talks from renowned subject matter experts (SMEs).

“HPC drives most aspects of new discoveries, innovations and breakthroughs,” said Gilad Shainer, HPC Advisory Council chairman. “The conference’s open format and broad accessibility gives attendees a forum for learning about new and emerging technologies and sharing best practices to further their own works. By gaining early insight into the rapid changes taking place and exploring areas where we can make a difference together, everyone benefits.”

“The Stanford Conference is an intimate gathering of the global HPC community who come together to collaborate and innovate the way to the future,” said Steve Jones, Director of Stanford’s High Performance Computing Center. “SMEs, mentors, students, peers and professionals, representing a diverse range of disciplines, interests and industry, are drawn to the conference to learn from each other and leave collectively inspired to contribute to making the world a better place.”

Registration is required to attend the annual Stanford Conference, which is open to the public and free of charge. Registration is open through Feb. 16, 2018. Information on session submissions and sponsorship proposals can be found on the conference pages of the HPCAC website.

In addition to its premier Stanford Conference, the organization’s only U.S.-based forum, the council hosts multiple conferences around the world each year. The 2018 schedule continues in April with the first four-day, combined session of the annual Swiss Conference and HPCXXL meeting in Lugano, Switzerland; Australia’s two-day conference is planned for August; and one-day sessions will be held in Spain in September and China in October to close out the year. The council also supports two major student-focused competitions each year. The annual ISC-HPCAC Student Cluster Competition (SCC), in partnership with the ISC Group, officially kicked off in November with the reveal of nine of the twelve international teams competing next June during the ISC High Performance Conference and Exhibition in Frankfurt, Germany. Kicking off in May, the annual RDMA competition is a six-month programming competition among student teams throughout China that culminates in October, with the winning teams revealed at the annual China Conference. Sponsor opportunities are available to support all of these critical education initiatives. More information on becoming a member; conference dates, locations and sponsorships; and student competitions and sponsorships is available on the HPCAC website.

About the HPC Advisory Council

Founded in 2008, the non-profit HPC Advisory Council is an international organization with over 400 members committed to education and outreach. Members share expertise, lead special interest groups and have access to the HPCAC technology center to explore opportunities and evangelize the benefits of high performance computing technologies, applications and future development. The HPC Advisory Council hosts multiple international conferences and STEM challenges including the RDMA Student Competition in China and the Student Cluster Competition at the annual ISC High Performance conferences. Membership is free of charge and obligation. More  information: www.hpcadvisorycouncil.com.

Source: HPC Advisory Council

The post HPC Advisory Council and Stanford University Announce 8th HPC and AI Conference appeared first on HPCwire.

CHPC National Conference in Pretoria, South Africa

Wed, 12/20/2017 - 08:00

The 11th South African Centre for High Performance Computing (CHPC) Annual National Conference convened Dec. 3-7, 2017 at the Velmore Estate Hotel south of Pretoria.

CHPC, the South African Research and Education Network (SANReN) and the Data Intensive Research Initiative of South Africa (DIRISA) showcased a broad range of resources and human capital development programs that supported the conference theme, “HPC Convergence with Novel Application Models for Better Service to Research and Industry.”

The event was officially opened by Phil Mjwara, Director General of the South African National Department of Science and Technology, and Hina Patel, Executive Director of the Council for Scientific and Industrial Research (CSIR) Meraka Institute. The CHPC National Conference was called to order by CHPC Director Happy Sithole.

More than 450 delegates, including 132 students (72 competitors and 60 poster presenters), registered for the five-day event, which included two full days of workshops and tutorials, student challenges, a student poster session, plenary addresses, birds-of-a-feather sessions, and the annual co-located meetings of the Southern African Development Community (SADC) Cyberinfrastructure Collaboration Forum and the CHPC Industry Forum.

Southern African Development Community (SADC) Cyberinfrastructure Collaborative Forum

The SADC Forum first met during the 2012 CHPC National Conference. This year’s session was chaired by Mmampei Chaba (Chief Director: Multilateral and Africa, South Africa Department of Science and Technology), with SADC Secretariat Anneline Morgan.

Most delegates work for universities or government agencies that advise their national Ministries of Science and Technology as well as Information, Communication and Technology (ICT). Some are researchers, and many have teaching obligations. Among Forum goals is to develop a framework for a shared cyberinfrastructure ecosystem in the broader, sub-Saharan region. Forum delegates are also able to attend CHPC sessions where they learn useful skills and build professional networks.

SADC delegates from South Africa, Botswana, Mozambique, Namibia, Seychelles, Swaziland, Zambia, and Zimbabwe were present, and SADC announced the addition of a 16th country, Comoros, which will be the sub-Saharan region’s fourth island nation (with Madagascar, Mauritius and Seychelles).

Delegates offered a brief presentation about their national cyberinfrastructure, highlights from the past year, challenges and future plans. Advisers from South African, European and U.S. organizations were present to help those who are just getting started progress quicker by learning from the successes and setbacks that others have endured over the past decades.

One adviser, Thomas Sterling (Indiana University/CREST-US), had great advice for SADC delegates, “Don’t shoot for the moon; aim for where the moon will be when you get there!”

Dr. Sterling also led an HPC Workforce Development workshop, where he provided a sneak peek at his new book, “High Performance Computing: Modern Systems and Practices.” The publication was several years in the making and reflects more than 20 years of teaching experience and pedagogical lessons learned by Sterling and his collaborators. It includes course notes, videos and the opportunity to engage with real-time, online support. Because the book’s intellectual contributions were supported by the US National Science Foundation (US-NSF), it costs less than US$100. Since it’s affordable and self-paced, it will be extremely useful for those who wish to become HPC engineers but can’t afford time off to train or travel.

Several SADC delegates also attended STEM-Trek workshops in 2016 and 2017 that were co-located with the annual Supercomputing Conference in the U.S. This year, with support from the US-NSF, Google, Corelight, the SC Conference Chair, and others, 15 SADC delegates from seven countries attended a cybersecurity-themed workshop in Denver, Colorado-US November 11-18, 2017.

Understanding Risk in Shared CyberEcosystems (URISC@SC17) in Denver, Colorado-US.

Many SADC sites were beneficiaries of the Ranger HPC donation by the CHPC, a system that was originally funded by the US-NSF in 2008, decommissioned by the University of Texas in 2012, and donated to South Africa. Twenty-five Ranger racks were split into small, stand-alone clusters. The clusters, and a supply of spare parts, were then donated to universities in 12 locations throughout South Africa and the SADC region, where they’re used for education and light research. Additional sites inherited a similar class of hardware donated by the University of Cambridge in 2014. A new donation will arrive soon—the US-NSF-funded Stampede system, decommissioned in 2016 and donated by the Texas Advanced Computing Center via the University of Texas. Stampede will replace end-of-life Ranger systems and expand the number of sites participating in the HPC Ecosystems project led by Bryan Johnston (CHPC Advanced Computer Engineering (ACE) Lab Senior Technologist and Lecturer).

SANReN Cyber Security Challenge

For the first time, SANReN hosted a Cyber Security Challenge (CSC), sponsored by the CSIR Meraka Institute.

The network security-focused challenge allowed students to decrypt passwords, geolocate pictures, secure websites, find information from TCP traffic and extract weak security keys. For the competitive finale, students participated in a live hacking scenario where they had to defend their own network infrastructure from competitors’ attacks.

It began last summer with a call for participation, and an elimination round that drew more than 100 students. Thirty-one were chosen to attend the second round in December. Six universities were represented, including North West University, the University of Pretoria, Rhodes University, University of Stellenbosch, University of the Witwatersrand, and the University of the Western Cape.

CHPC Student Cluster Competition

This was the sixth year for the CHPC Student Cluster Competition (SCC). Thanks to Dell-EMC and Mellanox hardware donations, plus Dell’s generous program support, the winning South African team will train for a week in Austin, Texas, and then journey to Frankfurt, Germany next summer to compete in the International Supercomputing Conference Student Cluster Competition (ISC-SCC) where South African teams have placed first or second since they first began to compete in 2010.

But the opportunity to compete in the ISC-SCC means much more to South Africa than a chance to win the coveted first prize. ISC rules allow students to compete for as many as four years running, so by the time they arrive in Frankfurt, some competitors from other countries have participated in as many as ten or more competitions. With this added exposure, it’s easier to develop the stellar skills that winning teams typically demonstrate.

South Africa, however, imposes two self-limiting rules on its program. Each year, an entirely new team is chosen, and the organizing committee makes every effort to engage the broadest possible number of schools. This ensures that students from disadvantaged backgrounds, and from demographics underrepresented in advanced computational and data science fields, have a chance to get their foot in the door. “Many of our finalists have only recently become acquainted with Linux and have never competed; their learning curve is much steeper,” said Competition Organizer David Macleod (CHPC ACE Lab).

The process begins each summer. This year in June, 120 students applied, and 80 students were chosen—20 four-person teams—to participate in the first-round competition at Stellenbosch University in July. Forty students from the top teams returned to compete in the ‘South African Champs’ competition at the Velmore Estate in December.

SCC finalists represented six universities, including Rhodes University, University of Limpopo, Stellenbosch University, Khulisa Academy, Wits University and University of the Free State. Choosing a diverse final team was easier since the initial cohort was extremely diverse: 38 percent female, 50 percent black, 10 percent Indian and 3 percent Asian. Most student competitors are pursuing computer science or electrical engineering degrees. “Each year an increasing number of “long-tail” disciplines are represented, including some that are pursuing bachelor of arts degrees,” said Macleod.

The Dell Development Fund, with help from the CHPC, sponsored a team from Khulisa Academy which prepares disadvantaged students for postsecondary education and careers. Four Khulisa contenders, including two women, are in their second year at KA where they entered after completing the twelfth grade.

David Macleod (far right, blue checked shirt), with 2017 Student Cluster Competition competitors at the Velmore Estate Hotel.

The CHPC organizing team includes: David Macleod (ACE Lab Manager, Competition Organizer); Matthew Cawood (ACE Lab Engineer, Lecturer, Tutor and Benchmark Guy); Israel Tshililo (ACE Lab Engineer, Tutor); Bryan Johnston (ACE Lab Senior Technologist, Lecturer); Sakhile Matsoka (ACE Lab Engineer, Tutor); and John Poole (CHPC BSP Manager, Lecturer).

There were even younger faces at the CHPC conference this year since students from five regional grammar schools participated in a special CHPC field trip. The children listened to high-tech plenary addresses, visited vendor booths, and watched the cluster and cybersecurity competitors in action. It was excellent exposure for South Africa’s youngest prospects who got a glimpse of what it’s like to enter the high-tech workforce pipeline.

And the winners are!

University of the Witwatersrand (Wits) teams captured first, second and third place at the contest. The winning team of four, plus two selected standout individuals and two reserves selected by the judges from other teams, will travel to Frankfurt to compete in the ISC-SCC next June.

While lead trainer Macleod (left, checked shirt) looks on, Dr. Sithole (Director, CHPC and Meraka Institute) announces the winning team that will travel to Frankfurt. The winning team members each received a Dell laptop as first prize.
From left: Nathan Michlo (Wits, HPC Club); Sharon Evans (Wits, Giga Biters); Zubair Bulbilia (Wits, Gekko); Njabulo Sithole (University of Limpopo, Phoenix Bit); Katleho Mokoena (Wits, Wits1); Meir Rosendorf (Wits, Wits1); Kimessha Paupamah (Wits, Wits1); and Joshua Bruton (Wits, Wits1).

The SANReN Cyber Security Challenge Winners

Team Bitphase from Stellenbosch University won first prize in the SANReN Cyber Security Challenge.

Prizes were presented by Ajay Makan (SANReN), left, and Renier Van Heerden (SANReN), far right. Student winners, from left: Jonathan Botha, Joseph Rautenbach, Luke Joshua and Nicolaas Weideman. Teams “Awesome Source” and “H5N1” from the University of Pretoria won second and third place, respectively. Team H5N1 received the “Snowden Prize” for successfully launching a social engineering attack against fellow teams.

About CHPC

The CHPC is one of three primary pillars of the national integrated cyberinfrastructure intervention supported by the Department of Science and Technology (DST). The South African National Research Network (SANReN) and the Data Intensive Research Initiative of South Africa (DIRISA) complement the CHPC through the provision of high-speed, high-bandwidth connectivity and the effective curation of a variety of notably large and critical databases. The CHPC infrastructure is updated and maintained meticulously to comply with international standards.

The 2017 CHPC National Conference and student events were supported by Dell-EMC, Dell Foundation, Mellanox, Altair, Eclipse Holdings, and Bright Computing. The photography in this article is by Lawrette McFarlane Photography.

The post CHPC National Conference in Pretoria, South Africa appeared first on HPCwire.

BOXX Technologies Appoints Lorne Wilson as Vice President of Sales

Wed, 12/20/2017 - 07:49

AUSTIN, Tex., Dec. 20, 2017 — BOXX Technologies, a leading innovator of high-performance computer workstations, rendering systems, and servers, today announced the appointment of Lorne Wilson as Vice President of Sales. As BOXX expands into the deep learning marketplace, Wilson is responsible for all direct and indirect global sales functions and is entrusted to grow strategic customer accounts and partner relationships which tactically meld with core BOXX business goals.

“As we near the close of a record-setting profit year highlighted by our acquisition of Cirrascale and subsequent expansion into the deep learning market, it’s imperative that our sales team be guided by an experienced, innovative leader,” said BOXX CEO Rick Krause. “We are excited about Lorne’s direct and indirect sales plans and believe that he is the professional capable of driving our sales success to even greater heights.”

By acquiring Cirrascale Corporation, a premier developer of multi-GPU servers and cloud solutions designed for deep learning infrastructure, BOXX solidified its position as the leader in multi-GPU computer technology. The addition of deep learning and artificial intelligence (AI) to its line of multi-GPU solutions for VFX, animation, motion media, architecture, engineering, and other 3D design markets, resulted in unprecedented growth requiring a new level of world-class enterprise sales leadership. Upon accepting the position, Wilson moved quickly, launching an initiative to scale BOXX sales infrastructure, including technical and additional sales resources that will play an essential role in future revenue growth.

“It’s a privilege to join BOXX at a time when the server, workstation, and rendering system markets are experiencing tremendous growth in part due to advancements in deep learning and Al applications,” said Wilson. “With a firm commitment to sales presence expansion essential for growth, I am confident of meeting our 2018 sales objectives and taking BOXX to the next level.”

Prior to joining BOXX, Wilson held executive sales and marketing roles at both startups and large enterprise organizations. Previously, he served as Chief Sales Officer at cyber security provider BluStor. In addition, Wilson spent twelve years with Fujitsu as Senior Vice President of Sales, Marketing, and New Product Development.

About BOXX Technologies

BOXX is a leading innovator of high-performance computer workstations, rendering systems, and servers for engineering, product design, architecture, visual effects, animation, deep learning, and more. For 21 years, BOXX has combined record-setting performance, speed, and reliability with unparalleled industry knowledge to become the trusted choice of creative professionals worldwide. For more information, visit www.boxx.com.

Source: BOXX Technologies

The post BOXX Technologies Appoints Lorne Wilson as Vice President of Sales appeared first on HPCwire.

As Exascale Frontier Opens, Science Application Developers Share Pioneering Strategies

Tue, 12/19/2017 - 16:16

In November 2015, three colleagues representing the US Department of Energy (DOE) Office of Science’s three major supercomputing facilities struck up a conversation with a science and technology book publisher about preparing a publication on the future of application development in anticipation of pre-exascale and exascale supercomputers and the challenges posed by such systems.

Two years later, the fruits of that discussion became tangible in the form of a new book, which debuted at SC17. Exascale Scientific Applications: Scalability and Performance Portability captures programming strategies being used by leading experts across a wide spectrum of scientific domains to prepare for future high-performance computing (HPC) resources. The book’s initial collaborators and eventual coeditors are Tjerk Straatsma, Scientific Computing Group leader at the Oak Ridge Leadership Computing Facility (OLCF); Katerina Antypas, Data Department Head at the National Energy Research Scientific Computing Center (NERSC); and Timothy Williams, Deputy Director of Science at the Argonne Leadership Computing Facility (ALCF).

Twenty-four teams, including many currently participating in early science programs at the OLCF, ALCF, and NERSC, contributed chapters on preparing codes for next-generation supercomputers, in which they summarized approaches to make applications performance portable and to develop applications that align with trends in supercomputing technology and architectures.

In this interview, Straatsma, Antypas, and Williams discuss the significance of proactive application development and the benefits this work portends for the scientific community.

Tjerk Straatsma

How did this book come to be written?

Tjerk Straatsma: When we proposed writing the book, the intent was to provide application developers with an opportunity to share what they are doing today to take advantage of pre-exascale machines. These are the people doing the actual porting and optimization work. Through their examples, we hope that others will be inspired and get ideas about how to approach similar problems for their applications to do more and better science.

For quite some time, the three DOE ASCR [Advanced Scientific Computing Research] supercomputing facilities have been the leaders when it comes to working on performance portability for science applications. For our users, it’s very important that they can move from one system to another and continue their research at different facilities. That’s why DOE is very much interested in the whole aspect of portability—not just architectural portability but also performance portability. You want high performance on more than just a single system.

Katerina Antypas

Katerina Antypas: As the three of us discussed the different application readiness programs within our centers, it was clear that despite architectural differences between the systems at each center, the strategies to optimize applications for pre-exascale systems were quite similar. Sure, if a system has a GPU, a different semantic might be needed, but the processes of finding hot spots in codes, increasing data locality, and improving thread scalability were the same. And in fact, teams from NERSC, OLCF, and ALCF talked regularly about best practices and lessons learned preparing applications. We thought these lessons learned and case studies should be shared more broadly with the rest of the scientific computing community.

Timothy Williams: Nothing instructs the developer of scientific applications more clearly than an example. Capturing the efforts of our book’s authors as examples was an idea that resonated with us. Measuring and understanding the performance of applications at large scale is key for those developers, so we were glad we could include discussions about some of the tools that make that possible across multiple system architectures. Libraries supporting functions common to many applications, such as linear algebra, are an ideal approach to performance portability, so it made good sense to us to include this as a topic as well.

Tim Williams

Why is it important for these programming strategies to be shared now?

Straatsma: It’s important because DOE’s newest set of machines is starting to arrive. In 2016, NERSC delivered Cori, which comprises 9,688 Intel Xeon Phi Knights Landing processors, each with 68 cores. As we speak, the OLCF is building Summit—which will be around eight times more powerful than our current system, Titan, when it debuts in 2018. The ALCF is working to get its first exascale machine, Aurora, and the OLCF and NERSC are already working on the machines to follow their newest systems, at least one of which is likely to be an exascale machine.

It takes a long time to prepare codes for these new machines because they are becoming more and more complex. Hierarchies of processing elements, memory space, and communication networks are becoming more complex. Effectively using these resources requires significant effort porting applications. If you do that in a way that makes them portable between current machines, there’s a better chance that they will also be portable to future machines—even if you don’t know exactly what those systems will look like.

This is what this book is all about: providing a set of practical approaches that are currently being used by application development teams with the goal of getting applications to run effectively on future-generation architectures.

Antypas: There are three key technologies that applications need to take advantage of to achieve good performance on exascale systems: longer vector units, high bandwidth memory, and many low-powered cores. Regardless of vendor or specific architecture, future exascale systems will all have these features. The pre-exascale systems being deployed today—Cori at NERSC, Theta at ALCF, and Summit at OLCF—have early instances of exascale technologies that scientists can use to optimize their applications for the coming exascale architectures. Preparing applications for these changes now means better performing codes today and a smoother transition to exascale systems tomorrow.

Williams: Exascale computing is coming to the US in an accelerated timeframe—by 2021. This makes the work on applications, tools, and libraries documented in this book all the more relevant. Today is also a time of extraordinary innovation in both hardware and software technologies. Developing applications that are up to today’s state of the art, and well-positioned to adapt to those new technologies, is effort well spent.
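
To make Antypas’s point about vector units and data locality concrete, the toy sketch below contrasts an element-by-element loop with a whole-array expression that vectorized libraries can map onto wide vector units while streaming contiguously through memory. It is purely illustrative — the array size and the NumPy-based formulation are assumptions, not drawn from any code discussed in the book.

    # Toy illustration of vector-friendly code; the array size is arbitrary.
    import numpy as np
    import time

    n = 2_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Scalar-style loop: one element at a time, little use of vector units
    t0 = time.perf_counter()
    c_loop = np.empty(n)
    for i in range(n):
        c_loop[i] = 2.0 * a[i] + b[i]
    t_loop = time.perf_counter() - t0

    # Whole-array expression: maps onto wide vector units and streams
    # contiguously through memory (better data locality)
    t0 = time.perf_counter()
    c_vec = 2.0 * a + b
    t_vec = time.perf_counter() - t0

    print(f"loop: {t_loop:.3f} s   vectorized: {t_vec:.4f} s")
    assert np.allclose(c_loop, c_vec)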

What other major challenges are science and engineering application developers grappling with?

Straatsma: The biggest challenge is expressing parallelism across millions and millions—if not billions—of compute elements. That’s an algorithmic challenge. Then you have the hardware challenge, mapping those algorithms on to the specific hardware that you are targeting. Whether you have NVIDIA GPUs as accelerators together with IBM Power CPUs like on Summit or you’re looking at NERSC’s Cori system with its Intel Knights Landing processors, the basic story is the same: Taking the parallelism you’ve expressed and mapping it on to that hardware.

It’s a tall order, but, if done right, there is an enormous payoff because things that are being developed for these large pre-exascale machines tend to also lead to more efficient use of traditional architectures. In that sense, we’re at the forefront of the hardware with these machines, but we’re also at the forefront of the software. The benefits trickle down to the wider community.

Antypas: Besides the challenges associated with expressing on-node parallelism and improving data locality, scientists are grappling with the huge influx of data from experiments and observational facilities such as light sources, telescopes, sensors, and detectors, and how to incorporate data from these experiments into models and simulations. In the not too distant past, workflows started and ended within a supercomputing facility. Now, many user workflows start from outside of a computing facility and end with users needing to share data with a large collaboration. Data transfer, management, search, analysis, and curation have become large challenges for users.

Williams: Whether you view it as a challenge or an opportunity is a matter of perspective, but those developers who are themselves computational scientists are now more tightly coupled to the work of experimentalists and theorists. They are increasingly codependent. For example, cosmological simulations inform observational scientists of specific signs to look for in sky surveys, given an assumed set of parameter values for theoretical models. Particle-collider event simulations inform detectors at the experiment about what to look for, and what to ignore, in the search for rare particles—before the experiment is run.
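
For readers unfamiliar with what Straatsma means by expressing parallelism and mapping it onto hardware, the sketch below shows the simplest common pattern — decompose the work across ranks, then combine results with a global reduction — using the mpi4py bindings. It is a generic illustration built around a trivial quadrature for pi, not code from any of the book’s application teams.

    # Generic MPI illustration (mpi4py); run with e.g.: mpiexec -n 4 python pi_mpi.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Decompose a midpoint-rule integration of 4/(1+x^2) on [0,1], which equals pi
    n = 10_000_000                 # total quadrature points
    local_n = n // size            # each rank handles a contiguous slice
    start = rank * local_n         # (remainder points ignored for brevity)
    x = (np.arange(start, start + local_n) + 0.5) / n
    local_sum = np.sum(4.0 / (1.0 + x * x)) / n

    # The algorithm's global sum is mapped onto the machine's network via MPI
    pi_estimate = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print(f"pi ~= {pi_estimate:.10f} computed on {size} ranks")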

How is scientific application development, which has traditionally entailed modeling and simulation, being influenced by data-driven discovery and artificial intelligence?

Straatsma: Most of the applications that we have in our current application readiness programs at the DOE computing facilities use traditional modeling and simulation, but artificial intelligence, machine learning, and deep learning are rapidly affecting the way we do computational science. Because of growth in datasets, it’s now possible to use these big machines to analyze data to discover underlying models. This is the broad area of data analytics. In our book, one such project is using seismic data analysis to derive models that are being used to get a better understanding of the Earth’s crust and interior.

In a sense, it’s doing computational science from the opposite direction than what has traditionally been done. Instead of having a model and simulating that model to create a lot of data that you use to learn things from your system, you start with potentially massive datasets—experimental or observational—and use inference methods to derive models, networks, or other features of interest.

Antypas: Machine learning and deep learning have revolutionized many fields already and are increasingly being used by NERSC users to solve science challenges important to the mission of the Department of Energy’s Office of Science. As part of a requirements-gathering process with the user community, scientists from every field represented noted they were exploring new methods for data analysis, including machine learning. We also expect scientists will begin to incorporate the inference step of learning directly into simulations.

Williams: Computational scientists now increasingly employ data-driven and machine learning approaches to answer the same science and engineering questions addressed by simulation. Fundamental-principles–based simulation and machine learning have some similarities. They can both address problems where there is no good, high-level theory to explain phenomena. For example, behavior of materials at the nanoscale, where conventional theories don’t apply, can be understood either by simulating the materials atom-by-atom or by using machine learning approaches to generate reduced models that predict behavior.
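
A toy version of this “data first” direction — inferring a predictive model from observations rather than simulating one from first principles — might look like the scikit-learn sketch below. The synthetic dataset and the choice of a random-forest regressor are assumptions made purely for illustration.

    # Toy illustration of learning a reduced model directly from data;
    # the dataset is synthetic and the model choice is arbitrary.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(5000, 3))                    # "observations"
    y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=5000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Infer a model of the underlying relationship from the data alone
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out R^2 = {model.score(X_test, y_test):.3f}")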

In the foreword, the contributors to this book are referred to as “the pioneers who will explore the exascale frontier.” How will their work benefit the larger scientific community?

Straatsma: In multiple ways. The most obvious benefit is that we get a set of applications that run very well on very large machines. If these are applications used by broad scientific communities, many researchers will benefit from them. The second benefit is in finding methodologies that can be translated to other codes or other application domains and be used to make these applications run very well on these new architectures. A third benefit is that application developers get a lot of experience doing this kind of work, and based on that experience, we have better ideas on how to approach the process of application readiness and performance portability.

Williams: With each step forward in large-scale parallel computing, a cohort of young scientists comes along for the ride, engaged in these pioneering efforts. The scale of this computing, and the sophistication of the software techniques employed, will become routine for them going forward. This is really just a manifestation of the advance of science, which builds on successes and corrects itself to be consistent with what we learn.

After coediting this volume, are there any key lessons that you hope readers take from this work?

Straatsma: I hope that people who are wondering about HPC at the scale we’re talking about will get inspired to think about what these future resources could do for their science or think bigger than what they’re thinking now. To draw one example from the book, astrophysicists are developing techniques for exascale systems that are projected to enable simulation of supernova explosions that include significantly larger kinetic networks than can be used today, and these systems can do this faster and more accurately. That’s just one example of the many described in this publication of exascale-capable applications with the promise of enabling computational science with more accurate models and fewer approximations, leading to more reliable predictions.

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Jonathan Hines is a science writer at Oak Ridge National Laboratory.

The post As Exascale Frontier Opens, Science Application Developers Share Pioneering Strategies appeared first on HPCwire.

Independent Hyperion Research Will Chart its Own Course

Tue, 12/19/2017 - 16:16

Hyperion Research, formerly the HPC research and consulting practice within IDC, has become an independent company with Earl Joseph, the long-time leader of the IDC HPC team, acquiring the company as a sole proprietorship. He becomes CEO of Hyperion Research, LLC. The year-end resolution of Hyperion’s status had been expected. Under the terms of the sale of IDC to China Holding Group and IDG Capital last January, the U.S. government required the IDC HPC practice be spun out separately while a new owner was sought.

Earl Joseph, Hyperion Research CEO, formerly leader of IDC HPC Group

Hyperion has long held a pre-eminent position among HPC analysts. Its regular HPC market updates at SC and ISC have become fixtures in the HPC community (HPCwire coverage of the most recent update, Hyperion Market Update: ‘Decent’ Growth Led by HPE; AI Transparency a Risk Issue). Hyperion will also continue to administer the HPC User Forum, which is comprised of HPC community leaders who set the agenda for regular meetings held in the U.S. and abroad.

Talks were held with at least nine suitors, including some vendors and other analyst firms, said Steve Conway, long-time IDC HPC executive and now Hyperion senior vice president of research. Adequate resources and independence were key factors in the eventual decision to become an independent entity. “The team and I had discussions with many potential acquirers in recent months, but none of them would have given us the same freedom-of-action and growth potential we now have as an independent company,” Joseph said in the official press release.

The coverage areas will largely be the same, said Conway, with growth into the enterprise segment, an increased focus on artificial intelligence and a new quantum computing practice led by IDC veteran Bob Sorensen, now vice president of technology and research. Conway said, “Really we are just tracking HPC’s expansion.” Conway was instrumental in developing IDC’s high performance data analysis (HPDA) effort and said the AI effort is really a subset of that.

Quantum computing is, of course, new, but Hyperion already has clients for those services, according to Conway, and it is meeting with entities such as the national labs to help define what such a practice would encompass. He noted, “Quantum is still in the hype phase but the carrot is so big [you have to pursue].” Quantum computing will be one of the main themes at the HPC User Forum (April 16-18), he said.

Steve Conway, Hyperion SVP

Conway cited deep learning as another growth area. “We expect to see a ramp up in exploratory deep learning; it’s still not quite there in most segments.” The overall 2018 HPC outlook remains strong, he said.

As described in today’s announcement, “Hyperion helps IT professionals, business executives, and the investment community make fact-based decisions on HPC-related technology purchases and business strategy.”

“The team will continue all the worldwide activities that have made it the world’s most respected HPC industry analyst group for more than 25 years,” said Conway, “That includes sizing and tracking the global markets for HPC and high-performance data analysis. We will also continue offering our subscription services, customer studies and papers, and operating the HPC User Forum.” Conway emphasized that Hyperion’s business model relies more heavily on consulting relative to subscriptions.

During the eight-month period between Hyperion’s separation from IDC and the formation of the new company, not a single client left Hyperion, according to the release. “In fact, our business continued growing to the point where we recently hired two more people, analyst Alex Norton and our former IDC colleague Kurt Gantrish as a second sales counterpart to Mike Thorp,” said Joseph, adding that the firm plans to add more analysts to address expected demand.

“On behalf of our entire team,” Joseph said, “I deeply thank IDC for making it possible for us to continue the HPC business practice we all worked so hard to build while at IDC.” For more about Hyperion Research, see www.HyperionResearch.com and www.hpcuserforum.com.

The post Independent Hyperion Research Will Chart its Own Course appeared first on HPCwire.

The High Stakes Semiconductor Game that Drives HPC Diversity

Tue, 12/19/2017 - 08:29

The semiconductor market is worth more than $300 billion in revenue per annum, and Intel accounts for almost $60 billion of this total. In fact, the top three companies (Intel, Samsung and TSMC) account for almost $130 billion, or more than 40 percent of the total market, and the next seven companies account for another $90 billion. Why is this relevant to a discussion about technological diversity in the HPC space, you ask?

It’s a rich man’s world

It’s a truism that no supplier has ever gotten rich from HPC — apart, that is, from the component suppliers such as Intel and their shareholders. For any tier 1 vendor, HPC is considered, if not quite a vanity project, certainly one that generates a halo effect rather than significant profits.

The arrival of commodity clusters in the 1990s dramatically changed the HPC market dynamics but it also inadvertently placed almost all of the real profits and control of the technical direction into the hands of a small number of commodity component manufacturers.

Now I’m not going to argue that this has been a bad thing. Intel and the cohort of top semiconductor suppliers (those who invested most heavily in process technology and foundry capacity) have ensured that Moore’s law carried on trucking. As has been oft pointed out, it became less of an observation and more of a self-sustaining marketing prophecy. The periodic restatements of the ‘law’ necessary to ensure that it remained a viable marketing tool were viewed as justified by just about everyone who was riding the wave.

One of the consequences of the move away from custom core components (such as vector processors) for HPC was that opportunities for innovation and differentiation between vendors were reduced. However, a positive effect of the commoditization of HPC was that it brought HPC to the masses in a way that probably wouldn’t have happened otherwise.

If you discount IBM as an outlier in technological terms (in that they actually had four competing HPC platforms at one point, as well as a foundry capability), then when everyone is using the same CPUs (or at the very least the same ISA), memory, storage and, to a certain extent, network (InfiniBand and Ethernet), it only takes a change in one of those to potentially disrupt the market.

When the chips are down

Consider the cautionary tale of Cray, one of, if not the, preeminent exponents of HPC systems engineering and integration in the market today. Excellent though the XC series and the Aries fabric clearly are, they are not a passport to huge financial reward in the HPC market. Simply put, success in supplying the tier 1 capability machines that Cray excels at building does not translate into sales in the extremely price-sensitive tier 2/3 arena. Even in the good years, Cray can still struggle to deliver profit margins that would make investors’ heads turn.

What does this say about the HPC market as a whole? It shows that excellent engineering isn’t in itself enough. Cray themselves attempted to move down the food chain with their acquisition of Appro (2012) and in doing so started to compete more directly with the likes of HPE and Dell for cluster based HPC sales. Their more recent acquisition of the ClusterStor line from Seagate and the launch of HPC as a service under the Azure umbrella are all attempts to diversify and increase their total addressable market. The problem is that when your competitor’s revenues are literally an order of magnitude greater, simple economies of scale start to become even more relevant.

Cash flow now becomes even more critical, with purchasing power a function of how far out you can place orders and in what volumes (and probably what hedge positions you can take). Ironically, it also means that some of the systems that you are technologically well placed to build are actually too big a stretch financially without finding a deep pocketed sugar daddy (think Intel and Cray’s exascale partnership).

As an industry we now have the slightly perverse situation that, as we are entering a new era of technological innovation and diversity, as well as building things bigger and hopefully rather better, there will inevitably be a renewed phase of market consolidation.

Money makes the world go round

Now there are lots of reasons for a merger and acquisition and I’m actually willing to believe that some are definitely a meeting of minds as well as accountants. What’s also true is that in the semiconductor and computer business first mover advantage often applies. So when one of the big players at the semiconductor poker table bets big, it inevitably triggers a flurry of further M&A activity as the other players decide to follow their money or fold.

We’ve seen this recently with the ARMing of the datacentre. Softbank’s purchase of ARM in 2016 and the uptick in sentiment that ARM had finally found a rich foster parent who would invest in pushing into territory hitherto dominated by Intel encouraged a number of other moves by Intel’s semiconductor rivals.

The interplay, first between Broadcom (Avago) and Cavium (taking on the orphaned Vulcan product), then Broadcom making an opportunistic and hostile bid for Qualcomm (of course this was about way more than just the Centriq processor line) and most recently with Marvell’s bid for Cavium (again not just for ThunderX II) was interesting and instructive to watch.

From an HPC perspective it was hard to tell how much it would affect the likelihood that one or more of the ARM vendors would mount a credible challenge to Intel in the datacentre. Certainly my feeling was that Broadcom were likely to be the least sympathetic winner of the hand, but when you consider that they are playing for a share of a $60 billion pot you start to see the sort of stakes being wagered.

Which brings us to how we as an industry maintain a healthy technological diversity in the HPC market, when the reality is that only a handful of semiconductor companies have any realistic hope of challenging the current Intel hegemony.

Fabtastic

If we look at the common denominator between the top semiconductor companies, we see that Intel and Samsung are both vertically integrated. In other words they own their own fabs and they make profits by taking their core IP (in Intel’s case the x86 architecture) into the primary consumer market and then an even more lucrative variant into the datacenter and HPC segments. Even with the eye watering capital expenditure necessary to build and equip modern fabs they are at a relative competitive advantage to those companies who have to source their devices from foundries such as TSMC, GlobalFoundries and UMC.

Qualcomm and Broadcom round out the top five semiconductor companies (both fab at TSMC) and it’s no surprise that both have been linked with ARM-derived datacenter class CPUs. Perhaps the only surprise is that Samsung appears to have sat out the hand and concentrated purely on the development of consumer space SoCs.

Of course they are far from the only companies who are looking at challenging Intel’s dominance in the datacenter but in terms of relative size, financial stability and ability to stay in the game they hold cards that none of the other players at the table do.

Spread betting

Even the mighty Intel is looking to grow market share via diversification (not something they have managed in the mobile space). With dominance in the datacentre (and consumer) space that they could never have expected even ten years ago, they are well aware that the next big technology wave can come along and swamp you if you’re not paying attention. Intel have made recent bets on storage (3D XPoint and NAND), the internet of things, edge and autonomous computing (Altera, Movidius and Mobileye to name but three acquisitions) and also machine and deep learning (Nervana). All of these markets are forecast to be fast growing and ultimately at least as large as the combined consumer and datacentre CPU markets.

Of course Intel will never abandon the CPU market, but recent missteps with Phi and the challenges of transitioning to smaller process geometries have had knock-on effects in its core markets. At least some of the problems surrounding Aurora are likely to have been caused by the slowing cadence of Moore’s law (assuming it’s still in the ICU and not on the way to the mortuary).

There is real diversity starting to appear, at least in the processor space, with IBM’s Power9 and Nvidia’s Volta being stood up for the CORAL pre-exascale systems. Add to that competitive debuts for AMD’s EPYC, Cavium’s ThunderX2 and Qualcomm’s Centriq, a range of other processors (many ARM-derived), accelerators (including the reappearance of classical vector co-processors) and some genuine innovation in the ML/DL space, and real competitive threats are on the horizon.

Unless Intel takes a hard look at some of their recent product segmentation and pricing decisions, I expect we will see a slow erosion of their numbers in 2018. If the first hyperscaler jumps ship, that slow erosion could become a steady stream, but until then Intel are still firmly in pole position. The market is theirs to lose, but if a readjustment does come, how long can the HPC space continue to rely on what has amounted to a commodity subsidy for HPC research and development?

About the Author

Dairsie has a somewhat eclectic background, having worked in a variety of roles on both the supplier and client side, across the commercial and public sectors, as a consultant and software engineer. Following an early career in computer graphics, micro-architecture design and full-stack software development, he has over twelve years’ specialist experience in the HPC sector, ranging from developing low-level libraries and software for novel computing architectures to porting complex HPC applications to a range of accelerators. He also advises clients on strategy, technology futures, HPC procurements and managing challenging technical projects.

The post The High Stakes Semiconductor Game that Drives HPC Diversity appeared first on HPCwire.

Blue Waters Supercomputer Processes New Data for NASA’s Terra Satellite

Tue, 12/19/2017 - 07:56

Dec. 19, 2017 — Over the course of nearly two decades, NASA’s Terra satellite has exceeded many of the expectations set at its launch. Larry Di Girolamo, a Blue Waters professor at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, was in his first year of graduate school studying atmospheric sciences when Terra was conceived, and he has watched the Earth change through Terra’s eyes ever since. “Terra has transformed earth sciences. Many of the advancements in earth sciences have come from Terra,” said Di Girolamo.

Di Girolamo presented Terra visualizations created by NCSA’s Advanced Visualization Lab, with data processed by NCSA’s Blue Waters supercomputer, at the 2017 American Geophysical Union (AGU) Fall Meeting in New Orleans, Louisiana. “NASA’s Terra data archive is about 1.2 petabytes and so far spans 17 years. Long, well calibrated data records are central to studying the earth’s climate. Blue Waters is one of the only computers that can process that data on such a large scale in a timeframe suitable for scientists to interact and ask questions of the data,” said Di Girolamo. “Blue Waters gives scientists the ability to rapidly go through the data. Without the leadership system, we would be severely held back.” The visualizations showcase how the Terra satellite samples the Earth in a visually compelling and informative way, and how data from its different instruments are fused to better document how the Earth has been changing.
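
As a back-of-envelope illustration of that scale (derived only from the figures quoted above, not an NCSA statistic), a 1.2-petabyte archive accumulated over 17 years corresponds to roughly 190 gigabytes of new data per day, every day:

    # Rough average daily growth of the Terra archive, using only the figures
    # quoted above (1.2 PB over 17 years); decimal units, purely illustrative.
    ARCHIVE_PETABYTES = 1.2
    YEARS = 17

    archive_gb = ARCHIVE_PETABYTES * 1_000_000      # 1 PB = 1,000,000 GB
    days = YEARS * 365.25

    print(f"Average archive growth: ~{archive_gb / days:.0f} GB/day")   # ~190 GB/day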



The Terra satellite carries five instruments that take coincident measurements of the Earth system, each with a different mission:

  • Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) produces images using infrared, red and green wavelengths of light to create detailed, high-resolution land maps visualizing temperature, emissivity, reflectance and elevation.
  • Clouds and Earth’s Radiant Energy System (CERES) is a pair of identical CERES sensors aboard Terra that measure the Earth’s total radiation budget and provide cloud property estimates that enable scientists to quantitatively assess the role of clouds in the Earth system.
  • Multi-angle Imaging Spectroradiometer (MISR) views the Earth with cameras pointed at nine different angles to measure the angular distribution of scattered sunlight, which is used to determine aerosol pollution over land and water, cloud structure, and surface vegetation properties. It is designed to enhance our knowledge of the lower atmosphere and to observe how the atmosphere interacts with the land and ocean biospheres.
  • Measurement of Pollution in the Troposphere (MOPITT) is the first satellite sensor to use gas correlation spectroscopy to measure the emitted and reflected radiance from the Earth in three spectral bands. The data is used to measure the amount of carbon monoxide and methane in our atmosphere.
  • Moderate Resolution Imaging Spectroradiometer (MODIS) makes detailed measurements from the visible to the infrared over a 2,300-km swath, allowing scientists to retrieve a wide range of properties of the Earth’s surface, atmosphere and clouds.

The Terra satellite launched on December 18, 1999, and was originally expected to have a lifespan of six years. But with great engineering and clever fuel usage, Terra is now expected to keep operating into the 2020s. Terra completes about 14 orbits a day in a circular, 10:30 a.m. sun-synchronous polar orbit, with each orbit taking about 99 minutes to complete. Since the early 2000s, Terra has played a key role in the study of air pollution. In fact, Terra was the first to study air pollution over land from space, and its observations now extend to studies of the effects of air pollution on human health. “Being able to produce results quickly with Blue Waters allows scientists to look through their data and analysis quickly, raise new hypotheses based on that analysis, and re-analyze the data to test the new hypotheses,” said Di Girolamo. “That type of interaction with the data is central to the rapid advancement of Earth Science with Terra.”
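
The orbital numbers are easy to sanity-check (a purely illustrative calculation using only the figures in the paragraph above): a 99-minute orbital period divides into a 24-hour day about 14.5 times, matching the “about 14 orbits a day” figure.

    # Sanity check: how many 99-minute orbits fit into one day?
    ORBIT_MINUTES = 99
    MINUTES_PER_DAY = 24 * 60

    orbits_per_day = MINUTES_PER_DAY / ORBIT_MINUTES
    print(f"~{orbits_per_day:.1f} orbits per day")   # ~14.5, i.e. "about 14 orbits a day"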

The Terra satellite is approximately the size of a school bus, weighing in at 11,442 lbs. As it enters its extended years, the question on the horizon is how to balance its remaining fuel supply against science needs. Scientists are actively weighing the benefits and risks of two options: keeping Terra in its current orbit, which maintains the continuity of its climate-quality data record, or lowering its altitude to conserve fuel and extend its time in space, which would end that climate-quality record for climate trend analysis.

About the National Center for Supercomputing Applications

NCSA at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About NCSA’S Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: NCSA

The post Blue Waters Supercomputer Processes New Data for NASA’s Terra Satellite appeared first on HPCwire.

CMU Paper Reveals Libratus’ Winning Poker Strategy

Mon, 12/18/2017 - 12:40

Ever wondered how Libratus, the celebrated poker playing (and winning) AI software from Carnegie Mellon University, outsmarts its opponents? Turns out Libratus uses a three-pronged strategy which its inventors share in a paper published online yesterday in Science – Superhuman AI for heads-up no-limit poker: Libratus beats top professionals.

Libratus has been turning heads for some time with its ability to win against professional gamblers in Texas Hold’em, a game that emphasizes bluffing. “AI programs have defeated top humans in checkers, chess and Go — all challenging games, but ones in which both players know the exact state of the game at all times. Poker players, by contrast, contend with hidden information — what cards their opponents hold and whether an opponent is bluffing,” according to an interesting account on the CMU website.

The proof, of course, is in the winning; Libratus did this in spades at a 20-day, 120,000-hand competition last year at Rivers Casino, Pittsburgh. It was the first time an AI defeated top human players at Texas Hold’em. Libratus won $1.8 million in chips. (Too bad they couldn’t be cashed in). “As measured in milli-big blinds per hand (mbb/hand), a standard used by imperfect-information game AI researchers, Libratus decisively defeated the humans by 147 mbb/hand.”
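
The two win-rate figures are consistent with each other. A milli-big blind is one thousandth of a big blind, so assuming a $100 big blind (an assumption made here for illustration; the blind size is not stated in this article), 147 mbb/hand over 120,000 hands works out to roughly $1.76 million, in line with the $1.8 million in chips reported:

    # Consistency check between the quoted figures: 147 mbb/hand over 120,000 hands.
    # ASSUMPTION: a $100 big blind (not stated in this article); illustrative only.
    MBB_PER_HAND = 147
    HANDS = 120_000
    BIG_BLIND_DOLLARS = 100

    winnings = (MBB_PER_HAND / 1000) * BIG_BLIND_DOLLARS * HANDS
    print(f"Implied winnings: ${winnings:,.0f}")   # ~$1,764,000, close to the ~$1.8M reported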

Apparently HPCwire readers also like the idea of winning. Libratus received the HPCwire Reader’s Choice Award for Best Use of AI at SC17. Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in the Computer Science Department, detail how their AI was able to achieve “superhuman” performance by breaking the game into computationally manageable parts. They also explain how, based on its opponents’ game play, Libratus fixed potential weaknesses in its strategy during the competition.


“The techniques in Libratus do not use expert domain knowledge or human data and are not specific to poker,” report Sandholm and Brown in the paper. “Thus they apply to a host of imperfect-information games.” Such hidden information is ubiquitous in real-world strategic interactions, they noted, including business negotiation, cybersecurity, finance, strategic pricing and military applications.

Here is an excerpt from the paper describing Libratus’ three main modules (a toy sketch of how the three pieces fit together follows the excerpt):

  • “The first module computes an abstraction of the game, which is smaller and easier to solve, and then computes game-theoretic strategies for the abstraction. The solution to this abstraction provides a detailed strategy for the early rounds of the game, but only an approximation for how to play in the more numerous later parts of the game. We refer to the solution of the abstraction as the blueprint strategy.
  • “When a later part of the game is reached during play, the second module of Libratus constructs a finer-grained abstraction for that subgame and solves it in real time. Unlike subgame-solving techniques in perfect-information games, Libratus does not solve the subgame abstraction in isolation; instead, it ensures that the fine-grained solution to the subgame fits within the larger blueprint strategy of the whole game. The subgame solver has several key advantages over prior subgame-solving techniques. Whenever the opponent makes a move that is not in the abstraction, a subgame is solved with that action included. We call this nested subgame solving. This technique comes with a provable safety guarantee.
  • “The third module of Libratus – the self-improver – enhances the blueprint strategy. It fills in missing branches in the blueprint abstraction and computes a game-theoretic strategy for those branches. In principle, one could conduct all such computations in advance, but the game tree is way too large for that to be feasible. To tame this complexity, Libratus uses the opponents’ actual moves to suggest where in the game tree such filling is worthwhile.”
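
To make the division of labour between the three modules concrete, here is a deliberately toy, runnable Python skeleton of that control flow. Every function, action name and data structure in it is a placeholder invented for illustration; the stub “solver” just labels branches, and none of this reflects the actual Libratus implementation, which computes approximate equilibria over enormous abstractions on a supercomputer.

    # Toy, runnable skeleton of the three-module flow described in the excerpt above.
    # The "solver" below is a stub that merely labels branches; the real Libratus
    # computes approximate equilibria at supercomputer scale. All names here are
    # placeholders invented for illustration only.

    def solve(branches):
        """Stub 'equilibrium solver': pretend every branch it sees gets a strategy."""
        return {branch: f"strategy({branch})" for branch in branches}

    # Module 1: a blueprint strategy for a coarse abstraction (a few canonical actions).
    abstraction = ["fold", "call", "bet_half_pot", "bet_pot"]
    blueprint = solve(abstraction)

    def respond(opponent_action):
        """Module 2: nested subgame solving. An off-abstraction action triggers a
        real-time re-solve that includes the new action alongside the blueprint."""
        if opponent_action in blueprint:
            return blueprint[opponent_action], None
        refined = solve(abstraction + [opponent_action])
        return refined[opponent_action], opponent_action   # flag the missing branch

    # Simulated play: the opponent uses bet sizes the abstraction never modelled.
    off_tree = []
    for observed in ["call", "bet_0.37_pot", "bet_pot", "bet_2.6x_pot"]:
        strategy, missing = respond(observed)
        if missing:
            off_tree.append(missing)

    # Module 3: the self-improver fills in only the blueprint branches opponents
    # actually used, rather than precomputing the whole (intractably large) game tree.
    blueprint.update(solve(off_tree))
    print(sorted(blueprint))      # six branches now carry a strategy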

As the CMU researchers point out, one can imagine many “contests” with hidden information in which Libratus AI software techniques might be used.

Link to Science paper: http://science.sciencemag.org/content/early/2017/12/15/science.aao1733.full

Link to CMU article: https://www.cmu.edu/news/stories/archives/2017/december/ai-inner-workings.html

The post CMU Paper Reveals Libratus’ Winning Poker Strategy appeared first on HPCwire.
