Feed aggregator

Mines partnership with QL+ featured by multiple news outlets

Colorado School of Mines - Wed, 01/10/2018 - 13:22

A partnership between Colorado School of Mines and nonprofit Quality of Life Plus that is bringing Mines students together with injured veterans to engineer custom solutions for specific mobility problems was recently featured by multiple news outlets, including The Denver Post, CBS4 Denver, FOX31 Denver and the Golden Transcript.

The Denver Post: Disabled Colorado veterans gain new mobility thanks to School of Mines technology

CBS4 Denver: Engineering Students Help Wounded Veterans Thanks To Partnership


FOX31 Denver: School of Mines engineers help improve lives of disabled veterans

Categories: Partner News

Honored Physicist Steven Chu Selected as AAAS President-Elect

HPC Wire - Tue, 01/09/2018 - 14:40

Jan. 9, 2018 — Nobel laureate and former Energy Secretary Steven Chu has been chosen as president-elect of the American Association for the Advancement of Science. Chu will start his three-year term as an officer and member of the Executive Committee of the AAAS Board of Directors at the 184th AAAS Annual Meeting in Austin, Texas, in February.

“As Secretary of Energy, I was reminded daily that science must continue to be elevated and integrated into our national life and throughout the world. The work of AAAS in connecting science with society, public policy, human rights, education, diplomacy and journalism – through its superb journals and programs – is essential,” said Chu in his candidacy statement.

“Never has there been a more important time than today for AAAS to communicate the advances in science, the methods we use to acquire this knowledge and the benefits of these discoveries to the public and our policymakers,” he said.

Chu cited his role in key reports by the National Academies and the American Academy of Arts and Sciences on the competitiveness of the U.S. scientific enterprise and the state of fundamental research, studies that “sounded alarms that the health of science, science education and integration of science into public decision-making in the U.S. was in peril and heading in the wrong direction,” he said in his candidacy statement. “Concern among scientists and friends of science is even greater today and we in AAAS have our work cut out for us.”

AAAS must continue its efforts to communicate the benefits of scientific progress, Chu noted, saying the world’s largest general scientific organization must continue to ensure scientists and students have access to the free exchange of ideas and the ability to pursue discovery across national boundaries.

Chu currently serves as the William R. Kenan Jr. Professor of Physics and Professor of Molecular and Cellular Physiology at Stanford University. Prior to rejoining Stanford in 2013, Chu was secretary of energy during President Barack Obama’s first term, becoming the first scientist to head the Department of Energy, home of the nation’s 17 National Laboratories.

Prior to his appointment as energy secretary, Chu was director of the Lawrence Berkeley National Laboratory as well as a professor of physics and molecular and cell biology at University of California, Berkeley. He first joined Stanford University in 1987, where he was a professor of physics until 2004.

Between 1978 and 1987, Chu worked at Bell Labs, where he ultimately led its Quantum Electronics Research Department. At Bell Labs, Chu carried out research on laser cooling and atom trapping, work that would earn him – along with Claude Cohen-Tannoudji and William Daniel Phillips – the Nobel Prize for Physics in 1997. Their new methods for using laser light to “trap” and slow down atoms to study them in greater detail “contributed greatly to increasing our knowledge of the interplay between radiation and matter,” the Nobel Committee said in 1997.

Chu received bachelor’s degrees in mathematics and physics from the University of Rochester and a Ph.D. in physics from the University of California, Berkeley.

He was named an elected fellow of AAAS in 2000 and has been a member of AAAS since 1995. He served on the AAAS Committee on Nominations, which selects the annual slate of candidates for AAAS president-elect and Board of Directors elections, from 2009 to 2011.

The current AAAS president-elect, Margaret Hamburg, will begin her term as AAAS president at the close of the 2018 Annual Meeting. Hamburg is foreign secretary of the National Academy of Medicine. The current president, Susan Hockfield, will become chair of the AAAS Board of Directors. Hockfield is president emerita of the Massachusetts Institute of Technology.

Source: AAAS

The post Honored Physicist Steven Chu Selected as AAAS President-Elect appeared first on HPCwire.

Micron and Intel Announce End to NAND Memory Joint Development Program

HPC Wire - Tue, 01/09/2018 - 11:20

BOISE, Idaho, and SANTA CLARA, Calif., Jan. 8, 2018 – Micron and Intel today announced an update to their successful NAND memory joint development partnership that has helped the companies develop and deliver industry-leading NAND technologies to market.

The announcement involves the companies’ mutual agreement to work independently on future generations of 3D NAND. The companies have agreed to complete development of their third generation of 3D NAND technology, which is expected to be delivered toward the end of this year and extend into early 2019. Beyond that technology node, both companies will develop 3D NAND independently in order to better optimize the technology and products for their individual business needs.

Micron and Intel expect no change in the cadence of their respective 3D NAND technology development for future nodes. The two companies are currently ramping products based on their second-generation 3D NAND (64-layer) technology.

Both companies will also continue to jointly develop and manufacture 3D XPoint at the Intel-Micron Flash Technologies (IMFT) joint venture fab in Lehi, Utah, which is now entirely focused on 3D XPoint memory production.

“Micron’s partnership with Intel has been a long-standing collaboration, and we look forward to continuing to work with Intel on other projects as we each forge our own paths in future NAND development,” said Scott DeBoer, executive vice president of Technology Development at Micron. “Our roadmap for 3D NAND technology development is strong, and we intend to bring highly competitive products to market based on our industry-leading 3D NAND technology.”

“Intel and Micron have had a long-term successful partnership that has benefited both companies, and we’ve reached a point in the NAND development partnership where it is the right time for the companies to pursue the markets we’re focused on,” said Rob Crooke, senior vice president and general manager of Non-Volatile Memory Solutions Group at Intel Corporation. “Our roadmap of 3D NAND and Optane technology provides our customers with powerful solutions for many of today’s computing and storage needs.”

Source: Intel

The post Micron and Intel Announce End to NAND Memory Joint Development Program appeared first on HPCwire.

Activist Investor Ratchets up Pressure on Mellanox to Boost Returns

HPC Wire - Tue, 01/09/2018 - 10:29

Activist investor Starboard Value has sent a letter to Mellanox CEO Eyal Waldman demanding dramatic operational changes to boost returns to shareholders. This is the latest missive in an ongoing struggle between Starboard and Mellanox that began back in November when Starboard raised its stake in the interconnect specialist to 10.7 percent. Starboard argues Mellanox is significantly undervalued and that its costs, notably R&D, are unreasonably high.

The letter, dated January 8 and under the signature of Peter Feld, is pointed as shown in this excerpt:

“As detailed in the accompanying slides, over the last twelve months Mellanox’s R&D expenditures as a percentage of revenue were 42%, compared to the peer median of 22%. On SG&A, Mellanox spent 24% of revenue versus the peer median of 17%. It is critical to appreciate that Mellanox is not just slightly worse than peers on these key metrics, it is completely out of line with the peer group.”

Mellanox issued 2018 guidance for “low-to-mid-teens” (percent) revenue growth. Starboard cites a ‘consensus’ estimate of $816.5 million in revenue for 2017 and $986.4 million (14.5 percent). At 70.6 percent, Mellanox has one of the highest gross margins among comparable companies, and one of the lowest operating margins at 13.8 percent, according to Starboard.

“We believe there is a tremendous opportunity at Mellanox, but it will require substantial change, well beyond just the Company’s recently announced 2018 targets,” wrote Feld.

Link to Starboard letter: http://www.starboardvalue.com/wp-content/uploads/Starboard_Value_LP_Letter_to_MLNX_01.08.2018.pdf

The post Activist Investor Ratchets up Pressure on Mellanox to Boost Returns appeared first on HPCwire.

ACM Names New Director of Global Policy and Public Affairs

HPC Wire - Tue, 01/09/2018 - 10:24

NEW YORK, Jan. 9, 2018 — ACM, the Association for Computing Machinery, has named Adam Eisgrau as its new Director of Global Policy and Public Affairs, effective January 3, 2018. Eisgrau will coordinate and support ACM’s engagement with public technology policy issues involving information technology, globally and particularly in the US and Europe. ACM aims to educate and inform computing professionals, policymakers, and the public about information technology policy and its consequences, and to shape public technology policy through a deeper understanding of the information technology issues involved.

“ACM has long been committed to providing policy makers in the US and abroad with the most current, accurate, objective and non-partisan information about all things digital as they wrestle with issues that profoundly affect billions of people,” said ACM President Vicki L. Hanson. “We’re thrilled to add a communicator of Adam’s caliber to our team as the computing technologies pioneered, popularized and promulgated by ACM members become ever more integrated to the fabric of daily life.”

“Speaking tech to power clearly, apolitically and effectively has never been more important,” said Eisgrau. “The chance to do so for ACM in Washington, Brussels and beyond is a dream opportunity.”

A former communications attorney, Eisgrau began his policy career as Judiciary Committee Counsel to then-freshman US Senator Dianne Feinstein (D-CA). Since leaving Senator Feinstein’s office in 1995, he has represented both public- and private-sector interests in international forums and to Congress, federal agencies and the media on a host of technology-driven policy matters. These include: digital copyright, e-commerce competition, peer-to-peer software, cybersecurity, encryption, online financial services, warrantless surveillance and digital privacy.

Prior to joining ACM, Eisgrau directed the government relations office of the American Library Association. He is a graduate of Dartmouth College and Harvard Law School.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post ACM Names New Director of Global Policy and Public Affairs appeared first on HPCwire.

NOAA to Expand Compute Capacity by 50 Percent with Two New Dells

HPC Wire - Tue, 01/09/2018 - 10:10

January 9, 2018 — NOAA’s combined weather and climate supercomputing system will be among the 30 fastest in the world, with the ability to process 8 quadrillion calculations per second, when two Dell systems are added to the IBMs and Crays at data centers in Reston, Virginia, and Orlando, Florida, later this month.

“NOAA’s supercomputers play a vital role in monitoring numerous weather events from blizzards to hurricanes,” said Secretary of Commerce Wilbur Ross. “These latest updates will further enhance NOAA’s abilities to predict and warn American communities of destructive weather.”

This upgrade completes phase three of a multi-year effort to build more powerful supercomputers that make complex calculations faster to improve weather, water and climate forecast models. It adds 2.8 petaflops of speed at both data centers combined, increasing NOAA’s total operational computing speed to 8.4 petaflops — or 4.2 petaflops per site.
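The headline numbers are internally consistent; as a quick sanity check (my own arithmetic, using only the figures quoted in this announcement), the 50 percent expansion and the per-site split follow directly from the stated totals:

```python
# Back-of-envelope check of the figures quoted above (not an official NOAA calculation).
added_pf = 2.8        # petaflops added across both data centers
new_total_pf = 8.4    # total operational petaflops after the upgrade

previous_total_pf = new_total_pf - added_pf          # 5.6 petaflops before the upgrade
increase_pct = 100 * added_pf / previous_total_pf    # the "50 percent" in the headline
per_site_pf = new_total_pf / 2                       # split evenly between Reston and Orlando

print(f"{previous_total_pf:.1f} PF before, +{increase_pct:.0f}%, {per_site_pf:.1f} PF per site")
# -> 5.6 PF before, +50%, 4.2 PF per site
```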

Sixty percent more storage

The upgrade also adds 60 percent more storage capacity, allowing NOAA to collect and process more weather, water and climate observations than ever before for use across all of its models.

“NOAA’s supercomputers ingest and analyze billions of data points taken from satellites, weather balloons, airplanes, buoys and ground observing stations around the world each day,” said retired Navy Rear Adm. Timothy Gallaudet, Ph.D., acting NOAA administrator. “Having more computing speed and capacity positions us to collect and process even more data from our newest satellites — GOES-East, NOAA-20 and GOES-S — to meet the growing information and decision-support needs of our emergency management partners, the weather industry and the public.”

With this upgrade, U.S. weather supercomputing paves the way for NOAA’s National Weather Service to implement the next-generation Global Forecast System, known as the “American Model,” next year. Already one of the leading global weather prediction models, the GFS is run every six hours, delivering hourly forecasts. The new GFS will receive significant upgrades in 2019, including increased resolution that will allow NOAA to run the model at 9 kilometers and 128 levels out to 16 days, compared to the current 13 kilometers and 64 levels out to 10 days. The revamped GFS will run in research mode on the new supercomputers during this year’s hurricane season.
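To see why the 2019 GFS upgrade demands so much more computing, here is a rough back-of-envelope estimate (my own, not NOAA’s, assuming the number of horizontal grid columns scales with the inverse square of the grid spacing and ignoring the shorter time steps that finer grids require):

```python
# Illustrative estimate only; real GFS grids, time steps and physics are more involved.
old_km, old_levels, old_days = 13, 64, 10    # current GFS configuration quoted above
new_km, new_levels, new_days = 9, 128, 16    # planned 2019 configuration

horizontal_factor = (old_km / new_km) ** 2   # ~2.1x more grid columns (inverse-square assumption)
vertical_factor = new_levels / old_levels    # 2x more vertical levels
length_factor = new_days / old_days          # 1.6x longer forecasts

print(f"~{horizontal_factor * vertical_factor * length_factor:.1f}x more grid-point-hours per run")
# -> roughly 6.7x, before accounting for smaller time steps
```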

“As we look toward launching the next generation GFS in 2019, we’re taking a ‘community modeling approach’ and working with the best and brightest model developers in this country and abroad to ensure the new U.S. model is the most accurate and reliable in the world,” said National Weather Service Director Louis W. Uccellini, Ph.D.

Supporting a Weather-Ready Nation

The upgrade announced today – part of the agency’s commitment to support the Weather-Ready Nation initiative – will lead to more innovation, efficiency and accuracy across the entire weather enterprise. It opens the door for the National Weather Service to advance its seamless suite of weather, water and climate models over the next few years, allowing for more precise forecasts of extreme events a week in advance and beyond.

Improved hurricane forecasts and expanded flood information will enhance the agency’s ability to deliver critical support services to local communities. In addition, the new supercomputers will allow NOAA’s atmosphere and ocean models to run as one system, helping forecasters more readily identify interactions between the two and reducing the number of operational models. They will also allow for the development of a new seasonal forecast system to replace the Climate Forecast System in 2022, paving the way for improved seasonal forecasts as part of the Weather Research and Forecasting Innovation Act.

The added computing power will support upgrades to the National Blend of Models, which is being developed to provide a common starting point for all local forecasts; allow for more sophisticated ensemble forecasting, which is a method of improving the accuracy of forecasts by averaging results of various models; and provide quicker turnaround for atmosphere and ocean simulations, leading to earlier predictions of severe weather.
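As a minimal illustration of the ensemble idea mentioned above (a toy sketch with made-up numbers, not NOAA’s actual blending scheme), averaging several model runs and looking at their spread gives both a consensus forecast and a hint of its uncertainty:

```python
import statistics

# Hypothetical 48-hour temperature forecasts (deg F) from five different model runs.
member_forecasts = [71.2, 73.5, 69.8, 72.1, 70.6]

ensemble_mean = statistics.mean(member_forecasts)      # the consensus forecast
ensemble_spread = statistics.stdev(member_forecasts)   # large spread = low confidence

print(f"ensemble mean {ensemble_mean:.1f} F, spread {ensemble_spread:.1f} F")
```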

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.

Source: NOAA

The post NOAA to Expand Compute Capacity by 50 Percent with Two New Dells appeared first on HPCwire.

Momentum Builds for US Exascale

HPC Wire - Tue, 01/09/2018 - 10:08

2018 looks to be a great year for the U.S. exascale program. The last several months of 2017 revealed a number of important developments that help put the U.S. quest for exascale on a solid foundation. In my last article, I provided a description of the elements of the High Performance Computing (HPC) ecosystem and its importance for advancing and sustaining this strategically important technology. It is good to report that the U.S. exascale program seems to be hitting the full range of ecosystem elements.

As a reminder, the National Strategic Computing Initiative (NSCI) assigned the U.S. Department of Energy (DOE) Office of Science (SC) and the National Nuclear Security Administration (NNSA) to execute a joint program to deliver capable exascale computing that emphasizes sustained performance on relevant applications and analytic computing to support their missions. The overall DOE program is known as the Exascale Computing Initiative (ECI) and is funded by the SC Advanced Scientific Computing Research (ASCR) program and the NNSA Advanced Simulation and Computing (ASC) program. Elements of the ECI include the procurement of exascale class systems and the facility investments in site preparations and non-recurring engineering. Also, ECI includes the Exascale Computing Project (ECP) that will conduct the Research and Development (R&D) in the areas of middleware (software stack), applications, and hardware to ensure that exascale systems will be productively usable to address Office of Science and NNSA missions.

In the area of hardware, the last part of 2017 revealed a number of important developments. First and most visible is the initial installation of the SC Summit system at Oak Ridge National Laboratory (ORNL) and the NNSA Sierra system at Lawrence Livermore National Laboratory (LLNL). Both systems are being built by IBM using Power9 processors with Nvidia GPU co-processors. The machines will have two Power9 CPUs per system board and will use a Mellanox InfiniBand interconnection network.

Beyond that, the architecture of each machine is slightly different. The ORNL Summit machine will use six Nvidia Volta GPUs per two Power9 CPUs on a system board and will use NVLink to connect to 512 GB of memory. The Summit machine will use a combination of air and water cooling. The LLNL Sierra machine will use four Nvidia Voltas and 256 GB of memory connected with the two Power9 CPUs per board. The Sierra machine will use only air cooling. As was reported by HPCwire in November 2017, the peak performance of the Summit machine will be about 200 petaflops and the Sierra machine is expected to be about 125 petaflops.
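Summarizing the per-board figures reported above (a restatement of the numbers in this article, arranged for easy comparison rather than any new data):

```python
# Node-board configurations for Summit and Sierra as described in this article.
systems = {
    "Summit (ORNL)": {"power9_cpus": 2, "volta_gpus": 6, "memory_gb": 512,
                      "cooling": "air + water", "peak_pflops": 200},
    "Sierra (LLNL)": {"power9_cpus": 2, "volta_gpus": 4, "memory_gb": 256,
                      "cooling": "air", "peak_pflops": 125},
}

for name, cfg in systems.items():
    print(f"{name}: {cfg}")
```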

Installation of both the Summit and Sierra systems is currently underway with about 279 racks (without system boards) and the interconnection network already installed at each lab. Now that IBM has formally released the Power9 processors, the racks will soon start being populated with the boards that contain the CPUs, GPUs and memory. Once that is completed, the labs will start their acceptance testing, which is expected to be finished later in 2018.

Another important piece of news about the DOE exascale program is the clarification of the status of the Argonne National Laboratory (ANL) Aurora machine. This system was part of the collaborative CORAL procurement that also selected the Sierra and Summit machines. The Aurora system is being manufactured by Intel with Cray Inc. acting as the system integrator. The machine was originally scheduled to be an approximately 180-petaflops (peak) system using the third-generation Xeon Phi “Knights Hill” processors. However, during SC17, we learned that Intel is removing the Knights Hill chip from its roadmap. This explains why, during the September ASCR Advisory Committee (ASCAC) meeting, Barb Helland, the Associate Director of the ASCR office, announced that the Aurora system would be delayed to 2021 and upgraded to 1,000 petaflops (aka 1 exaflops).

The full details of the revised Aurora system are still under wraps. We have learned that it is going to use “novel” processor technologies, but exactly what that means is unclear. The ASCR program subjected the new Aurora design to an independent outside review. It found, “The hardware choices/design within the node is extremely well thought through. Early projections suggest that the system will support a broad workload.” The review committee even suggested that, “The system as presented is exciting with many novel technology choices that can change the way computing is done.” The Aurora system is in the process of being “re-baselined” by the DOE. Hopefully, once that is complete, we will get a better understanding of the meaning of “novel” technologies. If things go as expected, the changes to Aurora will allow the U.S. to achieve exascale by 2021.

An important, but sometimes overlooked, aspect of the U.S. exascale program is the number of computing systems being procured, tested and optimized by the ASCR and ASC programs as part of the buildup to exascale. Other “pre-exascale” systems include the 8.6 petaflops Mira computer at ANL and the 14 petaflops Cori system at Lawrence Berkeley National Lab (LBNL). The NNSA also has the 14.1 petaflops Trinity system at Los Alamos National Lab (LANL). Up to 20 percent of these precursor machines will serve as testbeds to enable the computing science R&D needed to ensure that the U.S. exascale systems will be able to productively address important national security and discovery science objectives.

The last, but certainly not least, bit of hardware news is that the ASCR and ASC programs are expected to start their next computer system procurement processes in early 2018. During her presentation to the U.S. Consortium for the Advancement of Supercomputing (USCAS), Barb Helland told the group that she expects the Request for Proposals (RFP) for the follow-ons to the Summit and Sierra systems to be released soon. These systems, to be delivered in the 2021-2023 timeframe, are expected to provide in excess of an exaflops of performance. The procurement process will be similar to the CORAL procurement and will be a collaboration between the DOE-SC ASCR and NNSA ASC programs. The ORNL exascale system will be called Frontier and the LLNL system will be known as El Capitan.

2017 also saw significant developments for the people element of the U.S. HPC ecosystem. As was previously reported, at last September’s ASCAC meeting, Paul Messina announced that he would be stepping down as the ECP Director on October 1. Doug Kothe, who was previously the applications development lead, was announced as the new ECP Director. Upon taking the job, Kothe, with his deputy Stephen Lee of LANL, instituted a process to review the organization and management of the ECP. At the December ASCAC conference call, Kothe reported that the review had been completed and resulted in a number of changes. This included paring down ECP from five components to four (applications development, software technology, hardware and integration, and project management). He also reported that ECP has implemented a more structured management approach that includes a revised work breakdown structure (WBS), additional milestones, new key performance parameters and new risk management approaches. Finally, the new ECP Director reported that an Extended Leadership Team with a number of new faces has been established.

Another important element of the HPC ecosystem is the people doing the R&D and other work needed to keep the ecosystem going. The DOE ECI involves a huge number of people. Last year, about 500 researchers attended the ECP Principal Investigator meeting, and many more are involved through other DOE/NNSA programs and industry. The ASCR and ASC programs run a number of efforts to educate and train future members of the HPC ecosystem, such as the ASCR and ASC co-funded Computational Science Graduate Fellowship (CSGF) and the Early Career Research Program. The NNSA offers similar opportunities. Both the ASCR and ASC programs continue to coordinate with National Science Foundation educational programs to ensure that America’s top computational science talent continues to flow into the ecosystem.

Finally, in addition to people and hardware, the U.S. program continues to develop the software stack (aka middleware) and end-user applications needed to ensure that exascale systems will be used productively. Doug Kothe reported that ECP has adopted standard Software Development Kits (SDKs). These SDKs are designed to support the goal of building a comprehensive, coherent software stack that enables application developers to productively write highly parallel applications that effectively target diverse exascale architectures. Kothe also reported that ECP is making good progress in developing applications software. This includes the implementation of innovative approaches, including machine learning, to utilize the GPUs that will be part of the future exascale computers.

All in all, the last several months of 2017 have set the stage for a very exciting 2018 for the U.S. exascale program. It has been about five years since the ORNL Titan supercomputer came onto the stage at #1 on the TOP500 list. Over that time, other more powerful DOE computers have come online (Trinity, Cori, etc.), but they were overshadowed by Chinese and European systems. It remains unclear whether or not the upcoming exascale systems will put the U.S. back on top of the supercomputing world. However, the recent developments help to reassure that the country is not going to give up its computing leadership position without a fight. That is great news, because for more than 60 years the U.S. has sought leadership in high performance computing for the strategic value it provides in the areas of national security, discovery science, energy security, and economic competitiveness.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

The post Momentum Builds for US Exascale appeared first on HPCwire.

Tilton wins NSF CAREER award to model, improve feed spacers

Colorado School of Mines - Tue, 01/09/2018 - 09:03

Nils Tilton, assistant professor of mechanical engineering at Colorado School of Mines, has received a National Science Foundation CAREER Award to develop a computational fluid dynamics model to improve efficient, low-energy options for wastewater treatment and desalination.

Tilton’s project, “Robust Numerical Modeling for Rational Design of Membrane Filtration Processes,” will receive $547,364 over five years beginning in August. 

“Shortages in potable water are creating a large demand for water treatment and desalination. California’s recent drought, for example, is motivating municipalities to invest in seawater desalination plants. Desalination technology is also now used to recycle municipal and industrial wastewater. The problem is that desalination requires a lot of energy, and the generation of that energy by power plants requires a lot of water. In the process, you also make more pollution, which exacerbates climate change and drought,” Tilton said. “Finding new, more energy-efficient ways of producing potable water is key to securing long-term water and energy security.”

Tilton’s work will focus on membrane separation processes, such as reverse osmosis and nanofiltration. Both offer promising low-energy solutions for desalination and wastewater treatment – that is, until the membranes get bogged down.

“You're basically filtering water by forcing it through a membrane that acts like a sieve – water goes through the membrane while salts and other contaminants are blocked,” Tilton said. “The problem is, all that stuff builds up on the membrane and increases the pressures needed to force the water through. With time, the salts also form a hard mineral scale, like the calcium deposits you get on shower walls, that impedes filtration, damages the membrane and increases maintenance costs.”

That retention of solutes, known as concentration polarization, can be tackled by patterning a mesh-like net of physical spacers on the membrane to alter the fluid flow at the surface. The impact of those feed spacers, however, is not well understood.
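For readers unfamiliar with the term, concentration polarization is often introduced with the classical film-theory estimate below (a textbook approximation for a fully rejecting membrane, not the model Tilton’s team is building): solutes convected toward the membrane by the permeate flux pile up exponentially against back-diffusion into the bulk.

```latex
% Classical film-theory estimate of concentration polarization (illustrative only).
\frac{C_m}{C_b} = \exp\!\left(\frac{J_w \, \delta}{D}\right)
% C_m: solute concentration at the membrane surface   C_b: bulk concentration
% J_w: permeate flux   \delta: boundary-layer thickness   D: solute diffusivity
```

In that simple picture, feed spacers act mainly on the boundary-layer thickness, promoting mixing that reduces the buildup at the membrane surface.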

Tilton and his team will develop a new method for simulating the interactions between polarization, scaling and mixing due to feed spacers. Using the information they gather, they will then design better patterns for the meshes to minimize polarization.

The researchers will also collaborate with a 3-D printing company to look into the possibility of producing the meshes by 3-D printing, and with Newmont Mining, which will provide sample mine wastewater for testing.

Applications of membrane separation processes include the desalination of seawater and the treatment of municipal wastewater for potable reuse, as well as the recycling of the wastewater generated during hydraulic fracturing. 

As part of the project, Tilton is also partnering with the Asian Pacific Development Center in Aurora to develop a new summer youth workshop. Aurora high school students would come to Mines for the workshop, learning computer programming using affordable Raspberry Pi computers. Mines students in the Multicultural Engineering Program would lead the workshop.

Tilton joined Mines in 2014 after serving as a postdoctoral research fellow at University of Maryland, College Park, and University of Aix-Marseille. He holds a PhD, master’s degree and bachelor’s degree in mechanical engineering from McGill University in Montreal.

Categories: Partner News

Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative

HPC Wire - Tue, 01/09/2018 - 07:48

Jan. 9, 2018 — From cars and bicycles to airplanes and space shuttles, manufacturers around the world are trying to make these vehicles lighter, which helps lower fuel use and lessen the environmental footprint.

One way that cars, bicycles, airplanes and other modes of transportation have become lighter over the last several decades is by using carbon fiber composites. Carbon fiber is five times stronger than steel, twice as stiff, and substantially lighter, making it the ideal manufacturing material for many parts. But with the industry relying on petroleum products to make carbon fiber today, could we instead use renewable sources?

In the December 2017 issue of Science, Gregg Beckham, a group leader at the National Renewable Energy Laboratory (NREL), and an interdisciplinary team reported the results of experimental and computational investigations on the conversion of lignocellulosic biomass into a bio-based chemical called acrylonitrile, the key precursor to manufacturing carbon fiber.

The catalytic reactor shown here is for converting chemical intermediates into acrylonitrile. The work is part of the Renewable Carbon fiber Consortium. Photo by Dennis Schroeder/NREL

Acrylonitrile is a large commodity chemical, and it’s made today through a complex petroleum-based process at the industrial scale. Propylene, which is derived from oil or natural gas, is mixed with ammonia, oxygen, and a complex catalyst. The reaction generates high amounts of heat and hydrogen cyanide, a toxic by-product. The catalyst used to make acrylonitrile today is also quite complex and expensive, and researchers still do not fully understand its mechanism.

“That’s where our study comes in,” Beckham said. “Acrylonitrile prices have witnessed large fluctuations in the past, which has in turn led to lower adoption rates for carbon fibers for making cars and planes lighter weight. If you can stabilize the acrylonitrile price by providing a new feedstock from which to make acrylonitrile, in this case renewably-sourced sugars from lignocellulosic biomass, we might be able to make carbon fiber cheaper and more widely adopted for everyday transportation applications.”

To develop new ideas to make acrylonitrile manufacturing from renewable feedstocks, the Department of Energy (DOE) solicited a proposal several years ago that asked: Is it possible to make acrylonitrile from plant waste material? These materials include corn stover, wheat straw, rice straw, wood chips, etc. They’re basically the inedible part of the plant that can be broken down into sugars, which can then be converted to a large array of bio-based products for everyday use, such as fuels like ethanol or other chemicals.

“If we could do this in an economically viable way, it could potentially decouple the acrylonitrile price from petroleum and offer a green carbon fiber alternative to using fossil fuels,” Beckham said.

Beckham and the team moved forward to develop a different process. The NREL process takes sugars derived from waste plant materials and converts those to an intermediate called 3-hydroxypropionic acid (3-HP). The team then used a simple catalyst and new chemistry, dubbed nitrilation, to convert 3-HP to acrylonitrile at high yields. The catalyst used for the nitrilation chemistry is about three times less expensive than the catalyst used in the petroleum-based process and it’s a simpler process. The chemistry is endothermic so it doesn’t produce excess heat, and unlike the petroleum-based process, it doesn’t produce the toxic byproduct hydrogen cyanide. Rather, the bio-based process only produces water and alcohol as its byproducts.

From a green chemistry perspective, the bio-based acrylonitrile production process has multiple advantages over the petroleum-based process that is being used today. “That’s the crux of the study,” Beckham said.

XSEDE’s Role in the Chemistry

Beckham is no stranger to XSEDE, the eXtreme Science and Engineering Discovery Environment funded by the National Science Foundation. He’s been using XSEDE resources, including Stampede1, Bridges, Comet and now Stampede2, for about nine years as a principal investigator. Stampede1 and Stampede2 (currently #12 on the Top500 list) are deployed and maintained by the Texas Advanced Computing Center.

Most of the biological and chemistry research conducted for this project was experimental, but the mechanism of the nitrilation chemistry was only at first hypothesized by the team. A postdoctoral researcher in the team, Vassili Vorotnikov of NREL, was recruited to run periodic density functional theory calculations on Stampede1 as well as the machines at NREL to elucidate the mechanism of this new chemistry.

Over about two months and several million CPU-hours on Stampede1, the researchers were able to shed light on the chemistry of this new catalytic process. “The experiments and computations lined up nicely,” Vorotnikov said.
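To put “several million CPU-hours over about two months” in perspective, here is a rough bit of arithmetic (my own, assuming a nominal two million core-hours and perfectly continuous use, neither of which is stated in the article):

```python
# Illustrative arithmetic only; the exact core-hour total is not given in the article.
assumed_core_hours = 2_000_000       # "several million" -- take 2 million for this sketch
wall_clock_hours = 60 * 24           # roughly two months of around-the-clock wall time

average_cores_busy = assumed_core_hours / wall_clock_hours
print(f"~{average_cores_busy:.0f} cores busy continuously for two months")   # ~1389 cores
```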

Because they had an allocation on Stampede1, they were able to rapidly turn around a complete mechanistic picture of how this chemistry works. “This will help us and other Top500 institutions to develop this chemistry further and design catalysts and processes more rationally,” Vorotnikov said. “XSEDE and the predictions of Stampede1 are pointing the way forward on how to improve nitrilation chemistry, how we can apply it to other molecules, and how we can make other renewable products for industry.”

“After the initial experimental discovery, we wanted to get this work out quickly,” Beckham continued. “Stampede1 afforded a great deal of bandwidth for doing these expensive, computationally intensive density functional theory calculations. It was fast and readily available and just a great machine to do these kind of calculations on, allowing us to turn around the mechanistic work in only a matter of months.”

Next Steps

There’s a large community of chemists, biologists and chemical engineers who are developing ways to make everyday chemicals and materials from plant waste materials instead of petroleum. Researchers have tried this before with acrylonitrile, but none have been as successful at developing a high-yielding process with possible commercial potential for this particular product. With their new discovery, the team hopes this work makes the transition into industry sooner rather than later.

The immediate next step is scaling the process up to produce 50 kilograms of acrylonitrile. The researchers are working with several companies including a catalyst company to produce the necessary catalyst for pilot-scale operation; an agriculture company to help scale up the biology to produce 3-HP from sugars; a research institute to scale the separations and catalytic process; a carbon fiber company to produce carbon fibers from the bio-based acrylonitrile; and a car manufacturer to test the mechanical properties of the resulting composites.

“We’ll be doing more fundamental research as well,” Beckham said. “Beyond scaling acrylonitrile production, we are also excited about using this powerful, robust chemistry to make other everyday materials that people can use from bio-based resources. There are lots of applications for nitriles out there — applications we’ve not yet discovered.”

Source: Faith Singer-Villalobos, TACC

The post Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative appeared first on HPCwire.

Mixed-Signal Neural Net Leverages Memristive Technology

HPC Wire - Mon, 01/08/2018 - 13:11

Memristive technology has long been attractive for potential use in neuromorphic computing. Among other things it would permit building artificial neural network (ANN) circuits that are processed in parallel and more directly emulate how neuronal circuits in the brain work. Recent work led by researchers at Oak Ridge National Laboratory and the University of Tennessee proposes a mixed signal approach that leverages memristive technology to build better ANNs.

“[Our] mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system… The proposed [system] includes synchronous digital long term plasticity (DLTP), an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead,” writes Catherine Schuman, a Liane Russell Early Career Fellow in Computational Data Analytics at Oak Ridge National Laboratory, and colleagues[i].

Their paper, Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level, was published in the IEEE Journal on Emerging and Selected Topics in Circuits and Systems.

The researchers point out that digital and analog approaches to building ANNs each have drawbacks. While digital implementations have precision, robustness, noise resilience and scalability, they are area intensive. Conversely, analog counterparts are efficient in terms of silicon area and processing speed, but “rely on representing synaptic weights as volatile voltages on capacitors or in resistors, which do not lend themselves to energy and area efficient learning.”

Instead, they propose a mixed-signal system where communication and control is digital while the core multiply-and-accumulate functionality is analog. Researchers used a hafnium-oxide memristor design based on earlier work (“A practical hafnium-oxide memristor model suitable for circuit design and simulation,” in Proceedings of IEEE International Symposium on Circuits and Systems).

Their design (figure two in the paper) consists of m x n memristive neuromorphic cores. “Each core has several memristive synapses and one mixed-signal neuron (analog in, digital out) to implement a spiking neural network. This arrangement helps maintain similar capacitance at the synaptic outputs and corresponding neurons. The similar distance between synapse and inputs also results in negligible difference in charge accumulation,” write the authors.
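To make the analog multiply-and-accumulate idea concrete, here is a minimal numerical sketch of a memristive crossbar computing a weighted sum (a generic illustration with invented conductance values, not the ORNL/UT circuit): each column’s current is the sum of input voltages multiplied by programmable conductances, which is exactly the dot product a neuron needs.

```python
import numpy as np

# Generic crossbar sketch: rows carry input voltages, columns accumulate currents.
# Conductances (in siemens) stand in for trained synaptic weights; values are invented.
conductance = np.array([[1.0e-6, 3.0e-6],
                        [2.0e-6, 0.5e-6],
                        [4.0e-6, 1.5e-6]])      # 3 synapses feeding 2 neurons

input_voltages = np.array([0.2, 0.0, 0.3])      # spiking inputs encoded as voltages

# Kirchhoff's current law performs the multiply-accumulate "for free" in the analog domain.
column_currents = input_voltages @ conductance   # current flowing into each neuron (amps)

# A mixed-signal neuron would integrate this charge and emit a digital spike past a
# threshold; here we simply threshold the instantaneous current.
threshold = 1.0e-6
spikes = column_currents > threshold
print(column_currents, spikes)   # [1.4e-06 1.05e-06] [ True  True]
```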

Also exciting is the researchers’ approach to implementing learning. Most ANNs require offline learning. For a network to learn online, Long Term Plasticity plays an important role in training the circuit with continuous updates of synaptic weights based on the timing of pre- and post-neuron fires.

“Instead of carefully crafting analog tails to provide variation in the voltage across the synapses, we utilize digital pre- and post-neuron firing signals and apply pulse modulation to implement a digital LTP (DLTP) technique…Basically the online learning process implemented here is one clock cycle tracking version of Spike time Dependent Plasticity… A more thorough STDP learning implementation would need to track several clock cycles before and after the post-neuron fire leading to more circuitry and hence increased power and area. Our DLTP approach acts similarly but ensures lower area and power,” write the authors.
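A heavily simplified software analogue of the one-cycle DLTP rule described above might look like the sketch below (my own illustration; the actual mechanism is implemented in mixed-signal circuitry, and the published rule may differ in detail): the weight is nudged up when the pre-synaptic neuron fired in the clock cycle just before a post-synaptic fire, and nudged down when the order is reversed.

```python
def dltp_update(weight, pre_prev, pre_now, post_prev, post_now,
                step=0.01, w_min=0.0, w_max=1.0):
    """One-clock-cycle digital long-term plasticity sketch (illustrative only).

    Only the previous cycle is remembered; full STDP would track several cycles
    before and after each post-neuron fire, costing more circuitry and power.
    """
    if post_now and pre_prev:
        weight += step      # pre fired the cycle before post: potentiate
    elif pre_now and post_prev:
        weight -= step      # post fired the cycle before pre: depress
    return min(max(weight, w_min), w_max)

w = 0.5
w = dltp_update(w, pre_prev=True, pre_now=False, post_prev=False, post_now=True)
print(w)   # 0.51
```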

Link to paper: http://ieeexplore.ieee.org/document/8119503/

Feature image source: ORNL

[i] Gangotree Chakma, Student Member, IEEE, Md Musabbir Adnan, Student Member, IEEE, Austin R. Wyer, Student Member, IEEE, Ryan Weiss, Student Member, IEEE, Catherine D. Schuman, Member, IEEE, and Garrett S. Rose, Member, IEEE

The post Mixed-Signal Neural Net Leverages Memristive Technology appeared first on HPCwire.

Curie Supercomputer Uses HPC to Help Improve Agricultural Production

HPC Wire - Mon, 01/08/2018 - 11:47

Jan. 8, 2018 — Agriculture is the principal means of livelihood in many regions of the developing world, and the future of our world depends on sustainable agriculture at a planetary level. High Performance Computing is becoming critical in agricultural activity, pest control, pesticide design and the study of pesticide effects. Climate data are used to understand the impacts on water and agriculture in many regions of the world, help local authorities manage water and agricultural resources, and assist vulnerable communities through improved drought management and response.

Image courtesy of the European Commission.

The demand for agricultural products has increased globally, and meeting this growing demand would have a negative effect on the environment: increased agricultural production requires the use of 70% of the world’s water resources and brings a rise in greenhouse gas emissions.

To reduce the negative impact on the ecosystem, seed companies are on the lookout for new plant varieties that yield more produce. Companies normally find such new varieties through field trials. These trials are a simple observational method, but they cost a lot of money and are time-consuming, taking years to find the best varieties.

Using High Performance Computing (HPC), the Curie supercomputer is able to provide a far more efficient solution to this problem. HPC enables numerical simulations of plant growth that help seed companies reach superior varieties without relying on field trials, which are more expensive and more harmful to the environment.

For example, if a farmer wants to know the conditions under which a plant grows best (its genetic parameters), they would have to test its growth rate under various conditions to select the parameters best matched to the specific environment of the region. With the help of HPC, estimating these parameters becomes simpler and more accurate by simulating plant growth with models that take into account the plant’s interaction with the environment. This reduces the number of field trials by a large percentage: instead of 100 field trials, 10 may be enough to estimate the best genetic parameters.
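As a toy illustration of what estimating a “genetic parameter” by simulation can look like (a deliberately simplified sketch with invented numbers and a generic logistic growth curve, not Cybele Tech’s models), one can sweep candidate parameter values through a plant-growth simulator and keep the value that best matches a handful of field measurements:

```python
import numpy as np

def simulated_biomass(days, growth_rate):
    """Toy logistic growth curve standing in for a real crop-growth simulator."""
    capacity = 10.0  # assumed maximum biomass (t/ha) for this sketch
    return capacity / (1.0 + np.exp(-growth_rate * (days - 60)))

# Hypothetical field observations: day of season vs. measured biomass (t/ha).
obs_days = np.array([30, 60, 90])
obs_biomass = np.array([1.1, 5.2, 8.9])

# Sweep candidate "genetic parameters" and keep the best fit -- cheap simulations
# standing in for many expensive field trials.
candidates = np.linspace(0.02, 0.20, 50)
errors = [np.sum((simulated_biomass(obs_days, r) - obs_biomass) ** 2) for r in candidates]
best_rate = candidates[int(np.argmin(errors))]
print(f"best-fit growth rate: {best_rate:.3f} per day")
```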

Cybele Tech, a French company, has used High Performance Computing to enable farmers to produce more with less and to know exactly what their plants need to get a better yield.

The company has been awarded 4 million core hours on Curie, hosted by GENCI at CEA, France.

Source: European Commission

The post Curie Supercomputer Uses HPC to Help Improve Agricultural Production appeared first on HPCwire.

Tutoring program aims to grow future scientists, engineers

Colorado School of Mines - Mon, 01/08/2018 - 11:42

When Kate Smits was an engineering student at the U.S. Air Force Academy in Colorado Springs, she struggled to find a mentor.

She had great professors and a great experience overall. But she never found that mentor, that female engineer she could look to and say, “Hey, there’s someone who looks like me. I want to be like that.”

What she did find, though, was a passion for STEM education and for breaking down the barriers to participation in a field where nearly half of all workers in 2015 were white males, according to the National Science Foundation.

“I started looking into that discrepancy of why people are or aren’t going into engineering,” said the assistant professor of civil and environmental engineering at Colorado School of Mines. “What the literature shows is there's a critical age – middle school – where students either get really excited about STEM or their interest completely drops off. You either grab them or you don’t.”

So when Smits received a National Science Foundation CAREER Award three years ago, it made perfect sense to dedicate the educational outreach portion of her grant to reaching those potential future scientists and engineers on their own turf – a middle-school classroom.

“I hypothesized that if we introduced STEM to these students in a sustained way at a critical time in their development, they would be more likely to go into it,” Smits said. “A lot of what we do, the decisions that we make, are based on what we’re exposed to.”

Twice a week, 12-15 Mines students visit College View Middle School, a public charter school in south Denver, for an hour of math and science tutoring and mentoring.

College View, which is part of the Denver School of Science and Technology network, is 95 percent minority, and 93 percent of its students come from low-income backgrounds. Students attend tutoring based on need, teacher requirement or interest.

On a recent afternoon, tutors worked with the middle schoolers in small groups, going over the answers to their last test, on linear equations. 

In addition to helping with homework and science fair projects, the Mines students are also encouraged to talk to the younger students about careers, college and life. Some of the tutors are interested in pursuing teaching careers, while others just like working with young people or appreciate the break from their own school work.


“It’s a great way to give back to the STEM community. Education is so important so any opportunity I can help out is important,” said Madison Webster, a sophomore studying chemical engineering. “I’ve really enjoyed it.” 

Blue O’Brennan, a sophomore majoring in physics, once brought one of his own statics exams to tutoring, just to show the younger students how what he’s doing in college is like what they’re doing in middle school. 

Working as a tutor has also been good practice for O’Brennan, who hopes to become a high school physics teacher.

“I’m hoping with my degree at Mines I’ll be able to show students where they can go with that information,” O’Brennan said. “It’s not just, ‘Here’s an inclined plane. Figure out how it works.’ It’s, ‘Alright, here’s what you’ll be doing in college, in research, in industry that involves this knowledge.’”

That connection to the bigger picture is so beneficial for younger students, said Alyse Nelsen, one of two College View teachers who partnered with Smits on the tutoring program.

“Sometimes in middle school it can feel really futile what they’re doing. But I always am overhearing the tutors saying things like, ‘Oh well I felt that way too when I was younger but this is how it applies to college’ or ‘This is what I’m interested in,’” Nelsen said. “I really can't overemphasize how important it is for these kids to see older people who they think are cool and look up to helping them with math.” 

Three years into a five-year grant, the tutoring program is already seeing results, Smits said. Students who have never tested at grade level on standardized math and science tests are passing for the first time and science fair participation at the school is up dramatically.

Smits also got approval to do a long-term study of the students, surveying them multiple times a year from sixth through 12th grade about their interest, participation and self-confidence in STEM. Early results are already showing increases in confidence and interest.

“The crazy thing is it doesn’t cost a lot of money. Here at Mines, you have a bunch of college students who want a part-time job and have huge hearts and are really motivated by service. There, you have a whole bunch of kids who really would love the time of an adult to sit down and work through some STEM issues. What a great combination,” Smits said. “This is a simple grassroots way of making a measurable impact on a community.”

That impact can’t come soon enough, either. STEM occupations in the U.S. are projected to grow by 8.9 percent between 2014 and 2024, compared to just 6.4 percent growth for non-STEM occupations, according to the U.S. Department of Commerce’s Economics and Statistics Administration.

“We talk a lot at Mines about preparing the pipeline. If we actually want to do that here in the state of Colorado, we need to start a whole lot sooner than the freshmen who walk through our door,” Smits said. “We need to start back when they’re a lot younger to be able to capture that incredible talent pool that's not currently being captured.”

“We’re missing out on a big percentage of talent by not incorporating underrepresented groups in engineering,” she said. “We’re not going to be able to solve the engineering problems of the future without incorporating everybody.” 

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Hike for Help service trip featured in The Himalayan Times

Colorado School of Mines - Mon, 01/08/2018 - 11:07

A group of Colorado School of Mines students traveled to Nepal over winter break to help build foot trails in the Mt. Everest region, and their service trip was featured in The Himalayan Times, an English-language newspaper in Nepal. The volunteers with Hike for Help Nepal also delivered about 400 pairs of shoes to low-income families in the area.

Categories: Partner News

Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre

HPC Wire - Mon, 01/08/2018 - 10:10

Jan 8 — Can you afford to lose a third of your compute real estate? If not, you need to pre-empt the impact of Meltdown and Spectre.

Meltdown and Spectre are quickly becoming household names, and not just in the HPC space. The severe design flaws in Intel microprocessors could allow sensitive data to be stolen, and the fixes are likely to be bad news for I/O-intensive applications such as those often used in HPC.

Ellexus Ltd, the I/O profiling company, has released a white paper: How the Meltdown and Spectre bugs work and what you can do to prevent a performance plummet.

Why is the Meltdown fix worse for HPC applications?

The changes being imposed on the Linux kernel (the KAISER patch) to more securely separate user and kernel space add overhead to every context switch. This is having a measurable impact on the performance of shared file systems and I/O-intensive applications, and the penalty is particularly noticeable in I/O-heavy workloads, where it could reach 10-30%.
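The reason I/O-heavy codes are hit hardest is that every system call now pays a higher kernel entry/exit cost. A crude way to see how quickly small system calls add up (a generic micro-benchmark sketch of my own, not an Ellexus tool, and not a measurement of the patches themselves) is to time the same data read through many tiny calls versus a few large ones:

```python
import os
import time

# Crude illustration: the same bytes read via many tiny system calls versus a few
# large ones. Every extra syscall pays the context-switch cost that KAISER increases.
PATH = "/tmp/io_demo.bin"
SIZE = 4 * 1024 * 1024                              # 4 MiB of throwaway test data
with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

def timed_read(chunk_size):
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:        # unbuffered: every read() is a syscall
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

print(f"64-byte reads: {timed_read(64):.3f} s")             # ~65,000 syscalls
print(f"1 MiB reads  : {timed_read(1024 * 1024):.3f} s")    # a handful of syscalls
os.remove(PATH)
```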

Systems that were previously just about coping with I/O heavy workloads could now be in real trouble. It’s very easy for applications sharing datasets to overload the file system and prevent other applications from working, but bad I/O can also affect each program in isolation, even before the patches for the attacks make that worse.

Profile application I/O to rescue lost performance

You don’t have to put up with poor performance in order to improve security, however. The most obvious way to mitigate performance losses is to profile I/O and identify ways to optimise applications’ I/O performance.

By using the tool suites from Ellexus, Breeze and Mistral, to analyse workflows it is possible to identify changes that will help to eliminate bad I/O and regain the performance lost to these security patches.

Ellexus’ tools locate bottlenecks and applications with bad I/O on large distributed systems, cloud infrastructure and supercomputer clusters. Once applications with bad I/O patterns have been located, the tools indicate the potential performance gains as well as pointers on how to achieve them. Often the optimisation is as simple as changing an environment variable, changing a single line in a script or changing a simple I/O call to read more than one byte at a time.
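The “read more than one byte at a time” fix mentioned above is usually a one- or two-line change. Here is a hedged before/after sketch (a generic example of the pattern, not code from any Ellexus customer) in Python; the same idea carries over to read()/fread() loops in C or any other language:

```python
# Before: one system call per byte -- the pattern an I/O profiler flags as "bad I/O".
def count_newlines_slow(path):
    count = 0
    with open(path, "rb", buffering=0) as f:
        while (b := f.read(1)):               # a syscall for every single byte
            if b == b"\n":
                count += 1
    return count

# After: identical result, reading 1 MiB per call -- thousands of times fewer syscalls.
def count_newlines_fast(path, chunk_size=1024 * 1024):
    count = 0
    with open(path, "rb", buffering=0) as f:
        while (chunk := f.read(chunk_size)):
            count += chunk.count(b"\n")
    return count
```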

In some cases, the candidates for optimisation will be obvious – a workflow that clearly stresses the file system every time it is run, for example, or one that runs for significantly longer than a typical task.

In others it may be necessary to perform an initial high-level analysis of each job. Follow three steps to optimise application I/O and mitigate the impact of the KAISER patch:

1. Profile all your applications with Mistral to look for the worst I/O patterns

Mistral, our I/O profiling tool, is lightweight enough to run at scale. In this case Mistral would be set up to record relatively detailed information on the type of I/O that workflows are performing over time. It would look for factors such as how many metadata operations are being performed, the number of small I/O operations, and so on.

2. Deal with the worst applications, delving into detail with Breeze

Once the candidate workflows have been identified they can be analysed in detail with Breeze. As a first step, the Breeze trace can be run through our Healthcheck tool that identifies common issues such as an application that has a high ratio of file opens to writes or a badly configured $PATH causing the file system to be trawled every time a workflow uses “grep”.

3. Put in place longer-term I/O quality assurance

Implement the Ellexus tools across your systems to get the most from the compute and storage and to prevent problems reoccurring.

By following these simple steps and our best-practice guidance, it is easy to find and fix the biggest issues quickly, giving you more time to optimise for the best performance possible.

Source: Ellexus Ltd

The post Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre appeared first on HPCwire.

Maniloff interviewed by Los Angeles Times

Colorado School of Mines - Mon, 01/08/2018 - 08:58

A Los Angeles Times article about the Trump administration's plans to open coastal California waters to expanded drilling featured an interview with Peter Maniloff, assistant professor of economics and business at Colorado School of Mines.

From the story:

Oil is trading at about $60 a barrel — roughly the price that would make an offshore project profitable, said Peter Maniloff, an economist at Colorado School of Mines who studies the oil and gas industry.

But “you want to be confident that prices will remain that high before undertaking a very large investment to drill an offshore well,” Maniloff said. “And it’s hard to be confident of that because fracking has driven prices down.”

Categories: Partner News

AMD Previews Processor and Graphics Products for HPC at CES 2018

HPC Wire - Mon, 01/08/2018 - 07:33

LAS VEGAS, Jan. 8, 2018 — AMD has detailed its forthcoming roll-out plan for its new and next generation of high-performance computing and graphics products during an event in Las Vegas just prior to the opening of CES 2018. Alongside announcing the first desktop Ryzen processors with built-in Radeon Vega Graphics, AMD also detailed the full line up of Ryzen mobile APUs including the new Ryzen PRO and Ryzen 3 models, and provided a first look at the performance of its upcoming 12nm 2nd generation Ryzen desktop CPU expected to launch in April. In graphics, AMD announced the expansion of the “Vega” family with Radeon Vega Mobile and that its first 7nm product is planned to be a Radeon “Vega” GPU specifically built for machine learning applications.

“We successfully accomplished the ambitious goals we set for ourselves in 2017, reestablishing AMD as a high-performance computing leader with the introduction and ramp of 10 different product families,” said AMD President and CEO Dr. Lisa Su. “We are building on this momentum in 2018 as we make our strongest product portfolio of the last decade even stronger with new CPUs and GPUs that bring more features and more performance to a broad set of markets.”

Technology Updates

AMD CTO and SVP Mark Papermaster shared updates on AMD’s process technology roadmaps for both x86 processors and graphics architectures.

  • x86 Processors
    • The “Zen” core, currently shipping in Ryzen desktop and mobile processors, is in production at both 14nm and 12nm, with 12nm samples now shipping.
    • The “Zen 2” design is complete and will improve on the award-winning “Zen” design in multiple dimensions.
    • AMD is on track to deliver performance and performance-per-watt improvements through 2020.
  • Graphics Processors
    • Expanding the “Vega” product family in 2018 with the Radeon Vega Mobile GPU for ultrathin notebooks.
    • The first 7nm AMD product, a Radeon “Vega”-based GPU built specifically for machine learning applications.
    • A production-level machine learning software environment, with AMD’s MIOpen libraries supporting common machine learning frameworks such as TensorFlow and Caffe on the ROCm Open eCosystem platform, the industry’s first fully open heterogeneous software environment, which makes it easier to program AMD GPUs for high-performance compute and deep learning.

Client Compute Updates

AMD SVP and General Manager, Computing and Graphics Business Group Jim Anderson detailed upcoming AMD client compute processors including:

  • The Ryzen desktop processor with Radeon graphics
    • Desktop Ryzen APUs combine the latest “Zen” core and AMD Radeon graphics engine based on the advanced “Vega” architecture, bringing:
      • The highest performance graphics engine in a desktop processor
      • Advanced quad core performance with up to 8 processing threads
      • 1080p HD+ gaming performance without a discrete graphics card
      • Beautiful display features with Radeon FreeSync technology
      • Full benefit of Radeon software driver features including Radeon Chill, Enhanced Sync and Radeon ReLive
    • Planned to be available starting February 12, 2018.

 

About AMD

For more than 45 years, AMD has driven innovation in high-performance computing, graphics, and visualization technologies ― the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses, and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible.

Source: AMD

The post AMD Previews Processor and Graphics Products for HPC at CES 2018 appeared first on HPCwire.

ANL’s Rick Stevens on CANDLE, ARM, Quantum, and More

HPC Wire - Mon, 01/08/2018 - 07:30

Late last year HPCwire caught up with Rick Stevens, associate laboratory director for Computing, Environment and Life Sciences at Argonne National Laboratory, for an update on the CANDLE (CANcer Distributed Learning Environment) project on which he is a PI. CANDLE is an effort to develop “a broad deep learning infrastructure able to run on leadership class computers and other substantial machines” for use in cancer research. While most of the conversation covered CANDLE’s deep learning efforts, Stevens also offered thoughts on ARM technology’s prospects in HPC and challenges facing quantum and neuromorphic computing.

For background on CANDLE see HPCwire article, Deep Learning Thrives in Cancer Moonshot. There are many elements to the CANDLE program; Stevens is the PI for a pilot program that is screening drugs against cancer cell lines and xenograft tumor tissue and using the data to build models able to predict how effective the drugs will be against various cancers. Progress has been remarkably quick and Stevens presented a paper at SC17 – Predicting Tumor Cell Line Response to Drug Pairs with Deep Learning – showcasing his group’s efforts so far.

“There will be drugs, I predict, in clinical trials based on the results that we achieve this year,” Stevens told HPCwire.

Presented here are portions of the interview with Stevens.

HPCwire: Let’s start with CANDLE. Can you give us an update? As I understand it there’s a release on GitHub.

Rick Stevens, Argonne National Laboratory

Rick Stevens: We did the first release of the CANDLE environment this summer. It’s running on the Theta (Cray) machine at Argonne, on Cori (Cray) at NERSC, on SummitDev (IBM) at Oak Ridge, and will soon be running on Summit (IBM). It’s also running on an Nvidia DGX-1 and at the NIH campus on Biowulf. Those are the main platforms. We’ve been using CANDLE both as the production engine but also using it to search for better model parameters and better hyper-parameters for the cancer models in the drug responder problem.

HPCwire: This is the pilot to screen drugs against cancer cell lines and xenografts and to use results to develop predictive models of how effective the drugs are?

Rick Stevens: Yes. We actually have a new model and it’s achieving much better performance than anything anybody else has in terms of predictive drug pair response. Now this is using experimental data from cell lines, it’s not yet using clinical data. One of the key problems is that although it seems like we have a lot of data, we actually have very little data of the type we need, which is high-quality labeled clinical data that we can link back to high-quality molecular data.

HPCwire: What makes the new model so much better?

Rick Stevens: We’ve been experimenting with convolutional networks, which are widely used in computer vision, and we thought they were giving us better performance than the simpler networks we’d tried before. We did a bunch of experiments which showed that in fact they were training faster, but the accuracy we achieved with convolutions wasn’t better than the accuracy we achieved without convolutions – it just trained about ten times faster.

So we went back and started trying different network types. The one that we are currently using is based on residual networks. Basically it uses what is called a tower architecture[i] and it essentially borrows a different kind of idea developed for computer vision. Residual networks are where each layer of the network is both computing [i.e., learning] a new function and taking input from the previous layer. In other words, it allows the network to decide as it’s learning whether to use a transformed feature it computes or whether to use the residual of the difference between that transformed feature and the original version.

It comes up with its own weights during training and is doing that across thousands of connections, literally tens of thousands of connections. That architecture just works better. We have some theoretical understanding as to why it works better, but one notion of why it works better is that it gives the network a slightly simpler thing to learn each time.
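As a minimal sketch of the skip-connection idea described here (illustrative only; it is not the CANDLE drug-response model, and the layer sizes and dropout rate are assumptions), a dense residual block in Keras might look like this:

    from tensorflow.keras import Model, layers

    def residual_dense_block(x, units, dropout=0.1):
        # The layer learns a new transform of its input...
        y = layers.Dense(units, activation="relu")(x)
        y = layers.Dropout(dropout)(y)
        # ...and, when the dimensions match, its output is added back to the
        # input, so training can weight the transform against the residual.
        if x.shape[-1] == units:
            y = layers.Add()([x, y])
        return y

    inputs = layers.Input(shape=(942,))      # an encoded feature vector (size made up)
    h = residual_dense_block(inputs, 942)
    h = residual_dense_block(h, 942)
    outputs = layers.Dense(1)(h)             # e.g. a predicted growth value
    model = Model(inputs, outputs)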

That’s currently our best performing model across cell lines and it is being used in both single drugs and drug pairs. The drug pairs problem is the really hard one and we can [already] predict with about 93 percent accuracy the growth inhibition [or not] of the tumor when given these two drugs. That’s used to prioritize drugs for further testing. We’re using it right now to [design] follow-on experiments (network diagram below).

Neural network architecture. The orange square boxes, from bottom to top, represent input features, encoded features, and output growth values. Feature models are denoted by round shaded boxes: green for molecular features and blue for drug features. There are multiple types of molecular features that are fed into submodels for gene expression, proteome, and microRNA. The descriptors for the two drugs share the same descriptor model. All encoded features are then concatenated to form input for the top fully connected layers. Most connecting layers are linked by optional residual skip connections if their dimensions match. Source: Fig. 2 from the cited paper, Predicting Tumor Cell Line Response to Drug Pairs with Deep Learning.


HPCwire: These models are correlation models, built on the results you see. Are you also working with mechanistic models?

Rick Stevens: Although it is not a done deal yet, we are talking to a company that has built a mechanistic model for cancer drug response prediction that couples the machine learning models with the mechanistic models. The mechanistic models use mutational data and signaling pathways. This [collaboration] will help us fill in the holes where those [mechanistic] models fall down. We have a collaboration that is spinning up in a few months and maybe we’ll have some progress to show with this hybridization [approach].

HPCwire: Will the hybrid model outperform either of the models individually?

Rick Stevens: That’s what we are shooting for. These mechanistic models in very narrow cancer types are about 80 percent predictive. What we are hoping is that by combining these things we can push the combined engine up to 96-97 percent. At that point you are probably in the noise of the data, where things have been misclassified. So we have also been testing a lot of classification data. This is all tumor data from the large archives, NCI Genomic Data Commons, and we are building classifiers that can distinguish between normal and tumor data and can also identify the cancer type and the site of origin based on just expression data. We can get these predictions to be about 98 percent accurate.
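For readers unfamiliar with this class of model, the sketch below is a minimal expression-based classifier of the kind described, with made-up layer sizes, gene count and class count; it is not the Argonne/NCI model.

    from tensorflow.keras import layers, models

    def build_expression_classifier(n_genes, n_classes):
        """Toy classifier over gene-expression profiles: normal vs. tumor, or
        tumor type / site of origin when n_classes is larger. Sizes are illustrative."""
        inputs = layers.Input(shape=(n_genes,))
        h = layers.Dense(1024, activation="relu")(inputs)
        h = layers.Dropout(0.2)(h)
        h = layers.Dense(256, activation="relu")(h)
        outputs = layers.Dense(n_classes, activation="softmax")(h)
        model = models.Model(inputs, outputs)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_expression_classifier(n_genes=20000, n_classes=30)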

HPCwire: Poor data quality seems to be a constant problem in both cancer research and deep learning. Until recently much of the descriptive data in cancer came from pathologists looking at tissue under a microscope. Interpretations varied.

Rick Stevens: You know the worst thing for training is to have bad data, so you want to clean the data, throw out the outliers and have the best possible representation of the distribution you are trying to learn. The idea is building these kinds of quality-control front ends. We’re doing it for cancer but it turns out that the autonomous vehicle people are doing exactly the same thing and so we are sharing architectural ideas about how to do that. Going into this kind of production, large-scale use of AI, everybody’s got the same infrastructure needs and that’s what CANDLE is. We’re debugging it around cancer but we have already started using it for drug design, which is a different problem.

[For example,] one thing CANDLE can do is these large searches. One of the problems for doing drug design is you need to generate libraries of lead-like structures (structures likely to have pharmacological activity). That’s a huge search problem and you need to be able to manage that search problem in a principled way. We built into CANDLE a set of optimizers that are not optimizing the internal parameters of the model but they are optimizing the search. The model is optimizing its own internal parameters but the CANDLE search supervisor, we call it, is using an optimization algorithm to decide which part of this search space to try next based on how well you’ve been doing.
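A toy version of that outer loop is sketched below: a supervisor that scores candidate points in a search space and biases later draws toward the best result so far. The real CANDLE supervisors use far more sophisticated optimizers; this only illustrates the separation between a model optimizing its own parameters and the outer search deciding what to try next.

    import random

    def search_supervisor(score_fn, space, budget=100, explore=0.3):
        """Pick which part of the search space to try next based on how well
        previous candidates scored. score_fn stands in for training/evaluating
        a model or scoring a candidate molecule."""
        best, best_score = None, float("-inf")
        for _ in range(budget):
            if best is None or random.random() < explore:
                candidate = {k: random.choice(v) for k, v in space.items()}  # explore
            else:
                candidate = dict(best)                     # exploit: perturb one
                k = random.choice(list(space))             # dimension of the best
                candidate[k] = random.choice(space[k])     # candidate found so far
            score = score_fn(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    # Example with a made-up scoring function and hyper-parameter space.
    space = {"lr": [1e-4, 1e-3, 1e-2], "layers": [2, 4, 8], "dropout": [0.0, 0.1, 0.3]}
    best, score = search_supervisor(lambda c: -c["lr"] * c["layers"], space, budget=50)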

HPCwire: Down to the structure of the molecule?

Rick Stevens: Exactly. We can use CANDLE to optimize the search space for these drugs. You are just trying to generate these molecules. Another interesting factoid is we started incorporating some software from Uber. Uber is moving aggressively on self-driving cars and they collaborated with Nvidia earlier this year to produce a piece of software called Horovod. The name comes from this kind of funky Russian folk dance, and the software implements a very efficient ring-based form of communication. They made it open source in a way that is generic, so we have incorporated that into CANDLE.
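For reference, this is roughly how the open-source Horovod API plugs its ring-allreduce into a Keras training script. It is a generic sketch with toy data and a toy model, not CANDLE's actual integration.

    # Launch with, for example:  horovodrun -np 8 python train.py
    import numpy as np
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()                                          # one process per GPU/rank

    # Toy data and model standing in for a real workload.
    x = np.random.rand(1024, 32).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])

    opt = tf.keras.optimizers.Adam(1e-3 * hvd.size())   # scale LR with worker count
    opt = hvd.DistributedOptimizer(opt)                 # gradients averaged by ring-allreduce

    model.compile(optimizer=opt, loss="mse")
    model.fit(x, y, batch_size=64, epochs=2,
              callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
              verbose=1 if hvd.rank() == 0 else 0)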

We are going to borrow any piece of technology we can get, so we just plugged that right in. It turns out that everybody is trying to solve the same problem. If you take away the application, there’s deep learning and I have got a bunch of data and models and you’ve got to try to find the optimal models against my data, and I’ve got data that’s dirty and data that’s not balanced and so forth; so the generic technology behind AI is all of the same stuff whether you are working on robotics or on computer-driven cars or cancer or choosing ads in Facebook. It’s all the same underlying problem you are trying to solve from a data management and optimization [perspective].

HPCwire: Who’s actually using CANDLE at this point?

Rick Stevens: The first beta release of the whole system was in July and we’ve done some tutorials. It’s installed at NIH and we’ve got probably 20 users there and they are all trying different things and all in the early stages of debugging their machine learning approach. CANDLE is really aimed at groups that kind of know what they are doing. [You don’t] want to burn millions of node hours trying to optimize a model if you have no idea if your model is any good. For people that are just tinkering, CANDLE is not the place to start because you can easily burn up all of your allocation quickly.

The other thing we are doing there is stepping up the work on portable model representation. It turns out there are three different standards emerging for taking neural network models and making them portable between systems. We were hoping it would be one but it turns out there’s three. Two are coming from the community and Nvidia is doing a third one. NIH has taken the lead on that. Ultimately we want to build deliverables from these projects, models that other people can use. This is on two levels. One is the code for those models. But the other is the model itself, an executable of the model that can be put into some pipeline. We are creating a database of models that are independent of the language used to describe them.
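The interview does not name the three standards, so purely as an illustration of what a portable model representation looks like in practice, the sketch below exports a toy PyTorch model to the ONNX interchange format. The choice of ONNX and the model itself are assumptions made for the example, not the formats the project settled on.

    import torch
    import torch.nn as nn

    # A toy model standing in for a CANDLE deliverable.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    model.eval()

    dummy_input = torch.randn(1, 32)   # the exporter traces the model with this shape
    torch.onnx.export(model, dummy_input, "model.onnx",
                      input_names=["features"], output_names=["response"])
    # The resulting model.onnx file is an executable description of the network,
    # independent of the framework that produced it, and can be dropped into a pipeline.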

HPCwire: How else is the CANDLE infrastructure being used now?

Rick Stevens: We are also using it to produce very large scale predictions right now. NCI has a high throughput experimental lab where they can do thousands of experiments a day and we want to apply optimal experimental design strategies to those. To do that we not only have to build models but also we have to optimize the models to run in inference mode and then use them to make literally millions of predictions against tumor samples that NCI has that they can do experiments on.

The part of that that is really interesting is we have experimental data for drug combinations. Just doubles. Pairs. They took 100 of the top small FDA compounds and paired them out, so that’s 5,000. It would be 10,000 but we only have to do half the pairs, and you have to do it at like ten doses and in a large number of cell lines. But it is only 100 compounds. We’ve got a database of million compounds that we want to test but we can’t afford to test a million times a million. Nobody is ever going to do that. So the idea is we can train the models up on all the data we have and we run them on these pairs or triplets against 1,000 cell lines and 1,000 xenografts we have – it’s literally billions of predictions that we are making.
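The arithmetic behind those numbers can be checked in a couple of lines; the 100 compounds and the roughly one-million-compound library are from the interview, and the counts below simply follow from them.

    from itertools import combinations

    # 100 compounds, paired without order: half of the 10,000 ordered pairs,
    # i.e. 4,950 -- "roughly 5,000" -- each then tested at ~10 doses per drug.
    screened_pairs = sum(1 for _ in combinations(range(100), 2))

    # The full library is about a million compounds, so exhaustive pairwise
    # testing (~5 x 10^11 pairs) is out of reach; models prioritize what to test.
    library_pairs = 1_000_000 * (1_000_000 - 1) // 2

    print(screened_pairs)   # 4950
    print(library_pairs)    # 499999500000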

HPCwire: Let’s change gears for a second. You’ve said in the past that for ARM to gain a bigger foothold in HPC, it needed to have a clearer accelerator strategy. Do you still think that? What’s your take on ARM’s prospects?

Rick Stevens: ARM is fine. The chips that are out are showing many benchmarks that are comparable to server class Xeon. The 64-bit ARM core probably does not have exactly the same thread performance as the state of the art Xeon but they are not that far behind. The memory architecture is still evolving and the compilers for the server class machines have to get a little bit better. But for everything we’re (CANDLE) doing and that science is doing, not everything but lots of it, you need another order of magnitude of power efficiency and you are not going to get that without adding accelerators.

If you look at where the leadership-class machines are, we are not fielding any machines that are not accelerated in some way. These ARM server-class nodes are not manycore in the same way that, say, Xeon Phi is, and they are not GPUs even though they could be paired with GPUs. The few that are out there right now do not support NVLink, so it would be a PCIe offload model for accelerated work.

If the goal for ARM is essentially to be an alternative to Xeon in the computer center, it doesn’t necessarily give you a reason to move to it, because 99 percent of your workload is going to be on the accelerator and the host processor is not particularly interesting. Look at the Summit and Sierra machines: the total amount of capability that’s in the accelerator versus the host is [close to] 98 percent in the accelerator. If you are just running on the host you are only using 2 percent of the silicon you have access to. That’s not a particularly good place to be from a price/performance perspective.

I think it’s important to have this really innovative ecosystem and getting ARM in there is good, because it causes everybody to think harder about where to go. It also gets players that haven’t been in the HPC business before. On the other hand, you’ve got to be able to field a machine that can win bids and so the guys making machines with ARM must have an accelerator strategy so they can win bids. If you are trying to compete in the kind of HPC simulation space or the deep learning space I think it would be very hard to win bids without an accelerator strategy.

HPCwire: Deep learning is often associated with neuromorphic technology and the idea that closely mimicking actual brain neuronal functioning will dramatically cut power and boost performance. Has CANDLE looked at neuromorphic technology?

Rick Stevens: We’re doing some exploration there. We are obviously interested in what Intel (Nervana) is doing and what IBM (True North) is doing. There’s some early results that are mostly from inside of the labs where they are still doing things in emulation or simulation that are pretty encouraging from the standpoint of being able to very efficiently solve problems, from a power and number of neurons used perspective. But there hasn’t been somebody taking a production deep neural network, pick your favorite, and running that on neuromorphic hardware. There’s no proof of principle that we can do that yet.

The principal problem here is that for the deep neural networks we’re using back propagation and stochastic gradient descent, or some derivative, to train these things, and while you can use back propagation to train neuromorphic hardware, there’s a penalty; it kind of defeats the whole purpose. We’ve got to have a way to train these networks that takes advantage of the kind of synaptic plasticity that’s built into the designs that are actually trainable. The early IBM (neuromorphic) chips were not trainable. You did all the training offline and then moved them onto the network. The newer chips will be online trainable, but how well that will work is not clear. This whole idea of how to train neuromorphic hardware on things that are not model problems is still TBD.

HPCwire: How about quantum computing? There’s more buzz daily. What’s your take on the reality?

An IBM cryostat wired for a prototype 50 qubit system. (PRNewsfoto/IBM)

Rick Stevens: So our interest: it is the same thing with quantum, you have to track it, and the best way to track it is to get your hands dirty trying to do it. For what we are doing, quantum computers, as they exist today, are really not appropriate for moving large amounts of data. Quantum computers require you to store a bunch of superpositions, and to do something like machine learning you have to essentially preload the data, and it takes exponential [time]. If you have a lot of ‘n’ different states it is going to take you that many cycles to load the data, so that’s kind of the opposite of big data machines; they are like tiny data machines, or no data machines. The best algorithms are ones where there is no data at all. The functions that you are trying to compute, you kind of generate on the fly. Because we don’t have quantum storage and we don’t have quantum communication, it’s very painful to get data inside a quantum computer today.

Now there are ideas people have on how to deal with this. So you use a classical algorithm, non quantum, to train something and then calculate a reduced state of that, in other words a mathematical algorithm function that kind of approximates the function and then try to form an analytical version of that and then you use that to load…I mean there’s all these tricks people are thinking about. But none of it is practical. I think quantum computing is very important but I can’t draw a line now where I say in 2028 we will stop using our 100 exaflops machines, or whatever we are using at that time, and start using quantum machines for this problem. Until we can solve data, they are going to be good for things like quantum simulation where I am using the quantum computer to simulate a quantum system, a quantum chemistry problem for example. The reason you can do that is there is no data. You have a pure algorithmic formulation.

The other question is how big the qubit counts have to be. You may need many physical qubits to get one logical qubit because of error management. The problem is, when you start going to these larger collections of them, there is also the question of the topology of the qubits. For it to be a universal quantum computer, things have to be entangled. That means in some sense each pair of qubits has to be able to talk to each other somehow, and it’s really hard to do that if you make a linear array. The distance between the edges is quite far, so people are thinking of doing these 2D arrays or 2-and-a-half-D arrays or 1-and-a-half-D arrays to try to make it possible for the qubits to entangle each other without having to move states across very large distances, because that’s really hard to do. You want these things to be compact. Yet you want all the qubits to see each other in some way, or see each other in some minimum number of hops, so they can entangle each other.

Plus these superconducting devices are, you know, physics experiments. I read the press release on the latest IBM machine. It stays coherent for 90 microseconds or something.

HPCwire: How about D-Wave’s quantum annealing approach?

Rick Stevens: There have been some experiments to show that maybe there’s some quantum speedup there but quantum annealing is a very special case and it’s not clear how many problems we can map into that, number one, and it’s not clear by the time you do that and you end up having to probabilistically solve this thing, that you are getting speedup. So there’s some controversy over that. I’m not saying yes or no there. Just there’s enough controversy that you have to question whether or not it makes any sense. It’s essentially a special purpose machine. Well I’ve got lots of other special purpose machine ideas that we could target but I mean as a physics experiment it needs to keep going.

I think of quantum computing… you know the hype cycle, right. The hype cycle has these humps. In quantum computing we are still in this first part. We haven’t fallen into the valley of disillusionment. I think we will fall in there when people realize – ok, IBM and Google will knock each other out for a while, and they will do some mock problems that show quantum supremacy, and they will say, ok, now what. As people start to get more and more understanding of it they’ll say, ok, there is a class of problems that we can make hardware solve but it’s not nearly as broad as the popular press has made it sound. At the same time there could be revolutionary advances. The Chicago Quantum Exchange is working on defect-based qubits. That’s using an off-the-shelf technology.

Brief Stevens Bio

Rick Stevens is Argonne’s Associate Laboratory Director for Computing, Environment and Life Sciences. Stevens has been at Argonne since 1982, and has served as director of the Mathematics and Computer Science Division and also as Acting Associate Laboratory Director for Physical, Biological and Computing Sciences. He is currently leader of Argonne’s Petascale Computing Initiative, Professor of Computer Science and Senior Fellow of the Computation Institute at the University of Chicago, and Professor at the University’s Physical Sciences Collegiate Division. From 2000-2004, Stevens served as Director of the National Science Foundation’s TeraGrid Project and from 1997-2001 as Chief Architect for the National Computational Science Alliance.

[i] Generalization Tower Network: A Novel Deep Neural Network Architecture for Multi-Task Learning, https://arxiv.org/abs/1710.10036

The post ANL’s Rick Stevens on CANDLE, ARM, Quantum, and More appeared first on HPCwire.

Supercomputer Simulations Allow Researchers to Understand Characteristics of Diamonds

HPC Wire - Fri, 01/05/2018 - 10:10

Jan. 5, 2018 — For centuries diamonds have been revered for their strength, beauty, value and utility. Now a team of researchers from Argonne National Laboratory, running molecular dynamics calculations at the Argonne Leadership Computing Facility (ALCF) and Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), are finding additional reasons to celebrate this complex material—and it has nothing to do with color, cut or clarity.

In a series of papers published in Science, Nature and Nature Communications, experimentalists and computational scientists from Argonne’s Center for Nanoscale Materials (CNM) shared several “firsts” in their ongoing efforts to uncover new characteristics in diamond and diamond-like carbons that make these materials even more attractive, particularly for industrial applications.

For example, the Nature study, published in August 2016, highlights their discovery of a revolutionary diamond-like film that is generated by the heat and pressure of an automotive engine. This ultra-durable, self-lubricating tribofilm (a film that forms between moving surfaces) could have profound implications for the efficiency and durability of future engines and other moving metal parts that can be made to develop self-healing, diamond-like carbon tribofilms.

The phenomenon was first discovered several years ago through experiments conducted by researchers in the Tribology and Thermal-Mechanics Department in Argonne’s Center for Transportation Research. But it took theoretical insight using supercomputing resources to fully understand what was happening at the molecular level in the experiments. Argonne nanoscientist Subramanian Sankaranarayanan and postdoctoral researcher Badri Narayanan ran molecular dynamics simulations on Argonne’s Mira system and NERSC’s Edison system to understand what was happening at the atomic level. These calculations helped them determine that the catalyst metals in the nanocomposite coatings were stripping hydrogen atoms from the hydrocarbon chains of the lubricating oil, then breaking the chains down into smaller segments. The smaller chains then joined together under pressure to create the highly durable DLC tribofilm.
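For readers new to the method, the toy sketch below shows what a molecular dynamics step does mechanically: a velocity-Verlet integrator advancing a handful of Lennard-Jones particles. It bears no resemblance in scale or force field to the simulations run on Mira and Edison; every number in it is illustrative.

    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces for a handful of particles (toy scale)."""
        forces = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r = pos[i] - pos[j]
                d2 = np.dot(r, r)
                inv6 = (sigma * sigma / d2) ** 3
                f = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / d2 * r
                forces[i] += f
                forces[j] -= f
        return forces

    def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
        """The core loop of any MD code: integrate Newton's equations in time."""
        f = lj_forces(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass
            pos += dt * vel
            f = lj_forces(pos)
            vel += 0.5 * dt * f / mass
        return pos, vel

    # Four particles near their equilibrium spacing, starting at rest.
    positions = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 1.1], [1.1, 1.1]])
    velocities = np.zeros_like(positions)
    positions, velocities = velocity_verlet(positions, velocities)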

“This is an example of catalysis under extreme conditions created by friction. It is opening up a new field where you are merging catalysis and tribology, which has never been done before,” said Sankaranarayanan. “This new field of tribocatalysis has the potential to change the way we look at lubrication.”

In the Nature Communications study, published in July 2016, a team of Argonne and University of California-Riverside researchers once again used a combination of experiments and molecular dynamics simulations to demonstrate how diamond—in this case ultrananocrystalline diamond that serves as a substrate—can be used to grow graphene that contains relatively few impurities and costs less to make, in shorter time and at lower temperatures compared to the process widely used to make graphene today. Current graphene fabrication protocols introduce impurities during the etching process itself, which involves adding acid and extra polymers, and when they are transferred to a different substrate for use in electronics. These impurities negatively affect the electronic properties of the graphene, the researchers noted.

The simulations—which were developed by Sankaranarayanan and his post-docs, Badri Narayanan and Sanket Deshmukh, and utilized 300,000 to 500,000 node hours at NERSC in addition to computing time at Argonne—helped the team understand the molecular-level processes underlying graphene growth. They ran three different sets of calculations on NERSC’s Edison supercomputer to tease out the sequence of events leading to graphene nucleation on nickel and to determine what kind of graphene structures can grow on different crystal orientations.

“NERSC is a very good resource to have because it allows the flexibility to do intermediate, production-run calculations,” Sankaranarayanan said. “In this example, you have a lot of things happening mechanistically, and the experimentalists have an end point and the time scales involved are quite fast. But they have not yet reached a stage where in situ experiments can be performed on these kinds of rapidly evolving interfaces, and they want to understand the dynamics of what is happening at the nanosecond and microsecond time scales. It is this dynamical evolution that the experimentalists want us to simulate.”

In an earlier, related study published in Science, the Argonne team described how a series of molecular dynamics simulations paved the way for the design of a near-frictionless hybrid material. The research team again used a combination of experiments and simulations to demonstrate that superlubricity can be realized at engineering scale when graphene is used in combination with nanodiamond particles and diamond-like carbon. Considering that nearly one-third of every fuel tank is spent overcoming friction in automobiles, a material that can achieve superlubricity would greatly benefit industry and consumers alike.

“The beauty of this particular discovery is that we were able to see sustained superlubricity at the macroscale for the first time, proving this mechanism can be used at engineering scales for real-world applications,” Sankaranarayanan said. “It was really a big breakthrough that purely came out of calculations that we did initially at NERSC and then at ALCF.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. »Learn more about computing sciences at Berkeley Lab.

Source: Kathy Kincade, NERSC and Berkeley Lab

The post Supercomputer Simulations Allow Researchers to Understand Characteristics of Diamonds appeared first on HPCwire.
