Feed aggregator

Abbud-Madrid discusses Martian ice cliffs with National Geographic, Science

Colorado School of Mines - Thu, 01/11/2018 - 16:03

Angel Abbud-Madrid, director of the Center for Space Resources at Colorado School of Mines, was recently interviewed by National Geographic and Science about the discovery of ice cliffs on Mars. The eight cliffs expose what scientists believe is nearly pure water ice up to 100 meters thick, according to findings published this week in Science magazine.

From the National Geographic article:

"It’s looking more encouraging that water ice could be available at depths shallow enough that could be used as resources for human missions to Mars," says Angel Abbud-Madrid, the director of the Center for Space Resources at the Colorado School of Mines.

National Geographic: Huge Water Reserves Found All Over Mars

Science: Ice cliffs spotted on Mars

Categories: Partner News

Matt Morgan interviewed by 9News Denver about debris flows

Colorado School of Mines - Thu, 01/11/2018 - 13:21

Matt Morgan, deputy director and senior research geologist of the Colorado Geological Survey, recently spoke to 9News meteorologist Cory Reppenhagen about the debris flows that can occur following wildfires. Mudslides claimed the lives of at least 17 people in Southern California earlier this week.

Categories: Partner News

Tom Williams featured in Denver Post article about personal robots

Colorado School of Mines - Thu, 01/11/2018 - 11:56

An article in The Denver Post about a Boulder-based startup's new personal robot featured an interview with Tom Williams, assistant professor of computer science at Colorado School of Mines. Williams' research focuses on artificial intelligence for human-robot interaction.

From the story:

But the day when a Rosie is in every home is not close, said Tom Williams, an assistant professor of computer science at the Colorado School of Mines.

“Part of the problem is that there has been a lot of progress in related areas that trick us into thinking that Rosie is right around the corner,” said Williams, who attended a Misty Robotics hackathon with students last month and felt Misty is a step forward. “Look at Siri, Alexa or Google Now. These have advanced natural language processing capabilities. Surely, we should be able to translate that into robots and achieve Rosie the robot. But the problem is that when you move into a robotics domain, you move in the physical world.”

Categories: Partner News

Glass shop featured by 9News Denver

Colorado School of Mines - Thu, 01/11/2018 - 11:46

The new glass shop in the Hill Hall Foundry was recently featured by 9News Denver. In the segment, gaffer Jake Ivy, a PhD student in materials science, demonstrates glassblowing techniques and talks about the art and science of making the beautiful glass objects.

Categories: Partner News

Mines partnership with QL+ featured by multiple news outlets

Colorado School of Mines - Wed, 01/10/2018 - 13:22

A partnership between Colorado School of Mines and nonprofit Quality of Life Plus that is bringing Mines students together with injured veterans to engineer custom solutions for specific mobility problems was recently featured by multiple news outlets, including The Denver Post, CBS4 Denver, FOX31 Denver and the Golden Transcript.

The Denver Post: Disabled Colorado veterans gain new mobility thanks to School of Mines technology

CBS4 Denver: Engineering Students Help Wounded Veterans Thanks To Partnership


FOX31 Denver: School of Mines engineers help improve lives of disabled veterans

Categories: Partner News

Honored Physicist Steven Chu Selected as AAAS President-Elect

HPC Wire - Tue, 01/09/2018 - 14:40

Jan. 9, 2018 — Nobel laureate and former Energy Secretary Steven Chu has been chosen as president-elect of the American Association for the Advancement of Science. Chu will start his three-year term as an officer and member of the Executive Committee of the AAAS Board of Directors at the 184th AAAS Annual Meeting in Austin, Texas, in February.

“As Secretary of Energy, I was reminded daily that science must continue to be elevated and integrated into our national life and throughout the world. The work of AAAS in connecting science with society, public policy, human rights, education, diplomacy and journalism – through its superb journals and programs – is essential,” said Chu in his candidacy statement.

“Never has there been a more important time than today for AAAS to communicate the advances in science, the methods we use to acquire this knowledge and the benefits of these discoveries to the public and our policymakers,” he said.

Chu cited his role in key reports by National Academies and the American Academy of Arts and Sciences on the competitiveness of the U.S. scientific enterprise and the state of fundamental research, studies that “sounded alarms that the health of science, science education and integration of science into public decision-making in the U.S. was in peril and heading in the wrong direction,” he said in his candidacy statement. “Concern among scientists and friends of science is even greater today and we in AAAS have our work cut out for us.”

AAAS must continue its efforts to communicate the benefits of scientific progress, Chu noted, saying the world’s largest general scientific organization must continue to ensure scientists and students have access to the free exchange of ideas and the ability to pursue discovery across national boundaries.

Chu currently serves as the William R. Kenan Jr. Professor of Physics and Professor of Molecular and Cellular Physiology at Stanford University. Prior to rejoining Stanford in 2013, Chu was secretary of energy during President Barack Obama’s first term, the first scientist to head the Department of Energy, the home of the nation’s 17 National Laboratories.

Prior to his appointment as energy secretary, Chu was director of the Lawrence Berkeley National Laboratory as well as a professor of physics and molecular and cell biology at University of California, Berkeley. He first joined Stanford University in 1987, where he was a professor of physics until 2004.

Between 1978 and 1987, Chu worked at Bell Labs, where he ultimately led its Quantum Electronics Research Department. At Bell Labs, Chu carried out research on laser cooling and atom trapping, work that would earn him – along with Claude Cohen-Tannoudji and William Daniel Phillips – the Nobel Prize for Physics in 1997. Their new methods for using laser light to “trap” and slow down atoms to study them in greater detail “contributed greatly to increasing our knowledge of the interplay between radiation and matter,” the Nobel Committee said in 1997.

Chu received bachelor’s degrees in mathematics and physics from the University of Rochester and a Ph.D. in physics from the University of California, Berkeley.

He was named an elected fellow of AAAS in 2000 and has been a member of AAAS since 1995. He served on the AAAS Committee on Nominations, which selects the annual slate of candidates for AAAS president-elect and Board of Directors elections, from 2009 to 2011.

The current AAAS president-elect, Margaret Hamburg, will begin her term as AAAS president at the close of the 2018 Annual Meeting. Hamburg is foreign secretary of the National Academy of Medicine. The current president, Susan Hockfield, will become chair of the AAAS Board of Directors. Hockfield is president emerita of the Massachusetts Institute of Technology.

Source: AAAS

The post Honored Physicist Steven Chu Selected as AAAS President-Elect appeared first on HPCwire.

Micron and Intel Announce End to NAND Memory Joint Development Program

HPC Wire - Tue, 01/09/2018 - 11:20

BOISE, Idaho, and SANTA CLARA, Calif., Jan. 8, 2018 – Micron and Intel today announced an update to their successful NAND memory joint development partnership that has helped the companies develop and deliver industry-leading NAND technologies to market.

The announcement involves the companies’ mutual agreement to work independently on future generations of 3D NAND. The companies have agreed to complete development of their third generation of 3D NAND technology, which will be delivered toward the end of this year, with deliveries extending into early 2019. Beyond that technology node, both companies will develop 3D NAND independently in order to better optimize the technology and products for their individual business needs.

Micron and Intel expect no change in the cadence of their respective 3D NAND technology development of future nodes. The two companies are currently ramping products based on their second-generation of 3D NAND (64 layer) technology.

Both companies will also continue to jointly develop and manufacture 3D XPoint at the Intel-Micron Flash Technologies (IMFT) joint venture fab in Lehi, Utah, which is now entirely focused on 3D XPoint memory production.

“Micron’s partnership with Intel has been a long-standing collaboration, and we look forward to continuing to work with Intel on other projects as we each forge our own paths in future NAND development,” said Scott DeBoer, executive vice president of Technology Development at Micron. “Our roadmap for 3D NAND technology development is strong, and we intend to bring highly competitive products to market based on our industry-leading 3D NAND technology.”

“Intel and Micron have had a long-term successful partnership that has benefited both companies, and we’ve reached a point in the NAND development partnership where it is the right time for the companies to pursue the markets we’re focused on,” said Rob Crooke, senior vice president and general manager of Non-Volatile Memory Solutions Group at Intel Corporation. “Our roadmap of 3D NAND and Optane technology provides our customers with powerful solutions for many of today’s computing and storage needs.”

Source: Intel

The post Micron and Intel Announce End to NAND Memory Joint Development Program appeared first on HPCwire.

Activist Investor Ratchets up Pressure on Mellanox to Boost Returns

HPC Wire - Tue, 01/09/2018 - 10:29

Activist investor Starboard Value has sent a letter to Mellanox CEO Eyal Waldman demanding dramatic operational changes to boost returns to shareholders. This is the latest missive in an ongoing struggle between Starboard and Mellanox that began back in November when Starboard raised its stake in the interconnect specialist to 10.7 percent. Starboard argues Mellanox is significantly undervalued and that its costs, notably R&D, are unreasonably high.

The letter, dated January 8 and under the signature of Peter Feld, is pointed as shown in this excerpt:

“As detailed in the accompanying slides, over the last twelve months Mellanox’s R&D expenditures as a percentage of revenue were 42%, compared to the peer median of 22%. On SG&A, Mellanox spent 24% of revenue versus the peer median of 17%. It is critical to appreciate that Mellanox is not just slightly worse than peers on these key metrics, it is completely out of line with the peer group.”

Mellanox issued 2018 guidance for “low-to-mid-teens” (percent) revenue growth. Starboard cites a ‘consensus’ estimate of $816.5 million in revenue for 2017 and $986.4 million (14.5 percent growth) for 2018. At 70.6 percent, Mellanox has one of the highest gross margins among comparable companies, and one of the lowest operating margins at 13.8 percent, according to Starboard.

“We believe there is a tremendous opportunity at Mellanox, but it will require substantial change, well beyond just the Company’s recently announced 2018 targets,” wrote Feld.

Link to Starboard letter: http://www.starboardvalue.com/wp-content/uploads/Starboard_Value_LP_Letter_to_MLNX_01.08.2018.pdf

The post Activist Investor Ratchets up Pressure on Mellanox to Boost Returns appeared first on HPCwire.

ACM Names New Director of Global Policy and Public Affairs

HPC Wire - Tue, 01/09/2018 - 10:24

NEW YORK, Jan. 9, 2018 — ACM, the Association for Computing Machinery, has named Adam Eisgrau as its new Director of Global Policy and Public Affairs, effective January 3, 2018. Eisgrau will coordinate and support ACM’s engagement with public technology policy issues involving information technology, globally and particularly in the US and Europe. ACM aims to educate and inform computing professionals, policymakers, and the public about information technology policy and its consequences, and to shape public technology policy through a deeper understanding of the information technology issues involved.

“ACM has long been committed to providing policy makers in the US and abroad with the most current, accurate, objective and non-partisan information about all things digital as they wrestle with issues that profoundly affect billions of people,” said ACM President Vicki L. Hanson. “We’re thrilled to add a communicator of Adam’s caliber to our team as the computing technologies pioneered, popularized and promulgated by ACM members become ever more integrated to the fabric of daily life.”

“Speaking tech to power clearly, apolitically and effectively has never been more important,” said Eisgrau. “The chance to do so for ACM in Washington, Brussels and beyond is a dream opportunity.”

A former communications attorney, Eisgrau began his policy career as Judiciary Committee Counsel to then-freshman US Senator Dianne Feinstein (D-CA). Since leaving Senator Feinstein’s office in 1995, he has represented both public- and private-sector interests in international forums and to Congress, federal agencies and the media on a host of technology-driven policy matters. These include: digital copyright, e-commerce competition, peer-to-peer software, cybersecurity, encryption, online financial services, warrantless surveillance and digital privacy.

Prior to joining ACM, Eisgrau directed the government relations office of the American Library Association. He is a graduate of Dartmouth College and Harvard Law School.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post ACM Names New Director of Global Policy and Public Affairs appeared first on HPCwire.

NOAA to Expand Compute Capacity by 50 Percent with Two New Dells

HPC Wire - Tue, 01/09/2018 - 10:10

January 9, 2018 — NOAA’s combined weather and climate supercomputing system will be among the 30 fastest in the world, with the ability to process 8 quadrillion calculations per second, when two Dell systems are added to the IBMs and Crays at data centers in Reston, Virginia, and Orlando, Florida, later this month.

“NOAA’s supercomputers play a vital role in monitoring numerous weather events from blizzards to hurricanes,” said Secretary of Commerce Wilbur Ross. “These latest updates will further enhance NOAA’s abilities to predict and warn American communities of destructive weather.”

This upgrade completes phase three of a multi-year effort to build more powerful supercomputers that make complex calculations faster to improve weather, water and climate forecast models. It adds 2.8 petaflops of speed at both data centers combined, increasing NOAA’s total operational computing speed to 8.4 petaflops — or 4.2 petaflops per site.

Sixty percent more storage

The upgrade also adds 60 percent more storage capacity, allowing NOAA to collect and process more weather, water and climate observations than ever before for use by all of its models.

“NOAA’s supercomputers ingest and analyze billions of data points taken from satellites, weather balloons, airplanes, buoys and ground observing stations around the world each day,” said retired Navy Rear Adm. Timothy Gallaudet, Ph.D., acting NOAA administrator. “Having more computing speed and capacity positions us to collect and process even more data from our newest satellites — GOES-East, NOAA-20 and GOES-S — to meet the growing information and decision-support needs of our emergency management partners, the weather industry and the public.”

With this upgrade, U.S. weather supercomputing paves the way for NOAA’s National Weather Service to implement the next-generation Global Forecast System, known as the “American Model,” next year. Already one of the leading global weather prediction models, the GFS produces hourly forecast output, updated every six hours. The new GFS will have significant upgrades in 2019, including increased resolution to allow NOAA to run the model at 9 kilometers and 128 levels out to 16 days, compared to the current run of 13 kilometers and 64 levels out to 10 days. The revamped GFS will run in research mode on the new supercomputers during this year’s hurricane season.

“As we look toward launching the next generation GFS in 2019, we’re taking a ‘community modeling approach’ and working with the best and brightest model developers in this country and abroad to ensure the new U.S. model is the most accurate and reliable in the world,” said National Weather Service Director Louis W. Uccellini, Ph.D.

Supporting a Weather-Ready Nation

The upgrade announced today – part of the agency’s commitment to support the Weather-Ready Nation initiative – will lead to more innovation, efficiency and accuracy across the entire weather enterprise. It opens the door for the National Weather Service to advance its seamless suite of weather, water and climate models over the next few years, allowing for more precise forecasts of extreme events a week in advance and beyond.

Improved hurricane forecasts and expanded flood information will enhance the agency’s ability to deliver critical support services to local communities. In addition, the new supercomputers will allow NOAA’s atmosphere and ocean models to run as one system, helping forecasters more readily identify interactions between the two and reducing the number of operational models. They will also allow for development of a new seasonal forecast system to replace the Climate Forecast System in 2022, paving the way for improved seasonal forecasts as part of the Weather Research and Forecasting Innovation Act.

The added computing power will support upgrades to the National Blend of Models, which is being developed to provide a common starting point for all local forecasts; allow for more sophisticated ensemble forecasting, which is a method of improving the accuracy of forecasts by averaging results of various models; and provide quicker turnaround for atmosphere and ocean simulations, leading to earlier predictions of severe weather.
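
To make the ensemble idea described above concrete, here is a minimal, hypothetical sketch (not NOAA's forecast system or code) of averaging the output of several model runs into one combined forecast; the member values are invented for illustration.

```python
# Hypothetical illustration of ensemble averaging: several model runs forecast
# tomorrow's temperature in degrees Celsius, and the ensemble mean is the
# combined forecast. The values are invented, not NOAA data.
member_forecasts = [28.1, 27.4, 29.0, 28.6, 27.9]  # e.g. perturbed model runs
ensemble_mean = sum(member_forecasts) / len(member_forecasts)
print(f"ensemble mean forecast: {ensemble_mean:.1f} C")  # prints 28.2 C
```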

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.

Source: NOAA

The post NOAA to Expand Compute Capacity by 50 Percent with Two New Dells appeared first on HPCwire.

Momentum Builds for US Exascale

HPC Wire - Tue, 01/09/2018 - 10:08

2018 looks to be a great year for the U.S. exascale program. The last several months of 2017 revealed a number of important developments that help put the U.S. quest for exascale on a solid foundation. In my last article, I provided a description of the elements of the High Performance Computing (HPC) ecosystem and its importance for advancing and sustaining this strategically important technology. It is good to report that the U.S. exascale program seems to be hitting the full range of ecosystem elements.

As a reminder, the National Strategic Computing Initiative (NSCI) assigned the U.S. Department of Energy (DOE) Office of Science (SC) and the National Nuclear Security Administration (NNSA) to execute a joint program to deliver capable exascale computing that emphasizes sustained performance on relevant applications and analytic computing to support their missions. The overall DOE program is known as the Exascale Computing Initiative (ECI) and is funded by the SC Advanced Scientific Computing Research (ASCR) program and the NNSA Advanced Simulation and Computing (ASC) program. Elements of the ECI include the procurement of exascale class systems and the facility investments in site preparations and non-recurring engineering. Also, ECI includes the Exascale Computing Project (ECP) that will conduct the Research and Development (R&D) in the areas of middleware (software stack), applications, and hardware to ensure that exascale systems will be productively usable to address Office of Science and NNSA missions.

In the area of hardware – the last part of 2017 revealed a number of important developments. First and most visible is the initial installation of the SC Summit system at Oak Ridge National Laboratory (ORNL) and the NNSA Sierra system at Lawrence Livermore National Laboratory (LLNL). Both systems are being built by IBM using Power9 processors with Nvidia GPU co-processors. The machines will have two Power9 CPUs per system board and will use a Mellanox InfiniBand interconnection network.

Beyond that, the architecture of each machine is slightly different. The ORNL Summit machine will use six Nvidia Volta GPUs per two Power9 CPUs on a system board and will use NVLink to connect to 512 GB of memory. The Summit machine will use a combination of air and water cooling. The LLNL Sierra machine will use four Nvidia Voltas and 256 GB of memory connected with the two Power9 CPUs per board. The Sierra machine will use only air cooling. As was reported by HPCwire in November 2017, the peak performance of the Summit machine will be about 200 petaflops and the Sierra machine is expected to be about 125 petaflops.

Installation of both the Summit and Sierra systems is currently underway with about 279 racks (without system boards) and the interconnection network already installed at each lab. Now that IBM has formally released the Power9 processors, the racks will soon start being populated with the boards that contain the CPUs, GPUs and memory. Once that is completed, the labs will start their acceptance testing, which is expected to be finished later in 2018.

Another important piece of news about the DOE exascale program is the clarification of the status of the Argonne National Laboratory (ANL) Aurora machine. This system was part of the collaborative CORAL procurement that also selected the Sierra and Summit machines. The Aurora system is being manufactured by Intel with Cray Inc. acting as the system integrator. The machine was originally scheduled to be an approximately 180 peak petaflops system using third-generation Xeon Phi processors, code-named Knights Hill. However, during SC17, we learned that Intel is removing the Knights Hill chip from its roadmap. This explains why, during the September ASCR Advisory Committee (ASCAC) meeting, Barb Helland, the Associate Director of the ASCR office, announced that the Aurora system would be delayed to 2021 and upgraded to 1,000 petaflops (aka 1 exaflops).

The full details of the revised Aurora system are still under wraps. We have learned that it is going to use “novel” processor technologies, but exactly what that means is unclear. The ASCR program subjected the new Aurora design to an independent outside review. It found, “The hardware choices/design within the node is extremely well thought through. Early projections suggest that the system will support a broad workload.” The review committee even suggested that, “The system as presented is exciting with many novel technology choices that can change the way computing is done.” The Aurora system is in the process of being “re-baselined” by the DOE. Hopefully, once that is complete, we will get a better understanding of the meaning of “novel” technologies. If things go as expected, the changes to Aurora will allow the U.S. to achieve exascale by 2021.

An important, but sometimes overlooked, aspect of the U.S. exascale program is the number of computing systems that are being procured, tested and optimized by the ASCR and ASC programs as part of the buildup to exascale. Beyond Summit and Sierra, other “pre-exascale” systems include the 8.6 petaflops Mira computer at ANL and the 14 petaflops Cori system at Lawrence Berkeley National Lab (LBNL). The NNSA also has the 14.1 petaflops Trinity system at Los Alamos National Lab (LANL). Up to 20 percent of these precursor machines will serve as testbeds to enable computing science R&D needed to ensure that the U.S. exascale systems will be able to productively address important national security and discovery science objectives.

The last, but certainly not least, bit of hardware news is that the ASCR and ASC programs are expected to start their next computer system procurement processes in early 2018. During her presentation to the U.S. Consortium for the Advancement of Supercomputing (USCAS), Barb Helland told the group that she expects that the Request for Proposals (RFP) will soon be released for the follow-ons to the Summit and Sierra systems. These systems, to be delivered in the 2021-2023 timeframe, are expected to provide performance in excess of one exaflops. The procurement process to be used will be similar to the CORAL procurement and will be a collaboration between the DOE-SC ASCR and NNSA ASC programs. The ORNL exascale system will be called Frontier and the LLNL system will be known as El Capitan.

2017 also saw significant developments for the people element of the U.S. HPC ecosystem. As was previously reported, at last September’s ASCAC meeting, Paul Messina announced that he would be stepping down as the ECP Director on October 1st. Doug Kothe, who was previously the applications development lead, was announced as the new ECP Director. Upon taking the director job, Kothe, with his deputy, Stephen Lee of LANL, instituted a process to review the organization and management of the ECP. At the December ASCAC conference call, Kothe reported that the review had been completed and resulted in a number of changes. This included paring down ECP from five to four components (applications development, software technology, hardware and integration, and project management). He also reported that ECP has implemented a more structured management approach that includes a revised work breakdown structure (WBS) and additional milestones, new key performance parameters and risk management approaches. Finally, the new ECP Director reported that they had established an Extended Leadership Team with a number of new faces.

Another important element of the HPC ecosystem is the people doing the R&D and other work needed to keep the ecosystem going. The DOE ECI involves a huge number of people. Last year, about 500 researchers attended the ECP Principal Investigator meeting, and there are many more involved in other DOE/NNSA programs and from industry. The ASCR and ASC programs are involved with a number of programs to educate and train future members of the HPC ecosystem. Such programs include the ASCR and ASC co-funded Computational Science Graduate Fellowship (CSGF) and the Early Career Research Program. The NNSA offers similar opportunities. Both the ASCR and ASC programs continue to coordinate with National Science Foundation educational programs to ensure that America’s top computational science talent continues to flow into the ecosystem.

Finally, in addition to people and hardware, the U.S. program continues to develop the software stack (aka middleware) and end users’ applications to ensure that exascale systems will be used productively. Doug Kothe reported that ECP has adopted standard Software Development Kits. These SDKs are designed to support the goal of building a comprehensive, coherent software stack that enables application developers to productively write highly parallel applications that effectively target diverse exascale architectures. Kothe also reported that ECP is making good progress in developing applications software. This includes innovative approaches, such as machine learning, to utilize the GPUs that are part of the future exascale computers.

All in all – the last several months of 2017 have set the stage for a very exciting 2018 for the U.S. exascale program. It has been about five years since the ORNL Titan supercomputer came onto the stage at #1 on the TOP500 list. Over that time, other more powerful DOE computers have come online (Trinity, Cori, etc.), but they were overshadowed by Chinese and European systems. It remains unclear whether or not the upcoming exascale systems will put the U.S. back on top of the supercomputing world. However, the recent developments help to reassure that the country is not going to give up its computing leadership position without a fight. That is great news because for more than 60 years, the U.S. has sought leadership in high performance computing for the strategic value it provides in the areas of national security, discovery science, energy security, and economic competitiveness.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

The post Momentum Builds for US Exascale appeared first on HPCwire.

Tilton wins NSF CAREER award to model, improve feed spacers

Colorado School of Mines - Tue, 01/09/2018 - 09:03

Nils Tilton, assistant professor of mechanical engineering at Colorado School of Mines, has received a National Science Foundation CAREER Award to develop a computational fluid dynamics model to improve efficient, low-energy options for wastewater treatment and desalination.

Tilton’s project, “Robust Numerical Modeling for Rational Design of Membrane Filtration Processes,” will receive $547,364 over five years beginning in August. 

“Shortages in potable water are creating a large demand for water treatment and desalination. California’s recent drought, for example, is motivating municipalities to invest in seawater desalination plants. Desalination technology is also now used to recycle municipal and industrial wastewater. The problem is that desalination requires a lot of energy, and the generation of that energy by power plants requires a lot of water. In the process, you also make more pollution, which exacerbates climate change and drought,” Tilton said. “Finding new, more energy-efficient ways of producing potable water is key to securing long-term water and energy security.”

Tilton’s work will focus on membrane separation processes, such as reverse osmosis and nanofiltration. Both offer promising low-energy solutions for desalination and wastewater treatment – that is, until the membranes get bogged down.

“You're basically filtering water by forcing it through a membrane that acts like a sieve – water goes through the membrane while salts and other contaminants are blocked,” Tilton said. “The problem is, all that stuff builds up on the membrane and increases the pressures needed to force the water through. With time, the salts also form a hard mineral scale, like the calcium deposits you get on shower walls, that impedes filtration, damages the membrane and increases maintenance costs.”

That retention of solutes, known as concentration polarization, can be tackled by patterning a mesh-like net of physical spacers on the membrane to alter the fluid flow at the surface. The impact of those feed spacers, however, is not well understood.

Tilton and his team will develop a new method for simulating the interactions between polarization, scaling and mixing due to feed spacers. Using the information they gather, they will then design better patterns for the meshes to minimize polarization.

Researchers will also collaborate with a 3-D printing company to look into the possibility of using 3-D printing to produce the meshes, as well as Newmont Mining, which will provide sample mine wastewater for testing. 

Applications of membrane separation processes include the desalination of seawater and the treatment of municipal wastewater for potable reuse, as well as the recycling of the wastewater generated during hydraulic fracturing. 

As part of the project, Tilton is also partnering with the Asian Pacific Development Center in Aurora to develop a new summer youth workshop. Aurora high school students would come to Mines for the workshop, learning computer programming using affordable Raspberry Pi computers. Mines students in the Multicultural Engineering Program would lead the workshop.

Tilton joined Mines in 2014 after serving as a postdoctoral research fellow at University of Maryland, College Park, and University of Aix-Marseille. He holds a PhD, master’s degree and bachelor’s degree in mechanical engineering from McGill University in Montreal.

Categories: Partner News

Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative

HPC Wire - Tue, 01/09/2018 - 07:48

Jan. 9, 2018 — From cars and bicycles to airplanes and space shuttles, manufacturers around the world are trying to make these vehicles lighter, which helps lower fuel use and lessen the environmental footprint.

One way that cars, bicycles, airplanes and other modes of transportation have become lighter over the last several decades is by using carbon fiber composites. Carbon fiber is five times stronger than steel, twice as stiff, and substantially lighter, making it the ideal manufacturing material for many parts. But with the industry relying on petroleum products to make carbon fiber today, could we instead use renewable sources?

In the December 2017 issue of Science, Gregg Beckham, a group leader at the National Renewable Energy Laboratory (NREL), and an interdisciplinary team reported the results of experimental and computational investigations on the conversion of lignocellulosic biomass into a bio-based chemical called acrylonitrile, the key precursor to manufacturing carbon fiber.

The catalytic reactor shown here is for converting chemical intermediates into acrylonitrile. The work is part of the Renewable Carbon fiber Consortium. Photo by Dennis Schroeder/NREL

Acrylonitrile is a large commodity chemical, and it’s made today through a complex petroleum-based process at the industrial scale. Propylene, which is derived from oil or natural gas, is mixed with ammonia, oxygen, and a complex catalyst. The reaction generates high amounts of heat and hydrogen cyanide, a toxic by-product. The catalyst used to make acrylonitrile today is also quite complex and expensive, and researchers still do not fully understand its mechanism.

“That’s where our study comes in,” Beckham said. “Acrylonitrile prices have witnessed large fluctuations in the past, which has in turn led to lower adoption rates for carbon fibers for making cars and planes lighter weight. If you can stabilize the acrylonitrile price by providing a new feedstock from which to make acrylonitrile, in this case renewably-sourced sugars from lignocellulosic biomass, we might be able to make carbon fiber cheaper and more widely adopted for everyday transportation applications.”

To develop new ideas to make acrylonitrile manufacturing from renewable feedstocks, the Department of Energy (DOE) solicited a proposal several years ago that asked: Is it possible to make acrylonitrile from plant waste material? These materials include corn stover, wheat straw, rice straw, wood chips, etc. They’re basically the inedible part of the plant that can be broken down into sugars, which can then be converted to a large array of bio-based products for everyday use, such as fuels like ethanol or other chemicals.

“If we could do this in an economically viable way, it could potentially decouple the acrylonitrile price from petroleum and offer a green carbon fiber alternative to using fossil fuels,” Beckham said.

Beckham and the team moved forward to develop a different process. The NREL process takes sugars derived from waste plant materials and converts those to an intermediate called 3-hydroxypropionic acid (3-HP). The team then used a simple catalyst and new chemistry, dubbed nitrilation, to convert 3-HP to acrylonitrile at high yields. The catalyst used for the nitrilation chemistry is about three times less expensive than the catalyst used in the petroleum-based process and it’s a simpler process. The chemistry is endothermic so it doesn’t produce excess heat, and unlike the petroleum-based process, it doesn’t produce the toxic byproduct hydrogen cyanide. Rather, the bio-based process only produces water and alcohol as its byproducts.

From a green chemistry perspective, the bio-based acrylonitrile production process has multiple advantages over the petroleum-based process that is being used today. “That’s the crux of the study,” Beckham said.

XSEDE’s Role in the Chemistry

Beckham is no stranger to XSEDE, the eXtreme Science and Engineering Discovery Environment that’s funded by the National Science Foundation. He’s been using XSEDE resources, including Stampede1, Bridges, Comet and now Stampede2, for about nine years as a principal investigator. Stampede1 and Stampede2 (currently #12 on the Top500 list) are deployed and maintained by the Texas Advanced Computing Center.

Most of the biological and chemistry research conducted for this project was experimental, but the mechanism of the nitrilation chemistry was only at first hypothesized by the team. A postdoctoral researcher in the team, Vassili Vorotnikov of NREL, was recruited to run periodic density functional theory calculations on Stampede1 as well as the machines at NREL to elucidate the mechanism of this new chemistry.

Over about two months and several millions of CPU-hours used on Stampede1, the researchers were able to shed light on the chemistry of this new catalytic process. “The experiments and computations lined up nicely,” Vorotnikov said.

Because they had an allocation on Stampede1, they were able to rapidly turn around a complete mechanistic picture of how this chemistry works. “This will help us and other Top500 institutions to develop this chemistry further and design catalysts and processes more rationally,” Vorotnikov said. “XSEDE and the predictions of Stampede1 are pointing the way forward on how to improve nitrilation chemistry, how we can apply it to other molecules, and how we can make other renewable products for industry.”

“After the initial experimental discovery, we wanted to get this work out quickly,” Beckham continued. “Stampede1 afforded a great deal of bandwidth for doing these expensive, computationally intensive density functional theory calculations. It was fast and readily available and just a great machine to do these kind of calculations on, allowing us to turn around the mechanistic work in only a matter of months.”

Next Steps

There’s a large community of chemists, biologists and chemical engineers who are developing ways to make everyday chemicals and materials from plant waste materials instead of petroleum. Researchers have tried to do this before with acrylonitrile. But no one has been as successful in the context of developing high yielding processes with possible commercial potential for this particular product. With their new discovery, the team hopes this work makes the transition into industry sooner rather than later.

The immediate next step is scaling the process up to produce 50 kilograms of acrylonitrile. The researchers are working with several companies including a catalyst company to produce the necessary catalyst for pilot-scale operation; an agriculture company to help scale up the biology to produce 3-HP from sugars; a research institute to scale the separations and catalytic process; a carbon fiber company to produce carbon fibers from the bio-based acrylonitrile; and a car manufacturer to test the mechanical properties of the resulting composites.

“We’ll be doing more fundamental research as well,” Beckham said. “Beyond scaling acrylonitrile production, we are also excited about using this powerful, robust chemistry to make other everyday materials that people can use from bio-based resources. There are lots of applications for nitriles out there — applications we’ve not yet discovered.”

Source: Faith Singer-Villalobos, TACC

The post Stampede1 Helps Researchers Examine a Greener Carbon Fiber Alternative appeared first on HPCwire.

Mixed-Signal Neural Net Leverages Memristive Technology

HPC Wire - Mon, 01/08/2018 - 13:11

Memristive technology has long been attractive for potential use in neuromorphic computing. Among other things it would permit building artificial neural network (ANN) circuits that are processed in parallel and more directly emulate how neuronal circuits in the brain work. Recent work led by researchers at Oak Ridge National Laboratory and the University of Tennessee proposes a mixed signal approach that leverages memristive technology to build better ANNs.

“[Our] mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system… The proposed [system] includes synchronous digital long term plasticity (DLTP), an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead,” writes Catherine Schuman, a Liane Russell Early Career Fellow in Computational Data Analytics at Oak Ridge National Laboratory, and colleagues[i].

Their paper, Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level, was published in the IEEE Journal on Emerging and Selected Topics in Circuits and Systems.

The researchers point out that digital and analog approaches to building ANNs each have drawbacks. While digital implementations have precision, robustness, noise resilience and scalability, they are area intensive. Conversely, analog counterparts are efficient in terms of silicon area and processing speed, but “rely on representing synaptic weights as volatile voltages on capacitors or in resistors, which do not lend themselves to energy and area efficient learning.”

Instead, they propose a mixed-signal system where communication and control is digital while the core multiply-and-accumulate functionality is analog. Researchers used a hafnium-oxide memristor design based on earlier work (“A practical hafnium-oxide memristor model suitable for circuit design and simulation,” in Proceedings of IEEE International Symposium on Circuits and Systems).

Their design (figure two, shown below) consists of m x n memristive neuromorphic cores. “Each core has several memristive synapses and one mixed-signal neuron (analog in, digital out) to implement a spiking neural network. This arrangement helps maintain similar capacitance at the synaptic outputs and corresponding neurons. The similar distance between synapse and inputs also results in negligible difference in charge accumulation,” write the authors.

Also exciting is the researchers’ approach to implementing learning. Most ANNs require offline learning. For a network to learn online, Long Term Plasticity plays an important role in training the circuit with continuous updates of synaptic weights based on the timing of pre- and post-neuron fires.

“Instead of carefully crafting analog tails to provide variation in the voltage across the synapses, we utilize digital pre- and post-neuron firing signals and apply pulse modulation to implement a digital LTP (DLTP) technique…Basically the online learning process implemented here is one clock cycle tracking version of Spike time Dependent Plasticity… A more thorough STDP learning implementation would need to track several clock cycles before and after the post-neuron fire leading to more circuitry and hence increased power and area. Our DLTP approach acts similarly but ensures lower area and power,” write the authors.
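
As a rough illustration of the idea (a sketch under stated assumptions, not the authors' circuit or code), a one-clock-cycle digital plasticity rule can be summarized as: potentiate a synapse whose pre-neuron fired in the cycle just before the post-neuron fires, and depress one whose pre-neuron fires in the cycle just after. The function name, step size and weight bounds below are hypothetical.

```python
# Minimal sketch of a one-cycle digital plasticity (DLTP-like) update.
# All names, step sizes and weight bounds are hypothetical illustrations only.

def dltp_update(weights, pre_fired_before, pre_fired_after, post_fired,
                step=1, w_min=0, w_max=15):
    """Update integer synaptic weights feeding one post-neuron.

    weights          : list of integer synaptic weights
    pre_fired_before : per-synapse flags, pre-neuron fired in the previous cycle
    pre_fired_after  : per-synapse flags, pre-neuron fired in the following cycle
    post_fired       : whether the post-neuron fired in the current cycle
    """
    if not post_fired:
        return weights
    updated = []
    for w, before, after in zip(weights, pre_fired_before, pre_fired_after):
        if before:          # pre fired just before post: potentiate
            w = min(w + step, w_max)
        elif after:         # pre fired just after post: depress
            w = max(w - step, w_min)
        updated.append(w)
    return updated

# Example: three synapses; the first fired before the post-neuron, the third after.
print(dltp_update([7, 7, 7], [True, False, False], [False, False, True], True))
# -> [8, 7, 6]
```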

Link to paper: http://ieeexplore.ieee.org/document/8119503/

Feature image source: ORNL

[i] Gangotree Chakma, Student Member, IEEE, Md Musabbir Adnan, Student Member, IEEE, Austin R. Wyer, Student Member, IEEE, Ryan Weiss, Student Member, IEEE, Catherine D. Schuman, Member, IEEE, and Garrett S. Rose, Member, IEEE

The post Mixed-Signal Neural Net Leverages Memristive Technology appeared first on HPCwire.

Curie Supercomputer Uses HPC to Help Improve Agricultural Production

HPC Wire - Mon, 01/08/2018 - 11:47

Jan. 8, 2018 — Agriculture is the principal means of livelihood in many regions of the developing world, and the future of our world depends on a sustainable agriculture at planetary level. High Performance Computing is becoming critical in agricultural activity, plague control, pesticides design and pesticides effects. Climate data are used to understand the impacts on water and agriculture in many regions of the world, help local authorities in the management of water and agricultural resources, and assist vulnerable communities in the region through improved drought management and response.


The demand for agricultural products has increased globally, and meeting this growing demand would have a negative effect on the environment. Increased agricultural production requires the use of 70% of the world’s water resources and causes a rise in greenhouse gas emissions.

To be able to reduce the negative impact on the ecosystem, seed companies are on the lookout for new plant varieties that yield more produce. Companies normally find such new varieties through field trials. These field trials are a simple observational method, but they cost a lot of money and are time-consuming, taking years to identify the best varieties.

Using High Performance Computing (HPC), the Curie supercomputer is able to provide the most efficient solution to this problem. HPC enables numerical simulations of plant growth that help seed companies achieve superior varieties instead of doing field trials, which are more expensive and more harmful to the environment.

For example, if a farmer wants to know the conditions in which a plant grows best (its genetic parameters), they would have to test its growth rate under various conditions to select the best parameters corresponding to the specific environment of the region. With the help of HPC, the estimation of these parameters is made more accurate and simpler by simulating plant growth. The simulation models take into account the plant’s interaction with the environment. This reduces the number of field trials by a large percentage; for example, instead of 100 field trials, 10 would be enough to estimate the best genetic parameters.
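
A very small sketch of what simulation-based parameter estimation looks like in principle follows (hypothetical growth model, candidate values and measurements, not Cybele Tech's code): candidate genetic parameters are scored by how closely their simulated yields match a handful of real field trials, and the best-scoring candidate is kept.

```python
# Hypothetical sketch of simulation-based parameter estimation; the growth
# model, candidate values and field measurements are invented for illustration.

def simulate_yield(genetic_param, rainfall, temperature):
    # Toy stand-in for a plant-growth simulation driven by environment data.
    return genetic_param * rainfall * max(0.0, 1.0 - abs(temperature - 22.0) / 10.0)

def estimate_best_param(candidates, trials):
    """Return the candidate whose simulated yields best match the field trials.

    trials: list of (rainfall_mm, temperature_c, observed_yield) tuples.
    """
    def squared_error(p):
        return sum((simulate_yield(p, r, t) - y) ** 2 for r, t, y in trials)
    return min(candidates, key=squared_error)

# A few field trials stand in for the many that would otherwise be needed.
field_trials = [(500.0, 21.0, 48.0), (450.0, 24.0, 38.0), (600.0, 20.0, 55.0)]
print("best genetic parameter:", estimate_best_param([0.05, 0.08, 0.10, 0.12], field_trials))
```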

Cybele Tech, a French company, has used High Performance Computing to enable farmers to produce more with less and to know exactly what their plants need to get a better yield.

They have been awarded 4 million core hours on Curie, hosted by GENCI at CEA, France.

Source: European Commission

The post Curie Supercomputer Uses HPC to Help Improve Agricultural Production appeared first on HPCwire.

Tutoring program aims to grow future scientists, engineers

Colorado School of Mines - Mon, 01/08/2018 - 11:42

When Kate Smits was an engineering student at the U.S. Air Force Academy in Colorado Springs, she struggled to find a mentor.

She had great professors and a great experience overall. But she never found that mentor, that female engineer she could look to and say, “Hey, there’s someone who looks like me. I want to be like that.”

What she did find, though, is a passion for STEM education and breaking down the barriers to participation in a field where nearly half of all workers in 2015 were white males, according to the National Science Foundation.

“I started looking into that discrepancy of why people are or aren’t going into engineering,” said the assistant professor of civil and environmental engineering at Colorado School of Mines. “What the literature shows is there's a critical age – middle school – where students either get really excited about STEM or their interest completely drops off. You either grab them or you don’t.”

So when Smits received a National Science Foundation CAREER Award three years ago, it made perfect sense to dedicate the educational outreach portion of her grant to reaching those potential future scientists and engineers on their own turf – a middle-school classroom.

“I hypothesized that if we introduced STEM to these students in a sustained way at a critical time in their development, they would be more likely to go into it,” Smits said. “A lot of what we do, the decisions that we make, are based on what we’re exposed to.”

Twice a week, 12-15 Mines students visit College View Middle School, a public charter school in south Denver, for an hour of math and science tutoring and mentoring.

College View, which is part of the Denver School of Science and Technology network, is 95 percent minority and 93 percent of students come from low-income backgrounds. Students attend tutoring based on need, teacher requirement or interest. 

On a recent afternoon, tutors worked with the middle schoolers in small groups, going over the answers to their last test, on linear equations. 

In addition to helping with homework and science fair projects, the Mines students are also encouraged to talk to the younger students about careers, college and life. Some of the tutors are interested in pursuing teaching careers, while others just like working with young people or appreciate the break from their own school work.


“It’s a great way to give back to the STEM community. Education is so important so any opportunity I can help out is important,” said Madison Webster, a sophomore studying chemical engineering. “I’ve really enjoyed it.” 

Blue O’Brennan, a sophomore majoring in physics, once brought one of his own statics exams to tutoring, just to show the younger students how what he’s doing in college is like what they’re doing in middle school. 

Working as a tutor has also been good practice for O’Brennan, who hopes to become a high school physics teacher.

“I‘m hoping with my degree at Mines I’ll be able to show students where they can go with that information,” O’Brennan said. “It’s not just, ‘Here’s an inclined plane. Figure out how it works.’ It’s, ‘ Alright, here's what you’ll be doing in college, in research, in industry that involves this knowledge.’”

That connection to the bigger picture is so beneficial for younger students, said Alyse Nelsen, one of two College View teachers who partnered with Smits on the tutoring program.

“Sometimes in middle school it can feel really futile what they’re doing. But I always am overhearing the tutors saying things like, ‘Oh well I felt that way too when I was younger but this is how it applies to college’ or ‘This is what I’m interested in,’” Nelsen said. “I really can't overemphasize how important it is for these kids to see older people who they think are cool and look up to helping them with math.” 

Three years into a five-year grant, the tutoring program is already seeing results, Smits said. Students who have never tested at grade level on standardized math and science tests are passing for the first time and science fair participation at the school is up dramatically.

Smits also got approval to do a long-term study of the students, surveying them multiple times a year from sixth through 12th grade about their interest, participation and self-confidence in STEM. Early results are already showing increases in confidence and interest.

“The crazy thing is it doesn’t cost a lot of money. Here at Mines, you have a bunch of college students who want a part-time job and have huge hearts and are really motivated by service. There, you have a whole bunch of kids who really would love the time of an adult to sit down and work through some STEM issues. What a great combination,” Smits said. “This is a simple grassroots way of making a measurable impact on a community.”

That impact can’t come soon enough, either. STEM occupations in the U.S. are projected to grow by 8.9 percent between 2014 and 2024, compared to just 6.4 percent growth for non-STEM occupations, according to the U.S. Department of Commerce’s Economics and Statistics Administration.

“We talk a lot at Mines about preparing the pipeline. If we actually want to do that here in the state of Colorado, we need to start a whole lot sooner than the freshmen who walk through our door,” Smits said. “We need to start back when they’re a lot younger to be able to capture that incredible talent pool that's not currently being captured.”

“We’re missing out on a big percentage of talent by not incorporating underrepresented groups in engineering,” she said. “We’re not going to be able to solve the engineering problems of the future without incorporating everybody.” 

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Hike for Help service trip featured in The Himalayan Times

Colorado School of Mines - Mon, 01/08/2018 - 11:07

A group of Colorado School of Mines students traveled to Nepal over winter break to help build foot trails in the Mt. Everest region, and their service trip was featured in The Himalayan Times, an English-language newspaper in Nepal. The volunteers with Hike for Help Nepal also delivered about 400 pairs of shoes to low-income families in the area.

Categories: Partner News

Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre

HPC Wire - Mon, 01/08/2018 - 10:10

Jan 8 — Can you afford to lose a third of your compute real estate? If not, you need to pre-empt the impact of Meltdown and Spectre.

Meltdown and Spectre are quickly becoming household names, and not just in the HPC space. The severe design flaws in Intel microprocessors could allow sensitive data to be stolen, and the fixes are likely to be bad news for any I/O-intensive applications such as those often used in HPC.

Ellexus Ltd, the I/O profiling company, has released a white paper: How the Meltdown and Spectre bugs work and what you can do to prevent a performance plummet.

Why is the Meltdown fix worse for HPC applications?

The changes being imposed on the Linux kernel (the KAISER patch) to more securely separate user and kernel space add overhead to context switches. This has a measurable impact on the performance of shared file systems and I/O-intensive applications, and it is particularly noticeable in I/O-heavy workloads. The performance penalty could reach 10-30%.

Systems that were previously just about coping with I/O heavy workloads could now be in real trouble. It’s very easy for applications sharing datasets to overload the file system and prevent other applications from working, but bad I/O can also affect each program in isolation, even before the patches for the attacks make that worse.

Profile application I/O to rescue lost performance

You don’t have to put up with poor performance in order to improve security, however. The most obvious way to mitigate performance losses is to profile I/O and identify ways to optimise applications’ I/O performance.

By using the tool suites from Ellexus, Breeze and Mistral, to analyse workflows it is possible to identify changes that will help to eliminate bad I/O and regain the performance lost to these security patches.

Ellexus’ tools locate bottlenecks and applications with bad I/O on large distributed systems, cloud infrastructure and supercomputer clusters. Once applications with bad I/O patterns have been located, our tools will indicate the potential performance increases as well as pointers on how to achieve them. Often the optimisation is as simple as changing an environment variable, changing a single line in a script or changing a simple I/O call to read more than one byte at a time.
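
To illustrate that last point with a generic example (not Ellexus code), reading a file one byte at a time issues one system call per byte, and every system call now pays the extra user/kernel transition cost introduced by the KAISER/KPTI patches; reading in larger blocks cuts the syscall count by orders of magnitude. The helper below is a hypothetical sketch.

```python
# Hypothetical sketch: each os.read() is a system call, and every system call
# now carries the extra overhead of the Meltdown (KAISER/KPTI) kernel patches.
import os

def count_read_calls(path, block_size):
    """Read a file in block_size chunks and return the number of read() calls made."""
    calls = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while os.read(fd, block_size):
            calls += 1
    finally:
        os.close(fd)
    return calls

# For a 1 MiB file, block_size=1 issues roughly a million read() system calls,
# while a 128 KiB block size needs only 8, so the per-call patch overhead all
# but disappears.
```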

In some cases, the candidates for optimisation will be obvious – a workflow that clearly stresses the file system every time it is run, for example, or one that runs for significantly longer than a typical task.

In others it may be necessary to perform an initial high-level analysis of each job. Follow three steps to optimise application I/O and mitigate the impact of the KAISER patch:

1. Profile all your applications with Mistral to look for the worst I/O patterns

Mistral, our I/O profiling tool, is lightweight enough to run at scale. In this case Mistral would be set up to record relatively detailed information on the type of I/O that workflows are performing over time. It would look for factors such as how many metadata operations are being performed, the number of small I/O operations, and so on.

2. Deal with the worst applications, delving into detail with Breeze

Once the candidate workflows have been identified they can be analysed in detail with Breeze. As a first step, the Breeze trace can be run through our Healthcheck tool that identifies common issues such as an application that has a high ratio of file opens to writes or a badly configured $PATH causing the file system to be trawled every time a workflow uses “grep”.

3. Put in place longer-term I/O quality assurance

Implement the Ellexus tools across your systems to get the most from the compute and storage and to prevent problems reoccurring.

By following these simple steps and our best practices guidance it is easy to find and fix the biggest issues quickly, giving you more time to optimise for the best performance possible.

Source: Ellexus Ltd

The post Ellexus Publishes White Paper Advising HPCers on Meltdown, Spectre appeared first on HPCwire.

Maniloff interviewed by Los Angeles Times

Colorado School of Mines - Mon, 01/08/2018 - 08:58

A Los Angeles Times article about the Trump administration's plans to open coastal California waters to expanded drilling featured an interview with Peter Maniloff, assistant professor of economics and business at Colorado School of Mines.

From the story:

Oil is trading at about $60 a barrel — roughly the price that would make an offshore project profitable, said Peter Maniloff, an economist at Colorado School of Mines who studies the oil and gas industry.

But “you want to be confident that prices will remain that high before undertaking a very large investment to drill an offshore well,” Maniloff said. “And it’s hard to be confident of that because fracking has driven prices down.”

Categories: Partner News

