Feed aggregator

Cray Reports 2017 Full Year and Fourth Quarter Financial Results

HPC Wire - Thu, 02/15/2018 - 15:08

SEATTLE, Feb. 15, 2018 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced financial results for the year and fourth quarter ended December 31, 2017.

All figures in this release are based on U.S. GAAP unless otherwise noted.  A reconciliation of GAAP to non-GAAP measures is included in the financial tables in this press release.

For 2017, Cray reported total revenue of $392.5 million, which compares with $629.8 million in 2016. Net loss for 2017 was $133.8 million, or $3.33 per diluted share, compared to net income of $10.6 million, or $0.26 per diluted share in 2016.  Non-GAAP net loss, which adjusts for selected unusual and non-cash items, was $40.5 million, or $1.01 per diluted share for 2017, compared to non-GAAP net income of $19.9 million, or $0.49 per diluted share in 2016.

Revenue for the fourth quarter of 2017 was $166.6 million, compared to $346.6 million in the fourth quarter of 2016.  Net loss for the fourth quarter of 2017 was $97.5 million, or $2.42 per diluted share, compared to net income of $51.8 million, or $1.27 per diluted share in the fourth quarter of 2016.  Non-GAAP net income was $9.2 million, or $0.22 per diluted share for the fourth quarter of 2017, compared to non-GAAP net income of $56.3 million, or $1.38 per diluted share for the same period in 2016.

The Company’s GAAP Net Loss for the fourth quarter and year ended December 31, 2017 was significantly impacted by both the enactment of the Tax Cuts and Jobs Act of 2017 and by its decision to record a valuation allowance against all of its U.S. deferred tax assets.  The combined GAAP impact totaled $103 million.  These items have been excluded for non-GAAP purposes.

For 2017, overall gross profit margin on a GAAP and non-GAAP basis was 33% and 34%, respectively, compared to 35% on a GAAP and non-GAAP basis for 2016.

Operating expenses for 2017 were $196.4 million, compared to $211.1 million in 2016.  Non-GAAP operating expenses for 2017 were $176.5 million, compared to $199.7 million in 2016.  GAAP operating expenses in 2017 included $8.6 million in restructuring charges associated with our recent workforce reduction.

As of December 31, 2017, cash, investments and restricted cash totaled $147 million.  Working capital at the end of the fourth quarter was $354 million, compared to $373 million at December 31, 2016.

“Despite difficult conditions in our core market we finished 2017 strong, highlighted by several large acceptances at multiple sites around the world, including completing the installation of what is now the largest supercomputing complex in India at the Ministry of Earth Sciences,” said Peter Ungaro, president and CEO of Cray.  “As we shift to 2018, we’re seeing signs of a rebound at the high-end of supercomputing as well as considerable growth opportunities in the coming years.  Supercomputing continues to expand in importance to both government and commercial customers, driving growth and competitiveness across many different disciplines and industries.  As the leader at the high-end of the market, we’re poised to play a key role in this growth and I’m excited about where we’re headed.”

Outlook
For 2018, while a wide range of results remains possible, Cray continues to expect revenue to grow in the range of 10-15% over 2017.  Revenue is expected to be about $50 million for the first quarter of 2018.  For 2018, GAAP and non-GAAP gross margins are expected to be in the low- to mid-30% range.  Non-GAAP operating expenses for 2018 are expected to be in the range of $190 million.  For 2018, non-GAAP adjustments are expected to total about $14 million, driven primarily by share-based compensation.  For the year, GAAP operating expenses are anticipated to be about $12 million higher than non-GAAP operating expenses, and GAAP gross profit is expected to be about $2 million lower than non-GAAP gross profit.

Based on this outlook, Cray’s effective GAAP and non-GAAP tax rates for 2018 are both expected to be in the low-single digit range, on a percentage basis.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

  • In January, Cray announced it had deployed two Cray XC40 supercomputers and two Cray ClusterStor storage systems as part of a $67 million contract with the Ministry of Earth Sciences in India.  The combined systems are the largest supercomputing resource in India and were accepted in late 2017.
  • In December, Cray announced that it has joined the Big Data Center at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC). The collaboration is representative of Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of Artificial Intelligence (AI), deep learning, and data-intensive computing.
  • In November, Cray announced that Samsung Electronics Co. Ltd. has purchased a Cray CS-Storm accelerated cluster supercomputer. The Samsung Strategy & Innovation Center procured the system for use in its research into AI and deep learning workloads, including systems for connected cars and autonomous technologies.
  • In November, Cray announced new high performance computing storage solutions including Cray View for ClusterStor – providing customers with dramatically improved job productivity; Cray ClusterStor L300N – a flash-based acceleration solution; and Cray DataWarp for the Cray XC50 supercomputer – exponentially reducing data access time.
  • In November, Cray announced the Company is creating an Arm-based supercomputer with the addition of Cavium ThunderX2 processors to the Cray XC50 supercomputer. Cray customers will have a complete Arm-based supercomputer that features a full software environment, including the Cray Linux Environment, the Cray Programming Environment, and Arm-optimized compilers, libraries, and tools for running today’s supercomputing workloads.
  • In November, Cray announced a comprehensive set of AI products and programs that will empower customers to learn, start, and scale their deep learning initiatives.  These include the new Cray Accel AI lab, new Cray Accel AI offerings, a new Cray Urika-XC analytics software suite, and an AI collaboration agreement with Intel.
  • In December, Cray announced that Catriona Fallon was appointed to Cray’s board of directors.  Fallon is currently the Senior Vice President, Networks Segment at Itron Inc. and was Chief Financial Officer of Silver Spring Networks prior to its acquisition by Itron in January 2018.

Conference Call Information
Cray will host a conference call today, Thursday, February 15, 2018 at 1:30 p.m. PT (4:30 p.m. ET) to discuss its year and fourth quarter ended December 31, 2017 financial results.  To access the call, please dial into the conference at least 10 minutes prior to the beginning of the call at (855) 894-4205. International callers should dial (765) 889-6838 and use the conference ID #56308204.  To listen to the audio webcast, go to the Investors section of the Cray website at www.cray.com/company/investors.

If you are unable to attend the live conference call, an audio webcast replay will be available in the Investors section of the Cray website for 180 days.  A telephonic replay of the call will also be available by dialing (855) 859-2056 (international callers: (404) 537-3406) and entering the conference ID #56308204.  The conference call replay will be available for 72 hours, beginning at 4:45 p.m. PT on Thursday, February 15, 2018.

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges.  Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability.  Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.


Dugan interviewed by The Verge about offshore freshwater aquifers, Cape Town water crisis

Colorado School of Mines - Thu, 02/15/2018 - 13:23

Brandon Dugan, associate professor and Baker Hughes Chair in Petrophysics and Borehole Geophysics at Colorado School of Mines, was recently interviewed by The Verge about the possibility of tapping into offshore freshwater aquifers to address the Cape Town water crisis. 

From the article:

According to [a 2013 Nature] study, there’s an estimated 120,000 cubic miles of subsea fresh water globally — roughly 1,000 to 1,200 times the amount of water used in the US annually.

That would be more than enough to provide backup water supplies to other cities facing water shortages beyond Cape Town, like São Paulo, Brazil and Mexico City. To date, however, none of it has been pumped up for public use.

But why?

“It’s complicated,” says Brandon Dugan, a geophysicist and associate professor with the Colorado School of Mines, who has been studying offshore freshwater aquifers since 2002. “We don’t exactly understand the plumbing of the system or the precise volume of fresh water that’s down there. So that makes it difficult to devise a pumping strategy to maximize use of the resource.”

Categories: Partner News

Mines, NREL researchers improve perovskite solar cells

Colorado School of Mines - Thu, 02/15/2018 - 12:57

Researchers from the Colorado School of Mines Chemistry Department and the National Renewable Energy Laboratory have developed a perovskite solar cell that retains its efficiency after 1,000 hours of continuous use, with their findings published in Nature Energy.

Associate Professor Alan Sellinger, graduate student Tracy Schloemer and former Mines postdoc Jonathan Tinkham are co-authors of the paper, titled “Tailored interfaces of unencapsulated perovskite solar cells for >1,000 hour operational stability.” The project was led by NREL’s Joseph Luther and Joseph Berry and also included Jeffrey Christians, Philip Schulz, Steven Harvey and Bertrand Tremolet de Villers.

Over the past decade, perovskites have rapidly evolved into a promising technology, now with the ability to convert about 23 percent of sunlight into electricity. But work is still needed to make the devices durable enough for long-term use.

According to the researchers, their new cell was able to generate power even after 1,000 straight hours of testing. While more testing is needed to prove the cells could survive for 20 years or more in the field—the typical lifetime of solar panels—the study represented an important benchmark for determining that perovskite solar cells are more stable than previously thought.

A new molecule developed by Sellinger, nicknamed EH44, was used to replace an organic molecule called spiro-OMeTAD that is typically used in perovskite solar cells. Solar cells that use spiro-OMeTAD experience an almost immediate 20 percent drop in efficiency, which then continues to decline steadily as the material becomes more unstable.

The researchers theorized that replacing the layer of spiro-OMeTAD could stop the initial drop in efficiency in the cell. The lithium ions within the spiro-OMeTAD film move uncontrollably throughout the device and absorb water. The free movement of the ions and the presence of water cause the cells to degrade. EH44 was incorporated as a replacement because it repels water and doesn’t contain lithium.

The use of EH44 as the top layer resolved the later, more gradual degradation but did not solve the initial fast decrease seen in the cell’s efficiency. The researchers tried another approach, this time swapping the cell’s bottom layer of titanium dioxide (TiO2) for one of tin oxide (SnO2). With both EH44 and SnO2 in place, as well as stable replacements for the perovskite material and metal electrodes, the solar cell’s efficiency remained steady. The experiment found that the new SnO2 layer resolved the chemical makeup issues that arose in the perovskite layer when it was deposited onto the original TiO2 film.

“This study reveals how to make the devices far more stable,” Luther said. “It shows us that each of the layers in the cell can play an important role in degradation, not just the active perovskite layer.”

Funding for the research came from the U.S. Department of Energy Solar Energy Technologies Office.

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Mines alum among first group of Knight-Hennessy scholars

Colorado School of Mines - Thu, 02/15/2018 - 11:53

A magna cum laude graduate of Colorado School of Mines is among the first cohort of the Knight-Hennessy Scholars program, which will fully fund her pursuit of a PhD in computational engineering at Stanford University.

Izzy Aguiar earned a bachelor’s degree in applied mathematics and statistics and a master’s degree in computational and applied mathematics from Mines in 2017. She is currently pursuing a second master’s degree in computer science from CU Boulder.

Aguiar is passionate about effective communication and increasing diversity and community in science, technology, engineering and mathematics. During her time at Mines, she co-founded the Teacher Education Alliance and served as vice president of the Society of Women in Mathematics. She received the Martin Luther King Jr. Recognition Award for her work with the campus club Equality Through Awareness, and the E-Days Engineering Award.

The Knight-Hennessy Scholars program selected 49 students for its inaugural group of scholars, who will pursue graduate degrees in 28 departments across all seven of Stanford’s schools.

In addition to supporting the full cost of attendance, the program will provide leadership training, mentorship and experiential learning. The program aims to prepare a new generation of leaders with the deep academic foundation and broad skill set needed to develop creative solutions for the world’s most complex challenges.

The program is named for John L. Hennessy, director of the program and president of Stanford from 2000 to 2016, and Nike co-founder Phil Knight, who earned an MBA from the university in 1962 and is contributing $400 million to back the program.

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Accelerate Your Research with a Turn-key HPC Solution for Bioinformatics Analysis

HPC Wire - Thu, 02/15/2018 - 11:00

Bioinformatics is an increasingly data-driven endeavor. HPC requirements for rapid analysis are already high and will continue to grow as more data is generated and as bioinformatics analysis expands from traditional areas such as new drug discovery into clinical applications, biomarker identification, and translational medicine.


Embrace AI, NVIDIA’s Ian Buck Tells US Congressional Committee

HPC Wire - Thu, 02/15/2018 - 09:22

Feb. 15, 2018 — Artificial intelligence represents the biggest technological and economic shift in our lifetime, NVIDIA’s Ian Buck told a U.S. Congressional committee Wednesday.

In testimony before a hearing of the House of Representatives Subcommittee on Information Technology, Buck, vice president and general manager of NVIDIA’s Tesla business, said the federal government should increase research funding and adopt AI to boost the nation’s economy and improve government services.

“While other governments are aggressively raising their research funding, U.S. government research has been relatively flat,” Buck said. “We should boost research funding through agencies like the NSF, NIH and DARPA. We also need faster supercomputers, which are essential for AI research.”

Lawmakers Interested In AI

The hearing is the latest example of how artificial intelligence capabilities once thought of as science fiction are being put to work to solve a growing number of real world problems. It’s also a sign that the conversation around artificial intelligence is maturing, with lawmakers looking for ways to support and accelerate AI in a socially responsible and constructive way.

In addition to William Hurd (R-TX), chairman of the Information Technology Subcommittee, other representatives who asked questions included Robin Kelly (D-IL), Stephen Lynch (D-MA), and Gerald Connolly (D-VA).

The committee members expressed concerns about potential bias in AI systems, diversity in the AI workforce, defending citizens and government servers from cyber attack, and understanding how AI systems arrive at conclusions. They also asked what data the government should make available to researchers, and how federal agencies could more quickly adopt AI.

Buck said that while the public perception of AI is that it’s “rocket science,” federal agencies can begin with focused pilot projects like the Air Force’s Project Maven.

“Using AI for aerial reconnaissance holds great promise to alleviate airmen from having to stare at screens for 8 hours a day looking for a problem,” Buck said. “Image recognition is a well established technique, and can be leveraged quickly.”

Broad Support for AI

Buck appeared in a panel focused on the current state of AI and barriers to government adoption. Other panel members included Dr. Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, Dr. Charles Isbell, executive associate dean and professor at the Georgia Institute of Technology, and Dr. Amir Khosrowshahi, vice president and CTO of Intel’s AI Group.

The panelists encouraged the government to invest more in AI research and promote science, technology, engineering and math (STEM) in middle and high schools.

Georgia Tech’s Isbell said the scarcity of computer science teachers was a limiting factor. “We need more teachers to develop the next-generation workforce,” he said.

The Allen Institute’s Etzioni said schools teach PowerPoint because teachers don’t know how to program. That is a missed opportunity, he said, because 8- to 10-year-olds are at the perfect age to learn computer programming.

Buck said every major federal agency — just like every major technology company — needs to invest in AI. He also recommended that the government open up more data to the research community.

Buck identified several major areas where AI could benefit the government, including cyber defense, healthcare and reducing waste and fraud.

When asked which government data should be made available first, Buck suggested healthcare data offered the greatest opportunity.

“The problem we face is too important and the opportunity is too great to not open access to healthcare data,” he said. “It has the greatest potential to actually save lives.”

Source: NVIDIA


Nominations Open for PRACE Ada Lovelace Award for HPC 2018

HPC Wire - Thu, 02/15/2018 - 08:59

Feb. 15, 2018 — PRACE initiated the “PRACE Ada Lovelace Award for HPC” at PRACEdays16. The first winner was Dr. Zoe Cournia, a computational chemist and Investigator – Assistant Professor at the Biomedical Research Foundation, Academy of Athens (BRFAA), Greece (http://www.prace-ri.eu/2016praceadalovelaceaward/). Dr. Frauke Gräter of the Heidelberg Institute for Theoretical Studies and the University of Heidelberg received the second annual PRACE Ada Lovelace Award for HPC at PRACEdays17 (http://www.prace-ri.eu/pd17-hpcwire-interview-prace-ada-lovelace-award-winner-dr-frauke-grater/).

PRACE is happy to receive your nomination for the Award. The nomination process is fully open via this Call for Nominations published on the PRACE website and via different social media channels. Nominations should be sent to submissions-pracedays@prace-ri.eu by Thursday 1 March 2018.

The winner of the Award will be invited to participate in the concluding Panel Session at PRACEdays18, and will receive a cash prize of € 1 000 as well as a certificate and an engraved crystal trophy.

Nomination

A nomination should include the following:

  • Name, address, phone number, and email address of nominator (person making the nomination). The nomination should be submitted by a recognised member of the HPC community.
  • Name, address, and email address of the nominee (person being nominated).
  • The nomination statement, addressing why the candidate should receive this award, should describe how the nominee meets the first two selection criteria in detail, using at most half a page (max 300 words) for each criterion.
  • A copy of the candidate’s CV should be provided, together with certificates, a list of publications (with an indication of the h-index), honours, etc.

Selection Criteria

  • Outstanding impact on HPC research, computational science or service provision at a global level.
  • Role model for women beginning careers in HPC.
  • The winner of this award must be a young female scientist (PhD +10 years max, excluding parental leave) who is currently working in Europe or has been working in Europe during the past three years.

Selection Committee

The Selection Committee is composed of:

  1. Claudia Filippi, Member of the PRACE Scientific Steering Committee (SSC), University of Twente, Faculty of Science and Technology, Netherlands
  2. Erik Lindahl, Chair of the PRACE Scientific Steering Committee (SSC)
  3. Lee Margetts, Chair of the PRACE Scientific Steering Committee (SSC)
  4. Laura Grigori, Member of the PRACE Scientific Steering Committee (SSC), INRIA/University Pierre and Marie Curie, France, member of the Selection Committee since 2017
  5. Suzanne Talon, Women in HPC member, CEO of Calcul Québec, Canada, member of the Selection Committee since 2017.

The committee will (partially) change every 2 years.

About PRACE

The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.

Source: PRACE


NCSA Announces Spring 2018 Call for Illinois Allocations on Blue Waters

HPC Wire - Thu, 02/15/2018 - 08:53

Feb. 15, 2018 — Blue Waters is one of the world’s most powerful computing systems, located at the University of Illinois at Urbana-Champaign and housed at the National Petascale Computing Facility. Each year that Blue Waters is in operation, about 6 to 8 million node-hours are available for projects originating from the University of Illinois at Urbana-Champaign via Illinois Allocations. As each node has many powerful cores, Illinois possesses significantly more computing power than most universities make available to their faculty and staff, and this provides University research faculty and staff with a unique opportunity to perform groundbreaking, computationally demanding research.

Proposal submission for the Spring 2018 round of allocations is due March 15, 2018.

Faculty or staff for whom Illinois is their home institution by primary appointment affiliation are eligible to submit an Illinois allocation proposal as Principal Investigator. This includes postdoctoral fellows or postdoctoral research associates. Registered graduate or undergraduate students are not eligible to apply as Principal Investigators due to administrative requirements regarding appointment status but are encouraged to apply if their faculty or staff advisor will agree to be Principal Investigator on the proposal.

Visiting faculty or external adjunct faculty for whom Illinois is not their primary home institution are eligible to apply as Principal Investigators if, for the period covered by the proposal request: i) Illinois will be their primary (majority) place of residence; and ii) they will hold appointments at Illinois during this period. All proposals can include co-PIs and collaborators from other institutions.

For questions about submitting a request contact the Blue Waters Project Office at help+bw@ncsa.illinois.edu or go to https://bluewaters.ncsa.illinois.edu/illinois-allocations.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About Blue Waters

Blue Waters is one of the most powerful supercomputers in the world. Located at the University of Illinois, it can complete more than 1 quadrillion calculations per second on a sustained basis and more than 13 times that at peak speed. The peak speed is almost 3 million times faster than the average laptop. Blue Waters is supported by the National Science Foundation and the University of Illinois; the National Center for Supercomputing Applications (NCSA) manages the Blue Waters project and provides expertise to help scientists and engineers take full advantage of the system for their research.

The Blue Waters sustained-petascale computing project is supported by the National Science Foundation (awards OCI-0725070, ACI-0725070 and ACI-1238993) and the state of Illinois.

Source: NCSA


Technical Program Chair David Keyes Announces Changes for SC18

HPC Wire - Thu, 02/15/2018 - 08:44

Feb. 15, 2018 — SC18 Technical Program Chair David Keyes today announced major changes to the program and planning for SC18. Keyes outlined the changes in an article, which is included in full below.

Important News from SC18 Technical Program Chair David Keyes

David Keyes

How do we make the best even better? It’s an important question to ask as a team of more than 400 volunteers undertakes to create a world-class SC18 technical program. It is a daunting task to live up to the 30-year history of distinction SC has carved for itself.

No other HPC conference delivers such a broad diversity of topics and depth of insight, and it’s a thrill to be helming such an international effort.

As we seek to achieve even more with our technical program, you’ll see some exciting changes built into the planning for SC18 in Dallas this November.

With the help and support of our accomplished team of diverse and talented chairs who hail from industry, nonprofit organizations, laboratories and academia, we have determined to:

–Build on SC’s program quality reputation by strengthening the already rigorous double-blind review process for technical paper submissions, resulting in a new two-stage submission process and some revised submission dates worth noting;

–Add the option of reproducibility supplements to Workshops and Posters;

–Include new proceedings venues for Workshops and the SciViz Showcase;

–Call on technical submissions to emphasize the theme of “convergence” among our four charter areas of exploration—High Performance Computing, Networking, Storage and Analysis;

–Consolidate all five poster categories into a single exhibit space;

–Offer career-themed Fireside Chats;

–Move to a code-enabled online content distribution onsite for fully up-to-date tutorial materials and away from the USB stick distribution method;

–Adapt physical meeting spaces to better accommodate registrants for technical programs.

We welcome the opportunity to hear your comments and questions. If you have insights to share, please send them to techprogram@info.supercomputing.org.

Source: David Keyes, SC18


Fluid HPC: How Extreme-Scale Computing Should Respond to Meltdown and Spectre

HPC Wire - Thu, 02/15/2018 - 08:05

The Meltdown and Spectre vulnerabilities are proving difficult to fix, and initial experiments suggest security patches will cause significant performance penalties to HPC applications. Even as these patches are rolled out to current HPC platforms, it might be helpful to explore how future HPC systems could be better insulated from CPU or operating system security flaws that could cause massive disruptions. Surprisingly, most of the core concepts to build supercomputers that are resistant to a wide range of threats have already been invented and deployed in HPC systems over the past 20 years. Combining these technologies, concepts, and approaches not only would improve cybersecurity but also would have broader benefits for improving HPC performance, developing scientific software, adopting advanced hardware such as neuromorphic chips, and building easy-to-deploy data and analysis services. This new form of “Fluid HPC” would do more than solve current vulnerabilities. As an enabling technology, Fluid HPC would be transformative, dramatically improving extreme-scale code development in the same way that virtual machine and container technologies made cloud computing possible and built a new industry.

In today’s extreme-scale platforms, compute nodes are essentially embedded computing devices that are given to a specific user during a job and then cleaned up and provided to the next user and job. This “space-sharing” model, where the supercomputer is divided up and shared by doling out whole nodes to users, has been common for decades. Several non-HPC research projects over the years have explored providing whole nodes, as raw hardware, to applications. In fact, the cloud computing industry uses software stacks to support this “bare-metal provisioning” model, and Ethernet switch vendors have also embraced the functionality required to support this model. Several classic supercomputers, such as the Cray T3D and the IBM Blue Gene/P, provided nodes to users in a lightweight and fluid manner. By carefully separating the management of compute node hardware from the software executed on those nodes, an out-of-band control system can provide many benefits, from improved cybersecurity to shorter Exascale Computing Project (ECP) software development cycles.

Updating HPC architectures and system software to provide Fluid HPC must be done carefully. In some places, changes to the core management infrastructure are needed. However, many of the component technologies were invented more than a decade ago or simply need updating. Three key architectural modifications are required.

  1. HPC storage services and parallel I/O systems must be updated to use modern, token-based authentication. For many years, web-based services have used standardized technologies like OAuth to provide safe access to sensitive data, such as medical and financial records. Such technologies are at the core of many single-sign-on services that we use for official business processes. These token-based methods allow clients to connect to storage services and read and write data by presenting the appropriate token, rather than, for example, relying on client-side credentials and access from restricted network ports. Some data services, such as Globus, MongoDB, and Spark, have already shifted to allow token-based authentication. As a side effect, this update to HPC infrastructure would permit DOE research teams to fluidly and easily configure new storage and data services, both locally and remotely, without needing special administration privileges. In the same way that a website such as OpenTable.com can accept Facebook or Google user credentials, an ECP data team could create a new service that easily accepted NERSC or ALCF credentials. Moving to modern token-based authentication will improve cybersecurity, too; compromised compute nodes would not be able to read another user’s data. Rather, they would have access only to the areas for which an authentication token had been provided by the out-of-band system management layer. (A minimal sketch of such a token check appears right after this list.)
  2. HPC interconnects must be updated to integrate technology from software-defined networking (SDN). OpenFlow, an SDN standard, is already implemented in many commercial Ethernet switches. SDN allows massive cloud computing providers such as Google, Facebook, Amazon, and Microsoft to manage and separate traffic within a data center, preventing proprietary data from flowing past nodes that could be maliciously snooping. A compromised node must be prevented from snooping other traffic or spoofing other nodes. Essentially, SDN decouples the control plane and data movement from the physical and logical configuration. Updating the HPC interconnect technology to use SDN would improve cybersecurity and also keep errant HPC programs from interfering or conflicting with other jobs. With SDN technology, a confused MPI process would not be able to send data to another user’s node, because the software-defined network for the user, configured by the external system management layer, would not route the traffic to unconfirmed destinations. (A sketch of such per-job flow rules also follows this list.)
  3. Compute nodes must be efficiently reinitialized, clearing local state between user jobs. Many HPC platforms were designed to support rebooting and recycling compute nodes between jobs. Decades ago, netbooting Beowulf clusters was common. By quickly reinitializing a node and carefully clearing previous memory state, data from one job cannot be leaked to another. Without this technique, a security vulnerability that escalates privilege permits a user to look at data left on the node from the previous job and leave behind malware to watch future jobs. Restarting nodes before each job improves system reliability, too. While rebooting sounds simple, guaranteeing that RAM and even NVRAM are clean between reboots might require advanced techniques. Fortunately, several CPU companies have been adding memory encryption engines, and NVRAM producers have added similar features; purging the ephemeral encryption key is equivalent to clearing memory. This feature is used to instantly wipe modern smartphones, such as Apple’s iPhone. Wiping state between users can provide significant improvements to security and productivity. (A minimal illustration of this crypto-erase idea appears after the next paragraph.)
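
To make the token idea in item 1 concrete, the sketch below shows a minimal signed bearer-token check of the kind an out-of-band management layer could issue to a job at launch. It is an illustrative sketch using only Python's standard library, not the OAuth flow of Globus or any other real storage service; the function names, the shared signing key, and the path scheme are all hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the out-of-band control system, never by compute nodes.
SIGNING_KEY = b"management-layer-secret"

def issue_token(user, paths, ttl=3600):
    """Sign a short-lived capability naming the user and the path prefixes a job may touch."""
    claims = {"user": user, "paths": paths, "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def authorize(token, path):
    """Storage-service side: accept a request only if the token is authentic, unexpired, and covers the path."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and any(path.startswith(p) for p in claims["paths"])

# A compromised node holding this token still cannot read another user's data.
token = issue_token("alice", ["/projects/climate/"])
print(authorize(token, "/projects/climate/run42/output.nc"))  # True
print(authorize(token, "/projects/genomics/patient.db"))      # False
```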

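The isolation described in item 2 can be pictured as the management layer compiling, for each job, an allow-list of flows among that job's nodes (plus its storage gateway) and a default drop rule. The sketch below is a plain-Python illustration of that rule generation; it does not use a real OpenFlow or switch-vendor API, and the FlowRule structure, node names, and gateway name are hypothetical.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class FlowRule:
    src: str      # source node, or "*" for any
    dst: str      # destination node, or "*" for any
    action: str   # "allow" or "drop"

def rules_for_job(job_nodes, storage_gateway):
    """Compile per-job isolation rules: traffic may flow only among the job's own
    nodes and to/from its storage gateway; everything else hits the default drop."""
    rules = [FlowRule(a, b, "allow") for a, b in permutations(job_nodes, 2)]
    rules += [FlowRule(n, storage_gateway, "allow") for n in job_nodes]
    rules += [FlowRule(storage_gateway, n, "allow") for n in job_nodes]
    # Default rule: a confused MPI rank (or a compromised node) cannot reach other jobs.
    rules.append(FlowRule("*", "*", "drop"))
    return rules

for rule in rules_for_job(["nid0001", "nid0002"], "stor-gw01"):
    print(rule)
```
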
These three foundational architectural improvements to create a Fluid HPC system must be connected into an improved external system management layer. That layer would “wire up” the software-defined network for the user’s job, hand out storage system authentication tokens, and push a customized operating system or software stack onto the bare-metal provisioned hardware. Modern cloud-based data centers and their software communities have engineered a wide range of technologies to fluidly manage and deploy platforms and applications. The concepts and technologies in projects such as OpenStack, Kubernetes, Mesos, and Docker Swarm can be leveraged for extreme-scale computing without hindering performance. In fact, experimental testbeds such as the Chameleon cluster at the University of Chicago and the Texas Advanced Computing Center have already put some of these concepts into practice and would be an ideal location to test and develop a prototype of Fluid HPC.
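
Item 3 above notes that purging an ephemeral encryption key is equivalent to clearing memory. The sketch below is a software analogy for the hardware memory-encryption engines the article mentions, showing why discarding the key amounts to a crypto-erase; it assumes the third-party cryptography package is available, and the job data shown is invented.

```python
from cryptography.fernet import Fernet, InvalidToken

# At job launch: generate an ephemeral key and encrypt everything the job stores with it
# (hardware engines do this transparently for RAM/NVRAM; here it is explicit).
job_key = Fernet.generate_key()
leftover = Fernet(job_key).encrypt(b"intermediate results left in NVRAM by job 1234")

# While the key exists, the data is readable.
assert Fernet(job_key).decrypt(leftover) == b"intermediate results left in NVRAM by job 1234"

# Between jobs: instead of scrubbing every byte, simply discard the old key.
job_key = Fernet.generate_key()  # the next job gets a fresh key
try:
    Fernet(job_key).decrypt(leftover)
except InvalidToken:
    print("previous job's data is unrecoverable: crypto-erase complete")
```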

These architectural changes make HPC platforms programmable again. The software-defined everything movement is fundamentally about programmable infrastructure. Retooling our systems to enable Fluid HPC with what is essentially a collection of previously discovered concepts, rebuilt with today’s technology, will make our supercomputers programmable in new ways and have a dramatic impact on HPC software development.

  1. Meltdown and Spectre would cause no performance degradation on Fluid HPC systems. In Fluid HPC, compute nodes are managed as embedded systems. Nodes are given completely to users, in exactly the way many hero programmers have been requesting for years. The security perimeter around an embedded system leverages different cybersecurity techniques. The CPU flaws that gave us Meltdown and Spectre can be isolated by using the surrounding control system, rather than by adding performance-squandering patches to the node. Overall cybersecurity will improve by discarding the weak protections in compute nodes and building security into the infrastructure instead.
  2. Extreme-scale platforms would immediately become the world’s largest software testbeds. Currently, testing new memory management techniques or advanced data and analysis services is nearly impossible on today’s large DOE platforms. Without the advanced controls and out-of-band management provided by Fluid HPC, system operators have no practical method to manage experimental software on production systems. Furthermore, without token-based authentication to storage systems and careful network management to prevent accidental or mischievous malformed network data, new low-level components can cause system instability. By addressing these issues with Fluid HPC, the world’s largest platforms could be immediately used to test and develop novel computer science research and completely new software stacks on a per job basis.
  3. Extreme-scale software development would be easier and faster. For the same reason that the broader software development world is clamoring to use container technologies such as Docker to make writing software easier and more deployable, giving HPC code developers Fluid HPC systems would be a disruptive improvement to software development. Coders could quickly test deploy any change to the software stack on a per-job basis. They could even use machine learning to automatically explore and tune software stacks and parameters. They could ship those software stack modifications across the ocean in an instant, to be tried by collaborators running code on other Fluid HPC systems. Easy performance regression testing would be possible. The ECP community could package software simply. We can even imagine running Amazon-style lambda functions on HPC infrastructure. In short, the HPC community would develop software just as the rest of the world does.
  4. The HPC community could easily develop and deploy new experimental data and analysis services. Deploying an experimental data service or file system is extremely difficult. Currently, there are no common, practical methods for developers to submit a job to a set of file servers with attached storage in order to create a new parallel I/O system, and then give permission to compute jobs to connect and use the service. Likewise, HPC operators cannot easily test deploy new versions of storage services against particular user applications. With the Fluid HPC model, however, a user could instantly create a memcached-based storage service, MongoDB, or Spark cluster on a few thousand compute nodes. Fluid HPC would make the infrastructure programmable; the impediments users now face deploying big data applications on big iron would be eliminated.
  5. Fluid HPC would enable novel, improved HPC architectures. With intelligent and programmable system management layers, modern authentication, software-defined networks, and dynamic software stacks provided by the basic platform, new types of accelerators—from neuromorphic to FPGAs—could be quickly added to Fluid HPC platforms. These new devices could be integrated as a set of disaggregated network-attached resources or attached to CPUs without needing to support multiuser and kernel protections. For example, neuromorphic accelerators could be quickly added without the need to support memory protection or multiuser interfaces. Furthermore, the low-level software stack could jettison the unneeded protection layers, permission checks, and security policies in the node operating system.

It is time for the HPC community to redesign how we manage and deploy software and operate extreme-scale platforms. Computer science concepts are often rediscovered or modernized years after being initially prototyped. Many classic concepts can be recombined and improved with technologies already deployed in the world’s largest data centers to enable Fluid HPC. In exchange, users would receive improved flexibility and faster software development—a supercomputer that not only runs programs but is programmable. Users would have choices and could adapt their code to any software stack or big data service that meets their needs. System operators would be able to improve security, isolation, and the rollout of new software components. Fluid HPC would enable the convergence of HPC and big data infrastructures and radically improve the environments for HPC software development. Furthermore, if Moore’s law is indeed slowing and a technology to replace CMOS is not ready, the extreme flexibility of Fluid HPC would speed the integration of novel architectures while also improving cybersecurity.

It’s hard to thank Meltdown and Spectre for kicking the HPC community into action, but we should nevertheless take the opportunity to aggressively pursue Fluid HPC and reshape our software tools and management strategies.

*Acknowledgments: I thank Micah Beck, Andrew Chien, Ian Foster, Bill Gropp, Kamil Iskra, Kate Keahey, Arthur Barney Maccabe, Marc Snir, Swann Perarnau, Dan Reed, and Rob Ross for providing feedback and brainstorming on this topic.

About the Author

Pete Beckman

Pete Beckman is the co-director of the Northwestern University / Argonne Institute for Science and Engineering and designs, builds, and deploys software and hardware for advanced computing systems. When Pete was the director of the Argonne Leadership Computing Facility he led the team that deployed the world’s largest supercomputer for open science research. He has also designed and built massive distributed computing systems. As chief architect for the TeraGrid, Pete oversaw the team that built the world’s most powerful Grid computing system for linking production HPC centers for the National Science Foundation. He coordinates the collaborative research activities in extreme-scale computing between the US Department of Energy (DOE) and Japan’s ministry of education, science, and technology and leads the operating system and run-time software research project for Argo, a DOE Exascale Computing Project. As founder and leader of the Waggle project for smart sensors and edge computing, he is designing the hardware platform and software architecture used by the Chicago Array of Things project to deploy hundreds of sensors in cities, including Chicago, Portland, Seattle, Syracuse, and Detroit. Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985).


Spring Career Day welcomes a record 248 companies

Colorado School of Mines - Wed, 02/14/2018 - 16:09

Colorado School of Mines hosted the largest Spring Career Day in its history Feb. 13, with a record 248 companies present.

Hundreds of students entered the Student Recreation Center for the chance to speak to recruiters about opportunities. For some students, that meant waiting in the long lines that formed at larger companies. Among the larger employers at Career Day were Kiewit, Martin/Martin, Orbital ATK, Denver Water, BP America and Halliburton. 

Many recruiters are Career Day veterans. They come back each year ready to hire Mines students because they are "very intelligent, well-formed students," said Jason Berumen, senior global talent manager for Webroot, a Colorado-based cybersecurity and threat intelligence services firm.

Additionally, many companies use Career Day as a chance to market themselves, even if they do not have jobs currently available. 

"We want people to recognize us as a competitive employer in Colorado," said Allison Martindale, HR analyst for the city of Thornton. "I have noticed that a lot of students can support our infrastructure department once we have jobs available."

Many recruiters strongly suggest that students thoroughly research the companies they are interested in talking to. Being prepared demonstrates interest in the company, and for many recruiters, this interest translates into passion. 

"We are looking for a degree, first and foremost, and then students who have good interpersonal skills and enthusiasm," said Cameron Matthews, associate director for Turner & Townsend, a multinational project management company. "We want our employees to be passionate about what they do."

Additionally, recruiters seek students with a good attitude and enthusiasm for the company. 

Excellent communication skills are a necessity for many companies. In fact, for Sundt recruiters Jim Pullen and Mike Morales, they were the most important skill for students to have mastered. In addition, "we are looking for a good GPA and students willing to travel," Morales said.

The general contracting firm was also looking for students who were confident in their technical abilities, which, for them, was demonstrated through consistent eye contact and a good handshake.

And while students are often told to apply online for open positions, the trip to Career Day is still worth it, they said. 

"People just tell you to apply online, but showing up here is helpful because maybe they will remember your name in the hundreds of applicants," said Olivia Eppler, a junior studying mechanical engineering. 

Many students also view Career Day as a way to get more information about internships.

"Coming to Career Day mostly just gives me information on what I might want to do," said Tristan Collette, a senior in mechanical engineering. 

Emma Miller, a junior majoring in environmental engineering, said going to Career Day before she needed an internship helped prepare her for when she was actively looking for one.

Oftentimes, the information listed online can be rather vague. Talking to recruiters allows students to ask what the company culture is like and get further details on the jobs they have available. 

While some students experience nerves, Collette said remembering that recruiters "know what it's like to go here and know how hard it is" makes talking with them easier. "They know we have learned to problem-solve and create solutions."

"Have some notes written down to refer to in case you get nervous and forget what you are saying," Eppler said. "Remember, they are just people."

Networking events that are held before Career Day by various clubs can also help to alleviate some of the nerves. 

"Martin/Martin was at an American Society of Civil Engineers networking event yesterday, so I already knew them," said Ken Sullivan, a senior in civil engineering. "Today, I just got to drop off my resume and say hello."

The best advice, students said, is to stay true to yourself. "They want to hire who they interview, not someone who is trying to fit a mold," Collette said.

CONTACT
Katharyn Peterman, Student News Reporter | kpeterma@mymail.mines.edu
Emilie Rusch, Public Information Specialist, Colorado School of Mines | 303-273-3361 | erusch@mines.edu

Categories: Partner News

DOE Gets New Office of Cybersecurity, Energy Security, and Emergency Response

HPC Wire - Wed, 02/14/2018 - 14:35

WASHINGTON, D.C., Feb. 14 – Today, U.S. Secretary of Energy Rick Perry is establishing a new Office of Cybersecurity, Energy Security, and Emergency Response (CESER) at the U.S. Department of Energy (DOE). $96 million in funding for the office was included in President Trump’s FY19 budget request to bolster DOE’s efforts in cybersecurity and energy security.

The CESER office will be led by an Assistant Secretary who will focus on energy infrastructure security, support the expanded national security responsibilities assigned to the Department, and report to the Under Secretary of Energy.

“DOE plays a vital role in protecting our nation’s energy infrastructure from cyber threats, physical attack and natural disaster, and as Secretary, I have no higher priority,” said Secretary Perry. “This new office best positions the Department to address the emerging threats of tomorrow while protecting the reliable flow of energy to Americans today.”

The creation of the CESER office will elevate the Department’s focus on energy infrastructure protection and will enable more coordinated preparedness and response to natural and man-made threats.

Source: US Department of Energy


Landis to speak on diversity in STEM at AAAS Annual Meeting

Colorado School of Mines - Wed, 02/14/2018 - 14:09

Amy Landis, presidential faculty fellow for access, attainment and diversity at Colorado School of Mines, has been invited to speak about diversity in STEM at the 2018 American Association for the Advancement of Science Annual Meeting Feb. 15-19 in Austin, Texas. 

Landis, who leads the President's Council on Diversity, Inclusion and Access at Mines, will present during two sessions at the conference, the largest general scientific meeting in the world.

On Feb. 17, she will discuss her and her colleagues' recent work on impostor syndrome and science communication during a career development workshop, Cultivating Your Voice and Banishing Your Inner Impostor: Workshop for Women in STEM. Conducting the workshop with Landis are Christine O'Connell, assistant professor of science communication at the Alan Alda Center for Communicating Science, and Pragnya Eranki, research faculty in civil and environmental engineering at Mines.

The following day, Landis will be on a panel discussing communication challenges and opportunities for women in STEM with 500 Women Scientists' Melissa Creary and Liz Neeley of The Story Collider.  

A professor of civil and environmental engineering at Mines, Landis has spent her career promoting and supporting women and underrepresented minorities in the STEM fields. Before joining Mines in 2017, she was a professor at Clemson University and director of Clemson's Institute for Sustainability, where she established numerous successful programs including an undergraduate research program for underrepresented students, a graduate professional development program and a workshop on communicating engineering for women. At the University of Pittsburgh, she helped create negotiation workshops, networking events, work-life balance discussion groups and an impostor syndrome workshop.

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Intel Touts Silicon Spin Qubits for Quantum Computing

HPC Wire - Wed, 02/14/2018 - 13:54

The debate around what makes a good qubit, and how best to manufacture qubits, is sprawling, with many insistent voices favoring one approach or another. Referencing a paper published today in Nature, Intel has offered a quick take on the promise of silicon spin qubits, one of two approaches to quantum computing that Intel is exploring.

Silicon spin qubits, which leverage the spin of a single electron on a silicon device to perform quantum calculations, offer several advantages over their more familiar superconducting counterparts, Intel contends. They are physically smaller, are expected to have longer coherence times, should scale well, and can likely be fabricated using familiar processes.

“Intel has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers sourced specifically for the production of spin-qubit test chips. Fabricated in the same facility as Intel’s advanced transistor technologies, Intel is now testing the initial wafers. Within a couple of months, Intel expects to be producing many wafers per week, each with thousands of small qubit arrays,” according to the Intel news brief posted online today.

Intel has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers, like this one. (Credit: Walden Kirsch/Intel)

The topic isn’t exactly new. Use of quantum dots for qubits has long been studied. The new Nature paper, A programmable two-qubit quantum processor in silicon, demonstrated overcoming some of the cross-talk obstacles presented when using quantum dots.

Abstract excerpt: “[W]e overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Josza algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 percent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.”
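
For readers unfamiliar with the algorithms named in the abstract, the snippet below classically simulates the two-qubit Grover search with NumPy: with only four basis states, a single oracle-plus-diffusion iteration concentrates all measurement probability on the marked state, which is the behavior the silicon processor demonstrates in hardware. This is an illustrative classical simulation, not code from the Nature paper, and NumPy availability is assumed.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
H2 = np.kron(H, H)                             # Hadamard on both qubits

def grover_two_qubit(marked):
    """Return measurement probabilities after one Grover iteration marking basis state `marked` (0-3)."""
    state = H2 @ np.array([1.0, 0.0, 0.0, 0.0])   # uniform superposition from |00>
    oracle = np.eye(4)
    oracle[marked, marked] = -1                    # phase-flip the marked state
    s = np.full(4, 0.5)
    diffusion = 2 * np.outer(s, s) - np.eye(4)     # inversion about the mean
    state = diffusion @ (oracle @ state)
    return np.abs(state) ** 2

print(grover_two_qubit(2))   # ~[0, 0, 1, 0]: the marked item is found with certainty
```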

Intel emphasizes silicon spin qubits can operate at higher temperatures than superconducting qubits (1 kelvin as opposed to 20 millikelvin). “This could drastically reduce the complexity of the system required to operate the chips by allowing the integration of control electronics much closer to the processor. Intel and academic research partner QuTech are exploring higher temperature operation of spin qubits with interesting results up to 1K (or 50x warmer) than superconducting qubits. The team is planning to share the results at the American Physical Society (APS) meeting in March.”

A link to a simple but neat video explaining programming on a silicon chip is below.

Link to Nature paper: https://www.nature.com/articles/nature25766

Link to full Intel release: https://newsroom.intel.com/news/intel-sees-promise-silicon-spin-qubits-quantum-computing/


PNNL, OHSU Create Joint Research Co-Laboratory to Advance Precision Medicine

HPC Wire - Wed, 02/14/2018 - 13:30

PORTLAND, Ore., Feb. 14, 2018 — Pacific Northwest National Laboratory and OHSU today announced a joint collaboration to improve patient care by focusing research on highly complex sets of biomedical data, and the tools to interpret them.

The OHSU-PNNL Precision Medicine Innovation Co-Laboratory, called PMedIC, will provide a comprehensive ecosystem for scientists to utilize integrated ‘omics, data science and imaging technologies in their research in order to advance precision medicine — an approach to disease treatment that takes into account individual variability in genes, environment and lifestyle for each person.

“This effort brings together the unique and complementary strengths of Oregon’s only academic medical center, which has a reputation for innovative clinical trial designs, and a national laboratory with an international reputation for basic science and technology development in support of biological applications,” said OHSU President Joe Robertson. “Together, OHSU and PNNL will be able to solve complex problems in biomedical research that neither institution could solve alone.”

“The leading biomedical research and clinical work performed at OHSU pairs well with PNNL’s world-class expertise in data science and mass spectrometry analyses of proteins and genes,” said PNNL Director Steven Ashby. “By combining our complementary capabilities, we will make fundamental discoveries and accelerate our ability to tailor healthcare to individual patients.”

The co-laboratory will strengthen and expand the scope of existing interactions between OHSU and PNNL that already include cancer, regulation of cardiovascular function, immunology and infection, and brain function, and add new collaborations in areas from metabolism to exposure science. The collaboration brings together the two institutions’ strengths in data science, imaging and integrated ‘omics, which explores how genes, proteins and various metabolic products interact. The term arises from research that explores the function of key biological components within the context of the entire cell — genomics for genes, proteomics for proteins, and so on.

“PNNL has a reputation for excellence in the technical skill sets required for precision medicine, specifically advanced ‘omic’ platforms that measure the body’s key molecules — genes, proteins and metabolites — and the advanced data analysis methods to interpret these measurements,” said Karin Rodland, director of biomedical partnerships at PNNL. “Pairing these capabilities with the outstanding biomedical research environment and innovative clinical trials at OHSU will advance the field of precision medicine and lead to improved patient outcomes.”

In the long term, OHSU and PNNL aim to foster a generation of biomedical researchers fluent in all the aspects of the science underlying precision medicine, from clinical trials to molecular and computational biology to bioengineering and technology development — a new generation of scientists skilled in translating basic science discoveries to clinical care.

“Just as we have many neuroscientists focused on brain science at OHSU, we have many researchers taking different approaches on the path toward the goal of precision medicine,” said Mary Heinricher, associate dean of basic research, OHSU School of Medicine. “I believe one of the greatest opportunities of this collaboration is for OHSU graduate students and post-docs to have exposure to new technologies and collaborations with PNNL. This will help train the next generation of scientists.”

OHSU and PNNL first collaborated in 2015, when they formed the OHSU-PNNL Northwest Co-Laboratory for Integrated ‘Omics and were designated a national Metabolomics Center for the Undiagnosed Diseases Network, an NIH-funded national network designed to identify underlying mechanisms of very rare diseases. In 2017, the two organizations partnered on a National Cancer Institute-sponsored effort to become a Proteogenomic Translational Research Center focused on a complex form of leukemia.

Source: PNNL

The post PNNL, OHSU Create Joint Research Co-Laboratory to Advance Precision Medicine appeared first on HPCwire.

NCSA Researchers Create Reliable Tool for Long-Term Crop Prediction in the U.S. Corn Belt

HPC Wire - Wed, 02/14/2018 - 13:21

Feb. 14, 2018 — With the help of the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Blue Waters Professor Kaiyu Guan and NCSA postdoctoral fellow Bin Peng implemented and evaluated a new maize growth model. The CLM-APSIM model combines superior features of both the Community Land Model (CLM) and the Agricultural Production Systems sIMulator (APSIM), creating one of the most reliable tools for long-term crop prediction in the U.S. Corn Belt. Peng and Guan recently published their paper, “Improving maize growth processes in the community land model: Implementation and evaluation,” in the journal Agricultural and Forest Meteorology. This work is an outstanding example of the convergence of simulation and data science that is a driving factor in the National Strategic Computing Initiative announced by the White House in 2015.

Conceptual diagram of phenological stages in the original CLM, APSIM and CLM-APSIM models. Unique features of the CLM-APSIM crop model are also highlighted. Note that the stage durations in this diagram are not proportional to real stage lengths and are presented for illustrative purposes only. Image courtesy of NCSA.

“One class of crop models is agronomy-based and the other is embedded in climate models or earth system models. They are developed for different purposes and applied at different scales,” says Guan. “Because each has its own strengths and weaknesses, our idea is to combine the strengths of both types of models to make a new crop model with improved prediction performance.” Additionally, what makes the new CLM-APSIM model unique are its more detailed phenology stages, an explicit implementation of the impacts of various abiotic environmental stresses (including nitrogen, water, temperature and heat stresses) on maize phenology and carbon allocation, and an explicit simulation of grain number.

With support from the NCSA Blue Waters project (funded by the National Science Foundation and Illinois), NASA and the USDA National Institute of Food and Agriculture (NIFA) Foundational Program, Peng and Guan created the prototype for CLM-APSIM. “We built this new tool to bridge these two types of crop models combining their strengths and eliminating the weaknesses.”

The team is currently conducting a high-resolution regional simulation over the contiguous United States to simulate corn yield at each planted grid cell. “There are hundreds of thousands of grids, and we run this model over each grid for 30 years in historical simulation and even more for future projection simulation,” said Peng. “Currently it takes us several minutes to calculate one model-year simulation over a single grid. The only way to do this in a timely manner is to use parallel computing with thousands of cores on Blue Waters.”
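
As a rough illustration of the embarrassingly parallel pattern Peng describes (each grid cell’s multi-decade run is independent of every other cell), here is a minimal Python sketch. The function and variable names are illustrative placeholders, not part of CLM-APSIM, and a production campaign on Blue Waters would distribute the same loop with MPI across thousands of cores rather than a single-node process pool.

```python
from multiprocessing import Pool

GRID_CELLS = range(100_000)      # stand-in for the hundreds of thousands of grid cells
YEARS = range(1980, 2010)        # an illustrative 30-year historical window

def run_grid_cell(cell_id):
    """Run the multi-year crop simulation for one grid cell (placeholder model)."""
    results = []
    for year in YEARS:
        # A real run would invoke the coupled CLM-APSIM model for this cell and year;
        # a dummy yield value keeps the sketch self-contained and runnable.
        simulated_yield = 0.0
        results.append((year, simulated_yield))
    return cell_id, results

if __name__ == "__main__":
    with Pool() as pool:   # one worker per available core; on an HPC system these become MPI ranks
        for cell_id, results in pool.imap_unordered(run_grid_cell, GRID_CELLS, chunksize=256):
            pass           # in practice, write each cell's multi-year yield series to disk
```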

Peng and Guan examined the results of this tool at seven locations across the U.S. Corn Belt. The comparison revealed that the CLM-APSIM model more accurately simulated phenology, leaf area index, canopy height, and surface fluxes (including gross primary production, net ecosystem exchange, latent heat and sensible heat), and in particular biomass partitioning and maize yield, than the earlier CLM4.5 model. The CLM-APSIM model also corrected a serious deficiency in the original CLM model, which underestimated aboveground biomass and overestimated the harvest index, producing a seemingly reasonable yield estimate through the wrong mechanisms.

Additionally, results from a 13-year simulation (2001-2013) at three sites located in Mead, NE, (US-Ne1, Ne2 and Ne3) show that the CLM-APSIM model can more accurately reproduce maize yield responses to growing season climate (temperature and precipitation) than the original CLM4.5 when benchmarked with the site-based observations and USDA county-level survey statistics.

“We can simulate the past, because we already have the weather datasets, but looking into the next 50 years, how can we understand the effect of climate change? Furthermore, how can we understand what farmers can do to improve and mitigate the climate change impact and improve the yield?” Guan said.

Their hope is to integrate satellite data into the model, similar to that of weather forecasting. “The ultimate goal is to not only have a model, but to forecast in real-time, the crop yields and to project the crop yields decades into the future,” said Guan. “With this technology, we want to not only simulate all the corn in the county of Champaign, Illinois, but everywhere in the U.S. and at a global scale.”

From here, Peng and Guan plan to expand this tool to include other staple crops, such as wheat, rice and soybeans. They are projected to complete a soybean simulation model for the entire United States within the next year.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About the Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: NCSA

The post NCSA Researchers Create Reliable Tool for Long-Term Crop Prediction in the U.S. Corn Belt appeared first on HPCwire.

Physics Data Processing at NERSC Dramatically Cuts Reconstruction Time

HPC Wire - Wed, 02/14/2018 - 13:15

Feb. 14, 2018 — In a recent demonstration project, physicists from Brookhaven National Laboratory (BNL) and Lawrence Berkeley National Laboratory (Berkeley Lab) used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) to reconstruct data collected from a nuclear physics experiment, an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

The researchers reconstructed multiple datasets collected by the STAR (Solenoidal Tracker At RHIC) detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at BNL. By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed raw data into “physics-ready” data at the petabyte scale in a fraction of the time it would have taken using in-house high-throughput computing resources—even with a two-way transcontinental journey via ESnet, the Department of Energy’s high-speed, high-performance data-sharing network that is managed by Berkeley Lab.

Preparing raw data for analysis typically takes many months, making it nearly impossible to provide such short-term responsiveness, according to Jérôme Lauret, a senior scientist at BNL and co-author on a paper outlining this work that was published in the Journal of Physics.

“This is a key usage model of high performance computing (HPC) for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC and co-author on the Journal of Physics paper.

Billions of Data Points

The STAR experiment is a leader in the study of strongly interacting QCD matter that is generated in energetic heavy ion collisions. STAR consists of a large, complex set of detector systems that measure the thousands of particles produced in each collision event. Detailed analyses of billions of such collisions have enabled STAR scientists to make fundamental discoveries and measure the properties of the quark-gluon plasma. Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at BNL. High-throughput computing clusters crunch the data event by event and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.

In recent years, however, STAR datasets have reached billions of events, with data volumes at the multi-petabyte scale. The raw data signals collected by the detector electronics are processed using sophisticated pattern recognition algorithms to generate the higher-level datasets that are used for physics analysis. So the STAR computing team investigated the use of external resources to meet the demand for timely access to physics-ready data, ultimately turning to NERSC. Among other things, NERSC operates the PDSF cluster for the HEP/NP experiment community, which represents the second largest compute cluster available to the STAR collaboration.

A Processing Framework

Unlike the high-throughput computers at the RACF and PDSF, which analyze events one by one, HPC resources like those at NERSC break large problems into smaller tasks that can run in parallel. So the challenge was to parallelize the processing of STAR event data in a way that can scale out to run on large amounts of data with reproducible results.

The processing framework run at NERSC was built upon several core features. Shifter, a Linux container system developed at NERSC, provided a simple solution to the difficult problem of porting complex software to new computing systems while keeping its expected behavior. Scalability was achieved by eliminating bottlenecks in accessing both the event data and the experiment databases that record environmental changes—voltage, temperature, pressure and other detector conditions—during data taking. To do this, the workload was broken up into data chunks, each sized to run on a single node on which a snapshot of the STAR database could also be stored. Each node was then self-sufficient, allowing the work to automatically expand out to as many nodes as were available without any direct intervention.
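
A minimal sketch of that “self-sufficient node” idea follows, assuming hypothetical file and helper names (this is not the actual STAR/NERSC pipeline code): each chunk of raw-event files is paired with a local snapshot of the conditions database, so a node needs no outside services while it reconstructs its share of the data.

```python
from typing import List, NamedTuple

class WorkUnit(NamedTuple):
    chunk_files: List[str]   # raw-event files sized to fit a single node
    db_snapshot: str         # local snapshot of the detector-conditions database

def make_work_units(event_files: List[str], files_per_node: int, snapshot_path: str) -> List[WorkUnit]:
    """Group event files into independent, node-sized work units."""
    return [
        WorkUnit(event_files[i:i + files_per_node], snapshot_path)
        for i in range(0, len(event_files), files_per_node)
    ]

# Illustrative usage: 1,000 hypothetical raw files split into 40 self-contained units,
# each of which can be dispatched to its own node with no shared database bottleneck.
units = make_work_units(
    [f"st_physics_{n:04d}.daq" for n in range(1000)],
    files_per_node=25,
    snapshot_path="star_conditions_snapshot.db",
)
print(len(units), "independent work units")
```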

“Several technologies developed in-house at NERSC allowed us to build a highly fault-tolerant, multi-step, data-processing pipeline that could scale to a practically unlimited number of nodes, with the potential to dramatically reduce the time it takes to process data for many experiments,” noted Mustafa Mustafa, a Berkeley Lab physicist who helped design the system.

Another challenge in migrating the task of raw data reconstruction to an HPC environment was getting the data from BNL in New York to NERSC in California and back. Both the input and output datasets are huge. The team started small with a proof-of-principle experiment—just a few hundred jobs—to see how their new workflow programs would perform. Colleagues at RACF, NERSC and ESnet—including Damian Hazen of NERSC and Eli Dart of ESnet—helped identify hardware issues and optimize the data transfer and the end-to-end workflow.

After fine-tuning their methods based on the initial tests, the team started scaling up, initially using 6,400 computing cores on Cori; in their most recent test they utilized 25,600 cores. The end-to-end efficiency of the entire process—the time the program was running (not sitting idle, waiting for computing resources) multiplied by the efficiency of using the allotted supercomputing slots and getting useful output all the way back to BNL—was 98 percent.
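
Because the quoted 98 percent is a product of per-stage efficiencies, every stage has to be nearly lossless. The toy numbers below are made up purely to illustrate the arithmetic; they are not measured values from the run.

```python
# Illustrative arithmetic only: two hypothetical per-stage efficiencies whose
# product lands near the reported end-to-end figure.
slot_utilization = 0.99      # fraction of allotted node-hours actually spent computing
useful_output_rate = 0.99    # fraction of jobs whose output made it back to BNL intact
end_to_end = slot_utilization * useful_output_rate
print(f"end-to-end efficiency = {end_to_end:.1%}")   # ~98.0%
```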

“This was a very successful large-scale data processing run on NERSC HPC,” said Jan Balewski, a member of the data science engagement group at NERSC who worked on this project. “One that we can look to as a reference as we actively test alternative approaches to support scaling up the computing campaigns at NERSC by multiple physics experiments.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.

Source: NERSC

The post Physics Data Processing at NERSC Dramatically Cuts Reconstruction Time appeared first on HPCwire.

Brookhaven Ramps Up Computing for National Security Effort

HPC Wire - Wed, 02/14/2018 - 10:41

Last week, Dan Coats, the U.S. Director of National Intelligence, warned the Senate Intelligence Committee that Russia was likely to meddle in the 2018 mid-term U.S. elections, much as it stands accused of doing in the 2016 presidential election. A week earlier, a report surfaced in a Russian media outlet that a group of Russian nuclear scientists had been arrested for using a government supercomputer to mine cryptocurrency.

These very public episodes of computer misdeeds are a small portion of a growing and largely hidden iceberg of computer-related dangers with the potential to harm society. There are, of course, many active efforts to mitigate the ‘hacker’ onslaught as well as to use computational capabilities for U.S. national security purposes. Now, a formal effort is ramping up at Brookhaven National Laboratory (BNL).

Last fall, Adolfy Hoisie, then at Pacific Northwest National Laboratory, was tapped to join Brookhaven’s expanding computing research effort and to become chair of the new Computing for National Security (CNS) Department. Since then, Hoisie has been quickly drawing up the roadmap for the new effort – it’s charged with researching and developing novel technologies and applications for use in solving computing challenges in the national security arena.

The new CNS department is a recent addition to Brookhaven’s roughly three-year-old Computational Science Initiative (CSI), which is intended to further computational capability and research at Brookhaven with a distinct emphasis on data science. Brookhaven is probably best known for its high-energy physics research. Most recently, its National Synchrotron Light Source II is grabbing attention – it will be the brightest in the world when completed and will accommodate 60 to 70 beamlines. Brookhaven also houses RHIC (Relativistic Heavy Ion Collider), which among other things is currently looking for the missing spin of the proton.

Adolfy Hoisie, Chair, Computing for National Security (CNS) Department, Brookhaven National Laboratory

Not surprisingly, the synchrotron, RHIC, and a variety of other experimental instruments at Brookhaven produce a lot of data. “We have the second largest scientific data archive in the U.S. and the fourth largest in the world,” said Hoisie, founding chairman of the CNS department. “On an annual basis, data to the tune of 35 petabytes are being ingested, 37 petabytes are being exported, and 400 petabytes of data analyzed. [What’s more] given the scientific community nature of this work, a lot of this data needs to be accessed at high bandwidth in and out of the experimental facilities and the Lab’s storage systems.”

Dealing with that mountain of experimental data is the main computational challenge at Brookhaven and Hoisie noted the CNS mission is ‘highly synergistic’ with those efforts.

“A large spectrum, if not a preponderance of applications, inspired by national security challenges, are in actual fact data sciences problems. It is speed of collection from various sources, whether the volume or velocity of data, the quality of data, analysis of data, which sets performance bounds for [security-related] applications. Just like data being streamed from a detector on an x-ray beam, data that is being streamed from a UAV (unmanned aerial vehicle) also has the challenges of too much data being generated and not enough bandwidth-to-the-ground in order for it to become actionable information and then make it back to the flying vehicle,” he said.

“The methodologies for data analysis, including machine learning and deep learning, required for national security concerns are very much synergistic with the challenges in data sciences. The spectrum of applications of interest to my department includes intelligence apps, cybersecurity, non-proliferation activities including international aspects of that, supply chain security, and a number of computational aspects of security of the computing infrastructure.”

Hoisie is no stranger to HPC or to building focused HPC research organizations. He joined Brookhaven from PNNL, where he was the Director of the Advanced Computing, Mathematics, and Data Division, and the founding director of the Center for Advanced Technology Evaluation (CENATE). He plans to significantly expand the breadth, depth, and reach of the technologies and applications considered, with a focus on the full technology pipeline (from basic research through devices, boards, and systems to algorithms and applications).

Brookhaven, of course, already has substantial computational resources, a big chunk of which are co-located with the new synchrotron and dedicated to it. Predictably, I/O and storage is a particularly thorny issue and Hoisie noted Brookhaven has a large assortment of storage solutions and devices “from novel solutions all the way to discs and tapes of many generations that require computational resources in order to operate and do the data management.”

Brookhaven Light Source II

Currently, there is a second effort to centralize and expand the remaining computational infrastructure. The new CNS department, along with much of the CSI, will be located in the new center.

“The first floor of the old synchrotron (National Synchrotron Light Source I) is being refurbished to a modern machine room through a DOE sponsored project. The approximate size of the area is 50,000 square feet. Significant power will be added to house the existing large scale computing and storage systems, and provide for the ability to grow in the future commensurate with the computing aspirations of Brookhaven. The new facility will also include computing Lab space for high accuracy and resolution measurement of computing technologies from device to systems, and to house computing systems “beyond Moore’s law” that will likely require special operating conditions,” said Hoisie.

Brookhaven has a diverse portfolio of ongoing research, some of which will be tapped by CNS. “For example, there’s a significant scientific emphasis in materials design. That includes a center for nanomaterials, developing methodologies for material design and actual development of materials. We are trying to enmesh this expertise in materials with that in computing to tackle the challenges of computing at the device level,” Hoisie said.

Hoisie’s group will also look at emerging technologies such as quantum computing. “That’s an area of major interest. We are looking at not only creating the appropriate facilities for siting quantum computing, such as the infrastructure for deep cooling and whatnot, but also looking at very significantly expanding the range of applications that are suitable for quantum computing. On that we have active discussions with IBM and others. You know, quantum computing is a little bit of a work in progress. I know I am stating the obvious but a lot depends on expanding significantly the range of applications to which quantum computing is applicable. We too often say, yes, quantum computing is very good for quantum chemistry or studying quantum effects in all kinds of processes, and cryptography, but there are many other areas we are trying to explore.”

Industry collaboration is an important part of the plan. In fact, noted Hoisie, “CSI, for example, is partly endowed by a New York State grant and part of the rules of engagement related to the grant and the management structure of Brookhaven [requires] development of a bona fide, high quality, high bandwidth interaction with regional powerhouses in computing including IBM. So we have quite a few ongoing in-depth discussions with potential partners that we hope soon to materialize to tackle together specific technologies.”

Throughout his computing career, Hoisie has developed fruitful collaborations with technology providers such as IBM, AMD, Nvidia, and Data Vortex, just to name a few. He expects to do the same now.

The modeling and simulation (ModSim) workshop series he helped organize and run will also continue, under his leadership and with the participation of his new group. “The series of ModSim meetings will continue. Although I am not on the West Coast now, we decided to organize them, for continuity, in Seattle at the University of Washington. These are events in which we are going to showcase technologies and applications, including those of national security interest, and how ModSim is going to help. We’ve refreshed the committee to expand its base. That will continue as an interagency-funded operation that involves DOE, NSF, and a number of sectors from DoD,” Hoisie said.

Obviously, these are still early days for the Computing for National Security initiative. A limited number of projects are still taking shape and few details are available. That said, Hoisie has high expectations:

“We have very significant plans to grow this department. The goal is to bring this Computing for National Security department, which is small at the moment, to the level of a high quality, and the emphasis is on the highest possible quality, of a top-notch national laboratory division level effort.

“This is the way in which we conducted HPC research for decades in my groups. There is the highest quality staff that we hire. There is active integration across the spectrum from technology and systems to the system software to applications and algorithms. And there is a healthy mixture of applied mathematics and computer science and domain sciences that are all contributing to the team effort. And there is a pipeline that we are interested in at all stages: as the technology matures you get more and more into areas that are related to computer science and mathematics and algorithm development and end up in tech development arena. These technologies materialize into boards, devices, systems and then into very large scale supercomputers that offer efficient solutions for solving science or national security problems. We absolutely plan to follow this way.”

The post Brookhaven Ramps Up Computing for National Security Effort appeared first on HPCwire.

Mining Engineering to launch professional master's degree

Colorado School of Mines - Wed, 02/14/2018 - 09:28

The Mining Engineering Department at Colorado School of Mines has received approval from the Board of Trustees to launch an innovative new graduate degree program. The Professional Masters in Mining Engineering and Management is an advanced degree that focuses on the practical integration of the technical, financial, management and other linked disciplines that make up the mining industry today. The program will be delivered exclusively online and will be among the first online programs offered by the School of Mines.

 “This is a one-of-a-kind program that we are really excited about,” said Dr. Priscilla Nelson, professor and head of the Mining Engineering Department. “It focuses on those things that industry executives tell us they wish they would have learned during their academic careers.  We have wrapped the business and management elements into a mining engineering degree that emphasizes where the industry will be in the future instead of where it has been in the past.  And, because it is delivered online, students don’t have to quit their jobs and come to campus to get their advanced degree – they can do this program from anywhere and receive their degree from one of the best mining schools in the world.”

Successful candidates for this program will have an undergraduate degree in engineering and at least five years of professional experience in the mining sector. Applications are being accepted for Fall 2018, pending final confirmation from the Colorado Department of Higher Education and online accreditation from the Higher Learning Commission. 

The online program will comprise twelve 8-week courses plus an independent project, for a total of 33 credit hours. Courses are intended to be taken in a fixed sequence, one course at a time, with program completion in two years. Courses will cover mine-related engineering and technology, mine support services, and mine-applied business and management. Each course will address the state of the practice, the risks and uncertainties, and the innovations and trends that will impact the mining industry in the future.  Courses will also address the use of information systems to organize and use the huge amounts of data the industry generates, and how best to integrate important linked disciplines like social and environmental responsibility, occupational and community health and safety, project security, water and waste management, internal and external communications, and life-cycle planning and closure.

Colorado School of Mines has been providing specialty knowledge to mining industry professionals since 1874. The Mining Engineering Department has offered on-campus Master of Science degrees with an academic focus and research activities for many years, and will continue to do so. The Industry Advisory Committee to the Mining Department and Mining Department faculty have strongly advocated for a professional, practice-centered program that will help advance mid-level professionals into senior and executive-level leadership roles – this Professional Masters in Mining Engineering and Management is just that program.

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Barbara A. Filas, Mining Engineering Department | 303-941-9140 | bfilas@mines.edu

 

Categories: Partner News

OLCF-Developed Visualization Tool Offers Customization and Faster Rendering

HPC Wire - Wed, 02/14/2018 - 07:15

Feb. 14, 2018 — At the home of America’s most powerful supercomputer, the Oak Ridge Leadership Computing Facility (OLCF), researchers often simulate millions or billions of dynamic atoms to study complex problems in science and energy. The OLCF is a US Department of Energy Office of Science User Facility located at Oak Ridge National Laboratory.

SIGHT visualization from a project led by University of Virginia’s Leonid Zhigilei to explore how lasers transform metal surfaces. Image courtesy of OLCF.

Finding fast, user-friendly ways to organize and analyze all this data is the job of the OLCF Advanced Data and Workflow Group and computer scientists like Benjamín Hernández, who has developed a new visualization tool called SIGHT for OLCF users.

“The amount of data users deal with is huge, and we want them to be able to easily visualize their datasets remotely and in real-time to see what they are simulating,” Hernández said. “We also want to provide ‘cinematic rendering’ to enhance the visual perception of visualizations.”

Through scientific visualizations, researchers can better compare experimental and computational data. Using a type of scientific visualization known as exploratory visualization, researchers can interactively manipulate 3D renderings of their data to make new connections between atomic structure and physical properties, thereby improving the effectiveness of the visualization.

However, as scientific data grow in complexity, so too do memory requirements—especially for exploratory visualization. To provide an easy-to-use, remote visualization tool for users, Hernández developed SIGHT, an exploratory visualization tool customized for OLCF user projects and directly deployed on OLCF systems.

As opposed to traditional visualization in which images are often rendered during post-processing, exploratory visualization can enable researchers to improve models before starting a simulation; make previously unseen connections in data that can inform modeling and simulation; and more accurately interpret computational results based on experimental data.

Hernández incrementally developed the exploratory visualization tool SIGHT by working with a few teams of OLCF users to fold in the specific features they needed for their projects.

To study how lasers transform metal surfaces to create complex, multiscale roughness and drive the ejection of nanoparticles, a team led by materials scientist Leonid Zhigilei of the University of Virginia used Titan to simulate more than 2 billion atoms over thousands of time steps.

“The initial attempts to visualize the atomic configurations were very time-consuming and involved cutting the system into several pieces and reassembling the images produced for different pieces,” Zhigilei said. “SIGHT, however, enabled the researchers at the University of Virginia to take a quick look at the whole system, monitor the evolution of the system over time, and identify the most interesting regions of the system that require additional detailed analysis.”

SIGHT provides high-fidelity “cinematic” rendering that adds visual effects—such as shadowing, ambient occlusion, and photorealistic illumination—that can reveal hidden structures within an atomistic dataset. SIGHT also includes a variety of tools, such as a sectional view that enables researchers to see inside the chunks of atoms forming the dataset. From there, the research team can further explore sections of interest and use SIGHT’s remote capabilities to present results on different screens, from mobile devices to OLCF’s Powerwall, EVEREST.

SIGHT also reduces the amount of setup work users must wade through up front. Out-of-the-box visualization tools come with many customization options and an underlying data structure that adapts them to run on a variety of systems, but that generality slows performance by taking up more memory.

Hernández has so far deployed SIGHT on Rhea, OLCF’s 512-node Linux cluster for data processing, and on DGX-1, an NVIDIA artificial intelligence supercomputer with eight GPUs. On both Rhea and DGX-1, SIGHT takes advantage of high-memory nodes and optimizes CPU and GPU rendering through the OSPRay (CPU) and NVIDIA OptiX (GPU) rendering libraries.

Keeping data concentrated on one or a few nodes is critical for exploratory visualization, which is possible on machines like Rhea and DGX-1.

“With traditional visualization tools, you might be able to perform exploratory visualization to some extent on a single node at low interactive rates, but as the amount of data increases, visualization tools must use more nodes and the interactive rates go down even further,” Hernández said.

To model fundamental energy processes in the cell, another user project team, led by chemist Abhishek Singharoy of Arizona State University, simulated 100 million atoms of a membrane that aids in the production of adenosine triphosphate, a molecule that stores and transports energy in the cell. Using Rhea, collaborator Noah Trebesch from Emad Tajkhorshid’s laboratory at the University of Illinois at Urbana-Champaign then used SIGHT to extend molecular visualization of biological systems to billions of atoms with a model of a piece of the endoplasmic reticulum called the Terasaki ramp.

Compared to a typical desktop computer with a commodity GPU that a researcher might use for running SIGHT at their home university or institution, using SIGHT for remote visualization on Rhea enabled particle counts and frame rates several orders of magnitude higher. With 1 terabyte of memory available in a single Rhea node, SIGHT used the OSPRay backend to reveal over 4 billion particles, with memory still available for larger counts.
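
A quick back-of-the-envelope check (with assumed per-particle sizes, not OLCF’s actual accounting) shows why a single large-memory node can hold a dataset of that scale:

```python
# Rough memory budget for in-node exploratory visualization (illustrative figures only).
node_memory_bytes = 1 * 1024**4            # ~1 TB Rhea node
particle_count = 4_000_000_000             # the >4 billion particles mentioned above
budget_per_particle = node_memory_bytes / particle_count
assumed_particle_bytes = 3 * 4 + 4         # xyz as float32 plus one float32 attribute

print(f"{budget_per_particle:.0f} bytes of node memory per particle")   # ~275 bytes
print(f"{assumed_particle_bytes} bytes of raw particle data leaves room "
      f"for acceleration structures and framebuffers")
```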

Furthermore, running SIGHT on the DGX-1 system with the NVIDIA OptiX backend resulted in frame rates up to 10 times faster than a typical desktop computer and almost 5 times faster than a Rhea node.

Anticipating the arrival of Summit later this year, Hernández is conducting tests on how remote interactive visualization workloads can be deployed on the OLCF’s next-generation supercomputer.

Source: Oak Ridge National Laboratory

The post OLCF-Developed Visualization Tool Offers Customization and Faster Rendering appeared first on HPCwire.
