Feed aggregator

Mines welcomes new Blaster mascot at Rock the Lock

Colorado School of Mines - Fri, 02/16/2018 - 19:30

To thunderous applause, Colorado School of Mines introduced the newest member of the Oredigger family at Rock the Lock on February 16. Mines students, staff, alumni, fans and friends warmly welcomed the new mascot, Blaster, to campus before the men’s basketball game.

Blaster the Burro has been a beloved part of the Mines family since the 1950s, a symbol of steadfast determination and hard work. Together, Blaster and Marvin the Miner complete the Oredigger duo.

The new mascot will complement the real burro, which isn’t allowed at indoor events and some campus activities. The Blaster mascot will participate in basketball and volleyball games, as well as other campus events that the burro cannot attend.

 

CONTACT
Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Moab/NODUS Cloud Bursting 1.1.0 Released by Adaptive Computing

HPC Wire - Fri, 02/16/2018 - 14:24

NAPLES, Fla., Feb. 16 — Adaptive Computing announces the release of Moab/NODUS Cloud Bursting 1.1.0. The new release extends Moab’s elastic computing capabilities with the addition of Cloud Bursting and related features.

With Moab/NODUS Cloud Bursting, when you run out of computing resources in your internal data center, you can “burst” the additional workload to an external cloud on demand. Moab offers feature-rich bursting capability, industry-leading cluster utilization, and robust policy and SLA enforcement; it is highly customizable for different cluster configurations and is already used in private cloud deployments.

The addition of NODUS has improved productivity and customer success in a variety of ways. The NODUS Platform provisions nodes in the cloud; it is easy to use, manage, and configure, and it integrates with on-premises resources. It offers full-stack provisioning, is automated, and is cost-effective for customers.

With Moab/NODUS Cloud Bursting, customers do not have to buy additional hardware to accommodate peak requirements, realizing substantial savings. When there is a sudden influx of jobs, spikes in demand can be accommodated by “bursting” to the cloud for the necessary resources. Moab/NODUS Cloud Bursting is also highly customizable and extensible to satisfy multiple use cases and scenarios.

Working in high-performance computing ecosystems can be very complex, and one of the key challenges is migrating HPC workloads into cloud environments. The addition of NODUS to Moab has simplified this process and is making HPC cloud strategies more accessible than ever before.

Moab/NODUS Cloud Bursting is bringing success and increased productivity to both commercial industry and research organizations by eliminating backlog and ensuring that SLAs are met automatically. This solution offers several advantages over competing products, such as the ability to burst to multiple cloud providers (AWS, Google, Azure, etc.) and bare-metal provisioning. Cloud resources are automatically deprovisioned from the cloud provider when no longer needed. Bursts can be done in blocks of nodes or based on the highest-priority job. Cloud nodes are totally dynamic inside of Moab, with no need to define them ahead of time. Usage limits for bursting can be set on a daily, weekly, quarterly, or yearly basis.

This simple, yet powerful solution is automated, with no admin/user interaction, and integrates seamlessly with existing management infrastructure.
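To make the bursting logic concrete, here is a minimal sketch of the kind of decision a bursting policy has to make, written in plain Python. It is purely illustrative and does not use Moab's or NODUS's actual interfaces; every function, parameter, and value below is hypothetical. It only mirrors the behavior described above: burst in blocks of nodes when queued demand exceeds local capacity, stay within a usage limit, and release cloud nodes once they are idle.

```python
# Illustrative sketch only; not Adaptive Computing's API.
# All names and parameters here are hypothetical.

def burst_decision(queued_jobs, idle_local_nodes, idle_cloud_nodes,
                   cloud_node_hours_used, weekly_node_hour_limit,
                   block_size=4):
    """Return (cloud_nodes_to_provision, cloud_nodes_to_release)."""
    demand = sum(job["nodes"] for job in queued_jobs)
    shortfall = max(0, demand - idle_local_nodes - idle_cloud_nodes)

    # Burst in whole blocks of nodes, but only while under the usage limit.
    to_provision = 0
    if shortfall > 0 and cloud_node_hours_used < weekly_node_hour_limit:
        blocks = -(-shortfall // block_size)  # ceiling division
        to_provision = blocks * block_size

    # Deprovision idle cloud nodes once no queued work remains for them.
    to_release = idle_cloud_nodes if demand == 0 else 0
    return to_provision, to_release


if __name__ == "__main__":
    queue = [{"name": "sim-a", "nodes": 6}, {"name": "sim-b", "nodes": 3}]
    print(burst_decision(queue, idle_local_nodes=4, idle_cloud_nodes=0,
                         cloud_node_hours_used=120, weekly_node_hour_limit=500))
```

A production policy would also weigh job priority (bursting for the highest-priority job first) and per-provider cost, as the release describes.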

About Adaptive Computing 

Adaptive Computing’s Workload and Resource Orchestration software platform, Moab, is a world leader in dynamically optimizing large-scale computing environments. Moab intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab’s unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery. Moab gives enterprises a competitive advantage, inspiring them to develop cancer-curing treatments, discover the origins of the universe, lower energy prices, manufacture better products, improve the economic landscape, and pursue game-changing endeavors.

Source: Adaptive Computing


TACC Panel Discusses Advanced Computing and Water Management

HPC Wire - Fri, 02/16/2018 - 09:09

Feb. 16, 2018 — Artificial intelligence – or AI – is helping people make better decisions about how to manage water resources. That’s because scientists are taking the best tools of advanced computing to help make science-based decisions about complex and pressing problems in how to manage Earth’s resources, including water.

Some of those tools include benchmarks that make data accessible on open repositories and ease testing of machine learning algorithms and other intelligent-systems methodologies. Other tools include new kinds of interfaces and visualizations that help decision makers see meaning in data.

A science panel on AI and water management meets in Austin, Texas on February 17th at the 2018 meeting of the American Association for the Advancement of Science.

Suzanne Pierce co-organized the panel and will moderate it. Pierce is a Research Scientist in Dynamic Decision Support Systems and part of the Data Management & Collections Group of the Texas Advanced Computing Center.

Dr. Pierce joins TACC podcast host Jorge Salazar to talk about the Intelligent Systems for Geosciences community, of which she is on the steering committee; her panel on AI and water management at the AAAS; the Planet Texas 2050 initiative; and the work TACC is doing to support efforts to bridge advanced computing with Earth science.

Suzanne Pierce: It’s a great time to look at artificial intelligence as a tool that helps humans make better decisions. At this point, the kinds of artificial intelligence that are being developed are really accelerating and improving the way we can understand our problems. It’s something that is exciting. It’s very promising and really needed. I think that it’s also important to know that there’s always a human in the loop. It’s not giving control to the AI. It’s about letting the AI help us be better decision makers. And it helps us move towards answering, discussing, and exploring the questions that are most important and most critical for our quality of life and our communities so that we can develop a future together that’s brighter.

AAAS Meeting session: Finding Water Management Solutions With Artificial Intelligence, Saturday, February 17, 2018 10:00 AM – 11:30 AM, Austin Convention Center, Room 17A.

Source: TACC


Mines launches new master’s degree program

Colorado School of Mines - Fri, 02/16/2018 - 09:06

Colorado School of Mines has launched a new master’s degree program that will apply a unique, multidisciplinary social science lens to natural resources and energy issues, preparing students for careers in energy and engineering companies, advocacy and government agencies.

The new Natural Resources and Energy Policy (NREP) graduate program “is unique in that it targets engineers that are working in industry for a social science program,” said Kathleen Hancock, program director and associate professor of humanities, arts and social sciences.

The program replaces the Master of International Political Economy of Resources (MIPER) program, which was founded in 2005 but phased out in 2015. NREP will cover both domestic and international topics in natural resources, energy and policy, and will work to link students to industry and potential employers. In the required political risk assessment course, students prepare a report for a real company.

“Students are required to find a real company and work with them to prepare a political risk assessment for a country the company is interested in,” Hancock said. “They invite company representatives and present a final report in class, and provide the company with that final report.”

NREP is also developing strategies to integrate departments from across campus. Two of the required courses are taught through the Petroleum and Mining Engineering departments.

“You cannot examine policy in isolation and without learning how the policy will be applied,” said Linda Battalora, teaching professor of petroleum engineering. “You need to have technical appreciation for the policy that will extend over the life cycle of an engineering project.”

NREP students will learn more about the major stakeholders for energy and extractive industries, the processes behind local, national and global policymaking, laws and regulations related to energy and extractive industries, and principles of social responsibility. Graduates from the program will also learn to apply quantitative analysis to assess energy and natural resource issues, identify political risk and mitigation options, and conduct independent and original research.

“Students will learn to communicate policies with project stakeholders in the government, academia, community and other regulators,” Battalora said.

“The program is intended for both people with a social sciences background interested in the energy and natural resources sectors as well as engineers who want to expand their perspective in the industry they are working in,” Hancock said. “There is a really great mix of professors involved in this program. We are a very diverse group of people and a lot of us have hands on backgrounds that we are all bringing into play.”

Applications for the Fall 2018 semester are open until July 2018. The master’s degree requires 30 credit hours; there is also a 12-credit-hour minor for graduate students pursuing degrees in other departments, as well as a 12-credit-hour certificate option.

 

CONTACT
Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Cray Reports 2017 Full Year and Fourth Quarter Financial Results

HPC Wire - Thu, 02/15/2018 - 15:08

SEATTLE, Feb. 15, 2018 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced financial results for the year and fourth quarter ended December 31, 2017.

All figures in this release are based on U.S. GAAP unless otherwise noted.  A reconciliation of GAAP to non-GAAP measures is included in the financial tables in this press release.

For 2017, Cray reported total revenue of $392.5 million, which compares with $629.8 million in 2016. Net loss for 2017 was $133.8 million, or $3.33 per diluted share, compared to net income of $10.6 million, or $0.26 per diluted share in 2016.  Non-GAAP net loss, which adjusts for selected unusual and non-cash items, was $40.5 million, or $1.01 per diluted share for 2017, compared to non-GAAP net income of $19.9 million, or $0.49 per diluted share in 2016.

Revenue for the fourth quarter of 2017 was $166.6 million, compared to $346.6 million in the fourth quarter of 2016.  Net loss for the fourth quarter of 2017 was $97.5 million, or $2.42 per diluted share, compared to net income of $51.8 million, or $1.27 per diluted share in the fourth quarter of 2016.  Non-GAAP net income was $9.2 million, or $0.22 per diluted share for the fourth quarter of 2017, compared to non-GAAP net income of $56.3 million, or $1.38 per diluted share for the same period in 2016.

The Company’s GAAP Net Loss for the fourth quarter and year ended December 31, 2017 was significantly impacted by both the enactment of the Tax Cuts and Jobs Act of 2017 and by its decision to record a valuation allowance against all of its U.S. deferred tax assets.  The combined GAAP impact totaled $103 million.  These items have been excluded for non-GAAP purposes.

For 2017, overall gross profit margin on a GAAP and non-GAAP basis was 33% and 34%, respectively, compared to 35% on a GAAP and non-GAAP basis for 2016.

Operating expenses for 2017 were $196.4 million, compared to $211.1 million in 2016.  Non-GAAP operating expenses for 2017 were $176.5 million, compared to $199.7 million in 2016.  GAAP operating expenses in 2017 included $8.6 million in restructuring charges associated with our recent workforce reduction.

As of December 31, 2017, cash, investments and restricted cash totaled $147 million.  Working capital at the end of the fourth quarter was $354 million, compared to $373 million at December 31, 2016.

“Despite difficult conditions in our core market we finished 2017 strong, highlighted by several large acceptances at multiple sites around the world, including completing the installation of what is now the largest supercomputing complex in India at the Ministry of Earth Sciences,” said Peter Ungaro, president and CEO of Cray.  “As we shift to 2018, we’re seeing signs of a rebound at the high-end of supercomputing as well as considerable growth opportunities in the coming years.  Supercomputing continues to expand in importance to both government and commercial customers, driving growth and competitiveness across many different disciplines and industries.  As the leader at the high-end of the market, we’re poised to play a key role in this growth and I’m excited about where we’re headed.”

Outlook
For 2018, while a wide range of results remains possible, Cray continues to expect revenue to grow in the range of 10-15% over 2017.  Revenue is expected to be about $50 million for the first quarter of 2018.  For 2018, GAAP and non-GAAP gross margins are expected to be in the low- to mid-30% range.  Non-GAAP operating expenses for 2018 are expected to be in the range of $190 million.  For 2018, non-GAAP adjustments are expected to total about $14 million, driven primarily by share-based compensation.  For the year, GAAP operating expenses are anticipated to be about $12 million higher than non-GAAP operating expenses, and GAAP gross profit is expected to be about $2 million lower than non-GAAP gross profit.

Based on this outlook, Cray’s effective GAAP and non-GAAP tax rates for 2018 are both expected to be in the low-single digit range, on a percentage basis.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

  • In January, Cray announced it had deployed two Cray XC40 supercomputers and two Cray ClusterStor storage systems as part of a $67 million contract with the Ministry of Earth Sciences in India.  The combined systems are the largest supercomputing resource in India and were accepted in late 2017.
  • In December, Cray announced that it has joined the Big Data Center at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC). The collaboration is representative of Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of Artificial Intelligence (AI), deep learning, and data-intensive computing.
  • In November, Cray announced that Samsung Electronics Co. Ltd. has purchased a Cray CS-Storm accelerated cluster supercomputer. The Samsung Strategy & Innovation Center procured the system for use in its research into AI and deep learning workloads, including systems for connected cars and autonomous technologies.
  • In November, Cray announced new high performance computing storage solutions including Cray View for ClusterStor – providing customers with dramatically improved job productivity; Cray ClusterStor L300N – a flash-based acceleration solution; and Cray DataWarp for the Cray XC50 supercomputer – exponentially reducing data access time.
  • In November, Cray announced the Company is creating an Arm-based supercomputer with the addition of Cavium ThunderX2 processors to the Cray XC50 supercomputer. Cray customers will have a complete Arm-based supercomputer that features a full software environment, including the Cray Linux Environment, the Cray Programming Environment, and Arm-optimized compilers, libraries, and tools for running today’s supercomputing workloads.
  • In November, Cray announced a comprehensive set of AI products and programs that will empower customers to learn, start, and scale their deep learning initiatives.  These include the new Cray Accel AI lab, new Cray Accel AI offerings, a new Cray Urika-XC analytics software suite, and an AI collaboration agreement with Intel.
  • In December, Cray announced that Catriona Fallon was appointed to Cray’s board of directors.  Fallon is currently the Senior Vice President, Networks Segment at Itron Inc. and was Chief Financial Officer of Silver Spring Networks before its acquisition by Itron in January 2018.

Conference Call Information
Cray will host a conference call today, Thursday, February 15, 2018 at 1:30 p.m. PT (4:30 p.m. ET) to discuss its year and fourth quarter ended December 31, 2017 financial results.  To access the call, please dial into the conference at least 10 minutes prior to the beginning of the call at (855) 894-4205. International callers should dial (765) 889-6838 and use the conference ID #56308204.  To listen to the audio webcast, go to the Investors section of the Cray website at www.cray.com/company/investors.

If you are unable to attend the live conference call, an audio webcast replay will be available in the Investors section of the Cray website for 180 days.  A telephonic replay of the call will also be available by dialing (855) 859-2056, international callers dial (404) 537-3406, and entering the conference ID #56308204.  The conference call replay will be available for 72 hours, beginning at 4:45 p.m. PT on Thursday, February 15, 2018.

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges.  Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability.  Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.


Dugan interviewed by The Verge about offshore freshwater aquifers, Cape Town water crisis

Colorado School of Mines - Thu, 02/15/2018 - 13:23

Brandon Dugan, associate professor and Baker Hughes Chair in Petrophysics and Borehole Geophysics at Colorado School of Mines, was recently interviewed by The Verge about the possibility of tapping into offshore freshwater aquifers to address the Cape Town water crisis. 

From the article:

According to [a 2013 Nature] study, there’s an estimated 120,000 cubic miles of subsea fresh water globally — roughly 1,000 to 1,200 times the amount of water used in the US annually.

That would be more than enough to provide backup water supplies to other cities facing water shortages beyond Cape Town, like São Paulo, Brazil and Mexico City. To date, however, none of it has been pumped up for public use.

But why?

“It’s complicated,” says Brandon Dugan, a geophysicist and associate professor with the Colorado School of Mines, who has been studying offshore freshwater aquifers since 2002. “We don’t exactly understand the plumbing of the system or the precise volume of fresh water that’s down there. So that makes it difficult to devise a pumping strategy to maximize use of the resource.”

Categories: Partner News

Mines, NREL researchers improve perovskite solar cells

Colorado School of Mines - Thu, 02/15/2018 - 12:57

Researchers from the Colorado School of Mines Chemistry Department and the National Renewable Energy Laboratory have developed a perovskite solar cell that retains its efficiency after 1,000 hours of continuous use, with their findings published in Nature Energy.

Associate Professor Alan Sellinger, graduate student Tracy Schloemer and former Mines postdoc Jonathan Tinkham are co-authors of the paper, titled “Tailored interfaces of unencapsulated perovskite solar cells for >1,000 hour operational stability.” The project was led by NREL’s Joseph Luther and Joseph Berry and also included Jeffrey Christians, Philip Schulz, Steven Harvey and Bertrand Tremolet de Villers.

Over the past decade, perovskites have rapidly evolved into a promising technology, now with the ability to convert about 23 percent of sunlight into electricity. But work is still needed to make the devices durable enough for long-term use.

According to the researchers, their new cell was able to generate power even after 1,000 straight hours of testing. While more testing is needed to prove the cells could survive for 20 years or more in the field—the typical lifetime of solar panels—the study represented an important benchmark for determining that perovskite solar cells are more stable than previously thought.

A new molecule developed by Sellinger, nicknamed EH44, was used to replace an organic molecule called spiro-OMeTAD that is typically used in perovskite solar cells. Solar cells that use spiro-OMeTAD experience an almost immediate 20 percent drop in efficiency, and their performance continues to decline steadily as the material becomes more unstable.

The researchers theorized that replacing the layer of spiro-OMeTAD could stop the initial drop in efficiency in the cell. The lithium ions within the spiro-OMeTAD film move uncontrollably throughout the device and absorb water. The free movement of the ions and the presence of water cause the cells to degrade. EH44 was incorporated as a replacement because it repels water and doesn’t contain lithium.

The use of EH44 as the top layer resolved the later, more gradual degradation but did not solve the initial fast drop in the cell’s efficiency. The researchers tried another approach, this time swapping the cell’s bottom layer of titanium dioxide (TiO2) for one of tin oxide (SnO2). With both EH44 and SnO2 in place, as well as stable replacements for the perovskite material and metal electrodes, the solar cell efficiency remained steady. The experiment found that the new SnO2 layer resolved the chemical makeup issues seen in the perovskite layer when deposited onto the original TiO2 film.

“This study reveals how to make the devices far more stable,” Luther said. “It shows us that each of the layers in the cell can play an important role in degradation, not just the active perovskite layer.”

Funding for the research came from the U.S. Department of Energy Solar Energy Technologies Office.

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Mines alum among first group of Knight-Hennessy scholars

Colorado School of Mines - Thu, 02/15/2018 - 11:53

A magna cum laude graduate of Colorado School of Mines is among the first cohort of the Knight-Hennessy Scholars program, which will fully fund her pursuit of a PhD in computational engineering at Stanford University.

Izzy Aguiar earned a bachelor’s degree in applied mathematics and statistics and a master’s degree in computational and applied mathematics from Mines in 2017. She is currently pursuing a second master’s degree in computer science from CU Boulder.

Aguiar is passionate about effective communication and increasing diversity and community in science, technology, engineering and mathematics. During her time at Mines, she co-founded the Teacher Education Alliance and served as vice president of the Society of Women in Mathematics. She received the Martin Luther King Jr. Recognition Award for her work with the campus club Equality Through Awareness, as well as the E-Days Engineering Award.

The Knight-Hennessy Scholars program selected 49 students for its inaugural group of scholars, who will pursue graduate degrees in 28 departments across all seven of Stanford’s schools.

In addition to supporting the full cost of attendance, the program will provide leadership training, mentorship and experiential learning. The program aims to prepare a new generation of leaders with the deep academic foundation and broad skill set needed to develop creative solutions for the world’s most complex challenges.

The program is named for John L. Hennessy, director of the program and president of Stanford from 2000 to 2016, and Nike co-founder Phil Knight, who earned an MBA from the university in 1962 and is contributing $400 million to back the program.

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Accelerate Your Research with a Turn-key HPC Solution for Bioinformatics Analysis

HPC Wire - Thu, 02/15/2018 - 11:00

Bioinformatics is an increasingly data-driven endeavor today. HPC requirements for rapid analysis are high and will continue to grow as more data is used and as the use of bioinformatics analysis expands from traditional areas like new drug discovery to clinical areas, biomarker identification, and translational medicine.


Embrace AI, NVIDIA’s Ian Buck Tells US Congressional Committee

HPC Wire - Thu, 02/15/2018 - 09:22

Feb. 15, 2018 — Artificial intelligence represents the biggest technological and economic shift in our lifetime, NVIDIA’s Ian Buck told a U.S. Congressional committee Wednesday.

In testimony before a hearing of the House of Representatives Subcommittee on Information Technology, Buck, vice president and general manager of NVIDIA’s Tesla business, said the federal government should increase research funding and adopt AI to boost the nation’s economy and improve government services.

“While other governments are aggressively raising their research funding, U.S. government research has been relatively flat,” Buck said. “We should boost research funding through agencies like the NSF, NIH and DARPA. We also need faster supercomputers, which are essential for AI research.”

Lawmakers Interested In AI

The hearing is the latest example of how artificial intelligence capabilities once thought of as science fiction are being put to work to solve a growing number of real world problems. It’s also a sign that the conversation around artificial intelligence is maturing, with lawmakers looking for ways to support and accelerate AI in a socially responsible and constructive way.

In addition to William Hurd (R-TX), chairman of the Information Technology Subcommittee, other representatives who asked questions included Robin Kelly (D-IL), Stephen Lynch (D-MA), and Gerald Connolly (D-VA).

The committee members expressed concerns about potential bias in AI systems, diversity in the AI workforce, defending citizens and government servers from cyber attack, and understanding how AI systems arrive at conclusions. They also asked what data the government should make available to researchers, and how federal agencies could more quickly adopt AI.

Buck said that while the public perception of AI is that it’s “rocket science,” federal agencies can begin with focused pilot projects like the Air Force’s Project Maven.

“Using AI for aerial reconnaissance holds great promise to alleviate airmen from having to stare at screens for 8 hours a day looking for a problem,” Buck said. “Image recognition is a well established technique, and can be leveraged quickly.”

Broad Support for AI

Buck appeared in a panel focused on the current state of AI and barriers to government adoption. Other panel members included Dr. Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, Dr. Charles Isbell, executive associate dean and professor at the Georgia Institute of Technology, and Dr. Amir Khosrowshahi, vice president and CTO of Intel’s AI Group.

The panelists encouraged the government to invest more in AI research and promote science, technology, engineering and math (STEM) in middle and high schools.

Georgia Tech’s Isbell said the scarcity of computer science teachers was a limiting factor. “We need more teachers to develop the next-generation workforce,” he said.

The Allen Institute’s Etzioni said schools teach PowerPoint because teachers don’t know how to program. That’s a missed opportunity, he said, because 8- to 10-year-olds are at the perfect age to learn computer programming.

Buck said every major federal agency — just like every major technology company — needs to invest in AI. He also recommended that the government open up more data to the research community.

Buck identified several major areas where AI could benefit the government, including cyber defense, healthcare and reducing waste and fraud.

When asked which government data should be made available first, Buck suggested healthcare data offered the greatest opportunity.

“The problem we face is too important and the opportunity is too great to not open access to healthcare data,” he said. “It has the greatest potential to actually save lives.”

Source: NVIDIA


Nominations Open for PRACE Ada Lovelace Award for HPC 2018

HPC Wire - Thu, 02/15/2018 - 08:59

Feb. 15, 2018 — PRACE initiated the “PRACE Ada Lovelace Award for HPC” at PRACEdays16, with Dr. Zoe Cournia, a computational chemist and Investigator – Assistant Professor at the Biomedical Research Foundation, Academy of Athens (BRFAA), Greece, as its first winner (http://www.prace-ri.eu/2016praceadalovelaceaward/). Dr. Frauke Gräter of the Heidelberg Institute for Theoretical Studies and University of Heidelberg was awarded the second annual PRACE Ada Lovelace Award for HPC at PRACEdays17 (http://www.prace-ri.eu/pd17-hpcwire-interview-prace-ada-lovelace-award-winner-dr-frauke-grater/).

PRACE is happy to receive your nomination for the Award. The nomination process is fully open via this Call for Nominations published on the PRACE website and via different social media channels. Nominations should be sent to submissions-pracedays@prace-ri.eu by Thursday 1 March 2018.

The winner of the Award will be invited to participate in the concluding Panel Session at PRACEdays18, and will receive a cash prize of € 1 000 as well as a certificate and an engraved crystal trophy.

Nomination

A nomination should include the following:

  • Name, address, phone number, and email address of nominator (person making the nomination). The nomination should be submitted by a recognised member of the HPC community.
  • Name, address, and email address of the nominee (person being nominated).
  • The nomination statement, addressing why the candidate should receive this award, should describe in detail how the candidate meets the first two selection criteria, within half a page (max 300 words) for each.
  • A copy of the candidate’s CV should be provided, along with certificates, a list of publications (with an indication of the h-index), honours, etc.

Selection Criteria

  • Outstanding impact on HPC research, computational science or service provision at a global level.
  • Role model for women beginning careers in HPC.
  • The winner of this award must be a young female scientist (PhD +10 years max, excluding parental leave) who is currently working in Europe or has been working in Europe during the past three years.

Selection Committee

The Selection Committee is composed of:

  1. Claudia Filippi, Member of the PRACE Scientific Steering Committee (SSC), University of Twente, Faculty of Science and Technology, Netherlands
  2. Erik Lindahl, Chair of the PRACE Scientific Steering Committee (SSC)
  3. Lee Margetts, Chair of the PRACE Scientific Steering Committee (SSC)
  4. Laura Grigori, Member of the PRACE Scientific Steering Committee (SSC), INRIA/University Pierre and Marie Curie, France, member of the Selection Committee since 2017
  5. Suzanne Talon, Women in HPC member, CEO of Calcul Québec, Canada, member of the Selection Committee since 2017.

The committee will (partially) change every 2 years.

About PRACE

The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.

Source: PRACE


NCSA Announces Spring 2018 Call for Illinois Allocations on Blue Waters

HPC Wire - Thu, 02/15/2018 - 08:53

Feb. 15, 2018 — Blue Waters is one of the world’s most powerful computing systems, located at the University of Illinois at Urbana-Champaign and housed at the National Petascale Computing Facility. Each year that Blue Waters is in operation, about 6 to 8 million node-hours are available for projects originating from the University of Illinois at Urbana-Champaign via Illinois Allocations. As each node has many powerful cores, Illinois possesses significantly more computing power than most universities have available for staff use, and this can provide University research faculty and staff with a unique opportunity to perform groundbreaking, computationally demanding research.

Proposals for the Spring 2018 round of allocations are due March 15, 2018.

Faculty or staff for whom Illinois is their home institution by primary appointment affiliation are eligible to submit an Illinois allocation proposal as Principal Investigator. This includes postdoctoral fellows or postdoctoral research associates. Registered graduate or undergraduate students are not eligible to apply as Principal Investigators due to administrative requirements regarding appointment status but are encouraged to apply if their faculty or staff advisor will agree to be Principal Investigator on the proposal.

Visiting faculty or external adjunct faculty for whom Illinois is not their primary home institution are eligible to apply as Principal Investigators if, for the period covered by the proposal request: i) Illinois will be their primary (majority) place of residence; and ii) they will hold appointments at Illinois during this period. All proposals can include co-PIs and collaborators from other institutions.

For questions about submitting a request contact the Blue Waters Project Office at help+bw@ncsa.illinois.edu or go to https://bluewaters.ncsa.illinois.edu/illinois-allocations.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About Blue Waters

Blue Waters is one of the most powerful supercomputers in the world. Located at the University of Illinois, it can complete more than 1 quadrillion calculations per second on a sustained basis and more than 13 times that at peak speed. The peak speed is almost 3 million times faster than the average laptop. Blue Waters is supported by the National Science Foundation and the University of Illinois; the National Center for Supercomputing Applications (NCSA) manages the Blue Waters project and provides expertise to help scientists and engineers take full advantage of the system for their research.

The Blue Waters sustained-petascale computing project is supported by the National Science Foundation (awards OCI-0725070, ACI-0725070 and ACI-1238993) and the state of Illinois.

Source: NCSA


Technical Program Chair David Keyes Announces Changes for SC18

HPC Wire - Thu, 02/15/2018 - 08:44

Feb. 15, 2018 — SC18 Technical Program Chair David Keyes today announced major changes to the program and planning for SC18. Keyes outlined the changes in an article, which is included in full below.

Important News from SC18 Technical Program Chair David Keyes

David Keyes

How do we make the best even better? It’s an important question to ask as a team of more than 400 volunteers undertakes to create a world-class SC18 technical program. It is a daunting task to live up to the 30-year history of distinction SC has carved for itself.

No other HPC conference delivers such a broad diversity of topics and depth of insight, and it’s a thrill to be helming such an international effort.

As we seek to achieve even more with our technical program, you’ll see some exciting changes built into the planning for SC18 in Dallas this November.

With the help and support of our accomplished team of diverse and talented chairs who hail from industry, nonprofit organizations, laboratories and academia, we have determined to:

–Build on SC’s program quality reputation by strengthening the already rigorous double-blind review process for technical paper submissions, resulting in a new two-stage submission process and some revised submission dates worth noting;

–Add the option of reproducibility supplements to Workshops and Posters;

–Include new proceedings venues for Workshops and the SciViz Showcase;

–Call on technical submissions to emphasize the theme of “convergence” among our four charter areas of exploration—High Performance Computing, Networking, Storage and Analysis;

–Consolidate all five poster categories into a single exhibit space;

–Offer career-themed Fireside Chats;

–Move to a code-enabled online content distribution onsite for fully up-to-date tutorial materials and away from the USB stick distribution method;

–Adapt physical meeting spaces to better accommodate registrants for technical programs.

We welcome the opportunity to hear your comments and questions. If you have insights to share, please send them to techprogram@info.supercomputing.org.

Source: David Keyes, SC18


Fluid HPC: How Extreme-Scale Computing Should Respond to Meltdown and Spectre

HPC Wire - Thu, 02/15/2018 - 08:05

The Meltdown and Spectre vulnerabilities are proving difficult to fix, and initial experiments suggest security patches will cause significant performance penalties to HPC applications. Even as these patches are rolled out to current HPC platforms, it might be helpful to explore how future HPC systems could be better insulated from CPU or operating system security flaws that could cause massive disruptions. Surprisingly, most of the core concepts to build supercomputers that are resistant to a wide range of threats have already been invented and deployed in HPC systems over the past 20 years. Combining these technologies, concepts, and approaches not only would improve cybersecurity but also would have broader benefits for improving HPC performance, developing scientific software, adopting advanced hardware such as neuromorphic chips, and building easy-to-deploy data and analysis services. This new form of “Fluid HPC” would do more than solve current vulnerabilities. As an enabling technology, Fluid HPC would be transformative, dramatically improving extreme-scale code development in the same way that virtual machine and container technologies made cloud computing possible and built a new industry.

In today’s extreme-scale platforms, compute nodes are essentially embedded computing devices that are given to a specific user during a job and then cleaned up and provided to the next user and job. This “space-sharing” model, where the supercomputer is divided up and shared by doling out whole nodes to users, has been common for decades. Several non-HPC research projects over the years have explored providing whole nodes, as raw hardware, to applications. In fact, the cloud computing industry uses software stacks to support this “bare-metal provisioning” model, and Ethernet switch vendors have also embraced the functionality required to support this model. Several classic supercomputers, such as the Cray T3D and the IBM Blue Gene/P, provided nodes to users in a lightweight and fluid manner. By carefully separating the management of compute node hardware from the software executed on those nodes, an out-of-band control system can provide many benefits, from improved cybersecurity to shorter Exascale Computing Project (ECP) software development cycles.

Updating HPC architectures and system software to provide Fluid HPC must be done carefully. In some places, changes to the core management infrastructure are needed. However, many of the component technologies were invented more than a decade ago or simply need updating. Three key architectural modifications are required.

  1. HPC storage services and parallel I/O systems must be updated to use modern, token-based authentication. For many years, web-based services have used standardized technologies like OAuth to provide safe access to sensitive data, such as medical and financial records. Such technologies are at the core of many single-sign-on services that we use for official business processes. These token-based methods allow clients to connect to storage services and read and write data by presenting the appropriate token, rather than, for example, relying on client-side credentials and access from restricted network ports. Some data services, such as Globus, MongoDB, and Spark, have already shifted to allow token-based authentication. As a side effect, this update to HPC infrastructure would permit DOE research teams to fluidly and easily configure new storage and data services, both locally or remotely, without needing special administration privileges. In the same way that a website such as OpenTable.com can accept Facebook or Google user credentials, an ECP data team could create a new service that easily accepted NERSC or ALCF credentials. Moving to modern token-based authentication will improve cybersecurity, too; compromised compute nodes would not be able to read another user’s data. Rather, they would have access only to the areas for which an authentication token had been provided by the out-of-band system management layer.
  2. HPC interconnects must be updated to integrate technology from software-defined networking (SDN). OpenFlow, an SDN standard, is already implemented in many commercial Ethernet switches. SDN allows massive data cloud computing providers such as Google, Facebook, Amazon, and Microsoft to manage and separate traffic within a data center, preventing proprietary data from flowing past nodes that could be maliciously snooping. A compromised node must be prevented from snooping other traffic or spoofing other nodes. Essentially, SDN decouples the control plane and data movement from the physical and logical configuration. Updating the HPC interconnect technology to use SDN technologies would provide improved cybersecurity and also isolate errant HPC programs from interfering or conflicting with other jobs. With SDN technology, a confused MPI process would not be able to send data to another user’s node, because the software-defined network for the user, configured by the external system management layer, would not route the traffic to unconfirmed destinations.
  3. Compute nodes must be efficiently reinitialized, clearing local state between user jobs. Many HPC platforms were designed to support rebooting and recycling compute nodes between jobs. Decades ago, netbooting Beowulf clusters was common. By quickly reinitializing a node and carefully clearing previous memory state, data from one job cannot be leaked to another. Without this technique, a security vulnerability that escalates privilege permits a user to look at data left on the node from the previous job and leave behind malware to watch future jobs. Restarting nodes before each job improves system reliability, too. While rebooting sounds simple, however, guaranteeing that RAM and even NVRAM is clean between reboots might require advanced techniques. Fortunately, several CPU companies have been adding memory encryption engines, and NVRAM producers have added similar features; purging the ephemeral encryption key is equivalent to clearing memory. This feature is used to instantly wipe modern smartphones, such as Apple’s iPhone. Wiping state between users can provide significant improvements to security and productivity.

These three foundational architectural improvements to create a Fluid HPC system must be connected into an improved external system management layer. That layer would “wire up” the software-defined network for the user’s job, hand out storage system authentication tokens, and push a customized operating system or software stack onto the bare-metal provisioned hardware. Modern cloud-based data centers and their software communities have engineered a wide range of technologies to fluidly manage and deploy platforms and applications. The concepts and technologies in projects such as OpenStack, Kubernetes, Mesos, and Docker Swarm can be leveraged for extreme-scale computing without hindering performance. In fact, experimental testbeds such as the Chameleon cluster at the University of Chicago and the Texas Advanced Computing Center have already put some of these concepts into practice and would be an ideal location to test and develop a prototype of Fluid HPC.
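As a small illustration of the token-based storage access this management layer would broker, the sketch below shows a compute job reading data by presenting a bearer token injected at job launch, rather than relying on host credentials or restricted ports. It is a minimal sketch under stated assumptions: the endpoint URL, environment variable, and object path are hypothetical, and real services such as Globus define their own token issuance and scopes; only the general pattern is shown.

```python
# Minimal sketch of token-based access to a storage service.
# The endpoint, environment variable, and object path are hypothetical.
import os
import requests

STORAGE_URL = "https://storage.example.org/api/v1"  # hypothetical endpoint


def read_object(path, token):
    """Fetch an object by presenting a job-scoped bearer token."""
    resp = requests.get(
        f"{STORAGE_URL}/objects/{path}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # a node without a valid token is simply refused
    return resp.content


if __name__ == "__main__":
    # The out-of-band management layer would hand the job a scoped token,
    # for example through an environment variable set at launch.
    token = os.environ["JOB_STORAGE_TOKEN"]
    data = read_object("datasets/run42/input.h5", token)
    print(f"fetched {len(data)} bytes")
```

The design point is that the credential lives with the job, not the node: when the token expires or the job ends, a compromised node has nothing it can replay against another user's data.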

These architectural changes make HPC platforms programmable again. The software-defined everything movement is fundamentally about programmable infrastructure. Retooling our systems to enable Fluid HPC with what is essentially a collection of previously discovered concepts, rebuilt with today’s technology, will make our supercomputers programmable in new ways and have a dramatic impact on HPC software development.

  1. Meltdown and Spectre would cause no performance degradation on Fluid HPC systems. In Fluid HPC, compute nodes are managed as embedded systems. Nodes are given completely to users, in exactly the way many hero programmers have been begging for years. The security perimeter around an embedded system leverages different cybersecurity techniques. The CPU flaws that gave us Meltdown and Spectre can be isolated by using the surrounding control system, rather than adding performance-squandering patches to the node. Overall cybersecurity will improve by discarding the weak protections in compute nodes and building security into the infrastructure instead.
  2. Extreme-scale platforms would immediately become the world’s largest software testbeds. Currently, testing new memory management techniques or advanced data and analysis services is nearly impossible on today’s large DOE platforms. Without the advanced controls and out-of-band management provided by Fluid HPC, system operators have no practical method to manage experimental software on production systems. Furthermore, without token-based authentication to storage systems and careful network management to prevent accidental or mischievous malformed network data, new low-level components can cause system instability. By addressing these issues with Fluid HPC, the world’s largest platforms could be immediately used to test and develop novel computer science research and completely new software stacks on a per job basis.
  3. Extreme-scale software development would be easier and faster. For the same reason that the broader software development world is clamoring to use container technologies such as Docker to make writing software easier and more deployable, giving HPC code developers Fluid HPC systems would be a disruptive improvement to software development. Coders could quickly test-deploy any change to the software stack on a per-job basis (a minimal sketch of such a per-job workflow follows this list). They could even use machine learning to automatically explore and tune software stacks and parameters. They could ship those software stack modifications across the ocean in an instant, to be tried by collaborators running code on other Fluid HPC systems. Easy performance regression testing would be possible. The ECP community could package software simply. We can even imagine running Amazon-style lambda functions on HPC infrastructure. In short, the HPC community would develop software just as the rest of the world does.
  4. The HPC community could easily develop and deploy new experimental data and analysis services. Deploying an experimental data service or file system is extremely difficult. Currently, there are no common, practical methods for developers to submit a job to a set of file servers with attached storage in order to create a new parallel I/O system, and then give permission to compute jobs to connect and use the service. Likewise, HPC operators cannot easily test deploy new versions of storage services against particular user applications. With the Fluid HPC model, however, a user could instantly create a memcached-based storage service, MongoDB, or Spark cluster on a few thousand compute nodes. Fluid HPC would make the infrastructure programmable; the impediments users now face deploying big data applications on big iron would be eliminated.
  5. Fluid HPC would enable novel, improved HPC architectures. With intelligent and programmable system management layers, modern authentication, software-defined networks, and dynamic software stacks provided by the basic platform, new types of accelerators—from neuromorphic to FPGAs—could be quickly added to Fluid HPC platforms. These new devices could be integrated as a set of disaggregated network-attached resources or attached to CPUs without needing to support multiuser and kernel protections. For example, neuromorphic accelerators could be quickly added without the need to support memory protection or multiuser interfaces. Furthermore, the low-level software stack could jettison the unneeded protection layers, permission checks, and security policies in the node operating system.
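To picture the per-job software-stack workflow mentioned in item 3, here is a rough sketch. Only the docker CLI calls are standard; the fluidhpc-submit command, registry name, and job script are invented placeholders for a scheduler interface that does not exist today.

```python
# Hypothetical per-job workflow: build a container image that captures this
# job's exact software stack, then hand the image reference to the scheduler.
# "fluidhpc-submit" is an invented placeholder, not a real command.
import subprocess


def build_job_image(tag, context_dir="."):
    """Build a container image holding this job's software stack."""
    subprocess.run(["docker", "build", "-t", tag, context_dir], check=True)


def submit_with_image(tag, nodes, script):
    """Submit a job whose nodes boot the given image as their software stack."""
    subprocess.run(
        ["fluidhpc-submit", "--image", tag, "--nodes", str(nodes), script],
        check=True,
    )


if __name__ == "__main__":
    image = "registry.example.org/myteam/solver:experiment-42"  # hypothetical
    build_job_image(image)
    submit_with_image(image, nodes=128, script="run_solver.sh")
```

The specific commands matter less than the idea that the software stack travels with the job, so a change can be tried, shared with collaborators, or regression-tested without touching the machine's system image.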

It is time for the HPC community to redesign how we manage and deploy software and operate extreme-scale platforms. Computer science concepts are often rediscovered or modernized years after being initially prototyped. Many classic concepts can be recombined and improved with technologies already deployed in the world’s largest data centers to enable Fluid HPC. In exchange, users would receive improved flexibility and faster software development—a supercomputer that not only runs programs but is programmable. Users would have choices and could adapt their code to any software stack or big data service that meets their needs. System operators would be able to improve security, isolation, and the rollout of new software components. Fluid HPC would enable the convergence of HPC and big data infrastructures and radically improve the environments for HPC software development. Furthermore, if Moore’s law is indeed slowing and a technology to replace CMOS is not ready, the extreme flexibility of Fluid HPC would speed the integration of novel architectures while also improving cybersecurity.

It’s hard to thank Meltdown and Spectre for kicking the HPC community into action, but we should nevertheless take the opportunity to aggressively pursue Fluid HPC and reshape our software tools and management strategies.

*Acknowledgments: I thank Micah Beck, Andrew Chien, Ian Foster, Bill Gropp, Kamil Iskra, Kate Keahey, Arthur Barney Maccabe, Marc Snir, Swann Perarnau, Dan Reed, and Rob Ross for providing feedback and brainstorming on this topic.

About the Author

Pete Beckman

Pete Beckman is the co-director of the Northwestern University / Argonne Institute for Science and Engineering and designs, builds, and deploys software and hardware for advanced computing systems. When Pete was the director of the Argonne Leadership Computing Facility he led the team that deployed the world’s largest supercomputer for open science research. He has also designed and built massive distributed computing systems. As chief architect for the TeraGrid, Pete oversaw the team that built the world’s most powerful Grid computing system for linking production HPC centers for the National Science Foundation. He coordinates the collaborative research activities in extreme-scale computing between the US Department of Energy (DOE) and Japan’s ministry of education, science, and technology and leads the operating system and run-time software research project for Argo, a DOE Exascale Computing Project. As founder and leader of the Waggle project for smart sensors and edge computing, he is designing the hardware platform and software architecture used by the Chicago Array of Things project to deploy hundreds of sensors in cities, including Chicago, Portland, Seattle, Syracuse, and Detroit. Dr. Beckman has a Ph.D. in computer science from Indiana University (1993) and a BA in Computer Science, Physics, and Math from Anderson University (1985).


Spring Career Day welcomes a record 248 companies

Colorado School of Mines - Wed, 02/14/2018 - 16:09

Colorado School of Mines hosted the largest Spring Career Day in its history Feb. 13, with a record 248 companies present.

Hundreds of students entered the Student Recreation Center for the chance to speak to recruiters about opportunities. For some students, that meant waiting in the long lines that formed at larger companies. Among the larger employers at Career Day were Kiewit, Martin/Martin, Orbital ATK, Denver Water, BP America and Halliburton. 

Many recruiters are Career Day veterans. They come back each year ready to hire Mines students because they are "very intelligent, well-formed students," said Jason Berumen, senior global talent manager for Webroot, a Colorado-based cybersecurity and threat intelligence services firm.

Additionally, many companies use Career Day as a chance to market themselves, even if they do not have jobs currently available. 

"We want people to recognize us as a competitive employer in Colorado," said Allison Martindale, HR analyst for the city of Thornton. "I have noticed that a lot of students can support our infrastructure department once we have jobs available."

Many recruiters strongly suggest that students thoroughly research the companies they are interested in talking to. Being prepared demonstrates interest in the company, and for many recruiters, this interest translates into passion. 

"We are looking for a degree, first and foremost, and then students who have good interpersonal skills and enthusiasm," said Cameron Matthews, associate director for Turner & Townsend, a multinational project management company. "We want our employees to be passionate about what they do."

Additionally, recruiters seek students with a good attitude and enthusiasm for the company. 

Excellent communication skills are a necessity for many companies. In fact, for Sundt recruiters Jim Pullen and Mike Morales, it was the most important thing for students to have mastered. In addition, "we are looking for a good GPA and students willing to travel," Morales said. 

The general contracting firm was also looking for students who were confident in their technical abilities, which, for them, was demonstrated through consistent eye contact and a good handshake.

And while students are often told to apply online for open positions, the trip to Career Day is still worth it, they said. 

"People just tell you to apply online, but showing up here is helpful because maybe they will remember your name in the hundreds of applicants," said Olivia Eppler, a junior studying mechanical engineering. 

Many students also view Career Day as a way to get more information about internships.

"Coming to Career Day mostly just gives me information on what I might want to do," said Tristan Collette, a senior in mechanical engineering. 

Emma Miller, a junior majoring in environmental engineering, said going to Career Day before she needed an internship helped prepare her for when she was actively looking for one.

Oftentimes, the information listed online can be rather vague. Talking to recruiters allows students to ask what the company culture is like and get further details on the jobs they have available. 

While some students experience nerves, Collette said remembering that recruiters "know what it's like to go here and know how hard it is" makes talking with them easier. "They know we have learned to problem-solve and create solutions."

"Have some notes written down to refer to in case you get nervous and forget what you are saying," Eppler said. "Remember, they are just people."

Networking events held by various clubs before Career Day can also help alleviate some of the nerves.

"Martin/Martin was at an American Society of Civil Engineering networking event yesterday, so I already knew them," said Ken Sullivan, a senior in civil engineering. "Today, I just got to drop off my resume and say hello."

The best advice, students said, is to stay true to yourself. "They want to hire who they interview, not someone who is trying to fit a mold," Collette said.

CONTACT
Katharyn Peterman, Student News Reporter | kpeterma@mymail.mines.edu
Emilie Rusch, Public Information Specialist, Colorado School of Mines | 303-273-3361 | erusch@mines.edu

Categories: Partner News

DOE Gets New Office of Cybersecurity, Energy Security, and Emergency Response

HPC Wire - Wed, 02/14/2018 - 14:35

WASHINGTON, D.C., Feb. 14 – Today, U.S. Secretary of Energy Rick Perry is establishing a new Office of Cybersecurity, Energy Security, and Emergency Response (CESER) at the U.S. Department of Energy (DOE). $96 million in funding for the office was included in President Trump’s FY19 budget request to bolster DOE’s efforts in cybersecurity and energy security.

The CESER office will be led by an Assistant Secretary who will focus on energy infrastructure security, support the expanded national security responsibilities assigned to the Department, and report to the Under Secretary of Energy.

“DOE plays a vital role in protecting our nation’s energy infrastructure from cyber threats, physical attack and natural disaster, and as Secretary, I have no higher priority,” said Secretary Perry. “This new office best positions the Department to address the emerging threats of tomorrow while protecting the reliable flow of energy to Americans today.”

The creation of the CESER office will elevate the Department’s focus on energy infrastructure protection and will enable more coordinated preparedness and response to natural and man-made threats.

Source: US Department of Energy

The post DOE Gets New Office of Cybersecurity, Energy Security, and Emergency Response appeared first on HPCwire.

Landis to speak on diversity in STEM at AAAS Annual Meeting

Colorado School of Mines - Wed, 02/14/2018 - 14:09

Amy Landis, presidential faculty fellow for access, attainment and diversity at Colorado School of Mines, has been invited to speak about diversity in STEM at the 2018 American Association for the Advancement of Science Annual Meeting Feb. 15-19 in Austin, Texas. 

Landis, who leads the President's Council on Diversity, Inclusion and Access at Mines, will present during two sessions at the conference, the largest general scientific meeting in the world.

On Feb. 17, she will discuss her and her colleagues' recent work on impostor syndrome and science communication during a career development workshop, Cultivating Your Voice and Banishing Your Inner Impostor: Workshop for Women in STEM. Conducting the workshop with Landis are Christine O'Connell, assistant professor of science communication at the Alan Alda Center for Communicating Science, and Pragnya Eranki, research faculty in civil and environmental engineering at Mines.

The following day, Landis will be on a panel discussing communication challenges and opportunities for women in STEM with 500 Women Scientists' Melissa Creary and Liz Neeley of The Story Collider.  

A professor of civil and environmental engineering at Mines, Landis has spent her career promoting and supporting women and underrepresented minorities in the STEM fields. Before joining Mines in 2017, she was a professor at Clemson University and director of Clemson's Institute for Sustainability, where she established numerous successful programs including an undergraduate research program for underrepresented students, a graduate professional development program and a workshop on communicating engineering for women. At the University of Pittsburgh, she helped create negotiation workshops, networking events, work-life balance discussion groups and an impostor syndrome workshop.

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Intel Touts Silicon Spin Qubits for Quantum Computing

HPC Wire - Wed, 02/14/2018 - 13:54

Debate around what makes a good qubit and how best to manufacture qubits is a sprawling topic, with many insistent voices favoring one approach or another. Referencing a paper published today in Nature, Intel has offered a quick take on the promise of silicon spin qubits, one of two approaches to quantum computing that Intel is exploring.

Silicon spin qubits, which leverage the spin of a single electron on a silicon device to perform quantum calculations, offer several advantages over their more familiar superconducting counterparts, Intel contends. They are physically smaller, are expected to have longer coherence times, should scale well, and can likely be fabricated using familiar manufacturing processes.

“Intel has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers sourced specifically for the production of spin-qubit test chips. Fabricated in the same facility as Intel’s advanced transistor technologies, Intel is now testing the initial wafers. Within a couple of months, Intel expects to be producing many wafers per week, each with thousands of small qubit arrays,” according to the Intel news brief posted online today.

(Image: an isotopically pure 300 mm wafer from Intel’s spin qubit fabrication flow. Credit: Walden Kirsch/Intel)

The topic isn’t exactly new; the use of quantum dots for qubits has long been studied. The new Nature paper, “A programmable two-qubit quantum processor in silicon,” demonstrates how to overcome some of the crosstalk obstacles that arise when using quantum dots.

Abstract excerpt: “[W]e overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.”
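
For readers unfamiliar with the algorithms named in the abstract, the following is a minimal NumPy sketch, not Intel’s or the authors’ code, of a two-qubit Grover search simulated on an ideal, noise-free state vector. The marked item |11> is chosen arbitrarily for illustration, and the simulation ignores the device imperfections that the fidelity and concurrence figures above quantify.

    import numpy as np

    # Hadamard gate and its two-qubit tensor product
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)

    # Uniform superposition |s> = H (x) H |00>
    s = H2 @ np.array([1.0, 0.0, 0.0, 0.0])

    # Oracle: flip the phase of the (hypothetically) marked state |11>
    oracle = np.diag([1.0, 1.0, 1.0, -1.0])

    # Diffusion operator: reflection about |s>
    diffusion = 2.0 * np.outer(s, s) - np.eye(4)

    # With one marked item out of four, a single Grover iteration suffices
    state = diffusion @ oracle @ s
    probs = np.abs(state) ** 2
    for i, p in enumerate(probs):
        print(f"P(|{i:02b}>) = {p:.3f}")   # probability concentrates on |11>

In the noise-free simulation, the probability of measuring |11> comes out as essentially one; on real hardware it is degraded by exactly the kinds of errors that the reported fidelities and concurrences quantify.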

Intel emphasizes that silicon spin qubits can operate at higher temperatures than superconducting qubits (1 kelvin as opposed to 20 millikelvin). “This could drastically reduce the complexity of the system required to operate the chips by allowing the integration of control electronics much closer to the processor. Intel and academic research partner QuTech are exploring higher temperature operation of spin qubits, with interesting results up to 1 K, roughly 50 times warmer than the operating temperature of superconducting qubits. The team is planning to share the results at the American Physical Society (APS) meeting in March.”

A simple but neat video explaining programming on a silicon chip is included with the full Intel release linked below.

Link to Nature paper: https://www.nature.com/articles/nature25766

Link to full Intel release: https://newsroom.intel.com/news/intel-sees-promise-silicon-spin-qubits-quantum-computing/

The post Intel Touts Silicon Spin Qubits for Quantum Computing appeared first on HPCwire.

PNNL, OHSU Create Joint Research Co-Laboratory to Advance Precision Medicine

HPC Wire - Wed, 02/14/2018 - 13:30

PORTLAND, Ore., Feb. 14, 2018 — Pacific Northwest National Laboratory and OHSU today announced a joint collaboration to improve patient care by focusing research on highly complex sets of biomedical data, and the tools to interpret them.

The OHSU-PNNL Precision Medicine Innovation Co-Laboratory, called PMedIC, will provide a comprehensive ecosystem for scientists to utilize integrated ‘omics, data science and imaging technologies in their research in order to advance precision medicine — an approach to disease treatment that takes into account individual variability in genes, environment and lifestyle for each person.

“This effort brings together the unique and complementary strengths of Oregon’s only academic medical center, which has a reputation for innovative clinical trial designs, and a national laboratory with an international reputation for basic science and technology development in support of biological applications,” said OHSU President Joe Robertson. “Together, OHSU and PNNL will be able to solve complex problems in biomedical research that neither institution could solve alone.”

“The leading biomedical research and clinical work performed at OHSU pairs well with PNNL’s world-class expertise in data science and mass spectrometry analyses of proteins and genes,” said PNNL Director Steven Ashby. “By combining our complementary capabilities, we will make fundamental discoveries and accelerate our ability to tailor healthcare to individual patients.”

The co-laboratory will strengthen and expand the scope of existing interactions between OHSU and PNNL that already include cancer, regulation of cardiovascular function, immunology and infection, and brain function, and add new collaborations in areas from metabolism to exposure science. The collaboration brings together the two institutions’ strengths in data science, imaging and integrated ‘omics, which explores how genes, proteins and various metabolic products interact. The term arises from research that explores the function of key biological components within the context of the entire cell — genomics for genes, proteomics for proteins, and so on.

“PNNL has a reputation for excellence in the technical skill sets required for precision medicine, specifically advanced ‘omic’ platforms that measure the body’s key molecules — genes, proteins and metabolites — and the advanced data analysis methods to interpret these measurements,” said Karin Rodland, director of biomedical partnerships at PNNL. “Pairing these capabilities with the outstanding biomedical research environment and innovative clinical trials at OHSU will advance the field of precision medicine and lead to improved patient outcomes.”

In the long term, OHSU and PNNL aim to foster a generation of biomedical researchers fluent in all the aspects of the science underlying precision medicine, from clinical trials to molecular and computational biology to bioengineering and technology development — a new generation of scientists skilled in translating basic science discoveries to clinical care.

“Just as we have many neuroscientists focused on brain science at OHSU, we have many researchers taking different approaches on the path toward the goal of precision medicine,” said Mary Heinricher, associate dean of basic research, OHSU School of Medicine. “I believe one of the greatest opportunities of this collaboration is for OHSU graduate students and post-docs to have exposure to new technologies and collaborations with PNNL. This will help train the next generation of scientists.”

OHSU and PNNL first collaborated in 2015, when they formed the OHSU-PNNL Northwest Co-Laboratory for Integrated ‘Omics and were designated a national Metabolomics Center for the Undiagnosed Diseases Network, an NIH-funded national network designed to identify the underlying mechanisms of very rare diseases. In 2017, the two organizations partnered on a National Cancer Institute-sponsored effort to become a Proteogenomic Translational Research Center focused on a complex form of leukemia.

Source: PNNL

The post PNNL, OHSU Create Joint Research Co-Laboratory to Advance Precision Medicine appeared first on HPCwire.

NCSA Researchers Create Reliable Tool for Long-Term Crop Prediction in the U.S. Corn Belt

HPC Wire - Wed, 02/14/2018 - 13:21

Feb. 14, 2018 — With the help of the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Blue Waters Professor Kaiyu Guan and NCSA postdoctoral fellow Bin Peng implemented and evaluated a new maize growth model. The CLM-APSIM model combines superior features of both the Community Land Model (CLM) and the Agricultural Production Systems sIMulator (APSIM), creating one of the most reliable tools for long-term crop prediction in the U.S. Corn Belt. Peng and Guan recently published their paper, “Improving maize growth processes in the community land model: Implementation and evaluation,” in the journal Agricultural and Forest Meteorology. This work is an outstanding example of the convergence of simulation and data science that is a driving factor in the National Strategic Computing Initiative announced by the White House in 2015.

(Figure caption: Conceptual diagram of phenological stages in the original CLM, APSIM and CLM-APSIM models, with the unique features of the CLM-APSIM crop model highlighted. Stage durations in the diagram are not proportional to real stage lengths and are shown for illustrative purposes only. Image courtesy of NCSA.)

“One class of crop models is agronomy-based and the other is embedded in climate models or earth system models. They are developed for different purposes and applied at different scales,” says Guan. “Because each has its own strengths and weaknesses, our idea is to combine the strengths of both types of models to make a new crop model with improved prediction performance.” Additionally, what makes the new CLM-APSIM model unique is the more detailed phenology stages, an explicit implementation of the impacts of various abiotic environmental stresses (including nitrogen, water, temperature and heat stresses) on maize phenology and carbon allocation, as well as an explicit simulation of grain number.

With support from the NCSA Blue Waters project (funded by the National Science Foundation and Illinois), NASA and the USDA National Institute of Food and Agriculture (NIFA) Foundational Program, Peng and Guan created the prototype for CLM-APSIM. “We built this new tool to bridge these two types of crop models combining their strengths and eliminating the weaknesses.”

The team is currently conducting a high-resolution regional simulation over the contiguous United States to simulate corn yield at each planting corner. "There are hundreds of thousands of grids, and we run this model over each grid for 30 years in historical simulation and even more for future projection simulation," said Peng. "Currently it takes us several minutes to calculate one model-year simulation over a single grid. The only way to do this in a timely manner is to use parallel computing with thousands of cores in Blue Waters."
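
As an illustration of the workload Peng describes, here is a minimal Python sketch under the assumption of one independent multi-year simulation per grid cell. The function and values are hypothetical placeholders, not the CLM-APSIM code, and on Blue Waters the fan-out would be done with MPI across thousands of cores rather than a single node's process pool.

    from multiprocessing import Pool

    N_YEARS = 30  # length of the historical simulation described above

    def simulate_cell(cell_id):
        """Hypothetical stand-in for one multi-year crop simulation over a single grid cell."""
        annual_yield = []
        for year in range(N_YEARS):
            # ...advance phenology, carbon allocation and stress responses here...
            annual_yield.append(0.0)  # placeholder; the real model returns a simulated yield
        return cell_id, annual_yield

    if __name__ == "__main__":
        grid_cells = range(1_000)  # hundreds of thousands of cells in the real runs
        with Pool() as pool:       # each cell is independent, so the work parallelizes cleanly
            results = dict(pool.map(simulate_cell, grid_cells))
        print(f"simulated {len(results)} grid cells")

Because every grid cell runs independently, the problem is embarrassingly parallel, which is what makes thousands of cores the natural way to bring a multi-decade, continental-scale run down to a practical wall-clock time.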

Peng and Guan examined the results of this tool at seven locations across the U.S. Corn Belt. Compared with the earlier CLM4.5 model, CLM-APSIM more accurately simulated phenology, leaf area index, canopy height and surface fluxes (including gross primary production, net ecosystem exchange, latent heat and sensible heat), and it was especially better at simulating biomass partitioning and maize yield. The CLM-APSIM model also corrected a serious deficiency in the original CLM, which underestimated aboveground biomass and overestimated the harvest index, leading it to produce reasonable yield estimates through the wrong mechanisms.

Additionally, results from a 13-year simulation (2001-2013) at three sites in Mead, Nebraska (US-Ne1, US-Ne2 and US-Ne3) show that the CLM-APSIM model reproduces maize yield responses to growing-season climate (temperature and precipitation) more accurately than the original CLM4.5 when benchmarked against site-based observations and USDA county-level survey statistics.
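
The benchmarking described above amounts to comparing simulated and observed yields site by site. A minimal sketch of that comparison, using made-up illustrative numbers rather than values from the paper, might look like this:

    import numpy as np

    # Hypothetical site-level maize yields in tonnes per hectare; not data from the study
    observed  = np.array([10.2, 11.5, 9.8, 12.1, 10.9])
    simulated = np.array([10.0, 11.9, 9.5, 12.4, 10.6])

    rmse = np.sqrt(np.mean((simulated - observed) ** 2))  # root-mean-square error
    r = np.corrcoef(simulated, observed)[0, 1]            # correlation with observations
    print(f"RMSE = {rmse:.2f} t/ha, r = {r:.2f}")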

“We can simulate the past, because we already have the weather datasets, but looking into the next 50 years, how can we understand the effect of climate change? Furthermore, how can we understand what farmers can do to improve and mitigate the climate change impact and improve the yield?” Guan said.

Their hope is to integrate satellite data into the model, similar to that of weather forecasting. “The ultimate goal is to not only have a model, but to forecast in real-time, the crop yields and to project the crop yields decades into the future,” said Guan. “With this technology, we want to not only simulate all the corn in the county of Champaign, Illinois, but everywhere in the U.S. and at a global scale.”

From here, Peng and Guan plan to expand this tool to include other staple crops, such as wheat, rice and soybeans. They expect to complete a soybean simulation model for the entire United States within the next year.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About the Blue Waters Project

The Blue Waters petascale supercomputer is one of the most powerful supercomputers in the world, and is the fastest sustained supercomputer on a university campus. Blue Waters uses hundreds of thousands of computational cores to achieve peak performance of more than 13 quadrillion calculations per second. Blue Waters has more memory and faster data storage than any other open system in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenges. Recent advances that were not possible without these resources include computationally designing the first set of antibody prototypes to detect the Ebola virus, simulating the HIV capsid, visualizing the formation of the first galaxies and exploding stars, and understanding how the layout of a city can impact supercell thunderstorms.

Source: NCSA

The post NCSA Researchers Create Reliable Tool for Long-Term Crop Prediction in the U.S. Corn Belt appeared first on HPCwire.
