Feed aggregator

ECP Pagoda Project Rolls Out First Software Libraries

HPC Wire - Thu, 11/02/2017 - 13:48

Nov. 2 — Just one year after the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP) began funding projects to prepare scientific applications for exascale supercomputers, the Pagoda Project — a three-year ECP software development program based at Lawrence Berkeley National Laboratory — has successfully reached a major milestone: making its open source software libraries publicly available as of September 30, 2017.

Led by Scott B. Baden, Group Lead of the Computer Languages and Systems Software (CLaSS) Group within Berkeley Lab’s Computational Research Division, the Pagoda Project develops libraries designed to support lightweight global address space communication for exascale applications. The libraries take advantage of the Partitioned Global Address Space (PGAS) model to emulate large, distributed shared memories. By employing this model, which allows researchers to treat the physically separate memories of each supercomputer node as one address space, the Pagoda libraries can leverage available global address hardware support to significantly reduce the communication costs of moving data, often a performance bottleneck in large-scale scientific applications, Baden explained.

“Our job is to ensure that the exascale applications reach key performance parameters defined by the DOE,” he added.

Thus this first release of the software is as functionally complete as possible, Baden emphasized, covering a good deal of the specification released last June. “We need to quickly determine if our users, in particular our ECP application developer partners, are satisfied,” he said. “If they can give us early feedback, we can avoid surprises later on.”

GASNet-EX and UPC++

The Pagoda software stack comprises a communication substrate layer, GASNet-EX, and a productivity layer, UPC++. GASNet-EX is a communication interface that provides language-independent, low-level networking for PGAS languages such as UPC and Coarray Fortran, for the UPC++ library, and for the Legion programming language. UPC++ is a C++ interface for application programmers that creates “friendlier” PGAS abstractions above GASNet-EX’s communication services.

“GASNet-EX, which has been around for over 15 years and is being enhanced to make it more versatile and performant in the exascale environment, is a library intended for developers of tools that are in turn used to develop applications,” Baden explained. “It operates at the network hardware level, which is more challenging to program than at the productivity layer.” The GASNet-EX effort is led by Pagoda co-PI Paul Hargrove; it was originally designed by Dan Bonachea, who continues to co-develop the software. Both are members of CLaSS.

As the productivity layer, UPC++ sits at a slightly higher level than GASNet-EX, in a form appropriate for application programmers. The goal of this layer is to hide a considerable amount of idiosyncratic detail while imposing minimal overhead, so that users come out ahead: the productivity gains outweigh the costs.
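To give a flavor of that productivity layer, here is a minimal sketch written against the publicly documented UPC++ interface (upcxx::init, global_ptr, broadcast, rget and futures). It illustrates the PGAS style described above rather than reproducing code from the Pagoda release itself, and exact names and availability may vary between UPC++ versions.

    #include <upcxx/upcxx.hpp>
    #include <iostream>

    int main() {
        upcxx::init();
        int me = upcxx::rank_me();

        // Rank 0 allocates one integer in its shared segment; the resulting
        // global pointer is broadcast so every rank can address that memory.
        upcxx::global_ptr<int> gptr = nullptr;
        if (me == 0) gptr = upcxx::new_<int>(42);
        gptr = upcxx::broadcast(gptr, 0).wait();

        // One-sided remote get: no matching receive is posted on rank 0.
        int value = upcxx::rget(gptr).wait();
        std::cout << "Rank " << me << " read " << value << std::endl;

        upcxx::barrier();
        if (me == 0) upcxx::delete_(gptr);
        upcxx::finalize();
        return 0;
    }

In recent UPC++ distributions a program like this is built with the upcxx compiler wrapper and launched with upcxx-run, though the exact build workflow depends on the release and the underlying GASNet-EX conduit.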

Over the past year, the Pagoda team worked closely with several Berkeley Lab partners to develop applications and application frameworks, including the Adaptive Mesh Refinement Co-Design Center (AMReX), Sparse Solvers (an ECP AD project) and ExaBiome (an ECP AD project). They also worked with several industry partners, including IBM, NVIDIA, HPE and Cray, and over the next few months will be meeting with all of the major vendors that are vying to build the first exascale computer or the components that will go into those machines.

“We are part of a large community of ECP developers,” Baden said. “And the ECP wants to deploy a software stack, a full set of tools, as an integrated package that will enable them to ensure that the pieces are compatible, that they will all work together. I am fortunate to be working with such a talented team that is highly motivated to deliver a vital component of the ECP software stack.” This team includes other members of CLaSS, Steve Hofmeyr and Amir Kamil (at the University of Michigan), as well as John Bachan, Brian van Straalen and Mathias Jacquelin. Bryce Lelbach, now with NVIDIA, also made early contributions.

Now that they are publicly available, the Pagoda libraries are expected to be used by other ECP efforts and supercomputer users in general to meet the challenges posed not only by the first-generation exascale computers but by today’s petascale systems as well.

“Much of the ECP software and programming technology can be leveraged across multiple applications, both within ECP and beyond,” said Kathy Yelick, Associate Lab Director for Computing Sciences at Berkeley Lab, in a recent interview with HPCwire. For example, AMReX, which was launched last November, recently announced its own first milestone: the release of its new framework to support the development of block-structured AMR algorithms. At least five of the ECP application projects are using AMR to efficiently simulate fine-resolution features, Yelick noted.

For the remaining two years of the Pagoda project, the team will be focused on application integration and performance enhancements that adeptly leverage low-level hardware support, Baden noted.

Source: Berkeley Lab

UberCloud Brings Parallel MPI to Univa Grid Engine’s Docker Support

HPC Wire - Thu, 11/02/2017 - 13:41

CHICAGO, Nov. 2, 2017 – UberCloud and Univa today announced the integration of UberCloud parallel application containers with Univa Grid Engine. In May of last year, Univa, a leading innovator of workload management products, announced the availability of Docker software container support in its Grid Engine 8.4.0 product, enabling enterprises to automatically dispatch and run jobs in Docker containers, from a user-specified Docker image, on a Univa Grid Engine cluster.

That update simplified running complex applications on a Univa-supported cluster and reduced configuration and OS issues. Since then, user applications have been isolated into their own containers, avoiding conflicts with other jobs on the system. The integration enables Docker containers and non-container applications to run in the same cluster, on premises or in the cloud.

UberCloud and Univa technologies complement each other, with Univa focusing on orchestrating jobs and containers on any computing resource infrastructure. Univa and UberCloud have worked together to bring parallel MPI execution to Univa Grid Engine-managed containers. Parallel applications come pre-installed in UberCloud containers, which Univa manages in the same way it manages jobs.

Now Univa and UberCloud solutions can automatically launch containerized parallel workloads in the cloud when the local system queues are full. UberCloud containers guarantee that the same software version can run on premises and in the cloud. Software upgrades also become automated: users simply select the container with the software version they need, for example of ANSYS Fluent or SIMULIA Abaqus. Finally, for interactive sessions, the UberCloud/Univa integration provides NICE DCV and VNC support for GPU-accelerated remote graphics.

Univa’s suite includes the world’s most trusted workload optimization solution enabling organizations to manage and optimize distributed applications, containers, data center services, legacy applications, and Big Data frameworks in a single, dynamically shared set of resources. It is the most widely deployed workload optimization solution used today in more than 10,000 data centers across thousands of applications and use cases.

Be sure to stop by the Univa booth (#849) at SC17, November 13-16 in Denver, Colo., to learn more about containers.

About UberCloud

UberCloud is the online Community, Marketplace, and Software Container Factory where engineers, scientists, and their service providers discover, try, and buy ubiquitous high-performance computing power and Software-as-a-Service from Cloud resource providers and application software vendors around the world. UberCloud’s unique high-performance software container technology simplifies software packageability and portability, enables ease of access and instant use of engineering SaaS solutions, and maintains scalability across multiple compute nodes. Please visit www.TheUberCloud.com or contact us at www.TheUberCloud.com/help/.

About Univa Corporation

Univa is the leading innovator of workload management products that optimize performance of applications, services and containers. Univa enables enterprises to fully utilize and scale compute resources across on-premise, cloud, and hybrid infrastructures. Advanced reporting and monitoring capabilities provide insights to make scheduling decisions and achieve even faster time-to-results. Univa’s solutions help hundreds of companies to manage thousands of applications and run billions of tasks every day. Univa is headquartered in Chicago, with offices in Canada and Germany. For more information, please visit www.univa.com.

Source: Univa Corp.

Mines team receives funds for 2nd space observatory mission

Colorado School of Mines - Thu, 11/02/2017 - 12:47

A Colorado School of Mines team headed by Physics Professors Lawrence Wiencke and Fred Sarazin has been awarded NASA funding to fly a second long-duration balloon mission to test innovative optical methods for catching high-energy cosmic rays and neutrinos from deep space.

Planned for launch in 2022, the Extreme Universe Space Observatory on a Super Pressure Balloon (EUSO-SPB2) is a major step toward a planned mission to send a probe to space, building on the super pressure balloon mission (EUSO-SPB1) that Wiencke coordinated earlier this year.

“The origin of the highest-energy particles in the universe is one of the greatest mysteries in astrophysics,” said Wiencke, who serves as the deputy principal investigator for the five-university U.S. team. “This balloon mission brings a unique opportunity to explore new ways to measure two of the most elusive multi-messengers in astroparticle physics. Historically, advances in this field have been driven by advances in instrumentation used in innovative ways.”

The Earth is constantly bombarded by particles from space. Cosmic rays – mostly subatomic nuclei – and neutrinos are known to be produced by our own sun and from distant supernovae. However, other unknown cosmic phenomena are responsible for the particles impinging on the Earth at the highest energies. Neutrinos, the “ghost particles” of nature, are especially difficult to detect. Most of them traverse the Earth without interacting with matter a single time. 

“There is much we don’t know about those two cosmic messengers,” Sarazin said. “We have studied high-energy cosmic rays with the Pierre Auger Observatory for more than 10 years now, and while we have shown conclusively that they come from outside our own galaxy, their origin still eludes us.” 

Neutrinos are even more elusive, and only a handful of high-energy neutrinos have been detected. “With this balloon experiment and the future space probe, we will go after even higher-energy neutrinos, which are predicted to exist but have yet to be observed,” Sarazin said.

Taking measurements from space offers a much wider field of view to catch these rare particles. The Earth’s atmosphere makes these ghostly particles observable as faint flashes of light moving at ultra-high speeds. The balloon instrument will float at an altitude of 20 miles, carrying a 3,000-pound payload of three specialized telescopes to be built by an international team. One system is built to capture the radiation from extremely energetic neutrinos coming from the Earth below; the other is a fluorescence camera, which picks up the trails of excited nitrogen molecules as cosmic ray showers cross the atmosphere.

“This mission is much more than a follow-up to the flight we did earlier this year,” said Wiencke, who coordinated the 2017 mission. The 2017 NASA flight was terminated early in the mission after the balloon developed a leak and sank into the Pacific Ocean. 

“That was a tough night for everyone,” Wiencke said. “We learned a lot from that mission and so did our students who participated in the instrument preflight, preflight testing and operation. This new mission and the astrophysics we will tackle are very exciting. The team and facilities at Mines will be critical.”

Like the 2017 mission, the 2022 mission will also launch from New Zealand, so that the balloon can catch a ride on a cold “river” of high thin air that circles the bottom part of the globe. The researchers hope the balloon will make several trips around the Antarctic over the course of 100 days or more.

The flight will provide proof of concept for the planned Probe Of Extreme Multi-Messenger Astrophysics (POEMMA), a pair of orbiting satellites with the same capabilities but much greater sensitivity. A team of scientists, including the Mines team, and NASA engineers are designing the POEMMA mission for consideration by the 2020 Astronomy and Astrophysics Decadal Survey, a National Academy of Sciences-led scientific prioritization for the decade.

In addition to Colorado School of Mines, the team also includes researchers from University of Chicago, Lehman College (CUNY), Marshall Space Flight Center and University of Alabama-Huntsville.

Photo credit: NASA/Bill Rodman

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Hogue contributes to Los Angeles River watershed study

Colorado School of Mines - Thu, 11/02/2017 - 12:08

A Colorado School of Mines professor is one of the lead authors on a recently published study examining the impact of water management strategies on the Los Angeles River watershed.

Terri Hogue, professor of civil and environmental engineering and director of the Hydrologic Science and Engineering Program, worked with researchers at UCLA on the study, which is part of the Sustainable LA Grand Challenge, a UCLA research initiative aiming to help transition Los Angeles County to 100 percent renewable energy and 100 percent locally sourced water by 2050.

Achieving that water independence, the study found, will require a careful balancing act from regional planners. Reducing the amount of stormwater or reclaimed water that rushes through the Los Angeles River to the ocean, for example, would also mean less river water for kayakers and wildlife.

For the study, researchers drew on 60 years of flow data to model changes in flow and water quality, with the aim of understanding the impact that potential management measures, such as the use of porous pavement and the creation of man-made ponds, would have on reducing stormwater pollution.

The Los Angeles River watershed covers 825 square miles, beginning in the southwest corner of the San Fernando Valley and ending at the Pacific Ocean.

“This type of modeling analysis provides invaluable information on the potential trade-offs among various stormwater pollution control measures that will improve water quality but also consider local water supply and flood control impacts,” Hogue said.

Hogue was joined on the research team by Mines graduate students Elizabeth Gallo and Ryan Edgley and postdoctoral fellow Laura Read. Leading the study was Mark Gold, associate vice chancellor for environment and sustainability at UCLA. Also contributing from UCLA were Katie Mika, Stephanie Pincetl and Kim Truong.

To read the full study, go to escholarship.org/uc/item/42m433ps.

Categories: Partner News

LiquidCool Solutions Names Darwin P. Kauffman CEO

HPC Wire - Thu, 11/02/2017 - 10:47

ROCHESTER, Minn., Nov. 2, 2017 — LiquidCool Solutions today announced that Darwin P. Kauffman has been appointed as the company’s new CEO, effective immediately. The announcement was made today by former CEO Herb Zien and Chairman Stephen Einhorn, both of whom will remain as co-chairmen of the Board.

“We are delighted that Darwin will lead LiquidCool into its next stage of growth,” said Zien. “With Darwin’s stellar technical, product and leadership credentials, he will be instrumental in the development and execution of LiquidCool’s long-term success.”

Kauffman, who has a degree in electrical engineering and an MBA, began his career at Seagate Technology, where he led product development, product management and strategy roles for Seagate’s enterprise storage devices and solutions products. While with Seagate, Kauffman transformed one of its declining $700M+ business units into a $1B+ enterprise; launched enterprise HDD products into an existing market, capturing 12% market share and $800M in new revenue; and consistently drove double- and triple-digit revenue and profit growth in various technology market segments.

“This is an exciting time for LiquidCool Solutions, and I am thrilled to be taking on the CEO role,” said Kauffman. “LiquidCool solidified the total immersion server industry with the most elegant and efficient liquid cooling technology for electronic equipment. I’m honored and excited about leading LiquidCool into our next phase of growth, where we will combine business and product innovation to provide even more customer value.”

About LiquidCool Solutions

LiquidCool Solutions is a technology development firm with 30 issued patents centering on cooling electronics by total immersion in a dielectric liquid. LCS technology places special emphasis on scalability and rack management.  Beyond providing superior energy savings, performance and reliability, LCS technology enables a broad range of unique applications not possible with any other air or liquid cooling systems.

Source: LiquidCool Solutions

SC17 Preview: The National Strategic Computing Initiative

HPC Wire - Thu, 11/02/2017 - 10:02

In Washington, the conventional wisdom is that an initiative started by one presidential administration will not survive into a new one. This seemed particularly likely with the transition from the Obama administration to the Trump administration. However, an exception to this unwritten rule may be an initiative to support exascale computing, data analytics, “post-Moore’s Law” computing and the HPC ecosystem. The jury is still out, but the signs are starting to look good.

In the summer of 2014, during the tail-end of the Obama administration, a team at the White House’s Office of Science and Technology Policy (OSTP) started to formulate what would become known as the National Strategic Computing Initiative (NSCI). Over the next year, the NSCI was defined and refined through an interagency process and interactions with computer companies and industry users of high performance computing. Although the initiative was formally started by President Obama on July 29, 2015, support by the US federal government for advanced computing is not new, nor is the concept of multi-agency national strategic computing programs. For example, precedents include the Strategic Computing Initiative of the 1980s, the High-Performance Computing Act of 1991, and the High Productivity Computing Systems program of the 2000s. Information concerning NSCI can be found at https://www.nitrd.gov/nsci/.

NSCI recognizes the value of US investments in cutting-edge, high-performance computing for national security, economic security, and scientific discovery. It directs the administration to take a “whole of government” approach to continuing and expanding those activities. The initiative puts the Department of Defense, the Department of Energy and the National Science Foundation into leadership roles to coordinate those efforts. The initiative also identifies other agencies to conduct foundational R&D and to be involved with deployment and implementation. The “whole of government” approach is quite important for collecting and coordinating the resources (i.e., funding) needed to achieve the NSCI goals.

There are five strategic objectives for this initiative. The first is to accelerate the delivery of a “capable exascale computing system” (defined as the integration of hardware and software capability to deliver approximately 100 times the performance of current 10-petaflop systems across a range of applications representing government needs). The second seeks to increase the coherence between traditional modeling and simulation and large data analytics. The third objective is to establish, over the next 15 years, a viable path forward for advanced computing in the “post Moore’s Law era.” The fourth objective seeks to increase the capacity and capability of the entire HPC ecosystem, both human and technical. Finally, the fifth NSCI objective is to implement enduring public-private collaborations to ensure that the benefits of the initiative are shared between the government and the industrial and academic sectors of the economy.
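As a quick arithmetic check on the first objective’s definition (a sketch using the 10-petaflop baseline quoted above):

    100 \times 10\ \text{PFLOPS} = 1{,}000\ \text{PFLOPS} = 1\ \text{EFLOPS},

which is why a machine meeting that target is described as a capable exascale system.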

An NSCI Joint Program Office (JPO) has been established with representatives from the lead agencies (DOD, DOE and NSF). There was also a decision to have the Networking and Information Technology Research and Development (NITRD) program’s National Coordination Office (NCO) act as the communications arm for the initiative. In addition, an Executive Council led by the directors of OSTP and OMB (the Office of Management and Budget) has been established, and in July 2016 it published a Strategic Plan for the initiative.

The bad news is that no formally designated funds for NSCI were identified in President Trump’s Fiscal Year 2018 budget request (although the initiative was mentioned in several places). In the federal government that could be the “kiss of death”; an initiative without funding often withers away and dies. The encouraging thing about the NSCI is that it may be okay that there is no specifically designated funding, because there are other currently funded activities at the lead agencies that already align with the goals of the NSCI. Therefore, the only thing needed for “NSCI implementation” is for these activities to work in a coordinated way, and that is already happening, to some degree, through the JPO. The synergy of the currently funded NSCI-relevant activities provides additional hope that the initiative will survive the transition.

Other pieces of good news include the fact that the staff at the White House’s OSTP is growing and, we understand, has been briefed on the initiative. We also heard that the White House’s Deputy Chief Technology Officer, Michael Kratsios, has been briefed on NSCI. Another very good sign was that on August 17, Mick Mulvaney of OMB and Michael Kratsios issued the administration’s R&D budget priorities. One of those, under the category of Military Superiority, was a call for the U.S. to maintain its leadership in future computing capabilities. Also, under the category of American Prosperity, the budget priorities expressed an interest in R&D in machine learning and quantum computing. Finally, there was direction given for the coordination of new R&D efforts to avoid duplication with existing efforts, which is what the NSCI JPO is already doing.

More specific information about the status of the NSCI will be available at the upcoming Birds of a Feather session at the SC17 conference (5:15 pm, Wed 11/15, Room 601). There, current members of the JPO (Mark Sims of DOD, William Harrod of DOE, and Irene Qualters of NSF) will be able to provide the latest and greatest about the initiative.

For the initiative to survive, the new administration will need to take ownership. Sometimes, with an administration shift, this may involve adjusting its scope. However, there have been previous initiatives that successfully made the administration leap intact (an example is the DOE Accelerated Strategic Computing Initiative (ASCI)). These tend to be initiatives that have a clear and compelling reason to exist and a sound organization that provides confidence that they will succeed.

Things continue to look good for funding the exascale program in the Trump administration. Also, the growth of large scale data analytics across the spectrum of government, industry, and academia probably means that there is a good chance that NSCI will survive the transition.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

Appentra Raises a Funding Round of €400K

HPC Wire - Thu, 11/02/2017 - 09:15

Nov. 2, 2017 — Appentra, a technology-based spin-off company of the University of Coruña established in 2012, has raised a funding round of €400,000. The round was led by three Spanish venture capital organizations: Caixa Capital Risc, Unirisco and Xesgalicia.

The new funds will be used to accelerate the market uptake of Parallware tools, allowing the company to scale its team and further improve the Parallware technology.

Appentra provides top-quality software tools that enable the extensive use of High Performance Computing (HPC) techniques in all application areas of engineering, science and industry. Appentra’s target clients are companies and organizations that run frequently updated compute-intensive applications in markets such as aerospace, automotive, civil engineering, biomedicine and chemistry.

“It is a privilege to have such a supportive group of investors that believe in our vision and in our team,” said Manuel Arenaz (CEO).

About Caixa Capital Risc

Caixa Capital Risc is the venture capital arm of Criteria Caixa, an investor that provides equity and convertible loans to innovative companies in their early stages. It manages capital of 195 million euros and invests mainly in Spanish companies in the industrial technology, healthcare/life sciences and time fields.

About Unirisco

Unirisco is a venture capital group promoting the creation of companies that make use of university knowledge. This is achieved through short-term investments in their financing or through other financial instruments, always with the criteria of profitability and job creation in mind.

About Xesgalicia

Xesgalicia is a Galician venture capital management firm. It finances company development through the temporary acquisition of minority shares in the capital of unquoted companies. In addition, it may make ordinary or mezzanine loans to the companies in which it invests, through different venture capital funds and the assets of a venture capital company.

Source: Appentra

NSF Selects Anne Kinney to Head Mathematical and Physical Sciences Directorate

HPC Wire - Thu, 11/02/2017 - 09:08

Nov. 2, 2017 — The National Science Foundation (NSF) has selected Dr. Anne Kinney to serve as head of the Directorate for Mathematical and Physical Sciences (MPS), which supports fundamental research in astronomy, chemistry, physics, materials science and mathematics.

Kinney has more than 30 years of leadership and management experience in the astronomical community. Since 2015, she has been serving as chief scientist at the W. M. Keck Observatory, which hosts the world’s largest optical and infrared telescopes. At Keck, she served as a liaison to the global scientific community, acting as an ambassador to the observatory’s entire user community.

Prior to that, Kinney held multiple positions at NASA’s Goddard Space Flight Center — most recently as Director of the Solar System Exploration Division, leading and managing a team of more than 350 people. Before moving to Goddard Space Flight Center, Kinney was director of the Universe Division at NASA Headquarters. She oversaw successful space missions that included the Hubble Space Telescope, the Spitzer Space Telescope, the Wilkinson Microwave Anisotropy Probe and the Galaxy Evolution Explorer.

“Anne Kinney arrives at a special moment in our quest to understand the universe — as excitement builds for a new era of multi-messenger astrophysics. And, as we look to convergence research to address some of the most challenging issues in science and engineering, all of the fields in the MPS directorate — mathematics, chemistry, materials science, physics and astronomy — play foundational and leading roles,” said NSF Director France Córdova. “Kinney has successfully brought together researchers, educators, students and other partners time and again to support significant scientific and engineering feats. I am thrilled to welcome her to the NSF leadership team, where her skills and experience will help us maintain our position keeping the U.S. at the forefront of scientific and technological excellence.”

MPS provides about 43 percent of the federal funding for basic research at academic institutions in the mathematical and physical sciences. The directorate serves the nation by supporting fundamental discoveries at the leading edge of science, with special emphasis on supporting early career investigators and advancing areas of science, including quantum information science, optics, photonics, clean energy, data science and more. The NSF-funded Laser Interferometer Gravitational-Wave Observatory (LIGO), which recently detected the collision of two neutron stars, has been supported by MPS for more than 40 years.

“MPS explores some of our most compelling scientific questions, and I am eager to add to the efforts of an agency that plays a key role in driving the U.S. economy, ensuring national security and enhancing the nation’s global leadership in innovation,” Kinney said. “Throughout my career, I’ve been fortunate to lead teams that have used knowledge gained from breakthroughs in fundamental science to enrich how we see and understand the universe. It’s exciting to think that my work at MPS will support research with the potential to fuel decades’ worth of future exploration.”

An expert in extragalactic astronomy, Kinney has published more than 80 papers on quasars, blazars, active galaxies and normal galaxies, and signatures of accretion disks in active galaxies. Her research showed that accretion disks in the center of active galaxies lie at random angles relative to their host galaxies.

Kinney has won numerous awards and honors, including the Presidential Rank Award for Meritorious Service, the NASA Medal for Outstanding Leadership and several NASA Group Achievement Awards for projects such as the Keck Observatory Archive, the James Webb Space Telescope, the Gamma-ray Large Area Space Telescope (now known as the Fermi Gamma-ray Space Telescope) and the Lunar Orbiter Laser Altimeter. An avid supporter of science communication and outreach, Kinney created the Space Telescope Science Institute education group, launching the online Amazing Space program, and has served on the editorial board of Astronomy Magazine since 1997.

Kinney has a bachelor’s degree in astronomy and physics from the University of Wisconsin and a doctorate in astrophysics from New York University. She studied in Denmark for several years at the Niels Bohr Institute, University of Copenhagen.

Kinney will begin her NSF appointment on Jan. 2, 2018.

Source: NSF

XSEDE Hosting Collaboration Booth at Supercomputing 17

HPC Wire - Thu, 11/02/2017 - 09:06

Nov. 2, 2017 — The Extreme Science and Engineering Development Environment (XSEDE) will be hosting a collaboration booth at Supercomputing Conference (SC) in Denver, Colorado from November 13th through 16th.

XSEDE, which is supported by the National Science Foundation (NSF), is the most advanced, powerful and robust collection of integrated digital resources and services in the world. XSEDE is inviting SC17 attendees to the XSEDE collaboration booth (#225) to learn about XSEDE-enabled projects, chat with XSEDE representatives, and see what types of resources and collaborations XSEDE offers researchers. XSEDE’s collaboration booth will also feature two half-day events to encourage XSEDE meetups and project collaborations.

RSVP here:

Source: XSEDE

Ellexus Launches Container Checker on AWS Marketplace

HPC Wire - Thu, 11/02/2017 - 09:01

CAMBRIDGE, England, Nov. 2, 2017 — Ellexus has launched Container Checker on Amazon Web Services’ Marketplace, a pioneering cloud-based tool that provides visibility into the inner workings of Docker containers.

Container Checker is the first tool aimed directly at the cloud market from Ellexus, the I/O profiling company. The software provider has seven years’ experience providing dependency analysis and performance tools in big-compute environments. Ellexus Container Checker brings this expertise to a much wider audience and will enable organisations of all sizes in many sectors to scale rapidly.

Using the tool is simple: spin up the container on an AWS machine, run the Container Checker trace and receive your report when the trace is complete. It will take only as long as the application takes to run.

The following checks are included in the trace:

  • I/O performance: Small reads and writes? Excess metadata operations? Discover the flaws that are bringing your application to a standstill and costing you wasted cloud spend.
  • Dependencies and security: What files does my application need to run? Are they inside the container or outside? Make sure you have everything you need and that the container doesn’t access files or network locations that it shouldn’t.
  • Time: Where is your application wasting time? Should you use a different cloud set-up? Find out if your container is a time waster.

Currently available on AWS, the tool will soon be rolled out across other cloud platforms. Over time Container Checker will also add more types of container to its checklist.

“We are extremely excited to launch our first tool aimed directly at improving the lives of cloud platform users,” said Ellexus CEO Dr Rosemary Francis.

“The use of both cloud compute clusters and containers is set to grow extremely quickly over the next few years, which undoubtedly means people will run into challenges as they adapt to new set-ups and try to scale quickly.

“Container Checker will help people using cloud platforms to quickly detect problems within their containers before they are let loose on the cloud to potentially waste time and compute spend. Estimates suggest that up to 45% of cloud spend is wasted due in part to unknown application activity and unsuitable storage decisions, which is what we want to help businesses tackle.”

Find out more at www.ellexus.com.

About Ellexus Ltd

Ellexus is an I/O profiling company, trusted by the world’s largest software vendors and high performance computing organisations. Its monitoring and profiling tools profile thousands of applications daily, improving performance by up to 35%.

Source: Ellexus

Asetek Announces OEM Partnership With E4 Computer Engineering and Installation

HPC Wire - Thu, 11/02/2017 - 08:26

OSLO, Norway, Nov. 2, 2017 — Asetek (ASETEK.OL) today announced E4 Computer Engineering, an Italian technology provider of solutions for HPC, data analytics and AI, as a new data center OEM partner. E4 Computer Engineering has utilized Asetek RackCDU D2C (Direct-to-Chip) liquid cooling for the D.A.V.I.D.E. SUPERCOMPUTER in Italy. This follows the announcement of an undisclosed OEM partner and installation on July 14, 2017.

You can read more about the partnership at Asetek.com.

About Asetek

Asetek is a global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information, visit www.asetek.com

Source: Asetek

Huawei Partners with Intel to Build a Supercomputing Cluster for Technical University of Denmark

HPC Wire - Thu, 11/02/2017 - 08:17

SHENZHEN, China, Nov. 2, 2017 — For almost two centuries, DTU, the Technical University of Denmark, has been dedicated to fulfilling the vision of H.C. Ørsted, the father of electromagnetism, who founded the university in 1829 to develop and create value using the natural sciences and the technical sciences to benefit society. Today, DTU is ranked as one of the foremost technical universities in Europe.

High-Performance Computing Propels Materials Research

DTU promotes promising fields of research within the technical and the natural sciences, especially based on usefulness to society, relevance to business and sustainability. DTU focuses on basic science that has significant challenges and clear application prospects, from atomic-scale materials analysis to quantum physics and renewable energy. As the material application environment becomes increasingly complex, laboratory research for materials performance analysis has become even more challenging.

DTU aims to understand the nature of materials by developing electronic structure theory, and to design new functional nanostructures through new-found insights. These studies require the analysis of the structure, strength and characteristics of new materials, involving intensive, complex numerical computation and simulation tests on materials and energy, which produce a vast amount of computational data. Therefore, High-Performance Computing (HPC) resources that can accelerate performance modeling and solving are particularly important to research in this field.

In order to speed up the process from discovery to application of new materials and maintain a leading edge in research, DTU plans to expand and upgrade its supercomputing cluster, Niflheim, which is deployed at the Computational Atomic-scale Materials Design (CAMD) Center.

Combining the Best of Both Worlds: Huawei X6800 High-Density Server and Intel OPA Network

The existing Niflheim cluster at DTU was built between 2009 and 2015 and had a peak computing capability of only 73 TFLOPS. The cluster was equipped with previous-generation and even older hardware; the oldest nodes had limited processor performance, small memory capacity and a low-bandwidth, high-latency compute network. The old cluster was failing to meet the growing demands of computing-intensive simulation tests and had become a bottleneck as the CAMD center sought to improve research efficiency.

DTU wanted to deploy a new supercomputing system to give the Niflheim cluster a boost in computing resources and performance, while also preparing the cluster for future technology evolution and cluster-scale expansion. DTU carefully studied various solutions in terms of overall performance, product quality and service capabilities, and through an EU tender finally selected Huawei and Intel as the vendors to help the university build a new-generation computing cluster with their innovative technologies and computing products.

Solution Highlights

Supreme Performance, Leading Computing Efficiency:

  • Nodes configured with Intel® Xeon® E5-2600 v4 series processors, delivering up to 845 GFLOPS of compute power per node;
  • Nodes configured with 256 GB of DIMM memory and 240 GB SSDs, eliminating I/O bottlenecks and improving data processing efficiency with high-speed data caching;
  • Leverages the Intel® Omni-Path Architecture (OPA) to build a two-layer fat-tree fabric, delivering bandwidth of up to 100 Gbit/s and end-to-end latency as low as 910 ns;
  • Power Supply Units (PSUs) and fan modules shared by multiple nodes, enhanced with Huawei’s Dynamic Energy Management Technology (DEMT) to lower system energy consumption by over 10%.

High-Density Deployment, Easy to Manage and Expand:

  • 4U chassis configured with eight 2-socket compute nodes, delivering twice the computing density of traditional 1U rack servers and significantly improving rack space utilization;
  • Supports an aggregated management network port for unified management while reducing cable connections;
  • Adopts a modular design and supports hot swap for all key components, greatly improving Operations and Maintenance (O&M) efficiency.

New-Generation Niflheim Cluster Expedites New Material Discovery and Application

The new-generation Niflheim cluster went live in December 2016. The new cluster not only helps more researchers carry out research and analysis on new materials and new energy, but also provides a great leap in the turnaround time of test results. It has enabled new levels of scientific research progress and strength, helping DTU build new innovation capabilities in the field of materials analysis.

  • The Niflheim cluster delivers computing power of up to 225 TFLOPS, three times the level of the original system;
  • It substantially shortens materials analysis times, enabling researchers to discover and apply new materials more quickly;
  • With flexible expandability, the cluster can seamlessly expand up to 112 nodes without requiring additional new cabinets.

Source: Huawei

Holley named ASCE Region 7 Outstanding Faculty Advisor

Colorado School of Mines - Wed, 11/01/2017 - 16:20

A Colorado School of Mines alumnus and professor has been named the 2017 American Society of Civil Engineers Region 7 Outstanding Faculty Advisor.

Jeff Holley, assistant teaching professor of civil and environmental engineering, was nominated for the award by Mines students in recognition of his outstanding support of the student chapter of ASCE at Mines. Region 7 covers Colorado, Wyoming, Kansas, Nebraska, South Dakota, Missouri and Iowa. 

Holley, who joined the Mines faculty full time in 2012, earned both his master’s degree in environmental science and engineering and his bachelor’s degree in engineering from Mines. He also holds an MBA from University of Colorado Denver. 

He began co-advising the Mines chapter of ASCE with Professor Emeritus Candace Sulzbach in 2013, before taking over in 2014. 

“This award is more of a statement to the hard work and commitment of the students and the importance that the college places on the student group than it is me as an individual,” Holley said. “The ASCE Student group is a big group with many, many activities throughout the year. Members, and in particular the officers, are an amazing group. It is really an award that we share.”

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

House Subcommittee Tackles U.S. Competitiveness in Quantum Computing

HPC Wire - Wed, 11/01/2017 - 13:46

How important is quantum computing? How are U.S. quantum research efforts stacking up against the rest of the world? What should a national quantum computing policy, if any, look like? Is the U.S. really falling behind in this crucial area? Late last month, six leaders from government and industry tackled these questions at a joint hearing of the Subcommittee on Research & Technology and the Subcommittee on Energy, “American Leadership in Quantum Technology.”

While much of the testimony offered covered familiar ground for the HPC community, it was nonetheless an excellent overview of ongoing efforts and attitudes at the Department of Energy, the National Science Foundation and IBM, among others. There was even a proposal put forth by one speaker for a National Quantum Initiative funded to the tune of $500 million over five years and focused on three areas: quantum-enhanced sensors, optical photonic quantum communication networks, and quantum computers.

Given the breadth of material covered, it’s useful that the committee has made the formal statements by the six panelists readily available (just click on the name bulleted below). There’s also a video of the proceedings.

Panel 1:

  • Carl J. Williams, acting director, Physical Measurement Laboratory, National Institute of Standards and Technology (NIST)
  • Jim Kurose, assistant director, Computer and Information Science and Engineering Directorate, National Science Foundation
  • John Stephen Binkley, acting director of science, U.S. Department of Energy

Panel 2:

  • Scott Crowder, vice president and chief technology officer for quantum computing, IBM Systems Group
  • Christopher Monroe, distinguished university professor & Bice Zorn Professor, Department of Physics, University of Maryland; founder and chief scientist, IonQ, Inc.
  • Supratik Guha, director, Nanoscience and Technology Division, Argonne National Laboratory; professor, Institute for Molecular Engineering, University of Chicago

Full committee chair Lamar Smith (R-Texas) noted, “Although the United States retains global leadership in the theoretical physics that underpins quantum computing and related technologies, we may be slipping behind others in developing the quantum applications – programming know-how, development of national security and commercial applications…Just last year, Chinese scientists successfully sent the first-ever quantum transmission from Earth to an orbiting satellite… According to a 2015 McKinsey report, about 7,000 scientists worldwide, with a combined budget of about $1.5 billion, worked on non-classified quantum technology.”

Summarizing the panelists’ comments is challenging, other than to say quantum computing is important and could use more government support. Ideas on how such support might be shaped differed among speakers, and it’s interesting to hear their recaps of ongoing quantum research. While we tend to fixate on quantum computing, exotic qubits, and revolutionizing computing (maybe), Williams’s (NIST) testimony reminded the gathering that quantum science has a wide-ranging reach:

“Atomic clocks define the second and tell time with amazing precision. For example, the most accurate U.S. atomic clock currently used for defining the second is the NIST-F2. It keeps time to an accuracy of less than a millionth of a billionth of a second. Stated in another way, the NIST-F2 clock will not lose a second in at least 300 million years. And just this month, NIST published a description of a radically new atomic clock design—the three-dimensional (3-D) quantum gas atomic clock. With a precision of just 3.5 parts error in 10 quintillion (1 followed by 19 zeros) in about 2 hours, it is the first atomic clock to ever reach the 10 quintillion threshold, and promises to usher in an era of dramatically improved measurements and technologies across many areas based on controlled quantum systems.”
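As a rough consistency check on the figures Williams quotes (a sketch, assuming roughly 3.15 × 10^7 seconds per year):

    \frac{1\ \text{s}}{(3\times 10^{8}\ \text{yr})(3.15\times 10^{7}\ \text{s/yr})} \approx 1\times 10^{-16} \quad \text{(NIST-F2: about one second of drift in 300 million years)}

    \frac{3.5}{10^{19}} = 3.5\times 10^{-19} \quad \text{(3-D quantum gas clock: 3.5 parts in 10 quintillion)}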

The Longest Mile Matters: URISC@SC17 Coming to Denver, Colorado

HPC Wire - Wed, 11/01/2017 - 13:19

The Understanding Risk in Shared CyberEcosystems workshop will convene Saturday, November 11 through Thursday, November 16, 2017, in Denver, Colorado. In addition to 12 hours of cybersecurity training, URISC participants will attend SC17; the flagship high performance computing (HPC) industry conference and technology showcase that attracts more than 10,000 international attendees each year.

A STEM-Trek call for participation closed Sept. 11. Applications were accepted from cybersecurity professionals, HPC systems administrators, educators and network engineers who support research computing at US and sub-Saharan African colleges and universities in under-served regions. All serve in professional support roles at least 50 percent of the time where they help students, faculty and staff leverage locally-hosted, or remotely-accessed advanced cyberinfrastructure (CI) for education and open research.

US applications were reviewed and ranked by African reviewers, and vice versa. Thirty percent of African applications were received from women, and 80 percent of the pool reflects demographics that are typically under-represented in cybersecurity and HPC careers.

Thirty percent of applicants were awarded grants, which cover flights, lodging, ground transit and some meals; international scholars will also receive U.S. pocket money. Every effort was made to shape the diversity of the final cohort so it mirrors the applicant pool in terms of gender, ethnicity, research domains and regions represented, and the same consideration was applied when choosing presenters. Eight US participants are XSEDE Campus Champions (six are supported by the project, and two are self-funded), and five are from EPSCoR states (NSF Established Program to Stimulate Competitive Research). Special guests from Nepal (ICIMOD) and Canada (U-BC) have been invited to attend; altogether, 34 URISC delegates, trainers and guests will represent 11 countries and 12 US states.

An introduction to open-source materials developed by the Center for Trustworthy Scientific Cyberinfrastructure (CTSC) will be shared, as well as coaching in the art of external relations – specifically how to foster administrative and legislative buy-in for a greater cybersecurity investment on college campuses. The agenda has been customized to consider what has become an increasingly diverse body of campus stakeholders—including researchers, students, faculty, government agency stakeholders, “long-tail” user communities, and regional industry partners.

On Friday, Nov. 10, a small delegation from the US, Botswana, South Africa and Nepal will visit the National Center for Atmospheric Research (NCAR) in Boulder. They will tour NCAR’s visualization lab and cybersecurity center, and meet researchers who lead a variety of global climate and environmental projects.

URISC @SC17 Program Committee:

Elizabeth Leake (STEM-Trek Nonprofit), URISC Planning Committee Chair and Facilitator;
Von Welch (IU/CTSC), Planning Committee Cybersecurity SME, Facilitator and Trainer;
Happy Sithole (Director, CHPC/Cape Town), URISC Planning Committee SME;
Bryan Johnston (Trainer, CHPC/Cape Town), URISC Trainer;
Meshack Ndala (Cybersecurity Lead, CHPC, Cape Town), URISC Trainer, SME.

Trainers and Special Guests, in order of appearance:

  • Von Welch (Indiana University), Directs NSF-supported Centers for Applied Cybersecurity Research and Trustworthy Scientific Cyberinfrastructure: “Cybersecurity Methodology for Open Science.”
  • Ryan Kiser (Indiana University): “Log Analysis for Intrusion Detection.”
  • Susan Ramsey (NCAR): “The Anatomy of a Breach.”
  • Jim Basney (National Center for Supercomputing Applications/CTSC): “Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructure.”
  • Bart Miller and Elisa Heymann (University of Wisconsin at Madison): “Secure Coding.”
  • Nick Roy (InCommon/Internet2): “Federated Trust: One Benefit of Regional Alliance Membership.”
  • Thomas Sterling (Indiana University, CREST): Sterling will share highlights of a new NSF-funded course titled, “High Performance Computing: Modern Systems and Practices, first edition,” scheduled for release Dec. 2017.
  • Happy Sithole (Director, South African Centre for HPC): Sithole will provide a brief welcome, and overview of technology initiatives supported by the CHPC.
  • Elizabeth Leake (Director and Founder, STEM-Trek Nonprofit): “The Softer Side of Cybersecurity.”
  • Bryan Johnston & Meshack Ndala (South African Centre for HPC): “Learn to be Cyber-Secure before you’re Cyber-Sorry.”
  • Florence Hudson (Senior Vice President and Chief Innovation Officer, Internet2): “IoT security challenges and Risk in Shared CyberEcosystems.”

Why the Longest Last Mile Matters

For more than 50 years, HPC has supported tremendous advances in all areas of science. Densely-populated, urban communities can more easily support subscription-based commodity networks and energy infrastructure that make it more affordable for nearby universities to engage with globally-collaborative science. Conversely, research centers that are located in sparsely-populated regions are disadvantaged since their last mile is much longer; there are fewer partners with which to cost-share regional connectivity. It’s more difficult for them to recruit and retain skilled personnel, they must travel longer distances to attend workshops and conferences, and it’s tougher to buy new hardware and software; there are many more competing priorities for limited funds, and they receive less federal grant support.

At the same time, they represent industrial landscapes that reflect globally-significant environmental factors, rich biodiversity, geology, and minerals. Every place on earth has a unique perspective of our universe, and less-populated regions offer the most detailed and unfettered vantage points. When researchers everywhere can access data that are generated by and stored at these sites, progress will be accelerated toward solutions to problems that impact global climate, environment, food and water security, public health, quality of life, and world peace.

While every HPC professional would benefit from attending the annual Supercomputing Conference, few from the communities STEM-Trek helps could afford to attend otherwise. Many are campus “tech generalists” who must balance administrative, support and teaching obligations; it’s more difficult for them to take time away from work because skill sets are usually one-deep (there is no back-up to mitigate the many crises that arise when centers function with inadequate and/or aging e-infrastructure). Because they wear both sysadmin and trainer hats, they rely on student labor to support their HPC resources. Their students learn more, and make an exponentially larger and more meaningful contribution to the global HPC workforce pipeline.

Even if they could take time away from work, they can’t afford to; in many cases, state and federal travel budgets have been legislatively restricted or eliminated altogether. Some of the countries that will be represented at URISC have consumer prices that are 80 and 90 percent lower than they are in the US and Europe where such conferences are typically held. This is also why they’re disadvantaged when it comes to purchasing new hardware, and why we encourage more affluent universities and government labs to donate decommissioned hardware so its life can be extended for another five to seven years in a light research and training capacity.

Despite these barriers, URISC attendance is easier to justify since participants will not only learn cybersecurity best practices from some of the world’s most informed specialists, they will become part of a multinational “affinity” network which offers a psycho-social framework of support for the future, and access a wealth of information at SC17.

Financial Support for URISC@SC17

This workshop is supported by US National Science Foundation grants managed by Indiana University and Oklahoma State University, with STEM-Trek donations from Google, Corelight, and SC17 General Chair Bernd Mohr (Jülich Supercomputing Centre), with support from Inclusivity Chair Toni Collis (U-Edinburgh).

History of Southern Africa’s Shared CyberEcosystem

The SADC HPC Forum formed in 2013 when the University of Texas donated their decommissioned, NSF-sponsored Ranger system to the South African CHPC. Twenty-five Ranger racks were divided into ten smaller clusters and were installed in universities in the SADC region. It is their goal to develop a shared cyberecosystem for open science.

In 2016, a second system was donated by the University of Cambridge, UK. It was also split into small clusters that were installed in Madagascar and South Africa (North-West University). In 2017, Ghana joined the collaboration and CHPC installed a cluster there that will become part of the shared SADC cyberecosystem. The CHPC continues to lead training efforts in the region, and a dozen or so US and European HPC industry experts volunteer to advise as the shared African CI project continues to gain traction.

Many SADC delegates have trained as a cohort since 2013, and it has been a successful exercise in science diplomacy. Among them are network engineers, sysadmins, educators, and computational and domain scientists. Although the cohort spans multiple languages and cultures, the team has coalesced as its members train together toward a common goal. They are creating a procedural framework for human capital development, open science and research computing. The SADC HPC Forum serves to inform policy-makers, who will then advocate for greater national investments in CI.

History of This STEM-Trek Workshop Series

This is the third year STEM-Trek has been involved with an SC co-located workshop for African stakeholders, and the second year to include US participants who work at resource-constrained centers and therefore share many of the same challenges. In 2015, a workshop for SADC delegates was arranged by the Texas Advanced Computing Center (TACC) in Austin, Texas, and was co-facilitated by Melyssa Fratkin (TACC) and Elizabeth Leake (STEM-Trek). Last year’s “HPC On Common Ground @SC16” workshop in Salt Lake City featured a food security theme and was led by Elizabeth Leake (STEM-Trek), Dana Brunson (Oklahoma State University), Henry Neeman (University of Oklahoma), Bryan Johnston (South African Centre for High Performance Computing/CHPC) and Israel Tshililo (CHPC).

STEM-Trek will do it again in Dallas next year! The SC18 workshop will have an energy theme—stay tuned for more information!

About the CTSC

As the NSF Cybersecurity Center of Excellence, CTSC draws on expertise from multiple internationally recognized institutions, including Indiana University, the University of Illinois, the University of Wisconsin-Madison, and the Pittsburgh Supercomputing Center, and collaborates with NSF-funded research organizations to address the unique cybersecurity challenges such organizations face. In addition to our leadership team, a world-class CTSC Advisory Committee adds its expertise and a critical eye to the center’s strategic decision-making.

About STEM-Trek Nonprofit

STEM-Trek is a global, grassroots, nonprofit 501(c)(3) organization that supports travel and professional development for HPC-curious scholars from under-represented groups and regions. Beneficiaries of our programs are encouraged to “pay-it-forward” by volunteering to serve as technology evangelists in their home communities or in ways that help STEM-Trek achieve its objectives. STEM-Trek was honored to receive the 2016 HPCwire Editors’ Choice Award for Workforce Diversity Leadership. Follow us on Twitter (#LongestMileMatters) and Facebook. For more information, visit our website: www.stem-trek.org.

The post The Longest Mile Matters: URISC@SC17 Coming to Denver, Colorado appeared first on HPCwire.

Graduate student honored as 2017 Champion of Energy

Colorado School of Mines - Wed, 11/01/2017 - 11:14

A Colorado School of Mines graduate student was among the honorees at the 2017 Champions of Energy reception hosted by the American Petroleum Institute, the Colorado Petroleum Council, Energy Nation, the Denver Petroleum Club and ConocoPhillips.

Christopher Ruybal, a PhD candidate in the Department of Civil and Environmental Engineering, was one of four Hispanic STEM students and energy professionals recognized for their professional and personal commitment to the STEM fields, energy and community, in honor of Hispanic Heritage Month. 

Anabel Alvarado of Noble Energy, Kim Mendoza-Cooke of Anadarko and Gil Guethlein of Bayswater were also honored at the Oct. 27 reception in Denver.

A graduate fellow with the ConocoPhillips Center for a Sustainable WE2ST, Ruybal focuses his research on groundwater challenges and demands related to agriculture, urban growth and energy development in northern Colorado, providing critical information about groundwater stress and the future outlook of local groundwater resources.

He received his master’s degree in environmental science and engineering from Mines and a bachelor’s degree in environmental science from Regis University.

Photo credit: Courtesy of API

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Supermicro Showcases Deep Learning Optimized Systems

HPC Wire - Wed, 11/01/2017 - 08:08

WASHINGTON, Nov. 1, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today is showcasing GPU server platforms that support NVIDIA Tesla V100 PCI-E and V100 SXM2 GPUs in booth #506 at the GPU Technology Conference (GTC) in Washington, D.C., held at the Ronald Reagan Building and International Trade Center.

Supermicro’s new 4U system with next-generation NVIDIA NVLink is optimized for maximum acceleration of highly parallel applications such as artificial intelligence (AI), deep learning, autonomous vehicle systems, and energy and engineering/science workloads. The SuperServer 4028GR-TXRT supports eight NVIDIA Tesla V100 SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for HPC clusters and hyper-scale workloads. Incorporating the latest NVIDIA NVLink GPU interconnect technology, with over five times the bandwidth of PCI-E 3.0, this system features independent GPU and CPU thermal zones to ensure uncompromised performance and stability under the most demanding workloads.

Similarly, the performance-optimized 4U SuperServer 4028GR-TRT2 system can support up to 10 PCI-E Tesla V100 accelerators with Supermicro’s innovative, GPU-optimized single-root-complex PCI-E design, which dramatically improves GPU peer-to-peer communication performance. For even greater density, the SuperServer 1028GQ-TRT supports up to four PCI-E Tesla V100 GPU accelerators in only 1U of rack space. Ideal for media, entertainment, medical imaging, and rendering applications, the powerful 7049GP-TRT workstation supports up to four NVIDIA Tesla V100 GPU accelerators.

“Supermicro designs the most application-optimized GPU systems and offers the widest selection of GPU-optimized servers and workstations in the industry,” said Charles Liang, President and CEO of Supermicro. “Our high performance computing solutions enable deep learning, engineering and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve fastest time-to-results with maximum performance per watt, per square foot and per dollar. With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”

“Supermicro’s new high-density servers are optimized to fully leverage the new NVIDIA Tesla V100 data center GPUs to provide enterprise and HPC customers with an entirely new level of computing efficiency,” said Ian Buck, vice president and general manager of the Accelerated Computing Group at NVIDIA. “The new SuperServers deliver dramatically higher throughput for compute-intensive data analytics, deep learning and scientific applications while minimizing power consumption.”

With the convergence of big data analytics, the latest NVIDIA GPU architectures, and improved machine learning algorithms, deep learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively as the GPU network expands. Supermicro’s single-root GPU system allows multiple GPUs to communicate with minimal latency and maximum throughput, as measured by the NCCL P2PBandwidthTest.
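Readers who want to sanity-check this kind of GPU peer-to-peer behavior on their own multi-GPU nodes can do so with the standard CUDA runtime API. The sketch below is a minimal illustration only, not Supermicro’s benchmark or the NCCL test the release cites; the file name p2p_check.cu, the error-handling helper and the 64 MiB transfer size are arbitrary choices made for the example. It checks which device pairs can reach each other directly, enables peer access, and times one cudaMemcpyPeer transfer per pair to estimate bandwidth.

// p2p_check.cu -- minimal peer-to-peer bandwidth probe (illustrative sketch only).
// Build: nvcc -O2 p2p_check.cu -o p2p_check
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

static void check(cudaError_t err, const char *what) {
    if (err != cudaSuccess) {
        fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
        exit(EXIT_FAILURE);
    }
}

int main() {
    int deviceCount = 0;
    check(cudaGetDeviceCount(&deviceCount), "cudaGetDeviceCount");
    const size_t bytes = 64u << 20;  // 64 MiB per transfer (arbitrary size)

    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;

            int canAccess = 0;
            check(cudaDeviceCanAccessPeer(&canAccess, src, dst), "cudaDeviceCanAccessPeer");
            if (!canAccess) {
                printf("GPU %d -> GPU %d: no direct P2P path\n", src, dst);
                continue;
            }

            // Allocate a buffer on each device and enable direct peer access.
            void *srcBuf = nullptr, *dstBuf = nullptr;
            check(cudaSetDevice(src), "cudaSetDevice(src)");
            cudaDeviceEnablePeerAccess(dst, 0);  // may report "already enabled"
            cudaGetLastError();                  // clear that benign error if so
            check(cudaMalloc(&srcBuf, bytes), "cudaMalloc(srcBuf)");
            check(cudaSetDevice(dst), "cudaSetDevice(dst)");
            check(cudaMalloc(&dstBuf, bytes), "cudaMalloc(dstBuf)");

            // Time a single device-to-device copy with CUDA events on the source GPU.
            check(cudaSetDevice(src), "cudaSetDevice(src)");
            cudaEvent_t start, stop;
            check(cudaEventCreate(&start), "cudaEventCreate(start)");
            check(cudaEventCreate(&stop), "cudaEventCreate(stop)");
            check(cudaEventRecord(start), "cudaEventRecord(start)");
            check(cudaMemcpyPeer(dstBuf, dst, srcBuf, src, bytes), "cudaMemcpyPeer");
            check(cudaEventRecord(stop), "cudaEventRecord(stop)");
            check(cudaEventSynchronize(stop), "cudaEventSynchronize");
            float ms = 0.0f;
            check(cudaEventElapsedTime(&ms, start, stop), "cudaEventElapsedTime");
            printf("GPU %d -> GPU %d: %.2f GB/s (single %zu-byte copy)\n",
                   src, dst, (bytes / 1.0e9) / (ms / 1.0e3), bytes);

            cudaEventDestroy(start);
            cudaEventDestroy(stop);
            cudaFree(srcBuf);                    // still on the source device
            check(cudaSetDevice(dst), "cudaSetDevice(dst)");
            cudaFree(dstBuf);
        }
    }
    return 0;
}

For serious measurements, the p2pBandwidthLatencyTest included with the CUDA samples or the NCCL performance tests are far more thorough than a single timed copy like this.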

For comprehensive information on Supermicro NVIDIA GPU system product lines, please go to https://www.supermicro.com/products/nfo/gpu.cfm.

About Super Micro Computer, Inc. 

Supermicro (NASDAQ: SMCI), a leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced Server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Super Micro Computer, Inc.

The post Supermicro Showcases Deep Learning Optimized Systems appeared first on HPCwire.

NVIDIA Announces New AI Partners, Courses, Initiatives to Deliver Deep Learning Training Worldwide

HPC Wire - Wed, 11/01/2017 - 08:00

Nov. 1, 2017 — NVIDIA today announced a broad expansion of its Deep Learning Institute (DLI), which is training tens of thousands of students, developers and data scientists with critical skills needed to apply artificial intelligence.

The expansion includes:

  • New partnerships with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in AI.
  • A new University Ambassador Program that enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost.
  • New courses designed to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics and self-driving cars.

“The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” said Greg Estes, vice president of Developer Programs at NVIDIA. “As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”

DLI – which NVIDIA formed last year to provide hands-on and online training worldwide in AI – is already working with more than 20 partners, including Amazon Web Services, Coursera, Facebook, Hewlett Packard Enterprise, IBM, Microsoft and Udacity.

Today the company is announcing a collaboration with deeplearning.ai, a new venture formed by AI pioneer Andrew Ng with the mission of training AI experts across a wide range of industries. The companies are working on new machine translation training materials as part of Coursera’s Deep Learning Specialization, which will be available later this month.

“AI is the new electricity, and will change almost everything we do,” said Ng, who also helped found Coursera, and was research chief at Baidu. “Partnering with the NVIDIA Deep Learning Institute to develop materials for our course on sequence models allows us to make the latest advances in deep learning available to everyone.”

DLI is also teaming with Booz Allen Hamilton to train employees and government personnel, including members of the U.S. Air Force. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity and defense.

To help students learn practical AI techniques, improve their job skills and prepare to take on difficult computing challenges, the new NVIDIA University Ambassador Program prepares college instructors to teach DLI courses to their students at no cost. NVIDIA is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology and UCLA.

DLI is also bringing free AI training to young people through organizations like AI4ALL, a nonprofit organization that works to increase diversity and inclusion. AI4ALL gives high school students early exposure to AI, mentors and career development.

“NVIDIA is helping to amplify and extend our work that enables young people to learn technical skills, get exposure to career opportunities in AI and use the technology in ways that positively impact their communities,” said Tess Posner, executive director at AI4ALL.

In addition, DLI is expanding the range of its training content with:

  • New project-based curriculum to train Udacity’s Self-Driving Car Engineer Nanodegree students in advanced deep learning techniques, as well as upcoming projects to help students around the world create deep learning applications in robotics.
  • New AI hands-on training labs in natural language processing, intelligent video analytics and financial trading.
  • A full-day self-driving car workshop, “Perception for Autonomous Vehicles,” available later this month. Students will learn how to integrate input from visual sensors and implement perception through training, optimization and deployment of a neural network.

To increase availability of AI training worldwide, DLI recently signed new training delivery partnerships with Skyline ATS in the U.S., Boston in the U.K. and Emmersive in India.

More information is available at the DLI website, where individuals can sign up for in-person or self-paced online training.

Source: NVIDIA

The post NVIDIA Announces New AI Partners, Courses, Initiatives to Deliver Deep Learning Training Worldwide appeared first on HPCwire.

Cray Exceeds Q3 Target, Projects 10 Percent Growth for 2018

HPC Wire - Tue, 10/31/2017 - 17:23

Cray announced yesterday (Monday) that revenue for its third quarter ending in September came in at $79.7 million, slightly higher than the $77.5 million booked in the third quarter of 2016.

The company reported that pulling a major acceptance from the fourth quarter into the third provided a $20 million boost over its original Q3 target. Cray still expects total revenue for the year to be in the range of $400 million.

While Cray is meeting its expectations this year, market sluggishness, component delays and other factors have created a downturn since it closed out 2015 with record revenue of $725 million. But the market may finally be showing signs of a modest rebound, according to Cray CEO Peter Ungaro. On the Monday earnings call, he offered guidance that revenue is on track to grow about 10 percent in 2018.

Ungaro acknowledged that 10 percent growth at this point is not huge, but it could indicate a turning point. He said Cray is expecting a significant number of large opportunities across multiple geographies, notably the Americas, EMEA, Asia Pacific and Japan (which Cray counts as a separate region). Some of this revenue will hit 2018, but most of it will be for systems put into production in 2019 and 2020, Ungaro noted.

“That to me really is starting to give us very early signs of a potential market rebound,” he said. “It’s early yet; this is the first quarter we are really beginning to see some signs so I want to be cautious but excited to see these signs. We always believed that the market was going to turn back around and that it wasn’t going to stay muted long.”

Cray would be in a better position for 2018 if the original CORAL contract to provide a 200-petaflops system to Argonne National Laboratory had not been rewritten over the summer, shifting the delivery date from 2018 to 2021. Ungaro said that details of the new exascale-class Aurora contract are still being finalized.

Cray CFO Brian Henry reported that total gross profit margin for the third quarter was about 36 percent, with product margin coming in at 23 percent and service margin at 53 percent. He added that this product margin was significantly lower than typical due to a $4.1 million anticipated loss on a single large contract scheduled for delivery in 2018. Much of this loss was attributed to rising memory prices. “Nobody would have thought that in the first quarter of 2016 [when they bid the contract] that the prices would be more than double at this time in 2017,” said Henry.

Over the last 12-18 months, Cray’s bottom line has been impacted by an underperforming market, component delays and, more recently, slower-than-average adoption of Intel’s Skylake processors. Ungaro commented that Cray usually gets about 20-25 percent of its revenues through upgrades and that this has slowed down. “The company hasn’t gotten that uptick with the Skylake processors,” he said, attributing this to the increased cost of memory tempering the usual price-performance advantage. “We do have a number of opportunities for Skylake,” said Ungaro. “[And] we think it is going to become larger part of market opportunity in 2018, but it’s still relatively muted overall compared to a typical new processor coming to market.”

In closing, Ungaro reminded the investors on the call that the “biggest supercomputing industry conference of the year” SC17 will kick off in Denver in two weeks.

Wall Street reacted favorably to the earnings report. The stock is experiencing unusually high trading volume and shares are up 13.8 percent since Monday morning, closing at $20.65 on Tuesday.

The post Cray Exceeds Q3 Target, Projects 10 Percent Growth for 2018 appeared first on HPCwire.

Storage Strategies in the Age of Intelligent Data

HPC Wire - Tue, 10/31/2017 - 14:40

From scale-out clusters on commodity hardware to flash-based storage with data-temperature tiering, cloud-based object storage, and even tape, there are myriad considerations when architecting the right enterprise storage solution. In this round-table webinar, we examine case studies covering a variety of today’s storage requirements. We’ll discuss when and where to use various storage media according to use case, and we’ll look at security challenges and emerging storage technology coming online.

The post Storage Strategies in the Age of Intelligent Data appeared first on HPCwire.
