Feed aggregator

LiquidCool Solutions Names Darwin P. Kauffman CEO

HPC Wire - Thu, 11/02/2017 - 10:47

ROCHESTER, Minn., Nov. 2, 2017 — LiquidCool Solutions today announced that Darwin P. Kauffman has been appointed as the company’s new CEO, effective immediately. The announcement was made today by former CEO Herb Zien and Chairman Stephen Einhorn, both of whom will remain as co-chairmen of the Board.

“We are delighted that Darwin will lead LiquidCool into its next stage of growth,” said Zien. “With Darwin’s stellar technical, product and leadership credentials, he will be instrumental in the development and execution of LiquidCool’s long-term success.”

Kauffman, who has a degree in electrical engineering and an MBA, began his career at Seagate Technology, where he held product development, product management and strategy roles for Seagate’s enterprise storage devices and solutions products. While with Seagate, Kauffman transformed one of its declining $700M+ business units into a $1B+ enterprise; launched enterprise HDD products into existing markets, capturing 12% market share and $800M in new revenue; and consistently drove double- and triple-digit revenue and profit growth in various technology market segments.

“This is an exciting time for LiquidCool Solutions, and I am thrilled to be taking on the CEO role,” said Kauffman. “LiquidCool solidified the total immersion server industry with the most elegant and efficient liquid cooling technology for electronic equipment. I’m honored and excited about leading LiquidCool into our next phase of growth, where we will combine business and product innovation to provide even more customer value.”

About LiquidCool Solutions

LiquidCool Solutions is a technology development firm with 30 issued patents centering on cooling electronics by total immersion in a dielectric liquid. LCS technology places special emphasis on scalability and rack management. Beyond providing superior energy savings, performance and reliability, LCS technology enables a broad range of unique applications not possible with any other air- or liquid-cooling system.

Source: LiquidCool Solutions

The post LiquidCool Solutions Names Darwin P. Kauffman CEO appeared first on HPCwire.

SC17 Preview: The National Strategic Computing Initiative

HPC Wire - Thu, 11/02/2017 - 10:02

In Washington, the conventional wisdom is that an initiative started by one presidential administration will not survive into the next. This seemed particularly true in the transition from the Obama administration to the Trump administration. However, an exception to this unwritten rule may be an initiative to support exascale, data analytics, “post-Moore’s Law” computing and the HPC ecosystem. The jury is still out, but the signs are starting to look good.

In the summer of 2014, during the tail-end of the Obama administration, a team at the White House’s Office of Science and Technology Policy (OSTP) started to formulate what would become known as the National Strategic Computing Initiative (NSCI). Over the next year, the NSCI was defined and refined through an interagency process and interactions with computer companies and industry users of high performance computing. Although the initiative was formally started by President Obama on July 29, 2015, support by the US federal government for advanced computing is not new, nor is the concept of multi-agency national strategic computing programs. For example, precedents include the Strategic Computing Initiative of the 1980s, the High-Performance Computing Act of 1991, and the High Productivity Computing Systems program of the 2000s. Information concerning NSCI can be found at https://www.nitrd.gov/nsci/.

NSCI recognizes the value of US investments in cutting-edge, high-performance computing for national security, economic security, and scientific discovery. It directs the administration to take a “whole of government” approach to continuing and expanding those activities. The initiative puts the Department of Defense, the Department of Energy and the National Science Foundation into leadership roles to coordinate those efforts, and identifies other agencies to conduct foundational R&D and be involved with deployment and implementation. The “whole of government” approach is important for collecting and coordinating the resources (i.e., funding) needed to achieve the NSCI goals.

There are five strategic objectives for this initiative. The first is to accelerate the delivery of a “capable exascale computing system” (defined as the integration of hardware and software capability to deliver approximately 100 times the performance of current 10-petaflop systems across a range of applications representing government needs). The second seeks to increase the coherence between traditional modeling and simulation and large data analytics. The third objective is to establish, over the next 15 years, a viable path forward for advanced computing in the “post Moore’s Law era.” The fourth objective seeks to increase the capacity and capability of the entire HPC ecosystem, both human and technical. Finally, the fifth NSCI objective is to implement enduring public-private collaborations to ensure that the benefits of the initiative are shared between the government and the industrial and academic sectors of the economy.
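The arithmetic behind the first objective is worth making explicit: 100 times a 10-petaflop system is 1,000 petaflops, i.e. roughly one exaflop, which is where the “exascale” label comes from. A quick sanity check:

```python
# "Capable exascale" per the NSCI definition:
# ~100x the performance of current 10-petaflop systems.
current_pflops = 10                      # baseline: a 10-petaflop system
target_pflops = 100 * current_pflops     # NSCI target
print(target_pflops)  # → 1000 PFLOPS, i.e. one exaflop
```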

An NSCI Joint Program Office (JPO) has been established with representatives from the lead agencies (DOD, DOE, and NSF). There was also a decision to have the National Coordination Office (NCO) of the Networking and Information Technology Research and Development (NITRD) program act as the communications arm for the initiative. In addition, an Executive Council led by the directors of OSTP and the OMB (Office of Management and Budget) was established and, in July 2016, published a Strategic Plan for the initiative.

The bad news is that no formally designated funds for NSCI were identified in President Trump’s Fiscal Year 2018 request (although the initiative was mentioned in several places). In the federal government that could be the “kiss of death”: an initiative without funding often withers away and dies. The encouraging thing about the NSCI is that it may be okay that there is no specifically designated funding, because there are other currently funded activities at the lead agencies that already align with the goals of the NSCI. Therefore, the only thing needed for “NSCI implementation” is for these activities to work in a coordinated way, and that is already happening, to some degree, through the JPO. The synergy of the currently funded NSCI-relevant activities provides additional hope that the initiative will survive the transition.

Other pieces of good news include the fact that the staff at the White House’s OSTP is growing and, we understand, has been briefed on the initiative. We also heard that the White House’s Deputy Chief Technology Officer, Michael Kratsios, has been briefed on NSCI. Another very good sign was that on August 17th, Mike Mulvaney of OMB and Michael Kratsios issued the Administration’s R&D budget priorities. One of those, under the category of Military Superiority, was the call for the U.S. to maintain its leadership in future computing capabilities. Also, under the category of American Prosperity, the budget priorities expressed an interest in R&D in machine learning and quantum computing. Finally, there was direction given for the coordination of new R&D efforts to avoid duplication with existing efforts, which is what the NSCI JPO is already doing.

More specific information about the status of the NSCI will be available at the upcoming Birds of a Feather session at the SC17 conference (5:15 pm, Wed 11/15, Room 601). There, current members of the JPO (Mark Sims of DOD, William Harrod of DOE, and Irene Qualters of NSF) will be able to provide the latest and greatest about the initiative.

For the initiative to survive, the new administration will need to take ownership. Sometimes, with an administration shift, this may involve adjusting its scope. However, there have been previous initiatives that successfully made the administration leap intact (an example is the DOE Accelerated Strategic Computing Initiative (ASCI)). These tend to be initiatives that have a clear and compelling reason to exist and a sound organization that provides confidence that they will succeed.

Things continue to look good for funding the exascale program in the Trump administration. Also, the growth of large scale data analytics across the spectrum of government, industry, and academia probably means that there is a good chance that NSCI will survive the transition.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

The post SC17 Preview: The National Strategic Computing Initiative appeared first on HPCwire.

Appentra Raises a Funding Round of €400K

HPC Wire - Thu, 11/02/2017 - 09:15

Nov. 2, 2017 — Appentra, a technology company spun off from the University of Coruña in 2012, has raised a funding round of €400,000. The round was led by three Spanish venture capital organizations: Caixa Capital Risc, Unirisco and Xesgalicia.

The new funds will be used to accelerate the market uptake of Parallware tools, allowing the company to scale its team and to further improve the Parallware technology.

Appentra provides top-quality software tools that allow extensive use of High Performance Computing (HPC) techniques in all application areas of engineering, science and industry. Appentra’s target clients are companies and organizations that run frequently updated compute-intensive applications in markets like aerospace, automotive, civil engineering, biomedicine or chemistry.

“It is a privilege to have such a supportive group of investors that believe in our vision and in our team,” said Manuel Arenaz (CEO).

About Caixa Capital Risc

Caixa Capital Risc is the venture capital arm of Criteria Caixa, an investor that provides equity and convertible loans to innovative companies in their early stages. It manages capital of €195 million and invests mainly in Spanish companies in the industrial technology and healthcare/life sciences fields.

About Unirisco

Unirisco is a venture capital group promoting the creation of companies that make use of university knowledge. This is achieved through short-term investments in their financing or through other financial instruments, always with the criteria of profitability and job creation in mind.

About Xesgalicia

Xesgalicia is a Galician venture capital management firm. It finances company development through the temporary acquisition of minority stakes in the capital of unquoted companies. In addition, it may make ordinary or mezzanine loans to the companies in which it invests, through different venture capital funds and the assets of a venture capital company.

Source: Appentra

The post Appentra Raises a Funding Round of €400K appeared first on HPCwire.

NSF Selects Anne Kinney to Head Mathematical and Physical Sciences Directorate

HPC Wire - Thu, 11/02/2017 - 09:08

Nov. 2, 2017 — The National Science Foundation (NSF) has selected Dr. Anne Kinney to serve as head of the Directorate for Mathematical and Physical Sciences (MPS), which supports fundamental research in astronomy, chemistry, physics, materials science and mathematics.

Kinney has more than 30 years of leadership and management experience in the astronomical community. Since 2015, she has been serving as chief scientist at the W. M. Keck Observatory, which hosts the world’s largest optical and infrared telescopes. At Keck, she served as a liaison to the global scientific community, acting as an ambassador to the observatory’s entire user community.

Prior to that, Kinney held multiple positions at NASA’s Goddard Space Flight Center — most recently as Director of the Solar System Exploration Division, leading and managing a team of more than 350 people. Before moving to Goddard Space Flight Center, Kinney was director of the Universe Division at NASA Headquarters. She oversaw successful space missions that included the Hubble Space Telescope, the Spitzer Space Telescope, the Wilkinson Microwave Anisotropy Probe and the Galaxy Evolution Explorer.

“Anne Kinney arrives at a special moment in our quest to understand the universe — as excitement builds for a new era of multi-messenger astrophysics. And, as we look to convergence research to address some of the most challenging issues in science and engineering, all of the fields in the MPS directorate — mathematics, chemistry, materials science, physics and astronomy — play foundational and leading roles,” said NSF Director France Córdova. “Kinney has successfully brought together researchers, educators, students and other partners time and again to support significant scientific and engineering feats. I am thrilled to welcome her to the NSF leadership team, where her skills and experience will help us keep the U.S. at the forefront of scientific and technological excellence.”

MPS provides about 43 percent of the federal funding for basic research at academic institutions in the mathematical and physical sciences. The directorate serves the nation by supporting fundamental discoveries at the leading edge of science, with special emphasis on supporting early career investigators and advancing areas of science, including quantum information science, optics, photonics, clean energy, data science and more. The NSF-funded Laser Interferometer Gravitational-Wave Observatory (LIGO), which recently detected the collision of two neutron stars, has been supported by MPS for more than 40 years.

“MPS explores some of our most compelling scientific questions, and I am eager to add to the efforts of an agency that plays a key role in driving the U.S. economy, ensuring national security and enhancing the nation’s global leadership in innovation,” Kinney said. “Throughout my career, I’ve been fortunate to lead teams that have used knowledge gained from breakthroughs in fundamental science to enrich how we see and understand the universe. It’s exciting to think that my work at MPS will support research with the potential to fuel decades’ worth of future exploration.”

An expert in extragalactic astronomy, Kinney has published more than 80 papers on quasars, blazars, active galaxies and normal galaxies, and signatures of accretion disks in active galaxies. Her research showed that accretion disks in the center of active galaxies lie at random angles relative to their host galaxies.

Kinney has won numerous awards and honors, including the Presidential Rank Award for Meritorious Service, the NASA Medal for Outstanding Leadership and several NASA Group Achievement Awards for projects such as the Keck Observatory Archive, the James Webb Space Telescope, the Gamma-ray Large Area Space Telescope (now known as the Fermi Gamma-ray Space Telescope) and the Lunar Orbiter Laser Altimeter. An avid supporter of science communication and outreach, Kinney created the Space Telescope Science Institute education group — launching the online Amazing Space program — and has served on the editorial board of Astronomy Magazine since 1997.

Kinney has a bachelor’s degree in astronomy and physics from the University of Wisconsin and a doctorate in astrophysics from New York University. She studied in Denmark for several years at the Niels Bohr Institute, University of Copenhagen.

Kinney will begin her NSF appointment on Jan. 2, 2018.

Source: NSF

The post NSF Selects Anne Kinney to Head Mathematical and Physical Sciences Directorate appeared first on HPCwire.

XSEDE Hosting Collaboration Booth at Supercomputing 17

HPC Wire - Thu, 11/02/2017 - 09:06

Nov. 2, 2017 — The Extreme Science and Engineering Development Environment (XSEDE) will be hosting a collaboration booth at the Supercomputing Conference (SC17) in Denver, Colorado, from November 13th through 16th.

XSEDE, which is supported by the National Science Foundation (NSF), is the most advanced, powerful and robust collection of integrated digital resources and services in the world. XSEDE is inviting SC17 attendees to the XSEDE collaboration booth (#225) to learn about XSEDE-enabled projects, chat with XSEDE representatives, and see what type of resources and collaborations XSEDE offers researchers. XSEDE’s collaboration booth will also feature two half-day events to encourage XSEDE meetups and project collaborations.

Source: XSEDE

The post XSEDE Hosting Collaboration Booth at Supercomputing 17 appeared first on HPCwire.

Ellexus Launches Container Checker on AWS Marketplace

HPC Wire - Thu, 11/02/2017 - 09:01

CAMBRIDGE, England, Nov. 2, 2017 — Ellexus has launched Container Checker on Amazon Web Services’ Marketplace, a pioneering cloud-based tool that provides visibility into the inner workings of Docker containers.

Container Checker is the first tool aimed directly at the cloud market from Ellexus, the I/O profiling company. The software provider has seven years’ experience providing dependency analysis and performance tools in big-compute environments. Ellexus Container Checker brings this expertise to a much wider audience and will enable organisations of all sizes, in many sectors, to scale rapidly.

Using the tool is simple: spin up the container on an AWS machine, run the Container Checker trace, and receive your report when the trace is complete. The trace takes only as long as the application takes to run.

The following checks are included in the trace:

  • I/O performance: Small reads and writes? Excess metadata operations? Discover the flaws that are bringing your application to a standstill – and costing you in wasted cloud spend
  • Dependencies and security: What files does my application need to run? Are they inside the container or outside? Make sure you have everything you need and the container doesn’t access files or network locations that it shouldn’t
  • Time: Where is your application wasting time? Should you use a different cloud set-up? Find out if your container is a time waster.
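Container Checker’s own interface is not documented here, but the first check above — flagging small reads and writes — is straightforward to illustrate. The sketch below is a hypothetical miniature of such an I/O check, classifying strace-style syscall lines by transfer size; the function name and threshold are illustrative assumptions, not Ellexus APIs:

```python
import re

# Match strace-style lines like: read(3, "...", 512) = 512
# and capture the syscall name and the returned byte count.
SYSCALL_RE = re.compile(r'^(read|write)\(\d+, .*\)\s*=\s*(\d+)$')

def classify_io(trace_lines, small_threshold=4096):
    """Count small (< small_threshold bytes) vs. large reads/writes.

    Many small transfers usually indicate an I/O pattern that wastes
    cloud compute time; a 4 KiB cutoff is an illustrative choice.
    """
    small = large = 0
    for line in trace_lines:
        m = SYSCALL_RE.match(line.strip())
        if not m:
            continue  # ignore non-I/O syscalls
        nbytes = int(m.group(2))
        if nbytes < small_threshold:
            small += 1
        else:
            large += 1
    return small, large

sample = [
    'read(3, "...", 512) = 512',           # small read
    'write(4, "...", 1048576) = 1048576',  # 1 MiB write
    'read(3, "...", 64) = 64',             # small read
]
print(classify_io(sample))  # → (2, 1)
```

A real profiler traces the live process rather than parsing logs, but the report it produces boils down to aggregates like these.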

Currently available on AWS, the tool will soon be rolled out across other cloud platforms. Over time, Container Checker will also add support for more container types.

“We are extremely excited to launch our first tool aimed directly at improving the lives of cloud platform users,” said Ellexus CEO Dr Rosemary Francis.

“The use of both cloud compute clusters and containers is set to grow extremely quickly over the next few years, which undoubtedly means people will run into challenges as they adapt to new set-ups and try to scale quickly.

“Container Checker will help people using cloud platforms to quickly detect problems within their containers before they are let loose on the cloud to potentially waste time and compute spend. Estimates suggest that up to 45% of cloud spend is wasted due in part to unknown application activity and unsuitable storage decisions, which is what we want to help businesses tackle.”

Find out more at www.ellexus.com.

About Ellexus Ltd

Ellexus is an I/O profiling company, trusted by the world’s largest software vendors and high performance computing organisations. Its monitoring and profiling tools profile thousands of applications daily, improving performance by up to 35%.

Source: Ellexus

The post Ellexus Launches Container Checker on AWS Marketplace appeared first on HPCwire.

Asetek Announces OEM Partnership With E4 Computer Engineering and Installation

HPC Wire - Thu, 11/02/2017 - 08:26

OSLO, Norway, Nov. 2, 2017 — Asetek (ASETEK.OL) today announced E4 Computer Engineering, an Italian technology provider of solutions for HPC, data analytics and AI, as a new data center OEM partner. E4 Computer Engineering has utilized Asetek RackCDU D2C (Direct-to-Chip) liquid cooling for the D.A.V.I.D.E. SUPERCOMPUTER in Italy. This follows the announcement of an undisclosed OEM partner and installation on July 14, 2017.

You can read more about the partnership at Asetek.com.

About Asetek

Asetek is a global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information, visit www.asetek.com

Source: Asetek

The post Asetek Announces OEM Partnership With E4 Computer Engineering and Installation appeared first on HPCwire.

Huawei Partners with Intel to Build a Supercomputing Cluster for Technical University of Denmark

HPC Wire - Thu, 11/02/2017 - 08:17

SHENZHEN, China, Nov. 2, 2017 — For almost two centuries DTU, the Technical University of Denmark, has been dedicated to fulfilling the vision of H.C. Ørsted – the father of electromagnetism – who founded the university in 1829 to develop and create value using the natural sciences and the technical sciences to benefit society. Today, DTU is ranked as one of the foremost technical universities in Europe.

High-Performance Computing Propels Materials Research

DTU promotes promising fields of research within the technical and the natural sciences, especially based on usefulness to society, relevance to business and sustainability. DTU focuses on basic science that has significant challenges and clear application prospects, from atomic-scale materials analysis to quantum physics and renewable energy. As the material application environment becomes increasingly complex, laboratory research for materials performance analysis has become even more challenging.

DTU aims to understand the nature of materials by developing electron structural theory, and to design new functional nanostructures through new-found insights. These studies require the analysis of the structure, strength, and characteristics of new materials, involving intensive, complex numerical computation and simulation tests on materials and energy, which produce vast amounts of computational data. Therefore, High-Performance Computing (HPC) resources that can accelerate performance modeling and solving are particularly important to research in this field.

In order to speed up the process from discovery to application of new materials and maintain a leading edge in research, DTU plans to expand and upgrade its supercomputing cluster, Niflheim, which is deployed at the Computational Atomic-scale Materials Design (CAMD) Center.

Combining the Best of Both Worlds: Huawei X6800 High-Density Server and Intel OPA Network

The existing Niflheim cluster at DTU was built between 2009 and 2015 and had a peak performance of only 73 TFLOPS. The cluster was equipped with previous-generation and even older hardware: the oldest products had limited processor performance and small memory capacity, with a low-bandwidth, high-latency computing network. The old cluster could no longer meet the growing demands of compute-intensive simulation tests, and had become a bottleneck for the CAMD center’s research efficiency.

DTU wanted to deploy a new supercomputing system to boost the Niflheim cluster’s computing resources and performance, while also preparing the cluster for future technology evolution and cluster-scale expansion. DTU carefully studied various solutions in terms of overall performance, product quality, and service capabilities, and through an EU tender finally selected Huawei and Intel as the vendors to help the university build a new-generation computing cluster with their innovative technologies and computing products.

Solution Highlights

Supreme Performance, Leading Computing Efficiency:

Nodes configured with Intel® Xeon® E5-2600 v4 series processors, delivering up to 845 GFLOPS of compute power per node;

Nodes configured with 256 GB DIMMs and 240 GB SSDs, eliminating I/O bottlenecks and improving data processing efficiency through high-speed data caching;

Leverages the Intel® Omni-Path Architecture (OPA) to build a two-layer fat-tree fabric, delivering bandwidth of up to 100 Gbit/s and end-to-end latency as low as 910 ns;

Power Supply Units (PSUs) and fan modules shared across multiple nodes, enhanced with Huawei’s Dynamic Energy Management Technology (DEMT) to lower system energy consumption by over 10%.
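The 845 GFLOPS per-node figure follows from the standard peak formula: sockets × cores per socket × clock frequency × double-precision FLOPs per cycle (16 for this generation, via two AVX2 FMA units). The exact SKU is not stated in the release; the numbers are consistent with dual 12-core processors at 2.2 GHz (e.g. the E5-2650 v4), which is an assumption in the sketch below:

```python
def peak_gflops(sockets, cores_per_socket, ghz, flops_per_cycle=16):
    """Theoretical peak double-precision GFLOPS of a node.

    flops_per_cycle=16 reflects Broadwell-class AVX2:
    2 FMA units x 4 doubles per vector x 2 ops (multiply+add).
    """
    return sockets * cores_per_socket * ghz * flops_per_cycle

# Assumed configuration: dual 12-core Xeons at 2.2 GHz per node.
node = peak_gflops(sockets=2, cores_per_socket=12, ghz=2.2)
print(round(node, 1))  # → 844.8, matching the quoted ~845 GFLOPS
```

Real application performance is of course well below this theoretical ceiling, which is why the release distinguishes peak figures from measured cluster behavior.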

High-Density Deployment, Easy to Manage and Expand:

4U chassis configured with eight 2-socket compute nodes, delivering twice the computing density of traditional 1U rack servers and significantly improving rack space utilization;

Supports an aggregated management network port for unified management while reducing cable connections;

Adopts a modular design with hot-swap support for all key components, greatly improving Operations and Maintenance (O&M) efficiency.

New-Generation Niflheim Cluster Expedites New Material Discovery and Application

The new-generation Niflheim cluster went live in December 2016. The new cluster not only helps more researchers carry out research and analysis on new materials and new energy, but also greatly speeds the turnaround of test results. It has enabled new levels of scientific research progress and strength, helping DTU build new innovation capabilities in the field of materials analysis.

The Niflheim cluster delivers a computing power of up to 225 TFLOPS, which is three times the level of the original system;

Substantially shortens materials analysis time, enabling researchers to discover and apply new materials more quickly;

With flexible expandability, the cluster can seamlessly expand up to 112 nodes without requiring additional new cabinets.

Source: Huawei

The post Huawei Partners with Intel to Build a Supercomputing Cluster for Technical University of Denmark appeared first on HPCwire.
