Feed aggregator

System Fabric Works and ThinkParQ Partner for Parallel File System

HPC Wire - Thu, 12/07/2017 - 09:59

AUSTIN, Texas, and KAISERSLAUTERN, Germany, Dec. 7, 2017 — Today System Fabric Works (SFW) announced its support and integration of the BeeGFS file system with the latest NetApp E-Series All Flash and HDD storage systems. The integration makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of SFW’s Converged Infrastructure solutions for high-performance Enterprise Computing, Data Analytics and Machine Learning.

“We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Kevin Moran, President and CEO, System Fabric Works. “Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS, engineered with NetApp E-Series, a choice of InfiniBand, Omni-Path, RDMA over Ethernet and NVMe over Fabrics for targeted performance and 99.9999% reliability, utilizing customer-chosen clustered servers and clients and SFW’s services for architecture, integration, acceptance and ongoing support.”

SFW’s solutions can utilize each of these networking topologies for optimal BeeGFS performance and 99.9999% reliability in full turnkey deployments, adapted to customer-chosen clustered servers and clients. SFW provides architecture, integration, acceptance and ongoing support services.

BeeGFS, delivered by ThinkParQ, is the leading parallel cluster file system designed specifically for I/O-intensive workloads in performance-critical environments. With a strong focus on performance and high flexibility, including converged environments where storage servers are also used for computing, BeeGFS helps customers worldwide increase their productivity by delivering results faster and by enabling analysis methods that would not be possible without its specific advantages.

Designed for very easy installation and management, BeeGFS transparently spreads user data across multiple servers. Users can therefore scale performance and capacity to the desired level simply by increasing the number of servers and disks in the system, seamlessly from small clusters up to enterprise-class systems with thousands of nodes. BeeGFS, which is available as open source, powers the storage of hundreds of scientific and industrial customer sites worldwide.
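
The scaling behavior described above comes from striping: a file is split into fixed-size chunks that are distributed across the storage servers, so adding servers adds both capacity and aggregate bandwidth. A minimal Python sketch of the idea (the chunk size and round-robin placement here are purely illustrative, not BeeGFS's actual defaults):

```python
# Toy illustration of file striping across storage servers.
# Chunks are placed round-robin; reading the file back merges
# the per-server chunk lists by offset.

CHUNK_SIZE = 4  # bytes; tiny for illustration (real systems use e.g. 512 KiB)

def stripe(data, num_servers, chunk_size=CHUNK_SIZE):
    """Return a per-server list of (offset, chunk) pairs."""
    servers = [[] for _ in range(num_servers)]
    for i in range(0, len(data), chunk_size):
        servers[(i // chunk_size) % num_servers].append((i, data[i:i + chunk_size]))
    return servers

def reassemble(servers):
    """Merge the per-server chunk lists back into the original byte stream."""
    chunks = sorted(c for per_server in servers for c in per_server)
    return b"".join(chunk for _, chunk in chunks)

data = b"hello parallel file systems!"
placed = stripe(data, num_servers=3)
assert reassemble(placed) == data  # striping is transparent to the reader
```

Because each server holds only every Nth chunk, N servers can stream their chunks in parallel, which is where the "scale performance by adding servers" property comes from.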

Sven Breuner, CEO of ThinkParQ, stated, “The long experience and solid track record of System Fabric Works in the field of enterprise storage makes us proud of this new partnership. Together, we can now deliver perfectly tailored solutions that meet and exceed customer expectations, no matter whether the customer needs a traditional spinning disk system for high capacity, an all-flash system for maximum performance or a cost-effective hybrid solution with pools of spinning disks and flash drives together in the same file system.”

Due to its performance-tuned design and various optimized features, BeeGFS is ideal for the demanding, high-performance, high-throughput workloads found in technical computing: modeling and simulation, product engineering, life sciences, deep learning, predictive analytics, media, financial services and many other business-critical applications.

With the new storage pools feature in BeeGFS v7, users can now have their current project pinned to the latest NetApp E-Series All Flash SSD pool for the full performance of an all-flash system, while the rest of the data resides on spinning disks, where it can also be accessed directly – all within the same namespace and thus completely transparent to applications.
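
Conceptually, a storage pool attaches a tier label to part of a single namespace: applications see one file system, while pinned subtrees land on different hardware. A toy Python model of that idea (the class, method and pool names are invented for illustration; they are not BeeGFS APIs):

```python
# Toy model of a pooled namespace: one file tree, with directory
# subtrees pinned to different backing pools (e.g. flash vs. HDD).

class PooledNamespace:
    def __init__(self, default_pool="hdd"):
        self.default_pool = default_pool
        self.pins = {}  # directory prefix -> pool name

    def pin(self, prefix, pool):
        """Pin everything under `prefix` to `pool` (e.g. an all-flash tier)."""
        self.pins[prefix] = pool

    def pool_for(self, path):
        """Longest matching pinned prefix wins; otherwise the default pool."""
        best = max((p for p in self.pins if path.startswith(p)),
                   key=len, default=None)
        return self.pins[best] if best else self.default_pool

ns = PooledNamespace()
ns.pin("/projects/current", "flash")     # current project on the SSD pool
print(ns.pool_for("/projects/current/run1.dat"))  # flash
print(ns.pool_for("/archive/old.dat"))            # hdd
```

The key property, mirrored above, is that path names never change when data is pinned to a different tier, which is why the feature is transparent to applications.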

SFW BeeGFS solutions can be based on the x86_64 and ARM64 ISAs, support multiple networks with dynamic failover, provide fault tolerance with built-in replication, and come with additional file system integrity and storage reliability features. Another compelling part of the solution offerings is BeeOND (BeeGFS On Demand), which allows on-the-fly creation of temporary parallel file system instances on the internal SSDs of compute nodes on a per-job basis for burst buffering. Graphical monitoring and an additional command-line interface provide easy management for any kind of environment.

SFW BeeGFS high-performance storage solutions, with architectural design, implementation and ongoing support services, are immediately available from System Fabric Works.

About ThinkParQ

ThinkParQ was founded as a spin-off from the Fraunhofer Center for High Performance Computing by the key people behind BeeGFS to bring fast, robust, scalable storage to market. ThinkParQ is responsible for support, provides consulting, organizes and attends events, and works together with system integrators to create turn-key solutions. ThinkParQ and Fraunhofer internally cooperate closely to deliver high quality support services and to drive further development and optimization of BeeGFS for tomorrow’s performance-critical systems. Visit www.thinkparq.com to learn more about the company.

About System Fabric Works

System Fabric Works (“SFW”), based in Austin, TX, specializes in delivering engineering, integration and strategic consulting services to organizations that seek to implement high performance computing and storage systems, low latency fabrics and the necessary related software. Derived from its 15 years of experience, SFW also offers custom integration and deployment of commodity servers and storage systems at many levels of performance, scale and cost effectiveness that are not available from mainstream suppliers. SFW personnel are widely recognized experts in the fields of high performance computing, networking and storage systems particularly with respect to OpenFabrics Software, InfiniBand, Ethernet and energy saving, efficient computing technologies such as RDMA. Detailed information describing SFW’s areas of expertise and corporate capabilities can be found at www.systemfabricworks.com.

Source: System Fabric Works

The post System Fabric Works and ThinkParQ Partner for Parallel File System appeared first on HPCwire.

Call for Sessions and Registration Now Open for 14th Annual OpenFabrics Alliance Workshop

HPC Wire - Thu, 12/07/2017 - 09:06

BEAVERTON, Ore., Dec. 6, 2017 — The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 14th annual OFA Workshop, taking place April 9-13, 2018, in Boulder, CO. The OFA Workshop is a premier means of fostering collaboration among those who develop fabrics, deploy fabrics and create applications that rely on fabrics. It is the only event of its kind where fabric developers and users can discuss emerging fabric technologies, collaborate on future industry requirements, and address problems that exist today. In support of advancing open networking communities, the OFA is proud to announce that Promoter Member Los Alamos National Laboratory, a strong supporter of collaborative development of fabric technologies, will underwrite a portion of the Workshop. For more information about the OFA Workshop and to find support opportunities, visit the event website.

Call for Sessions

The OFA Workshop 2018 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high performance networking issues. Sessions are designed to educate attendees on current development opportunities, troubleshooting techniques, and disruptive technologies affecting the deployment of high performance computing environments. The OFA Workshop places a high value on collaboration and exchanges among participants. In keeping with the theme of collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.

The deadline to submit session proposals is February 16, 2018, at 5:00 p.m. PST. For a list of recommended session topics, formats and submission instructions, download the official OFA Workshop 2018 Call for Sessions flyer.


Early bird registration is now open for all participants of the OFA Workshop 2018. For more information on event registration and lodging, visit the OFA Workshop 2018 Registration webpage.

Dates: April 9-13, 2018

Location: Embassy Suites by Hilton Boulder, CO

Registration Site: http://bit.ly/OFA2018REG

Registration Fee: $695 (Early Bird to March 19, 2018), $815 (Regular)

Lodging: Embassy Suites room discounts available until 6:00 p.m. MDT on Monday, March 19, 2018, or until room block is filled.

About the OpenFabrics Alliance

The OpenFabrics Alliance (OFA) is a 501(c)(6) non-profit company that develops, tests, licenses and distributes the OpenFabrics Software (OFS) – multi-platform, high performance, low-latency and energy-efficient open-source RDMA software. OpenFabrics Software is used in business, operational, research and scientific infrastructures that require fast fabrics/networks, efficient storage and low-latency computing. OFS is free and is included in major Linux distributions, as well as Microsoft Windows Server 2012. In addition to developing and supporting this RDMA software, the Alliance delivers training, workshops and interoperability testing to ensure all releases meet multivendor enterprise requirements for security, reliability and efficiency. For more information about the OFA, visit www.openfabrics.org.

Source: OpenFabrics Alliance


Cray and NERSC Partner to Drive Advanced AI Development at Scale

HPC Wire - Wed, 12/06/2017 - 16:54

SEATTLE, December 6, 2017 – Global supercomputer leader Cray Inc. today announced the company has joined the Big Data Center (BDC) at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC). The collaboration between the two organizations is representative of Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of Artificial Intelligence (AI), deep learning, and data-intensive computing.

The BDC at NERSC was established with a goal of addressing the Department of Energy’s leading data-intensive science problems, harnessing the performance and scale of the Cray XC40 “Cori” supercomputer at NERSC. The collaboration is focused on three fundamental areas that are key to unlocking the capabilities required for the most challenging data-intensive workflows:

· Advancing the state-of-the-art in scalable deep learning training algorithms, which is critical to the ability to train models as quickly as possible in an environment of ever-increasing data sizes and complexity;

· Developing a framework for automated hyperparameter tuning, which provides optimized training of deep learning models and maximizes a model’s predictive accuracy;

· Exploring the use of deep learning techniques and applications against a diverse set of important scientific use cases, such as genomics and climate change, which broadens the range of scientific disciplines where advanced AI can have an impact.

“We are really excited to have Cray join the Big Data Center,” said Prabhat, Director of the Big Data Center, and Group Lead for Data and Analytics Services at NERSC. “Cray’s deep expertise in systems, software, and scaling is critical in working towards the BDC mission of enabling capability applications for data-intensive science on Cori. Cray and NERSC, working together with Intel and our IPCC academic partners, are well positioned to tackle performance and scaling challenges of Deep Learning.”

“Deep learning is increasingly dependent on high performance computing, and as the leader in supercomputing, Cray is focused on collaborating with the innovators in AI to address present and future challenges for our customers,” said Per Nyberg, Cray’s Senior Director of Artificial Intelligence and Analytics. “Joining the Big Data Center at NERSC is an important step forward in fostering the advancement of deep learning for science and enterprise, and is another example of our continued R&D investments in AI.”

About the Big Data Center at NERSC

The Big Data Center is a collaboration between the Department of Energy’s National Energy Research Scientific Computing Center (NERSC), Intel, and five Intel Parallel Computing Centers (IPCCs). The five IPCCs that are part of the Big Data Center program include the University of California-Berkeley, the University of California-Davis, New York University (NYU), Oxford University, and the University of Liverpool.  The Big Data Center program was established in August 2017.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.


Graves accepts International Distinguished Achievement Award from SPE

Colorado School of Mines - Wed, 12/06/2017 - 16:52

The Society of Petroleum Engineers has named College of Earth Resource Sciences and Engineering Dean Ramona Graves as the 2017 recipient of the International Distinguished Achievement Award for Petroleum Engineering Faculty.

SPE presents international awards to recognize those who have made significant technical and professional contributions to the industry and contributed exceptional service and leadership to the society. 

Graves received the award “for her significant scientific achievements in the areas of laser-rock interaction, for dedication to students, teaching and the teaching profession, and for furthering cross-functional cooperation.”

Graves is a Mines alumna, and the second woman in the country to earn a doctorate in petroleum engineering.

“It really is an honor to receive an award for doing something that I absolutely love for the last 40—almost 40—years,” said Graves after receiving her award.

Graves went on to thank colleagues and family, saying she owed a special debt of gratitude to the women in her life.

The award was presented by SPE President Janeen Judah at the Annual Awards Banquet during the Annual Technical Conference and Exhibition, October 9-11, 2017, in San Antonio, Texas.

Watch Graves’ entire thank you speech here.

Contact: Agata Bogucka, Communications Manager, College of Earth Resource Sciences & Engineering | 303-384-2657 | abogucka@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Categories: Partner News

Mines student inspires future University Innovation Fellows

Colorado School of Mines - Wed, 12/06/2017 - 15:54

A Colorado School of Mines student spent nearly a week before Thanksgiving working to educate and inspire the next generation of University Innovation Fellows, including the four newest fellowship candidates from Mines.

Asya Sergoyan, a chemical engineering major, was one of 24 current fellows invited back to facilitate the UIF’s Silicon Valley meetup, which this November brought together 350 young innovators for immersive experiences at Stanford University’s d.school and Google. Aspiring fellows also receive six weeks of training online, which has been described as similar to a four-credit course.

“It was interesting to be on the other side,” Sergoyan said. “It was a much larger group than ever before and very international—students from India, a lot of students from South America. It was really cool.”

Sergoyan facilitated a workshop about integrating music into K-12 education. She also delivered a four-minute “Ignite” speech about learning from failure, shifting one’s perspective and using what one has learned to succeed in the future.

For her presentation—15 slides at 15 seconds per slide—Sergoyan drew upon her experience attempting to translate her success with a nonprofit organization she cofounded in high school to Mines.

Grades for Change provided free science and English tutoring to K-8 students. As a freshman at Mines, Sergoyan hoped to do something similar and encourage fellow college students to promote STEM education at local high schools. “We hosted meetings, but no one was ever interested,” Sergoyan said. “Students were too busy, and they didn’t want to do it for free. I realized that this isn’t what the campus needs, but there are other things it does.”

Sergoyan emphasized three ideas in her speech: “You always learn more about yourself from failure; failure and success aren’t discrete; and recognizing failure is a success in itself,” she said.

Sergoyan was one of six Mines students named University Innovation Fellows in February 2016. Before that, only one Mines student had taken part in the program, which seeks to empower students to become agents of change at their schools.

That cohort’s accomplishments on campus include the creation of maker spaces, the innovation competition sponsored by Newmont and a section of freshman orientation devoted to innovation activities. They’re also organizing a regional UIF meetup on campus next September. “We want to fly a bunch of University Innovation Fellows in from all over the country, maybe the world, to see our campus and work with our students to brainstorm things around poverty and the needs of developing countries,” Sergoyan said.

Even though she was at the Silicon Valley meetup to help the newest fellows, Sergoyan found plenty of inspiration herself. “It was the most incredible time, and I’ve met the most incredible people—people who do the craziest jobs, who have started their own companies, who have gone through tragic events. I saw people with cultural differences who were focused on the same ideas.”

“Everybody was so willing to devote their time to help everybody with their speeches,” she said. “It was such a good community that I just want to go back there.”

Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu


IBM Begins Power9 Rollout with Backing from DOE, Google

HPC Wire - Wed, 12/06/2017 - 15:37

After over a year of buildup, IBM is unveiling its first Power9 system based on the same architecture as the Department of Energy CORAL supercomputers Summit and Sierra. The new AC922 server pairs two Power9 CPUs with between four and six Nvidia Tesla V100 NVLink GPUs. IBM is positioning the Power9 architecture as “a game-changing powerhouse for AI and cognitive workloads.”

The AC922 extends many of the design elements introduced in Power8 “Minsky” boxes with a focus on enabling connectivity to a range of accelerators – Nvidia GPUs, ASICs, FPGAs, and PCIe-connected devices – using an array of interfaces. In addition to being the first servers to incorporate PCIe Gen4, the new systems support the NVLink 2.0 and OpenCAPI protocols, which offer nearly 10x the maximum bandwidth of PCIe Gen3-based x86 systems, according to IBM.

IBM AC922 rendering

“We designed Power9 with the notion that it will work as a peer computer or a peer processor to other processors,” said Sumit Gupta, vice president of AI and HPC within IBM’s Cognitive Systems business unit, ahead of the launch. “Whether it’s GPU accelerators or FPGAs or other accelerators that are in the market, our aim was to provide the links and the hooks to give all these accelerators equal footing in the server.”

In the coming months and years there will be additional Power9-based servers to follow from IBM and its ecosystem partners but this launch is all about the flagship AC922 platform and specifically its benefits to AI and cognitive computing – something Ken King, general manager of OpenPOWER for IBM Systems Group, shared with HPCwire when we sat down with him at SC17 in Denver.

“We didn’t build this system just for doing traditional HPC workloads,” King said. “When you look at what Power9 has with NVLink 2.0, we’re going from 80 gigabytes per second throughput [in NVLink 1.0] to over 150 gigabytes per second throughput. PCIe Gen3 only has 16. That GPU-to-CPU I/O is critical for a lot of the deep learning and machine learning workloads.”
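
The ratio King cites can be checked directly; it is the same whether the figures are read as gigabits or gigabytes per second, and it matches the "nearly 10x" claim made earlier in the article:

```python
# Throughput figures as quoted (per second; the ratio is unit-independent)
nvlink1 = 80    # NVLink 1.0, Power8 "Minsky" generation
nvlink2 = 150   # NVLink 2.0, Power9 AC922
pcie3   = 16    # PCIe Gen3 x16, typical x86 CPU-GPU link

print(f"NVLink 2.0 vs PCIe Gen3: {nvlink2 / pcie3:.1f}x")    # 9.4x
print(f"NVLink 2.0 vs NVLink 1.0: {nvlink2 / nvlink1:.2f}x")  # 1.88x
```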

Coherency, which Power9 introduces via both CAPI and NVLink 2.0, is another key enabler. As AI models grow large, they can easily outgrow GPU memory capacity but the AC922 addresses these concerns by allowing accelerated applications to leverage system memory as GPU memory. This reduces latency and simplifies programming by eliminating data movement and locality requirements.

The AC922 server can be configured with either four or six Nvidia Volta V100 GPUs. According to IBM, a four-GPU air-cooled version will be available December 22, and both four- and six-GPU water-cooled options are expected to follow in the second quarter of 2018.

While the new Power9 boxes have gone by a couple different codenames (“Witherspoon” and “Newell”), we’ve also heard folks at IBM refer to them informally as their Summit servers and indeed there is great visibility in being the manufacturer for what is widely expected to be the United States’ next fastest supercomputer. Thousands of the AC922 nodes are being connected together along with storage and networking to drive approximately 200 petaflops at Oak Ridge and 120 petaflops at Lawrence Livermore.

As King pointed out in our interview, only one of the original CORAL contractors is fulfilling its mission to deliver a “pre-exascale” supercomputer to the collaboration of US labs.

IBM has also been tapped by Google, which with partner Rackspace is building a server with Power9 processors called Zaius. In a prepared statement, Bart Sano, vice president of Google Platforms, praised “IBM’s progress in the development of the latest POWER technology” and said “the POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers.”

IBM sees the hyperscale market as “a good volume opportunity” but is obviously aware of the impact that volume pricing has had on the traditional server market. “We do see strong pull from them, but we have many other elements in play,” said Gupta. “We have solutions that go after the very fast-growing AI space, we have solutions that go after the open source databases, the NoSQL databases. We have announced a partnership with Nutanix to go after the hyperconverged space. So if you look at it, we have lots of different elements that drive the volume and opportunity around our Linux on Power servers, including of course SAP HANA.”

IBM will also be selling Power9 chips through its OpenPower ecosystem, which now encompasses 300 members. IBM says it’s committed to deploying three versions of the Power9 chip, one this year, one in 2018 and another in 2019. The scale-out variant is the one it is delivering with CORAL and with the AC922 server. “Then there will be a scale-up processor, which is the traditional chip targeted towards the AIX and the high-end space and then there’s another one that will be more of an accelerated offering with enhanced memory and other features built into it; we’re working with other memory providers to do that,” said King.

He added that there might be another version developed outside of IBM, leveraging OpenPower, which gives other organizations the opportunity to utilize IBM’s intellectual property to build their own differentiated chips and servers.

King is confident that the demand for IBM’s latest platform is there. “I think we are going to see strong out of the chute opportunities for Power9 in 2018. We’re hoping to see some growth this quarter with the solution that we’re bringing out with CORAL but that will be more around the ESP customers. Next year is when we’re expecting that pent up demand to start showing positive return overall for our business results.”

A lot is riding on the success of Power9 after Power8 failed to generate the kind of profits that IBM had hoped for. There was growth in the first year, said King, but after that Power8 started declining. He added that capabilities like the Nutanix partnership and building PowerAI and other software-based solutions on top of the platform have led to a bit of a rebound. “It’s still negative but it’s low negative,” he said, “but it’s sequentially grown quarter to quarter in the last three quarters since Bob Picciano [SVP of IBM Cognitive Systems] came on.”

Several IBM reps we spoke with acknowledged that pricing or at least pricing perception was a problem for Power8.

“For our traditional market I think pricing was competitive; for some of the new markets that we’re trying to get into like the hyperscaler datacenters I think we’ve got some work to do,” said King. “It’s really a TCO and a price-performance competitiveness versus price only. And we think we’re going to have a much better price performance competitiveness with Power9 in the hyperscalers and some of the low-end Linux spaces that are really the new markets.”

“We know what we need to do for Power9 and we’re very confident with a lot of the workload capabilities that we’ve built on top of this architecture that we’re going to see a lot more growth, positive growth on Power9, with PowerAI with Nutanix with some of the other workloads we’ve put in there and it’s not going to be a hardware only reason,” King continued. “It’s going to be a lot of the software capabilities that we’ve built on top of the platform, and supporting more of the newer workloads that are out there. If you look at the IDC studies of the growth curve of cognitive infrastructure it goes from about $1.6 billion to $4.5 billion over the next two or three years – it’s a huge hockey stick – and we have built and designed Power9 for that market, specifically and primarily for that market.”


Researchers win NASA funding for small spacecraft technology

Colorado School of Mines - Wed, 12/06/2017 - 15:23

A pair of researchers from Colorado School of Mines was one of nine university teams selected for NASA funding to develop and demonstrate new technologies and capabilities for small spacecraft.

Qi Han, associate professor of computer science, and Christopher Dreyer, research assistant professor of mechanical engineering, will receive $200,000 in funding per year for two years through NASA’s Smallsat Technology Partnerships Initiative. Working with two collaborators from NASA’s Jet Propulsion Laboratory in Pasadena, California, their focus will be developing and evaluating algorithms for dynamic spacecraft networking and network-aware coordination of multi-spacecraft swarms.

“This project aims to develop a framework for tight integration of communication and controls as an enabling technology for NASA to effectively deploy swarms of small spacecraft,” Han said. “This framework will make it possible for a network of self-organizing small spacecraft to be highly collaborative among themselves for the monitoring of time-varying and geographically distributed phenomena.”

Current deep-space missions face several challenges, including intermittent network connectivity, stringent bandwidth constraints and diverse quality-of-service (QoS) and quality-of-data (QoD) requirements, she said. 

“The use of a single platform creates non-optimal data-gathering conditions, thus requiring longer duration to meet science requirements,” Han said. “For example, during the NEAR [Near Earth Asteroid Rendezvous] mission, the orbit was a compromise resulting in non-optimal data-gathering conditions for most instruments. Up to a third of the time, communicating with the Earth required maneuvering the spacecraft so that the asteroid was no longer in the instruments’ field of view.”

The distributed spacecraft network proposed by the Mines team would deploy a carrier spacecraft with larger storage and processing capabilities along with the swarm of small spacecraft in orbit about a near-Earth asteroid. 

“The carrier spacecraft is dedicated to data transfer, so it is responsible for sending data gathered by all the spacecraft to the deep space network,” Han said. “This setup will make sure that the spacecraft swarm can collect measurements uninterrupted in the shortest period of time.” 
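
A rough Python sketch of the carrier-relay idea Han describes, in which only the carrier talks to the Deep Space Network and buffers what the swarm collects; all class and method names here are illustrative, not from the Mines/JPL project:

```python
# Conceptual store-and-forward model: swarm spacecraft hand measurements
# to a carrier with larger storage, and the carrier downlinks them to the
# Deep Space Network (DSN) during bandwidth-constrained contact passes.

class Carrier:
    def __init__(self):
        self.buffer = []  # store-and-forward queue of (craft_id, measurement)

    def receive(self, craft_id, measurement):
        """A swarm spacecraft offloads a measurement to the carrier."""
        self.buffer.append((craft_id, measurement))

    def downlink(self, link_budget):
        """Send up to `link_budget` buffered items to the DSN in one pass."""
        sent, self.buffer = self.buffer[:link_budget], self.buffer[link_budget:]
        return sent

carrier = Carrier()
for craft in ("sat-1", "sat-2", "sat-3"):
    carrier.receive(craft, {"spectra": [1, 2, 3]})

first_pass = carrier.downlink(link_budget=2)  # constrained DSN contact
print(len(first_pass), len(carrier.buffer))   # 2 sent, 1 still buffered
```

Because the swarm never waits on the DSN link, the small spacecraft can keep collecting measurements uninterrupted, which is the property the quote above emphasizes.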

As part of the project, researchers will also evaluate and demonstrate an integrated prototype system, using a team of unmanned aerial drones in the challenging wireless network environment of the Edgar Experimental Mine.

“The work nicely complements efforts at Mines to expand research and teaching in space-related fields, such as the Mines and Lockheed Martin software academy and the Space Resources Graduate Program,” said Dreyer, who works in the Center for Space Resources at Mines.

Other universities to receive funding through NASA’s Smallsat initiative are Massachusetts Institute of Technology; Stanford University; Purdue University; Utah State University; University of Arizona; University of Illinois, Urbana-Champaign; and University of Washington. Proposals were requested in three areas – instrument technologies for small spacecraft, technologies that enable large swarms of small spacecraft and technologies that enable deep-space small spacecraft missions. 

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu


University of Oregon Uses Supercomputer to Power New Research Advanced Computing Facility

HPC Wire - Wed, 12/06/2017 - 13:31

Dec. 6, 2017 — A supercomputer that can perform more than 250 trillion calculations per second is powering the UO’s leap into data science as the heart of a new $2.2 million Research Advanced Computing Services facility.

Known as Talapas, the powerhouse machine is one of the fastest academic supercomputers in the Northwest. Its computing horsepower will aid researchers doing everything from statistical studies to genomic assemblies to quantum chemistry.

“It’s already had a profound impact on my research,” said Eric Corwin, an associate professor in the Department of Physics who employed the center’s supercomputer to examine a physical process known as “jamming.” “Computational tasks that would otherwise have taken a year running on a lab computer can be finished in just a few days, which means we can be much more exploratory in our approach, which has already led to several unexpected discoveries.”

The new center, which opens officially Dec. 6, is already available to faculty members who register as principal investigators or to members of registered research teams. The center, known by the acronym RACS, offers access to large-scale computing and will soon add high-speed data transfer capabilities, support for data sharing and other services.

In addition to boosting the university’s capacity for big data, the new center opens new doors of discovery for faculty across the spectrum of disciplines, schools, colleges and departments. Director Nick Maggio says the center will also help train students for the careers of tomorrow and make the UO more competitive in recruiting new faculty and securing research funding.

“It allows our researchers to evaluate novel technologies and explore new paradigms of computing that weren’t available to them before,” Maggio said. “We’re here to lower every barrier possible so that research computing can flourish at the University of Oregon.”

Talapas is 10 times more powerful than its aging predecessor, ACISS. In just the first few months of testing, the center has helped faculty members performing molecular dynamics simulations, image analysis, machine learning, deep learning and other types of projects.

Bill Cresko, a professor in the Department of Biology who serves as an associate vice president for research, directs the UO’s Presidential Initiative in Data Science. He points to the high-performance computing center as a crucial element of the initiative.

The center will bring together existing faculty and recruit new faculty across the UO’s schools and colleges to create new research and education programs. The center and the initiative are funded through the $50 million Presidential Fund for Excellence announced earlier this year by UO President Michael Schill.

“Research is becoming more and more data-intensive every day, and it’s crucial that we have the capacity to perform the kinds of larger and larger simulations that the high-performance computing center enables,” Cresko said. “The center will play a key role in our continued success as a research institution and our commitment to discovery and innovation.”

The Research Advanced Computing Services center has a staff of four that includes Maggio, a computational scientist and two system administrators. Among other things, the team has been tasked with transitioning users off of the old system and bringing researchers up to speed with the powerful new technology.

The center is one of nine research core facilities supported by the UO’s Office of the Vice President of Research and Innovation.

“It’s a real jewel in the midst of our growing research enterprise,” said David Conover, UO’s vice president for research and innovation. “It goes a long way toward our goal of advancing transformative excellence in research, innovation and graduate education, and it’s exciting to think about all of the new discoveries and new collaborations that will grow out of the facility.”

Although the machine is physically located in the basement of the Allen Hall Data Center, researchers from any department can sign up to access its services from their desktops. Increasingly, Maggio said, researchers who previously didn’t use such computational approaches are becoming computational researchers after encountering new research projects that quickly overwhelm the limits of their local resources.

“With the data explosion that’s occurred over the last 10 years, new opportunities for computational research exist in every field,” Maggio said. “There is no such thing as a non-computational discipline anymore.”

Maggio credits Schill and the UO Board of Trustees with seeing the importance of high-performance computing and prioritizing the funding and creation of the new center in under two years. Joe Sventek, head of the Department of Computer and Information Science, led a faculty committee that developed plans for acquiring the computational hardware, implemented the hiring of key staff such as Maggio and helped launch RACS — all in record time.

“The fact that Joe and the committee completed this task so quickly is simply amazing,” Conover said.

Looking to the future, Maggio envisions more and more researchers accessing the new facility.  Already, more than 300 different lab members from nearly 80 labs have requested access, and high-performance computing will likely play an increasing role in powering new research initiatives, such as the Phil and Penny Knight Campus for Accelerating Scientific Impact.

“This is the fastest and largest computing asset that the University of Oregon has ever had and it’s still growing,” Maggio said. “This is an incredibly exciting time to be engaged in computational research at the University of Oregon.”

To request access to large-scale computing resources, contact the Research Advanced Computing Services center at racs@uoregon.edu.

Source: University of Oregon

The post University of Oregon Uses Supercomputer to Power New Research Advanced Computing Facility appeared first on HPCwire.

PEZY President Arrested, Charged with Fraud

HPC Wire - Wed, 12/06/2017 - 10:19

The head of Japanese supercomputing firm PEZY Computing was arrested Tuesday on suspicion of defrauding a government institution of 431 million yen (~$3.8 million). According to reports in the Japanese press, PEZY founder, president and CEO Motoaki Saito and another PEZY employee, Daisuke Suzuki, are charged with profiting from padded claims they submitted to the New Energy and Industrial Technology Development Organization (NEDO).

PEZY, which stands for Peta, Exa, Zetta, Yotta, designed the manycore processors for “Gyoukou,” one of the world’s fastest and most energy-efficient supercomputers. Installed at the Japan Agency for Marine-Earth Science and Technology, Gyoukou achieved a fourth-place ranking on the November 2017 Top500 list with 19.14 petaflops of Linpack performance. Four of the top five systems on the current Green500 list are PEZY-based, including Gyoukou in fifth position and the number one machine, Shoubu system B, operated by RIKEN.

Saito is also the founder and CEO of ExaScaler, which manufactures the PEZY systems using its immersion liquid-cooling technology, and Ultra Memory, Inc., a startup working on 3D multi-layer memory technology.

All three companies (PEZY, ExaScaler and Ultra Memory) have been collaborating to develop an exascale supercomputer in the 2019 timeframe.

PEZY Computing was founded in January 2010 and introduced its first generation manycore microprocessor PEZY-1 in 2012; PEZY-SC followed in 2014. The third-generation chip, PEZY-SC2, was released in early 2017. The company has an estimated market cap of 940 million yen ($8.4 million).

NEDO is one of the largest public R&D management organizations in Japan, promoting the development and introduction of industrial and energy technologies.

The post PEZY President Arrested, Charged with Fraud appeared first on HPCwire.

Survey from HSA Foundation Highlights Importance, Benefits of Heterogeneous Systems

HPC Wire - Wed, 12/06/2017 - 09:07

BEAVERTON, Ore., Dec. 6, 2017 — The Heterogeneous System Architecture (HSA) Foundation today released key findings from a second comprehensive members survey. The survey reinforced why heterogeneous architectures are becoming integral for future electronic systems.

HSA is a standardized platform design supported by more than 70 technology companies and universities that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices. It allows developers to easily and efficiently apply the hardware resources—including CPUs, GPUs, DSPs, FPGAs, fabrics and fixed function accelerators—in today’s complex systems-on-chip (SoCs).

Some of the survey questions – and results:

Will the system have HSA features? 

Last year, 58.82% of the respondents answered affirmatively; this year, 100%!

Will it be HSA-compliant?

In 2016, 69.23% said it would; 2017 figures rose to 80%.

What is the top challenge in implementing heterogeneous systems?

In 2016, 27.27% of respondents said the top challenge was a lack of standards for software programming models; the 2017 survey again identified this as the most important issue, though the share fell to 7.69%.

What is the top challenge in implementing heterogeneous systems?

Half of the respondents last year said it was a lack of developer ecosystem momentum.  Once again this was identified as the key issue.

Some remarks that further accentuate key survey findings:

“Many HSA Foundation members are currently designing, programming or delivering a wide range of heterogeneous systems – including those based on HSA,” said HSA Foundation President Dr. John Glossner. “Our 2017 survey provides additional insight into key issues and trends affecting these systems that power the electronic devices across every aspect of our lives.”

Greg Stoner, HSA Foundation Chairman and Managing Director, said that “the Foundation is developing resources and ecosystems conducive to its members’ various focuses on different application areas, including machine learning, artificial intelligence, datacenter, embedded IoT, and high-performance computing. The Foundation has also been making progress in support of these ecosystems, getting closer to taking normal C++ code and compiling to an HSA system.”

Stoner added that “ROCm 7 by AMD will port HSA for Caffe and TensorFlow; GPT, in the meantime, is releasing an open-sourced HSAIL-based Caffe library, with the first version already up and running – this permits early access for developers.”

Dr. Xiaodong Zhang, from Huaxia General Processor Technologies, who serves as chairman of the China Regional Committee (CRC; established by the HSA Foundation to enhance global awareness of heterogeneous computing), said that “China’s semiconductor industry is rapidly developing, and the CRC is building an ecosystem in the region to include technology, talent, and markets together with an open approach to take advantage of synergies among industry, academia, research, and applications.”

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation

The post Survey from HSA Foundation Highlights Importance, Benefits of Heterogeneous Systems appeared first on HPCwire.

Raytheon Developing Superconducting Computing Technology for Intelligence Community

HPC Wire - Tue, 12/05/2017 - 12:21

CAMBRIDGE, Mass., Dec. 5, 2017 — A Raytheon BBN Technologies-led team is developing prototype cryogenic memory arrays and a scalable control architecture under an award from the Intelligence Advanced Research Projects Activity Cryogenic Computing Complexity program.

The team recently demonstrated an energy-efficient superconducting/ferromagnetic memory cell—the first integration of a superconducting switch controlling a cryogenic memory element.

“This research could generate a new approach to supercomputing that is more efficient, faster, less expensive, and requires a smaller footprint,” said Zachary Dutton, Ph.D. and manager of the quantum technologies division at Raytheon BBN Technologies.

Raytheon BBN is the prime contractor leading a team that includes:

  • Massachusetts Institute of Technology
  • New York University
  • Cornell University
  • University of Rochester
  • University of Stellenbosch
  • HYPRES, Inc.
  • Canon U.S.A., Inc.
  • Spin Transfer Technologies, Inc.

Raytheon BBN Technologies is a wholly owned subsidiary of Raytheon Company (NYSE: RTN).

About Raytheon 

Raytheon Company, with 2016 sales of $24 billion and 63,000 employees, is a technology and innovation leader specializing in defense, civil government and cybersecurity solutions. With a history of innovation spanning 95 years, Raytheon provides state-of-the-art electronics, mission systems integration, C5ITM products and services, sensing, effects, and mission support for customers in more than 80 countries. Raytheon is headquartered in Waltham, Massachusetts.

Source: Raytheon

The post Raytheon Developing Superconducting Computing Technology for Intelligence Community appeared first on HPCwire.

Cavium Partners with IBM for Next Generation Platforms by Joining OpenCAPI

HPC Wire - Tue, 12/05/2017 - 12:12

SAN JOSE, Calif., Dec. 5, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking, is partnering with IBM on next-generation platforms by joining OpenCAPI, an initiative founded by IBM, Google, AMD and others. OpenCAPI provides a high-bandwidth, low-latency interface optimized to connect accelerators, I/O devices and memory to CPUs. With this announcement, Cavium plans to bring its leadership in server I/O and security offloads to next-generation platforms that support the OpenCAPI interface.

Traditional system architectures are becoming a bottleneck for new classes of data-centric applications that require faster access to peripheral resources such as memory, I/O and accelerators. For the efficient deployment and success of such applications, it is imperative to put the compute power closer to the data. OpenCAPI, a mature and complete specification, enables a server design that can increase datacenter server performance severalfold, enabling corporate and cloud data centers to speed up big data, machine learning, analytics and other emerging workloads. Capable of a 25 Gbit/s data rate, OpenCAPI delivers best-in-class performance, enabling maximum utilization of high-speed I/O devices such as Cavium Fibre Channel adapters, low-latency Ethernet NICs, programmable SmartNICs and security solutions.

Cavium delivers the industry’s most comprehensive family of I/O adapters and network accelerators, which have the potential to be seamlessly integrated into OpenCAPI-based systems. Cavium’s portfolio includes FastLinQ® Ethernet Adapters, Converged Networking Adapters, LiquidIO SmartNICs, Fibre Channel Adapters and NITROX® Security Accelerators that cover the entire spectrum of data-centric application connectivity, offload and acceleration requirements.

“We welcome Cavium to the OpenCAPI consortium to fuel innovation for today’s data-intensive cognitive workloads,” said Bob Picciano, Senior Vice President, IBM Cognitive Systems. “Together, we will tap into Cavium’s next-generation technology, including networking and accelerators, and work in tandem with other partners’ systems technology to unleash high-performance capabilities for our clients’ data center workloads.”

“We are excited to be a part of the OpenCAPI consortium. As our partnership with IBM continues to grow, we see more synergies in high speed communication and Artificial Intelligence applications,” said Syed Ali, founder and CEO of Cavium.  “We look forward to working with IBM to enable exponential performance gains for these applications.”

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

The post Cavium Partners with IBM for Next Generation Platforms by Joining OpenCAPI appeared first on HPCwire.

IBM Unveils Power9 Server Designed for HPC, AI

HPC Wire - Tue, 12/05/2017 - 11:38

ARMONK, N.Y., Dec. 5, 2017 — IBM today unveiled its next-generation Power Systems servers incorporating its newly designed POWER9 processor. Built specifically for compute-intensive AI workloads, the new POWER9 systems are capable of improving the training times of deep learning frameworks by nearly 4x[2], allowing enterprises to build more accurate AI applications, faster.

The system was designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica.

As a result, data scientists can build applications faster, ranging from deep learning insights in scientific research, real-time fraud detection and credit risk analysis.

POWER9 is at the heart of the soon-to-be most powerful data-intensive supercomputers in the world, the U.S. Department of Energy’s “Summit” and “Sierra” supercomputers, and has been tapped by Google.

“Google is excited about IBM’s progress in the development of the latest POWER technology,” said Bart Sano, VP of Google Platforms. “The POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers.”

“We’ve built a game-changing powerhouse for AI and cognitive workloads,” said Bob Picciano, SVP of IBM Cognitive Systems. “In addition to arming the world’s most powerful supercomputers, IBM POWER9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry.”

Accelerating the Future with POWER9

Deep learning is a fast growing machine learning method that extracts information by crunching through millions of processes and data to detect and rank the most important aspects of the data.

To meet these growing industry demands, four years ago IBM set out to design the POWER9 chip on a blank sheet to build a new architecture to manage free-flowing data, streaming sensors and algorithms for data-intensive AI and deep learning workloads on Linux.

IBM is the only vendor that can provide enterprises with an infrastructure that incorporates cutting-edge hardware and software with the latest open-source innovations.

With PowerAI, IBM has optimized and simplified the deployment of deep learning frameworks and libraries on the Power architecture with acceleration, allowing data scientists to be up and running in minutes.

IBM Research is developing a wide array of technologies for the Power architecture. IBM researchers have already cut deep learning times from days to hours with the PowerAI Distributed Deep Learning toolkit.

Building an Open Ecosystem to Fuel Innovation

The era of AI demands more than tremendous processing power and unprecedented speed; it also demands an open ecosystem of innovative companies delivering technologies and tools. IBM serves as a catalyst for innovation to thrive, fueling an open, fast-growing community of more than 300 OpenPOWER Foundation and OpenCAPI Consortium members.

Learn more about POWER9 and the AC922: http://ibm.biz/BdjCQQ

Read more from Bob Picciano, Senior Vice President, IBM Cognitive Systems:  https://www.ibm.com/blogs/think/2017/12/accelerating-ai/

[1] Results of 3.7x are based on IBM internal measurements running 1,000 iterations of an enlarged GoogLeNet model (mini-batch size = 5) on an enlarged ImageNet dataset (2560×2560). Hardware: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4x Tesla V100 GPUs; Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) with CUDA 9.1 / cuDNN 7. Competitive stack: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4x Tesla V100 GPUs; Ubuntu 16.04 with CUDA 9.0 / cuDNN 7. Software: Chainer v3 / LMS / Out of Core with patches found at https://github.com/cupy/cupy/pull/694 and https://github.com/chainer/chainer/pull/3762

[2] Results of 3.8x are based on IBM internal measurements running 1,000 iterations of an enlarged GoogLeNet model (mini-batch size = 5) on an enlarged ImageNet dataset (2240×2240). Hardware: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4x Tesla V100 GPUs; Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) with CUDA 9.1 / cuDNN 7. Competitive stack: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4x Tesla V100 GPUs; Ubuntu 16.04 with CUDA 9.0 / cuDNN 7. Software: IBM Caffe with LMS; source code at https://github.com/ibmsoe/caffe/tree/master-lms

[3] x86 PCI Express 3.0 (x16) peak transfer rate is 15.75 GB/sec = 16 lanes X 1GB/sec/lane x 128 bit/130 bit encoding.

[4] POWER9 and next-generation NVIDIA NVLink peak transfer rate is 150 GB/sec = 48 lanes x 3.2265625 GB/sec x 64 bit/66 bit encoding.
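The line-coding arithmetic in footnotes [3] and [4] can be checked directly. A quick sketch using only the figures quoted in the footnotes (the helper function name is mine):

```python
# Effective peak bandwidth of an encoded serial link:
# lanes x per-lane rate x (payload bits / encoded bits)

def effective_bandwidth(lanes: int, gb_per_lane: float,
                        payload_bits: int, encoded_bits: int) -> float:
    """Peak transfer rate in GB/s after line-coding overhead."""
    return lanes * gb_per_lane * payload_bits / encoded_bits

# PCIe 3.0 x16: 16 lanes at 1 GB/s each, 128b/130b encoding
pcie3_x16 = effective_bandwidth(16, 1.0, 128, 130)    # ~15.75 GB/s

# POWER9 <-> NVLink 2.0: 48 lanes, 64b/66b encoding
nvlink2 = effective_bandwidth(48, 3.2265625, 64, 66)  # ~150 GB/s

print(f"PCIe 3.0 x16: {pcie3_x16:.2f} GB/s")
print(f"NVLink 2.0:   {nvlink2:.2f} GB/s")
```

Both results match the footnotes, which is where the roughly 9.5x link-bandwidth advantage IBM cites for POWER9-attached GPUs comes from.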

Source: IBM

The post IBM Unveils Power9 Server Designed for HPC, AI appeared first on HPCwire.

Mines team headed to programming world finals

Colorado School of Mines - Tue, 12/05/2017 - 11:28

A team from Colorado School of Mines is headed to the world finals of the ACM International Collegiate Programming Competition for the first time in school history. 

The SAMurai MASters – Sam Reinehr, Allee Zarrini and Matt Baldin – won the Rocky Mountain Regional on Nov. 11, besting more than 50 teams from Colorado, Utah, Montana, Arizona, Alberta and Saskatchewan to claim the region’s lone spot in the most prestigious collegiate programming competition in the world. 

The Mines juniors will face off against teams from Asia, Europe, Africa, North and South America and Australia when they travel to Beijing, China, in April.

“All CS@Mines faculty are pumped about the first-place finish of SAMurai MASters in our region,” said Tracy Camp, professor and head of the Computer Science Department. “These types of events are such a great educational opportunity for our students, so we were thrilled to see 11 teams – 33 CS@Mines students – participate this year, a record. To have a team win the Rocky Mountain region is huge.” 

Reinehr, Zarrini and Baldin credited their victory to months of preparation – the three friends have been meeting up for four hours every Saturday since the summer and added some individual programming practice this fall. 

“We competed last year, but we didn’t prepare at all. We just did it for fun,” Baldin said. “We went in with no expectations but we thought we could win if we actually tried. It didn’t seem out of reach. So, we promised ourselves that we would practice a lot.” 

In the competition, teams earn points for each algorithmic problem they solve and for how quickly they come up with a correct answer. At regionals, the SAMurai MASters solved 9 of 11 problems – but they did it considerably faster than the only other team that managed to solve that many. 
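The ranking rule the article describes is the standard ICPC one: teams are ordered first by problems solved, with ties broken by total penalty time (minutes from contest start to each accepted solution, plus 20 minutes per rejected attempt on solved problems). A minimal sketch of that comparator; the per-problem times below are invented for illustration, not taken from the competition:

```python
from typing import List, Tuple

WRONG_SUBMISSION_PENALTY = 20  # minutes, standard ICPC rule

def penalty_time(solves: List[Tuple[int, int]]) -> int:
    """Total penalty: for each solved problem (minute, wrong_attempts),
    minutes at acceptance plus 20 per wrong attempt before it."""
    return sum(minute + WRONG_SUBMISSION_PENALTY * wrong
               for minute, wrong in solves)

def rank(teams: dict) -> list:
    """More problems solved ranks first; lower penalty breaks ties."""
    return sorted(teams,
                  key=lambda t: (-len(teams[t]), penalty_time(teams[t])))

# Two hypothetical teams solving 9 problems each: the faster one wins.
teams = {
    "SAMurai MASters": [(30, 0), (55, 1), (70, 0), (95, 0), (120, 1),
                        (150, 0), (180, 0), (210, 0), (240, 0)],
    "Other team":      [(60, 2), (90, 0), (130, 1), (160, 0), (190, 0),
                        (220, 1), (250, 0), (270, 0), (290, 2)],
}
print(rank(teams))  # → ['SAMurai MASters', 'Other team']
```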

Unlike many of the schools sending teams, though, Mines does not have a competitive programming club or class from which the top performers can be curated into teams for regionals. The SAMurai MASters hope that changes in the future.

“Overall what we hope to get out of this is for Mines, after seeing us place so well, to start developing a program for students to compete and do well in this competition in the future even after we graduate,” Zarrini said. 

“We put Mines on the map,” Baldin added. “We want them to stay there.”

At worlds, the SAMurai don’t expect a repeat performance of regionals – Russian teams have won six years running – but they’d be happy with placing in the top 60 and earning honorable mention. To help prepare, they’re doing an independent study next semester with Teaching Associate Professor Jeff Paone.

“We're all juniors – we've got next year, too,” Reinehr said.

Tech companies also recruit out of the competition, and the teammates have found that all that programming practice has been good prep for interviews. 

“Almost every technical interview question I’ve gotten has not been nearly as difficult as the questions we’re doing here,” Zarrini said.

“I wouldn’t say being good at competitive programming necessarily makes you the best software engineer, but it speaks volumes to someone’s problem-solving ability,” Baldin said. “We want to prove we are good problem solvers."

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Azure Debuts AMD EPYC Instances for Storage Optimized Workloads

HPC Wire - Tue, 12/05/2017 - 11:09

AMD’s return to the data center received a boost today when Microsoft Azure announced the introduction of instances based on AMD’s EPYC microprocessors. The new instances – the Lv2-Series of virtual machines – use the EPYC 7551 processor. Adoption of EPYC by a major cloud provider adds weight to AMD’s argument that it has returned to the data center with a long-term commitment and product roadmap; AMD had been absent from that segment for a number of years.

Writing in a blog, Corey Sanders, director of compute, Azure, said, “We’ve worked closely with AMD to develop the next generation of storage optimized VMs called Lv2-Series, powered by AMD’s EPYC processors. The Lv2-Series is designed to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O.” The EPYC line was launched last June (see the HPCwire article, AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor).

The instances make use of Microsoft’s Project Olympus, a next-generation open source cloud hardware design developed with the Open Compute Project (OCP) community. “We think Project Olympus will be the basis for future innovation between Microsoft and AMD, and we look forward to adding more instance types in the future benefiting from the core density, memory bandwidth and I/O capabilities of AMD EPYC processors,” said Sanders, quoted in AMD’s announcement of the new instances.

It is an important win for AMD. Gaining a foothold in the x86 landscape today probably requires adoption by hyperscalers. No doubt some “tire kicking” is going on here, but use of an Olympus design adds incentive for Microsoft Azure to court customers for the instances. HPE has also announced servers using the EPYC line.

AMD EPYC chip lineup at the June launch

The Lv2-Series instances run on the AMD EPYC 7551 processor featuring a base core frequency of 2.2 GHz and a maximum single-core turbo frequency of 3.0 GHz. “With support for 128 lanes of PCIe connections per processor, AMD provides over 33 percent more connectivity than available two-socket solutions to address an unprecedented number of NVMe drives directly,” says AMD.
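AMD’s “over 33 percent more connectivity” claim is consistent with comparing EPYC’s 128 single-socket PCIe lanes against a 96-lane two-socket baseline; the 96-lane baseline is an assumption for illustration, not a figure stated in the article:

```python
# 128 EPYC PCIe lanes (per the article) vs an assumed
# 96-lane two-socket x86 baseline (assumption, not from the article).
epyc_lanes = 128
baseline_lanes = 96
extra_pct = (epyc_lanes / baseline_lanes - 1) * 100
print(f"{extra_pct:.1f}% more PCIe lanes")  # → 33.3% more PCIe lanes
```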

The Lv2 VMs will be available in sizes ranging from eight to 64 vCPUs, with the largest featuring direct access to 4 TB of memory. These sizes will support Azure premium storage disks by default and will also support accelerated networking capabilities for the highest throughput of any cloud.

Scott Aylor, AMD corporate vice president and general manager of Enterprise Solutions said, “There is tremendous opportunity for users to tap into the capabilities we can deliver across storage and other workloads through the combination of AMD EPYC processors on Azure. We look forward to the continued close collaboration with Microsoft Azure on future instances throughout 2018.”

Link to AMD release: http://www.amd.com/en-us/press-releases/Pages/microsoft-azure-becomes-2017dec05.aspx

Link to Azure blog: https://azure.microsoft.com/en-us/blog/announcing-the-lv2-series-vms-powered-by-the-amd-epyc-processor/

The post Azure Debuts AMD EPYC Instances for Storage Optimized Workloads appeared first on HPCwire.

Bryant Departs Intel For Google Cloud

HPC Wire - Tue, 12/05/2017 - 10:24

Google has upped its cloud game with the recruitment of Diane Bryant, Intel Corp.’s former datacenter boss, who becomes chief operating officer of Google Cloud.

Bryant, an engineer who worked her way up through the ranks of Intel during the heyday of the U.S. semiconductor industry, took a leave of absence from Intel in May that was expected to last at least six months. At the time, Intel CEO Brian Krzanich said he looked forward to Bryant’s return.

Instead, Diane Greene, CEO of Google Cloud, announced late last week that Bryant would join the company as COO. Bryant “is an engineer with tremendous business focus and an outstanding thirty-year career in technology,” all of it at Intel, Greene noted in announcing the hiring.

Bryant served over the last five years as president of Intel’s Datacenter Group, expanding the chip maker’s focus on cloud computing, big data and network virtualization technologies. The group generated about $17 billion in revenue during 2016 as Intel’s x86-based servers continue to dominate datacenters.

Bryant was instrumental in guiding Intel’s transition away from the fading PC market and its unsuccessful foray into mobile devices. Prior to heading its datacenter operations, Bryant was Intel’s corporate vice president and chief information officer, where she oversaw the chip maker’s IT technology development.

Bryant is the second senior Intel executive to depart this year. In April, Brent Gorda, general manager of Intel’s High Performance Data Division, left the company. Gorda is the former CEO of Whamcloud, the Lustre specialist acquired by Intel in 2012.

Meanwhile, the addition of Bryant gives Google Cloud another respected technology leader as it challenges cloud giant Amazon Web Services in the booming hybrid cloud market. Bryant’s “strategic acumen, technical knowledge and client focus will prove invaluable as we accelerate the scale and reach of Google Cloud,” Greene noted in a blog post.

Greene, a co-founder and former CEO of VMware, took the reins at Google Cloud two years ago with the goal of extending the search giant’s mostly consumer cloud business to the enterprise market still dominated by AWS.

The post Bryant Departs Intel For Google Cloud appeared first on HPCwire.

Microsoft Spins Cycle Computing into Core Azure Product

HPC Wire - Tue, 12/05/2017 - 09:31

Last August, cloud giant Microsoft acquired HPC cloud orchestration pioneer Cycle Computing. Since then the focus has been on integrating Cycle’s organization, mapping out its new role as a core Microsoft Azure product, and deciding what to do with those Cycle customers who currently use non-Azure cloud providers. At SC17, HPCwire caught up with Brett Tanzer, head of Microsoft Azure Specialized Compute Group (ASCG, which used to be Big Compute) in which Cycle now lives, and Tim Carroll, formerly Cycle VP of sales and ecosystem development and now a ‘principal’ in ASCG, for a snapshot of emerging plans for Cycle.

Much has already been accomplished they emphasize – for starters “the Cycle organization has settled in” and most are relocating to Seattle. Much also remains to be done – it will probably be a year or so before Cycle is deeply integrated across Azure’s extensive capabilities. In some ways, it’s best not to think of the Cycle acquisition in isolation but as part of Microsoft’s aggressively evolving strategy to make Azure all things for all users and that includes the HPC community writ large. Cycle is just one of the latest, and a significant, piece of the puzzle.

Founded in 2005 by Jason Stowe and Rob Futrick, Cycle Computing was one of the first companies to target HPC orchestration in the cloud; its software, CycleCloud, enables users to burst and manage HPC workloads (and data) into the cloud. Till now, cloud provider agnosticism has been a key Cycle value proposition. That will change but how quickly is uncertain. Tanzer assures there will be no disruption of existing Cycle customers, but also emphasizes Microsoft intends Cycle to become an Azure-only product over time. Cycle founder Stowe has taken on a new role as a solutions architect in the Specialized Compute Group. The financial details of the Cycle acquisition weren’t made public.

Far more than in the past, HPC is seen as an important opportunity for the big cloud providers. The eruption of demand for running AI and deep learning workflows has been a major driving force as well.

Nvidia V100 GPU

Microsoft, like Google and Amazon (and others), has been investing heavily in advanced scale technology. The immediate goal is to attract HPC and AI/deep learning customers. One indicator is the way they have all been loading up on GPUs. Azure is no exception and offers a growing list of GPU instances (M60, K80, P100, P40 and the announced V100); it also offers InfiniBand high-speed interconnect. In October, Microsoft extended its high performance gambit further via a partnership with Cray to offer supercomputing in the cloud (see HPCwire article, Cray+Azure: Can Cloud Propel Supercomputing?).

How the latter bet will play out is unclear – Tanzer says, “We are hearing from customers there are some workloads they need to get into the cloud that require a Cray. And Cray itself is a pretty innovative company. We think the partnership has longer legs. Look for more to come.” One wonders what interesting offerings may sprout from that alliance.

For now the plan for Cycle is ever deeper integration with Azure’s many offerings, perhaps eventually including Cray. It’s still early days, of course. Tanzer says, “If Tim looks like he hasn’t slept much for the past three months, it’s because he hasn’t. Strategically, all of these products – Cycle, Azure Batch, HPC Pack (cluster tool) – will work together and drive orchestration across all the key workloads.”

“The company is rallying behind the [HPC] category and customers are responding very well,” says Tanzer. “We are investing in all phases of the maturity curve, so if you are somebody who wants a Cray, we now have an answer for you. If you are rewriting your petrochemical workload and want to make it cloud friendly, then Batch is a great solution. We are really just taking care, wherever we can, to take friction out of using the cloud. We looked at Cycle and its fantastic people and knowledge. The relationship with Cycle is very symbiotic. We look at where our customers are and see [that for many], Cycle helps them bootstrap the process.”

It’s not hard to see why Cycle was an attractive target. Cycle brings extensive understanding of HPC workloads, key customer and ISV relationships, and a robust product. Recently it’s been working to build closer relationships with systems builders (e.g. Dell EMC) and HPC ISVs (e.g. ANSYS). From an operations and support perspective, not much has changed for Cycle customers, says Carroll, although he emphasizes having now gained access to Microsoft’s deep bench of resources. No decision has been made on name changes and Tanzer says, “Cycle is actually a pretty good name.”

Cycle’s new home, Azure’s Specialized Compute Group, seems to be a new organization encompassing what was previously Big Compute. As of this writing, there was still no Specialized Compute Group web page, and from the tone of Tanzer and Carroll it seemed that things could still be in flux. SCG seems to have a fairly broad mission: to smooth the path to cloud computing across all segments with so-called “specialized needs” – that, of course, includes HPC but crosses over into enterprise computing as well. To a significant extent, says Tanzer, it is part of Microsoft’s company-wide mantra to meet-the-customer-where-she/he-is to minimize disruption.

“Quite frankly we are finding customers, even in the HPC space, need a lot of help, and it’s also an area where Microsoft has many differentiated offerings,” Tanzer says. “You should expect us to integrate Cycle’s capabilities more natively into Azure. There is much more that can be done in the category to help customers take advantage of the cloud, from providing best practices about how your workloads move, through governance, and more. Cloud consumption is getting more sophisticated, and it’s going to require tools to help users maximize their efforts even though the usage models will be very different.”

One can imagine many expanded uses for Cycle functionality, not least closer integration with HPC applications and closer collaboration with ISVs to drive adoption. Microsoft has the clout and understanding of both software and infrastructure businesses to help drive that, says Carroll. “Those two things are important because this is a space that’s always struggled to figure out how to build partnerships between the infrastructure providers and software providers; Microsoft’s ability to talk to some of the significant and important ISVs and figure out ways to work with them from a Microsoft perspective is a huge benefit.”

It probably bears repeating that Tanzer’s expectations seem much broader than HPC or Cycle’s role as an enabler. He says rather matter of factly, “Customers are recognizing the cloud is the destination and thinking in more detail about that. It will be interesting to see how that plays out.” When he says customers, one gets the sense he is talking about more than just a narrow slice of the pie.

The conversation over how best to migrate and perform HPC has a long history. Today, there seems less debate about whether it can be done effectively and more about how to do it right, how much it costs, and what types of HPC jobs are best suited to being run in the cloud. Carroll has for some time argued that technology is not the issue for most potential HPC cloud users.

Tim Carroll

“It’s less about whether somebody is technically ready than whether they have a business model that requires them to be able to move faster and leverage more compute than they had thought they were going to need,” says Carroll. “Where we see the most growth is [among users] who have deadlines and at the end of the day what they really care about is how long will it take me to get my answer and tell me the cost and complexity to get there. That’s a different conversation than we have had in this particular segment over time.”

Some customer retraining and attitude change will be necessary, says Tanzer.

“They are going to have hybrid environments for a while, so to the degree we can help them reduce some of the chaos that comes from that and retrain the workforce on what it needs to take advantage of the cloud, we think that’s important. Workforces who run the workloads really just want to take advantage of the technology, but some relearning is necessary, and that’s another area where Cycle really helps, because of its tools and set of APIs – they speak the language of a developer,” he says.

Cycle connections in the academic space will also be beneficial according to Tanzer. There are both structural and financial obstacles for academicians who wish to run HPC workloads in commercial cloud and Cycle insight will help Azure navigate that landscape to the benefit of Azure and academic users, he suggests. The Cray deal will help in government markets, he says.

Stay tuned.

The post Microsoft Spins Cycle Computing into Core Azure Product appeared first on HPCwire.

DDN’s 5th Annual High Performance Computing Trends Survey Cites Complex I/O Workloads as #1 Challenge

HPC Wire - Tue, 12/05/2017 - 08:50

SANTA CLARA, Calif., Dec. 5, 2017 – DataDirect Networks (DDN) today announced the results of its annual High Performance Computing (HPC) Trends survey, which reflects the continued adoption of flash-based storage as essential to respondents’ overall data center strategies. While flash is deemed essential, respondents anticipate needing additional technology innovations to unlock the full performance of their HPC applications. Managing complex I/O workload performance remains far and away the largest challenge, with 60 percent of end-users citing it as their number one challenge.

Run for the fifth year in a row by DDN, the survey gathered input from more than 100 global end-users across a wide range of data-intensive industries. Respondents included individuals responsible for high performance computing as well as networking and storage systems at financial services, government, higher education, life sciences, manufacturing, national lab, and oil and gas organizations. As expected, the amount of data under management in these organizations continues to grow: 85 percent of organizations surveyed manage or use more than one petabyte of data storage, up 12 percentage points from last year.

Survey respondents continue to take a nuanced approach to cloud adoption. The share planning to leverage cloud-based storage (encompassing both private and public clouds) for at least part of their data in 2017 jumped to 48 percent, an 11-percentage-point increase from the 2016 survey. Despite this more positive disposition toward cloud storage, only 5 percent of respondents anticipated more than 30 percent of their data residing in the cloud. Perhaps because of this limited use, as well as the ever-improving economics of public cloud services, a full 40 percent of respondents anticipated using public cloud in some way in the coming year, even if for a limited amount of data – compared with only 20 percent of respondents last year.

While the basic adoption of flash storage in HPC data centers remains relatively flat – approximately 90 percent of respondents use flash storage at some level within their data centers today – the main shift is in how much data is being retained on flash. While the vast majority of respondents (76 percent) store less than 20 percent of their data on flash media, many anticipate an increase in 2018, with a quarter of respondents expecting 20 to 30 percent of their data to be flash based, and another 10 percent expecting 20 to 40 percent of their storage to be on a flash tier.

How customers are applying flash to their workflows is also particularly interesting. A majority of survey respondents (54 percent) are primarily using flash to accelerate file system metadata. There is growing interest in using flash for application-specific data as well, with 45 percent of respondents indicating that they are using at least some of their flash storage this way. On the other end of the spectrum, few customers are using flash for user data, which is logical given current cost deltas between flash and spinning disk storage.
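The economics behind that split – metadata and hot application data on a small flash tier, bulk user data on disk – can be sketched as a greedy placement by access frequency. The function and workload figures below (`place_on_tiers`, the sizes and access counts) are hypothetical illustrations, not DDN’s implementation:

```python
# Illustrative sketch of flash-tier placement: rank items by access count,
# fill the limited flash budget with the hottest data first, and spill the
# rest to spinning disk. Sizes/counts are made up for the example.

def place_on_tiers(items, flash_capacity_gb):
    """items: list of (name, size_gb, accesses). Returns (flash, disk) name lists."""
    flash, disk, used = [], [], 0.0
    for name, size, _ in sorted(items, key=lambda i: i[2], reverse=True):
        if used + size <= flash_capacity_gb:
            flash.append(name)
            used += size
        else:
            disk.append(name)
    return flash, disk

workload = [("metadata", 2, 9000), ("scratch", 50, 400), ("archive", 500, 3)]
flash, disk = place_on_tiers(workload, flash_capacity_gb=60)
print(flash, disk)  # prints ['metadata', 'scratch'] ['archive']
```

Tiny, frequently hit metadata wins the flash budget almost automatically under such a policy, which matches the survey’s finding that metadata acceleration is the dominant flash use case.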

“Once again, DDN’s annual HPC Trends Survey reflects the developments we see in the wider HPC community. I/O performance is a huge bottleneck to unlocking the power of HPC applications in use today, and customers are beginning to realize that simply adding flash to the storage infrastructure isn’t delivering the anticipated application level improvements,” said Kurt Kuckein, director of marketing for DDN. “Suppliers are starting to offer architectures that include flash tiers optimized specifically for NVM and customers are actively pursuing implementations utilizing these technologies. Technologies like DDN’s IME are specifically targeted to have the most impact on accelerating I/O all the way to the application.”

I/O bottlenecks continue to be the main concern for HPC storage administrators, especially in I/O-intensive workflows like analytics: 76 percent of customers running analytics workloads consider I/O their top challenge. Given this, it is not surprising that only 19 percent of survey participants consider existing storage technologies sufficient to scale to exascale requirements.

A majority of respondents (68 percent) view flash-native caches as the most likely technology to resolve the I/O challenge and push HPC storage to the next level, an eight-percentage-point increase versus last year’s survey. HPC storage administrators have already evaluated, or are beginning to evaluate, flash-native cache technologies at a greater rate than before, with more than 60 percent of responses indicating that they have implemented, are evaluating now, or plan to evaluate flash-native cache solutions such as NVM. Evidence of the impact of these technologies can be seen in the recent io500.org results, where JCAHPC utilized a flash-based cache to achieve stellar performance and the top spot in the first annual storage I/O benchmark ranking.

As an increasing number of HPC sites move to real-world implementation of multi-site HPC collaboration, concerns about security remain at the forefront. Perhaps somewhat surprisingly, though, the second-highest barrier to multi-site collaboration has nothing to do with technology or security: organizational bureaucracy was identified by 43 percent of respondents as a major impediment to data sharing. This means that even though data sharing has become technically possible as well as cost effective, limiting perceptions still stand in the way of wider collaboration.

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN’s 5th Annual High Performance Computing Trends Survey Cites Complex I/O Workloads as #1 Challenge appeared first on HPCwire.

Verne Global Announces Launch of hpcDIRECT

HPC Wire - Tue, 12/05/2017 - 08:29

LONDON and KEFLAVIK, Iceland, Dec. 5, 2017 — Verne Global, a provider of highly optimised, secure, and 100% renewably powered data center solutions, today announced the launch of its new hpcDIRECT service. hpcDIRECT is a powerful, agile and efficient HPC-as-a-service (HPCaaS) platform, purpose-built to address the intense compute requirements of today’s data-driven industries. hpcDIRECT provides a fully scalable, bare metal service with the ability to rapidly provision the full performance of HPC servers uncontended and in a secure manner.

“Building hpcDIRECT was a direct response to overwhelming demand from our customers and tightly correlated with the market’s desire to move from a CapEx to an OpEx model for high performance computing,” said Dominic Ward, Managing Director at Verne Global. “With hpcDIRECT, we take the complexity and capital costs out of scaling HPC and bring greater accessibility and more agility in terms of how IT architects plan and schedule their workloads.”

“hpcDIRECT has been designed and built from the outset by HPC specialists for HPC applications. Whether deployed as stand-alone compute or used as a bare metal extension to existing in-house HPC infrastructure, hpcDIRECT provides an industry-leading solution that combines best-in-class design with our HPC optimised, low cost environment and location.”

hpcDIRECT is accessible via a range of options, from incremental additions to augment existing high performance computing, to supporting massive processing requirements with petaflops of compute. This flexibility makes it an ideal solution for applications such as computer-aided engineering, genomic sequencing, molecular modelling, grid computing, artificial intelligence and machine learning.

hpcDIRECT is available with no upfront charges and can be provisioned rapidly to the size and configuration needed. hpcDIRECT clusters are built using the latest architectures available, including Intel Xeon (Skylake) processors and fast interconnectivity using Mellanox InfiniBand and Ethernet networks, with storage and memory options to suit each customer’s needs.

Since Verne Global began operations in early 2012, the company has been at the forefront of data center design and technology, bringing new, innovative thinking and infrastructure to the industry. hpcDIRECT is the latest product in this cycle, and is perfectly optimised for companies operating high performance and intensive computing across the world’s most advanced industries.

Source: Verne Global

The post Verne Global Announces Launch of hpcDIRECT appeared first on HPCwire.
