Feed aggregator

Data Vortex Technologies Partners with Providentia Worldwide to Develop Novel Solutions

HPC Wire - Thu, 12/07/2017 - 16:18

AUSTIN, December 6 – This November, proprietary network company Data Vortex Technologies formalized a partnership with Providentia Worldwide, LLC. Providentia is a technology and solutions consulting venture that bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep enterprise and hyperscale experience of Providentia Worldwide founders Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex network with traditional systems.

“Providentia Worldwide is excited to see the Data Vortex network in new areas which can benefit from their fine-grained, egalitarian efficiencies. Messaging middleware, data and compute intensive appliances, real-time predictive analytics and anomaly detection, and parallel application computing are promising focus areas for Data Vortex Technologies,” says Quick. “Disrupting entrenched enterprise deployment designs is never easy, but when the gains are large enough, the effort is well worth it. Providentia Worldwide sees the potential to dramatically improve performance and capabilities in these areas, causing a sea-change in how fine-grained network problems are solved going forward.”

The senior technical teams of Data Vortex and Providentia are working on demonstrating the capabilities and performance of popular open source applications on the proprietary Data Vortex Network. The goal is to bring unprecedented performance increases to spaces that are often unaffected by traditional advancements in supercomputing. “This is a necessary step for us – the Providentia partnership is adding breadth to the Data Vortex effort,” says Data Vortex President, Carolyn Coke Reed Devany. “Up to this point we have deployed HPC systems, built with commodity servers connected with Data Vortex switches and VICs [Vortex Interconnect Cards], to federal and academic customers. The company is now offering a network solution that will allow customers to connect an array of different devices to address their most challenging data movement needs.”

Source: Data Vortex Technologies; Providentia Worldwide

The post Data Vortex Technologies Partners with Providentia Worldwide to Develop Novel Solutions appeared first on HPCwire.

The Stanford Living Heart Project Wins Prestigious HPC Awards During SC17

HPC Wire - Thu, 12/07/2017 - 16:15

Dec. 7 — During SC17, the 30th International Conference for High Performance Computing, Networking, Storage and Analysis, held in Denver in November, UberCloud – on behalf of the Stanford Living Heart Project – received three HPC awards. It started at the preceding Intel HPC Developer Conference, where the Living Heart Project (LHP), presented by Burak Yenier of UberCloud, won a best paper award. On Monday of SC17, the Stanford LHP team received the HPCwire Editors’ Choice Award for Best Use of HPC in the Cloud. And finally, on Tuesday, the team won the Hyperion (formerly IDC) Award for Innovation Excellence, selected by the Steering Committee of the HPC User Forum.

The Stanford LHP project dealt with simulating cardiac arrhythmia, which can be an undesirable and potentially lethal side effect of drugs. During this condition, the electrical activity of the heart turns chaotic, disrupting its pumping function and diminishing the circulation of blood through the body. Some kinds of cardiac arrhythmia, if not treated with a defibrillator, cause death within minutes.

Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. This means better and safer drugs reaching the market to improve patients’ lives.

Figure 1: Evolution of the electrical activity for the baseline case (no drug) and after application of the drug Quinidine. The electrical propagation turns chaotic after the drug is applied, showing the high risk of Quinidine to produce arrhythmias.

“The Living Heart Project team, led by researchers from the Living Matter Laboratory at Stanford University, is proud and humbled to have been selected by HPCwire’s editors for the Best Use of HPC in the Cloud, and by the 29 renowned members of the HPC User Forum Steering Committee for the 2017 Hyperion Innovation Excellence Award,” said Wolfgang Gentzsch of The UberCloud. “And we are deeply grateful for all the support from Hewlett Packard Enterprise and Intel (the sponsors), Dassault Systemes SIMULIA (for Abaqus 2017), Advania (providing HPC cloud resources), and the UberCloud tech team for containerizing Abaqus and integrating all software and hardware components into one seamless solution stack.”

Figure 2: Electrocardiograms: tracing for a healthy, baseline case, versus the arrhythmic development after applying the drug Sotalol.

A computational model that is able to assess the response of new drug compounds rapidly and inexpensively is of great interest for pharmaceutical companies, doctors, and patients. Such a tool will increase the number of successful drugs that reach the market, while decreasing the cost and time to develop them, and thus help hundreds of thousands of patients in the future. However, the creation of a suitable model requires a multiscale approach that is computationally expensive: the electrical activity of cells is modelled in high detail and resolved simultaneously in the entire heart. Due to the fast dynamics involved, the spatial and temporal resolutions are highly demanding. For more details about the Stanford Living Heart Project, please read the previous HPCwire article.
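A rough back-of-the-envelope calculation illustrates why the resolution demands are so punishing. The numbers below are generic order-of-magnitude estimates for illustration, not figures from the project:

```python
# Order-of-magnitude illustration (generic numbers, not the project's):
# the cost of resolving a whole heart at near-cellular resolution.

def simulation_cost(domain_m, dx_m, dt_s, sim_time_s):
    """Grid unknowns and time steps for a uniform 3-D mesh."""
    points_per_axis = int(round(domain_m / dx_m))
    n_points = points_per_axis ** 3
    n_steps = int(round(sim_time_s / dt_s))
    return n_points, n_steps, n_points * n_steps

# ~10 cm heart at 0.25 mm spacing; 10-microsecond time steps to resolve
# fast ionic dynamics; one second of electrical activity:
points, steps, updates = simulation_cost(0.1, 0.25e-3, 1e-5, 1.0)
# points = 64 million unknowns, steps = 100,000 -> ~6.4e12 point-updates
```

At trillions of point-updates per simulated second, a desktop machine is out of the question, which is why the project ran on HPC cloud resources.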

About UberCloud

UberCloud is the online Community, Marketplace, and Software Container Factory where engineers, scientists, and their service providers discover, try, and buy ubiquitous high-performance computing power and Software-as-a-Service from Cloud resource providers and application software vendors around the world. UberCloud’s unique high-performance software container technology simplifies software packageability and portability, enables ease of access and instant use of engineering SaaS solutions, and maintains scalability across multiple compute nodes. Please visit www.TheUberCloud.com or contact us at www.TheUberCloud.com/help/.

Source: UberCloud

Supermicro Announces Scale-Up SuperServer Certified for SAP HANA

HPC Wire - Thu, 12/07/2017 - 12:31

SAN JOSE, Calif., Dec. 7, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, and an SAP global technology partner, today announced that its latest 2U 4-socket SuperServer (2049U-TR4), supporting the highest-performance Intel Xeon Scalable processors, maximum memory and all-flash SSD storage, has been certified for operating the SAP HANA platform. SuperServer 2049U-TR4 for SAP HANA supports customers by offering a unique scale-up single-node system based on a well-defined hardware specification designed to meet the most demanding performance requirements of SAP HANA in-memory technology.

“Combining our capabilities in delivering high-performance, high-efficiency server technology, innovation, end-to-end green computing solutions to the data center, and cloud computing with the in-memory computing capabilities of SAP HANA, Supermicro SuperServer 2049U-TR4 for SAP HANA offers customers a pre-assembled, pre-installed, pre-configured, standardized and highly optimized solution for mission-critical database and applications running on SAP HANA,” said Charles Liang, President and CEO of Supermicro. “The SAP HANA certification is a vital addition to our solution portfolio further enabling Supermicro to provision and service innovative new mission-critical solutions for the most demanding enterprise customer requirements.”

Supermicro is collaborating with SAP to bring its rich portfolio of open cloud-scale computing solutions to enterprise customers looking to transition from traditional high-cost proprietary systems to open, cost-optimized, software-defined architectures. To support this collaboration, Supermicro has recently joined the SAP global technology partner program.

SAP HANA combines database, data processing, and application platform capabilities in-memory. The platform provides libraries for predictive, planning, text processing, spatial and business analytics. By providing advanced capabilities, such as predictive text analytics, spatial processing and data virtualization on the same architecture, it further simplifies application development and processing across big-data sources and structures. This makes SAP HANA a highly suitable platform for building and deploying next-generation, real-time applications and analytics.

The new SAP-certified solution complements existing Supermicro solutions for the SAP NetWeaver technology platform and helps support customers’ transition to SAP HANA and SAP S/4HANA. In fact, Supermicro has certified its complete portfolio of server and storage solutions to support the SAP NetWeaver technology platform running on Linux. Designed for enterprises that require the highest operational efficiency and maximum performance, all of these Supermicro SuperServer solutions are ready for SAP applications based on the NetWeaver technology platform, such as SAP ECC, SAP BW and SAP CRM, either as application or database servers in a two- or three-tier SAP configuration.

Supermicro plans to continue expanding its portfolio of SAP HANA certified systems including an 8-socket scale-up solution based on the SuperServer 7089P-TR4 and a 4-socket solution based on its SuperBlade in the first half of 2018.

About Super Micro Computer, Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced Server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide.

Source: Super Micro Computer, Inc.

CMU, PSC and Pitt to Build Brain Data Repository

HPC Wire - Thu, 12/07/2017 - 11:12

Dec. 7, 2017 — Researchers with Carnegie Mellon’s Molecular Biosensor and Imaging Center (MBIC), the Pittsburgh Supercomputing Center (PSC) and the University of Pittsburgh’s Center for Biological Imaging (CBI) will help to usher in an era of open data research in neuroscience by building a confocal fluorescence microscopy data repository. The data archive will give researchers easy, searchable access to petabytes of existing data.

The project is funded by a $5 million, five-year grant from the National Institutes of Health’s (NIH’s) National Institute of Mental Health (MH114793) and is part of the federal BRAIN initiative.

“This grant is a testament to the fact that Pittsburgh is a leader in the fields of neuroscience, imaging and computer science,” said Marcel Bruchez, MBIC director, professor of biological sciences and chemistry at Carnegie Mellon and co-principal investigator of the grant. “By merging these disciplines, we will create a tool that helps the entire neuroscience community advance our understanding of the brain at an even faster pace.”

New imaging tools and technologies, like large-volume confocal fluorescence microscopy, have greatly accelerated neuroscience research in the past five years by allowing researchers to image large regions of the brain at such a high level of resolution that they can zoom in to the level of a single neuron or synapse, or zoom out to the level of the whole brain. These images, however, contain such a large amount of data that only a small part of one brain’s worth of data can be accessed at a time using a standard desktop computer. Additionally, images are often collected in different ways — at different resolutions, using different methodologies and different orientations. Comparing and combining data from multiple whole brains and datasets requires the power of supercomputing.

“PSC has a long experience with handling massive datasets for its users, as well as a deep skillset in processing microscopic images with high-performance computing,” said Alex Ropelewski, director of PSC’s Biomedical Applications Group and a co-principal investigator in the NIH grant. “This partnership with MBIC and CBI was a natural step in the ongoing collaborations between the institutions.”

The Pittsburgh-based team will bring together MBIC and CBI’s expertise in cell imaging and microscopy and pair it with the PSC’s long history of experience in biomedical supercomputing to create a system called the Brain Imaging Archive. Researchers will be able to submit their whole brain images, along with metadata about the images, to the archive. There the data will be indexed into a searchable system that can be accessed using the internet. Researchers can search the system to find existing data that will help them narrow down their research targets, making research much more efficient.
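The submit-and-search workflow described above can be sketched as a toy in-memory index. Class and field names here are hypothetical, purely to illustrate the idea of submitting images with metadata and querying them later:

```python
# Toy sketch of a searchable image-metadata index; class and field names
# are invented for illustration, not the Brain Imaging Archive's API.

class ImageArchive:
    def __init__(self):
        self.records = []

    def submit(self, image_id, **metadata):
        """Register a whole-brain image together with its metadata."""
        self.records.append({"image_id": image_id, **metadata})

    def search(self, **criteria):
        """Return ids of images whose metadata matches every criterion."""
        return [r["image_id"] for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

archive = ImageArchive()
archive.submit("brain-001", species="mouse", resolution_um=0.5, stain="GFP")
archive.submit("brain-002", species="mouse", resolution_um=1.0, stain="DAPI")
archive.submit("brain-003", species="rat", resolution_um=0.5, stain="GFP")

matches = archive.search(species="mouse", stain="GFP")  # -> ["brain-001"]
```

A production archive would of course back this with a database and handle petabyte-scale image payloads separately from the searchable metadata, but the researcher-facing idea is the same.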

About PSC

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

Source: PSC

System Fabric Works and ThinkParQ Partner for Parallel File System

HPC Wire - Thu, 12/07/2017 - 09:59

AUSTIN, Texas, and KAISERSLAUTERN, Germany, Dec. 7, 2017 — Today System Fabric Works (SFW) announced its support and integration of the BeeGFS file system with the latest NetApp E-Series All Flash and HDD storage systems. This makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of SFW’s Converged Infrastructure solutions for high-performance Enterprise Computing, Data Analytics and Machine Learning.

“We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Kevin Moran, President and CEO, System Fabric Works. “Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS and engineered with NetApp E-Series, with a choice of InfiniBand, Omni-Path, RDMA over Ethernet and NVMe over Fabrics for targeted performance and 99.9999% reliability, utilizing customer-chosen clustered servers and clients, backed by SFW’s services for architecture, integration, acceptance and ongoing support.”

SFW’s solutions can utilize each of these networking technologies for optimal BeeGFS performance and 99.9999% reliability, delivered as full turnkey deployments adapted to customer-chosen clustered servers and clients. SFW provides services for architecture, integration, acceptance and ongoing support.

BeeGFS, delivered by ThinkParQ, is a leading parallel cluster file system designed specifically for I/O-intensive workloads in performance-critical environments. With a strong focus on performance and high flexibility, including converged environments where storage servers are also used for computing, BeeGFS helps customers worldwide increase their productivity by delivering results faster and by enabling analysis methods that would not be possible without the specific advantages of BeeGFS.

Designed for easy installation and management, BeeGFS transparently spreads user data across multiple servers, so users can scale performance and capacity to the desired level simply by increasing the number of servers and disks in the system – seamlessly, from small clusters up to enterprise-class systems with thousands of nodes. BeeGFS, which is available as open source, powers the storage of hundreds of scientific and industrial customer sites worldwide.
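The striping idea behind this scaling can be illustrated with a toy model. Chunk size and target names below are invented, and BeeGFS's real layout logic is more sophisticated; this only shows the concept of spreading consecutive chunks of one file round-robin across storage targets:

```python
# Toy model of striping a file round-robin across storage servers
# (chunk size and server names are illustrative, not BeeGFS internals).

def stripe_file(file_size, chunk_size, targets):
    """Assign consecutive chunks of a file to targets round-robin."""
    layout = []
    offset = 0
    while offset < file_size:
        length = min(chunk_size, file_size - offset)
        target = targets[len(layout) % len(targets)]
        layout.append((target, offset, length))
        offset += length
    return layout

# A 1 MiB file in 256 KiB chunks over three storage targets; adding more
# targets spreads the same file over more servers, which is how the
# aggregate bandwidth scales with server count.
layout = stripe_file(1024 * 1024, 256 * 1024, ["stor01", "stor02", "stor03"])
```

Because each chunk lives on a different server, a client reading the file pulls from several servers in parallel instead of being limited to one machine's disk and network bandwidth.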

Sven Breuner, CEO of ThinkParQ, stated, “The long experience and solid track record of System Fabric Works in the field of enterprise storage makes us proud of this new partnership. Together, we can now deliver perfectly tailored solutions that meet and exceed customer expectations, no matter whether the customer needs a traditional spinning disk system for high capacity, an all-flash system for maximum performance or a cost-effective hybrid solution with pools of spinning disks and flash drives together in the same file system.”

Due to its performance-tuned design and various optimized features, BeeGFS is ideal for the demanding, high-performance, high-throughput workloads found in technical computing for modeling and simulation, product engineering, life sciences, deep learning, predictive analytics, media, financial services, and many other business-critical applications.

With the new storage pools feature in BeeGFS v7, users can now pin their current project to the latest NetApp E-Series All Flash SSD pool to get the full performance of an all-flash system, while the rest of the data resides on spinning disks, where it can also be accessed directly – all within the same namespace and thus completely transparent to applications.

SFW BeeGFS solutions can be based on x86_64 and ARM64 ISAs, support multiple networks with dynamic failover, provide fault tolerance with built-in replication, and come with additional file system integrity and storage reliability features. Another compelling part of the offering is BeeOND (BeeGFS On Demand), which allows on-the-fly creation of temporary parallel file system instances on the internal SSDs of compute nodes, on a per-job basis, for burst buffering. Graphical monitoring and an additional command line interface provide easy management for any kind of environment.

SFW BeeGFS high-performance storage solutions, with architectural design, implementation and ongoing support services, are immediately available from System Fabric Works.

About ThinkParQ

ThinkParQ was founded as a spin-off from the Fraunhofer Center for High Performance Computing by the key people behind BeeGFS to bring fast, robust, scalable storage to market. ThinkParQ is responsible for support, provides consulting, organizes and attends events, and works together with system integrators to create turn-key solutions. ThinkParQ and Fraunhofer internally cooperate closely to deliver high quality support services and to drive further development and optimization of BeeGFS for tomorrow’s performance-critical systems. Visit www.thinkparq.com to learn more about the company.

About System Fabric Works

System Fabric Works (“SFW”), based in Austin, TX, specializes in delivering engineering, integration and strategic consulting services to organizations that seek to implement high performance computing and storage systems, low latency fabrics and the necessary related software. Derived from its 15 years of experience, SFW also offers custom integration and deployment of commodity servers and storage systems at many levels of performance, scale and cost effectiveness that are not available from mainstream suppliers. SFW personnel are widely recognized experts in the fields of high performance computing, networking and storage systems particularly with respect to OpenFabrics Software, InfiniBand, Ethernet and energy saving, efficient computing technologies such as RDMA. Detailed information describing SFW’s areas of expertise and corporate capabilities can be found at www.systemfabricworks.com.

Source: System Fabric Works

Call for Sessions and Registration Now Open for 14th Annual OpenFabrics Alliance Workshop

HPC Wire - Thu, 12/07/2017 - 09:06

BEAVERTON, Ore., Dec. 6, 2017 — The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 14th annual OFA Workshop, taking place April 9-13, 2018, in Boulder, CO. The OFA Workshop is a premier means of fostering collaboration among those who develop fabrics, deploy fabrics and create applications that rely on fabrics. It is the only event of its kind where fabric developers and users can discuss emerging fabric technologies, collaborate on future industry requirements, and address problems that exist today. In support of advancing open networking communities, the OFA is proud to announce that Promoter Member Los Alamos National Laboratory, a strong supporter of collaborative development of fabric technologies, will underwrite a portion of the Workshop. For more information about the OFA Workshop and to find support opportunities, visit the event website.

Call for Sessions

The OFA Workshop 2018 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high performance networking issues. Sessions are designed to educate attendees on current development opportunities, troubleshooting techniques, and disruptive technologies affecting the deployment of high performance computing environments. The OFA Workshop places a high value on collaboration and exchanges among participants. In keeping with the theme of collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.

The deadline to submit session proposals is February 16, 2018, at 5:00 p.m. PST. For a list of recommended session topics, formats and submission instructions, download the official OFA Workshop 2018 Call for Sessions flyer.


Early bird registration is now open for all participants of the OFA Workshop 2018. For more information on event registration and lodging, visit the OFA Workshop 2018 Registration webpage.

Dates: April 9-13, 2018

Location: Embassy Suites by Hilton Boulder, CO

Registration Site: http://bit.ly/OFA2018REG

Registration Fee: $695 (Early Bird to March 19, 2018), $815 (Regular)

Lodging: Embassy Suites room discounts available until 6:00 p.m. MDT on Monday, March 19, 2018, or until room block is filled.

About the OpenFabrics Alliance

The OpenFabrics Alliance (OFA) is a 501(c) (6) non-profit company that develops, tests, licenses and distributes the OpenFabrics Software (OFS) – multi-platform, high performance, low-latency and energy-efficient open-source RDMA software. OpenFabrics Software is used in business, operational, research and scientific infrastructures that require fast fabrics/networks, efficient storage and low-latency computing. OFS is free and is included in major Linux distributions, as well as Microsoft Windows Server 2012. In addition to developing and supporting this RDMA software, the Alliance delivers training, workshops and interoperability testing to ensure all releases meet multivendor enterprise requirements for security, reliability and efficiency. For more information about the OFA, visit www.openfabrics.org.

Source: OpenFabrics Alliance

Cray and NERSC Partner to Drive Advanced AI Development at Scale

HPC Wire - Wed, 12/06/2017 - 16:54

SEATTLE, December 6, 2017 – Global supercomputer leader Cray Inc. today announced the company has joined the Big Data Center (BDC) at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC). The collaboration between the two organizations is representative of Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of Artificial Intelligence (AI), deep learning, and data-intensive computing.

The BDC at NERSC was established with a goal of addressing the Department of Energy’s leading data-intensive science problems, harnessing the performance and scale of the Cray XC40 “Cori” supercomputer at NERSC. The collaboration is focused on three fundamental areas that are key to unlocking the capabilities required for the most challenging data-intensive workflows:

·       Advancing the state-of-the-art in scalable, deep learning training algorithms, which is critical to the ability to train models as quickly as possible in an environment of ever-increasing data sizes and complexity;

·       Developing a framework for automated hyper-parameter tuning, which provides optimized training of deep learning models and maximizes a model’s predictive accuracy;

·       Exploring the use of deep learning techniques and applications against a diverse set of important scientific use cases, such as genomics and climate change, which broadens the range of scientific disciplines where advanced AI can have an impact.
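The hyper-parameter tuning idea in the second focus area can be sketched generically. The following is an illustrative random search over a stand-in objective, assumed purely for demonstration; it is not NERSC's actual framework or a real training run:

```python
# Generic illustration of automated hyper-parameter tuning via random
# search; validation_score is a hypothetical stand-in for training a
# deep learning model and measuring validation accuracy.
import math
import random

def validation_score(learning_rate, batch_size):
    """Hypothetical objective, peaking near lr = 1e-2, batch = 64."""
    return -(math.log10(learning_rate) + 2) ** 2 - ((batch_size - 64) / 64) ** 2

def random_search(trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-5, -1),  # log-uniform sample
            "batch_size": rng.choice([16, 32, 64, 128, 256]),
        }
        score = validation_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search(trials=200)
```

A production framework would evaluate candidate configurations in parallel across many nodes and typically use smarter strategies (e.g. Bayesian optimization) rather than pure random sampling, but the optimize-over-configurations loop is the same.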

“We are really excited to have Cray join the Big Data Center,” said Prabhat, Director of the Big Data Center, and Group Lead for Data and Analytics Services at NERSC. “Cray’s deep expertise in systems, software, and scaling is critical in working towards the BDC mission of enabling capability applications for data-intensive science on Cori. Cray and NERSC, working together with Intel and our IPCC academic partners, are well positioned to tackle performance and scaling challenges of Deep Learning.”

“Deep learning is increasingly dependent on high performance computing, and as the leader in supercomputing, Cray is focused on collaborating with the innovators in AI to address present and future challenges for our customers,” said Per Nyberg, Cray’s Senior Director of Artificial Intelligence and Analytics. “Joining the Big Data Center at NERSC is an important step forward in fostering the advancement of deep learning for science and enterprise, and is another example of our continued R&D investments in AI.”

About the Big Data Center at NERSC

The Big Data Center is a collaboration between the Department of Energy’s National Energy Research Scientific Computing Center (NERSC), Intel, and five Intel Parallel Computing Centers (IPCCs). The five IPCCs that are part of the Big Data Center program include the University of California-Berkeley, the University of California-Davis, New York University (NYU), Oxford University, and the University of Liverpool.  The Big Data Center program was established in August 2017.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.

Graves accepts International Distinguished Achievement Award from SPE

Colorado School of Mines - Wed, 12/06/2017 - 16:52

The Society of Petroleum Engineers has named College of Earth Resource Sciences and Engineering Dean Ramona Graves as the 2017 recipient of the International Distinguished Achievement Award for Petroleum Engineering Faculty.

SPE presents international awards to recognize those who have made significant technical and professional contributions to the industry and contributed exceptional service and leadership to the society. 

Graves received the award “for her significant scientific achievements in the areas of laser-rock interaction, for dedication to students, teaching and the teaching profession, and for furthering cross-functional cooperation.”

Graves is a Mines alumna and the second woman in the country to earn a doctorate in petroleum engineering.

“It really is an honor to receive an award for doing something that I absolutely love for the last 40—almost 40—years,” said Graves after receiving her award.

Graves went on to thank colleagues and family, saying she owed a special debt of gratitude to the women in her life.

The award was presented by SPE President Janeen Judah at the Annual Awards Banquet during the Annual Technical Conference and Exhibition, October 9-11, 2017, in San Antonio, Texas.

Contact: Agata Bogucka, Communications Manager, College of Earth Resource Sciences & Engineering | 303-384-2657 | abogucka@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Mines student inspires future University Innovation Fellows

Colorado School of Mines - Wed, 12/06/2017 - 15:54

A Colorado School of Mines student spent nearly a week before Thanksgiving working to educate and inspire the next generation of University Innovation Fellows, including the four newest fellowship candidates from Mines.

Asya Sergoyan, a chemical engineering major, was one of 24 current fellows invited back to facilitate the UIF’s Silicon Valley meetup, which this November brought together 350 young innovators for immersive experiences at Stanford University’s d.school and Google. Aspiring fellows also receive six weeks of training online, which has been described as similar to a four-credit course.

“It was interesting to be on the other side,” Sergoyan said. “It was a much larger group than ever before and very international—students from India, a lot of students from South America. It was really cool.”

Sergoyan facilitated a workshop about integrating music into K-12 education. She also delivered a four-minute “Ignite” speech about learning from failure, shifting one’s perspective and using what one has learned to succeed in the future.

For her presentation—15 slides at 15 seconds per slide—Sergoyan drew upon her experience attempting to translate her success with a nonprofit organization she cofounded in high school to Mines.

Grades for Change provided free science and English tutoring to K-8 students. As a freshman at Mines, Sergoyan hoped to do something similar and encourage fellow college students to promote STEM education at local high schools. “We hosted meetings, but no one was ever interested,” Sergoyan said. “Students were too busy, and they didn’t want to do it for free. I realized that this isn’t what the campus needs, but there are other things it does.”

Sergoyan emphasized three ideas in her speech: “You always learn more about yourself from failure; failure and success aren’t discrete; and recognizing failure is a success in itself,” she said.

Sergoyan was one of six Mines students named University Innovation Fellows in February 2016. Before that, only one Mines student had taken part in the program, which seeks to empower students to become agents of change at their schools.

That cohort’s accomplishments on campus include the creation of maker spaces, the innovation competition sponsored by Newmont and a section of freshman orientation devoted to innovation activities. They’re also organizing a regional UIF meetup on campus next September. “We want to fly a bunch of University Innovation Fellows in from all over the country, maybe the world, to see our campus and work with our students to brainstorm things around poverty and the needs of developing countries,” Sergoyan said.

Even though she was at the Silicon Valley meetup to help the newest fellows, Sergoyan found plenty of inspiration herself. “It was the most incredible time, and I’ve met the most incredible people—people who do the craziest jobs, who have started their own companies, who have gone through tragic events. I saw people with cultural differences who were focused on the same ideas.”

“Everybody was so willing to devote their time to help everybody with their speeches,” she said. “It was such a good community that I just want to go back there.”

Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

IBM Begins Power9 Rollout with Backing from DOE, Google

HPC Wire - Wed, 12/06/2017 - 15:37

After more than a year of buildup, IBM is unveiling its first Power9 system, based on the same architecture as the Department of Energy CORAL supercomputers Summit and Sierra. The new AC922 server pairs two Power9 CPUs with four or six Nvidia Tesla V100 NVLink GPUs. IBM is positioning the Power9 architecture as “a game-changing powerhouse for AI and cognitive workloads.”

The AC922 extends many of the design elements introduced in Power8 “Minsky” boxes with a focus on enabling connectivity to a range of accelerators – Nvidia GPUs, ASICs, FPGAs, and PCIe-connected devices – using an array of interfaces. In addition to being the first servers to incorporate PCIe Gen4, the new systems support the NVLink 2.0 and OpenCAPI protocols, which offer nearly 10x the maximum bandwidth of PCIe Gen3-based x86 systems, according to IBM.

IBM AC922 rendering

“We designed Power9 with the notion that it will work as a peer computer or a peer processor to other processors,” said Sumit Gupta, vice president of AI and HPC within IBM’s Cognitive Systems business unit, ahead of the launch. “Whether it’s GPU accelerators or FPGAs or other accelerators that are in the market, our aim was to provide the links and the hooks to give all these accelerators equal footing in the server.”

In the coming months and years there will be additional Power9-based servers to follow from IBM and its ecosystem partners but this launch is all about the flagship AC922 platform and specifically its benefits to AI and cognitive computing – something Ken King, general manager of OpenPOWER for IBM Systems Group, shared with HPCwire when we sat down with him at SC17 in Denver.

“We didn’t build this system just for doing traditional HPC workloads,” King said. “When you look at what Power9 has with NVLink 2.0, we’re going from 80 gigabytes per second throughput [in NVLink 1.0] to over 150 gigabytes per second throughput. PCIe Gen3 only has 16. That GPU-to-CPU I/O is critical for a lot of the deep learning and machine learning workloads.”
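
As a rough illustration of what the quoted throughput figures mean in practice, the sketch below computes lower-bound transfer times for a hypothetical 64 GB working set over PCIe Gen3 (~16 GB/s) and NVLink 2.0 (~150 GB/s). The payload size is invented for the example, and protocol overhead is ignored.

```python
# Illustrative arithmetic only: compare host-to-GPU transfer times implied by
# the interconnect figures quoted above. Not a benchmark.
PCIE_GEN3_GBPS = 16.0   # GB/s, quoted figure for PCIe Gen3
NVLINK2_GBPS = 150.0    # GB/s, quoted figure for NVLink 2.0 on Power9

def transfer_seconds(payload_gb: float, bandwidth_gbps: float) -> float:
    """Lower-bound transfer time, ignoring protocol and software overhead."""
    return payload_gb / bandwidth_gbps

payload = 64.0  # GB, hypothetical working set
pcie = transfer_seconds(payload, PCIE_GEN3_GBPS)      # 4.00 s
nvlink = transfer_seconds(payload, NVLINK2_GBPS)      # 0.43 s
print(f"PCIe Gen3: {pcie:.2f} s, NVLink 2.0: {nvlink:.2f} s, "
      f"ratio {pcie / nvlink:.1f}x")
```

The ~9.4x ratio is consistent with IBM’s “nearly 10x” bandwidth claim elsewhere in the article.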

Coherency, which Power9 introduces via both CAPI and NVLink 2.0, is another key enabler. As AI models grow large, they can easily outgrow GPU memory capacity but the AC922 addresses these concerns by allowing accelerated applications to leverage system memory as GPU memory. This reduces latency and simplifies programming by eliminating data movement and locality requirements.
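
A back-of-envelope sketch of why that coherency matters (an assumption-laden illustration, not IBM’s method): it estimates whether a model’s training state fits in a single 16 GB V100, the case where spilling coherently into system memory becomes useful. The parameter counts and the 3x overhead factor for gradients and optimizer state are illustrative guesses.

```python
# Rough capacity check: weights plus training overhead vs GPU memory.
BYTES_PER_PARAM = 4    # fp32 weights
V100_MEMORY_GB = 16    # Tesla V100 (16 GB variant)

def fits_on_gpu(num_params: int, overhead_factor: float = 3.0) -> bool:
    """True if estimated training footprint fits in one V100's memory."""
    needed_gb = num_params * BYTES_PER_PARAM * overhead_factor / 1e9
    return needed_gb <= V100_MEMORY_GB

print(fits_on_gpu(1_000_000_000))  # 1B params -> ~12 GB, fits
print(fits_on_gpu(2_000_000_000))  # 2B params -> ~24 GB, must spill to host
```

Once the footprint exceeds GPU memory, coherent access to the roughly terabyte-scale system memory is what keeps such a model trainable without manual data staging.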

The AC922 server can be configured with either four or six Nvidia Volta V100 GPUs. According to IBM, a four GPU air-cooled version will be available December 22 and both four- and six-GPU water-cooled options are expected to follow in the second quarter of 2018.

While the new Power9 boxes have gone by a couple different codenames (“Witherspoon” and “Newell”), we’ve also heard folks at IBM refer to them informally as their Summit servers and indeed there is great visibility in being the manufacturer for what is widely expected to be the United States’ next fastest supercomputer. Thousands of the AC922 nodes are being connected together along with storage and networking to drive approximately 200 petaflops at Oak Ridge and 120 petaflops at Lawrence Livermore.

As King pointed out in our interview, only one of the original CORAL contractors is fulfilling its mission to deliver a “pre-exascale” supercomputer to the collaboration of US labs.

IBM has also been tapped by Google, which with partner Rackspace is building a server with Power9 processors called Zaius. In a prepared statement, Bart Sano, vice president of Google Platforms, praised “IBM’s progress in the development of the latest POWER technology” and said “the POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers.”

IBM sees the hyperscale market as “a good volume opportunity” but is obviously aware of the impact that volume pricing has had on the traditional server market. “We do see strong pull from them, but we have many other elements in play,” said Gupta. “We have solutions that go after the very fast-growing AI space, we have solutions that go after the open source databases, the NoSQL datacenters. We have announced a partnership with Nutanix to go after the hyperconverged space. So if you look at it, we have lots of different elements that drive the volume and opportunity around our Linux on Power servers, including of course SAP HANA.”

IBM will also be selling Power9 chips through its OpenPower ecosystem, which now encompasses 300 members. IBM says it’s committed to deploying three versions of the Power9 chip, one this year, one in 2018 and another in 2019. The scale-out variant is the one it is delivering with CORAL and with the AC922 server. “Then there will be a scale-up processor, which is the traditional chip targeted towards the AIX and the high-end space and then there’s another one that will be more of an accelerated offering with enhanced memory and other features built into it; we’re working with other memory providers to do that,” said King.

He added that there might be another version developed outside of IBM, leveraging OpenPower, which gives other organizations the opportunity to utilize IBM’s intellectual property to build their own differentiated chips and servers.

King is confident that the demand for IBM’s latest platform is there. “I think we are going to see strong out of the chute opportunities for Power9 in 2018. We’re hoping to see some growth this quarter with the solution that we’re bringing out with CORAL but that will be more around the ESP customers. Next year is when we’re expecting that pent up demand to start showing positive return overall for our business results.”

A lot is riding on the success of Power9 after Power8 failed to generate the kind of profits that IBM had hoped for. There was growth in the first year, said King, but after that Power8 started declining. He added that capabilities like the Nutanix partnership and building PowerAI and other software-based solutions on top of the platform have led to a bit of a rebound. “It’s still negative but it’s low negative,” he said, “but it’s sequentially grown quarter to quarter in the last three quarters since Bob Picciano [SVP of IBM Cognitive Systems] came on.”

Several IBM reps we spoke with acknowledged that pricing or at least pricing perception was a problem for Power8.

“For our traditional market I think pricing was competitive; for some of the new markets that we’re trying to get into like the hyperscaler datacenters I think we’ve got some work to do,” said King. “It’s really a TCO and a price-performance competitiveness versus price only. And we think we’re going to have a much better price performance competitiveness with Power9 in the hyperscalers and some of the low-end Linux spaces that are really the new markets.”

“We know what we need to do for Power9 and we’re very confident with a lot of the workload capabilities that we’ve built on top of this architecture that we’re going to see a lot more growth, positive growth on Power9, with PowerAI with Nutanix with some of the other workloads we’ve put in there and it’s not going to be a hardware only reason,” King continued. “It’s going to be a lot of the software capabilities that we’ve built on top of the platform, and supporting more of the newer workloads that are out there. If you look at the IDC studies of the growth curve of cognitive infrastructure it goes from about $1.6 billion to $4.5 billion over the next two or three years – it’s a huge hockey stick – and we have built and designed Power9 for that market, specifically and primarily for that market.”

The post IBM Begins Power9 Rollout with Backing from DOE, Google appeared first on HPCwire.

Researchers win NASA funding for small spacecraft technology

Colorado School of Mines - Wed, 12/06/2017 - 15:23

A pair of researchers from Colorado School of Mines was one of nine university teams selected for NASA funding to develop and demonstrate new technologies and capabilities for small spacecraft.

Qi Han, associate professor of computer science, and Christopher Dreyer, research assistant professor of mechanical engineering, will receive $200,000 in funding per year for two years through NASA’s Smallsat Technology Partnerships Initiative. Working with two collaborators from NASA’s Jet Propulsion Laboratory in Pasadena, California, their focus will be developing and evaluating algorithms for dynamic spacecraft networking and network-aware coordination of multi-spacecraft swarms.

“This project aims to develop a framework for tight integration of communication and controls as an enabling technology for NASA to effectively deploy swarms of small spacecraft,” Han said. “This framework will make it possible for a network of self-organizing small spacecraft to be highly collaborative among themselves for the monitoring of time-varying and geographically distributed phenomena.”

Current deep-space missions face several challenges, including intermittent network connectivity, stringent bandwidth constraints and diverse quality-of-service (QoS) and quality-of-data (QoD) requirements, she said. 

“The use of a single platform creates non-optimal data-gathering conditions, thus requiring longer duration to meet science requirements,” Han said. “For example, during the NEAR [Near Earth Asteroid Rendezvous] mission, the orbit was a compromise resulting in non-optimal data-gathering conditions for most instruments. Up to a third of the time, communicating with the Earth required maneuvering the spacecraft so that the asteroid was no longer in the instruments’ field of view.”

The distributed spacecraft network proposed by the Mines team would deploy a carrier spacecraft with larger storage and processing capabilities along with the swarm of small spacecraft in orbit about a near-Earth asteroid. 

“The carrier spacecraft is dedicated to data transfer, so it is responsible for sending data gathered by all the spacecraft to the deep space network,” Han said. “This setup will make sure that the spacecraft swarm can collect measurements uninterrupted in the shortest period of time.” 

As part of the project, researchers will also evaluate and demonstrate an integrated prototype system, using a team of unmanned aerial drones in the challenging wireless network environment of the Edgar Experimental Mine.

“The work nicely complements efforts at Mines to expand research and teaching in space-related fields, such as the Mines and Lockheed Martin software academy and the Space Resources Graduate Program,” said Dreyer, who works in the Center for Space Resources at Mines.

Other universities to receive funding through NASA’s Smallsat initiative are Massachusetts Institute of Technology; Stanford University; Purdue University; Utah State University; University of Arizona; University of Illinois, Urbana-Champaign; and University of Washington. Proposals were requested in three areas – instrument technologies for small spacecraft, technologies that enable large swarms of small spacecraft and technologies that enable deep-space small spacecraft missions. 

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

University of Oregon Uses Supercomputer to Power New Research Advanced Computing Facility

HPC Wire - Wed, 12/06/2017 - 13:31

Dec. 6, 2017 — A supercomputer that can perform more than 250 trillion calculations per second is powering the UO’s leap into data science as the heart of a new $2.2 million Research Advanced Computing Services facility.

Known as Talapas, the powerhouse machine is one of the fastest academic supercomputers in the Northwest. Its computing horsepower will aid researchers doing everything from statistical studies to genomic assemblies to quantum chemistry.

“It’s already had a profound impact on my research,” said Eric Corwin, an associate professor in the Department of Physics who employed the center’s supercomputer to examine a physical process known as “jamming.” “Computational tasks that would otherwise have taken a year running on a lab computer can be finished in just a few days, which means we can be much more exploratory in our approach, which has already led to several unexpected discoveries.”
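
Taken at face value, the quote above implies a speedup of two orders of magnitude; a trivial sketch of that arithmetic, with the day counts read loosely from the quote rather than measured:

```python
# Illustrative arithmetic only: speedup implied by "a year" of lab-machine
# compute finishing "in just a few days" on Talapas.
def implied_speedup(lab_days: float, cluster_days: float) -> float:
    """Ratio of wall-clock times, assuming the same total work in both runs."""
    return lab_days / cluster_days

print(f"~{implied_speedup(365, 3):.0f}x")  # ~122x if "a few days" means three
```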

The new center, which opens officially Dec. 6, is already available to faculty members who register as principal investigators or to members of registered research teams. The center, known by the acronym RACS, offers access to large-scale computing and will soon add high-speed data transfer capabilities, support for data sharing and other services.

In addition to boosting the university’s capacity for big data, the new center opens new doors of discovery for faculty across the spectrum of disciplines, schools, colleges and departments. Director Nick Maggio says the center will also help train students for the careers of tomorrow and make the UO more competitive in recruiting new faculty and securing research funding.

“It allows our researchers to evaluate novel technologies and explore new paradigms of computing that weren’t available to them before,” Maggio said. “We’re here to lower every barrier possible so that research computing can flourish at the University of Oregon.”

Talapas is 10 times more powerful than its aging predecessor, ACISS. In just the first few months of testing, the center has helped faculty members performing molecular dynamics simulations, image analysis, machine learning, deep learning and other types of projects.

Bill Cresko, a professor in the Department of Biology who serves as an associate vice president for research, directs the UO’s Presidential Initiative in Data Science. He points to the high-performance computing center as a crucial element of the initiative.

The center will bring together existing faculty and recruit new faculty across the UO’s schools and colleges to create new research and education programs. The center and the initiative are funded through the $50 million Presidential Fund for Excellence announced earlier this year by UO President Michael Schill.

“Research is becoming more and more data-intensive every day, and it’s crucial that we have the capacity to perform the kinds of larger and larger simulations that the high-performance computing center enables,” Cresko said. “The center will play a key role in our continued success as a research institution and our commitment to discovery and innovation.”

The Research Advanced Computing Services center has a staff of four: Maggio, a computational scientist and two system administrators. Among other things, the team has been tasked with transitioning users off the old system and bringing researchers up to speed with the powerful new technology.

The center is one of nine research core facilities supported by the UO’s Office of the Vice President of Research and Innovation.

“It’s a real jewel in the midst of our growing research enterprise,” said David Conover, UO’s vice president for research and innovation. “It goes a long way toward our goal of advancing transformative excellence in research, innovation and graduate education, and it’s exciting to think about all of the new discoveries and new collaborations that will grow out of the facility.”

Although the machine is physically located in the basement of the Allen Hall Data Center, researchers from any department can sign up to access its services from their desktops. Increasingly, Maggio said, researchers who previously didn’t use such computational approaches are becoming computational researchers after encountering new research projects that quickly overwhelm the limits of their local resources.

“With the data explosion that’s occurred over the last 10 years, new opportunities for computational research exist in every field,” Maggio said. “There is no such thing as a non-computational discipline anymore.”

Maggio credits Schill and the UO Board of Trustees with seeing the importance of high-performance computing and prioritizing the funding and creation of the new center in under two years. Joe Sventek, head of the Department of Computer and Information Science, led a faculty committee that developed plans for acquiring the computational hardware, implemented the hiring of key staff such as Maggio and helped launch RACS — all in record time.

“The fact that Joe and the committee completed this task so quickly is simply amazing,” Conover said.

Looking to the future, Maggio envisions more and more researchers accessing the new facility.  Already, more than 300 different lab members from nearly 80 labs have requested access, and high-performance computing will likely play an increasing role in powering new research initiatives, such as the Phil and Penny Knight Campus for Accelerating Scientific Impact.

“This is the fastest and largest computing asset that the University of Oregon has ever had and it’s still growing,” Maggio said. “This is an incredibly exciting time to be engaged in computational research at the University of Oregon.”

To request access to large-scale computing resources, contact the Research Advanced Computing Services center at racs@uoregon.edu.

Source: University of Oregon

The post University of Oregon Uses Supercomputer to Power New Research Advanced Computing Facility appeared first on HPCwire.

PEZY President Arrested, Charged with Fraud

HPC Wire - Wed, 12/06/2017 - 10:19

The head of Japanese supercomputing firm PEZY Computing was arrested Tuesday on suspicion of defrauding a government institution of 431 million yen (~$3.8 million). According to reports in the Japanese press, PEZY founder, president and CEO Motoaki Saito and another PEZY employee, Daisuke Suzuki, are charged with profiting from padded claims they submitted to the New Energy and Industrial Technology Development Organization (NEDO).

PEZY, which stands for Peta, Exa, Zetta, Yotta, designed the manycore processors for “Gyoukou,” one of the world’s fastest and most energy-efficient supercomputers. Installed at the Japan Agency for Marine-Earth Science and Technology, Gyoukou achieved a fourth-place ranking on the November 2017 Top500 list with 19.14 petaflops of Linpack performance. Four of the five top systems on the current Green500 list are PEZY-based, including Gyoukou in fifth position and the number one machine, Shoubu system B, operated by RIKEN.

Saito is also the founder and CEO of ExaScaler, which manufactures the PEZY systems using its immersion liquid-cooling technology, and Ultra Memory, Inc., a startup working on 3D multi-layer memory technology.

All three companies (PEZY, Exascaler, and Ultra Memory) have been in joint collaboration to develop an exascale supercomputer in the 2019 timeframe.

PEZY Computing was founded in January 2010 and introduced its first generation manycore microprocessor PEZY-1 in 2012; PEZY-SC followed in 2014. The third-generation chip, PEZY-SC2, was released in early 2017. The company has an estimated market cap of 940 million yen ($8.4 million).

NEDO is one of the largest public R&D management organizations in Japan, promoting the development and introduction of industrial and energy technologies.

The post PEZY President Arrested, Charged with Fraud appeared first on HPCwire.

Survey from HSA Foundation Highlights Importance, Benefits of Heterogeneous Systems

HPC Wire - Wed, 12/06/2017 - 09:07

BEAVERTON, Ore., Dec. 6, 2017 — The Heterogeneous System Architecture (HSA) Foundation today released key findings from a second comprehensive members survey. The survey reinforced why heterogeneous architectures are becoming integral for future electronic systems.

HSA is a standardized platform design supported by more than 70 technology companies and universities that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices. It allows developers to easily and efficiently apply the hardware resources—including CPUs, GPUs, DSPs, FPGAs, fabrics and fixed function accelerators—in today’s complex systems-on-chip (SoCs).

Some of the survey questions – and results:

Will the system have HSA features? 

Last year, 58.82% of the respondents answered affirmatively; this year, 100%!

Will it be HSA-compliant?

In 2016, 69.23% said it would; 2017 figures rose to 80%.

What is the top challenge in implementing heterogeneous systems?

27.27% responded in 2016 that it was a lack of standards for software programming models; the 2017 survey also identified this as the most important issue, but the numbers decreased to 7.69%.

What is the top challenge in implementing heterogeneous systems?

Half of the respondents last year said it was a lack of developer ecosystem momentum.  Once again this was identified as the key issue.

Some remarks that further accentuate key survey findings:

“Many HSA Foundation members are currently designing, programming or delivering a wide range of heterogeneous systems – including those based on HSA,” said HSA Foundation President Dr. John Glossner. “Our 2017 survey provides additional insight into key issues and trends affecting these systems that power the electronic devices across every aspect of our lives.”

Greg Stoner, HSA Foundation Chairman and Managing Director said that “the Foundation is developing resources and ecosystems conducive to its members’ various focuses on different application areas, including machine learning, artificial intelligence, datacenter, embedded IoT, and high-performance computing. The Foundation has also been making progress in support of these ecosystems, getting closer to taking normal C++ code and compiling to an HSA system.”

Stoner added that “ROCm 1.7 by AMD will port HSA for Caffe and TensorFlow; GPT, in the meantime, is releasing an open-sourced HSAIL-based Caffe library, with the first version already up and running – this permits early access for developers.”

Dr. Xiaodong Zhang, from Huaxia General Processor Technologies, who serves as chairman of the China Regional Committee (CRC; established by the HSA Foundation to enhance global awareness of heterogeneous computing), said that “China’s semiconductor industry is rapidly developing, and the CRC is building an ecosystem in the region to include technology, talent, and markets together with an open approach to take advantage of synergies among industry, academia, research, and applications.”

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation

The post Survey from HSA Foundation Highlights Importance, Benefits of Heterogeneous Systems appeared first on HPCwire.

Raytheon Developing Superconducting Computing Technology for Intelligence Community

HPC Wire - Tue, 12/05/2017 - 12:21

CAMBRIDGE, Mass., Dec. 5, 2017 — A Raytheon BBN Technologies-led team is developing prototype cryogenic memory arrays and a scalable control architecture under an award from the Intelligence Advanced Research Projects Activity Cryogenic Computing Complexity program.

The team recently demonstrated an energy-efficient superconducting/ferromagnetic memory cell—the first integration of a superconducting switch controlling a cryogenic memory element.

“This research could generate a new approach to supercomputing that is more efficient, faster, less expensive, and requires a smaller footprint,” said Zachary Dutton, Ph.D. and manager of the quantum technologies division at Raytheon BBN Technologies.

Raytheon BBN is the prime contractor leading a team that includes:

  • Massachusetts Institute of Technology
  • New York University
  • Cornell University
  • University of Rochester
  • University of Stellenbosch
  • HYPRES, Inc.
  • Canon U.S.A., Inc.
  • Spin Transfer Technologies, Inc.

Raytheon BBN Technologies is a wholly owned subsidiary of Raytheon Company (NYSE: RTN).

About Raytheon 

Raytheon Company, with 2016 sales of $24 billion and 63,000 employees, is a technology and innovation leader specializing in defense, civil government and cybersecurity solutions. With a history of innovation spanning 95 years, Raytheon provides state-of-the-art electronics, mission systems integration, C5I™ products and services, sensing, effects, and mission support for customers in more than 80 countries. Raytheon is headquartered in Waltham, Massachusetts.

Source: Raytheon

The post Raytheon Developing Superconducting Computing Technology for Intelligence Community appeared first on HPCwire.

Cavium Partners with IBM for Next Generation Platforms by Joining OpenCAPI

HPC Wire - Tue, 12/05/2017 - 12:12

SAN JOSE, Calif., Dec. 5, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking, is partnering with IBM for next generation platforms by joining OpenCAPI, an initiative founded by IBM, Google, AMD and others. OpenCAPI provides a high-bandwidth, low-latency interface optimized to connect accelerators, IO devices and memory to CPUs. With this announcement, Cavium plans to bring its leadership in server IO and security offloads to next generation platforms that support the OpenCAPI interface.

Traditional system architectures are becoming a bottleneck for new classes of data-centric applications that require faster access to peripheral resources like memory, I/O and accelerators. For the efficient deployment and success of such applications, it is imperative to put the compute power closer to the data. OpenCAPI, a mature and complete specification, enables a server design that can increase datacenter server performance severalfold, enabling corporate and cloud data centers to speed up big data, machine learning, analytics, and other emerging workloads. Capable of a 25 Gbit per second data rate, OpenCAPI delivers best-in-class performance, enabling maximum utilization of high-speed I/O devices like Cavium Fibre Channel adapters, low-latency Ethernet NICs, programmable SmartNICs and security solutions.
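
For context on the “25 Gbits per second” figure, a small unit-conversion sketch; it reads the number as a per-lane signaling rate, and the lane counts are hypothetical examples, since the announcement does not specify a link width.

```python
# Convert a 25 Gbit/s per-lane rate into aggregate gigabytes per second for a
# given lane count (8 bits per byte; encoding and protocol overhead ignored).
GBIT_PER_LANE = 25.0

def aggregate_gbytes_per_s(lanes: int) -> float:
    """Aggregate raw bandwidth in GB/s for `lanes` lanes at 25 Gbit/s each."""
    return lanes * GBIT_PER_LANE / 8.0

print(aggregate_gbytes_per_s(1))  # 3.125 GB/s for a single lane
print(aggregate_gbytes_per_s(8))  # 25.0 GB/s for a hypothetical x8 link
```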

Cavium delivers the industry’s most comprehensive family of I/O adapters and network accelerators, which have the potential to be seamlessly integrated into OpenCAPI-based systems. Cavium’s portfolio includes FastLinQ® Ethernet Adapters, Converged Networking Adapters, LiquidIO SmartNICs, Fibre Channel Adapters and NITROX® Security Accelerators that cover the entire spectrum of data-centric application connectivity, offload and acceleration requirements.

“We welcome Cavium to the OpenCAPI consortium to fuel innovation for today’s data-intensive cognitive workloads,” said Bob Picciano, Senior Vice President, IBM Cognitive Systems. “Together, we will tap into Cavium’s next-generation technology, including networking and accelerators, and work in tandem with other partners’ systems technology to unleash high-performance capabilities for our clients’ data center workloads.”

“We are excited to be a part of the OpenCAPI consortium. As our partnership with IBM continues to grow, we see more synergies in high speed communication and Artificial Intelligence applications,” said Syed Ali, founder and CEO of Cavium.  “We look forward to working with IBM to enable exponential performance gains for these applications.”

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

The post Cavium Partners with IBM for Next Generation Platforms by Joining OpenCAPI appeared first on HPCwire.

IBM Unveils Power9 Server Designed for HPC, AI

HPC Wire - Tue, 12/05/2017 - 11:38

ARMONK, N.Y., Dec. 5, 2017 — IBM today unveiled its next-generation Power Systems servers incorporating its newly designed POWER9 processor. Built specifically for compute-intensive AI workloads, the new POWER9 systems are capable of improving the training times of deep learning frameworks by nearly 4x[2], allowing enterprises to build more accurate AI applications, faster.

The system was designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica.

As a result, data scientists can build applications faster, ranging from deep learning insights in scientific research to real-time fraud detection and credit risk analysis.

POWER9 is at the heart of the soon-to-be most powerful data-intensive supercomputers in the world, the U.S. Department of Energy’s “Summit” and “Sierra” supercomputers, and has been tapped by Google.

“Google is excited about IBM’s progress in the development of the latest POWER technology,” said Bart Sano, VP of Google Platforms. “The POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers.”

“We’ve built a game-changing powerhouse for AI and cognitive workloads,” said Bob Picciano, SVP of IBM Cognitive Systems. “In addition to arming the world’s most powerful supercomputers, IBM POWER9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry.”

Accelerating the Future with POWER9

Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of data points to detect and rank the most important aspects of the data.

To meet these growing industry demands, four years ago IBM set out to design the POWER9 chip on a blank sheet to build a new architecture to manage free-flowing data, streaming sensors and algorithms for data-intensive AI and deep learning workloads on Linux.

IBM is the only vendor that can provide enterprises with an infrastructure that incorporates cutting-edge hardware and software with the latest open-source innovations.

With PowerAI, IBM has optimized and simplified the deployment of deep learning frameworks and libraries on the Power architecture with acceleration, allowing data scientists to be up and running in minutes.

IBM Research is developing a wide array of technologies for the Power architecture. IBM researchers have already cut deep learning times from days to hours with the PowerAI Distributed Deep Learning toolkit.

Building an Open Ecosystem to Fuel Innovation

The era of AI demands more than tremendous processing power and unprecedented speed; it also demands an open ecosystem of innovative companies delivering technologies and tools. IBM serves as a catalyst for innovation to thrive, fueling an open, fast-growing community of more than 300 OpenPOWER Foundation and OpenCAPI Consortium members.

Learn more about POWER9 and the AC922: http://ibm.biz/BdjCQQ

Read more from Bob Picciano, Senior Vice President, IBM Cognitive Systems:  https://www.ibm.com/blogs/think/2017/12/accelerating-ai/

[1] Results of 3.7X are based on IBM internal measurements running 1,000 iterations of an Enlarged GoogleNet model (mini-batch size = 5) on the Enlarged ImageNet dataset (2560×2560). Hardware: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4x Tesla V100 GPU; Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) with CUDA 9.1 / cuDNN 7. Competitive stack: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4x Tesla V100 GPU, Ubuntu 16.04 with CUDA 9.0 / cuDNN 7. Software: Chainer v3 / LMS / Out of Core with patches found at https://github.com/cupy/cupy/pull/694 and https://github.com/chainer/chainer/pull/3762

[2] Results of 3.8X are based on IBM internal measurements running 1,000 iterations of an Enlarged GoogleNet model (mini-batch size = 5) on the Enlarged ImageNet dataset (2240×2240). Hardware: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4x Tesla V100 GPU; Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) with CUDA 9.1 / cuDNN 7. Competitive stack: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4x Tesla V100 GPU, Ubuntu 16.04 with CUDA 9.0 / cuDNN 7. Software: IBM Caffe with LMS, source code at https://github.com/ibmsoe/caffe/tree/master-lms

[3] x86 PCI Express 3.0 (x16) peak transfer rate is 15.75 GB/sec = 16 lanes x 1 GB/sec/lane x 128-bit/130-bit encoding.

[4] POWER9 and next-generation NVIDIA NVLink peak transfer rate is 150 GB/sec = 48 lanes x 3.22265625 GB/sec/lane x 64-bit/66-bit encoding.
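The arithmetic behind footnotes [3] and [4] can be checked directly (this is a sketch using only the lane counts, per-lane rates, and encoding overheads quoted in the release):

```python
# Verify the peak-bandwidth arithmetic from footnotes [3] and [4].
# PCIe 3.0 x16: 16 lanes x 1 GB/s/lane, with 128b/130b line encoding.
pcie = 16 * 1.0 * 128 / 130

# POWER9 + NVLink 2.0: 48 lanes x 3.22265625 GB/s/lane, 64b/66b encoding.
nvlink = 48 * 3.22265625 * 64 / 66

print(round(pcie, 2))           # 15.75 GB/s, matching footnote [3]
print(round(nvlink, 1))         # 150.0 GB/s, matching footnote [4]
print(round(nvlink / pcie, 1))  # ~9.5x advantage for NVLink
```

Note that with the per-lane rate written as 3.22265625 GB/s (NVLink 2.0's 25.78125 GT/s over 8 bits), the product comes out to exactly 150 GB/s.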

Source: IBM

The post IBM Unveils Power9 Server Designed for HPC, AI appeared first on HPCwire.

Mines team headed to programming world finals

Colorado School of Mines - Tue, 12/05/2017 - 11:28

A team from Colorado School of Mines is headed to the world finals of the ACM International Collegiate Programming Competition for the first time in school history. 

The SAMurai MASters – Sam Reinehr, Allee Zarrini and Matt Baldin – won the Rocky Mountain Regional on Nov. 11, besting more than 50 teams from Colorado, Utah, Montana, Arizona, Alberta and Saskatchewan to claim the region’s lone spot in the most prestigious collegiate programming competition in the world. 

The Mines juniors will face off against teams from Asia, Europe, Africa, North and South America and Australia when they travel to Beijing, China, in April.

“All CS@Mines faculty are pumped about the first-place finish of SAMurai MASters in our region,” said Tracy Camp, professor and head of the Computer Science Department. “These types of events are such a great educational opportunity for our students, so we were thrilled to see 11 teams – 33 CS@Mines students – participate this year, a record. To have a team win the Rocky Mountain region is huge.” 

Reinehr, Zarrini and Baldin credited their victory to months of preparation – the three friends have been meeting up for four hours every Saturday since the summer and added some individual programming practice this fall. 

“We competed last year, but we didn’t prepare at all. We just did it for fun,” Baldin said. “We went in with no expectations but we thought we could win if we actually tried. It didn’t seem out of reach. So, we promised ourselves that we would practice a lot.” 

In the competition, teams are ranked by how many algorithmic problems they solve, with ties broken by how quickly they produce correct answers. At regionals, the SAMurai MASters solved 9 of 11 problems – but they did it considerably faster than the only other team that managed to solve that many.
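The ranking rule can be sketched in a few lines. Under ICPC-style scoring, teams are ordered by problems solved, with ties broken by lower total penalty time: the minutes elapsed at each accepted submission, plus 20 minutes for every wrong attempt on a problem eventually solved. The team names and submission data below are illustrative, not actual regional results:

```python
# Illustrative ICPC-style scoring: rank by problems solved (descending),
# break ties by total penalty time (ascending).

def penalty(solves):
    """solves: list of (minutes_at_accept, wrong_tries_before_accept)."""
    return sum(t + 20 * wrong for t, wrong in solves)

def rank(teams):
    """teams: {name: list of solves}. Returns team names, best first."""
    return sorted(teams, key=lambda n: (-len(teams[n]), penalty(teams[n])))

# Two hypothetical teams solving the same number of problems: the faster
# one, with the lower penalty, places higher.
standings = rank({
    "Team A": [(30, 0), (55, 1), (120, 0)],   # penalty 30 + 75 + 120 = 225
    "Team B": [(45, 2), (90, 0), (200, 1)],   # penalty 85 + 90 + 220 = 395
})
print(standings)  # ['Team A', 'Team B']
```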

Unlike many of the schools sending teams, though, Mines does not have a competitive programming club or class from which the top performers can be curated into teams for regionals. The SAMurai MASters hope that changes in the future.

“Overall what we hope to get out of this is for Mines, after seeing us place so well, to start developing a program for students to compete and do well in this competition in the future even after we graduate,” Zarrini said. 

“We put Mines on the map,” Baldin added. “We want them to stay there.”

At worlds, the SAMurai don’t expect a repeat performance of regionals – Russian teams have won six years running – but they’d be happy with placing in the top 60 and earning honorable mention. To help prepare, they’re doing an independent study next semester with Teaching Associate Professor Jeff Paone.

“We're all juniors – we've got next year, too,” Reinehr said.

Tech companies also recruit out of the competition, and the teammates have found that all that programming practice has been good prep for interviews. 

“Almost every technical interview question I’ve gotten has not been nearly as difficult as the questions we’re doing here,” Zarrini said.

“I wouldn’t say being good at competitive programming necessarily makes you the best software engineer, but it speaks volumes to someone’s problem-solving ability,” Baldin said. “We want to prove we are good problem solvers."

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu


Azure Debuts AMD EPYC Instances for Storage Optimized Workloads

HPC Wire - Tue, 12/05/2017 - 11:09

AMD’s return to the data center received a boost today when Microsoft Azure announced the introduction of instances based on AMD’s EPYC microprocessors. The new instances – the Lv2-Series of virtual machines – use the EPYC 7551 processor. Adoption of EPYC by a major cloud provider adds weight to AMD’s argument that it has returned to the data center with a long-term commitment and product roadmap. AMD had been absent from that segment for a number of years.

Writing in a blog post, Corey Sanders, Azure’s director of compute, said, “We’ve worked closely with AMD to develop the next generation of storage optimized VMs called Lv2-Series, powered by AMD’s EPYC processors. The Lv2-Series is designed to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O.” The EPYC line was launched last June (see the HPCwire article, AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor).

The instances make use of Microsoft’s Project Olympus, a next-generation open source cloud hardware design developed with the Open Compute Project (OCP) community. “We think Project Olympus will be the basis for future innovation between Microsoft and AMD, and we look forward to adding more instance types in the future benefiting from the core density, memory bandwidth and I/O capabilities of AMD EPYC processors,” said Sanders, quoted in AMD’s announcement of the new instances.

It is an important win for AMD. Gaining a foothold in the x86 landscape today probably requires adoption by hyperscalers. No doubt some “tire kicking” is going on here, but use of an Olympus design adds incentive for Microsoft Azure to court customers for the instances. HPE has also announced servers using the EPYC line.

AMD EPYC chip lineup at the June launch

The Lv2-Series instances run on the AMD EPYC 7551 processor featuring a base core frequency of 2.2 GHz and a maximum single-core turbo frequency of 3.0 GHz. “With support for 128 lanes of PCIe connections per processor, AMD provides over 33 percent more connectivity than available two-socket solutions to address an unprecedented number of NVMe drives directly,” says AMD.
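AMD’s “over 33 percent more connectivity” figure is easy to sanity-check, assuming the comparison is against a two-socket x86 platform exposing 96 PCIe lanes in total (a baseline not stated in the release, but typical of contemporary two-socket Xeon systems):

```python
# Rough check of AMD's connectivity claim: 128 PCIe lanes per EPYC
# processor vs. an assumed 96-lane two-socket competitive platform.
epyc_lanes = 128
assumed_two_socket_lanes = 96  # assumption, not from the release

advantage_pct = 100 * (epyc_lanes / assumed_two_socket_lanes - 1)
print(round(advantage_pct, 1))  # 33.3 -- consistent with "over 33 percent"
```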

The Lv2 VMs will be available in sizes ranging from eight to 64 vCPUs, with the largest size featuring direct access to 4 TB of memory. These sizes will support Azure premium storage disks by default and will also support accelerated networking capabilities for the highest throughput of any cloud.

Scott Aylor, AMD corporate vice president and general manager of Enterprise Solutions said, “There is tremendous opportunity for users to tap into the capabilities we can deliver across storage and other workloads through the combination of AMD EPYC processors on Azure. We look forward to the continued close collaboration with Microsoft Azure on future instances throughout 2018.”

Link to AMD release: http://www.amd.com/en-us/press-releases/Pages/microsoft-azure-becomes-2017dec05.aspx

Link to Azure blog: https://azure.microsoft.com/en-us/blog/announcing-the-lv2-series-vms-powered-by-the-amd-epyc-processor/

The post Azure Debuts AMD EPYC Instances for Storage Optimized Workloads appeared first on HPCwire.
