HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

DoD Adds 14 Petaflops of Computing Power with Seven New Systems

Tue, 01/23/2018 - 10:22

Jan. 23 — New HPC systems at the Air Force Research Laboratory and Navy DoD Supercomputing Research Centers will provide an additional 14 petaflops of computational capability.

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) completed its fiscal year 2017 investment in supercomputing capability supporting the DoD Science and Technology (S&T), Test and Evaluation (T&E), and Acquisition Engineering communities. The acquisition consists of seven supercomputing systems with corresponding hardware and software maintenance services. At 14 petaflops, this procurement will increase the DoD HPCMP’s aggregate supercomputing capability to 47 petaflops. These systems significantly enhance the Program’s capability to support the Department of Defense’s most demanding computational challenges.

The new supercomputers will be installed at the Air Force Research Laboratory (AFRL) and Navy DoD Supercomputing Resource Centers (DSRCs), and will serve users from all of the services and agencies of the Department.

The AFRL DSRC in Dayton, Ohio, will receive four HPE SGI 8600 systems containing Intel Xeon Platinum 8168 (Skylake) processors. The architectures of the four systems are as follows:

– A single system with 56,448 Intel Platinum Skylake compute cores, 24 NVIDIA Tesla P100 General-Purpose Graphics Processing Units (GPGPUs), 244 terabytes of memory, and 9.2 petabytes of usable storage.

– A single system of 13,824 Intel Platinum Skylake compute cores, 58 terabytes of memory, and 1.6 petabytes of usable storage.

– Two systems, each consisting of 6,912 Intel Platinum Skylake compute cores, 30 terabytes of memory, and 1.0 petabytes of usable storage.

The Navy DSRC at Stennis Space Center, Mississippi, will receive three HPE SGI 8600 systems containing Intel Xeon Platinum 8168 (Skylake) processors. The architectures of the three systems are as follows:

– Two systems, each consisting of 35,328 Intel Platinum Skylake compute cores, 16 NVIDIA Tesla P100 GPGPUs, 154 terabytes of memory, and 5.6 petabytes of usable storage.

– A single system consisting of 7,104 Intel Platinum Skylake compute cores, four NVIDIA Tesla P100 GPGPUs, 32 terabytes of memory, and 1.0 petabytes of usable storage.

The systems are expected to enter production service in the second half of calendar year 2018.

About the DOD High Performance Computing Modernization Program (HPCMP)

The HPCMP provides the Department of Defense supercomputing capabilities, high-speed network communications and computational science expertise that enable DOD scientists and engineers to conduct a wide range of focused research and development, test and evaluation, and acquisition engineering activities. This partnership puts advanced technology in the hands of U.S. forces more quickly, less expensively, and with greater certainty of success. Today, the HPCMP provides a comprehensive advanced computing environment for the DOD that includes unique expertise in software development and system design, powerful high performance computing systems, and a premier wide-area research network. The HPCMP is managed on behalf of the Department of Defense by the U.S. Army Engineer Research and Development Center located in Vicksburg, Mississippi. For more information, visit our website at: https://www.hpc.mil.

Source: DOD High Performance Computing Modernization Program


U of I Researcher Recognized with ACM Fellowship for Contributions to Parallel Programming

Tue, 01/23/2018 - 08:22

Jan. 23, 2018 — Laxmikant “Sanjay” Kale, a professor of Computer Science at the University of Illinois at Urbana-Champaign and an NCSA Faculty Affiliate, was named to the 2017 class of Fellows of the Association for Computing Machinery (ACM), the world’s largest educational and scientific computing society.

At Illinois, Kale has pioneered the integration of adaptive runtime systems into parallel programming, leading to collaborations and the development of scalable applications across disciplines, from biophysics to quantum chemistry and astronomy. Kale also leads the Parallel Programming Laboratory at the University of Illinois.

The ACM Fellows Program, the organization’s most prestigious honor, recognizes the top 1 percent of ACM members for outstanding accomplishments in computing and information technology. Kale’s fellowship honors the adaptive runtime systems for parallel computing pioneered by his group and implemented in the Charm++ parallel programming framework, which is maintained at the University of Illinois at Urbana-Champaign.

“We co-developed several science and engineering applications using Charm++, which allowed us to validate and improve the Adaptive Runtime techniques we were developing in our research in the context of full applications,” said Kale. “The application codes developed include NAMD (biophysics), OpenAtom (quantum chemistry/materials modeling), ChaNGa (astronomy), EpiSimdemics (simulation of epidemics), etc. These are highly scalable codes that run from small clusters to supercomputers, including Blue Waters, on hundreds of thousands of processor cores.”

This adaptive runtime system allows code to run much more efficiently, keeping an ever-vigilant digital eye on individual processors and how they are processing data, reducing idle time and ultimately shortening time to solution.

“Our approach allows parallel programmers to write code without worrying about where (i.e. on which processor) the code will execute, or which data will be on what processor,” explained Kale. “The runtime system continuously watches the program behavior and moves data and code-execution among processors so as to automatically improve performance, for example, via dynamic load balancing. This approach especially helps development of complex or sophisticated parallel algorithms.”
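To make this concrete, here is a minimal, hypothetical sketch (written in Python for brevity; it is not Charm++ code, which is C++, and all names here are invented) of the measurement-based greedy load balancing an adaptive runtime performs: the program is over-decomposed into many migratable work units, their recent execution costs are measured, and the units are periodically remapped so that no processor sits idle while another is overloaded.

```python
# Hypothetical illustration of measurement-based greedy load balancing,
# the kind of remapping an adaptive runtime applies automatically.
# This is NOT Charm++ code; names and structure are invented for clarity.
import heapq
from typing import Dict, List, Tuple

def rebalance(measured_cost: Dict[int, float], num_procs: int) -> Dict[int, int]:
    """Map over-decomposed work units to processors.

    measured_cost: unit id -> execution time observed in the last interval.
    Returns: unit id -> processor id, assigned greedily (largest unit first
    onto the currently least-loaded processor).
    """
    # Min-heap of (current load, processor id).
    heap: List[Tuple[float, int]] = [(0.0, p) for p in range(num_procs)]
    heapq.heapify(heap)
    assignment: Dict[int, int] = {}
    # Place the most expensive units first; this greedy rule keeps the
    # maximum processor load close to the ideal average.
    for unit, cost in sorted(measured_cost.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(heap)
        assignment[unit] = proc
        heapq.heappush(heap, (load + cost, proc))
    return assignment

if __name__ == "__main__":
    # 12 work units with uneven, recently measured costs, mapped to 4 processors.
    costs = {i: c for i, c in enumerate([9, 1, 4, 7, 2, 2, 8, 3, 1, 5, 6, 2])}
    print(rebalance(costs, num_procs=4))
```

In an adaptive runtime such as Charm++, this kind of remapping is handled inside the runtime itself, transparently to the application code.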

Kale’s ongoing work will thus support the continued expansion of parallel algorithms for high performance computing, and with it their broader adoption over time.

“The credit for my success and for this award certainly goes to generations of my students who worked on various aspects of adaptive runtime systems,” Kale concluded.

More information on the ACM Fellows Program is available on the ACM website.

About NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

About NCSA’s Faculty Affiliate Program

NCSA’s Faculty Affiliate Program provides an opportunity for faculty and researchers at the University of Illinois at Urbana-Champaign to catalyze and develop long-term research collaborations between Illinois departments, research units, and NCSA.

Source: NCSA


ClusterVision Announces Completion of GPU Cluster System for ASTRON

Mon, 01/22/2018 - 08:29

Jan. 22, 2018 — ClusterVision, Europe’s dedicated specialist in high performance computing solutions, has announced the successful completion of a new high performance computing GPU cluster system for the Netherlands Institute for Radio Astronomy (ASTRON). The 2 PFLOPS installation, codenamed ARTS, will be used to assist the institute’s Westerbork Synthesis Radio Telescope with analysing and deciphering large pulsar flashes.

As part of the APERTIF project, ASTRON has installed new high-speed cameras on the telescopes to better capture the pulsar flashes. To process the large amounts of data these cameras will capture, the institute was looking for a state-of-the-art HPC solution that would satisfy all of its needs. ClusterVision designed a GPU-based cluster able to process all of this data. By employing a large number of GPU nodes, data from the telescopes can be processed much faster and far more precisely.

Furthermore, by utilising the deep learning capabilities of a GPU cluster, the telescopes will be able to detect pulsar flashes with much greater accuracy through self-learning. In the past, ASTRON scientists had to manually detect and input pulsar patterns. With deep learning, ARTS does it for them.
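As a rough illustration of what such a learned detector can look like (a generic sketch in PyTorch, not ASTRON's ARTS pipeline; the network shape is invented), a small one-dimensional convolutional network can be trained to label a dedispersed intensity time series as "pulse" or "noise":

```python
# Illustrative sketch of a deep-learning pulse detector: a tiny 1-D CNN that
# labels a dedispersed intensity time series as "pulse" or "noise".
# Generic example only, not ASTRON's ARTS pipeline.
import torch
import torch.nn as nn

class PulseClassifier(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4),   # local pulse shapes
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                            # length-independent pooling
            nn.Flatten(),
            nn.Linear(channels, 2),                             # logits: [noise, pulse]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 1, time_samples)
        return self.net(x)

if __name__ == "__main__":
    model = PulseClassifier()
    fake_batch = torch.randn(8, 1, 1024)        # 8 synthetic time series
    logits = model(fake_batch)
    print(logits.shape)                         # torch.Size([8, 2])
```

A production system would train on far larger labelled datasets and add many refinements, but the basic pattern of replacing hand-tuned detection rules with a trained classifier is the same.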

As one of the country’s most advanced HPC installations and the most powerful GPU-based supercomputer in the Netherlands, ARTS has attracted widespread attention from the media. Public television news programme EenVandaag will feature the ARTS cluster and its use cases in its Saturday episode. See how HPC and ClusterVision have enabled ASTRON to advance its scientific discovery on 20 January 2018 at 18:15 (CET) on NPO1.

About ASTRON

ASTRON, part of the Netherlands Organisation for Scientific Research (NWO), is the Netherlands Institute for Radio Astronomy. ASTRON’s goal is to make discoveries in radio astronomy happen. ASTRON provides front-line observing capabilities for its own astronomers in-house, and for the wider national/ international community. The institute expects this strategy to regularly result in astronomical discoveries that significantly influence our understanding of the content, the structure and the evolution of the Universe.

Source: ClusterVision


CesgaHack GPU Hackathon to Return in March

Fri, 01/19/2018 - 09:58

A CORUÑA, Spain, Jan. 19, 2018 — The Galicia Supercomputing Center (CESGA) and Appentra Solutions will host the second edition of their GPU Hackathon at CESGA in Santiago de Compostela from March 5 to 9, this time with an international scope and English as the event language.

CesgaHack18 aims to help scientists and developers accelerate the execution of their scientific simulation applications using the hardware, software and a team of expert mentors in optimization, parallelization and execution of simulation programs.

At the end of the event, participants will either have a new GPU-accelerated version or a roadmap to reach that goal. The winning team will receive a GeForce GTX 1080Ti, offered by NVIDIA.

Participants will have the chance to create success stories of their own. Last year, a University of Málaga team working on a model for tsunami predictions, in collaboration with Appentra and Fernanda Foertter (ORNL), published a research paper presented at SC17 in Denver, the most important event in the field of High Performance Computing (HPC).

The Hackathon will be divided into a presentation day for all those who want to attend, and 5 intensive days to optimize the scientific simulation application of each participating team.

The deadline to participate is February 11, and participation is free of charge.

CESGA and Appentra encourage scientific projects to participate and obtain the individualized training that will help them move their applications forward and save development time. More science and less coding!

 

About CESGA

Fundación Pública Galega Centro Tecnolóxico de Supercomputación de Galicia (CESGA) is the centre for computing, high performance communications systems, and advanced services of the Galician scientific community, its university academic system, and the Spanish National Research Council (CSIC).

About Appentra

Appentra is a technology-based spin-off company of the University of A Coruña, established in 2012. It provides top-quality software tools that enable extensive use of High Performance Computing (HPC) techniques in all application areas of engineering, science and industry. Appentra’s target clients are companies and organizations that run frequently updated compute-intensive applications in markets such as aerospace, automotive, civil engineering, biomedicine and chemistry.

Source: Appentra


Mellanox Announces Quarterly and Annual Results

Fri, 01/19/2018 - 08:54

SUNNYVALE, Calif. & YOKNEAM, Israel, Jan. 19, 2018 — Mellanox Technologies, Ltd. (NASDAQ: MLNX) has announced financial results for its fourth quarter and full year 2017, ended December 31, 2017.

“We are pleased to achieve record quarterly and full year revenues,” said Eyal Waldman, President and CEO of Mellanox Technologies. “2017 represented a year of investment and product transitions for Mellanox. Fourth quarter Ethernet revenues increased 11 percent sequentially, due to expanding customer adoption of our 25 gigabit per second and above Ethernet products across all geographies. We are encouraged by the acceleration of our 25 gigabit per second and above Ethernet switch business, which grew 41 percent sequentially, with broad based growth across OEM, hyperscale, tier-2, cloud, financial services and channel customers. During the fourth quarter, InfiniBand revenues grew 2 percent sequentially, driven by growth from our high-performance computing and artificial intelligence customers. For the full fiscal 2017, our revenues from the high performance computing market grew 13 percent year over year. Our 2017 results demonstrate the successful execution of our multi-year revenue diversification strategy, and our leadership position in 25 gigabit per second and above Ethernet adapters.”

Fourth Quarter 2017 – Highlights

  • Revenues were $237.6 million in the fourth quarter, and $863.9 million in fiscal year 2017.
  • GAAP gross margins were 64.1 percent in the fourth quarter, and 65.2 percent in fiscal year 2017.
  • Non-GAAP gross margins were 68.8 percent in the fourth quarter, and 70.4 percent in fiscal year 2017.
  • GAAP operating loss was $(6.7) million, or (2.8) percent of revenue, in the fourth quarter, and was $(17.1) million, or (2.0) percent of revenue, in fiscal year 2017.
  • Non-GAAP operating income was $38.0 million, or 16.0 percent of revenue, in the fourth quarter, and $118.7 million, or 13.7 percent of revenue, in fiscal year 2017.
  • GAAP net loss was $(2.6) million in the fourth quarter, and was $(19.4) million in fiscal year 2017.
  • Non-GAAP net income was $42.9 million in the fourth quarter, and $116.6 million in fiscal year 2017.
  • GAAP net loss per diluted share was $(0.05) in the fourth quarter, and $(0.39) in fiscal year 2017.
  • Non-GAAP net income per diluted share was $0.82 in the fourth quarter, and $2.28 in fiscal year 2017.
  • $66.9 million in cash was provided by operating activities during the fourth quarter.
  • $161.3 million in cash was provided by operating activities during fiscal year 2017.
  • Cash and investments totaled $273.8 million at December 31, 2017.

Mr. Waldman continued, “As we enter 2018, we expect to build on our momentum in Ethernet and InfiniBand. With the recent release of our BlueField system-on-chip, and the future introduction of our 200 gigabit per second InfiniBand and Ethernet products, Mellanox is well positioned to begin reaping the benefits from prior investments. Looking ahead, we anticipate seeing acceleration of revenue growth, while delivering on our commitment to more efficiently manage costs and achieve fiscal 2018 non-GAAP operating margins of 18 to 19 percent. We continue to drive improvements in profitability and identify further efficiencies that can be realized as our prior investments begin to yield positive results and we transition towards new product introductions in 2018 and beyond.”

First Quarter 2018 Outlook

We currently project:

  • Quarterly revenues of $222 million to $232 million
  • Non-GAAP gross margins of 68.5 percent to 69.5 percent
  • Non-GAAP operating expenses of $120 million to $122 million
  • Share-based compensation expense of $16.3 million to $16.8 million
  • Non-GAAP diluted share count of 52.4 million to 52.9 million

Full Year 2018 Outlook

We currently project:

  • Revenues of $970 million to $990 million
  • Non-GAAP gross margins of 68.0 percent to 69.0 percent
  • Non-GAAP operating margin of 18.0 percent to 19.0 percent
  • Non-GAAP operating margin of more than 20.0 percent exiting 2018

Recent Mellanox Press Release Highlights

  • January 16, 2018 – Mellanox ConnectX®-5 Ethernet Adapter Wins Linley Group Analyst Choice Award for Best Networking Chip
  • January 9, 2018 – Mellanox Discontinuing 1550nm Silicon Photonics Development Activities
  • January 4, 2018 – Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers
  • December 18, 2017 – Meituan.com Selects Mellanox Interconnect Solutions to Accelerate its Artificial Intelligence, Big Data and Cloud Data Centers
  • December 12, 2017 – Mellanox Interconnect Solutions Accelerate Tencent Cloud High-Performance Computing and Artificial Intelligence Infrastructure
  • December 4, 2017 – Mellanox and NEC Partner to Deliver Innovative High-Performance and Artificial Intelligence Platforms
  • November 14, 2017 – Mellanox Propels NetApp to New Heights with 100Gb/s InfiniBand Connectivity
  • November 13, 2017 – Deployment Collaboration with Lenovo will Power Canada’s Largest Supercomputer Centre with Leading Performance, Scalability for High Performance Computing Applications
  • November 13, 2017 – Mellanox InfiniBand Solutions to Accelerate the World’s Next Fastest Supercomputers
  • November 13, 2017 – Mellanox InfiniBand to Accelerate Japan’s Fastest Supercomputer for Artificial Intelligence Applications
  • November 13, 2017 – InfiniBand Accelerates 77 Percent of New High-Performance Computing Systems on TOP500 Supercomputer List

Conference Call

Mellanox will hold its fourth quarter and fiscal year 2017 financial results conference call today, at 2 p.m. Pacific Time, to discuss the company’s financial results. To listen to the call, dial 1-800-459-5343, or for investors outside the U.S., +1-203-518-9553, approximately 10 minutes prior to the start time.

The Mellanox financial results conference call will be available via live webcast on the investor relations section of the Mellanox website at: http://ir.mellanox.com. Access the webcast 15 minutes prior to the start of the call to download and install any necessary audio software. A replay of the webcast will also be available on the Mellanox website.

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox


Tabor Communications and nGage Events Announce ‘Advanced Scale Forum 2018’

Fri, 01/19/2018 - 06:00

SAN DIEGO, Calif., Jan. 19, 2018 — Tabor Communications and nGage Events today announced the renaming of their 2018 summit. The summit, formerly known as Leverage Big Data + EnterpriseHPC, will now be known as Advanced Scale Forum and is scheduled for May 6-8, 2018, at the Hyatt Regency Lost Pines Resort & Spa in Austin, TX.

As with previous events, the summit will focus on addressing the challenges that CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions face as they work to build systems and applications that require increasing amounts of performance and throughput. Focus areas will include artificial intelligence/deep learning and machine learning, IoT (edge to core), cloud, HPC, blockchain, big data, and hybrid IT.

By uniting the Leverage Big Data and EnterpriseHPC conferences in 2017, Tabor Communications and nGage Events expanded the opportunity for conference attendees to collaborate with industry experts across disciplines to maximize their performance as well as their ROI. Now, with a new name, the conference better reflects the scope of what it will offer attendees at its 2018 event.

This power forum serves as a platform for enterprise executives to share, discover, and explore the ways in which High Performance Computing (HPC) technologies can transform their businesses. Through 1:1 meetings, boardroom sessions, networking and entertainment, the summit unites industry leaders, solution providers, and end users to promote collaboration in developing industry themes and cultivating future success.

“Streaming analytics and high-performance computing loom large in the future of enterprises which are realizing the scaling limitations of their legacy environments,” said Tom Tabor, CEO of Tabor Communications. “As organizations develop analytic models that require increasing levels of compute, throughput and storage, there is a growing need to understand how businesses can leverage high performance computing architectures that can meet the increasing demands being put on their infrastructure.”

“In renaming and reshaping our event,” Tabor continues, “we look to support the leaders who are trying to navigate their scaling challenges, and connect them with others who are finding new and novel ways to succeed.”

The Advanced Scale Forum 2018 summit brings together industry leaders who are tackling these streaming and high-performance challenges and are responsible for driving the new vision forward. Attendees of this invitation-only event will interact with leaders across industries who face similar technical challenges, with the aim of building dialogue and sharing solutions and approaches to delivering both systems and software performance in this emerging era of computing.

The summit will be co-chaired by EnterpriseTech Managing Editor, Doug Black, and Datanami Managing Editor, Alex Woodie.

ATTENDING THE SUMMIT

This is an invitation-only hosted summit whose costs are fully covered for qualified attendees, including flight, hotel, meals and summit badge. Target attendees include CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions. To apply for an invitation to this exclusive event, please fill out the qualification form at the following link: Hosted Attendee Interest Form

SUMMIT SPONSORS

Current sponsors for the summit include Arcadia Data, Attunity, Cray, HDF Group, Intel, Kinetica, Lawrence Livermore National Lab, MemSQL, NetApp, Penguin Computing, Quantum, Striim, System Fabric Works, Talking Data, & WekaIO, with more to be announced. For sponsorship opportunities, please contact us at summit@advancedscaleforum.com.

The summit is hosted by Datanami, EnterpriseTech and HPCwire through a partnership between Tabor Communications and nGage Events, the leader in host-based, invitation-only business events.


UCSD, AIST Forge Tighter Alliance with AI-Focused MOU

Thu, 01/18/2018 - 17:00

The rich history of collaboration between UC San Diego and AIST in Japan is getting richer. The organizations entered into a five-year memorandum of understanding on January 10. The MOU represents the continuation of a 15-year relationship between UCSD and Japan’s National Institute of Advanced Industrial Science and Technology (AIST) that goes back to 2002 with the establishment of the Pacific Rim Application and Grid Middleware Assembly (PRAGMA).

With the upcoming spring launch of AIST’s AI Bridging Cloud Infrastructure (ABCI), the deployment of the GPU-powered CHASE-CI machine learning infrastructure (see our coverage here), COMET’s GPU expansion, and the announcement of UCSD’s Data Science Institute, it’s easy to understand the enthusiasm for the opportunities afforded by the MOU, which builds on a shared history and mutual interests and activities around cutting-edge developments in supercomputing AI and deep learning.

The MOU covers research, education, and application of scientific knowledge in AI and, more broadly, data-intensive science and robotics. Target activities include the organization of workshops between the U.S. and Japan; exchange of faculty, scholars and researchers between the two campuses; collaborative infrastructure projects between UC’s Pacific Research Platform (PRP) and AIST’s AI Bridging Cloud Infrastructure (ABCI); and the use of ABCI for collaborative research projects.

UCSD’s soon-to-be-launched Data Science Institute will also play a role. The institute, made possible thanks to a $75 million endowment from Taner Halicioglu (the largest ever by a UC San Diego alumnus), will be physically colocated with the San Diego Supercomputer Center (SDSC).

Phil Papadopoulos, chief technology officer of SDSC, and Satoshi Sekiguchi, vice president of AIST, at the UCSD campus signing ceremony on Jan. 10, 2018

Leaders from both groups took part in the signing ceremony and shared remarks. In addition to the two respective project leads, Satoshi Sekiguchi, vice president of AIST, and Phil Papadopoulos, chief technology officer of SDSC, we heard from Michael Norman, director, SDSC; Larry Smarr, director, Calit2; Jeff Elman, UCSD Distinguished Professor of Cognitive Science and one of the co-directors of the new Data Science Institute; and Jason Haga of AIST, speaking on behalf of AIST President Ryoji Chubachi.

Papadopoulos recounted how the groups had developed close ties under PRAGMA and are well-aligned due to their mutual interest in deep learning, GPU-based computation, big data, and very high speed networks. “With all these things happening every day at UCSD at the Data Sciences Institute, the Pacific Research Platform (PRP) out of Calit2, the GPU expansions on COMET, CHASE-CI which is a distributed deep learning platform that is just being built on top of the PRP, it made sense that this really should be a UCSD-wide agreement. AIST is really a terrific organization and is collaborative by nature in the global sense of the word.”

Satoshi Sekiguchi, vice president of AIST, shared similar sentiments and an appreciation of the extended research family. “UCSD’s strength in application and infrastructure areas aligns with AIST’s primary research interest of IT platforms and AI accelerations. These activities also align very well with the Pacific Research Platform that Larry Smarr and Tom DeFanti have been leading.”

At AIST and at the Department of Information Technology and Human Factors, where Sekiguchi serves as director general, one of the key messages on artificial intelligence research is embedding AI in the real world. “AI should be deployed in the physical space to help solve the real problems in life such as in the manufacturing industries, health care and so on and we wish to contribute to the private sectors to help them realize development of AI technologies,” said Sekiguchi. To this end, AIST has established partnerships with several well-known companies, including NEC, Panasonic, and Toyota Industries.

Sekiguchi also expressed his appreciation for the hard work that made it possible for this MOU to come together in only three months. “The short MOU negotiations happened because of our years of friendly relationships. For example, when the Calit2 building opened, they kindly offered us an office to accommodate the AIST research staff and to collaborate continuously together on the PRAGMA program and beyond that,” said Sekiguchi.

SDSC Director Michael Norman praised AIST as a world leader in developing HPC systems and applications in AI, deep learning for science and society. He referred to the ABCI system that is currently being developed with nearly 5,000 GPUs as “the mother of all GPU clusters.”

“This will be one of the most powerful systems for the areas of AI and deep learning. And so at a very practical level this MOU with UCSD will allow UCSD to have a front row seat to this bold experiment in the future of computing and we will be able to participate in it with a bidirectional visitor exchange program. Through this MOU we hope to broaden UCSD’s interactions with the scientists and engineers at AIST across the organization, building on our long-standing relationship in computing,” said Norman.

Programmatic synergies between the two groups are numerous and include energy and the environment, materials and chemistry, life sciences and biotechnology, information technology and electronics and manufacturing.

Larry Smarr, director of Calit2, emphasized the diverse nature of the joint MOU as well as the complementarity between the university and AIST. In 2002, when Calit2 had the largest information technology research grant from NSF in the country to build the OptIPuter, AIST was a formal international partner to that grant from the beginning. This resulted in a long history of high speed optical networking between the institutions. Smarr stated that one of the goals of the MOU will be to set up a 10-100 gigabit per second link directly into AIST from UCSD to accommodate the next phase of artificial intelligence and deep learning on massive amounts of data.

Smarr is co-PI on CHASE-CI (the Cognitive Hardware and Software Ecosystem Cyberinfrastructure), the NSF-funded GPU cloud being built on top of the Pacific Research Platform. “This framework allows for investigators here with the variety of big data including cognitive science to make use of what is essentially the broadest set of architectures to support machine learning anywhere in the world,” said Smarr.

Jeff Elman, one of the co-directors of the new Data Science Institute along with Rajesh Gupta, spoke of the possibilities afforded by the MOU in relation to the new institute and the shared focus on being a force for good in the world. He also emphasized the cross-disciplinary nature of the collaboration.

“The institute has both a research mission in terms of stimulating and supporting research, innovation, but also an educational mission, in terms of training students, post-docs and also interacting with training opportunities from partners and here’s where I see really exciting opportunities with AIST,” said Elman.

“We are entering and in fact have entered an era where the kinds of data that we now have available surpass, I think, the scope of our imaginations to grapple with both in terms of scope, the range of things we can now quantify and measure and the magnitude, the scale, from the nano to the peta, and now there’s an exa and a zetta,” Elman continued. “These data have tremendous potential on the one hand to help us understand phenomena that are global in nature or micro or nano in nature, not only to understand but also to guide action because I think ultimately science and technology are about understanding the world so that one can change it to intervene when there are harmful things but also to benefit and make improvements. Reading AIST’s mission statement clearly the focus on technology for the social good is something that you value and it is clearly a very important part of the ethos of this campus and of the new institute.”

The final set of remarks were delivered by Jason Haga, senior research scientist in the Information Technology Research Institute of AIST, on behalf of AIST President Dr. Ryoji Chubachi. “[As part of this MOU] we will create joint projects between AIST and UCSD using our new ABCI infrastructure to help establish the largest collaboration platform based on AI. Both institutions will aim to build a cyberinfrastructure that enables mutual access to big data accumulated both in the U.S. and Japan. Furthermore we will expand these activities to other institutions in the U.S. as well as Asia to create a larger global network. I would like to conclude by wishing that our collaboration will lead the way in U.S.-Japan innovation in the future.”

From left to right: Jeff Elman, co-director of UCSD Data Science Institute; Michael Norman, director, SDSC; Larry Smarr, director, Calit2; Satoshi Sekiguchi, vice president of AIST; Jason Haga, senior research scientist in the Information Technology Research Institute of AIST; and Phil Papadopoulos, chief technology officer of SDSC


IBM Reports 2017 Fourth-Quarter and Full-Year Results

Thu, 01/18/2018 - 17:00

ARMONK, NY, Jan. 18, 2018 — IBM (NYSE:IBM) today announced fourth-quarter and full-year 2017 earnings results.

“Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president and chief executive officer. “During 2017, we strengthened our position as the leading enterprise cloud provider and established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

“Over the past several years we have invested aggressively in technology and our people to reposition IBM,” said James Kavanaugh, IBM senior vice president and chief financial officer. “2018 will be all about reinforcing IBM’s leadership position in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

Strategic Imperatives Revenue

Fourth-quarter cloud revenues increased 30 percent to $5.5 billion (up 27 percent adjusting for currency). Cloud revenue over the last 12 months was $17.0 billion, including $9.3 billion delivered as-a-service and $7.8 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions. The annual exit run rate for as-a-service revenue increased to $10.3 billion from $8.6 billion in the fourth quarter of 2016. In the quarter, revenues from analytics increased 9 percent (up 6 percent adjusting for currency). Revenues from mobile increased 23 percent (up 21 percent adjusting for currency) and revenues from security increased 132 percent (up 127 percent adjusting for currency).

Full-Year 2018 Expectations

The company will discuss 2018 expectations during today’s quarterly earnings conference call.

Cash Flow and Balance Sheet

In the fourth quarter, the company generated net cash from operating activities of $5.7 billion, or $7.8 billion excluding Global Financing receivables. IBM’s free cash flow was $6.8 billion. IBM returned $1.4 billion in dividends and $0.7 billion of gross share repurchases to shareholders. At the end of December 2017, IBM had $3.8 billion remaining in the current share repurchase authorization.

The company generated full-year free cash flow of $13.0 billion, excluding Global Financing receivables. The company returned $9.8 billion to shareholders through $5.5 billion in dividends and $4.3 billion of gross share repurchases.

IBM ended the fourth quarter of 2017 with $12.6 billion of cash on hand. Debt totaled $46.8 billion, including Global Financing debt of $31.4 billion. The balance sheet remains strong and is well positioned over the long term.

Segment Results for Fourth Quarter

  • Cognitive Solutions (includes solutions software and transaction processing software) — revenues of $5.4 billion, up 3 percent (flat adjusting for currency), driven by security and transaction processing software.
  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.2 billion, up 1 percent (down 2 percent adjusting for currency). Strategic imperatives revenue grew 9 percent led by the cloud practice, mobile and analytics.
  • Technology Services & Cloud Platforms (includes infrastructure services, technical support services and integration software) — revenues of $9.2 billion, down 1 percent (down 4 percent adjusting for currency). Strategic imperatives revenue grew 15 percent, driven by hybrid cloud services, security and mobile.
  • Systems (includes systems hardware and operating systems software) — revenues of $3.3 billion, up 32 percent (up 28 percent adjusting for currency) driven by growth in IBM Z, Power Systems and storage.
  • Global Financing (includes financing and used equipment sales) — revenues of $450 million, up 1 percent (down 2 percent adjusting for currency).

Tax Rate

The enactment of the Tax Cuts and Jobs Act in December 2017 resulted in a one-time charge of $5.5 billion in the fourth quarter. The charge encompasses several elements, including a tax on accumulated overseas profits and the revaluation of deferred tax assets and liabilities. As a result, IBM’s reported GAAP tax rate, which includes the one-time charge, was 124 percent for the fourth quarter, and 49 percent for the full year. IBM’s operating (non-GAAP) tax rate, which excludes the one-time charge, was 6 percent for the fourth quarter; and 7 percent for the full year, which includes the effect of discrete tax benefits in the first and second quarters. Without discrete tax items, the full-year operating (non-GAAP) tax rate was 12 percent, at the low end of the company’s previously estimated range.

Full-Year Results

  • Full-year GAAP EPS from continuing operations of $6.14 – includes a one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year operating (non-GAAP) EPS of $13.80 – excludes the one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year revenue of $79.1 billion, down 1 percent

Link to full announcement.

Source: IBM


PASC18 Announces Keynote Speaker, Extends Paper Deadline to Jan. 21

Thu, 01/18/2018 - 13:23

Jan. 18, 2018 — The PASC18 Organizing Team is pleased to announce a keynote presentation by Marina Becoulet from CEA, and that the deadline for paper submissions has been extended until January 21, 2018.

Marina Becoulet. Image courtesy of PASC18.

PASC18 keynote presentation: Challenges in the First Principles Modelling of Magneto Hydro Dynamic Instabilities and their Control in Magnetic Fusion Devices

The main goal of the International Thermonuclear Experimental Reactor (ITER) project is the demonstration of the feasibility of future clean energy sources based on nuclear fusion in magnetically confined plasma. In the era of ITER construction, fusion plasma theory and modelling provide not only a deep understanding of a specific phenomenon, but moreover, modelling-based design is critical for ensuring active plasma control.

The most computationally demanding aspect of the project is first principles fusion plasma modelling, which relies on fluid models – such as Magneto Hydro Dynamics (MHD) – or, increasingly often, on kinetic models. The challenge stems from the complexity of the 3D magnetic topology, the large difference in time scales from Alfvenic (10^-7 s) to confinement time (hundreds of seconds), the large difference in space scales from micro-instabilities (mm) to the machine size (a few meters), and most importantly, the strongly non-linear nature of plasma instabilities, which need to be avoided or controlled.
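For orientation, the fluid description referred to above couples the plasma density, velocity, pressure and magnetic field through the standard ideal-MHD system shown below (included here purely for reference; the extended-MHD codes used for tokamak studies add resistive, viscous and other non-ideal terms).

```latex
% Ideal MHD system (simplest fluid model referenced above); production
% fusion codes extend it with resistive, viscous and two-fluid terms.
\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla\!\cdot(\rho\mathbf{v}) &= 0
    && \text{(mass continuity)} \\
  \rho\left(\frac{\partial \mathbf{v}}{\partial t}
      + \mathbf{v}\!\cdot\!\nabla\mathbf{v}\right)
    &= -\nabla p + \mathbf{J}\times\mathbf{B}
    && \text{(momentum)} \\
  \frac{\partial \mathbf{B}}{\partial t} &= \nabla\times(\mathbf{v}\times\mathbf{B}),
    \qquad \mu_0\mathbf{J} = \nabla\times\mathbf{B},
    \qquad \nabla\!\cdot\mathbf{B} = 0
    && \text{(induction, Ampere's law, solenoidality)} \\
  \frac{d}{dt}\!\left(p\,\rho^{-\gamma}\right) &= 0
    && \text{(adiabatic closure)}
\end{align}
% The Alfvenic time scale quoted above is \tau_A \sim a / v_A,
% with v_A = B/\sqrt{\mu_0 \rho} and a the plasma minor radius.
```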

The current status of first principles non-linear modelling of MHD instabilities and active methods of their control in existing machines and ITER will be presented, focusing particularly on the strong synergy between experiment, fusion plasma theory, numerical modelling and computer science in guaranteeing the success of the ITER project.

About the presenter

Marina Becoulet is a Senior Research Physicist in the Institute of Research in Magnetic Fusion at the French Atomic Energy Commission (CEA/IRFM). She is also a Research Director and an International expert of CEA, specializing in theory and modelling of magnetic fusion plasmas, in particular non-linear MHD phenomena. After graduating from Moscow State University (Physics Department, Plasma Physics Division) in 1981, she obtained a PhD in Physics and Mathematics from the Institute of Applied Mathematics, Russian Academy of Science (1985). She worked at the Russian Academy of Science in Moscow, on the Joint European Torus in the UK, and since 1998 has been employed at CEA/IRFM, France.

Call for submissions reminder: deadlines are rapidly approaching!

The deadline for paper submissions has been extended to Sunday, January 21, 2018.

PASC18 upcoming submission deadlines:

  • Papers: January 21, 2018
  • Posters: February 4, 2018

Submit your contributions through the online submissions portal.

Full submission guidelines are available at: pasc18.pasc-conference.org/submission/submissions-portal/

PASC18 Scientific Committee: pasc18.pasc-conference.org/about/organization

Further information on the conference and submission possibilities is available at: pasc18.pasc-conference.org/

Source: PASC18


Supercomputer Simulations Enable 10-Minute Updates of Rain and Flood Predictions

Thu, 01/18/2018 - 11:36

Jan. 18, 2018 — Using the power of Japan’s K computer, scientists from the RIKEN Advanced Institute for Computational Science and collaborators have shown that incorporating satellite data at frequent intervals—ten minutes in the case of this study—into weather prediction models can significantly improve the rainfall predictions of the models and allow more precise predictions of the rapid development of a typhoon.

Weather prediction models attempt to predict future weather by running simulations based on current conditions taken from various sources of data. However, the inherently complex nature of the systems, coupled with the lack of precision and timeliness of the data, makes it difficult to conduct accurate predictions, especially with weather systems such as sudden precipitation.

As a means to improve models, scientists are using powerful supercomputers to run simulations based on more frequently updated and accurate data. The team led by Takemasa Miyoshi of AICS decided to work with data from Himawari-8, a geostationary satellite that began operating in 2015. Its instruments can scan the entire area it covers every ten minutes in both visible and infrared light, at a resolution of up to 500 meters, and the data is provided to meteorological agencies. Infrared measurements are useful for indirectly gauging rainfall, as they make it possible to see where clouds are located and at what altitude.

For one study, they looked at the behavior of Typhoon Soudelor (known in the Philippines as Hanna), a category 5 storm that wreaked damage in the Pacific region in late July and early August 2015. In a second study, they investigated the use of the improved data on predictions of heavy rainfall that occurred in the Kanto region of Japan in September 2015. These articles were published in Monthly Weather Review and Journal of Geophysical Research: Atmospheres.

For the study on Typhoon Soudelor, the researchers adopted a recently developed weather model called SCALE-LETKF—running an ensemble of 50 simulations—and incorporated infrared measurements from the satellite every ten minutes, comparing the performance of the model against the actual data from the 2015 tropical storm. They found that compared to models not using the assimilated data, the new simulation more accurately forecast the rapid development of the storm. They tried assimilating data at a slower speed, updating the model every 30 minutes rather than ten minutes, and the model did not perform as well, indicating that the frequency of the assimilation is an important element of the improvement.
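The assimilation step at the heart of this approach can be sketched in a few lines. The code below is a simplified, perturbed-observation ensemble Kalman filter in NumPy, intended only to show how an ensemble of forecasts is nudged toward new satellite observations each cycle; the actual SCALE-LETKF system uses the more elaborate local ensemble transform formulation, covariance localization, and model states that are orders of magnitude larger.

```python
# Simplified perturbed-observation ensemble Kalman filter update.
# Illustrative only: SCALE-LETKF uses the local ensemble transform variant,
# localization, and full atmospheric model states, not a toy vector.
import numpy as np

def enkf_update(X_f: np.ndarray, y: np.ndarray, H: np.ndarray,
                R: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One assimilation cycle.

    X_f : (n, N) forecast ensemble, N members of an n-dimensional state.
    y   : (m,)   observation vector (e.g., satellite infrared radiances).
    H   : (m, n) linear observation operator mapping state to observed space.
    R   : (m, m) observation-error covariance.
    Returns the analysis ensemble X_a with the same shape as X_f.
    """
    n, N = X_f.shape
    x_mean = X_f.mean(axis=1, keepdims=True)
    A = X_f - x_mean                      # state anomalies
    HA = H @ A                            # anomalies in observation space
    PHt = A @ HA.T / (N - 1)              # sample cross-covariance P H^T
    S = HA @ HA.T / (N - 1) + R           # innovation covariance H P H^T + R
    K = PHt @ np.linalg.inv(S)            # Kalman gain
    # Perturb the observations so the analysis spread stays consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X_f + K @ (Y - H @ X_f)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_f = rng.normal(size=(10, 50))       # 50-member ensemble, 10 state variables
    H = np.eye(3, 10)                     # observe the first 3 variables
    R = 0.1 * np.eye(3)
    y = rng.normal(size=3)
    print(enkf_update(X_f, y, H, R, rng).shape)   # (10, 50)
```

Running this update every ten minutes, rather than every thirty, is what the study found to be an important element of the improvement.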

To perform the research on disastrous precipitation, the group examined data from heavy rainfall that occurred in the Kanto region in 2015. Compared to models without data assimilation from the Himawari-8 satellite, the simulations more accurately predicted the heavy, concentrated rain that took place, and came closer to predicting the situation where an overflowing river led to severe flooding.

According to Miyoshi, “It is gratifying to see that supercomputers along with new satellite data, will allow us to create simulations that will be better at predicting sudden precipitation and other dangerous weather phenomena, which cause enormous damage and may become more frequent due to climate change. We plan to apply this new method to other weather events to make sure that the results are truly robust.”

Source: RIKEN Advanced Institute for Computational Science


ALCF Now Accepting Proposals for Data Science and Machine Learning Projects for Aurora ESP

Thu, 01/18/2018 - 11:25

Jan. 18, 2018 — The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, is now accepting proposals for data science and machine learning projects for its Aurora Early Science Program (ESP). The deadline to apply is April 8, 2018.

Now slated to be the nation’s first exascale system when it is delivered in 2021, Aurora will be capable of performing a quintillion calculations per second. The system is expected to have more than 50,000 nodes and more than 5 petabytes of total memory, including high-bandwidth memory.

The shift in plans for Aurora, which was initially scheduled to be a 180 petaflops supercomputer, is central to a new paradigm for scientific computing at the ALCF—the expansion from traditional simulation-based research to include data science and machine learning approaches. Aurora’s revolutionary architecture will provide advanced capabilities in the three pillars of simulation, data, and learning that will power a new era of scientific discovery and innovation.

The Aurora ESP, which kicked off with 10 projects in 2017, is expanding to align with this new paradigm for leadership computing. Currently underway, the 10 original projects will serve as simulation-based projects for the new system.

This call for proposals will bring in 10 additional projects in the areas of data science (e.g., data analytics, data-intensive computing, advanced statistical analyses) and machine learning (e.g., deep learning, neural networks). Proposals for crosscutting projects that involve simulation, data, and learning are also encouraged.

Aurora ESP projects will prepare key applications for the architecture and scale of the exascale supercomputer and solidify libraries and infrastructure to pave the way for other applications to run on the system.

The program provides an exciting opportunity to be among the first researchers in the world to run calculations and workflows on an exascale system. With substantial allocations of pre-production time on Aurora, ESP teams will be able to pursue scientific computing campaigns that are not possible on today’s leadership-class supercomputers.

For more information, visit the Aurora ESP webpage and the Proposal Instructions webpage.

For hands-on assistance in preparing for ESP proposals, the ALCF is hosting the Simulation, Data, and Learning Workshop from February 27–March 1, 2018. To register, visit the workshop webpage.

Source: ALCF


Groundbreaking Conference Examines How AI Transforms Our World

Thu, 01/18/2018 - 10:35

NEW YORK, Jan. 18, 2018 — ACM, the Association for Computing Machinery; AAAI, the Association for the Advancement of Artificial Intelligence; and SIGAI, the ACM Special Interest Group on Artificial Intelligence have joined forces to organize a new conference on Artificial Intelligence, Ethics and Society (AIES). The conference aims to launch a multi-disciplinary and multi-stakeholder effort to address the challenges of AI ethics within a societal context. Conference participants include experts in various disciplines such as computing, ethics, philosophy, economics, psychology, law and politics. The inaugural AIES conference is planned for February 1-3 in New Orleans.

“The public is both fascinated and mystified about how AI will shape our future,” explains AIES Co-chair Francesca Rossi, IBM Research and University of Padova. “But no one discipline can begin to answer these questions alone. We’ve brought together some of the world’s leading experts to imagine how AI will transform our future and how we can ensure that these technologies best serve humanity.”

Conference organizers encouraged the submission of research papers on a range of topics including building ethical AI systems, the impact of AI on the workforce, AI and the law, and the societal impact of AI. Out of 200 submissions, only 61 papers have been selected and will be presented during the conference.

The program of AIES 2018 also includes invited talks by leading scientists, panel discussions on AI ethics standards and the future AI, and the presentation of the leading professional and student research papers on AI. Co-chairs include Francesca Rossi, a computer scientist and former president of the International Joint Conference on Artificial Intelligence; Jason Furman, a Harvard economist and former Chairman of the Council of Economic Advisors (CEA); Huw Price, a philosopher and Academic Director of the Leverhulme Centre for Future of Intelligence; and Gary Marchant, Regent’s Professor of Law and Director of the Center for Law, Science and Innovation at Arizona State University.

AIES 2018 HIGHLIGHTS

INVITED TALKS

The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics

Iyad Rahwan and Edmond Awad, Massachusetts Institute of Technology

Rahwan and Awad describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game they developed enabled them to gather 40 million decisions from 3 million people in 200 countries and territories. They report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. They also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. Rahwan and Awad discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.
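Purely as an illustration of the kind of cross-cultural clustering described above (not the authors' actual method or data; the countries, features and values below are invented), country-level preference vectors can be grouped with an off-the-shelf clustering algorithm:

```python
# Illustrative clustering of country-level moral-preference vectors, in the
# spirit of the cross-cultural analysis described above (not the authors'
# method or data; all feature names and values here are invented).
import numpy as np
from sklearn.cluster import KMeans

# Each row: one country; columns: average strength of a preference, e.g.
# [spare the young, spare more lives, spare pedestrians over passengers].
countries = ["A", "B", "C", "D", "E", "F"]
preferences = np.array([
    [0.80, 0.90, 0.70],
    [0.78, 0.88, 0.72],
    [0.55, 0.85, 0.60],
    [0.52, 0.83, 0.58],
    [0.30, 0.75, 0.40],
    [0.28, 0.77, 0.42],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(preferences)
for country, label in zip(countries, kmeans.labels_):
    print(country, "-> cluster", label)
```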

AI, Civil Rights and Civil Liberties: Can Law Keep Pace with Technology?

Carol Rose, American Civil Liberties Union

At the dawn of this era of human-machine interaction, human beings have an opportunity to shape fundamentally the ways in which machine learning will expand or contract the human experience, both individually and collectively. As efforts to develop guiding ethical principles and legal constructs for human-machine interaction move forward, how do we address not only what we do with AI, but also the question of who gets to decide and how? Are guiding principles of ‘Liberty and Justice for All’ still relevant? Does a new era require new models of open leadership and collaboration around law, ethics, and AI?

AI Decisions, Risk, and Ethics: Beyond Value Alignment

Patrick Lin, California Polytechnic State University

When we think about the values AI should have in order to make right decisions and avoid wrong ones, there’s a large but hidden third category to consider: decisions that are not-wrong but also not-right. This is the grey space of judgment calls, and just having good values might not help as much as you’d think here. Autonomous cars are used as the case study. Lessons are offered for broader AI: ethical dilemmas can arise in everyday scenarios such as lane positioning and navigation, not just in dramatic crash scenarios. This is the space where one good value might conflict with another good value, and there’s no “right” answer or even broad consensus on an answer.

The Great AI/Robot Jobs Scare: reality of automation fear redux 

Richard Freeman, Harvard University

This talk will consider the impact of AI/robots on employment, wages and the future of work more broadly. We argue that we should focus on policies that make AI robotics technology broadly inclusive both in terms of consumption and ownership so that billions of people can benefit from higher productivity and get on the path to the coming age of intolerable abundance.

PANELS

What Will Artificial Intelligence Bring?

Brent Venable, Tulane University (Moderator); Paula Boddington, Oxford University; Wendell Wallach, Yale University; Jason Furman, Harvard University; and Peter Stone, UT Austin

World class researchers from different disciplines and best-selling authors will elaborate on the impact of AI on modern society and will answer questions. This panel is open to the public.

Prioritizing Ethical Considerations in Intelligent and Autonomous Systems: Who Sets the Standards? 

Takashi Egawa, NEC Corporation; Simson L. Garfinkel, USACM; John C. Havens, IEEE (moderator); Annette Reilly, IEEE; and Francesca Rossi, IBM and University of Padova

While dealing with intelligent and autonomous technologies, safety standards and standardization projects are providing detailed guidelines or requirements to help organizations institute new levels of transparency, accountability and traceability. The panelists will explore how we can build trust and maximize innovation while avoiding negative unintended consequences.

BEST PAPER AWARD (sponsored by the Partnership on AI)

Shared between the following two papers:

Transparency and Explanation in Deep Reinforcement Learning Neural Networks

Rahul Iyer, InSite Applications; Yuezhang Li, Google; Huao Li, University of Pittsburgh; Michael Lewis, Facebook; Ramitha Sundar, Carnegie Mellon; and Katia Sycara, Carnegie Mellon

For AI systems to be accepted and trusted, users should be able to understand the reasoning process of the system and to form coherent explanations of the system’s decisions and actions. This paper presents a novel and general method to provide a visualization of the internal states of deep reinforcement learning models, thus enabling the formation of explanations that are intelligible to humans.

An AI Race: Rhetoric and Risks 

Stephen Cave, Leverhulme Centre for the Future of Intelligence, Cambridge University; and Seán S. ÓhÉigeartaigh, Centre for the Study of Existential Risk, Cambridge University

The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of AI. This paper assesses the potential risks of the AI race narrative, explores the role of the research community in responding to these risks, and discusses alternative ways to develop AI in a collaborative and responsible way.

For a complete list of research papers and posters which will be presented at the AIES Conference, visit http://www.aies-conference.com/accepted-papers/. The proceedings of the conference will be published in the AAAI and ACM Digital Libraries.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post Groundbreaking Conference Examines How AI Transforms Our World appeared first on HPCwire.

New Blueprint for Converging HPC, Big Data

Thu, 01/18/2018 - 10:31

After five annual workshops on Big Data and Extreme-Scale Computing (BDEC), a group of international HPC heavyweights including Jack Dongarra (University of Tennessee), Satoshi Matsuoka (Tokyo Institute of Technology), William Gropp (National Center for Supercomputing Applications), and Thomas Schulthess (Swiss National Supercomputing Centre), among others, has issued a comprehensive Big Data and Extreme-Scale Computing Pathways to Convergence Report. Not surprisingly it’s a large work not easily plumbed in a single sitting.

Convergence – harmonizing computational infrastructures to accommodate HPC and big data – isn’t a new topic. Recently, big data’s close cousin, machine learning, has become part of the discussion. Moreover, the accompanying rise of cyberinfrastructure as a dominant force in science computing has complicated convergence efforts.

The central premise of this study is that a ‘data-driven’ upheaval is exacerbating divisions – technical, cultural, political, economic – in the cyberecosystem of science. The report tackles in some depth a narrower slice of the problem. Big data, say the authors, has caused or worsened two ‘paradigm splits’: 1) one between traditional HPC and high-end data analysis (HDA), and 2) another between ‘stateless networks and stateful services’ provided by end systems. The report lays out a roadmap for mending these fissures.

 

This snippet from the report’s executive summary does a nice job of summing up the challenge:

“Looking toward the future of cyberinfrastructure for science and engineering through the lens of these two bifurcations made it clear to the BDEC community that, in the era of Big Data, the most critical problems involve the logistics of wide-area, multistage workflows—the diverse patterns of when, where, and how data is to be produced, transformed, shared, and analyzed. Consequently, the challenges involved in codesigning software infrastructure for science have to be reframed to fully take account of the diversity of workflow patterns that different application communities want to create. For the HPC community, all the imposing design and development issues of creating an exascale-capable software stack remain; but the supercomputers that need this stack must now be viewed as the nodes (perhaps the most important nodes) in the very large network of computing resources required to process and explore rivers of data flooding in from multiple sources.”

There’s a lot to digest here, including a fair amount of technical guidance. Issued at the end of 2017, the report is the result of workshops held in the U.S. (2013), Japan (2014), Spain (2015), Germany (2016), and China (2017); it grew out of prior efforts of the International Exascale Software Project (IESP). Descriptions and results of the five workshops (agendas, white papers, presentations, attendee lists) are available at the BDEC site (http://www.exascale.org/bdec/).

Jack Dongarra

Commenting on the work, Dongarra said, “Computing is at a profound inflection point, economically and technically. The end of Dennard scaling and its implications for continuing semiconductor-design advances, the shift to mobile and cloud computing, the explosive growth of scientific, business, government, and consumer data and opportunities for data analytics and machine learning, and the continuing need for more-powerful computing systems to advance science and engineering are the context for the debate over the future of exascale computing and big data analysis.”

The broad hope is that the ideas presented in the report will guide community efforts. Dongarra emphasized, “High-end data analytics (big data) and high-end computing (exascale) are both essential elements of an integrated computing research-and-development agenda; neither should be sacrificed or minimized to advance the other.” The report also illustrates typical differences in the BDEC software ecosystem.

There’s too much in the report to cover adequately here, but its summary recommendations follow:

“Our major, global recommendation is to address the basic problem of the two paradigm splits: the HPC/HDA software ecosystem split and the wide area data logistics split. For this to be achieved, there is a need for new standards that will govern the interoperability between data and compute, based on a new, common and open Distributed Services Platform (DSP), that offers programmable access to shared processing, storage and communication resources, and that can serve as a universal foundation for the component interoperability that novel services and applications will require.

“We make five recommendations for decentralized edge and peripheral ecosystems:

  • Converge on a new hourglass architecture for a Common Distributed Service Platform (DSP).
  • Target workflow patterns for improved data logistics.
  • Design cloud stream processing capabilities for HPC.
  • Promote a scalable approach to Content Delivery/Distribution Networks.
  • Develop software libraries for common intermediate processing tasks.

“We make five actionable conclusions for centralized facilities:

  • Energy is an overarching challenge for sustainability.
  • Data reduction is a fundamental pattern.
  • Radically improved resource management is required.
  • Both centralized and decentralized systems share many common software challenges and opportunities:
      (a) Leverage HPC math libraries for HDA.
      (b) More efforts for numerical library standards.
      (c) New standards for shared memory parallel processing.
      (d) Interoperability between programming models and data formats.
  • Machine learning is becoming an important component of scientific workloads, and HPC architectures must be adapted to accommodate this evolution.”

Link to BDEC Report: http://www.exascale.org/bdec/

The post New Blueprint for Converging HPC, Big Data appeared first on HPCwire.

India’s Ministry of Earth Sciences Deploys New Cray XC40 Supercomputers and Cray Storage Systems

Thu, 01/18/2018 - 08:38

SEATTLE, Jan. 18, 2018 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) today announced that as part of a $67 million contract with Cray to update its supercomputing facilities and systems, the Ministry of Earth Sciences in India has deployed two Cray XC40 supercomputers and two Cray ClusterStor storage systems. The combined systems are the largest supercomputing resource in India, and extend Cray’s leadership position in the weather forecasting and climate research communities.

The Ministry of Earth Sciences (MoES) is dedicated to providing world-class weather, climate, ocean, and seismological services to the citizens of India, and has significantly upgraded its high-performance computing capabilities to better support its operational and research activities. The two Cray systems are located at two divisions of MoES – the Indian Institute of Tropical Meteorology (IITM) in Pune, India, and the National Center for Medium Range Weather Forecasting (NCMRWF) in Noida, India.

The Cray supercomputer at IITM will be used for conducting research on improving weather and climate forecasts, and the system – named “Pratyush” which means the Sun – will also be used by other MoES organizations for research activities to improve their respective weather and climate services. The NCMRWF will use its Cray supercomputer to run daily, operational weather forecasts. The combined supercomputing systems have a peak performance of more than six petaflops, and more than 18 petabytes of Cray ClusterStor storage capacity.

“Our new Cray supercomputing systems provide MoES’ scientists with the computational power needed for producing more accurate and reliable weather forecasts at much higher resolutions,” said Dr. Madhavan Nair Rajeevan, Secretary, Ministry of Earth Sciences, Government of India. “Our country needs better forecasts for weather and climate events such as monsoons, tsunamis, cyclones, and extreme heat waves and cold snaps, and so it is imperative that we augment our HPC facilities with highly-advanced supercomputing systems. The two new Cray systems are major steps forward for MoES, and allow us to stand tall in the international weather and climate communities.”

Cray continues to strengthen its leadership position in the weather forecasting and climate research space, as an increasing number of the world’s leading centers rely on Cray supercomputers and storage systems to run their complex meteorological models. More than three-quarters of the World Meteorological Organization’s Long Range Global Modelling Centers have selected Cray supercomputers for numerical weather prediction, and MoES is the latest organization to deploy Cray systems for numerical weather prediction and climate research.

“MoES has made a substantial enhancement to its high-performance computing infrastructure, and we are honored Cray was chosen to provide both the supercomputing and storage technologies necessary for improving their extensive range of important weather services for the people of India,” said Peter Ungaro, president and CEO of Cray. “The world’s preeminent global weather centers, like MoES, continue to rely on Cray supercomputers to power their weather forecasts. Our leadership position in earth sciences is representative of our proven ability to build production-ready supercomputing and storage systems across many data-intensive workloads such as weather forecasting, analytics, and artificial intelligence.”

The Cray XC series of supercomputers is designed to handle the most challenging workloads requiring sustained multi-petaflop performance. The Cray XC40 supercomputers incorporate the Aries high performance network interconnect for low latency and scalable global bandwidth, the HPC-optimized Cray Linux Environment, the Cray programming environment consisting of powerful tools for application developers, as well as the latest Intel processors and NVIDIA GPU accelerators. The Cray XC supercomputers deliver on Cray’s commitment to performance supercomputing with an architecture and tightly-integrated software environment that provides extreme scalability and sustained performance.

Consisting of products and services, the multi-year contract is valued at more than $67 million. The systems were accepted in late 2017.

For more information on the Cray XC supercomputers and Cray storage solutions, please visit the Cray website at www.cray.com.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.

The post India’s Ministry of Earth Sciences Deploys New Cray XC40 Supercomputers and Cray Storage Systems appeared first on HPCwire.

Researchers Measure Impact of ‘Meltdown’ and ‘Spectre’ Patches on HPC Workloads

Wed, 01/17/2018 - 22:40

Computer scientists from the Center for Computational Research, State University of New York (SUNY), University at Buffalo have examined the effect of Meltdown and Spectre security updates on the performance of popular HPC applications and benchmarks and are sharing their results in a paper, available on arXiv.org.

Their method was to use the application kernel module of the XD Metrics on Demand (XDMoD) tool to run tests before and after the installation of the vulnerability patches. They recorded the performance difference for the following applications and benchmarks: NWChem, NAMD, the HPC Challenge Benchmark suite (HPCC) [which includes the memory bandwidth micro-benchmark STREAM], the NAS Parallel Benchmarks (NPB), IOR, MDTest and interconnect/MPI benchmarks (IMB).

Most of the application kernels were executed on one or two nodes (8 and 16 cores respectively) of a development cluster at the Center for Computational Research. Each node has two Intel L5520 (Nehalem EP) CPUs, is connected by QDR Mellanox InfiniBand, and can access a 3 PB shared IBM GPFS storage system. The operating system is CentOS Linux release 7.4.1708.

The worst-case performance hit was as high as 54 percent for select functions (e.g., MPI random access, memory copying and file metadata operations), while real-world applications showed a 2-3 percent decrease in performance for single-node jobs and a 5-11 percent decrease for parallel two-node jobs. The authors indicate there may be a way to recoup some of this loss via compiler and MPI libraries.

Also notable, Fourier transformation (FFT), matrix multiplication and matrix transposition get slower by 6.4 percent, 2 percent and 10 percent respectively (on two nodes).
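
These percentage figures are straightforward to derive from paired before- and after-patch runtimes. As a minimal sketch (using hypothetical timings, not the SUNY team's XDMoD measurements):

```python
# Minimal sketch: percent slowdown from before/after-patch runtimes.
# The timing values below are hypothetical, not the SUNY/XDMoD data.

def percent_slowdown(before_s: float, after_s: float) -> float:
    """Positive result means the patched run is slower."""
    return (after_s - before_s) / before_s * 100.0

runtimes = {
    # benchmark: (seconds before patches, seconds after patches)
    "NAMD_2node": (1000.0, 1080.0),
    "HPCC_RandomAccess": (120.0, 185.0),
    "MDTest_metadata": (60.0, 90.0),
}

for name, (before, after) in runtimes.items():
    print(f"{name}: {percent_slowdown(before, after):+.1f}% change")
```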

The findings of the SUNY team align with those of Red Hat, which earlier this month released the results from benchmark tests it conducted specifically to measure the impact of the kernel patches. Red Hat found that CPU-intensive HPC workloads suffered only a 2-5 percent hit “because jobs run mostly in user space and are scheduled using CPU-pinning or NUMA control.” In comparison, database analytics were found to take a modest 3-7 percent hit and OLTP database workloads suffered the most (8-19 percent degradation).

The SUNY researchers have plans to conduct additional testing “with a larger number of nodes and for more application kernels” once the updates are applied to their production system.

The XD Metrics on Demand (XDMoD) tool employed for the testing was originally developed to provide independent audit capability for the XSEDE program. It was later open-sourced and is now used widely across research and commercial HPC sites. The tool includes an application kernel performance monitoring module that “allows automatic performance monitoring of HPC resources through the periodic execution of application kernels, which are based on benchmarks or real-world applications implemented with sensible input parameters.”

The paper was authored by Nikolay A. Simakov, Martins D. Innus, Matthew D. Jones, Joseph P. White, Steven M. Gallo, Robert L. DeLeon and Thomas R. Furlani. It is available on arxiv.org.

The post Researchers Measure Impact of ‘Meltdown’ and ‘Spectre’ Patches on HPC Workloads appeared first on HPCwire.

Fostering Lustre Advancement Through Development and Contributions

Wed, 01/17/2018 - 17:47

Six months after organizational changes at Intel’s High Performance Data Division (HPDD), most in the Lustre community have shed any initial apprehension about changes that could affect or disrupt Lustre development. Customers who have adopted the technology as their main parallel file system now have a clearer picture of what the future holds for the world’s most utilized parallel file system. Lustre remains strong and will continue to dominate the persistent parallel file system arena, at least for the foreseeable future.

Carlos Aoki Thomaz, Senior Product Manager at DDN

The new Lustre development and adoption strategy has turned out to be surprisingly simple, and clearer and more consistent than anticipated. As in the old Whamcloud days, Lustre development has returned to a single code stream, avoiding confusion over different distributions, features, capabilities, and source code differentiation. Quietly released in July 2017, Lustre 2.10 is the LTS (Long Term Support) release that should be the mainstream version into early 2019.

As a major contributor to the Lustre community, DataDirect Networks (DDN) announced in 2016 that all of its Lustre features would be merged over time into the Lustre master branch. This convergence gives the entire community transparent access to the code, reducing the overhead of code development management and better aligning with the new features released in Lustre 2.10.

A sophisticated set of features arrived in Lustre 2.10, including Progressive File Layouts (PFL), Project Quotas, LNET IB Multi-Rail and the NRS Delay policy. Progressive File Layouts allow system administrators and users to adjust file layouts and how a file is striped – the number of stripes and the stripe block size may now vary according to the file size. Several use cases can take great advantage of PFL while simplifying storage administration in the process. The storage administrator can define standard default layouts for different types of files, minimizing the need for users to manipulate file layouts themselves (although users are still able to define their own layouts). With the increasing utilization of flash technologies in hybrid parallel file systems (SSDs and NVMe devices mixed with standard rotational drives), it is now possible to create sophisticated mechanisms that optimize data placement using PFL and OST pools.
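
As an illustration of how such a layout is expressed, the sketch below builds an `lfs setstripe` command with PFL components. The extent boundaries, stripe counts and directory path are hypothetical examples, not recommendations from the article; consult the lfs documentation for your Lustre version before relying on them.

```python
# Illustrative sketch only: requests a progressive file layout (PFL) on a
# directory via `lfs setstripe`. Extent boundaries, stripe counts and the
# path are hypothetical; verify the options against `man lfs-setstripe`.
import subprocess

def set_pfl_layout(directory: str) -> None:
    cmd = [
        "lfs", "setstripe",
        "-E", "256M", "-c", "1",    # first 256 MB: single stripe (small files)
        "-E", "4G",   "-c", "4",    # up to 4 GB: four stripes
        "-E", "-1",   "-c", "-1",   # remainder: stripe across all available OSTs
        directory,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    set_pfl_layout("/mnt/lustre/project_data")
```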

Another feature, addressing perhaps the most long-standing need among current Lustre users, is Project Quotas. Project Quotas allow quota definition per “project,” which could be, for example, associated with a specific directory. Previously, Lustre only allowed standard POSIX user and group quotas. With Project Quotas we move one step ahead in managing space among users, groups and projects, and in planning for capacity and growth. Project Quotas add space accounting and enforcement of capacity utilization based on OSTs, sub-directories and file-sets, providing the granularity needed to manage several different use cases.

Some have asked about the performance impact of Project Quotas. Results of various tests have been impressive and encouraging, showing no degradation compared to standard POSIX quotas. Project Quotas are available for Lustre running with an LDISKFS backend.

Although the feature only landed in Lustre 2.10, DDN, as the developer responsible for it, has backported it into its Exascaler 3.2 (based on Lustre 2.7). Historically, the latest and greatest version of Lustre brings the most advanced technologies at a price: untested and unproven code that usually requires a few cycles to stabilize. Since Project Quotas are needed by a huge range of customers that are not yet ready to move to Lustre 2.10, Lustre 2.7 users can run Project Quotas with full support. For customers running Project Quotas on Lustre 2.7, data will be fully preserved once they decide to upgrade to Lustre 2.10 (note that users going from Lustre versions prior to 2.7 to Lustre 2.10 and activating Project Quotas will require a reformat of the file system).
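
For readers curious what project quota administration looks like in practice, the hedged sketch below tags a directory tree with a project ID and sets block limits for it. The project ID, limits and paths are hypothetical, and the exact lfs options should be verified against the man pages for your Lustre release.

```python
# Illustrative sketch only: assigns a project ID to a directory tree and sets
# block quotas for that project. Flags follow `lfs project` / `lfs setquota -p`
# usage as I understand it for Lustre 2.10+; IDs, limits and paths are
# hypothetical -- check `man lfs-project` and `man lfs-setquota` first.
import subprocess

PROJECT_ID = "1001"                   # hypothetical project identifier
PROJECT_DIR = "/mnt/lustre/team_alpha"
FILESYSTEM = "/mnt/lustre"

# Tag the directory (recursively, with inheritance) with the project ID.
subprocess.run(["lfs", "project", "-p", PROJECT_ID, "-s", "-r", PROJECT_DIR],
               check=True)

# Set soft/hard block limits for that project on the file system.
subprocess.run(["lfs", "setquota", "-p", PROJECT_ID,
                "-b", "10T", "-B", "11T", FILESYSTEM],
               check=True)

# Report current usage for the project.
subprocess.run(["lfs", "quota", "-p", PROJECT_ID, FILESYSTEM], check=True)
```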

LNET IB Multi-Rail allows users to take advantage of multiple InfiniBand adapters, aggregating bandwidth for Lustre LNET. This technique is widely used on Ethernet through bonding, but InfiniBand users were previously unable to “bond” interfaces and were effectively limited to the performance of a single IB card. There was a need for increased bandwidth, especially on the client side. New architectures, such as HPE UV, have multiple sockets and a huge amount of memory, capable of running multiple, much larger compute jobs. Those scenarios bring an unbalanced CPU/memory-to-I/O ratio, where even an EDR InfiniBand link running at 100 Gbps may become a bottleneck. IB Multi-Rail benefits Lustre on larger SMP-like nodes, aggregating network bandwidth and providing a balanced CPU/memory-to-I/O ratio. On the server side, the biggest advantage is in high availability: having more than one IB link provides redundancy, avoiding scenarios that trigger server failover due to an IB failure. Network failures and their recovery are now handled transparently, without compromising performance.

The NRS Delay policy, which simulates high server load as a way of validating the resilience of Lustre under load, is another feature introduced in Lustre 2.10. It provides a valid way to perform fault injection and load simulation, which is especially important during stabilization phases, performance characterization and general debugging.

Along with these recently announced features, a new approach has been proposed for Lustre’s policy engine (LiPE), designed to reduce installation and deployment complexity while delivering significantly faster results when executing and managing storage policies. LiPE relies on a set of components that allows the engine to:

  • Scan Lustre metadata targets (MDTs) quickly,
  • Create an in-memory map of the file system’s objects, and
  • Implement data management policies based on that mapped information.

This approach would allow users to define policies that trigger data automation via Lustre HSM hooks or external data management mechanisms (copy tools, for example).
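
Conceptually, a policy pass over such an in-memory map is simple; the sketch below is a generic illustration of the pattern (with hypothetical record fields and thresholds), not LiPE's actual interface.

```python
# Conceptual illustration only: applying a simple age/size policy over an
# in-memory map of file records, the general pattern described above.
# This is not LiPE's actual interface; record fields and thresholds are
# hypothetical.
import time
from dataclasses import dataclass

@dataclass
class FileRecord:            # what a metadata scan might yield per file
    path: str
    size_bytes: int
    atime: float             # last access time (epoch seconds)

def policy_archive_cold(files, min_size=1 << 30, max_idle_days=90):
    """Yield paths that a policy run would hand to an HSM copy tool."""
    cutoff = time.time() - max_idle_days * 86400
    for f in files:
        if f.size_bytes >= min_size and f.atime < cutoff:
            yield f.path

scanned = [
    FileRecord("/mnt/lustre/sim/run42.h5", 8 << 30, time.time() - 200 * 86400),
    FileRecord("/mnt/lustre/sim/notes.txt", 4096, time.time() - 10 * 86400),
]
for path in policy_archive_cold(scanned):
    print("candidate for HSM archive:", path)
```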

In the next stage of development, LiPE may be integrated with a File Heat Map mechanism for more automated and transparent data management, resulting in a better utilization of parallel storage infrastructure.

In regard to Lustre performance, a new initiative within the community is investigating high-level tools, possibly at the user level, that would improve the utilization and configuration of Lustre Quality of Service (QoS). In support of those efforts, a new QoS approach has been developed based on a Token Bucket Filter algorithm at the OST level. It allows system administrators to define the maximum number of RPCs that may be issued by a user, group or job ID to a given OST. Throttling performance in this way provides I/O control and bandwidth reservation, helping guarantee that higher-priority jobs complete in more predictable time and avoiding performance variations due to I/O delays.
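
The token-bucket idea behind this throttling is easy to see in isolation. The following sketch is a generic token bucket written for illustration, not Lustre's NRS TBF implementation; the job IDs, rates and burst sizes are arbitrary.

```python
# Generic token-bucket rate limiter, shown only to illustrate the idea behind
# a TBF-style policy (this is not the Lustre implementation). Each job ID
# gets a bucket that refills at `rate` RPC tokens per second up to `burst`.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate          # tokens (RPCs) added per second
        self.burst = burst        # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if an RPC of the given cost may be issued now."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Hypothetical per-job limits: high-priority jobs get a higher RPC rate.
buckets = {"job_highprio": TokenBucket(rate=500, burst=1000),
           "job_batch": TokenBucket(rate=50, burst=100)}

for job in ("job_highprio", "job_batch"):
    issued = sum(buckets[job].allow() for _ in range(200))
    print(f"{job}: {issued} of 200 RPCs admitted immediately")
```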

In keeping with new HPC trends, a tremendous amount of work has also been invested in integrating Lustre with Linux container-based workloads, providing native Lustre file system capabilities within containers, support for new kernels, and specialized artificial intelligence and machine learning appliances.

2017 was a productive year for Lustre that showcased a very active and growing Lustre community and positioned Lustre as the “go to” choice for many high-performance computing organizations and data centers. Moving into 2018, look for Lustre roadmaps to solidify this position with enhanced security, performance, reliability/availability/serviceability (RAS), and data management capabilities, as well as the addition of more enterprise-class features.

 

The post Fostering Lustre Advancement Through Development and Contributions appeared first on HPCwire.

Supercomputing-Backed Analysis Reveals Decades of Questionable Investments

Wed, 01/17/2018 - 16:58

Jan. 17, 2018 — One of the key principles in asset pricing — how we value everything from stocks and bonds to real estate — is that investments with high risk should, on average, have high returns.

“If you take a lot of risk, you should expect to earn more for it,” said Scott Murray, professor of finance at Georgia State University. “To go deeper, the theory says that systematic risk, or risk that is common to all investments” — also known as ‘beta’ — “is the kind of risk that investors should care about.”

This theory was first articulated in the 1960s by Sharpe (1964), Lintner (1965), and Mossin (1966). However, empirical work dating as far back as 1972 didn’t support the theory. In fact, many researchers found that stocks with high risk often do not deliver higher returns, even in the long run.

“It’s the foundational theory of asset pricing but has little empirical support in the data. So, in a sense, it’s the big question,” Murray said.

Isolating the Cause

In a recent paper in the Journal of Financial and Quantitative Analysis, Murray and his co-authors Turan Bali (Georgetown University), Stephen Brown (Monash University) and Yi Tang (Fordham University), argue that the reason for this ‘beta anomaly’ lies in the fact that stocks with high betas also happen to have lottery-like properties – that is, they offer the possibility of becoming big winners. Investors who are attracted to the lottery characteristics of these stocks push their prices higher than theory would predict, thereby lowering their future returns.

Scott Murray, Assistant Professor of Finance at Georgia State University

To support this hypothesis, they analyzed stock prices from June 1963 to December 2012. For every month, they calculated the beta of each stock (up to 5,000 stocks per month) by running a regression — a statistical way of estimating the relationships among variables — of the stock’s return on the return of the market portfolio. They then sorted the stocks into 10 groups based on their betas and examined the performance of stocks in the different groups.
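
The beta estimate itself is a one-factor regression slope. Here is a minimal sketch with NumPy, using randomly generated return series rather than the study's 1963-2012 data:

```python
# Minimal sketch of estimating a stock's beta by regressing its monthly
# returns on market returns. The return series are randomly generated
# stand-ins, not the data used in the study.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.01, 0.04, size=60)          # 60 months of market returns
stock = 1.3 * market + rng.normal(0, 0.05, 60)    # a stock with "true" beta ~1.3

# OLS slope of stock returns on market returns = beta.
beta, alpha = np.polyfit(market, stock, deg=1)
print(f"estimated beta: {beta:.2f}, alpha: {alpha:.4f}")
```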

“Theory predicts that stocks with high betas do better in the long run than stocks with low betas,” Murray said. “Doing our analysis, we find that there really isn’t a difference in the performance of stocks with different betas.”

They next analyzed the data again and, for each stock month, calculated how lottery-like each stock was. Once again, they sorted the stocks into 10 groups based on their betas and then repeated the analysis. This time, however, they implemented a constraint that required each of the 10 groups to contain stocks with similar lottery characteristics. By making sure the stocks in each group had the same lottery properties, they controlled for the possibility that their original tests failed to detect a performance difference because stocks in different beta groups have different lottery characteristics.

“We found that after controlling for lottery characteristics, the seminal theory is empirically supported,” Murray said.

In other words: price pressure from investors who want lottery-like stocks is what causes the theory to fail. When this factor is removed, asset pricing works according to theory.

Identifying the Source

Other economists had pointed to a different factor — leverage constraints — as the main cause of this market anomaly. They believed that large investors like mutual funds and pensions that are not allowed to borrow money to buy large amounts of lower-risk stocks are forced to buy higher-risk ones to generate large profits, thus distorting the market.

Murray used the National Science Foundation-funded Wrangler supercomputer at the Texas Advanced Computing Center for his regression analysis. (Source: TACC)

However, an additional analysis of the data by Murray and his collaborators found that the lottery-like stocks were most often held by individual investors. If leverage constraints were the cause of the beta anomaly, mutual funds and pensions would be the main owners driving up demand.

The team’s research won the prestigious Jack Treynor Prize, given each year by the Q Group, which recognizes superior academic working papers with potential applications in the fields of investment management and financial markets.

The work is in line with ideas like prospect theory, first articulated by Nobel-winning behavioral economist Daniel Kahneman, which contends that investors typically overestimate the probability of extreme events — both losses and gains.

“The study helps investors understand how they can avoid the pitfalls if they want to generate returns by taking more risks,” Murray said.

To run the systematic analyses of the large financial datasets, Murray used the Wrangler supercomputer at the Texas Advanced Computing Center (TACC). Supported by a grant from the National Science Foundation, Wrangler was built to enable data-driven research nationwide. Using Wrangler significantly reduced the time-to-solution for Murray.

The plot shows the time-series of aggregate lottery demand. Aggregate lottery demand in any month t is measured as the equal-weighted (EWMAX) or value-weighted (VWMAX) average value of MAX across all stocks in the sample in month t. (Source: TACC)

“If there are 500 months in the sample, I can send one month to one core, another month to another core, and instead of computing 500 months separately, I can do them in parallel and have reduced the human time by many orders of magnitude,” he said.
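
That embarrassingly parallel pattern maps directly onto a worker pool. In the illustrative sketch below, `analyze_month` is a hypothetical stand-in for the per-month regression work:

```python
# Illustrative sketch of farming independent month-by-month analyses out to
# multiple cores. `analyze_month` is a hypothetical stand-in for the real
# per-month regression code described in the article.
from multiprocessing import Pool

def analyze_month(month_index):
    # Placeholder computation; the real job would load that month's stock
    # data and run the beta/lottery-characteristic regressions.
    result = sum(i * i for i in range(10_000)) % 997
    return month_index, float(result)

if __name__ == "__main__":
    months = range(500)              # roughly 500 months in a 1963-2012 sample
    with Pool() as pool:             # one task per month, spread across cores
        results = pool.map(analyze_month, months)
    print(f"processed {len(results)} months")
```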

The size of the data for the lottery-effect research was not enormous and could have been computed on a desktop computer or small cluster (albeit taking more time). However, with other problems that Murray is working on – for instance research on options – the computational requirements are much higher and require super-sized computers like those at TACC.

“We’re living in the big data world,” he said. “People are trying to grapple with this in financial economics as they are in every other field and we’re just scratching the surface. This is something that’s going to grow more and more as the data becomes more refined and technologies such as text processing become more prevalent.”

Though historically used for problems in physics, chemistry and engineering, advanced computing is starting to be widely used — and to have a big impact — in economics and the social sciences.

According to Chris Jordan, manager of the Data Management & Collections group at TACC, Murray’s research is a great example of the kinds of challenges Wrangler was designed to address.

“It relies on database technology that isn’t typically available in high-performance computing environments, and it requires extremely high-performance I/O capabilities. It is able to take advantage of both our specialized software environment and the half-petabyte flash storage tier to generate results that would be difficult or impossible on other systems,” Jordan said. “Dr. Murray’s work also relies on a corpus of data which acts as a long-term resource in and of itself — a notion we have been trying to promote with Wrangler.”

Beyond its importance to investors and financial theorists, the research has a broad societal impact, Murray contends.

“For our society to be as prosperous as possible, we need to allocate our resources efficiently. How much oil do we use? How many houses do we build? A large part of that is understanding how and why money gets invested in certain things,” he explained. “The objective of this line of research is to understand the trade-offs that investors consider when making these sorts of decisions.”

Source: Aaron Dubrow, TACC

The post Supercomputing-Backed Analysis Reveals Decades of Questionable Investments appeared first on HPCwire.

Inventor Claims to Have Solved Floating Point Error Problem

Wed, 01/17/2018 - 15:59

“The decades-old floating point error problem has been solved,” proclaims a press release from inventor Alan Jorgensen. The computer scientist has filed for and received a patent for a “processor design, which allows representation of real numbers accurate to the last digit.” The patent (No. 9,817,662, “Apparatus for Calculating and Retaining a Bound on Error During Floating Point Operations and Methods Thereof”) was issued on November 14, 2017.

Alan Jorgensen

Jorgensen presents his bounded floating point system as “a game changer for the computing industry,” tackling a pernicious problem that (as he cites) has been implicated in catastrophic failures, including the 1991 Patriot missile failure, which resulted in 28 U.S. military deaths.

The inventor patented a process that addresses floating point errors by computing “two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.”
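
To get a feel for what carrying bounds through a calculation means, the toy sketch below propagates a lower and upper bound in software. It illustrates only the general idea and is not the patented bounded floating point format, which operates on the significand in hardware.

```python
# Toy interval-style bound tracking, shown only to illustrate the general idea
# of carrying a lower and upper bound through successive calculations. This is
# NOT the patented bounded floating point format. Requires Python 3.9+ for
# math.nextafter.
from dataclasses import dataclass
import math

@dataclass
class Bounded:
    lo: float
    hi: float

    @classmethod
    def of(cls, x: float) -> "Bounded":
        # Widen by one ULP in each direction so the true value is bracketed.
        return cls(math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

    def __add__(self, other: "Bounded") -> "Bounded":
        return Bounded(math.nextafter(self.lo + other.lo, -math.inf),
                       math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other: "Bounded") -> "Bounded":
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Bounded(math.nextafter(min(products), -math.inf),
                       math.nextafter(max(products), math.inf))

x = Bounded.of(0.1) + Bounded.of(0.2)   # the true sum lies inside [x.lo, x.hi]
print(x.lo <= 0.3 <= x.hi, x)
```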

Jorgensen says the method performs in real time and can operate in conjunction with existing hardware and software. Also, converting between existing standardized floating point and this new bounded floating point format can be done with simple operations, he says.

Unreported floating point errors are relevant for highly compute-intensive functions, especially where accuracy and safety are paramount, such as weather prediction, GPS, autonomous vehicles and finance. Jorgensen claims that his system guarantees accuracy of floating point values to plus or minus one in the last digit.

The invention is said to provide error information with minimal impact to performance or memory space compared with current methods. “In the current art, static error analysis requires significant mathematical analysis and cannot determine actual error in real time,” reads a section of the patent. “This work must be done by highly skilled mathematician programmers. Therefore, error analysis is only used for critical projects because of the greatly increased cost and time required. In contrast, the present invention provides error computation in real time with, at most, a small increase in computation time and a small increase in the maximum number of bits available for the significand.”

Read the patent filing in full here.

The abstract offers a few more details:

The apparatus and method for calculating and retaining a bound on error during floating point operations inserts an additional bounding field into the standard floating-point format that records the retained significant bits of the calculating with notification upon insufficient retention. The bounding field, which accounts for both rounding and cancellation errors, has two parts, the lost bits D Field and the accumulated rounding error R Field. The D Field states the number of bits in the floating point representation that are no longer meaningful. The bounds on the real value represented are determined from the truncated floating point value (first bound) and the addition of the error determined by the number of lost bits (second bound). The true, real value is absolutely contained by the first and second bounds. The allowed loss (optionally programmable) of significant bits provides a fail-safe, real-time notification of loss of significant bits.

According to Jorgensen’s LinkedIn profile, he has a PhD in Computer Science and is a part time instructor at the University of Nevada, Las Vegas (UNLV) where he teaches computer science to non-computer science students.

The post Inventor Claims to Have Solved Floating Point Error Problem appeared first on HPCwire.

Colovore Announces 2 MW Phase 3 Colocation Expansion

Wed, 01/17/2018 - 08:07

SANTA CLARA, Calif., Jan. 17, 2018 — Colovore has announced that it has begun construction on Phase 3, adding another 2 MW of capacity to its Santa Clara data center. Since launching in 2014, Colovore has grown rapidly by providing the highest power densities in Bay Area colocation, exceptional uptime and service quality, and a cost-effective, pay-by-the-kW pricing model. As with Phase 2, all cabinets in Phase 3 will support 35 kW of critical load. The additional capacity is expected to be delivered in early Q3, and Colovore is now marketing Phase 3, adding much-needed high-density capacity to the tight Bay Area colocation marketplace.

Highlights / Key Facts

  • Customers utilize Colovore to host their high-performance computing (HPC) and Big Data infrastructure, private/hybrid cloud deployments, and internal lab environments
  • With power densities of 35 kW per rack, Colovore provides the highest footprint efficiency and lowest TCO in Bay Area colocation; customers can pack their racks full of servers and operate in a much smaller, cost-effective footprint than legacy colos
  • Colovore’s pay-by-the-kW pricing model allows customers to match their costs directly to their IT requirements as they go, providing significant cost savings and easy scalability – 1 kW at a time
  • With 9 MW of total power available at its facility, Colovore has plenty of capacity for future expansion beyond this 2 MW Phase 3

“We are clearly seeing increasing rollout of power-hungry computing platforms supporting a number of fast-growing HPC applications,” stated Sean Holzknecht, President and Co-Founder of Colovore. “Artificial intelligence, Big Data, self-driving cars, and the Internet of Things are exploding and customers need data centers with next-generation power and cooling capabilities to support the underlying IT infrastructure. That is our specialty at Colovore.”

To learn more about how you can benefit from Colovore’s high-performance colocation solutions, contact Ben Coughlin at Colovore (tel. #408-330-9290) or email info@colovore.com.

About Colovore
Colovore is a leading provider of high-performance colocation services. Our 9 MW state-of-the-art data center in Santa Clara features power densities of 35 kW per rack and a pay-by-the-kW pricing model. We offer colocation the way you want it—cost-efficient, scalable, and robust. Colovore is profitable and backed by industry leaders including Digital Realty Trust. For more information please visit www.colovore.com.

Source: Colovore

The post Colovore Announces 2 MW Phase 3 Colocation Expansion appeared first on HPCwire.

Quantum Corporation Names Patrick Dennis CEO

Tue, 01/16/2018 - 18:46

SAN JOSE, Calif., Jan. 16, 2018 — Quantum Corp. today announced that its board of directors has appointed Patrick Dennis as president and CEO, effective today. Dennis was most recently president and CEO of Guidance Software and has also held senior executive roles in strategy, operations, sales, services and engineering at EMC. He succeeds Adalio Sanchez, a member of Quantum’s board who had served as interim CEO since early November 2017. Sanchez will remain on the board and assist with the transition.

“Patrick has been a successful public company CEO and brings a broad range of experience in storage and software, including a proven track record leading business transformations,” said Raghu Rau, Quantum’s chairman. “The other board members and I look forward to working closely with him to drive growth, cost reductions, and profitability and deliver long-term shareholder value. We also want to thank Adalio for stepping in and leading the company during a critical transition period.”

“During my time as CEO, I’ve greatly appreciated the commitment to change I’ve seen from team members across Quantum and will be supporting Patrick in any way I can to build on the important work we started,” said Sanchez.

Dennis served as president and CEO of Guidance Software, a provider of cyber security software solutions, from May 2015 until its acquisition by OpenText last September. During his tenure, he turned the company around, growing revenue and significantly improving profitability. Before joining Guidance Software, Dennis was senior vice president and chief operating officer, Products and Marketing, at EMC, where he led the business operations of its $10.5 billion enterprise and mid-range systems division, including management of its cloud storage business. Dennis spent 12 years at EMC, including as vice president and chief operating officer of EMC Global Services, overseeing a 3,500-person technical sales force. In addition to his time at EMC, he served as group vice president, North American Storage Sales, at Oracle, where he turned around a declining business.

“With its long-standing expertise in addressing the most demanding data management challenges, Quantum is well-positioned to help customers maximize the strategic value of their ever-growing digital assets in a rapidly changing environment,” said Dennis. “I’m excited to be joining the company as it looks to capitalize on this market opportunity by leveraging its strong solutions portfolio in a more focused way, improving its cost structure and execution, and continuing to innovate.”

About Quantum

Quantum is a leading expert in scale-out tiered storage, archive and data protection, providing solutions for capturing, sharing, managing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. Quantum’s end-to-end, tiered storage foundation enables customers to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.

Source: Quantum Corp.

The post Quantum Corporation Names Patrick Dennis CEO appeared first on HPCwire.
