Feed aggregator

Sugon Announces 1st OPA-Based Torus Switch

HPC Wire - Fri, 11/10/2017 - 08:38

Nov. 10, 2017 — As supercomputers achieve petascale and reach toward exascale, efficient communication among thousands of nodes becomes an important challenge. One pioneering solution is the Silicon Switch, an OPA-based Torus topology switch from Sugon, China’s high-performance computing leader. A demo of the switch was exhibited at SC17 in Denver.

“Large-scale supercomputers, especially those quasi-Exascale or Exascale systems, face severe challenges in terms of system scale, scalability, cost, energy consumption, reliability and more. The Silicon Switch released by Sugon adopts the Torus architecture and state-of-the-art OPA technology, and thus carries more competitive features, including advanced performance, almost unlimited scalability, and excellent fault tolerance. It shall be a wise choice for Exascale supercomputers,” said Dr. Li Bin, General Manager of the HPC Product Business Department at Sugon.

Compared with the traditional Fat-tree topology, a Torus direct network, which emphasizes nearest-neighbor interconnection, has clear advantages in scalability and cost/performance, since network cost grows only linearly with system scale. In addition, its rich redundant data paths and dynamic routing give it inherent strength in fault tolerance. These features meet the requirements of Exascale supercomputers and point to a new direction for high-speed network technology.

Dr. Li Bin further remarked that Sugon implemented a 3D-Torus network in 2015 as a solution for its Earth System Numerical Simulator. More recently, Sugon’s research in Torus network technology has made further breakthroughs. The dimension of the Torus network has evolved from 3D to 6D, which effectively reduces the longest network hop count of large-scale systems. At the software level, deadlock-free dynamic routing algorithms supporting 6D-Torus have been verified and tested in a production environment. At the hardware level, the Silicon Switch released this time is an important example of the hardware implementation.
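The hop-count claim is easy to see with a quick back-of-the-envelope check: in a torus, the worst-case hop count (the network diameter) is the sum of the half-ring distances in each dimension, so spreading the same node count over more dimensions shortens the longest path. The Python sketch below uses illustrative dimension sizes, not Sugon’s actual configuration.

    from math import prod

    def torus_diameter(dims):
        """Worst-case hop count of a torus: sum of half-ring distances per dimension."""
        return sum(k // 2 for k in dims)

    # Illustrative configurations of 4,096 nodes each (not Sugon's actual sizes).
    for dims in [(16, 16, 16), (4, 4, 4, 4, 4, 4)]:
        print(f"{len(dims)}D torus {dims}: {prod(dims)} nodes, "
              f"diameter = {torus_diameter(dims)} hops")
    # 3D: 24 hops worst case; 6D: 12 hops -- more dimensions, shorter longest path.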

The “Silicon” mentioned above refers to a unit in the Torus high-dimensional direct network. With a 3D-Torus topology adopted within each silicon unit, multiple silicon units can be aggregated into a higher-dimensional 4D/5D/6D-Torus direct network. Integrating a 3D-Torus silicon unit into a modular switch brings many benefits, such as greatly improving the integration and density of the system, simplifying network cabling, and reducing deployment complexity and cost. The released Silicon Switch supports up to 192 ports (100Gb each). Different Silicon Switches can be connected through a dedicated 400Gb interface.

Leveraging the integrated Silicon Switch could also greatly broaden adoption of Torus high-speed network technology: since almost no change is required on the compute-node side, small and medium-scale high-performance computing systems can adopt the Torus topology smoothly.

It is worth mentioning that the Silicon Switch also supports cold-plate direct liquid cooling, marking the extension of Sugon’s liquid cooling technology from computing devices to the network system. Liquid cooling plays a key role in improving the integration and reliability of large-scale network systems while reducing their energy consumption.

The flourishing development of high-performance computing and artificial intelligence relies not only on powerful computing components but also on efficient communication. Sugon aims to blaze new trails in computing, storage, networking and other core technologies.

Source: Sugon

The post Sugon Announces 1st OPA-Based Torus Switch appeared first on HPCwire.

NVIDIA Announces Financial Results for Third Quarter Fiscal 2018

HPC Wire - Fri, 11/10/2017 - 08:16

Nov. 10, 2017 — NVIDIA has reported record revenue for the third quarter ended October 29, 2017, of $2.64 billion, up 32 percent from $2.00 billion a year earlier, and up 18 percent from $2.23 billion in the previous quarter, with growth across all its platforms.

GAAP earnings per diluted share for the quarter were a record $1.33, up 60 percent from $0.83 a year ago and up 45 percent from $0.92 in the previous quarter. Non-GAAP earnings per diluted share were $1.33, also a record, up 41 percent from $0.94 a year earlier and up 32 percent from $1.01 in the previous quarter.

“We had a great quarter across all of our growth drivers,” said Jensen Huang, founder and chief executive officer of NVIDIA. “Industries across the world are accelerating their adoption of AI.

“Our Volta GPU has been embraced by every major internet and cloud service provider and computer maker. Our new TensorRT inference acceleration platform opens us to growth in hyperscale datacenters. GeForce and Nintendo Switch are tapped into the strongest growth dynamics of gaming. And our new DRIVE PX Pegasus for robotaxis has been adopted by companies around the world. We are well positioned for continued growth,” he said.

Capital Return

During the first nine months of fiscal 2018, NVIDIA returned to shareholders $909 million in share repurchases and $250 million in cash dividends. As a result, the company returned an aggregate of $1.16 billion to shareholders in the first nine months of the fiscal year. The company intends to return $1.25 billion to shareholders in fiscal 2018.

For fiscal 2019, NVIDIA intends to return $1.25 billion to shareholders through ongoing quarterly cash dividends and share repurchases. The company announced a 7 percent increase in its quarterly cash dividend to $0.15 per share from $0.14 per share, to be paid with its next quarterly cash dividend on December 15, 2017, to all shareholders of record on November 24, 2017.

Q3 FY2018 Summary

GAAP ($ in millions except earnings per share)

                                Q3 FY18    Q2 FY18    Q3 FY17    Q/Q          Y/Y
Revenue                         $2,636     $2,230     $2,004     Up 18%       Up 32%
Gross margin                    59.5%      58.4%      59.0%      Up 110 bps   Up 50 bps
Operating expenses              $674       $614       $544       Up 10%       Up 24%
Operating income                $895       $688       $639       Up 30%       Up 40%
Net income                      $838       $583       $542       Up 44%       Up 55%
Diluted earnings per share      $1.33      $0.92      $0.83      Up 45%       Up 60%

 

Non-GAAP ($ in millions except earnings per share)

                                Q3 FY18    Q2 FY18    Q3 FY17    Q/Q          Y/Y
Revenue                         $2,636     $2,230     $2,004     Up 18%       Up 32%
Gross margin                    59.7%      58.6%      59.2%      Up 110 bps   Up 50 bps
Operating expenses              $570       $533       $478       Up 7%        Up 19%
Operating income                $1,005     $773       $708       Up 30%       Up 42%
Net income                      $833       $638       $570       Up 31%       Up 46%
Diluted earnings per share      $1.33      $1.01      $0.94      Up 32%       Up 41%

NVIDIA’s outlook for the fourth quarter of fiscal 2018 is as follows:

  • Revenue is expected to be $2.65 billion, plus or minus two percent.
  • GAAP and non-GAAP gross margins are expected to be 59.7 percent and 60.0 percent, respectively, plus or minus 50 basis points.
  • GAAP and non-GAAP operating expenses are expected to be approximately $722 million and $600 million, respectively.
  • GAAP and non-GAAP other income and expense are both expected to be nominal.
  • GAAP and non-GAAP tax rates are both expected to be 17.5 percent, plus or minus one percent, excluding any discrete items. GAAP discrete items include excess tax benefits or deficiencies related to stock-based compensation, which the company expects to generate variability on a quarter-by-quarter basis.

Third Quarter Fiscal 2018 Highlights

During the third quarter, NVIDIA achieved progress in these areas:

Datacenter

Gaming

Professional Visualization

Automotive

  • Announced NVIDIA DRIVE PX Pegasus, the world’s first auto-grade AI computer designed to enable a new class of driverless robotaxis without steering wheels, pedals or mirrors.

Autonomous Machines/AI Edge Computing

Source: NVIDIA

The post NVIDIA Announces Financial Results for Third Quarter Fiscal 2018 appeared first on HPCwire.

Caringo Introduces Caringo Drive for Swarm Scale-Out Hybrid Storage

HPC Wire - Fri, 11/10/2017 - 08:07

AUSTIN, Tex., Nov. 10, 2017 — Today, Caringo announced their latest product, Caringo Drive, a virtual drive for Swarm Scale-Out Hybrid Storage, which they will demo at SC17 Booth 1001 in Denver, Colorado, November 13–16, 2017, along with their complete product line. Once Caringo Drive is installed on macOS and Windows systems, customers have convenient access and can easily drag and drop files to Swarm with background parallel transfer. This speeds content uploads and provides simple drive-based access to Swarm from applications.

Caringo’s flagship product Swarm eliminates storage silos by turning standard server hardware into a limitless pool of data resources delivering continuous protection, multi-tenancy and metering for chargebacks. HPC customers are able to offload data from primary storage and enable collaboration while reducing storage TCO by 75% and scaling to 100s of petabytes.

VP of Product Tony Barbagallo said, “Many organizations like Argonne National Laboratories and Texas Tech University trust their storage infrastructure to Caringo Swarm to provide infinite expansion across multi-vendor, local, and cloud-based storage. They use Swarm to store, preserve, and protect data generated in dispersed locations to facilitate in-depth research, drive technological breakthroughs, and support thousands of staff, researchers, and students around the world. With Caringo Drive, we expand our toolset to empower our customers to easily manage their Swarm cluster.”

In addition to showcasing their complete product line at SC17, Caringo will offer a no-cost, full-featured 100TB license of Caringo Swarm Scale-Out Hybrid Storage to qualified High-Performance Computing (HPC) customers. SC17 attendees are also invited to join the Caringo team at their widely anticipated Happy Hour at 2 pm, Tuesday and Wednesday, in the Caringo booth. For more information, see https://www.caringo.com/sc17/.

The 100TB license promotion and integration consultation is available now to qualified HPC and Education organizations. Interested parties can visit https://www.caringo.com/solutions/hpc/ for more information.

About Caringo

Founded in 2005, Caringo is committed to helping customers unlock the value of their data and solve issues associated with data protection, management, organization, and search at massive scale.

Source: Caringo

The post Caringo Introduces Caringo Drive for Swarm Scale-Out Hybrid Storage appeared first on HPCwire.

SIGHPC Education and IHPCTC Join Forces to Promote HPC Education and Training

HPC Wire - Fri, 11/10/2017 - 07:57

Nov. 10, 2017 — The SIGHPC Education Chapter (SIGHPCEDU) and the International High Performance Computing Training Consortium (IHPCTC) have announced an integration of their efforts to build a combined collaborative community focused on the development, dissemination, and assessment of HPC training and education materials.  The goals of the collaboration include the promotion of HPC training activities, avoidance of duplication of efforts in creating such materials, and the assessment of the impacts of that training.   

The combined organization has begun work on a number of short- and long-term activities aimed at those goals. Those activities will be discussed at the SIGHPC Education Chapter BoF at SC17 (November 16, 12:15 PM, Room 205-207). They include the preparation of a master list of existing training materials, webinars, blogs and discussion forums, and outlets for publishing training and education experiences. The outcomes of the discussion at SC17 will be posted on the SIGHPCEDU website (https://sighpceducation.acm.org/) following the conference. Those interested in volunteering to assist with these efforts should contact the SIGHPC Education Chapter Officers (Richard Coffey, Fernanda Foertter, Steve Gordon, Dana Brunson, and Holly Hirst) at SIGHPCEDUC-OFFICERS@listserv.acm.org.

The SIGHPC Education Chapter is the first virtual chapter of the ACM.  Its objectives are to:

  • Promote an increased knowledge of, and greater interest in, the educational and scientific aspects of HPC and their applications.
  • Provide a means of communication among individuals having an interest in education and career building activities relating to HPC.
  • Promote and collate education activities and programs through formal and informal education activities.
  • Provide guidance to the community on the competencies required for effective application of computational modeling, simulation, data analysis, and visualization techniques.
  • Provide information on quality educational programs and materials as well as facilitating experience-building access to existing HPC resources.

Membership is only $10 per year for professionals and $5 for students.

The International High Performance Computing Training Consortium is an ad hoc group of training professionals formed in response to several training workshops held at the annual SC meetings. Its members include professional staff from 18 countries. The group has been organizing HPC training workshops at SC for the past four years. We welcome you to join us for the Fourth SC Workshop on Best Practices for HPC Training on Sunday from 2-5:30 pm in room 601 of the Convention Center.

Source: SIGHPC

The post SIGHPC Education and IHPCTC Join Forces to Promote HPC Education and Training appeared first on HPCwire.

Fujitsu to Build PRIMERGY Supercomputer for the Institute of Fluid Science at Tohoku University

HPC Wire - Thu, 11/09/2017 - 22:22

Nov. 10 — Fujitsu today announced that it has received an order for “The Supercomputer System” from the Institute of Fluid Science at Tohoku University.

The Supercomputer System will consist of multiple computational systems using the latest Fujitsu Server PRIMERGY x86 servers, and is planned to deliver a peak theoretical performance in excess of 2.7 petaflops.

The Supercomputer System will be deployed to the Advanced Fluid Information Research Center in the Institute of Fluid Science, Tohoku University in Sendai, Miyagi Prefecture, with plans to begin operations in fiscal 2018. Through deployment and operations of this system, Fujitsu will support the Tohoku University Institute of Fluid Science in the advancement of its research into the phenomena of fluids in a variety of fields, including biology, energy, aerospace and semiconductors.

Background

The Institute of Fluid Science at Tohoku University has contributed to the development of fluid science in a variety of fields, including clarifying the flow of blood through the body, and controlling plasma flow in semiconductor manufacturing, using a next-generation integrated research method that unites creative experimental research with supercomputer-based computational research.

Now, the institute is upgrading and significantly improving the performance of its core equipment, The Supercomputer System, in order to further enhance its fluid science research in fields such as health, welfare and medicine, the environment and energy, aerospace and manufacturing.

Fujitsu received the order for this system based on a proposal that combined software-based virtualization technology with a large-scale computational system that utilizes the technology Fujitsu has cultivated through HPC development.

Details of the New System

The Supercomputer System comprises a core supercomputer with three computation systems: two shared-memory parallel computation systems, which can use a large memory space, and one distributed-memory parallel computation system, which can execute large-scale parallel programs. It also includes a login server, an application and remote graphics server, software, and a variety of subsystems for tasks such as visualization and storage. The three computation systems in the core supercomputer will consist of Fujitsu’s latest PRIMERGY x86 servers, with the distributed-memory parallel computation system planned to deliver a theoretical peak performance in excess of 2.7 petaflops. In addition, by employing a water-cooling model, the system will also offer high energy efficiency.

Related Websites

Fujitsu Server PRIMERGY x86 Servers

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

Source: Fujitsu

The post Fujitsu to Build PRIMERGY Supercomputer for the Institute of Fluid Science at Tohoku University appeared first on HPCwire.

SC17: Legion Seeks to Elevate HPC Programming

HPC Wire - Thu, 11/09/2017 - 22:15

As modern HPC architectures become ever more complex, so too does the task of programming these machines. In the quest for the trifecta of better performance, portability and programmability, new HPC programming systems are being developed. The Legion programming system, a data-centric parallel programming system for writing portable high performance programs, is one such effort that is being developed at Stanford University in collaboration with Nvidia and several U.S. Department of Energy labs.

In this Q&A, Stanford University Computer Science Chair Alex Aiken and Nvidia Chief Scientist Bill Dally provide an overview of Legion, its goals and its relevance for exascale computing. Aiken will hold a tutorial on the Legion programming model this Sunday at SC17 in Denver from 1:30-5pm MT.

HPCwire: Let’s start with a basic, but important question: why does HPC need new programming models?

Alex Aiken, professor and the chair of computer science at Stanford

Alex Aiken and Bill Dally: New programming models are needed to raise the level of programming to enhance portability across types and generations of high-performance computers. Today programmers specify low-level details, like how much parallelism to exploit, and how to stage data through levels of memory. These low-level details tie an application to the performance of a specific machine, and the effort required to modify the code to target future machines is becoming a major obstacle to actually doing high performance computing. By elevating the level of programming, these target-dependent decisions can be made by the programming system, making it easier to write performant codes, and making the codes themselves performance portable.

HPCwire: What is the Legion programming system? What are the main goals of the project?

Aiken and Dally: Legion is a new programming model for modern supercomputing systems that aims to provide excellent performance, portability, and scalability of application codes across a wide range of hardware. A Legion application is composed of tasks written in the language of the programmer’s choice, such as C++, CUDA, Fortran, or OpenACC. Legion tasks specify which “regions” of data they will access as well as what kinds of accesses will be performed. Knowledge of the data used by each task allows Legion to confer many benefits to application developers:

Bill Dally, Nvidia chief scientist & Stanford professor

First, a Legion programming system can analyze the tasks and their data usage to automatically and safely infer parallelism and perform the scheduling transformations necessary to fill an exascale machine, even if the code was written in an apparently-sequential style.

Second, the programming system’s knowledge of which data will be accessed by each task allows Legion to automatically insert the necessary data movement for a complex memory hierarchy, greatly simplifying application code and reducing (or often eliminating) idle cycles on processors waiting for necessary data to arrive.

Finally, Legion’s machine-agnostic description of an application in terms of tasks and regions decouples the process of specifying an application from the determination of how it is mapped to a target machine. This allows the porting and tuning of an application to be done independently from its development and facilitates tuning by machine experts or even a machine learning algorithm. This makes Legion programs inherently performance portable.
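To make the task/region idea concrete, here is a deliberately simplified sketch in plain Python (a toy model, not Legion’s actual API) of how a runtime that knows each task’s declared read and write regions can infer which tasks may safely run in parallel: two tasks must be ordered only if one writes a region the other touches.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        reads: set = field(default_factory=set)
        writes: set = field(default_factory=set)

    def conflicts(a, b):
        """Tasks must be ordered if either writes a region the other reads or writes."""
        return bool(a.writes & (b.reads | b.writes)) or bool(b.writes & (a.reads | a.writes))

    tasks = [
        Task("init_grid",  writes={"grid"}),
        Task("init_parts", writes={"particles"}),   # independent of init_grid
        Task("step", reads={"grid", "particles"}, writes={"particles"}),
    ]

    # Issue tasks in program order; each may start once its region conflicts are resolved.
    for i, t in enumerate(tasks):
        deps = [p.name for p in tasks[:i] if conflicts(p, t)]
        print(f"{t.name}: waits on {deps if deps else 'nothing (runs in parallel)'}")

In this toy example the two initialization tasks touch disjoint regions and so run concurrently, while the update task waits on both; Legion performs this kind of dependence analysis automatically from the region requirements, even for code written in an apparently sequential style.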

HPCwire: The DOE is investing in Legion development as part of its exascale program. How is Legion positioned to address the challenges of exascale?

Aiken and Dally: Legion is designed for exascale computation. Legion guarantees that parallel execution has the same result as sequential execution, which is a huge advantage for debugging at scale. Legion also provides rich capabilities for describing how a Legion program uses its data. Since managing and moving data is the limiter in many current petascale and future exascale applications, these features give Legion the information it needs to do a much better job of managing data placement and movement than current programming systems. Legion is also highly asynchronous, avoiding the global synchronization constructs which only become more expensive on larger machines. Finally, under the hood, the Legion implementation exploits the extra information it has about a program’s data and its asynchronous capabilities to the hilt, performing much more sophisticated static and dynamic analysis of programs than is possible in current systems to support Legion’s higher level of abstraction while providing scalable and portable performance.

HPCwire: Why is Nvidia involved in Legion? How does Legion fit into Nvidia’s vision for computing?

Dally: Nvidia wants to make it easy for people to develop production application codes that can scale to exascale machines and easily be ported between supercomputers with different GPU generations, numbers of GPUs, and different sized memory hierarchies. By letting programmers specify target-independent codes at a high level, leaving the mapping decisions to the programming system, Legion accomplishes these goals.

Nvidia is also very excited to collaborate with leading researchers from Stanford University and Los Alamos National Lab to move this technology forward.

HPCwire: One of the stated goals/features of Legion is performance portability; at a high-level, how does it achieve this?

Aiken and Dally: Performance portability is achieved in Legion through a strict separation of concerns: we aim to completely decouple the description of the computation from how it is mapped to the target machine. This approach manifests itself explicitly in the programming model. All Legion programs consist of two parts: a machine-independent specification that describes the computation abstractly, without any machine details, and one or more application- and/or machine-specific mappers that make policy decisions about how the application should be executed on the target machine. Machine-independent applications can therefore be written once and easily migrated to new machines simply by changing the mapping decisions. Importantly, mapping decisions can only impact the performance of the code and never its correctness, as the programming system uses program analysis to determine whether any data movement and synchronization is necessary to satisfy the mapping decisions.
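The same toy style can sketch that separation of concerns (again hypothetical Python, not Legion’s mapper interface): the task list never mentions processors, and swapping the mapper object changes where work runs without touching the specification.

    # Machine-independent "specification": what to run, with no processors mentioned.
    tasks = ["halo_exchange", "stencil_update", "reduce_norm"]

    class RoundRobinMapper:
        """Toy policy: spread tasks across every available processor."""
        def __init__(self, procs):
            self.procs = procs
        def map_task(self, idx, task):
            return self.procs[idx % len(self.procs)]

    class GPUFirstMapper:
        """Toy policy: place everything on GPUs, falling back to the first CPU."""
        def __init__(self, procs):
            self.targets = [p for p in procs if p.startswith("gpu")] or procs[:1]
        def map_task(self, idx, task):
            return self.targets[idx % len(self.targets)]

    machine = ["cpu0", "cpu1", "gpu0", "gpu1"]
    for mapper in (RoundRobinMapper(machine), GPUFirstMapper(machine)):
        placement = {t: mapper.map_task(i, t) for i, t in enumerate(tasks)}
        print(type(mapper).__name__, placement)
    # Same task list, two placements: only the performance policy changes, never correctness.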

HPCwire: Alex, what will you be covering in your SC17 tutorial on Sunday and who should attend?

Aiken: The tutorial will cover the major features of the Legion programming system and will be hands-on; participants will be writing programs almost from the start and every concept will be illustrated with a small programming exercise. Anyone who is interested in learning something about the benefits and state of the art of task-based programming models, and of Legion specifically, should find the tutorial useful.

HPCwire: What is the most challenging part of developing a new HPC programming model?

Aiken and Dally: The most challenging part is managing expectations. It is easy to forget that it took MPI more than 15 years from the time the initial prototypes were proposed to when really solid implementations were available for use. Many users expect new HPC programming models such as Legion to mature much faster than this. We’ve been lucky to collaborate with groups like Jackie Chen’s combustion group at Sandia National Lab, the FleCSI team at Los Alamos National Lab, and the LCLS-II software team at SLAC that are willing to work with us on real applications that push us through our growing pains and ensure the end result will be one that is broadly useful in the HPC programming ecosystem.

HPCwire: How hard is it for an HPC programmer with a legacy application to migrate that application to Legion?

Aiken and Dally: Legion is designed to facilitate the incremental migration of an MPI-based application. Legion interoperates with MPI, allowing a porting effort to focus on moving the performance-critical sections (e.g., the main time-stepping loop or a key solver) to Legion tasks while leaving other parts of the application such as initialization or file I/O in their original MPI-based form. And since Legion operates at the granularity of tasks, the compute heavy “inner loops” from the original optimized application code can often be used directly as the body of newly-created Legion tasks.

As an example, the combustion simulation application S3D, developed at Sandia National Labs, consists of over 200,000 lines of Fortran+MPI code, but only two engineer-months of effort were required to port the main integration loop to Legion. The integration loop comprises only 15 percent of the overall code base, but consumes 97 percent of the cycles during execution. Although still contained in the original Fortran shell, the use of the Legion version of the integration loop allows S3D to run more than 4x faster than the original Fortran version, and over 2x faster than other GPU-accelerated versions of the code.
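A quick Amdahl’s-law check (illustrative arithmetic, not a figure from the interview) shows why porting only that loop pays off: with 97 percent of the runtime in the ported section, the reported 4x overall gain corresponds to the Legion version of the loop running roughly 4.4x faster, and even an infinitely fast loop would cap the whole application at about 33x.

    def overall_speedup(fraction, section_speedup):
        """Amdahl's law: whole-program speedup when only part of it is accelerated."""
        return 1.0 / ((1.0 - fraction) + fraction / section_speedup)

    f = 0.97  # share of execution time in the ported integration loop (from the article)
    for s in (2, 4.4, 10, float("inf")):
        print(f"loop {s}x faster -> whole application {overall_speedup(f, s):.1f}x faster")
    # A ~4.4x faster loop yields the ~4x overall gain reported for S3D;
    # the ceiling with f = 0.97 is 1/0.03, about 33x.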

[Figure: Architecture of the Legion programming system. Applications targeting Legion can either be written in the Regent programming language or written directly to the Legion C++ runtime interface; applications written in Regent are compiled to LLVM (and call a C wrapper for the C++ runtime API).] Additional info.

The post SC17: Legion Seeks to Elevate HPC Programming appeared first on HPCwire.

Ahead of SC17, Mellanox Launches Scalable 200G Switch Platforms

HPC Wire - Thu, 11/09/2017 - 15:54

In the run-up to the annual supercomputing conference SC17 next week in Denver, Mellanox made a series of announcements today, including a scalable switch platform based on its HDR 200G InfiniBand technology and the first deployment of a 100Gb/s Linux kernel-based Ethernet switch.

The company touts its HDR (High Data Rate) 200G InfiniBand Quantum line, which offers up to 800 ports of 200Gb/s or 1,600 ports of 100Gb/s in one chassis, as the most scalable switch platform available.

The platform family includes:

  • Quantum QM8700: 40-port 200Gb/s or 80-port 100Gb/s
  • Quantum CS8510: modular 200-port 200Gb/s or 400-port 100Gb/s
  • Quantum CS8500: modular 800-port 200Gb/s or 1,600-port 100Gb/s

Mellanox said the Quantum product line’s switch density will enable space and power consumption optimization, reducing network equipment cost by 4X, electricity costs by 2X and improving data transfer time by 2X.

At departmental scale, a single Quantum QM8700 switch connects 80 servers, which the company said is 1.7 times more than competing products. At enterprise scale, a 2-layer Quantum switch topology connects 3,200 servers, 2.8 times more. At hyperscale, a 3-layer Quantum switch topology connects 128,000 servers, 4.6 times more than competing products.
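Those scale figures are consistent with standard folded-Clos (fat-tree) arithmetic for an 80-port switch radix, in which a non-blocking 2-tier fabric reaches r²/2 hosts and a 3-tier fabric r³/4; the check below is illustrative arithmetic, not Mellanox’s published methodology.

    r = 80  # switch radix when run as 80 x 100Gb/s ports

    print(f"single switch:   {r:,} servers")          # departmental scale
    print(f"2-tier fat tree: {r**2 // 2:,} servers")   # 3,200 -- the enterprise figure
    print(f"3-tier fat tree: {r**3 // 4:,} servers")   # 128,000 -- the hyperscale figure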

“The HDR 200G Quantum switch platforms will enable the highest scalability, all the while dramatically reducing the data center capital and operational expenses,” said Gilad Shainer, vice president of marketing at Mellanox. “Quantum will enable the next generation of high-performance computing, deep learning, big data, cloud and storage platforms to deliver the highest performance and setting a clear path to exascale computing.”

“Mellanox Quantum more than doubles the number of compute nodes per InfiniBand leaf switch, which supports the industry-leading physical density of the Penguin Tundra ES platform,” said Jussi Kukkonen, vice president, Advanced Solutions, Penguin Computing, Inc. “In early 2018, Penguin will bring to market the first systems featuring a true PCI-Express generation 4 I/O subsystem, unlocking the full 200Gbps performance potential of the Mellanox Quantum and InfiniBand HDR.”

Mellanox also announced that Atos, a European services provider focused on digital transformation, big data, cybersecurity and HPC, will incorporate the HDR (200G) and HDR100 (100G) InfiniBand solutions in its BullSequana X1000 open server supercomputer platform.

Mellanox said the new Quantum switches will start shipping in the first half of 2018 and they will be demonstrated at SC17.

Mellanox also announced the first major production deployment of a 100Gb/s Ethernet Spectrum switch based on the Linux Switchdev driver to support the content distribution network service of NGENIX, a subsidiary of Rostelecom, a leading Russian telecom provider. Mellanox said this is the first major deployment of an open Ethernet switch based on the Switchdev (a common API for swapping Ethernet switches in and out of networks) driver that has been accepted and is available as open source as part of the Linux kernel.

The Switchdev driver runs as part of the standard kernel, and thus enables downstream Linux OS distributions and off-the-shelf Linux-based applications to operate the switch, the company said. The driver abstracts proprietary ASIC application programming interfaces (APIs) with standard Linux APIs for the switch data plane configuration. The key advantage of Switchdev for network administrators and software developers is an open source driver that doesn’t rely on any vendor-specific binary packages, with a well-known, well-documented and open data plane abstraction that is native to Linux.

Mellanox said the combination of Spectrum switch systems running an open, standard Linux distribution provides NGENIX with unified Linux interfaces across datacenter entities, servers and switches, with no compromise on performance.

“We were looking for a truly open solution to power our next generation 100Gb Ethernet network,” said Dmitry Krikov, CTO at NGENIX. “The choice was clear. Not only was the Mellanox Spectrum-based switch the only truly open, Linux kernel-based solution, but also allows us to use a single infrastructure to manage, authorize and monitor our entire network. In addition, it’s proving to be very cost-effective in terms of price-performance.”

Mellanox said there is strong market demand among web companies and network operators for a common API to swap Ethernet switches in and out of networks as easily as a new server. “As a pioneer in network disaggregation, Mellanox Technologies has been a major contributor to enabling the infrastructure of the open source Switchdev model,” the company said in a prepared statement.

Mellanox said the Linux kernel Switchdev driver is available for all Mellanox Spectrum SN2000 switch systems, as well as Mellanox Spectrum switch ASIC. The SN2000 portfolio is available in a variety of port and speed configurations (10/25/40/50/100GbE), including the SN2100, a high-density, half-width 16-port non-blocking 100GbE switch. The driver will also be available for Spectrum-2, the next generation 6.4Tb/s switch ASIC and the SN3000 switch systems using it. Both are expected to be available in 2018, the company said.

The post Ahead of SC17, Mellanox Launches Scalable 200G Switch Platforms appeared first on HPCwire.

HPC Cloud Startup XTREME Design Gets Series A Funding, Expands to US

HPC Wire - Thu, 11/09/2017 - 13:49

TOKYO, Nov. 9, 2017 — Japanese HPC Cloud Startup XTREME Design Inc. (XTREME-D) today announced the completion of a $2.75M financing agreement in Series-A round funding led by World Innovation Lab (WiL). The total funding amount is now $4M, including the previous Pre-A round, and WiL is now the lead investor. XTREME-D will be exhibiting at Supercomputing in Denver, Colorado, November 13–16, where the company will be launching and pre-announcing several new products.

XTREME-D is well-known in Japan for architecting technical computing in the cloud. The company develops and sells XTREME DNA, a cloud-based, virtual, supercomputing-on-demand service that provides unattended services ranging from the construction of a high-speed analysis system on the cloud to optimized operation monitoring, and tears the systems down after use. XTREME DNA was launched to cut costs and make it easy to build HPC clusters in just 10 minutes. Cost savings extend beyond equipment, as engineers with the specialized skills required to construct complex clusters are no longer needed.

XTREME-D’s new funding will be utilized for R&D, market launch in the US, expanding the feature set of the company’s current products, and developing new solutions for launch in H1 next year. Details about upcoming products can be shared in private meetings at XTREME-D’s booth at Supercomputing, where demos of XTREME DNA and the launch of their new Computer Aided Engineering (CAE) template for ChainerMN (the distributed cloud version of the advanced parallel deep learning framework Chainer) can also be viewed.

“The market is expanding rapidly as demand for high-speed analytical systems for the Internet of Things (IoT), Artificial Intelligence (AI), Deep Learning, and Machine Learning grows,” said Masataka Matsumoto, General Partner of WiL. “These systems are needed not just within traditional HPC but also across broader fields, such as smart cities and bioinformatics. XTREME Design provides access to supercomputing resources for companies that didn’t previously have it, and has tremendous growth potential right now within High Performance Technical Computing and beyond.”

XTREME-D is capitalizing on this exciting market environment with overseas business expansion to both North America and EMEA. Vitec Electronics Americas of San Diego, California, is XTREME-D’s first US reseller, utilizing XTREME-D to provide a range of products, including access to the world’s fastest-class GPU instances from SkyScale® (provided by One Stop Systems) running on Microsoft Azure. The partnership with WiL, which has offices in both Japan and the United States, will help XTREME-D make a full-scale entry into the North American market.

German-based ViMOS Technologies is XTREME-D’s first European reseller, and the company is interested in signing up additional ISVs and systems integrators across the US and EMEA. “We have a very strong offering that allows resellers to provide turnkey cloud-based virtual supercomputers to their customers, including software, middleware, and system configuration,” said Naoki Shibata, Founder and CEO of XTREME Design. “Being able to access a virtual supercomputer for minimal budget and no need of a specialized skill set to configure the system is a compelling sales pitch.”

Visit XTREME Design at booth 1485 at SC17 in Denver, Colorado from November 13–16 for an eyes-only sneak peek at next-generation products for the democratization of HPC.

About XTREME Design

XTREME Design Inc. was established in 2015 and is headquartered in Shinagawa-ku, Tokyo. The company has one goal — the democratization of supercomputing. Its cloud-based, virtual, supercomputing-on-demand service XTREME DNA makes HPC resources available to everyone, delivering an easy-to-use customer experience through a robust UI/UX and cloud management features. XTREME DNA delivers high-end compute capabilities supporting private, public, and hybrid cloud, featuring the latest CPUs, GPUs, and interconnect options. Applications include CAE, machine learning, deep learning, high performance data analysis, and IoT. For more information visit http://xd-lab.net/en.

About World Innovation Labs

World Innovation Labs LLC (WiL) connects entrepreneurs with corporate resources to build global businesses. WiL Fund II, LP is a pooled venture investment development fund managed by WiL and headquartered in Palo Alto, California. The fund specializes in seed investments in technology, media, telecom, and technical services in the United States and Japan, and is engaged in fostering new business through collaboration with large companies. Through this collaboration WiL seeks to develop activities that accelerate open innovation and disseminate entrepreneurial spirit. For more information visit www.wilab.com.

Source: XTREME Design

The post HPC Cloud Startup XTREME Design Gets Series A Funding, Expands to US appeared first on HPCwire.

FileCatalyst to Attend SuperComputing 2017 in Denver Colorado

HPC Wire - Thu, 11/09/2017 - 12:13

OTTAWA, Ontario, Nov. 9, 2017 — FileCatalyst, an Emmy award-winning pioneer in managed file transfers and a world-leading accelerated file transfer solution, will exhibit at SuperComputing 2017 (SC17) in booth 2255 from November 13-16 at the Colorado Convention Center, Denver, Colorado. FileCatalyst will be showcasing all of the latest advancements made to their suite of accelerated file transfer solutions including:

  • Newly updated Graphical User Interfaces (GUIs) that work with any modern web browser.
  • New consumption-based billing, which will be available in per-hour and per-transferred-GB models.
  • The FileCatalyst TransferAgent client can now run on Linux, Windows, and OSX as a service, with the addition of a two-way file transfer pane.
  • FileCatalyst Direct Server now has extended and improved web-based administration for HotFolder client and Server.
  • FileCatalyst Central now allows users to configure personalized map views of their deployment (either through geographical or functional maps), real-time and historical transfer data for all nodes, node-to-node-transfers, and TransferAgent client support.
  • FileCatalyst Workflow has integrated TransferAgent for file areas, a video file preview feature, and embeddable upload forms.

“FileCatalyst is entering an exciting period,” says Chris Bailey, Co-Founder, and CEO of FileCatalyst. “Our portfolio has not only seen an update to the GUI’s, but we are happy to include some new features that our customers have requested. This year has also seen growth within our ISV ecosystem, as well as our channel partner portfolio. We are thrilled that people are seeking out FileCatalyst, and we are excited to showcase all of our offerings at SC17.”

FileCatalyst has also developed some ISV partnerships that include:

  • Acembly and FileCatalyst have partnered to create Acembly File Accelerator (AFA), Powered by FileCatalyst, which accelerates file transfers to and from the cloud. The solution also includes a new consumption (per GB) based model.
  • FileCatalyst has integrated with Caringo Swarm to accelerate the transmission of digital assets, allowing Caringo to deliver an even better end-user experience with reduced complexity and costs. Caringo and FileCatalyst will be doing a draw for a $300 Amazon gift card during the conference. Visit the Caringo booth (1001) and FileCatalyst (2255) for a chance to win.
  • NICE Software, a pioneer in technical and engineering cloud solutions, will be providing demos of their HPC Portal, EnginFrame, as well as FileCatalyst Direct running on AWS in booth 2117.

For those attending SC17 that want to learn more, FileCatalyst will be in Booth 2255 from November 13-16. They will be showcasing their entire suite of products, as well as giving live demos on the tradeshow floor.

About FileCatalyst

Located in Ottawa, Canada, FileCatalyst is a pioneer in managed file transfers and an Emmy award-winning leader in accelerated file transfer solutions. The company, founded in 2000, has more than one thousand customers in media & entertainment, energy & mining, gaming, and printing, including many Fortune 500 companies as well as military and government organizations. FileCatalyst is a software platform designed to accelerate and manage file transfers securely and reliably. FileCatalyst is immune to the effects that latency and packet loss have on traditional file transfer methods like FTP, HTTP, or CIFS. Global organizations use FileCatalyst to solve issues related to file transfer, including content distribution, file sharing, and offsite backups.

Source: FileCatalyst

The post FileCatalyst to Attend SuperComputing 2017 in Denver Colorado appeared first on HPCwire.

The Hair-Raising Potential of Exascale Animation

HPC Wire - Thu, 11/09/2017 - 12:05

Nov. 9, 2017 — There is no questioning the power of a full head of shiny, buoyant hair. Not in real life, not in commercials, and, it turns out, not in computer-generated (CG) animation. Just as more expensive brands of shampoos provide volume, luster, and flow to a human head of hair, so too does more expensive computational power provide the waggle of a prince’s mane or raise the hackles of an evil yak.

Hair proves to be one of the most complex assets in animation, as each strand comprises near-infinite individual particles, affecting the way every other strand behaves. With the 2016 release of their feature Trolls, DreamWorks Animation had an entire ensemble of characters with hair as a primary feature. The studio will raise the bar again with the film’s sequel, slated for 2020.

The history of DreamWorks Animation is, in many ways, the history of technical advances in computing over the last three decades. Those milestones are evidenced by that flow of hair—or lack thereof—the ripple in a dragon’s leathery wing, or the texture and number of environments in any given film.

Exascale computing will push the Media and Entertainment industry beyond today’s technical barriers.

As the development and accessibility of high-performance computers explode beyond current limits, so too will the creative possibilities for the future of CG animation.

Jeff Wike, Chief Technology Officer (CTO) of DreamWorks Animation, has seen many of the company’s innovations come and go, and fully appreciates both the obstacles and the potential of technological advances on his industry.

“Even today, technology limits what our artists can create,” says Wike. “They always want to up the game, and with the massive amount of technology that we throw at these films, the stakes are enormous.”

Along with his duties as CTO, Wike is a member of the U.S. Department of Energy’s Exascale Computing Project (ECP) Industry Council. The advisory council is comprised of an eclectic group of industry leaders reliant on and looking to the future of high-performance computing, now hurtling toward the exascale frontier.

The ability to perform a billion billion operations per second changes the manufacturing and services landscape for many types of industries and, as Wike will tell you, strip away the creative process and those in the animation industry are manufacturers of digital products.

“This is bigger than any one company or any one industry,” he says. “As a member of the ECP’s Industry Council, we share a common interest and goal with companies representing a diverse group of U.S. industries anxiously anticipating the era of exascale computing.”

Such capability could open a speed-of-light gap between DreamWorks’ current 3D animation and the studio’s origins, 23 years ago, as a 2D animation company producing computer-aided hand-drawn images.

Growing CG animation

Wike’s role has certainly evolved since he joined DreamWorks in 1997, with the distinctive job title of technical gunslinger, a position in which he served, he says, as part inventor, part MacGyver, and part tech support.

When Chris deFaria joined DreamWorks Animation as president in March 2017, he instantly identified an untapped opportunity that only could be pursued at a studio where storytellers and technology innovators work in close proximity. He created a collaboration between these two areas in which the artists’ infinite imaginations drive cutting edge technology innovations which, in turn, drive the engineers to imagine even bigger. In essence, a perpetual motion machine of innovation and efficiency.

Under this new reign, Wike distills his broader role into three simple goals: make sure employees have what they need, reduce the cost and production time of films, and continue to innovate in those areas that are transformational.

High-Performance Computing Is Key to Innovation

For DreamWorks—and other large industry players like Disney and Pixar—the transformation of the animated landscape is, and has been, driven by innovations in computer software and hardware.

Much of the CG animation industry was built on the backs of what were, in the late 1990s, fairly high-performance graphics-enabled processors. But computer technology advanced so quickly, DreamWorks was challenged to keep up with the latest and greatest.

“Some of the animators had home computers that were faster than what we had at work,” Wike recalls.

By the time Shrek appeared in 2001, after the early successes of DreamWorks’ first fully CG animated feature, Antz, and Pixar’s Toy Story, it was clear to the fledgling industry, and the movie industry as a whole, that CG animation was the next big wave. Audiences, too, already were expecting higher quality, more complexity and greater diversification with each succeeding film.

To meet mounting expectations, the industry needed a computational overhaul to afford them more power and greater consistency. As the early graphics processors faced more competition, the industry banded together to agree on common requirements, such as commodity hardware, open source libraries, and codes. This developed into an approved list that makes it easier for vendors to support.

Today, DreamWorks’ artists are using high-end dual processor, 32-core workstations with network-attached storage and HPE Gen9 servers utilizing 22,000 cores in the company’s data center. That number is expected to nearly double soon, as the company has now ramped up for production of How to Train Your Dragon 3.

It’s still a long way from exascale. It’s still a long way from petascale, for that matter, given that current petascale computers can comprise upwards of 750,000 cores. But the industry continues to push the envelope of what’s possible and what is available. Continuous upgrades in hardware, along with retooling and development of software, create ever-more astounding visuals and further prepare the industry for the next massive leap in computing power.

“I’d be naïve to say that we’re ready for exascale, but we’re certainly mindful of it,” says Wike. “That’s one reason we are so interested in what the ECP is doing.  The interaction with the technology stakeholders from a wide variety of industries is invaluable as we try to understand the full implications and benefits of exascale as an innovation driver for our own industry.”

To read more, follow this link: https://www.exascaleproject.org/hair-raising-potential-exascale-animation/

Source: Exascale Computing Project

The post The Hair-Raising Potential of Exascale Animation appeared first on HPCwire.

Serving the community through community solar

Colorado School of Mines - Thu, 11/09/2017 - 10:50

Colorado School of Mines students are helping make solar power more accessible to low-income Coloradans.

The Mines Energy Club recently volunteered with GRID Alternatives to help build two community solar arrays, one in Fort Collins and the other near Denver International Airport. The nation’s largest nonprofit solar installer, GRID works across the U.S. to increase access to renewable energy technology and job training among underserved communities.

“GRID Alternatives is a really cool organization,” said Evan Wong, a senior majoring in mechanical engineering and vice president of Mines Energy. “It’s facilitating learning and spreading the word about solar while also helping low-income communities.” 

Colorado is among the leaders nationwide in the installation of community solar. Also called solar gardens, the arrays allow multiple customers to buy into the power produced and receive a credit on their electric bills.

The new array in Fort Collins, the 2-megawatt Coyote Ridge Solar Farm, is the largest ever built by GRID – by a factor of 10. Volunteers installed the entire system in a matter of weeks between August and September, and it’s already generating power for the Poudre Valley Rural Electric Association.  

Mines volunteers drove up to Fort Collins to lend a hand on two of the Coyote Ridge build days. Tim Ohno, associate professor of physics and co-director of the Energy Minor Program, was among a group tasked with installing the arms that hold the solar panels and then attaching the solar panels themselves.
 
“It really took two people to lift the panels,” Ohno said. “The ones used for utilities are larger than the ones installed on rooftops in most cases.”

Closer to home, Mines students spent a day in October working on another 2-megawatt array, near Denver International Airport, for the Denver Housing Authority.

DHA will be the first housing authority in the country to develop, own and operate its own solar garden. Throughout construction, GRID will also provide training, certification and employment in the solar industry for affordable housing residents.

Wong, who is minoring in renewable energy, said volunteering with GRID Alternatives was a great opportunity to get hands-on experience with photovoltaics, to supplement the academic instruction he’s received on campus. 

“Although there's a bunch of advanced physics in the crystalline structure, installing solar panels isn’t really that hard,” Wong said. “The entire process probably took around five minutes at most for each solar panel.”

Ohno hopes the GRID Alternatives experience will also help motivate students and faculty to push for more solar on the Mines campus. 

A solar garden could be an efficient and cost-effective option at Mines, too, he said.

“Right now we’re really trying to accommodate the growth in students, but if groups are interested, I can imagine this might be a push in the not-too-distant future,” Ohno said.

Golden voters recently approved a ballot initiative to allow the city to move forward with a project to build a community solar garden at the Rooney Road Sports Complex. Almost half of all U.S. households and businesses are unable to host rooftop solar systems because they rent their spaces or lack suitable roof space, according to a 2015 National Renewable Energy Laboratory report.

“It’s a direction that’s probably going to become more and more common,” Ohno said. “When you install panels on someone’s roof, whatever direction the home’s roof faces, that’s where it’s installed and that’s not always optimal. If you build a solar garden, the cost of solar even without subsidies is very comparable to traditional coal and natural gas power plants.”

Photo credit: Courtesy of GRID Alternatives

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Gidel Launches New High Performance Line of Acceleration Boards Based on Intel’s Stratix 10 FPGA

HPC Wire - Thu, 11/09/2017 - 09:49

SANTA CLARA, California, and Or-Akiva, Israel, Nov. 9, 2017 – Gidel, a technology leader in high-performance accelerators utilizing FPGAs, has launched its latest product line, the Proc10S. The Proc10S is part of the Proc family of high performance, scalable compute acceleration boards, but is based on the Stratix 10 FPGA, which was released by Intel in late 2016. The Stratix 10 offers twice the performance of the Arria 10, with 30% lower power consumption per TFLOP.

The Proc10S pushes data processing power to new heights with peak single precision performance of up to 10 TFLOPS per device, based on 25 MB of L1 cache at up to 94 TB/s peak bandwidth. The board features an Intel Stratix 10SG 2800/2100/1100 FPGA with 16-lane PCI-Express Gen 3.0 and an 18+ GB multi-level memory structure consisting of three banks of DDR4 memory on board and on DIMMs (up to 260 GB of DDR4).

With up to 2.8 million logic elements, the Proc10S gives designers incredible performance potential. It also features flexible high-speed communication ports — dual SFP+ and dual QSFP+ support at 26 Gb/s per channel — and a PHS connector for a high speed daughter board that features eight channels of full duplex Tx/Rx and up to 139 Gb/s total.

Gidel’s newest acceleration board was designed with high density Big Data and HPC applications in mind. “The Proc10S is a heavy-duty FPGA and thus opens new markets in HPC for Gidel, such as Deep Learning and Big Data analytics,” says Ofer Pravda, VP Marketing and Sales at Gidel. “Gidel’s long history in algorithm acceleration utilizing FPGA technology has resulted in an enormous wealth of product knowledge that provides us with an advantage in certain HPC and Vision arenas.”

Artificial Intelligence and Deep Learning are ideal markets for the Proc10S because features need to be extracted from data in order to solve predictive problems, such as image classification and detection, image recognition and tagging, network intrusion detection, and fraud/face detection. Other applications include compute-intensive algorithm processing, network analytics, communications, cyber security, storage, big data analytics, and cloud computing.

The Proc10S is supported by the ProcDeveloper’s Kit™, Gidel’s proprietary tools that make developing on FPGA fast and easy, and allow for simultaneous acceleration of multiple applications or processes, unmatched HDL design productivity (VHDL or Verilog), and simple integration with software applications. Gidel’s tools make developing on FPGA accessible to software engineers by automatically generating an Application Support Package (ASP) and an API that maps the relevant user’s variables directly into the FPGA design. The tools offer a solution that is unique in the market, and together with Intel’s HLS and OpenCL allow unmatched development efficiency and effectiveness.

The Proc10S 2100/2800 will be available in Q1 2018; additional Proc10S accelerators will be released later next year.

Visit Gidel in booth 1242 at SC17 in Denver, Colorado (Nov 13-16) to explore the Proc10S board and view demos on acceleration applications.

About Gidel

For 25 years, Gidel has been a technology leader in high performance, innovative, FPGA-based accelerators. Gidel’s reconfigurable platforms and development tools have been used for optimal application tailoring and for reducing the time and cost of project development. Gidel’s dedicated support and its products’ performance, ease-of-use, and long life cycles have been well appreciated by satisfied customers in diverse markets who continue to use Gidel’s products, generation after generation. For more information visit www.gidel.com.

Source: Gidel

The post Gidel Launches New High Performance Line of Acceleration Boards Based on Intel’s Stratix 10 FPGA appeared first on HPCwire.

The HPC Storage Cocktail, Both Shaking and Stirring SC17

HPC Wire - Thu, 11/09/2017 - 09:35

I’ve no doubt that familiar themes will be circulating the halls of Supercomputing in Denver, echoes of last year’s show – how to survive in the post-Moore’s Law era, the race to exascale, how to access quantum computing. But this year I think there will be another overarching theme added to coffee queue chat: how to cope with the new norm, the HPC storage cocktail.

I’m referring to a practice that more and more people are considering: mixing different environments as well as on-prem and cloud platforms to make storage spend go as far as possible. As the new architecture from ARM gains traction and more and more people look to cloud platforms to boost their on-premise clusters, there’s no doubt that the question of how to make these systems work together effectively will be on people’s lips.

Mixing storage systems can throw up real problems, or uncover problems that until that point had been hidden. For example, moving to a new environment can expose I/O problems that weren’t visible before, including bad I/O patterns such as small reads and writes that can look like CPU activity until the I/O is profiled. An organisation won’t feel the benefit that investment in a new storage system should bring unless the bridge between the existing system and the new one is fully understood.
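As a toy illustration of that point (an illustrative example, not an Ellexus tool), the Python sketch below writes the same amount of data in tiny and in large chunks and times both; the small-write loop is dominated by per-call overhead that a CPU profiler would happily attribute to user code until the I/O itself is profiled.

    import os, tempfile, time

    def timed_write(total_bytes, chunk_size):
        """Write total_bytes to a scratch file in chunk_size pieces, unbuffered, and time it."""
        chunk = b"x" * chunk_size
        fd, path = tempfile.mkstemp()
        os.close(fd)
        start = time.perf_counter()
        with open(path, "wb", buffering=0) as f:  # unbuffered: every write() reaches the OS
            for _ in range(total_bytes // chunk_size):
                f.write(chunk)
        elapsed = time.perf_counter() - start
        os.remove(path)
        return elapsed

    total = 16 * 1024 * 1024  # 16 MB either way
    print(f"4-byte writes: {timed_write(total, 4):6.2f} s")       # millions of tiny syscalls
    print(f"1 MB writes:   {timed_write(total, 1 << 20):6.2f} s")  # a handful of large ones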

At SC, this issue will certainly be addressed, and there will be the usual rainbow of storage solutions and add-on technologies to help. Our team are looking forward to learning about the new solutions emerging to help organisations manage mixed systems. It’s still early days for this type of environment, but we’ve already spoken to a lot of people who are testing the water with hybrid cloud environments.

At this stage, most organisations we work with are selecting specific projects to migrate to the cloud and thinking about the new storage architectures they can exploit with that move. Object storage has a set-up cost, but with potentially good long-term savings, I expect a lot of vendors will be pushing it for on-prem deployments as well.

Containerization is another flavour to add to the mix. Most people are looking at Docker or Singularity as the two main options, sitting on top of platforms such as OpenStack or Kubernetes. Singularity is little known outside the HPC community, and from a high level it seems to better support some of the data demands of HPC applications, though it doesn’t have as developed an ecosystem around it as Docker. This year’s SC might be the year that more people take the leap to deploy it in production and see how it measures up.
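
For anyone weighing up the two runtimes, the short Python sketch below (a hypothetical wrapper written for illustration, not part of Docker, Singularity or any scheduler) shows how a single launcher can run the same containerized application through whichever runtime a node provides. The image names, bind path and solver command are placeholder assumptions.

    # Illustrative wrapper: image names, bind path and command are placeholders.
    import shutil
    import subprocess

    def run_containerized(docker_image, singularity_image, command, bind_dir="/scratch"):
        """Run `command` in a container, preferring Singularity (no root daemon) over Docker."""
        if shutil.which("singularity"):
            argv = ["singularity", "exec", "--bind", bind_dir,
                    singularity_image] + command
        elif shutil.which("docker"):
            argv = ["docker", "run", "--rm",
                    "-v", f"{bind_dir}:{bind_dir}", docker_image] + command
        else:
            raise RuntimeError("no container runtime found on this node")
        return subprocess.run(argv, check=True)

    if __name__ == "__main__":
        run_containerized("myorg/solver:latest", "solver.sif",
                          ["solver", "--input", "/scratch/case01"])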

Another trend, I believe, is that we will see far more people treading the halls of SC who might not have been there in previous years. Big data and the growth of AI mean that more and more industries are looking to what has traditionally been considered HPC storage to provide the big compute they need to run their applications.

These trends all feed into each other. The presence of these newcomers, with their different views on hardware and software, is no doubt speeding up the growth of cloud platforms in the traditional HPC storage market, which is no bad thing. We could all do with having our viewpoints shaken up.

In general, we are heading into an era with more variety and more competitive platforms, serving a greater and more diverse range of customers. This could well be the most exciting SC yet, as just a few of the opportunities that this cocktail presents start to become apparent.

About the Author

Dr. Rosemary Francis is CEO and founder of Ellexus, the I/O profiling company. Ellexus makes application profiling and monitoring tools that can be run on a live compute cluster to protect from rogue jobs and noisy neighbors, make cloud migration easy and allow a cluster to be scaled rapidly. The system- and storage-agnostic tools provide end-to-end visibility into exactly what applications and users are up to.

The post The HPC Storage Cocktail, Both Shaking and Stirring SC17 appeared first on HPCwire.

Sowers article featured in Colorado Aerospace STEM Magazine

Colorado School of Mines - Thu, 11/09/2017 - 09:26

An article on the development of a cislunar space economy by Colorado School of Mines Professor of Practice George Sowers was recently featured in Colorado Aerospace STEM Magazine. Sowers, the former chief scientist at United Launch Alliance, joined Mines this year as part of the proposed graduate program in space resources. 

Categories: Partner News

Mental health awareness campaign featured on 9News

Colorado School of Mines - Thu, 11/09/2017 - 08:48

Alpha Phi Omega is focusing its national service week on mental health and suicide prevention this year. The Colorado School of Mines chapter is hosting a number of events and exhibits on campus this week and the awareness campaign was recently featured on 9News Denver.

Categories: Partner News

NCSA Releases 2017 Blue Waters Project Annual Report

HPC Wire - Thu, 11/09/2017 - 08:47

URBANA, Ill., Nov. 9, 2017 — The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign today released the 2017 Blue Waters Project Annual Report. For the project’s fourth annual report, research teams were invited to present highlights from their research that leveraged Blue Waters, the National Science Foundation’s (NSF) most powerful system for sustained computation and data analysis. Spanning economics to engineering and geoscience to space science, Blue Waters has accelerated research and impact across an enormous range of science and engineering disciplines throughout the more than four-year history covered by the report series. This year is no different.

“To date, the NSF Blue Waters Project has provided over 20 billion core-hour equivalents to science, engineering and research projects, supported not just by the NSF, but also by NIH, NASA, DOE, NOAA, and other funders. Without Blue Waters, these funded investigations might not even be possible,” said Blue Waters Director and Principal Investigator, Dr. William “Bill” Kramer. “In this year’s report, we are using a ‘badge’ to show the projects that are Data-intensive (39 projects), GPU-accelerated (34), Large Scale greater than 1,000 nodes (65), Memory-intensive (18), Only on Blue Waters (27), Multi-physics/multi-scale (47), Machine learning (9), Communication-intensive (32) and Industry (5). This shows the breadth and depth of the uses world-class science is making of Blue Waters.”

“I continue to be amazed by the vast range of creative, limit-pushing research that scientists submit to this publication year after year. With the support of the National Science Foundation and the University of Illinois, the National Center for Supercomputing Applications’ Blue Waters Project continues to empower scientists to make discoveries that have immense impact in a diverse range of fields, spark new understanding of our world, and open new avenues for future research,” said Dr. William “Bill” Gropp, Director of NCSA.

The annual report also highlights the Blue Waters Project’s strong focus on education and outreach. Blue Waters provides the equivalent of 60 million core-hours of the system’s computational capacity each year for educational projects, including seminars, courseware development, courses, workshops, institutes, internships, and fellowships. To date, there have been more than 200 approved education, outreach, and training projects from organizations across the country. These allocations have directly benefitted over 3,700 individuals in learning about different aspects of computational and data-enabled science and engineering at more than 160 institutions, including 41 institutions in EPSCoR jurisdictions and at 14 Minority Serving Institutions.

The Blue Waters Annual Report highlights how the project is helping other domain specialists reach sustained petascale performance, specifically through its recently expanded Petascale Application Improvement Discovery (PAID) program, through which the Project has provided millions of dollars to science teams and computational and data experts to improve application performance.

Gropp continued, “Even more remarkable breakthroughs will be forthcoming as NCSA continues to partner with scientists around the nation to change the world as we know it.”

This year’s annual report features 130 research abstracts from various allocation types, categorized by space science; computer science; geoscience; physics and engineering; biology, chemistry, and health; and social science, economics, and humanities. Click here to download the full report.

Read the original release in full here: http://www.ncsa.illinois.edu/news/story/ncsa_releases_2017_blue_waters_project_annual_report_detailing_innovative_r

Source: NCSA

The post NCSA Releases 2017 Blue Waters Project Annual Report appeared first on HPCwire.

Atos Launches Next Generation Servers for Enterprise AI

HPC Wire - Thu, 11/09/2017 - 08:25

PARIS, Nov. 9, 2017 — Atos, a global leader in digital transformation, launches BullSequana S, its new range of ultra-scalable servers enabling businesses to take full advantage of AI. With their unique architecture, developed in-house by Atos, BullSequana S enterprise servers are optimized for Machine Learning, business‐critical computing applications and in-memory environments.

In order to utilize the extensive capabilities of AI, businesses require an infrastructure with extreme performance. BullSequana S tackles this challenge with its unique combination of powerful central processing units (CPUs) and graphics processing units (GPUs). The BullSequana S server’s flexibility comes from a proven, unique modular architecture, which gives customers the agility to add Machine Learning and AI capacity to existing enterprise workloads through the introduction of GPUs. Within a single server, GPU, storage and compute modules can be mixed to build a tailor-made machine, with ready availability for all workloads worldwide.

An ultra-scalable server to answer both challenges: from classical use cases to AI

BullSequana S combines the most advanced Intel Xeon Scalable processors – codenamed Skylake – and an innovative architecture designed by Atos’ R&D teams. It helps reduce infrastructure costs while improving application performance thanks to ultra-scalability – from 2 to 32 CPUs – along with innovative high-capacity storage and booster capabilities such as GPUs and, potentially, other technologies such as FPGAs in future developments.

“Atos is a prominent global SAP partner delivering highly performant and scalable solutions for deployments of SAP HANA. We have been working together to accelerate SAP HANA deployments by providing a full range of SAP HANA applications certified up to 16TB. The new BullSequana S server range developed by Atos is one of the most scalable platforms in the market, optimized for critical deployments of SAP HANA. It is expected to open new additional collaboration areas between SAP and Atos around artificial intelligence and machine learning,” said Dr. Jörg Gehring, senior vice president and global head of SAP HANA Technology Innovation Networks.

BullSequana S – to reach extreme performance whilst optimizing investment:

  • Up to 32 processors, 896 cores, and 32 GPUs in a single server, delivering outstanding performance and supporting long-term investment protection, as capacity evolves smoothly according to business needs.
  • With up to 48TB RAM and 64TB NV-RAM in a single server, real-time analytics of enterprise production databases will run much faster than on a conventional computer by using in-memory technology whilst ensuring both security and high quality of service.
  • With up to 2PB internal data storage, BullSequana S efficiently supports data lake and virtualization environments.

Availability

The first BullSequana S machines, manufactured at Atos’ factory in France, are available worldwide from today.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around €12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

The post Atos Launches Next Generation Servers for Enterprise AI appeared first on HPCwire.

DDN’s Massively Scalable Storage Empowers Pawsey Supercomputing Centre to Speed Scientific Discoveries

HPC Wire - Thu, 11/09/2017 - 08:19

SANTA CLARA, Calif., Nov. 9, 2017 — DataDirect Networks (DDN) today announced that Pawsey Supercomputing Centre in Western Australia has deployed a pair of DDN GRIDScaler parallel file system appliances with 5PB of storage, as well as an additional 2PB of DDN capacity, to support diverse research, simulations and visualizations in radio astronomy, renewable energy and geosciences, among several other scientific disciplines. At Pawsey, DDN’s GRIDScaler delivers the performance and stability needed to address 50 large data collections and contribute towards scientific outcomes for some of the thousand or so scientists who benefit from Pawsey services.

The prevalence of Pawsey’s data-intensive projects creates a constant need to store massive media files, videos, images, text and metadata that must be stitched together, accessed, searched, shared and archived. “Scientists come to us with lots of speculative technologies with different protocols and access methods, so flexibility is key,” said Neil Stringfellow, executive director of the Pawsey Supercomputing Centre. “We need to adapt to changing research requirements, whether we need to support Big Data analytics, HPC processing and global data sharing as well as connect with some new service or technology.”

Pawsey plays a pivotal role in the trailblazing Square Kilometer Array (SKA) project, which focuses on building a next-generation radio telescope that will be more sensitive and powerful than today’s most advanced telescopes, to survey the universe with incredible depth and speed. The center also uses DDN storage to support the game-changing Desert Fireball Network (DFN) project, which uses cameras to track fireballs as they shoot across the Australian desert night sky, aiding in the discovery and retrieval of newly fallen meteorites.

To keep pace with data volumes that are expected to grow by 15PB annually for the SKA precursor projects alone, Pawsey needed a massively scalable yet highly reliable storage platform. DDN integrated storage with various features, including IBM’s Spectrum Scale parallel file system, tiering choices, replication, data protection and data management, to support the front-end data access services that required high-speed storage connections over Ethernet. Additionally, DDN’s leadership role in establishing an IBM Spectrum Scale user group in Western Australia has proven instrumental in helping the broader scientific and technical community benefit from file system advancements for academic, research and industrial applications.

With DDN underpinning its research projects, Pawsey continues to push research boundaries. The center recently made headlines for supporting a research team in surveying the entire Southern sky as part of the Galactic and Extragalactic All-Sky Murchison Widefield Array (GLEAM) survey. GLEAM researchers have stored and processed more than 600TB of radio astronomy data to produce the world’s first radio-color panoramic view of the universe and have cataloged more than 300,000 radio galaxies from the extensive sky survey.

For the Pawsey Supercomputing Centre, the future holds the promise of additional scientific breakthroughs and constant spikes in HPC and storage needs, which DDN is able to meet. In terms of computing power and storage requirements, the SKA project alone is predicted to surpass the world’s most demanding research to date, including the Large Hadron Collider.

About DDN

DataDirect Networks (DDN) is a leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.

Source: DDN

The post DDN’s Massively Scalable Storage Empowers Pawsey Supercomputing Centre to Speed Scientific Discoveries appeared first on HPCwire.

CoolIT Systems Launches Rack DCLC AHx10 Heat Exchange Module

HPC Wire - Thu, 11/09/2017 - 08:12

CALGARY, Alberta, Nov. 9, 2017 — CoolIT Systems (CoolIT), a world leader in energy efficient liquid cooling solutions for HPC, Cloud, and Enterprise markets, has expanded its Rack DCLC product line with the release of the AHx10 Heat Exchange Module. This latest member of the CoolIT lineup is a rack mounted Liquid-to-Air heat exchanger that enables dramatic increases in server density without the need for any facility water.

The AHx10 strengthens CoolIT Systems’ extensive range of Rack DCLC heat exchange solutions featuring centralized pumping architecture. Customers can now easily deploy high density liquid cooled servers inside their existing data centers without the requirement for facility liquid being brought to the rack. The AHx10 mounts directly in a standard rack, supporting a cold aisle to hot aisle configuration. The standard 5U system manages 7kW at 25°C ambient air temperature and can be expanded to 6U or 7U configurations (via the available expansion kit), scaling capacity up to 10kW of heat load. CoolIT will officially launch the AHx10 at the Supercomputing Conference 2017 (SC17) in Denver, Colorado (booth 1601).

“Data centers struggling to manage the heat from high density racks now have a solution in the AHx10 that requires no facility modification,” said CoolIT Systems VP of Product Marketing, Pat McGinn. “The plug and play characteristics of the AHx10 make it a highly flexible, simple-to-install solution that delivers the great benefits of liquid cooling immediately.”

The AHx10 supports front-to-back air flow management and is compatible with CoolIT Manifold Modules and Server Modules. It boasts redundant pumping as well as an integrated monitoring and control system with remote access.

The AHx10 is the perfect solution to manage cooling for HPC racks within an existing data center. SC17 attendees can learn more about the solution by visiting CoolIT at booth 1601. To set up an appointment, contact Lauren Macready at lauren.macready@coolitsystems.com.

About CoolIT Systems

CoolIT Systems is a world leader in energy efficient liquid cooling technology for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution. For more information about CoolIT Systems and its technology, visit www.coolitsystems.com.

About Supercomputing Conference (SC17)

Established in 1988, the annual SC conference continues to grow steadily in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall. SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. This diversity is one of the conference’s main strengths, making it a yearly “must attend” forum for stakeholders throughout the technical computing community. For more information, visit http://sc17.supercomputing.org/.

Source: CoolIT Systems

The post CoolIT Systems Launches Rack DCLC AHx10 Heat Exchange Module appeared first on HPCwire.
