Feed aggregator

TACC Researchers Test AI Traffic Monitoring Tool in Austin

HPC Wire - Wed, 12/13/2017 - 10:47

Traffic jams and mishaps are often painful and sometimes dangerous facts of life. At this week’s IEEE International Conference on Big Data being held in Boston, researchers from TACC and colleagues will present a new deep learning tool that uses raw traffic camera footage from City of Austin cameras to recognize objects – people, cars, buses, trucks, bicycles, motorcycles and traffic lights – and characterize how those objects move and interact.

The researchers from Texas Advanced Computing Center (TACC), the University of Texas Center for Transportation Research and the City of Austin have been collaborating to develop tools that allow sophisticated, searchable traffic analyses using deep learning and data mining. An account of the work (Artificial Intelligence and Supercomputers to Help Alleviate Urban Traffic Problems), written by Aaron Dubrow, was posted this week on the TACC website.

The tool is being tested in parts of Austin using footage from cameras mounted on signal lights. In a preliminary trial, it automatically counted vehicles in a 10-minute video clip, and results showed it was 95 percent accurate overall.

“We are hoping to develop a flexible and efficient system to aid traffic researchers and decision-makers for dynamic, real-life analysis needs,” said Weijia Xu, a research scientist who leads the Data Mining & Statistics Group at TACC. “We don’t want to build a turn-key solution for a single, specific problem. We want to explore means that may be helpful for a number of analytical needs, even those that may pop up in the future.” The algorithm they developed for traffic analysis automatically labels all potential objects from the raw data, tracks objects by comparing them with other previously recognized objects and compares the outputs from each frame to uncover relationships among the objects.
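The tracking step described above, comparing each new detection against previously recognized objects, can be sketched with a greedy intersection-over-union (IoU) matcher, a common baseline for this task. This is an illustrative sketch, not the researchers' actual code:

```python
# Hypothetical sketch: associate detections across video frames by
# greedy IoU matching. Boxes are (x1, y1, x2, y2) pixel rectangles.

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_tracks(prev_boxes, new_boxes, threshold=0.3):
    """Pair each new detection with the best-overlapping previous box."""
    matches = {}          # index in new_boxes -> index in prev_boxes
    used = set()
    for j, nb in enumerate(new_boxes):
        best, best_iou = None, threshold
        for i, pb in enumerate(prev_boxes):
            if i in used:
                continue
            score = iou(pb, nb)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches        # unmatched new boxes are treated as new objects
```

A detection that overlaps no previous box above the threshold starts a new track, which is how newly appearing vehicles or pedestrians would be picked up.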

The team used the open-source YOLO library and neural network developed by University of Washington and Facebook researchers for real-time object detection. According to the team, this is the first time YOLO has been applied to traffic data. For the data analysis and query component, they incorporated HiveQL, a query language maintained by the Apache Software Foundation that lets individuals search and compare data in the system.

Once researchers had developed a system capable of labeling, tracking and analyzing traffic, they applied it to two practical examples: counting how many moving vehicles traveled down a road and identifying close encounters between vehicles and pedestrians.
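The second example, flagging close encounters between vehicles and pedestrians, reduces to a distance check between labeled boxes in the same frame. The class labels and pixel threshold below are assumptions for illustration, not the researchers' values:

```python
# Illustrative only: flag a "close encounter" when a pedestrian box and
# a vehicle box come within a pixel-distance threshold in one frame.

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def close_encounters(detections, max_dist=40.0):
    """detections: list of (label, box) tuples for a single frame."""
    people = [b for lbl, b in detections if lbl == "person"]
    vehicles = [b for lbl, b in detections if lbl in ("car", "bus", "truck")]
    hits = []
    for p in people:
        px, py = center(p)
        for v in vehicles:
            vx, vy = center(v)
            if ((px - vx) ** 2 + (py - vy) ** 2) ** 0.5 <= max_dist:
                hits.append((p, v))
    return hits
```

A real system would apply this per frame over tracked objects, so an encounter could also be qualified by relative speed and direction.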

“Current practice often relies on the use of expensive sensors for continuous data collection or on traffic studies that sample traffic volumes for a few days during selected time periods,” said Natalia Ruiz Juri, a research associate and director of the Network Modeling Center at UT’s Center for Transportation Research. “The use of artificial intelligence to automatically generate traffic volumes from existing cameras would provide a much broader spatial and temporal coverage of the transportation network, facilitating the generation of valuable datasets to support innovative research and to understand the impact of traffic management and operation decisions.”

Whether autonomous vehicles will mitigate the problem is an ongoing debate. Ruiz Juri notes, “The highly anticipated introduction of self-driving and connected cars may lead to significant changes in the behavior of vehicles and pedestrians and on the performance of roadways. Video data will play a key role in understanding such changes, and artificial intelligence may be central to enabling comprehensive large-scale studies that truly capture the impact of the new technologies.”

Link to full article: https://www.tacc.utexas.edu/-/artificial-intelligence-and-supercomputers-to-help-alleviate-urban-traffic-problems

Link to video on the work: http://soda.tacc.utexas.edu

Images: TACC

The post TACC Researchers Test AI Traffic Monitoring Tool in Austin appeared first on HPCwire.

Supermicro Announces Receipt of Extension from Nasdaq

HPC Wire - Wed, 12/13/2017 - 09:14

SAN JOSE, Calif., Dec. 13, 2017 — Super Micro Computer, Inc. (NASDAQ:SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced that on December 11, 2017 it had received a letter from the Nasdaq Stock Market (“Nasdaq”) confirming that the Company has been granted an exception to enable the Company to regain compliance with the Nasdaq continued listing requirements. Pursuant to the terms of the exception, on or before March 13, 2018, the Company must file its Annual Report on Form 10-K for the fiscal year ended June 30, 2017 as well as its Quarterly Reports on Form 10-Q for the quarters ended September 30, 2017 and December 31, 2017.

Pursuant to Nasdaq rules, Super Micro’s securities will remain listed on the Nasdaq Global Select Market pending satisfaction of the terms of the exception. In the event the Company does not make the filings within the time period required, Nasdaq will provide written notification that the Company’s securities will be delisted. At that time, the Company may appeal Nasdaq’s determination to a Hearings Panel. Super Micro intends to take all necessary steps to achieve compliance with the Nasdaq continued listing requirements as soon as practicable.

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions® offer a vast array of components for building energy-efficient, application-optimized, computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO.

Source: Super Micro Computer, Inc.

The post Supermicro Announces Receipt of Extension from Nasdaq appeared first on HPCwire.

AMD Wins Another: Baidu to Deploy EPYC on Single Socket Servers

HPC Wire - Wed, 12/13/2017 - 06:30

When AMD introduced its EPYC chip line in June, the company said a portion of the line was specifically designed to re-invigorate a single socket segment in what has become an overwhelmingly two-socket landscape in the data center. Today, AMD and Baidu announced that China’s giant internet provider would offer AI, big data, and cloud computing services on EPYC-based single socket solutions.

This deal follows last week’s announcement that Microsoft Azure would offer EPYC-based instances (see HPCwire article, Azure Debuts AMD EPYC Instances for Storage Optimized Workloads). The EPYC line’s high memory bandwidth and I/O capacity make it well suited for many areas, but especially for storage servers. AMD is working to ensure EPYC doesn’t become stereotyped by this perception.

“You have probably seen in the industry a fair number of single socket platforms from us but they have tended to be more on the storage optimized or GPU optimized,” said Scott Aylor, AMD corporate vice president and general manager of Enterprise Solutions. For example, HPE introduced a storage optimized server, CL3150, using a single socket EPYC design. “Given the variety of services that Baidu deploys, including storage but also others, I want people to know this is really a compute oriented platform,” said Aylor.

It’s clear AMD is targeting price-performance points that it hopes Intel will find difficult to match and that will help AMD reclaim chunks of the x86 data center market after a lengthy absence. The single socket gambit is an important part of the strategy as was made clear by Aylor at the June launch.

“We can build a no compromise one-socket offering that will allow us to cover up to 50 percent of the two-socket market that is today held by the [Intel Broadwell] E5-2650 and below.

“In our one socket offering we have come up with a clever way to maintain all of the I/O capabilities that you would get in a two socket as well as the full complement of eight memory channels. Today people buy two socket, sometimes because they need to, but more often than not because they have to. There are many examples in which I/O rich [workloads] like storage, like GPU compute, and some vertical workloads where people don’t necessarily need two sockets from a CPU performance perspective,” said Aylor.

AMD contends the EPYC processor will deliver 2.6X the I/O density of competitive[i] solutions and enable Baidu to achieve a level of scale and efficiency unrivaled in high-performance x86. “The combination of performance from the EPYC processor cores, and compute and I/O density packaged in a single-socket configuration, provides the ideal platform for Baidu’s next generation cloud services,” according to AMD.

“By offering outstanding performance in single-processor systems, the AMD EPYC platform provides flexibility and high-performance in our datacenter, which allows Baidu to deliver more efficient services to our customers,” said Liu Chao, senior director, Baidu System Technologies Department in the official release.

Again, from the EPYC launch in June, Aylor said, “We’ve selectively optimized a couple of SKUs for one socket only. So these are SKUs that are one socket capable only.” As an example of how the one socket and two socket offerings are distinguished, he cited on-package interconnect: “The Infinity Fabric that would normally connect the two sockets in a two socket system, we repurpose that interconnect into more I/O lanes and that’s how you have in a two socket solution 128 lanes of PCIe and in a one socket solution you still keep the same level of connectivity.”

Today’s announcement punctuates what has been a heady year for AMD. Adoption of the single socket solution by Baidu is another demonstration of market traction and according to AMD, Baidu expects to expand its use of EPYC processors across its global datacenters beginning in the first quarter of 2018.

“This announcement with Baidu and the fact that it is AI, big data, and cloud; those are all computing oriented workloads. So think about the point we raised when we first launched [which] is we now can take what has been part of the mainstream of the market and everything that historically has been the [Intel] E5-2650 and below, and really, looking at the [Skylake] Silver and Gold today from [Intel], we can really address that now with a single socket platform,” said Aylor.

It will be interesting to watch how big a swath AMD’s single socket initiative can cut in the competitive data center market. Aylor said more and more varied single socket EPYC-based offerings are coming, but didn’t specify from whom or when.

[i] Information supplied by AMD: AMD EPYC™ processor supports up to 128 PCIe Gen 3 I/O lanes (in both 1- and 2-socket configurations), versus the Intel Xeon SP Series processor supporting a maximum of 48 PCIe Gen 3 lanes per CPU, plus 20 lanes in the I/O chip (max of 68 lanes on 1 socket and 96 lanes on 2 sockets). NAP-56
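The footnote's arithmetic can be checked directly. This quick sketch uses only the numbers AMD supplied; the 2.6X comparison is AMD's, presumably against per-CPU lane counts:

```python
# Lane counts from AMD's footnote (its framing, reproduced here as data).
epyc_lanes_1s = 128    # EPYC: 128 PCIe Gen 3 lanes, in 1- or 2-socket configs
xeon_cpu_lanes = 48    # Xeon SP: 48 PCIe Gen 3 lanes per CPU
xeon_io_chip = 20      # plus 20 lanes in the platform I/O chip

xeon_1s = xeon_cpu_lanes + xeon_io_chip   # 68 lanes on one socket
xeon_2s = 2 * xeon_cpu_lanes              # 96 CPU lanes on two sockets

print(epyc_lanes_1s / xeon_cpu_lanes)     # ~2.67, in line with the 2.6X claim
```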

The post AMD Wins Another: Baidu to Deploy EPYC on Single Socket Servers appeared first on HPCwire.

Microsoft Wants to Speed Quantum Development

HPC Wire - Tue, 12/12/2017 - 16:13

Quantum computing continues to make headlines in what remains of 2017 as tech giants jockey to establish a pole position in the race toward commercialization of quantum. This week, Microsoft took the next step in advancing its vision for the future of computing, one it says will spur major advances in artificial intelligence and address humanity’s biggest challenges, such as world hunger and climate change.

On Monday, Microsoft unveiled its custom Q# (Q-sharp) programming language as part of its effort to build an end-to-end topological quantum computing system suitable for commercial purposes. Along with a simulator for debugging and testing quantum code, Q# is included in Microsoft’s Quantum Development Kit, first announced by the company in September.

“Designed ground up for quantum, Q# is the most approachable high-level programming language with a native type system for qubits, operators, and other abstractions,” says Microsoft. “It is fully integrated with Visual Studio, enabling a complete professional enterprise-grade development tooling system for the fastest path to quantum programming efficiency.”

Using the local quantum simulator on a standard laptop, developers will be able to simulate up to 30 logical qubits, according to Microsoft. For developers who want to go beyond that, Microsoft is offering an Azure-based simulator that supports simulations above 40 logical qubits.
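The 30-qubit figure follows from the memory cost of full state-vector simulation, which stores 2^n complex amplitudes; a quick back-of-the-envelope check (the 16-bytes-per-amplitude assumption is ours, corresponding to two 64-bit floats):

```python
# Memory needed to hold the full state vector of an n-qubit simulation.
# Doubles with every added qubit, which is why laptops top out near 30.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")
# 30 qubits already need 16 GiB; 40 qubits need ~16 TiB, which is why the
# larger simulations are pushed to an Azure-hosted backend.
```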

The preview version of the development kit is available at no charge and comes with documentation, libraries and sample programs. Microsoft said that the kit will “give people the background they need to start playing around with aspects of computing that are unique to quantum systems, such as quantum teleportation.”

According to the company, programs created for the simulator will be transferable to a real topological machine, which Microsoft is in the process of developing. Microsoft’s approach to building a universal quantum computer is centered on the topological qubit, purported to be more stable than other qubit implementations. Most approaches to quantum computing require massive amounts of error correction such that a useful device could require 10 physical qubits to achieve one logical qubit, potentially pushing up the number of physical qubits into the tens of thousands. Researchers propose that the topological qubit naturally resists decoherence and therefore requires less error correction. Conceivably this would make it possible to build a quantum machine with fewer physical qubits.

In the video below, Krysta Svore, principal researcher at Microsoft, demonstrates the new Microsoft Quantum Development Kit.

Lots of good info to get started here — https://docs.microsoft.com/en-us/quantum/index?view=qsharp-preview

The post Microsoft Wants to Speed Quantum Development appeared first on HPCwire.

Physicists Win Supercomputing Time to Study Fusion and the Cosmos

HPC Wire - Tue, 12/12/2017 - 13:10

Dec. 12, 2017 — More than 210 million core hours on two of the most powerful supercomputers in the nation have been won by two teams led by researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). The highly competitive awards from the DOE Office of Science’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program will accelerate the development of nuclear fusion as a clean and abundant source of energy for generating electricity and will advance understanding of the high-energy-density (HED) plasmas found in stars and other astrophysical objects.

A single core hour represents the use of one computer core, or processor, for one hour. A laptop computer with only one processor would take some 24,000 years to run 210 million core hours.
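That estimate is easy to verify:

```python
# 210 million core hours run on a single core, divided by hours in a year.
core_hours = 210_000_000
hours_per_year = 24 * 365
years = core_hours / hours_per_year
print(round(years))   # roughly 24,000 years
```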

“Extremely important and beneficial”

“These awards are extremely important and beneficial,” said Michael Zarnstorff, deputy director for research at PPPL. “They give us access to leadership-class highest-performance computers for highly complex calculations. This is key for advancing our theoretical modeling and understanding.” Leadership-class computing systems are high-end computers that are among the most advanced in the world for solving scientific and engineering problems.

The allocations include more than 160 million core hours for physicist C.S. Chang and his team, marking the first year of a renewable three-year award. The first-year hours are distributed over two machines: 100 million core hours on Titan, the most powerful U.S. supercomputer, which can perform some 27 quadrillion (10^15) calculations per second at the Oak Ridge Leadership Computing Facility (OLCF); and 61.5 million core hours on Theta, which completes some 10 quadrillion calculations a second at the Argonne Leadership Computing Facility (ALCF). Both sites are DOE Office of Science User Facilities.

In addition, 50 million core hours on Titan go to Amitava Bhattacharjee, head of the Theory Department at PPPL, William Fox and their team to study HED plasmas produced by lasers.

Chang’s group consists of colleagues at PPPL and other institutions and will use the time to run the XGC code developed by PPPL and nationwide partners.  The team is exploring the dazzlingly complex edge of fusion plasmas with Chang as lead principal investigator of the partnership center for High-fidelity Boundary Plasma Simulation — a program supported by the DOE Office of Science’s Scientific Discovery through Advanced Computing (SciDAC). The edge is critical to the performance of plasma that fuels fusion reactions.

Fusion — the fusing of light elements

Fusion is the fusing of light elements that most stars use to generate massive amounts of energy – and that scientists are trying to replicate on Earth for a virtually inexhaustible supply of energy. Plasma – the fourth state of matter that makes up nearly all the visible universe – is the fuel they would use to create fusion reactions.

The XGC code will perform double-duty to investigate developments at the edge of hot, charged fusion plasma. The program will simulate the transition from low- to high-confinement of the edge of fusion plasmas contained inside magnetic fields in doughnut-shaped fusion devices called tokamaks. Also simulated will be the width of the heat load that will strike the divertor, the component of the tokamak that will expel waste heat and particles from future fusion reactors based on magnetic confinement such as ITER, the international tokamak under construction in France to demonstrate the practicality of fusion power.

The simulations will build on knowledge that Chang has achieved in the previous-cycle SciDAC project.  “We’re just getting started,” Chang said. “In the new SciDAC project we need to understand the different types of transition that are thought to occur in the plasma, and the physics behind the width of the heat load, which can damage the divertor in future facilities such as ITER if the load is too narrow and concentrated.”

Advancing progress in understanding HED plasmas

The Bhattacharjee-Fox award, the second and final part of a two-year project, will advance progress in the team’s understanding of the dynamics of magnetic fields in HED plasmas. “The simulations will be immensely beneficial in designing and understanding the results of experiments carried out at the University of Rochester and the National Ignition Facility at Lawrence Livermore National Laboratory,” Bhattacharjee said.

The project explores the magnetic reconnection and shocks that occur in HED plasmas, producing enormous energy in processes such as solar flares, cosmic rays and geomagnetic storms. Magnetic reconnection takes place when the magnetic field lines in plasma converge and break apart, converting magnetic energy into explosive particle energy. Shocks appear when the flows in the plasma exceed the speed of sound, and are a powerful process for accelerating charged particles.

To study the process, the team fires high-power lasers at tiny spots of foil, creating plasma bubbles with magnetic fields that collide to form shocks and come together to create reconnection. “Our group has recently made important progress on the properties of shocks and novel mechanisms of magnetic reconnection in laser-driven HED plasmas,” Bhattacharjee said. “This could not be done without INCITE support.”

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: PPPL

The post Physicists Win Supercomputing Time to Study Fusion and the Cosmos appeared first on HPCwire.

ACM Recognizes 2017 Fellows for Advancing Technology in the Digital Age

HPC Wire - Tue, 12/12/2017 - 10:20

NEW YORK, Dec. 12, 2017 — ACM, the Association for Computing Machinery, has named 54 members ACM Fellows for major contributions in areas including database theory, design automation, information retrieval, multimedia computing and network security. The accomplishments of the 2017 ACM Fellows lead to transformations in science and society. Their achievements play a crucial role in the global economy, as well as how we live and work every day.

“To be selected as a Fellow is to join our most renowned member grade and an elite group that represents less than 1 percent of ACM’s overall membership,” explains ACM President Vicki L. Hanson. “The Fellows program allows us to shine a light on landmark contributions to computing, as well as the men and women whose tireless efforts, dedication, and inspiration are responsible for groundbreaking work that improves our lives in so many ways.”

Underscoring ACM’s global reach, the 2017 Fellows hail from universities, companies and research centers in China, Denmark, Germany, Hong Kong, Switzerland, the United Kingdom and the United States.

The 2017 Fellows have been cited for numerous contributions in areas including artificial intelligence, big data, computer architecture, computer graphics, high performance computing, human-computer interaction, sensor networks, wireless networking and theoretical computer science.

ACM will formally recognize its 2017 Fellows at the annual Awards Banquet, to be held in San Francisco on June 23, 2018. Additional information about the 2017 ACM Fellows, the awards event, as well as previous ACM Fellows and award winners, is available at http://awards.acm.org/.

2017 ACM Fellows

Lars Birkedal
Aarhus University
For contributions to the semantic and logical foundations of compilers and program verification systems

Edouard Bugnion
EPFL
For contributions to virtual machines

Margaret Burnett
Oregon State University
For contributions to end-user software engineering, understanding gender biases in software, and broadening participation in computing

Shih-Fu Chang
Columbia University
For contributions to large-scale multimedia content recognition and multimedia information retrieval

Edith Cohen
Google Research
For contributions to the design of efficient algorithms for networking and big data

Dorin Comaniciu
Siemens Healthcare
For contributions to machine intelligence, diagnostic imaging, image-guided interventions, and computer vision

Susan M. Dray
Dray & Associates
For co-founding ACM SIGCHI and disseminating exemplary user experience design and evaluation practices worldwide

Edward A. Fox
Virginia Tech
For contributions in information retrieval and digital libraries

Richard M. Fujimoto
Georgia Institute of Technology
For contributions to parallel and distributed discrete event simulation

Shafi Goldwasser  
Massachusetts Institute of Technology
For transformative work that laid the complexity-theoretic foundations for the science of cryptography

Carla P. Gomes  
Cornell University
For establishing the field of computational sustainability, and for foundational contributions to artificial intelligence

Martin Grohe 
RWTH Aachen University
For contributions to logic in computer science, database theory, algorithms, and computational complexity

Aarti Gupta 
Princeton University
For contributions to system analysis and verification techniques and their transfer to industrial practice

Venkatesan Guruswami
Carnegie Mellon University
For contributions to algorithmic coding theory, pseudorandomness and the complexity of approximate optimization

Dan Gusfield
University of California, Davis
For contributions to combinatorial optimization and to algorithmic computational biology

Gregory D. Hager
Johns Hopkins University
For contributions to vision-based robotics and to computer-enhanced interventional medicine

Steven Michael Hand
Google
For contributions to virtual machines and cloud computing

Mor Harchol-Balter 
Carnegie Mellon University
For contributions to performance modeling and analysis of distributed computing systems

Laxmikant Kale 
University of Illinois at Urbana-Champaign
For development of new parallel programming techniques and their deployment in high performance computing applications

Michael Kass
NVIDIA
For contributions to computer vision and computer graphics, particularly optimization and simulation

Angelos Dennis Keromytis
DARPA
For contributions to the theory and practice of systems and network security

Carl Kesselman 
University of Southern California
For contributions to high-performance computing, distributed systems, and scientific data management

Edward Knightly 
Rice University
For contributions to multi-user wireless LANs, wireless networks for underserved regions, and cross-layer wireless networking

Craig Knoblock 
University of Southern California
For contributions to artificial intelligence, semantic web, and semantic data integration

Insup Lee
University of Pennsylvania
For theoretical and practical contributions to compositional real-time scheduling and runtime verification

Wenke Lee
Georgia Institute of Technology
For contributions to systems and network security, intrusion and anomaly detection, and malware analysis

Li Erran Li
Uber Advanced Technologies Group
For contributions to the design and analysis of wireless networks, improving architectures, throughput, and analytics

Gabriel H. Loh
Advanced Micro Devices, Inc.
For contributions to die-stacking technologies in computer architecture

Tomás Lozano-Pérez
Massachusetts Institute of Technology
For contributions to robotics, motion planning, geometric algorithms, and their applications

Clifford A. Lynch
Coalition for Networked Information
For contributions to library automation, information retrieval, scholarly communication, and information policy

Yi Ma
University of California, Berkeley
For contributions to theory and application of low-dimensional models for computer vision and pattern recognition

Andrew K. McCallum
University of Massachusetts at Amherst
For contributions to machine learning with structured data, and innovations in scientific communication

Silvio Micali
Massachusetts Institute of Technology
For transformative work that laid the complexity-theoretic foundations for the science of cryptography

Andreas Moshovos 
University of Toronto
For contributions to high-performance architecture including memory dependence prediction and snooping coherence

Gail C. Murphy
The University of British Columbia
For contributions to recommenders for software engineering and to program comprehension

Onur Mutlu
ETH Zurich
For contributions to computer architecture research, especially in memory systems

Nuria Oliver
Vodafone/Data-Pop Alliance
For contributions in probabilistic multimodal models of human behavior and uses in intelligent, interactive systems

Balaji Prabhakar 
Stanford University
For developing algorithms and systems for large-scale data center networks and societal networks

Tal Rabin
IBM Research
For contributions to foundations of cryptography, including multi-party computations, signatures, and threshold and proactive protocol design

K. K. Ramakrishnan
University of California, Riverside 
For contributions to congestion control, operating system support for networks and virtual private networks

Ravi Ramamoorthi
University of California San Diego 
For contributions to computer graphics rendering and physics-based computer vision

Yvonne Rogers  
University College London
For contributions to human-computer interaction and the design of human-centered technology

Yong Rui  
Lenovo Group
For contributions to image, video and multimedia analysis, understanding and retrieval

Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
For contributions to the theory and practice of machine learning

Steven M. Seitz
University of Washington, Seattle
For contributions to computer vision and computer graphics

Michael Sipser
Massachusetts Institute of Technology
For contributions to computational complexity, particularly randomized computation and circuit complexity

Anand Sivasubramaniam
Penn State University
For contributions to power management of datacenters and high-end computer systems

Mani B. Srivastava
University of California, Los Angeles
For contributions to sensor networks, mobile personal sensing, and cyber-physical systems

Alexander Vardy
University of California San Diego
For contributions to the theory and practice of error-correcting codes and their study in complexity theory

Geoffrey M. Voelker
University of California San Diego
For contributions to empirical measurement and analysis in systems, networking and security

Martin D. F. Wong
University of Illinois at Urbana-Champaign
For contributions to the algorithmic aspects of electronic design automation (EDA)

Qiang Yang
Hong Kong University of Science and Technology
For contributions to artificial intelligence and data mining

ChengXiang Zhai
University of Illinois at Urbana-Champaign
For contributions to information retrieval and text data mining

Aidong Zhang
State University of New York at Buffalo
For contributions to bioinformatics and data mining

About ACM

ACM, the Association for Computing Machinery (www.acm.org) is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence.  ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About the ACM Fellows Program

The ACM Fellows Program (http://awards.acm.org/fellow/) initiated in 1993, celebrates the exceptional contributions of the leading members in the computing field. These individuals have helped to enlighten researchers, developers, practitioners and end users of information technology throughout the world. The new ACM Fellows join a distinguished list of colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

Source: ACM

The post ACM Recognizes 2017 Fellows for Advancing Technology in the Digital Age appeared first on HPCwire.

Solar disinfection system wins CECS Fall Capstone Design Fair

Colorado School of Mines - Tue, 12/12/2017 - 09:49

A solar-powered water disinfection system for use in rural Uganda was awarded first place in the College of Engineering and Computational Sciences’ Fall Capstone Design Trade Fair on Dec. 5.

Team Uganda Solar also won the fair’s Humanitarian Engineering Award for the sustainable and economical point-of-use water disinfection system, which utilizes ultraviolet LEDs instead of the traditional mercury vapor lamps. According to the 2017 Progress on Drinking Water, Sanitation and Hygiene report, prepared by the World Health Organization (WHO) and UNICEF, nearly 22 million residents living in rural areas of Uganda lack access to safe, clean drinking water. 

Members of Team Uganda Solar were Cole Alexander (mechanical engineering), Barron Keith (electrical engineering), Shao Liu (environmental engineering), Chad McFarland (mechanical engineering) and Caitlyn Smith (environmental engineering). 

Second place went to Team Dentium Engineering for their project, Dr. Sluggo’s A-45 Oscillator Toothbrush. Team members were William Cullum (mechanical engineering), Matthew Lewis (mechanical engineering), Duncan Melton (mechanical engineering), Brock Morrison (engineering physics), Cesar Navejas Garcia (mechanical engineering) and John Kater (mechanical engineering).
 
A first-place award was also given out to the top Human-Centered Design Studio project, Team 13e’s Motocross Foot Positioner. Primary team members were Rheana Cordero, Lauren Harrison, Kayla Hounshell and Megan Koehler, all studying mechanical engineering. 
 
Judges evaluated the projects based on poster and display, discussion, problem definition, design analysis and overall impression. 

"For two semesters these teams have been putting to use everything they had learned during their engineering studies to solve a client's problem and the trade fair is the proof in the pudding,” said Kevin Moore, dean of CECS. “Trade Fair is where we can showcase partnerships between Mines and the outside world, as well. This fall's winning team, Uganda Solar, with their project titled ‘eMi Solar-Powered UV Disinfection’ is a perfect example. Partnering with support from an NGO and with a co-client who also had in-country NGO experience, the team put together an amazing humanitarian-motivated solution to a real problem, a true ‘project that matters.’”

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Tencent Cloud Adopts Mellanox Interconnect Solutions for HPC and AI Cloud Offering

HPC Wire - Tue, 12/12/2017 - 09:33

SUNNYVALE, Calif. & YOKNEAM, Israel, Dec. 12, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that Tencent Cloud has adopted Mellanox interconnect solutions for its high-performance computing (HPC) and artificial intelligence (AI) public cloud offering. Tencent Cloud is a secure, reliable and high-performance public cloud service that integrates Tencent’s infrastructure capabilities with the advantages of a massive-user platform and ecosystem.

The Tencent Cloud infrastructure leverages Mellanox Ethernet and InfiniBand adapters, switches and cables to deliver advanced public cloud services. By taking advantage of Mellanox RDMA, in-network computing and other interconnect acceleration engines, Tencent Cloud can now offer high-performance computing services, as required by its users, to develop advanced applications and offer new services.

“Tencent Cloud is utilizing Mellanox interconnect and applications acceleration technology to help companies develop their next generation products and offer new and intelligent services,” said Wang Huixing, vice president of Tencent Cloud. “We are excited to work with Mellanox to integrate its world-leading interconnect technologies into our public cloud offerings, and plan to continue to scale our infrastructure product lines to meet the growing needs of our customers.”

“We are proud to partner with Tencent Cloud, who is leveraging our advanced interconnect technology to help build a leading high-performance computing and artificial intelligence-based public cloud infrastructure,” said Amir Prescher, senior vice president of business development at Mellanox Technologies. “Through Tencent Cloud, companies will benefit from Mellanox’s technology to build new products and services that can leverage faster and more efficient data analysis. We look forward to continuing to work with Tencent and expanding the use of Mellanox solutions in its cloud offering.”

Mellanox interconnect solutions deliver the highest efficiency for high-performance computing, artificial intelligence, cloud, storage, and other applications.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

About Tencent Cloud

Tencent Cloud is a leading global cloud service provider. A product of Tencent, Tencent Cloud was built with the expertise of teams who created innovative services like QQ, WeChat and Qzone. Tencent Cloud provides integrated cloud services such as IaaS, PaaS and SaaS, and is a one-stop service for enterprises seeking to adopt public cloud, hybrid cloud, private cloud and cloud-based financial services. It is also a pioneer in cutting edge web technologies such as Cloud Image, facial recognition, big data analytics, machine learning, audio/video technology and security protection. Tencent Cloud delivers integrated industry solutions for gaming, finance, e-commerce, tourism, online-to-offline services, governments, healthcare, online education, and smart hardware. It also provides general solutions with different functions, including online video, website set-up, hybrid cloud, big data, the WeChat eco-system and more. For more information, please visit https://www.qcloud.com/.

Source: Mellanox

The post Tencent Cloud Adopts Mellanox Interconnect Solutions for HPC and AI Cloud Offering appeared first on HPCwire.

ESnet Now Moving More Than 1 Petabyte/wk

HPC Wire - Tue, 12/12/2017 - 09:20

Optimizing ESnet (Energy Sciences Network), the world’s fastest network for science, is an ongoing process. Recently, a two-year collaboration among ESnet users – the Petascale DTN Project – achieved its ambitious goal of sustained data transfers exceeding the target rate of 1 petabyte per week. ESnet is managed by Lawrence Berkeley National Laboratory for the Department of Energy.

During the past two years ESnet engineers have been working with staff at DOE labs to fine tune the specially configured systems called data transfer nodes (DTNs) that move data in and out of the National Energy Research Scientific Computing Center (NERSC) at LBNL and the leadership computing facilities at Argonne National Laboratory and Oak Ridge National Laboratory. A good article describing the ESnet project (ESnet’s Petascale DTN project speeds up data transfers between leading HPC centers) was posted yesterday on Phys.org.

A variety of software and hardware upgrades and expansions were required to achieve the speedup. Here are two examples taken from the article:

  • At NERSC, the DTN project resulted in adding eight more nodes, tripling the number, in order to achieve enough internal bandwidth to meet the project’s goals. “It’s a fairly complicated thing to do,” said Damian Hazen, head of NERSC’s Storage Systems Group. “It involves adding infrastructure and tuning as we connected our border routers to internal routers to the switches connected to the DTNs. Then we needed to install the software, get rid of some bugs and tune the entire system for optimal performance.”
  • Oak Ridge Leadership Computing Facility now has 28 transfer nodes in production on 40-Gigabit Ethernet. The nodes are deployed under a new model—a diskless boot—which makes it easy for OLCF staff to move resources around, reallocating as needed to respond to users’ needs. “The Petascale DTN project basically helped us increase the ‘horsepower under the hood’ of network services we provide and make them more resilient,” said Jason Anderson, an HPC UNIX/storage systems administrator at OLCF. “For example, we recently moved 12TB of science data from OLCF to NCSA in less than 30 minutes. That’s fast!”

The Petascale DTN collaboration also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets. The number of sites with this base capability is also expanding, with Brookhaven National Laboratory in New York now testing its transfer capabilities with encouraging results. Future plans include bringing the NSF-funded San Diego Supercomputer Center and other big data sites into the mix.
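The arithmetic behind that target is easy to verify with a back-of-envelope calculation (the helper function below is purely illustrative, not part of the project’s tooling):

```python
# Back-of-envelope check: what sustained rate does 1 PB/week imply?
def min_sustained_gbps(petabytes: float, days: float) -> float:
    bits = petabytes * 1e15 * 8   # decimal petabytes -> bits
    seconds = days * 86400        # days -> seconds
    return bits / seconds / 1e9   # sustained rate in Gbps

rate = min_sustained_gbps(1, 7)
print(f"{rate:.1f} Gbps")  # 13.2 Gbps
```

A flat 13.2 Gbps is the bare minimum; the project’s figure of about 15 Gbps leaves headroom for protocol overhead and for gaps between real-world transfers.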

Performance measurements from November 2017 at the end of the Petascale DTN project. All of the sites met or exceeded project goals. Credit: Eli Dart, ESnet

“This increase in data transfer capability benefits projects across the DOE mission science portfolio” said Eli Dart, an ESnet network engineer and leader of the project. “HPC facilities are central to many collaborations, and they are becoming more important to more scientists as data rates and volumes increase. The ability to move data in and out of HPC facilities at scale is critical to the success of an ever-growing set of projects.”

Link to full Phys.org article: https://phys.org/news/2017-12-esnet-petascale-dtn-hpc-centers.html

The post ESnet Now Moving More Than 1 Petabyte/wk appeared first on HPCwire.

Quantum Unveils Scale-out NAS for High-Value and Data-Intensive Workloads

HPC Wire - Tue, 12/12/2017 - 08:51

SAN JOSE, Calif., Dec. 12, 2017 — Quantum Corp. (NYSE: QTM) today announced Xcellis Scale-out NAS, the industry’s first workflow storage appliance to provide the management capabilities and robust features of enterprise scale-out NAS with the cost-effective scaling organizations need to address modern data growth. It delivers greater than 3X the performance of competitive enterprise NAS offerings and, with integrated storage tiering, an end-to-end solution can cost as little as 1/10 that of alternative enterprise NAS solutions with the same capacity and performance. This combination makes Xcellis Scale-out NAS unique in comprehensively addressing the needs of high-value data environments where the organization’s revenue and products are all built around data.

Unified Unstructured Data at Scale 

Many IoT, media and entertainment, life sciences, manufacturing, video surveillance and enterprise high-performance computing (HPC) environments are outgrowing traditional enterprise NAS. Users have typically turned to scale-out NAS over the past decade as an alternative but are finding that scaling capacity, integrating cloud strategies and sharing data are afterthoughts or not even possible with the solutions they’ve adopted. Unlike enterprise IT workloads, data in high-value workload environments is constantly growing on every axis — ingest, processing, analysis, distribution, archive. These environments require storage solutions with the management and features of enterprise NAS, but which can also cost-effectively scale performance and capacity. Leveraging Quantum’s industry-leading StorNext® parallel file system and data management platform, Xcellis Scale-out NAS offers industry-leading performance, scalability and management benefits for organizations with high-value workloads:

  • Cost-Effective Scaling of Performance and Capacity: Clusters can scale performance and capacity together or independently to reach hundreds of petabytes in capacity and terabytes per second in performance. A single client (SMB, NFS or high-performance client) can achieve over 3X the performance of competitive scale-out NAS offerings with multiple clients scaling a single cluster’s bandwidth to over a terabyte per second. In addition, an end-to-end solution with Xcellis has been shown to manage petabytes of data in a simplified workflow incorporating tape or cloud that provides greater performance than leading NAS-only alternatives for as little as a tenth of the cost.
  • Advanced Features and Flexible Management: With simple installation and setup, a modern administrative single-screen interface provides in-depth monitoring, alerting and management functions as well as rapid scanning and search capabilities that tame large data repositories. Xcellis Scale-out NAS is designed to integrate with the highest performance Ethernet networks through SMB and NFS interfaces and offers the flexibility to also support high-performance block storage in the same converged solution.
  • Lifecycle, Location and Cost Management: Xcellis Scale-out NAS leverages more than 15 years of data management experience built into StorNext. Xcellis data management provides automatic tiering between SSD, disk, tape, object storage and public cloud. Copies can be created for content distribution, collaboration, data protection and disaster recovery.
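To make the tiering idea concrete, here is a minimal sketch of an age-of-access policy of the kind such a data-management layer automates. The tier names and thresholds below are illustrative assumptions, not Quantum’s actual defaults:

```python
# Hypothetical age-based tiering policy (thresholds are illustrative only).
def pick_tier(days_since_access: float) -> str:
    if days_since_access < 1:
        return "ssd"           # hot: active ingest and processing
    if days_since_access < 30:
        return "disk"          # warm: recent projects
    if days_since_access < 365:
        return "object/cloud"  # cool: collaboration and distribution copies
    return "tape"              # cold: archive and disaster recovery

print(pick_tier(0.5), pick_tier(90), pick_tier(400))  # ssd object/cloud tape
```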

Artificial Intelligence With Xcellis 

Xcellis Scale-out NAS is the industry’s only NAS solution with integrated artificial intelligence (AI) capabilities that enable customers to create more value from new and existing data. It can actively interrogate data across multiple axes to uncover events, objects, faces, words and sentiments, automatically generating custom metadata that unlocks new possibilities for using stored assets.

Availability

Xcellis Scale-out NAS will be generally available this month with entry configurations and those leveraging tiering starting at under $100 per terabyte (raw).

About Quantum 

Quantum is a leading expert in scale-out tiered storage, archive and data protection. The company’s StorNext platform powers modern high-performance workflows, enabling seamless, real-time collaboration and keeping content readily accessible for future use and remonetization. More than 100,000 customers have trusted Quantum to address their most demanding content workflow needs, including large government agencies, broadcasters, research institutions and commercial enterprises. With Quantum, customers have the end-to-end storage platform they need to manage assets from ingest through finishing and into delivery and long-term preservation. See how at www.quantum.com/customerstories.

Source: Quantum

The post Quantum Unveils Scale-out NAS for High-Value and Data-Intensive Workloads appeared first on HPCwire.

Dell EMC, Alces Flight to Create Hybrid HPC Solution for the University of Liverpool

HPC Wire - Mon, 12/11/2017 - 14:58

BICESTER, UK, December 6, 2017 — Built by Dell EMC and Alces Flight, the University of Liverpool’s new solution will use Amazon Web Services (AWS) to provide students and researchers greater flexibility in research by combining on-premises and cloud resources into a hybrid high performance computing (HPC) solution.

“We are pleased to be working with Dell EMC and Alces Flight on this new venture,” said Cliff Addison, head of Advanced Research Computing at the University of Liverpool. “The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we’re setting the bar even higher for what can be achieved in HPC.”

“Universities are competing in an increasingly demanding environment and the need to differentiate and offer a best-in-class experience is vital. The collaboration between ourselves and Alces Flight helps the University of Liverpool offer a significant computing resource to their students and faculty,” said Peter Barnes, VP Infrastructure Solutions Group, Dell EMC UK and Ireland. “Our Dell PowerEdge 14G servers provide the highly scalable compute platform that is instrumental in this. High-Performance Computing is already being used around the world to make significant scientific breakthroughs and today’s launch will hopefully be the catalyst for more.”

“AWS is making High-Performance Computing (HPC) more accessible and more scalable than ever before,” said Brendan Bouffler, Global Research Computing at Amazon Web Services. “Using AWS, scientists at Liverpool University will have instant access to facilities most of their peers at other institutions stand in a queue for and that means a much faster turn-around between experiments, speeding up their time to the next exciting discovery. It’s humbling to see what this community is able to achieve when traditional constraints of IT are removed.”

“We want researchers and students at the University of Liverpool to be able to run HPC workloads anywhere and at any time,” said Wil Mayers, Technical Director of Alces Flight. “By being able to provide a unified HPC environment that incorporates both Dell EMC on-premises hardware and AWS, we can provide users with a high-quality, consistent experience. Our hope is that this results in further collaborative engagements that push hybrid HPC forward, allowing users to have quick access to some of the best hardware and cloud resources available.”

The collaborative solution from Dell EMC and Alces Flight on AWS is the first of its kind for the University of Liverpool. A fully managed on-premises HPC cluster and cloud-based HPC account have been architected for students and researchers to achieve access to the computational resources required with as little delay as possible.

About the University of Liverpool, Department of Computing Services

The University of Liverpool is one of the UK’s leading research institutions with an annual turnover of £480 million, including £102 million for research. Ranked in the top 1% of higher education institutions worldwide, Liverpool is a member of the prestigious Russell Group of the U.K.’s leading research universities.

About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries – including 98 percent of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.

About Alces Flight

Alces Flight is made up of a team of specialists in High Performance Computing (HPC) software for scientists, engineers and researchers. Based in the U.K., Alces designs, builds and supports environments to help users make efficient use of the compute and storage resources available to them. Our products are designed to support both existing users with software that is familiar to them and help first-time users to discover, learn and develop their HPC skills.

Source: Alces Flight Limited

The post Dell EMC, Alces Flight to Create Hybrid HPC Solution for the University of Liverpool appeared first on HPCwire.

HPC-as-a-Service Finds Toehold in Iceland

HPC Wire - Mon, 12/11/2017 - 14:20

While high-demand workloads (e.g., bitcoin mining) can overheat data center cooling capabilities, at least one data center infrastructure provider has announced an HPC-as-a-service offering that features 100 percent free and zero-carbon cooling.

Verne Global, a company seemingly intent on converting Iceland into a gigantic, renewably powered data center facility, has announced hpcDIRECT, a scalable, bare-metal service designed to support power-intensive high performance computing applications. Finding initial markets in the financial services, manufacturing (particularly automotive) and scientific research verticals, hpcDIRECT is powered by the island country’s abundant supply of hydroelectric, geothermal and, to a lesser degree, wind energy that the company says delivers 60 percent savings on power costs.

The launch follows Verne’s late-October announcement that it had completed a 12.4-Tbps upgrade to the Greenland Connect subsea cable system, with three 100-Gbps connections to New York, lowering latency and delivering, according to Verne, up to 90 percent lower network costs.

hpcDIRECT is a response to customers who “want to consume their HPC in both the traditional sense, where we provide them with colocation capability, power, space, cooling – the traditional method a data center operator provides services to a customer – and then also to provide a next layer in the technology stack…the hardware and the orchestration of that hardware,” Dominic Ward, managing director at Verne, told EnterpriseTech.

He said hpcDIRECT is available with no upfront charges and can be provisioned to customers’ size and configuration requirements. hpcDIRECT clusters are built using the latest available architectures, including Intel Xeon (Skylake) servers connected with Mellanox EDR InfiniBand.

Source: Verne Global

“By leveraging low-cost, reliable, and 100 percent renewable power at its Keflavik campus, the company holds a rather unique position compared to other providers in the industry that offer services similar to hpcDIRECT,” said Teddy Miller, associate analyst at industry watcher 451 Research. “Verne Global’s new product will appeal particularly to enterprises with corporate sustainability mandates or initiatives. The recent completion of an upgrade to Tele Greenland’s Greenland Connect subsea cable system should also significantly lower latency and network costs between the Keflavik campus and New York City. Verne Global may be small compared to other players in the space, but what it offers its customers is cheap, green and increasingly well-connected.”

Verne said hpcDIRECT is accessible via a range of options, from incremental additions to augment existing high performance computing, to supporting massive processing requirements with petaflops of compute. “This flexibility makes it an ideal solution for applications such as computer-aided engineering, genomic sequencing, molecular modelling, grid computing, artificial intelligence and machine learning,” the company said.

Demand for colocation data center services in financial services begins with the industry’s shift away from capital expenditures on the balance sheet toward more efficient operational-expenditure alternatives. Ward said banks and hedge funds typically run compute-intensive inter- and intra-day risk applications: “they often have a core level of compute that they want to augment with a more flexible and scalable solution.”

In the automotive sector, companies typically have CFD and crash simulation software running at high utilization 24/7/365. A service like hpcDIRECT, Ward said, “enables them to increment the compute resource they have for high performance applications on a steady basis.” This avoids time-consuming and costly procurement cycles, he said. “We’re able to provide them an augmentation, or a complete replacement, for (their high-performance) resources and step into the demand profile that fits their compute demands.”

The post HPC-as-a-Service Finds Toehold in Iceland appeared first on HPCwire.

Fujitsu Develops WAN Acceleration Technology Utilizing FPGA Accelerators

HPC Wire - Mon, 12/11/2017 - 10:37

TOKYO, Dec. 11, 2017 — Fujitsu Laboratories Ltd. today announced the development of WAN acceleration technology that can deliver transfer speeds up to 40Gbps for migration of large volumes of data between clouds, using servers equipped with field-programmable gate arrays (FPGAs).

Connections in wide area networks (WANs) between clouds are moving from 1Gbps lines to 10Gbps lines, but with the recent advance of digital technology, including IoT and AI, there is an even greater demand for faster high-speed data transfers as huge volumes of data are collected in the cloud. Until now, the effective transfer speed of WAN connections has been raised using techniques that reduce the volume of data, such as compression and deduplication. However, on 10Gbps WAN lines there are enormous volumes of data to be processed, and existing WAN acceleration technologies usable in cloud servers have not been able to sufficiently raise the effective transfer rate.

Fujitsu Laboratories has now developed WAN acceleration technology capable of real-time operation even at speeds of 10Gbps or higher. The acceleration is achieved with dedicated computational units, specialized for tasks such as feature value calculation and compression, mounted on a server’s FPGA; the units run in a highly parallel fashion because data is supplied to each at the appropriate time, based on the predicted completion of each computation.

In a test environment where this technology was deployed on servers that use FPGAs, and where the servers were connected with 10Gbps lines, Fujitsu Laboratories confirmed that this technology achieved effective transfer rates of up to 40Gbps, the highest performance in the industry. With this technology, it has become possible to transfer data at high speed between clouds, including data sharing and backups, enabling the creation of next-generation cloud services that share and utilize large volumes of data across a variety of companies and locations.
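The relationship between line rate, data reduction, and effective rate is simple arithmetic; the sketch below is our own illustration of that relationship, not Fujitsu’s published methodology:

```python
# If dedup and compression leave only (1 - reduction) of the original bytes
# to cross the wire, the effective transfer rate scales up accordingly.
def effective_rate_gbps(line_gbps: float, reduction: float) -> float:
    return line_gbps / (1.0 - reduction)

# A 4x data reduction (75% of bytes eliminated) saturating a 10Gbps line
# yields a 40Gbps effective rate, matching the result reported above.
print(effective_rate_gbps(10, 0.75))  # 40.0
```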

Fujitsu Laboratories aims to deploy this technology, capable of use in cloud environments, as an application loaded on an FPGA-equipped server. It is continuing evaluations in practical environments with the goal of commercializing this technology during fiscal 2018.

Fujitsu Laboratories will announce details of this technology at the 2017 International Conference on Field-Programmable Technology (FPT 2017), an international conference to be held in Melbourne, Australia on December 11-13.

Development Background

As the cloud has grown in recent years, there has been a movement to increase data and server management and maintenance efficiency by migrating data (i.e., internal documents, design data, and email) that had been managed on internal servers to the cloud. In addition, as shown by the spread in the use of digital technology such as IoT and AI, there are high expectations for the ways that work and business will be transformed by the analysis and use of large volumes of data, including camera images from factories and other on-site locations, and log data from devices. Given this, there has been explosive growth in the volume of data passing through WAN lines between clouds, spurring a need for next-generation WAN acceleration technology capable of huge data transfers at high speed between clouds.

Issues

WAN acceleration technologies improve effective transfer speeds by reducing the volume of data through compression or deduplication of the data to be transferred. When transferring data at even higher speeds over 10Gbps network lines, the volume of data to be processed is so great that compression and deduplication processing speed in the server becomes a bottleneck. Therefore, in order to improve real-time operation, there is a need either for CPUs that can operate at higher speeds, or for WAN acceleration technology with faster processing speeds.

About the Newly Developed Technology

Fujitsu Laboratories has now developed WAN acceleration technology that can achieve real-time operation usable in the cloud even at speeds of 10Gbps or more, using server-mounted FPGAs as accelerators. Efficiency is achieved by offloading to the FPGA the computationally heavy portions of compression and deduplication processing that are difficult to speed up on the CPU, and by efficiently connecting the CPU with the FPGA accelerator. Details of the technology are as follows.

1. FPGA parallelization technology using highly parallel dedicated computational units

Fujitsu Laboratories has developed FPGA parallelization technology that significantly reduces the processing time required for data compression and deduplication. Dedicated computational units specialized for data partitioning, feature value calculation, and lossless compression are deployed in an FPGA in a highly parallel configuration, and the units operate in parallel at high efficiency because data is delivered to each at the appropriate time, based on predictions of when each calculation will complete.

2. Technology to optimize the flow of processing between CPU and FPGA

Previously, in determining whether to apply lossless compression to data based on the identification of duplication in that data, the data had to be read twice, before and after the duplication identification was executed on the FPGA, increasing overhead and preventing the system from delivering sufficient performance. The new technology consolidates the processing handoff onto the FPGA: both the preprocessing for duplication identification and the compression processing run on the FPGA, and a processing sequence controls how the compression results are reflected on the CPU based on the results of the duplication identification. This eliminates the overhead of reloading input data and exchanging control between the CPU and FPGA, reducing the waiting time due to the handoff of data and control and delivering efficient coordinated operation of the CPU and FPGA accelerator.
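A pure-software analogue of the dedup-then-compress pipeline may help illustrate the data flow. This is a sketch only: chunking is fixed-size, the “feature value” is simply a SHA-256 digest, and everything runs on the CPU, whereas Fujitsu’s design performs partitioning, feature value calculation and compression in parallel FPGA units:

```python
import hashlib
import zlib

def dedup_compress(data, chunk_size=4096, index=None):
    """Split a stream into chunks; emit a short reference for chunks already
    seen (deduplication) and a losslessly compressed payload for new ones."""
    if index is None:
        index = set()
    records = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()  # the chunk's "feature value"
        if digest in index:
            records.append(("REF", digest))      # duplicate: reference only
        else:
            index.add(digest)
            records.append(("NEW", zlib.compress(chunk)))
    return records, index

# Four identical 4 KB chunks: the first is sent compressed, the rest as refs.
records, _ = dedup_compress(b"A" * 4096 * 4)
print([tag for tag, _ in records])  # ['NEW', 'REF', 'REF', 'REF']
```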

Effects

Fujitsu Laboratories deployed this newly developed technology in servers installed with FPGAs, confirming acceleration approximately thirty times the performance of CPU processing alone. Fujitsu Laboratories evaluated the transfer speed for a high volume of data in a test environment where the servers were connected with 10Gbps connections, and in a test simulating the regular backup of data, including documents and video, confirmed that this technology achieved transfer speeds up to 40Gbps, an industry record. This technology has significantly improved data transfer efficiency over WAN connections, enabling high-speed data transfers between clouds, such as data sharing and backups, making possible the creation of next-generation cloud services that share and use large volumes of data between a variety of companies and locations.

Future Plans

Fujitsu Laboratories will continue to evaluate this technology in practical environments, deploying this technology in virtual appliances that can be used in cloud environments. Fujitsu Laboratories aims to make this technology available as a product of Fujitsu Limited during fiscal 2018.

About Fujitsu Laboratories

Founded in 1968 as a wholly owned subsidiary of Fujitsu Limited, Fujitsu Laboratories Ltd. is one of the premier research centers in the world. With a global network of laboratories in Japan, China, the United States and Europe, the organization conducts a wide range of basic and applied research in the areas of Next-generation Services, Computer Servers, Networks, Electronic Devices and Advanced Materials. For more information, please see: http://www.fujitsu.com/jp/group/labs/en/.

About Fujitsu Ltd

Fujitsu is a leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

Source: Fujitsu Ltd

The post Fujitsu Develops WAN Acceleration Technology Utilizing FPGA Accelerators appeared first on HPCwire.

HPC Iron, Soft, Data, People – It Takes an Ecosystem!

HPC Wire - Mon, 12/11/2017 - 09:53

Cutting-edge advanced computing hardware (aka big iron) does not stand by itself. These computers are the pinnacle of a myriad of technologies that must be carefully woven together by people to create the computational capabilities used to deliver insights into the behaviors of complex systems. This collection of technologies and people has been called the High Performance Computing (HPC) ecosystem. This is an appropriate metaphor because it evokes the complicated nature of the interdependent elements needed to deliver first-of-a-kind computing systems.

The idea of the HPC ecosystem has been around for years and most recently appeared in one of the objectives of the National Strategic Computing Initiative (NSCI). The 4th objective calls for “Increasing the capacity and capability of an enduring national HPC ecosystem.” This leads to the questions: what makes up the HPC ecosystem, and why is it so important? Perhaps the more important question is why the United States needs to be careful about letting its HPC ecosystem diminish.

The heart of the HPC ecosystem is clearly the “big humming boxes” that contain the advanced computing hardware. The rows upon rows of cabinets are the focal point of the electronic components, operating software, and application programs that provide the capabilities that produce the results used to create new scientific and engineering insights that are the real purpose of the HPC ecosystem. However, it is misleading to think that any one computer at any one time is sufficient to make up an ecosystem. Rather, the HPC ecosystem requires a continuous pipeline of computer hardware and software. It is that continuous flow of developing technologies that keeps HPC progressing on the cutting edge.

The hardware element of the pipeline includes systems and components that are under development, but are not currently available. This includes the basic research that will create the scientific discoveries that enable new approaches to computer designs. The ongoing demand for “cutting edge” systems is important to keep system and component designers pushing the performance envelope. The pipeline also includes the currently installed highest performance systems. These are the systems that are being tested and optimized. Every time a system like this is installed, technology surprises are found that must be identified and accommodated. The hardware pipeline also includes systems on the trailing edge. At this point, the computer hardware is quite stable and allows a focus on developing and optimizing modeling and simulation applications.

One of the greatest challenges of maintaining the HPC ecosystem is recognizing that there are significant financial commitments needed to keep the pipeline filled. There are many examples of organizations that believed that buying a single big computer would make them part of the ecosystem. In those cases, they were right, but only temporarily. Being part of the HPC ecosystem requires being committed to buying the next cutting-edge system based on the lessons learned from the last system.

Another critical element of the HPC ecosystem is software. This generally falls into two categories – software needed to operate the computer (also called middleware or the “stack”) and software that provides insights into end user questions (called applications). Middleware plays the critical role of managing the operations of the hardware systems and enabling the execution of applications software. Middleware includes computer operating systems, file systems and network controllers. This type of software also includes compilers that translate application programs into the machine language that will be executed on hardware. There are quite a number of other pieces of middleware software that include libraries of commonly needed functions, programming tools, performance monitors, and debuggers.

Applications software spans a wide range and is as varied as the problems users want to address through computation. Some applications are quick “throwaway” (prototype) attempts to explore potential ways in which computers may be used to address a problem. Other applications software is written, sometimes with different solution methods, to simulate the physical behaviors of complex systems. This software will sometimes last for decades and be progressively improved. An important aspect of these applications is the experimental validation data that provide confidence that the results can be trusted. For this type of applications software, setting up the problem (which can include finite element mesh generation), populating that mesh with material properties, and launching the execution are important parts of the ecosystem. Other elements of usability include the computers, software, and displays that allow users to visualize and explore simulation results.

Data is yet another essential element of the HPC ecosystem. Data is the lifeblood that flows through the ecosystem to keep it doing useful things. The HPC ecosystem includes systems that hold data and move it from one element to another. Hardware aspects of the data system include memory, storage devices, and networking. Software device drivers and file systems are also needed to keep track of the data. With the growing trend to add machine learning and artificial intelligence to the HPC ecosystem, its ability to process and productively use data is becoming increasingly significant.

Finally, and most importantly, trained and highly skilled people are an essential part of the HPC ecosystem. Just like computing systems, these people make up a “pipeline” that starts in elementary school and continues through undergraduate and advanced degrees. Attracting these people and educating them in computing technologies is critical. Another important part of the people pipeline of the HPC ecosystem is the jobs offered by academia, national labs, government, and industry. These professional experiences provide the opportunities needed to practice and hone HPC skills.

The origins of the United States’ HPC ecosystem date back to the U.S. Army Research Lab’s decision to procure an electronic computer to calculate ballistic tables for its artillery during World War II (i.e., ENIAC). That event led to finding and training the people, who in many cases were women, to program and operate the computer. The ENIAC was just the start of the nation’s significant investment in hardware, middleware software, and applications. However, just because the United States was first does not mean it was alone. Europe and Japan have also maintained robust HPC ecosystems for years, and most recently China has determinedly set out to create one of its own.

The United States and other countries made the necessary investments in their HPC ecosystems because they understood the strategic advantages that staying at the cutting edge of computing provides. These well-documented advantages apply to many areas, including national security, discovery science, economic competitiveness, energy security, and curing diseases.

The challenge of maintaining the HPC ecosystem is that, just like a natural ecosystem, the HPC version can be threatened by becoming too narrow and lacking diversity. This applies to the hardware, middleware, and applications software. Betting on just a few types of technologies can be disastrous if one approach fails. Diversity also means having and using a healthy range of systems, from the highest-performance cutting-edge machines to widely deployed mid- and low-end production systems. Another aspect of diversity is the range of applications that can run productively on advanced computing resources.

Perhaps the greatest challenge to an ecosystem is complacency and assuming that it, and the necessary people, will always be there. This can take the form of an attitude that it is good enough to become an HPC technology follower and acceptable to purchase HPC systems and services from other nations. Once an HPC ecosystem has been lost, it is not clear that it can be regained. A robust HPC ecosystem can last for decades, through many “half-lives” of hardware. A healthy ecosystem puts countries in a leadership position, which means the ability to influence HPC technologies in ways that best serve their strategic goals. Happily, the 4th NSCI objective signals that the United States understands these challenges and the importance of maintaining a healthy HPC ecosystem.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”

The post HPC Iron, Soft, Data, People – It Takes an Ecosystem! appeared first on HPCwire.

MareNostrum 4 Chosen as ‘Most Beautiful Data Center’

HPC Wire - Mon, 12/11/2017 - 09:28

BARCELONA, Dec. 11, 2017 — The MareNostrum 4 supercomputer has been chosen as the winner of the Most Beautiful Data Center in the World prize, hosted by Datacenter Dynamics (DCD).

There are 15 prizes in different categories in addition to the prize for the most beautiful data center, which is decided by popular vote. MareNostrum 4 competed with such impressive facilities as the Switch Pyramid in Michigan, the Bahnhof Pionen in Stockholm and the Norwegian Green Mountain. The BSC supercomputer prevailed thanks to its particular location: inside the chapel of Torre Girona, on the North Campus of the Universitat Politècnica de Catalunya (UPC).

The awards ceremony took place on December 7 in London, where both Mateo Valero, BSC Director, and Sergi Girona, Operations Department Director, received the prize.

About MareNostrum 4

MareNostrum is the generic name BSC uses to refer to the successive upgrades of its most emblematic supercomputer, the most powerful in Spain. The first version was installed in 2005, and the fourth version is currently in operation.

MareNostrum 4 began operations last July and, according to the latest edition of the Top500 list, ranks 16th among the highest-performing supercomputers. Currently, MareNostrum provides 11.1 Petaflops of processing power – that is, the capacity to perform 11.1 × 10^15 operations per second – to scientific production and innovation. This capacity will soon be increased thanks to the installation of new clusters featuring emerging technologies currently being developed in the USA and Japan.
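The petaflops figure above converts to raw operations per second by a simple power-of-ten scaling; a minimal sketch of that arithmetic:

```python
PETA = 10 ** 15  # one petaflop/s = 10^15 floating-point operations per second

def petaflops_to_ops(pf: float) -> float:
    """Convert a petaflops rating to operations per second."""
    return pf * PETA

# MareNostrum 4's general-purpose capacity: 11.1 Petaflops
print(f"{petaflops_to_ops(11.1):.2e} operations per second")  # prints 1.11e+16
```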

Aside from being the most beautiful, MareNostrum has been dubbed the most interesting supercomputer in the world due to the heterogeneity of the architecture it will include once installation is complete. Its total speed will be 13.7 Petaflops. It has 390 Terabytes of main memory and the capacity to store 14 Petabytes (14 million Gigabytes) of data. A high-speed network connects all the components of the supercomputer to one another.

MareNostrum 4 has been funded by the Economy, Industry and Competitiveness Ministry of the Spanish Government and was awarded by public tender to IBM, which integrated into a single machine its own technologies together with those developed by Lenovo, Intel and Fujitsu.

About Barcelona Supercomputing Center

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specialises in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

Source: Barcelona Supercomputing Center

The post MareNostrum 4 Chosen as ‘Most Beautiful Data Center’ appeared first on HPCwire.

PSSC Labs Launches PowerWulf HPC Clusters with Pre-Configured Intel Data Center Blocks

HPC Wire - Mon, 12/11/2017 - 08:54

LAKE FOREST, Calif., Dec. 11, 2017 – PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced its PowerWulf HPC clusters are now available with Intel’s new Xeon Scalable Processors and Intel’s Omni-Path HPC Fabric to deliver the performance needed to tackle cutting edge computing tasks including real-time analytics, virtualized infrastructure and high-performance computing.

PowerWulf clusters are built with Intel’s Data Center Blocks to ensure a truly turnkey solution that addresses customer integration challenges. Today’s customer datacenters require unique server solutions that run complex, business-critical workloads. Intel Data Center Blocks configurations are purpose-built with all-Intel technology, optimized to address the needs of specific market segments. These fully validated blocks deliver the performance, reliability and quality that customers want and can trust to handle their demanding cloud, HPC and business-critical workloads.

PSSC Labs PowerWulf HPC Clusters are available as config-to-order (CTO) to meet the specific needs of a customer. Key features of these solutions include:

  • Pre-configured and fully validated blocks with the latest Intel HPC technology
  • Powered by the Intel Xeon Scalable processor family, delivering an overall performance increase of up to 1.65x compared to the previous generation, and up to 5x the Online Transaction Processing warehouse workload performance versus the current installed base
  • Operating system options: Red Hat, SUSE, and CentOS Linux
  • Multiple models with different support options
  • Intel Fabric Suite 10.5.1, Lustre 2.10
  • Intel Omni-Path Host Fabric Interface (Intel OP HFI) Adapter 100 Series and FDR/EDR InfiniBand Fabric
  • Intel Datacenter SATA and NVMe Solid State Drives (SSD)

“Intel’s integrated and fully validated Data Center Blocks enable PSSC Labs to deliver a more efficient, turnkey approach and reduce the time to market, complexity and costs of system design, validation and integration,” said Alex Lesser, EVP of PSSC Labs. “Partnering with Intel allows us to offer our customers the latest hardware options in our line of custom turnkey PowerWulf HPC clusters for a variety of applications across government, academic and commercial environments.”

PowerWulf HPC clusters also feature PSSC Labs CBeST Cluster Management Toolkit (Complete Beowulf Software Toolkit) to deliver a preconfigured solution with all the necessary hardware, network settings and cluster management software prior to shipping. With its component structure, CBeST is the most flexible cluster management software package available.

Every PowerWulf HPC Cluster includes a three-year unlimited phone/email support package (additional years of support are available), with all support provided by PSSC Labs’ US-based team of engineers. PSSC Labs is an Intel HPC Data Center Specialist and has been an Intel Platinum Provider since 2009. For more information see http://www.pssclabs.com/solutions/hpc-cluster/

 

Source: PSSC Labs

The post PSSC Labs Launches PowerWulf HPC Clusters with Pre-Configured Intel Data Center Blocks appeared first on HPCwire.

Intel® Omni-Path Architecture and Intel® Xeon® Scalable Processor Family Enable Breakthrough Science on 13.7 petaFLOPS MareNostrum 4

HPC Wire - Mon, 12/11/2017 - 08:49

In publicly and privately funded computational science research, dollars (or Euros in this case) follow FLOPS. And when you’re one of the leading computing centers in Europe with a reputation around the world of highly reliable, leading edge technology resources, you look for the best in supercomputing in order to continue supporting breakthrough research. Thus, Barcelona Supercomputing Center (BSC) is driven to build leading supercomputing clusters for its research clients in the public and private sectors.

MareNostrum 4 is nestled within the Torre Girona chapel

“We have the privilege of users coming back to us each year to run their projects,” said Sergi Girona, BSC’s Operations Department Director. “They return because we reliably provide the technology and services they need year after year, and because our systems are of the highest level.” Supported by the Spanish and Catalan governments and funded by the Ministry of Economy and Competitiveness with €34 million in 2015, BSC sought to take its MareNostrum 3 system to the next generation of computing capabilities. It specified multiple clusters, both for the general computational needs of ongoing research and for the development of next-generation codes based on emerging supercomputing technologies and tools for the Exascale computing era. It fell to IBM, partnering with Fujitsu and Lenovo, to design and build MareNostrum 4.

MareNostrum 4 is a multi-cluster system whose main cluster and data storage are interconnected by the Intel® Omni-Path Architecture (Intel® OPA) fabric. A general-purpose compute cluster with 3,456 nodes of the Intel® Xeon® Scalable processor family provides up to 11.1 petaFLOPS of computational capacity. A smaller cluster delivering up to 0.5 petaFLOPS is built on the Intel® Xeon Phi processor 7250. A third small cluster of up to 1.5 petaFLOPS will include Power9 processors and Nvidia GPUs, and a fourth, made of ARM v8 processors, will provide another 0.5 petaFLOPS. An IBM storage array rounds out the system, and all clusters are interconnected with the storage subsystem. MareNostrum 4 is designed to be twelve times faster than its predecessor.
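As a back-of-the-envelope check, summing the "up to" peak figures quoted above for the four clusters gives roughly 13.6 petaFLOPS, landing just under the quoted 13.7 petaFLOPS total (the per-cluster figures appear to be rounded). The cluster labels in this sketch are shorthand descriptions, not official designations:

```python
# "Up to" peak figures (petaFLOPS) for each MareNostrum 4 cluster, as
# described in the article.
clusters_pf = {
    "general-purpose (Xeon Scalable)": 11.1,
    "Xeon Phi 7250": 0.5,
    "Power9 + Nvidia GPUs": 1.5,
    "ARM v8": 0.5,
}

total = sum(clusters_pf.values())
print(f"Aggregate peak: {total:.1f} petaFLOPS")  # prints 13.6
```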

Spain’s 13.7 petaFLOPS supercomputer contributes to the Partnership for Advanced Computing in Europe (PRACE) and supports the Spanish Supercomputing Network (RES).

“From my point of view,” stated Girona, “Intel had, at the time of the procurement, the best processor for general purpose systems. Intel is very good on specific domains, and they continue to innovate in other domains. That is why we chose Intel processors for the general-purpose cluster and Intel Xeon Phi Processor for one of the emerging technology clusters, on which we can explore new code development.” The system was in production by July 2017 and placed at number 13 in the June 2017 Top500 list and number 16 on the November 2017 list.

“The Barcelona Supercomputing Center team is committed to maximizing MareNostrum in any way we can,” concluded Girona. “But MareNostrum is not about us. Our purpose at BSC is to help others. We are successful when the scientists and engineers using MareNostrum’s computing power get all the data they need to further their discoveries. It is always rewarding to know we help others to further cutting-edge scientific exploration.”

Learn more about Intel HPC resources >

The post Intel® Omni-Path Architecture and Intel® Xeon® Scalable Processor Family Enable Breakthrough Science on 13.7 petaFLOPS MareNostrum 4 appeared first on HPCwire.

Reboot of login03

University of Colorado Boulder - Sat, 12/09/2017 - 08:35

TACC Works with C-DAC, India to Organize Workshop on Software Challenges in Supercomputing

HPC Wire - Fri, 12/08/2017 - 11:17

Dec. 8, 2017 — The Texas Advanced Computing Center (TACC) in the U.S. – a world leader in supercomputing – is collaborating with the Centre for Development of Advanced Computing (C-DAC) in India to host a workshop on the “Software Challenges to Exascale Computing (SCEC17)” on December 17th, 2017, from 9 AM to 7 PM at the Hotel Royal Orchid, in Jaipur. The main goal of this workshop is to foster international collaborations in the area of software for the current and next generation supercomputing systems.

At the workshop, exciting talks on advanced software engineering and supercomputing will be delivered by world leaders from the National Science Foundation in the U.S. (https://nsf.gov/), leading academic institutions in India, Japan and the U.S., R&D organizations, and industry. In line with the 2015 “National Strategic Computing Initiative (NSCI)” of the U.S. government, and the “Skill India” campaign of the Government of India, the workshop includes training on using supercomputing resources to solve problems of high societal impact, like earthquake simulation studies and drug discovery efforts. (Additional details on the workshop can be found at: https://scecforum.github.io/)

“I am delighted to collaborate with our colleagues at C-DAC and contribute towards developing a skilled workforce and a strong community in the area of high-level software tools for supercomputing platforms,” said Dr. Ritu Arora, the SCEC17 workshop chair and a scientist at TACC. “Without a concerted effort in this area, it will be hard to lower the adoption barriers to supercomputing and to make it accessible to the masses, especially the non-traditional users of the supercomputers.”

Intel and Nvidia, two key industry players in the supercomputing sector, are generously supporting the workshop. The workshop will provide a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next-generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop will serve as a stepping-stone towards innovations in the future.

About TACC: The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is a leading advanced computing research center in the world. TACC provides comprehensive advanced computing resources and support services to researchers across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. TACC staff members conduct research and development in applications and algorithms, computing systems design/architecture, and programming tools and environments.

About C-DAC: The Centre for Development of Advanced Computing (C-DAC) is the premier R&D organization of the Ministry of Electronics and Information Technology (MeitY) for carrying out R&D in IT, electronics and associated areas. C-DAC’s different areas originated at different times, many emerging as new opportunities were identified.

Source: TACC

The post TACC Works with C-DAC, India to Organize Workshop on Software Challenges in Supercomputing appeared first on HPCwire.

NVIDIA Introduces TITAN V GPU

HPC Wire - Fri, 12/08/2017 - 11:03

LONG BEACH, Calif., Dec. 8, 2017 — NVIDIA today introduced TITAN V, a powerful GPU for the PC, driven by the NVIDIA Volta GPU architecture.

Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency.
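Taking the announced figures at face value, the "9x" claim implies a predecessor peak of roughly 12 teraflops; a quick back-of-the-envelope calculation (illustrative arithmetic only, not an NVIDIA-published predecessor spec):

```python
titan_v_tflops = 110.0        # reported raw horsepower of TITAN V
speedup_vs_predecessor = 9.0  # "9x that of its predecessor"

# The implied peak of the previous-generation card.
predecessor_tflops = titan_v_tflops / speedup_vs_predecessor
print(f"Implied predecessor peak: {predecessor_tflops:.1f} teraflops")  # prints 12.2
```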

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

NVIDIA Supercomputing GPU Architecture, Now for the PC

TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor that is at the center of the GPU. It doubles the energy efficiency of the previous generation Pascal design, enabling dramatic boosts in performance in the same power envelope.

New Tensor Cores designed specifically for deep learning deliver up to 9x higher peak teraflops. With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads with a mix of computation and addressing calculations. Its new combined L1 data cache and shared memory unit significantly improve performance while also simplifying programming.

Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customized for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilization.

Free AI Software on NVIDIA GPU Cloud

TITAN V’s power is ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.

Users of TITAN V can gain immediate access to the latest GPU-optimized AI, deep learning and HPC software by signing up at no charge for an NVIDIA GPU Cloud account. This container registry includes NVIDIA-optimized deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualization tools and the NVIDIA TensorRT inferencing optimizer.

Immediate Availability

TITAN V is available to purchase today for $2,999 from the NVIDIA store in participating countries.

About NVIDIA

NVIDIA’s (NASDAQ:NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: NVIDIA

The post NVIDIA Introduces TITAN V GPU appeared first on HPCwire.
