HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Supermicro Expands Edge Computing and Network Appliance Portfolio with New High Density SoC Solutions

Wed, 02/07/2018 - 10:01

SAN JOSE, Calif., Feb. 7, 2018 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced several new additions to its edge computing and network appliance portfolio based on the new Intel Xeon D-2100 SoC (System-on-a-Chip) processor.

Leveraging its deep expertise in server technology, Supermicro is bringing customers some of the first Intel Xeon D-2100 System-on-a-Chip (SoC) processor-based solutions. The company’s X11SDV series motherboards offer infrastructure optimization by combining the performance and advanced intelligence of Intel® Xeon® processors into a dense, lower-power system-on-a-chip. Supermicro is introducing a wide range of new systems to the market including compact embedded systems, rackmount embedded systems, as well as multi-node MicroCloud and SuperBlade systems.

With server-class reliability, availability and serviceability (RAS) features now available in an ultra-dense, low-power device, Supermicro X11SDV platforms deliver balanced compute and storage for intelligent edge computing and network appliances. These advanced technology building blocks offer workload-optimized solutions and long-life availability with the Intel® Xeon® D-2100 processor family: up to 18 processor cores, up to 512GB of four-channel DDR4 memory operating at 2666MHz, up to four 10GbE LAN ports with RDMA support, an optional integrated Intel® QuickAssist Technology (Intel® QAT) encrypt/decrypt acceleration engine, and internal storage expansion options including mini-PCIe, M.2 and NVMe support.

“These compact new Supermicro Embedded Building Block solutions bring advanced technologies and performance into a dense, low-power system-on-a-chip architecture, extending intelligence to the data center and network edge,” said Charles Liang, President and CEO of Supermicro. “With the vast growth of data driven workloads across embedded applications worldwide, Supermicro remains dedicated to developing powerful, agile, and scalable IoT gateway and compact server, storage and networking solutions that deliver the best end to end ecosystems for ease of deployment and open scalability.”

Supermicro’s new SYS-E300-9D is a compact box embedded system that is well-suited for the following applications: network security appliance, SD-WAN, vCPE controller box, and NFV edge computing server. Based on Supermicro’s X11SDV-4C-TLN2F mini-ITX motherboard with a four-core, 60-watt Intel Xeon D-2123IT SoC, this system supports up to 512GB memory, dual 10GbE RJ45 ports, quad USB ports, and one SATA/SAS hard drive, SSD or NVMe SSD.

The new SYS-5019D-FN8TP is a compact (less than 10-inch depth) 1U rackmount embedded system that is ideal for cloud and virtualization, network appliance and embedded applications. Featuring Supermicro’s X11SDV-8C-TP8F flex-ATX motherboard supporting the eight-core, 80-watt Intel Xeon D-2146NT SoC, this power and space efficient system with built-in Intel QAT crypto and compression supports up to 512GB memory, four GbE RJ45 ports, dual 10GbE SFP+ and dual 10GbE RJ45 ports, dual USB 3.0 ports, four 2.5″ internal SATA/SAS hard drives or SSDs, and internal storage expansion options including mini-PCIe, M.2 and NVMe support.

For more details on Supermicro’s Xeon SoC processor-based solutions, please visit https://www.supermicro.com/products/nfo/Xeon-D.cfm

For more information on Supermicro’s complete line of Embedded Building Block Solutions visit www.supermicro.com/Embedded or download an Embedded Solutions Brochure.

Supermicro is introducing two new MicroCloud servers based on the new processors. Perfect for cloud computing, dynamic web serving, dedicated hosting, content delivery networks, memory caching, and corporate applications, these systems support eight hot-pluggable server nodes in a 3U enclosure with a centralized IPMI server management port. The SYS-5039MD8-H8TNR features the 8-core, 65-watt Intel Xeon D-2141i SoC, and the new SYS-5039MD18-H8TNR features the 18-core, 86-watt Intel Xeon D-2191 SoC. Each server node for these MicroCloud systems supports up to 512GB of ECC memory, one PCI-E 3.0 x16 expansion slot, two hybrid storage drives that support U.2 NVMe/SATA3, two M.2 NVMe/SATA3 connectors, and dual GbE ports.

Supermicro’s 4U/8U SuperBlade enclosures feature blade servers that support the new Intel Xeon D-2100 System-on-a-Chip (SoC) processors, including the 18-core D-2191 processor as well as the 16-core D-2187NT processor with 100G crypto/compression. The blade servers support up to 512GB DDR4 memory, hot-plug 2.5″ U.2 NVMe/SATA drives, M.2 NVMe, 25Gb/10Gb Ethernet, and 100G Intel® Omni-Path (OPA) or 100G EDR InfiniBand. Redundant Chassis Management Modules (CMM) with industry-standard IPMI management tools, high-performance switches, integrated power supplies and cooling fans, and Battery Backup Power (BBP) modules make this all-in-one blade solution ideal for datacenter and cloud applications.

For complete information on Supermicro products, visit www.supermicro.com.

About Super Micro Computer, Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), a leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced Server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Super Micro Computer

Nyriad Ltd Closes $8.5 Million Series A Financing Round

Wed, 02/07/2018 - 09:08

CAMBRIDGE, New Zealand, Feb. 7, 2018 — Exascale computing company Nyriad Limited has announced it has completed its Series A investment round, bringing it to a total of over US$11 million raised to date. The funds will be used to support the continuing expansion of the business in Cambridge, New Zealand, including further investment in expanding its engineering resources and the release of its first product to market in Q1’18.

Founded in 2014, Nyriad specialises in the use of GPUs for converging computing and IO to minimize data movement during processing of large data sets, significantly improving power consumption and accelerating performance for next-generation data centres and supercomputers. It announced its first product, NSULATE, at SC17 in Denver, Colorado.

Nyriad was pleased with the reception it received from the international and New Zealand investment communities. Five VCs participated: Data Collective VC of Palo Alto, Prelude Ventures of San Francisco, East Ventures and IDATEN Ventures of Japan, and New Zealand Venture Investment Fund (NZVIF) in New Zealand. Two New Zealand angel groups, Ice Angels and Enterprise Angels, plus several family offices participated to complete the round.

James Hardiman of Data Collective said, “It is obvious that GPUs are playing a major role in the next generation of computing. Nyriad has expanded that to include storage and is now at the forefront of both of these applications.”

Gabriel Kra from Prelude added, “Current storage architectures may be unable to scale with growing requirements for speed and real-time data security. Nyriad’s software running on GPUs can solve these problems, while dramatically reducing hardware and operating costs, including energy consumption.”

Kenta Adachi of IDATEN Ventures in Tokyo said, “We were delighted to be able to join in the capital round as we introduced HPC Systems to Nyriad who became their first distributor. I’m personally keen to help Nyriad with its business development in Japan, especially since they have made a firm commitment to do business here.”

NZVIF CEO Richard Dellabarca said the high level of investor support is not surprising given Nyriad’s excellent prospects. “This is another example of a New Zealand company developing exceptional technology for a global market and in an area exhibiting vast scale and rapid growth. Although it is a young company, Nyriad is already involved in major global projects and is gaining traction with large international customers. We are pleased that NZVIF and other investors have been able to support the company with capital investment at this stage. We look forward to supporting Nyriad’s growth journey in the coming years.”

Nyriad received record follow-on investments from the New Zealand angel groups Ice Angels and Enterprise Angels. Robbie Paul, CEO of Ice Angels, said, “We continue to back Nyriad for their audacity and their vision to build a global startup from New Zealand that will stay in New Zealand. Their objective to train and cultivate hundreds of new engineers is one that we hope will benefit both Nyriad and the wider New Zealand tech community.”

Matthew Simmons, CEO of Nyriad, stated, “I am thrilled by the support we have had from both the local and international investment communities. Nyriad has a unique business proposition powered by a bold vision. This Series A capital raise gives us the resources to achieve the next stage of our commercial objectives, so I personally thank all who made this funding round a success.”

About Nyriad

Nyriad is a New Zealand-based exascale computing company specialising in advanced data storage solutions for big data and high performance computing. Born out of its consulting work on the Square Kilometre Array Project, the company rethought the relationship between storage, processing and bandwidth to achieve a breakthrough in system stability and performance, capable of processing and storing over 160Tb/s of radio antenna data in real time within a power budget unattainable with modern IT solutions. The software will be commercially available in the first quarter of 2018. For more information about Nyriad, see https://nyriad.com.

Source: Nyriad

Atos Announces Changes to Management Team

Wed, 02/07/2018 - 08:49

PARIS, Feb. 7, 2018 — Atos, a global leader in digital transformation, announced that on the occasion of its yearly Group General Management Meeting, held today in Paris with its 500 top managers worldwide, Thierry Breton, Chairman and CEO, announced the following changes within the Group Executive team:

  • Eric Grall, coordinating Group Operations and the Top program, and Elie Girard, Group CFO, are promoted to Senior Executive Vice-President (SEVP).
  • Michel-Alain Proch, SEVP, CEO North America Operations, is now appointed Group Chief Digital Officer to lead the Group’s internal digital transformation strategy. Michel-Alain Proch will continue to run IT & Security, now together with Group Quality.
  • Patrick Adiba, Group Chief Commercial Officer, is now appointed SEVP, CEO North America Operations.
  • Robert Vassoyan, joining from Cisco, is now appointed SEVP, Group Chief Commercial Officer.
  • Ursula Morgenstern, Executive Vice President (EVP) Business & Platform Solutions, is now appointed CEO Germany, replacing Winfried Holz who will retire in the fall of this year.
  • Sean Narayanan, COO of the Business & Platform Solutions (B&PS) division, is now appointed EVP, head of B&PS.
  • Giuseppe di Franco, head of Atos in Italy, is now appointed EVP, CEO Central & Eastern Europe, replacing Hanns-Thomas Kopf who will take other responsibilities in the Group.

Biography of Robert Vassoyan

After graduating from French business school ESSEC and holding various positions in the IT industry (marketing and sales director France at Compaq; head of alliances, MEA and France services sales at HP), Robert joined Cisco in 2007 as Sales Director in charge of the French Small & Medium Businesses market and a member of the Executive Committee, before being promoted to Managing Director for Large Accounts a year later. He was then appointed President of Cisco France in August 2011. Robert was elected President of AmCham (American Chamber of Commerce in France) in February 2016. He is also a board member of CESI (Center for Higher Education Industry). Robert is 50 years old and married with three children.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around €12 billion. European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

Dell EMC Debuts PowerEdge Servers with AMD EPYC Chips

Tue, 02/06/2018 - 15:14

AMD notched another EPYC processor win today with Dell EMC’s introduction of three PowerEdge servers (R6415, R7415, and R7425) based on the EPYC 7000-series processor. AMD’s new chip line has been steadily gaining traction among systems builders and cloud providers.

The new Dell servers are being positioned as “highly scalable, single- and dual-socket servers designed to address high-performance workloads, including virtualized storage area networks (vSAN), hybrid-cloud applications, dense virtualization, and big data analytics.” The servers, says Dell, provide up to 20 percent lower total cost of ownership for vSAN and 25 percent more HPC performance for modern workloads.

To some extent, Dell’s adoption of AMD’s new chips came later than expected. When AMD introduced EPYC last June, Dell strongly endorsed the move: “The combination of PowerEdge and the AMD EPYC performance and security capabilities will create unique compute solutions for our customers to accelerate workloads and protect their business,” said Ashley Gorakhpurwalla, president, server solutions, Dell, in the AMD press release.

HPE, Supermicro, Penguin, Baidu, and Microsoft Azure, for example, all took the EPYC plunge earlier. EPYC, of course, competes directly with Intel for x86 sockets. AMD is betting that a big enough price-performance advantage with its new line can win back customers after its absence from the data center processor market. (See the HPCwire article, AMD Showcases Growing Portfolio of EPYC and Radeon-based Systems at SC17.)

Dell EMC PowerEdge R7425

The new Dell servers are offered in both single- and dual-socket versions with design features ranging from 32 to 64 cores, up to 4TB of memory capacity, and 12 to 24x direct NVMe drives optimized for database and analytics workloads. Dell emphasizes EPYC also supports high bandwidth and GPU/FPGA capabilities for HPC applications.

Stimulating a single-socket market, largely absent in the data center today, is an important AMD goal. The company reports demand has so far been split equally between single- and two-socket designs. “We tend to see the single socket really resonating on, let’s call it, the more GPU-centric computing where the CPU tends to be more supervisory as opposed to a foundational computing role,” said Scott Aylor, AMD corporate VP and GM of enterprise solutions business. The one-socket design has also drawn attention in big data applications, where its ability to connect to a massive number of drives is a distinguishing attribute, he said.

Two of the new Dell servers are single-socket designs. Here’s a configuration snapshot from Dell:

  • PowerEdge R7425 “enables fast workload performance” on more cores. It has up to two enterprise-class EPYC processors; memory and IO flexibility with up to 32 DDR4 DIMMs and 128 lanes of PCIe; storage performance with up to 24 NVMe drives; up to 4 terabytes of memory capacity for database analytics; and increased VDI instances with up to 64 cores.
  • PowerEdge R7415 is intended to “scale workloads while managing costs,” says Dell. “The R7415 delivers software defined storage or business analytics in a single processor design.” Features include: memory and IO flexibility with up to 16 DDR4 DIMMs and 128 lanes of PCIe; storage performance with up to 24 NVMe drives; and up to 2 terabytes of memory capacity for in-memory databases and analytics.
  • PowerEdge R6415 “balances resources to support demanding workloads…the R6415 single processor server tightly matches workload needs without adding underutilized resources,” according to Dell. Features include: storage performance with up to 10 NVMe drives; up to 2 terabytes of memory and 128 PCIe lanes. Dell says the R6415 “simplifies and speeds deployments with VMware vSAN and ScaleIO Ready Nodes.”

“With AMD’s EPYC processor integrated into the new Dell EMC PowerEdge platforms, we can deliver the scalability and lower total cost of ownership needed to meet the demands of new emerging workloads,” said Ravi Pendekanti, SVP, product management and marketing, Server and Infrastructure Systems, Dell EMC. “Customers are constantly looking for ways to drive growth and leverage new models of computing. AMD’s single-socket platform is a great example of Dell PowerEdge servers moving the industry forward to solve real customer problems.”

The new servers are available now. As listed on Dell’s website, the R6415 starts at $2,179.00, the R7415 at $2,349.00, and the R7425 at $3,819.00.

One Stop Systems Introduces Highest Bandwidth, 5th Generation NVMe Ion Accelerator Flash Storage Array

Tue, 02/06/2018 - 14:28

SAN DIEGO, Calif., Feb. 6, 2018 — One Stop Systems, Inc. (Nasdaq: OSS), a leading provider of high performance computing GPU accelerators and NVMe flash arrays for a multitude of HPC applications, has introduced a new 2U Ion Accelerator Flash Storage Array.

The new array offers flexible capacity while maintaining the high-bandwidth and low-latency pedigree of Ion Accelerator arrays deployed in hundreds of global installations. This OSS shared flash storage array boasts the latest Ion Accelerator 5.0 software, NVMe drives, networking options and dual Intel Xeon Scalable Processors to support the most demanding applications.

Multiple high-speed networking options allow this array to fit seamlessly into Fibre Channel (up to 32 Gbps), iSCSI (up to 100 Gbps) or InfiniBand (up to 100 Gbps) deployments. The capacity of the Ion Accelerator can be expanded up to 153 terabytes with 24 2.5” NVMe drives. High Availability (HA) is achieved using two arrays in a mirrored setup.

The new version 5.0 of the Ion Accelerator software from OSS provides a complete all-flash storage solution that delivers cost-effective, near-native NVMe performance for data-intensive workloads. Ion Accelerator is designed to deliver consistent ultra-low latency and high bandwidth for the most performance-hungry applications used in information and financial services, healthcare, government, manufacturing, media and entertainment, and other major industries.

Offering 25GB/sec of bandwidth from a single storage box and latency well below 100 µs, Ion software is the ideal solution for delivering on the promise of NVMe flash performance in shared storage. Ion Accelerator has data integrity built into the software, with support for various RAID configurations, as well as an HA option that simultaneously accelerates performance for volumes across two systems.

“This newest release provides game-changing performance, density and fault tolerance,” said OSS Vice President of Engineering, Julia Elbert. “The low overhead with Ion Accelerator software allows systems to deliver near bare-metal performance from the latest NVMe technology flash drives, along with the reliability and data integrity required in mission-critical enterprise applications. With this fifth generation of Ion Accelerator software, existing appliance customers can also upgrade from legacy drives to modern NVMe with zero down time and no data migration costs using our Easy Migrate upgrade program.”

Visitors to the WEST 2018 conference being held in San Diego, CA, February 6-8, can see the 2U Ion Accelerator Flash Storage Array and the fully rugged MIL-STD 4U Ion Accelerator Software-powered FSAn-4 at the OSS booth #1024. Both NVMe arrays are available to order today from One Stop Systems’ highly-trained sales engineers at sales@onestopsystems.com or by calling (760) 745-9883.

About One Stop Systems

One Stop Systems, Inc. (OSS) designs and manufactures high performance compute accelerators, flash storage arrays and customized servers for deep learning, AI, defense, finance and entertainment applications. OSS utilizes the power of PCI Express, the latest GPU accelerators and NVMe flash cards to build award-winning systems, including many industry firsts, for OEMs and government customers. The company’s innovative hardware and Ion Accelerator Software offer exceptional performance and unparalleled scalability. OSS products are available directly, through global distributors, or via its SkyScale cloud services. For more information, go to www.onestopsystems.com.

Source: One Stop Systems

Dr. Whitfield Diffie to Deliver a Keynote at Supercomputing Frontiers Europe 2018

Tue, 02/06/2018 - 13:46

Feb. 6, 2018 — Supercomputing Frontiers Europe 2018 has announced that Dr. Whitfield Diffie will deliver a keynote address on Monday, March 12th, followed by a special session on cryptography and its applications in blockchain and distributed ledger technologies.

Supercomputing Frontiers has also announced that the final program is now available online:

https://supercomputingfrontiers.eu/2018/conference-programme/

For more information on the conference, including information on registration, visit the conference’s website: https://supercomputingfrontiers.eu/2018/

About Supercomputing Frontiers

Supercomputing Frontiers is an annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important trends and substantial innovations in supercomputing.

Source: Supercomputing Frontiers

Dell EMC Expands Server Capabilities for Software-Defined, Edge and High-Performance Computing

Tue, 02/06/2018 - 09:17

ROUND ROCK, Texas, Feb. 6, 2018 — Dell EMC announced three new servers designed for software-defined environments, edge and high-performance computing (HPC). The PowerEdge R6415, PowerEdge R7415 and PowerEdge R7425 expand the 14th generation of the Dell EMC PowerEdge server portfolio with new capabilities to address the demanding workload requirements of today’s modern data center. All three rack servers with the AMD EPYC processor offer highly scalable platforms with outstanding total cost of ownership (TCO).

“As the bedrock of the modern data center, customers expect us to push server innovation further and faster,” said Ashley Gorakhpurwalla, president, Server and Infrastructure Systems at Dell EMC. “As customers deploy more IoT solutions, they need highly capable and flexible compute at the edge to turn data into real-time insights; these new servers are engineered to deliver that while lowering TCO.”

The combined innovation of AMD EPYC processors and pioneering PowerEdge server technology delivers compute capabilities that optimally enhance emerging workloads. With up to 32 cores (64 threads), 8 memory channels and 128 PCIe lanes, AMD’s EPYC processors offer flexibility, performance, and security features for today’s software-defined ecosystem.

“We are pleased to partner again with Dell EMC and integrate our AMD EPYC processors into the latest generation of PowerEdge servers to deliver enhanced scalability and outstanding total cost of ownership,” said Forrest Norrod, senior vice president and general manager of the Datacenter and Embedded Solutions Business Group (DESG), AMD. “Dell EMC servers are purpose built for emerging workloads like software-defined storage and heterogeneous compute and fully utilize the power of AMD EPYC. Dell EMC always keeps the server ecosystem and customer requirements top of mind; this partnership is just the beginning as we work together to create solutions that unlock the next chapter of data center growth and capability.”

Technology is scaling at a relentless pace and seeing record adoption, which has resulted in emerging workloads that are growing in scale and scope. These workloads are driving new system requirements and features that are, in turn, advancing development and adoption of technologies such as NVMe, FPGAs and in-memory databases. The PowerEdge R6415, PowerEdge R7415 and PowerEdge R7425 are designed to scale up as customers’ workloads increase and have the flexibility to support today’s modern data center.

Like all 14th generation PowerEdge servers, the new servers will continue to offer a scalable business architecture and intelligent automation with iDRAC9 and Quick Sync 2 management support. Integrated security is always a priority, and the integrated cyber-resilient architecture security features of Dell EMC PowerEdge servers protect customers’ businesses and data for the life of the server.

These servers offer up to 4TB of memory capacity, enhanced for database management system (DBMS) and analytics workload flexibility, and are further optimized for the following environments:

  • Edge computing deployments – The highly configurable, 1U single-socket Dell EMC PowerEdge R6415, with up to 32 cores, offers ultra-dense and scale-out computing capabilities. Storage flexibility is enabled with up to 10 PCIe NVMe drives.
  • Software-defined Storage deployments – The 2U single-socket Dell EMC PowerEdge R7415 is the first AMD EPYC™-based server platform certified as a VMware vSAN Ready Node and offers up to 20% better TCO per four-node cluster for vSAN deployments at the edge. With 128 PCIe lanes, it offers accelerated east/west bandwidth for cloud computing and virtualization. Additionally, with up to 2TB memory capacity and up to 24 NVMe drives, customers can improve storage efficiency and scale quickly at a fraction of the cost of traditional-built storage.
  • High performance computing – The dual-socket Dell EMC PowerEdge R7425 delivers up to 24% improved performance versus the HPE DL385 for containers, hypervisors, virtual machines and cloud computing, and up to 25% absolute performance improvement for HPC workloads like computational fluid dynamics (CFD). With up to 64 cores, it offers high bandwidth with dense GPU/FPGA capability. On standard benchmarks, the server with superior memory bandwidth and core density provided excellent results across a wide range of HPC workloads.

The new line of PowerEdge servers powered by AMD EPYC processors will be available to channel partners across the globe, so they can cover a broad spectrum of configurations to optimize diverse workloads for customers.

Availability

  • Dell EMC PowerEdge R7425, R7415, R6415 are available now worldwide.
  • vSAN Ready Nodes are available now with the PowerEdge R7425, R7415 and the R6415.

About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries – including 98 percent of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.

About Dell Inc.

Dell Inc., a part of Dell Technologies, provides customers of all sizes – including 98 percent of the Fortune 500 – with a broad, innovative portfolio from edge to core to cloud. Dell Inc. comprises Dell client as well as Dell EMC infrastructure offerings that enable organizations to modernize, automate and transform their data center while providing today’s workforce and consumers what they need to securely connect, produce, and collaborate from anywhere at any time.

Source: Dell EMC

Micron Announces Chief Financial Officer Transition

Tue, 02/06/2018 - 08:36

BOISE, Idaho, Feb. 6, 2018 — Micron Technology, Inc. (Nasdaq:MU) announced today that the company has appointed David Zinsner as senior vice president and chief financial officer, effective Feb. 19, 2018. Zinsner succeeds Ernie Maddock, who is retiring from Micron but will remain with the company as an adviser through early June to ensure a smooth transition. Zinsner will report directly to Sanjay Mehrotra, president and CEO.

“On behalf of the company, I want to thank Ernie for his significant contributions to Micron,” Mehrotra said. “He has helped position the company for continued strong growth, and we wish him the best in his future endeavors.”

“I am fortunate to have been a part of Micron’s progress over the last few years,” Maddock said. “My focus now is on supporting Dave and Sanjay during the transition period to ensure a seamless and effective handoff.”

Zinsner joins Micron with over 20 years of financial and operations experience in the semiconductor and technology industry. He most recently served as president and chief operating officer at Affirmed Networks. Prior to that, Zinsner was senior vice president of finance and chief financial officer for eight years at Analog Devices, and before that, he was senior vice president and chief financial officer for four years at Intersil Corp.

“Dave brings a great combination of financial expertise and executive experience to Micron and has a strong track record of achieving outstanding results,” Mehrotra said. “We look forward to his leadership in driving our financial strategy and delivering significant value to our shareholders.”

“I am very excited to be joining Micron at a time when the company is uniquely positioned to take advantage of growing demand for memory and storage solutions across a wide range of industries,” Zinsner said. “I look forward to working with the Micron team to capitalize on those trends and to take the company to the next level.”

Zinsner holds a master’s degree in business administration, finance and accounting from Vanderbilt University and a bachelor’s degree in industrial management from Carnegie Mellon University.

In a separate press release, Micron today updated its financial outlook for its fiscal second quarter of 2018: investors.micron.com.

Additional information on David Zinsner is available at http://www.micron.com/media.

About Micron

We are an industry leader in innovative memory and storage solutions. Through our global brands — Micron, Crucial and Ballistix — our broad portfolio of high-performance memory and storage technologies, including DRAM, NAND, NOR Flash and 3D XPoint memory, is transforming how the world uses information to enrich life. Backed by nearly 40 years of technology leadership, our memory and storage solutions enable disruptive trends, including artificial intelligence, machine learning and autonomous vehicles, in key market segments like cloud, data center, networking and mobile. Our common stock is traded on the NASDAQ under the MU symbol. To learn more about Micron Technology, Inc., visit www.micron.com.

Source: Micron

IARPA Ramps Up Molecular Information Storage Program

Tue, 02/06/2018 - 08:21

Later this month IARPA will hold a Proposers’ Day to kick off its planned, four-year Molecular Information Storage (MIST) project. “Today’s exabyte-scale data centers occupy large warehouses, consume megawatts of power, and cost billions of dollars to build, operate and maintain over their lifetimes. This resource intensive model does not offer a tractable path to scaling beyond the exabyte regime in the future,” says IARPA.

The search for new storage technologies is hardly new. In recent years the proliferation of data generating devices (scientific instruments and commercial IoT) and the rise of AI and data analytics capabilities to make use of vast datasets have boosted pressure to find alternative approaches to storage.

The MIST program is expected to last four years and be composed of two 24-month phases. “The desired capabilities” for both phases of the program are described by three Technical Areas (TAs):

  • TA1 (Storage). Develop a table-top device capable of writing information to molecular media with a target throughput and resource utilization budget. Multiple, diverse approaches are anticipated, which may utilize DNA, polypeptides, synthetic polymers, or other sequence-controlled polymer media.
  • TA2 (Retrieval). Develop a table-top device capable of randomly accessing information from molecular media with a target throughput and resource utilization budget. Multiple, diverse approaches are anticipated, which may utilize optical sequencing methods, nanopores, mass spectrometry, or other methods for sequencing polymers in a high-throughput manner.
  • TA3 (Operating System). Develop an operating system for use with storage and retrieval devices that coordinates addressing, data compression, encoding, error-correction and decoding of files from molecular media in a manner that supports efficient random access at scale (see the illustrative sketch below). Multiple, diverse approaches are anticipated, which may draw on established methods from the storage industry, or develop new methods to accommodate constraints imposed by polymer media.

The end result of the program will be technologies that jointly support end-to-end storage and retrieval at the terabyte scale, and which present a clear and commercially viable path to future deployment at the exabyte scale. Collaborative efforts and teaming among potential performers are highly encouraged.
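The TA3 layering (addressing, compression, encoding and error correction over a polymer medium) mirrors a conventional storage stack. As a purely illustrative sketch, with no details drawn from the IARPA announcement, the following Python encodes an addressed block into a four-base alphabet at two bits per base, with a toy checksum standing in for a real error-correcting code:

    # Illustrative only: a toy encode/decode path in the spirit of TA3.
    # Assumptions (not from the IARPA announcement): 2 bits per base,
    # a 4-byte block address, and a 1-byte checksum standing in for real ECC.

    BASES = "ACGT"  # four symbols, so each base carries 2 bits

    def encode_block(address, payload):
        """Encode an addressed payload as a DNA-like base string."""
        record = address.to_bytes(4, "big") + payload
        record += bytes([sum(record) % 256])  # toy checksum, not real ECC
        bits = "".join(f"{byte:08b}" for byte in record)
        return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

    def decode_block(seq):
        """Recover (address, payload) from a base string, verifying the checksum."""
        bits = "".join(f"{BASES.index(base):02b}" for base in seq)
        record = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        body, checksum = record[:-1], record[-1]
        if sum(body) % 256 != checksum:
            raise ValueError("read error: checksum mismatch")
        return int.from_bytes(body[:4], "big"), body[4:]

    assert decode_block(encode_block(7, b"hello")) == (7, b"hello")

A real TA3 system would replace the checksum with error-correcting codes matched to the error modes of synthesis and sequencing, and add an index so that individual blocks can be retrieved at random.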

“The scale and complexity of the world’s ‘big data’ problems are increasing rapidly,” said MIST program manager David Markowitz. “Use cases that require storage and random access from exabytes of mostly unstructured data are now well-established in the private sector and are of increasing relevance to the public sector.”

Not surprisingly, IARPA is emphasizing the multidisciplinary nature of the project. Among the disciplines expected to be tapped are chemistry, synthetic biology, molecular biology, biochemistry, bioinformatics, microfluidics, semiconductor engineering, computer science and information theory. IARPA is seeking participation from academic institutions and companies from around the world.

The Proposers’ Day is February 21, and registration closes on February 14, 2018. Here’s a link to the program announcement: https://www.iarpa.gov/index.php/research-programs/mist

‘Next Generation’ Universe Simulation Is Most Advanced Yet

Mon, 02/05/2018 - 18:24

The research group that in 2014 gave us the most detailed time-lapse simulation of the universe’s evolution, spanning 13.8 billion years, is back in the spotlight with an even more advanced cosmological model that is providing new insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed, and where magnetic fields originate.

Like the original Illustris project, Illustris: The Next Generation (IllustrisTNG for short) follows the progression of a cube-shaped universe from just after the Big Bang to the present day using the power of supercomputing. New physics and other refinements have been added to the original model and the scope of the simulated universe has been expanded to 1 billion light-years per side (from 350 million light-years per side previously). The first results from the project have been published in three separate articles in the journal Monthly Notices of the Royal Astronomical Society (Vol. 475, Issue 1).

Visualization of the intensity of shock waves in the cosmic gas (blue) around collapsed dark matter structures (orange/white). Source: IllustrisTNG

A press release put out by the Max Planck Institute for Astrophysics, one of the partners, highlights the significance:

At its intersection points, the cosmic web of gas and dark matter predicted by IllustrisTNG contains galaxies quite similar in shape and size to real galaxies. For the first time, hydrodynamical simulations could directly compute the detailed clustering pattern of galaxies in space. Comparison with observational data—including the newest large surveys—demonstrates the high degree of realism of IllustrisTNG. In addition, the simulations predict how the cosmic web changes over time, in particular in relation to the underlying “backbone” of the dark matter cosmos.

“It is particularly fascinating that we can accurately predict the influence of supermassive black holes on the distribution of matter out to large scales,” said principal investigator Prof. Volker Springel of the Heidelberg Institute for Theoretical Studies. “This is crucial for reliably interpreting forthcoming cosmological measurements.”

The team also includes researchers from the Max Planck Institutes for Astronomy (MPIA, Heidelberg) and Astrophysics (MPA, Garching), Harvard University, the Massachusetts Institute of Technology (MIT) and the Flatiron Institute’s Center for Computational Astrophysics (CCA).

Thin slice through the cosmic large-scale structure in the largest simulation of the IllustrisTNG project. The displayed region extends about 1.2 billion light-years from left to right. The underlying simulation is presently the largest magneto-hydrodynamic simulation of galaxy formation, containing more than 30 billion volume elements and particles.

To capture the small-scale turbulent physics at the heart of galaxy formation, the astrophysicists used a powerful version of the highly parallel moving-mesh code AREPO, which they deployed on Germany’s fastest supercomputer, Hazel Hen. Ancillary and test runs of the project were also carried out on the Stampede supercomputer at the Texas Advanced Computing Center, on the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility, and on MIT/Harvard computing resources.

As detailed on the project website, IllustrisTNG actually consists of 18 simulations in total at varying scales. The largest (the highest-resolution TNG300 simulation) occupied 2,000 of Hazel Hen’s Xeons for just over two months. The simulations together generated more than 500 terabytes of data and will keep the team busy for years to come.

A visualization from the project shows the formation of a massive “late-type,” star-forming disk galaxy.

 

Read more about IllustrisTNG at their website.

Former Intel Pres Launches ARM-based Server Chip Venture

Mon, 02/05/2018 - 14:00

Renée James, who held the highest rank of any woman in the history of Intel Corp., has emerged as CEO of a well-heeled, venture-backed company developing ARM-based chips, based on technology from Applied Micro Circuits, that will compete with Intel and others for a portion of the server chip market. It is the latest step in what she has called her “leadership journey.”

James’s new company, Ampere, located near Intel headquarters in Santa Clara, is funded by private equity firm The Carlyle Group, which James joined in 2016. When she resigned from Intel the previous year, concluding 28 years there, she held the position of president, considered the no. 2 spot to CEO Brian Krzanich.

Ampere’s processors are built for private and public clouds and, according to the company, deliver “high memory performance and substantially lower power and total cost of ownership.” The processors feature a custom Armv8-A 64-bit core operating at up to 3.3 GHz, support for 1TB of memory, and a power envelope of 125 watts, the company said, adding that the processors are sampling now and will be in production in the second half of the year.

According to a story in the New York Times, in interviews James is not taking an adversarial stance toward her former company, stating that she respects Intel and that Ampere processors will handle specific cloud services workloads that are not in Intel’s bailiwick.

“I think they’re the best in the world at what they do,” James told the Times. “I just don’t think they’re doing what comes next.”

Renée James

“We have an opportunity with cloud computing to take a fresh approach with products that are built to address the new software ecosystem,” said James in a company announcement. “The workloads moving to the cloud require more memory, and at the same time, customers have stringent requirements for power, size and costs. The software that runs the cloud enables Ampere to design with a different point of view. The Ampere team’s approach and architecture meets the expectation on performance and power and gives customers the freedom to accelerate the delivery of the most memory-intensive applications and workloads such as AI, big data, storage and database in their next-generation data centers.”

When Paul Otellini ended his tenure as CEO of Intel in 2013, James was thought to be a candidate to take his role. As it turned out, James and Krzanich put forward a plan in which she would become president and he would be CEO, a proposal approved by Intel’s board of directors.

But she never lost sight of taking on a company’s top spot, and two years later, when she resigned from Intel, she stated in a memo sent to employees that “when Brian and I were appointed to our current roles, I knew then that being the leader of a company was something that I desired as part of my own leadership journey.”

The emergence of the new, 250-employee company from stealth mode comes at a time when the Meltdown and Spectre security design flaws in x86 processors have put Intel on the defensive.

On the other hand, Arm processors – available from companies such as Qualcomm and Cavium – have not yet made major inroads into the hyperscale cloud and data center server market, 98 percent of which is controlled by Intel.

NSB Issues Warning Call on U.S. STEM Worker Development

Mon, 02/05/2018 - 13:41

Last week the National Science Board issued a companion policy statement – Our Nation’s Future Competitiveness Relies on Building a STEM-capable U.S. Workforce – meant to reinforce worrisome data scattered throughout the 2018 National Science & Engineering Indicators report released in mid-January.

“The U.S. can no longer rely on a distinct and relatively small ‘STEM workforce.’ Instead, we need a STEM-capable U.S. workforce that leverages the hard work, creativity, and ingenuity of women and men of all ages, all education levels, and all backgrounds,” argues the NSB.

The policy statement is a manifesto on a topic that has long resonated in the HPC community; however, it sometimes seems the community has grown “tone-deaf” because such calls have become perennial and progress seems scant. NSB noted this too:

“Numerous entities, including the National Science Foundation (NSF), have undertaken a myriad of initiatives spanning decades aimed at leveraging the talents of all segments of our population, especially groups historically underrepresented in STEM. Yet, in spite of some progress, crippling disparities in STEM education remain…”

The NSB offers the following rather broad ideas, steering clear of specifics:

“Considering the increasing demands placed on students, workers, businesses, and government budgets, institutions must partner to build the U.S. workforce of the future. These joint efforts are necessary in order to prosper in an increasingly globally competitive knowledge- and technology-intensive world.

  • Governments at all levels should empower all segments of our population through investments in formal and informal education and workforce development throughout an individual’s life-span. This includes redoubling our commitment to training the next generation of scientists and engineers through sustained and predictable Federal investments in graduate education and basic research.
  • Businesses should invest in workplace learning programs–such as apprenticeships and internships–that utilize local talent. By leveraging partnerships between academic institutions and industry, such as those catalyzed by NSF’s Advanced Technological Education Program (ATE), businesses will be less likely to face a workforce “skills gap.”
  • Governments and businesses should expand their investments in community and technical colleges, which continue to provide individuals with on-ramps into skilled technical careers as well as opportunities for skill renewal and development for workers at all education levels throughout their careers.
  • To accelerate progress on diversifying the STEM-capable U.S. workforce, the Nation should continue to invest in underrepresented segments of the population and leverage Minority Serving Institutions to this end.
  • Collectively, we must proceed with urgency and purpose to ensure that this Nation and all our people are ready to meet the challenges and opportunities of the future.”

The lack of diversity and effective recruitment from underserved population segments is a point of emphasis in the policy statement. Another of the biggest worries called out in the statement is the decline in international graduate level STEM students and changing attitudes towards remaining in the U.S.

“While the U.S. remains the top destination for internationally mobile students, its share of these students declined from 25% in 2000 to 19% in 2015 as other countries increasingly compete for them…Our Nation’s ability to attract students from around the world is important, but our competitive advantage in this area is fully realized when these individuals stay to work in the United States post-graduation.

“The overall “stay rates” for foreign-born non-citizens who received a Ph.D. from U.S. institutions have generally trended upwards since the turn of the century, reaching 70% for both the 5-year and 10-year stay rates in 2015. However, the percentage of new STEM doctorates from China and India—the two top countries of origin—with definite plans to stay in the U.S. has declined over the past decade (from 59% to 49% for China and 62% to 51% for India). As other nations build their innovation capacity through investments in R&D and higher education, we must actively find ways to attract and retain foreign talent and fully capitalize on our own citizens.”

Link to the NSB companion policy statement: https://www.nsf.gov/news/news_summ.jsp?cntn_id=244391&WT.mc_id=USNSF_62&WT.mc_ev=click

HPCwire article on the full 2018 S&E Indicators Report: U.S. Leads but China Gains in NSF 2018 S&E Indicators Report

XSEDE’s Maverick Helps Explore Next Generation Solar Cells and LEDs

Mon, 02/05/2018 - 09:16

Feb. 5, 2018 — Solar cells can’t stand the heat. Photovoltaics lose some energy as heat in converting sunlight to electricity. The reverse holds true for lights made with light-emitting diodes (LED), which convert electricity into light. Some scientists think there might be light at the end of the tunnel in the hunt for better semiconductor materials for solar cells and LEDs, thanks to supercomputer simulations that leveraged graphics processing units to model nanocrystals of silicon.

Defect-induced conical intersections (DICIs) allow one to connect material structure to the propensity for nonradiative decay, a source of heat loss in solar cells and LED lights. An XSEDE allocation on the Maverick supercomputer accelerated the quantum chemistry calculations. Credit: Ben Levine.

Scientists call the heat loss in LEDs and solar cells non-radiative recombination. And they’ve struggled to understand the basic physics of this heat loss, especially for materials with molecules of over 20 atoms.

“The real challenge here is system size,” explained Ben Levine, associate professor in the Department of Chemistry at Michigan State University. “Going from that 10-20 atom limit up to 50-100-200 atoms has been the real computation challenge here,” Levine said. That’s because the calculations involved scale with the size of the system to some power, sometimes four or up to six, Levine said. “Making the system ten times bigger actually requires us to perform maybe 10,000 times more operations. It’s really a big change in the size of our calculations.”

Levine’s calculations involve a concept in molecular photochemistry called a conical intersection: a point of degeneracy between the potential energy surfaces of two or more electronic states in a closed system. A perspective study published in September 2017 in the Journal of Physical Chemistry Letters found that recent computational and theoretical developments have enabled the location of defect-induced conical intersections in semiconductor nanomaterials.
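In the standard two-state picture (a generic textbook formulation, not something taken from the study itself), the idea can be made concrete. Writing the electronic Hamiltonian in a two-state basis as a function of the nuclear coordinates, the adiabatic potential energy surfaces are

    \[
    E_{\pm}(\mathbf{R}) = \frac{H_{11}(\mathbf{R}) + H_{22}(\mathbf{R})}{2} \pm \sqrt{\left(\frac{H_{11}(\mathbf{R}) - H_{22}(\mathbf{R})}{2}\right)^{2} + H_{12}(\mathbf{R})^{2}}
    \]

and the two surfaces can touch only where both \(H_{11} = H_{22}\) and \(H_{12} = 0\). Because two independent conditions must vanish simultaneously, the degeneracy forms a seam of dimension \(N-2\) in an \(N\)-dimensional nuclear coordinate space, and the surfaces open into a double cone along the remaining two directions, which is what gives the intersection its name.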

“The key contribution of our work has been to show that we can understand these recombination processes in materials by looking at these conical intersections,” Levine said. “We’ve been able to show that the conical intersections can be associated with specific structural defects in the material.”

The holy grail for materials science would be to predict the non-radiative recombination behavior of a material based on its structural defects. These defects come from ‘doping’ semiconductors with impurities to control and modulate their electrical properties.

Looking beyond the ubiquitous silicon semiconductor, scientists are turning to silicon nanocrystals as candidate materials for the next generation of solar cells and LEDs. Silicon nanocrystals are molecular systems in the ballpark of 100 atoms with extremely tunable light emission compared to bulk silicon. And scientists are limited only by their imagination in ways to dope and create new kinds of silicon nanocrystals.

“We’ve been doing this for about five years now,” Levine explained about his conical intersection work. “The main focus of our work has been proof-of-concept, showing that these are calculations that we can do; that what we find is in good agreement with experiment; and that it can give us insight into experiments that we couldn’t get before,” Levine said.

Levine addressed the computational challenges of his work using graphics processing unit (GPU) hardware, the kind typically designed for computer games and graphics design. GPUs excel at churning through linear algebra calculations, the same math involved in Levine’s calculations that characterize the behavior of electrons in a material. “Using the graphics processing units, we’ve been able to accelerate our calculations by hundreds of times, which has allowed us to go from the molecular scale, where we were limited before, up to the nano-material size,” Levine said.
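As a minimal illustration of why GPUs pay off here (a generic sketch using the NumPy and CuPy libraries, not the group’s actual electronic structure code), the core operation of diagonalizing a large, dense symmetric matrix, the kind of linear algebra that characterizes electronic states, moves to the GPU with essentially the same call:

    # Generic sketch: the same dense symmetric eigensolve on CPU (NumPy)
    # and GPU (CuPy). Illustrative only; requires a CUDA-capable GPU with
    # the cupy package installed.
    import numpy as np
    import cupy as cp

    n = 4000
    a = np.random.rand(n, n)
    h_cpu = (a + a.T) / 2                  # symmetric, Hamiltonian-like matrix

    evals_cpu, _ = np.linalg.eigh(h_cpu)   # eigensolve on the CPU

    h_gpu = cp.asarray(h_cpu)              # copy the matrix to GPU memory
    evals_gpu, _ = cp.linalg.eigh(h_gpu)   # same operation, run on the GPU

    # The two results agree to floating-point tolerance.
    assert np.allclose(evals_cpu, cp.asnumpy(evals_gpu), atol=1e-5)

Scaling this pattern across the many GPUs of a system like Maverick is what lets many candidate materials be screened at once.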

Cyberinfrastructure allocations from XSEDE, the eXtreme Science and Engineering Discovery Environment, gave Levine access to over 975,000 compute hours on the Maverick supercomputing system at the Texas Advanced Computing Center (TACC). Maverick is a dedicated visualization and data analysis resource architected with 132 NVIDIA Tesla K40 “Atlas” GPUs, providing remote visualization and GPU computing to the national community.

“Large-scale resources like Maverick at TACC, which have lots of GPUs, have been just wonderful for us,” Levine said. “You need three things to be able to pull this off. You need good theories. You need good computer hardware. And you need facilities that have that hardware in sufficient quantity, so that you can do the calculations that you want to do.”

Levine explained that he got started using GPUs to do science ten years ago, back when he was in graduate school, chaining together Sony PlayStation 2 video game consoles to perform quantum chemical calculations. “Now, the field has exploded, where you can do lots and lots of really advanced quantum mechanical calculations using these GPUs,” Levine said. “NVIDIA has been very supportive of this. They’ve released technology that helps us do this sort of thing better than we could do it before.” That’s because NVIDIA developed GPUs to more easily pass data, and they developed the popular and well-documented CUDA interface.

“A machine like Maverick is particularly useful because it brings a lot of these GPUs into one place,” Levine explained. “We can sit down and look at 100 different materials or at a hundred different structures of the same material. We’re able to do that using a machine such as Maverick. A desktop gaming machine has just one GPU, so we can do one calculation at a time; the large-scale studies aren’t possible,” said Levine.

Now that Levine’s group has demonstrated the ability to predict conical intersections associated with heat loss from semiconductors and semiconductor nanomaterials, he said the next step is to do materials design in the computer.

Said Levine: “We’ve been running some calculations where we use a simulated evolution, called a genetic algorithm, where you simulate the evolution process. We’re actually evolving materials that have the property that we’re looking for, one generation after the other. Maybe we have a pool of 20 different molecules. We predict the properties of those molecules. Then we randomly pick, say, less than ten of them that have desirable properties. And we modify them in some way. We mutate them. Or in some chemical sense ‘breed’ them with one another to create new molecules, and test those. This all happens automatically in the computer. A lot of this is done on Maverick also. We end up with a new molecule that nobody has ever looked at before, but that we think they should look at in the lab. This automated design process has already started.”
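The loop Levine describes is a textbook genetic algorithm. The following is a minimal sketch of that loop; the string encoding, fitness function, and rates are placeholders for illustration, not the group’s actual materials-design code:

    # Toy genetic algorithm following the loop Levine describes: score a
    # pool of candidates, keep the fittest, mutate and "breed" them, repeat.
    # The encoding and fitness function here are placeholders.
    import random

    GENES = "ABCD"  # stand-in for chemical building blocks

    def fitness(molecule):
        # Placeholder property predictor; in practice this would be an
        # expensive quantum chemistry calculation, ideally run on GPUs.
        return molecule.count("A") - molecule.count("D")

    def mutate(molecule, rate=0.1):
        return "".join(random.choice(GENES) if random.random() < rate else g
                       for g in molecule)

    def breed(a, b):
        cut = random.randrange(1, len(a))  # single-point crossover
        return a[:cut] + b[cut:]

    pool = ["".join(random.choices(GENES, k=12)) for _ in range(20)]
    for generation in range(50):
        pool.sort(key=fitness, reverse=True)
        parents = pool[:8]                 # keep the fittest candidates
        pool = parents + [mutate(breed(*random.sample(parents, 2)))
                          for _ in range(12)]

    print("best candidate:", max(pool, key=fitness))

In the real workflow the fitness evaluation is the expensive step, one quantum chemistry calculation per candidate, which is why a GPU-dense machine such as Maverick makes the approach practical.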

The study, “Understanding Nonradiative Recombination through Defect-Induced Conical Intersections,” was published September 7, 2017 in the Journal of Physical Chemistry Letters (DOI: 10.1021/acs.jpclett.7b01707). The study authors are Yinan Shu (University of Minnesota); B. Scott Fales (Stanford University, SLAC); Wei-Tao Peng and Benjamin G. Levine (Michigan State University). The National Science Foundation funded the study (CHE-1565634).

Source: Jorge Salazar, TACC

ISC High Performance Organizers Announce Return of ISC STEM Student Day

Mon, 02/05/2018 - 08:36

FRANKFURT, Germany, Feb. 5, 2018 – The organizers of ISC High Performance are excited to announce the return of the ISC STEM Student Day, with an attractive program for students and sponsoring organizations. This year, students attending the program will receive an exclusive tutorial on high performance computing, machine learning and data analytics, and sponsoring organizations will receive an increased visibility during the full-day program.

From all received applications, 200 regional and international Science, Technology, Engineering, and Mathematics (STEM) students who show a keen interest in HPC will be admitted into this program for free. These students range from those enrolled in bachelor’s degree programs right through to those completing their PhD research in fields such as computer science, computer engineering, information technology, autonomous systems, physics and mathematics. Ages range from 19 to 30.

This non-profit effort was first introduced in 2017, attracting around 100 students, mainly from Germany, the US, the UK, Spain, and China, and was sponsored by 10 organizations in the HPC space.

“Unlike regular STEM events, the aim of our program is to connect the next generation of regional and international STEM practitioners with the HPC industry and its key players. We hope this will encourage them to pursue careers in this space, and that we will see them as part of the HPC user community in the future.

“The ISC STEM Student Day is also a great opportunity for organizations to associate themselves as STEM employers and invest in their future HPC user base,” said Martin Meuer, the general co-chair of the ISC High Performance conference series. 

The 2018 ISC STEM Student Day will take place on Wednesday, June 27, during the ISC High Performance conference and exhibition. The full conference will be held Sunday, June 24 through Thursday, June 28, at Messe Frankfurt and expects 3,500 attendees.

This program is very attractive to organizations that offer HPC training and education, as well as employment opportunities, since some of these students will graduate soon. Please contact sales@isc-group.com to get involved.

Marjolein Oorsprong, Communication Officer at PRACE, a returning builder-level sponsor, had the following to say about the event:

“The ISC STEM Student Day (gala evening and booth visits) allowed PRACE to come into direct contact with students who showed a keen interest in HPC and inform them of all our student-related activities. It provided an opportunity to explain to them how participating in our training events can benefit their studies. By sponsoring events such as the ISC STEM Student Day, we hope to attract more applications to the PRACE Summer of HPC program, the International HPC Summer School, and other PRACE Training courses.”

“Experienced members of our executive and management staff were not only given the chance to share their knowledge from within the industry, but also gain valuable insights from academic leaders and have constructive one-on-one conversations with students at the ISC STEM Day & Gala. This event is a great opportunity for tech organizations, such as Super Micro, to meet the brightest stars in STEM,” remarked Cheyene van Noordwyk, Marketing Manager at Super Micro Computer.

The 2018 ISC STEM program includes a daytime tutorial on HPC Applications, Systems & Programming Languages, conducted by Dr. Bernd Mohr, Senior Scientist at the Jülich Supercomputing Centre, and a tutorial on Machine Learning & Data Analytics by his colleague, Prof. Morris Riedel, who heads the High Productivity Data Processing research group at the Jülich Supercomputing Centre and is an Adjunct Associate Professor in high productivity processing of big data at the School of Engineering and Natural Sciences, University of Iceland.

Later in the day, the students will have the chance to visit the booths and exhibits of sponsoring organizations to gain an impression of their product offerings and business units. Another highlight of the program is the evening event, which includes a dinner and a job fair in a casual atmosphere at a nearby hotel. All sponsoring organizations will have two exclusive hours of face-to-face time with the students at the STEM job fair.

Students will be able to register for the program starting mid-April via the program webpage.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Source: ISC High Performance

The post ISC High Performance Organizers Announce Return of ISC STEM Student Day appeared first on HPCwire.

BOXX Demos APEXX S3 Workstation at SOLIDWORKS World 2018

Mon, 02/05/2018 - 08:04

AUSTIN, Tex., Feb. 5, 2018 — BOXX Technologies, a leading innovator of high-performance workstations, rendering systems, and servers, today announced that APEXX S3, the world’s fastest workstation, will be demonstrated at BOXX booth #811 at SOLIDWORKS World 2018, Feb. 4-7, at the Los Angeles Convention Center. A Dassault Systèmes Corporate Partner, designated SOLIDWORKS Solution Partner, and leading manufacturer of CATIA and SOLIDWORKS-certified workstations, BOXX will also debut APEXX T3, a new 16-core AMD Ryzen Threadripper workstation, as well as two GoBOXX mobile workstations. BOXX will continue its longstanding tradition of letting attendees test systems by bringing along their own assemblies.

“We design our own chassis, so we’re SOLIDWORKS users ourselves,” said BOXX VP of Engineering Tim Lawrence. “That experience provides us with a unique perspective and clear understanding of engineering and product design workflows, as well as which configurations will provide optimal application performance. Our best-selling APEXX S3 is the embodiment of BOXX expertise.”

APEXX S3 features the latest Intel Core i7-8700K processor overclocked to 4.8 GHz. The liquid-cooled system sustains that frequency across all cores—even in the most demanding situations. 8th generation Intel processors offer a significant performance increase over previous Intel technology, and BOXX is the only workstation manufacturer providing the new microarchitecture professionally overclocked and backed by a three-year warranty. In addition to overclocking, BOXX went a step further by removing unused, outdated technology (like optical drive bays) in order to maximize productive space. Inside its compact, industrial chassis, the computationally dense APEXX S3 supports up to two dual-slot NVIDIA or AMD Radeon Pro professional graphics cards plus an additional single-slot card, and features solid state drives and faster 2666MHz DDR4 memory.

At SOLIDWORKS World, the best-in-class APEXX S3, configured with dual NVIDIA Quadro P5000 GPUs, will demonstrate SOLIDWORKS 2018, as well as an upcoming version of SOLIDWORKS Visualize featuring OptiX AI-accelerated denoising technology. The workstation offers professional-grade performance for all 3D CAD, animation, motion media, and rendering applications and will also be utilized inside the SOLIDWORKS booth.

Along with the APEXX S3, attendees will have an opportunity to see the soon-to-be-released APEXX T3, an AMD-based workstation featuring a 16-core Ryzen Threadripper processor and Radeon Pro WX9100 GPU. At the BOXX booth, APEXX T3 will demonstrate both SOLIDWORKS 2018 and AMD ProRender for SOLIDWORKS, while also making its debut inside the AMD booth.

The BOXX booth is also home to mobile workstation demonstrations, including the GoBOXX MXL VR. Designed for engineers and architects eager to incorporate mobile virtual reality into their workflow, GoBOXX MXL VR features a true desktop-class Intel Core i7 processor (4.0GHz), NVIDIA GeForce graphics, and up to 64GB of RAM. GoBOXX MXL VR will demonstrate a SOLIDWORKS-to-VR workflow, as will the ultra-thin GoBOXX SLM VR. Featuring a four-core Intel Core i7 processor and professional NVIDIA Quadro graphics on a 15″ full HD (1920×1080) display, GoBOXX SLM VR provides ample performance and reliability, enabling engineers and product designers to work from anywhere.

“Year after year, our booth is a top destination for SOLIDWORKS World attendees,” said Shoaib Mohammad, BOXX VP of Marketing and Business Development. “We offer consultation with experts, allow user participation in our demos, and help SOLIDWORKS users determine firsthand which BOXX solutions will suit their workflows and empower them to work faster and more efficiently than ever before.”

For further information and pricing, contact a BOXX sales consultant in the US at 1-877-877-2699. Learn more about BOXX systems, finance options, and how to contact a worldwide reseller, by visiting www.boxx.com.

About BOXX Technologies

BOXX is a leading innovator of high-performance computer workstations, rendering systems, and servers for engineering, product design, architecture, visual effects, animation, deep learning, and more. For 22 years, BOXX has combined record-setting performance, speed, and reliability with unparalleled industry knowledge to become the trusted choice of creative professionals worldwide. For more information, visit www.boxx.com.

Source: BOXX Technologies

The post BOXX Demos APEXX S3 Workstation at SOLIDWORKS World 2018 appeared first on HPCwire.

Frank Baetke Departs HPE, Stays on as EOFS Chairman

Fri, 02/02/2018 - 13:03

EOFS Chairman Frank Baetke announced today that he is leaving HPE, where he served as global HPC technology manager for many years. He will keep his post as chair of the European Open File System (EOFS) organization, a position he was elected to in 2015. Baetke was a long-time liaison for HP-CAST, HPE’s user group meeting for high performance computing.

In a letter to the EOFS community, Baetke writes:

I’m no longer with HPE officially, but will definitely remain active in the HPC landscape. I will also remain active as EOFS Chairman, representing HPE as a delegate, as membership is by corporation or institution.

In that role I will again be planning Birds of a Feather sessions (BoFs) at the upcoming ISC’18 in June; the success we had at SC17 with two very well-attended BoFs is really encouraging. Let me also remind you of the upcoming Lustre User Group conference, LUG 2018, organized by OpenSFS. The conference will be held April 23-26, 2018, at Argonne National Laboratory in Argonne, Illinois; see http://opensfs.org/events/lug-2018 — registration is open.

I’m sure I’ll meet most of you at the upcoming ISC’18 event in Frankfurt, June 24-28; see http://isc-hpc.com/

HP-CAST 30 will take place prior to ISC 2018 at the Marriott Frankfurt as planned. We are told that HPE’s Liz King and Ben Bennett are now at the helm.

The post Frank Baetke Departs HPE, Stays on as EOFS Chairman appeared first on HPCwire.

ASC18, featuring AI and Nobel Prize-winning applications, opens in Beijing. Finals to be hosted by Nanchang University in May

Fri, 02/02/2018 - 09:57

On January 30, the opening ceremony of the 2018 ASC Supercomputer Challenge (ASC18) was held in Beijing. The event was attended by hundreds of spectators, including academicians from the Chinese Academy of Engineering (CAE), heads of supercomputing centers, experts in supercomputing and AI, as well as professors and students from participating teams. The ceremony marks the formal start of a two-month competition that will see more than 300 teams of university students from across the world participate, with the top 20 competing in the final round at Nanchang University in May. At the ceremony, organizers explained this year’s challenges, including AI machine reading and comprehension and a Nobel Prize-winning application.

The AI challenge, provided by Microsoft, will give students the chance to tackle Answer Prediction for Search Query, a natural language reading and comprehension task. Teams must create an AI answer prediction method and model based on massive amounts of data generated by real questions posed to search engines such as Bing and voice assistants such as Cortana, providing accurate answers to queries and advancing AI’s ability to address this cognitive challenge.
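
For readers unfamiliar with the task, the sketch below shows its basic shape: given a question and a passage of context, predict the answer. It uses the Hugging Face transformers question-answering pipeline with its default pretrained model purely as an illustration; this is not the ASC18 toolchain, dataset, or scoring method.

```python
from transformers import pipeline

# Downloads a default pretrained extractive question-answering model.
qa = pipeline("question-answering")

result = qa(
    question="Which software is the cryo-EM reconstruction challenge based on?",
    context=("The cryo-electron microscopy challenge at ASC18 is built around "
             "RELION, a 3D reconstruction software package."),
)
print(result["answer"], round(result["score"], 3))
```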

ASC18 also includes a supercomputing application for cryo-electron microscopy, a newly developed technology whose developers were awarded the 2017 Nobel Prize in Chemistry. Cryo-electron microscopy allows scientists to solve challenges in structural biology beyond the scope of traditional X-ray crystallography; the ASC18 challenge is built around RELION, a 3D reconstruction software package widely used in the field. By including RELION among the challenges of this year’s ASC, the competition organizers aim to keep today’s computing students abreast of the latest cutting-edge developments in scientific discovery and spark their passion for exploring the unknown.

ASC will also host a two-day training camp to get participants up to speed for the competition. Supercomputing and AI experts from the Chinese Academy of Sciences, Microsoft, NVIDIA, Intel, and Inspur will lead sessions on designing and building supercomputer systems, the latest architectures of accelerated computing, AI application optimization and other related topics.

Initiated by China, the ASC Student Supercomputer Challenge is the world’s biggest student supercomputer competition. Since its launch, the challenge has been held seven times, supported by experts and institutions from across Asia, the US and Europe. By promoting exchanges and furthering the development of talented young minds in supercomputing around the world, the ASC aims to improve supercomputing applications and R&D capabilities and to accelerate technological and industrial innovation.

“ASC allows participating students to improve their knowledge and ability through practical application,” said Wang Endong, founder of the ASC, member of the CAE, and the chief scientist at Inspur. “This helps nurture well-rounded talents who have both the knowledge and skills for operating software and hardware. Such talents will be better able to play a role in the development of the AI industry.”

Li Bohu, a member of the Chinese Academy of Engineering, said that a deep integration of supercomputing and modeling simulation has become the third research method for understanding and transforming the world, following theoretical study and experimental research. Li anticipated that further integration and development of supercomputing and other expertise will revolutionize society, transform industries, and lead to major changes in how we live and work.

Deng Xiaohua, vice president of Nanchang University, which hosts the final round of ASC18, said that as one of China’s Project 211 institutions, a member of the high-level university project, and a university focused on building world-class disciplines, Nanchang University has always placed high importance on enhancing students’ ability to study and work with supercomputers. The university is dedicated to advancing innovation in scientific research, AI, and big data through supercomputing. As an international platform for supercomputing and the development of talented minds, ASC18 will greatly advance the development of new talent at Nanchang University and further international exchange and cooperation.

The post ASC18, featuring AI and Nobel Prize-winning applications, opens in Beijing. Finals to be hosted by Nanchang University in May appeared first on HPCwire.

Stanford Lab Uses Blue Waters Supercomputer to Study Epilepsy

Fri, 02/02/2018 - 08:02

Feb. 2, 2018 — Epilepsy is the fourth most common human neurological disorder in the world—a disorder characterized by recurrent, unprovoked seizures. According to the Centers for Disease Control, a record number of people in the United States have epilepsy: 3.4 million total, including nearly half a million children. At this time, there’s no known cause or cure, but with the help of NCSA’s Blue Waters supercomputer at the University of Illinois at Urbana-Champaign, researchers like Ivan Soltesz are making progress.

The Soltesz lab at Stanford University is using Blue Waters to create realistic models of the hippocampus in rat brains. The hippocampus is a seahorse-shaped structure located in the temporal lobe of the brain that plays a central role in forming new memories. It is thought to be the site of origin of temporal lobe epilepsy, the most common form of the disease.

Generally, areas of the brain process information independently but periodically synchronize their activity when information is transferred across brain regions, for instance as part of the formation of new memories. However, in the epileptic brain, this process of synchronization can result in seizures.

“There is a theory that the healthy hippocampus generates very strong oscillations in the electric field during movement. In epilepsy it is thought that something goes wrong in the mechanisms that generate these oscillations. Instead of normal oscillations, seizures are generated,” says Ivan Raikov, who is part of the Soltesz lab.
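
The synchronization idea itself can be illustrated with a textbook toy, the Kuramoto model, in which oscillators with different natural frequencies lock together once their coupling grows strong enough. The sketch below is only a crude analogue of pathological hypersynchrony, not the Soltesz lab’s biophysically detailed hippocampus model, and its parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.01, 5000
omega = rng.normal(1.0, 0.1, n)          # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, n)    # initial phases

for K in (0.05, 1.0):                    # weak vs. strong coupling
    theta = theta0.copy()
    for _ in range(steps):
        r = np.mean(np.exp(1j * theta))  # mean-field order parameter
        theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))
    # |r| near 1 means the population oscillates in lockstep; near 0, incoherence.
    sync = np.abs(np.mean(np.exp(1j * theta)))
    print(f"coupling K={K}: synchrony |r| = {sync:.2f}")
```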

The team’s computational research aims to replicate the results of past behavioral research. In those experiments, researchers had rats run on treadmills for several minutes until cells in the hippocampus recognized locations. Often the rats had to run repeatedly to learn their environment.

With the computational modeling on Blue Waters, the Soltesz team began running simulations covering ten seconds of real time, modeling those same rat-on-a-treadmill experiments. Even 10-second simulations take an incredible amount of processing power: using 2,000 nodes (64,000 core equivalents) on Blue Waters, each simulation still takes 14 hours of compute time. On a new, high-end laptop, one 10-second experiment would take almost 26 years to simulate.
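
A back-of-envelope check reproduces that estimate, assuming (our assumption, not a figure from the article) a four-core laptop with per-core performance comparable to a Blue Waters core:

```python
core_equivalents = 64_000     # 2,000 Blue Waters nodes
run_hours = 14                # per 10-second simulation
core_hours = core_equivalents * run_hours

laptop_cores = 4              # assumed "new, high end laptop"
years = core_hours / laptop_cores / (24 * 365)
print(f"{core_hours:,} core-hours -> {years:.1f} years on the laptop")
# 896,000 core-hours -> 25.6 years, matching the article's "almost 26 years".
```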

“Running long, realistic simulations would be impossible to do on other systems,” Raikov says. “It is the computational capacity of Blue Waters that makes it possible to have at least several tens of seconds of physical time represented.”

The team is optimistic that these simulations will help them acquire needed basic knowledge about the role of the hippocampus and how information is transmitted. Understanding how this region of the brain works will potentially allow them to relate the structural differences between a typical brain and one affected by epilepsy to differences in function.

“We’re hoping once we understand the basic principles of how oscillations are generated in the hippocampus, we can incorporate epileptic changes in our models and how that changes the oscillations,” Raikov says. “If we model the damage that is caused by epilepsy, can we have a simulation that generates seizures or seizure-like behavior? In that way we hope to see some connection between the changes that occur in the brain during the seizure and other pathological events.”

In addition to uncovering the relationship between the hippocampus and pathology, they are also looking to answer what they consider a very fundamental question: How is spatial information represented in the hippocampus and does the oscillatory process have anything to do with it?

“These are very big questions in hippocampal neuroscience, and we believe that understanding those will give us a way to understand episodic memory specifically,” Raikov says.

For other epilepsy-related research using Blue Waters, see Curtis Johnson’s work in the Blue Waters Annual Report and on the Blue Waters site.

Source: Susan Szuch, NCSA

The post Stanford Lab Uses Blue Waters Supercomputer to Study Epilepsy appeared first on HPCwire.

Deep Learning Portends ‘Sea Change’ for Oil and Gas Sector

Thu, 02/01/2018 - 22:17

The billowing compute and data demands that spurred the oil and gas industry to become the largest commercial user of high-performance computing are now propelling the competitive sector to deploy the latest AI technologies. Beyond the requirement for accurate and speedy seismic and reservoir simulation, oil and gas operations face torrents of sensor, geolocation, weather, drilling and seismic data. The sensor data alone from one offshore rig can accrue to hundreds of terabytes annually; however, most of this remains unanalyzed “dark data.”

A collaboration between Nvidia and Baker Hughes, a GE company (BHGE) — one of the world’s largest oil field services companies — kicked off this week to address these big data challenges by applying deep learning and advanced analytics to improve efficiency and reduce the cost of energy exploration and distribution. The partnership leverages accelerated computing solutions from Nvidia, including DGX-1 servers, DGX Station and Jetson, combined with BHGE’s fullstream analytics software and digital twins to target end-to-end oil and gas operations.

“It makes sense if you think about the nature of the operations, many remote sites, often in difficult locations,” said Binu Mathew, vice president of digital development at Baker Hughes. “But also when you look at it from an industry standpoint, there’s a ton of data being generated, a lot of information, and you often have it in two windows: you have an operator who will have multiple streams of data coming in, but relatively little actual information, because you’ve got to use your own experience to figure out what to do.

“On the flip side you have a lot of very smart, very capable engineers who are very good at building physics models, geological models, who often take weeks or months to fill out these models and run simulations, so they operate in that kind of timeframe. In between you’ve got a big challenge of not being able to have enough actual data crossing silos into a system that can analyze this data that you can take operational action from. This is the area that we at Baker Hughes Digital plan to address. We plan to do it because the technologies are now available in the industry: the rise of computational power and the rise of analytical techniques.”

Mathew’s account of the magnitude of data being generated by the industry leaves little doubt that this is an exascale-class challenge that requires new approaches and efficiencies.

“Even if you don’t talk about things like imaging data, which adds a whole order of magnitude to it, and look just at what you’d call semi-structured data, essentially data coming up from various sensors, it’s in the hundreds of petabytes annually,” Mathew said. “And if you take a deep water rig, you’re talking about in the region of a terabyte of data coming in per day. To analyze that kind of data at that kind of scale, the computational power will run into the exaflops and potentially well beyond.”
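
Those two figures are consistent at roughly the following scale. The fleet size below is a made-up assumption, used only to show how per-rig volumes could aggregate to an industry total:

```python
tb_per_rig_per_day = 1.0                    # "a terabyte of data ... per day"
pb_per_rig_per_year = tb_per_rig_per_day * 365 / 1000   # about a third of a petabyte

rigs = 800                                  # hypothetical fleet size
total_pb = rigs * pb_per_rig_per_year
print(f"{pb_per_rig_per_year:.2f} PB/rig/year, {total_pb:.0f} PB across {rigs} rigs")
# -> on the order of 300 PB/year, i.e. "hundreds of petabytes annually".
```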

Like an increasing number of groups across academia and industry, Baker Hughes is tackling this extreme-scale challenge using a combination of physics-based and probabilistic models.

“You cannot analyze all that data without something like AI,” said Mathew. “If you go back to the practical models, the oil and gas industry has been very good at coming up with physics-based models, and they will still be absolutely key at the core for modeling seismic phenomena. But to scale those models requires combining physics models with the pattern-matching capabilities that you get with AI. That’s the sea change we’ve seen in the last several years. If you look at image recognition and so on, deep learning techniques are now matching or exceeding human capabilities. So if you combine those things together, you get something that’s a step change from what’s been possible before.”
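
One common way to combine a physics model with learned pattern matching, sketched below with scikit-learn, is residual modeling: fit a network to the mismatch between a simplified physics prediction and observed data, then use it to correct the physics model. This is a generic illustration of the idea, not BHGE’s implementation; the “physics” function and the data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, (2000, 1))

def physics_model(x):
    # Idealized first-principles prediction (a stand-in, e.g. a simplified flow law).
    return 2.0 * x[:, 0]

# "Observed" data: physics plus an unmodeled nonlinear effect plus sensor noise.
y = physics_model(x) + np.sin(x[:, 0]) + rng.normal(0, 0.05, len(x))

# Train a small network on the residual the physics model cannot explain.
residual = y - physics_model(x)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(x, residual)

x_test = np.array([[3.0], [7.5]])
hybrid = physics_model(x_test) + net.predict(x_test)
print("physics-only:", physics_model(x_test), "hybrid:", hybrid)
```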

Nvidia is positioning its GPU technologies to fuel this transformation by powering accelerated analytics and deep learning across the spectrum of oil and gas operations.

“With GPU-accelerated analytics, well operators can visualize and analyze massive volumes of production and sensor data such as pump pressures, flow rates and temperatures,” stated Nvidia’s Tony Paikeday in a blog post. “This can give them better insight into costly issues, such as predicting which equipment might fail and how these failures could affect wider systems.

“Using deep learning and machine learning algorithms, oil and gas companies can determine the best way to optimize their operations as conditions change,” Paikeday continued. “For example, they can turn large volumes of seismic data images into 3D maps to improve the accuracy of reservoir predictions. More generally, they can use deep learning to train models to predict and improve the efficiency, reliability and safety of expensive drilling and production operations.”

The collaboration with BHGE will leverage Nvidia’s DGX-1 servers for training models in the datacenter; the smaller DGX Station for deskside computing or for remote, bandwidth-challenged sites; and the Nvidia Jetson for powering real-time inferencing at the edge.

Jim McHugh, Nvidia vice president and general manager, said in an interview that Nvidia excels at bringing together this raw processing power with an entire ecosystem: “Not only our own technology, like CUDA and Nvidia drivers, but we also bring all the leading frameworks together. So when people are going about doing deep learning and AI, and then the training aspect of it, the most optimized frameworks run on DGX, and are available via our NGC [Nvidia GPU cloud] as well.”

Cloud connectivity is a key enabler of the end-to-end platform. “One of the things that allows us to access that dark data is the concept of edge to cloud,” said Mathew. “So you’ve got the Jetsons out at the edge streaming into the clouds, where we can do the training of these models, because training is much heavier, and using DGX-1 boxes helps enormously with that task and with running the actual models in production.”

Baker Hughes says it will work closely with customers to provide them with a turnkey solution. “The oil and gas industry isn’t homogeneous, so we can come out with a model that largely fits their needs but with enough flexibility to tweak,” said Mathew. “And some of that comes inherently from the capabilities you have in these techniques, they can auto-train themselves, the models will calibrate and train to the data that’s coming in. And we can also tweak the models themselves.”

For Nvidia, partnering with BHGE is part of a broader strategy to work with leading companies to bring AI into every industry. The self-proclaimed AI computing company believes technologies like deep learning will effect a strong virtuous cycle.

“The thing about AI is when you start leveraging the algorithms in deep neural networks, you end up developing an insatiable desire for data because it allows you to get new discoveries and connections and correlations that weren’t possible. We are coming from a time when people suffered from a data deluge; now we’re in something new where more data can come, that’s great,” said McHugh.

Doug Black, managing editor of EnterpriseTech, contributed to this report.

The post Deep Learning Portends ‘Sea Change’ for Oil and Gas Sector appeared first on HPCwire.

UK Space Agency Backs Development of Supercomputing Technologies

Thu, 02/01/2018 - 15:53

GLASGOW, Scotland, Feb. 1, 2018 — The UK Space Agency has awarded more than £4 million to Spire Global, a satellite-powered data company, to demonstrate cutting-edge space technology including ‘parallel supercomputing’, UK Government ministers Lord Henley and Lord Duncan announced today.

Today’s announcement gives the green light to missions designed to showcase the technology and put UK companies into orbit faster and at a lower cost. The UK is the largest funder of the European Space Agency’s Advanced Research in Telecommunications Satellites (ARTES) programme, which transforms research into successful commercial projects.

The funding from the UK Space Agency was announced by Lord Henley, Parliamentary Under-Secretary of State at the Department for Business, Energy and Industrial Strategy, on a visit to Spire’s UK base in Glasgow, where the company intends to create new jobs to add to its existing workforce.

Business Minister, the Rt. Hon. Lord Henley, said:

“Thanks to this new funding, Spire will be able to cement its activities in the UK, develop new technologies and use space data to provide new services to consumers that will allow businesses to access space quicker and at a lower cost – offering an exciting opportunity for the UK to thrive in the commercial space age.

“Through the government’s Industrial Strategy, we are encouraging other high-tech British businesses to pursue more commercial opportunities with the aim of growing the UK’s share of the global space market to 10% by 2030.”

UK Government Minister for Scotland Lord Duncan said:

“Spire Global is at the cutting edge of technology, using satellite data to track ships, planes and weather in some of the world’s most remote regions. They’re also an important employer in Glasgow, investing in the area and recognising the talent of Scotland’s world class engineers and scientists. We know that the space industry is important to Scotland’s economy and this UK Government funding will help companies like Spire stay at the forefront of this field.”

The ARTES Pioneer programme is designed to support industry by funding the demonstration of advanced technologies, systems, services and applications in a representative space environment. Part of this is to support one or more Space Mission Providers, which could provide commercial services to private companies or public bodies.

“Spire’s infrastructure, capabilities, and competencies all support our submission to this program. For the launch of our 50+ satellite constellation, we quickly became our own best customer,” said Theresa Condor, Spire’s EVP of Corporate Development. “We’re looking forward to demonstrating our end-to-end service and infrastructure on this series of validation missions. ‘Space as a Service’ means going from mission technical architecture to customer data/service verification, along with the ongoing development of critical enabling technologies.”

One validation mission will develop parallel supercomputing in space, a core component for future computationally intensive missions. A second will exploit the Global Navigation Satellite System (GNSS) for weather applications, leveraging Galileo signals for GNSS Radio Occultation (GNSS-RO). Radio occultation is a key data input for improving weather forecasts. Upon completion, the GNSS-RO technology can be commercialized immediately.

Source: UK Space Agency

The post UK Space Agency Backs Development of Supercomputing Technologies appeared first on HPCwire.
