On the same day we reported on the uncertain future for HPC compiler company PathScale, we are sad to learn that another HPC vendor, Scalable Informatics, is closing its doors. For the last 15 years, Scalable Informatics, the HPC storage and system vendor founded by Joe Landman in 2002, provided high-performance software-defined storage and compute solutions to a wide range of markets, from financial and scientific computing to research and big data analytics. Its platform placed tightly coupled storage and computing in the same unit to eliminate bottlenecks and enable high-performance data and I/O for computationally intensive workloads.
Tom Tabor with Jacob Loveless, CEO, Lucera (right) and Joe Landman, CEO, Scalable Informatics (left).
In a letter to the community posted on the company website, founder and CEO Joe Landman writes:
We want to thank our many customers and partners, who have made this an incredible journey. We enjoyed designing, building, and delivering market-dominating performance and density systems to many groups in need of this tight coupling of massive computational, IO, and network firepower.
Sadly, we ran into the economic realities of being a small player in a capital intensive market. We offered real differentiation in terms of performance, by designing and building easily the fastest systems in market.
But building the proverbial “better mousetrap” was not enough to cause the world to beat a path to our door. We had to weather many storms, ride out many fads. At the end of this process, we simply didn’t have the resources to continue fighting this good fight.
Performance does differentiate, and architecture is the most important aspect of performance. There are no silver bullets, there are no magical software elements that can take a poor architecture and make it a good one. This has been demonstrated by us and many others so often, it ought to be an axiom. Having customers call you up to express regret for making the wrong choice, while somewhat satisfying, doesn’t pay the bills. This happened far too often, and is, in part, why we had to make this choice.
This is not how we wanted this to end. But end it did.
Thank you all for your patronage, and maybe in the near future, we will all be …
Landman shares additional thoughts about this difficult situation here: Requiem.
GENEVA, March 23, 2017 — HNSciCloud, the H2020 Pre-Commercial Procurement Project aiming to establish a European hybrid cloud platform that will support high-performance, data-intensive scientific use-cases, today announced a webcast for the awards ceremony for the successful contractors moving to the Prototype Phase.
The awards ceremony for the successful contractors moving to the Prototype Phase of the Helix Nebula Science Cloud Pre-Commercial Procurement will take place at CERN in Geneva, Switzerland, on April 3, 2017, at 14:30 CEST.
In November 2016, four consortia won the €5.3 million joint HNSciCloud Pre-Commercial Procurement (PCP) tender and began developing designs for the European hybrid cloud platform that will support high-performance, data-intensive scientific use-cases. At the beginning of February 2017, the four consortia met at CERN to present their proposals to the buyers. After the submission of their designs, the consortia were asked to prepare their bids for the prototyping phase.
In early April the winners of the bids to build prototypes will be announced at CERN during the “Launching the Helix Nebula Science Cloud Prototype Phase” webcast event. The award ceremony and the presentations of the solutions moving into the prototyping phase will be the focus of the webcast.
If you are interested in understanding more about the prototypes that will be developed, or simply want more insight into the Pre-Commercial Procurement process, mark the date in your calendar and follow the live webcast of the event directly from our website www.hnscicloud.eu.
For more information about the event, please contact: firstname.lastname@example.org
HNSciCloud, with Grant Agreement 687614, is a Pre-Commercial Procurement Action sponsored by 10 of Europe’s leading public research organisations and co-funded by the European Commission.
The post Helix Nebula Science Cloud Moves to Prototype Phase appeared first on HPCwire.
AUSTIN, Texas, March 23, 2017 — The Texas Advanced Computing Center (TACC) announced today that its Stampede supercomputer has helped researchers from Tufts University and the University of Maryland, Baltimore County create tadpoles with pigmentation never before seen in nature.
The flow of information between cells in our bodies is exceedingly complex: cells sense, signal, and influence each other in a constant stream of microscopic engagements. These interactions are critical for life, and when they go awry they can lead to illness and injury.
Scientists have isolated thousands of individual cellular interactions, but to chart the network of reactions that leads cells to self-organize into organs or form melanomas has been an extreme challenge.
“We, as a community, are drowning in quantitative data coming from functional experiments,” says Michael Levin, professor of biology at Tufts University and director of the Allen Discovery Center there. “Extracting a deep understanding of what’s going on in the system from the data in order to do something biomedically helpful is getting harder and harder.”
Working with Maria Lobikin, a Ph.D. student in his lab, and Daniel Lobo, a former post-doc and now assistant professor of biology and computer science at the University of Maryland, Baltimore County (UMBC), Levin is using machine learning to uncover the cellular control networks that determine how organisms develop, and to design methods to disrupt them. The work paves the way for computationally-designed cancer treatments and regenerative medicine.
“In the end, the value of machine learning platforms is in whether they can get us to new capabilities, whether for regenerative medicine or other therapeutic approaches,” Levin says.
Writing in Scientific Reports in January 2016, the team reported the results of a study where they created a tadpole with a form of mixed pigmentation never before seen in nature. The partial conversion of normal pigment cells to a melanoma-like phenotype — accomplished through a combination of two drugs and a messenger RNA — was predicted by their machine learning code and then verified in the lab.
Read the full report from TACC at: https://www.tacc.utexas.edu/-/machine-learning-lets-scientists-reverse-engineer-cellular-control-networks
The post TACC Supercomputer Facilitates Reverse Engineering of Cellular Control Networks appeared first on HPCwire.
“Strategies in Biomedical Data Science: Driving Force for Innovation” by Jay A. Etchings (John Wiley & Sons, Inc., Jan. 2017) is both an introductory text and a field guide for anyone working with biomedical data: IT professionals as well as medical and research staff.
Etchings, director of operations at Arizona State University’s Research Computing program, writes that the primary motivation for the book was to bridge the divide “between IT and data technologists, on one hand, and the community of clinicians, researchers, and academics who deliver and advance healthcare, on the other.” As biology and medicine move squarely into the realm of the data sciences, driven by the twin engines of big compute and big data, removing the traditional silos between IT and biomedicine will allow both groups to work better and more efficiently, Etchings asserts.
“Work in sciences is routinely compartmentalized and segregated among specialists,” ASU Professor Ken Buetow, PhD, observes in the foreword. “This segregation is particularly true in biomedicine as it wrestles with the integration of data science and its underpinnings in information technology. While such specialization is essential for progress within disciplines, the failure to have cross-cutting discussions results in lost opportunities.”
Aimed at this broader audience, “Strategies in Biomedical Data Science” introduces readers to the cutting-edge, fast-moving field of biomedical data. The 443-page book lays out a foundation in the concepts of data management for the biomedical sciences and empowers readers to:
Efficiently gather data from disparate sources for effective analysis;
Get the most out of the latest and preferred analytic resources and technical tool sets; and
Intelligently examine bioinformatics as a service, including the untapped possibilities for medical and personal health devices.
A diverse array of use cases and case studies highlight specific applications and technologies being employed to solve real-world challenges and improve patient outcomes. Contributing authors, experts working and studying at the intersection of IT and biomedicine, offer their knowledge and experience in traversing this rapidly-changing field.
We reached out to BioTeam VP Ari Berman to get his view on the IT/research gap challenge. “This is exactly what BioTeam [a life sciences computing consultancy] is focused on,” he told us. “Since IT organizations have traditionally supported business administration needs, they are not always equipped to handle the large amounts of data that needs to be moved and stored, or the amount of computational power needed to run the analysis pipelines that may yield new discoveries for the scientists. Because of this infrastructure, skills, and services gap between IT and biomedical data science, many research organizations spend too much time and money trying to bridge that gap on their own through cloud infrastructures or shadow IT running in their laboratories. I’ve spent my career bridging this gap, and I can tell you first hand that doing it correctly has certainly moved the needle forward on scientists’ ability to make new discoveries.”
Jay Etchings, Arizona State University’s director of operations for research computing and senior HPC architect
Never lost in this far-ranging survey of biomedical data challenges and strategies is the essential goal: to improve human life and reduce suffering. Etchings writes that the book was inspired by “the need for a collaborative and multidisciplinary approach to solving the intricate puzzle that is cancer.” Author proceeds support the Pediatric Brain Tumor Foundation. The charity serves the more than 28,000 children and teens in the United States who are living with the diagnosis of a brain tumor.
To read an excerpt, visit the book page on the publisher’s website.
A listing of chapter headings:
Chapter 1: Healthcare, History, and Heartbreak
Chapter 2: Genome Sequencing: Know Thyself, One Base Pair at a Time
Chapter 3: Data Management
Chapter 4: Designing a Data-Ready Network Infrastructure
Chapter 5: Data-Intensive Compute Infrastructures
Chapter 6: Cloud Computing and Emerging Architectures
Chapter 7: Data Science
Chapter 8: Next-Generation Cyberinfrastructures
The post ‘Strategies in Biomedical Data Science’ Advances IT-Research Synergies appeared first on HPCwire.
HPCwire has learned that HPC compiler company PathScale has fallen on difficult times and is asking the community for help or actively seeking a buyer for its assets. A letter from the company with a listing of assets is included at the end of the article.
PathScale represents one of a handful of compiler technologies designed for high performance computing, and it is one of the last independent HPC compiler companies. In an interview with HPCwire, PathScale Chief Technology Officer and owner Christopher Bergström attributed the company’s financial insolvency to its heavy involvement in Intel-alternative architectures.
“Unfortunately in recent years, we bet big on ARMv8 and the partner ecosystem and the hardware has been extremely disappointing,” said Bergström. “Once partners saw how low their hardware performed on HPC workloads they decided to pull back on their investment in HPC software.”
Due to confidentiality agreements, he’s limited to speaking in generalities but argues that the currently available ARMv8 processors deliver very weak performance for HPC workloads.
“ARM is possibly aware of this issue and as a result has introduced SVE (Scalable Vector Extensions),” Bergström told us. “Unfortunately, they focused more on the portability side of vectorization and the jury is still out if they can deliver competitive performance. SVE’s flexible design and freedom to change vector width on the fly will possibly impact the ability to write code tuned specifically for a target processor. In addition, design of the hardware architecture blocks software optimizations that are very common and potentially critical for HPC. And based on the publicly available roadmaps, the floating point to power ratio is not where it needs to be for HPC workloads in order to effectively compete against Intel or GPUs.”
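The tension Bergström describes, portability versus per-target tuning, comes from SVE's vector-length-agnostic design. The toy C below is a plain scalar illustration, not actual SVE ACLE intrinsics: the variable `vl` stands in for the runtime vector length a real chip would report (via an instruction such as SVE's count intrinsics), and the inner bound plays the role of SVE's loop predicate, which handles the final partial iteration without a scalar tail loop.

```c
#include <stddef.h>

/* Stand-in for the hardware vector length. On SVE silicon this is a
 * property of the chip (any multiple of 128 bits up to 2048), discovered
 * at run time rather than fixed when the binary is compiled. */
static size_t vl = 8;

/* Vector-length-agnostic vector add: one strip-mined loop body works
 * unchanged for any vl, which is the portability win. */
void vla_add(float *c, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i += vl) {
        /* Predicate analogue: the last pass covers only the remaining
         * lanes, so no separate cleanup loop is needed. */
        size_t lanes = (n - i < vl) ? n - i : vl;
        for (size_t j = 0; j < lanes; j++)
            c[i + j] = a[i + j] + b[i + j];
    }
}
```

Because `vl` is only known at run time, optimizations that assume a fixed width, such as hand unrolling or shuffles tuned to a 256-bit register, do not carry over directly; that is the trade-off between portable vectorization and code tuned for one specific processor that the article raises.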
Before coming to these conclusions, PathScale had a statement-of-work contract with Cavium to help support optimizing compilers for its ThunderX processors. When that funding was pulled, PathScale also lost its ability to gain and support customers for ARMv8. The company looked for funders and had conversations with stakeholders in the private and public spheres, but the money just wasn’t available.
“Show me a company in the HPC space wanting to invest,” said Bergström. “They’re not investing in compiler technology.”
ARM, which was scooped up by Japanese company SoftBank in September 2016 for $31 billion, may be the exception, but according to Bergström the PathScale technology, while it significantly leverages LLVM, doesn’t perfectly align with ARM’s needs.
Bergström brokered the deal with Cray that resurrected PathScale from the ashes of SiCortex in 2009 (more on this below) and he’s proud of what he and his team have accomplished over the last seven years. “We love compilers, we love the technology. We want to continue developing this stuff. The team is rock solid, we’re like family. We live, eat, and breathe compilers, but we’re not on a sustainable business path and we need a bailout or help refocusing. We need people who understand that these kinds of technologies add value and LLVM by itself isn’t a panacea.”
Addison Snell, CEO of HPC analyst firm Intersect360 Research, shared some additional perspective on the market dynamics at play for independent tools vendors. “In the Beowulf era, clusters were all mostly the same, so what little differentiation there was came from things like development environments and job management software,” he said. “Independent middleware companies of all types flourished. Now we’re trending back toward an era of architectural specialization. Users are shopping for architectures more than they’re shopping for which compiler to use for a given architecture, and acquisitions have locked up some of the previously dominant players. Vendors’ solutions will have their own integrated stacks. Free open-source versions might still exist, but there will be less room for independent middleware players.”
PathScale has a winding history that dates back to 2001 with the founding of Key Research by Lawrence Livermore alum Tom McWilliams. The company was riding the commodity cluster wave, developing clustered Linux server solutions based on a low-cost 64-bit design. In 2003, contemporaneous with the rising popularity of AMD Opteron processors, Key Research rebranded as PathScale and expanded its product line to include high-performance computing adapters and 64-bit compilers.
PathScale would then pass through a number of corporate hands. In 2006, QLogic acquired PathScale, primarily to gain access to its InfiniBand interconnect technology. The following year, the compiler assets were sold to SiCortex, which sought a solution for its MIPS-based HPC systems.
When SiCortex closed its doors in 2009, Cray bought the PathScale assets and revived the company. Under an arrangement struck with Cray, PathScale would go forward as an independent technology group with an option to buy. In March 2012, PathScale CTO Christopher Bergström acquired all assets and became the sole owner of PathScale Inc.
The PathScale toolchain currently generates code for the latest Intel processors, AMD64, AMD GPUs, Power8, ARMv8, and NVIDIA GPUs in combination with both Power8 and x86.
In a message to the community, PathScale writes:
We are evaluating all options to overcome this difficult time, including refocusing to provide training and code porting services instead of purely offering compiler licenses and optimization services. Our team deeply understands parallel programming and whether you have crazy C++ or ancient Fortran, we can likely help get it running on GPUs (NVIDIA or AMD) or vectorization targets (like Xeon Phi).
All PathScale engineers would love to continue to work on the compiler as an independent company, but we need the community to help us. We need people who believe in our technical roadmap. We need people who understand the future exascale computing software stack will likely be complex, but that complexity and advanced optimizations will make it easier for end users. At the same time we must be realistic and without immediate assistance start accepting any reasonable offer on the assets as a whole or piece by piece.
Our assets include:
• PathScale website, trademarks and branding
• C, C++ and Fortran compilers
• Complete GPGPU and many-core runtime which supports OMP4 and OpenACC and is portable across multiple architectures (NVIDIA GPU, ARMv8, Power8+NVIDIA and AMD GPU)
• Significant modifications to CLANG and LLVM to enable support for OpenACC and OpenMP and parallel programming models.
• Complete engineering team with expertise working on CLANG and LLVM and MIPSPro.
• Advertising credits with popular websites ($30,000)
A purchase, or funding from crowdsourcing or another community effort, will keep a highly optimizing OpenMP and OpenACC C/C++ and Fortran compiler toolchain, plus an experienced development team, in operation. Succinctly, PathScale preserves architectural diversity and opens the door for competition with a performant compiler for interesting architectures with OpenMP and OpenACC parallelization.
If interested please contact email@example.com.
Editor’s note: HPCwire has reached out to Cavium and ARM and we will update the article with any responses we receive.
PISCATAWAY, N.J., March 23, 2017 — IEEE today announced the next milestone phase in the development of the International Roadmap for Devices and Systems (IRDS)—an IEEE Standards Association (IEEE-SA) Industry Connections (IC) Program sponsored by the IEEE Rebooting Computing (IEEE RC) Initiative—with the launch of a series of nine white papers that reinforce the initiative’s core mission and vision for the future of the computing industry. The white papers also identify industry challenges and solutions that guide and support future roadmaps created by IRDS.
IEEE is taking a lead role in building a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software. In May 2016, IEEE announced the formation of the IRDS under the sponsorship of IEEE RC. The integration of IEEE RC and the International Technology Roadmap for Semiconductors (ITRS) 2.0 maps the ecosystem of the reborn electronics industry. The transition from ITRS to IRDS is proceeding seamlessly, as all the reports produced by ITRS 2.0 represent the starting point for IRDS.
While engaging other segments of IEEE in complementary activities to assure alignment and consensus across a range of stakeholders, the IRDS team is developing a 15-year roadmap with a vision to identify key trends related to devices, systems, and other related technologies.
“Representing the foundational development stage in IRDS is the publishing of nine white papers that outline the vital and technical components required to create a roadmap,” said Paolo A. Gargini, IEEE Fellow and Chairman of IRDS. “As a team, we are laying the foundation to identify challenges and recommendations on possible solutions to the industry’s current limitations defined by Moore’s Law. With the launch of the nine white papers on our new website, the IRDS roadmap sets the path for the industry benefiting from all fresh levels of processing power, energy efficiency, and technologies yet to be discovered.”
“The IRDS has taken a significant step in creating the industry roadmap by publishing nine technical white papers,” said IEEE Fellow Elie Track, 2011-2014 President, IEEE Council on Superconductivity; Co-chair, IEEE RC; and CEO of nVizix. “Through the public availability of these white papers, we’re inviting computing professionals to participate in creating an innovative ecosystem that will set a new direction for the greater good of the industry. Today, I open an invitation to get involved with IEEE RC and the IRDS.”
The series of white papers delivers the starting framework of the IRDS roadmap—and through the sponsorship of IEEE RC—will inform the various roadmap teams in the broader task of mapping the devices’ and systems’ ecosystem:
- Applications Benchmarking
- More Moore
- Beyond CMOS (Emerging Research Devices)
- Outside System Connectivity
- Factory Integration
- Environment, Safety, and Health
- Yield Enhancement
- System and Architecture
“IEEE is the perfect place to foster the IRDS roadmap and fulfill what the computing industry has been searching for over the past decades,” said IEEE Fellow Thomas M. Conte, 2015 President, IEEE Computer Society; Co-chair, IEEE RC; and Professor, Schools of Computer Science, and Electrical and Computer Engineering, Georgia Institute of Technology. “In essence, we’re creating a new Moore’s Law. And we have so many next-generation computing solutions that could easily help us reach uncharted performance heights, including cryogenic computing, reversible computing, quantum computing, neuromorphic computing, superconducting computing, and others. And that’s why the IEEE RC Initiative exists: creating and maintaining a forum for the experts who will usher the industry beyond the Moore’s Law we know today.”
The IRDS leadership team hosted a winter workshop and kick-off meeting at the Georgia Institute of Technology on 1-2 December 2016. Key discoveries from the workshop included the international focus teams’ plans and focus topics for the 2017 roadmap, top-level needs and challenges, and linkages among the teams. Additionally, the IRDS leadership invited presentations from the European and Japanese roadmap initiatives. This resulted in the 2017 IRDS global membership expanding to include team members from the “NanoElectronics Roadmap for Europe: Identification and Dissemination” (NEREID) sponsored by the European Semiconductor Industry Association (ESIA), and the “Systems and Design Roadmap of Japan” (SDRJ) sponsored by the Japan Society of Applied Physics (JSAP).
The IRDS team and its supporters will convene 1-3 April 2017 in Monterey, California, for the Spring IRDS Workshop, which is part of the 2017 IEEE International Reliability Physics Symposium (IRPS). The team will meet again for the Fall IRDS Conference—in partnership with the 2017 IEEE International Conference on Rebooting Computing (ICRC)—scheduled for 6-7 November 2017 in Washington, D.C. More information on both events can be found here: http://irds.ieee.org/events.
IEEE RC is a program of IEEE Future Directions, designed to develop and share educational tools, events, and content for emerging technologies.
IEEE-SA’s IC Program helps incubate new standards and related products and services, by facilitating collaboration among organizations and individuals as they hone and refine their thinking on rapidly changing technologies.
About the IEEE Standards Association
The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. IEEE standards set specifications and best practices based on current scientific and technological knowledge. The IEEE-SA has a portfolio of over 1,100 active standards and more than 500 standards under development. For more information visit the IEEE-SA website.
IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. Learn more at http://www.ieee.org.
The post IEEE Unveils Next Phase of IRDS to Drive Beyond Moore’s Law appeared first on HPCwire.
SAN FRANCISCO, Calif., March 23, 2017 — Rescale and EDEM are pleased to announce that the EDEM GPU solver engine is now available on Rescale’s ScaleX platform, a scalable, on-demand cloud platform for high-performance computing. The GPU solver, which was a highlight of the latest release of EDEM, enables performance increases from 2x to 10x compared to single-node, CPU-only runs.
EDEM offers Discrete Element Method (DEM) simulation software for virtual testing of equipment that processes bulk solid materials in the mining, construction, and other industrial sectors. EDEM software has been available on Rescale’s ScaleX platform since July 2016. Richard LaRoche, CEO of EDEM commented: “The introduction of the EDEM GPU solver has made a key impact on our customers’ productivity by enabling them to run larger simulations faster. Our partnership with Rescale means more users will be able to harness the power of the EDEM engine by accessing the market’s latest GPUs through Rescale’s cloud platform.”
The addition of an integrated GPU solver to Rescale gives users shorter time-to-answer and enables a deeper impact on design innovation. To Rescale, the addition of EDEM’s GPU solver also signals a strengthening partnership. “Rescale’s GPUs are the cutting edge of compute hardware, and EDEM is ahead of the curve in optimizing their software to leverage GPU capabilities. We are proud to be their partner of choice to bring this forward-thinking simulation solution to the cloud, bringing HPC within easy reach of engineers everywhere,” said Rescale CEO Joris Poort.
EDEM is the market-leading Discrete Element Method (DEM) software for bulk material simulation. EDEM software is used for ‘virtual testing’ of equipment that handles or processes bulk materials in the manufacturing of mining, construction, off-highway and agricultural machinery, as well as in the mining and process industries. Blue-chip companies around the world use EDEM to optimize equipment design, increase productivity, reduce costs of operations, shorten product development cycles and drive product innovation. In addition EDEM is used for research at over 200 academic institutions worldwide. For more information visit: www.edemsimulation.com.
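At its core, the Discrete Element Method tracks individual particles and resolves each pairwise contact with a force law. The toy C sketch below shows the idea with a generic linear spring-dashpot contact in one dimension; it is a textbook illustration, not EDEM's actual contact models or API, and the stiffness and damping values are illustrative only. When two spheres overlap, a repulsive spring force proportional to the overlap plus velocity-dependent damping is applied; separated particles feel nothing.

```c
/* Toy 1-D spring-dashpot contact force between two spheres.
 * overlap > 0 means the spheres interpenetrate; rel_v is the relative
 * approach velocity; k and c are spring stiffness and damping. */
double contact_force(double overlap, double rel_v, double k, double c)
{
    if (overlap <= 0.0)
        return 0.0;                          /* no contact, no force */
    double f = k * overlap + c * rel_v;      /* spring + dashpot terms */
    return f > 0.0 ? f : 0.0;                /* contacts push, never pull */
}
```

A full DEM run evaluates a law like this for every contacting pair at every timestep, then integrates particle motion, which is why the workload is embarrassingly parallel and maps well to GPUs, as in the solver discussed above.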
Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.
The post EDEM Brings GPU-Optimized Solver to the Cloud with Rescale appeared first on HPCwire.
On Monday, Google announced plans to launch a new peer-reviewed journal and “ecosystem” for machine learning. Writing on the Google Research Blog, Shan Carter and Chris Olah described the project as follows:
“Science isn’t just about discovering new results. It’s also about human understanding. Scientists need to develop notations, analogies, visualizations, and explanations of ideas. This human dimension of science isn’t a minor side project. It’s deeply tied to the heart of science.
“That’s why, in collaboration with OpenAI, DeepMind, YC Research, and others, we’re excited to announce the launch of Distill, a new open science journal and ecosystem supporting human understanding of machine learning. Distill is an independent organization, dedicated to fostering a new segment of the research community.
“Modern web technology gives us powerful new tools for expressing this human dimension of science. We can create interactive diagrams and user interfaces that enable intuitive exploration of research ideas. Over the last few years we’ve seen many incredible demonstrations of this kind of work.
“Unfortunately, while there are a plethora of conferences and journals in machine learning, there aren’t any research venues that are dedicated to publishing this kind of work. This is partly an issue of focus, and partly because traditional publication venues can’t, by virtue of their medium, support interactive visualizations. Without a venue to publish in, many significant contributions don’t count as “real academic contributions” and their authors can’t access the academic support structure.”
According to Carter and Olah, “Distill aims to build an ecosystem to support this kind of work, starting with three pieces: a research journal, prizes recognizing outstanding work, and tools to facilitate the creation of interactive articles.”
Here’s a snapshot of guidelines for working with the new journal:
- “Distill articles are prepared in HTML using the Distill infrastructure — see the getting started guide for details. The infrastructure provides nice default styling and standard academic features while preserving the flexibility of the web.
- Distill articles must be released under the Creative Commons Attribution license. Distill is a primary publication and will not publish content which is identical or substantially similar to content published elsewhere.
- To submit an article, first create a GitHub repository for your article. You can keep it private during the review process if you would like — just share it with @colah and @shancarter. Then email firstname.lastname@example.org to begin the process.
- Distill handles all reviews and editing through GitHub issues. Upon publication, the repository is made public and transferred to the @distillpub organization for preservation. This means that reviews of published work are always public. It is at the author’s discretion whether they share reviews of unpublished work.”
FREMONT, Calif., March 22, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced the availability of the company’s expanded Penguin Computing On-Demand (POD) High Performance Computing Cloud.
“As current Penguin POD users, we are excited to have more resources available to handle our mission-critical real-time global environmental prediction workload,” said Dr. Greg Wilson, CEO, EarthCast Technologies. “The addition of the Lustre file system will allow us to scale our applications to full global coverage, run our jobs faster and provide more accurate predictions.”
The expanded POD HPC cloud extends into Penguin Computing’s latest cloud datacenter location, MT2, which adds Intel Xeon E5-2680 v4 processors through the company’s B30 node class.
B30 Node Specifications
- Dual Intel Xeon E5-2680 v4 processors
- 28 non-hyperthreaded cores per node
- 256GB RAM per node
- Intel Omni-Path low-latency, non-blocking, 100Gb/s fabric
In addition to the new processors, the MT2 location provides customers with access to a Lustre parallel file system, delivered through Penguin’s FrostByte storage solution. POD’s latest Lustre file system provides high-speed storage with an elastic billing model, charging customers only for the storage they consume, metered hourly.
The new POD MT2 public cloud location also provides customers with cloud redundancy – enabling multiple, distinct cloud locations to ensure that business-critical, time-sensitive HPC workflows are always able to compute.
“The latest expansion to our MT2 location extends the capabilities of our HPC cloud,” said Victor Gregorio, SVP Cloud Services at Penguin Computing. “As an HPC service, we work closely with our customers to deliver their growing cloud needs – scalable bare-metal compute, easy access to ready-to-run applications, and tools such as our Scyld Cloud Workstation for remote 3D visualization.”
Penguin Computing customers in fields such as manufacturing, engineering, and weather sciences are able to run more challenging HPC applications and workflows on POD with the addition of these capabilities.
These workloads can be time sensitive and complex – demanding the specialized HPC cloud resources Penguin makes available on POD. The compute needs of HPC users are not normally satisfied in a general-purpose public cloud, and Penguin Computing continues to be a leader in unique, cost effective, high-performance cloud services for HPC workloads.
POD customers have immediate access to these new offerings through their existing accounts via the POD Portal. Experience POD by visiting https://www.pod.penguincomputing.com to request a free trial account.
About Penguin Computing
Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets.
Peering inside semiconductor chips using x-ray imaging isn’t new, but the technique hasn’t been especially good or easy to accomplish. New advances reported by Swiss researchers in Nature last week suggest practical use of x-rays for fast, accurate, reverse-engineering of chips may be near.
“You’ll pop in your chip and out comes the schematic. Total transparency in chip manufacturing is on the horizon,” said Anthony Levi of the University of Southern California, describing the research in an IEEE Spectrum article (X-rays Map the 3D Interior of Integrated Circuits). “This is going to force a rethink of what computing is and what it means for a company to add value in the computing industry.”
The work by Mirko Holler, Manuel Guizar-Sicairos, Esther H. R. Tsai, Roberto Dinapoli, Elisabeth Müller, Oliver Bunk, Jörg Raabe (all of Paul Scherrer Institut) and Gabriel Aeppli (ETH) is described in their Nature Letter, “High-resolution non-destructive three-dimensional imaging of integrated circuits.”
“[We] demonstrate that X-ray ptychography – a high-resolution coherent diffractive imaging technique – can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip,” write the researchers in the abstract.
“Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.”
Starting with a known structure – an ASIC developed at the institute – and then moving to an Intel chip (Intel G3260 processor) about which they had limited information, the researchers were able to accurately identify and map components in the chips. A good summary of the experiment is provided in the IEEE Spectrum article:
“The ASIC was produced using 110-nanometer chip manufacturing technology, more than a decade from being cutting edge. But the Intel chip was just a couple of generations behind the state of the art: It was produced using the company’s 22-nm process…To produce a 3D rendering of the Intel chip—an Intel G3260 processor—the team shined an X-ray beam through a portion of the chip. The various circuit components—its copper wires and silicon transistors, for example—scatter the light in different ways and cause constructive and destructive interference. Through a technique called X-ray ptychography, the researchers could point the beam at their sample from a number of different angles and use the resulting diffraction patterns to reconstruct the chip’s internal structure.”
The experiment was carried out at the cSAXS beamline of the Swiss Light Source (SLS) at the Paul Scherrer Institut, Villigen, Switzerland. Details of the components are as follows. Coherent X-rays enter the instrument and pass optical elements that in their combination form an X-ray lens used to generate a defined illumination of the sample. These elements are a gold central stop, a Fresnel zone plate and an order sorting aperture. The diffracted X-rays are measured by a 2D detector, a Pilatus 2M in the present case. Accurate sample positioning is essential in a scanning microscopy technique and is achieved by horizontal and vertical interferometers.
As the IEEE Spectrum article notes, “Even if this approach isn’t widely adopted to tear down competitors’ chips, it could find a use in other applications. One of those is verifying that a chip only has the features it is intended to have, and that a “hardware Trojan”—added circuitry that could be used for malicious purposes—hasn’t been introduced.”
Link to IEEE article: http://spectrum.ieee.org/nanoclast/semiconductors/processors/xray-ic-imaging
Link to Nature paper: http://www.nature.com/nature/journal/v543/n7645/full/nature21698.html
The post Swiss Researchers Peer Inside Chips with Improved X-Ray Imaging appeared first on HPCwire.
FRANKFURT, Germany, March 22, 2017 — ISC High Performance is pleased to announce the inclusion of the STEM Student Day & Gala at this year’s conference. The new program aims to connect the next generation of regional and international STEM practitioners with the high performance computing industry and its key players.
ISC 2017 has created this program to welcome STEM students into the world of HPC with the hope that an early exposure to the community will encourage them to acquire the necessary HPC skills to propel their future careers.
The ISC STEM Student Day & Gala will take place on Wednesday, June 21, and is free to attend for 200 undergraduate and graduate students. All regional and international students are welcome to register for the program, including those not attending the main conference. The organizers also encourage female STEM students to take advantage of this opportunity, as ISC 2017 is strongly committed to improving gender diversity.
Students will be able to register for the program starting mid-April via the program webpage.
Participating students will enjoy an afternoon discovering HPC by visiting the exhibition and then joining a conference keynote before participating in a career fair. In the evening, they can network with key HPC players at a special gala event.
Supermicro, PRACE, CSCS and GNS Systems GmbH have already come forward to support this program. Funding from another six organizations is needed to ensure the full success of the STEM Day & Gala. Sponsorship opportunities start at 500 euros, with all resources flowing directly into the event organization. Please contact email@example.com to get involved.
“There is currently a shortage of a skilled STEM workforce in Europe and it is projected that the gap between available jobs and suitable candidates will grow very wide beyond 2020 if nothing is done about it,” said Martin Meuer, the general co-chair of ISC High Performance.
“This gave us the idea to organize the STEM Day, as many organizations that exhibit at ISC could profit from meeting the future workforce directly.”
The ISC STEM Student Day & Gala is also a great opportunity for organizations to associate themselves as STEM employers and invest in their future HPC user base.
About ISC High Performance
First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.
Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, Research Posters, the PhD Forum, Project Poster Sessions and Exhibitor Forums.
The post ISC High Performance Adds STEM Student Day to the 2017 Program appeared first on HPCwire.
A new computer simulation based on codes developed at Los Alamos National Laboratory is shedding light on how supermassive black holes could have formed in the early universe, contrary to most prior models, which impose a limit on how fast these massive ‘objects’ can form. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.
“Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”
Using codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.
“It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?” The work is detailed in a recent paper, “The Formation Of The First Quasars In The Universe.”
It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also fits many other phenomena of black holes that are routinely observed by astrophysicists. The research shows that the simulated supermassive black holes are also interacting with galaxies in the same way that is observed in nature, including star formation rates, galaxy density profiles, and thermal and ionization rates in gases.
“This was largely unexpected,” said Smidt. “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”
A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials. Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.
“We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”
Link to paper: https://arxiv.org/pdf/1703.00449.pdf
Link to video about the discovery: https://youtu.be/LD4xECbHx_I
The post LANL Simulation Shows Massive Black Holes Break “Speed Limit” appeared first on HPCwire.
SAN JOSE, Calif., March 21, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in compute, storage and networking technologies including green computing, expands the industry’s broadest portfolio of Supermicro NVMe Flash server and storage systems with support for the Intel Optane SSD DC P4800X, the world’s most responsive data center SSD.
Supermicro’s NVMe SSD Systems with Intel Optane SSDs for the Data Center enable breakthrough performance compared to traditional NAND based SSDs. The Intel Optane SSDs for the data center are the first breakthrough that begins to blur the line between memory and storage, enabling customers to do more per server, or extend memory working sets to enable new usages and discoveries. The PCI-E compliant expansion card delivers an industry leading combination of 2 times better latency performance, up to more than 3 times higher endurance, and up to 3 times higher write throughput than NVMe NAND SSDs. Optane is supported across Supermicro’s complete product line including: BigTwin, SuperBlade, Simply Double Storage and Ultra servers supporting the current and next generation Intel Xeon Processors. These innovative solutions enable a new high performance storage tier that combines the attributes of memory and storage ideal for Financial Services, Cloud, HPC, Storage and overall Enterprise applications.
The first generation Supermicro supported Intel Optane SSDs are initially a PCI-E compliant expansion card with additional form factors to follow. A 2U Supermicro Ultra system will be able to deliver 6 million WRITE IOPs and 16.5 TB of high performance Optane storage. Intel Optane will deliver optimal performance in the 1U 10 NVMe All-Flash SuperServer and the capacity optimized 2U 48 All-Flash NVMe Simply Double Storage Server and provide accelerated caching across the complete line of NVMe supported scale out storage servers including the new 4U 45 Drive system with NVMe Cache drives.
“Being first to market with the latest in computing technology continues to be our corporate strength. The addition of Intel Optane memory technology gives our top-tier customers a new memory deployment strategy that provides better write performance and latency than existing NVMe NAND SSD solutions, including more than 30 drive writes per day,” said Charles Liang, President and CEO of Supermicro. “In addition, this new memory is slated to consume 30 percent lower max power than SSD NAND memory, supporting our customers’ green computing priorities.”
“Supermicro’s system readiness for the new Optane memory technology will provide fast storage and cache for MySQL and HCI applications,” said Bill Lesczinske, Vice President, Non-Volatile Memory Solutions Group. “With 77x better read latency in the presence of a high write workload, and as a memory replacement with Intel Memory Drive Technology – software will make the Optane SSD look like DRAM transparently to the OS – providing greater in-memory compute performance to Supermicro systems.”
For more information on Supermicro’s complete range of NVMe Flash Solutions, please visit http://www.supermicro.com/products/nfo/NVMe.cfm.
About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.
The post Supermicro Launches Intel Optane SSD Optimized Platforms appeared first on HPCwire.
SANTA CLARA, Calif., March 21, 2017 — DataDirect Networks (DDN) today announced the appointment of Bret Costelow as the company’s vice president of global sales. In his new role, Costelow will oversee technical computing sales worldwide, and will leverage more than 25 years of sales and sales leadership experience to further boost visibility of DDN’s deep technical expertise and high-performance computing (HPC) storage platform offerings, develop new business strategies and drive revenue growth. Costelow’s leadership and experience spans leading technology companies, including Intel and Ricoh Americas.
“Bret Costelow is an inspiring sales leader with a clear understanding of our customers’ needs and a vision of how DDN’s technologies and solutions can best solve their toughest data storage challenges,” said Robert Triendl, senior vice president, global sales, marketing, and field services, DDN. “Bret’s proven success in high-growth business settings, deep knowledge of the Lustre* and HPC market, proven track record for generating traction with innovative, advanced technologies, and his broad experience with software sales make him a great asset to our team and a great resource for our partners and customers around the world.”
Costelow joins DDN from Intel Corporation, where he led a global sales and business development team for Intel’s HPC software business and supported Intel’s 2012 acquisition of Whamcloud, the main development arm for the open source Lustre file system, and its subsequent sales and marketing. Costelow was instrumental in leading the Lustre business unit to expand into adjacent markets, reaching beyond HPC file systems to HPC cluster orchestration software. Under his leadership, the HPC software business unit opened new markets in Asia, launched a comprehensive, global software sales channel program and drove year-over-year revenue growth that averaged more than 30 percent in each of the past five years. Costelow is also on the board of directors of the European Open File Systems (EOFS), a non-profit organization focused on the promotion and support of open scalable file systems for high-performance computing in the technical computing and enterprise computing markets.
“DDN is the uncontested market leader in HPC storage, with a highly differentiated portfolio of solutions for technical computing users in all vertical markets. This portfolio, combined with aggressive investments in new technologies, positions the company incredibly well for continued growth and success as disruptive technologies, such as non-volatile memory (NVM), unsettle the storage market landscape and create exciting new opportunities,” said Bret Costelow, vice president, global sales at DDN. “The current market dynamics and DDN’s agility to respond made this the perfect time to join DDN. I look forward to working with the incredible talent in DDN’s field team, product management, product development and software engineering teams to help drive DDN’s success and growth to new levels, and to help accelerate the success of DDN’s customers and partners around the world.”
About DDN
DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.
SEATTLE, Wash., March 21, 2017 — Supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced that the company’s President and CEO, Peter Ungaro, will give a presentation on “The Convergence of Big Data and Supercomputing” at TechIgnite, an IEEE Computer Society conference exploring the trends, threats, and truth behind technology.
The convergence of artificial intelligence technologies and supercomputing at scale is happening now. As a featured speaker at TechIgnite’s “AI and Machine Learning” track, Ungaro’s presentation will examine how the convergence of big data and modeling and simulation run on supercomputing platforms at scale is creating new opportunities for organizations to discover innovative ways of extracting value from massive data sets.
Other TechIgnite speakers include Apple co-founder Steve Wozniak, Tony Jebara, director of machine learning at Netflix, William Ruh, CEO for GE Digital, and more.
TechIgnite will take place on March 21-22, 2017 at the Hyatt Regency San Francisco Airport Hotel in Burlingame, CA. Ungaro’s presentation will be held at 2:00pm PT on Wednesday, March 22. A complete list of TechIgnite speakers is available online via the following URL: http://techignite.computer.org/speakers/.
About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.
The post Cray CEO to Speak on Convergence of Big Data, Supercomputing at TechIgnite appeared first on HPCwire.
For a technology that’s usually characterized as far off and in a distant galaxy, quantum computing has been steadily picking up steam. Just how close real-world applications are depends on whom you talk to and for what kinds of applications. Los Alamos National Lab, for example, has an active application development effort for its D-Wave system and LANL researcher Susan Mniszewski and colleagues have made progress on using the D-Wave machine for aspects of quantum molecular dynamics (QMD) simulations.
At CeBIT this week D-Wave and Volkswagen will discuss their pilot project to monitor and control taxi traffic in Beijing using a hybrid HPC-quantum system – this is on the heels of recent customer upgrade news from D-Wave (more below). Last week IBM announced expanded access to its five-qubit cloud-based quantum developer platform. In early March, researchers from the Google Quantum AI Lab published an excellent commentary in Nature examining real-world opportunities, challenges and timeframes for quantum computing more broadly. Google is also considering making its homegrown quantum capability available through the cloud.
As an overview, the Google commentary provides a great snapshot, noting soberly that challenges such as the lack of solid error correction and the small size (number of qubits) of today’s machines – whether “universal” digital machines like IBM’s or “analog” adiabatic annealing machines like D-Wave’s – have prompted many observers to declare useful quantum computing is still a decade away. Not so fast, says Google.
“This conservative view of quantum computing gives the impression that investors will benefit only in the long term. We contend that short-term returns are possible with the small devices that will emerge within the next five years, even though these will lack full error correction…Heuristic ‘hybrid’ methods that blend quantum and classical approaches could be the foundation for powerful future applications. The recent success of neural networks in machine learning is a good example,” write Masoud Mohseni, Peter Read, and John Martinis (a 2017 HPCwire Person to Watch) and colleagues (Nature, March 8, “Commercialize early quantum technologies”).
The D-Wave/VW project is a good example of a hybrid approach (details to follow) but first here’s a brief summary of recent quantum computing news:
- IBM released a new API and upgraded simulator for modeling circuits up to 20 qubits on its 5-qubit platform. It also announced plans for a software developer kit by mid-year for building “simple” quantum applications. So far, says IBM, its quantum cloud has attracted about 40,000 users, including, for example, the Massachusetts Institute of Technology, which used the cloud service for its online quantum information science course. IBM also noted heavy use of the service by Chinese researchers. (See HPCwire coverage, IBM Touts Hybrid Approach to Quantum Computing)
- D-Wave has been actively extending its development ecosystem (qbsolv (D-Wave) and qmasm (LANL, et al.)) and says researchers have recently been able to simulate a 20,000-qubit system on a 1,000-qubit machine using qbsolv (more below). After announcing a 2,000-qubit machine in the fall, the company has begun deploying them. The first will be for a new customer, Temporal Defense System, and another is planned for the Google/NASA/USRA partnership, which has a 1,000-qubit machine now. D-Wave also just announced Virginia Tech and the Hume Center will begin using D-Wave systems for work on defense and intelligence applications.
- Google’s commentary declares: “We anticipate that, within a few years, well-controlled quantum systems may be able to perform certain tasks much faster than conventional computers based on CMOS (complementary metal oxide–semiconductor) technology. Here we highlight three commercially viable uses for early quantum-computing devices: quantum simulation, quantum-assisted optimization and quantum sampling. Faster computing speeds in these areas would be commercially advantageous in sectors from artificial intelligence to finance and health care.”
Clearly there is a lot going on even at this stage of quantum computing’s development. There’s also been a good deal of wrangling over just what is a quantum computer and the differences between IBM’s “universal” digital approach – essentially a machine able to do anything computers do now – and D-Wave’s adiabatic annealing approach, which is currently intended to solve specific classes of optimization problems.
“They are different kinds of machines. No one has a universal quantum computer now, so you have to look at each case individually for its particular strengths and weaknesses,” explained Martinis to HPCwire. “The D-Wave has minimal quantum coherence (it loses the information exchanged between qubits quite quickly), but makes up for it by having many qubits.”
“The IBM machine is small, but the qubits have quantum coherence enough to do some standard quantum algorithms. Right now it is not powerful, as you can run quantum simulations on classical computers quite easily. But by adding qubits the power will scale up quickly. It has the architecture of a universal machine and has enough quantum coherence to behave like one for very small problems,” Martinis said.
Noteworthy, Google has developed 9-qubit devices that have 3-5x more coherence than IBM, according to Martinis, but they are not on the cloud yet. “We are ready to scale up now, and plan to have this year a ‘quantum supremacy’ device that has to be checked with a supercomputer. We are thinking of offering cloud also, but are more or less waiting until we have a hardware device that gives you more power than a classical simulation.”
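Martinis’s point about waiting until the hardware beats classical simulation has a simple back-of-envelope basis: a full classical simulation of n qubits must store 2^n complex amplitudes, so memory doubles with every added qubit. A rough sketch (illustrative numbers, not figures from the article):

```python
# Back-of-envelope: memory needed to hold the full state vector of an
# n-qubit system on a classical machine: 2**n complex amplitudes at
# 16 bytes each (two 64-bit floats). This is why a machine in the
# mid-40-qubit range is roughly where supercomputer checks of
# "quantum supremacy" experiments become impractical.
def statevector_bytes(n_qubits: int) -> int:
    """Bytes required to store 2**n_qubits double-precision complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (5, 20, 30, 45):
    gib = statevector_bytes(n) / 2**30
    print(f"{n:2d} qubits: {gib:.6g} GiB")
```

At 30 qubits the state vector already needs 16 GiB; at 45 qubits it needs half a petabyte, which is supercomputer territory.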
Quantum supremacy as described in the Google commentary is a term coined by theoretical physicist John Preskill to describe “the ability of a quantum processor to perform, in a short time, a well-defined mathematical task that even the largest classical supercomputers (such as China’s Sunway TaihuLight) would be unable to complete within any reasonable time frame. We predict that, in a few years, an experiment achieving quantum supremacy will be performed.”
For the moment, D-Wave is the only vendor offering near-production machines versus research machines, said Bo Ewald, the company’s ever-cheerful evangelist. He quickly agrees, though, that at least for now there aren’t any production-ready applications. Developing a quantum tool/software ecosystem is a driving focus at D-Wave. The LANL app dev work, though impressive, still represents proto-application development. Nevertheless the ecosystem of tools is growing quickly.
“We have defined a software architecture that has several layers starting at the quantum machine instruction layer where if you want to program in machine language you are certainly welcome to do that; that is kind of the way people had to do it in the early days,” said Ewald.
“The next layer up is if you want to be able to create quantum machine instructions from C or C++ or Python. We have now libraries that run on host machines, regular HPC machines, so you can use those languages to generate programs that run on the D-Wave machine but the challenge that we have faced, that customers have faced, is that our machines had 500 qubits or 1,000 qubits and now 2,000; we know there are problems that are going to consume many more qubits than that,” he said.
For D-Wave systems, qbsolv helps address this problem. It allows a meta-description of the machine and of the problem you want to solve as a quadratic unconstrained binary optimization problem, or QUBO – an intermediate representation. D-Wave then extended this capability to what it calls virtual QUBOs, likening it to virtual memory.
“You can create QUBOs or representations of problems which are much larger than the machine itself and then using combined classical computer and quantum computer techniques we could partition the problem and solve them in chunks and then kind of glue them back together after we solved the D-Wave part. We’ve done that now with the 1,000-qubit machine and run problems that have the equivalent of 20,000 qubits,” said Ewald, adding the new 2,000-qubit machines will handle problems of even greater size using this capability.
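For readers unfamiliar with the QUBO form that qbsolv works with, here is a minimal illustrative sketch – a toy problem invented for this example, not one from the article – solved by classical brute force, which is what an annealer approximates by sampling low-energy states:

```python
import itertools

# A QUBO assigns a cost x^T Q x to each binary vector x.
# Toy problem: "pick exactly one of three options", encoded as
# minimize  -x0 - x1 - x2 + 2*x0*x1 + 2*x0*x2 + 2*x1*x2
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1): 2.0, (0, 2): 2.0, (1, 2): 2.0,     # pairwise penalties
}

def qubo_energy(x, Q):
    """Energy of binary assignment x under QUBO coefficient dict Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustive search over all 2**n assignments; a quantum annealer
# instead samples assignments biased toward low energy.
n = 3
best = min(itertools.product([0, 1], repeat=n), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # a one-hot vector with energy -1.0
```

The minimum-energy assignments are exactly the one-hot vectors, which is the constraint the penalty terms encode; qbsolv’s job is to find such minima for QUBOs far larger than the hardware’s qubit count by partitioning them.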
At LANL, researcher Scott Pakin has developed another tool – a quantum macro assembler for D-Wave systems (QMASM). Ewald said part of the goal of Pakin’s work was to determine, “if you could map gates onto the machine even though we are not a universal or a gate model. You can in fact model gates on our machine and he has started to [create] a library of gates (OR gates, AND gates, NAND gates) and you can assemble those to become macros.”
Pakin said, “My personal research interest has been in making the D-Wave easier to program. I’ve recently built something really nifty on top of QMASM: edif2qmasm, which is my answer to the question: Can one write classical-style code and run it on the D-Wave?”
“For many difficult computational problems, solution verification is simple and fast. The idea behind edif2qmasm is that one can write an ordinary(-ish) program that reports if a proposed solution to a problem is in fact valid. This gets compiled for the D-Wave and then run _backwards_, giving it ‘true’ for the proposed solution being valid and getting back a solution to the difficult computational problem.”
Pakin noted there are many examples on GitHub that provide a feel for the power of this tool.
“For example, mult.v is a simple, one-line multiplier. Run it backwards, and it factors a number, which underlies modern data decryption. In a dozen or so lines of code, circsat.v evaluates a Boolean circuit. Run it backwards, and it tells you what inputs lead to an output of “true”, which is used in areas of artificial intelligence, circuit design, and automatic theorem proving. map-color.v reports if a map is correctly colored with four colors such that no two adjacent regions have the same color. Run it backwards, and it _finds_ such a coloring.
“Although current-generation D-Wave systems are too limited to apply this approach to substantial problems, the trends in system scale and engineering precision indicate that some day we should be able to perform real work on this sort of system. And with the help of tools like edif2qmasm, programmers won’t need an advanced degree to figure out how to write code for it,” he explained.
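Pakin's "write a verifier, run it backwards" idea can be emulated classically. In the hedged sketch below, plain brute-force search stands in for the annealer: we write a checker for a hard problem (here, integer factoring, the mult.v example) and then search for inputs the checker accepts. All function names are invented for illustration; the real edif2qmasm consumes hardware-description code synthesized to EDIF, not Python.

```python
# Forward direction: a fast, easy-to-write validity checker.
# Backward direction: search the input space for anything the checker
# accepts. On a D-Wave the "search" is performed by the hardware; here
# exhaustive enumeration plays that role.
from itertools import product

def is_valid_factoring(a, b, n):
    """Verify that a * b == n with nontrivial factors."""
    return a * b == n and a > 1 and b > 1

def run_backwards(checker, n, bits=4):
    """Find inputs that make the checker return True, if any exist."""
    for a, b in product(range(2 ** bits), repeat=2):
        if checker(a, b, n):
            return a, b
    return None

print(run_backwards(is_valid_factoring, 15))  # (3, 5)
```

The checker is trivial to write and verify; all the difficulty lives in the inversion, which is precisely the part the annealer is meant to accelerate.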
The D-Wave/VW collaboration, just a year or so old, is one of the more interesting quantum computing proof-of-concept efforts because it tackles an optimization problem of a kind that is widespread in everyday life. As described by Ewald, VW CIO Martin Hoffman was making his yearly swing through Silicon Valley and stopped in at D-Wave, where talk turned to the many optimization challenges big automakers face, such as supply logistics, vehicle delivery, and various machine learning tasks, and to the idea of a D-Wave project around one of them. In the end, said Ewald, VW settled on a more driver-facing problem.
It turns out there are about 10,000 taxis in Beijing, said Ewald. Each has a GPS device, and its position is recorded every five seconds. Traffic congestion, of course, is a huge problem in Beijing. The idea was to explore whether it was possible to create an application, running on both traditional computer resources and the D-Wave, to help monitor and guide taxi movement more quickly and effectively.
“Ten thousand taxis on all of the streets in Beijing is way too big for our machine at this point, but they came to this same idea we talked about with qbsolv where you partition problems,” said Ewald. “On the traditional machines VW created a map and grid and subdivided the grid into quadrants and would find the quadrant that was the most red.” That’s red as in long cab waits.
The problem quadrant was then sent to D-Wave to be solved. “We would optimize the flow, basically minimize the wait time for all of the taxis within the quadrant, send that [solution] back to the traditional machine which would then send us the next most red, and we would try to turn it green,” said Ewald.
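The classical outer loop Ewald describes can be sketched in a few lines. Everything below (grid size, wait-time data, the averaging score) is an assumption for illustration, not VW's actual pipeline; the point is just the partitioning pattern, i.e., score each quadrant and hand the worst one to the optimizer.

```python
# Sketch of the classical side of the hybrid loop: grid the city, score
# each quadrant by average cab wait, and pick the "most red" quadrant to
# send to the QPU. Data here is randomly generated for illustration.
import random

random.seed(0)
GRID = 8  # an 8x8 grid of city cells, each holding an average wait (minutes)
waits = [[random.uniform(0, 30) for _ in range(GRID)] for _ in range(GRID)]

def quadrant_score(r0, c0, size):
    """Average wait time over the size x size quadrant anchored at (r0, c0)."""
    cells = [waits[r][c] for r in range(r0, r0 + size)
                         for c in range(c0, c0 + size)]
    return sum(cells) / len(cells)

def most_red_quadrant(size=4):
    """Return the (row, col) origin of the worst-wait ("most red") quadrant."""
    origins = [(r, c) for r in range(0, GRID, size)
                      for c in range(0, GRID, size)]
    return max(origins, key=lambda o: quadrant_score(*o, size))

print(most_red_quadrant())  # this quadrant would go to the quantum solver next
```

After the quantum step returns an optimized assignment for that quadrant, the classical side would re-score the grid and repeat, which is the "send us the next most red" loop described below.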
According to Ewald, VW was able to create the “hybrid” solution relatively quickly and “get what they say are pretty good results.” They have talked about extending the project to predict where traffic jams are going to form and giving drivers perhaps 45 minutes’ warning that there is the potential for a jam at a given intersection. The two companies have a press conference planned this week at CeBIT to showcase the project.
It’s worth emphasizing that the VW/D-Wave exercise is developmental, what Ewald labels a proto application: “But just the fact that they were able to get it running is a great step forward in many ways in that we believe our machine will be used side by side with existing machines, much like GPUs were used in the early days for graphics. In this case VW has demonstrated quite clearly how our machine, our QPU if you will, can be used to help accelerate the work being done on traditional HPC machines.”
Image art, chip diagram: D-Wave
The post Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access appeared first on HPCwire.
Intel Corp. has begun shipping new storage drives based on its 3-D XPoint non-volatile memory technology as it targets data-driven workloads.
Intel’s new Optane solid-state drives, designated P4800X, seek to combine the attributes of memory and storage in the same device. The result is a new “data storage tier” intended to overcome growing storage bottlenecks in datacenters.
Intel said its new SSD based on XPoint (pronounced “cross point”) memory technology would help speed applications through faster caching and storage, while allowing datacenter operators to deploy larger datasets analyzed using large memory pools.
Intel argues that current storage approaches based on DRAM and NAND are contributing to a growing datacenter storage gap, and that storage platforms increasingly need to behave “like system memory.” DRAM is too expensive to scale, while NAND can scale but falls short on datacenter performance.
Hence, the Optane SSD leverages the XPoint approach unveiled in 2015 to boost memory density by as much as ten times compared with conventional memory chips, claim Intel and development partner Micron Technology Inc. Optane, Intel’s first deployment of XPoint memory stacks, is said to deliver a five- to eight-fold boost in performance for “low queue depth workloads.”
Faster caching and storage are also said to boost scaling for individual servers while speeding up latency-sensitive workloads, the company said Monday (March 20).
The first product in the Optane P4800X series comes with 375 GB of storage capacity in the form of an add-in card with both PCI Express and Non-Volatile Memory Express (NVMe) interfaces. Typical latency is rated at less than 10 microseconds.
Intel said its Optane SSDs combine emerging XPoint memory media with its memory controller as well as proprietary interface hardware and software.
Last April, Intel (NASDAQ: INTC) demonstrated Optane SSDs operating at 2 GB/sec. Along with speed improvements, Intel and memory partner Micron (NASDAQ: MU) said last year they hoped to convince potential enterprise customers that XPoint memory platforms are more durable than current NAND flash technology, as well as providing as much as a ten-fold increase in storage density for persistent data compared to DRAM.
In terms of endurance, Intel said Optane could handle up to 30 drive writes per day and up to 12.3 petabytes of written data. Hence, the SSDs target “write-intensive applications such as online transaction processing, high performance computing, write caching and logging,” the chipmaker said.
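The quoted endurance figures are mutually consistent if one assumes a three-year rating period, an assumption on my part since the article does not state one. The standard DWPD-to-total-bytes conversion makes the check easy:

```python
# Endurance arithmetic: DWPD (drive writes per day) to total bytes written.
# The 3-year rating period is an assumption, not stated in the article.
capacity_gb = 375     # Optane P4800X capacity
dwpd = 30             # rated drive writes per day
years = 3             # assumed rating period
total_pb = capacity_gb * dwpd * 365 * years / 1e6  # GB -> PB
print(f"{total_pb:.1f} PB written")  # 12.3 PB, matching Intel's figure
```

375 GB written 30 times a day for three years lands right at the quoted 12.3 PB, which suggests the two numbers come from the same rating.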
As flash storage makes greater inroads in datacenters, Intel’s SSDs based on 3-D XPoint memory technology essentially create a new storage category between flash and DRAM. The chipmaker argues its approach addresses the fundamental computing problem of moving data closer to, and linking it with, CPUs.
“Faster storage is important to computing because computing is done on data, and data is put in storage,” said Robert Crooke, general manager of Intel’s Non-Volatile Memory Solutions Group. “The longer it takes to get to that data, the slower the computing….”
Meanwhile, Micron is targeting its SSDs based on XPoint technology at cloud applications, data analytics, online transaction processing and the Internet of Things.
The memory maker said last summer its Quantx line of SSDs also delivers read latencies at less than 10 microseconds and writes at less than 20 microseconds. That, Micron asserted, is 10 times better than NAND flash-based SSDs.
This article was first published on HPCwire’s sister publication, EnterpriseTech.
The post Intel Ships Drives Based on 3-D XPoint Non-volatile Memory appeared first on HPCwire.
AMSTERDAM, March 20, 2017 — At the occasion of the 25th PRACE Council Meeting in Amsterdam, the PRACE Members ratified a Resolution to proceed with the second phase of their Partnership: PRACE 2. The PRACE 2 programme defines the second period of PRACE from 2017 to 2020. With this agreement, PRACE will strengthen Europe’s position as world-class scientific supercomputing provider, a technology considered a key enabler for knowledge development, scientific research, big data analytics, solving global and societal challenges, and European industrial competitiveness.
In the context of the global HPC race among the USA, Asia, and Europe, in which European countries have decided to compete as allies, the overarching goal of PRACE is to provide a federated European supercomputing infrastructure that is science-driven and globally competitive. It builds on the strengths of European science, providing high-end computing and data analysis resources to drive discoveries and new developments in all areas of science and industry, from fundamental research to applied sciences, including mathematics and computer science, medicine, and engineering, as well as the digital humanities and social sciences. Recently PRACE was confirmed as the only e-Infrastructure on the ESFRI 2016 Roadmap (European Strategy Forum on Research Infrastructures).
“PRACE 2 is a natural next step in the successful pan-European collaboration in HPC. Our ultimate goal is to provide a world-class federated and sustainable HPC and data infrastructure to all researchers in Europe,” said Prof. Dr. Anwar Osseyran, Chair of the PRACE Council.
For the PRACE 2 programme, the PRACE Members have thoroughly discussed and defined the underlying funding model of the Research Infrastructure, based on the contribution of the 5 Hosting Members and the General Partners. The European Commission supports specific PRACE activities via project funding.
The new PRACE 2 programme will help to create a fertile basis for the sustainability of the infrastructure, in order to continue fostering world leading science as well as enabling technology development and industrial competitiveness in Europe through supercomputing. This will be accomplished through:
- Provisioning of a federated world-class Tier-0 supercomputing infrastructure that is architecturally diverse and allows for capability allocations that are competitive with comparable programmes in the USA and in Asia.
- A single, thorough Peer Review Process for resource allocation, exclusively based on scientific excellence of the highest standard.
- Coordinated High-Level Support Teams (HLST) that provide users with support for code enabling and scaling out of scientific applications / methods, as well as for R&D on code refactoring on the Tier-0 systems.
- Implementation actions in the areas of dissemination, industry collaboration, and training, as well as the exploration of future supercomputing technologies that will include additional application enabling investments co-ordinated with the support team efforts.
Hosting Members and General Partners undersigning the PRACE 2 programme will be eligible to apply for Tier-0 resources, provided to the PRACE 2 programme, which are then available to principal investigators from academia and industry in their countries. Scientists from other countries may be invited to contribute to these projects to benefit from these large allocations.
PRACE 2 will award substantially more core hours to larger projects than before, boosting scientific and industrial advancement in Europe. With 5 Hosting Members (France, Germany, Italy, Spain, and Switzerland) the capacity offering is planned to grow to 75 million node hours per year. Resources remain free of charge at the point of usage.
The impact of PRACE 2 is already visible for user communities: In the 14th Call for Proposals for Project Access, PRACE was able to make available 3 times more resources than in previous calls, offering a cumulated peak performance of more than 62 Petaflops in 7 complementary leading edge Tier-0 systems.
“We are very pleased with how the PRACE Members have come together and invested substantial efforts and resources in the project. PRACE 2 will deliver a much needed increase in computational power, and with the new High Level Support Teams we are also establishing a joint computational infrastructure that will strengthen European competitiveness,” said Prof. Erik Lindahl, Chair of the PRACE Scientific Steering Committee.
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-312763 and from the EU’s Horizon 2020 research and innovation programme (2014-2020) under grant agreements 653838 and 730913. For more information, see www.prace-ri.eu.
SUNNYVALE, Calif. & YOKNEAM, Israel, March 20, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today introduced a new line of 100Gb/s silicon photonics components to serve the growing demand of hyperscale Web 2.0 and cloud optical interconnects. The new product line provides module makers with access to a fully qualified portfolio of silicon photonics components and optical engine subassemblies.
“Our customers can now achieve significant time to market advantages for embedded modules and transceivers by using fully-qualified, and cost-effective silicon photonics component and chip sets,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “PSM4 represents the highest volume, most cost effective and flexible building blocks for 100Gb/s for single mode fiber transceivers for data center applications. End customers benefit by having more supplier options and Mellanox benefits by scaling our high-volume silicon photonics products.”
Specifically, Mellanox is announcing the immediate availability of:
- 100Gb/s PSM4 silicon photonics 1550nm transmitter, with flip-chip bonded DFB lasers and attached 1m fiber pigtail, for reaches of 2km
- 100Gb/s PSM4 silicon photonics 1550nm transmitter, with flip-chip bonded DFB lasers with attached fiber stub for connectorized transceivers with reaches of 2km
- Low-power 100Gb/s (4x25G) modulator driver IC
- 100Gb/s PSM4 silicon photonics 1310 and 1550nm receiver array with 1m fiber pigtail
- 100Gb/s PSM4 silicon photonics 1310 and 1550 receiver array for connectorized transceivers
- Low-power 100Gb/s (4x25G) trans-impedance amplifier IC
These components are fully qualified for use in low-cost, electronics-style packaging, ensuring a low-risk, quick time-to-market advantage. Because the Mellanox silicon photonics platform eliminates the need for complex optical alignment of lenses, isolators, and laser subassemblies, customers can scale to high volume manufacturing more easily and quickly than with traditional technologies.
Recently, Mellanox announced that it has shipped more than 100,000 Direct Attach Copper (DAC) cables and more than 200,000 optical transceiver modules for 100Gb/s networks, confirming the market demand for, and its high volume manufacturing leadership in, 100Gb/s interconnect products.
Mellanox will be exhibiting at the Optical Fiber Conference (OFC), March 21-23, at the Los Angeles Convention Center, Los Angeles, CA, booth no. 3715. Mellanox will be showcasing live demonstrations of its 100Gb/s end-to-end switching, network adapter and copper and optical cables and transceivers solutions, including:
- Live 200Gb/s silicon photonics demonstration
- Spectrum SN2700, SN2410 and SN2100 100Gb/s QSFP28/ SFP28 switches
- ConnectX-4 and ConnectX-5 25G/50G/100Gb/s QSFP28/SFP28 network adapters
- LinkX™ 25G/50G/100Gb/s DAC & AOC cables and 100G SR4 & PSM4 transceivers
- New Quantum switches with 40 ports of 200Gb/s QSFP28 in a 1RU chassis
- New ConnectX-6 adapters with two ports of 200Gb/s QSFP28
- Silicon Photonics Optical engines and components
At OFC, the Company will also be demonstrating interoperability of the Mellanox Silicon Photonics 100Gb/s PSM4 with Innolight, AOI, Oclaro, and Hisense transceivers in both the Mellanox booth and in the adjacent Ethernet Alliance booth, no. 1709.
Mellanox Technologies (NASDAQ: MLNX) is a supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, cables and transceivers, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.
The post Mellanox Introduces 100Gb/s Silicon Photonics Line appeared first on HPCwire.
HANNOVER, Germany, March 20, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in compute, storage and networking technologies including green computing, announces participation in the annual CeBIT exhibition being held in Hannover, Germany from March 20 to 24. Supermicro will showcase enterprise and data center solutions in Booth B-66.
This year Supermicro’s showcased products include the new SuperBlade and BigTwin high density computing solutions. The 8U SuperBlade is the newest in Supermicro’s blade systems. The new 8U SuperBlade supports both current and next generation Intel Xeon processor-based blade servers with the fastest 100G EDR InfiniBand and Omni-Path switches for mission critical, enterprise and data center applications. It also leverages the same Ethernet switches, chassis management modules, and software as the successful MicroBlade for improved reliability, serviceability, and affordability. It maximizes the performance and power efficiency with DP and MP processors in half-height and full-height blades, respectively. The new smaller form factor 4U SuperBlade maximizes density and power efficiency while enabling up to 140 dual-processor servers or 280 single-processor servers per 42U rack.
The Supermicro BigTwin is a breakthrough multi-node server system with a multitude of innovations and industry firsts. BigTwin supports maximum system performance and efficiency by delivering 30% better thermal capacity in a compact 2U form-factor enabling solutions with the highest performance processor, memory, storage and I/O. Continuing Supermicro’s NVMe leadership, the BigTwin is the first All-Flash NVMe multi-node system. BigTwin doubles the I/O capacity with three PCI-e 3.0 x16 slots and provides added flexibility with more than 10 networking options including 1GbE, 10G, 25G, 100G Ethernet and InfiniBand with its industry leading SIOM modular interconnect.
Supermicro will also be featuring the newest addition to the company’s NVMe Flash portfolio, supporting Intel’s new Optane SSDs. Supermicro’s industry leading portfolio of 60+ NVMe Flash based solutions with Intel Optane SSDs can deliver up to 11 million write IOPS and 30TB of high performance Optane storage in a 1U form factor.
“We look forward to our participation in CeBIT each year as an opportunity to showcase our industry leading NVMe storage and enterprise computing solutions,” said Charles Liang, President and CEO of Supermicro. “Both our BigTwin and SuperBlade systems are achieving market traction in high-density, energy-conscious data centers.”
Supermicro offers the industry’s most extensive selection of motherboards, servers and storage to support a wide range of markets including DS, DSS, Industrial and Machine Automation, Retail, Transport, Communication and Networking (Security), as well as Warm and Cold Storage.
Key server systems, storage systems, motherboards and Ethernet switches on display this year will include:
- BigTwin (SYS-2028BT-HNR+): 4 dual Intel Xeon processor nodes in a 2U form factor, 24 DIMM slots for up to 3TB of memory, 6 NVMe U.2 drive bays per node, and an SIOM networking card per node
- The Supermicro MicroBlade represents an entirely new type of computing platform. It is a powerful and flexible extreme-density 6U/3U all-in-one total system that features 28/14 hot-swappable MicroBlade Server nodes supporting 28/14 Newest Dual-Node Intel Xeon processor-based UP systems with Intel Xeon processor E3-1200v5 family configurations with up to 2 SSDs/1 HDD per Node.
- NVMe Ultra Server for advanced in-memory computing. The system will include MemX, a high capacity, high performance, working memory and storage solution that offers superior performance at lower acquisition costs compared to traditional DRAM-only memory configurations. MemX uses NVMe-compatible, high performance HGST-branded Ultrastar SN200 family PCIe solid-state drives (SSDs) from Western Digital. The combined solution can deliver up to 11.7 terabytes (TB) of working memory and direct attached storage of 330 TB per 2U Ultra Server. The combination of Ultra Server and MemX is the ideal solution for Cloud Computing, in-Memory database, and big data analytics workloads used by Cloud Service Providers, Hyperscale, and Enterprise deployments
- 7U SuperServer Eight socket R1 (LGA 2011) supports Intel Xeon processor E7-8800 v4/v3 family (up to 24-Core), up to 24TB in 192 DDR4 DIMM slots, up to 15 PCI-E 3.0 slots (8 x16, 7 x8), 4x 10Gb LAN (SIOM), 1 dedicated LAN for IPMI Remote Management, 1 VGA, 2 USB 2.0, 1 COM via KVM, up to 12 Hot-swap 2.5″ SAS3 HDDs (w/ RAID cards), 20x 2.5″ or 6x 3.5″, internal HDDs (w/ RAID cards) (SYS-7088B-TR4FT)
- 4U SuperServer with Dual socket R3 (LGA 2011) supports Intel Xeon processor E5-2600 v4/ v3 family (up to 160W TDP), up to 3TB ECC 3DS LRDIMM, up to DDR4-2400MHz; 24x DIMM slots, 2 PCI-E 3.0 x16, 1 PCI-E 3.0 x8, SIOM for flexible networking options, 60x 3.5″ Hot-swap SAS3/SATA3 drive bays; 2x 2.5″ rear Hot-swap SATA drive bays; optional 6 NVMe bays, LSI 3108 SAS3 HW RAID controller, Server remote management: IPMI 2.0/KVM over LAN / Media over LAN (SSG-6048R-E1CR60N)
- 2U SuperServer with four hot-pluggable system nodes with: Single socket P (LGA 3647) supports Intel Xeon Phi x200 processor, optional integrated Intel Omni-Path fabric, CPU TDP support Up to 260W, up to 384GB ECC LRDIMM, 192GB ECC RDIMM, DDR4-2400MHz in 6 DIMM slots, 2 PCI-E 3.0 x16 (Low-profile) slots, Intel i350 Dual port GbE LAN, 1 Dedicated IPMI LAN port, 3 Hot-swap 3.5″ SATA drive bays, 1 VGA, 2 SuperDOM, 1 COM, 2 USB 3.0 ports (rear) (SYS-5028TK-HTR)
- 1U SuperServer Dual socket R3 (LGA 2011) supports Intel Xeon processor E5-2600 v4/ v3 family; QPI up to 9.6GT/s, up to 3TB ECC 3DS LRDIMM up to DDR4- 2400MHz; 24x DIMM slots, 2 PCI-E 3.0 x8 slots(2 FH 10.5″ L, 1 LP), 4x 10GBase-T ports, 10x 2.5″ SATA (Optional 8x SAS3 ports via AOC) Hot-swap Drive Bays, Diablo Technologies Memory1 Support (SYS-1028U-TR4T+)
- 1U SuperServer with dual socket Intel Xeon processor E5-2600 v4 family (up to 145W TDP), up to 4 co-processors, up to DDR4-2400MHz; 16x DIMM slots, 3 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 Low-profile slot, 2x 10GBase-T LAN via Intel X540, 2x 2.5″ Hot-swap drive bays, 2x 2.5″ internal drive bays (SYS-1028GQ-TXRT)
- 1U SuperServer with dual socket Intel Xeon processor E5-2600v4/v3 family (up to 145W TDP), up to 4 co-processors, up to 512GBECC 3DS LRDIMM , up to DDR4-2400MHz; 16x DIMM slots, 3 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 Low-profile slot, 2x 10GBase-T LAN via Intel X540, 2x 2.5″ Hot-swap drive bays, 2x 2.5″ internal drive bays (SYS-1028GQ-TXRT)
- 2U NVMe Mission Critical Storage Server with 40 Dual port NVMe Omni-Path SIOM support (SSG-2028R-DN2R40L)
- Intel Xeon-D 12-core embedded motherboard (X10SDV-12C-TLN4F): with up to 128GB memory, 6 SATA3 ports, 1 PCI-E 3.0 x16, 1 M.2 PCI-E 3.0 x4, and 2x 10GbE network connectivity
- Intel Xeon-D 4-core embedded motherboard (X10SDV-2C-TP8F): with up to 128GB memory, 2 PCI-E 3.0 x8, 1 M.2 PCI-E 3.0 x4, and 2x 10G SFP+ networking connectivity
- ATOM Motherboard, Intel Atom processor E3940, SoC, FCBGA 1296, up to 8GB Unbuffered non-ECC DDR3-866MHz SO-DIMM in 1 DIMM slot, Dual GbE LAN ports via Intel I210-AT, 1 PCI-E 2.0 x2 (in x8) slot, M.2 PCIe 2.0 x2, M Key 2242/2280, 1 Mini-PCIe with mSATA, 2 SATA3 (6Gbps) via SoC, 4 SATA3 (6Gbps) via Marvel 88SE9230, 1 DP (DisplayPort), 1 HDMI, 1 VGA, 1 eDP (Embedded DisplayPort), 1 Intel HD Graphics, 2 USB 3.0 (2 rear), 7 USB 2.0, (2 rear, 4 via headers, 1 Type A), 3 COM ports (1 rear, 2 headers), 1 SuperDOM, 4-pin 12v DC power connector (A2SAV)
- 1U Top-of-Rack 48x Port 100Gb/s switch (SSH-C48Q) – supports the 100Gbps Intel Omni-Path Architecture (OPA), 48x 100 Gb/s ports – QSFP28, optional RJ45 1G management port and USB serial console port
- 1U SuperSwitch Top of Rack Bare Metal 1/10G Ethernet switch with 48x 1Gbps Ethernet RJ45 ports and 4x SFP+ 10Gbps Ethernet ports (SSE-G3648BR)
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.
The post Supermicro Showcases Enterprise, Datacenter Solutions at CeBIT 2017 appeared first on HPCwire.