Feed aggregator

Mines students and staff featured in AISES's magazine

Colorado School of Mines - Tue, 11/07/2017 - 10:34

Colorado School of Mines students and staff who attended the American Indian Science and Engineering Society's national conference in September were featured in AISES's national conference wrap-up magazine Winds of Change. In addition to describing Mines' graduate school, the publication included a picture of Assistant Dean of Graduate Studies Jahi Simbai and mechanical engineering student Elise Tran and a quote from environmental engineering student Cheyenne Footracer.

From the publication:

Colorado School of Mines is a public research university devoted to engineering and applied science with a curriculum and research program geared toward responsible stewardship of the earth and its resources. It provides elevated opportunities for Native college students through an impactful scholarship experience designed to support their success.

"This is my second conference. I enjoy attending because there is a sense of unity with tribes from all over the nation, which is often difficult to come by when walking around campus." - Cheyenne Footrace, student at Colorado School of Mines.

Categories: Partner News

Small-scale gold mining project featured in State Magazine

Colorado School of Mines - Tue, 11/07/2017 - 09:34

State Magazine, a monthly publication of the U.S. Department of State, recently featured a photo of a Colorado School of Mines research project as part of an article about the U.S. Embassy’s efforts to tackle illegal gold mining in Peru.

Nicole Smith, a cultural anthropologist and assistant professor of mining engineering, is the principal investigator on the State Department-funded project, which is working with artisanal and small-scale gold miners in Peru to implement cleaner and safer ore-processing technologies.

Categories: Partner News

SC17: AI and Machine Learning are Central to Computational Attack on Cancer

HPC Wire - Tue, 11/07/2017 - 08:48

Enlisting computational technologies in the war on cancer isn’t new, but it has taken on an increasingly decisive role. At SC17, Eric Stahlberg, director of the HPC Initiative at Frederick National Laboratory for Cancer Research in the Data Science and Information Technology Program, and two colleagues will lead the third Computational Approaches for Cancer workshop, being held Friday, Nov. 17, at SC17.

It is hard to overstate the importance of computation in today’s pursuit of precision medicine. Given the diversity and size of datasets, it’s also not surprising that the “new kids” on the HPC cancer-fighting block – AI and deep learning/machine learning – are also becoming the big kids on the block, promising to significantly accelerate efforts to understand and integrate biomedical data to develop and inform new treatments.

Eric Stahlberg

In this Q&A, Stahlberg discusses the goals of the workshop, the growing importance of AI/deep learning in biomedical research, how programs such as the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C) are progressing, the need for algorithm assurance and portability, as well as ongoing needs where HPC technology has perhaps fallen short. The co-organizers of the workshop include Patricia Kovatch, Associate Dean for Scientific Computing at the Icahn School of Medicine at Mount Sinai and the Co-Director for the Master of Science in Biomedical Informatics, and Thomas Barr, Imaging Manager, Biomedical Imaging Team, Research Institute at Nationwide Children’s Hospital.

HPCwire: Maybe set the framework of the workshop with an overview of its goals. What are you and your fellow participants trying to achieve and what will be the output?

Eric Stahlberg: Great question. Cancer is an extremely complex disease – with hundreds of distinct classifications. The scale of the challenge is such that it requires a team approach to make progress. Now in its third year, the workshop continues to provide a venue that brings together communities and individuals from all interests and backgrounds to work together to impact cancer with HPC and computational methods. The workshops continue to be organized to help share information and updates on new capabilities, technologies and opportunities involving computational approaches, with an eye on seeing new collaborative efforts develop around cancer.

The first year of the workshop in 2015 was somewhat remarkable. The HPC community attending SC has had a long history of supporting the cancer research community in many ways, yet an opportunity to bring the community together had not yet materialized. The original intent was simply to provide a venue for those with an interest in cancer to share ideas, bring focus to potential priorities and look ahead at what might be possible. The timing was incredible, with the launch of the National Strategic Computing Initiative in July opening up a whole new realm of possibilities and ideas.

By the time of the workshop last year, many of these possibilities started to gain traction – the Cancer Moonshot Initiative providing a strong and motivating context to get started, accelerate and make progress rapidly. Many new efforts were just getting off the ground, creating huge potential for collaboration – with established efforts employing computational approaches blending with new initiatives being launched as part of the Cancer Moonshot.

The workshop this year continues to build on the direction and success of the first two workshops. Similar to the first two workshops, speakers are being invited to help inform on opportunities for large scale computing in cancer research. This year’s workshop will feature Dr. Shannon Hughes from the NCI Division of Cancer Biology delivering a keynote presentation highlighting many efforts at the NCI where HPC can make a difference, particularly in the area of cancer systems biology. In addition, this year’s workshop brings a special emphasis to the role of machine learning in cancer – a tremendously exciting area, while also providing an opportunity to update the HPC community on the progress of collaborative efforts of the NCI working with the Department of Energy.

The response to the call for papers this year was also very strong – reflecting the rapidly growing role of HPC in accelerating cancer research. In addition to a report compiling summaries of the workshop contributions, highlighting key issues, and identifying new areas of exploration and collaboration, the organizers are anticipating a special journal issue where these contributions can be shared in full.

HPCwire: In the fight against cancer, where has computational technology had the greatest impact so far and what kinds of computational infrastructure have been the key enablers? Where has its application been disappointing?

Stahlberg: Computational technology has been part of the cancer fight for many years, providing many meaningful contributions along the way to advance understanding, provide insight, and deliver new diagnostic capabilities. One can find computational technology at work in many areas including imaging systems, in computational models used in cancer drug discovery, in assembling, mapping and analyzing genomes that enable molecular level understanding, even in the information systems used to manage and share information about cancer patients.

The question of where computational technology has been most disappointing in the fight against cancer is also difficult to answer, given the breadth of areas where computational technology has been employed. However, given the near-term critical need to enable greater access to clinical data, including patient history and outcomes, an area where great promise remains is in the use of computational technologies that make it possible for more patient information to be brought together more fully, safely and securely to accelerate progress in the direction of clinical impact.

HPCwire: What are the key computational gaps in the fight against cancer today? How do you see them being addressed and what particular technologies, life science and computational, are expected to have the greatest impact?

Stahlberg: The computational landscape for cancer is changing so rapidly that the key gaps also continue to change. Not too long ago, there was great concern about the amount of data being generated and whether this data could all be used effectively. Within just a few years, the mindset has changed significantly: the challenge is now focused on bringing together what data we have available and recognizing that, even then, the complexity of the disease demands even more [data] as we aim for more precision in diagnosis, prognosis, and associated treatments.

With that said, computational gaps exist in nearly every domain and area of the cancer fight. At the clinical level, computation holds promise to help bring together, even virtually, the large amounts of existing data locked away in organizational silos. There are also important gaps in the data available for cancer patients pre- and post-treatment, gaps that may well be filled both by better data integration capabilities and by mobile health monitoring. Understanding disease progression and response presents a further gap, as well as an associated opportunity for computing and life science to work together to find efficient ways to monitor patient progress, track and monitor the disease at the cellular level, and create profiles of how different treatments impact the disease over time in the laboratory and in the clinic.

It is clear that technologies in two areas will be key in the near term – machine learning and technologies that enable algorithm portability.

In the near term, machine learning is expected to play a very significant role, particularly as the amount of data generated is expected to grow. We have already seen progress in automating feature identification as well as delivering approaches for predictive models for complex data. As the amount of available, reliable cancer data across all scales increases, the opportunity for machine learning to accelerate insight and leverage these new levels of information will continue to grow tremendously.

A second area of impact, related to the first, is in technologies for algorithm assurance and portability. As computing technology has become increasingly integrated within instruments and diagnostics at the point of acquisition, exploding the volume of data collected, the need to move algorithms closer to the point of acquisition and enable processing before transport grows tremendously. The need for consistency and repeatability in the scientific research process requires portability and assurance of the analysis workflows. Portability and assurance of implementation are also important keys to eventual success in a clinical setting.

Efforts to deliver portable workflows through containers are also demonstrating great promise in moving the compute to the data, and are providing an initial means of overcoming existing organizational barriers to data access.

Extending a bit further, technologies that enable portability of algorithms and of trained predictive models will also become keys to future success for HPC in cancer research. As new ways emerge to encapsulate knowledge in the form of a trained neural network, a parameterized set of equations, or other predictive models, having reliable, portable knowledge will be a key factor in sharing insight and building the collective body of knowledge needed to accelerate cancer research.

HPCwire: While we are talking about infrastructure, could you provide a picture of the HPC resources that NIH/NCI have, whether they are sufficient as is, and what the plans are for expanding them?

Biowulf phase three

Stahlberg: The HPC resource map for the NIH and NCI, like that of many large organizations, ranges from small servers to one of the largest systems created to support biological and health computation. There has been wonderful recent growth in the HPC resources available to NIH and NCI investigators as part of a major NIH investment. The Biowulf team has done a fantastic job in raising the level of computing to now include more than 90,000 processors. This configuration includes an expanded role for heterogeneous technologies in the form of GPUs, and presents a new level of computing capability available to NIH and NCI investigators.

In addition, NCI supports a large number of investigators through its multiple grant programs, where large-scale HPC resources supported by NSF and others are being put to use in the war on cancer. While the specific details on the magnitude of computing this represents are not immediately available, the breadth of this level of support is expected to be quite substantial. At the annual meeting earlier this year of CASC (the Coalition for Academic Scientific Computing), when asked which centers were supporting NCI-funded investigators, nearly every attendee raised their hand.

Looking ahead, it would be difficult to make the case that even this level of HPC resources will be sufficient as is, given the dramatic increases in the amount of data being generated currently and forecast for the future, and the deepening complexity of cancer as new insights are revealed on an ongoing basis. With the emphasis on precision medicine and, in the case of NCI, precision oncology, new opportunities to accelerate research and insight using HPC are quickly emerging.

Looking to the future of HPC and large-scale computing in cancer research was one of the many factors supporting the new collaborative effort between the NCI and DOE. With the collaboration now having wrapped up its first year, new insights are being provided that will be merged with additional insights and information from the many existing efforts to help inform future planning for HPC resources in the context of emerging exascale computing capabilities and emerging HPC technologies.

HPCwire: Interdisciplinary expertise is increasingly important in medicine. One persistent issue has been the relative lack of computational expertise among clinicians and life science researchers. To what extent is this changing and what steps are needed to raise computational expertise among this group?

Stahlberg: Medicine has long been a team effort, drawing from many disciplines and abilities to deliver care to the patient. There have been long-established centers in computational and mathematical aspects of medicine. One such example is the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research, which has been at the forefront of computational applications in medicine for more than twenty-five years. The difference today is the breadth of disciplines that are working together, and the depth of demand for computational scientists, as the sources and volumes of available data in medicine and the life sciences have exploded. The apparent shortage of computational expertise among clinicians and life sciences researchers is largely a result of the rapid rate of change in these areas, where the workforce and training have yet to catch up to the accelerating pace of technology and data-driven innovation.

Fortunately, many have recognized the need and opportunity for cross-disciplinary experience in computational and data sciences to enable ongoing advances in medicine. This appreciation has led to many new academic and training programs supported by NIH and NCI, as well as many catalyzed by health organizations and universities themselves, that will help fill future demand.

Collaborative opportunities between computational scientists and the medical research community are helping fill the immediate needs. One such example is the Joint Design of Advanced Computing Solutions for Cancer, a collaboration between the National Cancer Institute and the Department of Energy, which brings together world-class computational scientists and world-class cancer scientists in shared efforts to advance mission aims in both cancer and exascale computing by pushing the limits of each together.

More organizations, seeing similar opportunities for cross-disciplinary collaboration in medicine, will certainly be needed to address the near-term demand while existing computational and data science programs adapt to embrace the medical and life sciences, and new programs begin to deliver the cross-trained, interdisciplinary workforce for the future.

HPCwire: Deep learning and artificial intelligence are the big buzzwords in advanced scale computing today. Indeed, the CANDLE program’s efforts to learn how to apply deep learning in areas such as simulation (the RAS effort), pre-clinical therapy evaluation, and outcome data mining are good examples. How do you see deep learning and AI being used near term and long term in the war on cancer?

Stahlberg: While AI has a long history of application in medicine and the life sciences, the opportunities for deep learning-based AI in the war on cancer are just starting to be developed. As you mention, the applications of deep learning in the JDACS4C pilots involving molecular simulation, pre-clinical treatment prediction, and outcome modeling are just beginning to define the frontier of how this technology can be applied to accelerate the war on cancer. The CANDLE Exascale Computing project, led by Argonne National Laboratory, was formed out of the recognition that AI, and deep learning in particular, was intrinsic to each pilot and had broad potential application across the cancer research space. The specific areas being explored by the three pilot efforts as part of the CANDLE project provide some insight into how deep learning and AI can be expected to have future impact in the war on cancer.

The pilot collaboration on cancer surveillance (pilot 3), led by investigators from the NCI Division of Cancer Control and Population Science and Oak Ridge National Laboratory, is demonstrating how deep learning can be applied to extract information from complex data, in this case extracting biomarker information from electronic pathology reports. Similar capabilities have been shown to be possible with the processing of image information. Joined with automation, in the near term, deep learning can be expected to deepen and broaden the available insight about the cancer patient population in ways not otherwise possible.

The pilot collaboration on RAS-related cancers (pilot 2), led by investigators from Frederick National Laboratory and Lawrence Livermore National Laboratory, follows in this direction, applying deep learning to extract and correlate features of potential interest from complex molecular interaction data.

The pilot collaboration on predictive cancer models (pilot 1), led by investigators from Frederick National Laboratory and Argonne National Laboratory, is using deep learning-based AI in a different manner: to develop predictive models of tumor response. While still very early, the potential use of deep learning for the development of predictive models in cancer is very exciting, opening doors to many new avenues to develop a ‘cancer learning system’ that will join data, prediction, and feedback in a learning loop that holds potential to revolutionize how we prevent, detect, diagnose, and treat cancer.

In an era that combines the scale of big data, exascale computing and deep learning, new levels of understanding about the data are also possible and extremely valuable. Uncertainty quantification, or UQ, led by scientists at Los Alamos National Laboratory, is also an important element of the NCI collaboration with the DOE. By providing key insights into the limits of the data and of the models, UQ is helping to inform new approaches and priorities to improve both the robustness of the data and the models being employed.

These are just a few of the near-term areas where deep learning and AI are anticipated to have an impact. Looking long-term, the role of these technologies is difficult to forecast, but in drawing parallels from other disciplines, some specific areas begin to emerge.

First, for making predictions on complex systems such as cancer, ensemble and multi-model approaches are likely to be increasingly required to build consensus among likely outcomes across a range of initial conditions and parameters. Deep learning is likely to be used both in representing the complex systems being modeled and in informing the selections and choices involved in the ensembles. In a second future scenario, data-driven deep learning models may also form a future basis for portably representing knowledge about cancer, particularly in recognition of the complexity of the data and the ongoing need to maintain data provenance and security. Deep learning models may be readily developed with locally accessible datasets, then shared with the community without sharing the actual data.

As a third future scenario, in translation to critical use scenarios, as core algorithms for research or central applications in the clinic, deep learning models provide a means for maintaining consistency, validation and verification in translation from research to clinical setting.
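As a toy illustration of the ensemble idea in the first scenario above, the C sketch below runs a small family of simple growth "models" over a range of initial conditions and parameters and reports the mean prediction and its spread as a crude consensus and uncertainty estimate. It is a hypothetical example written for this article, not code from CANDLE or the JDACS4C pilots, and the exponential-growth model is purely illustrative.

```c
/* Toy ensemble sketch: evaluate a family of simple models over a range of
 * initial conditions and parameters, then report the mean prediction and its
 * spread as a crude consensus/uncertainty estimate. Hypothetical illustration
 * only; not code from CANDLE or the JDACS4C pilots. */
#include <stdio.h>
#include <math.h>

#define N_MODELS 5
#define N_INIT   4

/* One "model": exponential growth of a tumor-volume-like quantity. */
static double predict(double v0, double rate, double t)
{
    return v0 * exp(rate * t);
}

int main(void)
{
    const double rates[N_MODELS] = { 0.08, 0.09, 0.10, 0.11, 0.12 };
    const double v0s[N_INIT]     = { 0.9, 1.0, 1.1, 1.2 };
    const double t = 10.0;               /* forecast horizon */

    double sum = 0.0, sumsq = 0.0;
    int n = 0;

    /* Sweep the ensemble: every model variant times every initial condition. */
    for (int m = 0; m < N_MODELS; m++) {
        for (int i = 0; i < N_INIT; i++) {
            double p = predict(v0s[i], rates[m], t);
            sum   += p;
            sumsq += p * p;
            n++;
        }
    }

    double mean = sum / n;
    double var  = sumsq / n - mean * mean;

    printf("ensemble mean prediction: %.3f  spread (std dev): %.3f\n",
           mean, sqrt(var));
    return 0;
}
```

The spread across ensemble members plays the role of the consensus-building and uncertainty measure described above; real efforts would replace the toy model with trained networks or mechanistic simulations.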

Anton 1 supercomputer specialized for life sciences modeling and simulation

HPCwire: One area that has perhaps been disappointing is predictive biology, and not just in cancer. Efforts to ‘build’ drugs starting from first principles, as is done in building modern jetliners, or even from experimental data elucidating various pathways, have had mixed results. Leaving aside things like structure scoring (docking, etc.), where is predictive biology headed in terms of fighting cancer, what is the sense of the needed technology requirements, for example specialized supercomputers such as Anton 1 and 2, and what is the sense of the basic knowledge needed to plug into predictive models?

Stahlberg: Biology has been a challenge for computational prediction given the overall complexity of the system and the role that subtle changes and effects can have in the overall outcome. The potential disappointment that may be assigned to predictive biology is most likely relative – relative to what has been demonstrated in other disciplines such as transportation.

This is what makes the current era so very promising for accelerating progress on cancer and predictive biology. A sustained effort employing the lessons learned from other industries, where it is now increasingly possible to make the critical observations of biology at the fundamental level of the cell, combined with the computing capabilities that are rapidly becoming available, sets the stage for transforming predictive biology in a manner observed in parallel industries. Two elements of that transformation highlight the future direction.

First, the future for predictive biology is likely to be multi-scale, both in time and space, where models for subsystems are developed, integrated and accelerated computationally to support and inform predictions across multiple scales and unique biological environments, and ultimately for increasingly precise predictions for defined groups of individuals.  Given the multi-scale nature of biology itself, the direction is not too surprising. The challenge is in getting there.

One of the compelling features for deep learning in the biological domain is in its flexibility and applicability across the range of scales of interest. While not a substitute for fundamental understanding, deep learning enables a first step to support a predictive perspective for the complexity of data available in biology. This first step enables active learning approaches, where data is used to develop predictive models that are progressively improved with new data and biological insight obtained from experimentation aimed at reducing uncertainty around the prediction.

A critical area of need already identified is more longitudinal observations and data, which would provide both greater insight into outcomes for patients and greater insight into the incremental changes of biological state over time at all scales. In the near term, by starting with the data we currently have available, advances will be made that help inform the data required for improved predictions, defining the information truly needed for confident predictions.

The role of specialized technologies will be critical, particularly in the context of predictive models, as the size, complexity and number of subsystems are studied and explored to align predictions across scales. These specialized technologies will be at the forefront of efficient implementations of predictive models, increasing the speed and reducing the costs required to study and inform decisions for increasingly precise and predictive oncology.

Brief Bio:
Eric Stahlberg is director of the HPC Initiative at Frederick National Laboratory for Cancer Research in the Data Science and Information Technology Program. In this role he also leads HPC strategy and exploratory computing efforts for the National Cancer Institute Center for Biomedical Informatics and Information Technology (CBIIT). Dr. Stahlberg also spearheads collaborative efforts between the National Cancer Institute and the US Department of Energy, including the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C), the CANcer Distributed Learning Environment (CANDLE), and Accelerating Therapeutics for Opportunities in Medicine (ATOM). Prior to joining Frederick National Laboratory, he directed an innovative undergraduate program in computational science, led efforts in workforce development, led HPC initiatives in bioinformatics, and led multiple state and nationally funded projects.

The post SC17: AI and Machine Learning are Central to Computational Attack on Cancer appeared first on HPCwire.

AWS Announces Availability of C5 Instances for Amazon EC2

HPC Wire - Tue, 11/07/2017 - 07:57

SEATTLE, Nov. 7, 2017 — Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ:AMZN), announced the availability of C5 instances, the next generation of compute optimized instances for Amazon Elastic Compute Cloud (Amazon EC2). Designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding, C5 instances feature 3.0 GHz Intel Xeon Scalable processors (Skylake-SP), up to 72 vCPUs, and 144 GiB of memory—twice the vCPUs and memory of previous generation C4 instances—providing the best price-performance of any Amazon EC2 instance. To get started with C5 instances, visit: https://aws.amazon.com/ec2/instance-types/c5/.

Optimized to deliver the right combination of CPU, memory, storage, and networking capacity for a wide range of workloads, all latest generation Amazon EC2 instance families—including C5—feature AWS hardware acceleration that delivers consistent, high performance, low latency networking and storage resources. C5 instances provide networking through the Elastic Network Adapter (ENA), a scalable network interface built by AWS to provide direct access to its networking hardware. Additional dedicated hardware and network bandwidth for Amazon Elastic Block Store (Amazon EBS) enables C5 instances to offer high performance storage through the scalable NVM Express (NVMe) interface. C5 instances introduce a new, lightweight hypervisor that allows applications to use practically all of the compute and memory resources of a server, delivering reduced cost and even better performance. C5 instances are available in six sizes—with the four smallest instance sizes offering substantially more Amazon EBS and network bandwidth than the previous generation of compute optimized instances.

“Customers have been happily using Amazon EC2’s unmatched selection of instances for more than 11 years, yet they’ll always take higher and more consistent performance if it could be offered in a cost-effective way. One of the challenges in taking this next step is how to leverage the cost efficiency of virtualization while consuming hardly any overhead for it,” said Matt Garman, Vice President, Amazon EC2, AWS. “We’ve been working on an innovative way to do this that comes to fruition with Amazon EC2 C5 instances. Equipped with our new cloud-optimized hypervisor, C5 instances set a new standard for consistent, high-performance cloud computing, eliminating practically any virtualization overhead through custom AWS hardware, and delivering a 25 percent improvement in compute price-performance over C4 instances—with some customers reporting improvements of well over 50 percent.”

Netflix is the world’s leading internet television network with 104 million members in over 190 countries enjoying more than 125 million hours of TV shows and movies per day. “In our testing, we saw significant performance improvement on Amazon EC2 C5, with up to a 140 percent performance improvement in industry standard CPU benchmarks over C4,” said Amer Ather, Cloud Performance Architect at Netflix. “The 15 percent price reduction in C5 will deliver a compelling price-performance improvement over C4.”

iPromote provides digital advertising solutions to 40,000 small and medium-sized businesses (SMBs). “iPromote processes billions of ad serving bid transactions every day,” said Matt Silva, COO at iPromote. “During testing, C5 instances improved our application’s request execution time by over 50 percent and significantly improved our network performance overall.”

Grail is a life sciences company whose mission is to detect cancer early, when it can be cured. “Our platform processes a huge amount of DNA sequencing data to detect faint tumor DNA signals in a sea of background noise,” said Cos Nicolaou, Head of Technology at Grail. “We are eager to migrate onto the AVX-512 enabled c5.18xlarge instance size. With this change, we expect to decrease the processing time of some of our key workloads by more than 30 percent.”

Alces Flight Compute makes it easy for researchers to spin up High Performance Computing (HPC) clusters of any size on AWS. “With the support for AVX-512, the new c5.18xlarge instance provides a 200 percent improvement in FLOPS compared to the largest C4 instance,” said Wil Mayers, Director of Research and Development for Alces. “This will reduce the execution time of the scientific models that our customers run on the Alces Flight platform. The larger c5.18xlarge size with 72 vCPUs reduces the number of instances in the cluster, and has a direct benefit for our user base on both price and performance dimensions.”

Rescale enables customers in the aerospace, automotive, life sciences and energy sectors to run utility supercomputers using AWS. “C5 fully supports NVMe and is ideal for the I/O intensive HPC workloads seen on Rescale’s ScaleX® platform,” Ryan Kaneshiro, Chief Architect at Rescale, said. “C5’s higher clock speed and AVX-512 instruction set will allow our customers to run their CAE simulations significantly faster than on C4 instances.”

Customers can purchase Amazon EC2 C5 instances as On-demand, Reserved, or Spot instances. C5 instances are generally available today in the US East (N. Virginia), US West (Oregon), and EU (Ireland) regions, with support for additional regions coming soon. They are available in six sizes with 2, 4, 8, 16, 36, and 72 vCPUs.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid, and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

Source: Amazon

The post AWS Announces Availability of C5 Instances for Amazon EC2 appeared first on HPCwire.

Cosmos Code Helps Probe Space Oddities

HPC Wire - Tue, 11/07/2017 - 07:52

Nov. 7, 2017 — Black holes make for a great space mystery. They’re so massive that nothing, not even light, can escape a black hole once it gets close enough. A great mystery for scientists is that there’s evidence of powerful jets of electrons and protons that shoot out of the top and bottom of some black holes. Yet no one knows how these jets form.

Computer code called Cosmos now fuels supercomputer simulations of black hole jets and is starting to reveal the mysteries of black holes and other space oddities.

Cosmos code simulates wide-ranging astrophysical phenomena. Shown here is a multi-physics simulation of an Active Galactic Nucleus (AGN) jet colliding with and triggering star formation within an intergalactic gas cloud (red indicates jet material, blue is neutral Hydrogen [H I] gas, and green is cold, molecular Hydrogen [H_2] gas). (Chris Fragile)

“Cosmos, the root of the name, came from the fact that the code was originally designed to do cosmology. It’s morphed into doing a broad range of astrophysics,” explained Chris Fragile, a professor in the Physics and Astronomy Department of the College of Charleston. Fragile helped develop the Cosmos code in 2005 while working as a post-doctoral researcher at the Lawrence Livermore National Laboratory (LLNL), along with Steven Murray (LLNL) and Peter Anninos (LLNL).

Fragile pointed out that Cosmos provides astrophysicists an advantage because it has stayed at the forefront of general relativistic magnetohydrodynamics (MHD). MHD simulations, which model the magnetism of electrically conducting fluids such as black hole jets, add a layer of understanding but are notoriously difficult for even the fastest supercomputers.

“The other area that Cosmos has always had some advantage in as well is that it has a lot of physics packages in it,” continued Fragile. “This was Peter Anninos’ initial motivation, in that he wanted one computational tool where he could put in everything he had worked on over the years.” Fragile listed some of the packages that include chemistry, nuclear burning, Newtonian gravity, relativistic gravity, and even radiation and radiative cooling. “It’s a fairly unique combination,” Fragile said.

The current iteration of the code is CosmosDG, which utilizes discontinuous Galerkin methods. “You take the physical domain that you want to simulate,” explained Fragile, “and you break it up into a bunch of little, tiny computational cells, or zones. You’re basically solving the equations of fluid dynamics in each of those zones.” CosmosDG has allowed a much higher order of accuracy than ever before, according to results published in the Astrophysical Journal, August 2017.

“We were able to demonstrate that we achieved many orders of magnitude more accurate solutions in that same number of computational zones,” stated Fragile. “So, particularly in scenarios where you need very accurate solutions, CosmosDG may be a way to get that with less computational expense than we would have had to use with previous methods.”
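To make the zone picture concrete, the minimal C sketch below advects a density pulse across a 1D grid of computational zones using a first-order finite-volume update. It only illustrates the "update every zone each time step" structure Fragile describes; it is not CosmosDG code, which uses high-order discontinuous Galerkin elements and full general relativistic MHD.

```c
/* Minimal 1D finite-volume sketch: the domain is split into NZ zones and a
 * conserved quantity is updated in every zone each time step. This is a
 * first-order upwind advection solver, not the high-order discontinuous
 * Galerkin scheme used in CosmosDG; it only illustrates the zone-update idea. */
#include <stdio.h>
#include <math.h>

#define NZ 200            /* number of computational zones */

int main(void)
{
    double rho[NZ], rho_new[NZ];
    const double dx = 1.0 / NZ;      /* zone width                  */
    const double v  = 1.0;           /* constant advection speed    */
    const double dt = 0.5 * dx / v;  /* time step obeying CFL limit */

    /* Initial condition: a Gaussian density pulse. */
    for (int i = 0; i < NZ; i++) {
        double x = (i + 0.5) * dx;
        rho[i] = exp(-100.0 * (x - 0.5) * (x - 0.5));
    }

    /* Advance 400 steps; each step updates every zone from its upwind
     * neighbour (periodic boundaries). */
    for (int n = 0; n < 400; n++) {
        for (int i = 0; i < NZ; i++) {
            int im1 = (i + NZ - 1) % NZ;   /* upwind zone */
            rho_new[i] = rho[i] - v * dt / dx * (rho[i] - rho[im1]);
        }
        for (int i = 0; i < NZ; i++)
            rho[i] = rho_new[i];
    }

    /* Report total mass, which the scheme conserves to round-off. */
    double mass = 0.0;
    for (int i = 0; i < NZ; i++)
        mass += rho[i] * dx;
    printf("total mass after advection: %.6f\n", mass);
    return 0;
}
```

Compiled with, for example, cc -O2 advect.c -lm, the pulse translates around the periodic domain (with some numerical smearing) while total mass is conserved, which is the basic property production codes like CosmosDG preserve at far higher accuracy.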

XSEDE ECSS Helps Cosmos Develop

Since 2008, the Texas Advanced Computing Center (TACC) has provided computational resources for the development of the Cosmos code—about 6.5 million supercomputer core hours on the Ranger system and 3.6 million core hours on the Stampede system. XSEDE, the eXtreme Science and Engineering Discovery Environment funded by the National Science Foundation, awarded Fragile’s group with the allocation.

“I can’t praise enough how meaningful the XSEDE resources are,” Fragile said. “The science that I do wouldn’t be possible without resources like that. That’s a scale of resources that certainly a small institution like mine could never support. The fact that we have these national-level resources enables a huge amount of science that just wouldn’t get done otherwise.”

And the fact is that busy scientists can sometimes use a hand with their code. In addition to access, XSEDE also provides a pool of experts through the Extended Collaborative Support Services (ECSS) effort to help researchers take full advantage of some of the world’s most powerful supercomputers.

Fragile has recently enlisted the help of XSEDE ECSS to optimize the CosmosDG code for Stampede2, a supercomputer capable of 18 petaflops and the flagship of TACC at The University of Texas at Austin. Stampede2 features 4,200 Knights Landing (KNL) nodes and 1,736 Intel Xeon Skylake nodes.

Taking Advantage of Knights Landing and Stampede2

The manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance, according to Damon McDougall, a research associate at TACC and also at the Institute for Computational Engineering and Sciences, UT Austin. Each Stampede2 KNL node has 68 cores, with four hardware threads per core. That’s a lot of moving pieces to coordinate.

“This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”

Through ECSS, McDougall has helped Fragile optimize CosmosDG for Stampede2. “We promote a certain type of parallelism, called hybrid parallelism, where you might mix Message Passing Interface (MPI) protocols, which is a way of passing messages between compute nodes, and OpenMP, which is a way of communicating on a single compute node,” McDougall said. “Mixing those two parallel paradigms is something that we encourage for these types of architectures. That’s the type of advice we can help give and help scientists to implement on Stampede2 though the ECSS program.”

“By reducing how much communication you need to do,” Fragile said, “that’s one of the ideas of where the gains are going to come from on Stampede2. But it does mean a bit of work for legacy codes like ours that were not built to use OpenMP. We’re having to retrofit our code to include some OpenMP calls. That’s one of the things Damon has been helping us try to make this transition as smoothly as possible.”
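For readers unfamiliar with the hybrid model McDougall describes, here is a generic MPI plus OpenMP skeleton in C, not CosmosDG source: MPI ranks split the zones across nodes, OpenMP threads share each rank's local work, and a single MPI reduction per rank replaces per-thread communication. It assumes an MPI library and an OpenMP-capable compiler, e.g. building with mpicc -fopenmp.

```c
/* Hybrid MPI + OpenMP skeleton: MPI distributes zones across ranks (between
 * nodes), OpenMP threads share the work on each rank's local zones (within a
 * node). Generic illustration only, not CosmosDG code. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define ZONES_PER_RANK 100000

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask for an MPI threading level compatible with OpenMP regions. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *zone = malloc(ZONES_PER_RANK * sizeof(double));
    double local_sum = 0.0, global_sum = 0.0;

    /* Intra-node parallelism: OpenMP threads update this rank's zones. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < ZONES_PER_RANK; i++) {
        zone[i] = (double)(rank * ZONES_PER_RANK + i);
        local_sum += zone[i];
    }

    /* Inter-node parallelism: one MPI message per rank instead of one per
     * thread, which is the communication saving hybrid codes aim for. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global sum=%g\n",
               nranks, omp_get_max_threads(), global_sum);

    free(zone);
    MPI_Finalize();
    return 0;
}
```

On manycore nodes such as Stampede2's KNLs, a common layout is a few ranks per node with many threads each, with OMP_NUM_THREADS and the rank count tuned experimentally for the application.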

McDougall described the ECSS work so far with CosmosDG as “very nascent and ongoing,” with much initial work sleuthing memory allocation ‘hot spots’ where the code slows down.

“One of the things that Damon McDougall has really been helpful with is helping us make the codes more efficient and helping us use the XSEDE resources more efficiently so that we can do even more science with the level of resources that we’re being provided,” Fragile added.

Black Hole Wobble

Some of the science Fragile and colleagues have already done with the help of the Cosmos code involves accretion, the fall of molecular gases and space debris into a black hole. Black hole accretion powers its jets. “One of the things I guess I’m most famous for is studying accretion disks where the disk is tilted,” explained Fragile.

Black holes spin. And so does the disk of gases and debris that surrounds them and falls in. However, they spin on different axes of rotation. “We were the first people to study cases where the axis of rotation of the disk is not aligned with the axis of rotation of the black hole,” Fragile said. General relativity shows that rotating bodies can exert a torque on other rotating bodies that are not aligned with them.
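For a rough sense of the effect, the standard leading-order (Lense-Thirring) expression for this frame dragging, quoted here as general textbook background rather than from the article, gives the precession rate of a tilted circular orbit of radius $r$ around a black hole with angular momentum $J$ as

\[ \Omega_{\mathrm{LT}} \simeq \frac{2GJ}{c^{2}r^{3}}, \]

so the induced wobble is strongest for material orbiting close to the hole and falls off steeply with distance.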

Fragile’s simulations showed the black hole wobbles, a movement called precession, from the torque of the spinning accretion disk. “The really interesting thing is that over the last five years or so, observers—the people who actually use telescopes to study black hole systems—have seen evidence that the disks might actually be doing this precession that we first showed in our simulations,” Fragile said.

Fragile and colleagues use the Cosmos code to study other space oddities such as tidal disruption events, which happen when a molecular cloud or star passes close enough that a black hole shreds it. Other examples include Minkowski’s Object, where Cosmos simulations support observations that a black hole jet collides with a molecular cloud to trigger star formation.

Golden Age of Astronomy and Computing

“We’re living in a golden age of astronomy,” Fragile said, referring to the wealth of knowledge generated by space telescopes, from Hubble to the upcoming James Webb Space Telescope, by land-based telescopes such as Keck, and more.

Computing has helped support the success of astronomy, Fragile said. “What we do in modern-day astronomy couldn’t be done without computers,” he concluded. “The simulations that I do are two-fold. They’re to help us better understand the complex physics behind astrophysical phenomena. But they’re also to help us interpret and predict observations that either have been, can be, or will be made in astronomy.”

Source: Texas Advanced Computing Center

The post Cosmos Code Helps Probe Space Oddities appeared first on HPCwire.

Supercomputer Tour

University of Colorado Boulder - Mon, 11/06/2017 - 14:48
Categories: Partner News

Lockheed Martin and Mines create software academy

Colorado School of Mines - Mon, 11/06/2017 - 14:13

Lockheed Martin and Colorado School of Mines have partnered in a unique opportunity for software and radio frequency engineering students.

The Lockheed Martin Software Academy, which just completed its pilot program, selects a handful of students from Mines who commit to the rigors of an actual position at Lockheed Martin while getting paid and receiving school credit. Students spend their spring semester at Lockheed Martin Space Systems Company in Littleton working on Orion, NASA’s first spacecraft designed for long-duration, human-rated deep space exploration, in system and software optimization jobs, focusing on EM-2 (Exploration Mission 2) but also working to improve EM-1 (Exploration Mission 1). Upon completion of the spring semester, students enter Lockheed Martin’s 12-week summer internship program. At the end of the summer, they produce a report detailing performance improvements, successes, challenges, and end results.

Upon completion of the pilot program earlier this year, Mines and Lockheed Martin signed a formal agreement. Lockheed Martin Software Academy seed money covers a Lockheed Martin lab and conference room at the university and Lockheed Martin mentoring for Mines students during the program.

Qualified and well-trained software and radio frequency engineers are in high demand, particularly for the next phase of Orion. Mines can provide a pipeline of this niche skillset to fill these positions.  There is tremendous growth in the aerospace industry, particularly with Colorado being second per capita in the nation for aerospace jobs. Almost 7 percent of Mines students go into aerospace post-graduation, and Lockheed Martin hopes to benefit from that statistic.

"Lockheed Martin has provided our students with an opportunity to make a meaningful difference in an area that their passions lie in.  As faculty, we are excited to see our students' work put into production and used in real world applications,” said Jeff Paone, faculty advisor for the program and associate professor of computer science at Mines.

Computer science student Izaak Sulka said, "When I was younger, I always expected that working on a space program would be exciting, although I never really imagined that I could be involved in one. While working on Orion, I've learned much more than I thought possible about software and software development. Younger me was right about half the time – it is beyond exciting.”

CONTACT
Anica Wong, Communications Specialist, Colorado School of Mines Foundation | 303-273-3904 | acwong@mines.edu
Allison Sharpe, Sr. Communications Rep, Lockheed Martin Communications | 720-842-6425 | allison.n.sharpe@lmco.com
 

Categories: Partner News

Explore global cuisine, culture at International Day Nov. 18

Colorado School of Mines - Mon, 11/06/2017 - 12:15

Colorado School of Mines has long attracted the best and brightest STEM students from around the world. Today, more than 70 countries are represented on campus – a fact that will be celebrated Nov. 18 at International Day, a fun-filled evening of global cuisine, cultural exhibits and performances. 

Hosted by the International Student Council and International Office, International Day is one of the biggest campus events of the year. The global celebration kicks off at 4:30 p.m. in the Green Center’s Friedhoff Hall with food samples from nearly two dozen countries – try Egyptian koshari, Malaysian chicken satay, French crepes and more. Food tickets are $1 each, with most samples ranging from 1 to 4 tickets. Cultural exhibits will also be set up around the hall, showcasing unique customs, artwork, clothing and artifacts.

The festivities will continue at 7 p.m. with a free culture show in the Green Center’s Bunker Auditorium. Student performances will include traditional music, fashion shows and dancing. Both the food sampling and culture show are open to the public. 

About 700 international students are currently enrolled at Mines for undergraduate and graduate studies, representing 11.5 percent of the total student body. 


Colorado School of Mines International Day

WHEN: 4:30-9 p.m. Saturday, Nov. 18
4:30-6:30 p.m. Food sampling
7-9 p.m. Culture show
WHERE: Green Center, 924 16th St., Golden
COST: Culture show, free; $1 per food ticket

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Faculty honored by Society of Exploration Geophysicists

Colorado School of Mines - Mon, 11/06/2017 - 09:48

Two Colorado School of Mines professors were among those honored at the Society of Exploration Geophysicists’ annual meeting, held Sept. 24-29 in Houston.

Petroleum Engineering Professor Manika Prasad received the Virgil Kauffman Gold Medal, awarded to a person who “has made an outstanding contribution to the advancement of the science of geophysical exploration as manifested during the previous five years.”

Prasad was cited for her extensive experimental work in rock physics. According to SEG, prominent members of the profession expressed support for the award for her study of seismic wave propagation in complex rocks, particularly her recent work with shale rocks.

“Manika’s scientific contributions to geophysics have led to advances in conventional and unconventional petroleum reservoir characterization, exploration and production,” according to SEG’s citation. “The Virgil Kauffman Gold Medal will recognize Manika’s status and record as a role model for men and women within SEG and across the greater geophysics community.”

Prasad joined Mines in 2004 and is director of the Center for Rock Abuse and the Physics of Organics, Carbonates, Clays, Sands and Shales (OCLASSH) consortium.

Geophysics Professor Yaoguo Li was also recognized, one of two people awarded Honorary Membership by SEG. The award is conferred upon those who “have made a distinguished contribution, which warrants exceptional recognition, to exploration geophysics or a related field or the advancement of the profession of exploration geophysics through service to the Society.”

“His record of service to SEG includes serving many years as an associate editor for Geophysics, working as a primary organizer for the first two well-attended and highly successful gravity, magnetic and electromagnetic SEG workshops in China, and serving as a SEAM board member for three years,” the SEG citation reads.

Li joined Mines in 1999 and leads the Center for Gravity, Electrical and Magnetic Studies.

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Oklahoma State University Selects Advanced Clustering to Build New HPC Cluster

HPC Wire - Mon, 11/06/2017 - 09:26

KANSAS CITY, Mo., Nov. 6, 2017 — Oklahoma State University has selected Advanced Clustering Technologies to build and install its newest supercomputer, which will support computing- and data-intensive research across a broad range of Science, Technology, Engineering and Mathematics (STEM) disciplines.

The new supercomputer, which will be named after the university’s mascot, Pistol Pete, will serve as a campus-wide shared resource, available at no charge to all OSU faculty, staff, postdocs, graduate students and undergraduates, as well as to researchers and educators across Oklahoma.

“Pistol Pete will make it possible for us to continue to meet the ever-growing demand for computational and data-intensive research and education at OSU and across Oklahoma,” said Dr. Dana Brunson, Assistant Vice President for Research Cyberinfrastructure and Director of the OSU High Performance Computing Center as well as an Adjunct Associate Professor in the Mathematics and Computer Science departments. “This new cluster allows us to extend the mission of our High Performance Computing Center, which is to put advanced technology in the hands of our faculty, staff and students more quickly and less expensively, and with greater certainty of success.”

Dr. Brunson is principal investigator of the grant proposal, Acquisition of Shared High Performance Compute Cluster for Multidisciplinary Computational and Data-Intensive Research, that was awarded a $951,570 Major Research Instrumentation grant by the National Science Foundation.

The new cluster is powered by Intel® Xeon® Scalable Processors (previously known as Skylake). Pistol Pete consists of more than 5,300 compute cores in Advanced Clustering’s innovative ACTblade x110 systems. Along with the large compute capacity, a high-speed Lustre storage system and a low-latency 100Gb/s Omni-Path fabric are part of the turn-key cluster being delivered.

“We are proud to be working with Dr. Brunson and Oklahoma State University to provide this new high performance computing resource,” said Kyle Sheumaker, President of Advanced Clustering Technologies, which specializes in providing turn-key HPC resources to universities, government agencies and enterprises across the country. Advanced Clustering also built the university’s previous cluster, called Cowboy, which at its deployment in 2012 was a 3,048-core cluster using servers based on the Intel Xeon processor E5 family. Cowboy gave the center nine times the capacity of its previous cluster in only two times the physical space.

Pistol Pete will support research in more than 30 departments and many collaborating research teams, including 129 faculty and more than 740 postdocs, graduate students and undergraduates. Research topics include bioinformatics, biomolecular model development; crop modeling, environment and ecosystem modeling; classical and quantum calculations of liquids, proteins, interfaces and reactions; design and discovery of organic semiconductor materials; cybersecurity and social network modeling; commutative algebra; graph-based data mining; renewable energy research; seismology; sociopolitical landscape modeling; and high energy and medical physics.

About the High Performance Computing Center at Oklahoma State University
The High Performance Computing Center (HPCC) facilitates computational and data-intensive research across a wide variety of disciplines. HPCC provides Oklahoma State University students, faculty and staff with cyberinfrastructure resources, cloud services, education and training, bioinformatics assistance, proposal support and collaboration. Learn more about HPCC resources at https://hpcc.okstate.edu/.

About Advanced Clustering Technologies 

Advanced Clustering Technologies has been building customized, turn-key high performance computing clusters, servers and workstations since the company’s founding in 2001. The company specializes in developing campus-wide supercomputers, which when fully utilized can help maintain and grow a university’s competitive edge by attracting researchers and students from around the globe. Advanced Clustering’s knowledgeable team of HPC experts provide phone and email support for the lifetime of your system. For more information, visit advancedclustering.com.

Source: Advanced Clustering Technologies

The post Oklahoma State University Selects Advanced Clustering to Build New HPC Cluster appeared first on HPCwire.

Mellanox Announces Innova-2 FPGA-Based Programmable Adapter Family

HPC Wire - Mon, 11/06/2017 - 07:45

SUNNYVALE, Calif. & YOKNEAM, Israel, Nov. 6, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the Innova-2 product family of FPGA-based smart network adapters. Innova-2 is the industry leading programmable adapter designed for a wide range of applications, including security, cloud, Big Data, deep learning, NFV and high performance computing.

Innova-2 is available in multiple configurations, either open for customers’ specific applications or pre-programmed for security applications with encryption acceleration such as IPsec, TLS/SSL and more. For security applications, Innova-2 delivers 6X higher performance while reducing total cost of ownership by 10X when compared to alternative options. For Cloud infrastructures, Innova-2 enables SDN and virtualized acceleration and offloads. Deep learning training and inferencing applications will be able to achieve higher performance and better system utilization by offloading algorithms into Innova-2 FPGA and the ConnectX acceleration engines.

Innova-2 is based on an efficient combination of the state-of-the-art ConnectX-5 25/40/50/100Gb/s Ethernet and InfiniBand network adapter with Xilinx UltraScale FPGA accelerator. Innova-2 adapters deliver best-of-breed network and storage capabilities as well as hardware offloads to CPU-intensive applications.

“The Innova-2 product line brings new levels of acceleration to Mellanox intelligent interconnect solutions,” said Gilad Shainer, vice president of Marketing, Mellanox Technologies. “We are pleased to equip our customers with new capabilities to develop their own innovative ideas, whether related to security, big-data analytics, deep learning training and inferencing, cloud and other applications. The solution allows our customers to achieve unprecedented performance and flexibility for the most demanding market needs.”

The Innova-2 family of dual-port Ethernet and InfiniBand network adapters supports network speeds of 10, 25, 40, 50 and 100Gb/s, while the PCIe Gen4 and OpenCAPI (Coherent Accelerator Processor Interface) host connections offer low-latency and high-bandwidth. Innova-2 allows flexible usage models, with transparent accelerations using Bump-in-the-Wire or Look-Aside architectures. The solution fits any server with its standard PCIe card form factor (Half Height, Half Length), enabling a wide variety of deployments in modern data centers.

“We are happy to continue our deep collaboration efforts with Mellanox and to deliver optimized compute platforms to our mutual customers,” said Brad McCredie, vice president, Cognitive Systems Development, IBM. “Innova-2 will enable us to maximize the performance of our cloud, deep learning and other platforms by utilizing Mellanox acceleration engines and the ability to connect to our industry leading OpenCAPI CPU interfaces.”

“At Secunet Security Networks AG, we specialize in bringing high-performance security solutions to our very demanding government and commercial customers,” said Dr. Kai Martius, CTO of Secunet. “The Innova family of adapters is an ideal element to our network security products, delivering protocol-sensitive accelerated crypto processing to our offerings.”

“Xilinx is pleased that our All Programmable UltraScale FPGAs are accelerating Mellanox’s Innova network adaptors,” said Manish Muthal, vice president of Data Center Business at Xilinx. “Our combined technology enables the rapid deployment of customized acceleration for emerging data center and high performance computing workloads.”

Mellanox will be showcasing Innova-2 adapters at the OpenStack Summit, Nov 6-8, booth B20, Sydney, Australia, and at SC17, Nov 13-16, booth # 653, at the Colorado Convention Center.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Announces Innova-2 FPGA-Based Programmable Adapter Family appeared first on HPCwire.

Trends in HPC and Machine Learning Drive Evolution of Cooling Solutions

HPC Wire - Mon, 11/06/2017 - 01:01

As seen at ISC17 and as will be seen at SC17, the application of HPC in finance, logistics, manufacturing, big science and oil & gas continues to expand into areas of traditional enterprise computing often tied to the exploitation of Big Data. It is clear that all of these segments are using (or planning to use) machine learning and AI, resulting in architectures that are very HPC-like.

The physical implementation of these systems requires a greater focus on heat capture and rejection due to the wattage trends in the CPUs, GPUs and emerging neural chips required to meet accelerating computational demands in HPC-style clusters. The resulting heat, and its impact on node, rack and cluster density, is evident with Intel's Knights Landing and Knights Mill, Nvidia's P100 and the Platinum versions of the latest Intel Skylake processors.

Wattages are now high enough that cooling nodes containing these highest-performance chips leaves little choice other than liquid cooling if reasonable rack densities are to be maintained and sustained compute sessions are to run without throttling or down-clocking of the compute resources. If the heat is not addressed at the node level with liquid cooling, floor-space build-outs or data center expansions become necessary. Even more importantly, reducing node and rack densities can increase interconnect distances between all types of cluster nodes.
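
The density argument can be sketched with simple arithmetic; the per-node wattage and per-rack cooling budgets below are assumptions for illustration, not figures from Asetek or any vendor.

    # Illustrative rack-density arithmetic (all wattage figures are assumptions).
    NODE_WATTS = 1200                 # assumed 1U node packed with high-wattage CPUs/GPUs
    AIR_COOLED_RACK_BUDGET_W = 15000  # assumed heat an air-cooled rack can reject
    LIQUID_RACK_BUDGET_W = 45000      # assumed budget with direct-to-chip liquid cooling

    air_nodes = AIR_COOLED_RACK_BUDGET_W // NODE_WATTS
    liquid_nodes = LIQUID_RACK_BUDGET_W // NODE_WATTS
    print(f"Air-cooled rack:    {air_nodes} nodes (~{air_nodes * NODE_WATTS / 1000:.1f} kW)")
    print(f"Liquid-cooled rack: {liquid_nodes} nodes (~{liquid_nodes * NODE_WATTS / 1000:.1f} kW)")

Under these assumed numbers, liquid cooling supports roughly three times as many high-wattage nodes per rack.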

 

Asetek RackCDU D2C™ Cooling

These developments are a direct result of a wattage inflection and not simply an extension of trends seen previously. Depending on the approach taken, machine learning and AI exacerbate this trend. Heat and wattage issues seen with GPUs during the training or learning phase of an AI application (especially one using a deep learning/neural network approach) are now well known. In some cases, these issues continue into application rollout if GPUs are applied there as well.

Even if the architecture uses quasi-GPUs like Knights Mill in the training phase (via “basic” machine learning or deep learning, followed by a handoff to scale-out CPUs like Skylake for actual usage), the issues of wattage, density and cooling remain. And it isn't getting any better.

With distributed cooling's ability to address site needs in a variety of heat-rejection scenarios, it can be argued that this compute wattage inflection point is a major driver of the accelerating global adoption of Asetek liquid cooling at HPC sites and by the OEMs that serve them. And as will be shown at SC17, quite a few of the liquid-cooled nodes OEMs are showing are targeted at machine learning.

Given the variety of clusters (especially with the entrance of AI), the adaptability of the cooling approach becomes quite important. Asetek's distributed pumping architecture is based on low-pressure, redundant pumps and closed-loop liquid cooling within each server node, allowing a high level of flexibility in heat capture and heat rejection.

Asetek ServerLSL™ is a server-level liquid assisted air cooling (LAAC) solution. It can be used as a transitional stage in the introduction of liquid cooling or as a tool to enable the immediate incorporation of the highest-performance computing nodes into the data center. ServerLSL allows the site to leverage existing HVAC, CRAC and CRAH units with no changes to data center cooling. ServerLSL replaces less efficient air coolers in the servers with redundant coolers (cold plates/pumps) and exhausts 100% of the captured heat into the data center as hot air via heat exchangers (HEXs) in each server. This enables high-wattage server nodes to have 1U form factors and maintain high cluster rack densities. At the site level, the heat is handled by existing CRACs and chillers with no changes to the infrastructure. With ServerLSL, liquid-cooled nodes can be mixed in racks with traditional air-cooled nodes.

Asetek ServerLSL™ Cooling

While ServerLSL isolates the system within each server, Asetek RackCDU systems are rack-level focused, enabling a much greater impact on cooling costs of the datacenter overall. RackCDU systems leverage the same pumps and coolers used with ServerLSL nodes. RackCDU is in use by all of the current sites in the TOP500 using Asetek liquid cooling.

Asetek RackCDU provides the answer both at the node level and for the facility overall. As with ServerLSL, RackCDU D2C (Direct-to-Chip) utilizes redundant pumps/cold plates atop server CPUs & GPUs (and optionally other high-wattage components like memory), but the collected heat is moved via a sealed liquid path to heat exchangers in the RackCDU for transfer into facilities water. RackCDU D2C captures between 60% and 80% of server heat into liquid, reducing data center cooling costs by over 50% and allowing 2.5x-5x increases in data center server density.
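
Using the 60-80% capture range above, the split between facility water and room air is straightforward to estimate; the 30 kW rack load below is an assumed figure for illustration.

    # Split of rack heat between facility liquid and room air with RackCDU D2C.
    # The 60-80% capture range comes from the article; the 30 kW rack is assumed.
    RACK_LOAD_KW = 30.0
    for capture in (0.60, 0.80):
        to_liquid = RACK_LOAD_KW * capture
        to_air = RACK_LOAD_KW - to_liquid
        print(f"{capture:.0%} capture: {to_liquid:.1f} kW to facility water, "
              f"{to_air:.1f} kW left for existing CRAC/HVAC")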

The remaining heat in the data center air is removed by existing HVAC systems in this hybrid liquid/air approach. When there is unused cooling capacity available, data centers may choose to cool facilities water coming from the RackCDU with existing CRAC and cooling towers.

The high level of flexibility in addressing cooling at the server, rack, cluster and site levels provided by Asetek distributed pumping is lacking in approaches that utilize centralized pumping. Asetek’s approach continues to deliver flexibility in the areas of heat capture, coolant distribution and heat rejection.

At SC17, Asetek will also have on display a new cooling technology in which servers share a rack-mounted HEX. Servers using this shared-HEX approach can continue to be used if the site later moves to RackCDU.

To learn more about Asetek liquid cooling, stop by booth 1625 at SC17 in Denver.

Appointments for in-depth discussions about Asetek’s data center liquid cooling solutions at SC17 may be scheduled by sending an email to questions@asetek.com.

 

The post Trends in HPC and Machine Learning Drive Evolution of Cooling Solutions appeared first on HPCwire.

Arctur Offers Access to HPC infrastructure with HPC Challenge

HPC Wire - Fri, 11/03/2017 - 09:42

Nov. 3, 2017 — ARCTUR has announced that €350,000 of ARCTUR HPC resources will be granted to the most interesting applications for the “Be innovative! HPC CHALLENGE” competition. The competition is open for applications between November 6th and December 1st 2017. ARCTUR will identify the winning applications by mid-December and the winning projects are expected to start in January of 2018.

The goal of the competition is to encourage small and medium enterprises (among others) to scale up their computer simulations and modeling by using ARCTUR HPC infrastructure, to write an interesting use case that demonstrates the benefits of HPC, and to become strategic partners of the company.

Applicants must submit a simulation or computer modeling idea addressing a real use case and demonstrating a high potential benefit from using ARCTUR HPC infrastructure.

The applications will be ranked based on these key selection criteria:

  • An application addressing a real use case
  • The HPC viability and scaling up of the case
  • Added benefits expected by using HPC
  • Applicants’ experience in using HPC (or GRID) computing infrastructures
  • Repeatability

Out of all the received applications, ARCTUR will select up to 30 cases that will receive access to the Arctur-2 HPC infrastructure. Depending on how an application is ranked, ARCTUR will provide one of three subsidy levels:

  1. Top 10 applications: Free access
  2. Second best 10 applications: 50% subsidy
  3. Third best 10 applications: 25% subsidy

Included HPC Services

Each application is expected to use up to €20,000 in HPC and cloud resources or in hours of support. Usage will be limited to a maximum of three months.
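
For reference, the subsidy tiers and the €20,000 per-project cap line up with the announced €350,000 pool, as this quick check shows.

    # Cross-check of the announced €350,000 pool against the subsidy tiers:
    # 10 applications each at 100%, 50% and 25% of the €20,000 per-project cap.
    PER_PROJECT_CAP_EUR = 20_000
    TIER_SHARES = (1.00, 0.50, 0.25)

    total = sum(10 * PER_PROJECT_CAP_EUR * share for share in TIER_SHARES)
    print(f"Maximum subsidised value across 30 projects: €{total:,.0f}")  # €350,000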

ARCTUR supports the use of HPC within the framework of the Fortissimo, CloudFlow and CAxMan European projects.

The competition details are published here: https://arctur.si/hpc-and-cloud/hpc-challenge.

Arctur team photo

About ARCTUR

ARCTUR d.o.o. is a privately owned enterprise with its own HPC infrastructure located in Nova Gorica, Slovenia, Europe. Arctur's On-Demand HPC gives true computing power and pay-as-you-go pricing. The compute environment includes a complete HPC software stack, a non-virtualized compute cluster, GPUs, high-memory nodes, high-performance storage, and expert support. It is the leading service provider of supercomputing in Europe.

Časnik Finance and Manager Magazine recognized Arctur Silver as one of the Top 50 information technology companies in Slovenia. The company was awarded the IDC Innovation Excellence Award for Outstanding Application of HPC in 2014 (Pipistrel) and 2016 (Ergolines).

The company has extensive experience in server virtualization and deployment, integration of disparate IT systems, IT support of project management and server farm leverage. It is a pioneer in adapting HPC services to the needs of small and medium enterprises (SME).

Source: ARCTUR

The post Arctur Offers Access to HPC infrastructure with HPC Challenge appeared first on HPCwire.

Student Cluster Competition Plus Gambling? Count me in!

HPC Wire - Fri, 11/03/2017 - 09:27

We all love HPC, right? We love college students too, particularly undergraduates who are striving to learn the ins and outs of HPC systems and applications – like in the SC17 Student Cluster Competition. It's a fantastic combination.

But is it possible to make this event even more exciting? Yes, it is. Add gambling to the mix.

I’ve put together an SC17 Student Cluster Competition online betting pool. You get a virtual $1,000 to bet on any team you’d like, or even spread your bet among several teams. I’ll be updating the odds in future articles, just so you know how the field is shaping up. Bet wisely and have fun!!

Here’s the link to the betting pool.

The post Student Cluster Competition Plus Gambling? Count me in! appeared first on HPCwire.

SC17 Opening Plenary Announced: The Era of Smart Cities

HPC Wire - Fri, 11/03/2017 - 07:58

DENVER, Nov. 3, 2017 — SC17 has announced that its opening plenary will be “The Era of Smart Cities — Reimagining Urban Life with HPC.”

When: Monday, November 13, 2017 @ 5:30 p.m. MT, Denver, Colorado Convention Center

Access: Open to anyone with a SC17 conference badge

What: The 21st Century is frequently referenced as the “Century of the City,” reflecting the unprecedented global migration into urban areas. Urban areas are driving and being affected by factors ranging from economics to health to energy to climate change. The notion of a “smart” city is one that recognizes the use and influence of technology in cities. It is the wave of the future only made possible by high performance computing (HPC).

This expert plenary panel will discuss emerging needs and opportunities suggesting an increasing role for HPC in cities, with perspectives from city government, planning and design, and embedded urban HPC systems.

Why: HPC is already playing a key role in helping cities pursue objectives of societal safety, efficient use of resources, and an overall better quality of life. Intelligent devices enabled with HPC “at the edge” have potential to optimize energy generation and delivery, emergency response or the flow of goods and services, and to allow urban infrastructures to adapt autonomously to changes and events such as severe storms or traffic congestion. Smart city technology can even improve food safety inspections and help identify children most at risk for lead poisoning.

HPC is supporting the creation of breakthrough computational models to make all of this possible. Hear from industry experts who are pioneers in making these life-changing realities happen today.

Moderator: Charlie Catlett, Director, Urban Center for Computation & Data, Argonne National Laboratory

Panelists: Debra Lam, Managing Director for Smart Cities & Inclusive Innovation, Georgia Tech; Michael Mattmiller, Chief Technology Officer, the City of Seattle; Pete Beckman, Co-Director, Northwestern Argonne Institute of Science and Engineering, Argonne National Laboratory

Source: SC17

The post SC17 Opening Plenary Announced: The Era of Smart Cities appeared first on HPCwire.

SC17 Cluster Competition Teams Unveiled!

HPC Wire - Thu, 11/02/2017 - 16:54

As we draw nearer to SC17, tensions are rising with each passing day. Sixteen teams of university undergrads are putting the final touches on their clusters and preparing for the grueling test that is the SC17 Student Cluster Competition.

Nothing comes easy in the student clustering game, and each team will have to be at the top of their game in order to have a chance for the SC17 Student Cluster Crown (although there’s not an actual crown.)

Millions of fans will be tracking the competition as the teams vie to turn in the best system performance on a suite of HPC benchmarks and applications – all while staying under the 3,000 watt power cap. For those of you who might not be familiar with how these competitions work, check out our overview here.
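
To get a feel for what a 3,000-watt cap means in practice, here is a rough sizing sketch; the per-node and per-accelerator draws are assumptions for illustration, not measurements from any team.

    # Rough feasibility check for a student cluster under the 3,000-watt cap.
    # Per-component draws are illustrative assumptions, not measured values.
    POWER_CAP_W = 3000
    NODE_BASE_W = 120     # assumed draw per node (board, memory, fans)
    ACCEL_W = 300         # assumed draw per accelerator under load
    ACCELS_PER_NODE = 2

    def cluster_draw(nodes: int) -> int:
        """Estimated worst-case draw for a given number of dual-accelerator nodes."""
        return nodes * (NODE_BASE_W + ACCELS_PER_NODE * ACCEL_W)

    for nodes in range(1, 7):
        draw = cluster_draw(nodes)
        status = "OK" if draw <= POWER_CAP_W else "over the cap"
        print(f"{nodes} nodes -> {draw} W ({status})")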

I recorded a webcast with Stephen Harrell, the chair of the cluster competition this year and an all-around Student Cluster Competition guru. In the video, we introduce the teams that will be competing this year and discuss their various approaches to the competition.

 

We’re going to be all over the competition this year, covering it through articles, videos, and perhaps even an interpretive dance or off-Broadway musical. Stay tuned for more.

The post SC17 Cluster Competition Teams Unveiled! appeared first on HPCwire.

SCinet at SC17: Meet the World’s Fastest and Most Powerful Temporary Network

HPC Wire - Thu, 11/02/2017 - 16:22

Nov. 2 — Most high-speed, high-bandwidth networks consist of fiber optic cable that is permanently installed in protective walls, ceilings and flooring, or buried underground in thick, waterproof conduit. But building the world’s fastest and most powerful temporary network from the ground up each year for the SC conference necessitates a different approach.

In the days leading up to SC17, more than 180 volunteers will gather in Denver to set up SCinet, the high-capacity network that supports the revolutionary applications and experiments that are a hallmark of the SC conference. SCinet takes one year to plan, and those efforts culminate in a month-long period of staging, setup and operation of the network during the conference.

SCinet volunteers temporarily installed more than 100 miles of optical fiber in the Salt Palace Convention Center in less than one week in preparation for SC16 in Salt Lake City. Carpet and masonite hardboard were placed over the fiber to protect it from heavy equipment and the foot traffic of more than 11,000 attendees and exhibitors.

Miles of fiber will be temporarily taped to the cement floors and hung from the rafters of the Colorado Convention Center. Although carpet and rigid masonite will cover the fiber on the ground, it first will be exposed to the traffic of dozens of exhibitor carts, forklifts and other utility vehicles that pass nearby and sometimes over it via temporary thresholds as exhibits are constructed on the show floor.

Rebecca Hutchinson and Annette Kitajima are SCinet volunteers who co-chair the fiber team. Hutchinson said the need for on-site fiber repairs on the ground at SC varies greatly from year to year.

“In my eight years on the fiber team, we’ve had at least one conference where no repairs were required from setup to teardown,” Hutchinson said. “Another year, we were repairing fiber in the middle of the show floor as the exhibits opened on the second day.”

The fiber team uses fusion splicing (using heat to fuse the ends of two optical fibers), section replacement and hand termination to repair connections as quickly as possible, so disruptions are minimal.

When the conference closes on November 17, all of that fiber will be removed in less than 24 hours.

“Even if the fiber makes it through an SC unscathed, re-spooling at the end of the show takes its toll, with the inherent twists and kinks that come with a speedy teardown,” Kitajima said.

Members of the team will test all of the fiber during SCinet’s annual inventory meeting in the spring. Some of it will be damaged beyond repair, but most of it will be redeployed next year at SC18 in Dallas, just as most of the fiber in Denver is being reused from SC16 in Salt Lake City. This is part of SCinet’s sustainability strategy – recovering, refurbishing and reusing available resources whenever possible.

Brad Miller and Jerry Beck of the Utah Education and Telehealth Network are fiber experts and SCinet volunteers who spent additional time during the summer and fall testing and repairing a batch of used fiber from past SCs. Miller said that of the 84 reels the duo has tested, 24 have been refurbished. That amounts to 28 percent or 1.28 miles.

Beck starts the refurbishing process with an EXFO OTDR, or Optical Time Domain Reflectometer, to determine how well a given reel of used fiber will transmit light. The reflectometer sends pulses of light down the fiber and times the reflections to identify flaws and pinpoint “pretty much exactly where problems will be.” In most cases, replacing the ends of the cable resolves the issue.
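
The idea behind the measurement is simple: the instrument times a reflected pulse and converts the round-trip time into a distance using the speed of light in glass. A minimal sketch, assuming a typical single-mode group index of 1.468:

    # Convert an OTDR round-trip reflection time into a distance to the event.
    # The group index of 1.468 is a typical single-mode value (an assumption here).
    C_VACUUM_M_PER_S = 299_792_458
    GROUP_INDEX = 1.468

    def fault_distance_m(round_trip_seconds: float) -> float:
        """One-way distance from the OTDR to the reflection, in meters."""
        return (C_VACUUM_M_PER_S / GROUP_INDEX) * round_trip_seconds / 2

    # Example: a reflection arriving 1 microsecond after the pulse was launched
    print(f"{fault_distance_m(1e-6):.1f} m to the event")  # roughly 102 m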

To attach new connectors he uses a Swift F1 cable splicer. It expertly strips protective insulation and exposes fiber thinner than human hair, and then aligns, fuses and reinsulates it.

“Sometimes you can be doing splicing with this machine and a hair will fall into the splice area and it will indicate that it’s a fiber and will say your fiber isn’t clean,” Beck said. “All these fibers are measured in microns. It takes the actual fiber cores and lines them up so close that it tells me that the clad is three microns or three millionths of an inch.”  Beck said the work does involve an element of danger: Tiny unshielded fiber fragments can easily pierce fingers and are particularly painful if accidentally rubbed into an eye.

Watch Brad Miller and Jerry Beck demonstrate the fiber refurbishing process in this 3:44 video: https://youtu.be/FEls2MKUIBY

Source: Brian Ban, SC

The post SCinet at SC17: Meet the World’s Fastest and Most Powerful Temporary Network appeared first on HPCwire.

ACSCI at SC17: Students Explore Immigration Through a Big Data Lens

HPC Wire - Thu, 11/02/2017 - 15:36

Nov. 2 — Supercomputers have helped scientists discover merging black holes and design new nanomaterials, but can they help solve society’s most challenging policy issues?

At the International Conference for High Performance Computing, Networking, Storage and Analysis (also known as Supercomputing 2017 or SC17) in Denver, Colorado, from Nov. 12 to Nov. 15, undergraduate and graduate students from diverse disciplines and backgrounds will learn how to use advanced computing skills to explore the nation’s immigration policies as part of the Advanced Computing for Social Change Institute (ACSCI).

Daring Greatly, SC16 Advanced Computing for Social Change facilitators and participants. (Photo by Marques Bland)

The program, which debuted in 2016, teaches students how to use data analysis and visualization to identify and communicate data-driven policy solutions to pressing social issues. Organized by the Texas Advanced Computing Center (TACC), the Extreme Science and Engineering Discovery Environment (XSEDE, a National Science Foundation-funded organization), and SC17, the project is unique in its application of the world’s most cutting-edge technologies to address social change.

“The institute will help students realize their leadership potential and increase their confidence in their ability to effect social change, regardless of where their profession takes them in the future,” said Rosalia Gomez, TACC education and outreach manager and one of the organizers of the program.

“Our goal is to provide students with advanced computing skills and the ability to visualize complex data in a way that is useful to everyone, from those affected by immigration policy to those creating immigration policy,” said Ruby Mendenhall, an associate professor of sociology at the University of Illinois and another of the organizers. “It is our hope that students will provide new knowledge about U.S. immigration that can create social change.”

During the three-and-a-half-day program, students will learn computing, data analysis and visualization skills, and then form teams to tackle critical issues related to immigration. The students take a data-driven approach, analyzing large datasets to derive persuasive arguments for their positions.
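
As a flavor of that workflow, a first exploratory step might look like the sketch below; the admissions figures are hypothetical placeholders, not real immigration statistics.

    # Hypothetical example of a first exploratory step: compute year-over-year
    # change in a small (made-up) admissions series before visualizing it.
    import pandas as pd

    data = pd.DataFrame({
        "year":       [2013, 2014, 2015, 2016],
        "admissions": [990_000, 1_016_000, 1_051_000, 1_183_000],  # hypothetical values
    })
    data["pct_change"] = data["admissions"].pct_change() * 100
    print(data.to_string(index=False))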

“These students have so much to offer to the national conversation and debate,” said Kelly Gaither, director of visualization at TACC. “Data analysis and visualization provide a vehicle to get to the truth behind the rhetoric. While this is an emotionally charged topic, we can use advanced computing tools to discern fact from fiction and use this as a platform for constructive communication to move us forward.”

As part of the projects students will also receive mentorship, career development advice, and network with other advanced computing professionals. At the end of the program, the teams will present their analyses to experts in the supercomputing field.

Last year, at SC16 in Salt Lake City, Utah, students explored the Black Lives Matter movement and developed arguments advocating greater national unity and an end to police violence against people of color. The majority of student participants were from under-served communities and were first generation college students.

“The Advanced Computing for Social Change Institute aims to empower young minds in utilizing unique software and their talents for modeling and analyzing everyday social issues irrespective of their academic major, thus effectively making predictions pertaining to the inspired model,” said Stacyann Nelson, a graduate student at Florida A&M University and a past participant and current mentor in the program.

Participants in this year’s event come from colleges and universities across Colorado, including the University of Colorado Boulder, the University of Denver, the Colorado School of Mines, and the University of Colorado Denver.

“I am thrilled that several computer science students from the Colorado School of Mines will be participating in this important event,” said Tracy Camp, professor of Computer Science at the university. “These types of events are a win-win, as they offer an opportunity for our students to increase their skills and an opportunity for their efforts to impact the world in a positive way. The Colorado School of Mines is grateful that the event is just down the road in Denver.”


Source: TACC

The post ACSCI at SC17: Students Explore Immigration Through a Big Data Lens appeared first on HPCwire.

Nvidia Charts AI Inroads, Inference Challenge at DC Event

HPC Wire - Thu, 11/02/2017 - 14:36

As developers flock to artificial intelligence frameworks in response to the explosion of intelligent machines, training deep learning models has emerged as a priority, along with syncing them to a growing list of neural and other network designs.

All are being aligned to confront some of the next big AI challenges, including training deep learning models to make inferences from the fire hose of unstructured data.

These and other AI developer challenges were highlighted during this week’s Nvidia GPU technology conference in Washington. The GPU leader uses the events to bolster its contention that GPUs—some with more than 5,000 CUDA cores—are filling the computing gap created by the decline of Moore’s Law. The other driving force behind the “era of AI” is the emergence of algorithm-driven deep learning that is forcing developers to move beyond mere coding to apply AI to a growing range of automated processes and predictive analytics.

Nvidia executives demonstrated a range of emerging applications designed to show how the parallelism and performance gains of GPU technology along with the company’s CUDA programming interface complement deep learning development. Nvidia said Wednesday (Nov. 1) downloads of the CUDA API have doubled over the past year to 2 million.

“Becoming the world’s AI [developer] platform is our focus,” asserted Greg Estes, Nvidia’s vice-president for developer programs. Buttressing that strategy, the company also this week announced expansion of its Deep Learning Institute to address the growing need for more AI developers.

As it seeks to put most of the pieces in place to accelerate AI development, Nvidia has also pushed GPU technology into the cloud with early partners such as Amazon Web Services. The company’s GPU cloud seeks to create a “registry” of tools for deep learning development. Estes added that the company has taken a hybrid cloud approach in which the deep learning registry runs in the cloud or on premises while deep learning frameworks such as TensorFlow can be delivered via application containers.

Last week Amazon became the first major public cloud provider to offer the top-shelf Nvidia Tesla Volta GPUs.

As the deep learning ecosystem comes together, Estes argued in a keynote address that the training of models to infer from huge data sets looms large. “AI inference is the next great challenge,” he said. “It turns out the problem is pretty hard.” The scale of the challenge was illustrated by this statistic: There are an estimated 20 million inference servers currently crunching data to make inferences that run the gamut from educated guesses to reliable predictions.

Estes ticked off a growing list of emerging network designs that underpin current deep learning development, ranging from convolutional networks for visual data and recurrent nets for speech recognition to reinforcement learning and generative adversarial networks (in which two opposing networks seek to “fool” the other to, for example, spot a forgery).
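
At the core of the convolutional networks mentioned above is one repeated operation: sliding a small filter across an input and taking a dot product at every position. The NumPy sketch below is a teaching illustration of that operation, not how Nvidia's libraries or production frameworks implement it.

    # Naive 2-D convolution (technically cross-correlation), the building block
    # of convolutional networks; real frameworks use heavily optimized kernels.
    import numpy as np

    def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        kh, kw = kernel.shape
        out_h = image.shape[0] - kh + 1
        out_w = image.shape[1] - kw + 1
        out = np.empty((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.random.rand(8, 8)
    edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge detector
    print(conv2d(image, edge_filter).shape)  # (6, 6)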

Hence, Nvidia and its growing list of cloud and developer partners have been laboring to apply GPU parallelism and deep learning frameworks like TensorFlow to accelerate model training. The company released the latest version of its TensorRT AI inference software last month. The combination of Tesla GPUs and CUDA programmability is designed to “accelerate the growing diversity and complexity of deep neural networks,” Nvidia CEO Jensen Huang asserted.

Nvidia also uses its roadshows to demonstrate its GPU capabilities versus current CPUs. An inferencing example was intended to back its GPU performance claims. The demonstration involved sorting through aerial photos that were labeled according to land uses such as agriculture or an airstrip.

The company claims its approach could map an area the size of New Mexico in a day while a CPU platform would require three months to sort and organize the aerial photos.
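
Taking New Mexico's area as roughly 121,600 square miles and "three months" as about 90 days, the claim implies a speedup on the order of 90x.

    # Rough throughput and speedup implied by the mapping demonstration.
    AREA_SQ_MILES = 121_600   # approximate area of New Mexico
    GPU_DAYS = 1
    CPU_DAYS = 90             # "three months", taken as ~90 days

    print(f"GPU throughput: ~{AREA_SQ_MILES / GPU_DAYS:,.0f} sq mi/day")
    print(f"CPU throughput: ~{AREA_SQ_MILES / CPU_DAYS:,.0f} sq mi/day")
    print(f"Implied speedup: ~{CPU_DAYS / GPU_DAYS:.0f}x")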

Emerging deep learning tools could be applied to areas such as remote sensing and other satellite imagery, where emerging vendors are struggling to sift through hundreds of petabytes of data.

Nvidia partner Orbital Insights Inc. said it is combining satellite imagery with deep learning tools for applications like determining crop yields and urban planning. CEO James Crawford said it is using GPUs and deep learning tools such as convolutional models to process images taken by synthetic aperture radars that are able to see through clouds.

As part of an effort to quantify Chinese energy output, the San Francisco-based startup trained a neural network to spot the natural gas flares associated with fracking. Among the unintended but valuable finds delivered by the deep learning model was uncovering details of China’s industrial capacity, including the number of blast furnaces used in steel production.

The post Nvidia Charts AI Inroads, Inference Challenge at DC Event appeared first on HPCwire.
