Feed aggregator

Mines, nonprofit QL+ join forces to help injured veterans

Colorado School of Mines - Fri, 01/05/2018 - 09:28

The nonprofit organization Quality of Life Plus (QL+) donated $50,000 to enhance a lab at Colorado School of Mines where students work exclusively to develop adaptive equipment that solves mobility challenges faced by veterans and first responders with disabilities. The funds will be used to purchase measurement, prototyping and fabrication equipment for building innovative, custom devices that improve the quality of life and independence of injured veterans and first responders.

On January 9 at 11:30 a.m., Mines will dedicate the lab and host an open house to showcase the work of Mines students for QL+ participants, known as challengers. The lab is located in Brown Hall, 1610 Illinois St., Golden.

“These are projects that can make a huge improvement in the lives of people who have sacrificed so much for our nation. The generous support from Quality of Life Plus will greatly enhance the efforts of Mines students on behalf of disabled veterans and first responders,” said Joel Bach, director of Mines’ Human Centered Design Studio.

Mines students are working with five QL+ challengers from around the country to engineer solutions for specific mobility problems through creativity, technology and empathy. One challenger is Velette Britt, an Air Force veteran from Colorado Springs who is paralyzed from the waist down. Velette is a competitive hand cyclist and avid skier whose goal is to compete in the Warrior Games and the National Veterans Wheelchair Games. This fall, Mines students designed a manual wheelchair that allows her to traverse curbs and bumps without having to do a “wheelie.” For a spring project, Velette has challenged the team to design comfortable cranks for her hand cycle and attachments to allow her to ride in inclement weather.

“We selected Colorado School of Mines as a partner university because it is well known for its terrific engineering program and outstanding students,” said Quality of Life Plus Founder and retired CIA Senior Executive Jon Monett. “There is no place better for us in this region than Mines, with its committed faculty and passionate students.”
Mines is one of seven QL+ partner universities connecting students with veterans through a customized, hands-on learning opportunity to produce assistive devices to improve their quality of life.  

At the open house on January 9, student researchers will discuss their projects, lead tours of the lab and demonstrate the technologies used to develop adaptive devices. Mines President Paul C. Johnson will speak.

Rachelle Trujillo, Senior Director of Communications and Marketing, Colorado School of Mines Foundation | 303-273-3526 | rtrujillo@mines.edu

Emilie Rusch, Public Information Specialist, Colorado School of Mines | 303-273-3361 | erusch@mines.edu

Amber Humphrey, Social Media Director and Midwest Program Manager, Quality of Life Plus | 270-348-0103 | amber.humphrey@qlplus.org

Categories: Partner News

Chip Flaws Meltdown and Spectre Loom Large

HPC Wire - Thu, 01/04/2018 - 17:23

The HPC and wider tech community have been abuzz this week over the discovery of critical design flaws that impact virtually all contemporary microprocessors. The bugs leave memory contents open to malicious theft. Worse yet, the fixes for these flaws are either unclear at this point or likely to incur significant slowdowns.

As the story evolved, many reports centered on the “Intel chip flaw” but the problem is much bigger than that and impacts AMD and ARM CPUs as well. The New York Times has done a great job of putting all the moving pieces together.

There are two major flaws, the Times reports. The first, dubbed Meltdown, has so far been shown to impact only Intel microprocessors (due to a type of speculative execution that Intel chips allow, covered comprehensively by Ars Technica). The Linux patch, called KPTI (formerly KAISER), has been shown to slow processor performance by as much as 30 percent, depending on the application.
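The "up to 30 percent" figure applies mainly to syscall-heavy workloads, since KPTI adds cost to every user/kernel transition. A back-of-envelope Amdahl-style model (the numbers below are illustrative placeholders, not measurements) shows how the overall slowdown depends on how much of a workload's runtime crosses the kernel boundary:

```python
def kpti_overall_slowdown(kernel_fraction: float, kernel_penalty: float) -> float:
    """Estimate whole-workload slowdown when only the kernel-boundary work
    (fraction `kernel_fraction` of runtime) becomes `kernel_penalty`x slower.

    Returns new runtime relative to old runtime (1.0 = no change).
    """
    user_fraction = 1.0 - kernel_fraction
    return user_fraction + kernel_fraction * kernel_penalty

# A compute-bound HPC kernel that rarely enters the OS barely notices:
print(round(kpti_overall_slowdown(0.02, 2.0), 4))  # 1.02 -> ~2% slower
# A syscall- or I/O-heavy workload can approach the headline figure:
print(round(kpti_overall_slowdown(0.30, 2.0), 4))  # 1.3 -> ~30% slower
```

This is why reported impacts vary so widely by application: the same patch costs a memory-bound solver almost nothing and an I/O-intensive pipeline a great deal.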

The second issue, called Spectre, is conceivably even more problematic as it affects virtually all chip lines on the market, leaving potentially billions of devices, including phones, vulnerable to exploits. “Researchers believe this flaw is more difficult to exploit. There is no known fix for it and it is not clear what chip makers like Intel will do to address the problem,” wrote the Times.

Intel released a statement yesterday downplaying the ramifications and emphasizing that competing chips are also affected.

“Intel and other technology companies have been made aware of new security research describing software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from computing devices that are operating as designed. Intel believes these exploits do not have the potential to corrupt, modify or delete data,” the company asserted.

“Recent reports that these exploits are caused by a ‘bug’ or a ‘flaw’ and are unique to Intel products are incorrect. Based on the analysis to date, many types of computing devices — with many different vendors’ processors and operating systems — are susceptible to these exploits.”

Intel went on to say that for the “average computer user,” performance impacts “should not be significant and will be mitigated over time.”

This prompted one contributor to a popular HPC mailing list to respond: “We, ‘non-average computer users,’ are still [verb of your choice here].”

As this issue was still coming to light, the US government issued a dire statement (on Jan. 3), implying the problematic CPUs were essentially unsalvageable. “The underlying vulnerability is primarily caused by CPU architecture design choices. Fully removing the vulnerability requires replacing vulnerable CPU hardware,” wrote US-CERT, the computer safety division of Homeland Security.

A revised version of the notice offers less extreme, but vague, guidance. Affected parties are now advised that “operating system and some application updates mitigate these attacks.”
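Linux kernels that ship these mitigations expose per-vulnerability status files under sysfs. As a minimal sketch (the `/sys/devices/system/cpu/vulnerabilities` path is a kernel interface, not something from the US-CERT notice, and the directory is absent on unpatched kernels), a small helper can collect whatever the running kernel reports:

```python
import os

def read_cpu_vulnerabilities(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability_name: status_line} from the kernel's sysfs
    reporting interface, or an empty dict if this kernel lacks it."""
    status = {}
    if not os.path.isdir(base):
        return status
    for name in sorted(os.listdir(base)):
        try:
            with open(os.path.join(base, name)) as f:
                status[name] = f.read().strip()
        except OSError:
            status[name] = "unreadable"
    return status

# Prints e.g. "meltdown: Mitigation: PTI" on a patched kernel, nothing otherwise
for vuln, state in read_cpu_vulnerabilities().items():
    print(f"{vuln}: {state}")
```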

There is still a lot of uncertainty about the full ramifications of these major flaws. AMD and ARM have also released statements:

AMD: https://www.amd.com/en/corporate/speculative-execution

ARM: https://developer.arm.com/support/security-update

The impacted tech companies have known about the flaws for months and have been working to solve the issues before public disclosure. This is common practice to stay ahead of hackers, but the timing is bringing attention to a major stock sale made late last year by Intel CEO Brian Krzanich. On November 29, Krzanich sold off $24 million worth of company stock and options, reducing his holdings to the bare minimum required by his contract with Intel. The scope of the transactions was within permissible bounds, but the timing of the sell-off, and the potential for hardware vulnerabilities to affect the stock price, could now draw additional scrutiny. A spokesperson for Intel said Krzanich’s sale was “unrelated.”

Computing professionals have taken to mailing lists, social media forums and message boards to vent frustrations and discuss strategies for balancing security interests with performance mandates. There is already talk of seeking compensation for lost performance. This is especially relevant for HPC systems, which not only comprise thousands of nodes but also have workloads and usage patterns that expose them to the higher end of the performance penalties.

Meltdown and Spectre logos were designed by Natascha Eibl and used via Creative Commons license.

The post Chip Flaws Meltdown and Spectre Loom Large appeared first on HPCwire.

Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to OEMs and Hyperscale Customers

HPC Wire - Thu, 01/04/2018 - 11:46

SUNNYVALE, Calif. & YOKNEAM, Israel, Jan. 4, 2018 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the first shipments of its BlueField system-on-chip (SoC) platforms and SmartNIC adapters to major data center, hyperscale and OEM customers. Mellanox BlueField dual-port 100Gb/s SoC is ideal for cloud, Web 2.0, Big Data, storage, enterprise, high-performance computing, and Network Functions Virtualization (NFV) applications.

BlueField sets new NVMe-over-Fabrics performance records, demonstrating 7.5 million IOPS in initial testing with zero CPU utilization. BlueField also delivers under three microseconds of NVMe latency, so that end-to-end access to a remote NVMe device adds less than five microseconds of latency over a local NVMe device. BlueField’s advanced NVMe-over-Fabrics hardware acceleration offload guarantees maximum performance with no CPU utilization, thereby improving system total cost of ownership (TCO). In addition, BlueField delivers close to 400Gb/s of bidirectional RDMA traffic bandwidth over its dual 100Gb/s ports.
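The quoted figures can be sanity-checked with simple arithmetic. Assuming 4 KiB I/Os (the release does not state a block size, so this is an assumption), 7.5 million IOPS corresponds to roughly 246 Gb/s of data movement, which fits within the SoC's dual 100Gb/s ports when both directions of traffic are counted:

```python
def iops_to_gbps(iops: float, io_bytes: int) -> float:
    """Convert an IOPS figure at a given I/O size to line rate in Gb/s
    (decimal gigabits, as network link speeds are quoted)."""
    return iops * io_bytes * 8 / 1e9

# 7.5M IOPS at an assumed 4 KiB per I/O:
print(round(iops_to_gbps(7.5e6, 4096), 1))  # 245.8
```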

“We are excited to ship BlueField systems and SmartNIC adapters to our major customers and partners, enabling them to build the next generation of storage, cloud, security and other platforms and to gain a competitive advantage,” said Yael Shenhav, vice president of products at Mellanox Technologies. “BlueField products achieve new performance records, delivering industry-leading NVMe over Fabrics and networking throughput. We are proud of our world-class team for delivering these innovative products, designed to meet the ever growing needs of current and future data centers.”

The BlueField family of products is a highly integrated system-on-a-chip optimized for NVMe storage systems, Network Functions Virtualization (NFV), security systems, and embedded appliances. BlueField dual port 100Gb/s SoC solutions combine Mellanox’s leading ConnectX-5 network acceleration technology with an array of high-performance 64-bit Arm A72 processor cores and a PCIe Gen3 and Gen4 switch.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to OEMs and Hyperscale Customers appeared first on HPCwire.

Student-designed solar system park gets funding commitment

Colorado School of Mines - Thu, 01/04/2018 - 11:25
A solar system learning park designed by a group of Mines students could soon come to life in Denver’s Northeast Park Hill neighborhood.

The students of Team Naztek designed the scaled solar system – with an interactive sundial representing the sun and fiberglass planets placed to scale to represent their true distances in the solar system – for their Capstone Design project. They met with Denver Mayor Michael Hancock, Councilman Christopher Herndon, who represents the neighborhood, and other city officials Dec. 18 to present the final design.

The Denver Urban Renewal Authority (DURA), which served as the client for the senior design team, has pledged funding for the project to move forward and is in talks with Denver Parks and Recreation on how to proceed, redevelopment specialist Victor Caesar said.

“We’re hoping to see it implemented sometime soon,” said Emily Quaranta, a December graduate in mechanical engineering and the team’s communication lead. “A lot of senior design projects are just that – projects. If we have the opportunity to actually have it come to fruition, it would be unbelievable.”

DURA tasked the students with creating an active learning module that would stimulate curiosity and interest in STEM among kids ages 6-12 from low-income backgrounds living in Denver’s Northeast Park Hill neighborhood. The team came up with several different ideas – a radial swing to emphasize physics concepts and a math canopy among them – before deciding on the solar system park.

All eight planets – sorry, Pluto – are represented in the module. Both the sizes of the planets and the spacing between them are to scale, although at different scales: each foot of planet diameter represents 7,900 miles, while every foot of space between planets represents 10 million miles. The smaller planets closer to the sun are mounted on poles, while the larger planets are hemispheres on the ground.
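The two scales translate directly into model dimensions. A quick sketch (the planet figures below are approximate values from standard astronomical references, not from the article):

```python
DIAMETER_SCALE = 7_900       # miles of real diameter per model foot
DISTANCE_SCALE = 10_000_000  # miles of real distance from the sun per model foot

# (approximate real diameter in miles, approximate distance from the sun in miles)
PLANETS = {
    "Mercury": (3_032, 36_000_000),
    "Earth":   (7_918, 93_000_000),
    "Jupiter": (86_881, 484_000_000),
    "Neptune": (30_599, 2_793_000_000),
}

for name, (diameter, distance) in PLANETS.items():
    model_d = diameter / DIAMETER_SCALE   # model planet size, feet
    model_r = distance / DISTANCE_SCALE   # model spacing from the sundial, feet
    print(f"{name}: {model_d:.2f} ft across, {model_r:.0f} ft from the sundial")
```

Earth comes out almost exactly one foot across and about nine feet from the sundial, while Neptune sits nearly 280 feet out, which is why the larger, farther planets are ground-mounted hemispheres.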
Educational plaques create a scavenger hunt around the solar system, challenging park visitors to find the planet with a wrinkled surface, the planet with the tallest mountain and more. Other plaques explain how old a 10-year-old Earthling would be on Venus or Jupiter, or how much a 100-pound person would weigh on Mars or Saturn.

“We talked to several experts in STEM and they emphasized making it interactive but also relatable,” Quaranta said. “Instead of saying how many pounds you’d weigh on a planet, we compared it to a wallaby or a tiger.”

The team also met with community members – including neighborhood children at the Boys & Girls Club of Metro Denver – to gather input and get them interested in the project. Rounding out the team were Nolan Sneed, Alex Sauer, Zachary Waanders, Thomas Ladd and Kristen Smith.

“It’s been found that in lower-income neighborhoods, a lot of kids aren’t going into STEM,” Quaranta said. “That’s a problem.”

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu  
Categories: Partner News

The @hpcnotes Predictions for HPC in 2018

HPC Wire - Thu, 01/04/2018 - 08:48

I’m not averse to making predictions about the world of High Performance Computing (and Supercomputing, Cloud, etc.) in person at conferences, meetings, casual conversations and so on; however, it has been a while since I stuck my neck out and widely published my predictions for the year ahead in HPC. Of course, such predictions tend to be evenly split between inspired foresight and misguided idiocy. At least some of the predictions will have readers spluttering coffee in indignation at how wrong I am. But where would the fun in HPC be if we all played safe? So, here goes for the @hpcnotes predictions for HPC in 2018 …


Intel

After spending much of 2017 being called out for ambitiously high pricing of Skylake for HPC customers, and following that with months of Xeon Phi confusion – eventually publicly admitting at SC17 that Knights Hill has been cancelled, while still not being clear about the future of Phi overall – Intel has continued into 2018 in the worst way, with news of kernel memory hardware bugs flooding the IT news and social media space. [NB: these bugs have now been confirmed to affect CPUs from AMD, ARM and other vendors too.] 2018 will also see widespread availability of AMD EPYC, Cavium ThunderX2 and IBM Power9 processors, so Intel seems to have a tough year ahead. The hardware bug is especially painful here as it negates the “Intel is the safe option” thinking. To be clear, HPC community consensus so far (including NAG’s impartial benchmarking work with customer codes) says Skylake is a very capable and performance-leading processor. However, Skylake has three possible letdowns: (1) a price substantially higher, relative to the benefits gained, than customers are comfortable with; (2) reduced cache per core compared with other CPUs; and (3) dependence on a code’s saturation of the vector units to extract maximum performance. In some early benchmarks, EPYC and TX2 are winning on both price and performance. My prediction is that Intel will meaningfully drop the Skylake price early in 2018 to pull back into a competitive position on price/performance.

AI and ML

Sorry, the media and marketing hype for AI/ML taking over HPC shows no sign of going away. Yes, there are many real use cases for AI and ML (e.g., follow Paige Bailey and colleagues for real examples); however, the aggressive insertion of AI and ML labels into every HPC-related conference agenda (taking over from the mandatory mentions of Big Data) doesn’t add a lot of value, I think. I’m not suggesting that the HPC community (users or providers) ignore AI/ML – indeed, I would firmly advocate that you add these to your portfolio. But, HPC is an exceptionally powerful and widely applicable tool in its own right – it doesn’t need AI/ML to justify itself. My prediction is that AI/ML will continue to hog a share of the HPC marketing noise unrelated to the scale of actual use in the HPC arena.

New processors

As noted above, 2018 sees credible HPC processors from AMD (EPYC), Cavium (ThunderX2) and other ARM chips, and IBM (Power9) surge into general availability. In my view, these are not (yet) competing with Intel Xeon; they are competing with each other to be the best of the rest. Depending on how Intel behaves (NB: this is not just about technology) and how well AMD/ARM/IBM and their system partners actually execute on promises, one of these might close out 2018 being a serious competitor to Intel’s dominance of the HPC processor space. Either way, I predict we will see at least one meaningful (i.e., competitively won, large scale, for production use) HPC deployment of each of these processors in 2018. I’m also going to add a second prediction to this section: a MIPS based processor option will start to gain headlines as a real HPC processor candidate in 2018 (not just in China).


Cloud

In most cases, HPC is still cheaper and more capable through traditional in-house systems than via cloud deployments. No amount of marketing changes that. Time might change it, but not by the end of 2018. However, cloud as an option for HPC is not going away. It does present a real option for many HPC workloads, and not just trivial workloads. I am hopeful we are at the end of the era where the cloud providers hoped to succeed by trying to convince everyone that “HPC in-house” advocates were just dinosaurs. The cloud companies all show signs of adjusting their offerings to the actual needs of HPC users (technical, commercial and political needs). This means that an impartial understanding of the pros and cons of cloud for your specific HPC situation is going to be even more critical in 2018. I am certainly being asked to help address the question of HPC in the cloud by my consulting customers with increasing frequency. Azure has been ramping up efforts in HPC (and AI) aggressively over the last few months through acquisitions (e.g., Cycle Computing) and recruitments (e.g., Developer Advocate teams), and I’d expect AWS and Google to do likewise. My prediction is that all three of the major cloud providers (AWS, Azure, Google) will deliver substantially more HPC-relevant solutions in 2018, and at least one will secure a major (and possibly surprising) real HPC customer win.
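The in-house versus cloud trade-off is, at bottom, arithmetic about utilization. A deliberately simplified model (all prices below are hypothetical placeholders, and it ignores real-world factors such as data egress, staff time and procurement delay) finds the utilization at which owning a node beats renting the equivalent instance:

```python
def breakeven_utilization(node_capex: float, node_opex_per_year: float,
                          cloud_price_per_hour: float,
                          lifetime_years: float = 4.0) -> float:
    """Fraction of lifetime hours a node must be busy for ownership to
    beat cloud, assuming cloud is paid only for hours actually used."""
    hours = lifetime_years * 365 * 24
    owned_cost_per_hour = (node_capex + node_opex_per_year * lifetime_years) / hours
    return owned_cost_per_hour / cloud_price_per_hour

# Hypothetical: $10k node plus $1k/year power/admin over 4 years,
# versus a $1/hour cloud instance:
print(round(breakeven_utilization(10_000, 1_000, 1.0), 2))  # 0.4
```

On these made-up numbers, a cluster busier than about 40 percent of the time favors in-house, which is why heavily utilized HPC centers keep buying iron while bursty workloads drift cloudward.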


GPUs

Nvidia also got an unwelcome start to 2018 as they tried to ban (via retrospective changes to license conditions) the use of their cheaper GPUs in datacenter (e.g., HPC, AI, …) applications. Of course, it is no surprise that Nvidia would prefer customers to buy the much more expensive high-end GPUs for datacenter applications. However, it doesn’t say much for the supposedly compelling business case or sales success of the high-end GPUs if they have to force people off the cheaper products first. We (NAG) have done enough benchmarking across enough different customer codes to know that GPUs are flat-out the fastest widely available processor option for codes that can take effective advantage of highly parallel architectures. However, when the price of the high-end GPUs is taken into account, plus the performance left on the floor for the non-accelerated codes, then the CPUs often look a better overall choice. Ultimately, adapting many codes to use GPUs (not just a selected few codes to show easy wins) is a big effort. So is adapting workflows to the cloud. With limited resources available, I think users will decide that investing effort in cloud porting is a better long-term return than GPUs. Yes – oddly, I think cloud, not CPUs, will be the pressure that limits the success of GPUs! My prediction is that Nvidia’s unfortunate licensing assertions, coupled with marginal gains in performance relative to total cost of ownership (TCO) and the scarcity of software engineering resources, will mean fewer newly deployed on-site HPC systems are based around GPUs. On the other hand, I think use of GPUs in the cloud, for HPC, will grow substantially in 2018.
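The "performance left on the floor" argument reduces to a break-even number: the speedup a code must achieve on a GPU node before its throughput per dollar matches a CPU node. (The prices below are hypothetical placeholders, not benchmark data, and the model ignores porting effort and power.)

```python
def required_gpu_speedup(gpu_node_cost: float, cpu_node_cost: float) -> float:
    """Minimum speedup (same code, GPU node vs CPU node) at which
    throughput per dollar of hardware is equal between the two."""
    return gpu_node_cost / cpu_node_cost

# Hypothetical: a $30k GPU node vs a $10k CPU node must run the code
# at least 3x faster just to break even on cost per result:
print(required_gpu_speedup(30_000, 10_000))  # 3.0
```

Codes that saturate the GPU easily clear that bar; the many codes that don't are exactly the "non-accelerated" workloads that drag the blended TCO back toward CPUs.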


Zettascale

Yes, really. After all, exascale is within grasping distance now. We will see multiple systems at >0.1 EF in 2018. Exascale is being talked about in terms of when and which site first, rather than how and which country first. As exascale now seems likely to happen without all those disruptive changes that voices across the community foretold would be critical, computer science researchers and supercomputer center managers will need to start using the zettascale label to drive the next round of funding bids for novel technologies. There have already been a few small gatherings on zettascale, at least as far back as 2004 (!), but I predict 2018 will see the first mainstream meeting with a session focused on zettascale – perhaps at SC18?


Security

The consumer world was wracked in 2017 by a range of large scale cybersecurity breaches. The government community has been hit badly in previous years too. Sadly, I see cybersecurity moving up the agenda in the HPC world. Not sad that it is happening, but sad that I think it will be forced to happen by one or more incidents. In general, HPC systems are fairly well protected, largely because they are expensive, capable assets and, in some cases, have regulatory criteria to meet. However, performance and ease-of-use for a predominantly research-led userbase have been the traditional strong drivers of requirements, often meaning the risk management decisions have been tilted towards a minimally compliant security configuration. (Security is arguably one area where HPC-in-the-cloud wins.) My prediction for 2018 is twofold: (1) there will be a major security incident on a high profile HPC system; (2) cybersecurity for HPC will move from a niche topic to a mainstream agenda item for some of the larger HPC conferences.

Finally, Growth

I saw HPC and related things such as AI, cloud, etc., gain lots of momentum in 2017. This included several confidently heralded technologies finally coming to fruition, new HPC deployments across public and private sector customers, a notable uptick in our HPC consulting work, interesting personnel moves, and an overall excitement and enthusiasm in the HPC community that had been dulled recently. My final prediction is that 2018 will see this growth and energy in the HPC community gather pace. I look forward to new HPC sites emerging, to significant new HPC systems being announced, and to the growing attention on the broader aspects of HPC beyond FLOPS – people, business aspects, impact stories, and more.

I hope you enjoyed my HPC predictions for 2018. Please do engage with me via Twitter (@hpcnotes) or LinkedIn (www.linkedin.com/in/andrewjones) if you want to comment on my inspired foresight or misguided idiocy. I’ll be back with a follow-up article in a week or two on how you can exploit these predictions to your advantage.

The post The @hpcnotes Predictions for HPC in 2018 appeared first on HPCwire.

PASC18 Outlines Opportunities for Student Participation, Issues Reminder for its Call for Proposals

HPC Wire - Thu, 01/04/2018 - 08:12

Jan. 4, 2018 — The PASC18 Organizing Team has begun the New Year with an announcement about opportunities for student participation, as well as with a reminder that submission deadlines are rapidly approaching.

Opportunities for student participation


PASC18 is announcing that this year for the first time, it will offer travel grants to enable two undergraduate or postgraduate students to attend the conference. The travel grants are generously provided by SIGHPC, with PASC18 covering the corresponding registration fees.

Applications are due by February 14, 2018, and further information on the application process is available at: pasc18.pasc-conference.org/about/student-travel-grants/


Submissions for the Student Volunteer Program are now open.

PASC18 is looking for enthusiastic students who are interested in helping us with the administration of the event. Selected students will be granted a complimentary registration for the conference.

Further information on this opportunity is available at: pasc18.pasc-conference.org/about/student-volunteer-program/

Call for submissions reminder


PASC18, co-sponsored by the Association for Computing Machinery (ACM), is the fifth edition of the PASC Conference series, an international platform for the exchange of competences in scientific computing and computational science, with a strong focus on methods, tools, algorithms, application challenges, and novel techniques and usage of high performance computing.

PASC18 welcomes submissions in the form of minisymposia, papers and posters. Contributions should demonstrate innovative research in scientific computing related to the following domains:

  • Chemistry and Materials
  • Life Sciences
  • Physics
  • Climate and Weather
  • Solid Earth Dynamics
  • Engineering
  • Computer Science and Applied Mathematics
  • Emerging Applications Domains (e.g. Social Sciences, Finance, …)

Submissions that are interdisciplinary in nature are strongly encouraged.

Full submission guidelines are available at: pasc18.pasc-conference.org/submission/submissions-portal/

Submissions are received through the online submission portal and the rapidly approaching submission deadlines are listed below: 

  • Minisymposia: January 7, 2018
  • Papers: January 19, 2018
  • Posters: February 4, 2018 


  • Florina Ciorba (University of Basel, Switzerland)
  • Erik Lindahl (Stockholm University, Sweden)
  • Sabine Roller (University of Siegen, Germany)
  • Jack Wells (Oak Ridge National Laboratory, US)

PASC18 Scientific Committee: pasc18.pasc-conference.org/about/organization

Further information on the conference is available at: pasc18.pasc-conference.org/

Source: PASC18

The post PASC18 Outlines Opportunities for Student Participation, Issues Reminder for its Call for Proposals appeared first on HPCwire.

Microsoft Extends Hybrid Cloud Push with Avere Deal

HPC Wire - Wed, 01/03/2018 - 23:00

Microsoft continued its foray into the high-end cloud storage sector with a deal this week to acquire hybrid cloud data storage and management vendor Avere Systems.

The deal announced on Wednesday (Jan. 3) follows Microsoft’s acquisition last August of Cycle Computing to bolster its “big compute” initiatives on the Azure cloud. Microsoft said the Cycle and Avere deals are part of its strategy to bring high-end computing to hybrid cloud deployments. (Note that Cycle and Avere had entered into a technology partnership a year and a half ago, so there was some history here already).

Terms of the deal for Pittsburgh-based Avere Systems were not disclosed.

Avere’s scalable cloud storage platform dubbed FXT Edge Filers targets enterprises trying to integrate applications requiring file systems into the cloud. Along with data storage access, the platform helps scale computing and storage depending on application requirements.

Jason Zander, vice president of Microsoft Azure, said the acquisition gives the public cloud vendor a combination of file system and caching technologies. Avere works with animation studios that run compute-intensive workloads, and the deal is expected to give Microsoft entrée into the media and entertainment sectors.

“By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure,” Zander asserted in a blog post announcing the deal.

Avere Systems CEO Ron Bianchini added that the acquisition would expand the reach of its data storage technology from datacenters and public clouds to hybrid cloud storage and “cloud-bursting environments.”

Along with the entertainment sector, Avere Systems’ customers include the Library of Congress, Johns Hopkins University and automated test equipment manufacturer Teradyne. Its customer base also includes financial services and oil and gas customers along with the education, healthcare and manufacturing sectors.

Last year, Avere Systems announced an investment round that included Microsoft cloud rival Google. It also disclosed partnerships with private cloud storage vendors, including support for Dell EMC’s Elastic Cloud Storage platform. The partners said the software-defined object storage approach would allow private cloud customers to consolidate content archives and file storage systems in a central repository.

Meanwhile, Microsoft’s August 2017 acquisition of Cycle Computing combined the startup’s orchestration technology for managing Linux and Windows computing and data workloads with Microsoft’s Azure cloud computing infrastructure.

Observers praised Microsoft’s acquisition of Avere Systems, noting that its storage technology could help Microsoft boost its Azure revenues by allowing customers to use the public cloud while keeping some data on-premises.

The post Microsoft Extends Hybrid Cloud Push with Avere Deal appeared first on HPCwire.

Graduate student receives NAGT Outstanding TA Award

Colorado School of Mines - Wed, 01/03/2018 - 14:05

A Colorado School of Mines graduate student has been honored with the National Association of Geoscience Teachers Outstanding TA Award.

William Kyle Blount, a PhD student in hydrology, was nominated for the award by Terri Hogue, professor of civil and environmental engineering, in recognition of his numerous academic and community activities, research experience on hydrologic modeling and data analysis, strong rapport among students, countless contributions to improving the learning environment of his classroom and high standards for quality of analysis, writing and critical thinking.

Blount serves as TA for Hogue’s Hydrology and Water Resources Laboratory course (CEEN 482). The NAGT award honors both undergraduate and graduate students who have demonstrated excellence as teaching assistants. 

“I most enjoy interacting with the students,” Blount said. “Seeing them begin to grasp concepts by working through difficulties and learning to answer their own questions is most rewarding to me.”

Blount graduated from Texas A&M University in 2013 with a bachelor’s degree in environmental geosciences. While there, he completed an undergraduate thesis titled “Future Flooding in Houston: Modeling the Impacts of Climate and Land Cover Change on Hydrology in the Buffalo-San Jacinto Watershed” and was named Outstanding Graduating Senior for the Environmental Programs. 

He arrived at Mines in 2016 and is pursuing both his master’s and doctoral degrees in hydrology. His research interests include remote sensing, hydrologic modeling and land-atmosphere interactions in disturbed areas including urban regions and post-wildfire landscapes in the western U.S. 

“In order to be more effective, I design labs and activities to be discovery-based and allow students to be active participants in the learning process and engage with new material in practical, applied settings,” Blount said. “I also design activities to promote the development of professional skills: to improve technical writing abilities, promote self-directed learning and encourage students to understand how to locate answers to their own questions independently, which will all be necessary in future jobs.”

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Hyperion Sets Agenda for HPC User Forum in Europe

HPC Wire - Wed, 01/03/2018 - 12:46

Regional HPC strategies, including the perennial jostling for sway among Europe, the U.S., and Japan, will highlight the HPC User Forum, Europe, scheduled for March 6-7 in France. Hyperion Research, organizer of the event and administrator of the HPC User Forum, just released the preliminary agenda. Besides global competition, AI, industrial HPC use, and a dinner talk on quantum computing are all on the docket.

“Because of China’s rapid rise in the Top500 rankings and the Chinese government’s well-funded plans to deploy the first exascale system, many HPC observers see future supercomputing leadership as a two-horse race between relative newcomer China and long-time leader the United States. In reality, it’s at least a four-horse race that includes two other serious contenders, Europe and Japan,” says Steve Conway, SVP research, Hyperion.

Steve Conway, Hyperion SVP

“That’s one of the things we’ll highlight at the March HPC User Forum meeting: the global nature of the push to advance the state-of-the-art in supercomputing. We’ll have senior officials from Europe, Japan and the U.S.  We haven’t secured a speaker from China, but we’d welcome one. Once you abandon the idea that leadership is limited to Linpack performance, it becomes clear that each of these four contenders is likely to be the future supercomputing leader in some important respects and to advance the state-of-the-art in supercomputing. I’m talking about the 2023-2024 era, when we’re likely to see productive exascale supercomputers from multiple parts of the world, rather than just smaller-scale prototypes and early machines.”

Here are a few agenda highlights:

  • European HPC Strategy, Thomas Skordas or Leonardo Flores, European Commission
  • Japan’s Flagship 2020 Project, Shig Okaya, Flagship 2020/RIKEN
  • View from America, Dimitri Kusnezov and Barbara Helland, DOE
  • HPC-based AI/Deep Learning in the Commercial World, Arno Kolster, Providentia Worldwide
  • Worldwide Study on HPC Centers and Industrial Users, Steve Conway, Hyperion
  • Exascale Computing Project Update, Doug Kothe

According to Conway, the U.S. has a hefty lead in processors and accelerators, but he expects gains for ARM-based designs and indigenous Chinese processors. “When it comes to highly scalable system and application software, the U.S. and Europe stand out, with each excelling in different scientific domains. Europe is also very strong in scalable software for industrial applications. The U.S., Japan and Europe have the largest, most experienced HPC user communities but China is gaining ground quickly. Historically, Japan has shown the ability to mount herculean efforts and jump to the head of the pack,” he says.

So, the race is on. Interestingly, there’s still a fair amount of international collaboration. Roughly a year ago, France’s CEA and Japan’s RIKEN announced they would join forces to advance the ARM ecosystem. CEA and Teratec are co-hosting the March User Forum meeting. There are also initiatives such as the International Exascale Software Project. The exascale race promises to be more multi-dimensional than just a Linpack bake-off, says Conway.

This meeting, held at the Très Grand Centre de Calcul du CEA (TGCC) on the Teratec campus in Bruyères-le-Châtel, France (close to Paris), will be the first HPC User Forum run by Hyperion since it became an independent company in December. Registration for the conference is free. For more information about registering: http://www.hpcuserforum.com

The post Hyperion Sets Agenda for HPC User Forum in Europe appeared first on HPCwire.

TACC Supercomputers Help Researchers Design Patient-Specific Cancer Models

HPC Wire - Wed, 01/03/2018 - 11:20

Jan. 3, 2018 — Attempts to eradicate cancer are often compared to a “moonshot” — the successful effort that sent the first astronauts to the moon.

But imagine if, instead of Newton’s second law of motion, which describes the relationship between an object’s mass and the amount of force needed to accelerate it, we only had reams of data related to throwing various objects into the air.

This, says Thomas Yankeelov, approximates the current state of cancer research: data-rich, but lacking governing laws and models.

The solution, he believes, is not to mine large quantities of patient data, as some insist, but to mathematize cancer: to uncover the fundamental formulas that represent how cancer, in its many varied forms, behaves.

Model of tumor growth in a rat brain before radiation treatment (left) and after one session of radiotherapy (right). The different colors represent tumor cell concentration, with red being the highest. The treatment reduced the tumor mass substantially (Lima et al. 2017, Hormuth et al. 2015).

“We’re trying to build models that describe how tumors grow and respond to therapy,” said Yankeelov, director of the Center for Computational Oncology at The University of Texas at Austin (UT Austin) and director of Cancer Imaging Research in the LIVESTRONG Cancer Institutes of the Dell Medical School. “The models have parameters in them that are agnostic, and we try to make them very specific by populating them with measurements from individual patients.”

The Center for Computational Oncology (part of the broader Institute for Computational Engineering and Sciences, or ICES) is developing complex computer models and analytic tools to predict how cancer will progress in a specific individual, based on their unique biological characteristics.

In December 2017, writing in Computer Methods in Applied Mechanics and Engineering, Yankeelov and collaborators at UT Austin and Technical University of Munich, showed that they can predict how brain tumors (gliomas) will grow and respond to X-ray radiation therapy with much greater accuracy than previous models. They did so by including factors like the mechanical forces acting on the cells and the tumor’s cellular heterogeneity. The paper continues research first described in the Journal of The Royal Society Interface in April 2017.

“We’re at the phase now where we’re trying to recapitulate experimental data so we have confidence that our model is capturing the key factors,” he said.

To develop and implement their mathematically complex models, the group uses the advanced computing resources at the Texas Advanced Computing Center (TACC). TACC’s supercomputers enable researchers to solve bigger problems than they otherwise could and reach solutions far faster than with a single computer or campus cluster.

According to ICES Director J. Tinsley Oden, mathematical models of the invasion and growth of tumors in living tissue have been “smoldering in the literature for a decade,” and in the last few years, significant advances have been made.

“We’re making genuine progress to predict the growth and decline of cancer and reactions to various therapies,” said Oden, a member of the National Academy of Engineering.

Model Selection and Testing

Over the years, many different mathematical models of tumor growth have been proposed, but determining which is most accurate at predicting cancer progression is a challenge.

In October 2016, writing in Mathematical Models and Methods in Applied Sciences, the team used a study of cancer in rats to test 13 leading tumor growth models to determine which could predict key quantities of interest relevant to survival, and the effects of various therapies.

They applied the principle of Occam’s razor, which says that where two explanations for an occurrence exist, the simpler one is usually better. They implemented this principle through the development and application of something they call the “Occam Plausibility Algorithm,” which selects the most plausible model for a given dataset and determines if the model is a valid tool for predicting tumor growth and morphology.
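As a toy illustration of that parsimony principle (not the Occam Plausibility Algorithm itself, whose details are in the cited paper), one can fit competing growth models to synthetic data and let an information criterion such as AIC penalize extra parameters; the model names, data and grids below are invented for illustration:

```python
import itertools
import math

# Synthetic tumor-volume measurements generated from a logistic curve.
# Invented data for illustration only, not from the study.
t_obs = list(range(0, 20, 2))
v_obs = [100 / (1 + 9 * math.exp(-0.4 * t)) for t in t_obs]

def exponential(t, v0, r):
    return v0 * math.exp(r * t)

def logistic(t, v0, r, k):
    return k / (1 + (k / v0 - 1) * math.exp(-r * t))

def rss(model, params):
    # Residual sum of squares of a candidate model against the data.
    return sum((model(t, *params) - v) ** 2 for t, v in zip(t_obs, v_obs))

def grid_fit(model, grid):
    # Coarse grid search standing in for a proper nonlinear least-squares solver.
    return min(grid, key=lambda p: rss(model, p))

v0s = [5, 10, 15]
rates = [i / 100 for i in range(5, 50, 5)]
caps = [80, 100, 120]

n = len(t_obs)
aic = {}
for name, model, grid, k_params in [
    ("exponential", exponential, list(itertools.product(v0s, rates)), 2),
    ("logistic", logistic, list(itertools.product(v0s, rates, caps)), 3),
]:
    err = rss(model, grid_fit(model, grid))
    # AIC rewards goodness of fit but charges 2 per parameter:
    # Occam's razor expressed as a number.
    aic[name] = n * math.log(err / n + 1e-12) + 2 * k_params

best_model = min(aic, key=aic.get)
print(best_model)  # -> logistic
```

Here the logistic model wins despite its extra parameter because its fit is far better; a model with many parameters and only a marginally better fit would lose to a simpler rival.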

The method was able to predict how large the rat tumors would grow within 5 to 10 percent of their final mass.

“We have examples where we can gather data from lab animals or human subjects and make startlingly accurate depictions about the growth of cancer and the reaction to various therapies, like radiation and chemotherapy,” Oden said.

The team analyzes patient-specific data from magnetic resonance imaging (MRI), positron emission tomography (PET), X-ray computed tomography (CT), biopsies and other factors, in order to develop their computational model.

Each factor involved in the tumor response — whether it is the speed with which chemotherapeutic drugs reach the tissue or the degree to which cells signal each other to grow — is characterized by a mathematical equation that captures its essence.

“You put mathematical models on a computer and tune them and adapt them and learn more,” Oden said. “It is, in a way, an approach that goes back to Aristotle, but it accesses the most modern levels of computing and computational science.”

The group tries to model biological behavior at the tissue, cellular and cell signaling levels. Some of their models involve 10 species of tumor cells and include elements like cell connective tissue, nutrients and factors related to the development of new blood vessels. They have to solve partial differential equations for each of these elements and then intelligently couple them to all the other equations.

“This is one of the most complicated projects in computational science. But you can do anything with a supercomputer,” Oden said. “There’s a cascading list of models at different scales that talk to each other. Ultimately, we’re going to need to learn to calibrate each and compute their interactions with each other.”

From Computer to Clinic

The research team at UT Austin — which comprises 30 faculty, students, and postdocs — doesn’t only develop mathematical and computer models. Some researchers work with cell samples in vitro; some do pre-clinical work in mice and rats. And recently, the group has begun a clinical study to predict, after one treatment, how an individual’s cancer will progress, and use that prediction to plan the future course of treatment.

At Vanderbilt University, Yankeelov’s previous institution, his group was able to predict with 87 percent accuracy whether a breast cancer patient would respond positively to treatment after just one cycle of therapy. They are trying to reproduce those results in a community setting and extend their models by adding new factors that describe how the tumor evolves.

The combination of mathematical modeling and high-performance computing may be the only way to overcome the complexity of cancer, which is not one disease but more than a hundred, each with numerous sub-types.

“There are not enough resources or patients to sort this problem out because there are too many variables. It would take until the end of time,” Yankeelov said. “But if you have a model that can recapitulate how tumors grow and respond to therapy, then it becomes a classic engineering optimization problem. ‘I have this much drug and this much time. What’s the best way to give it to minimize the number of tumor cells for the longest amount of time?'”

Computing at TACC has helped Yankeelov accelerate his research. “We can solve problems in a few minutes that would take us 3 weeks to do using the resources at our old institution,” he said. “It’s phenomenal.”

According to Oden and Yankeelov, very few research groups besides the UT Austin group are trying to sync clinical and experimental work with computational modeling and state-of-the-art computing resources.

“There’s a new horizon here, a more challenging future ahead where you go back to basic science and make concrete predictions about health and well-being from first principles,” Oden said.

Said Yankeelov: “The idea of taking each patient as an individual to populate these models to make a specific prediction for them and someday be able to take their model and then try on a computer a whole bunch of therapies on them to optimize their individual therapy — that’s the ultimate goal and I don’t know how you can do that without mathematizing the problem.”

The research is supported by the National Science Foundation, the U.S. Department of Energy, the National Council of Technological and Scientific Development, the Cancer Prevention Research Institute of Texas and the National Cancer Institute.

Source: Aaron Dubrow, TACC


The post TACC Supercomputers Help Researchers Design Patient-Specific Cancer Models appeared first on HPCwire.

Boyle part of team awarded $8M for algal biofuel research

Colorado School of Mines - Wed, 01/03/2018 - 10:34

A Colorado School of Mines professor is part of a team that has been awarded $8 million over five years by the U.S. Department of Energy to engineer a particular strain of alga to produce renewable biofuel.

Nanette Boyle, assistant professor of chemical and biological engineering, will receive $616,000 over the next five years for her part in the project, which is to create a genome-scale metabolic model of the algae—Chromochloris zofingiensis—and use it to predict how carbon is directed through its metabolism and what genetic changes will lead to increased production of lipids, which can then be extracted and converted into biodiesel.

“I will also be performing isotope-assisted metabolic flux analysis to quantify carbon fluxes in the cell for both growth on glucose and carbon dioxide,” Boyle said. “This will validate or help to iteratively improve the predictions made using the metabolic model and identify any bottlenecks or undesired side products that can then be targeted using genetic engineering techniques.”
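At genome scale, the carbon-routing prediction Boyle describes is a constrained optimization over reaction fluxes (flux balance analysis). A deliberately tiny sketch with one internal metabolite and three hypothetical fluxes shows the shape of the problem; real genome-scale models have thousands of reactions and use a linear-programming solver:

```python
# Toy flux balance: carbon uptake v1 splits into lipid synthesis v2 and
# respiration v3, with the steady-state constraint v1 = v2 + v3.
# All numbers are hypothetical, chosen only to illustrate the idea.
uptake = 10.0           # fixed glucose uptake flux
maintenance_min = 2.0   # respiration must stay above this floor

best_lipid, best_resp = None, None
steps = int((uptake - maintenance_min) / 0.1) + 1
for j in range(steps):
    v3 = maintenance_min + 0.1 * j   # candidate respiratory flux
    v2 = uptake - v3                 # lipid flux forced by the mass balance
    if best_lipid is None or v2 > best_lipid:
        best_lipid, best_resp = v2, v3

print(best_lipid, best_resp)  # -> 8.0 2.0
```

The answer is obvious here (push respiration to its floor and route the rest of the carbon to lipids), but with thousands of coupled balances the same question becomes a genuine optimization problem, which is what the metabolic model is for.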

There are two main challenges in developing high-yielding algae strains, Boyle said. “First, our understanding of genetic regulation and cellular physiology lags behind other model organisms like E. coli and yeast,” Boyle said. “Second, we don’t have sophisticated genetic tools to introduce the desired changes.”

In addition to Boyle’s work, the team will collect data on a large scale to gain insight into genetic elements that control metabolic shifts responsible for lipid accumulation. This information will then be used to develop synthetic biology tools to enable fast and efficient engineering of the algae’s cells.

The project, “Systems analysis and engineering of biofuel production in Chromochloris zofingiensis, an emerging model green alga,” is led by Krishna Niyogi of the University of California, Berkeley. Investigators include Crysten Blaby, Brookhaven National Laboratory; Mary Lipton, Pacific Northwest National Laboratory; Sabeeha Merchant, UCLA; and Trent Northen, Lawrence Berkeley National Laboratory.

The grant is administered by the Genomic Science Program in the Energy Department’s Office of Biological and Environmental Research.

Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu

Categories: Partner News

Microsoft to Acquire Avere Systems

HPC Wire - Wed, 01/03/2018 - 07:17

Jan. 3, 2018 — Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market is more critical than ever.

High-performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples where Avere has helped scale datacenter performance and capacity, and optimize infrastructure placement.

Microsoft says that by bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure.

Source: Microsoft

The post Microsoft to Acquire Avere Systems appeared first on HPCwire.

Contaminated water exposure study featured by multiple news outlets

Colorado School of Mines - Tue, 01/02/2018 - 10:45

A newly funded study of Colorado residents exposed to drinking water contaminated by firefighting foam used at a U.S. Air Force base was recently featured by multiple news outlets. Researchers at Colorado School of Mines and the Colorado School of Public Health at the University of Colorado Anschutz Medical Campus are collaborating on the study, which is being funded by the National Institute of Environmental Health Sciences. Leading the effort at Mines is Christopher Higgins, associate professor of civil and environmental engineering.

Among the news outlets to cover the story were the Associated Press, Colorado Springs Gazette, Colorado Public Radio, CBS4 Denver and KRDO News Channel 13. The AP story appeared in numerous local and national outlets, including The Denver Post, (Fort Collins) Coloradoan, KDVR Fox 31 Denver and The Seattle Times.

Categories: Partner News

ASC18 Competition Timeline Released

HPC Wire - Tue, 01/02/2018 - 07:14

Jan. 2, 2018 — The 2018 ASC Student Supercomputer Challenge (ASC18) has released its competition timeline (official website: http://www.asc-events.org/ASC18/index.php), with a starting date set for January 16, 2018. Making the announcement, the ASC18 organizers also promised that the upcoming competition would include a well-known AI application in one of its challenges, a tantalizing clue to the dozens of entrants from China, America, Europe and around the world who are expected to attend.

The timeline sets January 15, 2018, as the deadline for registration, before which all entrants must submit an application on the ASC website. Each team entering the competition must include one mentor and five undergraduate students, with multiple teams from a single university allowed. Competition will begin with the preliminary round, set to kick off on January 16 and conclude on March 20. Within this timeframe, participating teams must submit their written solutions and application optimization according to ASC requirements. Each entry will be scored by the judging panel, with the top 20 teams advancing to the final round. For five days, from May 5 to May 9, these teams will compete face-to-face for the ASC18 Grand Prix.

But while releasing the ASC18 timeline, the competition’s organizers have yet to reveal several key details, hoping to spark the curiosity of young supercomputing enthusiasts around the world. The organizing committee noted ASC18 will continue with past precedent in cooperating with an ultra-large-scale supercomputer system to serve as the competition platform. But no indication was given as to whether this would be the Gordon Bell Prize-winning Sunway TaihuLight featured in last year’s competition or another system altogether. Organizers have likewise not yet revealed the location of the upcoming competition.

Whatever surprises organizers have in store, ASC18 is sure to make waves as it follows past competitions in breaking new ground. The inaugural ASC in 2012 was the first event of its kind to use a world-class supercomputer system as its competition platform, while in 2014 ASC introduced Tianhe-2, the world’s fastest supercomputer.

ASC has also given participating teams the chance to engage in major international science projects. In 2015, the competition partnered with SKA, the world’s largest radio telescope project. In 2016, ASC used the Gordon Bell Prize-winning numerical simulation of high-resolution waves to present teams with a challenge relating to driverless vehicles.

With its ever-increasing challenges and unmatched opportunities for participants, the ASC has won praise from numerous leading names in the supercomputing field. “The US’ SC is like a marathon, testing participants’ hard work and perseverance; Germany’s ISC is a sprint, testing innovation and adaptability. China’s ASC is a combination of both,” said OrionX partner and HPCwire correspondent Dan Olds, who has covered all three major supercomputing challenges.

Jack Dongarra, founder of TOP500, a ranking of the world’s most powerful supercomputer systems, and researcher at the Oak Ridge National Laboratory and University of Tennessee, has described ASC as having “by far the most intense competition” of any student supercomputer contest he has witnessed.

About ASC

Sponsored by China, ASC is one of the world’s three major supercomputer contests, alongside the US’ SC and Germany’s ISC. Held annually since 2012, the competition is devoted to cultivating young talent in the field of supercomputing.

Source: ASC

The post ASC18 Competition Timeline Released appeared first on HPCwire.

Mines team gets NASA funding for intelligent drilling system

Colorado School of Mines - Fri, 12/22/2017 - 09:16

An intelligent drilling system capable of characterizing materials as it drills into the lunar or Martian surface is under development at Colorado School of Mines.

A team of three Mines professors, led by Jamal Rostami, Haddon/Alacer Gold Endowed Chair in Mining Engineering and director of the Earth Mechanics Institute, has received NASA funding for the project. Co-investigators are Bill Eustes, associate professor of petroleum engineering, and Christopher Dreyer, research assistant professor of mechanical engineering and associate director of the Center for Space Resources.

The NASA Early Stage Innovation funding, announced last month, will provide up to $500,000 over three years for the development of the drilling system, which will use artificial intelligence and pattern recognition to identify materials in real time. 

“Normally, we have to drill, get a sample, test and then characterize. That takes time and it’s expensive,” Rostami said. “Our project is about developing a system that can monitor drilling parameters and, by analyzing data, immediately see what it is going on in the subsurface and know if you’re in compacted soil, frozen soil, rocks and so on.”

That real-time functionality could be particularly useful in permanently shadowed regions of the Moon, where ice is known to exist and temperatures can fall as low as 40 Kelvin, or minus 233 degrees Celsius. That ice has been proposed for use in rocket propellant to fuel space missions but scientists don’t know how deep the layer of icy regolith goes, Dreyer said. 

“It could all be right at the surface,” Dreyer said. “To really understand the regions for eventual large-scale acquisition of resources, you need to study the entire area. It would be far too much material to return back to Earth. Imagine taking a core from a 40 Kelvin-region and trying to return it to Earth keeping it at 40 Kelvin the entire time. The more you can do in-situ, the better.” 

Mines alum Steve Nieczkoski ’86, CEO of Thermal Space in Boulder, is a specialty subcontractor on the project and will advise the group on material behavior at cryogenic temperatures.

And while the drilling system is being developed for use on the Moon and Mars, the technologies could have an impact much closer to home, too, Eustes said. 

“In oil and gas, there’s been tests with near-bit sensors that can actually identify fractures and lithology changes but unfortunately it’s not real time,” Eustes said. “Whatever we develop here will be useful on the Moon, on Mars but also out in the DJ Basin. Being able to characterize what’s going on downhole by just looking at data coming off the drill will be extremely useful not only for Martian and lunar drilling but Earth operations also.”
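As an illustration of the kind of real-time inference such a system performs, live drilling telemetry can be matched against typical signatures of known materials. The numbers and the simple nearest-centroid rule below are invented for illustration and are not the project's actual method:

```python
import math

# Hypothetical training centroids: mean (penetration rate, torque) for
# three ground types. The values are invented to illustrate classifying
# material from drilling telemetry as the bit advances.
centroids = {
    "compacted soil": (12.0, 0.8),
    "frozen soil":    (6.0, 2.5),
    "rock":           (1.5, 5.0),
}

def classify(sample):
    # Nearest-centroid rule: pick the material whose typical drilling
    # signature is closest (Euclidean distance) to the live measurement.
    return min(centroids, key=lambda m: math.dist(sample, centroids[m]))

print(classify((5.5, 2.2)))  # -> frozen soil
```

A deployed system would learn such signatures from labeled drilling runs and update its estimate continuously as new sensor readings arrive, which is the real-time characterization Rostami describes.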

For Mines, the grant builds on the work already being done to apply the university's deep knowledge of terrestrial resources beyond Earth’s atmosphere, including the soon-to-launch space resources graduate program and two other projects recently funded by NASA: a feasibility study of commercial space-based solar power and research into the dynamic networking of small spacecraft.

“Exploration is the first phase in any mining and construction operation,” Rostami said. “This piece of equipment could be at the forefront of the exploration activities on extraterrestrial resource development.”

Photo credit: NASA/Joel Kowsky

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

HPCAC Announces the 9th Swiss Annual HPC Conference in Collaboration With HPCXXL User Group

HPC Wire - Fri, 12/22/2017 - 08:18

LUGANO, Switzerland, Dec. 22, 2017 — The HPC Advisory Council (HPCAC), a leading for-community-benefit, member-based organization dedicated to advancing high-performance computing (HPC) research, education and collaboration through global outreach, has announced the Swiss Annual HPC Conference taking place in Lugano, Switzerland, April 9-12, 2018, hosted by the Swiss National Supercomputing Centre (CSCS). For the first time, the conference will be held in concert with the Winter HPCXXL User Group meeting.

The conference agenda focuses on HPC, Cloud, Containers, Deep Learning, Exascale, Open Stack and more. It draws peers, members and non-members alike, together from all over the world, for three days of immersive talks, tutorials and networking. It also attracts contributed talks from renowned subject matter experts (SMEs).

The first three days organized by HPCAC will focus on the intersection of private and public research and collaborative enablement with a combination of invited talks featuring industry visionaries with sponsor-led usage, innovation and future technology sessions. The fourth day will be entirely dedicated to NDA site updates from HPCXXL members.

Open submissions, running through January 31, 2018, enlist experts who define the topic and session type, which can include best-practices talks, workshops, live hands-on tutorials and more. A select volunteer committee drawn from the council’s 400-plus members reviews all conference sessions for relevance and interest, helping to bring these expert specialists together to network, share their latest findings, and explore the newest technologies, techniques and trends.

“The conference provides a variety of resources and access to experts to learn and teach and is inclusive of everyone. While we focus on educating, sharing and learning, it’s really about inspiring others,” said Gilad Shainer, HPC Advisory Council Chairman. “The open forum format and consistent delivery of expert content from leaders on the bleeding edge of HPC and forays into AI, for example, is what draws the highly diverse, multi-disciplined global community and curious to Lugano as attendees, as SMEs, as conference sponsors and to HPCAC as members.”

“We are very excited to organize a joint conference here in Lugano, bringing together the communities of HPCAC and HPCXXL,” said Hussein Harake, HPC system manager, CSCS. “We believe that such a collaboration will offer a unique opportunity for HPC professionals to discuss and share their knowledge and experiences.”

“We are enthusiastic about combining our efforts and offering our members broader content while continuing to have frank NDA conversations during the HPCXXL day,” said Michael Stephan, president, HPCXXL.

Registration is required to attend the Swiss Annual HPC Conference, which charges a nominal fee of CHF 180 and includes breaks and lunch during the conference along with a special group outing. HPCXXL User Group attendees must also register for the members meeting, which charges a separate fee of CHF 90 for break and lunch services during the one-day session. Organizers now offer streamlined registration on a single site for all attendees, including sponsors and their delegates, and accommodate registration for one-, three- and four-day schedules. Registration is open through April 1, 2018. Participants can take advantage of early-bird pricing of CHF 90 by registering before February 11, 2018. The deadline for submitting session proposals is January 31, 2018. Additional details and forms can be found on the HPCAC website.

Additional HPCAC Conferences

In addition to the four-day combined 2018 Swiss Conference and HPCXXL User Group event, the HPCAC’s premier EU conference, the council hosts multiple annual conferences throughout the world. The 2018 schedule leads off with the two-day U.S.-based Stanford Conference in February; Australia’s two-day conference is planned for August; and one-day sessions in Spain in September and China in October close out the year. The council also supports two major student-focused competitions each year. The annual ISC-HPCAC Student Cluster Competition (SCC), in partnership with the ISC Group, officially kicked off in November with the reveal of nine of twelve international teams competing next June during the ISC High Performance Conference and Exhibition in Frankfurt, Germany. Launching in May, the annual RDMA competition is a six-month-long programming competition between student teams throughout China that culminates in October with winning teams revealed at the annual China Conference. Sponsor opportunities are available to support all of these critical education initiatives. More information on becoming a member, conference dates, locations and sponsorships, and student competitions and sponsorships is available on the HPC Advisory Council website.

About the HPC Advisory Council

Founded in 2008, the non-profit HPC Advisory Council is an international organization with over 400 members committed to education and outreach. Members share expertise, lead special interest groups and have access to the HPCAC technology center to explore opportunities and evangelize the benefits of high performance computing technologies, applications and future development. The HPC Advisory Council hosts multiple international conferences and STEM challenges including the RDMA Student Competition in China and the Student Cluster Competition at the annual ISC High Performance conferences. Membership is free of charge and obligation. More information: www.hpcadvisorycouncil.com.

Source: HPCAC

The post HPCAC Announces the 9th Swiss Annual HPC Conference in Collaboration With HPCXXL User Group appeared first on HPCwire.

Raj Rawat named 2018 Executive in Residence

Colorado School of Mines - Thu, 12/21/2017 - 11:03

The 2018 Joe Eazor Executive in Residence Seminar Series hosted by the Division of Economics and Business’ Engineering and Technology Management Program will be led by author, speaker, consultant and innovator Raj Rawat.

Open to all Mines students and faculty, the seminar series allows executives from industry to pass on insight and knowledge to students preparing for challenges that the seasoned executive understands well. This Engineering and Technology Management Program initiative facilitates active involvement by industry executives, through teaching, student advising activities and more.

Seminars begin Jan. 16 and take place 4-5:30 p.m. at Colorado School of Mines in Marquez Hall 126.

2018 Joe Eazor Executive in Residence Seminar Series Schedule

  • Jan. 16 - Reverse the Chase, Let Opportunity Chase You - Raj Rawat 
  • Jan. 30 - What Leaders Are Made Of - Greg Keller, James Jamison and Abby Benson 
  • Feb. 6 - Building Your Leadership Core - Katherine Knowles, Bart Lorang and Jessica Garcia 
  • Feb. 27 - Excellence and Leadership - Remy Arteaga and Julie Korak 
  • March 13 - Excellence = Fearless Life - George Promis and Nick Gromicko 
  • April 10 - Your Everest - Student Presenters 

Meet the speakers and learn more at EconBus.MINES.edu.

Raj Rawat: 2018 Executive in Residence
Raj Rawat is an author, speaker, consultant and innovator. After 20 years of leading “impossible” billion-dollar projects for Fortune 50 companies, he found his passion in inspiring companies and individuals to rise to their full potential.

Rawat’s recent book, “Find Your Everest: Before Someone Chooses It For You,” is gaining critical acclaim for its inspirational yet honest approach to thinking big and achieving. Rawat creates high-performance cultures by aligning individuals’ priorities with the organization’s performance targets.

Learn more at RajRawat.com.

Kelly Beard, Communications Specialist, Division of Economics and Business | 303-273-3452 | kbeard@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News
