Feed aggregator

IBM Reports 2017 Fourth-Quarter and Full-Year Results

HPC Wire - Thu, 01/18/2018 - 17:00

ARMONK, NY, Jan. 18, 2018 — IBM (NYSE:IBM) today announced fourth-quarter and full-year 2017 earnings results.

“Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president and chief executive officer. “During 2017, we strengthened our position as the leading enterprise cloud provider and established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

“Over the past several years we have invested aggressively in technology and our people to reposition IBM,” said James Kavanaugh, IBM senior vice president and chief financial officer. “2018 will be all about reinforcing IBM’s leadership position in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

Strategic Imperatives Revenue

Fourth-quarter cloud revenues increased 30 percent to $5.5 billion (up 27 percent adjusting for currency). Cloud revenue over the last 12 months was $17.0 billion, including $9.3 billion delivered as-a-service and $7.8 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions. The annual exit run rate for as-a-service revenue increased to $10.3 billion from $8.6 billion in the fourth quarter of 2016. In the quarter, revenues from analytics increased 9 percent (up 6 percent adjusting for currency). Revenues from mobile increased 23 percent (up 21 percent adjusting for currency) and revenues from security increased 132 percent (up 127 percent adjusting for currency).

Full-Year 2018 Expectations

The company will discuss 2018 expectations during today’s quarterly earnings conference call.

Cash Flow and Balance Sheet

In the fourth quarter, the company generated net cash from operating activities of $5.7 billion, or $7.8 billion excluding Global Financing receivables. IBM’s free cash flow was $6.8 billion. IBM returned $1.4 billion in dividends and $0.7 billion of gross share repurchases to shareholders. At the end of December 2017, IBM had $3.8 billion remaining in the current share repurchase authorization.

The company generated full-year free cash flow of $13.0 billion, excluding Global Financing receivables. The company returned $9.8 billion to shareholders through $5.5 billion in dividends and $4.3 billion of gross share repurchases.

IBM ended the fourth quarter of 2017 with $12.6 billion of cash on hand. Debt totaled $46.8 billion, including Global Financing debt of $31.4 billion. The balance sheet remains strong and is well positioned over the long term.

Segment Results for Fourth Quarter

  • Cognitive Solutions (includes solutions software and transaction processing software) — revenues of $5.4 billion, up 3 percent (flat adjusting for currency), driven by security and transaction processing software.
  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.2 billion, up 1 percent (down 2 percent adjusting for currency). Strategic imperatives revenue grew 9 percent, led by the cloud practice, mobile and analytics.
  • Technology Services & Cloud Platforms (includes infrastructure services, technical support services and integration software) — revenues of $9.2 billion, down 1 percent (down 4 percent adjusting for currency). Strategic imperatives revenue grew 15 percent, driven by hybrid cloud services, security and mobile.
  • Systems (includes systems hardware and operating systems software) — revenues of $3.3 billion, up 32 percent (up 28 percent adjusting for currency) driven by growth in IBM Z, Power Systems and storage.
  • Global Financing (includes financing and used equipment sales) — revenues of $450 million, up 1 percent (down 2 percent adjusting for currency).

Tax Rate

The enactment of the Tax Cuts and Jobs Act in December 2017 resulted in a one-time charge of $5.5 billion in the fourth quarter. The charge encompasses several elements, including a tax on accumulated overseas profits and the revaluation of deferred tax assets and liabilities. As a result, IBM’s reported GAAP tax rate, which includes the one-time charge, was 124 percent for the fourth quarter, and 49 percent for the full year. IBM’s operating (non-GAAP) tax rate, which excludes the one-time charge, was 6 percent for the fourth quarter; and 7 percent for the full year, which includes the effect of discrete tax benefits in the first and second quarters. Without discrete tax items, the full-year operating (non-GAAP) tax rate was 12 percent, at the low end of the company’s previously estimated range.

Full-Year Results

  • Full-year GAAP EPS from continuing operations of $6.14 – includes a one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year operating (non-GAAP) EPS of $13.80 – excludes the one-time charge of $5.5 billion associated with the enactment of U.S. tax reform
  • Full-year revenue of $79.1 billion, down 1 percent

Link to full announcement.

Source: IBM

The post IBM Reports 2017 Fourth-Quarter and Full-Year Results appeared first on HPCwire.

Wilcox serves on AGU panel addressing climate intervention

Colorado School of Mines - Thu, 01/18/2018 - 14:50

Jen Wilcox, associate professor of chemical and biological engineering at Colorado School of Mines, served on a panel of the American Geophysical Union (AGU) that has adopted a revised position statement on climate intervention.

The statement, titled “Climate Intervention Requires Enhanced Research, Consideration of Societal and Environmental Impacts, and Policy Development,” was updated to reflect changes in the current understanding of climate intervention and discusses two categories of climate intervention: carbon dioxide removal and albedo modification, a process that would inject particles in the atmosphere that would reflect some of the sun’s radiation away from the earth’s surface. The statement also encourages national funding agencies to “create substantial research programs on climate intervention” to better understand associated risks and opportunities.

Wilcox was one of nine panelists who reviewed and revised the position statement. AGU’s position statement was featured in Eos, a news source focused on the Earth and space sciences, and can serve as a resource for policymakers drafting legislation that impacts members’ scientific disciplines.

 

CONTACT
Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Sowers discusses space resources on Colorado Public Television

Colorado School of Mines - Thu, 01/18/2018 - 13:44

George Sowers, professor of practice in mechanical engineering at Colorado School of Mines, appeared on a recent episode of Devil's Advocate with Jon Caldara, a current events show that airs on Colorado Public Television.

Sowers, the former chief scientist at ULA, discussed the growing field of space resources and how water on the Moon, space-based solar power and metallic asteroids could fuel future space exploration and development.

Categories: Partner News

PASC18 Announces Keynote Speaker, Extends Paper Deadline to Jan. 21

HPC Wire - Thu, 01/18/2018 - 13:23

Jan. 18, 2018 — The PASC18 Organizing Team is pleased to announce a keynote presentation by Marina Becoulet from CEA, and that the deadline for paper submissions has been extended until January 21, 2018.

Marina Becoulet. Image courtesy of PASC18.

PASC18 keynote presentation: Challenges in the First Principles Modelling of Magneto Hydro Dynamic Instabilities and their Control in Magnetic Fusion Devices

The main goal of the International Thermonuclear Experimental Reactor (ITER) project is the demonstration of the feasibility of future clean energy sources based on nuclear fusion in magnetically confined plasma. In the era of ITER construction, fusion plasma theory and modelling provide not only a deep understanding of a specific phenomenon, but moreover, modelling-based design is critical for ensuring active plasma control.

The most computationally demanding aspect of the project is first principles fusion plasma modelling, which relies on fluid models – such as Magneto Hydro Dynamics (MHD) – or increasingly often on kinetic models. The challenge stems from the complexity of the 3D magnetic topology, the large difference in time scales from Alfvénic motion (~10^-7 s) to the confinement time (hundreds of seconds), the large difference in space scales from micro-instabilities (mm) to the machine size (a few meters), and most importantly, from the strongly non-linear nature of plasma instabilities, which need to be avoided or controlled.
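To make the separation of scales concrete, here is a rough order-of-magnitude comparison using the figures quoted above (the exact values vary by device and plasma regime):

```latex
% Time-scale separation: Alfvénic motion (~1e-7 s) vs. confinement time (~1e2 s)
\frac{\tau_{\text{confinement}}}{\tau_{\text{Alfv\'en}}} \approx \frac{10^{2}\,\text{s}}{10^{-7}\,\text{s}} = 10^{9}
% Space-scale separation: machine size (a few m) vs. micro-instabilities (~1 mm)
\frac{L_{\text{machine}}}{L_{\text{micro}}} \approx \frac{10^{0}\,\text{m}}{10^{-3}\,\text{m}} \approx 10^{3}
```

Nine orders of magnitude in time and three to four in space are what make resolving these instabilities from first principles such a demanding computation.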

The current status of first principles non-linear modelling of MHD instabilities and active methods of their control in existing machines and ITER will be presented, focusing particularly on the strong synergy between experiment, fusion plasma theory, numerical modelling and computer science in guaranteeing the success of the ITER project.

About the presenter

Marina Becoulet is a Senior Research Physicist in the Institute of Research in Magnetic Fusion at the French Atomic Energy Commission (CEA/IRFM). She is also a Research Director and an International expert of CEA, specializing in theory and modelling of magnetic fusion plasmas, in particular non-linear MHD phenomena. After graduating from Moscow State University (Physics Department, Plasma Physics Division) in 1981, she obtained a PhD in Physics and Mathematics from the Institute of Applied Mathematics, Russian Academy of Science (1985). She worked at the Russian Academy of Science in Moscow, on the Joint European Torus in the UK, and since 1998 has been employed at CEA/IRFM, France.

Call for submissions reminder: deadlines are rapidly approaching!

The deadline for paper submissions has been extended to Sunday, January 21, 2018.

PASC18 upcoming submission deadlines:

  • Papers: January 21, 2018
  • Posters: February 4, 2018

Submit your contributions through the online submissions portal.

Full submission guidelines are available at: pasc18.pasc-conference.org/submission/submissions-portal/

PASC18 Scientific Committee: pasc18.pasc-conference.org/about/organization

Further information on the conference and submission possibilities are available at: pasc18.pasc-conference.org/

Source: PASC18

The post PASC18 Announces Keynote Speaker, Extends Paper Deadline to Jan. 21 appeared first on HPCwire.

Supercomputer Simulations Enable 10-Minute Updates of Rain and Flood Predictions

HPC Wire - Thu, 01/18/2018 - 11:36

Jan. 18, 2018 — Using the power of Japan’s K computer, scientists from the RIKEN Advanced Institute for Computational Science and collaborators have shown that incorporating satellite data at frequent intervals—ten minutes in the case of this study—into weather prediction models can significantly improve the rainfall predictions of the models and allow more precise predictions of the rapid development of a typhoon.

Weather prediction models attempt to predict future weather by running simulations based on current conditions taken from various sources of data. However, the inherently complex nature of the systems, coupled with the lack of precision and timeliness of the data, makes it difficult to conduct accurate predictions, especially with weather systems such as sudden precipitation.

As a means to improve models, scientists are using powerful supercomputers to run simulations based on more frequently updated and accurate data. The team led by Takemasa Miyoshi of AICS decided to work with data from Himawari-8, a geostationary satellite that began operating in 2015. Its instruments can scan the entire area it covers every ten minutes in both visible and infrared light, at a resolution of up to 500 meters, and the data is provided to meteorological agencies. Infrared measurements are useful for indirectly gauging rainfall, as they make it possible to see where clouds are located and at what altitude.

For one study, they looked at the behavior of Typhoon Soudelor (known in the Philippines as Hanna), a category 5 storm that caused extensive damage in the Pacific region in late July and early August 2015. In a second study, they investigated the use of the improved data on predictions of heavy rainfall that occurred in the Kanto region of Japan in September 2015. These articles were published in Monthly Weather Review and Journal of Geophysical Research: Atmospheres.

For the study on Typhoon Soudelor, the researchers adopted a recently developed weather model called SCALE-LETKF—running an ensemble of 50 simulations—and incorporated infrared measurements from the satellite every ten minutes, comparing the performance of the model against the actual data from the 2015 tropical storm. They found that, compared to models not using the assimilated data, the new simulation more accurately forecast the rapid development of the storm. They also tried assimilating data at a lower frequency, updating the model every 30 minutes rather than every ten minutes, and the model did not perform as well, indicating that the frequency of the assimilation is an important element of the improvement.
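The article does not describe how SCALE-LETKF is implemented. Purely as an illustration of the ensemble data assimilation idea it is built on (an analysis step that nudges an ensemble of model states toward new observations), here is a minimal stochastic ensemble Kalman filter update in Python; all names, dimensions and values are invented for the example.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """One stochastic ensemble Kalman filter analysis step.

    ensemble     : (n_members, n_state) array of model states
    obs          : (n_obs,) observation vector (e.g., satellite radiances)
    obs_operator : maps a model state to observation space
    obs_err_std  : observation error standard deviation
    """
    n_members, _ = ensemble.shape
    # Project each ensemble member into observation space.
    hx = np.array([obs_operator(member) for member in ensemble])   # (n_members, n_obs)

    x_pert = ensemble - ensemble.mean(axis=0)                      # state perturbations
    hx_pert = hx - hx.mean(axis=0)                                 # obs-space perturbations

    # Sample covariances.
    pxy = x_pert.T @ hx_pert / (n_members - 1)                     # (n_state, n_obs)
    pyy = hx_pert.T @ hx_pert / (n_members - 1)                    # (n_obs, n_obs)
    r = (obs_err_std ** 2) * np.eye(len(obs))

    gain = pxy @ np.linalg.inv(pyy + r)                            # Kalman gain

    # Perturbed observations (stochastic EnKF variant).
    obs_perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_members, len(obs)))

    # Analysis ensemble, pulled toward the observations.
    return ensemble + (obs_perturbed - hx) @ gain.T

# Toy usage: 50-member ensemble, 3-variable state, 2 observations per cycle.
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 3))
observe = lambda x: x[:2]               # observe the first two state variables
analysis = enkf_update(ens, np.array([0.5, -0.2]), observe, 0.1, rng)
print(analysis.shape)                   # (50, 3)
```

An actual LETKF differs in detail (it applies a local, deterministic ensemble transform rather than perturbed observations), but the role of frequent observation updates in steering the ensemble is the same.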

To perform the research on disastrous precipitation, the group examined data from heavy rainfall that occurred in the Kanto region in 2015. Compared to models without data assimilation from the Himawari-8 satellite, the simulations more accurately predicted the heavy, concentrated rain that took place, and came closer to predicting the situation where an overflowing river led to severe flooding.

According to Miyoshi, “It is gratifying to see that supercomputers, along with new satellite data, will allow us to create simulations that will be better at predicting sudden precipitation and other dangerous weather phenomena, which cause enormous damage and may become more frequent due to climate change. We plan to apply this new method to other weather events to make sure that the results are truly robust.”

Source: RIKEN Advanced Institute for Computational Science

The post Supercomputer Simulations Enable 10-Minute Updates of Rain and Flood Predictions appeared first on HPCwire.

ALCF Now Accepting Proposals for Data Science and Machine Learning Projects for Aurora ESP

HPC Wire - Thu, 01/18/2018 - 11:25

Jan. 18, 2018 — The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, is now accepting proposals for data science and machine learning projects for its Aurora Early Science Program (ESP). The deadline to apply is April 8, 2018.

Now slated to be the nation’s first exascale system when it is delivered in 2021, Aurora will be capable of performing a quintillion calculations per second. The system is expected to have more than 50,000 nodes and more than 5 petabytes of total memory, including high-bandwidth memory.

The shift in plans for Aurora, which was initially scheduled to be a 180 petaflops supercomputer, is central to a new paradigm for scientific computing at the ALCF—the expansion from traditional simulation-based research to include data science and machine learning approaches. Aurora’s revolutionary architecture will provide advanced capabilities in the three pillars of simulation, data, and learning that will power a new era of scientific discovery and innovation.

The Aurora ESP, which kicked off with 10 projects in 2017, is expanding to align with this new paradigm for leadership computing. Currently underway, the 10 original projects will serve as simulation-based projects for the new system.

This call for proposals will bring in 10 additional projects in the areas of data science (e.g., data analytics, data-intensive computing, advanced statistical analyses) and machine learning (e.g., deep learning, neural networks). Proposals for crosscutting projects that involve simulation, data, and learning are also encouraged.

Aurora ESP projects will prepare key applications for the architecture and scale of the exascale supercomputer and solidify libraries and infrastructure to pave the way for other applications to run on the system.

The program provides an exciting opportunity to be among the first researchers in the world to run calculations and workflows on an exascale system. With substantial allocations of pre-production time on Aurora, ESP teams will be able to pursue scientific computing campaigns that are not possible on today’s leadership-class supercomputers.

For more information, visit the Aurora ESP webpage and the Proposal Instructions webpage.

For hands-on assistance in preparing for ESP proposals, the ALCF is hosting the Simulation, Data, and Learning Workshop from February 27–March 1, 2018. To register, visit the workshop webpage.

Source: ALCF

The post ALCF Now Accepting Proposals for Data Science and Machine Learning Projects for Aurora ESP appeared first on HPCwire.

Groundbreaking Conference Examines How AI Transforms Our World

HPC Wire - Thu, 01/18/2018 - 10:35

NEW YORK, Jan. 18, 2018 — ACM, the Association for Computing Machinery; AAAI, the Association for the Advancement of Artificial Intelligence; and SIGAI, the ACM Special Interest Group on Artificial Intelligence have joined forces to organize a new conference on Artificial Intelligence, Ethics and Society (AIES). The conference aims to launch a multi-disciplinary and multi-stakeholder effort to address the challenges of AI ethics within a societal context. Conference participants include experts in various disciplines such as computing, ethics, philosophy, economics, psychology, law and politics. The inaugural AIES conference is planned for February 1-3 in New Orleans.

“The public is both fascinated and mystified about how AI will shape our future,” explains AIES Co-chair Francesca Rossi, IBM Research and University of Padova. “But no one discipline can begin to answer these questions alone. We’ve brought together some of the world’s leading experts to imagine how AI will transform our future and how we can ensure that these technologies best serve humanity.”

Conference organizers encouraged the submission of research papers on a range of topics including building ethical AI systems, the impact of AI on the workforce, AI and the law, and the societal impact of AI. Out of 200 submissions, only 61 papers have been selected and will be presented during the conference.

The program of AIES 2018 also includes invited talks by leading scientists, panel discussions on AI ethics standards and the future of AI, and the presentation of the leading professional and student research papers on AI. Co-chairs include Francesca Rossi, a computer scientist and former president of the International Joint Conference on Artificial Intelligence; Jason Furman, a Harvard economist and former Chairman of the Council of Economic Advisers (CEA); Huw Price, a philosopher and Academic Director of the Leverhulme Centre for the Future of Intelligence; and Gary Marchant, Regent’s Professor of Law and Director of the Center for Law, Science and Innovation at Arizona State University.

AIES 2018 HIGHLIGHTS

INVITED TALKS

The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics

Iyad Rahwan and Edmond Awad, Massachusetts Institute of Technology

Rahwan and Awad describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game they developed enabled them to gather 40 million decisions from 3 million people in 200 countries and territories. They report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. They also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. Rahwan and Awad discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.

AI, Civil Rights and Civil Liberties: Can Law Keep Pace with Technology?

Carol Rose, American Civil Liberties Union

At the dawn of this era of human-machine interaction, human beings have an opportunity to shape fundamentally the ways in which machine learning will expand or contract the human experience, both individually and collectively. As efforts to develop guiding ethical principles and legal constructs for human-machine interaction move forward, how do we address not only what we do with AI, but also the question of who gets to decide and how? Are guiding principles of ‘Liberty and Justice for All’ still relevant? Does a new era require new models of open leadership and collaboration around law, ethics, and AI?

AI Decisions, Risk, and Ethics: Beyond Value Alignment

Patrick Lin, California Polytechnic State University

When we think about the values AI should have in order to make right decisions and avoid wrong ones, there’s a large but hidden third category to consider: decisions that are not-wrong but also not-right. This is the grey space of judgment calls, and just having good values might not help as much as you’d think here. Autonomous cars are used as the case study. Lessons are offered for broader AI, such as ethical dilemmas that can arise in everyday scenarios like lane positioning and navigation—and not just in extreme crash scenarios. This is the space where one good value might conflict with another good value, and there’s no “right” answer or even broad consensus on an answer.

The Great AI/Robot Jobs Scare: reality of automation fear redux 

Richard Freeman, Harvard University

This talk will consider the impact of AI and robots on employment, wages and the future of work more broadly. Freeman argues that we should focus on policies that make AI and robotics technology broadly inclusive in terms of both consumption and ownership, so that billions of people can benefit from higher productivity and get on the path to the coming age of intolerable abundance.

PANELS

What Will Artificial Intelligence Bring?

Brent Venable, Tulane University (Moderator); Paula Boddington, Oxford University; Wendell Wallach, Yale University; Jason Furman, Harvard University; and Peter Stone, UT Austin

World-class researchers from different disciplines and best-selling authors will elaborate on the impact of AI on modern society and answer questions. This panel is open to the public.

Prioritizing Ethical Considerations in Intelligent and Autonomous Systems: Who Sets the Standards? 

Takashi Egawa, NEC Corporation; Simson L. Garfinkel, USACM; John C. Havens, IEEE (moderator); Annette Reilly, IEEE; and Francesca Rossi, IBM and University of Padova

For intelligent and autonomous technologies, safety standards and standardization projects are providing detailed guidelines or requirements to help organizations institute new levels of transparency, accountability and traceability. The panelists will explore how we can build trust and maximize innovation while avoiding negative unintended consequences.

BEST PAPER AWARD (sponsored by the Partnership on AI)

Shared between the following two papers:

Transparency and Explanation in Deep Reinforcement Learning Neural Networks

Rahul Iyer, InSite Applications; Yuezhang Li, Google; Huao Li, University of Pittsburgh; Michael Lewis, Facebook; Ramitha Sundar, Carnegie Mellon; and Katia Sycara, Carnegie Mellon

For AI systems to be accepted and trusted, users should be able to understand the reasoning process of the system and to form coherent explanations of the system’s decisions and actions. This paper presents a novel and general method to provide a visualization of the internal states of deep reinforcement learning models, thus enabling the formation of explanations that are intelligible to humans.

An AI Race: Rhetoric and Risks 

Stephen Cave, Leverhulme Centre for the Future of Intelligence, Cambridge University; and Seán S. ÓhÉigeartaigh, Centre for the Study of Existential Risk, Cambridge University

The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of AI. This paper assesses the potential risks of the AI race narrative, explores the role of the research community in responding to these risks, and discusses alternative ways to develop AI in a collaborative and responsible way.

For a complete list of research papers and posters which will be presented at the AIES Conference, visit http://www.aies-conference.com/accepted-papers/. The proceedings of the conference will be published in the AAAI and ACM Digital Libraries.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post Groundbreaking Conference Examines How AI Transforms Our World appeared first on HPCwire.

New Blueprint for Converging HPC, Big Data

HPC Wire - Thu, 01/18/2018 - 10:31

After five annual workshops on Big Data and Extreme-Scale Computing (BDEC), a group of international HPC heavyweights including Jack Dongarra (University of Tennessee), Satoshi Matsuoka (Tokyo Institute of Technology), William Gropp (National Center for Supercomputing Applications), and Thomas Schulthess (Swiss National Supercomputing Centre), among others, has issued a comprehensive Big Data and Extreme-Scale Computing Pathways to Convergence Report. Not surprisingly it’s a large work not easily plumbed in a single sitting.

Convergence – harmonizing computational infrastructures to accommodate HPC and big data – isn’t a new topic. Recently, big data’s close cousin, machine learning, has become part of the discussion. Moreover, the accompanying rise of cyberinfrastructure as a dominant force in science computing has complicated convergence efforts.

The central premise of this study is that a ‘data-driven’ upheaval is exacerbating divisions – technical, cultural, political, economic – in the cyberecosystem of science. The report tackles in some depth a narrower slice of the problem. Big data, say the authors, has caused or worsened two ‘paradigm splits’: one between traditional HPC and high-end data analysis (HDA), and another between stateless networks and the stateful services provided by end systems. The report lays out a roadmap for mending these fissures.

 

This snippet from the report’s executive summary does a nice job of summing up the challenge:

“Looking toward the future of cyberinfrastructure for science and engineering through the lens of these two bifurcations made it clear to the BDEC community that, in the era of Big Data, the most critical problems involve the logistics of wide-area, multistage workflows—the diverse patterns of when, where, and how data is to be produced, transformed, shared, and analyzed. Consequently, the challenges involved in codesigning software infrastructure for science have to be reframed to fully take account of the diversity of workflow patterns that different application communities want to create. For the HPC community, all the imposing design and development issues of creating an exascale-capable software stack remain; but the supercomputers that need this stack must now be viewed as the nodes (perhaps the most important nodes) in the very large network of computing resources required to process and explore rivers of data flooding in from multiple sources.”

There’s a lot to digest here, including a fair amount of technical guidance. Issued at the end of 2017, the report is the result of workshops held in the U.S. (2013), Japan (2014), Spain (2015), Germany (2016), and China (2017); it grew out of prior efforts of the International Exascale Software Project (IESP). Descriptions and results of the five workshops (agendas, white papers, presentations, attendee lists) are available at the BDEC site (http://www.exascale.org/bdec/).

Jack Dongarra

Commenting on the work, Dongarra said, “Computing is at a profound inflection point, economically and technically. The end of Dennard scaling and its implications for continuing semiconductor-design advances, the shift to mobile and cloud computing, the explosive growth of scientific, business, government, and consumer data and opportunities for data analytics and machine learning, and the continuing need for more-powerful computing systems to advance science and engineering are the context for the debate over the future of exascale computing and big data analysis.”

The broad hope is that the ideas presented in the report will guide community efforts. Dongarra emphasized “High-end data analytics (big data) and high-end computing (exascale) are both essential elements of an integrated computing research-and-development agenda; neither should be sacrificed or minimized to advance the other.” Shown below are typical differences in the BDEC software ecosystem.

 

There’s too much in the report to adequately cover here. Here are the report’s summary recommendations:

“Our major, global recommendation is to address the basic problem of the two paradigm splits: the HPC/HDA software ecosystem split and the wide area data logistics split. For this to be achieved, there is a need for new standards that will govern the interoperability between data and compute, based on a new, common and open Distributed Services Platform (DSP), that offers programmable access to shared processing, storage and communication resources, and that can serve as a universal foundation for the component interoperability that novel services and applications will require.

“We make five recommendations for decentralized edge and peripheral ecosystems:

  • Converge on a new hourglass architecture for a Common Distributed Service Platform (DSP).
  • Target workflow patterns for improved data logistics.
  • Design cloud stream processing capabilities for HPC.
  • Promote a scalable approach to Content Delivery/Distribution Networks.
  • Develop software libraries for common intermediate processing tasks.

“We make five actionable conclusions for centralized facilities:

  • Energy is an overarching challenge for sustainability.
  • Data reduction is a fundamental pattern.
  • Radically improved resource management is required.
  • Both centralized and decentralized systems share many common software challenges and opportunities: 
(a) Leverage HPC math libraries for HDA.
(b) More efforts for numerical library standards.
(c) New standards for shared memory parallel processing.
(d) Interoperability between programming models and data formats.
  • Machine learning is becoming an important component of scientific workloads, and HPC architectures must be adapted to accommodate this evolution.”

Link to BDEC Report: http://www.exascale.org/bdec/

The post New Blueprint for Converging HPC, Big Data appeared first on HPCwire.

India’s Ministry of Earth Sciences Deploys New Cray XC40 Supercomputers and Cray Storage Systems

HPC Wire - Thu, 01/18/2018 - 08:38

SEATTLE, Jan. 18, 2018 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) today announced that as part of a $67 million contract with Cray to update its supercomputing facilities and systems, the Ministry of Earth Sciences in India has deployed two Cray XC40 supercomputers and two Cray ClusterStor storage systems. The combined systems are the largest supercomputing resource in India, and extend Cray’s leadership position in the weather forecasting and climate research communities.

The Ministry of Earth Sciences (MoES) is dedicated to providing world-class weather, climate, ocean, and seismological services to the citizens of India, and has significantly upgraded its high-performance computing capabilities to better support its operational and research activities. The two Cray systems are located at two divisions of MoES – the Indian Institute of Tropical Meteorology (IITM) in Pune, India, and the National Center for Medium Range Weather Forecasting (NCMRWF) in Noida, India.

The Cray supercomputer at IITM will be used for conducting research on improving weather and climate forecasts, and the system – named “Pratyush” which means the Sun – will also be used by other MoES organizations for research activities to improve their respective weather and climate services. The NCMRWF will use its Cray supercomputer to run daily, operational weather forecasts. The combined supercomputing systems have a peak performance of more than six petaflops, and more than 18 petabytes of Cray ClusterStor storage capacity.

“Our new Cray supercomputing systems provide MoES’ scientists with the computational power needed for producing more accurate and reliable weather forecasts at much higher resolutions,” said Dr. Madhavan Nair Rajeevan, Secretary, Ministry of Earth Sciences, Government of India. “Our country needs better forecasts for weather and climate events such as monsoons, tsunamis, cyclones, and extreme heat waves and cold snaps, and so it is imperative that we augment our HPC facilities with highly-advanced supercomputing systems. The two new Cray systems are major steps forward for MoES, and allow us to stand tall in the international weather and climate communities.”

Cray continues to strengthen its leadership position in the weather forecasting and climate research space, as an increasing number of the world’s leading centers rely on Cray supercomputers and storage systems to run their complex meteorological models. More than three-quarters of the World Meteorological Organization’s Long Range Global Modelling Centers have selected Cray supercomputers for numerical weather prediction, and MoES is the latest organization to deploy Cray systems for numerical weather prediction and climate research.

“MoES has made a substantial enhancement to its high-performance computing infrastructure, and we are honored Cray was chosen to provide both the supercomputing and storage technologies necessary for improving their extensive range of important weather services for the people of India,” said Peter Ungaro, president and CEO of Cray. “The world’s preeminent global weather centers, like MoES, continue to rely on Cray supercomputers to power their weather forecasts. Our leadership position in earth sciences is representative of our proven ability to build production-ready supercomputing and storage systems across many data-intensive workloads such as weather forecasting, analytics, and artificial intelligence.”

The Cray XC series of supercomputers is designed to handle the most challenging workloads requiring sustained multi-petaflop performance. The Cray XC40 supercomputers incorporate the Aries high performance network interconnect for low latency and scalable global bandwidth, the HPC-optimized Cray Linux Environment, the Cray programming environment consisting of powerful tools for application developers, as well as the latest Intel processors and NVIDIA GPU accelerators. The Cray XC supercomputers deliver on Cray’s commitment to performance supercomputing with an architecture and tightly-integrated software environment that provides extreme scalability and sustained performance.

Consisting of products and services, the multi-year contract is valued at more than $67 million. The systems were accepted in late 2017.

For more information on the Cray XC supercomputers and Cray storage solutions, please visit the Cray website at www.cray.com.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.

The post India’s Ministry of Earth Sciences Deploys New Cray XC40 Supercomputers and Cray Storage Systems appeared first on HPCwire.

Researchers Measure Impact of ‘Meltdown’ and ‘Spectre’ Patches on HPC Workloads

HPC Wire - Wed, 01/17/2018 - 22:40

Computer scientists from the Center for Computational Research, State University of New York (SUNY), University at Buffalo have examined the effect of Meltdown and Spectre security updates on the performance of popular HPC applications and benchmarks and are sharing their results in a paper, available on arXiv.org.

Their method was to use the application kernel module of the XD Metrics on Demand (XDMoD) tool to run tests before and after the installation of the vulnerability patches. They recorded the performance difference for the following applications and benchmarks: NWChem, NAMD, the HPC Challenge Benchmark suite (HPCC) [which includes the memory bandwidth micro-benchmark STREAM and the NAS Parallel Benchmarks (NPB)], IOR, MDTest and interconnect/MPI benchmarks (IMB).

Most of the application kernels were executed on one or two nodes (8 and 16 cores, respectively) of a development cluster at the Center for Computational Research. Each node has two Intel L5520 CPUs (Nehalem EP), is connected by QDR Mellanox InfiniBand, and can access a 3 PB shared IBM GPFS storage system. The operating system is CentOS Linux release 7.4.1708.

The worst case performance hit was as high as 54 percent for select functions (e.g., MPI random access, memory copying and file metadata operations), while real-world applications showed a 2-3 percent decrease in performance for single-node jobs and a 5-11 percent decrease for parallel two-node jobs. The authors indicate there may be a way to recoup some of this loss via compiler and MPI libraries.
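As a simple illustration of how such before/after comparisons are typically expressed, the relative change can be computed from paired timings; the numbers below are made up and are not the paper’s measurements.

```python
def percent_slowdown(runtime_before_s: float, runtime_after_s: float) -> float:
    """Relative runtime increase after patching, in percent."""
    return (runtime_after_s - runtime_before_s) / runtime_before_s * 100.0

# Hypothetical timings (seconds) for one application kernel, pre- and post-patch.
print(f"{percent_slowdown(100.0, 103.0):.1f}% slower")  # -> 3.0% slower
```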

Also notable: Fourier transformation (FFT), matrix multiplication and matrix transposition slowed by 6.4 percent, 2 percent and 10 percent (on two nodes), respectively.

The findings of the SUNY team align with those of Red Hat, which earlier this month released the results from benchmark tests it conducted specifically to measure the impact of the kernel patches. Red Hat found that CPU-intensive HPC workloads suffered only a 2-5 percent hit “because jobs run mostly in user space and are scheduled using CPU-pinning or NUMA control.” In comparison, database analytics were found to take a modest 3-7 percent hit and OLTP database workloads suffered the most (8-19 percent degradation).

The SUNY researchers have plans to conduct additional testing “with a larger number of nodes and for more application kernels” once the updates are applied to their production system.

The XD Metrics on Demand (XDMoD) tool employed for the testing was originally developed to provide independent audit capability for the XSEDE program. It was later open-sourced and is now used widely across research and commercial HPC sites. The tool includes an application kernel performance monitoring module that “allows automatic performance monitoring of HPC resources through the periodic execution of application kernels, which are based on benchmarks or real-world applications implemented with sensible input parameters.”

The paper was authored by Nikolay A. Simakov, Martins D. Innus, Matthew D. Jones, Joseph P. White, Steven M. Gallo, Robert L. DeLeon and Thomas R. Furlani. It is available on arxiv.org.

The post Researchers Measure Impact of ‘Meltdown’ and ‘Spectre’ Patches on HPC Workloads appeared first on HPCwire.

Fostering Lustre Advancement Through Development and Contributions

HPC Wire - Wed, 01/17/2018 - 17:47

Six months after organizational changes at Intel’s High Performance Data Division (HPDD), most in the Lustre community have shed their initial apprehension that the changes could affect or disrupt Lustre development. Customers who have adopted the technology as their main parallel file system now have a clearer picture of what the future holds for the world’s most utilized parallel file system. Lustre remains strong and will continue to dominate the persistent parallel file system arena, at least for the foreseeable future.

Carlos Aoki Thomaz, Senior Product Manager at DDN

The new Lustre development and adoption strategy has turned out to be surprisingly simple, and clearer and more consistent than anticipated. As in the old Whamcloud days, Lustre development has returned to a single code stream, avoiding confusion over different distributions, features, capabilities, and source code differentiation. Quietly released in July 2017, Lustre 2.10 is the LTS (Long Term Support) release and should remain the mainstream version until early-to-mid 2019.

As a major contributor to the Lustre community, DataDirect Networks (DDN) announced in 2016 that all of its Lustre features would be merged over time into the Lustre master branch. This convergence gives the entire community transparent access to the code, reducing the overhead of code development management and better aligning with the new features released in Lustre 2.10.

Lustre 2.10 introduces a sophisticated set of features, such as Progressive File Layouts (PFL), Project Quotas, IB multi-rail and the NRS delay policy. Progressive File Layouts allow system administrators and users to adjust file layouts and how a file is striped: the number of stripes and the stripe block size may now vary with file size. Several use cases can take significant advantage of PFL while simplifying storage administration in the process. The storage administrator can define standard default layouts for different types of files, minimizing the need for users to manipulate file layouts themselves (although users can still define their own layouts). With the increasing use of flash technologies in hybrid parallel file systems (SSDs and NVMe devices mixed with standard rotational drives), it is now possible to create sophisticated mechanisms that optimize data placement using PFL and OST pools.
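As an illustration of how a progressive layout is expressed, the sketch below wraps the `lfs setstripe` composite-layout syntax introduced with PFL in Lustre 2.10 (`-E` sets a component’s end offset, `-c` its stripe count). The specific extents and stripe counts are arbitrary examples, and the exact flags should be verified against the `lfs` version on your system.

```python
import subprocess

def set_progressive_layout(directory: str) -> None:
    """Apply a progressive file layout to a directory so that files created
    under it are striped more widely as they grow."""
    cmd = [
        "lfs", "setstripe",
        "-E", "4M",   "-c", "1",    # first 4 MiB: a single stripe (small files stay simple)
        "-E", "256M", "-c", "4",    # up to 256 MiB: stripe over 4 OSTs
        "-E", "-1",   "-c", "-1",   # remainder of the file: stripe over all available OSTs
        directory,
    ]
    subprocess.run(cmd, check=True)

# Example (requires a Lustre client and an existing Lustre directory):
# set_progressive_layout("/mnt/lustre/project/scratch")
```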

Another feature, possibly addressing the most long-standing need among current Lustre users, is Project Quotas. Project Quotas allow quota definitions per “project,” which could, for example, be associated with a specific directory. Previously, Lustre only supported standard POSIX user and group quotas. With Project Quotas, administrators move a step ahead in managing space among users, groups and projects and in planning for capacity and growth. Project Quotas add space accounting and enforcement of capacity utilization based on OSTs, sub-directories and file-sets, providing the granularity needed to manage many different use cases.

Some have asked about the performance impact of Project Quotas. Results of various tests have been impressive and encouraging, showing no degradation compared to standard POSIX quotas. Project Quotas are available for Lustre running with an LDISKFS backend.

Although the feature only landed in Lustre 2.10, DDN, as the developer responsible for it, has backported it into its Exascaler 3.2 release (based on Lustre 2.7). Historically, the latest and greatest version of Lustre brings the most advanced technologies with a price to pay: untested and unproven code that usually requires a few release cycles to stabilize. Since Project Quotas are needed by a large range of customers who are not yet ready to move to Lustre 2.10, Lustre 2.7 users can run Project Quotas with full support. Customers running Project Quotas on Lustre 2.7 will have their data fully preserved once they decide to upgrade to Lustre 2.10 (note that users moving from Lustre versions prior to 2.7 to Lustre 2.10 and activating Project Quotas must reformat the file system).

LNET IB multi-rail allows users to take advantage of multiple InfiniBand adapters, aggregating bandwidth for Lustre LNET. This technique has long been available to Ethernet users through Ethernet bonding, but InfiniBand users were previously unable to “bond” interfaces and were effectively limited to the performance of a single IB card. There was a need for increased bandwidth, especially on the client side. New architectures, such as HPE UV, have multiple sockets and a huge amount of memory, capable of running multiple and much larger compute jobs. Those scenarios bring an unbalanced CPU/memory-to-IO ratio, where even an IB EDR link running at 100 Gbps may become a bottleneck. IB multi-rail lets Lustre serve larger SMP-like nodes, aggregating network bandwidth and providing a balanced CPU/memory-to-IO ratio. On the server side, the biggest advantage is in high availability: having more than one IB link provides redundancy, avoiding scenarios that trigger server failover due to an IB failure. In network failure scenarios, the failures and their recovery are now handled transparently, without compromising performance.

The NRS delay policy, which simulates high server load as a way of validating the resilience of Lustre under load, is another feature introduced in Lustre 2.10. It provides a valid way to perform fault injection and load simulation, which is especially useful during stabilization phases, performance characterization and debugging.

Along with these recently announced features, a new approach has been proposed for Lustre’s policy engine (LiPE), designed to reduce installation and deployment complexity while delivering significantly faster results when executing and managing storage policies. LiPE relies on a set of components that allow the engine to:

  • Scan Lustre metadata targets (MDTs) quickly,
  • Create an in-memory map of the file system’s objects, and
  • Implement data management policies based on that mapped information.

This approach would allow users to define policies that trigger data automation via Lustre HSM hooks or external data management (copy tools, for example) mechanisms.
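LiPE’s actual interfaces are not described here. Purely as an illustration of the scan, map, and policy pattern outlined above, a toy policy pass in Python (walking a directory tree with ordinary POSIX metadata rather than scanning MDTs directly) might look like this; the threshold and action are hypothetical.

```python
import time
from pathlib import Path

ARCHIVE_AFTER_DAYS = 90  # hypothetical policy threshold

def scan(root: str) -> dict:
    """Build an in-memory map of path -> (size, mtime) for all files under root."""
    fs_map = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            fs_map[str(path)] = (st.st_size, st.st_mtime)
    return fs_map

def apply_policy(fs_map: dict) -> list:
    """Select files untouched for ARCHIVE_AFTER_DAYS as candidates for an
    external copy tool or HSM archive action."""
    cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400
    return [path for path, (_, mtime) in fs_map.items() if mtime < cutoff]

if __name__ == "__main__":
    candidates = apply_policy(scan("/mnt/lustre/project"))
    for path in candidates:
        print("archive candidate:", path)
```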

In the next stage of development, LiPE may be integrated with a File Heat Map mechanism for more automated and transparent data management, resulting in a better utilization of parallel storage infrastructure.

In regard to Lustre performance, a new initiative within the community is investigating the implementation of high-level tools, possibly at the user level, that would improve utilization and configuration of Lustre Quality of Service (QoS). In support of those efforts, a new QoS approach has been developed that is based on the Token Bucket Filter algorithm on the OST level. It allows system administrators to define the maximum number of RPCs to be issued by a user/group or job ID to a given OST. Throttling performance provides I/O control and bandwidth reservation that can guarantee that higher priority jobs run in a more predictable time, avoiding performance variations due to I/O delays.
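The token bucket idea behind this QoS approach is a standard rate-limiting technique. A minimal sketch of capping the RPC rate per job ID might look like the following; the class, rates and per-job mapping are illustrative and are not Lustre’s implementation.

```python
import time

class TokenBucket:
    """At most `rate` RPCs per second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per job ID: job "12345" limited to 100 RPCs/s with bursts of 200.
buckets = {"12345": TokenBucket(rate=100.0, capacity=200.0)}

def issue_rpc(job_id: str) -> bool:
    """Return True if the RPC may be sent now, False if it should be throttled."""
    return buckets[job_id].allow()
```

Throttling lower-priority job IDs this way is what lets higher-priority work see more predictable I/O times.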

In keeping with new HPC trends, a tremendous amount of work has also been invested in integrating Lustre with Linux container-based workloads, providing native Lustre file system capabilities within containers, as well as support for newer kernels and for specialized artificial intelligence and machine learning appliances.

2017 was a productive year for Lustre that showcased a very active and growing Lustre community and that positioned Lustre as the “go to” choice for many high-performance computing organizations and data centers. Moving into 2018, look for Lustre roadmaps to solidify this position with enhanced security, performance, Remote Access Service (RAS), and data management capabilities, as well as the addition of more enterprise-class features.

 

The post Fostering Lustre Advancement Through Development and Contributions appeared first on HPCwire.

Supercomputing-Backed Analysis Reveals Decades of Questionable Investments

HPC Wire - Wed, 01/17/2018 - 16:58

Jan. 17, 2018 — One of the key principles in asset pricing — how we value everything from stocks and bonds to real estate — is that investments with high risk should, on average, have high returns.

“If you take a lot of risk, you should expect to earn more for it,” said Scott Murray, professor of finance at Georgia State University. “To go deeper, the theory says that systematic risk, or risk that is common to all investments” — also known as ‘beta’ — “is the kind of risk that investors should care about.”

This theory was first articulated in the 1960s by Sharpe (1964), Lintner (1965), and Mossin (1966). However, empirical work dating as far back as 1972 didn’t support the theory. In fact, many researchers found that stocks with high risk often do not deliver higher returns, even in the long run.

“It’s the foundational theory of asset pricing but has little empirical support in the data. So, in a sense, it’s the big question,” Murray said.

Isolating the Cause

In a recent paper in the Journal of Financial and Quantitative Analysis, Murray and his co-authors Turan Bali (Georgetown University), Stephen Brown (Monash University) and Yi Tang (Fordham University), argue that the reason for this ‘beta anomaly’ lies in the fact that stocks with high betas also happen to have lottery-like properties – that is, they offer the possibility of becoming big winners. Investors who are attracted to the lottery characteristics of these stocks push their prices higher than theory would predict, thereby lowering their future returns.

Scott Murray, Assistant Professor of Finance at Georgia State University

To support this hypothesis, they analyzed stock prices from June 1963 to December 2012. For every month, they calculated the beta of each stock (up to 5,000 stocks per month) by running a regression — a statistical way of estimating the relationships among variables — of the stock’s return on the return of the market portfolio. They then sorted the stocks into 10 groups based on their betas and examined the performance of stocks in the different groups.
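As a schematic illustration of that procedure (not the authors’ code or data), estimating each stock’s beta with an ordinary least squares regression and then forming beta deciles could look like this in Python; the simulated returns are invented purely to make the example run.

```python
import numpy as np

def estimate_beta(stock_returns: np.ndarray, market_returns: np.ndarray) -> float:
    """Slope of an OLS regression of stock returns on market returns (the stock's beta)."""
    x = np.column_stack([np.ones_like(market_returns), market_returns])
    coef, *_ = np.linalg.lstsq(x, stock_returns, rcond=None)
    return coef[1]

def decile_groups(betas: np.ndarray) -> np.ndarray:
    """Assign each stock to one of 10 groups (0 = lowest betas, 9 = highest)."""
    ranks = betas.argsort().argsort()
    return ranks * 10 // len(betas)

# Toy example: 1,000 stocks, 60 months of returns, one market factor.
rng = np.random.default_rng(1)
market = rng.normal(0.01, 0.04, size=60)
true_betas = rng.uniform(0.2, 2.0, size=1000)
stocks = true_betas[:, None] * market + rng.normal(0, 0.05, size=(1000, 60))

betas = np.array([estimate_beta(s, market) for s in stocks])
groups = decile_groups(betas)
print(np.bincount(groups))   # roughly 100 stocks per beta decile
```

The study then compares average returns across these decile portfolios, month by month, to test whether high-beta groups actually earn more.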

“Theory predicts that stocks with high betas do better in the long run than stocks with low betas,” Murray said. “Doing our analysis, we find that there really isn’t a difference in the performance of stocks with different betas.”

They next analyzed the data again and, for each stock month, calculated how lottery-like each stock was. Once again, they sorted the stocks into 10 groups based on their betas and then repeated the analysis. This time, however, they implemented a constraint that required each of the 10 groups to have stocks with similar lottery characteristics. By making sure the stocks in each group had the same lottery properties, they controlled for the possibility that their failure to detect a performance difference in their original tests was because the stocks in different beta groups have different lottery characteristics.

“We found that after controlling for lottery characteristics, the seminal theory is empirically supported,” Murray said.

In other words: price pressure from investors who want lottery-like stocks is what causes the theory to fail. When this factor is removed, asset pricing works according to theory.

Identifying the Source

Other economists had pointed to a different factor — leverage constraints — as the main cause of this market anomaly. They believed that large investors like mutual funds and pensions that are not allowed to borrow money to buy large amounts of lower-risk stocks are forced to buy higher-risk ones to generate large profits, thus distorting the market.

Murray used the National Science Foundation-funded Wrangler supercomputer at the Texas Advanced Computing Center for his regression analysis. (Source: TACC)

However, an additional analysis of the data by Murray and his collaborators found that the lottery-like stocks were most often held by individual investors. If leverage constraints were the cause of the beta anomaly, mutual funds and pensions would be the main owners driving up demand.

The team’s research won the prestigious Jack Treynor Prize, given each year by the Q Group, which recognizes superior academic working papers with potential applications in the fields of investment management and financial markets.

The work is in line with ideas like prospect theory, first articulated by Nobel-winning behavioral economist Daniel Kahneman, which contends that investors typically overestimate the probability of extreme events — both losses and gains.

“The study helps investors understand how they can avoid the pitfalls if they want to generate returns by taking more risks,” Murray said.

To run the systematic analyses of the large financial datasets, Murray used the Wrangler supercomputer at the Texas Advanced Computing Center (TACC). Supported by a grant from the National Science Foundation, Wrangler was built to enable data-driven research nationwide. Using Wrangler significantly reduced the time-to-solution for Murray.

The plot shows the time-series of aggregate lottery demand. Aggregate lottery demand in any month t is measured as the equal-weighted (EWMAX) or value-weighted (VWMAX) average value of MAX across all stocks in the sample in month t. (Source: TACC)

“If there are 500 months in the sample, I can send one month to one core, another month to another core, and instead of computing 500 months separately, I can do them in parallel and have reduced the human time by many orders of magnitude,” he said.
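That embarrassingly parallel pattern (one month of the sample per worker) can be sketched in a few lines of Python; the per-month analysis below is only a placeholder for the actual regressions and portfolio sorts.

```python
from multiprocessing import Pool

def analyze_month(month_index: int) -> float:
    """Placeholder for the per-month work: load that month's data, estimate betas,
    form portfolios, and return a summary statistic."""
    return float(month_index)  # stand-in result

if __name__ == "__main__":
    months = range(500)                    # e.g., ~500 sample months
    with Pool() as pool:                   # one task per month, spread across cores
        results = pool.map(analyze_month, months)
    print(len(results))
```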

The size of the data for the lottery-effect research was not enormous and could have been computed on a desktop computer or small cluster (albeit taking more time). However, with other problems that Murray is working on – for instance research on options – the computational requirements are much higher and require super-sized computers like those at TACC.

“We’re living in the big data world,” he said. “People are trying to grapple with this in financial economics as they are in every other field and we’re just scratching the surface. This is something that’s going to grow more and more as the data becomes more refined and technologies such as text processing become more prevalent.”

Though historically used for problems in physics, chemistry and engineering, advanced computing is starting to be widely used — and to have a big impact — in economics and the social sciences.

According to Chris Jordan, manager of the Data Management & Collections group at TACC, Murray’s research is a great example of the kinds of challenges Wrangler was designed to address.

“It relies on database technology that isn’t typically available in high-performance computing environments, and it requires extremely high-performance I/O capabilities. It is able to take advantage of both our specialized software environment and the half-petabyte flash storage tier to generate results that would be difficult or impossible on other systems,” Jordan said. “Dr. Murray’s work also relies on a corpus of data which acts as a long-term resource in and of itself — a notion we have been trying to promote with Wrangler.”

Beyond its importance to investors and financial theorists, the research has a broad societal impact, Murray contends.

“For our society to be as prosperous as possible, we need to allocate our resources efficiently. How much oil do we use? How many houses do we build? A large part of that is understanding how and why money gets invested in certain things,” he explained. “The objective of this line of research is to understand the trade-offs that investors consider when making these sorts of decisions.”

Source: Aaron Dubrow, TACC

The post Supercomputing-Backed Analysis Reveals Decades of Questionable Investments appeared first on HPCwire.

Inventor Claims to Have Solved Floating Point Error Problem

HPC Wire - Wed, 01/17/2018 - 15:59

“The decades-old floating point error problem has been solved,” proclaims a press release from inventor Alan Jorgensen. The computer scientist has filed for and received a patent for a “processor design, which allows representation of real numbers accurate to the last digit.” The patent (No. 9,817,662, “Apparatus for Calculating and Retaining a Bound on Error During Floating Point Operations and Methods Thereof”) was issued on November 14, 2017.


Jorgensen presents his bounded floating point system as “a game changer for the computing industry,” tackling a pernicious problem that (as he cites) has been implicated in catastrophic failures, including the 1991 Patriot missile failure, which resulted in 28 U.S. military deaths.

The inventor patented a process that addresses floating point errors by computing “two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.”

Jorgensen says the method performs in real time and can operate in conjunction with existing hardware and software. Also, converting between existing standardized floating point and this new bounded floating point format can be done with simple operations, he says.

Unreported floating point errors are relevant for highly compute-intensive functions, especially where accuracy and safety are paramount, such as weather prediction, GPS, autonomous vehicles and finance. Jorgensen claims that his system guarantees accuracy of floating point values to plus or minus one in the last digit.

The invention is said to provide error information with minimal impact to performance or memory space compared with current methods. “In the current art, static error analysis requires significant mathematical analysis and cannot determine actual error in real time,” reads a section of the patent. “This work must be done by highly skilled mathematician programmers. Therefore, error analysis is only used for critical projects because of the greatly increased cost and time required. In contrast, the present invention provides error computation in real time with, at most, a small increase in computation time and a small increase in the maximum number of bits available for the significand.”

Read the patent filing in full here.

The abstract offers a few more details:

The apparatus and method for calculating and retaining a bound on error during floating point operations inserts an additional bounding field into the standard floating-point format that records the retained significant bits of the calculating with notification upon insufficient retention. The bounding field, which accounts for both rounding and cancellation errors, has two parts, the lost bits D Field and the accumulated rounding error R Field. The D Field states the number of bits in the floating point representation that are no longer meaningful. The bounds on the real value represented are determined from the truncated floating point value (first bound) and the addition of the error determined by the number of lost bits (second bound). The true, real value is absolutely contained by the first and second bounds. The allowed loss (optionally programmable) of significant bits provides a fail-safe, real-time notification of loss of significant bits.
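
To make the two-bounds idea concrete, the following is a minimal software sketch, in the spirit of interval arithmetic, of carrying a lower and an upper bound through each operation and flagging a result once its bound grows too wide. It is only an illustration of the general concept described above, not the patented bounded floating point format, which instead packs the bound information (the D and R fields) into extra fields of the number itself and is intended for hardware.

    # Illustrative sketch only: carry explicit lower/upper bounds alongside each
    # value and check whether the result is still "sufficiently accurate".
    # This mimics the general two-bounds idea in software; it is not the
    # patented hardware format described in the article.
    import math  # math.ulp requires Python 3.9+

    class Bounded:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        @classmethod
        def from_float(cls, x, ulps=1):
            # Assume the stored float is within `ulps` units in the last place
            # of the real value it represents.
            eps = ulps * math.ulp(x)
            return cls(x - eps, x + eps)

        def __add__(self, other):
            # Widen outward by one ulp to cover rounding of the new result.
            lo, hi = self.lo + other.lo, self.hi + other.hi
            return Bounded(lo - math.ulp(lo), hi + math.ulp(hi))

        def __sub__(self, other):
            lo, hi = self.lo - other.hi, self.hi - other.lo
            return Bounded(lo - math.ulp(lo), hi + math.ulp(hi))

        def is_accurate(self, tol):
            # "Sufficiently accurate" here simply means the bound is narrower
            # than a caller-supplied tolerance.
            return (self.hi - self.lo) <= tol

    x = Bounded.from_float(0.1) + Bounded.from_float(0.2)
    print(x.lo, x.hi, x.is_accurate(1e-12))

For 0.1 + 0.2 the rounded result differs from the exact value 0.3 in its last bits, but the printed bounds still contain 0.3, which is the containment guarantee the bounded representation is meant to preserve.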

According to Jorgensen’s LinkedIn profile, he has a PhD in Computer Science and is a part-time instructor at the University of Nevada, Las Vegas (UNLV), where he teaches computer science to non-computer science students.

The post Inventor Claims to Have Solved Floating Point Error Problem appeared first on HPCwire.

MLK Jr. Recognition Awards honor campus advocates

Colorado School of Mines - Wed, 01/17/2018 - 14:52

Three Colorado School of Mines community members received Martin Luther King Jr. Recognition Awards at a luncheon on January 17.

Louisa Duley, assistant director of admissions; Kristine Callan, physics teaching professor; and Olivia Cordova, a senior electrical engineering student, each received a recognition award for their appreciation of diversity and their understanding of its value on campus.

Recipients are chosen based on peer nominations that highlight their efforts to develop innovative programs or policies that enhance diversity on campus, their commitment to a philosophy of inclusion through initiating interactions between people of different backgrounds, and their contributions to fostering understanding and respect for diversity within the campus community.

From Duley’s nomination:

Louisa has been able to translate her years of experience with SUMMET to other programs such as our Challenge program. Louisa’s care, understanding and respect for these students from the time they are prospective students until the day they proudly cross the stage at Commencement cannot be overstated. Louisa truly fosters understanding and respect and has helped the campus expand its reach to a more diverse group of students.

From Callan’s nomination:

Under Kristine's guidance, the student group Equality Through Awareness (ETA) grew beyond its humble roots within the Physics Department and quickly became a campus-wide phenomenon aiming to bring together students and faculty of all backgrounds to discuss the various challenges faced by underrepresented populations in STEM. ETA has since become a mainstay on campus, challenging students, faculty and administration to take an oftentimes uncomfortable look at issues like implicit bias, stereotype threat, imposter syndrome, the socio-economic implications of engineering, student anxiety and sexual harassment.

From Cordova’s nomination:

Olivia is an inspiring individual in many ways beyond her work with the Society of Women Engineers. Through her role as an RA for the Nucleus Scholars group, Olivia often participates in presentations to share her experiences as a first-generation college student, educating the Mines community on the benefits of diversifying our campus and of reaching out to this underrepresented group, as well as shedding light on the unique needs of first-generation college students.

 

CONTACT
Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Graves to receive first Women in Energy Pinnacle Award

Colorado School of Mines - Wed, 01/17/2018 - 11:28

Ramona Graves, dean of the College of Earth Resource Sciences and Engineering at Colorado School of Mines, has been chosen to receive the inaugural Pinnacle Award from Oil and Gas Investor.

The honor, recognizing a person who has had an extraordinary impact on the energy industry, will be celebrated at Oil and Gas Investor’s 25 Influential Women In Energy gala luncheon Feb. 6 in Houston.

“Dr. Graves’ lifetime of achievements — as a leader, teacher and scholar — make her our first choice for this honor,” said Rich Eichler, CEO of Hart Energy, a media and information company for the global energy industry and publisher of Oil and Gas Investor. “She has achieved great professional distinction in her own right, and she has launched many of today’s top energy executives on their own successful careers. Her rigorous approach to her field and her commitment to first-class scholarship have raised the standards of her students as individuals and of the industry as a whole.”

Graves, a Mines alumna, was the second woman in the nation to earn a doctorate in petroleum engineering. In 2017, the Society of Petroleum Engineers honored her with the International Distinguished Achievement Award for Petroleum Engineering Faculty.

Graves is co-director of the Center for Earth Materials, Mechanics, and Characterization (CEMMC), and her primary research is in the area of reservoir characterization and laser/rock interaction.

Categories: Partner News

Colovore Announces 2 MW Phase 3 Colocation Expansion

HPC Wire - Wed, 01/17/2018 - 08:07

SANTA CLARA, Calif., Jan. 17, 2018 — Colovore has announced that it has begun construction on Phase 3, adding another 2 MW of capacity to its Santa Clara data center. Since launching in 2014, Colovore has grown rapidly by providing high power densities in Bay Area colocation, exceptional uptime and service quality, and a cost-effective, pay-by-the-kW pricing model. As with Phase 2, all cabinets in Phase 3 will support 35 kW of critical load. The additional capacity is expected to be delivered in early Q3, and Colovore is now marketing Phase 3, adding much-needed high-density capacity to the tight Bay Area colocation marketplace.

Highlights / Key Facts

  • Customers utilize Colovore to host their high-performance computing (HPC) and Big Data infrastructure, private/hybrid cloud deployments, and internal lab environments
  • With power densities of 35 kW per rack, Colovore provides the highest footprint efficiency and lowest TCO in Bay Area colocation; customers can pack their racks full of servers and operate in a much smaller, cost-effective footprint than legacy colos
  • Colovore’s pay-by-the-kW pricing model allows customers to match their costs directly to their IT requirements as they go, providing significant cost savings and easy scalability – 1 kW at a time
  • With 9 MW of total power available at its facility, Colovore has plenty of capacity for future expansion beyond this 2 MW Phase 3

“We are clearly seeing increasing rollout of power-hungry computing platforms supporting a number of fast-growing HPC applications,” stated Sean Holzknecht, President and Co-Founder of Colovore. “Artificial intelligence, Big Data, self-driving cars, and the Internet of Things are exploding and customers need data centers with next-generation power and cooling capabilities to support the underlying IT infrastructure. That is our specialty at Colovore.”

To learn more about how you can benefit from Colovore’s high-performance colocation solutions, contact Ben Coughlin at Colovore (tel. #408-330-9290) or email info@colovore.com.

About Colovore
Colovore is a leading provider of high-performance colocation services. Our 9 MW state-of-the-art data center in Santa Clara features power densities of 35 kW per rack and a pay-by-the-kW pricing model. We offer colocation the way you want it—cost-efficient, scalable, and robust. Colovore is profitable and backed by industry leaders including Digital Realty Trust. For more information please visit www.colovore.com.

Source: Colovore

The post Colovore Announces 2 MW Phase 3 Colocation Expansion appeared first on HPCwire.

Quantum Corporation Names Patrick Dennis CEO

HPC Wire - Tue, 01/16/2018 - 18:46

SAN JOSE, Calif., Jan. 16, 2018 — Quantum Corp. today announced that its board of directors has appointed Patrick Dennis as president and CEO, effective today. Dennis was most recently president and CEO of Guidance Software and has also held senior executive roles in strategy, operations, sales, services and engineering at EMC. He succeeds Adalio Sanchez, a member of Quantum’s board who had served as interim CEO since early November 2017. Sanchez will remain on the board and assist with the transition.

“Patrick has been a successful public company CEO and brings a broad range of experience in storage and software, including a proven track record leading business transformations,” said Raghu Rau, Quantum’s chairman. “The other board members and I look forward to working closely with him to drive growth, cost reductions, and profitability and deliver long-term shareholder value. We also want to thank Adalio for stepping in and leading the company during a critical transition period.”

“During my time as CEO, I’ve greatly appreciated the commitment to change I’ve seen from team members across Quantum and will be supporting Patrick in any way I can to build on the important work we started,” said Sanchez.

Dennis served as president and CEO of Guidance Software, a provider of cyber security software solutions, from May 2015 until its acquisition by OpenText last September. During his tenure, he turned the company around, growing revenue and significantly improving profitability. Before joining Guidance Software, Dennis was senior vice president and chief operating officer, Products and Marketing, at EMC, where he led the business operations of its $10.5 billion enterprise and mid-range systems division, including management of its cloud storage business. Dennis spent 12 years at EMC, including as vice president and chief operating officer of EMC Global Services, overseeing a 3,500-person technical sales force. In addition to his time at EMC, he served as group vice president, North American Storage Sales, at Oracle, where he turned around a declining business.

“With its long-standing expertise in addressing the most demanding data management challenges, Quantum is well-positioned to help customers maximize the strategic value of their ever-growing digital assets in a rapidly changing environment,” said Dennis. “I’m excited to be joining the company as it looks to capitalize on this market opportunity by leveraging its strong solutions portfolio in a more focused way, improving its cost structure and execution, and continuing to innovate.”

About Quantum

Quantum is a leading expert in scale-out tiered storage, archive and data protection, providing solutions for capturing, sharing, managing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. Quantum’s end-to-end, tiered storage foundation enables customers to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.

Source: Quantum Corp.

The post Quantum Corporation Names Patrick Dennis CEO appeared first on HPCwire.

New C-BRIC Center Will Tackle Brain-Inspired Computing

HPC Wire - Tue, 01/16/2018 - 15:17

WEST LAFAYETTE, Ind., Jan. 16, 2018 — Purdue University will lead a new national center to develop brain-inspired computing for intelligent autonomous systems such as drones and personal robots capable of operating without human intervention.

The Center for Brain-inspired Computing Enabling Autonomous Intelligence, or C-BRIC, is a five-year project supported by $27 million in funding from the Semiconductor Research Corp (SRC) via its Joint University Microelectronics Program, which provides funding from a consortium of industrial sponsors as well as from the Defense Advanced Research Projects Agency. The SRC operates research programs in the United States and globally that connect industry to university researchers, deliver early results to enable technological advances, and prepare a highly trained workforce for the semiconductor industry. Additional funds include $3.96 million from Purdue as well as support from other participating universities. At the state level, the Indiana Economic Development Corporation will be providing funds, pending board approval, to establish an intelligent autonomous systems laboratory at Purdue.

C-BRIC, which begins operating in January 2018, will be led by Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering (ECE), with Anand Raghunathan, Purdue professor of ECE, as associate director. Other Purdue faculty involved in the center include Suresh Jagannathan, professor of computer science and ECE, and Eugenio Culurciello, associate professor of biomedical engineering, ECE and mechanical engineering. The center will also involve seven other universities, pending final contracts (Arizona State University, Georgia Institute of Technology, Pennsylvania State University, Portland State University, Princeton University, University of Pennsylvania and University of Southern California), along with around 17 faculty members and around 85 graduate students and postdoctoral researchers.

“The center’s goal is to develop neuro-inspired algorithms, architectures and circuits for perception, reasoning and decision-making, which today’s standard computing is unable to do efficiently,” Roy said.

Efficiency here refers to energy use. For example, while advanced computers such as IBM’s Watson and Google’s AlphaGo have beaten humans at high-level cognitive tasks, they consume hundreds of thousands of watts of power to do so, whereas the human brain requires only around 20 watts.

“We have to narrow this huge efficiency gap to enable continued improvements in artificial intelligence in the face of diminishing benefits from technology scaling,” Raghunathan said. “C-BRIC will develop technologies to perform brain-like functions with brain-like efficiency.”

In addition, the center will enable next-generation autonomous intelligent systems capable of both accomplishing “end-to-end” functions and completing mission-critical tasks without human intervention.

“Autonomous intelligent systems will require real-time closed-loop control, leading to new challenges in neural algorithms, software and hardware,” said Venkataramanan (Ragu) Balakrishnan, Purdue’s Michael and Katherine Birck Head and Professor of Electrical and Computer Engineering. “Purdue’s long history of preeminence in related research areas such as neuromorphic computing and energy-efficient electronics positions us well to lead this effort.”

“Purdue is up to the considerable challenges that will be posed by C-BRIC,” said Suresh Garimella, Purdue’s executive vice president for research and partnerships and the R. Eugene and Susie E. Goodson Distinguished Professor of Mechanical Engineering. “We are excited that our faculty and students are embarking on this ambitious mission to shape the future of intelligent autonomous systems.”

Mung Chiang, Purdue’s John A. Edwardson Dean of the College of Engineering, said, “C-BRIC represents a game-changer in artificial intelligence. These outstanding colleagues in Electrical and Computer Engineering and other departments at Purdue will carry out transformational research on efficient, distributed intelligence.”

To achieve their goals, C-BRIC researchers will improve the theoretical and mathematical underpinnings of neuro-inspired algorithms.

“This is very important,” Raghunathan said. “The underlying theory of brain-inspired computing needs to be better worked out, and we believe this will lead to broader applicability and improved robustness.”

At the same time, new autonomous systems will have to possess “distributed intelligence” that allows various parts, such as the multitude of “edge devices” in the so-called Internet of Things, to work together seamlessly.

“We are excited to bring together a multi-disciplinary team with expertise spanning algorithms, theory, hardware and system-building, that will enable us to pursue a holistic approach to brain-inspired computing, and to hopefully deliver an efficiency closer to that of the brain,” Roy said.

Information about the SRC can be found at https://www.src.org/.

Source: Purdue University

The post New C-BRIC Center Will Tackle Brain-Inspired Computing appeared first on HPCwire.

New Center at Carnegie Mellon University to Build Smarter Networks to Connect Edge Devices to the Cloud

HPC Wire - Tue, 01/16/2018 - 15:14

PITTSBURGH, Jan. 16, 2018 — Carnegie Mellon University will lead a $27.5 million Semiconductor Research Corporation (SRC) initiative to build more intelligence into computer networks.

Researchers from six U.S. universities will collaborate in the CONIX Research Center headquartered at Carnegie Mellon. For the next five years, CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.

“The extent to which IoT will disrupt our future will depend on how well we build scalable and secure networks that connect us to a very large number of systems that can orchestrate our lives and communities. CONIX will develop novel architectures for large-scale, distributed computing systems that have immense implications for social interaction, smart buildings and infrastructure, and highly connected communities, commerce, and defense,” says James H. Garrett Jr., dean of Carnegie Mellon College of Engineering.

CONIX, an acronym for Computing on Network Infrastructure for Pervasive Perception, Cognition, and Action, is directed by Anthony Rowe, associate professor of Electrical and Computer Engineering at Carnegie Mellon. The assistant director, Prabal Dutta, is an associate professor at the University of California, Berkeley.

IoT has put a major focus on edge devices. These devices make our homes and communities smarter through connectivity, and they are capable of sensing, learning, and interacting with humans. In most current IoT systems, sensors send data to the cloud for processing and decision-making. However, massive amounts of sensor data, coupled with technical constraints, have created network bottlenecks that curtail efficiency and the development of new technologies, especially when timing is critical.

“There isn’t a seamless way to merge cloud functionality with edge devices without a smarter interconnect, so we want to push more intelligence into the network,” says Rowe. “If networks were smarter, decision-making could occur independent of the cloud at much lower latencies.”

The cloud’s centralized nature makes it easier to optimize and secure; however, there are tradeoffs. “Large systems that are centralized tend to struggle in terms of scale and have trouble reacting quickly outside of data centers,” explains Rowe. CONIX researchers will look at how machine-learning techniques that are often used in the context of cloud computing can be used to self-optimize networks to improve performance and even defend against cyberattacks.

Developing a clean-slate distributed computing network will take an integrated view of sensing, processing, memory, dissemination and actuation. CONIX researchers intend to define the architecture for such networks now before attempts to work around current limitations create infrastructure that will be subject to rip-and-repair updates, resulting in reduced performance and security.

CONIX’s research is driven by three applications:

Smart and connected communities—Researchers will explore the mechanisms for managing and processing millions of sensors’ feeds in urban environments. They will deploy CONIX edge devices across participating universities to monitor and visualize the flow of pedestrians. At scale, this lays the groundwork for all kinds of infrastructure management.

Enhanced situational awareness at the edge—Efforts here will create on-demand information feeds for decision makers by dispatching human-controlled swarming drones to provide aerial views of city streets. Imagine a system like Google Street View, only with live real-time data. This would have both civilian and military applications. For example, rescue teams in a disaster could use the system to zoom in on particular areas of interest at the click of a button.

Interactive Mixed Reality—Physical and virtual reality systems will merge in a collaborative digital teleportation system.  Researchers will capture physical aspects about users in a room, such as their bodies and facial expressions. Then, like a hologram, this information will be shared with people in different locations. The researchers will use this technology for meetings, uniting multiple CONIX teams. This same technology will be critical to support next-generation augmented reality systems being used in applications ranging from assisted surgery and virtual coaching to construction and manufacturing.

In addition to Carnegie Mellon and the University of California, Berkeley, other participants include the University of California, Los Angeles, University of California, San Diego, University of Southern California, and University of Washington Seattle.

CONIX is one of six research centers funded by the SRC’s Joint University Microelectronics Program (JUMP), which represents a consortium of industrial participants and the Defense Advanced Research Projects Agency (DARPA).

About the College of Engineering at Carnegie Mellon University

The College of Engineering at Carnegie Mellon University is a top-ranked engineering college that is known for our intentional focus on cross-disciplinary collaboration in research. The College is well-known for working on problems of both scientific and practical importance. Our “maker” culture is ingrained in all that we do, leading to novel approaches and transformative results. Our acclaimed faculty have a focus on innovation management and engineering to yield transformative results that will drive the intellectual and economic vitality of our community, nation and world.

About the SRC

Semiconductor Research Corporation (SRC), a world-renowned, high technology-based consortium, serves as a crossroads of collaboration between technology companies, academia, government agencies, and SRC’s highly regarded engineers and scientists. Through its interdisciplinary research programs, SRC plays an indispensable part in addressing global challenges, using research and development strategies, advanced tools and technologies. Sponsors of SRC work synergistically together, gain access to research results, fundamental IP, and highly experienced students to compete in the global marketplace and build the workforce of tomorrow. Learn more at: www.src.org.

Source: Carnegie Mellon University

The post New Center at Carnegie Mellon University to Build Smarter Networks to Connect Edge Devices to the Cloud appeared first on HPCwire.

SRC Spends $200M on University Research Centers

HPC Wire - Tue, 01/16/2018 - 15:10

The Semiconductor Research Corporation, as part of its JUMP initiative, has awarded $200 million to fund six research centers whose areas of focus span cognitive computing, memory-centric computing, high-speed communications, nanotechnology, and more. It’s not a bad way to begin 2018 for the winning institutions, which include the University of Notre Dame, the University of Michigan, the University of Virginia, Carnegie Mellon University, Purdue University, and UC Santa Barbara.

SRC’s JUMP (Joint University Microelectronics Program) is a collaborative network of research centers sponsored by U.S. industry participants and DARPA. As described on the SRC website, “[JUMP’s] mission is to enable the continued pace of growth of the microelectronics industry with discoveries which release the evolutionary constraints of traditional semiconductor technology development. JUMP research, guided by the university center directors, tackles fundamental physical problems and forges a nationwide effort to keep the United States and its technology firms at the forefront of the global microelectronics revolution.”

The six projects, funded over five years, were launched on January 1st and are listed below with short descriptions. Links to press releases from each center are at the end of the article:

  • ASCENT (Applications and Systems driven Center for Energy-Efficient Integrated NanoTechnologies at Notre Dame). “ASCENT focuses on demonstration of foundational material synthesis routes and device technologies, novel heterogeneous integration (package and monolithic) schemes to support the next era of functional hyper-scaling. The mission is to transcend the current limitations of high-performance transistors confined to a single planar layer of integrated circuit by pioneering vertical monolithic integration of multiple interleaved layers of logic and memory.”
  • ADA (Applications Driving Architectures Center at University of Michigan). “[ADA will drive] system design innovation by drawing on opportunities in application driven architecture and system-driven technology advances, with support from agile system design frameworks that encompass programming languages to implementation technologies. The center’s innovative solutions will be evaluated and quantified against a common set of benchmarks, which will also be expanded as part of the center efforts. These benchmarks will be initially derived from core computational aspects of two application domains: visual computing and natural language processing.”
  • CRISP (Center for Research on Intelligent Storage and Processing-in-memory at University of Virginia). “Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” says Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible.”
  • CONIX (Computing On Network Infrastructure for Pervasive Perception, Cognition, and Action at Carnegie Mellon University). “CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.”
  • CBRIC (Center for Brain-inspired Computing Enabling Autonomous Intelligence at Purdue University). Charged with delivering key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems, “CBRIC will address these challenges through synergistic exploration of Neuro-inspired Algorithms and Theory, Neuromorphic Hardware Fabrics, Distributed Intelligence, and Application Drivers.”
  • ComSenTer (Center for Converged TeraHertz Communications and Sensing at UCSB). “ComSenTer will develop the technologies for a future cellular infrastructure using hubs with massive spatial multiplexing, providing 1-100 Gb/s to the end user, and, with 100-1000 simultaneous independently-modulated beams, aggregate hub capacities in the 10’s of Tb/s. Backhaul for this future cellular infrastructure will be a mix of optical links and Tb/s-capacity point-point massive MIMO links.”

Links to individual press releases/program descriptions:

ASCENT, Notre Dame: https://www.src.org/newsroom/press-release/2018/921/

ADA, University of Michigan: https://www.src.org/newsroom/press-release/2018/922/

CRISP, University of Virginia: https://www.src.org/newsroom/press-release/2018/920/

CONIX, Carnegie Mellon: https://www.prnewswire.com/news-releases/new-center-headquartered-at-carnegie-mellon-university-will-build-smarter-networks-to-connect-edge-devices-to-the-cloud-300582210.html

CBRIC, Purdue: https://www.src.org/newsroom/press-release/2018/919/

ComSenTer, UCSB: https://www.src.org/program/jump/comsenter/

The post SRC Spends $200M on University Research Centers appeared first on HPCwire.
