HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

House Subcommittee Tackles U.S. Competitiveness in Quantum Computing

Wed, 11/01/2017 - 13:46

How important is quantum computing? How are U.S. quantum research efforts stacking up against the rest of the world? What should a national quantum computing policy, if any, look like? Is the U.S. really falling behind in this crucial area? Late last month, six leaders from government and industry tackled these questions at the Subcommittee on Research & Technology and Subcommittee on Energy hearing, "American Leadership in Quantum Technology."

While much of the testimony covered familiar ground for the HPC community, it was nonetheless an excellent overview of ongoing efforts and attitudes at the Department of Energy, the National Science Foundation, and IBM, among others. One speaker even put forth a proposal for a National Quantum Initiative funded to the tune of $500 million over five years and focused on three areas: quantum-enhanced sensors, optical photonic quantum communication networks, and quantum computers.

Given the breadth of material covered, it’s useful that the committee has made the formal statements by the six panelists readily available (just click on the name bulleted below). There’s also a video of the proceedings.

Panel 1:

  • Carl J. Williams, acting director, Physical Measurement Laboratory, National Institute of Standards and Technology (NIST)
  • Jim Kurose, assistant director, Computer and Information Science and Engineering Directorate, National Science Foundation
  • John Stephen Binkley, acting director of science, U.S. Department of Energy

Panel 2:

  • Scott Crowder, vice president and chief technology officer for quantum computing, IBM Systems Group
  • Christopher Monroe, distinguished university professor & Bice Zorn Professor, Department of Physics, University of Maryland; founder and chief scientist, IonQ, Inc.
  • Supratik Guha, director, Nanoscience and Technology Division, Argonne National Laboratory; professor, Institute for Molecular Engineering, University of Chicago

Full committee chair Lamar Smith (R-Texas) noted, “Although the United States retains global leadership in the theoretical physics that underpins quantum computing and related technologies, we may be slipping behind others in developing the quantum applications – programming know-how, development of national security and commercial applications…Just last year, Chinese scientists successfully sent the first-ever quantum transmission from Earth to an orbiting satellite… According to a 2015 McKinsey report, about 7,000 scientists worldwide, with a combined budget of about $1.5 billion, worked on non-classified quantum technology.”

Summarizing the panelists' comments is challenging, other than to say quantum computing is important and could use more government support. Ideas on how such support might be shaped differed among the speakers, and it's interesting to hear their recaps of ongoing quantum research. While we tend to fixate on quantum computing, exotic qubits, and (maybe) revolutionizing computing, Williams's (NIST) testimony reminded the gathering that quantum science has a wide-ranging reach:

“Atomic clocks define the second and tell time with amazing precision. For example, the most accurate U.S. atomic clock currently used for defining the second is the NIST-F2. It keeps time to an accuracy of less than a millionth of a billionth of a second. Stated in another way, the NIST-F2 clock will not lose a second in at least 300 million years. And just this month, NIST published a description of a radically new atomic clock design—the three-dimensional (3-D) quantum gas atomic clock. With a precision of just 3.5 parts error in 10 quintillion (1 followed by 19 zeros) in about 2 hours, it is the first atomic clock to ever reach the 10 quintillion threshold, and promises to usher in an era of dramatically improved measurements and technologies across many areas based on controlled quantum systems.”
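The arithmetic behind such claims is straightforward: a clock's fractional frequency accuracy tells you how long it takes to drift by one second. A minimal sketch (the helper function below is illustrative, working back from the figures quoted above):

```python
# Convert a clock's fractional frequency accuracy into the time
# needed to accumulate one second of error.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

def years_to_lose_one_second(fractional_accuracy):
    """Years elapsed before a clock with the given fractional
    frequency accuracy drifts by one full second."""
    return (1.0 / fractional_accuracy) / SECONDS_PER_YEAR

# "Will not lose a second in at least 300 million years" implies a
# fractional accuracy of roughly 1e-16:
print(years_to_lose_one_second(1e-16))    # roughly 3.2e8 years

# The 3-D quantum gas clock's 3.5 parts in 10^19 implies a far
# longer horizon, on the order of tens of billions of years:
print(years_to_lose_one_second(3.5e-19))
```

Running the numbers both ways confirms the quoted figures are mutually consistent: an accuracy near one part in 10^16 corresponds to about 300 million years per second of drift.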

The post House Subcommittee Tackles U.S. Competitiveness in Quantum Computing appeared first on HPCwire.

The Longest Mile Matters: URISC@SC17 Coming to Denver, Colorado

Wed, 11/01/2017 - 13:19

The Understanding Risk in Shared CyberEcosystems workshop will convene Saturday, November 11 through Thursday, November 16, 2017, in Denver, Colorado. In addition to 12 hours of cybersecurity training, URISC participants will attend SC17, the flagship high performance computing (HPC) industry conference and technology showcase that attracts more than 10,000 international attendees each year.

A STEM-Trek call for participation closed Sept. 11. Applications were accepted from cybersecurity professionals, HPC systems administrators, educators and network engineers who support research computing at US and sub-Saharan African colleges and universities in under-served regions. All serve in professional support roles at least 50 percent of the time, helping students, faculty and staff leverage locally hosted or remotely accessed advanced cyberinfrastructure (CI) for education and open research.

US applications were reviewed and ranked by African reviewers, and vice versa. Thirty percent of African applications were received from women, and 80 percent of the pool reflects demographics that are typically under-represented in cybersecurity and HPC careers.

Thirty percent of applicants were awarded grants, which cover flights, lodging, ground transit and some meals; international scholars will also receive U.S. pocket money. Every effort was made to shape the diversity of the final cohort so it mirrors the applicant pool in terms of gender, ethnicity, research domains, and regions represented, and the same consideration was applied when choosing presenters. Eight US participants are XSEDE Campus Champions (six are supported by the project, and two are self-funded), and five are from EPSCoR states (NSF Established Program to Stimulate Competitive Research). Special guests from Nepal (ICIMOD) and Canada (U-BC) are invited to attend; altogether, 34 URISC delegates, trainers and guests will represent 11 countries and 12 US states.

An introduction to open-source materials developed by the Center for Trustworthy Scientific Cyberinfrastructure (CTSC) will be shared, as well as coaching in the art of external relations – specifically how to foster administrative and legislative buy-in for a greater cybersecurity investment on college campuses. The agenda has been customized to consider what has become an increasingly diverse body of campus stakeholders—including researchers, students, faculty, government agency stakeholders, “long-tail” user communities, and regional industry partners.

On Friday, Nov. 10, a small delegation from the US, Botswana, South Africa and Nepal will visit the National Center for Atmospheric Research (NCAR) in Boulder. They will tour NCAR’s visualization lab and cybersecurity center, and meet researchers who lead a variety of global climate and environmental projects.

URISC @SC17 Program Committee:

Elizabeth Leake (STEM-Trek Nonprofit), URISC Planning Committee Chair and Facilitator;
Von Welch (IU/CTSC), Planning Committee Cybersecurity SME, Facilitator and Trainer;
Happy Sithole (Director, CHPC/Cape Town), URISC Planning Committee SME;
Bryan Johnston (Trainer, CHPC/Cape Town), URISC Trainer;
Meshack Ndala (Cybersecurity Lead, CHPC, Cape Town), URISC Trainer, SME.

Trainers and Special Guests, in order of appearance:

  • Von Welch (Indiana University), director of the NSF-supported Centers for Applied Cybersecurity Research and Trustworthy Scientific Cyberinfrastructure: “Cybersecurity Methodology for Open Science.”
  • Ryan Kiser (Indiana University): “Log Analysis for Intrusion Detection.”
  • Susan Ramsey (NCAR): “The Anatomy of a Breach.”
  • Jim Basney (National Center for Supercomputing Applications/CTSC): “Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructure.”
  • Bart Miller and Elisa Heymann (University of Wisconsin-Madison): “Secure Coding.”
  • Nick Roy (InCommon/Internet2): “Federated Trust: One Benefit of Regional Alliance Membership.”
  • Thomas Sterling (Indiana University, CREST): Sterling will share highlights of a new NSF-funded course titled “High Performance Computing: Modern Systems and Practices” (first edition), scheduled for release in December 2017.
  • Happy Sithole (Director, South African Centre for HPC): Sithole will provide a brief welcome, and overview of technology initiatives supported by the CHPC.
  • Elizabeth Leake (Director and Founder, STEM-Trek Nonprofit): “The Softer Side of Cybersecurity.”
  • Bryan Johnston & Meshack Ndala (South African Centre for HPC): “Learn to be Cyber-Secure before you’re Cyber-Sorry.”
  • Florence Hudson (Senior Vice President and Chief Innovation Officer, Internet2): “IoT security challenges and Risk in Shared CyberEcosystems.”

Why the Longest Last Mile Matters

For more than 50 years, HPC has supported tremendous advances in all areas of science. Densely populated urban communities can more easily support subscription-based commodity networks and energy infrastructure, making it more affordable for nearby universities to engage with globally collaborative science. Conversely, research centers located in sparsely populated regions are disadvantaged: their last mile is much longer, and there are fewer partners with which to cost-share regional connectivity. It is more difficult for them to recruit and retain skilled personnel, they must travel longer distances to attend workshops and conferences, and it is tougher to buy new hardware and software; there are many more competing priorities for limited funds, and they receive less federal grant support.

At the same time, they represent industrial landscapes that reflect globally-significant environmental factors, rich biodiversity, geology, and minerals. Every place on earth has a unique perspective of our universe, and less-populated regions offer the most detailed and unfettered vantage points. When researchers everywhere can access data that are generated by and stored at these sites, progress will be accelerated toward solutions to problems that impact global climate, environment, food and water security, public health, quality of life, and world peace.

While every HPC professional would benefit from attending the annual Supercomputing Conference, few from the communities STEM-Trek helps could afford to attend otherwise. Many are campus “tech generalists” who must balance administrative, support and teaching obligations; it’s more difficult for them to take time away from work because skill sets are usually one-deep (there is no back-up to mitigate the many crises that arise when centers function with inadequate and/or aging e-infrastructure). Because they wear both sysadmin and trainer hats, they rely on student labor to support their HPC resources. Their students learn more, and make an exponentially larger and more meaningful contribution to the global HPC workforce pipeline.

Even if they could take time away from work, they can't afford to; in many cases, state and federal travel budgets have been legislatively restricted or eliminated altogether. Some of the countries represented at URISC have consumer prices that are 80 to 90 percent lower than in the US and Europe, where such conferences are typically held. This is also why they are disadvantaged when purchasing new hardware, and why we encourage more affluent universities and government labs to donate decommissioned hardware so its life can be extended for another five to seven years in a light research and training capacity.

Despite these barriers, URISC attendance is easier to justify: participants will not only learn cybersecurity best practices from some of the world's most informed specialists, they will also become part of a multinational “affinity” network offering a psycho-social framework of support for the future, and they will have access to a wealth of information at SC17.

Financial Support for URISC@SC17

This workshop is supported by US National Science Foundation grants managed by Indiana University and Oklahoma State University, with STEM-Trek donations from Google, Corelight, and SC17 General Chair Bernd Mohr (Jülich Supercomputing Centre), with support from Inclusivity Chair Toni Collis (University of Edinburgh).

History of Southern Africa’s Shared CyberEcosystem

The SADC HPC Forum formed in 2013 when the University of Texas donated its decommissioned, NSF-sponsored Ranger system to the South African CHPC. Twenty-five Ranger racks were divided into ten smaller clusters and installed at universities in the SADC region. The forum's goal is to develop a shared cyberecosystem for open science.

In 2016, a second system was donated by the University of Cambridge, UK. It was also split into small clusters that were installed in Madagascar and South Africa (North-West University). In 2017, Ghana joined the collaboration and CHPC installed a cluster there that will become part of the shared SADC cyberecosystem. The CHPC continues to lead training efforts in the region, and a dozen or so US and European HPC industry experts volunteer to advise as the shared African CI project continues to gain traction.

Many SADC delegates have trained as a cohort since 2013, and it has been a successful exercise in science diplomacy. Among them are network engineers, sysadmins, educators, and computational and domain scientists. While there are multiple language and other cultural disparities, the team has coalesced as its members train together toward a common goal. They are creating a procedural framework for human capital development, open science and research computing. The SADC HPC Forum serves to inform policy-makers, who will then advocate for greater national investments in CI.

History of This STEM-Trek Workshop Series

This will be STEM-Trek’s third year to be involved with an SC co-located workshop for African stakeholders, and the second year to include US participants who work at resource-constrained centers, and therefore share many of the same challenges. In 2015, a workshop for SADC delegates was arranged by the Texas Advanced Computing Center (TACC) in Austin, Texas, and was co-facilitated by Melyssa Fratkin (TACC) and Elizabeth Leake (STEM-Trek). Last year’s “HPC On Common Ground @SC16” workshop in Salt Lake City featured a food security theme and was led by Elizabeth Leake (STEM-Trek), Dana Brunson (Oklahoma State University), Henry Neeman (University of Oklahoma), Bryan Johnston (South African Centre for High Performance Computing/CHPC) and Israel Tshililo (CHPC).

STEM-Trek will do it again in Dallas next year! The SC18 workshop will have an energy theme—stay tuned for more information!

About the CTSC

As the NSF Cybersecurity Center of Excellence, CTSC draws on expertise from multiple internationally recognized institutions, including Indiana University, the University of Illinois, the University of Wisconsin at Madison, and the Pittsburgh Supercomputing Center. CTSC applies this expertise in collaborations with NSF-funded research organizations, addressing the unique cybersecurity challenges such entities face. In addition to our leadership team, a world-class CTSC Advisory Committee adds its expertise and a critical eye to the center's strategic decision-making.

About STEM-Trek Nonprofit

STEM-Trek is a global, grassroots, nonprofit (501.c.3) organization that supports travel and professional development for HPC-curious scholars from under-represented groups and regions. Beneficiaries of our programs are encouraged to “pay it forward” by volunteering to serve as technology evangelists in their home communities or in ways that help STEM-Trek achieve its objectives. STEM-Trek was honored to receive the 2016 HPCwire Editors’ Choice Award for Workforce Diversity Leadership. Follow us on Twitter (#LongestMileMatters) and Facebook. For more information, visit our website: www.stem-trek.org.


Supermicro Showcases Deep Learning Optimized Systems

Wed, 11/01/2017 - 08:08

WASHINGTON, Nov. 1, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, is showcasing GPU server platforms that support NVIDIA Tesla V100 PCI-E and V100 SXM2 GPUs today at the GPU Technology Conference (GTC) in Washington, D.C., at the Ronald Reagan Building and International Trade Center, booth #506.

For maximum acceleration of highly parallel applications like artificial intelligence (AI), deep learning, autonomous vehicle systems, energy and engineering/science, Supermicro’s new 4U system with next-generation NVIDIA NVLink is optimized for overall performance. The SuperServer 4028GR-TXRT supports eight NVIDIA Tesla V100 SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for HPC clusters and hyper-scale workloads.  Incorporating the latest NVIDIA NVLink GPU interconnect technology with over five times the bandwidth of PCI-E 3.0, this system features independent GPU and CPU thermal zones to ensure uncompromised performance and stability under the most demanding workloads.
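The "over five times the bandwidth of PCI-E 3.0" figure can be sanity-checked from published link rates. A back-of-the-envelope sketch (the per-link numbers below are our assumptions from public V100 and PCIe specifications, not Supermicro's figures):

```python
# Rough aggregate-bandwidth comparison: NVLink 2.0 on a V100 SXM2
# GPU versus a single PCIe 3.0 x16 slot, per direction.

NVLINK_LINKS = 6          # NVLink 2.0 links per V100 SXM2 GPU
NVLINK_GBPS_PER_LINK = 25.0   # GB/s per link, per direction
PCIE3_X16_GBPS = 15.75    # GB/s per direction (~16 GB/s usable)

nvlink_total = NVLINK_LINKS * NVLINK_GBPS_PER_LINK  # 150 GB/s
ratio = nvlink_total / PCIE3_X16_GBPS

print(f"NVLink aggregate: {nvlink_total:.0f} GB/s per direction")
print(f"Ratio vs PCIe 3.0 x16: {ratio:.1f}x")
```

On these assumptions the aggregate NVLink bandwidth works out to roughly 150 GB/s per direction, around 9x a PCIe 3.0 x16 slot, comfortably consistent with the "over five times" claim.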

Similarly, the performance optimized 4U SuperServer 4028GR-TRT2 system can support up to 10 PCI-E Tesla V100 accelerators with Supermicro’s innovative and GPU optimized single root complex PCI-E design, which dramatically improves GPU peer-to-peer communication performance.  For even greater density, the SuperServer 1028GQ-TRT supports up to four PCI-E Tesla V100 GPU accelerators in only 1U of rack space.  Ideal for media, entertainment, medical imaging, and rendering applications, the powerful 7049GP-TRT workstation supports up to four NVIDIA Tesla V100 GPU accelerators.

“Supermicro designs the most application-optimized GPU systems and offers the widest selection of GPU-optimized servers and workstations in the industry,” said Charles Liang, President and CEO of Supermicro. “Our high performance computing solutions enable deep learning, engineering and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve fastest time-to-results with maximum performance per watt, per square foot and per dollar. With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”

“Supermicro’s new high-density servers are optimized to fully leverage the new NVIDIA Tesla V100 data center GPUs to provide enterprise and HPC customers with an entirely new level of computing efficiency,” said Ian Buck, vice president and general manager of the Accelerated Computing Group at NVIDIA. “The new SuperServers deliver dramatically higher throughput for compute-intensive data analytics, deep learning and scientific applications while minimizing power consumption.”

With the convergence of Big Data Analytics, the latest NVIDIA GPU architectures, and improved Machine Learning algorithms, Deep Learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively to expand the GPU network.  Supermicro’s single-root GPU system allows multiple GPUs to communicate efficiently to minimize latency and maximize throughput as measured by the NCCL P2PBandwidthTest.

For comprehensive information on Supermicro NVIDIA GPU system product lines, please go to https://www.supermicro.com/products/nfo/gpu.cfm.

About Super Micro Computer, Inc. 

Supermicro (NASDAQ: SMCI), a leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced Server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Super Micro Computer, Inc.


NVIDIA Announces New AI Partners, Courses, Initiatives to Deliver Deep Learning Training Worldwide

Wed, 11/01/2017 - 08:00

Nov. 1, 2017 — NVIDIA today announced a broad expansion of its Deep Learning Institute (DLI), which is training tens of thousands of students, developers and data scientists with critical skills needed to apply artificial intelligence.

The expansion includes:

  • New partnerships with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in AI.
  • A new University Ambassador Program that enables instructors worldwide to teach students critical job skills and practical applications of AI at no cost.
  • New courses designed to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics and self-driving cars.

“The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” said Greg Estes, vice president of Developer Programs at NVIDIA. “As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”

DLI – which NVIDIA formed last year to provide hands-on and online training worldwide in AI – is already working with more than 20 partners, including Amazon Web Services, Coursera, Facebook, Hewlett Packard Enterprise, IBM, Microsoft and Udacity.

Today the company is announcing a collaboration with deeplearning.ai, a new venture formed by AI pioneer Andrew Ng with the mission of training AI experts across a wide range of industries. The companies are working on new machine translation training materials as part of Coursera’s Deep Learning Specialization, which will be available later this month.

“AI is the new electricity, and will change almost everything we do,” said Ng, who also helped found Coursera, and was research chief at Baidu. “Partnering with the NVIDIA Deep Learning Institute to develop materials for our course on sequence models allows us to make the latest advances in deep learning available to everyone.”

DLI is also teaming with Booz Allen Hamilton to train employees and government personnel, including members of the U.S. Air Force. DLI and Booz Allen Hamilton will provide hands-on training for data scientists to solve challenging problems in healthcare, cybersecurity and defense.

To help teach students practical AI techniques to improve their job skills and prepare them to take on difficult computing challenges, the new NVIDIA University Ambassador Program prepares college instructors to teach DLI courses to their students at no cost. NVIDIA is already working with professors at several universities, including Arizona State, Harvard, Hong Kong University of Science and Technology and UCLA.

DLI is also bringing free AI training to young people through organizations like AI4ALL, a nonprofit organization that works to increase diversity and inclusion. AI4ALL gives high school students early exposure to AI, mentors and career development.

“NVIDIA is helping to amplify and extend our work that enables young people to learn technical skills, get exposure to career opportunities in AI and use the technology in ways that positively impact their communities,” said Tess Posner, executive director at AI4ALL.

In addition, DLI is expanding the range of its training content with:

  • New project-based curriculum to train Udacity’s Self-Driving Car Engineer Nanodegree students in advanced deep learning techniques, as well as upcoming projects to help students around the world create deep learning applications in robotics.
  • New AI hands-on training labs in natural language processing, intelligent video analytics and financial trading.
  • A full-day self-driving car workshop, “Perception for Autonomous Vehicles,” available later this month. Students will learn how to integrate input from visual sensors and implement perception through training, optimization and deployment of a neural network.

To increase availability of AI training worldwide, DLI recently signed new training delivery partnerships with Skyline ATS in the U.S., Boston in the U.K. and Emmersive in India.

More information is available at the DLI website, where individuals can sign up for in-person or self-paced online training.

Source: NVIDIA


Cray Exceeds Q3 Target, Projects 10 Percent Growth for 2018

Tue, 10/31/2017 - 17:23

Cray announced yesterday (Monday) that revenue for its third quarter ending in September came in at $79.7 million, slightly higher than the $77.5 million booked in the third quarter of 2016.

The company reported that the pulling of a major acceptance from the fourth into the third quarter provided a $20 million boost over its original Q3 target. Cray still expects total revenue for the year in the range of $400 million.

While Cray is meeting its expectations this year, market sluggishness, component delays and other factors have created a downturn since it closed out 2015 with record revenue of $725 million. But the market may finally be showing signs of a modest rebound, according to Cray CEO Peter Ungaro. On the Monday earnings call, he offered guidance that revenue is on track to grow about 10 percent in 2018.

Ungaro acknowledged that 10 percent growth at this point is not huge, but it could indicate a turning point. He said Cray is expecting a number of significant, large opportunities across multiple geographies, notably the Americas, EMEA, Asia Pacific and Japan (which Cray counts as a separate region). Some of this revenue will hit 2018, but most of it will be for systems put into production in 2019 and 2020, noted Ungaro.

“That to me really is starting to give us very early signs of a potential market rebound,” he said. “It’s early yet; this is the first quarter we are really beginning to see some signs so I want to be cautious but excited to see these signs. We always believed that the market was going to turn back around and that it wasn’t going to stay muted long.”

Cray would be in a better position for 2018 if the original CORAL contract to provide a 200-petaflops system to Argonne National Laboratory had not been rewritten over the summer, shifting the delivery date from 2018 to 2021. Ungaro said that details of the new exascale-class Aurora contract are still being finalized.

Cray CFO Brian Henry reported that total gross profit margin for the third quarter was about 36 percent, with product margin coming in at 23 percent and service margin at 53 percent. He added that this product margin was significantly lower than typical due to a $4.1 million anticipated loss on a single large contract scheduled for delivery in 2018. Much of this loss was attributed to rising memory prices. “Nobody would have thought that in the first quarter of 2016 [when they bid the contract] that the prices would be more than double at this time in 2017,” said Henry.

Over the last 12-18 months, Cray’s bottom line has been impacted by an underperforming market, component delays and, more recently, slower-than-average adoption of Intel’s Skylake processors. Ungaro commented that the company usually gets about 20-25 percent of its revenues through upgrades, and this has slowed down. “The company hasn’t gotten that uptick with the Skylake processors,” he said, attributing this to the increased cost of memory tempering the usual price-performance advantage. “We do have a number of opportunities for Skylake,” said Ungaro. “[And] we think it is going to become larger part of market opportunity in 2018, but it’s still relatively muted overall compared to a typical new processor coming to market.”

In closing, Ungaro reminded the investors on the call that the “biggest supercomputing industry conference of the year” SC17 will kick off in Denver in two weeks.

Wall Street reacted favorably to the earnings report. The stock is experiencing unusually high trading volume and shares are up 13.8 percent since Monday morning, closing at $20.65 on Tuesday.


Storage Strategies in the Age of Intelligent Data

Tue, 10/31/2017 - 14:40

From scale-out clusters on commodity hardware to flash-based storage with data-temperature tiering, cloud-based object storage, and even tape, there are myriad considerations when architecting the right enterprise storage solution. In this round-table webinar, we examine case studies covering a variety of today's storage requirements. We'll discuss when and where to use various storage media according to use case, and we'll look at security challenges and emerging storage technology coming online.


ORNL’s DelCul, Wirth Named American Nuclear Society Fellows

Tue, 10/31/2017 - 10:52

OAK RIDGE, Tenn., Oct. 31, 2017 — Two researchers from the Department of Energy’s Oak Ridge National Laboratory have been elected fellows of the American Nuclear Society (ANS), a professional society that promotes the advancement and awareness of nuclear science and technology.

Guillermo Daniel (Bill) DelCul was cited by the ANS for his outstanding accomplishments in actinide and fission product separations, uranium processing chemistry and advanced fuel cycle development.

“Dr. Del Cul has developed new concepts to improve the nuclear fuel cycle and advance separations and waste treatment technologies. His innovations are being applied to many nuclear processes both nationally and internationally,” the ANS citation reads.

DelCul’s long career in nuclear science and engineering includes research and development activities in actinide separations, processing of used nuclear fuel, high temperature molten salts, technical support of enrichment activities and national security-related research.

DelCul, a distinguished research staff member in the Process Engineering & Research group of ORNL’s Nuclear Security and Isotopes Technology Division, was also honored with the Glenn T. Seaborg Award at the 40th Actinide Separations Conference in 2016.

Brian David Wirth, who holds the University of Tennessee (UT)-ORNL Governor’s Chair of Computational Nuclear Engineering, is a joint appointee of UT and ORNL, where he leads Scientific Discovery through Advanced Computing (SciDAC) projects on fusion plasma surface interactions and fission gas bubble evolution in nuclear fuel. Wirth previously served as a focus area lead for Fuels, Materials and Chemistry with DOE’s Consortium for the Advanced Simulation of Light Water Reactors modeling and simulation hub, where he continues to study nuclear fuels and structural materials to improve future nuclear energy production.

He was cited by the ANS “for seminal contributions to fundamental understanding of radiation damage in nuclear reactor materials providing the scientific basis for improved predictions of reactor performance and development of more damage tolerant materials for advanced fission and fusion power systems.”

Wirth is a recipient of the 2014 DOE Ernest Orlando Lawrence Award in Energy Science and Innovation, as well as the 2016 Mishima Award from the ANS for outstanding work in nuclear fuels and materials research.

DelCul and Wirth were recognized as new fellows Monday at the ANS Winter Meeting’s opening session in Washington, D.C.

UT-Battelle manages ORNL for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: ORNL

The post ORNL’s DelCul, Wirth Named American Nuclear Society Fellows appeared first on HPCwire.

SC17 Preview: Q&A with Inclusivity Chair Toni Collis

Tue, 10/31/2017 - 10:37

HPCwire interviews SC17 Inclusivity Chair Toni Collis about the importance of diversity and inclusion in high-performance computing. An applications consultant in HPC Research and Industry at Edinburgh Parallel Computing Centre (EPCC), Collis co-founded the Women in HPC (WHPC) network in early 2014 with the intention of providing a UK network for women. A half-day workshop at SC14 in New Orleans jump-started the organization’s international activities, which have since expanded considerably. At SC17, the group hosts multiple BoFs, evening networking events and a mentoring program. “Women in HPC is truly an international network of volunteers from academia, industry and national labs from all around the world,” says Collis.

HPCwire: The HPC Matters program, initiated in 2014, raised awareness about the vital role HPC plays in helping make the world a better place. In that same spirit, what is meant by Inclusivity and why does Inclusivity matter?

Toni Collis

Toni Collis: The ‘HPC Matters’ message can be incredibly important to improving diversity and inclusion in the community. Many of us need to know that our work has a positive impact on society. What is fascinating is that this is more prevalent among women than men: 50 percent of women report wanting their work to have a positive impact, whereas only 31 percent of men report this as being crucial to their career choices. As we all know, HPC and supercomputing are indeed vitally important to the progress of society, but we all need to make an effort to share this, and not just delve into the (important!) details of what we do. This is just one of the activities that the SC Inclusivity team is taking a strong interest in.

The SC17 Inclusivity activities were initiated in recognition of the importance of diversity in our community. There is growing evidence that diversity is good for research and business, increasing ‘team’ or collective IQ, decision making, citation rates, a business’ return on equity and much more. We strongly believe that an individual’s personal characteristics should not be a barrier to participating, but we are also aware that the barrier to participation for anyone who belongs to an underrepresented group in any field can be high. In 2016, SC16’s General Chair, John West, established a Diversity team to look at this from the conference’s perspective: what can the conference do to impact the workforce, and, crucially, are there things at the conference that currently negatively impact or stall progress towards diversity? For SC17 the Executive and Steering committee wanted to publicly acknowledge that we are going beyond diversity. Diversity is important, but a feeling of inclusion for all can have an even bigger positive impact: our activities will benefit everyone, not just underrepresented groups.

HPCwire: What are the focus areas of SC17’s Inclusivity program?

Collis: We have three main aims for 2017:

  • Expand our reach and ‘inclusivity’ activities for groups we have already identified as being under-represented in our community, such as women and underrepresented minorities;
  • Find out more about these and other groups, what is impacting them both positively and negatively, and whether there are differences with the rest of the community;
  • Encourage a discussion on the importance and benefits of inclusion and diversity across the world-wide supercomputing community.

The SC17 conference on its own cannot change the recruitment and retention of a diverse and inclusive HPC workforce, but I believe it can contribute meaningfully by sharing what we learn and encouraging others to measure their demographics and to analyse and address the situation in their own institutions.

To achieve these goals we will be continuing and expanding our provision for attendees with children. The conference now has a Child Policy which enables parents to bring their children to the conference, irrespective of their age, while protecting the safety of the child. The conference is also continuing and expanding childcare provision and parents’ facilities, as well as emphasising how to participate in the Family Day. We have brought back the prayer room, and we are constantly seeking to improve the attendee experience by expanding our ‘navigating SC’ sessions, both in an online FAQ and during the conference, so attendees can get answers to common concerns ahead of time. We will also be seeking feedback from attendees: by monitoring, measuring and understanding the attendee experience we aim to develop the conference to ensure that the environment is as inclusive and diverse as possible.

HPCwire: What is the unifying theme for this year’s SC WHPC activities?

Collis: At SC17 we aim to enhance the careers of women at all stages by providing them with a platform to showcase their work, and also to provide a wide variety of female role models, who both inspire women and help to address any implicit bias towards the role of women in HPC that both women and men may have. We will also spend significant time building networking opportunities for women and providing managers, hirers, leaders and their organisations with both the key facts around diversity and methods to improve diversity and inclusion.

The full list of WHPC activities at SC17 is available on our special SC17 event page https://www.womeninhpc.org/whpc-sc17/. But we also encourage people to join us (http://www.womeninhpc.org/membership/individual/), so they can stay in touch with our monthly newsletter. WHPC is also on Twitter (@women_in_hpc), Facebook (https://www.facebook.com/womeninhpc/) and LinkedIn (https://www.linkedin.com/groups/8105215), where we encourage conversation about which methods are effective in improving diversity and what works for the HPC community, and where we seek new ideas.

If you want to know more about the activities around diversity and inclusion at SC, please take a look at the SC Inclusivity pages http://sc17.supercomputing.org/inclusivity/.

HPCwire: I’ve asked you this before, but I think it’s important to reiterate: Who is Women in HPC for?

Collis: Women in HPC really is for everyone. Our vision is to address the underrepresentation of women, and we can’t do that without men being involved in the conversation. Our career activities are open to men, and we actively encourage men to attend our events. Crucially the support we offer women is beneficial to everyone, so although we market our activities as ‘Women in HPC’, we welcome attendance from all. By doing this we are not only building a network of women but a community of women and their allies and advocates who can encourage participation by all. Although our mission focuses on women, our work on disseminating change and best practice often applies to multiple areas, and we are also keen to actively engage with other underrepresented groups. More than anything, we are a welcoming and open community and hope to see more men participate in the discussion in Denver.

HPCwire: You’ve said that there’s no perfect way to be a woman in HPC. What does that mean?

Collis: I believe that the benefit of women to the community is not just our numbers but our diversity of experiences, ideas and innovation. Some people talk about giving women the skills to become more like those already in the community (i.e. men), but I believe that we shouldn’t be changing women but instead changing the system that doesn’t recognise the contribution of those that are a little different. There shouldn’t be an ideal or perfect route into HPC: our community thrives on its interdisciplinary ideas and applications. Instead we need to recognise that what is important is someone’s potential, not whether their CV looks like the rest of the people we work with. If the community recognises this we will also expand the proportion of candidates from other groups, not just women, who can fulfill their potential in the supercomputing workforce.

Our uniqueness and diversity are the key to our benefit to the community, and changing women to ‘fit in’ would lose that.

HPCwire: What are some strategies for helping women make the most of their time at technical events?

Collis: The same strategies that apply to men! The difference might be whether women are having the same experience of the conference as men. If you find the event intimidating (many do, and not just women; it is such a big conference), realise that you are not alone! Consider finding a ‘buddy’ early on in the conference that you can hang out with. Aim to identify the tech program elements that will be of most benefit to you before you turn up, so you can plan your week and avoid the overwhelming experience of choosing on the day. Don’t be afraid to approach speakers, either by asking a question or contacting them afterwards; doing this with a buddy can be less intimidating. The conference is also a great time to meet, for the first time, people you have connected with virtually, but as it is such a busy week, contact them before the conference to find a time that works for you both.

If you are a more experienced member of the SC community, you might want to use SC as an opportunity to give back by providing informal mentorship and sponsorship. Both can have a significant impact on the careers of all, with growing evidence that mentorship matters even more to women than to their male peers. SC can be a great place to meet potential mentors, sponsors and collaborators, whatever your career stage.

The post SC17 Preview: Q&A with Inclusivity Chair Toni Collis appeared first on HPCwire.

Asetek Announces NEC Corporation as New Datacenter OEM Partner

Tue, 10/31/2017 - 08:28

OSLO, Norway, Oct. 31, 2017 — Asetek (ASETEK.OL) today announced NEC Corporation as a new data center OEM partner. NEC Corporation, through its subsidiary NEC Fielding, Ltd., will deploy Asetek RackCDU Direct-to-Chip liquid cooling at a new HPC (High Performance Computing) installation in Japan. Asetek has already begun to make shipments in support of this installation. This follows the announcement of an undisclosed OEM partner on August 7th, 2017.

“We are pleased to work with NEC on this project and look forward to further collaboration in the future,” said John Hamill, Asetek Chief Operating Officer. “Partnering with leading OEMs such as NEC is a cornerstone of our strategy to develop the emerging data center market.”

“Liquid cooling technology is becoming a key supercomputer component. Asetek’s direct-to-chip technology enables more effective cooling and increased computational performance in high density HPC clusters, adding value for our end-users,” said Noritaka Hoshi, Senior Manager, NEC Corporation.

Asetek RackCDU D2C is a hot water liquid cooling solution that captures between 60% and 80% of server heat, reducing data center cooling cost by over 50% and allowing 2.5x-5x increases in data center server density. Learn more at www.asetek.com.
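As a rough, hypothetical illustration of why that capture fraction matters (the rack heat load below is an assumed figure, not Asetek data), the fraction of heat removed by liquid determines how much is left for conventional air cooling:

```python
# Hypothetical back-of-envelope: heat left for room air cooling after
# direct-to-chip liquid capture. The 30 kW rack load is an assumption.
def remaining_air_load(server_heat_kw, capture_fraction):
    """Heat (kW) still rejected to room air after liquid cooling capture."""
    return server_heat_kw * (1.0 - capture_fraction)

rack_heat_kw = 30.0                 # assumed dense HPC rack
for frac in (0.60, 0.80):           # Asetek's stated 60-80% capture range
    left = remaining_air_load(rack_heat_kw, frac)
    print(f"{frac:.0%} captured -> {left:.1f} kW left for air cooling")
```

Less heat rejected to room air means smaller chiller and air-handler capacity, which is where the claimed cooling-cost reduction and the headroom for denser racks come from.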

About NEC Corporation

NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross utilize the company’s experience and global resources, NEC’s advanced technologies meet the complex and ever-changing needs of its customers. NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society.  For more information, visit NEC at http://www.nec.com.

About Asetek

Asetek is a global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information, visit www.asetek.com

Source: Asetek

The post Asetek Announces NEC Corporation as New Datacenter OEM Partner appeared first on HPCwire.

Mellanox to Present at Upcoming Investor Conferences

Tue, 10/31/2017 - 08:23

SUNNYVALE, Calif. and YOKNEAM, Israel, Oct. 31, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of end-to-end interconnect solutions for servers and storage systems, today announced that it will present at the following conferences during the fourth quarter of 2017:

  • Credit Suisse 21st Annual Technology, Media, and Telecom Conference in Scottsdale, AZ, Tuesday, Nov. 28th, 2:30 p.m., Mountain Standard Time.
  • NASDAQ 37th Investor Conference in London, England, Wednesday, Dec. 6th, 4:00 p.m., Greenwich Mean Time.
  • Barclays Global Technology, Media, and Telecommunications Conference in San Francisco, CA, Thursday, Dec. 7th, 3:00 p.m., Pacific Standard Time.

When available, a webcast of the live event, as well as a replay, will be available on the company’s investor relations website at: http://ir.mellanox.com.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services.

Source: Mellanox

The post Mellanox to Present at Upcoming Investor Conferences appeared first on HPCwire.

Cray Reports Third Quarter 2017 Financial Results

Mon, 10/30/2017 - 16:24

SEATTLE, Oct. 30, 2017 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced financial results for its third quarter ended September 30, 2017.

All figures in this release are based on U.S. GAAP unless otherwise noted.  A reconciliation of GAAP to non-GAAP measures is included in the financial tables in this press release.

Revenue for the third quarter of 2017 was $79.7 million, compared to $77.5 million in the third quarter of 2016.  Net loss for the third quarter of 2017 was $10.2 million, or $0.25 per diluted share, compared to a net loss of $23.0 million, or $0.58 per diluted share in the third quarter of 2016.  Non-GAAP net loss was $13.3 million, or $0.33 per diluted share for the third quarter of 2017, compared to non-GAAP net loss of $19.5 million, or $0.49 per diluted share for the same period of 2016.

Overall gross profit margin on a GAAP and non-GAAP basis for the third quarter of 2017 was 36%. Overall gross profit margin on a GAAP and non-GAAP basis for the third quarter of 2016 was 30% and 31%, respectively.

Operating expenses for the third quarter of 2017 were $54.7 million, compared to $52.1 million for the third quarter of 2016.  Non-GAAP operating expenses for the third quarter of 2017 were $43.9 million, compared to $49.3 million for the third quarter of 2016.  GAAP operating expenses for the third quarter of 2017 included $7.7 million in restructuring charges associated with our recent workforce reduction.

As of September 30, 2017, cash, investments and restricted cash totaled $183 million.  Working capital at the end of the third quarter was $337 million, compared to $342 million at the end of the second quarter.

“The third quarter was highlighted by several exciting customer wins and strategic developments,” said Peter Ungaro, president and CEO of Cray.  “We completed our recently announced strategic transaction with Seagate to broaden our storage portfolio and deepen our presence in the storage market.  In supercomputing, our CS500 cluster was selected by KISTI, a leading research institution in South Korea.  And just last week we announced that we are partnering with Microsoft to deliver an integrated cloud services offering which will allow customers unique access to our high-performance supercomputers in the Microsoft Azure cloud — expanding our reach to new customers through the cloud and adding a complementary offering to our product set.  While a slow-down in our primary target market has continued, I remain positive about our recent activity levels, win rates, and development efforts and I’m excited about our long-term prospects to drive growth.”

Outlook

A wide range of results remains possible for 2017.  Several acceptances are planned for completion late in the fourth quarter, some of which will be challenging.  Assuming Cray is able to complete these acceptances before year-end, Cray expects revenue for 2017 to be in the range of $400 million.  GAAP and non-GAAP gross margins for the year are expected to be in the low- to mid-30% range.  Non-GAAP operating expenses for 2017 are expected to be in the range of $180 million.  For 2017, GAAP operating expenses are anticipated to be about $21 million higher than non-GAAP operating expenses, driven by share-based compensation, restructuring, and costs related to the Seagate transaction.

While a wide range of results remains possible and it is still early in the planning process, Cray expects 2018 annual revenue to grow in the range of 10% compared to Cray’s current 2017 outlook.  Revenue is expected to be about $75 million in the first quarter of 2018.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges.  Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability.  Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

 

Source: Cray

The post Cray Reports Third Quarter 2017 Financial Results appeared first on HPCwire.

IBM Demonstrates In-memory Computing with 1M PCM Devices

Mon, 10/30/2017 - 10:52

Last week IBM reported successfully using one million phase change memory (PCM) devices to implement and demonstrate an unsupervised learning algorithm running in memory. It’s another interesting and potentially important step in the quickening scramble to develop in-memory computing techniques to overcome the memory-to-processor data transfer bottlenecks that are inherent in von Neumann architecture. IBM promises big gains from PCM technology.

“When compared to state-of-the-art classical computers, this prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for enabling ultra-dense, low-power, and massively-parallel computing systems for applications in AI,” says IBM researcher Abu Sebastian in an account of the work posted on the IBM Research Zurich website.

In this particular research, IBM demonstrated the ability to identify “temporal correlations in unknown data streams.” One of the examples, perhaps chosen with tongue in cheek, was use of the technique to detect and reproduce an image of computer pioneer Alan Turing. The full research is presented in a paper, ‘Temporal correlation detection using computational phase-change memory’, published in Nature Communications last week.

Evangelos Eleftheriou, an IBM Fellow and co-author of the paper, is quoted in the blog, “This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures. As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers. Given the simplicity, high speed and low energy of our in-memory computing approach, it’s remarkable that our results are so similar to our benchmark classical approach run on a von Neumann computer.”

IBM used PCM devices based on a germanium antimony telluride alloy stacked and sandwiched between two electrodes. The extent of the material’s crystalline versus amorphous structure (its phase) between the electrodes is changed by pulsing current through the device, which heats the material and causes the phase change; this in turn controls its conductance level. (For background, see the HPCwire article IBM Phase Change Device Shows Promise for Emerging AI Apps.)

Shown below is a schematic of the IBM algorithm.

To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:

  • Simulated Data: one million binary (0 or 1) random processes organized on a 2D grid based on a 1000 x 1000 pixel, black-and-white profile drawing of famed British mathematician Alan Turing. The IBM scientists then made the pixels blink on and off at the same rate, but the black pixels turned on and off in a weakly correlated manner. This means that when a black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the PCM devices corresponding to the correlated processes went to a high conductance state. In this way, the conductance map of the PCM devices recreates the drawing of Alan Turing.
  • Real-World Data: actual rainfall data, collected at one-hour intervals over a period of six months from 270 weather stations across the USA. If it rained within the hour, the interval was labelled “1”; if not, “0”. Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 of the 270 weather stations. The in-memory approach classified 12 stations as uncorrelated that k-means had marked correlated, and 13 stations as correlated that k-means had marked uncorrelated.
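The learning rule in the examples above can be mimicked in ordinary software. The sketch below is not IBM’s device physics, just an illustrative model with invented sizes and probabilities: each “device” accumulates a value (standing in for conductance) whenever its process fires, with a pulse proportional to the instantaneous collective activity. Processes that fire in sync fire at moments of high activity, so their devices accumulate faster, which is the separation the PCM array exploits.

```python
import random

random.seed(0)                       # reproducible toy run

N, T = 200, 2000                     # processes, time steps (invented sizes)
n_corr = 50                          # the first 50 processes are weakly correlated
p = 0.1                              # baseline firing probability

conductance = [0.0] * N              # stands in for PCM device conductance

for _ in range(T):
    driver = random.random() < p     # hidden shared event driving the correlation
    events = []
    for i in range(N):
        if i < n_corr:               # correlated group follows the driver, noisily
            fire = driver if random.random() < 0.7 else (random.random() < p)
        else:                        # independent processes
            fire = random.random() < p
        events.append(fire)
    activity = sum(events) / N       # instantaneous collective activity
    for i, fired in enumerate(events):
        if fired:
            # "crystallization" step: pulses landing at moments of high
            # collective activity grow the conductance faster
            conductance[i] += activity

corr_mean = sum(conductance[:n_corr]) / n_corr
uncorr_mean = sum(conductance[n_corr:]) / (N - n_corr)
print(corr_mean > uncorr_mean)       # correlated devices end up more conductive
```

Thresholding the final conductance values recovers the correlated group, which is the software analogue of the Turing-sketch demonstration above.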

Shown below is figure 5 from the paper with further details of the examples (click image to enlarge):

 

Experimental results. (a) A million processes are mapped to the pixels of a 1000 × 1000 pixel black-and-white sketch of Alan Turing. The pixels turn on and off in accordance with the instantaneous binary values of the processes. (b) Evolution of device conductance over time, showing that the devices corresponding to the correlated processes go to a high conductance state. (c) The distribution of the device conductance shows that the algorithm is able to pick out most of the correlated processes. (d) Generation of a binary stochastic process based on the rainfall data from 270 weather stations across the USA. (e) The uncentered covariance matrix reveals several small correlated groups, along with a predominant correlated group. (f) The map of the device conductance levels after the experiment shows that the devices corresponding to the predominant correlated group have achieved a higher conductance value.

 

“Memory has so far been viewed as a place where we merely store information. But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes,” according to Sebastian, who is an exploratory memory and cognitive technologies scientist at IBM Research and lead author of the paper. He also leads a European Research Council funded project on this topic.

Here’s an excerpt from the paper:

“We show how the crystallization dynamics of PCM devices can be exploited to detect statistical correlations between event-based data streams. This can be applied in various fields such as the Internet of Things (IoT), life sciences, networking, social networks, and large scientific experiments. For example, one could generate an event-based data stream based on the presence or absence of a specific word in a collection of tweets. Real-time processing of event-based data streams from dynamic vision sensors is another promising application area. One can also view correlation detection as a key constituent of unsupervised learning where one of the objectives is to find correlated clusters in data streams.”

Link to paper: https://www.nature.com/articles/s41467-017-01481-9

Link to article: https://www.ibm.com/blogs/research/2017/10/ibm-scientists-demonstrate-memory-computing-1-million-devices-applications-ai/

The post IBM Demonstrates In-memory Computing with 1M PCM Devices appeared first on HPCwire.

ORNL, City of Oak Ridge Partner on Sensor Project to Capture Trends in Cities

Mon, 10/30/2017 - 08:23

OAK RIDGE, Tenn., Oct. 30, 2017—Researchers at the Department of Energy’s Oak Ridge National Laboratory are partnering with the city of Oak Ridge to develop UrbanSense, a comprehensive sensor network and real-time visualization platform that helps cities evaluate trends in urban activity.

UrbanSense passively collects anonymous, open-source data from cellular towers to generate real-time estimates of population density in cities. Insights on how people interact with urban infrastructure helps cities like Oak Ridge, Tennessee (above), assess their needs and plan effectively for future development. Credit: Oak Ridge National Laboratory, U.S. Dept. of Energy

The project, initiated by ORNL’s Urban Dynamics Institute, centers on addressing cities’ real-world challenges through applied urban science.

“Preparing for urban growth and planning for future infrastructure development and resource demands are global problems, but cities need ways to be proactive on a local level,” said UDI director Budhendra Bhaduri. “Our goal in bringing science to cities is to put the right tools and resources in the hands of city managers and urban planners so that they can assess local impacts and make strategic decisions to get the best return on future investments.”

UDI researchers Teja Kuruganti and Gautam Thakur from ORNL’s Computer Science and Engineering Division are collaborating with Oak Ridge director of administrative services Bruce Applegate on the design and deployment of UrbanSense.

The prototype designed for Oak Ridge monitors population density, traffic flow and environmental data including air and water quality, with a total of seven sensors to be installed in the city. “The longer they are in place and the more data they collect, the better the city’s sense of its trends will be,” Thakur said.

The platform gathers open-source, anonymous data from virtual and physical sensors to generate population dynamics in real time. Virtual sensors include online public data sets such as AirNow.gov, which reports national air quality information, and other self-reported data from social media, such as Facebook “check-ins” and Twitter posts. UrbanSense also uses sensors that passively collect anonymous cellular tower data from open broadcasts by mobile networks as they manage their capacity, which can help estimate population density.

Commercially available physical sensors that monitor traffic flow, water and air quality can provide additional information relevant to strategic planning on a city level.

The cloud-based system, supported by ORNL servers, captures these multimodal trends and displays real-time dynamics via an online dashboard.
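A hypothetical sketch of the kind of multi-sensor fusion such a dashboard sits on top of (every sensor name, weight and number below is invented for illustration, not UrbanSense’s actual design): per-source population estimates are bucketed by hour and combined into one figure per interval.

```python
from collections import defaultdict
from datetime import datetime

def hour_bucket(ts: str) -> str:
    """Collapse an ISO timestamp to its hour, e.g. '2017-10-30 09:00'."""
    return datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")

def fuse(readings, weights):
    """Weighted average of per-source population estimates, per hour."""
    buckets = defaultdict(lambda: [0.0, 0.0])     # hour -> [weighted sum, total weight]
    for source, ts, estimate in readings:
        w = weights[source]
        acc = buckets[hour_bucket(ts)]
        acc[0] += w * estimate
        acc[1] += w
    return {hour: s / w for hour, (s, w) in buckets.items()}

# Invented example readings: (source, timestamp, estimated people present)
readings = [
    ("cell_tower",   "2017-10-30T09:15", 4200),   # passive tower broadcasts
    ("social_media", "2017-10-30T09:40", 3600),   # scaled check-in counts
    ("cell_tower",   "2017-10-30T10:05", 5100),
]
weights = {"cell_tower": 0.7, "social_media": 0.3}  # invented trust weights
print(fuse(readings, weights))
```

New sensor types would simply contribute additional `(source, timestamp, estimate)` tuples, which matches the article’s point that the design is scalable to other kinds of data.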

ORNL researchers Gautam Thakur (left) and Teja Kuruganti demonstrate UrbanSense, a novel sensor network aimed at helping cities manage their growth and evaluate future development opportunities. The platform collects open-source population, traffic and environmental data in cities and delivers real-time dynamics to users via an online dashboard. Credit: Jason Richards/Oak Ridge National Laboratory, U.S. Dept. of Energy

“We want to give cities like Oak Ridge a better sense of their population distribution and dynamics,” said Kuruganti. “Our project is about bringing technology to cities. We are using sensors to generate observations and insights to help cities measure their growth and success.”

As cities consider development, urban planners look at issues such as how many people travel in and out of the city, which events are attended and which roads are used most frequently. But the real-time population data necessary to assess these trends is not readily available.

Population information now available to U.S. cities comes from census reports and other kinds of static data that are infrequently updated. Estimates of population density, a measure of the number of people in a given area, are limited to “ambient” populations or activity averaged over 24 hours.

“These data do not tell cities where people are at a given time of day,” said Thakur. “UrbanSense augments existing technologies by offering near real-time estimates of urban population activity. This is a huge improvement over anything cities have had before.”

Cities can use this fine-resolution population and traffic data to optimize infrastructure, evaluate retail markets, manage traffic for local events and more strategically assess their development potential. The initial feedback from users has been positive.

“The UrbanSense platform provides the city of Oak Ridge staff a 21st-century tool to analyze the rapid changes our community is undergoing through both commercial and residential development,” Applegate said. “The real-time data collected will not only increase our understanding of the city’s usage by residents and visitors but will also aid in the selection and prioritization of city-funded projects.”

As the first city to test the new technology, Oak Ridge is well positioned to share the outcomes and benefits of the project with other cities. “We are excited for the opportunity to demonstrate the ways UrbanSense can shift a municipality from a day-to-day approach to a longer range vision of urban development,” said Applegate.

Thakur also highlighted another advantage—the sensor network can be configured to include other kinds of data. “Our design is scalable and can include additional sensors, so it can easily be tailored to the unique needs of individual cities and the kinds of trends they are interested in examining.”

Kuruganti and Thakur are working to optimize UrbanSense and expand on the prototype. “We want to bring the technology to other cities,” Kuruganti said.

The Urban Dynamics Institute, located at ORNL, is pursuing novel science and technological solutions for global to local urban challenges.

ORNL is managed by UT-Battelle for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

Source: ORNL

The post ORNL, City of Oak Ridge Partner on Sensor Project to Capture Trends in Cities appeared first on HPCwire.

Pawsey Expands Zeus to Meet Researchers’ Needs

Mon, 10/30/2017 - 08:14

Oct. 30, 2017 — Moving on from a system designed for pre- and post-processing workloads, Zeus will become Pawsey’s new mid-range cluster in December 2017. The new cluster will target key science communities, such as bioinformatics, that have large compute requirements but aren’t ready to use a supercomputer such as Pawsey’s flagship system, Magnus.

Image courtesy of Pawsey Supercomputing Centre.

The addition of 92 nodes to its current configuration will increase Zeus’ capabilities, providing more than 20 million core hours per year to researchers. The mid-range cluster will be a stepping stone leading researchers on their journey to the superscale.
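As a back-of-the-envelope sanity check on the headline figure (a sketch only; the per-node core count is not stated in the announcement, so we solve for the implied value instead):

```python
# Rough check of the ~20 million core-hours/year figure for the 92-node
# expansion. The per-node core count is NOT given above; we derive the
# implied value and compare it with typical HPC node sizes of the era.
NODES = 92
HOURS_PER_YEAR = 365 * 24          # 8760
CORE_HOURS_PER_YEAR = 20_000_000   # headline figure from the announcement

implied_cores_per_node = CORE_HOURS_PER_YEAR / (NODES * HOURS_PER_YEAR)
print(f"implied cores per node: {implied_cores_per_node:.1f}")
# Roughly 25 cores per node, consistent with contemporary two-socket
# nodes (e.g. 28 cores) allocated at somewhat less than 100% utilisation.
```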

The system will run the latest-generation enterprise-class Linux operating system, SLES 12, which gives researchers access to cutting-edge software and supports new technologies such as Shifter, making running software on Zeus even easier.
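For readers unfamiliar with Shifter, the container runtime mentioned above, a batch job using it might look roughly like the following. This is a hypothetical sketch: the partition name, module name, and container image are assumptions, not details from the announcement.

```shell
#!/bin/bash
#SBATCH --partition=workq        # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Load the Shifter module (module name is an assumption) and run a
# containerised bioinformatics tool without installing it on the host.
module load shifter
shifter --image=docker:biocontainers/blast:latest blastp -version
```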

“This is another piece of the puzzle, ensuring that Pawsey provides the most suitable resources for Australian researchers,” said David Schibeci, Head of Supercomputing at Pawsey. “We’ve seen an unquenchable thirst for compute power in this country and we are happy to continue to support science outcomes that come from that thirst”.

Zeus’ new configuration will incorporate 128 GB of memory and a 100 Gb/s high-speed, low-latency interconnect on each of the 92 nodes.

Pawsey are currently calling for enthusiastic researchers to test the new Zeus expansion. Interested researchers are encouraged to contact Pawsey to gain early access to the system, especially if Zeus is currently being used in their workflows.

The procurement of this expansion was made possible through the funding of the Australian Government’s National Collaborative Research Infrastructure Strategy (NCRIS) and will enable Pawsey to continue to deliver big science outcomes for the advancement of Australia.

About Pawsey Supercomputing Centre

Pawsey Supercomputing Centre is a high-performance computing facility located in Perth, Western Australia. With its multiple supercomputers, big-data storage systems, and visualisation resources, Pawsey supports world-leading research across Australia.

Source: Pawsey Supercomputing Centre

The post Pawsey Expands Zeus to Meet Researchers’ Needs appeared first on HPCwire.

Free Your Data Intensive Applications in the Flash Era

Mon, 10/30/2017 - 01:02

Today’s data-intensive applications, spanning buzz-generating markets such as IoT, analytics, machine learning, and multifaceted research initiatives, require unique approaches to processing, networking, and storage. These applications call on diverse processor strategies to address different workloads, with AMD, Intel Core and Xeon Phi, GPUs, and Power processors deployed heterogeneously to address the different requirements. Higher performance requirements are a perpetual trend, but a super-linear increase in I/O pressure, driven by tougher I/O patterns, higher concurrency, and heavy read access, is outstripping the ability of default high-performance I/O infrastructures to keep up. Massively parallel file systems have dealt well with homogeneous, large sequential I/O, a workload pattern that is simply not found in these emerging applications.

Instead of taking the destructive approach of wholesale replacement of existing file system technologies, DDN chose to leverage the rapid commoditization of flash memory with a software-defined storage layer that sits between the application and the file system. With a scale-out approach, DDN’s Infinite Memory Engine® (IME®) presents an I/O interface that sits above, but remains tightly integrated with, the file system to transparently eliminate I/O bottlenecks. In fact, IME can be deployed to cost-effectively extend the life of existing file system solutions. IME unlocks previously blocked applications by delivering predictable job performance, faster computation against data sets too large to fit in memory, and acceleration of I/O-intensive applications. Because the approach is completely software-defined, IME is server- and storage-agnostic and application-transparent, maintaining file system semantics so that no code changes, scheduler changes, or API changes are required.
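The “shim” idea, absorbing bursty I/O in a fast tier while preserving ordinary file semantics, can be illustrated with a toy write-back cache. This is purely illustrative Python showing the general burst-buffer pattern, not DDN’s implementation, which operates transparently beneath unmodified applications.

```python
import os
import tempfile

class WriteBackCache:
    """Toy burst buffer: absorb writes in a fast tier (memory here,
    flash in a real system) and drain them to the slower backing file
    system later, while reads see a consistent view."""

    def __init__(self, backing_dir):
        self.backing_dir = backing_dir
        self.tier = {}                      # fast tier: name -> bytes

    def write(self, name, data):
        self.tier[name] = data              # fast path absorbs the burst

    def read(self, name):
        if name in self.tier:               # hot data served from the tier
            return self.tier[name]
        with open(os.path.join(self.backing_dir, name), "rb") as f:
            return f.read()                 # cold data from the backing FS

    def flush(self):
        for name, data in self.tier.items():    # drain to the backing FS
            with open(os.path.join(self.backing_dir, name), "wb") as f:
                f.write(data)
        self.tier.clear()

backing = tempfile.mkdtemp()
cache = WriteBackCache(backing)
cache.write("checkpoint.dat", b"state")     # fast, no backing-FS I/O yet
cache.flush()                               # persisted to the backing store
print(cache.read("checkpoint.dat"))         # b'state'
```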

Beyond serving as an accelerating shim between application and file system, IME also helps address the data center challenges found in high-performance environments. IME’s strategic use of flash can reduce space, power, and cooling requirements by 10x to 300x over legacy storage approaches. This allows administrators to continue to scale applications to meet workload demands while maintaining high performance, independent of the amount of storage capacity behind the file system.

Developed from scratch specifically for the flash storage medium, IME delivers a unique approach to performance alongside data security and durability. IME eliminates traditional I/O system slowdowns during extreme load, dials in resilience through erasure coding on a per-file or per-client basis, and delivers lightning-fast rebuilds thanks to its fully declustered, distributed data architecture. These capabilities combine to free complex applications through lower cost while delivering deeper insight and smarter productivity.
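As a minimal illustration of how erasure coding provides resilience, the sketch below uses single-parity XOR (the simplest possible scheme); IME’s actual coding is more sophisticated and, as noted above, configurable per file or per client.

```python
from functools import reduce

def xor_parity(chunks):
    """Compute a parity chunk as the byte-wise XOR of equal-sized
    data chunks (the RAID-5-style single-parity scheme)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def rebuild(surviving_chunks, parity):
    """Recover one lost chunk: XOR the survivors with the parity chunk."""
    return xor_parity(surviving_chunks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # equal-sized data chunks
parity = xor_parity(data)

# Simulate losing chunk 1, then rebuild it from the survivors + parity.
recovered = rebuild([data[0], data[2]], parity)
print(recovered)   # b'BBBB'
```

In a declustered layout, data and parity chunks are spread across all devices, so a rebuild reads a little from many devices in parallel rather than everything from one hot spare, which is what makes rebuilds fast.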

To learn how you can leverage the power of IME to improve the efficiency of your compute, storage and networking, visit the DDN website.

The post Free Your Data Intensive Applications in the Flash Era appeared first on HPCwire.
