Feed aggregator

Dr. Whitfield Diffie to Deliver a Keynote at Supercomputing Frontiers Europe 2018

HPC Wire - Tue, 02/06/2018 - 13:46

Feb. 6, 2018 — Supercomputing Frontiers Europe 2018 has announced that Dr. Whitfield Diffie will deliver a keynote address on Monday, March 12th, followed by a special session on cryptography and its applications in block-chain and distributed ledger technologies.

Supercomputing Frontiers has also announced that the final program is now available online:

https://supercomputingfrontiers.eu/2018/conference-programme/

For more information on the conference, including information on registration, visit the conference’s website: https://supercomputingfrontiers.eu/2018/

About Supercomputing Frontiers

Supercomputing Frontiers is an annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important trends and substantial innovations in supercomputing.

Source: Supercomputing Frontiers

The post Dr. Whitfield Diffie to Deliver a Keynote at Supercomputing Frontiers Europe 2018 appeared first on HPCwire.

Dell EMC Expands Server Capabilities for Software-Defined, Edge and High-Performance Computing

HPC Wire - Tue, 02/06/2018 - 09:17

ROUND ROCK, Texas, Feb. 6, 2018 — Dell EMC announced three new servers designed for software-defined environments, edge and high-performance computing (HPC). The PowerEdge R6415, PowerEdge R7415 and PowerEdge R7425 expand the 14th generation of the Dell EMC PowerEdge server portfolio with new capabilities to address the demanding workload requirements of today’s modern data center. All three rack servers with the AMD EPYC processor offer highly scalable platforms with outstanding total cost of ownership (TCO).

“As the bedrock of the modern data center, customers expect us to push server innovation further and faster,” said Ashley Gorakhpurwalla, president, Server and Infrastructure Systems at Dell EMC. “As customers deploy more IoT solutions, they need highly capable and flexible compute at the edge to turn data into real-time insights; these new servers are engineered to deliver that while lowering TCO.”

The combined innovation of AMD EPYC processors and pioneering PowerEdge server technology delivers compute capabilities that optimally enhance emerging workloads. With up to 32 cores (64 threads), 8 memory channels and 128 PCIe lanes, AMD’s EPYC processors offer flexibility, performance, and security features for today’s software-defined ecosystem.

“We are pleased to partner again with Dell EMC and integrate our AMD EPYC processors into the latest generation of PowerEdge servers to deliver enhanced scalability and outstanding total cost of ownership,” said Forrest Norrod, senior vice president and general manager of the Datacenter and Embedded Solutions Business Group (DESG), AMD. “Dell EMC servers are purpose built for emerging workloads like software-defined storage and heterogeneous compute and fully utilize the power of AMD EPYC. Dell EMC always keeps the server ecosystem and customer requirements top of mind, and this partnership is just the beginning as we work together to create solutions that unlock the next chapter of data center growth and capability.”

Technology is scaling at a relentless pace and seeing record adoption, which has resulted in emerging workloads that are growing in scale and scope. These workloads are driving new system requirements and features that are, in turn, advancing development and adoption of technologies such as NVMe, FPGAs and in-memory databases. The PowerEdge R6415, PowerEdge R7415 and PowerEdge R7425 are designed to scale up as customers’ workloads increase and have the flexibility to support today’s modern data center.

Like all 14th generation PowerEdge servers, the new servers will continue to offer a scalable business architecture and intelligent automation with iDRAC9 and Quick Sync 2 management support. Integrated security is always a priority, and the integrated cyber-resilient architecture security features of the Dell EMC PowerEdge servers protect customers’ businesses and data for the life of the server.

These servers have up to 4TB memory capacity enhanced for database management system (DBMS) and analytics workload flexibility and are further optimized for the following environments:

  • Edge computing deployments – The highly configurable, 1U single-socket Dell EMC PowerEdge R6415, with up to 32 cores, offers ultra-dense and scale-out computing capabilities. Storage flexibility is enabled with up to 10 PCIe NVMe drives.
  • Software-defined Storage deployments – The 2U single-socket Dell EMC PowerEdge R7415 is the first AMD EPYC-based server platform certified as a VMware vSAN Ready Node and offers up to 20% better TCO per four-node cluster for vSAN deployments at the edge. With 128 PCIe lanes, it offers accelerated east/west bandwidth for cloud computing and virtualization. Additionally, with up to 2TB memory capacity and up to 24 NVMe drives, customers can improve storage efficiency and scale quickly at a fraction of the cost of traditionally built storage.
  • High performance computing – The dual-socket Dell EMC PowerEdge R7425 delivers up to 24% improved performance versus the HPE DL385 for containers, hypervisors, virtual machines and cloud computing and up to 25% absolute performance improvement for HPC workloads like computational fluid dynamics (CFD). With up to 64 cores, it offers high bandwidth with dense GPU/FPGA capability. On standard benchmarks, the server with superior memory bandwidth and core density provided excellent results across a wide range of HPC workloads.

The new line of PowerEdge servers powered by AMD EPYC processors will be available to channel partners across the globe, so they can cover a broad spectrum of configurations to optimize diverse workloads for customers.

Availability

  • Dell EMC PowerEdge R7425, R7415, R6415 are available now worldwide.
  • vSAN Ready Nodes are available now with the PowerEdge R7425, R7415 and the R6415.

About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies. This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions. Dell EMC services customers across 180 countries – including 98 percent of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.

About Dell Inc.

Dell Inc., a part of Dell Technologies, provides customers of all sizes – including 98 percent of the Fortune 500 – with a broad, innovative portfolio from edge to core to cloud. Dell Inc. comprises Dell client as well as Dell EMC infrastructure offerings that enable organizations to modernize, automate and transform their data center while providing today’s workforce and consumers what they need to securely connect, produce, and collaborate from anywhere at any time.

Source: Dell EMC

The post Dell EMC Expands Server Capabilities for Software-Defined, Edge and High-Performance Computing appeared first on HPCwire.

Micron Announces Chief Financial Officer Transition

HPC Wire - Tue, 02/06/2018 - 08:36

BOISE, Idaho, Feb. 6, 2018 — Micron Technology, Inc. (Nasdaq:MU) announced today that the company has appointed David Zinsner as senior vice president and chief financial officer, effective Feb. 19, 2018. Zinsner succeeds Ernie Maddock, who is retiring from Micron but will remain with the company as an adviser through early June to ensure a smooth transition. Zinsner will report directly to Sanjay Mehrotra, president and CEO.

“On behalf of the company, I want to thank Ernie for his significant contributions to Micron,” Mehrotra said. “He has helped position the company for continued strong growth, and we wish him the best in his future endeavors.”

“I am fortunate to have been a part of Micron’s progress over the last few years,” Maddock said. “My focus now is on supporting Dave and Sanjay during the transition period to ensure a seamless and effective handoff.”

Zinsner joins Micron with over 20 years of financial and operations experience in the semiconductor and technology industry. He most recently served as president and chief operating officer at Affirmed Networks. Prior to that, Zinsner was senior vice president of finance and chief financial officer for eight years at Analog Devices, and before that, he was senior vice president and chief financial officer for four years at Intersil Corp.

“Dave brings a great combination of financial expertise and executive experience to Micron and has a strong track record of achieving outstanding results,” Mehrotra said. “We look forward to his leadership in driving our financial strategy and delivering significant value to our shareholders.”

“I am very excited to be joining Micron at a time when the company is uniquely positioned to take advantage of growing demand for memory and storage solutions across a wide range of industries,” Zinsner said. “I look forward to working with the Micron team to capitalize on those trends and to take the company to the next level.”

Zinsner holds a master’s degree in business administration, finance and accounting from Vanderbilt University and a bachelor’s degree in industrial management from Carnegie Mellon University.

In a separate press release, Micron today updated its financial outlook for its fiscal second quarter of 2018: investors.micron.com.

Additional information on David Zinsner is available at http://www.micron.com/media.

About Micron

We are an industry leader in innovative memory and storage solutions. Through our global brands — Micron, Crucial and Ballistix — our broad portfolio of high-performance memory and storage technologies, including DRAM, NAND, NOR Flash and 3D XPoint memory, is transforming how the world uses information to enrich life. Backed by nearly 40 years of technology leadership, our memory and storage solutions enable disruptive trends, including artificial intelligence, machine learning and autonomous vehicles, in key market segments like cloud, data center, networking and mobile. Our common stock is traded on the NASDAQ under the MU symbol. To learn more about Micron Technology, Inc., visit www.micron.com.

Source: Micron

The post Micron Announces Chief Financial Officer Transition appeared first on HPCwire.

IARPA Ramps Up Molecular Information Storage Program

HPC Wire - Tue, 02/06/2018 - 08:21

Later this month IARPA will hold a Proposers’ Day to kick off its planned, four-year Molecular Information Storage (MIST) project. “Today’s exabyte-scale data centers occupy large warehouses, consume megawatts of power, and cost billions of dollars to build, operate and maintain over their lifetimes. This resource intensive model does not offer a tractable path to scaling beyond the exabyte regime in the future,” says IARPA.

The search for new storage technologies is hardly new. In recent years the proliferation of data generating devices (scientific instruments and commercial IoT) and the rise of AI and data analytics capabilities to make use of vast datasets have boosted pressure to find alternative approaches to storage.

The MIST program is expected to last four years and be composed of two 24-month phases. “The desired capabilities” for both phases of the program are described by three Technical Areas (TAs):

  • TA1 (Storage). Develop a table-top device capable of writing information to molecular media with a target throughput and resource utilization budget. Multiple, diverse approaches are anticipated, which may utilize DNA, polypeptides, synthetic polymers, or other sequence-controlled polymer media.
  • TA2 (Retrieval). Develop a table-top device capable of randomly accessing information from molecular media with a target throughput and resource utilization budget. Multiple, diverse approaches are anticipated, which may utilize optical sequencing methods, nanopores, mass spectrometry, or other methods for sequencing polymers in a high-throughput manner.
  • TA3 (Operating System). Develop an operating system for use with storage and retrieval devices that coordinates addressing, data compression, encoding, error-correction and decoding of files from molecular media in a manner that supports efficient random access at scale. Multiple, diverse approaches are anticipated, which may draw on established methods from the storage industry, or develop new methods to accommodate constraints imposed by polymer media. (A toy sketch of how binary data can map onto a polymer alphabet follows this list.)

The end result of the program will be technologies that jointly support end-to-end storage and retrieval at the terabyte scale, and which present a clear and commercially viable path to future deployment at the exabyte scale. Collaborative efforts and teaming among potential performers is highly encouraged.
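
To make the storage idea concrete, the toy sketch below shows the simplest possible mapping between binary data and a sequence-controlled polymer alphabet: two bits per DNA base, and back again. It is purely illustrative and not part of the IARPA program; a real TA1/TA3 stack would layer addressing, compression, error correction and chemistry-aware sequence constraints (for example, avoiding long runs of the same base) on top of such a mapping.

# Toy illustration only: map bytes to DNA bases at 2 bits per base and back.
# Real molecular storage stacks (per TA3) add addressing, compression,
# error-correcting codes and chemistry-aware sequence constraints.

BITS_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}


def encode(data: bytes) -> str:
    """Encode bytes as a DNA string, most-significant bit pair first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)


def decode(dna: str) -> bytes:
    """Invert encode(); expects a length that is a multiple of 4 bases."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)


if __name__ == "__main__":
    message = b"MIST"
    strand = encode(message)
    print(strand)                      # a 16-base strand (4 bases per byte)
    assert decode(strand) == message   # round trip recovers the original bytes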

“The scale and complexity of the world’s “big data” problems are increasing rapidly,” said MIST program manager, David Markowitz. “Use cases that require storage and random access from exabytes of mostly unstructured data are now well-established in the private sector and are of increasing relevance to the public sector.” Registration closes on February 14, 2018.

Not surprisingly, IARPA is emphasizing the multidisciplinary nature of the project. Among the disciplines expected to be tapped are chemistry, synthetic biology, molecular biology, biochemistry, bioinformatics, microfluidics, semiconductor engineering, computer science and information theory. IARPA is seeking participation from academic institutions and companies from around the world.

The Proposers’ Day is February 21. Here’s a link to the program announcement: https://www.iarpa.gov/index.php/research-programs/mist

The post IARPA Ramps Up Molecular Information Storage Program appeared first on HPCwire.

‘Next Generation’ Universe Simulation Is Most Advanced Yet

HPC Wire - Mon, 02/05/2018 - 18:24

The research group that gave us the most detailed time-lapse simulation of the universe’s evolution in 2014, spanning 13.8 billion years of cosmic evolution, is back in the spotlight with an even more advanced cosmological model that is providing new insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed, and where magnetic fields originate.

Like the original Illustris project, Illustris: The Next Generation (IllustrisTNG for short) follows the progression of a cube-shaped universe from just after the Big Bang to the present day using the power of supercomputing. New physics and other refinements have been added to the original model and the scope of the simulated universe has been expanded to 1 billion light-years per side (from 350 million light-years per side previously). The first results from the project have been published in three separate articles in the journal Monthly Notices of the Royal Astronomical Society (Vol. 475, Issue 1).

Visualization of the intensity of shock waves in the cosmic gas (blue) around collapsed dark matter structures (orange/white). Source: IllustrisTNG

A press release put out by the Max Planck Institute for Astrophysics, one of the partners, highlights the significance:

At its intersection points, the cosmic web of gas and dark matter predicted by IllustrisTNG contains galaxies quite similar in shape and size to real galaxies. For the first time, hydrodynamical simulations could directly compute the detailed clustering pattern of galaxies in space. Comparison with observational data—including the newest large surveys—demonstrates the high degree of realism of IllustrisTNG. In addition, the simulations predict how the cosmic web changes over time, in particular in relation to the underlying “backbone” of the dark matter cosmos.

“It is particularly fascinating that we can accurately predict the influence of supermassive black holes on the distribution of matter out to large scales,” said principal investigator Prof. Volker Springel of the Heidelberg Institute for Theoretical Studies. “This is crucial for reliably interpreting forthcoming cosmological measurements.”

The team also includes researchers from the Max Planck Institutes for Astronomy (MPIA, Heidelberg) and Astrophysics (MPA, Garching), Harvard University, the Massachusetts Institute of Technology (MIT) and the Flatiron Institute’s Center for Computational Astrophysics (CCA).

Thin slice through the cosmic large-scale structure in the largest simulation of the IllustrisTNG project. The displayed region extends by about 1.2 billion lightyears from left to right. The underlying simulation is presently the largest magneto-hydrodynamic simulation of galaxy formation, containing more than 30 billion volume elements and particles.

To capture the small-scale turbulent physics at the heart of galaxy formation, astrophysicists used a powerful version of the highly parallel moving mesh code, AREPO, which they deployed on Germany’s fastest supercomputer, Hazel Hen. Ancillary and test runs of the project were also run on the Stampede supercomputer at the Texas Advanced Computing Center, at the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility, and on MIT/Harvard computing resources.

As detailed on the project website, IllustrisTNG actually consists of 18 simulations in total at varying scales. The largest (the highest-resolution TNG300 simulation) occupied 2,000 of Hazel Hen’s Xeons for just over two months. The simulations together generated more than 500 terabytes of data and will keep the team busy for years to come.

A visualization from the project shows the formation of a massive “late-type,” star-forming disk galaxy.

 

Read more about IllustrisTNG at their website.

The post ‘Next Generation’ Universe Simulation Is Most Advanced Yet appeared first on HPCwire.

Former Intel Pres Launches ARM-based Server Chip Venture

HPC Wire - Mon, 02/05/2018 - 14:00

The “leadership journey” of Renée James, who held the highest rank of any woman in the history of Intel Corp., has taken its next step: she has emerged as CEO of a well-heeled, venture-backed company developing ARM-based chips, based on technology from Applied Micro Circuits, that will compete with Intel and others for a portion of the server chip market.

James’s new company, Ampere, located near Intel headquarters in Santa Clara, is funded by private equity firm The Carlyle Group, which James joined in 2016. When she resigned from Intel the previous year, concluding 28 years there, she held the position of president, considered the no. 2 spot to CEO Brian Krzanich.

Ampere’s processors are built for private and public clouds, delivering 64-bit Arm server compute with, according to the company, “high memory performance and substantially lower power and total cost of ownership.” The processors offer a custom-core Armv8-A 64-bit design operating at up to 3.3 GHz and 1TB of memory capacity within a power envelope of 125 watts, the company said, adding that the processors are sampling now and will be in production in the second half of the year.

According to a story in the New York Times, in interviews James is not taking an adversarial stance toward her former company, stating that she respects Intel and that Ampere processors will handle specific cloud services workloads that are not in Intel’s bailiwick.

“I think they’re the best in the world at what they do,” James told the Times. “I just don’t think they’re doing what comes next.”

Renée James

“We have an opportunity with cloud computing to take a fresh approach with products that are built to address the new software ecosystem,” said James in a company announcement. “The workloads moving to the cloud require more memory, and at the same time, customers have stringent requirements for power, size and costs. The software that runs the cloud enables Ampere to design with a different point of view. The Ampere team’s approach and architecture meets the expectation on performance and power and gives customers the freedom to accelerate the delivery of the most memory-intensive applications and workloads such as AI, big data, storage and database in their next-generation data centers.”

When Paul Otellini ended his tenure as CEO of Intel in 2013, James was thought to be a candidate to take his role. As it turned out, James and Krzanich put forward a plan in which she would become president and he would be CEO, a proposal approved by Intel’s board of directors.

But she never lost sight of taking on a company’s top spot, and two years later, when she resigned from Intel, she stated in a memo sent to employees that “when Brian and I were appointed to our current roles, I knew then that being the leader of a company was something that I desired as part of my own leadership journey.”

The emergence of the new, 250-employee company from stealth mode comes at a time when the Meltdown and Spectre security design flaws in x86 processors have put Intel on the defensive.

On the other hand, Arm processors – available from companies such as Qualcomm and Cavium – have not yet made major inroads into the hyperscale cloud and data center server market, 98 percent of which is controlled by Intel.

The post Former Intel Pres Launches ARM-based Server Chip Venture appeared first on HPCwire.

NSB Issues Warning Call on U.S. STEM Worker Development

HPC Wire - Mon, 02/05/2018 - 13:41

Last week the National Science Board issued a companion policy statement – Our Nation’s Future Competitiveness Relies on Building a STEM-Capable U.S. Workforce – meant to reinforce worrisome data scattered throughout the 2018 National Science & Engineering Indicators report released in mid-January.

“The U.S. can no longer rely on a distinct and relatively small ‘STEM workforce.’ Instead, we need a STEM-capable U.S. workforce that leverages the hard work, creativity, and ingenuity of women and men of all ages, all education levels, and all backgrounds,” argues the NSB.

The policy statement is a manifesto around a topic that has resonated in the HPC community; however, it sometimes seems the community has grown “tone-deaf” because such calls have become perennial and progress seems sparse. NSB noted this too:

“Numerous entities, including the National Science Foundation (NSF), have undertaken a myriad of initiatives spanning decades aimed at leveraging the talents of all segments of our population, especially groups historically underrepresented in STEM. Yet, in spite of some progress, crippling disparities in STEM education remain…”

The NSB offers the following rather broad ideas, steering clear of specifics:

“Considering the increasing demands placed on students, workers, businesses, and government budgets, institutions must partner to build the U.S. workforce of the future. These joint efforts are necessary in order to prosper in an increasingly globally competitive knowledge- and technology-intensive world.

  • Governments at all levels should empower all segments of our population through investments in formal and informal education and workforce development throughout an individual’s life-span. This includes redoubling our commitment to training the next generation of scientists and engineers through sustained and predictable Federal investments in graduate education and basic research.
  • Businesses should invest in workplace learning programs–such as apprenticeships and internships–that utilize local talent. By leveraging partnerships between academic institutions and industry, such as those catalyzed by NSF’s Advanced Technological Education Program (ATE), businesses will be less likely to face a workforce “skills gap.”
  • Governments and businesses should expand their investments in community and technical colleges, which continue to provide individuals with on-ramps into skilled technical careers as well as opportunities for skill renewal and development for workers at all education levels throughout their careers.
  • To accelerate progress on diversifying the STEM-capable U.S. workforce, the Nation should continue to invest in underrepresented segments of the population and leverage Minority Serving Institutions to this end.
  • Collectively, we must proceed with urgency and purpose to ensure that this Nation and all our people are ready to meet the challenges and opportunities of the future.”

The lack of diversity and effective recruitment from underserved population segments is a point of emphasis in the policy statement. Another of the biggest worries called out in the statement is the decline in international graduate level STEM students and changing attitudes towards remaining in the U.S.

“While the U.S. remains the top destination for internationally mobile students, its share of these students declined from 25% in 2000 to 19% in 2015 as other countries increasingly compete for them…Our Nation’s ability to attract students from around the world is important, but our competitive advantage in this area is fully realized when these individuals stay to work in the United States post-graduation.

“The overall “stay rates” for foreign-born non-citizens who received a Ph.D. from U.S. institutions have generally trended upwards since the turn of the century, reaching 70% for both the 5-year and 10-year stay rates in 2015. However, the percentage of new STEM doctorates from China and India—the two top countries of origin—with definite plans to stay in the U.S. has declined over the past decade (from 59% to 49% for China and 62% to 51% for India). As other nations build their innovation capacity through investments in R&D and higher education, we must actively find ways to attract and retain foreign talent and fully capitalize on our own citizens.”

Link to the NSB companion policy statement: https://www.nsf.gov/news/news_summ.jsp?cntn_id=244391&WT.mc_id=USNSF_62&WT.mc_ev=click

HPCwire article on the full 2018 S&E Indicators Report: U.S. Leads but China Gains in NSF 2018 S&E Indicators Report

The post NSB Issues Warning Call on U.S. STEM Worker Development appeared first on HPCwire.

Quantum physics research boosted by grant, new hires

Colorado School of Mines - Mon, 02/05/2018 - 10:49

A Colorado School of Mines professor has received a $1.5 million National Science Foundation grant to develop an online gateway that will allow researchers around the world to run quick quantum-computing simulations with the click of a button and without having to invest millions of dollars before building their experimental platform.

Lincoln Carr, professor of physics and head of the Carr Theoretical Physics Research Group, is the principal investigator on the three-year Office of Advanced Cyberinfrastructure grant, which aims to tackle a major barrier to the goal of universal quantum computing.

While in principle fulfilling theoretical physicist Richard Feynman's original 1982 vision of quantum computing, today’s “analog” quantum computers, or quantum simulators, require their own dedicated experimental platforms, at a cost of several million dollars and months of work to perform a specific quantum calculation. 

Researchers led by Carr will develop widely accessible and easy-to-use software to shortcut those design considerations. To start, it will be an open-source software package, which any experimentalist can download and use to design and benchmark their quantum simulator architecture of choice. 

Ultimately, the goal is to provide an even simpler graphical version via web interface, working in collaboration with the Science Gateways Community Institute to build an online science gateway that allows experimentalists to run quick and secure simulations and tests on dedicated high-performance computing resources at Colorado School of Mines. 

“By creating a science gateway, you’ll go through a browser and press a button – just like on an iPad – you press a button on the little quantum problem you want to try, to model your quantum appliance, and then our supercomputer does the calculations and spits out the answer right into your browser,” Carr said. “It will enable not only much broader open-source science but it also enables citizen science. People without a technical background can try some quantum calculations with a press of a button.”

Carr’s grant builds on the work of a growing group of Mines professors in the area of quantum computing. Three new physics professors have been recently hired, each with expertise in a different computing platform: Assistant Professor Meenakshi Singh specializes in structured nanowires, Josephson junctions and semiconducting quantum computing; Assistant Professor Zhexuan Gong is an expert in ion trap quantum computing and will work with John Bollinger’s group at NIST; and Dr. Eliot Kapit, who will join Mines as an assistant professor in July, works on superconducting devices and has experimental collaborations with John Martinis at Google, David Schuster at University of Chicago and David Pappas at NIST, researching quantum many-body physics and error correction. A fourth professor will be hired this year. 

“With the second quantum revolution, the goals are pretty immediate – a universal quantum computer that can out-compute classical computers,” Carr said. “What Mines is doing is very intelligently investing in that area on multiple fronts by doing a cluster hire. We’re investing in the next quantum leap.”

On a more fundamental level, Carr also recently had a paper published in the journal Physical Review Letters, the sixth quantum physics paper published by Mines faculty in 2017 in the top physics journal. The paper, “Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks,” published in December, addresses the foundational issue of complexity and its origins.

“What we discovered is a way to apply the same tool we use on the brain to understand human thought to quantum states. I saw complex networks in quantum states,” Carr said. “Up until now people have thought of quantum complexity as maybe a qualitative thing, maybe a buzz word, people debate about it. We put a number to it, several numbers, and we clarified what complexity is in quantum states. No one has done that – Mines students are awesome.”
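
The Physical Review Letters paper is far more involved, but the basic object it works with, a network whose edge weights are the pairwise quantum mutual information between parts of a system, can be sketched in a few dozen lines. The sketch below uses NumPy, a small qubit count and a GHZ test state purely as illustrative assumptions; it is not the code or the model system used by the Carr group.

# Illustrative sketch: build a "mutual information complex network" from a
# small pure quantum state. The edge weight between qubits i and j is
# I(i:j) = S(i) + S(j) - S(ij), with S the von Neumann entropy.
import numpy as np


def reduced_density_matrix(psi: np.ndarray, keep: list, n: int) -> np.ndarray:
    """Trace out all qubits except those in `keep` from an n-qubit state vector."""
    tensor = psi.reshape([2] * n)
    tensor = np.moveaxis(tensor, keep, list(range(len(keep))))
    mat = tensor.reshape(2 ** len(keep), -1)
    return mat @ mat.conj().T


def von_neumann_entropy(rho: np.ndarray) -> float:
    """Entropy in bits; tiny eigenvalues are dropped for numerical safety."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))


def mutual_information_network(psi: np.ndarray, n: int) -> np.ndarray:
    """Symmetric matrix of pairwise mutual informations (the network's weighted adjacency)."""
    single = [von_neumann_entropy(reduced_density_matrix(psi, [i], n)) for i in range(n)]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s_ij = von_neumann_entropy(reduced_density_matrix(psi, [i, j], n))
            adj[i, j] = adj[j, i] = single[i] + single[j] - s_ij
    return adj


if __name__ == "__main__":
    n = 5
    ghz = np.zeros(2 ** n, dtype=complex)      # assumed test state, not from the paper
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)          # (|00000> + |11111>) / sqrt(2)
    print(np.round(mutual_information_network(ghz, n), 3))  # every pair shares 1 bit

For the GHZ state every pair of qubits shares exactly one bit of mutual information, so the resulting network is fully connected with uniform weights; more structured quantum states produce the richer network topologies that the paper quantifies.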

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

 

Categories: Partner News

XSEDE’s Maverick Helps Explore Next Generation Solar Cells and LEDs

HPC Wire - Mon, 02/05/2018 - 09:16

Feb. 5, 2018 — Solar cells can’t stand the heat. Photovoltaics lose some energy as heat in converting sunlight to electricity. The reverse holds true for lights made with light-emitting diodes (LED), which convert electricity into light. Some scientists think there might be light at the end of the tunnel in the hunt for better semiconductor materials for solar cells and LEDs, thanks to supercomputer simulations that leveraged graphics processing units to model nanocrystals of silicon.

Defect-induced conical intersections (DICIs) allow one to connect material structure to the propensity for nonradiative decay, a source of heat loss in solar cells and LED lights. XSEDE Maverick supercomputer allocation accelerated the quantum chemistry calculations. Credit: Ben Levine.

Scientists call the heat loss in LEDs and solar cells non-radiative recombination. And they’ve struggled to understand the basic physics of this heat loss, especially for materials with molecules of over 20 atoms.

“The real challenge here is system size,” explained Ben Levine, associate professor in the Department of Chemistry at Michigan State University. “Going from that 10-20 atom limit up to 50-100-200 atoms has been the real computation challenge here,” Levine said. That’s because the calculations involved scale with the size of the system to some power, sometimes four or up to six, Levine said. “Making the system ten times bigger actually requires us to perform maybe 10,000 times more operations. It’s really a big change in the size of our calculations.”
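
As a quick check of that claim: if the cost of a calculation grows as the fourth power of the system size N, then a ten-fold larger system requires (10N)^4 / N^4 = 10^4 = 10,000 times as many operations, and with sixth-power scaling the same factor of ten becomes 10^6, a million-fold increase.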

Levine’s calculations involve a concept in molecular photochemistry called a conical intersection – points of degeneracy between the potential energy surfaces of two or more electronic states in a closed system. A perspective study published September of 2017 in the Journal of Physical Chemistry Letters found that recent computational and theoretical developments have enabled the location of defect-induced conical intersections in semiconductor nanomaterials.

“The key contribution of our work has been to show that we can understand these recombination processes in materials by looking at these conical intersections,” Levine said. “What we’ve been able to show is that the conical intersections can be associated with specific structural defects in the material.”

The holy grail for materials science would be to predict the non-radiative recombination behavior of a material based on its structural defects. These defects come from ‘doping’ semiconductors with impurities to control and modulate their electrical properties.

Looking beyond the ubiquitous silicon semiconductor, scientists are turning to silicon nanocrystals as candidate materials for the next generation of solar cells and LEDs. Silicon nanocrystals are molecular systems in the ballpark of 100 atoms with extremely tunable light emission compared to bulk silicon. And scientists are limited only by their imagination in ways to dope and create new kind of silicon nanocrystals.

“We’ve been doing this for about five years now,” Levine explained about his conical intersection work. “The main focus of our work has been proof-of concept, showing that these are calculations that we can do; that what we find is in good agreement with experiment; and that it can give us insight into experiments that we couldn’t get before,” Levine said.

Levine addressed the computational challenges of his work using graphics processing unit (GPU) hardware, the kind typically designed for computer games and graphics design. GPUs excel at churning through linear algebra calculations, the same math involved in Levine’s calculations that characterize the behavior of electrons in a material. “Using the graphics processing units, we’ve been able to accelerate our calculations by hundreds of times, which has allowed us to go from the molecular scale, where we were limited before, up to the nano-material size,” Levine said.
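
The kind of dense linear algebra that maps well to GPUs is easy to demonstrate. The sketch below, which assumes the CuPy library and an NVIDIA GPU and falls back to plain NumPy on the CPU otherwise, times one large single-precision matrix multiplication, the basic building block of the electronic-structure math described above; it illustrates the hardware point and is not Levine's quantum chemistry code.

# Minimal GPU-vs-CPU linear algebra demo; assumes CuPy and an NVIDIA GPU,
# falling back to plain NumPy if CuPy is not available.
import time
import numpy as np

try:
    import cupy as xp
    on_gpu = True
except ImportError:
    xp = np
    on_gpu = False

n = 4096
a = xp.random.rand(n, n).astype(xp.float32)
b = xp.random.rand(n, n).astype(xp.float32)
if on_gpu:
    xp.cuda.Device().synchronize()      # make sure the inputs are ready before timing

start = time.perf_counter()
c = a @ b                               # dense matrix multiply, the workhorse operation
if on_gpu:
    xp.cuda.Device().synchronize()      # wait for the asynchronous GPU kernel to finish
elapsed = time.perf_counter() - start

print(f"{'GPU' if on_gpu else 'CPU'} multiply of {n}x{n}: {elapsed:.3f} s, "
      f"checksum {float(c.sum()):.3e}")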

Cyberinfrastructure allocations from XSEDE, the eXtreme Science and Engineering Discovery Environment, gave Levine access to over 975,000 compute hours on the Maverick supercomputing system at the Texas Advanced Computing Center (TACC). Maverick is a dedicated visualization and data analysis resource architected with 132 NVIDIA Tesla K40 “Atlas” GPUs, providing remote visualization and GPU computing to the national community.

“Large-scale resources like Maverick at TACC, which have lots of GPUs, have been just wonderful for us,” Levine said. “You need three things to be able to pull this off. You need good theories. You need good computer hardware. And you need facilities that have that hardware in sufficient quantity, so that you can do the calculations that you want to do.”

Levine explained that he got started using GPUs to do science ten years ago back when he was in graduate school, chaining together SONY PlayStation 2 video game consoles to perform quantum chemical calculations. “Now, the field has exploded, where you can do lots and lots of really advanced quantum mechanical calculations using these GPUs,” Levine said. “NVIDIA has been very supportive of this. They’ve released technology that helps us do this sort of thing better than we could do it before.” That’s because NVIDIA developed GPUs to more easily pass data, and they developed the popular and well-documented CUDA interface.

“A machine like Maverick is particularly useful because it brings a lot of these GPUs into one place,” Levine explained. “We can sit down and look at 100 different materials or at a hundred different structures of the same material. We’re able to do that using a machine such as Maverick. Whereas a desktop gaming machine has just one GPU, so we can do one calculation at a time. The large-scale studies aren’t possible,” said Levine.

Now that Levine’s group has demonstrated the ability to predict conical intersections associated with heat loss from semiconductors and semiconductor nanomaterials, he said the next step is to do materials design in the computer.

Said Levine: “We’ve been running some calculations where we use a simulated evolution, called a genetic algorithm, where you simulate the evolution process. We’re actually evolving materials that have the property that we’re looking for, one generation after the other. Maybe we have a pool of 20 different molecules. We predict the properties of those molecules. Then we randomly pick, say, less than ten of them that have desirable properties. And we modify them in some way. We mutate them. Or in some chemical sense ‘breed’ them with one another to create new molecules, and test those. This all happens automatically in the computer. A lot of this is done on Maverick also. We end up with a new molecule that nobody has ever looked at before, but that we think they should look at in the lab. This automated design process has already started.”
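
What Levine describes is a textbook genetic algorithm. The sketch below mirrors the loop in the quote (a pool of candidates, selection of the fittest, crossover and mutation) with stand-in bit-string "molecules" and a placeholder fitness function; in the real workflow the fitness of each candidate would come from the expensive conical-intersection calculations described above, not from anything shown here.

# Toy genetic algorithm in the spirit of the quote: keep a pool of candidates,
# select the fittest, "breed" and mutate them, repeat. Real fitness would come
# from quantum chemistry calculations, not this placeholder.
import random

GENES = 16          # stand-in encoding of a "molecule" as a bit string
POOL_SIZE = 20      # "a pool of 20 different molecules"
PARENTS = 8         # "we randomly pick, say, less than ten of them"
GENERATIONS = 30


def fitness(molecule):
    """Placeholder score; in practice a predicted figure of merit from simulation."""
    return sum(molecule) / GENES


def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]            # "breed" two parents into a child


def mutate(molecule, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in molecule]


def evolve():
    pool = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POOL_SIZE)]
    for _ in range(GENERATIONS):
        parents = sorted(pool, key=fitness, reverse=True)[:PARENTS]
        pool = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POOL_SIZE)]
    return max(pool, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(best, f"fitness={fitness(best):.2f}")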

The study, “Understanding Nonradiative Recombination through Defect-Induced Conical Intersections,” was published September 7, 2017 in the Journal of Physical Chemistry Letters (DOI: 10.1021/acs.jpclett.7b01707). The study authors are Yinan Shu (University of Minnesota); B. Scott Fales (Stanford University, SLAC); Wei-Tao Peng and Benjamin G. Levine (Michigan State University). The National Science Foundation funded the study (CHE-1565634).

Source: Jorge Salazar, TACC

The post XSEDE’s Maverick Helps Explore Next Generation Solar Cells and LEDs appeared first on HPCwire.

ISC High Performance Organizers Announce Return of ISC STEM Student Day

HPC Wire - Mon, 02/05/2018 - 08:36

FRANKFURT, Germany, Feb. 5, 2018 – The organizers of ISC High Performance are excited to announce the return of the ISC STEM Student Day, with an attractive program for students and sponsoring organizations. This year, students attending the program will receive an exclusive tutorial on high performance computing, machine learning and data analytics, and sponsoring organizations will receive an increased visibility during the full-day program.

From all received applications, 200 regional and international Science, Technology, Engineering, and Mathematics (STEM) students who show a keen interest in HPC will be admitted into this program for free. These students range from those enrolled in bachelor’s degree programs right through to those completing their PhD research in fields such as computer science, computer engineering, information technology, autonomous systems, physics and mathematics. The age group ranges from 19 to 30.

This non-profit effort was first introduced in 2017, attracting around 100 students, mainly from Germany, the US, the UK, Spain and China, and was sponsored by 10 organizations in the HPC space.

“Unlike regular STEM events, the aim of our program is to connect the next generation of regional and international STEM practitioners with the HPC industry and its key players. We hope this will encourage them to pursue careers in this space, and we will see them as part of the HPC user community in the future.”

“The ISC STEM Student Day is also a great opportunity for organizations to associate themselves as STEM employers and invest in their future HPC user base,” said Martin Meuer, the general co-chair of the ISC High Performance conference series. 

The 2018 ISC STEM Student Day will take place on Wednesday, June 27, during the ISC High Performance conference and exhibition. The full conference will be held from Sunday, June 24 to Thursday, June 28 at Messe Frankfurt and expects 3,500 attendees. 

This program is very attractive to organizations that offer HPC training and education opportunities, as well as employment opportunities, since some of these students will graduate soon. Please contact sales@isc-group.com to get involved.

A returning builder-level sponsor is PRACE, whose Communication Officer, Marjolein Oorsprong, had the following to say about the event:

“The ISC STEM Student Day (gala evening and booth visits) allowed PRACE to come into direct contact with students who showed a keen interest in HPC and inform them of all our student-related activities. It provided an opportunity to explain to them how participating in our training events can benefit their studies. By sponsoring events such as the ISC STEM Student Day, we hope to attract more applications to the PRACE Summer of HPC program, the International HPC Summer School, and other PRACE Training courses.”

“Experienced members of our executive and management staff were not only given the chance to share their knowledge from within the industry, but also gain valuable insights from academic leaders and have constructive one-on-one conversations with students at the ISC STEM Day & Gala. This event is a great opportunity for tech organizations, such as Super Micro, to meet the brightest stars in STEM,” remarked Cheyene van Noordwyk, Marketing Manager at Super Micro Computer.

The 2018 ISC STEM program includes a day tutorial on HPC Applications, Systems & Programming Languages, conducted by Dr. Bernd Mohr, Senior Scientist at the Jülich Supercomputing Centre, and a tutorial on Machine Learning & Data Analytics by his colleague, Prof. Morris Riedel, who is the Head of the Research Group High Productivity Data Processing at the Jülich Supercomputing Centre and an Adjunct Associated Professor in high productivity processing of big data at the School of Engineering and Natural Sciences, University of Iceland.

Later in the day, the students will have the chance to visit the booths and exhibits of sponsoring organizations to gain an impression of their product offerings and business units. Another highlight of the program is the evening event, which includes a dinner and a job fair set in a casual atmosphere in a nearby hotel. All sponsoring organizations will have an exclusive two hours of face-to-face time with the students at the STEM job fair.

Students will be able to register for the program starting mid-April via the program webpage.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Source: ISC High Performance

The post ISC High Performance Organizers Announce Return of ISC STEM Student Day appeared first on HPCwire.

BOXX Demos APEXX S3 Workstation at SOLIDWORKS World 2018

HPC Wire - Mon, 02/05/2018 - 08:04

AUSTIN, Tex., Feb. 5, 2018 — BOXX Technologies, a leading innovator of high-performance workstations, rendering systems, and servers, today announced that APEXX S3, the world’s fastest workstation, will be demonstrated at BOXX booth #811 at SOLIDWORKS World 2018, Feb. 4-7, at the Los Angeles Convention Center in Los Angeles, CA. A Dassault Systèmes Corporate Partner, designated SOLIDWORKS Solution Partner, and leading manufacturer of CATIA and SOLIDWORKS-certified workstations, BOXX will also debut APEXX T3, a new 16-core AMD Ryzen Threadripper workstation, as well as two GoBOXX mobile workstations. BOXX will continue its longstanding tradition of allowing attendees to test systems by bringing along their assemblies.

“We design our own chassis so we’re SOLIDWORKS users ourselves,” said BOXX VP of Engineering Tim Lawrence. “That experience provides us with a unique perspective and clear understanding of engineering and product design workflows, as well as which configurations will provide optimal application performance. Our best-selling APEXX S3 is the embodiment of BOXX expertise.”

APEXX S3 features the latest Intel Core i7 8700K processor overclocked to 4.8 GHz. The liquid-cooled system sustains that frequency across all cores—even in the most demanding situations. 8th generation Intel processors offer a significant performance increase over previous Intel technology, and BOXX is the only workstation manufacturer providing the new microarchitecture professionally overclocked and backed by a three-year warranty. In addition to overclocking, BOXX went a step further by removing unused, outdated technology (like optical drive bays) in order to maximize productive space. Inside its compact, industrial chassis, the computationally dense APEXX S3 supports up to two dual-slot NVIDIA or AMD Radeon Pro professional graphics cards, an additional single-slot card, and features solid state drives and faster memory at 2600MHz DDR4.

At SOLIDWORKS World, the best-in-class APEXX S3, configured with dual NVIDIA Quadro P5000 GPUs, will demonstrate SOLIDWORKS 2018, as well as an upcoming version of SOLIDWORKS Visualize featuring OptiX AI-accelerated denoising technology. The workstation offers professional grade performance for all 3D CAD, animation, motion media, and rendering applications and will be utilized inside the SOLIDWORKS booth as well.

Along with the APEXX S3, attendees will have an opportunity to see the soon-to-be-released APEXX T3, an AMD-based workstation featuring a 16-core Ryzen Threadripper processor and Radeon Pro WX9100 GPU. At the BOXX booth, APEXX T3 will demonstrate both SOLIDWORKS 2018 and AMD ProRender for SOLIDWORKS, while also making its debut inside the AMD booth.

The BOXX booth is also home to mobile workstation demonstrations, including the GoBOXX MXL VR. Designed for engineers and architects eager to incorporate mobile virtual reality into their workflow, GoBOXX MXL VR features a true desktop-class Intel Core i7 processor (4.0GHz), NVIDIA GeForce graphics, and up to 64GB of RAM. GoBOXX MXL VR will demonstrate a SOLIDWORKS to VR workflow, as will the ultra-thin GoBOXX SLM VR. Featuring a four-core Intel Core i7 processor and professional NVIDIA Quadro graphics on a 15″ full HD (1920×1080) display, GoBOXX SLM VR provides ample performance and reliability, enabling engineers and product designers to work from anywhere.

“Year after year, our booth is a top destination for SOLIDWORKS World attendees,” said Shoaib Mohammad, BOXX VP of Marketing and Business Development. “We offer consultation with experts, allow user participation in our demos, and help SOLIDWORKS users determine firsthand which BOXX solutions will suit their workflows and empower them to work faster and more efficiently than ever before.”

For further information and pricing, contact a BOXX sales consultant in the US at 1-877-877-2699. Learn more about BOXX systems, finance options, and how to contact a worldwide reseller, by visiting www.boxx.com.

About BOXX Technologies

BOXX is a leading innovator of high-performance computer workstations, rendering systems, and servers for engineering, product design, architecture, visual effects, animation, deep learning, and more. For 22 years, BOXX has combined record-setting performance, speed, and reliability with unparalleled industry knowledge to become the trusted choice of creative professionals worldwide. For more information, visit www.boxx.com.

Source: BOXX Technologies

The post BOXX Demos APEXX S3 Workstation at SOLIDWORKS World 2018 appeared first on HPCwire.

Frank Baetke Departs HPE, Stays on as EOFS Chairman

HPC Wire - Fri, 02/02/2018 - 13:03

EOFS Chairman Frank Baetke announced today that he is leaving HPE, where he served as global HPC technology manager for many years. He will keep his post as chair of the European Open File System (EOFS) organization, a position he was elected to in 2015. Baetke was a long-time liaison for HP-CAST, HPE’s user group meeting for high performance computing.

In a letter to the EOFS community, Baetke writes:

I’m no longer with HPE officially, but will definitely remain active on the HPC landscape. I will also remain active as EOFS Chairman representing HPE as a delegate as membership is by corporation or institution.

In that role I will be planning again for Birds of a Feather Sessions (BoFs) at the upcoming ISC’18 in June – the success we had at SC17 with two very well-attended BoFs is really encouraging. Let me also remind you of the upcoming Lustre User Group Conference, LUG 2018, organized by OpenSFS. The conference will be held April 23-26, 2018 at Argonne National Laboratory in Argonne, Illinois; see http://opensfs.org/events/lug-2018. Registration is open.

I’m sure I’ll meet most of you at the upcoming ISC’18 event in Frankfurt, June 24-28; see http://isc-hpc.com/ 

HP-CAST 30 will take place prior to ISC 2018 at the Marriott Frankfurt as planned. We are told that HPE’s Liz King and Ben Bennett are now at the helm.

The post Frank Baetke Departs HPE, Stays on as EOFS Chairman appeared first on HPCwire.

ASC18, featuring AI and Nobel Prize-winning applications, opens in Beijing. Finals to be hosted by Nanchang University in May

HPC Wire - Fri, 02/02/2018 - 09:57

On January 30, the opening ceremony of the 2018 ASC Supercomputer Challenge (ASC18) was held in Beijing. The event was attended by hundreds of spectators, including academicians from the Chinese Academy of Engineering (CAE), heads of supercomputing centers, experts in supercomputing and AI, as well as professors and students from participating teams. The ceremony marks the formal start of a two-month competition that will see more than 300 teams of university students from across the world participate, with the top 20 competing in the final round at Nanchang University in May. At the ceremony, organizers explained this year’s challenges, including AI machine reading and comprehension and a Nobel Prize-winning application.

The AI challenge, provided by Microsoft, will give students the chance to tackle Answer Prediction for Search Query, a natural language reading and comprehension task. Teams must create an AI answer prediction method and model based on massive amounts of data generated by real questions from search engines such as Bing or voice assistants such as Cortana, providing accurate answers to queries and facilitating the development of AI to better address this cognitive challenge.
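
Competition entries will rely on large neural models trained on the Microsoft data, but the shape of the task can be illustrated with a deliberately crude baseline: score candidate sentences by token overlap with the query and return the best match as the "answer". The passages, scoring rule and function names below are illustrative assumptions, not part of the ASC18 challenge kit.

# Toy answer-prediction baseline: pick the sentence whose tokens overlap the
# query most. Real ASC18 entries train neural readers on Microsoft's data;
# this only illustrates the input/output shape of the task.
import re


def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def predict_answer(query, passages):
    q = tokens(query)
    best, best_score = "", -1.0
    for passage in passages:
        for sentence in re.split(r"(?<=[.!?])\s+", passage):
            score = len(q & tokens(sentence)) / (len(q) or 1)
            if score > best_score:
                best, best_score = sentence, score
    return best


if __name__ == "__main__":
    passages = [  # hypothetical retrieved snippets
        "The hippocampus is a brain structure. It helps form short-term memories.",
        "Cryo-electron microscopy won the 2017 Nobel Prize in Chemistry.",
    ]
    print(predict_answer("which prize did cryo-electron microscopy win", passages))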

ASC18 also includes a supercomputing application for cryo-electron microscopy, a newly developed technology whose developers were awarded the 2017 Nobel Prize in Chemistry. Cryo-electron microscopy allows scientists to solve challenges in structural biology beyond the scope of traditional X-ray crystallography; the ASC18 challenge is based on RELION, a 3D reconstruction software package used in the field. By including RELION among the challenges of this year’s ASC, the competition organizers aim to keep today’s computing students abreast of the latest cutting-edge developments in scientific discovery and spark their passion for exploring the unknown.

ASC will also host a two-day training camp to get participants up to speed for the competition. Supercomputing and AI experts from the Chinese Academy of Sciences, Microsoft, NVIDIA, Intel, and Inspur will lead sessions on designing and building supercomputer systems, the latest architectures of accelerated computing, AI application optimization and other related topics.

Initiated by China, the ASC Student Supercomputer Challenge is the world’s biggest student supercomputer competition. Since its launch, the ASC challenge has been held 7 times, supported by experts and institutions from across Asia, the US and Europe. Through promoting exchanges and furthering the development of talented young minds in the field of supercomputing around the world, the ASC aims to improve applications and R&D capabilities of supercomputing and accelerate technological and industrial innovation.

“ASC allows participating students to improve their knowledge and ability through practical application,” said Wang Endong, founder of the ASC, member of the CAE, and the chief scientist at Inspur. “This helps nurture well-rounded talents who have both the knowledge and skills for operating software and hardware. Such talents will be better able to play a role in the development of the AI industry.”

Li Bohu, a member of the Chinese Academy of Engineering, said that a deep integration of supercomputing and modeling and simulation has become the third research method for understanding and transforming the world, following theoretical study and experimental research. Li anticipated that further integration and development of supercomputing and other expertise will revolutionize society, transform industries, and lead to major changes in how we live and work.

Deng Xiaohua, the vice president of Nanchang University and host of the final round of the ASC18, said that as one of China’s Project 211 institutions, a member of the high-level university project and a university focusing on building world-class disciplines, Nanchang University has always placed high importance on enhancing students’ ability to study and work with supercomputers. The university is dedicated to advancing innovations in scientific research, AI, and big data through supercomputing. As an international platform for supercomputing and the development of talented minds, ASC18 will greatly advance the development of new talents in Nanchang University and further international exchanges and cooperation.

The post ASC18, featuring AI and Nobel Prize-winning applications, opens in Beijing. Finals to be hosted by Nanchang University in May appeared first on HPCwire.

Stanford Lab Uses Blue Waters Supercomputer to Study Epilepsy

HPC Wire - Fri, 02/02/2018 - 08:02

Feb. 2, 2018 — Epilepsy is the fourth most common human neurological disorder in the world—a disorder characterized by recurrent, unprovoked seizures. According to the Centers for Disease Control, a record number of people in the United States have epilepsy: 3.4 million total, including nearly half a million children. At this time, there’s no known cause or cure, but with the help of NCSA’s Blue Waters supercomputer at the University of Illinois at Urbana-Champaign, researchers like Ivan Soltesz are making progress.

The Soltesz lab at Stanford University is using Blue Waters to create realistic models of the hippocampus in rat brains. The hippocampus is a seahorse-shaped structure located in the temporal lobe of the brain, and is responsible for forming short-term memories. The hippocampus is thought to be the site of origin of temporal lobe epilepsy, which is the most common variant of the disease.

Generally, areas of the brain process information independently but periodically synchronize their activity when information is transferred across brain regions, for instance as part of the formation of new memories. However, in the epileptic brain, this process of synchronization can result in seizures.

“There is a theory that the healthy hippocampus generates very strong oscillations in the electric field during movement. In epilepsy it is thought that something goes wrong in the mechanisms that generate these oscillations. Instead of normal oscillations, seizures are generated,” says Ivan Raikov, who is part of the Soltesz lab.

The team’s computational research aims to replicate the results of past behavioral research. In those experiments, researchers had rats run on treadmills for several minutes until cells in the hippocampus recognized locations. Often the rats had to run repeatedly to learn their environment.

With the computational modeling on Blue Waters, the Soltesz team began by simulating experiments spanning ten seconds of real time in order to model those same rat-on-a-treadmill processes. However, even 10-second simulations can take an incredible amount of computer processing power to complete. Using 2,000 nodes (64,000 core equivalents) on Blue Waters, these simulations still take 14 hours of compute time. If this were done on a new, high-end laptop it would take almost 26 years to simulate one 10-second experiment.
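
The laptop figure is consistent with simple arithmetic, assuming a laptop with roughly four cores (the article does not specify one): 2,000 nodes at 32 cores per node gives the stated 64,000 cores; 64,000 cores times 14 hours is 896,000 core-hours; and 896,000 core-hours spread over 4 laptop cores is 224,000 hours, or about 25.6 years.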

“Running long, realistic simulations would be impossible to do on other systems,” Raikov says. “It is the computational capacity of Blue Waters that makes it possible to have at least several tens of seconds of physical time represented.”

The team is optimistic that these simulations will help them acquire needed basic knowledge: the role of the hippocampus and how information is transmitted. Understanding how this region of the brain works should eventually allow them to relate the structural differences between a typical brain and one affected by epilepsy to differences in how the two behave.

“We’re hoping once we understand the basic principles of how oscillations are generated in the hippocampus, we can incorporate epileptic changes in our models and how that changes the oscillations,” Raikov says. “If we model the damage that is caused by epilepsy, can we have a simulation that generates seizures or seizure-like behavior? In that way we hope to see some connection between the changes that occur in the brain during the seizure and other pathological events.”

In addition to uncovering the relationship between the hippocampus and pathology, they are also looking to answer what they consider a very fundamental question: How is spatial information represented in the hippocampus and does the oscillatory process have anything to do with it?

“These are very big questions in hippocampal neuroscience, and we believe that understanding those will give us a way to understand episodic memory specifically,” Raikov says.

For other epilepsy related research using Blue Waters, see Curtis Johnson’s work in the Blue Waters Annual Report and on the Blue Waters site.

Source: Susan Szuch, NCSA

The post Stanford Lab Uses Blue Waters Supercomputer to Study Epilepsy appeared first on HPCwire.

Deep Learning Portends ‘Sea Change’ for Oil and Gas Sector

HPC Wire - Thu, 02/01/2018 - 22:17

The billowing compute and data demands that spurred the oil and gas industry to be the largest commercial users of high-performance computing are now propelling the competitive sector to deploy the latest AI technologies. Beyond the requirement for accurate and speedy seismic and reservoir simulation, oil and gas operations face torrents of sensor, geolocation, weather, drilling and seismic data. The sensor data alone from one off-shore rig can accrue to hundreds of terabytes annually; most of it, however, remains unanalyzed dark data.

A collaboration between Nvidia and Baker Hughes, a GE company (BHGE) — one of the world’s largest oil field services companies — kicked off this week to address these big data challenges by applying deep learning and advanced analytics to improve efficiency and reduce the cost of energy exploration and distribution. The partnership leverages accelerated computing solutions from Nvidia, including DGX-1 servers, DGX Station and Jetson, combined with BHGE’s fullstream analytics software and digital twins to target end-to-end oil and gas operations.

“It makes sense if you think about the nature of the operations, many remote sites, often in difficult locations,” said Binu Mathew, vice president of digital development at Baker Hughes. “But also when you look at it from an industry standpoint, there’s a ton of data being generated, a lot of information, and you often have it in two windows: you have an operator who will have multiple streams of data coming in, but relatively little actual information, because you’ve got to use your own experience to figure out what to do.

“On the flip side you have a lot of very smart, very capable engineers who are very good at building physics models, geological models, who often take weeks or months to fill out these models and run simulations, so they operate in that kind of timeframe. In between you’ve got a big challenge of not being able to have enough actual data crossing silos into a system that can analyze this data that you can take operational action from. This is the area that we at Baker Hughes Digital plan to address. We plan to do it because the technologies are now available in the industry: the rise of computational power and the rise of analytical techniques.”

Mathew’s account of the magnitude of data being generated by the industry leaves little doubt that this is an exascale-class challenge that requires new approaches and efficiencies.

“Even if you don’t talk about things like imaging data — which adds a whole order of magnitude to it — but, just in terms of what you’d call semi-structured data, essentially data coming up from various sensors, it’s in the hundreds of petabytes annually,” Mathew said. “And if you take a deep water rig you’re talking about in the region of a terabyte of data coming in per day. To analyze that kind of data at that kind of scale, the computational power will run into the exaflops and potentially well beyond.”
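Those figures hang together as an order-of-magnitude estimate: at about a terabyte per day, a single deep-water rig generates roughly a third of a petabyte per year, so reaching hundreds of petabytes annually implies on the order of a thousand rig-scale assets reporting in across the industry. A rough sketch of that estimate follows; the industry-wide total and asset count are illustrative assumptions, not figures from the interview.

```python
# Order-of-magnitude check of the data volumes quoted above.
tb_per_day_per_rig = 1.0       # "a terabyte of data coming in per day" for a deep-water rig
days_per_year = 365

pb_per_rig_per_year = tb_per_day_per_rig * days_per_year / 1_000
print(f"~{pb_per_rig_per_year:.2f} PB per rig per year")          # ~0.37 PB

# Assumption (illustrative): an industry-wide total of 300 PB per year.
target_pb_per_year = 300
rig_scale_assets = target_pb_per_year / pb_per_rig_per_year
print(f"~{rig_scale_assets:,.0f} rig-scale assets to reach {target_pb_per_year} PB/year")
```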

Like an increasing number of groups across academia and industry, Baker Hughes is tackling this extreme-scale challenge using a combination of physics-based and probabilistic models.

“You cannot analyze all that data without something like AI,” said Mathew. “If you go back to the practical models, the oil and gas industry has been very good at coming up with physics based models, and they will still be absolutely key at the core for modeling seismic phenomenon. But to scale those models requires combining physics models with the pattern matching capabilities that you get with AI. That’s the sea change we’ve seen in the last several years. If you look at image recognition and so on, deep learning techniques are now matching or exceeding human capabilities. So if you combine those things together you get into something that’s a step change from what’s been possible before.”
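One common way to realize the combination Mathew describes is residual learning: a fast physics-based model supplies a baseline prediction, and a neural network is trained only on the gap between that baseline and observed sensor data, so the learned component corrects what the physics misses rather than replacing it. The sketch below illustrates that pattern on synthetic data; the variables, model structure and library choice are assumptions for illustration, not BHGE's actual implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def physics_model(pressure, flow):
    """Stand-in for a simplified physics-based estimate of a quantity of
    interest (e.g., expected pump load). Purely illustrative."""
    return 0.8 * pressure + 0.1 * flow

# Synthetic "sensor" data: the true system deviates from the physics model
# in a nonlinear way that the physics model does not capture.
pressure = rng.uniform(50, 150, size=2_000)
flow = rng.uniform(10, 40, size=2_000)
observed = physics_model(pressure, flow) + 5 * np.sin(pressure / 20) \
           + rng.normal(0, 0.5, size=2_000)

X = np.column_stack([pressure, flow])
baseline = physics_model(pressure, flow)

# Train the network on the residual only: the physics supplies the bulk of
# the signal, and the learned model corrects what the physics misses.
residual_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2_000,
                              random_state=0)
residual_model.fit(X, observed - baseline)

def hybrid_predict(pressure, flow):
    p = np.atleast_1d(pressure).astype(float)
    f = np.atleast_1d(flow).astype(float)
    X_new = np.column_stack([p, f])
    return physics_model(p, f) + residual_model.predict(X_new)

print(hybrid_predict(100.0, 25.0))   # physics baseline plus learned correction
```

The appeal of this split is that the learned component stays small and data-efficient, while the interpretable physics model continues to carry most of the prediction.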

Nvidia is positioning its GPU technologies to fuel this transformation by powering accelerated analytics and deep learning across the spectrum of oil and gas operations.

“With GPU-accelerated analytics, well operators can visualize and analyze massive volumes of production and sensor data such as pump pressures, flow rates and temperatures,” stated Nvidia’s Tony Paikeday in a blog post. “This can give them better insight into costly issues, such as predicting which equipment might fail and how these failures could affect wider systems.

“Using deep learning and machine learning algorithms, oil and gas companies can determine the best way to optimize their operations as conditions change,” Paikeday continued. “For example, they can turn large volumes of seismic data images into 3D maps to improve the accuracy of reservoir predictions. More generally, they can use deep learning to train models to predict and improve the efficiency, reliability and safety of expensive drilling and production operations.”

The collaboration with BHGE will leverage Nvidia’s DGX-1 servers for training models in the datacenter; the smaller DGX Station for computing deskside or in remote, bandwidth-challenged sites; and the Nvidia Jetson for powering real-time inferencing at the edge.

Jim McHugh, Nvidia vice president and general manager, said in an interview that Nvidia excels at bringing together this raw processing power with an entire ecosystem: “Not only our own technology, like CUDA, Nvidia drivers, but we also bring all the leading frameworks together. So when people are going about doing deep learning and AI, and then the training aspect of it, the most optimized frameworks run on DGX, and are available via our NGC [Nvidia GPU cloud] as well.”

Cloud connectivity is a key enabler of the end-to-end platform. “One of the things that allows us to access that dark data is the concept of edge to cloud,” said Mathew. “So you’ve got the Jetsons out at the edge streaming into the clouds, where we can do the training of these models because training is much heavier and using DGX-1 boxes helps enormously with that task and running the actual models in production.”

Baker Hughes says it will work closely with customers to provide them with a turnkey solution. “The oil and gas industry isn’t homogeneous, so we can come out with a model that largely fits their needs but with enough flexibility to tweak,” said Mathew. “And some of that comes inherently from the capabilities you have in these techniques, they can auto-train themselves, the models will calibrate and train to the data that’s coming in. And we can also tweak the models themselves.”

For Nvidia, partnering with BHGE is part of a broader strategy to work with leading companies to bring AI into every industry. The self-proclaimed AI computing company believes technologies like deep learning will effect a strong virtuous cycle.

“The thing about AI is when you start leveraging the algorithms in deep neural networks, you end up developing an insatiable desire for data because it allows you to get new discoveries and connections and correlations that weren’t possible. We are coming from a time when people suffered from a data deluge; now we’re in something new where more data can come, that’s great,” said McHugh.

Doug Black, managing editor of EnterpriseTech, contributed to this report.

The post Deep Learning Portends ‘Sea Change’ for Oil and Gas Sector appeared first on HPCwire.

UK Space Agency Backs Development of Supercomputing Technologies

HPC Wire - Thu, 02/01/2018 - 15:53

GLASGOW, Scotland, Feb. 01, 2018 — The UK Space Agency has awarded more than £4 million to Spire Global, a satellite powered data company, to demonstrate cutting-edge space technology including ‘parallel super-computing’, UK Government ministers Lord Henley and Lord Duncan announced today.

Today’s announcement gives the green light to missions designed to showcase the technology and put UK companies into orbit faster and at a lower cost. The UK is the largest funder of the European Space Agency’s Advanced Research in Telecommunications Satellites (ARTES) programme, which transforms research into successful commercial projects.

The funding from the UK Space Agency was announced by Lord Henley, Parliamentary Under-Secretary of State at the Department for Business, Energy and Industrial Strategy, on a visit to Spire’s UK base in Glasgow, where the company intends to create new jobs to add to its existing workforce.

Business Minister, the Rt. Hon. Lord Henley, said:

“Thanks to this new funding, Spire will be able to cement its activities in the UK, develop new technologies and use space data to provide new services to consumers that will allow businesses to access space quicker and at a lower cost – offering an exciting opportunity for the UK to thrive in the commercial space age.

“Through the government’s Industrial Strategy, we are encouraging other high-tech British businesses to pursue more commercial opportunities with the aim of growing the UK’s share of the global space market to 10% by 2030.”

UK Government Minister for Scotland Lord Duncan said:

“Spire Global is at the cutting edge of technology, using satellite data to track ships, planes and weather in some of the world’s most remote regions. They’re also an important employer in Glasgow, investing in the area and recognising the talent of Scotland’s world class engineers and scientists. We know that the space industry is important to Scotland’s economy and this UK Government funding will help companies like Spire stay at the forefront of this field.”

The ARTES Pioneer programme is designed to support industry by funding the demonstration of advanced technologies, systems, services and applications in a representative space environment. Part of this is to support one or more Space Mission Providers, which could provide commercial services to private companies or public bodies.

“Spire’s infrastructure, capabilities, and competencies all support our submission to this program. For the launch of our 50+ satellite constellation, we quickly became our own best customer,” said Theresa Condor, Spire’s EVP of Corporate Development. “We’re looking forward to demonstrating our end-to-end service and infrastructure on this series of validation missions. ‘Space as a Service’ means going from mission technical architecture to customer data/service verification, along with the ongoing development of critical enabling technologies.”

One validation mission will develop parallel super-computing in space – a core component for future computationally intensive missions. A second, exploitation of Global Navigation Satellite System (GNSS) for weather applications, will leverage Galileo signals for GNSS Radio Occultation. Radio occultation is a key data input for the improvement of weather forecasts. Upon completion, the GNSS-RO technology can be immediately commercialized.

Source: UK Space Agency

The post UK Space Agency Backs Development of Supercomputing Technologies appeared first on HPCwire.

One Stop Systems, Inc. Announces Pricing of Initial Public Offering

HPC Wire - Thu, 02/01/2018 - 15:39

ESCONDIDO, Calif., Feb. 01, 2018 — One Stop Systems, Inc. (NASDAQ:OSS), a provider of ultra-dense high-performance computing (HPC) systems, today announced the pricing of its initial public offering of 3,800,000 shares of common stock at a public offering price of $5.00 per share, before underwriting discounts and commissions.  In addition, One Stop Systems and a selling stockholder have granted underwriters a 45-day option to purchase up to 570,000 additional shares of common stock at the initial public offering price, less the underwriting discount, to cover over-allotments, if any.

One Stop Systems’ common stock has been approved for listing on the NASDAQ Capital Market and is expected to begin trading under the ticker symbol “OSS” on February 1, 2018.  The offering is expected to close on February 5, 2018, subject to customary closing conditions.

Roth Capital Partners is acting as sole book-running manager and Benchmark is acting as co-manager for the offering.

A registration statement relating to the securities being sold in the offering was declared effective by the Securities and Exchange Commission on January 31, 2018. The offering is being made only by means of a prospectus. A copy of the final prospectus related to the offering, when available, may be obtained from Roth Capital Partners, LLC, 888 San Clemente Drive, Newport Beach, California 92660, Attention: Equity Capital Markets, or by calling (800) 678-9147 or emailing rothecm@roth.com.

About One Stop Systems

One Stop Systems designs and manufactures ultra-dense high-performance computing (HPC) systems for deep learning, oil and gas exploration, financial trading, media and entertainment, defense and traditional HPC applications requiring the fastest and most efficient data processing. By utilizing the power of the latest GPU accelerators and NVMe flash cards, our systems stay on the cutting edge of the latest technologies. We have a reputation as an innovator in hyper-converged and composable infrastructure solutions using the latest technology and design equipment to operate with the highest efficiency. We offer these exceptional systems to customers for lease or purchase. One Stop Systems continuously works to meet its customers’ greater needs. For more information about One Stop Systems, go to www.onestopsystems.com.

Source: One Stop Systems, Inc.

The post One Stop Systems, Inc. Announces Pricing of Initial Public Offering appeared first on HPCwire.

More Than 100 Channel Partners Now Bringing Qumulo File Fabric to Market

HPC Wire - Thu, 02/01/2018 - 15:31

SEATTLE, Feb. 1, 2018 – Qumulo, the leader in universal-scale file storage, today announced that more than 100 channel partners are now bringing Qumulo File Fabric (QF2) to market, powering data-intensive businesses spanning media and entertainment, life sciences, oil and gas, automotive, telecommunications, higher education and rapidly emerging workloads for IoT, machine learning and artificial intelligence. The company also announced that Leonard Iventosch has taken on the role of Chief Channel Evangelist to advance channel programs and partnerships.

ePlus, Applied Electronics, Red8, ASG Virtual, P1 Technologies, MelroseMac, Technologent and Fusionstorm are among the more than 100 global channel partners helping their customers transform their businesses by meeting demand for modern, scalable and high-performance file storage that spans the data center and the cloud. Through these strategic partnerships, Qumulo has quickly become the world's most trusted solution for customers to store, manage and curate their data, with marquee customers in nearly every industry segment.

Company Welcomes Chief Channel Evangelist Leonard Iventosch

Leonard Iventosch has over 30 years of experience leading and building high performing channel sales teams at companies like Data General, NetApp, Isilon Systems, EMC and Nimble Storage. Setting the industry standard for channel sales leadership, Iventosch is widely known for building strong, trusted executive relationships within the channel community. At Qumulo, Iventosch will work to develop deeper engagement with VAR partners, help bring the Partner1st Program to market and expand the company’s global channel network throughout North America and EMEA.

“I’ve been steeped in this market and have witnessed the incredible growth of Qumulo firsthand,” said Iventosch. “This growth has been fueled by a market appetite for a more modern file-based storage solution, coupled with a strong channel committed to delivering customers a superior storage experience. Since day one, Qumulo has adopted a partner-first approach, allowing them to deepen relationships with resellers and partners and develop a world-class channel opportunity.”

Qumulo Launches Partner1st Program

Qumulo also announced the introduction of the Qumulo Partner1st Program, designed to enable Qumulo’s rich ecosystem of channel partners to meet the rising demand for QF2. The new program provides partners with the tools, programs and resources they need to grow their businesses with Qumulo.

“Leading resellers are joining the Partner1st Program with Qumulo because they want innovative new technology, great products, strong margins and a company that is 100 percent dedicated to the channel,” said Eric Scollard, VP of Worldwide Sales at Qumulo. “They are increasingly disheartened by the legacy vendors and their solutions which do not provide a path to help their customers on their journey to the cloud and who have taken them for granted for too long.”

The Qumulo Partner1st Program delivers:

  • Partner Tools and Analytics. Self-service partner portal is available for training, marketing materials, sales enablement tools, and provides scorecards, analytics and performance metrics for visibility and reporting.
  • Demand Generation. Partners have access to Qumulo's pipeline of enterprises that have expressed interest in QF2, helping them identify opportunities and close bigger deals faster.
  • Partner Marketing. Complete support of Qumulo field marketing resources with points of contact, canned campaigns and events to help drive demand.
  • Training. Self-paced online sales training enables our partner reps and SEs to confidently position QF2 in a matter of hours.
  • Technical Certifications. Technical programs enable partner SEs to independently design and architect customer solutions.
  • Rich Margins. Qumulo offers competitive discounts and industry leading margins across all partner tiers.
  • Deal Security. Approved deal registrations receive aggressive discounts, giving partners larger returns on their investment in Qumulo.
  • 100 Percent Channel Commitment. Qumulo is fully committed to partners with no direct transactions.
QF2 is the world's first universal-scale file storage system. It was designed from the ground up to meet all of today's requirements for scale. QF2 runs in the data center and the public cloud, and can scale to billions of files. It handles small files as efficiently as large ones. QF2's analytics let administrators drill down to the file level, get answers and solve problems in real time. With QF2, enterprise customers have the freedom to store, manage and access their file-based data in any operating environment, at petabyte and global scale.

"File-based data is driving innovation in every modern business today more than ever," said John Murphy, Executive Vice President, ASG Virtual. "We hear from our customers that storing and managing file data at scale is very challenging. The market is demanding a fresh approach to storing and managing file-based data at scale. We are excited to partner with Qumulo, the leaders in universal-scale file storage with QF2. QF2 is a modern architecture that provides billion-file scale, delivers high performance and spans the data center and the public cloud. Our partnership with Qumulo allows us to bring innovative technology and world-class support to our customers in data-intensive industries. On top of that, Qumulo's Partner1st Program provides Advance Systems Group (a.k.a. Virtual Enterprises) with all the tools, marketing and support we need to bring QF2 to the masses."

About Qumulo

Qumulo is the leader in universal-scale file storage. Qumulo File Fabric (QF2) gives data-intensive businesses the freedom to store, manage and access file-based data in the data center and on the cloud, at petabyte and global scale. Founded in 2012 by the inventors of scale-out NAS, Qumulo serves the modern file storage and management needs of Global 2000 customers. For more information, visit www.qumulo.com.

Source: Qumulo

The post More Than 100 Channel Partners Now Bringing Qumulo File Fabric to Market appeared first on HPCwire.

Optalysys, Ltd. Forms New Scientific Advisory Board

HPC Wire - Thu, 02/01/2018 - 13:00

GLASSHOUGHTON, WEST YORKSHIRE, U.K. and RENO, Nev., Feb. 1, 2018 — Optalysys Ltd. (@Optalysys), a start-up commercializing light-speed optical coprocessors for AI/deep learning, today announced the formation of its first Scientific Advisory Board (SAB) comprising experts in AI/machine learning, bioinformatics/genomics and optical pattern recognition. The inaugural SAB members include Professor Douglas Kell of The University of Manchester, Professor Timothy Wilkinson of University of Cambridge and ex-senior NASA scientist, Dr. Richard Juday.

“Collectively, these experts have deep knowledge in areas most critical to our long-term success,” said Dr. Nick New, founder and director, Optalysys. “We’re excited to work closely with them through the process of bringing to market our unique optical approach to super-fast, low-power computing to enable more tech innovators and scientists to create a better world.”

SAB members will serve as strategic and scientific advisors as Optalysys commercializes its patented optical co-processing technology that delivers sophisticated pattern recognition and convolution-based processes for deep-learning applications. Complementary to conventional computer hardware, Optalysys’s technology is designed to spur advancements in the Internet of Things (IoT) (e.g., self-driving cars) and edge computing, genomics, medical imaging, weather forecasting and similar industries looking to improve the quality of life and commerce around the globe.

Douglas Kell, D.Phil., research professor of Bioanalytical Science, The University of Manchester. Dr. Kell is a pioneer of the use of artificial neural networks to solve biological problems. He was involved in research to create a Robot Scientist (“Adam”). While researching yeast-based functional genomics, Adam became the world’s first machine to discover new scientific knowledge independently of its human creators. Dr. Kell has received numerous awards, such as the Fleming Award of the Society for General Microbiology (1986); Royal Society of Chemistry Interdisciplinary Science Award (2004); FEBS-IUBMB Theodor Bücher prize, Royal Society/Wolfson Merit Award and Royal Society of Chemistry Award in Chemical Biology (all awarded in 2005); Royal Society of Chemistry/ Society of Analytical Chemistry Gold Medal (2006); and Commander of the Most Excellent Order of the British Empire (CBE) (2014). He served as a member (2000-2006) and as chief executive (2008-2013) of the Biotechnology and Biological Sciences Research (BBSRC) Council. The BBSRC Council is a U.K. research council, and the largest U.K. public funder of non-medical bioscience. He holds a doctorate from St. John’s College, Oxford, and has published over 400 scientific papers, 56 of which have been cited over 100 times.

Timothy Wilkinson, Ph.D., professor of Photonic Engineering, University of Cambridge. Professor Wilkinson is a leading expert in freespace optics, devices and systems. He developed the binary phase-only matched filter (BPOMF) and 1/f joint transform correlators, and holds several patents. He is currently working on next-generation liquid crystal devices suitable for 3D holographic displays. He holds a doctorate from Magdalene College, University of Cambridge, and has authored over 200 refereed journal papers.

Dr. Richard Juday, retired NASA scientist. Dr. Juday set up and managed the Hybrid Vision Laboratory in the Automation and Robotics Division at NASA’s Johnson Space Center, Houston, Texas, for conducting research in digitally implemented vision and applications to human low-vision difficulties. He is an expert in the development of spatial light modulators and pattern recognition algorithms for optical correlation, and he holds nine U.S. patents. Dr. Juday has published more than 100 publications including co-authorship of a Cambridge University Press book, Correlation Pattern Recognition. He is a Fellow member of SPIE and OSA, two international professional societies for optics and photonics. His other notable work includes a NASA-funded project to explore the principles of future space travel technology using White-Juday warp field interferometers. He holds a doctorate in electrical engineering from Texas A&M University.

"I'm keen on joining the efforts to ensure success for Optalysys's powerful, environmentally responsible approach to the processing infrastructure necessary to move AI and machine learning to their next levels," said Professor Kell. "This company and its technology are coming to market at an ideal time to make a big difference to a disruptive technology, and I'm looking forward to doing my part to help make this happen."

About OPTALYSYS

Optalysys is developing optical computing platforms that will unlock new levels of processing capability at a fraction of the cost and energy consumption of conventional computers. Its first coprocessor is based on an established diffractive optical approach that uses low-power laser light in place of electricity. This inherently parallel method is highly scalable and will provide a new paradigm of computing.

Source: OPTALYSYS

The post Optalysys, Ltd. Forms New Scientific Advisory Board appeared first on HPCwire.

Bazilian tapped to lead Payne Institute for Earth Resources

Colorado School of Mines - Thu, 02/01/2018 - 10:50
Morgan Bazilian, former lead energy specialist at the World Bank, will join Colorado School of Mines in February as the executive director of the Payne Institute for Earth Resources and research professor of public policy.

As the Institute's inaugural executive director, Dr. Bazilian will be responsible for guiding and disseminating its policy-focused research and analysis, serving as the intellectual leader of the organization, which is dedicated to informing and shaping sound public policy on earth resources, energy and the environment.

Named in honor of longtime energy executive Jim Payne '59 and his wife, Arlene, in recognition of their $5 million investment in 2015, the Payne Institute conducts cutting-edge quantitative policy analysis and educates current and future leaders on the security, governance and policy challenges presented by the rapid changes being witnessed in the energy, environment and natural resource sectors.

"We look forward to achieving Jim and Arlene's vision for the Payne Institute under Morgan's leadership," said Mines President Paul C. Johnson. "Today more than ever, our nation and the world need honest and objective brokers of information and venues to have productive, balanced and science-informed public policy discussions, especially related to earth, energy and the environment. Through the Payne Institute and Morgan's leadership, Mines is positioned to play a leadership role both nationally and internationally." 

A widely recognized expert in energy and natural resources planning, investment, security, governance and international affairs, Bazilian has more than two decades of experience in commercial, academic and government settings. He is a member of the World Economic Forum's Global Advisory Council on Energy and serves on the Global Advisory Council of the Sustainable Finance Programme at Oxford University. Prior to the World Bank, he was a deputy director at the U.S. National Renewable Energy Laboratory (NREL) and a senior diplomat at the United Nations. Earlier in his career, he worked in the Irish government as principal advisor to the energy minister, and was the deputy CEO of the Irish National Energy Agency. He was also the European Union's lead negotiator on low-carbon technology at the UN climate negotiations. 

"We are excited to welcome Morgan to campus," said Roderick Eggert, Viola Vestal Coulter Chair in Mineral Economics and interim director of the Economics & Business Division at Mines. "With his range of accomplishments and experience, he will inject energy and insight into our initiatives on resources policy."   Bazilian holds two master's degrees and a PhD in areas related to energy systems and markets, and has been a Fulbright fellow. He holds, or has held, several academic affiliations including at Columbia University, Cambridge University, the Royal Institute of Technology of Sweden, the Center for Strategic and International Studies and the International Institute for Applied Systems Analysis. He is on the editorial boards of Environmental Research Letters, Energy Strategy Reviews and Energy Research and Social Science. He has published over 100 articles in learned journals. His book, "Analytical Methods for Energy Diversity and Security," is considered a seminal piece in the area of energy finance and security. His work has been published in Foreign Affairs, Nature Energy, Nature Climate Change and the Proceedings of the National Academy of Science.    "I am thrilled and honored to be joining what I believe is one of the world's finest energy, environment and natural resources research institutions in the world, and hope to help build the Payne Institute into one of the country's foremost public policy Institutes in these sectors," Bazilian said. 

CONTACT
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News
