Feed aggregator

Hao Zhang discusses abandoned mine robot research with Associated Press

Colorado School of Mines - Thu, 02/01/2018 - 10:00

Hao Zhang, assistant professor of computer science at Colorado School of Mines, was recently interviewed by the Associated Press about his research to develop a robot to explore abandoned mines.

The story, which included video footage of a prototype robot at the Edgar Experimental Mine in Idaho Springs, was picked up by multiple national, regional and local publications, including The Washington Post, Fox News, The Christian Science Monitor, The Seattle Times, The Denver Post, Colorado Public Radio, the (Fort Collins) Coloradoan and The Durango Herald.

Categories: Partner News

ASC18 challenges university students with latest Nobel Prize-winning supercomputing application

HPC Wire - Thu, 02/01/2018 - 09:50

In 2017, the Nobel Prize in Chemistry was awarded to three scientists “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”. Now, ASC18 has chosen cryo-electron microscopy (cryo-EM) as the latest challenge for 300 student teams in this year’s competition, with participants tasked with optimizing the computing capacity of RELION, a 3D reconstruction software package.

As an important method in structural biology, cryo-EM allows the observation of the structure of biological macromolecules such as nucleic acids and proteins. In contrast to X-ray crystallography and magnetic resonance imaging, cryo-EM allows high-resolution 3D imaging without damaging the structures of the biological macromolecules, allowing for more in-depth exploration of macromolecular structures and providing greater insight into their functions.

Image from a cryo-EM

Cryo-EM technology includes three main steps: First, specimens are rapidly frozen so that the proteins and the aqueous solution form a glassy state, preserving the protein structure. Next, high-resolution 2D projection images are acquired through cryo-electron tomography. Finally, 3D density maps are computed from the 2D images using 3D reconstruction software.

However, the success of cryo-EM faces numerous challenges. Cryo-electron tomography and image processing algorithms have long limited the development of cryo-EM. Because the macromolecular solution must be vitrified into a glassy state, cryo-EM requires thousands of high-quality images collected at high throughput. The image processing that follows (correction and evaluation of the micrographs, CTF estimation, particle selection and sorting, 2D and 3D classification, orientation refinement and reconstruction, and post-processing) requires coordinating advanced computing resources with efficient algorithms. Since the development of cryo-EM nearly 30 years ago, devising fast and efficient image processing algorithms has remained a significant challenge.

The ASC18 challenge provides participating teams with a cryo-EM data set of apo-ferritin, requiring teams to perform 2D and 3D classification and 3D model reconstruction of the sequenced particle images with the most advanced supercomputing technology and RELION, the most commonly used software in cryo-EM. Teams will need to maintain a resolution of 3.3 Å while reducing computing time as much as possible. As part of the competition, teams must submit an introduction to the supercomputing environment, including hardware platforms, OS, TPL, and databases, as well as the methods of evaluation and optimization. They also need to submit RELION output files, including images of 2D classification, 3D classification, and 3D reconstruction.
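The 3.3 Å figure refers to the standard cryo-EM resolution criterion: resolution is conventionally reported at the spatial frequency where the Fourier Shell Correlation (FSC) between two independently refined half-maps drops below 0.143. A minimal NumPy sketch of that calculation is shown below; it is illustrative only, and omits the masking and phase-randomization corrections that RELION applies in its post-processing.

```python
import numpy as np

def fourier_shell_correlation(vol_a, vol_b, voxel_size):
    """FSC curve between two cubic density maps.

    Returns (spatial_freq, fsc): spatial_freq in cycles/Angstrom,
    fsc[s] the normalized cross-correlation of Fourier shell s.
    """
    assert vol_a.shape == vol_b.shape and len(set(vol_a.shape)) == 1
    n = vol_a.shape[0]
    fa = np.fft.fftshift(np.fft.fftn(vol_a))
    fb = np.fft.fftshift(np.fft.fftn(vol_b))
    # Integer radius of each voxel from the (centered) DC component
    grid = np.indices(vol_a.shape) - n // 2
    radius = np.sqrt((grid ** 2).sum(axis=0)).round().astype(int)
    nshells = n // 2
    fsc = np.zeros(nshells)
    for s in range(nshells):
        shell = radius == s
        num = np.sum(fa[shell] * np.conj(fb[shell]))
        den = np.sqrt(np.sum(np.abs(fa[shell]) ** 2) *
                      np.sum(np.abs(fb[shell]) ** 2))
        fsc[s] = (num / den).real if den > 0 else 0.0
    freq = np.arange(nshells) / (n * voxel_size)
    return freq, fsc

def resolution_at_threshold(freq, fsc, threshold=0.143):
    """Resolution (Angstroms) where FSC first falls below the threshold;
    None if the curve never crosses it within the sampled shells."""
    below = np.where(fsc < threshold)[0]
    if below.size == 0 or below[0] == 0:
        return None
    return 1.0 / freq[below[0]]
```

Two identical maps yield an FSC of 1.0 at every shell (no crossing, hence no finite resolution estimate), while noisier half-maps push the 0.143 crossing toward lower frequency, i.e. coarser resolution.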

Examples of cryo-EM 3D imaging

About ASC

Initiated by China, the ASC Student Supercomputer Challenge is the world’s biggest student supercomputer competition. Since the first ASC challenge was launched in 2012, the competition has continued to grow in influence, with more than 1,100 teams and 5,500 young talents from around the world having participated.

The post ASC18 challenges university students with latest Nobel Prize-winning supercomputing application appeared first on HPCwire.

ThinkParQ Announces Global BeeGFS File System Expansion Plans and New Features for 2018

HPC Wire - Thu, 02/01/2018 - 09:49

KAISERSLAUTERN, Germany, Feb. 1, 2018 — ThinkParQ, the company behind the parallel cluster file system BeeGFS, looks back on a record-setting 2017, with a steadily growing user base in EMEA and a successful entry into the US and APAC markets that added several big players in those regions as partners and customers. The company is now preparing to take its next big step in 2018.

Listening to customer demands worldwide, ThinkParQ announces the immediate availability of the BeeGFS native Windows client beta release, which will enable customers to seamlessly combine I/O-demanding Linux and Windows workflows in parallel. Many of ThinkParQ’s partners and customers expect this new component to be a game-changer for the world of parallel storage.

In addition, the previously announced version 7.0 of BeeGFS, which changes the economics of storage by introducing an innovative new concept for taking advantage of SSDs, will become generally available in early March. In a key win, BeeGFS has been chosen to power the burst buffer of the ABCI supercomputer, which is expected to be the third most powerful supercomputer in the world when it begins operations in 2018.

To further strengthen its focus on worldwide expansion, the company has appointed Frank Herold as CEO, effective today. Sven Breuner, a founding member of ThinkParQ who has led the company since 2014, will continue to contribute to the success of BeeGFS as a major shareholder and member of the board, working together with Frank during the transition period.

“Frank has a long history in the field of large-scale enterprise file systems and is well experienced in expansion into new markets, which is exactly our strategic focus for the years ahead. With his strong technical and management background, I trust that he is the ideal person to manage the company based on a deep understanding of customer and market demands,” says Sven Breuner, former CEO of ThinkParQ.

Frank stated, “Over the past years, ThinkParQ has developed an excellent solution and a very positive reputation with an enthusiastic and rapidly growing customer base. ThinkParQ addresses the complex requirements of managing large amounts of data with BeeGFS, an easy-to-use and scalable software-defined storage solution, on premise and in the cloud. Together with the talented and highly motivated team behind BeeGFS, I am eager to continue building on the company’s upward growth trajectory.”

About BeeGFS

BeeGFS is a parallel file system that was designed specifically to deal with I/O intensive workloads in performance-critical environments and with a strong focus on easy installation and high flexibility, including converged setups where storage servers are also used for compute jobs. BeeGFS transparently spreads user data across multiple servers. Therefore, by increasing the number of servers and disks in the system, performance and capacity of the file system can simply be scaled out to the desired level, seamlessly from small clusters up to enterprise-class systems with thousands of nodes, on-premise or in the cloud. BeeGFS is powering the storage of hundreds of scientific and industry customer sites worldwide. Visit beegfs.io for more information.
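The scale-out behavior described above comes from striping: a file is split into fixed-size chunks that are distributed across a subset of storage targets, so adding targets adds both bandwidth and capacity. The sketch below is a simplified, hypothetical illustration of such a round-robin chunk layout; the chunk size, target names, and function are made up for illustration and are not BeeGFS's actual on-disk format or API.

```python
def chunk_layout(file_size, chunk_size, targets, stripe_count):
    """Illustrative round-robin striping of a file across storage targets.

    Returns a list of (offset, length, target) tuples: chunk i of the
    file lands on stripe target i % stripe_count.
    """
    if stripe_count > len(targets):
        raise ValueError("stripe_count exceeds available targets")
    stripe = targets[:stripe_count]          # targets chosen for this file
    nchunks = -(-file_size // chunk_size)    # ceiling division
    layout = []
    for i in range(nchunks):
        offset = i * chunk_size
        length = min(chunk_size, file_size - offset)
        layout.append((offset, length, stripe[i % stripe_count]))
    return layout

# A 1 MB-ish file striped over 2 of 4 targets with 512 KiB chunks:
layout = chunk_layout(1_000_000, 512 * 1024, ["t1", "t2", "t3", "t4"], 2)
```

Under this model, a sequential read of the file pulls from the stripe targets in parallel, which is why throughput grows with the number of servers.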

About ThinkParQ

ThinkParQ was founded as a spin-off from the Fraunhofer Center for High Performance Computing by the key people behind BeeGFS to bring fast, robust, scalable storage solutions to market. ThinkParQ is responsible for support, provides consulting, organizes and attends events, and works together with system integrators to create turn-key solutions. ThinkParQ and Fraunhofer internally cooperate closely to deliver high quality support services and to drive further development and optimization of BeeGFS for tomorrow’s performance-critical systems. Visit www.thinkparq.com to learn more about the company.

Source: ThinkParQ

The post ThinkParQ Announces Global BeeGFS File System Expansion Plans and New Features for 2018 appeared first on HPCwire.

Fibre Channel Industry Association Elects 2017/18 Board of Directors

HPC Wire - Thu, 02/01/2018 - 09:39

MINNEAPOLIS, Feb. 1, 2018 — In a move that supports the continued innovation of Fibre Channel (FC) and its legacy of data reliability, integrity and security, the Fibre Channel Industry Association (FCIA) today announced its 2017-18 Board of Directors.

“The fulfilment of these roles by top storage industry technologists brings a wealth of expertise and contacts to the association,” said Mark Jones, president and chairman of the board, FCIA. “Fibre Channel remains the standard storage networking protocol of choice and has a long, innovative future as evidenced by the completion of the INCITS T11 FC-NVMe standard in 2017.”

The members of the 2017/2018 FCIA Board of Directors are:

 FCIA Officers:

  • Chairman and President: Mark Jones, Broadcom Limited
  • Treasurer: Craig Carlson, QLogic a Cavium Company
  • Secretary and Education Chair: J Metz, Cisco

Members at Large:

  • Marketing Chair: Rupin Mohan, Hewlett Packard Enterprise
  • Kevin Ehringer, DCS
  • Chris Lyon, Amphenol
  • David Rodgers, Teledyne LeCroy
  • Steven Wilson, T11 Chair (Non-Voting)

“2018 is poised to be another great year for the Fibre Channel industry,” said Mike Heumann, managing partner, G2M Research. “The current footprint for NVMe-enabled systems and devices can be expected to grow exponentially by 2020 into a multi-billion-dollar market, as data centers, service providers, and telecom equipment providers and carriers start to utilize NVMe-enabled infrastructure platforms such as FC-NVMe.”

In 2017, the FC industry saw tremendous progress and technical developments that helped shape the future of the storage industry, including:

  • Completion of the second industry-wide multi-vendor plugfest focused on NVMe over FC Fabric the week of October 30 at UNH-IOL, with nine industry-leading companies participating
  • The first validation of the newly completed INCITS T11 FC-NVMe standard
  • Live NVMe Over Fabrics technology demonstration operated by multiple fibre channel vendors
  • Continued development of Gen 6 Fibre Channel, the industry’s fastest standard networking protocol, enabling storage area networks of up to 128GFC
  • The advancement of a 64GFC specification that is set to double data bandwidth over existing Gen 6 32GFC and 128GFC
  • Launch of an FCIA Education Committee focused on providing value to IT professionals by providing insight into FC technology
  • The publication of the 2017 Fibre Channel Solutions Guide, available online at http://fibrechannel.org/.

FCIA BrightTALK Webcasts

In 2017, FCIA continued its series of BrightTALK webcasts, where thought leaders actively share their insights and provide up-to-date information on the Fibre Channel industry. Don’t miss the opportunity to attend the next webcast, scheduled for Thursday, February 6 at 11:00 a.m. PST, titled “Fibre Channel Performance: Congestion, Slow Drain, and Over-Utilization, Oh My!” Ed Mazurek of Cisco, Earl Apellanes of Broadcom and David Rodgers of Teledyne LeCroy will lead the discussion, followed by a Q&A session.

Register today for free and view previous FCIA webcasts at: http://bit.ly/2DyNX9L

About FCIA

The Fibre Channel Industry Association (FCIA) is a non-profit international organization whose sole purpose is to act as the independent technology and marketing voice of the Fibre Channel industry. We are committed to helping member organizations promote and position Fibre Channel, and providing a focal point for Fibre Channel information, standards advocacy, and education.  FCIA members include manufacturers, system integrators, developers, vendors, industry professionals, and end users. Our member-led working groups and committees focus on creating and championing the Fibre Channel technology roadmaps, targeting applications that include data storage, video, networking, and storage area network (SAN) management. For more info, go to http://www.fibrechannel.org.

Source: FCIA

The post Fibre Channel Industry Association Elects 2017/18 Board of Directors appeared first on HPCwire.

Analytic Engineering Powers AI Driven Decision Support System Development at Verne Global

HPC Wire - Thu, 02/01/2018 - 09:24

LONDON and KEFLAVIK, Iceland, Feb. 1, 2018 — Verne Global, a provider of highly optimised, secure, and 100% renewably powered data center solutions, has announced that Analytic Engineering, a German company pioneering the use of artificial intelligence (AI) in decision support system engineering, is moving its graphics processing unit (GPU) infrastructure to Verne Global’s Icelandic campus.

This move enables Analytic Engineering to leverage the dedicated compute environments at Verne Global, which are specifically designed to support HPC and intensive computing applications. Analytic Engineering utilises large-scale combinatorics, discrete and continuous optimisation, and finite element simulations in its development process, which requires a specialised data center that can accommodate the niche design and power requirements of these highly intensive computing applications.

“At Verne Global’s campus, we can grow our business faster and apply more compute resources to our programs than at any other data center that we evaluated,” said Tobias Seifert, Co-CEO at Analytic Engineering. “This is a critical competitive advantage to us, as we look to deliver highly complex software solutions that enable our customers to iterate faster through applications driven by AI and Machine Learning.”

“With Verne Global, we have a partner that can fully meet the demands we anticipate needing far into the future,” added Seifert.

According to Seifert, many data center operators have been slow to prepare their business models and campus infrastructure to accommodate the rapid advances in computational loads and processing power. Analytic Engineering’s site-selection criteria included:

  • Complete control – a data center campus that provides complete control of hardware and software programs, allowing Analytic Engineering to fully leverage the computational benefits of AI and Machine Learning with highly customised software solutions
  • Customer focused – a data center operator that listens to what the company needs today and will help Analytic Engineering plan for tomorrow. The team at Verne Global focused on understanding what Analytic Engineering wanted to do and what it needed, and was dedicated to making it work within its campus.
  • Cost optimised – a data center with access to affordable, scalable and sustainable power, with an option to secure power pricing over the long term. These cost savings enable Analytic Engineering to invest more resources into development and compute, lowering overall operating expense and leveraging Iceland’s renewable energy to dramatically reduce its carbon footprint.

“Today’s computational environments are changing rapidly as more companies are looking to utilise HPC and intensive applications across an increasingly wide variety of industries. At Verne Global we have fully optimised our campus to meet the specific requirements of the international HPC community,” said Jeff Monroe, CEO at Verne Global.

Verne Global’s Director of Research, Spencer Lamb, added, “We are delighted to welcome Analytic Engineering to our campus. They join a growing community of innovative high-tech companies choosing Verne Global as the location for their demanding AI and Machine Learning workloads.”

Source: Verne Global

The post Analytic Engineering Powers AI Driven Decision Support System Development at Verne Global appeared first on HPCwire.

Nobel laureate Sir Fraser Stoddart to give lecture Feb. 5

Colorado School of Mines - Thu, 02/01/2018 - 09:09

Sir Fraser Stoddart, winner of the 2016 Nobel Prize in Chemistry, will be at Colorado School of Mines on Feb. 5 to give a public talk about his Nobel-winning research on the design and synthesis of molecular machines.

Stoddart, professor of chemistry at Northwestern University, shared the 2016 Nobel Prize with Jean-Pierre Sauvage of the University of Strasbourg, France, and Bernard Feringa of the University of Groningen, the Netherlands, for the contributions they each made toward the development of molecules with controllable movements. 

His talk, “Materials Beyond Cyclodextrins: Emergence Opens up a Whole New World,” will begin at 4 p.m. Feb. 5 in Room 209 of Coolbaugh Hall, 1012 14th St. A reception will follow at 5 p.m. in the Coolbaugh Hall atrium, with drinks and refreshments provided. 

The Chemistry Department’s Student-Invited Seminar Committee will also host a student Q&A session with Stoddart from 3 to 4 p.m. in Coolbaugh Room 219. 

The Nobel-winning breakthrough began with the development of a new way to link molecules—the mechanical bond. Instead of bonding through ionic or covalent means, molecules are instead coupled in a physical manner by entangling them in space. Sterics, complexation and coordination drive this supramolecular assembly, with the building blocks being held together by intermolecular forces.

These supramolecular structures with switchable mechanical responses open up a whole new world in functional materials, drug delivery and the development of nanotechnology in general.

Photo credit: © Nobel Media AB 2016/Alexander Mahmoud (portrait), © Nobel Media AB 2016/Pi Frisk (award ceremony)

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu



Categories: Partner News

UK’s Jisc Delivers World’s First 400G National Research and Education Network with Ciena

HPC Wire - Thu, 02/01/2018 - 08:15

LONDON and HANOVER, Md., Feb. 1, 2018 — Jisc, which operates the busiest National Research and Education Network (NREN) in Europe by volume of data carried, is deploying Ciena’s (NYSE: CIEN) 6500 packet-optical platform powered by WaveLogic Ai coherent optics to provide unprecedented high-capacity 400G wavelength connectivity. The deployment makes the Janet Network one of the most digitally-advanced NRENs globally in terms of scale, automation and network intelligence.

“Our vision is for the UK to be at the forefront of scientific research. To make that happen, we must have a highly robust network powered with industry-leading technology that can scale to support bandwidth-intensive applications like genome editing and The Square Kilometre Array,” said Jeremy Sharp, Network Infrastructure Director, Jisc.

“Working with Ciena, the Janet Network was the first NREN to provide 100G for users and, as demand has grown, is now the first to provide 400G. WaveLogic Ai enables us to operate efficiently and accurately engineer the network for optimal capacity to manage massive flows from new data-intensive research activities,” Sharp added.

In the UK, all further and higher education organisations, including universities, colleges and research centres, are connected to the internet through the Janet Network, along with some alternative education providers, other public bodies and science parks. Jisc is also responsible for all .ac.uk and .gov.uk domains.

“We are helping bring a new paradigm for optical networks by making the network more programmable and responsive to changing user demands while using less hardware,” said Rod Wilson, Chief Technologist for Research Networks, Ciena. “WaveLogic Ai focuses on delivering considerable digital advantage, financial savings and efficiencies.”

About Jisc

Jisc is the UK’s expert member organisation for digital technology and digital resources in higher education, further education, skills and research. Our vision is to make the UK the most digitally advanced education and research nation in the world.  We play a pivotal role in the development, adoption and use of technology by UK universities and colleges, supporting them to improve learning, teaching, the student experience and institutional efficiency, as well as enabling more powerful research.

At the heart of Jisc’s support is Janet – the UK’s world-class National Research and Education Network (NREN). Owned, managed and operated by Jisc, Janet comprises a secure, state-of-the-art network infrastructure spanning all four nations of the UK.

About Ciena

Ciena (NYSE: CIEN) is a network strategy and technology company. We translate best-in-class technology into value through a high-touch, consultative business model – with a relentless drive to create exceptional experiences measured by outcomes.

Source: Ciena

The post UK’s Jisc Delivers World’s First 400G National Research and Education Network with Ciena appeared first on HPCwire.

Deep Cognition and Exxact Announce AI Server Appliance Partnership

HPC Wire - Thu, 02/01/2018 - 03:30

IRVING, Texas, Feb. 1, 2018 — Exxact Corporation, a leading provider of high performance computing, and Deep Cognition, a leading provider of software that simplifies and accelerates AI, announced a new partnership today. The collaboration has resulted in Exxact’s high performance computers coming pre-configured with Deep Learning Studio, Deep Cognition’s software platform for simplifying and accelerating AI development and deployment. The final product is a simple yet powerful AI Server Appliance that brings the promise of AI to all organizations, with or without in-house data science/AI expertise.

Exxact Deep Learning Studio solutions are equipped with the latest GPU hardware and are designed for maximum performance and efficiency. Each solution is tested, validated and fully turnkey, enabling developers to get started with Deep Learning Studio the moment their system is on. Deep Learning Studio provides a comprehensive GUI to develop, train, deploy and manage AI. Highly advanced modeling is supported through robust integration with leading AI frameworks such as TensorFlow, Keras and MXNet. For more information about the full suite of deep learning resources available with Exxact servers, please click here. To learn more about Exxact Deep Learning Studio solutions, please click here.

“Our new partnership with Deep Cognition will bring forth a new portfolio of simplified, turnkey systems to help organizations advance their AI research with ease,” said Jason Chen, Vice President of Exxact Corporation. “By combining Exxact systems with Deep Learning Studio software, we have created solutions that are ready right out of the box and enable users to easily design, train, and deploy deep learning models without any coding involved.”

“AI has tremendous potential to transform organizations in 2018 but the path to success is fraught with challenges.  Our partnership with Exxact to bring powerful AI server appliances to market removes many of those challenges,” said Mandeep Kumar, Chief Executive Officer, Deep Cognition.

For more information or inquiries about Exxact Deep Learning server appliances, please contact the Exxact sales department here.

About Exxact Corporation

Exxact develops and manufactures innovative computing platforms and solutions that include workstation, server, cluster, and storage products developed for Life Sciences, HPC, Big Data, Cloud, Visualization, Video Wall, and AV applications. With a full range of engineering and logistics services, including consultancy, initial solution validation, manufacturing, implementation, and support, Exxact enables their customers to solve complex computing challenges, meet product development deadlines, improve resource utilization, reduce energy consumption, and maintain a competitive edge. Visit Exxact Corporation at www.exxactcorp.com.

About Deep Cognition

Deep Cognition is an artificial intelligence software company whose flagship product, Deep Learning Studio, simplifies and accelerates AI development and deployment.  The company is headquartered in Irving, TX with an office in Bangalore, India.  The company serves a diverse customer base using Deep Learning Studio in a variety of industries and functions across the globe.  Deep Learning Studio can be used to develop and deploy AI for thousands of use cases.  Visit Deep Cognition at www.deepcognition.ai.

Source: Exxact Corporation

The post Deep Cognition and Exxact Announce AI Server Appliance Partnership appeared first on HPCwire.

CFO Steps down in Executive Shuffle at Supermicro

HPC Wire - Wed, 01/31/2018 - 13:35

Supermicro yesterday announced senior management shuffling including prominent departures, the completion of an audit linked to its delayed Nasdaq filings, and also issued preliminary financial information for the quarter ending last December (Q2 2018 for the company). While Supermicro’s business remains robust, the company continues to be dogged by its failure to file required financial reports in 2017.

Supermicro reported the resignations of CFO Howard Hideshima, SVP of international sales Wally Liaw (who also stepped down from the board of directors), and SVP of worldwide sales Phidias Chou. The company declined to comment on whether there was a link between the just-completed audit and the departures.

Kevin Bauer was named SVP and CFO. Bauer joined Supermicro in January 2017 and previously served as SVP, Corporate Development and Strategy. Prior to joining Supermicro, Bauer was Senior Vice President and Chief Financial Officer of Pericom Semiconductor Corporation from February 2014 until its sale to Diodes Inc. in November 2015 and, thereafter, assisted Diodes with the integration of Pericom until November 2016. Don Clegg, VP of Marketing and Business Development, was named interim head of worldwide sales.

On last week’s analyst call, Peter Hayes, SVP of investor relations, said, “[T]he company has completed the previously disclosed investigation conducted by the audit committee. Additional time is required to analyze the impact, if any, of the results of the investigation on the company’s historical financial statements, as well as to conduct additional reviews before the company will be able to finalize its annual report on Form 10-K for the fiscal year ended June 30, 2017.

“The company is unable at this time to provide a date as to when the Form 10-K will be filed or to predict whether the company’s historical financial statements will be adjusted or if so the amount of any such adjustment. The company intends to file its quarterly reports on Form 10-Q for the quarters ended September 30 and December 31, 2017 promptly after filing the Form 10-K.” Supermicro is currently facing a March 13, 2018 deadline for its late filings.

Chairman and CEO Charles Liang was bullish on the business. “Our second quarter revenues were in the range of $840 million to $850 million surpassing our quarterly guidance and exceeds our long-term guidance of reaching the $3 billion of run rate by December 2017,” said Liang. “In this seasonally strong quarter we achieved over 30% revenue growth year-over-year and we saw the [past supplement] throughout our quarter. We grew in all market vertical including accelerated, AI, machine learning, storage, IoT embedded, Internet data center, cloud and particularly strong growth with large enterprise customers.”

As with many computer makers, margins of late have been squeezed by component pricing – primarily DRAM and NVRAM – amid supply shortages. Supermicro is a leading manufacturer (OEM and ODM) of HPC servers, datacenter servers, and storage products.

Here’s a summary of the key filing issues, taken from a Supermicro release in November after receiving a letter of non-compliance from Nasdaq:

“The Letter was sent as a result of the Company’s delay in filing its Quarterly Report on Form 10-Q for the period ended September 30, 2017 (the “Form 10-Q”) and its continued delay in filing its Annual Report on Form 10-K for the fiscal year ended June 30, 2017 (the “Form 10-K”). The Company previously submitted a plan of compliance to NASDAQ on November 10, 2017 (the “Plan”) with respect to its delay in filing the Form 10-K (the “Initial Delinquent Filing”).

“As previously disclosed by the Company, additional time is needed for the Company to compile and analyze certain information and documentation and finalize its financial statements as of and for the year ended June 30, 2017, as well as complete a related audit committee investigation, in order to permit the Company’s independent registered public accounting firm to complete its audit of the financial statements and the Company’s internal controls over financial reporting as of June 30, 2017.”

The post CFO Steps down in Executive Shuffle at Supermicro appeared first on HPCwire.

Intel Promotes Four Corporate Officers

HPC Wire - Wed, 01/31/2018 - 11:59

SANTA CLARA, Calif., Jan. 31, 2018 – Intel Corporation today announced that its board of directors has promoted four corporate officers.

Leslie S. Culbertson was promoted to executive vice president and general manager of the Intel Product Assurance and Security Group from senior vice president of Human Resources. In this role, Culbertson will lead Intel’s cross-company efforts to continuously improve Intel’s product security. Most recently, Culbertson was Intel’s chief human resource officer (CHRO). Culbertson joined Intel in 1979 and was elected to corporate vice president and director of finance in 2003.

Dr. Ann B. Kelleher was promoted to senior vice president from corporate vice president of Intel’s Technology Manufacturing Group. She is responsible for corporate quality assurance, corporate services, customer fulfillment and supply chain management. She is also responsible for strategic planning for the company’s worldwide manufacturing operations. Kelleher joined Intel in 1996 as a process engineer, going on to manage technology transfers and factory ramp-ups in a variety of positions spanning 200mm and 300mm technologies.

Dr. Michael Mayberry was promoted to senior vice president and Intel’s chief technology officer from corporate vice president. He is also the managing director of Intel Labs. He is responsible for Intel’s global research efforts in computing and communications. In addition, he leads the Corporate Research Council, which drives allocation and prioritization of directed university research across Intel. Mayberry joined Intel in 1984 as a process integration engineer, and has held various positions since then. In 2005, he moved to Components Research and was responsible for research to enable future process options for Intel’s technology development organizations.

Matthew M. Smith was promoted to senior vice president and chief human resource officer from corporate vice president of human resources. As CHRO, he will join Intel’s management committee and lead the global HR team responsible for developing and executing HR strategies in support of Intel’s businesses. Smith joined Intel in 1997.

Source: Intel Corp.

The post Intel Promotes Four Corporate Officers appeared first on HPCwire.

Adaptive Computing Enterprises Launches New Product for HPC Cloud Bursting

HPC Wire - Wed, 01/31/2018 - 11:40

NAPLES, Fla., Jan. 31, 2018 — Adaptive Computing announces its new Cloud Solution, Moab/NODUS Cloud Bursting, making HPC cloud strategies more accessible than ever before.

“Considering that public cloud bursting is usually extremely challenging, the Moab/NODUS Cloud Bursting Solution is brilliant in its ability to integrate seamlessly with existing management infrastructure,” said Art Allen, President, Adaptive Computing Enterprises, Inc.

Adaptive’s product Moab, already a world leader in dynamically optimizing large-scale HPC computing environments, has been enhanced to extend running HPC workloads into public clouds. Adaptive’s engineering team has done a fantastic job of making clouds work for HPC. The Moab/NODUS Cloud Bursting Solution is powerful, flexible, and easy to implement and manage. From the automated deployment and release of nodes to ease of use for admins, Moab/NODUS Cloud Bursting makes access to multiple public clouds easily attainable.

The Moab/NODUS Cloud Bursting Solution provides Infrastructure and Operations leaders the ability to achieve substantial cost savings by reducing on-premises infrastructure refresh costs; by meeting SLAs, thereby avoiding financial penalties; and by increasing revenue by speeding up time to market for various use cases.

Adaptive Computing Enterprises was acquired in September 2016 by Arthur L. Allen, a long-time software industry veteran with more than 40 years of experience creating software companies and technology solutions. Adaptive Computing has since been transformed into a new, innovative company.

“While we will continue to be price competitive, Adaptive Computing will lead with product innovation, product quality, and extraordinary customer support,” Allen said. “The company is striving to be the thought leader in the HPC and Data Center spaces. The first of such solutions is Moab/NODUS Cloud Bursting.”

About Adaptive Computing

Adaptive Computing’s Workload and Resource Orchestration software platform, Moab, is a world leader in dynamically optimizing large-scale computing environments. Moab intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab’s unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery. Moab gives enterprises a competitive advantage, inspiring them to develop cancer-curing treatments, discover the origins of the universe, lower energy prices, manufacture better products, improve the economic landscape, and pursue game-changing endeavors.

Source: Adaptive Computing

The post Adaptive Computing Enterprises Launches New Product for HPC Cloud Bursting appeared first on HPCwire.

PRACE Summer of HPC 2018 Opens Applications

HPC Wire - Wed, 01/31/2018 - 10:33

Jan. 31, 2018 — Early-stage postgraduate and late-stage undergraduate students are invited to apply for the PRACE Summer of HPC 2018 programme, to be held in July & August 2018. Consisting of a training week and two months on placement at top HPC centres around Europe, the programme offers participants the opportunity to learn and share more about PRACE and HPC, and includes accommodation, a stipend and travel to their HPC centre. Applications are open.

About the PRACE Summer of HPC Programme

PRACE Summer of HPC is a PRACE outreach and training programme that offers summer placements at top HPC centres across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualisation or video of their results.

The programme will run from 2 July to 31 August 2018. It will begin with a kick-off training week at EPCC Supercomputing Centre in Edinburgh – to be attended by all participants.

Flights, accommodation and a stipend will be provided to all successful applicants. Two prizes will be awarded to the participants who produce the best project and best embody the outreach spirit of the programme.

Participating in the PRACE Summer of HPC Programme

Applications are welcome from all disciplines. Previous experience in HPC is not required, as training will be provided. Some coding knowledge is a prerequisite, but the most important attribute is a desire to learn and to share experiences with HPC. A strong visual flair and an interest in blogging, video blogging or social media are desirable.

Project Descriptions

Project descriptions with more detailed prerequisites and more information on applying are available on the PRACE Summer of HPC website www.summerofhpc.prace-ri.eu.


Applications opened on 11 January 2018 and can be submitted on the Summer of HPC website.


The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by five PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.

Source: PRACE

The post PRACE Summer of HPC 2018 Opens Applications appeared first on HPCwire.

IBM Adds V100 GPUs to IBM Cloud; Targets AI, HPC

HPC Wire - Wed, 01/31/2018 - 09:53

IBM today announced availability of Nvidia V100 GPUs in the IBM Cloud. The move keeps pace with Amazon and Microsoft Azure which also offer V100s and is more evidence of the effort by major cloud providers to rapidly beef up their AI and HPC capabilities. In making the announcement, IBM was quick to note it was “first to offer a comprehensive suite of GPUs including the P100, K80 and M60 on IBM Cloud bare metal and virtual servers.”

John Considine, GM of Cloud Infrastructure Services, IBM, wrote in a blogpost, “Starting today, you can equip individual IBM Cloud bare metal servers with up to two Nvidia Tesla V100 PCIe GPU accelerators – Nvidia’s latest, fastest and most advanced GPU architecture. The combination of IBM high-speed network connectivity and bare metal servers with the Tesla V100 GPUs provides higher throughput than traditional virtualized servers.”

“We’re focused on delivering new AI capabilities both in the cloud and on premises to help enterprises not only gain critical insights from their data, but also create new value with that data. We’ve been working closely with Nvidia to bring their latest GPU technology, Nvidia Tesla V100, to the cloud.”

For now, the V100 will be available in the cloud only on x86-based bare metal servers. Plans for offering the V100 on POWER9-based servers in the cloud were not disclosed. “IBM does not comment on the status of future plans. We are working side by side with the IBM Power Systems team to ensure that IBM Cloud will deliver access to the best of IBM technology to allow customers to run HPC and AI workloads,” said a spokesman.

The V100 is available in IBM’s POWER9 server line. Considine wrote, “To power on-premises workloads, IBM also offers the industry’s only CPU-to-GPU Nvidia NVLink connection on our latest POWER9 servers.” POWER9 is IBM’s latest processor (see HPCwire article, IBM Begins Power9 Rollout with Backing from DOE, Google). The new processor is being used in Summit and Sierra, two pre-exascale supercomputers funded by DOE.

Like others, IBM is touting its cloud-based ability to handle AI workloads including the ability to speed training of “deep learning models and to create powerful cloud-native applications.” One NASA example cited is fascinating:

“At the NASA Frontier Development Lab this past summer, a team of researchers and data scientists used machine learning techniques on the IBM Cloud to develop new processes for 3D modeling of asteroids from radar data. With an average of 35 new asteroids and near-Earth objects discovered each week, there is currently more data available than experts can keep up with, and existing 3D modeling processes can take several months. Using NVIDIA P100 GPUs on the IBM Cloud and IBM Cloud Object Storage, the team was able to generate asteroid shape images an average of five to six times faster than previous processes allowed.”

Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA, is quoted in the blog: “The new IBM Cloud offerings based on our Volta technology provide incredible processing speeds and the ability to scale up or down on demand for HPC and deep learning.” IBM also collaborates with technology partners such as Rescale and Bitfusion to enable instant access to GPUs on IBM Cloud.

Feature image: Inside IBM Cloud, Dallas

The post IBM Adds V100 GPUs to IBM Cloud; Targets AI, HPC appeared first on HPCwire.

Supermicro Announces the Appointment of Kevin Bauer as Chief Financial Officer

HPC Wire - Wed, 01/31/2018 - 09:20

SAN JOSE, Calif., Jan. 31 — Super Micro Computer, Inc. (NASDAQ:SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced that Kevin Bauer has been appointed as Senior Vice President and Chief Financial Officer, effective immediately. Mr. Bauer joined Supermicro in January 2017 and has previously served as Senior Vice President, Corporate Development and Strategy. Prior to joining Supermicro, Mr. Bauer was the Senior Vice President and Chief Financial Officer of Pericom Semiconductor Corporation from February 2014 until its sale to Diodes Inc. in November 2015 and, thereafter, assisted Diodes with the integration of Pericom until November 2016. Prior to that he was Chief Financial Officer of Exar Corporation from June 2009 through December 2012 and Corporate Controller from August 2004 to June 2009. Mr. Bauer has over 33 years of finance experience and received an MBA from Santa Clara University and a BS in Business Administration from California Lutheran University.

“We are delighted to announce the appointment of Kevin at this time,” said Charles Liang, President and Chief Executive Officer of Supermicro. “We believe that Kevin’s background and experience will be of substantial benefit to the company as we continue to grow our operations. The fact that he has been with us for a year is also a significant advantage.”

“With the introduction of Supermicro 3.0, my excitement about our opportunity has grown over the last year,” said Kevin Bauer. “The company’s growth continues and I look forward to working with Charles to take the company to the next level of performance.”

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized, computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO.

Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack cabinets/accessories delivering unrivaled performance and value.

Founded in 1993 and headquartered in San Jose, California, Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative. The Company has global logistics and operations centers in Silicon Valley (USA), the Netherlands (Europe) and its Science & Technology Park in Taiwan (Asia).

Source: Supermicro

The post Supermicro Announces the Appointment of Kevin Bauer as Chief Financial Officer appeared first on HPCwire.

Fleckenstein interviewed by Denver7 about oil, gas production in Denver-Julesburg Basin

Colorado School of Mines - Wed, 01/31/2018 - 08:43

Will Fleckenstein, director of strategic relationships and enterprises for the College of Earth Resource Sciences and Engineering at Colorado School of Mines, was recently interviewed by Denver7 about oil and gas production in the Denver-Julesburg Basin.

Categories: Partner News

PSSC Labs Announces Hadoop Big Data Servers with Intel Xeon Scalable Processors

HPC Wire - Wed, 01/31/2018 - 08:27

LAKE FOREST, Calif., Jan. 31, 2018 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced a new version of its CloudOOP 12000 cluster server integrated with Intel’s newest Xeon Scalable Processors to deliver breakthrough performance for handling big data tasks such as real-time analytics, virtualized infrastructure and high-performance computing.

In addition to an advanced architecture, the new Intel processors feature a rich suite of platform and security innovations for enhanced application performance, including Intel AVX-512, Intel Mesh Architecture, Intel QuickAssist, Intel Optane SSDs and Intel Omni-Path Fabric.

The CloudOOP 12000 is the only server specifically designed for Hadoop, Kafka, Big Data and Internet of Things (IoT) workloads. It offers 2x the density and up to 35% lower power draw than servers from traditional manufacturers, as well as a nearly 50% increase in data throughput performance. The reduced power draw means a smaller datacenter footprint and a significantly lower total cost of ownership, with an over 90% efficiency rating.

Organizations can custom build CloudOOP 12000 servers with the company’s new EZ System Configurator: https://www.pssclabs.com/servers/big-data-servers/#chassis. Every system is ready-to-run on delivery and comes with all necessary hardware, software, networking and integrations, as well as support from PSSC Labs’ US-based team of experienced engineers. Each CloudOOP 12000 is also customized with the Hadoop distribution of choice.

CloudOOP 12000 big data servers feature:

  • Support for Intel Xeon Scalable Processors, which deliver an overall performance increase of up to 1.65x over the previous generation, and up to 5x on OLTP warehouse workloads versus the current installed base.
  • OS installed on separate SSD hard drive(s) to increase system speed, reliability and usability; supports Red Hat, CentOS, Ubuntu and most other Linux distributions, as well as Microsoft Windows
  • Supports up to 512GB DDR memory modules featuring Error Correcting (ECC) support that automatically detects and corrects memory errors
  • Redundant 90%+ Energy Efficient Power Supplies
  • Redundant RAID 1 Mirror Operating System Hard Drives
  • Up to 144TB storage with support for SSD, SAS and SATAIII
  • Configurable to RAID 0, 1, 5, 10 or JBOD
  • Dual GigE network bandwidth comes standard, with 10GigE, 40GigE, 100GigE and InfiniBand network connectivity available, as well as optional network integration with Intel Omni-Path Architecture.

“Our CloudOOP 12000 series not only reduces both CapEx and OpEx expenses but also offers superior data I/O performance for real-time processing,” said Alex Lesser, Executive Vice President of PSSC Labs. “It’s the perfect pre-configured server for a variety of applications across government, academic and commercial environments including Design & Engineering, Life Sciences, Physical Science, Financial Services and Machine/Deep Learning.”

Every PSSC Labs server and cluster comes with a three-year unlimited phone/email support package (additional support available), with all support provided by a US-based team of experienced engineers. Pricing for a CloudOOP 12000 with Intel Xeon Scalable processors starts at $2,500.

PSSC Labs services with every custom-build include:

  • Operating system installation
  • Storage partitioning and configuration (JBOD or RAID)
  • Net Connect integration service for easy network boot
  • BIOS customization
  • Power and cooling consultation prior to delivery
  • MAC address recording and reporting
  • Node name labeling
  • Rack ‘n Roll Services Available for Turn-Key Installation

About PSSC Labs

For technology powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership.  All products are designed and built at the company’s headquarters in Lake Forest, California.

Source: PSSC Labs

The post PSSC Labs Announces Hadoop Big Data Servers with Intel Xeon Scalable Processors appeared first on HPCwire.

Machine Reading Comprehension: Microsoft Sets a Tough AI Question for the ASC18 Contest

HPC Wire - Wed, 01/31/2018 - 01:01

More than 300 teams participating in the ASC18 Student Supercomputer Challenge will take on machine reading comprehension in the coming months, a highly challenging artificial intelligence task set by Microsoft. All participating teams will independently develop machine reading comprehension and question answering algorithm models with the CNTK deep learning framework, and train them with the latest supercomputing technology on a dataset called MS MARCO, in an attempt to make machines answer questions more accurately.

Enabling a machine to read a text and answer questions about it in natural language is one of the core difficulties of artificial intelligence (AI), and a central obstacle in current intelligent voice interaction and human-machine dialogue. People can easily summarize an article after reading it, identifying its characters, places, events, and so on. Research on machine reading comprehension aims to endow a computer with reading ability comparable to a human’s, i.e., to make the computer read an article and then answer any question related to the information in it. This ability is easy for a person but hard for a computer. For a long time, research on natural language processing focused on sentence-level comprehension: the computer is given a sentence and made to identify the subject, object, verb, attribute, adverbial, complement, and so on. Comprehension of long texts, however, has remained a difficulty because it involves higher-dimensional problems such as consistency between sentences, context, and inference.

At present, top AI experts and scholars at Microsoft, Carnegie Mellon University and Stanford University are working on this complicated task; achieving it would move today’s weak AI a big step toward strong AI. On the most recent leaderboard of SQuAD (Stanford Question Answering Dataset), a text comprehension challenge launched by Stanford University, the EM (Exact Match, the fraction of predicted answers that match the real answers exactly) score of the R-NET model submitted by Microsoft Research Asia’s Natural Language Computation Team on January 3, 2018 was 82.650, the highest yet and the first to exceed the human score of 82.304.
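The EM metric mentioned above can be made concrete with a short, hypothetical Python sketch. The normalization steps shown (lowercasing, stripping punctuation and articles, collapsing whitespace) follow the common SQuAD evaluation convention, but this is an illustration, not the official scorer:

```python
import re
import string

def normalize(text):
    # Lowercase, drop punctuation, remove articles, collapse whitespace,
    # following the usual SQuAD answer-normalization convention.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(predictions, references):
    # Percentage of predictions that match the reference answer exactly
    # after normalization (scores like 82.650 are on this 0-100 scale).
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return 100.0 * hits / len(predictions)

print(exact_match(["The Eiffel Tower", "1984"], ["eiffel tower", "1985"]))  # 50.0
```

Real leaderboards also report a softer F1 score over answer tokens, since Exact Match gives no credit for near misses.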

Moreover, judging from the contest question already issued, ASC18 will use MS MARCO (Microsoft Machine Reading Comprehension), an even more difficult machine reading comprehension and question answering dataset. The dataset was created from real data collected from Bing and Cortana and consists of 100,000 questions, 1 million paragraphs and more than 200,000 document links. In the preliminary round, Microsoft will provide a portion of the dataset for training models; in the final round, Microsoft will provide a brand-new test set for the contestants. To help the students get started and understand the contest question, Microsoft will also provide CNTK-based baseline code and relevant papers as references.
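For illustration, an MS MARCO-style record pairs a real search query with candidate passages and free-form human answers. The minimal sketch below shows the general shape of such a record; the field names follow the published dataset format but should be treated as assumptions here:

```python
import json

# One MS MARCO-style example: a query, candidate passages (with a flag
# marking which passage a human used to answer), and free-form answers.
record = json.loads("""
{"query": "what is machine reading comprehension",
 "passages": [
   {"passage_text": "Machine reading comprehension (MRC) is the task of answering questions about a given text.",
    "is_selected": 1}],
 "answers": ["The task of answering questions about a given text."]}
""")

print(record["query"])
print(record["passages"][0]["is_selected"])
```

Unlike SQuAD, where every answer is a span of the passage, answers here are free-form text, which is part of what makes the dataset harder.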

The final judging criterion for the ASC18 AI question is the reading comprehension accuracy of each team’s trained model, so team members must master the algorithmic characteristics of machine reading comprehension and question answering, as well as Microsoft’s CNTK deep learning framework, within two months. Because the contest dataset is large, fully exploiting the computing potential of different hardware becomes the key to winning. The question requires each team to develop its own machine reading comprehension algorithm model, to speed up training and improve accuracy with the latest supercomputing technology, and in particular to verify the trained model against a real question dataset. This is undoubtedly a “super challenge” for undergraduate contestants.

The ASC Student Supercomputer Challenge is the world’s largest supercomputing contest for college students, launched by China. Founded in 2012 and now in its seventh year, it has grown increasingly influential: to date, the contest has attracted more than 5,500 young participants around the world and more than 1,100 participating teams in total.

The post Machine Reading Comprehension: Microsoft Sets a Tough AI Question for the ASC18 Contest appeared first on HPCwire.

Pair of Crays Advance Petascale Weather Forecasting in India

HPC Wire - Tue, 01/30/2018 - 17:13

India is stepping up its supercomputing prowess with the launch of two Cray XC40 supercomputers this month. The larger 4 petaflops unit (“Pratyush”), located at the Indian Institute of Tropical Meteorology (IITM) in Pune, will primarily be aimed at improving weather and climate models. The second 2.8 petaflops machine (“Mihir”), installed at the National Centre for Medium Range Weather Forecast (NCMRWF) in Noida (near Delhi), mainly will be used to support daily operational forecasts.

Accepted in late 2017, Pratyush and Mihir (Indian names for the sun) represent 6.8 petaflops of peak capacity and an investment of Rs 450 crore (~$70 million USD) by the Indian government. The additional computational power, connected to 18 petabytes of Cray ClusterStor storage capacity, will enable the Indian Ministry of Earth Sciences (MoES) to produce weather and climate models at much higher resolutions, down to 3 km at regional scale and 12 km at global scale.

At a dedication for the IITM system held on Jan. 8, Union minister for science and technology Harsh Vardhan hailed the new resource as the fourth-fastest dedicated weather and climate science machine in the world, on par with machines owned by Japan, the United States, the United Kingdom and South Korea. The system should put India back into the top quintile of the world’s fastest computers as ranked by the Top500.

The minister said the new machines would provide cutting-edge forecast services to the citizens of India, to enable them to better understand and prepare for weather and climate conditions like monsoons, tsunamis, cyclones, earthquakes, lightning, flood, drought, and other extreme events. The computers will also be used for air quality control, water resource management and to support the fishing industry.

At the inauguration of the NCMRWF cluster (held today, Jan. 30), Science Minister Vardhan said they intend to start block-level weather forecasting by June 2018, which will further advance critical alert systems in the nation.

“The ministry of earth sciences is doing historic work,” Vardhan told Hindustan Times. “The 2004 tsunami caught us unawares, but now India has a tsunami-warning system that can provide alerts and information to neighbouring countries as well. It is because of these early warnings that recent cyclones have not been as devastating as the ones (that hit the subcontinent) a decade ago.”

The supercomputer program in India began in the late 1980s, after the U.S. blocked the export of a Cray supercomputer under technology embargoes enacted by the U.S. and Europe. The relationship is now back on track.

Cray provided India with its current top number cruncher back in 2015. Located at the Indian Institute of Science’s Supercomputer Education and Research Centre (SERC), the XC40 ranks 228th on the latest Top500 list with 901.5 Linpack teraflops (1.2 petaflops Rpeak). Another Cray (an XC30) is installed at the Tata Institute of Fundamental Research as part of the Indian Lattice Gauge Theory Initiative.

The post Pair of Crays Advance Petascale Weather Forecasting in India appeared first on HPCwire.

U.S. Leads but China Gains in NSF 2018 S&E Indicators Report

HPC Wire - Tue, 01/30/2018 - 15:13

For now, the U.S. still enjoys a leadership position in science and engineering worldwide according to the Science and Engineering Indicators 2018 report issued this month by the National Science Board, the governing body for the National Science Foundation.

The U.S. invests the most in research and development (R&D), attracts the most venture capital, awards the most advanced degrees, provides the most business, financial, and information services, and is the largest producer in high-technology manufacturing sectors, according to the report. On the downside, China’s dramatic rise as a force in science continues to challenge U.S. preeminence. There’s also been a recent drop in the number of international students seeking graduate degrees in the U.S. “These students are a critical component of the U.S. workforce in these high demand fields,” says the report.

Issued every two years, the S&E Indicators report is painted with a broad brush that catches major trends but is slim on details for individual industries and science domains. Broadly, it’s a “scorecard” of U.S. S&E activities compared with the global community. The 2018 report again shows the U.S. leading in most categories, but NSB Chair Maria Zuber struck a cautionary note on the results.

“This year’s report shows a trend that the U.S. still leads by many S&T measures, but that our lead is decreasing in certain areas that are important to our country. That trend raises concerns about impacts on our economy and workforce, and has implications for our national security. From gene editing to artificial intelligence, scientific advancements come with inherent risks. And it’s critical that we stay at the forefront of science to mitigate those risks,” said Zuber, who is also VP for Research at Massachusetts Institute of Technology.

NSB has created a web presentation that makes it easy to roam through the report. NSB selected 42 S&E indicators and presented the material in eight chapters covering education, workforce, R&D, public attitudes towards science, IP/innovation, and industry. Data is drawn from many sources covering varying time periods. A fast way to zip through the material is to click on the figures section and march through them.

Here are a few noteworthy report bullets:

  • Global R&D. The U.S. led the world in R&D expenditures at $496 billion (a 26 percent share of the global total), but China was a decisive second at 21 percent ($408 billion). China has grown R&D spending roughly 18 percent annually since 2000; its focus is primarily on development rather than basic or applied research. Over the same time frame, U.S. R&D spending has grown by 4 percent.
  • Venture Capital. China is no slouch here either; VC spending rose in China from approximately $3 billion in 2013 to $34 billion in 2016, climbing from 5 percent to 27 percent of the global share, the fastest increase of any economy. The U.S. attracted “nearly $70 billion,” slightly more than half of the $130 billion VCs invested globally in 2016.
  • International Grad Students. The number of international students in the U.S. dropped between the fall of 2016 and the fall of 2017 with the largest declines in graduate level computer science (13 percent) and engineering (8 percent). That’s not surprising given the current political climate. “These students are a critical component of the U.S. workforce in these high demand fields,” according to the report.
  • Regional S&E Specialization. Invention is a good example. “Of the three leaders in U.S. Patent and Trademark Office patents, U.S. and EU inventions are concentrated in chemistry and health, including pharmaceuticals and biotechnology. Japan’s patents are primarily in semiconductors, telecommunications, optics, and materials and metallurgy. Information and communication technologies—including digital communications, semiconductors, telecommunications, and optics—are mainstays of South Korea and China.”

Like past efforts, the 2018 Science and Engineering Indicators report is a massive effort.

“NSF’s Science and Engineering Indicators is the highest-quality and most comprehensive source of information on how the U.S. scientific and engineering enterprise is performing domestically and internationally,” said NSF Director France Córdova. “The 2018 report presents a wealth of easily accessible, vital data. It provides insights into how science and engineering research and development are tied to economic and workforce development, as well as STEM education, in the U.S. and abroad.”

S&E workforce assessment is always an important part of the report and nowhere is the workforce challenge greater than in computer sciences. The Bureau of Labor Statistics has projected 23 percent growth from 2014 to 2024 in the computer systems design and related services industry – from 1,777,700 jobs in 2014 to 2,186,600 jobs in 2024.

Commenting on this year’s results, Steve Conway, senior VP of Research at Hyperion Research, noted that China has moved far ahead in graduating bachelor’s-level students. In 2015, the report’s most recent year, China graduated 1.6 million such students, compared with 742,000 in the U.S. and 780,000 in Europe’s top eight countries combined. But that’s misleading, according to Conway.

“Things looks different at the Ph.D. level, where China had a modest lead over the U.S., 34,000 to 25,000, but Europe’s top eight countries together produced 58,000 doctoral graduates. This is further evidence that Europe is as strong a contender in the exascale race as the U.S. and China, adding to the EuroHPC initiative’s recently announced plan to spend another 1 billion euros on exascale development by 2020,” said Conway. (see chart below)

Interestingly, the number of computer programmer positions is forecast to shrink by 8 percent over the 2014-2024 timeframe, while computer and mathematical scientist positions will grow 14.9 percent, according to BLS. Again, the Indicators report is a broad measure. Within HPC, the mix of skills sought changes over time. Expertise in parallel programming, for example, is at a premium currently and likely to remain so for some time.

This year the NSB slightly revised its definition of technology-rich industries. Here are the new ones:

Knowledge- and technology-intensive (KTI) industries: Those industries that have a particularly strong link to science and technology. These industries are five service industries—financial, business, communications, education, and health care; five high-technology manufacturing industries—aerospace; pharmaceuticals; computers and office machinery; semiconductors and communications equipment; and measuring, medical, navigation, optical, and testing instruments; and five medium-high-technology industries—motor vehicles and parts, chemicals excluding pharmaceuticals, electrical machinery and appliances, machinery and equipment, and railroad and other transportation equipment.

Knowledge-intensive services industries: Those industries that incorporate science, engineering, and technology into their services or the delivery of their services, consisting of business, information, education, financial, and health care.

According to the report, KTI industries produce roughly one-third of world GDP. The U.S. leads in providing business, financial, and information services, accounting for 31 percent of the global share, followed by the European Union (EU) at 21 percent. China is the third-largest producer of these services at 17 percent and continues to grow at a far faster rate (19 percent annually) than the U.S. and other developed countries. The U.S. is also the largest producer in high-technology manufacturing (31 percent global share). This includes production of air and spacecraft, semiconductors, computers, pharmaceuticals, and measuring and control instruments. China is second at 24 percent, more than doubling its share over the last decade.

Amid all the hand-wringing over international S&E scorekeeping, it's worth remembering that modern science is intensely collaborative and cuts across national borders. This is evident in scientific publishing: among the major producers of S&E publications, the United Kingdom had the highest international collaboration rate (57 percent) in 2016, followed by France (55 percent) and Germany (51 percent). The U.S. followed with a 37 percent international collaboration rate, up 12 percentage points from 2006.

Overall, the 2018 S&E Indicators report notes few dramatic changes but some accelerating trends. Fortunately, NSB has made the report easy to peruse.

Link to report: https://www.nsf.gov/statistics/2018/nsb20181/

The post U.S Leads but China Gains in NSF 2018 S&E Indicators Report appeared first on HPCwire.

Andrew Jones to Deliver Webinar on Benchmarking of HPC Systems

HPC Wire - Tue, 01/30/2018 - 14:39

Jan. 30, 2018 — Andrew Jones, Vice-President, Strategic HPC Consulting and Services, will deliver an impartial webinar on benchmarking of HPC systems. The webinar is designed for anyone involved in high-performance or scientific computing.

Are you trying to answer any of these three questions?

  • Which processor or system architecture is right for your HPC needs?
  • How fast does your code run, and where should you optimize it?
  • What do benchmarks really mean?

In this live webinar, Andrew will answer these questions and discuss the topics below. There will be an opportunity to have your own questions answered, either during the session or after the event.

Key topics:

  • When to use benchmarking
  • What benchmarks can and can’t tell you
  • Rules, consistency and pitfalls
  • Selecting the most appropriate benchmarks
  • Extrapolating to larger scales or newer technologies
  • Tips, tricks, and best practice
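The announcement doesn't spell out these topics, but the "consistency and pitfalls" point reflects standard benchmarking practice: discard warm-up runs (which absorb one-off costs such as cache or file-system warming) and report a robust statistic over repeated trials rather than a single measurement. A minimal sketch of that idea, using only the Python standard library (the function names and workload here are illustrative, not taken from the webinar):

```python
import time
import statistics

def benchmark(fn, *, warmup=3, trials=10):
    """Time fn(), discarding warm-up runs and returning the median of repeated trials.

    Warm-up runs absorb one-off startup costs; the median resists outlier
    samples caused by OS jitter or other transient interference.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def workload():
    # A stand-in compute kernel: sum of squares over a fixed range.
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    t = benchmark(workload)
    print(f"median runtime: {t * 1e3:.3f} ms over 10 trials")
```

Real HPC benchmarking adds further rules on top of this (pinning processes to cores, controlling compiler flags, and documenting the exact system configuration so results are reproducible), which is presumably the territory the webinar covers.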

Please click here to register for the webinar.

Source: NAG

The post Andrew Jones to Deliver Webinar on Benchmarking of HPC Systems appeared first on HPCwire.
