XSEDE News

This is the XSEDE user news RSS feed. You can view news in the XSEDE User Portal at https://www.xsede.org/news?p_p_id=usernews_WAR_usernewsportlet&p_p_lifecycle=0&_usernews_WAR_usernewsportlet_view=default.

APPLICATION DEVELOPER AND SYSTEMS ANALYST I

Thu, 03/23/2017 - 13:17

Title: APPLICATION DEVELOPER AND SYSTEMS ANALYST I
Deadline to Apply: 2017-04-21
Deadline to Remove: 2017-04-21
Job Summary: Owned by the U.S. Department of Energy and operated by Fermi Research Alliance, LLC, Fermilab is home to the country’s high energy physics program and host to over 1500 physicists from around the world. Fermilab offers a diverse and energetic professional setting for a challenging career.

Fermilab’s Scientific Distributed Computing Solutions Department seeks an Application Developer and Systems Analyst I to work with a team of developers, computing services specialists, and scientists, contributing to the development, deployment, and support of the workflow/workload management system that scientists use to provision and manage computational resources on grids, high-performance computing clusters, and commercial and community cloud service providers.
Job URL: https://jobs-us.technomedia.com/fermilab/?_3x2123322Z3U45Kb895cbde-dcc6-4a6b-ba62-3a2d082a58a0&offerid=206
Job Location: Batavia, IL
Institution: Fermi National Accelerator Laboratory
Requisition Number:
Posting Date: 2017-03-23
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

HPC Storage Software Engineer I

Thu, 03/23/2017 - 12:59

Title: HPC Storage Software Engineer I
Deadline to Apply: 2017-04-30
Deadline to Remove: 2017-04-30
Job Summary: Position Details – “What You Will Do”
As part of the Data Analysis Services Group (DASG), NCAR is hiring a Software Engineer/Programmer I to perform systems programming, administration, and technical support for the Computational & Information Systems Laboratory’s (CISL) high-performance storage systems, high-performance networks, and data transfer services. This position will assist with the installation, maintenance, administration, troubleshooting, and operation of both software and hardware systems. The environment is composed of multi-vendor resources with numerous specialized hardware components. The Software Engineer I will develop web-based documentation of system procedures and applications. This position will also develop programs to support automated system maintenance, monitor system resources and usage, and implement system security policies.
The primary job location for this position is Boulder, Colorado. The production systems supported are located at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. The incumbent will be required to work at the NWSC during periods of system installation, system upgrade, or system troubleshooting.
Software Engineering and Development
As part of the team, develops, implements and documents new features or capabilities in system administration and system monitoring software. Develops and maintains systems software as necessary for the deployment and management of high-performance parallel file systems and networks. Develops and maintains security monitoring and analysis software. Helps define group standards and guidelines for software development and documentation.
Research and Evaluation
Assists the Data Analysis Services Group in research, planning, and recommendations to the High-End Services Section and CISL management for hardware and software products, configurations and functional enhancements or upgrades in support of the Data Analysis and Visualization missions of CISL. Evaluates, benchmarks, and reports on new hardware and software systems related to high-performance storage solutions, both centralized and compute localized.
Participates in HPC Futures Lab I/O Innovation Lab research projects. This may include development of systems level code to support different file systems and storage hardware. The research will include shared file systems across both local and wide area networks, evaluation of cloud based object store solutions, evaluation of high-performance networks, and evaluation of new storage and memory technologies.
Operational Monitoring and Troubleshooting
Operates and monitors the behavior of the group’s server and storage systems, networks, and related hardware on a routine, daily basis to ensure proper and efficient operations. Alerts other DASG staff, vendor representatives, and/or CISL Operations staff of anomalous conditions or behaviors, as appropriate, and takes remedial actions as necessary. Diagnoses and may repair failed hardware components.
Provides service on a 7x24 on-call basis, troubleshooting and resolving system-related problems reported by users or identified as part of ongoing monitoring. Refers and escalates problems to senior members of the DASG staff as appropriate.
Systems Administration
Provides systems support for diverse architectures. Installs and upgrades system hardware and software, including computational systems, high-performance storage systems and a variety of network fabrics. Helps define standards and guidelines for operation and maintenance and produces systems operation and procedural documentation. Compiles, installs and maintains commercial and free application software.
Organizational Representation and Reporting
Provides regular DASG activities reports to management and may contribute to CISL or NCAR annual report and development plans. Attends group, section and divisional meetings and may represent the Data Analysis Services Group and its activities at such meetings.
Minimum Job Requirements – “What You Need”
Bachelor’s degree and one to three years of experience; Associate’s degree and three to four years of experience; or an equivalent combination of education and experience in one or more of the following fields: Computer Science, Mathematics, Computer Engineering, Information Sciences, Software Engineering, or equivalent.
Experience should be in the following areas:
Experience with the UNIX operating system environment, specifically Linux, with an emphasis on storage systems, file systems and networks.
Experience in UNIX scripting languages and at least one higher level programming language.
Demonstrated skill in at least one common scripting language, such as Perl, csh, Python, or PHP.
Demonstrated basic knowledge of a higher level programming language and general software engineering practices.
Demonstrated skill in the administration of Linux based stand-alone and/or clustered systems including a basic knowledge of file systems and computer security.
Demonstrated skill in performing tasks requiring organization and attention to detail.
Good English written and verbal communication skills and the ability to write systems documentation.
Job URL: https://ucar.silkroad.com/epostings/index.cfm?fuseaction=app.jobinfo&jobid=218123&version=1#.WMbkJ179mKs.gmail
Job Location: Boulder, CO
Institution: National Center for Atmospheric Research (NCAR)
Requisition Number: 17108
Posting Date: 2017-03-23
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

R HPC Training Webinar

Thu, 03/23/2017 - 12:45

This training will show existing R users how to use R in an HPC environment. We will cover logistical issues with getting your R code up and running on TACC resources, as well as how to take advantage of the unique resources available at TACC, including usage of the ‘parallel’ package in R and how to manage CRAN packages in a multi-user environment. Previous familiarity with R is assumed.

Registration: https://www.xsede.org/web/xup/course-calendar

Please submit any questions you may have via the Consulting section of the XSEDE User Portal.

https://portal.xsede.org/help-desk

NHERI DesignSafe Training Webinar - Unleashing Jupyter and R in DesignSafe

Thu, 03/23/2017 - 10:28

This webinar focuses on utilizing Jupyter and R within the DesignSafe environment. Topics covered include:

- Advanced Data Analysis
- Data Correlation
- Plotting

Please see the following link to register for the event:

https://www.designsafe-ci.org/learning-center/training/032917/

Registrants will be contacted via email shortly before the event with connection information for the webinar.

About NHERI and DesignSafe:

NHERI – NATURAL HAZARDS ENGINEERING RESEARCH INFRASTRUCTURE
The Natural Hazards Engineering Research Infrastructure (NHERI) is an NSF-funded, distributed, multi-user national facility that provides the natural hazards engineering community with state-of-the-art research infrastructure.

DESIGNSAFE CYBERINFRASTRUCTURE
DesignSafe-ci.org (DesignSafe) is the CI component of the NHERI collaboration. DesignSafe embraces a “cloud” strategy for the “big data” generated in natural hazards engineering research. It supports research workflows, data analysis and visualization, and the full lifecycle of data required by engineers and scientists to effectively address the threats posed to civil infrastructure by natural hazards.

Sr. Software Engineer

Tue, 03/21/2017 - 14:29

Title: Sr. Software Engineer
Deadline to Apply: 2017-04-21
Deadline to Remove: 2017-04-21
Job Summary: The Sr. Software Engineer will serve as a technical resource for all users on highly complex code development, architecture, debugging, profiling, optimization, documentation, installation, and maintenance of open-source scientific applications, as well as on data mining and best practices for utilizing HPC resources. The incumbent will enable faculty to advance research-computing agendas by interacting directly with researchers, providing feedback on how to improve application performance, and actively participating in application development. There will also be many opportunities to establish scientific collaborations and partnerships with research groups. This position will serve as lead on moderate to large IT architecture and applications development projects, will be expected to steer research groups toward innovative solutions, and will provide oversight to lower-level staff. As a project lead, the incumbent is expected to interact with various departments and external constituents outside of JHU.
Job URL: https://jobs.jhu.edu/jhujobs/jobview.cfm?reqId=313472&postId=14077
Job Location: Baltimore, MD
Institution: Johns Hopkins University
Requisition Number: 313472
Posting Date: 2017-03-21
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

Jetstream: New VM flavors (sizes) being introduced

Tue, 03/21/2017 - 07:28

Jetstream is increasing the diversity of flavors (VM sizes) available to users. The current m1.* flavors will have their root disks capped at 60GB. A new storage flavor family, s1.*, is being created to accommodate flavors where the root disk is greater than 60GB.

This change is being undertaken to encourage users to adopt modern cloud computing techniques: in particular, encouraging the use of volumes, which are persistent storage entities, and discouraging the use of the local ephemeral storage associated with compute instances.
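For users new to the volume workflow, here is a minimal sketch of creating and attaching a persistent volume programmatically. It assumes the openstacksdk Python package and a clouds.yaml entry named "jetstream" (both are illustrative assumptions, not part of this announcement); the same operations are available through the Atmosphere interface and the OpenStack CLI.

    # Minimal sketch: create a persistent volume and attach it to a
    # running instance so data outlives the instance's ephemeral disk.
    # Assumes `pip install openstacksdk` and a clouds.yaml entry named
    # "jetstream" with valid credentials (hypothetical name).
    import openstack

    conn = openstack.connect(cloud="jetstream")

    # Create a 100 GB volume -- a persistent storage entity.
    volume = conn.create_volume(size=100, name="my-data")

    # Attach it to an existing instance; inside the VM it shows up as a
    # block device (e.g., /dev/sdb) to be formatted and mounted once.
    server = conn.get_server("my-instance")
    conn.attach_volume(server, volume)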

It should be noted that imaging requests for s1.* instances via Atmosphere will not be honored. As before, imaging should be done on the smallest m1.* size possible. Information about imaging and disk sizes is available via wiki.jetstream-cloud.org.

Please send any questions or concerns to Jetstream support via help@xsede.org.

Wrangler Maintenance 3/28

Mon, 03/20/2017 - 19:24

Wrangler will not be available from 9 a.m. to 11 a.m. (CT) on Tuesday, 28 March 2017. System maintenance will be performed during this time.

Thank you,

- TACC Team

Jetstream Atmosphere Web Interface Maintenance - 3/28/17 Noon to 8p Eastern

Mon, 03/20/2017 - 12:24

Jetstream’s Atmosphere interface will be offline on Tuesday, March 28, 2017 from 12pm Eastern through 8pm Eastern for upgrades/enhancements.

Existing/running instances will still be available via ssh. API users will not be affected by this upgrade.

Please contact help@xsede.org with any questions or concerns.

XSEDE HPC Monthly Workshop - April 18-19, 2017 - MPI

Mon, 03/20/2017 - 12:02

XSEDE HPC Workshop: MPI
April 18-19, 2017

XSEDE, along with the Pittsburgh Supercomputing Center, is pleased to announce a two-day MPI workshop, to be held April 18-19, 2017.

This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. Attendees will leave with a working knowledge of how to write scalable codes using MPI – the standard programming tool of scalable parallel computing.
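The workshop itself teaches MPI in C and Fortran; purely as an illustration of the message-passing model, here is a minimal sketch in Python using the mpi4py package (an assumption for illustration, not part of the workshop materials). Each rank sums its own slice of the data, and a reduction combines the partial sums on rank 0.

    # Minimal MPI sketch (illustration only; the workshop uses C/Fortran).
    # Run with, e.g.: mpiexec -n 4 python sum_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID, 0..size-1
    size = comm.Get_size()   # total number of processes

    # Each rank sums a strided slice of 0..99, then the partial sums
    # are combined on rank 0 with a reduction.
    partial = sum(range(rank, 100, size))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)  # 4950 regardless of process count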

Due to demand, this workshop will be telecast to several satellite sites.

This workshop is NOT available via a webcast.

You may attend this event at any of the following sites:

  • Pittsburgh Supercomputing Center
  • University of Colorado Boulder
  • University of Delaware
  • National University
  • Tufts University
  • Howard University
  • Georgia State University
  • University of Utah
  • University of Houston – Clear Lake
  • Purdue University
  • Ohio Supercomputer Center
  • Pennsylvania State University
  • California State Polytechnic University, Pomona
  • Old Dominion University
  • University of Nebraska-Lincoln

Please choose the appropriate link on the XSEDE Portal Registration pages: https://portal.xsede.org/course-calendar

For more information about this event, including the agenda and links to the PowerPoint presentations, please visit the following page:

https://www.psc.edu/136-users/training/2532-xsede-hpc-workshop-april-18-19-2017-mpi

ECSS symposium 3/21 10am Pacific/1pm Eastern

Thu, 03/16/2017 - 15:07

Please join us for the 3/21 ECSS symposium, 10am Pacific/1pm Eastern.

https://zoom.us/j/350667546

Dial: +1 408 638 0968 (US Toll) or +1 646 558 8656 (US Toll)
Meeting ID: 350 667 546

David O’Neal (PSC) will be discussing his work with PI Curtis Marean of Arizona State University. The title of the talk is “The Paleoscape Project for Studies of Modern Human Origins”. The work is fascinating and was a best paper winner at the XSEDE conference. It has been hypothesized that the Cape area of South Africa, due to its uniquely rich coastal and terrestrial food resources, may have been the refuge region for the progenitor lineage of all modern humans during harsh global glacial phases. Human origins research recognizes the evolutionary significance of paleoclimate and paleoenvironment. There are three components to the ECSS work: 1) run a South African regional climate model to hindcast the climate parameters needed to project vegetation and other resources into the past, 2) run vegetation projections from these climate projections, and 3) run multiple agent-based simulations of the foragers on these ancient paleoscapes. This unique endeavor is made possible by an unprecedented collaboration of scientists from several countries and many disciplines. This work was also the subject of a Campus Champion Fellows collaboration with Fellow Eric Shook.

For the full abstract please see http://www.xsede.org/ecss-symposium.

RESEARCH PROGRAMMER / SENIOR RESEARCH PROGRAMMER

Thu, 03/16/2017 - 14:28

RESEARCH PROGRAMMER / SENIOR RESEARCH PROGRAMMER
CyberGIS Center

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers and students together to solve grand challenges at rapid speed and scale.

NCSA is currently seeking one or more Research Programmers/Senior Research Programmers to design, develop, test and implement software and interfaces for the CyberGIS Center for Advanced Digital and Spatial Studies (CyberGIS), and play a major role in developing the CyberGIS platform and associated geospatial software and tools with emphasis on computation, performance, and scalability to big geospatial data and complex spatial analysis and modeling.

http://www.ncsa.illinois.edu/about/jobs/A1700102

HPC Software Engineer

Thu, 03/16/2017 - 14:09

Title: HPC Software Engineer
Deadline to Apply: 2017-04-16
Deadline to Remove: 2017-04-16
Job Summary: The HPC Software Engineer will be an integral member of multiple research teams focused on cutting-edge computational astrophysics research. The HPC Software Engineer will work with researchers associated with the Department of Astrophysical Sciences to provide domain-centric computational expertise in algorithm development and selection, code development, and optimization to create efficient and scalable research code.
Job URL: https://main-princeton.icims.com/jobs/7269/hpc-software-engineer/job?hub=15
Job Location: Princeton, NJ
Institution: Princeton University
Requisition Number: 20177269
Posting Date: 2017-03-16
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

Research Software Engineer

Thu, 03/16/2017 - 14:07

Title: Research Software Engineer
Deadline to Apply: 2017-04-16
Deadline to Remove: 2017-04-16
Job Summary: The Research Software Engineer will be an integral member of multiple research teams focused on cutting-edge computational neuroscience research. The Research Software Engineer will work with researchers associated with Princeton Neuroscience Institute (PNI) to provide domain-centric computational expertise in algorithm development and selection, code development, and optimization to create efficient and scalable research code.
Job URL: https://main-princeton.icims.com/jobs/7180/research-software-engineer/job?hub=15&mobile=false&width=1200&height=500&bga=true&needsRedirect=false&jan1offset=-300&jun1offset=-240
Job Location: Princeton, NJ
Institution: Princeton University
Requisition Number: 20177180
Posting Date: 2017-03-16
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

DOE Office of Science Graduate Student Research (SCGSR) Program

Thu, 03/16/2017 - 14:02

Title: DOE Office of Science Graduate Student Research (SCGSR) Program
Deadline to Apply: 2017-05-16
Deadline to Remove: 2017-05-16
Job Summary: The goal of the Office of Science Graduate Student Research (SCGSR) program is to prepare graduate students for science, technology, engineering, or mathematics (STEM) careers critically important to the DOE Office of Science mission, by providing graduate thesis research opportunities at DOE laboratories. The SCGSR program provides supplemental awards to outstanding U.S. graduate students to pursue part of their graduate thesis research at a DOE laboratory in areas that address scientific challenges central to the Office of Science mission. The research opportunity is expected to advance the graduate student’s overall doctoral thesis while providing access to the expertise, resources, and capabilities available at the DOE laboratories.
Job URL: https://science.energy.gov/wdts/scgsr/
Job Location: Various
Institution: U.S. Department of Energy
Requisition Number:
Posting Date: 2017-03-16
Job Posting Type: Graduate Fellowship
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact jobs@hpcuniversity.org with questions.

XSEDE Short Survey: Training

Thu, 03/16/2017 - 12:43

The XSEDE Training team would appreciate your input and feedback on several aspects of training, such as topics and delivery methods. Our goal is to effectively improve and expand XSEDE training offerings. Please take a few minutes to complete a five-question survey currently available at the XSEDE User Portal, visible after you sign in:

https://portal.xsede.org/

Thank you in advance for your input and feedback.

Sincerely,
Susan Mehringer
XSEDE Training Lead

MATLAB License Server Maintenance 21 March 2017

Thu, 03/16/2017 - 12:16

ATS Systems have scheduled a maintenance window for the Matlab license server on Tuesday, 03/21/17 at 6PM CST in order to upgrade it to the recently released version 2017a. This maintenance is expected to take 180 minutes to complete. Users may be unable to access services dependent on these servers while the maintenance is being performed, and Matlab on TACC systems may not be able to check out a license during this period.

XSEDE ALLOCATION REQUESTS Open Submission, Guidelines, Resource and Policy Changes

Thu, 03/16/2017 - 08:23

XSEDE is now accepting Research Allocation Requests for the allocation period July 1, 2017 to June 30, 2018. The submission period runs from March 15, 2017 through April 15, 2017. Please review the new XSEDE systems and important policy changes (see below) before you submit your allocation request through the XSEDE User Portal.
————————————————————
Important
A recent change to proposal submission: the allocations proposal submission system (XRAS) will now force submissions to adhere to the uploaded-document page limits, which can be found at https://portal.xsede.org/group/xup/allocation-policies#63

NEW XSEDE Resources:
See the Resource Catalog for a list of XSEDE compute, visualization and storage resources, and more details on the new systems (https://portal.xsede.org/web/guest/resources/overview).

  • The Texas Advanced Computing Center (TACC) introduces its new resource, Stampede 2. Stampede 2 will enter full production in fall 2017 as an 18-petaflop national resource that builds on the successes of the original Stampede system it replaces. The first phase of the Stampede 2 rollout features the second generation of processors based on Intel’s Many Integrated Core (MIC) architecture. These 4,200 Knights Landing (KNL) nodes represent a radical break with the first-generation Knights Corner (KNC) MIC coprocessor. Unlike the legacy KNC, a Stampede 2 KNL is not a coprocessor: each 68-core KNL is a stand-alone, self-booting processor that is the sole processor in its node. Phase 2 will add approximately 50% more compute power to the system as a whole by introducing new nodes equipped with a future Intel processor. When fully deployed, Stampede 2 will deliver twice the performance of the original Stampede system. Please note that Stampede 2 is allocated in service units (SUs); an SU is defined as one wall-clock node-hour, not core-hours.
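Because the charging unit is the wall-clock node-hour, a job’s SU cost does not depend on how many cores it uses within each node. A minimal sketch of the arithmetic, with hypothetical job numbers:

    # Stampede 2 charging sketch: SUs = nodes x wall-clock hours.
    # The job numbers below are hypothetical.
    def sus_charged(nodes: int, wall_hours: float) -> float:
        return nodes * wall_hours

    # A 4-node, 10-hour job costs 40 SUs whether it uses one core
    # per node or all 68 cores of each KNL node.
    print(sus_charged(nodes=4, wall_hours=10.0))  # 40.0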

Starting this submission period, both the Pittsburgh Supercomputing Center (PSC) and the San Diego Supercomputer Center (SDSC) will allocate their GPU compute resources separately from their standard compute nodes; please see the details below:

  • SDSC’s Comet GPU has 36 general-purpose GPU nodes, with 2 Tesla K80 graphics cards per node, each with 2 GK210 GPUs (144 GPUs in total). Each GPU node also features 2 Intel Haswell processors of the same design and performance as the standard compute nodes (described separately). The GPU nodes are integrated into the Comet resource and available through the SLURM scheduler for either dedicated or shared node jobs (i.e., a user can run on 1 or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD, which can be specified as a scratch resource during job execution; in many cases using SSDs can alleviate I/O bottlenecks associated with using the shared Lustre parallel file system.
    • Comet’s GPUs are a specialized resource that performs well for certain classes of algorithms and applications. There is a large and growing base of community codes that have been optimized for GPUs including those in molecular dynamics, and machine learning. GPU-enabled applications on Comet include: Amber, Gromacs, BEAST, OpenMM, TensorFlow, and NAMD.
  • PSC introduces Bridges GPU, a newly allocatable resource within Bridges that features 32 NVIDIA Tesla K80 GPUs and 64 NVIDIA Tesla P100 GPUs. Bridges GPU complements Bridges Regular Memory, Bridges Large Memory, and the Pylon storage system to accelerate deep learning and a wide variety of application workloads. It comprises 16 GPU nodes, each with 2 NVIDIA Tesla K80 GPU cards, 2 Intel Xeon CPUs (14 cores each), and 128GB of RAM, plus 32 GPU nodes, each with 2 NVIDIA Tesla P100 GPU cards, 2 Intel Xeon CPUs (16 cores each), and 128GB of RAM.
    • PSC’s Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. It integrates a flexible, user-focused, data-centric software environment with very large shared memory, a high-performance interconnect, and rich file systems to empower new research communities, bring desktop convenience to HPC, and drive complex workflows.
    • Bridges supports new communities through extensive interactivity, gateways, persistent databases and web servers, high productivity programming languages, and virtualization. The software environment is extremely robust, supporting enabling capabilities such as Python, R, and MATLAB on large-memory nodes, genome sequence assembly on nodes with up to 12TB of RAM, machine learning and especially deep learning, Spark and Hadoop, complex workflows, and web architectures to support gateways.

Storage Allocations: Continuing this submission period, access to XSEDE storage resources along with compute resources must be requested and justified, both in the XRAS application and in the body of the proposal’s main document. The following XSEDE sites will offer allocatable storage facilities:

    • SDSC (Data Oasis)
    • TACC (Ranch)
    • TACC (Wrangler storage)
    • PSC (Pylon)
    • IU-TACC (Jetstream)

Storage needs have always been part of allocation requests; however, XSEDE will now enforce storage awards in unison with the storage sites. Please see https://www.xsede.org/storage.

Estimated Available Service Units/GB for upcoming meeting:
Indiana University/TACC (Jetstream) 5,000,000
LSU (SuperMIC) 6,500,000
Open Science Grid (OSG) 2,000,000
PSC Bridges (Regular Memory) 38,000,000
PSC Bridges (Large Memory) 700,000
PSC Bridges (Bridges GPU) TBD
PSC Persistent disk storage (Pylon) 2,000,000
SDSC Dell Cluster with Intel Haswell Processors (Comet) 80,000,000
SDSC Dell Cluster with Intel Haswell Processors (Comet GPU) TBD
SDSC Medium-term disk storage (Data Oasis) 300,000
Stanford Cray CS-Storm GPU Supercomputer (XStream) 500,000
TACC HP/NVIDIA Interactive Visualization and Data Analytics System (Maverick) 4,000,000
TACC Dell/Intel Knights Landing System (Stampede2 – Phase 1) 10,000,000 node hours
TACC Data Analytics System (Wrangler) 180,000 node hours
TACC Long-term Storage (Wrangler Storage) 500,000
TACC Long-term tape Archival Storage (Ranch) 2,000,000

Allocation Request Procedures:

  • In the past, code performance and scaling were to be addressed in a section of every research request’s main document. That section has been overlooked by many PIs in recent quarterly submission periods, which has led to severe reductions or even complete rejection of both new and renewal requests. Starting this quarterly submission period, it is mandatory to upload a scaling and code-performance document detailing your code’s efficiency. Please see section 7.2 Review Criteria of the Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies).
  • It is also mandatory to disclose, in the main document, access to other cyberinfrastructure resources (e.g., NSF Blue Waters, DOE INCITE resources, local campus, …). Please see section 7.3 Review Criteria of the Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies). Failure to disclose access to these resources could lead to severe reductions or even complete rejection of both new and renewal requests. If there is no access to other cyberinfrastructure resources, this should be made clear as well.
  • The XRAC review panel has asked that PIs include the following: "The description of the computational methods must include explicit specification of the integration time step value, if relevant (e.g., Molecular Dynamics simulations). If these details are not provided, a 1 femtosecond (1 fs) time step will be assumed, with this information being used accordingly to evaluate the proposed computations." (A short sketch of this arithmetic follows this list.)
  • All funding used to support the Research Plan of an XRAC Research Request must be reported in the Supporting Grants form in the XRAS submission. Reviewers use this information to assess whether the PI has enough support to accomplish the Research Plan, analyze data, prepare publications, etc.
  • Publications that have resulted from the use of XSEDE resources should be entered into your XSEDE portal profile which you will be able to attach to your Research submission.
  • Also note that the scaling and code-performance information is expected to come from the resource(s) being requested in the research request.
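To illustrate why the assumed integration time step matters when sizing a request, here is a small sketch of the step-count arithmetic (all numbers are hypothetical; the 1 fs default is the panel’s stated assumption above):

    # Number of MD integration steps implied by a simulated time span.
    # Hypothetical numbers for illustration only.
    FS_PER_NS = 1_000_000  # 1 ns = 10^6 fs

    def md_steps(simulated_ns: float, timestep_fs: float = 1.0) -> float:
        return simulated_ns * FS_PER_NS / timestep_fs

    # 100 ns at the default 1 fs assumption -> 1e8 steps; the same
    # 100 ns at a 2 fs time step needs half as many steps, and thus
    # roughly half the compute time being requested.
    print(md_steps(100.0))       # 100000000.0
    print(md_steps(100.0, 2.0))  # 50000000.0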

Policy Changes: Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies)

  • Storage allocation requests for Archival Storage in conjunction with compute and visualization resources and/or Stand Alone Storage need to be requested explicitly both in your proposal (research proposals) and also in the resource section of XRAS.
  • Furthermore, the PI must describe the peer-reviewed science goal that the resource award will facilitate. These goals must match or be sub-goals of those described in the listed funding award for that year.
  • After the Panel Discussion of the XRAC meeting, the total Recommended Allocation is determined and compared to the total Available Allocation across all resources. Transfers of allocations may be made for projects that are more suitable for execution on other resources; transfers may also be made for projects that can take advantage of other resources, hence balancing the load. When the total Recommended considerably exceeds Available Allocations a reconciliation process adjusts all Recommended Allocations to remove oversubscription. This adjustment process reduces large allocations more than small ones and gives preference to NSF-funded projects or project portions. Under the direction of NSF, additional adjustments may be made to achieve a balanced portfolio of awards to diverse communities, geographic areas, and scientific domains.
  • Conflict of Interest (COI) policy will be strictly enforced for large proposals. For small requests, the PI/reviewer may participate in the respective meeting, but leave the room during the discussion of their proposal.
  • XRAC proposals for allocations request resources that represent a significant investment of the National Science Foundation. The XRAC review process therefore strives to be as rigorous as for equivalent NSF proposals.
  • The actual availability of resources is not considered in the review. Only the merit of the proposal is. Necessary reductions due to insufficient resources will be made after the merit review, under NSF guidelines, as described in Section 6.4.1.
  • A maximum 10% advance is allowed on all research requests, as described in Section 3.5.4.

Examples of well-written proposals:
For more information about writing a successful research proposal as well as examples of successful research allocation requests please see: (https://portal.xsede.org/successful-requests)

If you would like to discuss your plans for submitting a research request please send email to the XSEDE Help Desk at help@xsede.org. Your questions will be forwarded to the appropriate XSEDE Staff for their assistance.

Ken Hackworth
XSEDE Resource Allocations Coordinator
help@xsede.org

HPC Software Engineer/Programmer II

Wed, 03/15/2017 - 08:57

Overview

HPC Software Engineer

The HPC Software Engineer will be an integral member of multiple research teams focused on cutting-edge computational astrophysics research. The HPC Software Engineer will work with researchers associated with the Department of Astrophysical Sciences to provide domain-centric computational expertise in algorithm development and selection, code development, and optimization to create efficient and scalable research code.

The ideal candidate will have a strong background in scientific programming, high performance computing, academic research, and an interest in computational Astrophysics.

This HPC Software Engineer will be one of a team of high-performance computing software engineers who will collectively provide computational research expertise to multiple divisions within the University.

The position requires one to work closely with colleagues in the Office of Information Technology (OIT) as well as with faculty, student/postdoctoral researchers, and technical staff in the Astrophysical Sciences department to enable and accelerate high performance computing efforts.

Responsibilities

Port and tune existing research computing codes to new and emerging hardware.
Lead and co-lead the design and construction of increasingly complex research software systems.
Provide technical expertise and guidance for improving the performance and quality of existing astrophysics code bases.
Understand and address software engineering questions that arise in research planning.
Maintain knowledge of current and future software development tools and techniques, programming languages, and high-performance computing hardware.
Co-author scientific publications.
Qualifications

Essential Qualifications:

5 years of programming experience, particularly in the languages used in high-performance computing applications: C/C++, FORTRAN, and Python.
Parallel programming expertise with MPI and OpenMP on computational clusters and supercomputer platforms.
Experience tuning and optimizing scientific software.
Demonstrated successes contributing to a collaborative research team.
Ability to work independently.
Ability to learn new systems beyond area of core knowledge.
Ability to communicate effectively with a diverse user base having varied levels of technical proficiencies.
Academic research experience.
Preferred Qualifications:

Experience with structured-mesh codes for CFD and/or particle-in-cell methods.
Experience programming and optimizing for many-core CPUs and GPUs.
Background in astrophysics, engineering, physics, or related field.

Education:

Bachelor’s degree, or equivalent experience in a related field. A Ph.D. in astrophysics, engineering, physics, or related field is preferred.

https://main-princeton.icims.com/jobs/7269/hpc-software-engineer/job?hub=15

Research Software Engineer

Wed, 03/15/2017 - 08:55

Overview

The Research Software Engineer will be an integral member of multiple research teams focused on cutting-edge computational neuroscience research. The Research Software Engineer will work with researchers associated with Princeton Neuroscience Institute (PNI) to provide domain-centric computational expertise in algorithm development and selection, code development, and optimization to create efficient and scalable research code.

The ideal candidate will have a strong background in scientific programming, academic research, and an interest in computational Neuroscience.

The Research Software Engineer will be one of a team of high-performance computing software engineers who will collectively provide computational research expertise to multiple divisions within the University.

The position requires one to work closely with colleagues in the Office of Information Technology (OIT) as well as with faculty researchers, student/postdoctoral researchers, and technical staff in the Princeton Neuroscience Institute to enable and accelerate high performance computing efforts within PNI.

Responsibilities

Parallelize, debug, port, and tune existing research computing codes.
Lead and co-lead the design and construction of increasingly complex research software systems.
Provide technical expertise and guidance for improving the performance and quality of existing neuroscience code bases.
Understand and address software engineering questions that arise in research planning.
Maintain knowledge of current and future software development tools and techniques, programming languages, and high-performance computing hardware.
Qualifications

Essential Qualifications
Strong programming skills, particularly in the languages used in high-performance computing applications: C/C++, FORTRAN, and Python.
Parallel programming experience on computational clusters and supercomputer platforms.
Demonstrated successes working in a collaborative environment as well as independently.
Ability to learn new systems beyond area of core knowledge.
Ability to communicate effectively with a diverse user base having varied levels of technical proficiencies.
Preferred Qualifications
Machine learning, signal processing, or image analysis programming experience.
Academic research experience.
Background in neuroscience or a related field.
Education
Bachelor’s degree, or equivalent experience in a related field. A Ph.D. in a related field is preferred.

https://main-princeton.icims.com/jobs/7180/research-software-engineer/job?hub=15&mobile=false&width=1200&height=500&bga=true&needsRedirect=false&jan1offset=-360&jun1offset=-300

Sr. Software Engineer

Wed, 03/15/2017 - 08:36

Sr. Software Engineer
Requisition #: 313472

Range: PF

Level: 4
Salary: Commensurate with Experience
Status: Full Time
School: Krieger School of Arts and Sciences
Location: Homewood Campus
Location City: Baltimore
Location State: MD
Resume Required for Application: Yes
Area of Interest: Technical
Contact: Central Talent Acquisition Office 443-997-5100

General Description

The Maryland Advanced Research Computing Center (MARCC) is a state-of-the-art High Performance Computing (HPC) facility that provides resources (HPC, storage, and analytics) for researchers at Johns Hopkins University, the University of Maryland at College Park, and eventually all other schools in the state of Maryland. The Sr. Software Engineer will serve as a technical resource for all users on highly complex code development, architecture, debugging, profiling, optimization, documentation, installation, and maintenance of open-source scientific applications, as well as on data mining and best practices for utilizing HPC resources. The incumbent will enable faculty to advance research-computing agendas by interacting directly with researchers, providing feedback on how to improve application performance, and actively participating in application development. There will also be many opportunities to establish scientific collaborations and partnerships with research groups. This position will serve as lead on moderate to large IT architecture and applications development projects, will be expected to steer research groups toward innovative solutions, and will provide oversight to lower-level staff. As a project lead, the incumbent is expected to interact with various departments and external constituents outside of JHU.

Internal and External Contacts:

This position will interact with an array of departmental and central administrative offices, faculty, staff, researchers, and students, and with numerous external constituents (i.e. other college administrators and faculty, private businesses, industry partners, officials of federal and local agencies and research foundations) for the purpose of accomplishing HPC technology goals. This includes providing instruction on protocol, regulations and guidelines pertinent to the agency and/or University. Works routinely with University faculty, administrators, students, and researchers. Collaborates regularly with professional colleagues from the central IT@JHU organization, and from other academic departments. Collaborates regularly with colleagues in industry and at other peer institutions.

Essential Duties & Responsibilities

Application Support

Establish collaborations with research groups
Collaborate with research groups in application development, optimization
Develop common tools that benefit application optimization and performance
Provide software architecture expertise to procure external funding
Ensure solutions released to the community are stable and usable.
Ensure resources meet the community’s needs and are highly available to the group with limited interruption.
Perform thorough and complex programming including designing architectural protocols to address research needs of faculty and students in a comprehensive manner.
Install and maintain scientific applications
Identify and debug problems with scientific applications
General HPC Support

Extensively document processes so that users can easily find useful information and other IT staff can perform routine tasks and provide backup.
Conduct extensive research to resolve HPC challenges
Perform highly complex data analysis, data mining and visualization of results
Work closely with the facility’s director and oversight groups to successfully implement policies and procedures
Continuously evaluate new tools and technologies for use in existing and future clusters
Recommend solutions and new technologies
Provide required facility activity data for University and government reports.
Documentation resources and availability
Methodology and techniques to evaluate new applications and tools
Training/Education

Provides regular workshops on HPC related topics to ensure the effective utilization of resources
Develops materials and workshops describing best practices on application development
Attend department and University-sponsored training to increase knowledge, improve skills, and learn new skills. May substitute University training for supervisor approved commercial job related course offerings.
Attends at least three full-day in-class or on-line training programs per year, and department training pertinent to their job.
Ability to utilize web resources to complement local resources
Provide basic and intermediate workshops on scientific topics
Other duties as assigned by supervisor

Qualifications

Minimum Qualifications:

Bachelor’s degree. Six years related experience. Additional education may substitute for required experience and additional related experience may substitute for required education, to the extent permitted by the JHU equivalency.

JHU Equivalency Formula: 30 undergraduate degree credits (semester hours) or 18 graduate degree credits may substitute for one year of experience. Additional related experience may substitute for required education on the same basis. For jobs where equivalency is permitted, up to two years of non-related college course work may be applied towards the total minimum education/experience required for the respective job.

Related work experience:

Minimum 6 years of demonstrated experience in developing scientific applications
Experience leading software development projects
Proficiency in configuring the HPC software stack, including MPI, OpenMP, Intel and GNU compilers, and math libraries.
Experience with scientific application management packages like pymodules, Environment modules
Experience with queuing systems like SLURM, PBS, Torque
Excellent scripting skills (Python, Perl, shell)
Knowledge of database processes and interfaces with web portals
Knowledge of scientific software applications in academic supercomputing environments is desired.
Ability to maintain confidentiality
Excellent customer service skills
Excellent communication skills
Must demonstrate strong critical thinking and analytical reasoning.
Knowledge and Skill:

Proven expertise in scientific programming languages: C, C++, or Fortran
Object oriented design experience
Experience with industry standard software development tools (e.g., subversion, eclipse)
Understanding of software lifecycle, design, implementation, testing
Extensive experience in parallel programming, MPI and/or OpenMP
In-depth knowledge in the design, organization of cutting-edge technology in HPC environments.
Advanced knowledge of Linux, PHP/Python/Perl technology/toolkits.
Understanding of HPC Cluster management software.
Familiarity with massive high performance parallel storage and methodologies.
Understand, implement, troubleshoot, and support batch and workload management systems, including diagnosis of failed jobs, implementation of policies, and investigations of new features and services.
Install and configure infrastructure applications by following industry best practices to deliver effective solutions.
Proficiency in scientific applications such as Matlab, R, and others per discipline.
GPU and CUDA programming would be a plus.
Proficiency with visualization packages (VisIt, ParaView) and graphics programming.
Must have the ability to multi-task and prioritize.
Must be adaptable and able to meet conflicting deadlines.
Exceptional organizational skills.
The ability to interact with peer institutions to support HPC directives effectively, furthering the goals of the MARCC facility.
Excellent oral and written interpersonal skills in terms of customer service, training, and evangelism of new technologies, negotiation, and persuasion.
Produce effective and thorough technical documentation.
Provide outstanding direct and indirect user support.
Research, recommend, and implement new technologies based on the value to the research facility.
In-depth understanding of data management best practices.
Understanding of data architecture.
Preferred Qualifications

Master’s degree or Ph.D. strongly preferred

NOTE: The successful candidate(s) for this position will be subject to a pre-employment background check.

If you are interested in applying for employment with The Johns Hopkins University and require special assistance or accommodation during any part of the pre-employment process, please contact the HR Business Services Office at 443-997-5100. For TTY users, call via Maryland Relay or dial 711.

https://jobs.jhu.edu/jhujobs/jobview.cfm?reqId=313472&postId=14077
