This position is funded for 3 years by an award from the National Science Foundation and is essential to meet a contractual obligation to NSF. The position is tasked with developing software and hardware as outlined in the sponsored research agreement.
We seek a highly skilled Scientific DevOps Engineer to lead the development and deployment of a Kubernetes-based platform focused on serving reproducible science products, high-performance computing resources, and scientific AI workloads. The successful candidate will also be responsible for interfacing scientific data management repositories with various scientific instruments and data sources. Experience with metadata schemas, ontologies, search tools, containerization, and CI/CD is highly valued. We welcome candidates with diverse backgrounds and interests, and we value creativity and genuine interest across diverse disciplines.
Scientific DevOps (40%):
• Design, implement, and maintain a Kubernetes-based JupyterHub platform for reproducible scientific computing.
• Deploy and manage containers using Docker or similar technologies.
• Utilize Continuous Integration and Continuous Deployment (CI/CD) pipelines to automate testing and deployment processes.
• Apply open science principles to develop transparent, replicable systems and pipelines.
• Monitor system performance and uptime, ensuring high availability and reliability.
Scientific Software Development (30%):
• Develop software interfaces for scientific data management repositories to interact with various scientific instruments.
• Implement metadata schemas, ontology models, and search algorithms to facilitate efficient data retrieval.
• Integrate data pipelines for batch and real-time processing.
User Engagement & Training (15%):
• Develop and deliver training programs to help researchers use the platform effectively.
• Engage with a diverse user base to assess needs, gather feedback, and implement enhancements.
Cluster Management (10%):
• Administer and maintain high-performance parallel file systems.
• Manage, administer, and maintain the AI HPC cluster.
• Optimize cluster resources for computational tasks, storage, and data transfer.
Diversity, Equity, and Inclusion (DEI) (5%):
• Lead and participate in initiatives that promote DEI within the workplace.
• Monitor and report on DEI metrics and outcomes, recommending improvements as needed.
- A PhD or equivalent doctorate in Mechanical Engineering or a related field is required.
- 0-3 years of experience is required.
- Proven experience with Kubernetes and container orchestration.
- Expertise in deploying containers using Docker or similar technologies.
- Proficient in implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines.
- Extensive experience in scientific computing and data management.
- Expertise in metadata schemas, ontologies, and search tools.
- Strong skills in Python or similar languages.
- Knowledge of parallel file systems like Lustre, GPFS, etc.
- Support for open science and knowledge of FAIR (Findable, Accessible, Interoperable, Reusable) principles.
- Demonstrated commitment to promoting diversity, equity, and inclusion.
- Typically sitting at a desk/table
- Typically standing, walking
- University City – Philadelphia, PA
This position is classified as Exempt grade N. Compensation for this grade ranges from a minimum of $90,430.00 to a maximum of $135,640.00. Please note that the offered rate for this position typically aligns with the minimum to midrange of this grade, but it can vary based on the successful candidate's qualifications and experience, department budget, and an internal equity review.
We encourage you to explore Drexel's Professional Staff salary structure and Compensation Guidelines & Policies for more details on our compensation framework.
You can also find valuable information about our benefits in the Benefits Brochure.
Special Instructions to the Applicant
Please make sure you upload your CV/resume and cover letter when submitting your application.
Review of applicants will begin once a suitable candidate pool is identified.