Data Engineer


We seek a skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will play a crucial role in optimising and enhancing our data pipelines, ensuring efficient data processing, storage and retrieval. You will work closely with cross-functional teams to analyse requirements, propose new data pipeline architectures, and implement solutions to improve performance and scalability.

The tasks for this post include the following:

  • Analyse existing data pipelines and identify areas for improvement, optimisation, and scalability.
  • Work closely with bioinformaticians and annotators to integrate data pipelines with existing systems and applications.
  • Monitor data pipeline performance, troubleshoot issues, and implement solutions to ensure reliability and efficiency.
  • Stay current with industry trends and best practices in data engineering and recommend new technologies or tools to enhance data infrastructure.
  • Document data pipelines, processes, and workflows for internal reference and knowledge sharing.

The successful candidate will report directly to the PDBe Technical Project Lead as a Technical Officer. This post is an opportunity for the right person to bring IT skills and innovative ideas to help manage the growing volume of structural biology data in the PDB and ensure that the PDBe, PDBe-KB and AFDB services remain sustainable.

You have

  • MSc in computer science, IT or a related field, or in bioinformatics with demonstrated IT expertise
  • Expertise in data modelling and advanced SQL
  • Proficiency in Python programming
  • Proficiency in ETL (Extract, Transform, Load) processes and tools for large-scale data processing
  • Strong understanding of relational databases (Oracle, PostgreSQL) and experience optimising database performance
  • Proficiency in data warehousing platforms (Redshift, BigQuery)
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment
  • Proficiency in oral and written English

You might also have

  • PhD in computer science, IT or a related field, or in bioinformatics with demonstrated IT expertise
  • Experience with big data technologies and frameworks such as Apache Spark, Hadoop or similar platforms
  • Hands-on experience with CI/CD pipelines (GitLab CI, GitHub Actions)
  • Familiarity with Java
  • Familiarity with Google Cloud Platform or AWS
  • Familiarity with data modelling techniques for AI (Artificial Intelligence) and ML (Machine Learning) applications
  • Familiarity with Neo4j or other graph databases
  • Familiarity with data visualisation tools (Tableau, Power BI)
  • Knowledge of, or affinity with, structural biology and bioinformatics
  • Experience working in international teams

The closing date for this post is 12th May 2024.

For more information and to apply, visit: