Senior Data Platforms Engineer

Overview

The Senior Data Platforms Engineer at Simple Machines is a dynamic, hands-on role focused on building real-time data pipelines and implementing data mesh architectures to improve how clients interact with their data. The position blends deep technical expertise in modern data engineering with a client-facing consulting approach, helping clients manage and utilise their data effectively. Working within a team of top-tier engineers, the role involves developing greenfield data solutions that deliver tangible business outcomes across a range of client environments.

We engineer data to life.

Responsibilities

Technical Responsibilities

  • Developing Data Solutions: Implement and enhance data-driven solutions integrating with clients' systems using tools such as Databricks, Snowflake, Google Cloud, and AWS. Apply data products, data contracts, and data mesh principles for a decentralised, consumer-oriented approach to data management.
  • Data Pipeline Development: Develop and optimise high-performance batch and real-time data pipelines using streaming technologies (Kafka, Flink) and workflow orchestration (Dataflow, Airflow); a minimal sketch of this kind of pipeline follows the responsibilities list.
  • Database and Storage Optimisation: Optimise and manage databases from relational (PostgreSQL, MySQL) to NoSQL (MongoDB, Cassandra), focusing on accessibility, integrity, and performance.
  • Big Data Processing & Analytics: Utilise Apache Spark and Apache Flink for large-scale data processing and transformations.
  • Cloud Data Management: Implement and oversee cloud data services (AWS Redshift, S3; Google BigQuery, Cloud Storage) to improve data sharing and interoperability.
  • Security and Compliance: Ensure data practices comply with security policies and regulations, embedding security by design and governance of data assets.

Consulting Responsibilities

  • Client Advisory: Provide expert advice on data practices aligned to business requirements and project goals.
  • Training and Empowerment: Educate client teams on technologies and data management strategies to enable efficient utilisation and maintenance of solutions.
  • Professional Development: Stay current with industry trends and pursue certifications relevant to technologies used across clients.
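
As a flavour of the pipeline work described above, here is a minimal sketch of a Kafka consume-transform-produce loop in Python, assuming the kafka-python client; the broker address, topic names, and event fields are illustrative assumptions, not details of any client engagement.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Broker address and topic names are hypothetical placeholders.
consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Consume raw events, apply a small normalisation step, and publish
# the cleaned record to a downstream topic.
for message in consumer:
    order = message.value
    order["amount"] = round(float(order["amount"]), 2)  # assumed field
    producer.send("orders.clean", order)
```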

Qualifications

  • Core Data Engineering Tools & Technologies: Proficiency in SQL and Spark (a short Spark sketch follows this list); familiarity with Databricks and Snowflake; experience with AWS S3 and Google Cloud BigQuery, plus Cassandra, MongoDB, Neo4j, and HDFS. Skilled in orchestration and transformation tooling (Airflow, AWS Glue, dbt) and streaming (Kafka, AWS Kinesis, Pub/Sub, Azure Event Hubs).
  • Data Storage & Modelling: Experience with data warehousing (BigQuery, Snowflake, Databricks) and data formats (Parquet, Delta, ORC, Avro, JSON).
  • Large-scale Data Systems: Experience building and managing production data pipelines and data-intensive applications.
  • Infrastructure & Programming: Experience provisioning data systems infrastructure with IaC tools (Terraform, Pulumi); strong programming skills in Python and SQL; knowledge of Java, Scala, Go, or Rust is advantageous.
  • CI/CD & Testing: Experience with CI/CD (GitHub Actions, ArgoCD) and data testing/quality tools (dbt, Great Expectations, Soda).
  • Commercial & Consulting Experience: Ability to apply data engineering in a commercial context; experience in consultancy or professional services, including stakeholder engagement.
  • Delivery & Methodologies: Familiarity with Agile/Scrum/Kanban project delivery.
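
To make the Spark and Parquet expectations above concrete, here is a minimal PySpark sketch of a batch aggregation over a Parquet dataset; the bucket paths, column names, and filter values are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Read a Parquet dataset; the path is a hypothetical placeholder.
orders = spark.read.parquet("s3://example-bucket/orders/")

# Aggregate completed orders into daily revenue using the DataFrame API;
# the same logic could equally be written in Spark SQL.
daily_revenue = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```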

Experience & Education

  • Experience: 5+ years in data engineering or an equivalent discipline, across commercial, enterprise, or startup environments. Consulting experience is advantageous.
  • Education: Degree or equivalent in computer science or a related field.
Location: London, England, United Kingdom
Salary: £125,000 - £150,000
Job Type: Full-Time
Category: IT & Technology
