Collaboration with Stakeholders: Partner with analysts, scientists, and business teams to translate requirements into technical solutions.
Performance Optimization: Improve the speed, scalability, and efficiency of existing data processes.
Data Transformation: Convert raw data from multiple sources into clean, reliable datasets for analysis and reporting, ensuring quality and consistency.
Data Modelling: Develop and maintain logical and physical data models to support reporting and analysis needs, implementing best practices for data warehouse design.
Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient data pipelines using dbt, SQL, Python, and ETL frameworks (a brief transformation sketch follows this list).
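To make the transformation and pipeline responsibilities concrete, here is a minimal, illustrative Python sketch of a raw-to-clean step. The file names, column names, and quality rules (drop records without a key or a parseable date, de-duplicate on the key) are hypothetical examples, not the team's actual pipelines, which in practice would typically live in dbt models or an ETL framework.

    import csv
    from datetime import datetime
    from pathlib import Path

    RAW_PATH = Path("raw_orders.csv")      # hypothetical raw export
    CLEAN_PATH = Path("clean_orders.csv")  # hypothetical curated output

    def clean_row(row):
        """Normalise one raw record; return None if it fails basic quality checks."""
        order_id = row.get("order_id", "").strip()
        if not order_id:
            return None  # drop records without a primary key
        try:
            ordered_at = datetime.strptime(row["ordered_at"].strip(), "%Y-%m-%d").date()
            amount = round(float(row.get("amount") or 0), 2)
        except (KeyError, ValueError):
            return None  # drop records with missing or malformed fields
        return {"order_id": order_id, "ordered_at": ordered_at.isoformat(), "amount": amount}

    def transform():
        seen = set()
        with RAW_PATH.open(newline="") as src, CLEAN_PATH.open("w", newline="") as dst:
            writer = csv.DictWriter(dst, fieldnames=["order_id", "ordered_at", "amount"])
            writer.writeheader()
            for row in csv.DictReader(src):
                clean = clean_row(row)
                if clean and clean["order_id"] not in seen:  # de-duplicate on the key
                    seen.add(clean["order_id"])
                    writer.writerow(clean)

    if __name__ == "__main__":
        transform()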
Requirements
Are a data engineer, analytics engineer, software engineer, or technical data analyst with a passion for data modelling and transformation.
Are comfortable working in an agile environment that uses Terraform, CI/CD, pair programming, and deployment strategies.
Have experience delivering analytical solutions (e.g. in the Databricks stack).
Can orchestrate complex data pipelines.
Enjoy building scalable, resilient analytical products.
Possess data modelling expertise with tools such as Dataform or dbt.
Have experience with streaming technologies (Kafka Streams, Amazon Kinesis, or similar) for sourcing and transforming large-scale data (a brief consumer sketch follows this list).
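As a rough illustration of sourcing streaming data, the sketch below consumes events with the kafka-python client and shapes them for downstream loading. The topic name, broker address, and event fields are hypothetical assumptions, and a production pipeline would more likely use Kafka Streams, Kinesis, or a managed connector rather than a hand-rolled consumer.

    import json
    from kafka import KafkaConsumer  # requires the kafka-python package and a reachable broker

    # Hypothetical topic and broker address; replace with your own environment.
    consumer = KafkaConsumer(
        "page_views",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    # Consume events and apply a lightweight transformation before loading them downstream.
    for message in consumer:
        event = message.value
        record = {
            "user_id": event.get("user_id"),
            "url": event.get("url"),
            "event_time": event.get("timestamp"),
        }
        print(record)  # placeholder for a write to the warehouse or lake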
Benefits
Opportunities for continuous learning to deepen or broaden your knowledge.