Developer at Vallum Associates in London Area, United Kingdom
Developer
Employer undisclosed
Role managed by a recruiter
Design and optimize scalable data pipelines on Azure/Fabric using Spark and Python.
3 days ago
Expert & Leadership (13+ years), Experienced (8-12 years), Intermediate (4-7 years)
Full Time
London Area, United Kingdom
Hybrid
Your recruiting firm
High-quality recruitment at fair prices.
Placement of candidates in energy, utilities, renewables, insurance, tech, and commodities.
Operates across the UK, USA, Europe, Middle East, and APAC.
Builds dedicated teams for large-scale engineering and energy projects.
Forms long-term strategic partnerships, embedding consultants as extensions of client teams.
Combines recruitment with project consultancy, often assembling whole teams rather than filling single roles.
Assignments include senior insurance roles at Lloyd’s Market and specialist MEP/FP engineers on major builds.
About the client
Information not given or found
Role
Description
data transformations
dataflows
data validation
access control
data pipelines
stakeholder collaboration
Implement complex transformations, aggregations, and joins, ensuring performance and reliability
Develop and manage dataflows and semantic models to support analytics-related business requirements
Apply robust data validation, cleansing, and profiling techniques to ensure data accuracy and consistency across datasets
Create, maintain, and update documentation and internal knowledge repository
Implement role-based access, data masking, and compliance protocols
Work collaboratively with analysts and business stakeholders to translate requirements into technical solutions
Design, build, and optimise scalable data pipelines for batch and streaming workloads (a short PySpark sketch of this kind of work follows this list)
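To make the responsibilities above concrete, here is a minimal PySpark sketch of a batch job that cleanses, joins, and aggregates data before writing a Delta output. It is illustrative only, not part of the posting: all paths, table names, and columns (trades, accounts, amount, executed_at) are hypothetical, and reading/writing the delta format assumes a Spark environment (such as Fabric or Azure Synapse) with Delta Lake available.

```python
# Illustrative sketch only: hypothetical paths, tables, and columns throughout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-trade-aggregation").getOrCreate()

trades = spark.read.format("delta").load("/lake/bronze/trades")      # hypothetical input
accounts = spark.read.format("delta").load("/lake/bronze/accounts")  # hypothetical input

# Validation and cleansing: require keys, positive amounts, and unique trade ids.
clean = (
    trades.dropna(subset=["trade_id", "account_id"])
          .filter(F.col("amount") > 0)
          .dropDuplicates(["trade_id"])
)

# Join to reference data, then aggregate per account per day.
daily = (
    clean.join(accounts, on="account_id", how="left")
         .groupBy("account_id", F.to_date("executed_at").alias("trade_date"))
         .agg(F.count("*").alias("trade_count"),
              F.sum("amount").alias("gross_amount"))
)

# Partitioned Delta output for downstream semantic models and Power BI.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("trade_date")
      .save("/lake/silver/daily_trade_summary"))
```

Partitioning the output by trade_date is one common way to keep downstream Power BI refreshes fast; the right partition key depends on actual query patterns.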
Requirements
microsoft fabric
azure
spark
python
gitlab
docker
Experience programming on the Microsoft Fabric platform
Familiarity with Agile methodology and good communication
Spark Streaming/batch processing
Experience programming on the Microsoft Azure cloud platform
Spark application performance tuning
Good English listening and speaking skills for communicating requirements and development tasks/issues
Java programming and OOP knowledge
Hands-on experience with lakehouses, dataflows, pipelines, and semantic models
Ability to performance-tune and optimise jobs and workloads to reduce latency
Experience with OneLake, Azure Data Lake, and distributed computing environments
Python / Notebook programming
Ability to prepare and process datasets for Power BI usage
Experience with tools such as GitLab, Python unit testing, and CI/CD pipelines
Knowledge of Docker / Kubernetes
Strong troubleshooting skills
Database knowledge, including relational and NoSQL databases
Familiarity with time-series data, market feeds, transactional records, and risk metrics
Delta table optimisation
Familiarity with Git, DevOps pipelines, and automated deployment
Ability to build ETL workflows
Understanding of regulations such as GDPR and SOX
Strong communication skills with a collaborative mindset to work with and manage stakeholders
Knowledge of Spark programming: the ability to write Spark code for large-scale data processing, including RDDs, DataFrames, and Spark SQL
PySpark programming (a streaming sketch follows this list)
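As an illustration of the Spark streaming and Delta table optimisation skills listed above, here is a minimal Structured Streaming sketch: it reads a hypothetical JSON market feed, computes a watermarked per-minute aggregate, writes the result to a Delta table, and then compacts it. The paths, schema, and window sizes are placeholders, and the optimize()/executeZOrderBy() call assumes the delta-spark package (Delta Lake 2.0 or later).

```python
# Illustrative sketch only: hypothetical feed location, schema, and table paths.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("market-feed-stream").getOrCreate()

# Stream a JSON market feed as files land (schema given as a DDL string).
feed = (
    spark.readStream.format("json")
         .schema("symbol STRING, price DOUBLE, ts TIMESTAMP")
         .load("/lake/landing/market_feed")
)

# Watermarked one-minute average price per symbol; the watermark bounds
# late-event handling and permits append output mode on this aggregation.
per_minute = (
    feed.withWatermark("ts", "5 minutes")
        .groupBy("symbol", F.window("ts", "1 minute"))
        .agg(F.avg("price").alias("avg_price"))
)

query = (
    per_minute.writeStream.format("delta")
              .outputMode("append")
              .option("checkpointLocation", "/lake/checkpoints/market_feed")
              .start("/lake/silver/price_per_minute")
)

query.processAllAvailable()  # for this demo, drain what has landed, then stop
query.stop()

# Delta table optimisation: compact small files and co-locate rows by symbol
# to reduce read latency (delta-spark 2.0+ API; normally a scheduled job).
(DeltaTable.forPath(spark, "/lake/silver/price_per_minute")
           .optimize()
           .executeZOrderBy("symbol"))
```

The watermark is what allows append output mode on a windowed aggregation: it tells Spark how long to wait for late events before finalising each window.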
Benefits
Training + Development
Information not given or found
Interview process
Information not given or found
Visa Sponsorship
Information not given or found
Security clearance
Information not given or found