
Senior Data Engineer
Swinerton
The Role
Overview
Design and maintain scalable Azure/Databricks data pipelines and lakehouse solutions
Key Responsibilities
- ETL pipelines
- Streaming pipelines
- Data architecture
- Data integration
- Workflow orchestration
- Data governance
Tasks
- Design, build, and maintain scalable ETL/ELT pipelines for ingesting, transforming, and delivering data from diverse sources (structured, semi-structured, unstructured) using Azure Data Factory, Databricks, and related tools.
- Architect, optimize, and evolve data storage solutions and data models, including medallion architecture (bronze, silver, gold layers), for scalable and cost-efficient lakehouse platforms on Azure and Databricks.
- Build and manage real-time/streaming data pipelines using Azure Event Hubs, Kafka, or Spark Structured Streaming.
- Ensure data quality, consistency, and reliability through robust validation, monitoring, and error-handling processes.
- Integrate data from diverse internal and external sources, ensuring interoperability and consistency across platforms.
- Implement workflow orchestration tools (e.g., Airflow, dbt) for managing complex data workflows.
- Optimize data engineering capabilities by leveraging existing and emerging tools and technologies, with a focus on performance, cost efficiency, scalability, data workflow management, and reliable deployment.
- Implement privacy and security best practices in data pipelines (e.g., data masking, encryption, role-based access control).
- Apply data governance, security, privacy, and compliance standards within engineering solutions, following organizational and regulatory guidelines.
- Engage with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions that drive business value.
- Collaborate with data scientists to enable seamless integration of data pipelines with analytics and machine learning workflows, including ML model deployment and monitoring.
- Foster alignment and adoption of data engineering solutions by building strong relationships and ensuring business relevance.
- Mentor and support junior engineers, promoting a culture of learning and excellence.
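To make the validation and medallion-layer responsibilities above concrete, here is a minimal, library-free sketch of a bronze-to-silver promotion step with basic quality rules and reject capture. In a production Azure/Databricks pipeline this logic would live in PySpark with Delta Lake; the record fields and rules here are illustrative assumptions, not part of Swinerton's actual stack.

```python
import json

def promote_to_silver(bronze_records):
    """Validate raw (bronze) records and return cleaned (silver) rows
    plus a list of rejects for monitoring and error handling."""
    silver, rejects = [], []
    for raw in bronze_records:
        # Bronze rows may arrive as JSON strings or dicts; parse defensively.
        try:
            rec = json.loads(raw) if isinstance(raw, str) else dict(raw)
        except (json.JSONDecodeError, TypeError):
            rejects.append({"record": raw, "reason": "unparseable"})
            continue
        # Quality rule 1: a non-empty "id" key is required (illustrative).
        if rec.get("id") in (None, ""):
            rejects.append({"record": rec, "reason": "missing id"})
            continue
        # Quality rule 2: "amount" must be coercible to a number.
        try:
            rec["amount"] = float(rec.get("amount", 0))
        except (TypeError, ValueError):
            rejects.append({"record": rec, "reason": "bad amount"})
            continue
        silver.append(rec)
    return silver, rejects

bronze = ['{"id": "a1", "amount": "12.5"}', '{"amount": 3}', "not json"]
silver, rejects = promote_to_silver(bronze)
```

Routing failures to a rejects list rather than dropping them silently mirrors the monitoring and error-handling emphasis in the responsibilities above.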
Requirements
- Databricks
- Azure
- SQL
- Python
- 5+ years' experience
- Bachelor's degree
What You Bring
- 5+ years in data engineering or related roles, with a track record of delivering production-ready data solutions.
- Bachelor's degree in Computer Science, Engineering, or a related quantitative field required; Master's degree preferred. Equivalent work experience will also be considered.
- Deep expertise in Azure data services (e.g., Azure Databricks, Azure Data Lake).
- Advanced experience with Databricks, including Spark, Delta Lake, and medallion architecture.
- Proficiency in SQL and Python (including pandas, PySpark, or similar frameworks).
- Skilled in handling structured, semi-structured (e.g., JSON, XML), and unstructured data.
- Strong experience with data modeling, data warehousing, lakehouse design patterns, and data governance best practices.
- Experience with real-time/streaming data solutions (e.g., Azure Event Hubs, Kafka, Spark Structured Streaming).
- Experience with workflow orchestration tools (e.g., Airflow, dbt).
- Experience with CI/CD for data pipelines and infrastructure as code (e.g., Azure DevOps, GitHub Actions, Terraform).
- Experience with containerization (Docker, Kubernetes) and/or serverless compute (Azure Functions) is a plus.
- Experience with GenAI/LLM integration (e.g., vector databases, RAG pipelines) is a plus.
- Excellent problem-solving, critical thinking, and communication skills; ability to work independently and collaboratively with cross-functional teams.
- Ability to connect business needs with technical capabilities, ensuring solutions are scalable and value-driven.
- Ability to communicate complex technical concepts through clear, actionable insights and documentation tailored to both technical and non-technical audiences.
- Proven ability to work independently in a remote or hybrid environment with minimal supervision, while adapting to shifting priorities and evolving business needs.
- Construction industry experience is preferred but not required.
The Company
About Swinerton
- Tackles projects ranging from high-rise buildings to energy facilities with a diverse portfolio.
- Known for adaptability in delivering both traditional and cutting-edge projects.
- Operates across multiple sectors, including residential, commercial, energy, and industrial developments.
- Combines technical knowledge with a focus on quality and value in every project.
- Recognized for expertise in large-scale infrastructure, such as bridges, airports, and transport hubs.
- Achievements include iconic skyscrapers and sustainable energy projects.
Sector Specialisms
Healthcare
Life Sciences
Affordable Housing
Aviation
Parking Structures
Industrial
Energy
Mass Timber Construction
Renewable Energy
