
Schneider Electric
Global leader in electrification, automation and digitization for industries, infrastructure and buildings.
Expert, Data Operations Engineer
Run, optimize, and scale AWS data platforms, building ETL pipelines and ensuring reliability.
Job Highlights
About the Role
The role involves optimizing Amazon Redshift and Athena performance, building and operating ETL/ELT pipelines with AWS Glue and Airflow, managing semantic layers and metadata, applying best practices for partitioning and compression, and monitoring data workflows for high availability. Automating data processes with Python, SQL, and AWS-native tools, as well as enforcing security and governance through Lake Formation and IAM, are also key responsibilities.
- Build and operate ETL/ELT pipelines using AWS Glue and orchestrate them with Airflow.
- Manage semantic layers and metadata to support reliable analytics and AI.
- Apply best practices for data partitioning, compression, and columnar storage.
- Monitor, troubleshoot, and automate observability for high-availability data workflows.
- Automate data processes with Python, SQL, and AWS-native tools.
- Enforce data security and governance using Lake Formation and IAM with least-privilege controls.
- Support monitoring, auditing, and compliance via CloudWatch and CloudTrail.
- Continuously improve the architecture by adopting AWS best practices and emerging patterns.
- Collaborate with Operations, Data Governance, and PMO to meet delivery standards.
Key Responsibilities
- ETL pipelines
- Data orchestration
- Semantic layer
- Partitioning
- Workflow monitoring
- Security governance
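The partitioning best practice mentioned above usually means laying data out under Hive-style `key=value` prefixes on S3 so that engines like Athena can prune partitions on date filters. A minimal sketch (the table name and helper are illustrative, not part of the role's actual codebase):

```python
from datetime import date

def partition_prefix(table: str, d: date) -> str:
    """Build a Hive-style S3 key prefix (year=/month=/day=) so query
    engines can skip partitions that fall outside a date filter."""
    return (
        f"{table}/"
        f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/"
    )

# Example: the prefix for May 7, 2024 in a hypothetical sales table.
print(partition_prefix("sales_events", date(2024, 5, 7)))
# sales_events/year=2024/month=05/day=07/
```

Zero-padding the month and day keeps prefixes lexicographically sorted, which matters for range scans over partition keys.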
What You Bring
Candidates should hold a bachelor's degree in Computer Science, IT, or have equivalent experience in data management, and be hands-on with core AWS services such as S3, KMS, Lambda, Glue/Spark, SQS, EventBridge, and Step Functions. Required skills include strong networking fundamentals, expertise in Redshift concurrency scaling and Athena tuning, advanced SQL, Terraform, Spark performance tuning, serverless pipelines, NoSQL databases, and strong communication and collaboration abilities.
- Bachelor's degree in Computer Science/IT or equivalent data-management experience.
- Hands-on experience with AWS services: S3, KMS, Lambda, Glue/Spark, SQS, EventBridge, Step Functions.
- Strong AWS networking knowledge: VPCs, subnets, routing, NAT gateways, security groups.
- Expertise in Redshift concurrency scaling and Athena performance tuning.
- Proficiency with IAM roles, policies, and cross-account access.
- Advanced SQL skills and experience with distributed engines such as Redshift and Athena.
- Experience writing and maintaining infrastructure as code with Terraform.
- Solid understanding of Spark architecture and performance optimization.
- Familiarity with serverless and event-driven pipeline designs.
- NoSQL experience with DynamoDB or MongoDB.
- Strong communication, multitasking ability, and a collaborative, data-driven mindset.
Requirements
- Redshift
- Athena
- AWS
- Terraform
- Spark
- Bachelor's degree
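The IAM least-privilege requirement above is typically expressed as narrowly scoped policy documents. A small sketch of generating one in Python; the bucket, prefix, and function name are hypothetical placeholders, not part of this role's environment:

```python
import json

def read_only_s3_policy(bucket: str, prefix: str) -> str:
    """Return an IAM policy document (JSON) granting read-only access
    to a single S3 prefix: GetObject on the objects, plus ListBucket
    constrained to that prefix via a condition key."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
    return json.dumps(policy, indent=2)

# Example: read-only access to one curated prefix of a hypothetical lake bucket.
print(read_only_s3_policy("analytics-lake", "curated/sales"))
```

Scoping `ListBucket` with the `s3:prefix` condition key prevents the role from enumerating unrelated data in the same bucket.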
Benefits
This full-time position offers a compensation range of $114,400 to $171,600 per year, comprising base salary plus a short-term incentive. Benefits include medical, dental, and vision coverage, basic life insurance, Benefit Bucks, flexible work arrangements, paid family leave, a 401(k) plan with matching contributions, 12 holidays, 15 days of paid time off (pro-rated in the first year), stock purchase eligibility, and military leave benefits.
Work Environment
On-site (office), full-time.