Find roles with Kablio AI to help build and power the world. Kablio AI helps you secure roles in construction, clean energy, facilities management, engineering, architecture, sustainability, environment, and other physical-world sectors.
Get hired, get rewarded!
Land a job through Kablio and earn a 5% salary bonus.
Platform Engineer - Data & AI
Equinix
Global leader in data center and interconnection services, enabling digital transformation.
Senior Platform Engineer designing AI/GenAI data pipelines & cloud platforms
Build reusable frameworks and infrastructure-as-code (IaC) using Terraform, Kubernetes, and CI/CD to drive self-service and automation
Build and orchestrate multi-agent systems using frameworks like CrewAI, LangGraph, or AutoGen for use cases such as pipeline debugging, code generation, and MLOps
Architect and manage multi-cloud and hybrid cloud platforms (e.g., GCP, AWS, Azure) optimized for AI, ML, and real-time data processing workloads
Create extensible CLIs, SDKs, and blueprints to simplify onboarding, accelerate development, and standardize best practices
Foster a culture of ownership, continuous learning, and innovation
Lead initiatives in data modeling, semantic layer design, and data cataloging, ensuring data quality and discoverability across domains
Collaborate across teams to shape the next generation of intelligent platforms in the enterprise
Drive technical leadership across AI-native data platforms, automation systems, and self-service tools
Integrate LLM APIs (OpenAI, Gemini, Claude, etc.) into platform workflows for intelligent automation and enhanced user experience (a minimal sketch of this pattern follows this list)
Design and develop event-driven architectures using Apache Kafka, Google Pub/Sub, or equivalent messaging systems
Collaborate across teams to enforce cost, reliability, and security standards within platform blueprints
Guide adoption of data fabric and mesh principles for federated ownership, scalable architecture, and domain-driven data product development
Implement enterprise-wide data governance practices, schema enforcement, and lineage tracking using tools like DataHub, Amundsen, or Collibra
Partner with engineering teams to introduce platform enhancements, observability, and cost-optimization techniques
Develop and maintain real-time and batch data pipelines using tools like Airflow, dbt, Dataform, and Dataflow/Spark (see the Airflow sketch after this list)
Build and expose high-performance data APIs and microservices to support downstream applications, ML workflows, and GenAI agents
Streamline onboarding, documentation, and platform implementation & support using GenAI and conversational interfaces
Ensure platform scalability, resilience, and cost efficiency through modern practices like GitOps, observability, and chaos engineering
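As a rough illustration of the event-driven and LLM-integration responsibilities above, here is a minimal sketch, not the actual Equinix platform: it assumes a Kafka topic named `pipeline-events`, the `confluent-kafka` and `openai` Python clients, and an `OPENAI_API_KEY` in the environment; the topic, consumer group, model choice, and prompt are all illustrative.

```python
# Minimal sketch (assumptions: topic "pipeline-events", OPENAI_API_KEY set,
# confluent-kafka and openai packages installed). Illustrates the general pattern of
# consuming pipeline events from Kafka and calling an LLM API for intelligent
# automation; it is not Equinix's implementation.
import json
import os

from confluent_kafka import Consumer
from openai import OpenAI

consumer = Consumer({
    "bootstrap.servers": os.environ.get("KAFKA_BROKERS", "localhost:9092"),
    "group.id": "pipeline-triage",          # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["pipeline-events"])      # hypothetical topic

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_fix(event: dict) -> str:
    """Ask the LLM for a first-pass diagnosis of a failed pipeline run."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",                 # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a data-platform triage assistant."},
            {"role": "user", "content": f"Suggest a likely cause and fix:\n{json.dumps(event)}"},
        ],
    )
    return response.choices[0].message.content

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        if event.get("status") == "FAILED":
            print(suggest_fix(event))        # in practice: open a ticket or post to chat
finally:
    consumer.close()
```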
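For the batch-pipeline responsibility above (Airflow/dbt), the following is a minimal sketch of the shape such a pipeline blueprint could take, assuming Airflow 2.x with the TaskFlow API; the DAG id, schedule, and task bodies are illustrative assumptions, not an actual Equinix DAG.

```python
# Minimal Airflow 2.x TaskFlow sketch (assumptions: apache-airflow installed;
# the DAG id, schedule, and task logic are placeholders).
from datetime import datetime

from airflow.decorators import dag, task


@dag(dag_id="example_daily_ingest", schedule="@daily",
     start_date=datetime(2024, 1, 1), catchup=False)
def example_daily_ingest():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling records from a source system.
        return [{"id": 1, "value": 42}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder for cleaning / enriching the extracted rows.
        return [{**row, "value_doubled": row["value"] * 2} for row in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for writing to the warehouse (e.g., triggering a dbt run).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


example_daily_ingest()
```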
What you bring
Key skills: Looker, Prometheus, RAG, Kubernetes, Kafka, Python
Experience with Looker Modeler, LookML, or semantic modeling layers
Familiarity with observability tools (Prometheus, Grafana, OpenTelemetry) and strong debugging skills across the stack
Experience with RAG pipelines, vector databases, and embedding-based search (a minimal retrieval sketch follows this list)
Prior implementation of data mesh or data fabric in a large-scale enterprise
Proven experience building scalable, efficient data pipelines for structured and unstructured data
Experience developing and integrating GenAI applications using MCP and orchestrating LLM-powered workflows (e.g., summarization, document Q&A, chatbot assistants, and intelligent data exploration)
Experience with GenAI/LLM frameworks and tools for orchestration and workflow automation
Proficiency in designing and managing Kubernetes, serverless workloads, and streaming systems (Kafka, Pub/Sub, Flink, Spark)
Hands-on expertise building and optimizing vector search and RAG pipelines using tools like Weaviate, Pinecone, or FAISS to support embedding-based retrieval and real-time semantic search across structured and unstructured datasets
Deep knowledge of data modeling, distributed systems, and API design in production environments
Strong programming background in Python, Java, and SQL, plus one or more other general-purpose languages
Experience with ML Platforms (MLFlow, Vertex AI, Kubeflow) and AI/ML observability tools
Experience with metadata management, data catalogs, data quality enforcement, semantic modeling, and automated integration with the data platform
5+ years of hands-on experience in Platform Engineering, Data Engineering, Cloud Architecture, or AI Engineering roles
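As a rough illustration of the embedding-based retrieval and RAG items above, here is a minimal sketch using FAISS and sentence-transformers; the model name, documents, and query are illustrative assumptions, not part of the role description.

```python
# Minimal RAG-retrieval sketch (assumptions: faiss-cpu, sentence-transformers, and numpy
# installed; model name and documents are illustrative). Shows embedding-based semantic
# search of the kind the requirements describe, not a production pipeline.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Terraform module standards for the data platform",
    "Runbook: recovering a stalled Airflow DAG",
    "Kafka topic naming and retention conventions",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model
embeddings = model.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])    # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["how do I restart a stuck DAG?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)

for score, doc_id in zip(scores[0], ids[0]):
    # In a full RAG pipeline the top matches would be passed to an LLM as context.
    print(f"{score:.3f}  {documents[doc_id]}")
```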
Benefits
Work with a high-energy, mission-driven team that embraces innovation, open-source, and experimentation