

Leading company in sustainable technology solutions.
Collaborate with business teams to convert functional problems into scalable AI-driven solutions.
Build retrieval-augmented generation (RAG) architectures and vector databases (Pinecone, FAISS, Chroma).
Mentor junior engineers and contribute to building an internal AI/Data capability.
Hands-on with LLMs, vector databases, RAG pipelines, embeddings, model fine-tuning.
Own the architecture of data lakes, data warehouses (Snowflake/BigQuery/Redshift), and real-time streaming systems.
Translate complex technical concepts into clear business insights.
Lead experimentation with multimodal AI (text, image, document intelligence).
Ensure high data quality, governance, lineage, security, and compliance with organisational standards.
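The RAG responsibility above can be illustrated with a toy retrieval sketch. This is a minimal, hypothetical example in pure Python: the bag-of-words "embedding" and in-memory list stand in for a real embedding model and a vector database such as Pinecone, FAISS, or Chroma, and the final prompt string stands in for an actual LLM call.

```python
# Toy RAG retrieval sketch (illustrative only; a real pipeline would use a
# proper embedding model, a vector database, and an LLM for generation).
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "credit analysis policy for corporate loans",
    "customer onboarding and support guide",
    "due diligence checklist for real estate deals",
]
index = [(d, embed(d)) for d in docs]  # in-memory stand-in for a vector index

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("credit analysis for a loan")
# The retrieved context would then be injected into an LLM prompt:
prompt = "Answer using this context:\n" + "\n".join(context)
```

The same retrieve-then-prompt shape carries over when the toy pieces are swapped for production components.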
Position Overview: Lead the end-to-end design, development, and optimisation of enterprise data platforms while driving the adoption of generative AI solutions across the organisation. This role combines deep expertise in modern data engineering, cloud-based architectures, and advanced AI/LLM technologies to build scalable, secure, and intelligent data-driven systems.
Implement MLOps & LLMOps pipelines for CI/CD, automated testing, and monitoring of AI models.
Create custom AI agents for internal workflows such as credit analysis, due diligence, underwriting support, and customer interactions.
Design and implement robust data pipelines using modern tools (e.g., Spark, Databricks, Kafka, Airflow).
Develop, fine-tune, and deploy LLM-based solutions (e.g., GPT, Llama, Claude models) for business automation, insights generation, and decision support.
Deploy and manage AI and data workloads on cloud platforms (AWS / Azure / GCP).
Evaluate, adopt, and integrate new AI tools, frameworks, and model providers.
Work closely with analytics, business, product, and engineering teams to deliver impactful AI and data initiatives.
Build and maintain scalable ETL/ELT frameworks for structured, semi-structured, and unstructured data.
Integrate AI systems with enterprise applications and APIs for seamless workflow automation.
Optimise data models for analytics, ML workloads, and business intelligence use cases.
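The pipeline responsibilities above reduce to the classic extract-transform-load shape. A minimal sketch using only the standard library (an in-memory sqlite3 database stands in for a warehouse; in practice tools such as Spark, Databricks, Kafka, or Airflow would orchestrate each stage):

```python
# Minimal ETL sketch: extract raw JSON records, transform types,
# load into a relational table, then query it.
import json
import sqlite3

# Extract: raw semi-structured records (here, hard-coded JSON strings).
raw = ['{"id": 1, "amount": "120.5"}', '{"id": 2, "amount": "80"}']

# Transform: parse JSON and cast amounts to numeric types.
rows = [(r["id"], float(r["amount"])) for r in map(json.loads, raw)]

# Load: write into an in-memory SQLite table standing in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)

# Downstream analytics query over the loaded data.
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

An ELT variant would load the raw records first and push the transform into the warehouse's SQL layer.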
Understanding of MLOps tools such as MLflow, Vertex AI, SageMaker, and Weights & Biases.
Familiarity with containerisation and orchestration (Docker, Kubernetes).
Expertise in data pipeline orchestration tools (Airflow, Prefect, dbt).
Experience with cloud-native data architectures on AWS, GCP, or Azure.
Strong command of Python, SQL, and PySpark; Scala is a plus.
1. Data Engineering & Architecture
2. Generative AI & Machine Learning
Background in sectors such as BFSI, real estate, technology, consulting, or enterprise software is preferred.
Desired Qualifications & Experience: Bachelor’s degree or equivalent with 6–12 years of experience in Data Engineering, including at least 1–2 years with Generative AI/LLM technologies. Proven experience architecting scalable data platforms and deploying AI models into production.