Description
Key responsibilities include translating business requirements into technical specifications, developing H3‑driven spatial analytics pipelines, and implementing performance‑optimized algorithms for large datasets. The engineer will build API‑driven backend services with FastAPI, manage columnar analytics stacks like DuckDB and PyArrow, and create spatial computation utilities using H3, Shapely, and related libraries. Additional duties involve writing testable code, benchmarking, profiling, and maintaining CI‑ready open‑source‑style repositories.
- Translate business requirements into technical specifications.
- Design and implement H3‑driven spatial analytics pipelines, including aggregation, density estimation, hotspot detection, and proximity analysis.
- Develop deterministic, reproducible spatial reasoning functions and performance‑optimized algorithms for large datasets.
- Build API‑driven backend services using FastAPI, Pydantic, and asynchronous Python.
- Work with columnar analytics stacks (DuckDB, PyArrow, Parquet/GeoParquet) and perform vectorized processing with NumPy, pandas, or Polars.
- Implement spatial computations with H3, Shapely, and lightweight geospatial utilities.
- Write testable, benchmarked code employing pytest and async testing patterns; create spatial correctness tests and benchmarking suites.
- Deploy monitoring to track service health and data-flow performance.
- Optimize memory, CPU, and data layout using profiling and performance tools; ensure memory efficiency, batching, and query planning.
- Manage projects with Python package managers (uv, poetry) and maintain CI‑ready repositories with linting, formatting, and type checking.
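To illustrate the kind of H3-driven aggregation and hotspot-detection work listed above, here is a minimal, stdlib-only sketch. It is an assumption-laden illustration, not project code: `hotspot_cells` is a hypothetical helper, and the string cell IDs stand in for real H3 indexes (which `h3.latlng_to_cell` would produce in an actual pipeline).

```python
from collections import Counter
from statistics import mean, pstdev

def hotspot_cells(cell_ids, z_threshold=2.0):
    """Flag cells whose point count is a z-score outlier.

    `cell_ids` holds one spatial index per point; in a real H3
    pipeline these would come from h3.latlng_to_cell(lat, lng, res).
    """
    counts = Counter(cell_ids)          # density per cell
    mu = mean(counts.values())
    sigma = pstdev(counts.values())
    if sigma == 0:                      # uniform density: no hotspots
        return {}
    return {cell: n for cell, n in counts.items()
            if (n - mu) / sigma >= z_threshold}

# Example: one dense cell among nine sparse neighbors
cells = ["a"] * 50 + list("bcdefghij")
print(hotspot_cells(cells))  # → {'a': 50}
```

In production the same shape of computation would typically run vectorized (pandas/Polars group-bys or DuckDB aggregations over Parquet) rather than with Python dictionaries; the sketch only shows the aggregation-then-threshold pattern.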
Requirements
Candidates must have strong Python engineering experience in production systems, hands-on work with geospatial data and H3 or similar indexing, and a solid understanding of spatial statistics and pattern detection. Experience with backend integration of LLMs, reproducible pipelines, and performance tuning is required. Preferred qualifications include familiarity with MCP-style tool interfaces, RAG- and embeddings-based AI application development, benchmark-framework design, and experience with data lakehouse platforms such as Databricks.
- Experience with geospatial data and H3 or similar spatial indexing systems.
- Strong production‑level Python engineering skills.
- Proven ability to design efficient spatial data pipelines at scale.
- Solid understanding of spatial statistics and pattern detection.
- Ability to integrate backend systems with LLMs and AI components.
- Excellent written and verbal communication skills.
- Familiarity with Model Context Protocol (MCP) tool interfaces.
- Experience with RAG, embeddings, and AI application development.
- Background in designing benchmark and AI evaluation frameworks.
- Experience with data lakehouse platforms such as Databricks.
- Knowledge of geospatial metadata requirements.
- Security+ certification.
Benefits
NV5 offers a competitive compensation package with medical, dental, life insurance, PTO, 401(k), and professional development opportunities. The company provides equal employment opportunities and complies with all applicable non‑discrimination laws.
- Competitive compensation package.
- Medical, dental, and life insurance.
- Paid time off and 401(k) retirement plan.
- Professional development and advancement opportunities.
- Fully remote work with less than 10% travel.
Training + Development
Not specified.