Produces construction materials like aggregates, asphalt, and ready-mixed concrete.
Build/optimize data pipelines and deploy ML models for production analytics.
13 days ago
Junior (1-3 years), Intermediate (4-7 years)
Full Time
Birmingham, AL
Office Full-Time
Company Size
10,000 Employees
Service Specialisms
Construction materials
Aggregates production
Asphalt production
Ready-mixed concrete
Calcium products
Sector Specialisms
Construction Aggregates
Asphalt
Ready-Mixed Concrete
Roads
Tunnels
Bridges
Railroads
Airports
Role Description
data architecture
data pipelines
data modeling
nlp
ml modeling
ai agents
Apply understanding of data management and data engineering principles to maintain scalable data architecture.
Contribute to team efforts, including taking on new tasks as assigned by the supervisor.
Design, build, and maintain robust and scalable data pipelines to process, transform, and organize large, complex datasets from disparate sources. Identify, assess, and integrate valuable data sources, developing automated processes for continuous data collection and ingestion.
Assist with special projects as needed to support departmental goals.
Design dimensional data models using established modeling methodologies to ensure enterprise data consistency.
Apply expertise in natural language processing (NLP) and text mining techniques where applicable.
Undertake meticulous preprocessing, cleansing, and transformation of large structured and unstructured datasets to ensure data quality, usability, and accuracy for modeling.
Design, build, and rigorously validate machine learning and statistical models (including regression, classification, clustering, and ensemble methods) for predictive and prescriptive analytics.
Handle cross-functional support duties, such as helping other departments with specific projects when required.
Design and implement data-grounded AI agents using large language models (LLMs) and specialized toolkits (e.g., LangChain, agent frameworks) to automate complex decision-making and data querying workflows.
Analyze large amounts of information to discover critical trends and patterns. Apply the scientific method to design experiments, formulate hypotheses, and conduct rigorous testing.
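The preprocessing and model-validation duties above can be sketched in a few lines of Python. This is an illustrative sketch only: the dataset, column names, and target variable are invented for the example, and the employer's actual data and tooling may differ.

```python
# Illustrative sketch: synthetic data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Simulate a messy source extract: numeric features with missing values.
raw = pd.DataFrame({
    "tons_shipped": rng.normal(100, 15, 500),
    "haul_miles": rng.normal(40, 10, 500),
})
raw.loc[raw.sample(frac=0.05, random_state=0).index, "haul_miles"] = np.nan
# Hypothetical binary target: was the shipment flagged?
raw["flagged"] = (raw["tons_shipped"] + rng.normal(0, 10, 500) > 110).astype(int)

# Cleansing step: impute missing values with each column's median.
clean = raw.fillna(raw.median(numeric_only=True))

# Rigorous validation: 5-fold cross-validated accuracy rather than a
# single train/test split.
model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(
    model, clean[["tons_shipped", "haul_miles"]], clean["flagged"], cv=5
)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The same pattern extends to the regression, clustering, and ensemble methods listed in the duties; the point is that cleansing and validation are explicit, repeatable steps rather than ad hoc work.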
Requirements
snowflake
python
sql
pyspark
mlops
aws
Hands-on experience with Snowflake, JIRA, or ServiceNow.
Extensive experience developing predictive data models, quantitative analyses, and visualization of large data sources, including both structured and unstructured data.
Experience leading or significantly contributing to the development of complex data solutions.
Hands-on expertise in data management, programming, and processing large data volumes using technologies such as Python, SQL, and PySpark.
Basic familiarity with, or hands-on experience using, Tableau or similar data visualization tools.
Use deep analytical skills and data science knowledge to address complex, real-world business challenges and drive measurable impact.
Hands-on experience with MLOps, Git Version Control, Unit/Integration/End-to-End Testing, CI/CD, and release management processes.
5 years of experience with statistical and programming languages for data analysis, specifically Python (including PySpark, NumPy, Pandas, Scikit-learn) and SQL.
Practical experience with big data processing frameworks such as Spark or similar distributed computing environments.
Familiarity with project management principles and best practices.
5 years of demonstrable experience in a data-focused role encompassing data exploration, data cleaning, and data visualization. Experience with cloud platforms (AWS, Azure, or GCP).
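As a rough illustration of the SQL and dimensional-modeling skills requested above, here is a minimal star schema built with Python's standard-library sqlite3 module. All table names, column names, and values are hypothetical, chosen only to echo the construction-materials domain.

```python
# Sketch of a minimal star schema; names and values are invented.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension table: one row per product.
cur.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)"
)
# Fact table: one row per shipment, keyed to the dimension.
cur.execute(
    """CREATE TABLE fact_shipment (
        product_id INTEGER REFERENCES dim_product(product_id),
        tons REAL)"""
)

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "aggregate"), (2, "asphalt")])
cur.executemany("INSERT INTO fact_shipment VALUES (?, ?)",
                [(1, 120.0), (1, 80.0), (2, 60.0)])

# Analytical query: total tons shipped per product.
rows = cur.execute(
    """SELECT p.name, SUM(f.tons)
       FROM fact_shipment f
       JOIN dim_product p USING (product_id)
       GROUP BY p.name
       ORDER BY p.name"""
).fetchall()
print(rows)  # -> [('aggregate', 200.0), ('asphalt', 60.0)]
```

In production this separation of conformed dimensions from fact tables is what keeps reporting consistent across sources, regardless of whether the warehouse is Snowflake or another platform.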