Title: Data Engineer
Location: Blue Ash, OH
Duration: 6 Months (Contract or Contract-to-Hire)
Work Type: Onsite (local candidates strongly preferred)
The team is seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The role includes analyzing, designing, and developing data products, pipelines, and information architecture deliverables with a focus on treating data as an enterprise asset. This position also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.
Responsibilities:
Analyze, design, and develop enterprise data solutions using Azure, Databricks, Spark, Python, and SQL
Develop, optimize, and maintain Spark/PySpark pipelines, including managing data skew, partitioning, caching, and shuffle optimization
Build and support Delta Lake tables and data models for analytical and operational use cases
Apply reusable design patterns, data standards, and architecture guidelines, collaborating with internal clients as needed
Use Terraform to provision and manage cloud and Databricks resources following Infrastructure as Code (IaC) practices
Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment
Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
Troubleshoot failures and improve reliability across Databricks jobs, clusters, and pipelines
Deploy fixes, upgrades, and enhancements in Azure environments
Work with engineering teams to enhance tools, systems, development processes, and data security
Participate in the development and communication of data strategy, standards, and roadmaps
Draft architectural diagrams, interface specifications, and other design documents
Promote reuse of data assets and support enterprise data catalog practices
Deliver timely support and communication to stakeholders and end users
Mentor team members on data engineering principles, best practices, and emerging technologies
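As one concrete illustration of the skew-management work named above: when a single hot key overwhelms one Spark partition, a common mitigation is key salting, which splits that key into several sub-keys so its records spread across partitions. The sketch below shows the idea in pure Python (no Spark dependency); all names, key values, and the bucket count are hypothetical, and a real Spark salted join would also need to replicate the matching rows on the other side of the join.

```python
from collections import Counter

SALT_BUCKETS = 4  # hypothetical fan-out for a known hot key

def salted_key(key: str, index: int, hot_keys: set) -> str:
    # Round-robin salt: deterministically spreads a hot key's records
    # across SALT_BUCKETS distinct sub-keys; cold keys pass through.
    if key in hot_keys:
        return f"{key}_{index % SALT_BUCKETS}"
    return key

# A skewed workload: one key dominates the dataset.
records = ["user_42"] * 1000 + ["user_7"] * 10
hot = {"user_42"}

plain_load = Counter(records)
salted_load = Counter(salted_key(k, i, hot) for i, k in enumerate(records))

print(max(plain_load.values()))   # 1000: one partition would get the whole hot key
print(max(salted_load.values()))  # 250: hot key split evenly across 4 sub-keys
```

The same principle underlies Spark's built-in skew handling (for example, adaptive query execution's skew-join optimization), which automates this splitting at the shuffle level.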
Required Qualifications:
5+ years of experience as a Data Engineer
Hands-on experience with Azure Databricks, Spark, and Python
Experience with Delta Live Tables (DLT) or Databricks SQL
Strong SQL and database background
Experience with Azure Functions, messaging services, or orchestration tools
Familiarity with governance, lineage, or cataloging tools such as Purview or Unity Catalog
Experience monitoring and optimizing Databricks clusters or workflows
Experience working with Azure cloud data services and understanding their integration with Databricks and enterprise platforms
Experience with Terraform for cloud provisioning
Experience with GitHub and GitHub Actions for version control and CI/CD automation
Strong understanding of distributed computing concepts including partitions, joins, shuffles, and cluster behavior
Familiarity with SDLC and modern engineering practices
Ability to balance multiple priorities, work independently, and stay organized
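The distributed-computing concepts listed above (partitions, shuffles, cluster behavior) reduce to one core idea: every node applies the same deterministic partitioner, so all records for a key can be routed to one owning partition before any per-key aggregation. A toy pure-Python sketch of that shuffle step, with hypothetical keys and partition count:

```python
from collections import defaultdict
import zlib

NUM_PARTITIONS = 4  # hypothetical partition count; every node can compute it

def owner(key: str) -> int:
    """Deterministic hash partitioner: all nodes agree on which
    partition owns a given key (the idea behind Spark's HashPartitioner)."""
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

# Input records scattered arbitrarily across partitions, as after a raw file read.
input_partitions = [
    [("a", 1), ("b", 2)],
    [("a", 3), ("c", 4)],
    [("b", 5), ("a", 6)],
]

# The "shuffle": route each record to the partition that owns its key,
# so the later per-key aggregation needs no cross-partition traffic.
shuffled = defaultdict(list)
for part in input_partitions:
    for key, value in part:
        shuffled[owner(key)].append((key, value))

# Each partition can now aggregate its keys entirely locally.
totals = {}
for recs in shuffled.values():
    for key, value in recs:
        totals[key] = totals.get(key, 0) + value

print(totals)  # all values for each key were co-located before summing
```

This also makes the cost model of a `groupBy` or join concrete: the expensive part is the record movement during the shuffle, which is exactly what partitioning strategy and skew handling aim to minimize.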
The client is a large, forward-thinking technology organization focused on delivering modern, scalable data solutions across the enterprise. The team emphasizes cloud-native engineering practices, strong data governance, and high-quality data products that support analytics, operational insights, and digital transformation initiatives. Engineers collaborate in a fast-paced, agile environment with a focus on innovation, reliability, and continuous improvement.