Mid-level Data Engineer responsible for designing, building, and maintaining data pipelines, data models, and data infrastructure to support analytics and AI initiatives. Additionally, design and build reusable data pipeline components and frameworks to accelerate the growth of new data initiatives at Mirion.
Responsibilities
- Design and implement scalable data pipelines using Spark, Azure Databricks, and similar tools.
- Build and optimize data models and dimensional schemas for both analytics and AI/ML use-cases.
- Develop data quality checks and monitoring to ensure data integrity.
- Write efficient SQL queries and optimize data warehouse performance.
- Collaborate with analytics and data science teams to understand data requirements.
- Contribute to data documentation and best practices across the platform.
- Troubleshoot data pipeline failures and optimize for performance.
Minimum Qualifications
- 2-4 years of experience in data engineering, analytics engineering, or a related field.
- Strong SQL expertise and experience with data warehouses (Snowflake, BigQuery, Redshift, etc.).
- Experience building and maintaining data pipelines using Spark, Airflow, or similar tools.
- Proficiency in Python or Scala for data processing.
- Understanding of data modeling, ETL/ELT concepts, and dimensional design.
- Experience with cloud platforms (AWS, GCP, or Azure).
Preferred Qualifications
- Experience with Databricks or Delta Lake.
- Experience with real-time data streaming (Kafka, Kinesis).
- Experience managing infrastructure in Azure or AWS cloud.
- Knowledge of data governance and metadata management.
- Background in building data platforms from scratch.