Job Description
The Client is seeking highly skilled Data Engineers to join the team. The ideal candidates will have strong expertise in modern data engineering tools and cloud technologies, with proven hands-on experience in Python, PySpark, Snowflake, and AWS services including EMR and EKS.
Responsibilities:
- Design, build, and optimize scalable data pipelines for large-scale data processing.
- Develop and maintain ETL/ELT workflows using Python and PySpark.
- Implement and manage data solutions on Snowflake and AWS services such as EMR and EKS.
- Ensure data quality, reliability, and availability across systems.
- Collaborate with data scientists, analysts, and business stakeholders to deliver efficient data solutions.
- Monitor, troubleshoot, and optimize data pipelines for performance and cost efficiency.
- Work in an agile environment, contributing to sprint planning, daily stand-ups, and retrospectives.
Required Skills & Experience:
- 3–6+ years of hands-on data engineering experience.
- Strong proficiency in Python and distributed computing with PySpark.
- Deep understanding and practical knowledge of Snowflake (data modeling, performance tuning, query optimization).
- Hands-on experience with AWS EMR (Spark/Hadoop clusters) and EKS (Kubernetes on AWS).
- Solid understanding of data warehousing, data lakes, and ETL best practices.
- Strong problem-solving and debugging skills in large-scale data environments.
- Experience working in agile delivery models.
Job Type: Contract
Pay: $40.00–$50.00 per hour
Expected hours: 40 per week
Work Location: In person