Our Client is seeking a driven Data Engineer to develop, expand, and maintain their serverless data infrastructure and pipelines.
The candidate will work with large-scale datasets and cloud technologies (including but not limited to AWS, Python, and SQL) to enhance their existing data processes.
To be successful in this role, the Data Engineer will be a passionate self-starter who can lead and maintain efforts to:
· Enhance current processes for data analysis, updates, and validation.
· Ensure data infrastructure is set up to handle new and varied data sources, including client data.
Key Responsibilities
You will architect, develop, and maintain scalable data pipelines to ingest, process, and transform data from multiple sources. This includes building and optimizing ETL/ELT workflows, maintaining data warehouses and lakes, and ensuring data quality and reliability.
You will have the opportunity to drive and shape our data infrastructure in collaboration with analysts, software engineers, and DevOps to understand data requirements and implement solutions that meet business needs. In addition, you will monitor pipeline performance, troubleshoot issues, and implement best practices for data governance and security.
Required Qualifications
- Bachelor's degree in Engineering, Computer Science, Information Systems, or equivalent practical experience
- Strong proficiency in Python (plus TypeScript/Node.js) for data processing and automation
- Hands-on experience with AWS services including DynamoDB, DMS, EC2, S3, Glue, Lambda, and Aurora Serverless
- Experience designing and building data pipelines and ETL processes
- Solid understanding of SQL and PostgreSQL, with familiarity across both relational and NoSQL database technologies
- Knowledge of data modeling, warehousing concepts, and dimensional modeling
Preferred Qualifications
- Experience with workflow orchestration tools like AWS Step Functions or Apache Airflow
- Familiarity with big data technologies such as Spark, Hadoop, or Kafka
- Knowledge of containerization and infrastructure as code (Docker, Terraform, ECS)
- Understanding of data governance, security, and compliance requirements
- Experience with AI/ML, HR tech, and/or B2B SaaS is a plus
Technical Skills
Python, AWS (DMS, EC2, S3, Glue, Lambda, and Aurora Serverless), SQL, PostgreSQL, ETL/ELT development, data warehousing, version control (Git), CI/CD pipelines
Pay: $95,000.00 - $135,000.00 per year
Application Question(s):
- Do you require visa sponsorship to work full-time?
Experience:
- data processing and automation: 5 years (Required)
- Python: 5 years (Required)
- AWS: 5 years (Required)
Work Location: In person