Job Summary
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will work closely with data analysts, data scientists, and business teams to ensure reliable data flow and maintain optimized data systems that support decision-making.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT data pipelines
- Build and optimize data architectures, including data lakes and data warehouses
- Ensure data quality, integrity, and availability across systems
- Work with large datasets using distributed computing frameworks
- Collaborate with cross-functional teams to understand data requirements
- Optimize SQL queries and improve data-processing performance
- Implement data governance, security, and compliance standards
- Monitor and troubleshoot data pipeline issues
Required Skills & Qualifications
- Strong experience with SQL and database management (MySQL, PostgreSQL, etc.)
- Proficiency in Python / Scala / Java
- Hands-on experience with ETL tools (Informatica, Talend, Apache NiFi, etc.)
- Experience with big data technologies such as Apache Spark and Hadoop
- Familiarity with cloud platforms (AWS, Azure, Google Cloud Platform)
- Experience with data warehousing tools (Snowflake, Redshift, BigQuery)
- Knowledge of data modeling and schema design
- Understanding of data pipeline orchestration tools (e.g., Apache Airflow)
Preferred Qualifications
- Experience with real-time data processing (Apache Kafka, Spark Streaming)
- Knowledge of DevOps practices and CI/CD pipelines
- Experience working in Agile environments
- Strong problem-solving and analytical skills
Pay: $109,467.80 - $131,832.19 per year
Benefits:
- Dental insurance
- Employee assistance program
- Flexible schedule
- Parental leave
- Retirement plan
- Tuition reimbursement
Work Location: Remote