Responsibilities
- Assist in designing and developing scalable and reliable data pipelines to support business operations and analytics
- Help integrate, clean, and organize data from multiple structured and unstructured sources
- Work with Data Scientists and Analysts to support machine learning, reporting, and dashboarding workflows
- Contribute to building and maintaining ETL/ELT processes and support automation of routine data tasks
- Monitor and troubleshoot basic data pipeline issues under the guidance of senior engineers
- Support cross-functional teams in creating and updating data models and schemas
- Follow best practices for data governance, documentation, and data security
Requirements
- Strong foundational skills in Python (data manipulation, scripting, and basic pipeline development) and SQL (writing queries and basic optimization)
- Familiarity with MongoDB and/or relational databases (e.g., PostgreSQL, MySQL)
- Exposure to big data processing technologies such as Apache Spark, Hadoop, or Kafka, gained through coursework or projects
- Basic understanding of ETL/ELT concepts and data pipeline workflows
- Experience or academic exposure to at least one cloud platform (AWS, Azure, or GCP)
- Strong analytical and problem-solving skills with a willingness to learn and grow
- Good communication and teamwork skills, with the ability to work in hybrid, cross-functional environments
Job Type: Contract
Pay: From $30.00 per hour
Application Question(s):
- Will you now or in the future require visa sponsorship for employment (e.g., H-1B visa, OPT STEM extension)?
Work Location: In person