Job Overview
We are seeking a highly skilled Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and architectures to support advanced analytics and business intelligence initiatives. This role offers the opportunity to work with cutting-edge technologies such as AWS, Azure Data Lake, Hadoop, and Spark, contributing to the organization’s data-driven decision-making processes. The position requires strong technical expertise, analytical skills, and experience with various data management tools and frameworks.
Duties
- Design, develop, and optimize ETL processes using tools like Informatica and Talend, as well as custom scripting in Python or Bash.
- Build and maintain scalable data pipelines utilizing Apache Hive, Spark, and other big data technologies.
- Develop and manage data warehouses and databases including Microsoft SQL Server, Oracle, and cloud-based solutions such as Azure Data Lake.
- Implement data models, database schemas, and design strategies for efficient data storage and retrieval.
- Integrate diverse data sources, including Linked Data and RESTful APIs, ensuring seamless data flow across platforms.
- Collaborate with cross-functional teams using Agile methodologies to deliver high-quality data solutions aligned with business needs.
- Conduct analysis, generate insights, and support model training efforts through effective data management practices.
- Maintain documentation of architectures, workflows, and processes to ensure compliance and facilitate future enhancements.
- Monitor system performance, troubleshoot issues, and implement improvements for reliability and efficiency.
Requirements
- Proven experience with cloud platforms such as AWS and Azure.
- Strong programming skills in Java, Python, and VBA, plus shell scripting (Bash or another Unix shell).
- Extensive knowledge of big data technologies including Hadoop, Apache Hive, and Spark.
- Hands-on experience with relational databases like Microsoft SQL Server, Oracle, and familiarity with data warehouse concepts.
- Proficiency in ETL tools such as Informatica or Talend; experience with RESTful API integration is a plus.
- Solid understanding of database design and data modeling, strong analytical skills, and knowledge of analytics best practices.
- Familiarity with modern data architecture components such as data lakes, model-training pipelines, and linked data concepts.
- Experience working within an Agile environment with a focus on collaboration, continuous improvement, and delivery of scalable solutions.
- Strong problem-solving skills combined with excellent communication abilities to translate complex requirements into technical solutions.

Join our team to leverage your expertise in building innovative data systems that empower strategic decision-making across the organization!
Job Types: Full-time, Contract
Pay: $60.00 - $70.00 per hour
Expected hours: 40 per week
Benefits:
- Employee assistance program
- Flexible schedule
- Health insurance
- Relocation assistance
Work Location: In person