USC/GC only, W2
Job Overview
We are seeking a dynamic and innovative Data Engineer to join our team and drive the development of scalable, high-performance data solutions. In this role, you will be instrumental in designing, building, and maintaining robust data pipelines and architectures that empower data-driven decision-making across the organization. Your expertise will enable seamless integration of diverse data sources, optimize data workflows, and support advanced analytics initiatives. This is an exciting opportunity for a motivated professional eager to leverage cutting-edge technologies to transform complex data into actionable insights.
Responsibilities
- Design, develop, and implement scalable data pipelines using ETL (Extract, Transform, Load) processes to facilitate efficient data flow across various platforms.
- Build and maintain large-scale data warehouses utilizing tools such as Apache Hive, Hadoop, Spark, and cloud-based solutions like Azure Data Lake and AWS.
- Collaborate with cross-functional teams to gather requirements and translate them into effective data models and architectures.
- Manage database systems including Microsoft SQL Server, Oracle, and other relational databases; optimize query performance and ensure data integrity.
- Develop and deploy RESTful APIs for seamless data access and integration with external systems.
- Utilize programming languages such as Python, Java, and VBA, along with shell scripting (e.g., Bash), to automate workflows and enhance system functionality.
- Support model training, data analysis, and Looker-based reporting to provide actionable insights for business stakeholders.
- Implement best practices in database design, data governance, security protocols, and compliance standards within an Agile development environment.
- Conduct analysis of big data sets to identify patterns, trends, and opportunities for process improvements or new product features.
- Stay current with emerging technologies such as Informatica, Talend, and Linked Data concepts, as well as cloud services like Azure Data Lake, to continually enhance our data infrastructure.
Experience
- Proven experience as a Data Engineer or in a similar role with a strong understanding of Big Data ecosystems including Hadoop, Spark, Apache Hive, and related tools.
- Hands-on expertise with cloud platforms such as AWS or Azure, including services like Azure Data Lake, for scalable storage solutions.
- Extensive knowledge of relational databases including Microsoft SQL Server and Oracle, with experience designing efficient database schemas.
- Proficiency in programming languages such as Python and Java for developing data pipelines and automation scripts.
- Familiarity with ETL tools like Talend or Informatica for integrating diverse data sources effectively.
- Experience working within an Agile methodology environment to deliver iterative improvements on complex projects.
- Strong analytical skills with the ability to interpret large datasets using analytics tools such as Looker or similar BI platforms.
- Knowledge of RESTful API development for system integration purposes.
- Ability to perform model training tasks and contribute to analytics initiatives that support strategic decision-making.
- Excellent problem-solving skills combined with attention to detail in database design and shell scripting tasks.

Join us if you're passionate about transforming raw data into strategic assets! Bring your expertise in modern data technologies to a collaborative environment where innovation drives success!
Pay: $70.00 - $90.00 per hour
Work Location: Remote