Duties: Implement and curate our big data strategy, and expand and optimize data pipelines and data movement across teams. Build and deploy solutions that enable the Organization to easily wrangle, prep, and leverage data in support of the ongoing needs of our clients. Collaborate with our Data Science, Engineering, and Product teams to understand requirements and to provide guidance and recommendations on solutions that meet their needs. Create and maintain efficient and scalable data pipeline architectures. Define, implement, and evolve the data models necessary to support our big data lifecycle needs. Build solutions to ingest, transform, and load data from a wide variety of data sources. Evaluate the existing data ecosystem and recommend tools, processes, and methods that support the target strategy. Refine the existing processes, guidelines, and tools for data management solutions covering data movement, data security, data privacy, and metadata management. Contribute to technical strategy, architectures, and risk consultation for Engineering and Product teams. Establish analytics tools that enable data scientists and business analysts to build and optimize models that support our business goals. Work with all stakeholders, including executive, product, and engineering teams, to assist with data-related issues and needs.
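To make the ingest-transform-load duty concrete, here is a minimal sketch of that kind of pipeline. It is illustrative only, not the Organization's actual tooling; the source URL, table name, and function names are all hypothetical, and a local SQLite file stands in for a warehouse.

```python
# Minimal ingest-transform-load sketch (hypothetical names throughout;
# SQLite stands in for a production warehouse such as Snowflake/BigQuery).
import csv
import io
import sqlite3
import urllib.request

SOURCE_URL = "https://example.com/events.csv"  # hypothetical upstream source


def extract(url: str) -> list[dict]:
    """Ingest raw CSV rows from an upstream source."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))


def transform(rows: list[dict]) -> list[tuple]:
    """Wrangle/prep: drop incomplete rows and normalize types."""
    out = []
    for row in rows:
        if row.get("user_id") and row.get("amount"):
            out.append((row["user_id"], float(row["amount"])))
    return out


def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Load prepared rows into a table standing in for a warehouse."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, amount REAL)")
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)
    con.commit()
    con.close()


if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)))
```

In practice each stage would be scheduled and monitored by an orchestration layer rather than run as a single script, but the extract/transform/load separation shown here is the shape the role's pipeline work takes.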
Requirements: Bachelor's Degree in Computer Science or a related software engineering field, with 3 years' experience in a Data Engineering role and familiarity with a wide variety of database technologies, both relational and non-relational. The 3 years' experience must include: big data technologies and frameworks, e.g., Snowflake, BigQuery, and Redshift; implementation of large-scale data lake solutions and toolsets; big data solutions developed in large cloud computing infrastructures such as AWS or Azure; ETL tools, e.g., Fivetran, Stitch, and Hevo; data orchestration to enable analysis and maintenance of raw big data; and advanced SQL, including authoring, evaluating, and optimizing queries.
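As a concrete illustration of the "authoring, evaluating, and optimizing queries" requirement, the sketch below authors an aggregate query and inspects its plan before and after adding an index. It uses SQLite's EXPLAIN QUERY PLAN as a stand-in for the plan-inspection tools of warehouse engines like Snowflake or BigQuery; the table, columns, and sample rows are hypothetical.

```python
# Query authoring and plan evaluation sketch (hypothetical schema;
# SQLite stands in for a production warehouse engine).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id TEXT, amount REAL, ts TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("u1", 10.0, "2024-01-01"),
        ("u1", 5.0, "2024-01-02"),
        ("u2", 7.5, "2024-01-01"),
    ],
)

query = """
SELECT user_id,
       SUM(amount) AS total,
       SUM(amount) * 1.0 / COUNT(*) AS avg_amount
FROM events
WHERE ts >= '2024-01-01'
GROUP BY user_id
"""

# Evaluate the plan before indexing: the filter requires a full table scan.
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print("before index:", row)

# Optimize: index the filter column, then re-check the plan.
con.execute("CREATE INDEX idx_events_ts ON events (ts)")
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print("after index:", row)

for row in con.execute(query):
    print(row)
```

The same author-evaluate-optimize loop applies on warehouse engines, where the plan output instead reports partition pruning, clustering, or distribution-key usage.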