Company: Truly Free Home/TrulyFree.com
Position: Data Engineer
Reports to: Chief Data & Analytics Officer
Location: Remote in the US
Role Overview
The Data Engineer is the owner of the data layer that enables speed, scale, and trust across the
Business Intelligence technology stack. This role is responsible for ensuring that clean, well-structured,
and analytics-ready data is consistently available to the Business Intelligence team, eliminating the
need for downstream data wrangling and ad hoc transformations. By owning the full lifecycle of ETL
and ELT pipelines and core data transformations, the Data Engineer enables the Business Intelligence
team to focus on analysis, insight generation, and advanced measurement rather than data
preparation.
Working within the company’s modern data stack, including Fivetran for data ingestion, BigQuery as the
cloud data warehouse, dbt for data transformation, and SQL and Python for pipeline development and
automation, the Data Engineer is accountable for building and maintaining production-grade data
pipelines and transformation layers. BI Analysts and Data Scientists consume these transformed
datasets but are not responsible for building or maintaining dbt transformations; those are owned by
the Data Engineer to ensure consistency, reliability, and reuse across the organization.
These models are designed explicitly to support downstream analytics and reporting, with a strong
understanding of how datasets are consumed by Looker.
The Data Engineer partners closely with BI Analysts and Data Scientists as a core member of the
Business Intelligence team. This role supports a large and diverse group
of stakeholders, including marketing teams, executive leadership, and downstream business users who
rely on Looker dashboards and reports for high-impact, revenue-influencing decisions. Experience
designing data models that perform reliably at scale in Looker, including an understanding of how
Looker explores, joins, and aggregates data, is a key factor in success for this role.
** This role requires experience with ecommerce, CPG, DTC, and subscription business models.
Responsibilities and Duties
Data Pipeline and Transformation Ownership
- Own the design, development, and ongoing operation of ETL and ELT pipelines ingesting data from marketing platforms, ecommerce systems, finance tools, and internal applications. This includes both managing existing vendor-supported pipelines and designing and building new pipelines when required.
- Evaluate data ingestion and transformation approaches, including third-party tools and custom-built solutions, selecting the appropriate method based on reliability, scalability, and business needs.
- Build, maintain, and version control all dbt transformation models that produce analytics-ready datasets for reporting and analysis.
- Design core business datasets for consistency, reuse, and performance across all BI use cases, including downstream consumption in Looker.
- Eliminate redundant, ad hoc, or analyst-managed data transformations by centralizing transformation logic within the data platform.
Data Quality, Reliability, and Availability
- Partner with BI Analysts to implement and maintain trusted, well-governed sources of truth for datasets and metrics in alignment with established business definitions and data dictionary standards.
- Implement automated data quality checks, monitoring, and alerting to ensure data accuracy, completeness, and freshness.
- Proactively identify and resolve data pipeline failures, schema changes, and upstream data issues before they impact analytics.
- Establish and maintain service level expectations for data availability and pipeline reliability.
Analytics and Data Science Enablement
- Ensure that BI Analysts and Data Scientists have consistent, timely access to analytics-ready data without the need for manual preparation.
- Support the operationalization and refresh of analytics and data science models by ensuring stable inputs, schemas, and dependencies.
- Enable consistent consumption of model outputs across BI tools and downstream reporting use cases.
- Optimize warehouse performance and costs to support fast, scalable analytics.
Platform Standards and Documentation
- Define and enforce standards for data modeling, transformations, naming conventions, and pipeline development.
- Document data models, pipelines, and transformation logic to support transparency and self-service analytics.
- Manage access controls and permissions within the data warehouse in alignment with governance and security requirements.
Collaboration and Continuous Improvement
- Work as a core member of the Business Intelligence team alongside BI Analysts and Data Scientists.
- Partner with the Chief Data and Analytics Officer on data platform roadmap planning and technical decision-making.
- Continuously improve pipeline performance, maintainability, and scalability as the business grows.
Success Measures
- Analytics-ready data is consistently available to BI Analysts and Data Scientists without the need for manual data wrangling or ad hoc transformations.
- dbt transformation models are centralized, well-documented, version-controlled, and reliably maintained by the Data Engineer.
- Core business datasets are trusted as sources of truth for reporting and analysis across marketing, operations, and finance.
- Data pipelines meet defined expectations for freshness, reliability, and uptime, with data-related incidents decreasing over time.
- BI Analysts and Data Scientists are able to move faster on analysis, insight generation, and advanced measurement due to improved data availability and quality.
- Data warehouse performance supports fast, scalable analytics while controlling costs as data volume and usage grow.
- Data platform standards and documentation are consistently followed and enable reuse, transparency, and maintainability.
Education, Experience, and Qualifications
Education
- Bachelor’s degree in Computer Science, Information Systems, Data Engineering, Analytics, or a related technical field, or equivalent practical experience.
Professional Experience
- 4+ years of professional experience in a Data Engineer, Analytics Engineer, or similar role supporting analytics and business intelligence.
- Experience using dbt or equivalent data transformation tools to build, test, document, and maintain centralized, production-grade data transformation models.
- Experience integrating data from marketing platforms, ecommerce systems, and finance tools.
- Proficiency in Python for data pipeline development, automation, or orchestration in a production analytics environment.
- Proven experience owning end-to-end ETL and ELT pipelines in a production analytics environment.
- Strong experience working with cloud data warehouses, with BigQuery preferred.
- Advanced SQL skills, including the ability to design and optimize queries for analytics and reporting use cases.
- Experience supporting analytics and data science workflows by delivering analytics-ready datasets.
- Experience implementing data quality checks, monitoring, and pipeline reliability practices in support of strong data governance.
- Experience collaborating closely with BI Analysts and Data Scientists as part of a shared Business Intelligence team.
Technical and Analytical Competency
- Strong command of SQL, with the ability to design, optimize, and maintain complex queries and analytical datasets.
- Demonstrated proficiency with dbt, including building, testing, documenting, and maintaining production-grade data transformation models.
- Strong understanding of data modeling concepts for analytics, including fact and dimension design, metric consistency, and historical data management.
- Experience working in BigQuery or a comparable cloud data warehouse, with an understanding of performance optimization and cost management.
- Ability to implement data quality checks, monitoring, and alerting to ensure accuracy, completeness, and freshness of data.
- Understanding of how analytical datasets are consumed by BI tools and reporting layers, and how data structure impacts usability and performance.
- Analytical mindset with the ability to reason about data correctness, edge cases, and downstream analytical impact.
- Ability to translate business and analytical requirements into scalable, maintainable technical solutions.