Parker's mission is to increase the number of financially independent people. We believe we can achieve this goal by building tools that enable independent business owners to scale their businesses profitably. Our core product combines a virtual credit card system with dynamic spending limits and software tooling to help merchants grow and optimize their profitability.
We are looking to expand our headcount quickly to keep up with demand. Our investors include Solomon Hykes (founder of Docker), Paul Buchheit (creator of Gmail), Paul Graham (co-founder of Y Combinator), Robert Leshner (founder of compound.finance), and many more. We are a Series B company that has raised over $50M from top-tier fintech investors.
We're looking for a Data Engineer to join our data team and help build reliable, scalable, and well-documented data systems. This is an excellent opportunity for someone interested in a fintech career to grow within a modern data stack environment. You'll support the development of data pipelines, help maintain our data infrastructure, and collaborate with analysts, data scientists, and backend engineers to make data accessible and trustworthy.
Responsibilities:
Assist in building and maintaining data pipelines (ETL/ELT) for internal and external data
Support data ingestion from APIs, files, and databases into our data warehouse (see the sketch after this list)
Write SQL queries and Python scripts to clean, join, and transform data for reporting and analysis
Monitor data quality and troubleshoot pipeline issues
Contribute to documentation and testing of data workflows
Learn and work with tools like dbt and Dagster
Follow best practices for version control (Git) and coding standards
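To give a sense of the day-to-day work, here is a minimal sketch of the kind of ingestion task described above. The endpoint, connection string, and raw.transactions table are hypothetical illustrations, not Parker's actual systems.

    # Minimal sketch: pull JSON records from an API and upsert them into a
    # Postgres table. All names here (endpoint, table, columns) are hypothetical.
    import requests
    import psycopg2

    def load_transactions(api_url: str, dsn: str) -> int:
        """Fetch transaction records from an API and upsert them into Postgres."""
        records = requests.get(api_url, timeout=30).json()
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            for rec in records:
                # The upsert keeps the load idempotent, so reruns after a
                # pipeline failure don't create duplicate rows.
                cur.execute(
                    """
                    INSERT INTO raw.transactions (id, amount_cents, merchant, created_at)
                    VALUES (%(id)s, %(amount_cents)s, %(merchant)s, %(created_at)s)
                    ON CONFLICT (id) DO UPDATE
                        SET amount_cents = EXCLUDED.amount_cents,
                            merchant = EXCLUDED.merchant
                    """,
                    rec,
                )
        return len(records)

In practice, a job like this would typically be scheduled and monitored by an orchestrator such as Dagster, with dbt models layered on top of the raw table for downstream reporting.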
Our tech stack:
Languages: Python, SQL
Data Warehouses: Redshift, Snowflake, BigQuery, Postgres
Orchestration & Integration: Dagster, Airbyte, dbt, Prefect
Cloud: AWS (S3, Glue, Lambda), GCP, or similar
Dev Tools: GitHub, Docker, VS Code
Requirements:
3+ years of experience in a data, backend, or analytics role (internships count!)
Strong SQL skills and an interest in analytics engineering
Intermediate Python knowledge (e.g., working with data, files, APIs)
Understanding of relational databases, columnar data warehouses, and data modeling concepts
Comfortable with Git and command-line tools
Curiosity and willingness to learn modern data tooling
Clear communication and collaboration skills
Nice to have:
Experience with dbt, Airflow, Dagster, or similar tools
Exposure to cloud platforms (AWS, GCP, etc.)
Familiarity with data quality, observability, or testing frameworks
Past projects involving large datasets or data APIs
Exposure to GraphQL and TypeScript
Compensation Range: $150K–$200K