Position: Data Engineer (Databricks/Cloud Data Pipelines/Azure)
Location: Hybrid (mostly remote; 20% onsite in Montgomery County, MD, near the northern DC metro area)
Contract-to-Hire (Candidates must be willing to convert to FTE after contract period)
- Target Pay Rate Range: $62 - $72/hour (W2)
- Target Conversion Salary: $130K - $140K (depending on experience)
Benefits: This job is eligible for medical, dental, vision, and 401(k)
Work Auth: U.S. Citizen or Green Card holder (must be eligible to work on W2 without requiring sponsorship now or in the future)
About the Role:
A mission-driven organization supporting a vital medical professional community is seeking a Data Engineer to join its Data Governance and Analytics team. This role focuses on building and optimizing scalable data pipelines, enabling advanced analytics and machine learning models, and supporting enterprise data governance standards. You’ll work on high-impact initiatives that shape data strategy and architecture across the organization.
The ideal candidate is a hands-on engineer with expertise in Databricks, Azure, and CI/CD automation, combined with strong Python and SQL skills. You’ll lead technical best practices, mentor peers, and drive innovation in data engineering and DevOps.
Minimum Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field
- 7+ years building enterprise-level data solutions on cloud platforms
- Proven expertise in Databricks, Azure, Python, SQL, and Apache Spark
- Strong experience with CI/CD automation and DevOps practices
- Familiarity with data governance frameworks and security compliance
- Excellent collaboration and communication skills
- Nice to Have: Experience with nonprofits, associations, or mission-driven organizations
Responsibilities:
- Design and optimize scalable data pipelines for analytics, AI/ML, and BI workloads
- Implement ETL/ELT solutions for multi-source data ingestion and transformation
- Drive DevOps adoption with CI/CD automation and Infrastructure as Code
- Manage and optimize Azure cloud data storage for cost and performance
- Ensure data quality, integrity, and compliance with governance standards
- Collaborate with architects, analysts, and business stakeholders to deliver data-driven solutions
- Mentor junior engineers and foster continuous improvement
- Evaluate emerging technologies to enhance scalability and innovation
Desired Skills:
- Databricks – Advanced experience with Databricks Workflows for orchestrating production-grade pipelines
- Azure Cloud – Expertise in Azure Data Factory, Data Lake, Synapse, Key Vault, and Monitor
- CI/CD & DevOps – Automated deployments, Infrastructure as Code (Terraform), GitHub Actions, Azure DevOps
- Programming – Strong Python and SQL for ETL/ELT, analytics, and ML workflows
- Apache Spark (PySpark) – Distributed data processing and real-time analytics
- Data Governance – Data quality, lineage, cataloging, and compliance frameworks
- ETL/ELT Design – Structured streaming, batch ingestion, orchestration (Airflow, Kafka)
- RESTful APIs – Integration with OAuth authentication, JSON parsing, and scalable pipelines
- Mentorship – Ability to guide junior engineers and promote best practices
Addison Group is an Equal Opportunity Employer. Addison Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Addison Group complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. Reasonable accommodation is available for qualified individuals with disabilities, upon request.
IND 005-009