W2 ONLY. FRESH GRADUATES WELCOME.
Join our innovative team as a Data Engineer and play a pivotal role in transforming raw data into actionable insights! In this dynamic position, you will design, develop, and optimize scalable data pipelines and architectures that empower data-driven decision-making across the organization. Your expertise will help harness the power of big data technologies and cloud platforms to support advanced analytics, machine learning, and business intelligence initiatives. If you thrive in a fast-paced environment where your technical skills can make a real impact, this is the opportunity for you!
Responsibilities:
- Develop, implement, and maintain robust ETL (Extract, Transform, Load) processes to facilitate seamless data flow from diverse sources such as AWS cloud services, Hadoop clusters, and on-premises databases.
- Design and optimize data warehouse solutions using tools like Microsoft SQL Server, Oracle, and Azure Data Lake to support complex analytics and reporting needs.
- Build scalable data pipelines utilizing Apache Spark, Hive, and Hadoop ecosystem components to process large volumes of structured and unstructured data efficiently.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications for database design and data modeling.
- Implement RESTful APIs for data integration and ensure secure access through authentication protocols.
- Conduct model training and analysis to support predictive analytics initiatives, leveraging Python, VBA, and Bash/shell scripting for automation.
- Utilize tools such as Talend, Informatica, Looker, and Apache Hive to streamline data workflows and enable insightful visualizations for stakeholders.
- Participate actively in Agile development cycles to deliver iterative improvements while maintaining high standards of code quality and documentation.
Requirements:
- Proven experience with cloud platforms such as AWS or Azure (including services like Azure Data Lake) for managing large-scale data environments.
- Strong proficiency in programming languages including Java, Python, and VBA, plus Bash/shell scripting for automation tasks.
- Extensive knowledge of big data technologies in the Hadoop ecosystem, such as Hadoop, Spark, and Hive, for processing massive datasets efficiently.
- Hands-on experience with SQL-based databases such as Microsoft SQL Server, Oracle Database, and Data Warehouse architectures.
- Familiarity with ETL tools including Talend or Informatica to design efficient data pipelines.
- Ability to analyze complex datasets using analytics tools like Looker or similar BI platforms to generate actionable insights.
- Solid understanding of database design principles and linked data concepts for integrating diverse datasets seamlessly.
- Experience with RESTful API development for secure data exchange between systems.
- Strong analytical skills with the ability to interpret large datasets and communicate findings effectively.
- Knowledge of Agile methodologies to collaborate effectively within fast-moving project teams.

Embark on a journey where your technical expertise fuels innovation! We’re committed to fostering an inclusive environment that supports your growth while offering opportunities to work on cutting-edge projects in big data analytics. Join us in shaping the future of data-driven decision-making!
Pay: $35.00 - $45.00 per hour
Work Location: Remote