Data Piper provides specialized, competitive talent to help you build your team's capabilities and success. Our staff bring proven experience from the most competitive companies in Silicon Valley, so you can trust their expertise and dedication. For short- or long-term assignments, we take the hassle out of finding the next perfect fit for your team.
Requirements:
Demonstrated proficiency in Scala (object-oriented programming) or Python, and in SQL or Spark SQL
Experience with Databricks, including Delta Lake
Experience with Azure and cloud environments, including Azure Data Lake Storage (Gen2), Azure Blob Storage, Azure Tables, Azure SQL Database, Azure Data Factory
Experience with ETL/ELT patterns, preferably using Azure Data Factory and Databricks jobs
Fundamental knowledge of distributed data processing and storage
Fundamental knowledge of working with structured, unstructured, and semi-structured data
Responsibilities:
In collaboration with the Product, Development, and Enterprise Data teams, the Data Integration Engineer will design and maintain batch and streaming integrations across a variety of data domains and platforms
Work with stakeholders to define and develop data ingestion, validation, and transformation pipelines
Participate in solution and architecture design & planning
Troubleshoot data pipelines and resolve issues in alignment with the SDLC
Diagnose and troubleshoot data issues, recognizing common data integration and transformation patterns
Estimate, track, and communicate the status of assigned items to a diverse group of stakeholders