Major responsibilities:
- Act as the primary technical point of contact for the client, clarifying requirements and translating business goals into technical solutions
- Own and drive the DWH delivery plan, including prioritization, planning, and progress tracking
- Identify risks and dependencies (data, legacy systems, delivery) and proactively propose mitigation strategies
- Perform code reviews of SQL, PySpark, and data models across the team, ensuring consistent quality and architectural alignment
- Align technical solutions with stakeholders, presenting trade-offs and recommendations
- Distribute tasks within the team and maintain a sustainable development pace.
We'd love to hear from you if you have:
- 5+ years of experience in Data Engineering, including 2+ years in Azure
- Expert-level SQL skills (complex analytical queries, performance optimization)
- Strong experience with Azure Synapse Analytics and PySpark
- Hands-on experience with Azure Data Lake Storage Gen2 and data layer design (Raw / Silver / Gold)
- Experience with Azure SQL Managed Instance
- Strong knowledge of data modeling (Kimball / Inmon, SCD, historical data handling)
- Solid experience designing and building enterprise Data Warehouses (fact/dimension modeling, aggregation layers)
- Experience in technical leadership (code reviews, architecture decisions, mentoring)
- Experience working in Scrum teams and managing delivery from high-level requirements
- English level B2+ (regular communication with the client).
Nice to have:
- Experience with Python and Azure Data Factory
- Background in the insurance domain (premiums, commissions, taxes, etc.)
- Familiarity with BI tools (e.g., Power BI)
- Experience with lift-and-shift migration from legacy systems
- Azure certifications (e.g., DP-203).