Major responsibilities:
- Develop and maintain ETL processes using C# applications (Console Apps, Windows Services, Scheduled Jobs) to automate data ingestion and export
- Analyze and integrate new third-party data sources (JSON, XML, CSV, flat files, APIs), ensuring reliable parsing, validation, and transformation
- Design and optimize complex SQL queries, stored procedures, and functions (T-SQL) to load and transform data in the Data Warehouse
- Ensure high performance and scalability of data pipelines, including batch and streaming processing
- Troubleshoot and resolve complex data integration and performance issues as a subject matter expert
We'd love to hear from you if you have:
- 5+ years of experience in Data Engineering or similar roles
- Strong hands-on experience with Azure data stack: Azure Data Factory, Azure Synapse, PySpark, Spark SQL, Delta Lake
- Solid understanding of Data Warehouse concepts: Star schema, Snowflake schema, data normalization, data modeling, Slowly Changing Dimensions
- Good command of SQL (T-SQL) and experience in building production-grade data pipelines
- English level B2 or higher
Nice to have:
- Experience with Azure Log Analytics and KQL
- Familiarity with Azure CLI, CI/CD pipelines, and Azure DevOps
- Experience working in Scrum or Kanban teams
- GitHub profile with data engineering examples or pet projects