Formal qualifications: An undergraduate qualification (Bachelor's degree or equivalent) in a relevant IM discipline, and/or technical competencies and certifications with relevant years of experience in a similar role.
Role-specific knowledge:
- Data Lake
- Data Modeling
- Data Architecture
- Azure Data Environment

Specialist areas:
- Strong experience of building large-scale file shipping and pipelines, ideally using Azure services such as AzCopy and Azure Data Lake.
- Experience of managing unstructured file metadata, conversions, standardisation and related workflows.
- Experience of building analysis jobs that scale on technologies such as Databricks or Azure Batch (see the sketch after this list).
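As an illustration of the file-metadata and scalable analysis experience described above, the following is a minimal PySpark sketch of the kind of job that might run on Databricks. The lake paths, file filter and column names are illustrative assumptions, not details taken from the role description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("file-metadata-scan").getOrCreate()

# Spark's binaryFile source exposes path, modificationTime and length for each file,
# which is enough to drive standardisation / conversion workflows downstream.
raw_files = (
    spark.read.format("binaryFile")
    .option("pathGlobFilter", "*.pdf")  # assumed filter; adjust per workflow
    .load("abfss://raw@examplelake.dfs.core.windows.net/incoming/")  # hypothetical lake path
)

file_metadata = raw_files.select(
    "path",
    "modificationTime",
    "length",
    F.regexp_extract("path", r"\.([^.\\/]+)$", 1).alias("extension"),
)

# Persist the catalogue so conversion jobs can be planned without rescanning the lake.
file_metadata.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/file_catalogue/"
)
```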
Key skills:
- Python - Proficient
- PySpark - Proficient
- SQL - Competent
- Solution Architecture - Competent
- API Design - Competent
- Containers - Competent
- CI/CD - Competent
- Azure Cloud - Competent
- Data stream patterns and technology - Proficient
- Data engineering design patterns - Competent
- Mining data - Beneficial

Responsibilities:
- Develop ETL pipelines. The data transformations will be developed in Azure Databricks using Python and on Azure SQL using T-SQL, and deployed using ARM templates (a minimal sketch follows this list).
- Combine and curate data in a central data lake.
- Serve data for applications and analytics through a variety of technologies such as SQL Server, Synapse, CosmosDB and TSI.
- Build transformation pipelines into dimensions and facts; strong knowledge of standard BI concepts is therefore mandatory.
- Build stream pipelines leveraging IoT Hub, Event Hub, Databricks streaming and other Azure stream technologies.
- Work in a fluid environment with changing requirements whilst maintaining absolute attention to detail.
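To illustrate the batch ETL responsibility above, here is a minimal PySpark sketch that curates raw data from the lake, splits it into a dimension and a fact table, and writes Delta output for downstream serving layers such as Synapse or SQL. The table names, paths and columns are assumptions made for the example only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical landing zone for raw order events.
raw_orders = spark.read.json(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"
)

# Dimension: one row per customer, deduplicated on the natural key.
dim_customer = (
    raw_orders.select("customer_id", "customer_name", "country")
    .dropDuplicates(["customer_id"])
)

# Fact: one row per order line, keyed back to the dimension.
fact_order = raw_orders.select(
    "order_id",
    "customer_id",
    F.to_date("order_ts").alias("order_date"),
    (F.col("quantity") * F.col("unit_price")).alias("line_amount"),
)

# Write to the curated zone as Delta so serving layers can consume it.
dim_customer.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/dim_customer/"
)
fact_order.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/fact_order/"
)
```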