Responsible for overseeing junior data engineering activities and aiding in building the organisation's data collection systems and processing pipelines.
Oversee the infrastructure, tools and frameworks used to support the delivery of end-to-end solutions to business problems through high-performing data infrastructure.
Responsible for expanding and optimising the organisation's data and data pipeline architecture, whilst optimising data flow and collection to ultimately support data initiatives.
Experience Description :
Strong analytical skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
A successful history of manipulating, processing and extracting value from large, disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Provide data engineering guidance and information services, ensure an effective data engineering capability, and work closely with data analysts and data scientists to ensure an effective data team.
Collaborate with technology and project teams.
Manage SLAs and technical service delivery of vendors across the development, implementation and customer service aspects of all data engineering requirements.
Nature of relationship : Manage the relationship
Postgraduate Degree / Master's Degree
8-10 years' experience
Experience with big data tools : Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools : Azkaban, Luigi, Airflow, etc.
Experience with AWS cloud services : EC2, EMR, RDS, Redshift.
Experience with stream-processing systems : Storm, Spark-Streaming, etc. Experience with object-oriented / object-function scripting languages : Python, Java, C++, Scala, etc.