Role Description

This role provides an exciting opportunity to roll out a new strategic initiative within the firm: the Enterprise Infrastructure Big Data Service.
The Big Data Developer serves as a development and support expert, responsible for the design, development, automation, testing, support, and administration of the Enterprise Infrastructure Big Data Service.
The role requires experience with both Hadoop and Kafka, and involves building and supporting a real-time streaming platform used by the data engineering community.
The incumbent will be responsible for feature development, ongoing support and administration, and documentation of the service.
The platform provides a messaging queue and a blueprint for integrating with existing upstream and downstream technology solutions.
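The integration blueprint mentioned above can be illustrated with a minimal producer configuration for a Kafka-based messaging queue; the broker addresses and settings below are illustrative assumptions, not the firm's actual configuration:

```properties
# Hypothetical Kafka producer settings for publishing to the streaming platform
bootstrap.servers=broker1:9092,broker2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# Wait for full replication before acknowledging a write
acks=all
# Avoid duplicate records on producer retries
enable.idempotence=true
```

Downstream consumers would subscribe to the relevant topics using a matching deserializer configuration.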
Candidate Description

The incumbent will have the opportunity to work directly across the firm with developers, operations staff, data scientists, architects, and business constituents to develop and enhance the big data service.
- Development and deployment of data applications
- Design and implementation of infrastructure tooling; work on horizontal frameworks and libraries
- Creation of data ingestion pipelines between legacy data warehouses and the big data stack
- Automation of application back-end workflows
- Building and maintaining back-end services built with multiple service frameworks
- Maintaining and enhancing applications backed by Big Data computation applications
- Eagerness to learn new approaches and technologies
- Strong problem-solving skills
- Strong programming skills
- Background in computer science, engineering, physics, mathematics, or equivalent
- Experience with Big Data platforms (vanilla Hadoop, Cloudera, or Hortonworks)
- Preferred: experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)
- Preferred: experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, Couchbase, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools