Job Description
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analysing large sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating these solutions with the architecture used across the company.
Responsibilities
Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
Implementing ETL processes to import data from existing data sources where relevant (see the sketch after this list)
Monitoring performance and advising on any necessary infrastructure changes
Defining data retention policies
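For illustration, a minimal sketch of the kind of ETL job described above, written in PySpark; the input path, column names, and output path are hypothetical placeholders rather than details of the role's actual systems:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session for the ETL job.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw records from an existing data source on HDFS
# (hypothetical path and schema).
raw = spark.read.option("header", "true").csv("hdfs:///data/raw/events.csv")

# Transform: parse timestamps and drop malformed rows.
events = (raw
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("event_ts").isNotNull()))

# Load: persist in a columnar format for downstream querying,
# e.g. from Hive or Impala.
events.write.mode("overwrite").parquet("hdfs:///data/curated/events")

spark.stop()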
Skills and Qualifications
Proficiency with Hadoop v2, MapReduce, HDFS
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
Experience with Spark
Experience with various messaging systems, such as Kafka or RabbitMQ
Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
Good understanding of Lambda Architecture, along with its advantages and drawbacks
Experience with Cloudera / MapR / Hortonworks
Proficient understanding of distributed computing principles
Management of a Hadoop cluster, with all included services
Ability to solve any ongoing issues with operating the cluster
Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming (see the sketch after this list)
Experience with integration of data from multiple data sources
Experience with NoSQL databases, such as HBase, Cassandra, or MongoDB
Knowledge of various ETL techniques and frameworks, such as Flume
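For illustration, a minimal stream-processing sketch using Spark Structured Streaming with Kafka as the source; the broker address and topic name are hypothetical, and running it requires the spark-sql-kafka connector package on the classpath:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Subscribe to a Kafka topic; each record arrives as key/value bytes.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load())

# Decode the message payload and count events per one-minute
# processing-time window.
counts = (stream
          .select(F.col("value").cast("string").alias("event"),
                  F.current_timestamp().alias("ts"))
          .groupBy(F.window("ts", "1 minute"))
          .count())

# Emit running counts to the console for demonstration; a production
# job would write to a durable sink such as HDFS or HBase instead.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()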