AWS Data Engineer
Reverside Professional Services
Johannesburg, Gauteng, ZA
15h ago
Source: Careers24
  • Introduction: Reverside is an IT services provider. We are always looking for professional candidates to join our Software Development team, offering opportunities to work on exciting projects within our well-established client base.
  • Description: AWS Data Engineer in Johannesburg. We are looking for AWS Data Engineer professionals with 5+ years of solid development experience in Hadoop and a solid working knowledge of the SDLC.
  • Requirements:

    Responsibilities: Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools such as Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue and Snowflake.

  • Analyze, re-architect and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services.
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python or Scala (a minimal PySpark sketch of such a pipeline follows the requirements list below).
  • Design and implement data engineering, ingestion and curation functions on the AWS cloud using AWS-native or custom programming.
  • Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
  • Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
  • Create and support real-time data pipelines built on AWS technologies including Glue, Redshift/Spectrum, Kinesis, EMR and Athena.
  • Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
  • Work closely with team members to drive real-time model implementations for the monitoring and alerting of risk systems.
  • Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
  • Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

    Qualifications:

  • Bachelor's Degree in Computer Science, Information Technology or another relevant field.
  • Experience in any of the following: AWS Athena and Glue, PySpark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Snowflake.
  • Proficient in AWS Redshift, S3, Glue, Athena, DynamoDB and EMR.
  • Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing and operations.

    Work Experience:

  • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
  • Experience building and operating highly available, distributed systems for the extraction, ingestion and processing of large data sets.
  • Experience working with distributed systems as they pertain to data storage and computing.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large, disconnected data sets.
  • Working knowledge of message queuing, stream processing and highly scalable big data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience in a Data Engineer or similar role.
  • Experience with big data tools is a must: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
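
    The responsibilities above centre on building ingestion-to-consumption pipelines on AWS. Purely as a minimal illustrative sketch (not part of the posting itself): a plain PySpark batch job that ingests raw JSON from S3, curates it, and publishes partitioned Parquet that Athena or Redshift Spectrum can query. All bucket names, paths and column names are hypothetical.

        # Illustrative sketch only: hypothetical buckets, paths and columns.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

        # Ingest: read raw JSON events from a hypothetical S3 landing zone
        # ("s3://" works on EMR; use "s3a://" with open-source Hadoop).
        raw = spark.read.json("s3://example-landing-bucket/orders/")

        # Curate: drop malformed rows, normalise types, derive a partition column.
        curated = (
            raw.dropna(subset=["order_id", "order_ts"])
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("order_date", F.to_date("order_ts"))
               .withColumn("amount", F.col("amount").cast("double"))
        )

        # Consume: write partitioned Parquet for Athena / Redshift Spectrum.
        (curated.write
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("s3://example-curated-bucket/orders/"))

        spark.stop()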