Intermediate Data Engineer (Remote)


Intermediate Data Engineer — Data Processing and Archiving, SQL, Big Data Analytics

This is a great opportunity to work with an industry-leading company in the finance sector.

The role will be based at our headquarters in London. The ideal candidate will have strong experience working on large-scale data processing projects within the financial services sector.

You'll need to be comfortable using Python and Java, both for developing new products and for supporting existing ones through enhancements or upgrades.

Experience with MongoDB is also desirable but not essential. You should ideally have substantial experience building scalable solutions that leverage Apache Hadoop and Spark to manage massive volumes of data across different platforms (e.g., AWS).


  • Extensive experience with Python development, including building complex solutions and applications.
  • Extensive experience using Python for data processing and transformation.
  • Strong understanding of Microsoft SQL Server and T-SQL Development
  • Strong understanding of code optimisation
  • Understanding of Real-Time Data Processing and Streaming
  • Understanding of Apache Beam

    Advantageous:

  • Understanding of Apache NiFi and building complex workflows.
  • Understanding of open-source streaming technologies (Apache Spark Streaming, Apache Flink)
  • Understanding of Apache Kafka
  • Understanding of Apache Spark
  • Understanding of Apache Hadoop
  • Competency in Python and developing Spark models.
  • Fundamental understanding of MongoDB
  • Fundamental understanding of Redis
  • Fundamental understanding of Databricks
  • Fundamental understanding and optimisation of Linux and cloud environments.

    Key Performance Areas:

  • Effectively conceptualise, design and create high quality, custom workflows and analytics solutions.
  • Develop, test, and implement big data solution designs.
  • Understand client requirements and establish knowledge of the data for accurate design, analysis and retrieval.
  • Pull data from various data sources and combine it in a datastore for analysis and retrieval.
  • Collaborate with end users on standardised, best-practice approaches.
  • Make suggestions and enhancements on existing solutions.
  • Provide regular and timely feedback to clients on design and build status.
  • Educate requestors on appropriate and desirable parameters to ensure they get the information they need.
  • Ensure all tasks are updated on agile boards in a timely manner.
  • Assist Project Managers and Change and Training Managers with any project- and training-related administration tasks.
  • Actively upskill in relevant technologies as prescribed by team leadership.
  • Integrate and execute machine-learning models and AI in-flow.
  • Document and design solutions.
    Job Related Experience:

  • Minimum of 4 years' work experience, with exposure to data pipeline development and solutions architecture, as well as project management / coordination experience.
    Behavioural Competencies:

  • Good communication skills
  • Good presentation skills
  • Adaptability
  • Initiative
  • Planning and organising
  • Teamwork
  • Influencing
  • Problem solving
  • Attention to detail
  • Analytical thinking
  • A desire for innovation
  • The ability to conceptualise ideas
    Qualifications:

    Bachelor's degree in Computer Science / Engineering

    Hortonworks Certified
