Big Data Architect / Engineer ( Hadoop / MapReduce / HDFS )

6-Month Contract

£450 – £550 per day

Central London

The Role:

A Big Data Architect / Engineer ( Hadoop / MapReduce / HDFS ) is urgently required to join a leading Data Analytics company in the heart of London on an initial 6-month contract.

The successful Big Data Architect / Engineer ( Hadoop / MapReduce / HDFS ) will have a track record of designing, developing and testing big data solutions, and will possess complete technical knowledge of all phases involved in building large-scale applications and data processing systems.

The ideal Big Data Architect / Engineer ( Hadoop / MapReduce / HDFS ) will come from an enterprise or start-up open-source background and, as a senior member of the team, will lead and mentor junior members of the team by example.

Key Responsibilities:

  • Strong experience in reviewing backend architecture and a track record of implementing big data tools and frameworks.
  • Design, build, test and maintain data management systems, making sure they meet business requirements and user needs.
  • You will work closely with architects, DevOps and developers to deliver services that are automated, reliable and secure, whilst recommending ways to improve data efficiency and reliability.
  • Supporting key technologies as part of the migration to big data processing, such as Hadoop, Hive, Spark and Presto.
  • Supporting the implementation of data lakes and more traditional database technologies such as PostgreSQL.
  • Develop and implement scripts for database maintenance, monitoring, and performance tuning to be applied across the business.
  • Provide high-level analysis and illustrate best practice in the form of documentation outlining the approach and methodologies followed.

Essential Skills and Experience:

  • Good knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience with integration of data from multiple data sources.
  • Strong knowledge of Big Data querying tools, such as Hive.
  • Experience and proficiency with Hadoop v2, MapReduce and HDFS.
  • Ability to solve any ongoing issues with operating the cluster and its data.
  • Management of the Hadoop cluster, with all included services.

Desirable Skills and Experience:

  • Google Cloud Platform experience.
  • Kubernetes experience.
  • Exposure to and/or interest in Machine Learning and Data Science.

The culture is one that promotes creativity and the ability to think outside the box. To apply for this role, please send an up-to-date CV to tom.goulding@venturi-group.com as a matter of urgency.

Venturi is a staffing business dedicated to you, differentiating ourselves in the marketplace through quality of service and candidate delivery. Our highly skilled and experienced staff operate within dedicated markets to give you the best service possible. Venturi's markets include Business Intelligence, Development IT and Legal IT. Venturi operates as an employment agency and an employment business. No terminology in this advert is intended to discriminate on the grounds of age, and we confirm that we are happy to accept applications from persons of any age for this role.
