Big Data Engineer - Jumia

Jumia

Engineering | Porto, PT

Jumia is the leading pan-African e-commerce platform. Founded in 2012, Jumia’s mission is to improve the quality of everyday life in Africa by leveraging technology to deliver innovative, convenient, and affordable online services to consumers, while helping businesses grow as they use our platform to reach and serve consumers.

Our platforms consist of our marketplace, which connects sellers with consumers; our logistics service, which enables the shipment and delivery of packages from sellers to consumers; and our payment service, which facilitates transactions among participants active on our platform in selected markets. Through our online platforms, consumers can access a wide range of physical and digital goods and services: fashion, electronics, and beauty products, as well as hotel and flight bookings and restaurant delivery.

With over 3,000 employees in 14 countries spanning 6 African regions, Jumia is led by talented leaders offering a great mix of local and international talent, and is backed by very high-profile shareholders. Jumia is committed to creating sustainable impact for Africa and offers unique opportunities in a vibrant and booming environment, creating new jobs and new skills and empowering a new generation. We are looking for talented people with a passion for Africa to join our team and embark on our exciting journey!

Main Responsibilities:

  • Contribute to the design and construction of the company’s data lake

  • Collect, store, process, and support the analysis of very large data sets, both structured and unstructured

  • Choose optimal solutions for big data use cases, then implement, maintain, monitor, and integrate them with the data and IT architecture used across the company

  • Educate the company about big data technologies and participate actively in the journey, from the discovery phase through to the corporate data-centric transformation

  • Build solutions with key concepts in mind: security and privacy by design

Requirements:

  • Knowledge of the Linux operating system (OS, networking, process level)

  • Understanding of Big Data technologies (Hadoop, HBase, Spark, Kafka, Flume, Hive, etc.)

  • Understanding of one or more object-oriented programming languages (Java, C++, C#, Python)

  • Fluent in at least one scripting language (Shell, Python, Ruby, etc.)

  • Experience with at least one Hadoop distribution (Cloudera, MapR, or preferably Hortonworks)

  • Experience building complex data processing pipelines using continuous integration tools

  • Experience with Cassandra, MongoDB or equivalent NoSQL databases

  • Experience developing in an Agile environment

  • Bachelor’s / Master’s / PhD degree in Computer Science or related field

  • 3+ years of experience building data pipelines, or equivalent

  • Nice to have: Experience in designing big data/distributed systems

  • Nice to have: Experience creating and driving large-scale ETL pipelines

We offer:

  • A unique experience in an entrepreneurial yet structured environment

  • The opportunity to become part of a highly professional and dynamic team working around the world

  • Unparalleled personal and professional growth, as our longer-term objective is to train the next generation of leaders for our future internet ventures

Please send your CV in English. CVs in other languages will not be considered.