Data Engineer
- Posted On: 2025-08-18 21:14:22
- Openings: 10
- Applicants: 0
Job Description
We are building Engineering centres of excellence across multiple regions and are looking for smart, talented, driven engineers. This is a unique opportunity to be part of the core engineering team of a fast-growing fintech poised for more rapid growth in the coming years.
To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch this informative video.
At FairMoney, we make many data-driven decisions in real time, such as risk scoring and fraud detection.
Our data is produced mainly by our backend services and is used by the data science, BI, and management teams. We are building more and more real-time data-driven decision-making processes, as well as a self-serve data analytics layer.
As a senior data engineer at FairMoney, you will help build our Data Platform:
- Ensure data quality and availability for all data consumers, mainly the data science and BI teams
- Ingest raw data into our BigQuery data warehouse
- Make sure data is processed and stored efficiently: work with backend teams to offload data from backend storage
- Work with data scientists to build real-time machine learning feature computation pipelines
- Spread best practices in terms of data architecture across all tech teams
- Build effective relationships with the business to help drive adoption of data-driven decision-making
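To give a flavor of the real-time feature computation mentioned above, here is a minimal plain-Python sketch of a rolling-window count feature, the kind of per-key aggregate a Flink streaming job might maintain for risk scoring. The class and field names are illustrative assumptions, not FairMoney's actual code.

```python
from collections import deque


class RollingCountFeature:
    """Count events per key within the last `window_seconds`.

    A simplified, single-process stand-in for a stateful streaming
    operator; a production version would run inside Flink with
    checkpointed state.
    """

    def __init__(self, window_seconds: int):
        self.window = window_seconds
        self.events: dict[str, deque] = {}

    def update(self, key: str, ts: float) -> int:
        """Record an event for `key` at time `ts`; return the windowed count."""
        q = self.events.setdefault(key, deque())
        q.append(ts)
        # Evict events that fell out of the window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q)
```

For example, with a 60-second window, two events 30 seconds apart both count, while an event 100 seconds after the first sees the earlier ones evicted.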
You will be part of the Datatech team, sitting right between data producers and data consumers. You will help build the central nervous system of our real-time data processing layer by building an ecosystem around data contracts between producers and consumers.
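A data contract between producers and consumers can be as simple as an agreed schema that events are validated against at ingestion time. The sketch below illustrates the idea with Python's standard library; the field names are hypothetical, not an actual FairMoney schema.

```python
# Hypothetical contract for a loan-application event; the required
# fields and types here are illustrative assumptions.
REQUIRED_FIELDS = {"user_id": str, "amount": float, "timestamp": str}


def validate_event(event: dict) -> list[str]:
    """Check an event against the contract; return a list of violations.

    An empty list means the event satisfies the contract.
    """
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}: {type(event[field]).__name__}")
    return errors
```

In practice a schema registry (e.g. with Avro or Protobuf schemas) would enforce this at the message-bus boundary rather than in ad-hoc code.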
Our current stack is made of:
- Batch processing jobs (Apache Spark in Python or Scala)
- Streaming jobs (Apache Flink deployed on Kinesis Data Analytics)
- REST APIs (Python FastAPI)
- AWS Kinesis / Apache Kafka as message bus
- AWS Lambda as lightweight processors
- Apache Iceberg as an analytics table format
- BigQuery as a data warehouse
Skills
You will work with the tools below on a daily basis, so you need working experience with:
- Languages: Python and Scala
- Big data processing frameworks:
- Streaming: Apache Flink
- Batch: at least one of Apache Spark, Apache Flink, or Apache Beam
- Streaming services: Apache Kafka / AWS Kinesis
- Managed cloud services: one of AWS EMR / AWS Kinesis Data Analytics
- Docker
- Building REST APIs
Ideally, you should have at least 3 years of experience with:
- Deployment/ management of stateful streaming jobs
- The Kafka ecosystem: mainly Kafka Connect
- Infrastructure as code frameworks (Terraform)