Flutterwave was founded on the principle that every African must be able to participate and thrive in the global economy. To achieve this objective, we have built a trusted payment infrastructure that allows consumers and businesses (African and international) to make and receive payments in a convenient, borderless manner.
Our mission is to make it easier for Africans to build global businesses that can make and accept any payment, anywhere across Africa and around the world.
Simplifying payments for endless possibilities.
The role: Are you an avid learner, constantly looking to improve and innovate? We are seeking a highly skilled and experienced Data Engineer to partner with us in driving our data strategy. You'll play a key role in our journey towards becoming a fully data-driven organization and in embedding a data-first culture across the business.
Principal Duties and Responsibilities
- Design and build high-performance, secure, and scalable data pipelines to support data science projects following software engineering best practices.
- Curate, wrangle, and prepare data, and perform feature engineering for use in machine learning models.
- Design and develop the data and analytics platform, selecting the right technologies for each problem at hand (big-data stack, SQL, NoSQL, etc.).
- Build a modular pipeline to construct features and modeling tables.
- Build trust and rapport that create a comfortable and effective workplace, drawing on experience working as part of an agile squad.
- Work with the Data Analytics and Science team to understand business needs and build impactful analytics solutions.
- Coordinate and collaborate with the Data Analytics team and other engineering teams to align on our roadmap.
Competency and Skill Requirement
- 4+ years’ experience in a data role (Data Engineer, Data Analyst, Analytics Engineer, etc.)
- 2+ years of hands-on experience building data pipelines in production and the ability to work across structured, semi-structured, and unstructured data.
- Hands-on experience implementing ETL (or ELT) best practices at scale.
- 2+ years of experience building ML pipelines for streaming/batch workflows.
- Hands-on knowledge and experience working with the modern data stack (Snowflake, BigQuery, Redshift, dbt).
- Hands-on knowledge and experience with orchestration tools (Airflow, Prefect, or Dagster).
- Professional experience using Python for data processing, SQL, Git (as source code versioning and CI/CD), and Apache Kafka.
- Natural ability to manage multiple initiatives and stakeholders.
- Authorization to work in the country without sponsorship