Senior Full Stack Data Engineer
OfferZen | Posted 11 days ago
Mission
At OfferZen, our ability to use data is core to improving processes, identifying new opportunities, and guiding decision-making across the organisation as we continue to scale our operations.
As a Senior Full Stack Data Engineer and pragmatic data generalist, your mission is to take end-to-end ownership of OfferZen’s data platform. You will apply DataOps principles to build, maintain, and strategically simplify our entire data ecosystem. This includes managing core data infrastructure, supporting production machine learning models, and ensuring reliable data is accessible for analytics.
Working autonomously, you will be the key technical expert responsible for the entire data lifecycle, ensuring the platform is robust, scalable, and creates tangible business value.
Outcomes
- Custom and off-the-shelf data pipelines are designed, implemented, monitored, and optimized to ensure reliability and performance.
- The existing data platform is progressively simplified and documented to improve maintainability and reduce complexity.
- Existing machine learning models (e.g., Candidate Recommendations) are supported and maintained in production, and new hosted models are integrated into our systems to ensure continued business value.
- Robust testing, monitoring, and data governance frameworks are established and maintained to ensure data quality and trust.
- The scalability, performance, and cost-effectiveness of our AWS data infrastructure (including Redshift, Athena, etc.) are continuously improved.
- Analysts and other stakeholders are empowered with access to reliable, well-structured data through our warehouse and tooling.
Competencies
Data Engineering (Core)
- Expert knowledge of SQL, Python and Spark.
- Strong experience designing, building, and maintaining data pipelines and warehouses, with a focus on healthy database performance.
- Familiarity with data lake/lakehouse architectures utilizing big data file formats like Apache Parquet and high-performance table formats like Apache Iceberg.
- Deep, hands-on experience with the AWS data ecosystem, specifically Redshift, Glue, Step Functions, Lambda, and Athena. Broader AWS experience is a bonus.
- Proven, advanced experience with dbt.
- Proficient with Infrastructure as Code, specifically Terraform.
- Experience managing ETL tools like Fivetran and Hevo.
- Familiarity with BI tooling (e.g., Looker) is a plus.
Machine Learning Engineering (Required)
- Strong proficiency in Python and machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Experience deploying and monitoring machine learning models in a production environment.
- Solid knowledge of data manipulation, preprocessing, and feature engineering for ML tasks.
- Experience with recommender systems is highly desirable.
- Familiarity with vector databases (e.g., Pinecone) is a plus.
Background and Personality
- 6+ years of experience in data engineering or a similar role.
- A self-starter who is comfortable working as an individual contributor and managing their own roadmap and priorities.
- A pragmatic mindset with a strong sense of ownership. You can take a complex problem and distill it to its essential parts to deliver value.
- Experience working collaboratively with Data Scientists and Analysts to solve problems and support their data needs.
- A team player who doesn’t give up when faced with complex legacy systems or challenging problems.
- Strong communication skills, capable of collaborating effectively to align on strategy and provide technical guidance.
- Cares about efficiency, not just in system performance but also in process and design.
This role can be based locally or remotely; we welcome all applicants based in South Africa.