About RelyComply

Address

146 Campground Road, Newlands, Cape Town

Company Size

15-50

Summary

What we do

Spend a little time in big corporations, or even in small fintechs and startups, and it soon becomes apparent that implementing a risk management and compliance programme is no small feat. Existing solutions are often outdated, with little automation and little use of modern AI tools.

Companies without seamless digital onboarding suffer troublesome declines in new business, while companies almost universally face rising fraud and money laundering that cause financial losses, reputational damage and increased compliance workloads. To tackle this, many companies are turning to end-to-end KYC and AML solutions that are API-accessible, for easy integration into digital onboarding processes, and powered by artificial intelligence to minimise false-positive rates whilst maximising the detection of fraud and money laundering.

RelyComply was born in response to this pressing challenge. We recognised that the financial services industry needs compliance solutions that are automated, AI-powered and easily embeddable into any business system through API-driven flexibility.

We are developing solutions that scale to meet the rising needs of rapidly growing businesses and those embarking on digital transformation journeys, and that integrate easily, without any fear of having to rebuild current business systems.

We constantly research cutting-edge technology and how it can be incorporated into our products, while keeping our solutions affordable, so that even small fintechs have access to modern technology for smart, fast compliance.

This is our vision, and this is exactly what we have built, and continue to invest in: a reliable, cost-effective, rapid and hassle-free next-generation AI-powered solution which takes care of all your KYC and AML/CTF compliance needs.

Our Hiring Process

● Initial chat (30 minutes) with one of our senior engineers about the company in general, the basics of our tech needs, and what you are looking for in your next job. This chat gives you a chance to find out a bit more about the company and the role and see if you are interested.

● Technical take-home task (should take 2 to 3 hours at most). While we appreciate this can be a bit of a pain, we need to know you can write code. This is not a long task; the point is to get the basics right, so there is no need to over-engineer or waste your time.

● Short written take-home task (15 to 30 mins): We have found that written communication is one of the best predictors of a broader set of skills, and it is doubly critical in a remote organisation. This will be a very short piece written to an imaginary client to elicit requirements. You can find the task here. Both these tasks must be completed within a week.

● Extensive technical interview (1.5 to 2 hours) with a few members of the team taking specific sections. Using your task as a jumping-off point, we use this session to get a deeper understanding of where your skills and experience lie. The topics are broad-ranging, from technical systems knowledge to programming approaches to architecture and design strategies. The point is to understand where your skill set could fit in and complement our existing team. This is not meant to be a pass-fail exercise but rather a way to see if there is a complementary fit.

● Final stage interview with our CTO. This is not a technical interview but more a chance to finalise any questions you may have and meet the company's technical leader.

● Contracting and formal offer: finally, we sort out the contracting and make a formal offer.

Technical Stack and Development Environment

RelyComply has many components and uses both standard, battle-tested technologies and more recent cutting-edge ones. The product is new and benefits from being built in a modern fashion.

Our overriding technical consideration is to build the product in such a way that we can remain agile to customer needs without compromising the architectural integrity of the platform. We therefore work by the principle of least power where possible, using the least complex technologies we can in order to reduce the cognitive overhead of making changes, while concentrating heavily on the extensibility of our architecture so that forthcoming (and as yet unknown) changes can be implemented without major re-architecture. This leads to an obsession with building reusable frameworks within the code base, using modern programming techniques to keep the code base small and to ensure that the development effort required for a change is proportional to the change in the feature requirements.

The core web application is a Python/Django app. We have kept the front-end as simple as possible, server-side rendering almost everything and using JavaScript sparingly. We have avoided building an SPA because of the inherent increase in complexity it creates.
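
As a hedged illustration of this server-side approach (the view, app and template names here are hypothetical, not taken from our codebase), a typical page is just a Django view rendering a template with plain context:

    from django.shortcuts import render

    def case_list(request):
        # Everything is rendered on the server; the template receives plain
        # context and no client-side framework is involved.
        cases = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]
        return render(request, "cases/case_list.html", {"cases": cases})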

Where a highly interactive frontend is required (for example when investigating large transaction monitoring results) we segregate that into a separate system (from an architecture perspective) to keep the functionality confined and our general architecture simpler. This also allows us to choose the best front-end technology for the job.

Our reporting dashboards utilise the Dash framework, which allows us to build highly interactive and attractive dashboards without having to develop extensive frontend code and manage the state between the frontend and the server.
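
As a minimal sketch of what this looks like (the data and metric below are illustrative, not a real report), a Dash app declares a layout and a callback, and Dash handles the state between browser and server:

    from dash import Dash, dcc, html
    from dash.dependencies import Input, Output
    import pandas as pd
    import plotly.express as px

    app = Dash(__name__)

    # Illustrative data standing in for real reporting output.
    df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "alerts": [42, 35, 51]})

    app.layout = html.Div([
        dcc.Dropdown(id="metric",
                     options=[{"label": "Alerts", "value": "alerts"}],
                     value="alerts"),
        dcc.Graph(id="chart"),
    ])

    @app.callback(Output("chart", "figure"), Input("metric", "value"))
    def update_chart(metric):
        # We only return a new figure; Dash synchronises it to the browser.
        return px.bar(df, x="month", y=metric)

    if __name__ == "__main__":
        app.run(debug=True)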

We expose a fully functional GraphQL API that can query, configure and drive any part of the system. The backend itself often uses this API where that is the simplest option, which keeps behaviour consistent. The GraphQL API is built using graphene but is highly tailored to our usage, allowing functionality to be exposed rapidly and uniformly. Our API documentation is automatically generated from the GraphQL schema.
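
As a rough sketch of how graphene exposes a query (the Customer type and fields below are hypothetical, not our actual schema):

    import graphene

    class Customer(graphene.ObjectType):
        id = graphene.ID()
        risk_score = graphene.Float()

    class Query(graphene.ObjectType):
        customer = graphene.Field(Customer, id=graphene.ID(required=True))

        def resolve_customer(root, info, id):
            # In the real system this would query the Django ORM.
            return Customer(id=id, risk_score=0.12)

    schema = graphene.Schema(query=Query)

    result = schema.execute('{ customer(id: "42") { riskScore } }')
    print(result.data)  # {'customer': {'riskScore': 0.12}}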

Where custom integration with a client’s system is required, we create client-specific microservices that intermediate the requests and utilise the GraphQL API to enact them on our side. This maintains the architectural integrity of the system while reducing the technical overhead for clients.
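
A hedged sketch of how such a microservice might forward a request (the endpoint, mutation and field names are placeholders, not our real API):

    import requests

    GRAPHQL_ENDPOINT = "https://api.example.com/graphql"  # placeholder URL

    def create_customer(name: str, token: str) -> dict:
        """Translate a client-specific request into a GraphQL mutation."""
        query = """
        mutation CreateCustomer($name: String!) {
          createCustomer(name: $name) { id }
        }
        """
        response = requests.post(
            GRAPHQL_ENDPOINT,
            json={"query": query, "variables": {"name": name}},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["data"]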

On top of the GraphQL API we have built a CLI. This makes integration simpler, while also giving power users the ability to configure the system. Pushing configuration to the CLI has allowed us to keep the user interface much smaller and lets us release features much more quickly. Configuration is generally performed only at the start of a project, and then only intermittently thereafter by a very small number of users, who have to be reasonably technical in any case given the nature of the domain (and who probably prefer a CLI plus configuration files), so we are happy with this trade-off. Doing this has dramatically increased our development agility. The CLI is automatically generated from the API, ensuring it stays up to date with the system and further reducing development overhead.
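
A heavily simplified sketch of the generation idea, building click commands from a pretend introspection result (the real generator works from the full GraphQL schema and actually dispatches the calls):

    import click

    # Pretend introspection output: operation name -> argument names.
    OPERATIONS = {"create-customer": ["name"], "get-customer": ["id"]}

    @click.group()
    def cli():
        """Auto-generated commands mirroring the GraphQL API."""

    def make_command(op, arg_names):
        @click.command(name=op)
        def command(**kwargs):
            # The real tool would build and send a GraphQL request here.
            click.echo(f"would call {op} with {kwargs}")
        for arg in arg_names:
            command = click.option(f"--{arg}", required=True)(command)
        return command

    for op_name, args in OPERATIONS.items():
        cli.add_command(make_command(op_name, args))

    if __name__ == "__main__":
        cli()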

We use machine learning and statistical models extensively in the product, driving for maximum task automation while keeping the models and their results interpretable. We make heavy use of the PyData stack but also use a variety of more specific technologies. Explainability is crucial in this domain from various regulatory perspectives, which leads to a careful balance between choosing powerful technologies and ensuring transparency. The team has extensive data science experience to implement these.
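
As a small illustration of the interpretability side of that balance (the features and data below are made up), a linear model exposes exactly how each feature moves a risk score:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["amount_zscore", "country_risk", "velocity_24h"]
    X = np.array([[0.1, 0.2, 0.0],
                  [2.5, 0.9, 1.0],
                  [0.3, 0.1, 0.0],
                  [3.1, 0.8, 1.0]])
    y = np.array([0, 1, 0, 1])  # 1 = suspicious

    model = LogisticRegression().fit(X, y)

    # Each coefficient shows directly how a feature moves the risk score,
    # which is the kind of transparency regulators expect.
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")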

Our high-volume transaction monitoring system is built on top of Apache Flink. This allows us to use a universal system for both batch analysis and real-time low-latency streaming. We implement both rules-based systems and statistical/ML systems on top of this. Our CLI tools allow power users to rapidly configure the data pipelines and models, and the system manages the full development, test, deploy, and monitor cycle. A small amount of Scala is used in this system.
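
A minimal PyFlink sketch of the rules-based side (the transactions and threshold are illustrative, not a production rule):

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()

    # Illustrative transactions: (account_id, amount).
    transactions = env.from_collection([("acc-1", 120.0), ("acc-2", 15000.0)])

    # A toy rule: flag any transaction above a fixed threshold.
    flagged = transactions.filter(lambda tx: tx[1] > 10000.0)
    flagged.print()

    env.execute("rules_based_monitoring_sketch")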

Our platform runs on AWS. All our infrastructure is managed with Terraform, which makes infrastructure management reliable and codified. It also allows us to deploy the platform within the AWS tenants of clients, which is attractive to large financial institutions. Our entire stack is containerised with Docker. All our ops tasks are automated, either with small command-line scripts or through the CI/CD pipelines powered by GitHub Actions. We aggressively automate all tasks, both to reduce our workload and as a way to institutionalise knowledge.

We have automated unit tests, as well as an integration test framework that performs comprehensive end-to-end testing, including the CLI, the API, the tasks, and the workflows. This gives us great confidence to move quickly in a complex system.
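
As a hedged sketch of the flavour of these checks (the schema below is a stand-in, not our real API), a test can drive the GraphQL layer directly:

    import graphene

    class Query(graphene.ObjectType):
        status = graphene.String()

        def resolve_status(root, info):
            return "ok"

    schema = graphene.Schema(query=Query)

    def test_status_query():
        result = schema.execute("{ status }")
        assert result.errors is None
        assert result.data["status"] == "ok"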

Our development process is very agile, but not highly structured. We are a small, highly qualified team and expect all members to act maturely and think deeply. All members of the team have a high degree of autonomy. We proactively share knowledge among the team, and team members are expected to be reasonably cross-functional. We don’t expect everyone to be an expert in everything (which is impossible), but we do expect people to learn quickly and build a good knowledge of all facets of the system. The culture is flexible, respectful and high-performing, and the problem space is interesting both in the domain and technically. We treat each other as responsible, autonomous adults and, in turn, expect everyone to respect the goals and needs of the company. Company leadership tries to give all team members a good idea of the company’s goals, constraints and commercial realities, so that they can make holistic decisions when making judgement calls.

Perks
Remote Working
Home office allowance
Co-working space allowance
25 Days Annual Leave
Tech Stack

Application and Data

Amazon S3
Amazon EC2
Amazon VPC
Sass
PostgreSQL
JavaScript
Python
Redis
Django
nginx
AWS Lambda
GraphQL
Amazon Web Services

DevOps

GitHub
Git
Docker
npm
Terraform
Sentry

Utilities

Amazon Route 53
Amazon SES

Business Tools

Google Apps
Slack
JIRA
Confluence