
How we confidently maintain visibility when switching traffic using Nginx

16 October 2023, by Rob Burger


Like many companies, OfferZen’s web architecture comprises many services. We use Nginx’s reverse-proxying to ensure that traffic from our users ends up with the right service. But how do we, as a product team, know which service Nginx is sending the request to, especially when swapping out services that look identical?

In this article, we’re going to dive into how we switched traffic over from one service to another using Nginx locations and how we maintained visibility as this transition was happening.


A brief intro to reverse-proxying

As an introduction to Nginx reverse-proxying, we’re going to use an analogy: Imagine you enter a library where Nginx is the librarian. The various books on the shelves represent the upstreams or the different servers hosting various services. You don’t know exactly where to look for the specific book you’re after (the upstream), so you kindly ask the librarian (Nginx) to fetch it for you.

When you request a book, the librarian consults their index, heads off to the shelves and returns with exactly what you need.

Nginx works in a similar manner: it acts as the middleman between the user and the upstream services, taking requests from users and returning the data they need. This allows companies to provide a single website address that users can navigate, but the data you request and receive is provided by many upstream services.
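To make the analogy concrete, here's a minimal sketch of a reverse-proxy configuration. The server names, addresses and ports are hypothetical, not OfferZen's actual setup:

```nginx
# Hypothetical minimal reverse proxy: Nginx is the "librarian",
# and the upstream block lists the "shelves" it can fetch from.
upstream books_service {
    server 10.0.0.10:8080;  # a backend service instance
}

server {
    listen 80;
    server_name example.com;

    # Any request to example.com is fetched from the upstream and
    # returned to the user, who never sees the backend's address.
    location / {
        proxy_pass http://books_service;
        proxy_set_header Host $host;
    }
}
```

The user only ever talks to `example.com`; which backend actually answered is invisible to them, which is exactly the visibility problem this article is about.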

Why we need to switch traffic from one service to another

Over the past year at OfferZen, we’ve been decoupling our frontend and our backend. This enables our engineers to develop and release features faster. By making, testing and releasing smaller changes in standalone code repositories, we’re able to iterate and gather feedback faster, shortening the software development lifecycle.

One of our recent missions was to separate the frontend admin and operations functions from our main application. These are the tools that our Talent Advisors, Account Managers and finance team use to help connect candidates with companies on the OfferZen platform. Previously, these tools were incorporated into our main application but have now been spun out into their own single-page application (SPA).

With the decoupling initiative, we needed to redirect traffic to the new services without impacting the end user, while also being able to visualise the switchover. As we use Nginx reverse-proxying to direct traffic to one of the 50+ locations in our setup, switching services and redirecting traffic was easy. The problem was that Nginx doesn’t offer an easy way to visualise that traffic.

Visualising service traffic using metadata

Out of the box, Nginx doesn’t provide a way for us to see which specific location or upstream is being used. This would be useful for us to troubleshoot misdirected requests and to build up a picture of which services are being used most frequently.

Back to our library analogy: each book has an ISBN, which uniquely identifies the exact book. Each book can also be assigned a category or genre. By keeping a log of requests at the front desk, the librarian can begin to see which books and categories are being requested the most.

We can do something similar in Nginx by adding a unique ID and the name of the provider to our upstream services. By adding this metadata to each location using variables and logging these pieces of metadata when someone navigates the site, we can build up a picture of the traffic directed to each service.

How we used Nginx to visualise our traffic flow

On to the details! Nginx uses custom configuration files to generate an index, which it uses to direct requests. In these files, we define location directives, which you can think of as titles of books. A single upstream can be used by many locations. In the diagram below, you’ll see that the /sign-up location is being served by Heroku, but the /profile and /candidates locations are both being served by the same SPA upstream.
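A hedged sketch of that diagram in config form (the paths mirror the article, but the upstream addresses are hypothetical) might look like this, with two locations sharing a single upstream:

```nginx
# Hypothetical sketch of the diagram below: one location served by
# Heroku, two locations served by the same SPA upstream.
location /sign-up {
    proxy_pass https://example-app.herokuapp.com;
}

location /profile {
    proxy_pass https://spa.example-cdn.net;
}

location /candidates {
    proxy_pass https://spa.example-cdn.net;  # same upstream as /profile
}
```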

[Diagram: Nginx location directives mapped to their upstream services]

If we want to return different upstream data for the same location, we need to perform a little logic switching in our index. Think of this as replacing a book with its second revision – the title remains the same, but the contents may differ slightly.

The config below is how Nginx determines where to send a request based on the URI – the bit after the website domain name. By using variables in the config, we’re able to identify, log, and later visualise the traffic flow:

# Hypothetical location directive ;)
location ~ ^/(admin/candidates|admin/companies|finance) {
    # Variables (1)
    set  $location_directive_id  "e4a21d7fe8b1";

    set  $upstream            "${OZ_UPSTREAM_HEROKU}";
    set  $final_upstream      $oz_scheme://$upstream;
    set  $upstream_served_by  "heroku";

    # Conditional access to the new SPA version (2)
    if ($oz_enable_spa_admin = 'true') {
        set  $location_directive_id  "4aabcfa9fb5d";
        set  $upstream            "${OZ_UPSTREAM_SPA_ADMIN}";
        set  $upstream_path       "${OZ_UPSTREAM_PATH_SPA_ADMIN}";
        set  $upstream_proxy      "${OZ_UPSTREAM_PROXY_SPA_ADMIN}";
        set  $upstream_served_by  "cloudfront";
        set  $final_upstream      $upstream_proxy/$upstream_path/$is_args$args;
    }

    # Headers (3)
    proxy_set_header  Host  $upstream;

    include  /etc/nginx/conf.d/helpers/headers_proxy_all.conf;

    # Proxy Pass (4)
    proxy_pass  $final_upstream;
}

Looking at the above configuration from the top down:

  • We set some variables (1) – take note of the $location_directive_id and $upstream_served_by variables; more on these next.
  • Then we have an if block (2), which is our logic switch, that provides conditional access to the new system for our internal QA testers – think of this as revision 2 of our book.
  • Finally, we set some headers (3) and then pass the request (i.e. reverse-proxy) up to the service that’s meant to handle it (4).

By using the $location_directive_id and $upstream_served_by variables and logging these, we’re able to ingest them into our Datadog logging pipeline. We won’t get into the details of how that pipeline works, but using the filter and aggregation features provided by Datadog, we are able to visualise traffic flow when we make the final change for all our staff. In the graph below, you can see that around June 14th, we made the switch from our main application running on Heroku to our new single-page version served from CloudFront.
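The article doesn’t show the logging configuration itself, but one plausible sketch (the format name, field names and log path here are assumptions, not OfferZen’s actual config) is a JSON `log_format` that includes the custom metadata variables, which a log shipper can then forward to a pipeline like Datadog’s:

```nginx
# Hypothetical JSON access log that emits the custom metadata
# variables alongside standard request fields.
log_format upstream_json escape=json
    '{'
        '"time":"$time_iso8601",'
        '"uri":"$request_uri",'
        '"status":"$status",'
        '"location_directive_id":"$location_directive_id",'
        '"upstream_served_by":"$upstream_served_by"'
    '}';

access_log /var/log/nginx/access.json upstream_json;
```

Because `set` variables are evaluated per request, the values logged reflect whichever branch of the location’s `if` block actually handled that request.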

[Graph: traffic switching from the Heroku upstream to CloudFront around June 14th]

Because of the nature of decoupling, both systems looked identical to the end-user, so we relied heavily on the real-time logs being ingested, processed and graphed in order to confirm that users were being directed to the correct version.

Zooming out and using the $location_directive_id, we’re able to see exactly which of our 50+ locations are being used. Below is a graph of our top 10 used services as reported by Nginx.

[Graph: top 10 most-used locations, stacked by $location_directive_id]

By stacking the location directive IDs in such a way, we’re able to build up a visual picture of which locations are being used the most at any given time, allowing our developers to navigate to the exact location block in the Nginx config using said ID as a lookup.

Conclusion

By using custom Nginx variables, a good logging pipeline, and visualisation tools, we’ve been able to confidently switch over services transparently to the end users that work with our platform daily. We’ve also given our developers the tools and ability to quickly troubleshoot issues and navigate the configuration.


Rob Burger is a born and bred Capetonian. He’s a Tech Lead in the Platform Squad at OfferZen and has also been a teacher, paramedic, support engineer and system/platform architect. He embraces the concept “automate all the things!” to make his and developers’ lives easier.

When he’s not relishing a good developer experience, you can usually find him in the mountains hiking, running or rescuing. If he’s not on a mountain, he’s at home tinkering and working on open-source projects!
