
I use a third-party REST API (let’s call it FooService) that has a couple of shortcomings:

  1. It doesn’t support scoped access, i.e. if you can authenticate, you can do anything.
  2. Their release life cycle is really short, and they don’t guarantee backward compatibility.

In order to mitigate these problems, I want to write a proxy API that will:

  1. Implement some scoped access rules, not just per endpoint, but also with some fairly complex logic based on the request parameters.
  2. Explicitly implement the endpoints I need and pass through the requests to FooService (i.e. not a true reverse proxy). In the event that a future release of FooService has breaking changes, I want to be able to preserve my own API signatures and just internally translate them to the new requirements for FooService before passing the request on. FooService endpoints that I don’t need should not be accessible to the consumers of my new API.

My first instinct is to write a new application in a language like Go. But I’d like to consider whether it might be viable and advantageous to implement this logic using Nginx configuration.

Is it doable? Recommended? If so, can you please point me at some sample code where Nginx:

  1. authorizes a user based on programmatic logic; and
  2. rewrites a request, reshaping the URL, parameters, and body.

P.S. It doesn’t have to be Nginx; any other web server is fine, if it can do what I need.

2 Answers


  1. I have implemented this use case with NGINX and NJS.

    The policies can be created and stored in JSON files or a key-value store such as etcd or HashiCorp Consul.

    So let’s take a simple example. The initial request comes in, and you invoke an auth_request to a location that runs the njs script via js_content.

    In the njs function you can fetch the policy (which can be cached, for example for 12 hours, so the policy backend is only called once or twice a day), read it, and apply business logic based on request parameters such as the headers and the URI.

    In my setup, the action invoked after the policy check is a rewrite (which restarts processing with the new URI), a redirect (301, 302), or a route (proxy_pass) to a backend upstream.

    This setup was tested in production at 75K requests per second without any issues. So, long story short: NJS is the way to go here.

    I can’t share the entire code publicly because it also handles some critical auth parts, but I can share some core components if necessary.
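A minimal sketch of the shape of such a setup (all module, header, and path names below are hypothetical; this is not the production code described above). An auth_request location is handled by an njs function that applies policy logic to the original request:

```javascript
// gateway.js -- a hypothetical njs policy check, wired up in nginx.conf like:
//
//   js_import gateway from conf.d/gateway.js;
//   location /api/ {
//       auth_request /_policy_check;   # deny unless the njs check returns 2xx
//       proxy_pass   http://fooservice_upstream;
//   }
//   location = /_policy_check {
//       internal;
//       js_content gateway.check;
//   }

// Pure decision logic, kept separate so it is easy to test: allow only GET
// requests under /api/widgets for callers with the (hypothetical) "reader" role.
function decide(originalUri, method, role) {
    return originalUri.startsWith('/api/widgets')
        && method === 'GET'
        && role === 'reader';
}

// njs request handler. In the auth_request subrequest, $request_uri still
// refers to the original client request line (verify $request_method behaves
// the same way on your nginx version before relying on it).
function check(r) {
    var role = r.headersIn['X-User-Role'] || '';   // hypothetical header set by a login layer
    if (decide(r.variables.request_uri, r.variables.request_method, role)) {
        r.return(204);   // any 2xx means "allow" to auth_request
    } else {
        r.return(403);   // 401/403 are passed back to the client as a denial
    }
}

// A real njs module would end with:  export default { check };
```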

  2. I’ve done something similar with nginx.
    I had an (Elastic)search UI that I wanted to give access to the Enterprise Search API.
    I wanted to use Google OAuth to authenticate users and then grant them access to the Enterprise Search API without revealing the API secrets client-side.

    The mechanism I used was nginx subrequest authentication.


    The idea is that certain endpoints/prefixes (i.e. your proxy path) are authorized by a secondary call on nginx’s end. Depending on the response from that secondary call, the original call will be proxied (or not). So you can write your own auth/access logic in a small app.

    This means that for every call to your protected endpoint, there will be a second call initiated (first) by nginx to your auth endpoint to determine if the protected request should be allowed or not. As long as your "auth checker" app is fast and scalable, this shouldn’t have a noticeable performance impact for most apps (especially if the authcheck is local and latency is low).

    As a bonus, in the sample nginx config below I also return a secret from the auth endpoint (on the internal auth check, not exposed to the client), which is then injected into the protected proxy request as a Bearer token HTTP request header.

    On the internal auth location, you can pass along the original request url as a header (proxy_set_header X-Original-URI $request_uri;) in order to apply more custom logic based on parameters, etc. The internal auth endpoint is still just returning "yes/no" (and maybe some secrets on headers to be passed along to the proxy endpoint) but the logic to generate that decision can be much more sophisticated within your auth app/endpoint.

    # the protected (requires auth) upstream/proxy
    upstream us_enterprise_search {
        server localhost:12345;
    }
    
    # the "auth" app
    # can handle both user logins as well as the internal nginx access check
    upstream us_app {
        server localhost:54321;
    }
    
    server {
        # ... usual stuff, listen port, tls certs, doc root for static file serving, gzip, cache config, etc
    
        #
        # now the interesting endpoints/prefixes...
        #
    
        # proxy to the enterprise-search API
        # the proxy/upstream we want to require authorization for
        location /api {
            # requests to /api need to be authenticated by making request to /auth/test
            # if a user has previously authenticated against the /auth endpoint, the /auth/test will grant access
            auth_request /auth/test;  
            auth_request_set $auth_status $upstream_status;
            auth_request_set $auth_token  $upstream_http_x_ent_auth;
    
            # the upstream/proxy we are protecting and requiring authentication against
            proxy_pass http://us_enterprise_search;
            proxy_pass_request_headers on;
    
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
    
            # set `authorization: Bearer search-XXXXXXXX` from upstream
            # i.e. take secret returned from the auth request and inject it on the proxy request
            proxy_set_header authorization "Bearer $auth_token";
            
        }
    
        # the internal (ie only nginx can access this, not a public endpoint) auth checking endpoint
        location = /auth/test {
            internal;  # so public can't access this
            # If the /auth/test subrequest returns a 2xx response code, the access is allowed. If it returns 401 or 403, the access is denied with the corresponding error code
    
            proxy_pass http://us_app;
            proxy_pass_request_body off;
            proxy_set_header        Content-Length "";
            # pass original request_uri as header in auth request so
            # you can apply custom auth logic based on params
            proxy_set_header        X-Original-URI $request_uri;
            
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass_request_headers on;
        }
    
        # the login/logout UI app
        # accessible to public.  accessing this path enables the user to login or logout (establish auth) 
        # which will be reflected in the response to the internal /auth/test requests made by nginx
        # note this is the same app/upstream as /auth/test but doesn't need to be.
        # this app obviously has different logic/response for /auth (UI/oauth workflow) than for /auth/test
        location /auth {
            proxy_pass http://us_app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass_request_headers on;
            
        }
    
    
    }
    

    If you really want a full wrapper API…

    (to avoid subjecting downstream users to upstream API changes)

    …then your wrapper API app IS the proxy. All requests to the remote API need to run through your app for you to transform them between your stable wrapper API definitions and the unstable remote API.

    In this case, there’s no need for complex nginx auth and secret passing. You can still use nginx as an SSL-termination/load-balancing layer in front of your api-wrapper-proxy app… but it doesn’t need to do more than that.
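For illustration, the translation layer of such a wrapper app might look like this (all endpoint and parameter names are invented, since FooService’s actual API isn’t specified here): the wrapper keeps a stable signature and maps it to whatever the upstream currently expects.

```javascript
// Hypothetical sketch: translate a stable wrapper request into the current
// upstream FooService request, isolating consumers from upstream renames.

// Suppose the wrapper exposes GET /v1/items?query=... and FooService has
// since moved the endpoint and renamed its "query" parameter to "q".
function translate(wrapperPath, params) {
    if (wrapperPath === '/v1/items') {
        return {
            path: '/v3/item-search',   // upstream moved the endpoint
            params: {
                q: params.query,                // upstream renamed "query" -> "q"
                limit: params.limit || '20',    // upstream now requires a limit
            },
        };
    }
    // Endpoints not explicitly mapped stay inaccessible to consumers.
    return null;
}
```

A small router in front of `translate` then issues the upstream call and maps the response back to the stable shape; when FooService changes again, only this mapping (and its response-side twin) needs updating.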

    Whether this is a good idea or not depends on…

    • how much does the upstream API really change? Will you be able to reliably transform/map old API signatures/inputs/outputs to the changes they are making? Major breaking changes may be impossible to translate because they fundamentally change the business logic or workflows.
    • You will probably need to monitor for upstream API changes/breaks so you can apply a patch to your wrapper quickly to avoid long outages or unexpected behaviour for your downstream API users.
    • Potentially a lot of work to build/monitor and maintain your wrapper…is it worth it? (value vs time/cost)

    Personally, I wouldn’t do it. Depending on the size of the company you are dealing with, and whether you are paying significant money for the service, I would try to get in touch with someone in technical leadership. Explain that you would appreciate them fixing their API signatures at major versions and having transition periods where multiple versions of the API run in parallel before the oldest one is end-of-lifed. If you are paying significant money for the service, you have more leverage to get their attention.

    If it were a free or low-cost service, I would probably not build a wrapper, but just monitor for when the API breaks and have a status page or Twitter bot: "Company X has changed their API without notice, causing an outage. We are working on adapting to the change in order to bring the service back online." Your users know it isn’t your fault, and maybe you can shame the company into doing a better job with their API stability.

    Or better yet, find a more stable alternative API if you can. Changing public API signatures without warning is a sign that their dev team isn’t particularly professional, so there may be lots of "smelly" tech behind the scenes that will bite you in other ways in the future.
