I use a third-party REST API (let’s call it FooService) that has a couple of shortcomings:
- It doesn’t support scoped access, i.e. if you can authenticate, you can do anything.
- Their release life cycle is really short, and they don’t guarantee backward compatibility.
In order to mitigate these problems, I want to write a proxy API that will:
- Implement some scoped access rules, not just per endpoint, but also with some fairly complex logic based on the request parameters.
- Explicitly implement the endpoints I need and pass through the requests to FooService (i.e. not a true reverse proxy). In the event that a future release of FooService has breaking changes, I want to be able to preserve my own API signatures and just internally translate them to the new requirements for FooService before passing the request on. FooService endpoints that I don’t need should not be accessible to the consumers of my new API.
My first instinct is to write a new application in a language like Go. But I’d like to consider whether it might be viable and advantageous to implement this logic using Nginx configuration.
Is it doable? Recommended? If so, can you please point me at some sample code where Nginx:
- authorizes a user based on programmatic logic; and
- rewrites a request, reshaping the URL, parameters, and body.
P.S. It doesn’t have to be Nginx; any other web server is fine, if it can do what I need.
2 Answers
I have implemented this use case with NGINX and NJS.
The policies can be created and stored in JSON files or a key-value store like etcd or HashiCorp’s Consul.
So let’s take a simple example. The initial request comes in and you invoke an `auth_request` to a location that calls the NJS script with `js_content`. In the NJS function you can fetch the policy (which can be cached, e.g. for 12 hours, so the backend is only called once or twice a day), read it, and apply business logic based on parameters like the headers and the URI.
The action invoked after the policy check is, in my case, either a rewrite (which restarts processing with the new URI), a redirect (301, 302), or a route (`proxy_pass`) to a backend upstream.
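A minimal sketch of that flow, assuming hypothetical names (`/_auth`, `policy.js`, the upstream host) and a deliberately simplified policy check:

```nginx
# nginx.conf (sketch; names are hypothetical)
load_module modules/ngx_http_js_module.so;

http {
    js_import policy from conf.d/policy.js;

    server {
        listen 8080;

        location / {
            auth_request /_auth;              # subrequest to the NJS policy check
            proxy_pass https://fooservice.example.com;
        }

        location = /_auth {
            internal;
            js_content policy.check;          # NJS function decides allow/deny
        }
    }
}
```

```js
// conf.d/policy.js (sketch) — a real setup would fetch and cache the
// policy from a JSON file or key-value store instead of hardcoding it.
function check(r) {
    // $request_uri still holds the original request line in the subrequest.
    var uri = r.variables.request_uri || '';
    var user = r.headersIn['X-User'] || '';

    // Hypothetical rule: only admins may reach /reports.
    if (uri.startsWith('/reports') && user !== 'admin') {
        r.return(403);   // any non-2xx status makes auth_request deny
        return;
    }
    r.return(204);       // 2xx allows the original request to proceed
}

export default { check };
```

`auth_request` treats any 2xx response from the subrequest as "allow" and 401/403 as "deny", so the policy function only has to pick a status code.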
This setup was tested in production at 75K requests per second without any issues. So, long story short: NJS is the way to go here.
I can’t share the entire code publicly because it also handles some critical auth parts, but I can share some core components if needed.
I’ve done something similar with nginx.
I had an (Elastic) search UI that I wanted to have access to the Enterprise Search API.
I wanted to use Google OAuth to authenticate users and then grant them access to the Enterprise Search API without revealing the API secrets client-side.
The mechanism I used was nginx subrequest authentication.
The idea is that certain endpoints/prefixes (i.e. your proxy path) are authorized by a secondary call on nginx’s end. Depending on the response from that secondary call, the original call will be proxied (or not). So you can write your own auth/access logic with a small app.
This means that for every call to your protected endpoint, there will be a second call initiated (first) by nginx to your auth endpoint to determine if the protected request should be allowed or not. As long as your "auth checker" app is fast and scalable, this shouldn’t have a noticeable performance impact for most apps (especially if the authcheck is local and latency is low).
As a bonus, in the sample nginx config below I am also returning a secret from the auth endpoint (on the internal auth check, not exposed to the client), which is then injected into the protected proxy request as a Bearer token HTTP request header.
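A sketch of that config, where the endpoint names, the auth app’s address, and the `X-Api-Secret` header name are all assumptions:

```nginx
location /protected/ {
    auth_request /_authcheck;                     # ask the auth app before proxying
    # Copy a header from the auth subrequest's response into a variable...
    auth_request_set $api_secret $upstream_http_x_api_secret;
    # ...and inject it toward the upstream as a Bearer token.
    proxy_set_header Authorization "Bearer $api_secret";
    proxy_pass https://enterprise-search.example.com/;
}

location = /_authcheck {
    internal;                                     # never reachable by clients
    proxy_pass http://127.0.0.1:9000/check;       # small local auth-checker app
    proxy_pass_request_body off;                  # the decision doesn't need the body
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri; # let the app see the real URL
}
```

The auth app answers 2xx (allow, optionally with an `X-Api-Secret` response header) or 401/403 (deny).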
On the internal auth location, you can pass along the original request URL as a header (`proxy_set_header X-Original-URI $request_uri;`) in order to apply more custom logic based on parameters, etc. The internal auth endpoint still just returns yes/no (and maybe some secrets as headers to be passed along to the proxy endpoint), but the logic behind that decision can be as sophisticated as you like within your auth app/endpoint.

If you really want a full wrapper API…
(to avoid subjecting downstream users to upstream API changes)
…then your wrapper API app IS the proxy. All requests to the remote API need to run through your app for you to transform them between your stable wrapper API definitions and the unstable remote API.
In this case, there’s no need for complex nginx auth and secret passing. You can still use nginx as an SSL-termination/load-balancing layer in front of your API-wrapper-proxy app, but it doesn’t need to do more than that.
Whether this is a good idea or not depends on…
Personally, I wouldn’t do it. Depending on the size of the company you are dealing with, and whether you are paying significant money for the service, I would try to get in touch with someone in technical leadership. Explain that you would appreciate it if they fixed their API signatures at major versions and had transition periods where multiple versions of the API run in parallel before the oldest one is end-of-lifed. If you are paying significant money for the service, you have more leverage to get their attention.
If it were a free or low-cost service, I would probably not build a wrapper but just monitor for when the API breaks and have a status page or Twitter bot: "Company X has changed their API without notice, causing an outage. We are working on adapting to the change in order to bring the service back online." Your users will know it isn’t your fault, and maybe you can shame the company into doing a better job with their API stability.
Or better yet, find a more stable alternative API if you can. Changing public API signatures without warning is a sign their dev team isn’t particularly professional, so there may be lots of "smelly" tech behind the scenes that will bite you in other ways in the future.