
I have two Nginx servers acting as reverse proxies for Node.js servers running on ports 5000 and 5001.
The one running on port 5000 is for normal form uploads.
The one running on port 5001 is for uploading images.
On the client side, after the user fills out the form (title, description, and image), the image is uploaded to the image server first, and then the image URL, title, and description are sent to the normal web server.
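In other words, the client does roughly the following. This is only a sketch: the image endpoint is the one from the error below, while the normal-server path, field names, and response shape are assumptions for illustration, and fetch is used here for brevity even though the error message indicates the real client uses XMLHttpRequest.

// Sketch of the client-side flow; only the image endpoint comes from the
// question, everything else is assumed for illustration.
async function submitPost(title, description, imageFile) {
    // Step 1: upload the image to the image server (proxied to port 5001)
    const imageForm = new FormData();
    imageForm.append('image', imageFile);
    const imageRes = await fetch('https://myserver.com/imagev2api/profile-upload-single', {
        method: 'POST',
        body: imageForm,
    });
    const { imageURL } = await imageRes.json();

    // Step 2: send title, description, and imageURL to the normal server
    // (proxied to port 5000); this path is a placeholder
    await fetch('https://myserver.com/api/posts', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ title, description, imageURL }),
    });
}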

The Problem

When the client fills out the form and clicks upload, either the image upload succeeds and the upload to the normal server fails, or the normal server upload succeeds and the upload to the image server fails.
The error is the following (it can occur for either server):

Access to XMLHttpRequest at 'https://myserver.com/imagev2api/profile-upload-single' from origin 'https://blogs.vercel.app' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

Note: I’ve used app.use(cors()) on both servers (image and normal server)
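For context, here is a minimal sketch of what that Express setup looks like on each server; the route path is taken from the error message above, while the response body and handler details are assumptions for illustration.

// Minimal sketch of the Express CORS setup described above; the route path
// comes from the error message, the response shape is assumed.
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // by default sends Access-Control-Allow-Origin: * and answers preflight OPTIONS requests

app.post('/imagev2api/profile-upload-single', (req, res) => {
    // ... store the uploaded image ...
    res.json({ imageURL: 'https://...' }); // assumed response shape
});

app.listen(5001); // 5000 on the normal server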

Here are both Nginx server configurations.

Image Server

upstream imageserver.com {
        server 127.0.0.1:5001;
        keepalive 600;
}
server {
        server_name imageserver.com;

        error_log /var/www/log/imageserver.com.error;
        access_log /var/www/log/imageserver.com.access;

        location / {
                proxy_pass http://imageserver.com;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
            # fastcgi_split_path_info ^(.+.php)(/.+)$;
        }


        listen 443 ssl http2; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/linoxcloud.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/linoxcloud.com/privkey.pem; # managed by Certbot

        ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
        ssl_session_cache shared:SSL:5m;
        ssl_session_timeout 10m;
        ssl_session_tickets off;
}
server {
        if ($host = imageserver.com) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        server_name imageserver.com;
}

Normal Server

upstream normalserver.com {
        server 127.0.0.1:5000;
        keepalive 600;
}

server {
        server_name normalserver.com;

        error_log /var/www/log/normalserver.com.error;
        access_log /var/www/log/normalserver.com.access;

        location / {
                proxy_pass http://normalserver.com;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }

        listen 443 ssl http2; # managed by Certbot
        ssl_certificate ...; # managed by Certbot
        ssl_certificate_key ...; # managed by Certbot
        ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
        ssl_session_cache shared:SSL:5m;
        ssl_session_timeout 10m;
        ssl_session_tickets off;
}
server {
        if ($host = normalserver.com) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        server_name normalserver.com;
}

I've been trying to overcome this problem for quite a while now, trying just about everything.
Reference: Two NGINX servers one passing CORS issue (but it doesn't provide any insight into what the problem or the solution is)

Any possible fixes, please?

2 Answers


  1. Chosen as BEST ANSWER

    The problem in my case is that I'm running my Node.js instances/servers using "pm2" and they are not working simultaneously. Similar issue: https://github.com/Unitech/pm2/issues/4352

    To elaborate on what happened: if two requests are made simultaneously, one pm2 process executes successfully, but then the server crashes/restarts after that execution, which makes the other server return a 502 Bad Gateway error (unreachable, as though the server were not running).

    For now, I'm running one server under "pm2" and the other one under "forever".

    Note: This issue has nothing to do with Nginx (since it can handle any number of websites with different domain names on a single port 80)

    This problem only started happening quite recently, so maybe it's a "pm2" bug.

    In simple words, when the two requests hit the individual pm2 processes, one executes, and then the pm2 processes restart, leaving the second request unserved.
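
    For reference, both apps were originally managed by pm2; a minimal sketch of that setup is below (the app names and script paths are assumptions for illustration, not taken from the actual project):

    // ecosystem.config.js - sketch only; names and script paths are assumed
    module.exports = {
        apps: [
            { name: 'normal-server', script: './normal-server/index.js' }, // app behind Nginx on port 5000
            { name: 'image-server', script: './image-server/index.js' },   // app behind Nginx on port 5001
        ],
    };

    Both apps would be started with: pm2 start ecosystem.config.js. The workaround described above simply moves one of the two apps out of pm2 and runs it under forever instead.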


  2. You have to combine these reverse proxies in one configuration file. There was already a similar thread here: https://serverfault.com/questions/242679/how-to-run-multiple-nginx-instances-on-different-port

    Hope it helps.
