
I am trying to make my EC2 instance's website (a Node.js application) secure, i.e. served over HTTPS.

I can successfully access the IP address and see my app running on the instance, but when I enter the subdomain it takes too long to respond.

I set up a load balancer with a certificate and target groups for the instance. The Node app is running on port 3000.

I have two registered targets on the same instance, one for port 3000 and one for port 80. The health status is healthy for both.

My load balancer has an SSL/TLS certificate, with listener rules that redirect port 80 to port 443 and forward to the target group.

In Route 53 I've created a subdomain routed to the DNS name of my load balancer.

I connect to my EC2 instance via SSH, clone my GitHub repo, and use node to run the server.

What I expected was for my IP address to redirect to the subdomain over a secure connection, showing the contents of my Node app on that subdomain. Instead, the IP address loads the contents on an insecure page, with the IP address in the address bar.

The load balancer's inbound rule allows all traffic. Its outbound rules allow ports 443 and 80, with the EC2 instance's security group ID as the source.

The EC2 instance's inbound rules allow ports 443, 80, and 3000 with the load balancer's security group ID as the source. Its outbound rules allow all traffic.

What could I be doing wrong?

I also have this nginx.conf file:

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name <subdomain_goes_here>;

        location / {
            proxy_pass <load_balancer_DNS_goes_here>;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /health {
            return 200 'Healthy';
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

This is my Node.js file:

require("dotenv").config();

const sendGrid = require("@sendgrid/mail");
sendGrid.setApiKey(process.env.sendGridAPIKey);

const express = require('express');
const app = express();

// Health check route: respond with a 200 OK status and the text "Healthy"
app.get('/health', (req, res) => {
    res.status(200).send('Healthy');
});

const PORT = process.env.PORT || 3000;

// Access to the HTML frontend
app.use(express.static('frontend'));
app.use(express.json());

// Access to the contact form file
app.get('/', (req, res) => {
    res.sendFile(__dirname + '/frontend/contact.html');
});

// Access to the data in the contact form
app.post('/', (req, res) => {
    console.log(req.body);

    const mailOptions = {
        from: '[email protected]',
        to: '[email protected]',
        subject: req.body.subject,
        reply_to: req.body.email,
        text: req.body.message
    };

    sendGrid.send(mailOptions, (error) => {
        if (error) {
            console.log(error);
            res.send('error');
        } else {
            console.log('Email sent');
            res.send('success');
        }
    });
});

app.listen(PORT, '0.0.0.0', () => {
    console.log(`Server running on port ${PORT}`);
});
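One detail worth noting: since TLS terminates at the load balancer, the Node app only ever sees plain HTTP; the ALB signals the client's original scheme in the X-Forwarded-Proto header. If the app itself should push clients to HTTPS, it can check that header. A minimal sketch (the helper names and module layout are illustrative, not from the original code):

```javascript
// Hypothetical helper: given the headers, host, and URL of a request
// forwarded by the ALB, return the HTTPS URL to redirect to, or null
// when no redirect is needed.
function httpsRedirectTarget(headers, host, url) {
  // The ALB sets X-Forwarded-Proto to the scheme the client used.
  if (headers['x-forwarded-proto'] === 'http') {
    return 'https://' + host + url;
  }
  return null; // already HTTPS, or not behind a proxy
}

// Express middleware built on the helper; register it before the routes
// with app.use(forceHttps).
function forceHttps(req, res, next) {
  const target = httpsRedirectTarget(req.headers, req.headers.host, req.url);
  if (target) {
    res.redirect(301, target);
  } else {
    next();
  }
}

module.exports = { httpsRedirectTarget, forceHttps };
```

This only makes sense once the subdomain itself resolves and responds; it does not fix the timeout.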

2 Answers
  1. Seems like there is an issue with your load balancer security group. You should allow traffic on ports 80 and 443 from 0.0.0.0/0 in your inbound rules and allow all traffic in your outbound rules.

    Also, your EC2 security group should allow inbound traffic on port 80 from your load balancer's security group and allow all traffic in its outbound rules.

    The proxy_pass setting in your nginx config should have the node app local URL (0.0.0.0:3000) instead of the load balancer endpoint.
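    Applied to the question's nginx.conf, that change would look roughly like this (a sketch assuming the Node app listens on port 3000 locally; only the proxy target changes):

    ```nginx
    server {
        listen 80;
        server_name <subdomain_goes_here>;

        location / {
            # Proxy to the local Node app rather than back out to the
            # load balancer, which would otherwise create a loop.
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```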

  2. Based on the setup, the ALB's target group doesn't need both port 3000 and port 80. IMHO the flow is as below:

    Client >–HTTP/HTTPS–> ALB >–HTTP:80–> Instance

    The instance has Nginx listening on port 80, which needs to route traffic to port 3000 locally. The recommended changes are:

    1. Change the ALB's target group to only have the target with port 80.
    2. Change NGINX config proxy_pass to proxy_pass http://127.0.0.1:3000/;

    If the issue persists, I recommend the tests below to learn more about the issue:

    1. Log in to the instance and run curl -lvk http://127.0.0.1:3000/ . This confirms the Node.js service is working on port 3000.

    2. Next, test against the NGINX listener on port 80 with curl -lvk http://localhost/ . As an additional check, verify the health check URL with curl -lvk http://localhost/health .

    3. From another EC2 instance in the same VPC, run curl -lvk http://<instance's IP>/ . Make sure the security groups on both systems allow port 80 communication.

    If all of the above succeed, follow the step below:

    1. Check the ALB target group's health check status. If the health check is failing, a security group or network ACL is most likely at play.