
Gist

I have an application that runs on a microservice-based architecture (on Kubernetes). All communication to and from outside the application happens through an API Gateway.

This just means that requests from my frontend don’t go directly to the services; they have to go through the Gateway.

Motive

Now I need to implement a feature that requires realtime communication between the frontend and an internal service. But since the internal service is not exposed to the outside, I need a way to “route” the realtime data through the Gateway.

All my services are running on Node.js, which is the reason I want to use Socket.IO to implement the realtime communication.

(architecture sketch)

Issue

But how do I implement the purple double arrow from the sketch?

Usually the frontend client would connect directly to the server where Socket.IO is running. But in my case this server (the realtime feature server) is not accessible from the client (and never should be), which means the client has to connect to the Gateway. Thus the Gateway needs some mechanism to route all incoming messages to the realtime service and vice versa.

Ideas

(1) Have a second HTTP server listening for events on the Gateway and emit those events to the realtime server. In the other direction, the realtime server will emit events to the Gateway, which will then emit them to the frontend. I think this approach will definitely work, but it seems redundant to emit everything twice, and wouldn’t it hurt performance?

(2) Use a Socket.IO Adapter to “pass events between nodes”, which seems like the right way to go because it is meant to “pass messages between processes or computers”. But I have trouble getting started because of the lack of documentation and examples. I am also not using Redis (is it required for the adapter?).
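On the Redis question: yes, the stock multi-node adapter implementation is backed by Redis, which relays packets between the processes. The wiring is short; this is a sketch for socket.io 2.x with the socket.io-redis package, where the port, host, and event name are placeholders:

```javascript
// Each Socket.IO server process (Gateway and realtime service alike)
// attaches the same Redis-backed adapter; Redis then forwards
// broadcasts between them.
const io = require('socket.io')(3000); // 3000 is a placeholder port

const redisAdapter = require('socket.io-redis');
// 'redis' / 6379 are placeholders for your Redis service address.
io.adapter(redisAdapter({ host: 'redis', port: 6379 }));

// A broadcast now reaches clients connected to ANY node sharing this Redis:
io.emit('update', { ok: true });
```

Note this makes the nodes share rooms and broadcasts; it does not by itself expose the internal service to the frontend, so it complements rather than replaces the routing question.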

(3) Use the socket.io-emitter package, which doesn’t seem like a good option since its last commit was 3 years ago.

(4) Something else?
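One candidate for (4): don’t terminate Socket.IO at the Gateway at all, and instead proxy the HTTP and WebSocket traffic through to the realtime service. A sketch with the node-http-proxy package, where the internal address and ports are assumptions:

```javascript
const http = require('http');
const httpProxy = require('http-proxy');

// 'http://realtime-service:8000' is a placeholder for the internal address.
const proxy = httpProxy.createProxyServer({
  target: 'http://realtime-service:8000',
  ws: true, // also proxy WebSocket connections
});

const server = http.createServer((req, res) => {
  if (req.url.startsWith('/socket.io')) {
    // Forward Socket.IO polling requests to the internal service.
    proxy.web(req, res);
  } else {
    // ...normal Gateway routing for everything else...
    res.statusCode = 404;
    res.end();
  }
});

// WebSocket upgrade requests arrive via the 'upgrade' event, not the
// request handler, so they need their own forwarding hook:
server.on('upgrade', (req, socket, head) => proxy.ws(req, socket, head));

server.listen(3000); // placeholder Gateway port
```

With this, the Gateway never interprets the Socket.IO protocol; it just moves bytes, so nothing is emitted twice.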

2 Answers


  1. As the internal service is not exposed to the outside, I recommend using a tunnel. ngrok gives you an instant, secure URL to your localhost server through any NAT or firewall.
    If your server exposes the socket service on a certain port, you can use ngrok to create a reverse proxy that exposes it to the world, so your frontend application can connect to it.
    Using the command is very simple; here is an example:

    1. Register and download ngrok from the official site.
    2. Run the following command to start a tunnel:

      ./ngrok http 3000

    3. To make it permanent, create a service and use an ngrok.yml file for the configuration.

    See the official documentation for details.
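    For step 3, the ngrok.yml can be as small as this sketch (the token and tunnel name are placeholders, and the format is the ngrok v2 one):

```yaml
authtoken: <your-token>
tunnels:
  realtime:          # arbitrary tunnel name
    proto: http
    addr: 3000       # local port the socket service listens on
```

    You can then start the named tunnel with `./ngrok start realtime`.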

  2. Alright, basically I designed the application like this:

    Ingress

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: centsideas-ingress
      annotations:
        kubernetes.io/tls-acme: 'true'
        kubernetes.io/ingress.class: 'nginx'
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      tls:
        - hosts:
            - centsideas.com
            - api.centsideas.com
          secretName: centsideas-tls
      rules:
        - host: api.centsideas.com
          http:
            paths:
              - path: /socket.io
                backend:
                  serviceName: socket-service
                  servicePort: 8000
              - path: /
                backend:
                  serviceName: centsideas-gateway
                  servicePort: 3000
        - host: centsideas.com
          http:
            paths:
              - backend:
                  serviceName: centsideas-client
                  servicePort: 8080
    

    Service

    apiVersion: v1
    kind: Service
    metadata:
      name: socket-service
      annotations:
        service.beta.kubernetes.io/external-traffic: "OnlyLocal"
      namespace: namespace
    spec:
      sessionAffinity: ClientIP
      ports:
        - name: ws-port
          protocol: TCP
          port: 8000
      type: ClusterIP
      selector:
        service: ws-api
    

    Then you create your deployment to deploy the ws-service. That way you can also activate k8s HPA (horizontal pod autoscaling) to scale the socket.io service up.
    You may need to change the annotations and other options depending on your k8s version (I think the annotation service.beta.kubernetes.io/external-traffic: "OnlyLocal" has been deprecated).
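    For completeness, the HPA I mentioned can be a sketch as small as this (the names, the Deployment it targets, and the thresholds are placeholders; note that with more than one socket pod you also rely on the sticky sessions from the Service above, plus an adapter if the pods must share events):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: socket-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ws-api           # placeholder: the deployment behind socket-service
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
```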
