
Hi, I'm a little confused about the load balancer concept.
I've read some articles about load balancing in nginx, and from what I understand, the load balancer spreads requests across multiple servers.
But I thought that if one server goes down, another one takes over (not all servers running simultaneously).

Another thing: when requests are spread between servers, what happens to per-client state like sessions, and to in-memory databases like Redis?

I think I'm confused and have misunderstood the load balancer mechanism.

2 Answers


  1. Chosen as BEST ANSWER

    I actually found my answer on the nginx docs page. The short answer is the IP-hash mechanism.

    From the nginx docs:

    Please note that with round-robin or least-connected load balancing, each subsequent client’s request can be potentially distributed to a different server. There is no guarantee that the same client will be always directed to the same server.

    If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.

    With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server except when this server is unavailable.

    To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:

    upstream myapp1 {
        ip_hash;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    

    http://nginx.org/en/docs/http/load_balancing.html
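    To make the quoted mechanism concrete, here is a minimal Python sketch of the ip-hash idea: hash the client's IP and use it to pick a backend, so the same client consistently lands on the same server. This is not nginx's actual implementation (for IPv4, nginx hashes only the first three octets of the address); the server names just mirror the upstream example above.

    ```python
    import hashlib

    # Hypothetical upstream group mirroring the nginx example above.
    SERVERS = ["srv1.example.com", "srv2.example.com", "srv3.example.com"]

    def pick_server(client_ip: str, servers=SERVERS) -> str:
        """Map a client IP to a backend with a stable hash, so the same
        client is always routed to the same server (while it is up)."""
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]
    ```

    Because the mapping depends only on the IP, repeated calls for one client always return the same backend; different IPs spread out across the group.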


  2. from what I understand, the load balancer spreads requests across multiple servers. But I thought that if one server goes down, another one takes over (not all servers running simultaneously)

    As the name suggests, the goal of a load balancer (LB) is to balance load. From the Wikipedia definition, for example:

    In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.

    To perform this task, the load balancer obviously needs some monitoring of the resources, including liveness checks (so it can take failing servers/nodes out of rotation). Ideally an LB should work with stateless services (i.e. a request can be routed to any of the servers able to handle that request type), but that is not always the case, for multiple reasons. For example, in ASP.NET with a non-distributed session, requests had to be routed to the server that handled the previous request from the same session, which was achieved with so-called sticky sessions/cookies.
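    The liveness-check idea above can be sketched as a toy LB that keeps failing backends out of rotation. The class and method names here are illustrative, not any real library's API:

    ```python
    class LoadBalancer:
        """Toy LB: routes only to backends currently marked healthy."""

        def __init__(self, servers):
            self.servers = list(servers)
            self.healthy = set(servers)

        def mark_down(self, server):
            # Called when a liveness check fails: remove from rotation.
            self.healthy.discard(server)

        def mark_up(self, server):
            # Called when a failed server passes its checks again.
            if server in self.servers:
                self.healthy.add(server)

        def route(self, request):
            pool = [s for s in self.servers if s in self.healthy]
            if not pool:
                raise RuntimeError("no healthy backends")
            # Simplest possible policy: first healthy server in order.
            return pool[0]
    ```

    A real LB would run the health checks itself (periodic HTTP or TCP probes) and use a smarter selection policy, but the mechanism of pruning the pool is the same.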

    and another thing: when requests are spread between servers, what happens to per-client state like sessions, and to in-memory databases like Redis

    It is not very clear what the question is here. As I mentioned before, ideally you want stateless services backed by some shared datastore(s), so that whichever server/node a request reaches, it can load all the data needed to handle it.

    So in short: when a request reaches the LB, it selects one of the servers based on some algorithm (round robin, resource-based, sharding, response-time-based, etc.) and sends the request to that server. Depending on the approach used, sequential requests from the same source can hit different nodes/servers, so this is basically one of the ways to horizontally scale your application.
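    Round robin, the default nginx policy, is the simplest of those algorithms. A minimal sketch (the `route` helper is hypothetical, not a real API):

    ```python
    import itertools

    SERVERS = ["srv1", "srv2", "srv3"]
    rotation = itertools.cycle(SERVERS)  # endless srv1, srv2, srv3, srv1, ...

    def route(request):
        # Each call hands the request to the next server in turn, so
        # sequential requests from one client may land on different nodes.
        return next(rotation)
    ```

    This also shows why round robin alone breaks server-local sessions: four requests in a row go to srv1, srv2, srv3, srv1, which is exactly what ip-hash or sticky cookies are meant to avoid.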
