
I am reading about distributed systems and getting confused about what it really means.

At a high level, I understand it means a set of different machines that work together to achieve a single goal.

But this definition seems too broad and loose. Here are a few points that explain my confusion:

  1. I see a lot of people referring to microservices as a distributed system, where functionalities like Order, Payment, etc. are distributed across different services, whereas others mean multiple instances of the Order service serving customers and possibly using a consensus algorithm to agree on shared state (e.g. the current inventory level).

  2. When talking about distributed databases, I see a lot of people talk about different nodes that each store/serve a part of the data, like records with primary keys from ‘A-C’ on the first node, ‘D-F’ on the second node, etc. At a high level this looks like sharding.

  3. When talking about distributed rate limiting, some refer to multiple application nodes (so-called distributed application nodes) using a single rate limiter, while others mean that the rate limiter itself has multiple nodes with a shared cache (like Redis).

It feels like people use "distributed system" to refer to microservices architecture, horizontal scaling, partitioning (sharding), and anything in between.

2 Answers


  1. I am reading about distributed systems and getting confused about what it really means?

    As commented by @ReinhardMänner, a good general definition of a distributed system (DS) is at https://en.wikipedia.org/wiki/Distributed_computing:

    A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another from any system. The components interact with one another in order to achieve a common goal.

    Anything that fits the above definition can be referred to as a DS. All the mentioned examples, such as microservices, distributed databases, etc., are specific applications of the concept or implementation details.

    The statement "X is a distributed system" does not inherently imply any such details; they must be specified explicitly for each DS. E.g. "distributed database" does not necessarily mean that sharding is used.

  2. I’ll also draw from Wikipedia, but I think that the second part of the quote is more important:

    A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another from any system. The components interact with one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail.

    A system that constantly has to overcome these problems, even if all services are on the same node, or if they communicate via pipes/streams/files, is effectively a distributed system.

    Now, trying to clear up your confusion:

    1. Horizontal scaling existed with monoliths before microservices; it is basically achieved by dividing compute resources.
      Division of compute requires dealing with synchronization, node failure, and multiple clocks, but it is still cheaper than scaling vertically. That’s where you might turn to consensus: by implementing it in the application, by using a dedicated service such as ZooKeeper, or by abusing a DB table for that purpose (see the lock-table sketch after this list).
      Monoliths present two problems that microservices solve: address-space dependency (i.e. someone else’s component may crash the whole process and thus your component) and long startup times.
      While microservices solve these problems, those problems aren’t what makes them a "distributed system". It doesn’t matter whether the different processes/nodes run the same software (monolith) or not (microservices); what matters is that they are separate processes that can’t easily communicate directly (e.g. via function calls that promise not to fail).

    2. In databases, scaling horizontally is also cheaper than scaling vertically. The two components of horizontal DB scaling are division of compute – effectively, a distributed system – and division of storage – sharding – as you mentioned, e.g. A-C, D-F, etc.
      Sharding of storage does not by itself make a distributed system – a single compute node can handle multiple storage nodes. It’s just that a database that divides compute benefits greatly from also sharding its storage, so you often see them together (see the range-routing sketch after this list).

    3. Distributed rate limiting falls under "maintaining concurrency of components". If every node does its own rate limiting and the nodes don’t communicate, then a system-wide rate cannot be enforced. If they wait for each other to coordinate enforcement, they aren’t concurrent.
      Usually the solution is "approximate" rate limiting, where components synchronize "occasionally" (see the shared-counter sketch after this list).
      If your components can’t easily (= with no latency) agree on a global rate limit, that’s usually because they can’t easily agree on a global anything. In that case, you’re effectively dealing with a distributed system, even if all the components are just threads in the same process.
      (That could happen, e.g., if you plan to scale out but haven’t done so yet, so you don’t allow your threads to communicate directly.)
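
    To make the lock-table option from point 1 concrete, here is a minimal sketch of lease-based leader election over a plain SQL table. It is an illustration under assumptions, not a production recipe: the `leader_lease` table, its columns, and the helper name are all made up for the example. The trick is a compare-and-swap UPDATE, so at most one node can hold the lease at a time.

    ```python
    import sqlite3
    import time
    import uuid

    # Hypothetical single-row lease table; in practice this would live in a
    # shared database that all nodes can reach, not in a local SQLite file.
    conn = sqlite3.connect("cluster.db", isolation_level=None)  # autocommit
    conn.execute("""CREATE TABLE IF NOT EXISTS leader_lease (
        id INTEGER PRIMARY KEY CHECK (id = 1),
        owner TEXT,
        expires_at REAL)""")
    conn.execute("INSERT OR IGNORE INTO leader_lease VALUES (1, NULL, 0)")

    NODE_ID = str(uuid.uuid4())   # identity of this process/node
    LEASE_SECONDS = 10

    def try_acquire_leadership() -> bool:
        """Compare-and-swap: take the lease only if it is free, expired,
        or already ours. Of several concurrent callers, exactly one wins."""
        now = time.time()
        cur = conn.execute(
            """UPDATE leader_lease
               SET owner = ?, expires_at = ?
               WHERE id = 1 AND (owner IS NULL OR expires_at < ? OR owner = ?)""",
            (NODE_ID, now + LEASE_SECONDS, now, NODE_ID))
        return cur.rowcount == 1  # one row updated => we hold the lease

    if try_acquire_leadership():
        print(f"{NODE_ID} is the leader for {LEASE_SECONDS}s")
    else:
        print(f"{NODE_ID} is a follower")
    ```

    Note that this already exhibits the classic DS challenges: the lease expiry exists precisely because nodes fail independently, and the clocks used to compare `expires_at` are never perfectly synchronized.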
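
    For point 2, here is the range-routing sketch: what the A-C / D-F routing from the question looks like on its own. The shard map and key scheme are invented for illustration. The point is that this function alone is just partitioning; distribution only enters when the nodes behind it must coordinate (rebalancing, failover, cross-shard queries).

    ```python
    # Hypothetical range-to-node map, mirroring the A-C / D-F example.
    SHARD_MAP = [
        (("A", "C"), "node-1"),
        (("D", "F"), "node-2"),
        (("G", "Z"), "node-3"),
    ]

    def route(primary_key: str) -> str:
        """Pick the storage node that owns the key's range."""
        first = primary_key[:1].upper()
        for (lo, hi), node in SHARD_MAP:
            if lo <= first <= hi:
                return node
        raise KeyError(f"no shard covers key {primary_key!r}")

    print(route("Alice"))  # node-1
    print(route("Eve"))    # node-2
    ```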
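
    And for point 3, the shared-counter sketch: a minimal fixed-window counter in Redis that every application node increments. The key scheme and limits are assumptions for the example; the Redis calls (INCR, EXPIRE) are standard commands via the redis-py client. Because INCR is atomic on the Redis side, concurrent nodes never double-count, yet no node ever waits on another node directly.

    ```python
    import time

    import redis  # redis-py client; assumes a Redis instance is reachable

    r = redis.Redis(host="localhost", port=6379)

    LIMIT = 100          # max requests per window, across ALL nodes
    WINDOW_SECONDS = 60  # fixed window length

    def allow_request(client_id: str) -> bool:
        """Fixed-window counter shared by all nodes via Redis."""
        window = int(time.time() // WINDOW_SECONDS)
        key = f"ratelimit:{client_id}:{window}"  # hypothetical key scheme
        count = r.incr(key)            # atomic increment, returns new value
        if count == 1:
            # First hit in this window: let the key clean itself up later.
            r.expire(key, WINDOW_SECONDS * 2)
        return count <= LIMIT

    if allow_request("customer-42"):
        print("handle request")
    else:
        print("429 Too Many Requests")
    ```

    This is "approximate" in the sense described above: windows are aligned to wall-clock time, so skewed clocks or a burst straddling a window boundary can briefly exceed the limit. That is the price of staying concurrent.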
