I am reading about distributed systems and getting confused about what it really means.
I understand that, at a high level, it means a set of different machines that work together to achieve a single goal.
But this definition seems too broad and loose. I would like to give some points to explain the reasons for my confusion:
- I see a lot of people referring to microservices as a distributed system, where functionalities like Order, Payment, etc. are distributed across different services, whereas others refer to multiple instances of the Order service that serve customers and possibly use a consensus algorithm to agree on shared state (e.g. the current inventory level).
- When talking about a distributed database, I see a lot of people talk about different nodes, each of which stores/serves a part of the data, e.g. records with primary keys from 'A-C' on the first node, 'D-F' on the second node, etc. At a high level it looks like sharding.
- When talking about distributed rate limiting, some refer to multiple application nodes (so-called distributed application nodes) using a single rate limiter, while others mean that the rate limiter itself has multiple nodes with a shared cache (like Redis).
It feels like people use "distributed systems" to refer to microservices architecture, horizontal scaling, partitioning (sharding), and anything in between.
2 Answers
As commented by @ReinhardMänner, a good general definition of a distributed system (DS) is at https://en.wikipedia.org/wiki/Distributed_computing:
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another from any system. The components interact with one another in order to achieve a common goal.
Anything that fits the above definition can be referred to as a DS. All the mentioned examples, such as microservices, distributed databases, etc., are specific applications of the concept or implementation details.
The statement "X is a distributed system" does not inherently imply any such details; they must be specified explicitly for each DS. For example, a distributed database does not necessarily mean the use of sharding.
I’ll also draw from Wikipedia, but I think that the second part of the quote is more important:
Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components.
A system that constantly has to overcome these problems, even if all services are on the same node, or if they communicate via pipes/streams/files, is effectively a distributed system.
Now, trying to clear up your confusion:
Horizontal scaling was around with monoliths before microservices. Horizontal scaling is basically achieved by dividing compute resources across multiple nodes.
Division of compute requires dealing with synchronization, node failures, and multiple clocks, but it is still cheaper than scaling vertically. That’s where you might turn to consensus: by implementing it in the application, using a dedicated service such as ZooKeeper, or abusing a DB table for that purpose (see the sketch below).
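To make the "abusing a DB table" option concrete, here is a minimal Python sketch of a lease-style lock held in a shared table. The table name, lease duration, and the use of SQLite are illustrative assumptions; in a real deployment every node would connect to the same shared database server.

```python
import sqlite3
import time
import uuid

# Sketch of coordinating through a shared table. SQLite is only a stand-in;
# all nodes would really point at the same shared database server.
NODE_ID = str(uuid.uuid4())
LEASE_SECONDS = 10

conn = sqlite3.connect("coordination.db", isolation_level=None)
conn.execute("""
    CREATE TABLE IF NOT EXISTS leader_lease (
        name       TEXT PRIMARY KEY,
        holder     TEXT,
        expires_at REAL
    )
""")

def try_acquire(name: str) -> bool:
    """Try to claim (or take over an expired) lease; at most one node wins."""
    now = time.time()
    # Drop the lease if it has expired...
    conn.execute(
        "DELETE FROM leader_lease WHERE name = ? AND expires_at < ?", (name, now)
    )
    # ...then try to claim it. The primary key guarantees a single holder.
    cur = conn.execute(
        "INSERT OR IGNORE INTO leader_lease (name, holder, expires_at) VALUES (?, ?, ?)",
        (name, NODE_ID, now + LEASE_SECONDS),
    )
    return cur.rowcount > 0

if try_acquire("inventory-writer"):
    print("this node currently holds the lease")
```

Renewing the lease before it expires and coping with clock skew are omitted here; dedicated services like ZooKeeper exist precisely because getting those details right is hard.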
Monoliths present 2 problems that microservices solve: address-space dependency (i.e. someone’s component may crash the whole process and thus your component) and long startup times.
While microservices solve these problems, these problems aren’t what makes them into a "distributed system". It doesn’t matter if the different processes/nodes run the same software (monolith) or not (microservices), it matters that they are different processes that can’t easily communicate directly (e.g. via function calls that promise not to fail).
In databases, scaling horizontally is also cheaper than scaling vertically. The two components of horizontal DB scaling are division of compute – effectively, a distributed system – and division of storage – sharding – as you mentioned, e.g. A-C, D-F, etc.
Sharding of storage does not define distributed systems – a single compute node can handle multiple storage nodes. It’s just that it’s much more useful for a database that divides compute to also shard its storage, so you often see them together.
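To illustrate how a single compute node can front multiple storage nodes, here is a small Python sketch of key-range routing; the ranges and node addresses are invented for the example:

```python
# Hypothetical key-range shard map: primary keys starting with A-C live on
# node 0, D-F on node 1, everything else on node 2 (addresses are made up).
SHARD_MAP = [
    ("A", "C", "db-node-0:5432"),
    ("D", "F", "db-node-1:5432"),
    ("G", "Z", "db-node-2:5432"),
]

def node_for_key(primary_key: str) -> str:
    """Return the storage node that owns this key's range."""
    first = primary_key[:1].upper()
    for low, high, node in SHARD_MAP:
        if low <= first <= high:
            return node
    raise KeyError(f"no shard covers key {primary_key!r}")

print(node_for_key("Eleanor"))  # -> db-node-1:5432
```

Nothing here needs more than one compute node: a single router process could sit in front of all three storage nodes, which is why sharding alone does not make a distributed system.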
Distributed rate limiting falls under "maintaining concurrency of components". If every node does its own rate limiting, and they don’t communicate, then the system-wide rate cannot be enforced. If they wait for each other to coordinate enforcement, they aren’t concurrent.
Usually the solution is "approximate" rate limiting where components synchronize "occasionally".
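Here is a toy Python sketch of that idea: each node counts requests locally and only occasionally merges its count into a shared store. A plain dict guarded by a lock stands in for the shared cache (e.g. Redis); the 100 req/s limit and the 0.2 s sync interval are arbitrary choices for the example.

```python
import time
import threading

# Toy stand-in for a shared cache such as Redis: one global counter per
# 1-second window, protected by a lock.
SHARED = {"count": 0, "window_start": time.time()}
SHARED_LOCK = threading.Lock()
LIMIT_PER_SECOND = 100
SYNC_INTERVAL = 0.2

class NodeLimiter:
    """Per-node limiter that only synchronizes with the shared store occasionally."""

    def __init__(self):
        self.local_count = 0       # requests seen since the last sync
        self.last_known_total = 0  # stale view of the global count
        self.last_sync = 0.0

    def allow(self) -> bool:
        now = time.time()
        if now - self.last_sync >= SYNC_INTERVAL:
            self._sync(now)
        # The decision uses a possibly stale total, so enforcement is approximate.
        if self.last_known_total + self.local_count >= LIMIT_PER_SECOND:
            return False
        self.local_count += 1
        return True

    def _sync(self, now: float) -> None:
        with SHARED_LOCK:
            if now - SHARED["window_start"] >= 1.0:  # start a new 1-second window
                SHARED["count"] = 0
                SHARED["window_start"] = now
            SHARED["count"] += self.local_count
            self.last_known_total = SHARED["count"]
        self.local_count = 0
        self.last_sync = now

limiter = NodeLimiter()
print(sum(limiter.allow() for _ in range(150)))  # roughly 100 allowed
```

Between syncs each node decides on a stale total, so the cluster as a whole can slightly overshoot or undershoot the limit; that slack is the price of keeping the nodes concurrent.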
If your components can’t easily (i.e. with no latency) agree on a global rate limit, that’s usually because they can’t easily agree on a global anything. In that case, you’re effectively dealing with a distributed system, even if all components are just threads in the same process.
(That could happen, e.g., if you plan to scale out but haven’t done so yet, so you don’t allow your threads to communicate directly.)