I’m building a Jira clone using a microservice architecture with the following services:

  1. User Service: Manages user-related operations.
  2. Task Service: Handles task-related operations.
  3. Auth Service: Manages authentication.

I’m struggling with model separation and data access across microservices. For instance, when a user creates a task, I need to update the user’s information or fetch user details. Should I directly access the User model from the User Service in these cases? How can I efficiently and scalably handle situations where one microservice needs data or functionality from another, following best practices for production environments?

My tech stack includes: Nginx, MongoDB, SQL, Kafka, Docker, Redis, Node.js, Express, and JWT.

Please feel free to suggest any other technologies or tools that would optimize this setup.

2 Answers


  1. So, in general, microservices are expensive and complicated, especially when dealing with these domain-separation issues. Make sure you need them before you go down this path. You can use domain abstraction layers without the overhead of strongly separated services.

    That being said, when using microservices, you need to keep the internals of each service strongly separated so that you don’t accidentally introduce coupling. That is the whole point: strong decoupling lets each service grow, change, and scale independently of the rest of the system. In this scenario, the Task Service would be a client of the User Service whenever it needs to fetch user details or update a user. Try to minimize the binding between the services so that they aren’t coupled at the payload layer.
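    A minimal sketch of what "Task Service as a client of the User Service" can look like in the Node.js stack mentioned above. The endpoint URL, environment variable, and response fields are assumptions for illustration, not part of the asker's actual codebase; the point is that the Task Service maps the remote payload into its own narrow DTO instead of depending on the User Service's full model:

```javascript
// Hypothetical Task Service code (Node.js 18+, which ships a global fetch).
// The User Service base URL is assumed to come from the environment.
const USER_SERVICE_URL = process.env.USER_SERVICE_URL || "http://user-service:3000";

async function getUserSummary(userId) {
  const res = await fetch(`${USER_SERVICE_URL}/users/${userId}`);
  if (!res.ok) {
    throw new Error(`User Service responded with status ${res.status}`);
  }
  const user = await res.json();
  // Map the remote payload into a narrow local DTO so the Task Service
  // never depends on the User Service's internal model shape.
  return { id: user.id, displayName: user.name };
}
```

    Keeping the mapping in one place means that if the User Service later changes its payload, only this adapter needs to change, not every piece of task logic.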

    You will find yourself writing a lot of client code this way, dropping down to the transport layer to call between services. Once again, think long and hard about whether you need this kind of scalability between services, because it comes with lots of other overhead:

    • Deployment overhead (many more deployment packages)
    • Complexity overhead (transport complexity, reasoning about non-compile-time binding and transport error handling)
    • Development overhead (much harder to debug distributed systems)
    • Performance overhead (transport serialization/deserialization)
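    To make the transport error handling bullet concrete, here is a sketch of the kind of wrapper every cross-service call tends to need, with a timeout and retry policy that an in-process function call would never require. The function name and limits are illustrative, not from the question:

```javascript
// Retry a transport call a few times, aborting any attempt that hangs
// past the timeout. This boilerplate is part of the "complexity overhead"
// of calling between services instead of within one process.
async function callWithRetry(fn, { retries = 2, timeoutMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      // The caller passes the abort signal along to fetch (or another client).
      return await fn(controller.signal);
    } catch (err) {
      lastError = err; // remember the failure and try again
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastError;
}
```

    Real systems usually go further (exponential backoff, circuit breakers, idempotency keys for retried writes), which is exactly why this overhead is worth weighing before splitting services.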

    On the other hand, if you have an organization that needs these services to evolve independently, they can be a really powerful way to achieve very high scale. Most organizations that need this kind of scale already have experts in this domain; I recommend having a conversation with one of them.

  2. Why not have a data access layer exposed as shared services? That way you not only segregate data but also keep it consistent and avoid redundancy. Any thoughts?

    I’m happy to stand corrected if this model doesn’t work.

    Thank you
