5

A little background before I ask my questions. I've designed a system as an architect based on the requirements given to me by the client. The client has a team of two to three developers who are good in React, TypeScript and Laravel (PHP). They want a system that:

  • offers different modules to the user (the domain is a Customer Management System).
  • They want a system in which modules can be subscribed to, independently of each other.
  • The system should not be very difficult to implement.
  • It should cater to the needs of approximately 10-25 users initially, but they have plans to extend it to 1000+ users later.

The client was very much inspired by microservices, so I had to convince him to go for a monolith that can later be migrated to microservices. Besides, to me, microservices for a 3-dev team is a joke.

My system design is as follows: the backend is in Laravel/PHP. Each module (all of them are hosted in one codebase) is independent in the sense that it has its own database (think of a separate schema in Postgres), and these databases do not use or talk to each other directly. If modules want to communicate, they have to send messages to each other. So I have (a configuration sketch follows the list):

  • Module A (Postgres)
  • Module B (Postgres)
  • Query Module (See below)
  • RabbitMQ for messaging
  • Redis for Caching
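
To make the "separate schema in Postgres" part concrete, here is a minimal sketch of how it might be wired up in Laravel's config/database.php. The connection names, schema names, and env defaults are hypothetical, and in older Laravel versions the schema key is 'schema' rather than 'search_path'; each module's models are then pinned to their own connection.

<?php

// config/database.php (excerpt): one Postgres connection per module,
// each pinned to its own schema, so a module cannot reach into another
// module's tables directly.
return [
    'connections' => [
        'module_a' => [
            'driver'      => 'pgsql',
            'host'        => env('DB_HOST', '127.0.0.1'),
            'port'        => env('DB_PORT', '5432'),
            'database'    => env('DB_DATABASE', 'cms'),
            'username'    => env('DB_USERNAME', 'cms'),
            'password'    => env('DB_PASSWORD', ''),
            'search_path' => 'module_a',
        ],
        'module_b' => [
            'driver'      => 'pgsql',
            'host'        => env('DB_HOST', '127.0.0.1'),
            'port'        => env('DB_PORT', '5432'),
            'database'    => env('DB_DATABASE', 'cms'),
            'username'    => env('DB_USERNAME', 'cms'),
            'password'    => env('DB_PASSWORD', ''),
            'search_path' => 'module_b',
        ],
    ],
];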

Since various queries require distributed joins (the databases are separate and modules cannot talk to each other directly), the Query module serves those queries from denormalized datasets. Whenever there is a change in a module's database, a message is sent out; the Query module reads it, gathers the data from the different services (calling their REST endpoints) and then updates the dataset. So all the screens in the UI show data from the Query module, but the changes are made in the actual modules.
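
As a minimal sketch of the Query module's side of that flow, assuming Laravel's queued listeners sit on top of the RabbitMQ-backed queue (the event shape, URLs, connection name, and table are all hypothetical):

<?php

namespace App\Modules\Query\Listeners;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Http;

// Hypothetical queued listener in the Query module: it runs whenever a
// module publishes a "customer changed" message on the queue.
class RefreshCustomerOverview implements ShouldQueue
{
    public function handle(object $event): void
    {
        // Gather fresh data from the owning modules' REST endpoints.
        $customer = Http::get("https://cms.example.test/api/module-a/customers/{$event->customerId}")->json() ?? [];
        $invoices = Http::get("https://cms.example.test/api/module-b/customers/{$event->customerId}/invoices")->json() ?? [];

        // Upsert the denormalized row that the UI screens read from.
        DB::connection('query')->table('customer_overview')->updateOrInsert(
            ['customer_id' => $event->customerId],
            [
                'name'          => $customer['name'] ?? null,
                'invoice_count' => count($invoices),
                'refreshed_at'  => now(),
            ]
        );
    }
}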

Is the design correct as a modular monolith? How can it be reduced further, considering a team of only 3 developers?

UPDATE (April 19, 2025)

I read the answers, and I see questions like why Redis is used and why RabbitMQ. Please understand how PHP works.

Every time a request is received, unlike C#, Java, Scala, or NodeJS, PHP is cold-started: the HTTP pipeline is set up, processing happens, the response is sent out, and PHP is fully shut down. That is how PHP works. If the app has to remember something between requests, it has to live outside PHP: Redis, the database, or whatever. That is the reason for using Redis.
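
For example, something that a long-running runtime would keep in process memory goes through Laravel's cache facade backed by Redis instead. A minimal sketch (the key, TTL, connection name, and query are made up for illustration):

<?php

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Anything worth remembering between requests has to live outside the
// PHP process; here it is parked in the Redis-backed cache for 300s.
$activeCustomers = Cache::remember('module-a:active-customer-count', 300, function () {
    return DB::connection('module_a')->table('customers')->where('status', 'active')->count();
});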

For anything that requires longer processing, the way Laravel works is that the app sends a message to a queue hosted in the database, Redis, or RabbitMQ. The same copy of the code, running either on the same server or elsewhere, picks up the message and does the processing. This is not something I invented for the project. See https://laravel.com/docs/12.x/queues
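
In Laravel that pattern looks roughly like the sketch below; the job name and payload are made up for illustration, while ShouldQueue, Dispatchable, and Queueable are the standard Laravel pieces:

<?php

namespace App\Modules\Query\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical long-running task: the web request only pushes it onto
// the queue (database/Redis/RabbitMQ); a separate worker process
// running the same codebase picks it up and does the work.
class RebuildCustomerDataset implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(public int $customerId)
    {
    }

    public function handle(): void
    {
        // ...call the other modules' REST endpoints and update the
        // Query module's denormalized dataset here...
    }
}

// Somewhere in a controller or service:
// RebuildCustomerDataset::dispatch($customer->id);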


My choice of Redis and RabbitMQ is already baked into Laravel and commonly used by the Laravel community, even for small projects.

I do agree that things would have been much simpler with C# or NodeJS, but understand that as consultants we have to design an architecture based on the team's capabilities. We always work under constraints. A PHP backend was a constraint set by the client. The client previously had small PHP apps (custom deployments) and some Windows desktop apps; now they want a consolidated online platform for all their clients.

10
  • if you think this forum is not the right place to ask this question, can you suggest the right forum to ask it in? Or instead of blindly voting to close this question, suggest what is needed to improve it? Commented Apr 18 at 5:15
  • see Green fields, blue skies, and the white board - what is too broad? Commented Apr 18 at 5:46
  • 1
    “microservices for a 3-dev team is a joke”—the assumption that there is a correlation between the number of persons in a team and the viability of microservices is wrong. I had projects with a team of one using microservices with great success. Commented Apr 18 at 9:06
  • 1
    "The system should not be very difficult to implement" That's a relief; I hate when the requirement is for it to be extremely difficult (+1 btw) Commented Apr 19 at 17:16
  • 1
    I'm afraid a design review doesn't fit the Q&A format of this community. If you have a specific problem with this implementation, then we can help. We have a dedicated reason for closing questions asking for opinions, which you appear to be doing. At this point you have several answers, but I don't see a way to bring this in line with what this community supports without invalidating answers. Consider posting on StackOverflow Discussions where these kinds of questions are encouraged. Commented Apr 20 at 1:22

4 Answers

7

The client was very much inspired by microservices, so I had to convince him to go for a monolith that can later be migrated to microservices. Besides, to me, microservices for a 3-dev team is a joke.

There are definitely those who agree with you:

Microservices are not necessarily required to manage huge software, but rather to manage a huge number of people working on them.

The horror of microservices in small teams - medium.com

Microservices solve a few problems:

  • allow large teams to break up work and have separate release cycles.

  • scale. You can scale out different services independently.

  • tackle a few large data problems more efficiently.

If you don't have these issues, then the added cost of microservices is not worth it.

You want a modular monolith

Are microservices worth it, when you have A SINGLE TEAM of 4 devs - reddit.com

Conversely,

They want a system in which modules can be subscribed to, independent of each other.

This means you have to maintain a hard isolation of these modules regardless of microservices.

Which means you're part way to microservices anyway.

Is this too much for a modular monolith system?

The danger of a modular monolith is that it allows unwanted coupling between the modules. Since you have a requirement that forces you to decouple them, that danger goes poof. You don't need the architecture to force this on you when the requirements do. Rather, you need an architecture that enables this decoupling.

It may be worth analyzing how far you've been pushed towards microservices already.

Here are some microservice best practices. I'll contrast them with your modular monolith:

  1. Follow the Single-Responsibility Principle (SRP)

Still good advice in a modular monolith.

  2. Do Not Share Databases Between Services

You're already doing this.

  3. Clearly Define Data Transfer Object (DTO) Usage

Still good advice in a modular monolith.

  4. Use centralized observability tools

Still good advice in a modular monolith.

  5. Carefully consider your authorization options

May not be needed yet.

  6. Use an API gateway for HTTP

Still good advice in a modular monolith.

  7. Use the Right Communication Protocol Between Services

Still good advice in a modular monolith.

  8. Adopt a consistent authentication strategy

Still good advice in a modular monolith.

  9. Use containers and a container orchestration framework

May not be needed yet.

  10. Run health checks on your services

Still good advice in a modular monolith.

  11. Maintain consistent practices across your microservices

Still good advice in a modular monolith.

  12. Apply Resiliency and Reliability Patterns

Still good advice in a modular monolith.

  13. Ensure Idempotency of Microservices Operations

Still good advice in a modular monolith.

Evolving microservices - microservices.io

There are other things that go with microservices, like giving each service its own pipeline, that you likely don't need to bother with yet. But making your modules independently deployable has benefits even on a 3-person team.

Another issue is getting your modules to be the right size. I found some good advice on that here. Yes, making things as small as you can makes them simple. But that just pushes the complexity out somewhere else. Find balance.

6
  • thank you so much for the answer. It definitely is a big help and improves my understanding too. Commented Apr 18 at 10:15
  • This question got me thinking, and related to this answer: Modularizing the Monolith - Jimmy Bogard - NDC Oslo 2024 (YouTube) - I'm watching this right now. It's a good conceptual explanation of moving from monolith to modularized monolith (and some well-deserved critiques of microservices). Commented Apr 18 at 14:56
  • @GregBurghardt you got me watching it as well. The vertical stuff reminds me of package by feature. I last talked about it here. Commented Apr 18 at 18:08
  • Yup, that was my first thought as well. It seems like a modular monolith takes "package by feature" a step further and primes a monolith to be split into micro services later. A feature is almost too small. A modular monolith appears to be broken down by business unit or where the rules change, encompassing many related features. In any event, I hadn't heard of modular monoliths before, so the OP's question (and your answer) piqued my interest. Especially since I'm considering breaking up a monolith right now. Commented Apr 18 at 20:04
  • 1
    @SimpleFellow my advice with things like this, take great care with naming. Resist brain dead patterns that make naming so easy you don’t have to think. A name should make clear what does and doesn’t belong in the slice. Commented Apr 19 at 20:09
4

You already built micro services.

  1. Modules communicate asynchronously via message queues.
  2. You have a micro service module that aggregates data from multiple services.
  3. This reporting module gets updates from other modules via messages in the message queue.

The only thing stopping you from calling these things micro services is making them independently deployable. You have a monolithic codebase, the added complexity of asynchronous programming, and maintenance of message queues while retaining the single deployment unit.

You sort of have the worst of both worlds.

Except that all depends.

Modularizing the Monolith by Jimmy Bogard has a great high-level conceptualization of that transition between a monolith and micro services. He does mention message queues as part of the solution, but this implies you have a concrete desire and plan to migrate from a monolith to micro services. Message queues are introduced in the later stages of the transition so you can deal with the change from synchronous programming and business processes to asynchronous programming and business processes.

I had to convince him to go for a monolith that can later be migrated to microservices.

I think you made the right call by not jumping straight into micro services; however, this doesn't sound like a concrete plan to me. "It can be migrated later" also implies that you don't need to migrate it. Given this, I think message queues are an over-complication.

You built a modular monolith. You (hopefully) carved out the proper boundaries between modules. Your monolith is primed to be split into micro services by virtue of the fact you modularized it. Get rid of the message queues and just make in-process function calls. What do you gain with the asynchronous nature of message queues at this point, especially at this size?

I think you would benefit more from clearly defined interfaces between modules and the simpler nature of in-process function calls. The simpler nature of the system will allow you to add features and evolve it quicker, which is crucial at the early stages of developing a product. Clearly defined interfaces and in-process function calls lead to another simplification: no more chatter between the reporting module and the other two modules.

First, I propose a slight change in terminology to reset your frame of mind. The reporting module isn't doing "distributed joins"; it's aggregating data from multiple sources. This might be a niggling detail, but people see "joins" and immediately think "joining tables in a database" before doing a face-plant into "there are three separate databases." I think this is the wrong frame of mind here. Aggregating data feels more appropriate, because it alleviates the mental overhead of trying to "join data". You just need to compile it; summarize it.

Stuff happens in Module A. The reporting module needs to respond. Stuff happens in Module B, and the reporting module needs to know about that, too. Here I see two simpler solutions than message queues:

  • Have Module A call a method on the Reporting Module at the appropriate time. Same for Module B. You introduce dependencies between modules, but these are pretty limited in scope (a sketch of this option follows the list).
  • Use a pub/sub pattern to promote looser coupling between modules. The application housing the modules should be wiring these things together (see also composition root).
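
A minimal sketch of the first option; the namespaces, class names, and method names are hypothetical, and each class would live in its own module's directory:

<?php

namespace App\Modules\Reporting;

// Hypothetical read-model updater owned by the Reporting module.
class CustomerOverviewUpdater
{
    public function customerChanged(int $customerId): void
    {
        // ...refresh the denormalized row for this customer...
    }
}

namespace App\Modules\ModuleA;

use App\Modules\Reporting\CustomerOverviewUpdater;

// Module A persists its own change, then calls the Reporting module
// directly and in-process; the coupling is limited to this one
// narrow, explicitly injected dependency.
class CustomerService
{
    public function __construct(private CustomerOverviewUpdater $reporting)
    {
    }

    public function rename(int $customerId, string $newName): void
    {
        // ...update Module A's own schema here...
        $this->reporting->customerChanged($customerId);
    }
}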

PHP itself doesn't support events in the traditional sense like, say, C# does, but a publish/subscribe pattern isn't too difficult to create from scratch. A generic event class, event names, and an interface to publish, subscribe, and unsubscribe would suffice. You can try searching for "php publish subscribe" for some options. A cursory search landed me here: https://github.com/Superbalist/php-pubsub (note that I've never used it before, but it supports local events):

$adapter = new \Superbalist\PubSub\Adapters\LocalPubSubAdapter();

// Subscribe a callback to a channel, then publish a message to it;
// everything runs synchronously in the same process.
$adapter->subscribe('my_channel', function ($message) {
    var_dump($message);
});

$adapter->publish('my_channel', 'Hello World!');

Pass the same $adapter around to each module and you've made your own local, synchronous event bus. You get the ability to step through method calls line-by-line in an IDE — essential for quick turnaround times when squashing bugs on what is, presumably, a tight deadline.

I think there are simpler ways to achieve modularity and prime your application to be split up later into micro services. It starts with ditching message queues. Only once you've reached a scale where micro services actually solve more problems than they introduce would I add message queues as a temporary stop-gap between "monolith" and "micro services".

In your case, I would opt for faster and simpler development that facilitates refactoring. A monolith achieves this provided the code is in the same repository and you can trace through method calls in a debugger without running multiple processes and multiple debuggers. If you are early on in the development of this product, speed is of the essence, and a simpler architecture that still supports change will serve you better.

Either that or finish the transition to micro services now, because you're about 80% the way there. You just need to package and deploy these things separately. Don't sit on the fence. Choose a side and run with it.

1
  • I added some details. Please check them. Commented Apr 19 at 15:52
3

Obviously it's hard to say without seeing the code, but to simplify it further:

  1. You could drop RabbitMQ. Given that you have a monolith design, the messages can be replaced with in-process function calls. No need for message queues.

  2. You could drop Redis. With only 25 users it seems unlikely that you will need a distributed cache. Just run the query service on one box. (A sketch of the in-process alternative follows this list.)
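
As a rough sketch of what that simplification can look like in Laravel terms, you can keep the job abstraction but run everything in-process; the job class name is hypothetical, while the 'sync' queue connection and dispatchSync() are standard Laravel features:

<?php

use App\Modules\Query\Jobs\RebuildCustomerDataset;

$customerId = 42;

// Keep dispatching jobs as usual, but let Laravel's built-in 'sync'
// queue connection (QUEUE_CONNECTION=sync in .env) run them inside
// the same request; there is no broker to operate.
RebuildCustomerDataset::dispatch($customerId);

// Or be explicit at the call site and bypass the configured queue
// connection entirely.
RebuildCustomerDataset::dispatchSync($customerId);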

My concern going forward would be the query service. It seems to couple the modules. If different users have different module sets, won't you end up having to store all permutations of every query?

Lastly: have you really avoided microservices? "the Query module reads it, gathers the data from the different services (calling their REST endpoints)"

9
  • Honestly, "You could drop RabbitMQ" was my first thought as well --- or just deploy these silly things separately and now you have micro services. It just feels like the message queue is micro service frosting on top of a modular monolith cake. Yummy and appealing, but otherwise vastly overcomplicated when these things can just make in-process calls. Commented Apr 18 at 20:07
  • Yeah, just the pain of maintaining the RabbitMQ + Redis infrastructure for 25 users, or even 1000+ users. The reasoning is not explained. Are licence fees being paid? Commented Apr 18 at 21:04
  • @Ewan I added some details. Please check the question again. Commented Apr 19 at 16:10
  • @SimpleFellow hi, yes I read your update. I admit I'm not a PHP expert, but... It seems to me that a "monolith" application, even a modular one, would process the whole call in one go, importing and calling other modules in the same request. This would be the simplest solution. Offline jobs and data storage could be handled by your database of choice, Postgres. You ask if your solution is "too much" for 3 devs and 25 customers. I think the answer might be yes; you could have got away with a super basic solution and left Laravel out. Commented Apr 19 at 16:22
  • Also, you said you specifically steered away from microservices, but it sounds like you have made a full microservice solution with event-queue comms and REST services. Would it not have been simpler to just make each module a REST API and have the front end call them as required? Commented Apr 19 at 16:25
1

My understanding is that the query module is a collection of database views bringing together information from the Module A and Module B databases. Probably the Module A and Module B databases are each exposed by their own application that reads and writes that database.

the Query module reads it, gathers the data from the different services (calling their REST endpoints)

The architecture is already a service-oriented architecture (SOA). It may look like a monolith because all the applications are packaged into a single package, but it is worth asking why it should be called a monolith when the communication between modules is performed over HTTP. To see where this can lead, consider that each method in the application's code could be exposed through its own REST endpoint, with its direct calls replaced by calls over HTTP/HTTPS. With that type of setup, from a certain stage the development burden gets replaced by an administrative one, which in turn transfers the burden to the infrastructure; with an under-developed private cloud that can become unmanageable, a downside the popular public clouds have overcome. Therefore I think there are two questions: (1) for the short/medium term, "for what time span can development remain a tolerable bottleneck for the business?" and (2) for the long term, "what resources can the business invest in the infrastructure?", or rephrased, "what infrastructure cost can the business support in the long term?"

1
  • 1
    Do not edit your own posts without logging in. It's hard to judge if author's intent is preserved. Commented Apr 19 at 9:37
