We have a pretty big monolith. We're currently looking at carving it up into micro-services.
Right now we have a file server micro-service (FS) that handles delivering files to clients based on certain criteria.
FS has its own database that contains metadata relating to each file. So when a client requests a file, FS processes the params, asks another service for its opinion, and decides which file to return. FS's database is quite comprehensive; it contains most of the information relating to each file.
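To make the flow concrete, here's a rough sketch of what FS does today. All the names, fields, and stubs below are invented for illustration, not our actual code:

```typescript
// Rough sketch of the FS request flow described above; all names are invented.

interface FileMetadata {
  fileId: string;
  path: string;
  version: number;
  region: string;
}

// Stub for a query against FS's own metadata database.
async function queryFsDatabase(criteria: { clientId: string; region: string }): Promise<FileMetadata[]> {
  return [{ fileId: "f-123", path: "/files/f-123.bin", version: 7, region: criteria.region }];
}

// Stub for asking the other service for its opinion on which candidate to serve.
async function askOtherService(clientId: string, candidates: FileMetadata[]): Promise<FileMetadata> {
  return candidates[0];
}

// FS: given the request params, decide which file to return to the client.
export async function resolveFile(clientId: string, region: string): Promise<FileMetadata> {
  const candidates = await queryFsDatabase({ clientId, region }); // FS's own metadata
  return askOtherService(clientId, candidates);                   // other service's opinion
}
```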
We have another scheduler service (SS) that decides when a client needs to update its files.
SS also has a database that duplicates a lot of the information that's already in the FS database.
Currently we are looking at moving SS into a micro-service.
Much of the information in the SS database can be queried from a new API in FS. However, SS needs fast access to some of that information, so we don't want to incur the latency of making API calls from SS to FS for it.
What are some options for solving this within micro-service best practice?
There is talk of allowing FS and SS to access the same database, but I'm against this idea. SS should query FS via an API for information it requires from that database.
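The API approach would look something like this on the SS side. The endpoint shape and field names here are assumptions for the sake of the example, not our real API:

```typescript
// Sketch of SS querying FS over an API instead of reading FS's tables directly.
// The endpoint path and response fields are assumptions.

interface FileInfo {
  fileId: string;
  version: number;
  updatedAt: string;
}

export async function getFileInfoFromFs(fileId: string): Promise<FileInfo> {
  const res = await fetch(`http://fs-service/api/files/${fileId}`);
  if (!res.ok) {
    throw new Error(`FS returned ${res.status} for file ${fileId}`);
  }
  return (await res.json()) as FileInfo;
}
```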
The other option is to have some of the information duplicated in both databases, linked via an ID perhaps. What are your thoughts on this as a solution? I think this is the better option of the two.
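What I have in mind is something like the following: SS keeps a local copy of only the fields it needs for fast access, keyed by the same file ID FS uses, and updates that copy from change events published by FS. The event names, fields, and in-memory store here are all assumptions for illustration:

```typescript
// Sketch of the duplication option: SS maintains a local read model of FS data,
// linked by a shared fileId and kept up to date via events published by FS.

interface FileChangedEvent {
  fileId: string;    // shared ID linking the record in both databases
  version: number;
  updatedAt: string;
}

// SS's local read model: only the fields SS needs for scheduling decisions.
const localFileCache = new Map<string, FileChangedEvent>();

// Handler SS would subscribe to on a message bus (Kafka, RabbitMQ, etc.).
export function onFileChanged(event: FileChangedEvent): void {
  localFileCache.set(event.fileId, event);
}

// Fast, local lookup with no call to FS on the hot path.
export function getLocalFileVersion(fileId: string): number | undefined {
  return localFileCache.get(fileId)?.version;
}
```

The trade-off I can see is eventual consistency: SS's copy can lag briefly behind FS, which I think is acceptable for scheduling decisions.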
I'm completely open to any other suggestions you may have.