We're in the process of moving from a huge old monolithic architecture towards a microservice one.
UI-centric Product
Our product (which is mostly a large single-page app) is quite UI-centric, and thus the current codebase includes lots of automated UI tests.
Objective: Continuous Deployment
We expect to have a lot of microservices, each of them updated and deployed dozens of times throughout the day, so we have to come up with an automated testing process that allows us to run end-to-end tests against the entire set of services as often as possible. In a perfect world that would mean every commit in every service, but of course that's not feasible with our test-suite size and available resources.
Idea 1: Test environment
Our initial idea is to have a test environment which imitates the production environment as much as possible. In that test environment we continuously run the end-to-end test suites.
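Roughly, I imagine the continuous runner looking something like the sketch below. The environment URL, the suite command, and the pause interval are all placeholders for illustration, not an existing setup:

```python
# Hypothetical continuous runner for the end-to-end suite against the
# shared test environment. Command and URL are placeholders only.
import os
import subprocess
import time

TEST_ENV_URL = "https://test-env.example.internal"  # assumed test environment
E2E_COMMAND = ["npm", "run", "e2e"]                 # assumed suite entry point

def run_suite_forever(pause_seconds: int = 60) -> None:
    """Run the end-to-end suite in a loop, as often as resources allow."""
    while True:
        result = subprocess.run(
            E2E_COMMAND,
            env={**os.environ, "BASE_URL": TEST_ENV_URL},
            capture_output=True,
            text=True,
        )
        status = "PASSED" if result.returncode == 0 else "FAILED"
        print(f"e2e suite {status} against {TEST_ENV_URL}")
        time.sleep(pause_seconds)  # brief pause before the next run

if __name__ == "__main__":
    run_suite_forever()
```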
Idea 2: Record compatible service-versions
Every commit in any service repository (maybe only the master branch) causes that service to be built and deployed to that test environment, and the changes made in that commit will be tested the next time the test suite is run.
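A minimal sketch of that per-commit step, assuming a CI job that receives the service name and commit SHA and uses a hypothetical registry and a Kubernetes-based test environment (all of that is an assumption about the setup, not a given):

```python
# Hypothetical CI step: build the service at a given commit and deploy it
# to the shared test environment. Names and tooling are illustrative only.
import subprocess
import sys

def build_and_deploy(service: str, commit_sha: str) -> None:
    """Build a container image for the commit and roll it out to the test env."""
    image = f"registry.example.internal/{service}:{commit_sha}"  # assumed registry
    subprocess.run(["docker", "build", "-t", image, "."], check=True)
    subprocess.run(["docker", "push", image], check=True)
    # Assumed deployment mechanism; could be kubectl, a deploy API, etc.
    subprocess.run(
        ["kubectl", "set", "image", f"deployment/{service}", f"{service}={image}"],
        check=True,
    )

if __name__ == "__main__":
    # e.g. called by CI as: python deploy_to_test_env.py orders 3f2a9c1
    build_and_deploy(sys.argv[1], sys.argv[2])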
When a test suite passes, we can store the versions (or commit refs) of each service used at the moment of testing, so that we can keep a database of good and compatible service versions.
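Something like the following could record that "known good" set after a passing run. The service names are placeholders, the kubectl query is just one possible way to read the deployed version, and a JSON file stands in for whatever database we actually end up using:

```python
# Hypothetical snapshot of "known good" service versions, written after a
# passing end-to-end run. A JSON lines file stands in for a real database.
import json
import subprocess
from datetime import datetime, timezone

SERVICES = ["frontend", "orders", "billing"]  # placeholder service names

def deployed_version(service: str) -> str:
    """Ask the test environment which image tag a service is currently running."""
    out = subprocess.run(
        ["kubectl", "get", "deployment", service,
         "-o", "jsonpath={.spec.template.spec.containers[0].image}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def record_good_set(path: str = "known-good-versions.json") -> None:
    snapshot = {
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "services": {svc: deployed_version(svc) for svc in SERVICES},
    }
    with open(path, "a") as f:
        f.write(json.dumps(snapshot) + "\n")  # one record per passing run

if __name__ == "__main__":
    record_good_set()
```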
Downsides
This solution has the following downsides:
- You won't be sure that your service changes don't break any end-to-end scenarios until the next test-suite run passes (can take hours)
- It's unclear how to run test suites for changes from branches other than master.
Are there any good working solutions here?