We have System A (an application and a database) that was built for a specific business department and therefore has a business-aligned data model and table structure. System A is a mission-critical application.
Downstream systems, as part of their business processing flows, retrieve data from System A via stored procedures.
We have developed System B, which has a more generic data model and table structure, to replace System A. System B aims to service other business departments as well. Once System B goes live, System A's downstream systems will re-point their database connections to System B.
The stored procedures used by System A's downstream systems were also rewritten in System B. The signatures of the stored procedures (input parameters and returned result sets) were retained so as not to impact downstream systems retrieving data from System A.
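As a minimal sketch of what "signatures were retained" could mean as an automated check, the following compares the result-set contract (column names and order) of the old and new procedures. All names, tables, and queries here are hypothetical; in-memory SQLite databases and plain SELECTs stand in for the real systems and their stored procedures, which a real test would invoke through the actual database driver.

```python
# Hypothetical contract test: the result-set "signature" returned by a
# rewritten procedure in System B must match what System A's version
# returned. SQLite SELECTs stand in for the real stored procedures.
import sqlite3

def result_columns(conn, query, params=()):
    """Execute a query and return the column names of its result set."""
    cur = conn.execute(query, params)
    return [d[0] for d in cur.description]

# Simulate System A and System B with different underlying table shapes
# (business-aligned vs. generic) that expose the same result-set contract.
system_a = sqlite3.connect(":memory:")
system_a.execute(
    "CREATE TABLE dept_orders (order_id INT, customer TEXT, amount REAL)")
system_b = sqlite3.connect(":memory:")
system_b.execute(
    "CREATE TABLE generic_orders (id INT, party TEXT, value REAL, dept TEXT)")

# Stand-ins for the equivalent stored procedures (hypothetical queries).
proc_a = "SELECT order_id, customer, amount FROM dept_orders WHERE order_id = ?"
proc_b = ("SELECT id AS order_id, party AS customer, value AS amount "
          "FROM generic_orders WHERE id = ? AND dept = 'sales'")

cols_a = result_columns(system_a, proc_a, (1,))
cols_b = result_columns(system_b, proc_b, (1,))
assert cols_a == cols_b, f"contract broken: {cols_a} != {cols_b}"
print("result-set contract preserved:", cols_a)
```

A check like this only proves the shape of the result set matches, not that the data returned is equivalent, which is part of why the question of further testing arises.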
What would be the testing required for System B? Here are my thoughts:
1. To completely guarantee that all features and business logic in System A have been implemented, and implemented correctly, in System B, all System A test cases have to be executed against System B.
2. Downstream systems should test all business processing flows that are impacted by the stored procedure rewrites.
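Thought #1 above could be supplemented with a back-to-back (parallel-run) comparison: seed both systems with the same business data, invoke the equivalent procedure on each, and assert the result sets are identical. The sketch below assumes hypothetical table and procedure shapes, with in-memory SQLite again standing in for the real databases.

```python
# Hypothetical back-to-back test: identical business data in both systems
# should yield identical result sets from the equivalent procedures.
import sqlite3

def fetch_all(conn, query, params=()):
    return conn.execute(query, params).fetchall()

# System A: department-specific schema, seeded with sample data.
system_a = sqlite3.connect(":memory:")
system_a.execute(
    "CREATE TABLE dept_orders (order_id INT, customer TEXT, amount REAL)")
system_a.executemany("INSERT INTO dept_orders VALUES (?, ?, ?)",
                     [(1, "acme", 100.0), (2, "globex", 250.5)])

# System B: generic schema holding the same business data.
system_b = sqlite3.connect(":memory:")
system_b.execute(
    "CREATE TABLE generic_orders (id INT, party TEXT, value REAL, dept TEXT)")
system_b.executemany("INSERT INTO generic_orders VALUES (?, ?, ?, ?)",
                     [(1, "acme", 100.0, "sales"), (2, "globex", 250.5, "sales")])

# Stand-ins for the equivalent stored procedures (hypothetical queries).
proc_a = "SELECT order_id, customer, amount FROM dept_orders ORDER BY order_id"
proc_b = ("SELECT id, party, value FROM generic_orders "
          "WHERE dept = 'sales' ORDER BY id")

rows_a = fetch_all(system_a, proc_a)
rows_b = fetch_all(system_b, proc_b)
assert rows_a == rows_b, f"divergence: {rows_a} vs {rows_b}"
print(f"{len(rows_a)} rows matched between System A and System B")
```

Even with such comparisons passing, they only cover the data contract at the procedure boundary; they say nothing about how downstream flows consume that data, which is the concern behind #2.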
An architect is pushing the view that there is no need for downstream systems to execute #2, and that only comprehensive black-box/white-box/unit testing of each stored procedure needs to be done by the developer (since the stored procedure signatures shouldn't have changed). Is this a logical approach to testing the stored procedures, or is this testing method flawed?
Any thoughts on the testing approach above, especially #2?