Let's say we have several primary tables in our database (a, b and c), and then another table (x) that stores a complex, only semi-predictable jsonb object holding references to all the primary tables. In my case the jsonb looks something like this:
```json
{
  "entries": [
    [
      [
        { "table": "a", "id": "1" },
        { "table": "b", "id": "4", "entries": [
            { "table": "a", "id": "3" },
            { "table": "a", "id": "1" },
            ...
        ]},
        { "table": "c", "id": "5", "entries": [
            { "table": "a", "id": "2" },
            { "table": "b", "id": "4", "entries": [
                { "table": "a", "id": "1" },
                { "table": "a", "id": "6" },
                ...
            ]},
            ...
        ]},
        ...
      ],
      ...
    ],
    ...
  ]
}
```

When selecting records from table x, we want to filter the results by properties of the other tables, e.g. only x records that reference an a record whose field contains a particular enum value.
Is it even possible or at all performant to do this in a single query using this JSONb data structure? It seems it would require some serious aggregating of IDs, and doing that for every query seems like a lot of work.
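For what it's worth, here is a sketch of what the single-query approach might look like on PostgreSQL 12+, using the SQL/JSON path `.**` accessor to recursively collect every reference object regardless of nesting depth. The column names (`doc` on x, `kind` on a) and the integer id cast are assumptions for illustration:

```sql
-- Recursively pull every {"table": "a", ...} reference out of the
-- jsonb, join each one to table a, and filter on a's enum column.
SELECT DISTINCT x.*
FROM x
CROSS JOIN LATERAL jsonb_path_query(
    x.doc, '$.** ? (@.table == "a")'
) AS ref(obj)
JOIN a ON a.id = (ref.obj ->> 'id')::int
WHERE a.kind = 'some_enum_value';
```

This works, but as feared it unnests every reference in every x row on every query, so without supporting indexes it will not scale well.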
The alternative I was considering is to keep the jsonb column as it is, but also add junction tables (I may have the name wrong there) to track each x record's dependencies. So you'd have tables x_a, x_b, x_c, etc., and store only a single unique row for each a, b, or c id that appears in the x record. That way the queries can use a plain, non-JSON approach with regular join filters.
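Concretely, the junction-table idea might look like this (table and column names are hypothetical; x_b and x_c would follow the same pattern):

```sql
-- One row per distinct x→a reference found in the jsonb.
CREATE TABLE x_a (
    x_id integer NOT NULL REFERENCES x (id),
    a_id integer NOT NULL REFERENCES a (id),
    PRIMARY KEY (x_id, a_id)
);

-- Filtering then becomes an ordinary join, no JSON functions needed:
SELECT x.*
FROM x
JOIN x_a ON x_a.x_id = x.id
JOIN a   ON a.id = x_a.a_id
WHERE a.kind = 'some_enum_value';

-- The junction rows could be (re)derived from the jsonb itself,
-- e.g. whenever an x row is written (assuming PostgreSQL 12+):
INSERT INTO x_a (x_id, a_id)
SELECT DISTINCT x.id, (ref ->> 'id')::int
FROM x,
     jsonb_path_query(x.doc, '$.** ? (@.table == "a")') AS ref
ON CONFLICT DO NOTHING;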
As a beginner-to-intermediate SQL programmer, I feel this would at least lead to more readable queries; however, I'm not sure whether it counts as violating the "only enter the data once" rule, since the same references would now live in both the jsonb and the junction tables.
Any and all input is welcome, including reading material on how to make these decisions.
The database is PostgreSQL.