Is there a good paper out there that surveys the model and data assumptions behind AI/ML approaches?

For example, in time-series modelling (estimation or prediction) with linear models or (G)ARCH/ARMA processes, the data has to satisfy a number of assumptions for the underlying model to be valid:

**Linear Regression**

- No autocorrelation in the residuals, a frequent problem when working with level data (check the ACF)
- Stationarity (unit roots lead to spurious regressions)
- Homoscedasticity
- Normally distributed error term (mean 0, finite variance)
- etc.
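Not part of the question, but to make the point concrete: a minimal sketch (simulated data, pure numpy rather than a stats package) of checking one of these assumptions, namely residual autocorrelation via the Durbin-Watson statistic after an OLS fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y depends linearly on x plus i.i.d. noise.
n = 500
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# OLS fit via least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic: values near 2 indicate no first-order
# autocorrelation; values far below 2 suggest positive autocorrelation
# in the errors, violating the assumption above.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"Durbin-Watson: {dw:.2f}")
```

With i.i.d. noise, as here, the statistic should land close to 2; on autocorrelated level data it typically drops well below that.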

**Autoregressive Models**

- Stationarity
- No autocorrelation in the squared residuals (otherwise ARCH effects are present)
- ...
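Again only an illustrative sketch (simulated data, hand-rolled Ljung-Box statistic rather than a library call): under ARCH-type effects the series itself can look uncorrelated while its squares are strongly autocorrelated, which is exactly what this diagnostic picks up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical ARCH(1) noise: the variance depends on the previous
# shock, so the *squared* series is autocorrelated even though the
# series itself is not.
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    sigma2 = 0.2 + 0.7 * e[t - 1] ** 2
    e[t] = np.sqrt(sigma2) * rng.normal()

def ljung_box_q(x, lags=10):
    """Ljung-Box Q statistic on series x for the given number of lags."""
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# Under H0 (no autocorrelation) Q is approximately chi^2(lags);
# for 10 lags the 95% critical value is about 18.3.
print("Q on levels :", round(ljung_box_q(e), 1))
print("Q on squares:", round(ljung_box_q(e ** 2), 1))
```

The Q statistic on the squared series should blow past the critical value, flagging the ARCH effect that a check on the levels alone would miss.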

With ML/AI approaches, by contrast, it feels as though you can throw in whatever input you like (my subjective impression) and be satisfied with the result as long as some prediction/estimation error metric is good enough (much like a high, but often misleading, R²).
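To sketch that "misleading metric" point (simulated data, not from any of the linked articles): on a random walk, the naive forecast "tomorrow = today" already achieves a very high R² in levels while carrying no information about the changes, which is the trap any model evaluated only on levels can fall into:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical random walk (unit root), length 2000.
y = np.cumsum(rng.normal(size=2000))

# "Model": predict the previous observed value.
pred, actual = y[:-1], y[1:]
ss_res = np.sum((actual - pred) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2_levels = 1 - ss_res / ss_tot

# The same "model" says the next *change* is always zero; on the
# stationary first differences it predicts nothing.
d = np.diff(y)
r2_diffs = 1 - np.sum(d ** 2) / np.sum((d - d.mean()) ** 2)

print(f"R^2 on levels:      {r2_levels:.3f}")
print(f"R^2 on differences: {r2_diffs:.3f}")
```

The near-perfect R² on levels is an artifact of the non-stationarity, not of predictive skill; the R² on differences exposes that.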

**What assumptions have to be satisfied for an RNN, CNN, or LSTM model applied to time-series prediction?**

Any thoughts?

**ADDED**

* [Good Article][1] describing my question/thoughts.
* [Medium Article][2] discussing model assumptions + tests, but not in the context of more advanced models
* I read the [100-page ML Book][3]; unfortunately, it contains almost nothing about model assumptions or how to test for them.


 [1]: https://towardsdatascience.com/how-not-to-use-machine-learning-for-time-series-forecasting-avoiding-the-pitfalls-19f9d7adf424
 [2]: https://towardsdatascience.com/the-importance-of-analyzing-model-assumptions-in-machine-learning-a1ab09fb5e76
 [3]: http://themlbook.com/