Yes, your understanding is correct: cross validation tests the predictive ability of different models by splitting the data into training and testing sets, and this helps check for overfitting.
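For concreteness, here is a minimal sketch of what that splitting looks like in practice, assuming scikit-learn is available; the data set and the logistic regression model are just placeholders:

```python
# Minimal k-fold cross validation sketch (scikit-learn assumed; the data set
# and the model are placeholders, not a recommendation).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# Each of the 5 folds is held out once as test set while the model is
# (re)fitted on the remaining 4 folds.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())
```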
Model selection or hyperparameter tuning is one purpose for which the CV estimate of predictive performance can be used. It is IMHO important not to confuse CV itself with the purpose for which its results are employed.
In the first place, cross validation yields an approximation of the generalization error (the expected predictive performance of a model on unseen data).
This estimate can be used either

- as an approximation of the generalization error of the model fitted on the whole data set with the same (hyper)parameter determination as was used for the CV surrogate models,
- or to select hyperparameters. If you do this, the CV estimate becomes part of model training, and you need an independent estimate of generalization error; see e.g. nested aka double cross validation (sketched below).
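As an illustration of nested (double) cross validation, a sketch along these lines is common; again scikit-learn is assumed, and the SVC model plus its parameter grid are arbitrary placeholders:

```python
# Nested (double) cross validation sketch: the inner CV selects the
# hyperparameter, the outer CV estimates the generalization error of the
# whole tuning-plus-fitting procedure.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)  # tuning (inner CV)
outer_scores = cross_val_score(inner, X, y, cv=5)                  # honest estimate (outer CV)
print(outer_scores.mean())
```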
As for overfitting within the model training procedure, CV helps but cannot work miracles. Keep in mind that cross validation results are themselves subject to variance (from various sources). Thus, with an increasing number of models/hyperparameter settings in the comparison, there is also an increased risk of accidentally observing very good prediction (due to the variance of the CV estimates) and being misled by it (see the one-standard-deviation rule for a heuristic against this, sketched below).
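To make that heuristic concrete, here is a rough sketch with made-up per-fold scores (higher is better) and candidates assumed ordered from simplest to most complex; the numbers are purely illustrative:

```python
import numpy as np

# cv_scores[i, j]: score of candidate i on fold j (made-up numbers).
# Candidates are assumed ordered from simplest (row 0) to most complex.
cv_scores = np.array([[0.80, 0.78, 0.82, 0.79, 0.81],
                      [0.84, 0.83, 0.85, 0.82, 0.84],
                      [0.86, 0.80, 0.88, 0.79, 0.87]])

means = cv_scores.mean(axis=1)
ses = cv_scores.std(axis=1, ddof=1) / np.sqrt(cv_scores.shape[1])

best = means.argmax()
threshold = means[best] - ses[best]
# Pick the simplest candidate whose mean score is within one standard error
# of the best candidate's mean score.
chosen = int(np.argmax(means >= threshold))
print(chosen, means, ses)
```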