The fundamental problem is that there is no good, unambiguous, agreed-upon definition of parametric vs. non-parametric tests.
I will quote Wikipedia's entry for non-parametric tests:
The term "nonparametric statistics" has been defined imprecisely in the following two ways, among others: The first meaning of nonparametric involves techniques that do not rely on data belonging to any particular parametric family of probability distributions.
...
The second meaning of non-parametric involves techniques that do not assume that the structure of a model is fixed.
The above first meaning relates to distribution-free methods, i.e. where the underlying data is not assumed to come from a specific parametric distribution (normal, Poisson, binomial, etc.).
And I have to admit that the second meaning completely escapes me.
I will also quote from Bradley's classic Distribution-Free Statistical Tests (1968, pp. 15–16):
The terms nonparametric and distribution-free are not synonymous, and neither term provides an entirely satisfactory description of the class of statistics to which they are intended to refer.…Roughly speaking, a nonparametric test is one which makes no hypothesis about the value of a parameter in a statistical density function, whereas a distribution-free test is one which makes no assumptions about the precise form of the sampled population. The definitions are not mutually exclusive, and a test can be both distribution-free and parametric.…
The next quote is from the 1962 Handbook of Nonparametric Statistics (p. 2):
A precise and universally acceptable definition of the term ‘nonparametric’ is not presently available. The viewpoint adopted in this handbook is that a statistical procedure is of a nonparametric type if it has properties which are satisfied to a reasonable approximation when some assumptions that are at least of a moderately general nature hold.
As you can see, this definition is so vague ("reasonable", "moderately") as to be completely unhelpful.
You can even find more "unusual" definitions, such as this one from the Handbook of Parametric and Nonparametric Statistical Procedures (Sheskin, 2000):
The distinction employed in this book for categorizing a procedure as a parametric versus a nonparametric test is primarily based on the level of measurement represented by the data that are being analyzed. As a general rule, inferential statistical tests that evaluate categorical/nominal data and ordinal/rank-order data are categorized as nonparametric tests, while those tests that evaluate interval data or ratio data are categorized as parametric tests. Although the appropriateness of employing level of measurement as a criterion in this context has been debated, its usage provides a reasonably simple and straightforward schema for categorization that facilitates the decision-making process for selecting an appropriate statistical test.
From the above, it should become clear that it is difficult to specifically define the term nonparametric (and therefore, the term parametric).
Let me add one final definition, which is incorrect, but which is nevertheless frequently encountered, even in respected sources: namely the definition that parametric tests are those which rely on assumptions of normality of the data. It can be found here or here, for example. There are plenty of parametric methods which assume the data belong to a Poisson, binomial, Weibull, etc. distribution. So if you encounter this definition ("does not require/assume normality of the data"), you can just stop there and dismiss anything that source says about the topic.
As alluded to in the wiki entry, a very common definition is the distribution-free one, i.e. methods which do not make assumptions about the frequency distribution of the variables to be evaluated (the underlying data is not assumed, or known, to come from any parametric distribution). This is the one I use, knowing full well that it is not universally accepted. Note that this "assumption" can be a certainty (e.g. a binomial assumption where the data are in fact known to be binomially distributed).
Note that this definition may lead to "odd" classifications; e.g. a $\chi^2$ test for a contingency table is non-parametric (no assumption about the data, only about the distribution of the statistic of interest), while a Fisher exact test for the exact same contingency table would be parametric (it assumes, in fact we know, that the data is binomial). OLS regression would also be non-parametric: computing the OLS coefficients is not even statistical but purely mathematical, solving a set of equations that are linear in the coefficients, obtained by setting the partial derivatives of the sum of squared residuals to zero. It becomes statistical only when making e.g. inferences about the coefficients, and then it relies on the normality of the residuals, not of the data... (see the sketch below).
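To make the contrast concrete, here is a minimal sketch (assuming Python with numpy/scipy available, and a made-up 2x2 table) that runs both tests on the same data, plus the purely algebraic OLS computation:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 contingency table (made-up counts).
table = np.array([[12, 5],
                  [7, 15]])

# Chi-squared test: no assumption about the data themselves; the
# distributional assumption is that the test *statistic* is
# (asymptotically) chi-squared distributed.
chi2_stat, chi2_p, dof, expected = chi2_contingency(table)

# Fisher exact test on the exact same table: the p-value is computed
# from a known discrete distribution of the cell counts.
odds_ratio, fisher_p = fisher_exact(table)
print(f"chi2 p = {chi2_p:.4f}, Fisher p = {fisher_p:.4f}")

# OLS coefficients via the normal equations: pure linear algebra; no
# statistical assumption enters until we do inference on the coefficients
# (and that inference concerns the residuals, not the data).
X = np.column_stack([np.ones(5), np.arange(5.0)])  # intercept + one regressor
y = np.array([1.0, 1.9, 3.2, 3.9, 5.1])            # made-up responses
beta = np.linalg.solve(X.T @ X, X.T @ y)
print("OLS coefficients:", beta)
```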
Now, your definition (basically, any distributional assumption at all, whether on the data, on the statistic, or perhaps on the residuals) would be too broad: for any hypothesis test, we need to assume or know, asymptotically or exactly, the distribution of the statistic (otherwise we cannot compute a p-value), so that definition would make all hypothesis tests parametric.
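To see the distinction, here is a minimal sketch (assuming Python and two small made-up samples) of a rank-sum test: no assumption at all is made about the distribution that generated the data, yet the p-value still comes from a distribution, namely the exactly known null distribution of the statistic:

```python
from itertools import combinations

x = [1.2, 3.4, 5.6]          # made-up sample A
y = [0.8, 1.0, 2.2, 2.5]     # made-up sample B

# Rank the pooled observations (no ties in this toy example).
ranks = {v: r for r, v in enumerate(sorted(x + y), start=1)}
w_obs = sum(ranks[v] for v in x)  # observed rank-sum statistic for sample A

# Under H0, every assignment of len(x) ranks to sample A is equally likely,
# so the null distribution of the statistic is known *exactly*, with no
# assumption about the distribution that generated the data.
n, k = len(x) + len(y), len(x)
all_w = [sum(c) for c in combinations(range(1, n + 1), k)]
p_one_sided = sum(w >= w_obs for w in all_w) / len(all_w)
print(f"W = {w_obs}, one-sided p = {p_one_sided:.3f}")
```

The same logic underlies the asymptotic case: the $\chi^2$ test simply replaces this exact enumeration with the asymptotic chi-squared distribution of its statistic.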
TL;DR: there are indeed conflicting definitions of parametric vs. non-parametric. Hence yes, many (but not all) would describe the $\chi^2$ test as non-parametric (because it makes distributional assumptions only about the statistic, not the data).