There are various flavours of Bayesian statistics. One of them is subjectivist (e.g., in the tradition of de Finetti). Subjectivist Bayesians hold that probability refers to an individual's state of belief and information, not to an underlying data-generating process, which can never be repeated infinitely often, as would be necessary to define a true frequentist probability. For this reason (and potentially some others that are harder to discuss), according to a subjectivist there is no such thing as a true underlying distribution. So the job of the subjectivist Bayesian in this problem is not to guess the underlying distribution, but rather to specify a distribution that summarises her belief and knowledge about the expected distribution of the data given $\mu$. Not only is $p(\mu)$ a prior choice; so is what you call $f_{x|\mu}$!
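To make this concrete, here is a small sketch (my own illustrative example, not from any particular source) in which both modelling choices are explicit: a Normal prior for $\mu$ and a Normal data model $f_{x|\mu}$, which happen to be conjugate, so the posterior for $\mu$ has a closed form.

```python
import numpy as np

# Illustration: BOTH the prior p(mu) and the data model f_{x|mu} are
# modelling choices. Here: prior mu ~ Normal(0, 10^2), data x_i ~ Normal(mu, 1).
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # some observed data (simulated here)

prior_mean, prior_sd = 0.0, 10.0   # choice 1: the prior p(mu)
sigma = 1.0                        # choice 2: f_{x|mu} = Normal(mu, sigma)

# Conjugate normal-normal update: posterior precision is the sum of
# prior precision and n data precisions.
n = len(x)
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + x.sum() / sigma**2)
print(post_mean, post_var**0.5)
```

With a vague prior (large `prior_sd`) the posterior mean is pulled almost entirely toward the sample mean; a different choice of either the prior or the data model would give a different posterior, which is the point.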
In fact, this is even the case in what many call "objectivist Bayes", as long as the probabilities are epistemic, i.e., they refer to a state of knowledge rather than to really existing underlying data-generating processes. The objectivist, too, has to choose an $f_{x|\mu}$ that expresses all existing information about the expected distribution of the data given $\mu$ (except that subjective belief is not supposed to play a role here; although in reality it is often hard to bring existing information into a suitable formal form without making some subjective choices).
These are the major streams of traditional Bayesian philosophy. These days, however, much Bayesian data analysis is based on the implicit assumption that there is a true underlying distribution, a position we have called "falsificationist Bayes" here: https://rss.onlinelibrary.wiley.com/doi/10.1111/rssa.12276
Even here (as in frequentism), the task is to specify a model that makes sense from a subject-matter perspective and that can then be checked, for example by comparing data generated from it with your actual data, as in so-called posterior predictive checks (hence "falsificationist").
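A posterior predictive check can be sketched in a few lines (a hedged illustration under the same conjugate normal-normal setup as above, with the sample standard deviation as test statistic; any statistic of interest could be used instead):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # the "actual" data (simulated)

# Posterior for mu under a Normal(0, 10^2) prior and Normal(mu, 1) data model
# (prior mean is 0, so its term drops out of the posterior-mean formula)
n, sigma, prior_sd = len(x), 1.0, 10.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (x.sum() / sigma**2)

# Draw mu from the posterior, simulate replicate datasets from the model,
# and compare a test statistic (here the sample SD) with the observed one.
T_obs = x.std(ddof=1)
T_rep = []
for _ in range(1000):
    mu = rng.normal(post_mean, post_var**0.5)
    x_rep = rng.normal(mu, sigma, size=n)
    T_rep.append(x_rep.std(ddof=1))

# Posterior predictive p-value; values near 0 or 1 flag model misfit
ppp = np.mean(np.array(T_rep) >= T_obs)
```

If the replicated datasets systematically fail to reproduce the observed statistic, the model is "falsified" in the sense above and should be revised.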
There is also the field of Bayesian nonparametrics, which works with very large models with potentially infinite-dimensional parameters that cover large subsets of the model space, in case you don't want to commit to a specific simple model. This is relevant regardless of whether your probability model is interpreted in an epistemic or a frequentist (underlying data-generating process) sense.
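As one concrete example of such a large model (my own illustration, using the standard stick-breaking construction), a Dirichlet process puts a prior on an infinite-dimensional space of distributions; in practice one truncates it at some level $K$:

```python
import numpy as np

# Truncated stick-breaking construction of a Dirichlet process draw:
# a random discrete distribution sum_k weights[k] * delta(atoms[k]),
# rather than a single fixed parametric family.
rng = np.random.default_rng(2)
alpha, K = 1.0, 100                       # concentration parameter, truncation level

betas = rng.beta(1.0, alpha, size=K)      # stick-breaking proportions
remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
weights = betas * remaining               # mixture weights, summing to ~1
atoms = rng.normal(0.0, 1.0, size=K)      # atom locations drawn from a N(0,1) base measure
```

Small `alpha` concentrates mass on a few atoms; large `alpha` spreads it out, so even this "noncommittal" prior still involves choices (base measure, concentration).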