As Michael Lew said, much depends on the nature of the study. If this is a preliminary study to identify matters for further detailed investigation, then you are the primary audience and you can proceed in whatever way makes sense to you. The risk in that case is that you might lead yourself down a blind alley by putting too much emphasis on an analyte whose apparent signal is in fact a false positive.
To answer your Question 1, FDR control makes much more sense here than control of the family-wise error rate via the Holm-Bonferroni procedure, which protects you (at the specified Type-I error rate) against making even a single false-positive call of "statistically significant." That's awfully stringent for a study like yours.*
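To see the difference concretely, here's a minimal sketch comparing Holm and BH adjustments on a set of made-up p-values (the values are hypothetical, not from your data), using the `multipletests` function in statsmodels:

```python
# Compare Holm (family-wise error rate) and Benjamini-Hochberg (FDR)
# corrections on a hypothetical set of 10 uncorrected p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.004, 0.012, 0.030, 0.040,
                  0.080, 0.150, 0.300, 0.550, 0.900])  # made-up values

holm_reject, holm_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
bh_reject, bh_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p, hr, br in zip(pvals, holm_reject, bh_reject):
    print(f"p = {p:.3f}   Holm reject: {hr}   BH reject: {br}")
```

With these particular numbers BH flags three tests while Holm flags two, which is the difference in stringency described above.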
Whether FDR control itself makes sense leads to your Question 4 about multicollinearity, a topic that typically receives far more attention than it deserves outside of very large-scale data mining. Multicollinearity of predictor variables in a regression model inflates the variances of individual coefficient estimates. What you have, however, is correlation among the outcome values.
The Benjamini-Hochberg (BH) FDR procedure assumes that the tests are independent. That probably isn't the case when you have multiple correlated outcomes, and the FDR control would then probably be too stringent. In gene-expression studies with many thousands of outcome variables, many of which can be highly correlated, FDR is a useful heuristic for limiting the number of top targets for future study. I'm not sure there's much value in interpreting the actual FDR rate itself, however.
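If you want a feel for how BH behaves when the tests are correlated, here is a purely illustrative simulation under an equicorrelated normal model with all nulls true; the number of tests and the correlation value are assumptions, not estimates from your data:

```python
# Simulate correlated test statistics with all null hypotheses true and
# apply BH, to see how often any (false) discovery is made. Equicorrelated
# normal model; purely illustrative.
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_tests, rho, n_sims = 10, 0.6, 2000
cov = np.full((n_tests, n_tests), rho)
np.fill_diagonal(cov, 1.0)

any_false_discovery = 0
for _ in range(n_sims):
    z = rng.multivariate_normal(np.zeros(n_tests), cov)  # all nulls true
    p = 2 * norm.sf(np.abs(z))                           # two-sided p-values
    reject, *_ = multipletests(p, alpha=0.05, method="fdr_bh")
    any_false_discovery += reject.any()

print(f"Simulations with at least one false discovery: "
      f"{any_false_discovery / n_sims:.3f}")
```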
If you do have substantial correlations among analytes, you might consider modeling their first few principal components instead.
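For instance, a sketch of that idea with scikit-learn, assuming your analyte concentrations are in a samples-by-analytes array or DataFrame called `analytes` (the name and the scaling step are assumptions):

```python
# Summarize correlated analytes with their first few principal components.
# `analytes` is assumed to be a (samples x analytes) array/DataFrame of
# concentrations; scaling puts all analytes on a common footing.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(analytes)  # center and scale each analyte
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                 # PC scores to model as outcomes
print(pca.explained_variance_ratio_)          # variance captured by each PC
```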
The answer to Question 3 should also give you pause about the implications of multiple-comparison correction here. If you are going to do such correction, it must cover all comparisons together. If you learned from your data that only 6 out of 10 biomarkers are "elevated," learning that required the equivalent of 10 comparisons (at least; more if you did all pairwise comparisons among the 3 groups). Even the BH procedure then compares the lowest uncorrected p-value against the FDR control rate divided by the number of comparisons; for example, with an FDR of 5% over 10 comparisons, the smallest p-value is compared against 0.05/10 = 0.005, as the sketch below shows.
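The arithmetic, spelled out:

```python
# BH step-up thresholds for q = 0.05 over m = 10 comparisons: the i-th
# smallest p-value p(i) is compared against q * i / m, and BH rejects all
# hypotheses up to the largest i with p(i) <= q * i / m. The threshold for
# the smallest p-value is therefore q / m = 0.005.
q, m = 0.05, 10
thresholds = [q * i / m for i in range(1, m + 1)]
print([round(t, 3) for t in thresholds])  # [0.005, 0.01, 0.015, ..., 0.05]
```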
Question 2 might best be answered by a different approach: model the CSF and serum values of an analyte together in a single model of the concentrations. Include sampleType as a categorical predictor with levels CSF and serum, and include an interaction of sampleType with group. That directly evaluates both systematic differences between CSF and serum and whether the CSF-serum difference varies across groups. Although p-values are reported for each coefficient in a model, they typically aren't corrected for multiple comparisons when the model as a whole is "significant."
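A sketch of that model in statsmodels, assuming a long-format DataFrame `df` with hypothetical columns `concentration`, `sampleType` (CSF or serum), `group`, and `subject`, and assuming each subject contributes both a CSF and a serum sample (hence the random intercept per subject):

```python
# One model per analyte, with CSF and serum concentrations stacked in long
# format. The sampleType-by-group interaction is what addresses Question 2.
import statsmodels.formula.api as smf

model = smf.mixedlm("concentration ~ C(sampleType) * C(group)",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # interaction terms test whether the CSF-serum
                         # difference varies across groups
```

If the two sample types don't come from the same individuals, an ordinary regression with the same interaction would serve the same purpose.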
Finally, as much depends on details specific to your study and the questions you are trying to answer, look for an experienced local statistician who can work through those details with you and provide guidance on study design, analysis, and write-up.
*I'm assuming that you used "analyte" as a synonym for "biomarker"; that is, you analyzed 10 biomarkers and found that 6 of them had elevated levels in some group.