@Dian breathe easy, it's really not too difficult. Let's work from familiar territory to the false discovery rate (FDR).
First, I see that you have a bunch of outcomes, with a varying number of predictors. Someone who is more familiar with multivariate regression (i.e. multiple dependent variables, assuming possible correlations between errors of different models) will have to speak to whether your modeling approach is the best one. Let's take it as given.
Each of your models will produce some number of $p$-values (incidentally, I am an epidemiologist, and have absolutely no idea what you mean by "only one $p$-value"; if that were true, it would change the nature of my work and that of my colleagues considerably :). You could go ahead and test your hypotheses about individual effects separately using these $p$-values.
Unfortunately, hypothesis testing is like the lottery (the more you play, the greater your chance to "win"), so if you want to go into each hypothesis test assuming that the null hypothesis is true, then you are in trouble, because $\alpha$ (your willingness to make/probability of making a false rejection of a true null hypothesis) only applies to a single test.
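For example, if you conduct $m=20$ independent tests, each at $\alpha = 0.05$, and all twenty null hypotheses are true, the probability of at least one false rejection is $1-(1-0.05)^{20} \approx 0.64$, not $0.05$.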
You may have heard of "the Bonferroni correction/adjustment", where you try to solve this conundrum by multiplying your $p$-values by the total number of null hypotheses you are testing (let's call that number of tests $m$). You are effectively trying to redefine $\alpha$ as a family-wise error rate (FWER), or the probability of making at least one false rejection out of a family of tests, assuming all null hypotheses are true. Alternatively, and equivalently, you can think about the Bonferroni adjustment as dividing $\alpha$ by $m$ (or $\alpha/2$ by $m$ if you are performing two-tailed tests, which in all likelihood you are in a regression context). We get these two alternatives because basing a rejection decision on $p \le \frac{\alpha/2}{m}$ is equivalent to $mp \le \frac{\alpha}{2}$.
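If it helps to see the arithmetic, here is a minimal sketch of the Bonferroni adjustment in Python; the $p$-values are made up purely for illustration:

```python
# Minimal sketch of the Bonferroni adjustment (illustrative p-values only).
p_values = [0.001, 0.012, 0.049, 0.20, 0.74]  # hypothetical p-values from your models
m = len(p_values)
alpha = 0.05

# Adjusted p-values: multiply each p by m (values above 1 are reported as p = 1).
bonferroni = [min(p * m, 1.0) for p in p_values]

# Equivalent decision rule: compare each raw p-value to alpha / m.
reject = [p <= alpha / m for p in p_values]

print(bonferroni)  # [0.005, 0.06, 0.245, 1.0, 1.0]
print(reject)      # [True, False, False, False, False]
```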
Of course, the Bonferroni technique is a blunt hammer. It positively hemorrhages statistical power. Šidák got a smidge more statistical power by altering the adjustment of the $p$-value to $1-(1-p)^{m}$. Holm improved upon both the Bonferroni and Šidák adjustments by creating a stepwise adjustment. Holm's step-up procedure for the Bonferroni adjustment:
- Compute the exact $p$-value for each test.
- Order the $p$-values from smallest to largest.
- For the first test, adjust the $p$-value to be $pm$; and generally:
- For the $i^{\text{th}}$ test, adjust the $p$-value to be $p\left(m-(i-1)\right)$.
- Using Holm's method, for all tests following the first test for which we fail to reject $H_{0}$, we will also fail to reject the null hypothesis.
The Holm-Šidák adjustment is similar, but you would adjust each $p$-value using $1-(1-p)^{m-(i-1)}$.
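Here is a short sketch of both stepwise adjustments just described (Holm-Bonferroni and Holm-Šidák), again with made-up $p$-values; the running maximum is just one way to implement the "once we fail to reject, every later test also fails to reject" rule:

```python
# Sketch of Holm's step-wise Bonferroni and Holm-Šidák adjustments
# (illustrative p-values only).
p_values = [0.001, 0.012, 0.049, 0.20, 0.74]
m = len(p_values)

order = sorted(range(m), key=lambda k: p_values[k])  # smallest to largest

holm = [0.0] * m
holm_sidak = [0.0] * m
running_b = 0.0
running_s = 0.0
for i, k in enumerate(order, start=1):
    p = p_values[k]
    adj_b = min(p * (m - (i - 1)), 1.0)        # Holm: p(m - (i - 1))
    adj_s = 1.0 - (1.0 - p) ** (m - (i - 1))   # Holm-Šidák: 1 - (1 - p)^(m - (i - 1))
    # The running maximum enforces the rule that once a test fails to reject,
    # every test with a larger p-value also fails to reject.
    running_b = max(running_b, adj_b)
    running_s = max(running_s, adj_s)
    holm[k] = running_b
    holm_sidak[k] = running_s

print(holm)        # adjusted p-values in the original order
print(holm_sidak)
```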
Some folks, most notably Benjamini and Hochberg (1995), were not comfortable with the world view implied by assuming that all null hypotheses are true within a stepwise procedure. Surely, they reasoned, if you make an adjustment and reject a single hypothesis, it is no longer sensible to assume that all of the remaining $m-1$ null hypotheses are true? Also, science in general does not assume that there are no relationships in the world: quite the opposite, in fact. Enter the FDR, which progressively assumes that the probability of rejection should increase once previous hypotheses have been rejected after adjustment. Here's the step-down procedure they proposed:
- Compute the exact $p$-value for each test.
- Order the $p$-values from largest to smallest (step-down!).
- For the first test ($i=1$), adjust the $p$-value to be $\frac{pm}{m-(1-1)} = p$.
- For the $i^{\text{th}}$ test, adjust the $p$-value to be $\frac{pm}{m-(i-1)}$.
- Using Benjamini & Hochberg’s method, we reject all tests including and following the first test for which we reject the null hypothesis.
We often term $p$-values that have been adjusted this way $q$-values.
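Here is a sketch of that step-down recipe, with the running minimum mirroring the rule that rejecting one test means rejecting every test with a smaller $p$-value (again, hypothetical $p$-values):

```python
# Sketch of the Benjamini-Hochberg step-down adjustment (q-values),
# following the recipe above (illustrative p-values only).
p_values = [0.001, 0.012, 0.049, 0.20, 0.74]
m = len(p_values)

order = sorted(range(m), key=lambda k: p_values[k], reverse=True)  # largest to smallest

q_values = [0.0] * m
running = 1.0
for i, k in enumerate(order, start=1):
    p = p_values[k]
    adj = p * m / (m - (i - 1))   # i-th step: pm / (m - (i - 1)); for i = 1 this is just p
    # The running minimum mirrors the rule that rejecting one test means
    # rejecting every subsequent (smaller) p-value as well.
    running = min(running, adj)
    q_values[k] = running

print(q_values)  # q-values in the original order
```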
The advantages of this FDR adjustment include (1) more statistical power, especially for large $m$, and (2) easy integration of additional tests/$p$-values (say, adding $p$-values from an additional regression model) in a manner which leaves the inferences from the first FDR adjustment unchanged.
Update: All of these FWER procedures, and the FDR procedure I just described, can produce adjusted $p$-values that are greater than one. Such values are typically reported as $p=1$, $p>.999$, "do not reject," or something along those lines.
References
Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300.