Statistical hypothesis testing is somewhat similar to the mathematical technique of 'proof by contradiction': if you want to prove something, then assume the opposite and derive from it a contradiction, i.e. something that is impossible.
In statistics 'impossible' does not exist, but some events are very 'improbable'. So in statistics, if you want to 'prove' something (i.e. $H_1$), then you assume the opposite (i.e. $H_0$) and, under the assumption that $H_0$ is true, try to derive something improbable. 'Improbable' is defined by the significance level that you choose.
If, assuming $H_0$ is true, you can find something very improbable, then $H_0$ cannot be true because it leads to a 'statistical contradiction'. Therefore $H_1$ must be true.
This implies that in statistical hypothesis testing you can only find evidence for $H_1$. If one cannot reject $H_0$, then the only conclusions you can draw are 'we cannot prove $H_1$' or 'we find no evidence that $H_0$ is false, so we accept $H_0$ (as long as we do not find evidence against it)'.
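To make the 'statistical contradiction' idea concrete, here is a minimal sketch in Python (a coin-flip example of my own, not part of the question): we test $H_0$: the coin is fair against $H_1$: it is biased towards heads, and call an outcome 'improbable' when its probability under $H_0$ falls below the chosen significance level.

```python
# Hypothetical example: H0: p = 0.5 (fair coin), H1: p > 0.5.
# We observe 9 heads in 10 flips and ask how improbable that outcome
# (or a more extreme one) is if H0 were true.
from scipy.stats import binom

n, observed_heads = 10, 9
# P(X >= 9 | H0) = P(X = 9) + P(X = 10) under a fair coin
p_value = binom.sf(observed_heads - 1, n, 0.5)
print(f"P(>= {observed_heads} heads out of {n} under H0) = {p_value:.4f}")  # ~0.0107

alpha = 0.05  # chosen significance level: the threshold for 'improbable'
if p_value < alpha:
    print("Improbable under H0 -> 'statistical contradiction' -> evidence for H1")
else:
    print("Not improbable enough -> we cannot reject H0")
```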
But there is more ... it is also about power.
Obviously, as nothing is impossible, one can draw wrong conclusions. We might find 'false evidence' for $H_1$, meaning that we conclude that $H_0$ is false while in reality it is true; this is a type I error, and the probability of making a type I error is equal to the significance level that you have chosen. One may also accept $H_0$ while in reality it is false; this is a type II error, and the probability of making one is denoted by $\beta$. The power of the test is defined as $1-\beta$, i.e. 1 minus the probability of making a type II error, which is the same as the probability of not making a type II error.
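The claim that the type I error rate equals the significance level can be checked with a small simulation; this is a sketch under assumed settings (a one-sided one-sample z-test with known standard deviation, my own choice and not from the question):

```python
# Simulate a one-sided z-test when H0 is actually true (mu = 0, sd = 1) and
# check that the fraction of (wrong) rejections matches the chosen alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, n_sim = 0.05, 30, 100_000
z_crit = norm.ppf(1 - alpha)                    # one-sided critical value

samples = rng.normal(loc=0.0, scale=1.0, size=(n_sim, n))   # data under H0
z = samples.mean(axis=1) / (1 / np.sqrt(n))     # z statistic for mu = 0
type_I_rate = np.mean(z > z_crit)               # fraction of false rejections
print(f"Observed type I error rate: {type_I_rate:.3f} (nominal alpha = {alpha})")
```

The observed rejection rate comes out close to 0.05, as it should: rejecting at significance level $\alpha$ means that, when $H_0$ is true, you will wrongly reject it in a fraction $\alpha$ of repeated experiments.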
So $\beta$ is the probability of accepting $H_0$ when $H_0$ is false; therefore $1-\beta$ is the probability of rejecting $H_0$ when $H_0$ is false, which is the same as the probability of rejecting $H_0$ when $H_1$ is true.
By the above, rejecting $H_0$ is finding evidence for $H_1$, so the power $1-\beta$ is the probability of finding evidence for $H_1$ when $H_1$ is true.
If you have a test with very high power (close to 1), then if $H_1$ is true the test would (almost surely) have found evidence for $H_1$. So if we do not find evidence for $H_1$ (i.e. we do not reject $H_0$) and the test has very high power, then probably $H_1$ is not true (and thus probably $H_0$ is true).
So what we can say is that if your test has very high power, then not rejecting $H_0$ is 'almost as good as' finding evidence for $H_0$.
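A short numeric sketch of this last point, again under assumed numbers of my own (one-sided z-test of $H_0$: $\mu = 0$ vs $H_1$: $\mu > 0$, known sd 1, true effect 0.5, $\alpha = 0.05$):

```python
# Power of a one-sided z-test for several sample sizes.  With large n the
# power is close to 1, so *not* rejecting H0 would be surprising if H1 held.
import numpy as np
from scipy.stats import norm

alpha, sigma, mu_true = 0.05, 1.0, 0.5   # assumed true mean under H1
z_crit = norm.ppf(1 - alpha)

for n in (10, 30, 100):
    # Reject when xbar > z_crit * sigma / sqrt(n); power = P(reject | mu = mu_true)
    power = 1 - norm.cdf(z_crit - mu_true * np.sqrt(n) / sigma)
    print(f"n = {n:3d}: power = {power:.3f}")
# n = 100 gives power close to 1: if H1 were true we would almost surely have
# rejected H0, so failing to reject then says something in favour of H0.
```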