
This is so trivial that authors usually don't bother to give an explicit proof. But for me there is some vagueness.

We say that two ensembles $X_n$ and $Y_n$ are statistically close if $$ \Delta(n) = \frac12 \sum_{\alpha}\bigl|\mathbb{P}[X_n = \alpha] - \mathbb{P}[Y_n = \alpha]\bigr| $$ is negligible in $n$. The probabilities are taken over the randomness of $X_n$ and $Y_n$, respectively.
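For concreteness, here is a small Python sketch of this definition for distributions with finite support (the representation as dictionaries of probabilities and the example values are only for illustration):

```python
# Illustrative sketch of the definition above, assuming finite supports and
# distributions given as dicts mapping outcomes to probabilities.

def statistical_distance(p, q):
    """Delta(p, q) = 1/2 * sum over all outcomes of |p(a) - q(a)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

# Example: a fair bit vs. a slightly biased bit.
p = {0: 0.5, 1: 0.5}
q = {0: 0.6, 1: 0.4}
print(statistical_distance(p, q))  # 0.1 (up to float rounding)
```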

We say that two ensembles are computationally indistinguishable if for every PPT adversary $D$, $$ \bigl|\mathbb{P}[D(X_n) \to 1] - \mathbb{P}[D(Y_n) \to 1]\bigr| $$ is negligible.

Why does the former imply the latter?

I do understand that for every deterministic function $f$ we have $\Delta(f(X), f(Y)) \le \Delta(X, Y),$ where $\Delta(\cdot, \cdot)$ is the statistical distance.
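For example, here is a toy numerical check of this inequality (the distributions and the function $f$ below are arbitrary, chosen only for illustration):

```python
# Toy check of Delta(f(X), f(Y)) <= Delta(X, Y) for a deterministic f,
# with finite distributions represented as dicts of probabilities.

def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def pushforward(p, f):
    """Distribution of f(A) when A is distributed according to p."""
    out = {}
    for a, pa in p.items():
        out[f(a)] = out.get(f(a), 0.0) + pa
    return out

X = {0: 0.4, 1: 0.3, 2: 0.3}
Y = {0: 0.1, 1: 0.5, 2: 0.4}
f = lambda a: a % 2  # merges outcomes 0 and 2

print(statistical_distance(X, Y))                                   # 0.3
print(statistical_distance(pushforward(X, f), pushforward(Y, f)))   # 0.2 <= 0.3
```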

But a PPT adversary $D$ is not deterministic: it has implicit random coins. Why can we treat the PPT algorithm $D$ as a deterministic function?


2 Answers


A probabilistic distinguisher is still a deterministic function of its input and random coins. So a probabilistic distinguisher trying to distinguish $X$ from $Y$ is equivalent to a deterministic distinguisher trying to distinguish $(X,R)$ from $(Y,R)$ where $R$ is a uniform distribution over random coins (importantly: independent of $X$/$Y$).

But:

\begin{align} \Delta\Bigl( (X,R), (Y,R) \Bigr) &= \frac12 \sum_{\alpha,r} \Bigl| \Pr[(X,R)=(\alpha,r)] - \Pr[(Y,R)=(\alpha,r)]\Bigr| \\ &= \frac12 \sum_{\alpha,r} \Bigl| \Pr[X=\alpha]\Pr[R=r] - \Pr[Y=\alpha]\Pr[R=r]\Bigr| \\ &= \frac12 \sum_{\alpha,r} \Bigl| \Pr[X=\alpha] - \Pr[Y=\alpha]\Bigr| \Pr[R=r] \\ &= \frac12 \sum_{\alpha} \Bigl| \Pr[X=\alpha] - \Pr[Y=\alpha]\Bigr| \;\underbrace{\sum_r \Pr[R=r]}_{=1} \\ &= \frac12 \sum_{\alpha} \Bigl| \Pr[X=\alpha] - \Pr[Y=\alpha]\Bigr| \\ &= \Delta(X,Y) \end{align}

In short, having access to some distribution that is independent of $X$/$Y$ doesn't help (or hurt) to distinguish $X$ from $Y$.
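This computation can also be checked numerically on a toy example (the distributions below are arbitrary, chosen only for illustration):

```python
# Numerical illustration: pairing X and Y with an independent coin
# distribution R leaves the statistical distance unchanged.
from itertools import product

def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def joint(p, r):
    """Distribution of (A, R) when A ~ p and R ~ r are independent."""
    return {(a, b): pa * rb for (a, pa), (b, rb) in product(p.items(), r.items())}

X = {0: 0.5, 1: 0.5}
Y = {0: 0.6, 1: 0.4}
R = {"r0": 0.25, "r1": 0.75}  # arbitrary distribution over random coins

print(statistical_distance(X, Y))                      # 0.1
print(statistical_distance(joint(X, R), joint(Y, R)))  # 0.1 as well
```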


Another way to see this would be to try and upper bound the distinguishing advantage for any distinguisher and relate that to the statistical distance.

Edit: since @Mikero's answer is really good, I will just give the ideas without full proofs.

Let $X$ and $Y$ be two random variables on the set $\mathcal{X}$. We denote by $\Delta^D(X;Y)$ the distinguishing advantage of a distinguisher $D$ with binary output, and by $\delta(X;Y)$ the maximum distinguishing advantage for $(X,Y)$ (i.e., the advantage of an optimal distinguisher).

We need to do two things:

  • give an "explicit description" of a deterministic distinguisher $\mathcal{D}$ that has advantage $\delta(X;Y)$;
  • show that $\delta(X;Y) = \Delta(X;Y)$.

The implication in the question then follows.

First, we give an explicit optimal deterministic distinguisher.

For $X$ with distribution $Pr_X[x], x \in \mathcal{X}$ and $Y$ with distribution $Pr_Y[x]$, intuitively an optimal deterministic distinguisher $\mathcal{D}(\cdot)$ would do the following:

  • $\mathcal{D}(x) = 0$ if $Pr_X[x] \geq Pr_Y[x]$
  • $\mathcal{D}(x) = 1$, otherwise

Let $\mathcal{X}^* = \{x: Pr_X[x] \geq Pr_Y[x]\}$. One can show that $$\Delta^{\mathcal{D}}(X;Y) = Pr[X \in \mathcal{X}^*] - Pr[Y \in \mathcal{X}^*] = \delta(X;Y),$$ i.e., this deterministic $\mathcal{D}$ is optimal.
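As a sanity check, here is a toy computation (values chosen arbitrarily, only for illustration) showing that the advantage computed via $\mathcal{X}^*$ coincides with the statistical distance:

```python
# Toy check: the advantage of the distinguisher based on
# X* = {x : Pr_X[x] >= Pr_Y[x]} equals the statistical distance Delta(X, Y).

def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def optimal_advantage(p, q):
    """Pr[X in X*] - Pr[Y in X*] for X* = {x : p(x) >= q(x)}."""
    support = set(p) | set(q)
    x_star = [a for a in support if p.get(a, 0.0) >= q.get(a, 0.0)]
    return sum(p.get(a, 0.0) for a in x_star) - sum(q.get(a, 0.0) for a in x_star)

X = {"a": 0.5, "b": 0.3, "c": 0.2}
Y = {"a": 0.2, "b": 0.5, "c": 0.3}
print(optimal_advantage(X, Y))      # 0.3
print(statistical_distance(X, Y))   # 0.3
```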

Second, we relate the distinguishing advantage to the statistical distance.

We have $\forall D,\ \Delta^D(X;Y) \leq \delta(X;Y)$ by definition, and on the other hand $\delta(X;Y) = \Delta(X;Y)$: indeed, $Pr[X \in \mathcal{X}^*] - Pr[Y \in \mathcal{X}^*] = \sum_{x \in \mathcal{X}^*} (Pr_X[x] - Pr_Y[x]) = \frac12 \sum_{x} |Pr_X[x] - Pr_Y[x]|$, because the positive and negative parts of $Pr_X - Pr_Y$ have the same total mass. Therefore $$\forall D, \quad \Delta^D(X;Y) \leq \Delta(X;Y).$$

In conclusion, the statistical distance upper bounds the advantage of any distinguisher, probabilistic distinguishers included.

  • So, does it not matter if the distinguisher runs in polynomial or exponential time? Commented Sep 20, 2019 at 8:17
  • @Hilder Indeed; in this case I did not restrict the distinguisher to a particular class (PPT) of distinguishers in order to make the argument. A naturally more interesting case would be to try and work out the advantage for a particular class, which in most cases is not easy. Commented Sep 25, 2019 at 7:21
  • With my way of proof I also get the answer that it does not matter how strong the distinguisher is: as long as the ensembles are statistically close, the advantage stays negligible. Commented Nov 14, 2022 at 15:29
