$\begingroup$

For those of you who don't know, this is the modular/discrete logarithm problem:

Calculate $b$ given the others: $a^b \equiv c \pmod d$

I am aware that no general polynomial-time algorithm is known for the discrete logarithm, making it intractable. ($O(n^2)$ and worse for $n$ digits is generally considered intractable, though exponential certainly is.)

Even though it isn't standard, for simplicity I'll denote the time complexities here by their first letters:

  • $M(n)$ for multiplication
  • $D(n)$ for division
  • $E(n)$ for exponentiation

I have two conditions to work with:

  • $d$ is a power of 2, so $d=2^n$ for some known $n$ (this is convenient because reduction modulo $d$, which costs $D(n)$, can be replaced by a bitwise AND, which is $O(n)$; see the snippet after this list)
  • $a$ and $c$ are odd, if not prime (which they probably will be). The oddness could be inferred anyway, since $a$ has to be odd to generate all other odd values, of which $c$ is one.
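
For instance, a quick check of the AND trick (the values here are arbitrary):

    n = 16
    d = 1 << n                     # d = 2**n
    x = 123456789
    assert x % d == x & (d - 1)    # reduction mod 2**n is a bitwise AND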

Naive brute-force search would then be $O(2^nE(n))$. The best any algorithm that I've heard of does is subexponential: still not polynomial, which is what I'm aiming for.
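
For reference, a minimal brute-force sketch in Python (the function name is mine), just to fix ideas:

    def dlog_bruteforce(a, c, n):
        """Try every exponent: O(2**n) modular exponentiations."""
        mod = 1 << n
        for b in range(mod >> 1):  # the order of a divides 2**(n-1)
            if pow(a, b, mod) == c:
                return b
        return None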

I have algorithms for some of the other (non-trivial) ones, namely:

(Modular) Division

  • $D(n)=O(n^2)$ or so (one possible approach is sketched after this list)
  • Tested with 65536-bit numbers (Python, took a few seconds)
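
For instance, the inverse of an odd $a$ modulo $2^n$ (division is then just a multiplication by it) can be computed by Newton/Hensel iteration; a sketch, not necessarily the exact algorithm I benchmarked:

    def invm2(a, n):
        """Inverse of odd a mod 2**n; the step x -> x*(2 - a*x) doubles
        the number of correct low bits (Newton/Hensel lifting)."""
        mask = (1 << n) - 1
        x = 1                      # a*x == 1 (mod 2), since a is odd
        bits = 1
        while bits < n:
            x = (x * (2 - a * x)) & mask
            bits *= 2
        return x

    assert (17 * invm2(17, 64)) & ((1 << 64) - 1) == 1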

(Modular) Exponentiation

  • $E(n)=O(M(n)\,n)$; in my case $M(n)=O(n^2)$, so $E(n)=O(n^3)$ (see the sketch after this list)
  • Tested with 16384-bit numbers (Python, took a few seconds)
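
And a minimal square-and-multiply sketch with the AND-based reduction (the function name is mine):

    def powm2(a, b, n):
        """a**b mod 2**n with O(n) multiplications, so E(n) = O(M(n)*n)."""
        mask = (1 << n) - 1        # reduction mod 2**n is a bitwise AND
        result = 1
        a &= mask
        while b:
            if b & 1:
                result = (result * a) & mask
            a = (a * a) & mask
            b >>= 1
        return result

    assert powm2(3, 5, 16) == pow(3, 5, 1 << 16)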

Discrete logarithm has me stumped. Since a power-of-two modulus allows various bit-twiddling hacks and other shortcuts that would normally never work, I think it can be done in polynomial time in this case. I have no idea how, though.

$\endgroup$
  • $\begingroup$ Your statement "$O(n^2)$ and worse for $n$ digits is intractable" is false. $\endgroup$ Commented Feb 12, 2017 at 20:58
  • $\begingroup$ A few remarks: (1) The asymptotic time complexities of multiplication, division and modular exponentiation are well researched and do not relate to the question, so I suggest removing all that. (2) The claim that quadratic complexity and above are "intractable" is questionable, to say the least: while that may be true for some problems in some disciplines, it is nowhere near general consensus, especially not in a cryptographic context. $\endgroup$ Commented Feb 12, 2017 at 20:59
  • $\begingroup$ In general, you cannot perform exponentiation with an $n$-bit exponent using $O(\log n)$ multiplications; thus $E(n)=O(n^2\log(n))$ is ill-justified, and is wrong on standard hardware. $D(n)=O(n\log(n))$ is wrong too, on standard hardware. Note: $\TeX$ has \log for $\log$. $\endgroup$ Commented Feb 12, 2017 at 21:27
  • $\begingroup$ Check out Magnus Daum's work on Solution Graphs. I've used it to build an extremely simple and fast discrete log for a power-of-two modulus. $\endgroup$ Commented Feb 17, 2017 at 21:57

2 Answers

$\begingroup$

Suppose multiplication of two $n$-bit integers can be done in $\mathrm M(n)$ operations.

Here's an $\mathcal O(n^2\cdot\mathrm M(n))$ algorithm for discrete logarithms modulo $2^n$, which is basically the Pohlig-Hellman algorithm adapted to this specific problem. (With naïve multiplication, the runtime therefore is in $\mathcal O(n^4)$.)

First, observe that the order of the group $(\mathbb Z/2^n)^\ast$ is $2^{n-1}$: An integer in $\{0,\dots,2^n-1\}$ is invertible modulo $2^n$ if and only if it is odd. However, that group is not cyclic; in fact, its exponent is $2^{n-2}$. So, set $k:=n-2$ and suppose $a$ has order $2^k$ modulo $2^n$. This is the maximal order one can achieve modulo $2^n$; in particular, there are no "primitive roots" modulo $2^n$ for $n\geq3$. This also implies that $b$ can only be uniquely determined modulo $2^k$.
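
For intuition, a quick numerical check of these facts in Python (the parameters are arbitrary):

    n = 10
    mod = 1 << n
    odds = range(1, mod, 2)
    # Every unit satisfies a**(2**(n-2)) == 1, so the group exponent divides 2**(n-2)...
    assert all(pow(a, 1 << (n - 2), mod) == 1 for a in odds)
    # ...and some element has order exactly 2**(n-2), so the exponent equals 2**(n-2).
    assert any(pow(a, 1 << (n - 3), mod) != 1 for a in odds)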

The algorithm essentially works by repeatedly "shifting out" the higher-valued bits in the exponent until there is only one unknown bit left, and then brute-forcing that bit. "Shifting out" bits can be done by raising $c$ to powers of two: Suppose the binary expansion of $b$ is $b=\sum_{i=0}^{k-1}2^ib_i$. Then, since $a^{2^l}\equiv1\pmod{2^n}$ for all $l\geq k$, $$ c^{2^i} = (a^b)^{2^i} = a^{2^ib} = a^{\sum_{j=0}^{k-1}2^{i+j}b_j} \equiv a^{\sum_{j=0}^{k-1-i}2^{i+j}b_j} \pmod{2^n} \text, $$ which only depends on the lower $k-i$ bits of $b$.

In particular, we can start out with $i=k-1$, which will give $$ c^{2^{k-1}} \equiv a^{2^{k-1}b_0}\pmod{2^n}\text, $$ and we can easily determine $b_0$ from this by trying both values $0$ and $1$ and checking which one satisfies the congruence. (Since we know it must be one of the two, plugging in just one value and seeing whether it matches is also sufficient.)

Subsequently using $i=k-2$ yields $$ c^{2^{k-2}} \equiv a^{2^{k-2}b_0+2^{k-1}b_1}\pmod{2^n}\text, $$ and since we already know $b_0$, we can compute $b_1$ by trying both (or only one of the) possible values.

Repeating this process yields all bits of $b$.
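
Here is a minimal Python sketch of the whole procedure (the function name is mine; it assumes $n\geq3$, that $a$ has maximal order $2^{n-2}$, and that $c$ really is a power of $a$):

    def dlog_pow2(a, c, n):
        """Find b with a**b == c (mod 2**n), 0 <= b < 2**(n-2)."""
        mod = 1 << n
        k = n - 2                        # the order of a is 2**k
        b = 0
        for i in range(k):               # recover bit i of b
            # Raising both sides to the power 2**(k-1-i) "shifts out"
            # all bits of b above bit i.
            lhs = pow(c, 1 << (k - 1 - i), mod)
            # Plug in the guess b_i = 1; if it fails, b_i must be 0.
            if pow(a, (b | (1 << i)) << (k - 1 - i), mod) == lhs:
                b |= 1 << i
        return b

For example, dlog_pow2(3, pow(3, 1234, 1 << 64), 64) returns 1234, since $3$ has maximal order modulo $2^n$ for $n\geq3$.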


As for the runtime: To determine a single bit, we need to compute $c^{2^i}\bmod2^n$ and $a^{\sum_{j=0}^{k-1-i}2^{i+j}b_j}\bmod2^n$. Using fast exponentiation, powers modulo $2^n$ can be computed in $\mathcal O(n\cdot\mathrm M(n))$ operations, hence as we need to figure out $k\in\mathcal O(n)$ bits, the total runtime is in $\mathcal O(n^2\cdot\mathrm M(n))$.

In practice, one can save a few operations by computing all the values $c^{2^i}\bmod 2^n$ and $a^{2^i}\bmod 2^n$ in advance using repeated squaring; see the sketch below. However, this does not improve the asymptotic complexity, as one still needs $\mathcal O(n)$ multiplications per bit of $b$ to combine the cached values of $a^{2^i}\bmod 2^n$ according to the concrete known bits of $b$.
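
The precomputation itself is just repeated squaring; a sketch (the helper name is mine):

    def square_cache(x, k, n):
        """x**(2**i) mod 2**n for i = 0, ..., k-1, using k-1 squarings."""
        mask = (1 << n) - 1
        out = [x & mask]
        for _ in range(k - 1):
            out.append((out[-1] * out[-1]) & mask)
        return out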

$\endgroup$
  • $\begingroup$ Wouldn't you need (at most) $i$ multiplications by powers of $a$ per round? As $i$ runs somewhere between $0$ and $n$, you end up using $\approx 1/2\cdot n^2$ multiplications instead, and therefore end up with complexity $\mathcal{O}(n^2\cdot M(n))$. I believe this is the more common statement. $\endgroup$ Commented Feb 13, 2017 at 14:35
  • $\begingroup$ @CurveEnthusiast Did you take into account that those powers of $a$ are always the same, and that the exponents are powers of two? We can precompute and cache $a^{2^i}\bmod2^n$ for all $i\in\{0,\dots,k-1\}$ in advance using the relation $a^{2^{i+1}}\bmod2^n=(a^{2^i}\bmod2^n)^2\bmod2^n$, hence in $\mathcal O(n)$ squarings. (This increases the memory usage from $\mathcal O(n)$ to $\mathcal O(n^2)$, though.) Does this make sense? $\endgroup$ Commented Feb 13, 2017 at 17:41
  • $\begingroup$ Hm, yeah, but you'd for example be computing $a^{2^{k-1}b_0}$, then $a^{2^{k-1}b_1}\cdot a^{2^{k-2}b_0}$, then $a^{2^{k-1}b_2}\cdot a^{2^{k-2}b_1}\cdot a^{2^{k-3}b_0}$, etc. You need respectively 0, 1, 2 multiplications for these, and the count runs from 0 to n-1 (or something close). The total number of multiplications is the sum of this sequence, which is of order $n^2$. $\endgroup$ Commented Feb 14, 2017 at 16:07
  • $\begingroup$ @CurveEnthusiast Yes, you are right; it seems I confused myself somewhere with the caching. I revised the answer. Thank you! $\endgroup$ Commented Feb 14, 2017 at 17:13
$\begingroup$

I came up with this method a long time ago, but I haven't seen it written up anywhere. It's inspired by Magnus Daum's solution-graph method for narrow T-functions. Imagine it as brute-forcing one bit at a time, with backtracking.

    #include <cstdint>

    typedef uint64_t u64;  // unsigned wraparound gives arithmetic mod 2^64 for free

    // g^x mod 2^64 by square-and-multiply.
    u64 powm(u64 g, u64 x) {
        u64 y = 1;
        while (x) {
            if (x & 1) y *= g;
            x >>= 1;
            g *= g;
        }
        return y;
    }

    // Finds y with g^y == x (mod 2^64) by guessing one bit at a time.
    // A wrong guess only affects bits at or above its position (the
    // T-function property), so a mismatch in the low bits prunes that
    // branch and forces a backtrack. Returns 0 on failure.
    u64 logm(u64 g, u64 x, u64 y = 0, int bit = 0) {
        u64 f = powm(g, y) ^ x;
        if (!f) return y;                  // exact match: done
        u64 m = (2ull << bit) - 1;         // mask of bits 0..bit
        if (!(f & m)) {                    // low bits agree: try this bit = 0
            if (auto q = logm(g, x, y, bit + 1)) return q;
        }
        y |= 1ull << bit;                  // now try this bit = 1
        if (!((powm(g, y) ^ x) & m)) {
            if (auto q = logm(g, x, y, bit + 1)) return q;
        }
        return 0;                          // dead end: backtrack
    }
$\endgroup$
  • $\begingroup$ Doesn't this just end up being Pohlig-Hellman (which also recovers the scalar one bit at a time)? The algorithm looks very similar at least. If not, could you explain the difference? $\endgroup$ Commented Mar 16, 2017 at 11:12
  • $\begingroup$ It's an application of symmetric cryptanalysis techniques to find a root of the function $f(x) = y \oplus g^x$ where $\oplus$ is bitwise XOR. Nothing number theoretic going on here, instead this method is relying on the property of T-functions to only propagate changes in one direction across the word. The fundamental difference is that it doesn't always guess a bit correctly and has to backtrack occasionally, but the probability it has to backtrack by more than a couple bits at a time is small. $\endgroup$ Commented Mar 16, 2017 at 19:28
  • $\begingroup$ This answer is mostly addressing the "bit twiddling hacks and other shortcuts" part of the question, and just for novelty. $\endgroup$ Commented Mar 16, 2017 at 22:59
