RSA: Attack and Defense (II)

This article first supplements two specific integer factorization methods, Fermat's factorization method and Pollard's rho algorithm, explaining the essence of each algorithm and its applicable scenarios, with Python reference implementations. Next, it analyzes in detail a classic low private exponent attack, Wiener's attack, elaborating on its mathematical basis, attack principle, and attack procedure, with a complete Python program. The article also cites a recent research paper that proposes a new upper bound on the private exponent under which Wiener's attack succeeds, and verifies this bound with a test case.

The enemy knows the system being used.
Claude Shannon (American mathematician, electrical engineer, computer scientist, and cryptographer known as the "father of information theory".)

Previous article: RSA: Attack and Defense (I)

Integer Factorization (Supplementary)

Even if the RSA modulus \(N\) is a very large number (with a sufficient number of bits), problems can still arise if the gap between the prime factors \(p\) and \(q\) is too small or too large. In such cases, there are specialized factorization algorithms that can efficiently recover \(p\) and \(q\) from the public modulus \(N\).

Fermat's Factorization Method

When the prime factors \(p\) and \(q\) are very close to each other, Fermat's factorization method can factorize the modulus \(N\) in a very short time. Fermat's factorization method is named after the French mathematician Pierre de Fermat. Its starting point is the fact that every odd integer can be represented as a difference of two squares, i.e. \[N=a^2-b^2\] Applying algebraic factorization to the right-hand side yields \((a+b)(a-b)\). If neither factor is one, it is a nontrivial factor of \(N\). For the RSA modulus \(N\), assuming \(p>q\), correspondingly \(p=a+b\) and \(q=a-b\). In turn, it can be deduced that \[N=\left({\frac {p+q}{2}}\right)^{2}-\left({\frac {p-q}{2}}\right)^{2}\]

The idea of Fermat's factorization method is to start from \(\lceil{\sqrt N}\rceil\), try successive values of \(a\), and test whether \(a^{2}-N=b^{2}\) for some integer \(b\). If so, the two nontrivial factors \(p\) and \(q\) are found. The number of steps required by this method is approximately \[{\frac{p+q}{2}}-{\sqrt N}=\frac{({\sqrt p}-{\sqrt q})^{2}}{2}=\frac{({\sqrt N}-q)^{2}}{2q}\] In general, Fermat's factorization method is not much better than trial division; in the worst case it may be slower. However, when the difference between \(p\) and \(q\) is not large, so that \(q\) is very close to \(\sqrt N\), the number of steps becomes very small. In the extreme case, if the difference between \(q\) and \(\sqrt N\) is less than \({\left(4N\right)}^{\frac 1 4}\), this method finishes in a single step.
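To make the procedure concrete, here is a tiny worked example added for illustration (the numbers are not related to the RSA test below). Take \(N=5959\), so \(\lceil\sqrt{5959}\rceil=78\): \[\begin{align} 78^2-5959&=125 &&\text{not a square}\\ 79^2-5959&=282 &&\text{not a square}\\ 80^2-5959&=441=21^2 && \end{align}\] Hence \(a=80\) and \(b=21\), giving \(p=a+b=101\) and \(q=a-b=59\), and indeed \(101\times 59=5959\).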

Below is a Python implementation of Fermat's factorization method, and an example of applying it to factorize the RSA modulus N:

import gmpy2
import time

def FermatFactor(n):
    assert n % 2 != 0

    a = gmpy2.isqrt(n) + 1
    b2 = gmpy2.square(a) - n

    while not gmpy2.is_square(b2):
        a += 1
        b2 = gmpy2.square(a) - n

    b = gmpy2.isqrt(b2)
    return a + b, a - b

p = 7422236843002619998657542152935407597465626963556444983366482781089760760914403641211700959458736191688739694068306773186013683526913015038631710959988771
q = 7422236843002619998657542152935407597465626963556444983366482781089760759017266051147512413638949173306397011800331344424158682304439958652982994939276427
N = p * q
print("N =", N)

start = time.process_time()
(p1, q1) = FermatFactor(N)
end = time.process_time()
print(f'Elapsed time {end - start:.3f}s.')

assert p == p1
assert q == q1

The FermatFactor() function defined at the beginning of the program implements Fermat's factorization method. It calls three library functions of gmpy2: isqrt() to find the integer square root, square() to perform the squaring operation, and is_square() to check whether the input is a perfect square. Two large prime numbers \(p\) and \(q\) of 154 decimal digits each are then defined, and multiplying them gives \(N\). \(N\) is fed into the FermatFactor() function and the program starts timing. When the function returns, the elapsed time is printed and the factorization is verified.

N = 55089599753625499150129246679078411260946554356961748980861372828434789664694269460953507615455541204658984798121874916511031276020889949113155608279765385693784204971246654484161179832345357692487854383961212865469152326807704510472371156179457167612793412416133943976901478047318514990960333355366785001217
Elapsed time 27.830s.

As can be seen, in less than half a minute, this large number of 308 decimal digits (about 1024 bits) was successfully factorized! Going back and examining \(p\) and \(q\), one can see that the first 71 digits of these two 154-digit primes are exactly the same. This is exactly the scenario in which Fermat's factorization method exerts its power. If you simply modify the FermatFactor() function to save the starting value of \(a\) and compare it with the value at the end of the loop, you get a loop count of 60613989. With such a small iteration count, it is no wonder the factorization completes so quickly.

Therefore, the large primes \(p\) and \(q\) must not only be chosen at random but also be far enough apart. After the two primes have been generated, the difference between them should be checked; if it is too small, the pair must be regenerated, to prevent attackers from factoring the modulus with Fermat's method.
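A key-generation routine can enforce this check defensively. Below is a minimal sketch added for illustration (the function name and threshold are assumptions, not part of any library); it follows the spirit of the FIPS 186-4 requirement \(|p-q|>2^{nlen/2-100}\) for an \(nlen\)-bit modulus:

def primes_far_enough_apart(p: int, q: int, nlen: int) -> bool:
    """Illustrative check: reject prime pairs whose difference is so small
    that Fermat's factorization method would become practical."""
    return abs(p - q) > (1 << (nlen // 2 - 100))

# For the 1024-bit modulus factorized above, the pair (p, q) fails this
# check and would have to be regenerated:
#   primes_far_enough_apart(p, q, 1024)  -> False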

Pollard's Rho Algorithm

On the opposite end, if the gap between the large prime factors \(p\) and \(q\) is too large, they may be cracked by Pollard's rho algorithm. This algorithm was invented by British mathematician John Pollard1 in 1975. It requires only a small amount of storage space, and its expected running time is proportional to the square root of the smallest prime factor of the composite number being factorized.

The core idea of Pollard's rho algorithm is to use collisions in a pseudorandom sequence to find a factor; its randomized, iterative nature allows it to factorize integers with relatively low complexity and almost no memory. First, for \(N=pq\), assume that \(p\) is the smaller nontrivial factor. The algorithm defines a polynomial modulo \(N\) \[f(x)=(x^{2}+c)\bmod N\] A pseudorandom sequence can be generated by iterating this polynomial, with the generation formula \(x_{n+1}=f(x_n)\). For example, given an initial value \(x_0=2\) and a constant \(c=1\), it follows that \[\begin{align} x_1&=f(2)=5\\ x_2&=f(x_1)=f(f(2))=26\\ x_3&=f(x_2)=f(f(f(2)))=677\\ \end{align}\] For two numbers \(x_i\) and \(x_j\) in the generated sequence, if \(x_i\neq x_j\) and \(x_i\equiv x_j{\pmod p}\), then \(|x_i-x_j|\) must be a multiple of \(p\), and computing \(\gcd(|x_i-x_j|,N)\) yields \(p\). By the birthday paradox, after generating about \(\sqrt p\) numbers one expects two of them to collide modulo \(p\), thus successfully factorizing \(N\). However, the time complexity of performing pairwise comparisons is still unsatisfactory, and storing so many numbers is also troublesome when \(N\) is large.

How to solve these problems? This is where the ingenuity of Pollard's rho algorithm lies. Pollard found that the sequence generated by this pseudorandom number generator has two properties:

  1. Since each number depends only on the value that precedes it, and the numbers generated under the modular operation are finite, the sequence sooner or later enters a cycle. The resulting sequence forms a directed graph shaped like the Greek letter \(\rho\), from which the algorithm takes its name. (Figure: cycle diagram resembling the Greek letter ρ.)
  2. When \(|x_i-x_j| \equiv 0 \pmod p\), there must be \[|f(x_i)-f(x_j)|=|{x_i}^2-{x_j}^2|=|x_i+x_j|\cdot|x_i-x_j|\equiv 0 \pmod p\] This shows that once two numbers in the sequence collide modulo \(p\), every later pair with the same spacing also collides modulo \(p\).

Drawing on these two properties, Pollard applied Floyd's cycle-finding algorithm (also known as the tortoise and hare algorithm) and set up a fast node \(x_h\) and a slow node \(x_t\). Starting from the same initial value \(x_0\), the slow node \(x_t\) moves one step along the sequence at a time, while the fast node \(x_h\) moves forward two steps at a time, i.e. \[\begin{align} x_t&=f(x_t)\\ x_h&=f(f(x_h))\\ \end{align}\] After each move, compute \(\gcd(|x_h-x_t|,N)\); a result greater than 1 and less than \(N\) is the factor \(p\), otherwise continue with the same steps. With this design, each move effectively checks a new node spacing, so pairwise comparisons are unnecessary. If no factor is found, the fast and slow nodes eventually meet on the cycle, at which point the greatest common divisor comes out as \(N\). The algorithm's recommendation in that case is to exit and regenerate the pseudorandom sequence with a different initial value or constant \(c\) and try again.
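As a small worked example added for illustration, take \(N=8051=83\times 97\), \(x_0=2\) and \(c=1\). Running the tortoise and hare updates gives \[\begin{align} x_t&=5, & x_h&=26, & \gcd(|26-5|,8051)&=1\\ x_t&=26, & x_h&=7474, & \gcd(|7474-26|,8051)&=1\\ x_t&=677, & x_h&=871, & \gcd(|871-677|,8051)&=97\\ \end{align}\] so the third step already reveals the nontrivial factor 97, and indeed \(8051=83\times 97\).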

This is the classic Pollard's rho algorithm. Its time complexity is \(O(\sqrt p\,\log N)\) (the \(\log\) factor comes from the required \(\gcd\) operations). For an RSA modulus \(N\), obviously \(p\leq \sqrt N\), so the upper bound on the time complexity can be written as \(O(N^{\frac 1 4}\log N)\). This expression indicates that the smaller the smallest prime factor of the composite number being factorized, the faster the factorization is expected to be. An excessively small \(p\) is therefore extremely unsafe.

Programming Pollard's rho algorithm is not difficult. The following Python code shows an implementation of the algorithm, PollardRhoFactor(), together with some test cases.

import gmpy2
import time

def PollardRhoFactor(n, seed, c):

    if n % 2 == 0: return 2
    if gmpy2.is_prime(n): return n

    while True:
        f = lambda x: (x**2 + c) % n
        t = h = seed
        d = 1

        while d == 1:
            t = f(t)        # Tortoise
            h = f(f(h))     # Hare
            d = gmpy2.gcd(h - t, n)

        if d != n:
            return d        # find a non-trivial factor

        # start a new round with updated seed and c
        seed = h
        c += 1

N = [10967535067, 18446744073709551617, 97546105601219326301,
     780002082420246798979794021150335143]

print(f"{'N':<37}{'P':<16}{'Elapsed Time (s)':}")
for i in range(0, len(N)):
    start = time.process_time()
    p = PollardRhoFactor(N[i], 2, 1)
    end = time.process_time()
    print(f'{N[i]:<37}{p:<16}{end - start:16.3f}')

F8 = 2**(2**8) + 1 # A 78-digit Fermat number
start = time.process_time()
p = PollardRhoFactor(F8, 2, 1)
end = time.process_time()
print(f'\nF8 = {F8}\np = {p}\nElapsed time {end - start:.3f}s')

The function PollardRhoFactor() accepts three arguments: n is the composite number to be factorized, seed is the initial value of the pseudorandom sequence, and c is the constant in the generating polynomial. Internally the function uses two while statements to form a double loop: the outer loop defines the generating polynomial f and the fast and slow nodes h and t, while the node-moving steps and the greatest common divisor computation are implemented in the inner loop. The inner loop ends only when the greatest common divisor d is no longer 1. At this point, if d is not equal to n, the function returns the nontrivial factor d. Otherwise, d equals n, meaning the fast and slow nodes have met on the cycle. In this situation, the code in the outer loop resets seed to the value of the fast node and increments c, thus starting a new round of search.

Running the above code on a MacBook Pro (2019), the output is as follows

N                                    P               Elapsed Time (s)
10967535067                          104729                     0.001
18446744073709551617                 274177                     0.002
97546105601219326301                 9876543191                 0.132
780002082420246798979794021150335143 244300526707007            6.124

F8 = 115792089237316195423570985008687907853269984665640564039457584007913129639937
p = 1238926361552897
Elapsed time 64.411s

This result proves the effectiveness of Pollard's rho algorithm. In particular, for the last test, the input to the function was the Fermat number \(F_8\) (defined as \(F_{n}=2^{2^{n}}+1\), where \(n\) is a non-negative integer). In 1980, Pollard and the Australian mathematician Richard Brent2, working together, applied this algorithm to factorize \(F_8\) for the first time. The factorization took 2 hours on a UNIVAC 1100/42 computer. And now, on a commercial off-the-shelf laptop, Pollard's rho algorithm revealed the smaller prime factor 1238926361552897 of \(F_8\) in 64.4 seconds.

Subsequently, Pollard and Brent made a further improvement to the algorithm. They observed that if \(\gcd(d, N)>1\), then for any positive integer \(k\) we also have \(\gcd(kd, N)>1\). So multiplying \(k\) consecutive values of \((|x_h-x_t| \bmod N)\) together, reducing the product modulo \(N\), and then taking a single greatest common divisor with \(N\) should give the same result. This replaces \(k\) \(\gcd\) computations with \((k-1)\) multiplications modulo \(N\) and a single \(\gcd\), thus achieving a speedup. The downside is that it occasionally causes the algorithm to fail by introducing a repeated factor; when that happens, it suffices to reset \(k\) to 1 and fall back to the regular Pollard's rho algorithm.

The following Python function implements the improved Pollard's rho algorithm. It adds an extra for loop to implement the multiplication of \(k\) consecutive differences modulo \(N\), with the resulting product stored in the variable mult. mult is fed to the greatest common divisor function with \(N\), and the result is assigned to d for further check. If this fails, \(k\) is set to 1 in the outer loop.

def PollardRhoFactor2(n, seed, c, k):

    if n % 2 == 0: return 2
    if gmpy2.is_prime(n): return n

    while True:
        f = lambda x: (x**2 + c) % n
        t = h = seed
        d = 1

        while d == 1:
            mult = 1
            for _ in range(k):
                t = f(t)        # Tortoise
                h = f(f(h))     # Hare
                mult = (mult * abs(h - t)) % n

            d = gmpy2.gcd(mult, n)

        if d != n:
            return d            # find a non-trivial factor

        # start a new round with updated seed and c
        seed = h
        c += 1
        k = 1                   # fall back to regular rho algorithm

print(f"{'N':<37}{'P':<16}{'Elapsed Time (s)':}")
for i in range(0, len(N)):
    start = time.process_time()
    p = PollardRhoFactor2(N[i], 2, 1, 100)
    end = time.process_time()
    print(f'{N[i]:<37}{p:<16}{end - start:16.3f}')

F8 = 2**(2**8) + 1 # A 78-digit Fermat number
start = time.process_time()
p = PollardRhoFactor2(F8, 2, 1, 100)
end = time.process_time()
print(f'\nF8 = {F8}\np = {p}\nElapsed time {end - start:.3f}s')

Using the same test cases, with \(k\) set to 100, the program output is as follows

N                                    P               Elapsed Time (s)
10967535067                          104729                     0.001
18446744073709551617                 274177                     0.002
97546105601219326301                 9876543191                 0.128
780002082420246798979794021150335143 244300526707007            5.854

F8 = 115792089237316195423570985008687907853269984665640564039457584007913129639937
p = 1238926361552897
Elapsed time 46.601s

It can be seen that for relatively small composite \(N\) the improvement is not significant, but as \(N\) becomes larger the speedup grows noticeable. For the 78-digit Fermat number \(F_8\), the improved Pollard's rho algorithm takes only 46.6 seconds, a speedup of more than 27% over the regular algorithm.

To summarize the above analysis, implementation, and testing of Pollard's rho algorithm: a numerical lower bound must be set for the primes \(p\) and \(q\) generated for RSA. If either of them is too small, it must be regenerated, otherwise the modulus may be cracked by an attacker applying Pollard's rho algorithm.

Low Private Exponent Attack

For some particular application scenarios (e.g., smart cards and IoT devices), limited by the computational capability and low-power requirements of the device, a smaller private exponent \(d\) is favored for fast decryption or digital signing. However, a very low private exponent is very dangerous: there are clever attacks that can completely break such an RSA cryptosystem.

Wiener's Attack

In 1990, Canadian cryptographer Michael J. Wiener conceived an attack scheme3 based on continued fraction approximation that can effectively recover the private exponent \(d\) from the RSA public key \((N, e)\) under certain conditions. Before explaining how this attack works, it is important to briefly introduce the concept and key properties of continued fraction.

Continued Fraction

A continued fraction is itself just a mathematical expression, but it introduces a new perspective on the study of real numbers. The following is a typical continued fraction \[x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots\,}}}\] where \(a_{0}\) is an integer and all other \(a_{i}\,(i=1,\ldots ,n)\) are positive integers. The continued fraction can be abbreviated as \(x=[a_0;a_1,a_2,\ldots,a_n]\). Continued fractions have the following properties:

  1. Every rational number can be expressed as a finite continued fraction, i.e., one with a finite number of \(a_{i}\); every irrational number has an essentially unique representation as an infinite continued fraction. Here are two examples: \[\begin{align} \frac {68} {75}&=0+\cfrac {1} {1+\cfrac {1} {\small 9+\cfrac {1} {\scriptsize 1+\cfrac {1} {2+\cfrac {1} {2}}}}}=[0;1,9,1,2,2]\\ \pi&=[3;7,15,1,292,1,1,1,2,\ldots] \end{align}\]

  2. To calculate the continued fraction representation of a positive rational number \(f\), first subtract the integer part of \(f\), then find the reciprocal of the difference and repeat till the difference is zero. Let \(a_i\) be the integer quotient, \(r_i\) be the difference of the \(i\)th step, and \(n\) be the number of steps, then \[\begin{align} a_0 &= \lfloor f \rfloor, &r_0 &= f - a_0\\ a_i&={\large\lfloor} \frac 1 {r_{i-1}} {\large\rfloor}, &r_i &=\frac 1 {r_{i-1}} - a_i \quad (i = 1, 2, ..., n)\\ \end{align}\] The corresponding Python function implementing the continued fraction expansion of rationals is as follows

    def cf_expansion(nm: int, dn: int) -> list:
        """ Continued fraction expansion of a rational number
        Parameters:
            nm - numerator
            dn - denominator
        Return:
            List for the abbreviated notation of the continued fraction
        """
        cf = []
        a, r = nm // dn, nm % dn
        cf.append(a)

        while r != 0:
            nm, dn = dn, r
            a = nm // dn
            r = nm % dn
            cf.append(a)

        return cf

  3. For both rational and irrational numbers, the initial segments of their continued fraction representations produce increasingly accurate rational approximations. These rational numbers are called the convergents of the continued fraction. The even-indexed convergents keep increasing but are always less than the original number, while the odd-indexed ones keep decreasing but are always greater than the original number. Denote the numerator and denominator of the \(i\)-th convergent as \(h_i\) and \(k_i\) respectively, and define \(h_{-1}=1,h_{-2}=0\) and \(k_{-1}=0,k_{-2}=1\); then the convergents can be computed with the recursion \[h_i=a_ih_{i-1}+h_{i-2},\qquad k_i=a_ik_{i-1}+k_{i-2}\] For the example \(\frac{68}{75}=[0;1,9,1,2,2]\) this gives \[\begin{align} \frac {h_0} {k_0} &= [0] = \frac 0 1 = 0<\frac {68} {75}\\ \frac {h_1} {k_1} &= [0;1] = \frac 1 1 = 1>\frac {68} {75}\\ \frac {h_2} {k_2} &= [0;1,9] = \frac 9 {10}<\frac {68} {75}\\ \frac {h_3} {k_3} &= [0;1,9,1] = \frac {10} {11}>\frac {68} {75}\\ \frac {h_4} {k_4} &= [0;1,9,1,2] = \frac {29} {32}<\frac {68} {75}\\ \end{align}\] It can be verified that these convergents satisfy the aforementioned property and get closer and closer to the true value. The following Python function implements a convergent generator for a given continued fraction expansion; it yields tuples consisting of the convergent's numerator and denominator.

    def cf_convergent(cf: list) -> (int, int):
        """ Calculates the convergents of a continued fraction
        Parameters:
            cf - list for the continued fraction expansion
        Return:
            A generator object of the convergent tuples
            (numerator, denominator)
        """
        nm = []  # Numerators
        dn = []  # Denominators

        for i in range(len(cf)):
            if i == 0:
                ni, di = cf[i], 1
            elif i == 1:
                ni, di = cf[i]*cf[i-1] + 1, cf[i]
            else:  # i > 1
                ni = cf[i]*nm[i-1] + nm[i-2]
                di = cf[i]*dn[i-1] + dn[i-2]

            nm.append(ni)
            dn.append(di)
            yield ni, di

  4. Regarding the convergents of continued fractions, there is also an important theorem due to Legendre4: let \(a∈ \mathbb Z, b ∈ \mathbb Z^+\) with \(\gcd(a,b)=1\); if \[\left\lvert\,f - \frac a b\right\rvert< \frac 1 {2b^2}\] then \(\frac a b\) is a convergent of the continued fraction of \(f\). (A quick check of this on the \(\frac {68} {75}\) example follows this list.)
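As a quick sanity check, the two helper functions above can be run on the \(\frac {68} {75}\) example (a small usage sketch added here; Fraction from the Python standard library is used only for exact arithmetic). For instance, the convergent \(\frac {29} {32}\) satisfies \(\left\lvert\frac {68} {75}-\frac {29} {32}\right\rvert=\frac 1 {2400}<\frac 1 {2\cdot 32^2}\), consistent with Legendre's theorem.

from fractions import Fraction

f = Fraction(68, 75)
cf = cf_expansion(68, 75)       # [0, 1, 9, 1, 2, 2]
for a, b in cf_convergent(cf):
    # Does this convergent lie within Legendre's bound 1/(2*b^2)?
    within = abs(f - Fraction(a, b)) < Fraction(1, 2 * b * b)
    print(f'{a}/{b}  within Legendre bound: {within}')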

Attack Mechanism

Now let us analyze how Wiener's attack works. From the relationship between the RSA public and private exponents \(ed\equiv 1 {\pmod {\varphi(N)}}\), it can be deduced that there exists an integer \(k\) such that \[ed - k\varphi(N) = 1\] Dividing both sides by \(d\varphi(N)\) gives \[\left\lvert\frac e {\varphi(N)} - \frac k d\right\rvert = \frac 1 {d{\varphi(N)}}\] Careful observation of this formula reveals that because \(\varphi(N)\) itself is very large, and \(\gcd(k,d)=1\), \(\frac k d\) is a very close approximation of \(\frac e {\varphi(N)}\) in lowest terms. In addition, \[\varphi(N)=(p-1)(q-1)=N-(p+q)+1\] differs from \(N\) by relatively little, so \(\frac k d\) and \(\frac e N\) also do not differ by much. Since RSA's \((N,e)\) are public, Wiener's bold idea was this: if \(\pmb{\frac e N}\) is expanded into a continued fraction, it is quite possible that \(\pmb{\frac k d}\) is one of its convergents!

So how can one verify whether a given convergent is indeed \(\frac k d\)? With \(k\) and \(d\) in hand, \(\varphi (N)\) can be calculated, thereby obtaining \(p+q\). Since both \(p+q\) and \(pq=N\) are then known, solving a simple quadratic equation5 yields \(p\) and \(q\), as shown below. If their product equals \(N\), then \(k\) and \(d\) are correct and the attack succeeds.
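Concretely, writing \(s=p+q=N-\varphi(N)+1\), the primes \(p\) and \(q\) are the two roots of \(x^2-sx+N=0\), i.e. \[p,q=\frac s 2 \pm \sqrt{\left(\frac s 2\right)^{2}-N}\] If the value under the square root is not a perfect square, or the recovered roots do not multiply back to \(N\), the convergent under test is rejected.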

What are the conditions for Wiener's attack to work? Referring to Legendre's theorem mentioned above, it can be deduced that if \[\left\lvert\frac e N - \frac k d\right\rvert < \frac 1 {2{d^2}}\] then \(\frac k d\) must be a convergent of \(\frac e N\). This formula can also be used to derive an upper bound of the private exponent d for a feasible attack. Wiener's original paper states the upper bound as \(N^{\frac 1 4}\), but without detailed analysis. In 1999, American cryptographer Dan Boneh6 provided the first rigorous proof of the upper bound, showing that under the constraints \(q<p<2q\) and \(e<\varphi(N)\), Wiener's attack applies for \(d<\frac 1 3 N^{\frac 1 4}\). In a new paper published in 2019, several researchers at the University of Wollongong in Australia further expanded the upper bound under the same constraints to \[d\leq \frac 1 {\sqrt[4]{18}} N^\frac 1 4=\frac 1 {2.06...}N^\frac 1 4\]

Note that for simplicity, the above analysis of Wiener's attack mechanism is based on the Euler totient function \(\varphi (N)\). In reality, RSA key pairs are often generated using the Carmichael function \(\lambda(N)\). The relationship between the two is \[\varphi (N)=\lambda(N)\cdot\gcd(p-1,q-1)\] It can be proven that starting from \(ed\equiv 1{\pmod{\lambda(N)}}\), the same conclusions can be reached. Interested readers may refer to Wiener's original paper for details.

Attack Workflow

With an understanding of the mechanism of Wiener's attack, the attack workflow can be summarized as follows:

  1. Expand \(\frac e N\) into a continued fraction
  2. Generate the sequence of successive convergents of this continued fraction.
  3. Iteratively check each convergent's numerator \(k\) and denominator \(d\):
    • If \(k\) is zero, or \(d\) is even, or \(ed\not\equiv 1 \pmod k\), skip this convergent.
    • Calculate \(\varphi (N) = \frac {ed-1} k\), and solve the quadratic equation \(x^2-(N-\varphi(N)+1)x+N=0\) for the integer roots \(p\) and \(q\).
    • Verify whether \(N = p \cdot q\); if true, the attack succeeds and \((p, q, d)\) is returned, otherwise continue with the next convergent.
  4. If all convergents have been checked without a match, Wiener's attack fails.

The complete Python implementation is as follows:

import gmpy2

def solve_rsa_primes(s: int, m: int) -> tuple:
    """ Solve RSA prime numbers (p, q) from the quadratic equation
    p^2 - s * p + m = 0 with the formula p = s/2 +/- sqrt((s/2)^2 - m)
    Parameters:
        s - sum of primes (p + q)
        m - product of primes (p * q)
    Return: (p, q)
    """
    half_s = s >> 1
    tmp = gmpy2.isqrt(half_s ** 2 - m)
    return int(half_s + tmp), int(half_s - tmp)

def wiener_attack(n: int, e: int) -> (int, int, int):
    """ Wiener's attack on the RSA public-key cryptosystem
    Parameters:
        n - RSA modulus N = p*q
        e - RSA public exponent
    Return:
        A tuple of (p, q, d)
        p, q - the two prime factors of RSA modulus N
        d - RSA private exponent
    """
    cfe = cf_expansion(e, n)  # Convert e/n into a continued fraction
    cvg = cf_convergent(cfe)  # Get all of its convergents

    for k, d in cvg:
        # Check if k and d meet the requirements
        if k == 0 or d % 2 == 0 or (e * d) % k != 1:
            continue

        # assume ed ≡ 1 (mod ϕ(n))
        phi = (e * d - 1) // k
        p, q = solve_rsa_primes(n - phi + 1, n)
        if n == p * q:
            return p, q, d

    return None

def uint_to_bytes(x: int) -> bytes:
    """ Convert an unsigned (non-negative) integer to big-endian bytes.
    Zero is mapped to a single zero byte. """
    if x == 0:
        return bytes(1)
    return x.to_bytes((x.bit_length() + 7) // 8, 'big')

N = int(
    '6727075990400738687345725133831068548505159909089226'
    '9093081511054056173840933739311418333016536024767844'
    '14065504536979164089581789354173719785815972324079')

e = int(
    '4805054278857670490961232238450763248932257077920876'
    '3637915365038611552743522891345050097418639182479215'
    '15546177391127175463544741368225721957798416107743')

c = int(
    '5928120944877154092488159606792758283490469364444892'
    '1679423458017133739626176287570534122326362199676752'
    '56510422984948872954949616521392542703915478027634')

p, q, d = wiener_attack(N, e)
assert not d is None, "Wiener's Attack failed!"
print("p =", p)
print("q =", q)
print("d =", d)
print(uint_to_bytes(pow(c, d, N)))

N = int(
    '22836858353287668091920368816286415778103964252589'
    '28295130420474999022996621982166664596581454018899'
    '48429922376560732622754871538043874356270300826321'
    '16650572564937978011181394388679265524940467869924'
    '85473650038355720409426235584833584188449224331698'
    '63569900296911605460645581176522325967221393273906'
    '69673188457131381644120787783215342848744792830245'
    '01805598140668893320307200136190794138325132168722'
    '14217943474001731747822701596634040292342194986951'
    '94551646668806852454006312372413658692027515557841'
    '41440661232146905186431357112566536770669381756925'
    '38179415478954522854711968599279014482060579354284'
    '55238863726089083')

e = int(
    '17160819308904585327789016134897914235762203050367'
    '34632679585567058963995675965428034906637374660531'
    '64750599687461192166424505919293706011293378320096'
    '43372382766547546926535697752805239918767190684796'
    '26509298669049485976118315666126871681847641670872'
    '58895073919139366379901867664076540531765577090231'
    '67209821832859747419658344363466584895316847817524'
    '24703257392651850823517297420382138943770358904660'
    '59442300191228592937251734592732623207324742303631'
    '32436274414264865868028527840102483762414082363751'
    '87208612632105886502393648156776330236987329249988'
    '11429508256124902530957499338336903951924035916501'
    '53661610070010419')

_, _, d = wiener_attack(N, e)
assert not d is None, "Wiener's attack failed!"
print("d =", d)

old_b = int(gmpy2.root(N, 4)/3)
new_b = int(gmpy2.root(N, 4)/gmpy2.root(18, 4))
print("old_b =", old_b)
print("new_b =", new_b)
assert d > old_b and d <= new_b

The code above ends with two test cases. Referring to the program output below, the first test case gives a small RSA modulus \(N\) and a relatively large \(e\), which is precisely the scenario where Wiener's attack comes into play. The program calls the attack function wiener_attack() that quickly returns \(d\) as 7, then decrypts a ciphertext and recovers the original plaintext "Wiener's attack success!".

The second test case sets a 2048-bit \(N\) and \(e\), and Wiener's attack also succeeds swiftly. The program also verifies that the cracked \(d\) (511 bits) is greater than the old bound old_b (\(N^{\frac 1 4}\)), but slightly less than the new bound new_b (\(\frac 1 {\sqrt[4]{18}} N^\frac 1 4\)). This confirms the conclusion of the University of Wollongong researchers.

p = 105192975360365123391387526351896101933106732127903638948310435293844052701259
q = 63949859459297920725542167940404754256294386312715512490347273751054137071981
d = 7
b"Wiener's attack success!"
d = 5968166949079360555220268992852191823920023811474288738674370592596189517443887780023653031793516493806462114248181371416016184480421640973439863346079123
old_b = 4097678063688683751669784036917434915284399064709500941393388469932708726583832656910141469383433913840738001283204519671690533047637554279688711463501824
new_b = 5968166949079360962136673400587903792234115710617172051628964885379180548131448950677569697264501402772121272285767654845001503996650347315559383468867584

These two test cases demonstrate both the effectiveness and the preconditions of Wiener's attack. To prevent Wiener's attack, the RSA private exponent \(d\) must be greater than this upper bound; choosing \(d\) no less than \(N^{\frac 1 2}\) is a more prudent scheme. In practice, decryption is often optimized with Fermat's little theorem and the Chinese remainder theorem, so that fast decryption and digital signing can be achieved even with a large \(d\), as sketched below.
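For reference, below is a minimal sketch (an illustration added here, not the article's own code) of the CRT-optimized decryption just mentioned: the exponent is reduced modulo \(p-1\) and \(q-1\) using Fermat's little theorem, two half-size modular exponentiations are performed, and the results are recombined with the precomputed value \(q^{-1} \bmod p\).

import gmpy2

def rsa_crt_decrypt(c: int, p: int, q: int, d: int) -> int:
    """Sketch of RSA decryption with the CRT optimization: two half-size
    modular exponentiations instead of one full-size pow(c, d, p*q)."""
    dp = d % (p - 1)              # by Fermat's little theorem, c^d ≡ c^dp (mod p)
    dq = d % (q - 1)
    q_inv = gmpy2.invert(q, p)    # q^{-1} mod p, used to recombine the halves
    m1 = gmpy2.powmod(c, dp, p)   # m mod p
    m2 = gmpy2.powmod(c, dq, q)   # m mod q
    h = (q_inv * (m1 - m2)) % p
    return int(m2 + h * q)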

To be continued, stay tuned for the next article: RSA: Attack and Defense (III)


  1. John Pollard, a British mathematician, the recipient of 1999 RSA Award for Excellence in Mathematics for major contributions to algebraic cryptanalysis of integer factorization and discrete logarithm.↩︎

  2. Richard Peirce Brent, an Australian mathematician and computer scientist, an emeritus professor at the Australian National University.↩︎

  3. M. Wiener, “Cryptanalysis of short RSA secret exponents,” IEEE Trans. Inform. Theory, vol. 36, pp. 553–558, May 1990↩︎

  4. Adrien-Marie Legendre (1752-1833), a French mathematician who made numerous contributions to mathematics.↩︎

  5. Refer to Solve picoCTF's RSA Challenge Sum-O-Primes↩︎

  6. Dan Boneh, an Israeli–American professor in applied cryptography and computer security at Stanford University, a member of the National Academy of Engineering.↩︎