
Page 1: Hardware Assisted Solution of Huge Systems of Linear Equations. Adi Shamir, Computer Science and Applied Math Dept, The Weizmann Institute of Science. Joint work with Eran Tromer.


Hardware Assisted Solution of Huge Systems of Linear Equations

Adi Shamir

Computer Science and Applied Math Dept

The Weizmann Institute of Science

Joint work with Eran Tromer

Hebrew University, 6/3/06

Page 2:

Cryptanalysis is Evil

A simple mathematical proof:

• The definition of throughput cost implies that cryptanalysis = time × money

• Since time = money, cryptanalysis = money²

• Since money is the root of evil, money² = evil

• Consequently, cryptanalysis = evil

Page 3:

Cryptanalysis is Useful:

• Raises interesting mathematical problems

• Requires highly optimized algorithms

• Leads to new computer architectures

• Motivates the development of new technologies

• Stretches the limits on what’s doable

Page 4:

Motivation for this Talk: Integer Factorization and Cryptography

• RSA uses n as the public key and p,q as the secret key in encrypting and signing messages

• The size of the public key n determines the efficiency and the security of the scheme

(Diagram: large primes p, q and their product n = p×q; multiplication is easy, factorization is hard.)

Page 5:

Factorization Records:

• A 256 bit key used by French banks in the 1980s

• A 512 bit key factored in 1999

• A 663 bit (200 digit) key factored in 2005

• The big challenge: factoring 1024 bit keys

Page 6:

Improved Factorization Results

Can be derived from:

• New algorithms for standard computers (which are better than the Number Field Sieve)

• Non-conventional computation models (unlimited word length, quantum computers)

• Faster standard devices (Moore’s law)

• New implementations of known algorithms on special purpose hardware (wafer scale integration, optoelectronics, …)

Page 7:

Bicycle chain sieve [D. H. Lehmer, 1928]

Page 8:

The Basic Idea of the Number Field Sieve (NFS) Factoring Algorithm:

To factor n:

• Find “random” r1, r2 such that r1² ≡ r2² (mod n)

• Hope that gcd(r1 − r2, n) is a nontrivial factor of n.

How?

• Let f1(a) = a² mod n. These values are squares mod n, but not over Z

• Find a nonempty set S ⊆ Z such that the product of the values of f1(a) for a ∈ S is a square over the integers

• This gives us the two “independent” representations r1² ≡ r2² (mod n)
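The congruence-of-squares idea can be sketched at toy scale. This is a hedged illustration, not NFS itself: the brute-force search below is exactly what NFS exists to avoid, and n = 8051 is an arbitrary small composite chosen for the demo.

```python
# Toy demo: find r1, r2 with r1^2 ≡ r2^2 (mod n) and r1 ≢ ±r2 (mod n),
# then gcd(r1 - r2, n) is a nontrivial factor of n.
from math import gcd, isqrt

n = 8051  # a small composite (83 * 97), chosen only for illustration

def find_factor(n):
    """Brute-force search for a congruence of squares; returns a nontrivial factor."""
    for r1 in range(2, n):
        sq = r1 * r1 % n
        r2 = isqrt(sq)
        if r2 * r2 == sq and r1 % n not in (r2 % n, (-r2) % n):
            g = gcd(r1 - r2, n)
            if 1 < g < n:
                return g
    return None

print(find_factor(n))  # prints 83: 90^2 ≡ 7^2 (mod 8051), gcd(90 - 7, 8051) = 83
```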

Page 9:

Some Technical Details: How to find S such that the product is a square?

• Consider all the primes smaller than some bound B, and find many integers a for which f1(a) is B-smooth. (Sieving step)

• For each such a, represent the factorization of f1(a) as a vector of b exponents: f1(a) = 2^e1 · 3^e2 · 5^e3 · 7^e4 · … ↦ (e1, e2, ..., eb)

• Once b+1 such vectors are found, find a dependency modulo 2 among them. That is, find S such that the product is 2^e1 · 3^e2 · 5^e3 · 7^e4 · … where the ei are all even. (Matrix step)

Page 10:

A Simple Example: We want to find a subset S such that the product is a square

Look at the factorization of smooth values f1(a) which factor completely into a product of small primes:

f1(0) = 102 = 2·3·17
f1(1) = 33 = 3·11
f1(2) = 1495 = 5·13·23
f1(3) = 84 = 2²·3·7
f1(4) = 616 = 2³·7·11
f1(5) = 145 = 5·29
f1(6) = 42 = 2·3·7

Taking S = {1, 4, 6}: f1(1)·f1(4)·f1(6) = 2⁴·3²·5⁰·7²·11²

This is a square, because all exponents are even.
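The dependency-finding step on this toy example can be reproduced in a few lines. An illustrative sketch: the brute-force subset search stands in for the Gaussian elimination or block Wiedemann used at real scale.

```python
# Factor each smooth value over the small primes, reduce the exponent
# vectors mod 2, and find a nonempty subset S whose vectors XOR to zero,
# i.e. whose product is a perfect square.
from math import isqrt

primes = [2, 3, 5, 7, 11, 13, 17, 23, 29]
values = [102, 33, 1495, 84, 616, 145, 42]   # f1(0)..f1(6) from the slide

def exponents_mod2(v):
    """Parity of each prime exponent in v, packed into an integer bitmask."""
    row = 0
    for bit, p in enumerate(primes):
        while v % p == 0:
            v //= p
            row ^= 1 << bit
    assert v == 1, "not smooth over this factor base"
    return row

rows = [exponents_mod2(v) for v in values]

# Brute force over all 2^7 subsets; finding such dependencies in huge
# sparse systems is exactly the "matrix step" this talk is about.
S = None
for mask in range(1, 1 << len(values)):
    acc = 0
    for i, r in enumerate(rows):
        if mask >> i & 1:
            acc ^= r
    if acc == 0:
        S = [i for i in range(len(values)) if mask >> i & 1]
        break

prod = 1
for i in S:
    prod *= values[i]
print(S, prod, isqrt(prod) ** 2 == prod)   # → [1, 4, 6] 853776 True
```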

Page 11:

The Matrix Step in the Factorization of 1024 Bit Numbers can thus be Reduced to the following problem:

Find a nonzero x satisfying Ax=0 over GF(2) where:

• A is huge (#rows/columns larger than 2³⁰)

• A is sparse (#1’s per row less than 100)

• A is “random” (with no usable structure)

• The algorithm can occasionally fail

Page 12:

Solving linear equations and finding kernel vectors are computationally equivalent problems:

• Let A’ be a non-singular matrix.

• Let A be the singular matrix defined by adding b as a new column and adding zeroes as a new row in A’.

• Consider any nonzero kernel vector x satisfying Ax=0. The last entry of x cannot be zero (otherwise x’ would be a nonzero kernel vector of the nonsingular A’), so we can use the other entries x’ of x to describe b as a linear combination of the columns of A’, thus solving A’x’=b.
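A minimal sketch of this reduction over GF(2). Toy sizes, with a brute-force kernel search standing in for Wiedemann's algorithm; A′ and b are made-up examples.

```python
# Build the singular (n+1)x(n+1) matrix A from a nonsingular A' and a
# target b, find a kernel vector of A, and read off the solution of A'x'=b.
import numpy as np

A_prime = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [1, 1, 1]])      # nonsingular over GF(2)
b = np.array([1, 0, 1])

n = A_prime.shape[0]
A = np.zeros((n + 1, n + 1), dtype=int)
A[:n, :n] = A_prime                  # b appended as a new column,
A[:n, n] = b                         # all-zero new bottom row

# Brute-force a nonzero kernel vector of A over GF(2) (fine for n = 3;
# Wiedemann's algorithm is the point of the talk for huge n).
for bits in range(1, 1 << (n + 1)):
    x = np.array([(bits >> i) & 1 for i in range(n + 1)])
    if not (A @ x % 2).any():
        break

assert x[n] == 1                     # last entry nonzero, as the slide argues
x_prime = x[:n]
print(x_prime, A_prime @ x_prime % 2)   # x' = [1 0 0], and A'x' = [1 0 1] = b
```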

Page 13:

Solving linear equations and finding kernel vectors are computationally equivalent problems:

(Figure: a worked instance of the construction — the nonsingular A’ with b appended as a new column and an all-zero bottom row, together with a kernel vector of the resulting singular matrix.)

Page 14:

Standard Equation Solving Techniques Are Impractical:

• Elimination and factoring techniques quickly fill up the sparse initial matrix

• Approximation techniques do not work mod 2

• Many specialized techniques do not work for random matrices

• Random access to matrix elements is too expensive

• Parallel processing is impractical unless restricted to 2D grids with local communication between neighbors

Page 15:

Practical limits on what we can do:

• We can use linear space to store the original sparse matrix (with 100×2³⁰ “1” entries), along with several dense vectors, but not the quadratic number of entries in a dense matrix of this size (with 2⁶⁰ entries)

• We can use a quadratic time algorithm (2⁶⁰ steps) but not a cubic time algorithm (2⁹⁰ steps) to solve the linear algebra problem

• We have to use special purpose hardware and highly optimized algorithms

Page 16:

Wiedemann’s Algorithm to find a vector u in the kernel of a singular A:

• Theorem: Let p(x) be the characteristic polynomial of the matrix A. Then p(A)=0.

• Lemma: If A is singular, 0 is a root of p(x), and thus p(x)=xq(x).

• Corollary: Aq(A)=p(A)=0, and thus for any vector v, u=q(A)v is in the kernel of A.
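A tiny numeric check of the corollary. The 3×3 matrix and its characteristic polynomial below were worked out by hand for this illustration.

```python
# For this singular A over GF(2), p(x) = det(xI - A) mod 2 = x^3 + x^2 + x
# = x * (x^2 + x + 1), so q(x) = x^2 + x + 1 and u = (A^2 + A + I) v
# should lie in the kernel of A.
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])            # singular over GF(2)
I = np.eye(3, dtype=int)
v = np.array([1, 0, 0])              # an arbitrary starting vector

u = (A @ A + A + I) @ v % 2
print(u, A @ u % 2)   # u = [1 1 1] is in the kernel: A u = [0 0 0] (mod 2)
```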

Page 17:

Wiedemann’s Algorithm to find a vector u in the kernel of a singular A:

• We have thus reduced the problem of finding a kernel vector of a sparse A into the problem of finding its characteristic polynomial.

• The basic idea: Consider the infinite sequence of powers of A: A, A², A³, ..., A^d, ... (mod 2).

• If A has dimension d×d, we know that its characteristic polynomial has degree at most d, and thus the matrices in any window of length d satisfy the same linear recurrence relation mod 2.

Page 18:

Wiedemann’s Algorithm to find a vector u in the kernel of a singular A:

• The matrices are too large and dense, so we cannot compute and store them.

• To overcome this problem, pick a random vector v and compute the first bit of each vector Av, A²v, A³v, ..., A^d v, ... (mod 2).

• These bits satisfy the same linear recurrence relation of size d (mod 2).

Page 19:

Wiedemann’s Algorithm to find a nonzero vector in the kernel of a singular A:

• Given a binary sequence of d bits, we can find its shortest linear recurrence by using the Berlekamp-Massey algorithm, which runs in quadratic time using linear space.

• To compute the sequence of vectors Av, A²v, A³v, ..., A^d v, ... (mod 2), we have to repeatedly compute the product of the sparse matrix A by the previous dense vector.

• Each matrix-vector product requires linear time and space. Since we have to compute d such products but retain only the top bit from each one of them, the whole computation requires quadratic time and linear space.
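The two ingredients of this step, generating the projected bit sequence and recovering its recurrence, can be sketched as follows. Toy dimensions, a standard textbook Berlekamp-Massey over GF(2), and made-up parameters throughout; this is not the talk's optimized implementation.

```python
# Project each A^j v to its first bit, then recover the shortest linear
# recurrence of that bit sequence with Berlekamp-Massey over GF(2).
import numpy as np

rng = np.random.default_rng(1)
d = 8
A = (rng.random((d, d)) < 0.3).astype(int)   # a sparse-ish random d x d matrix
v = rng.integers(0, 2, d)

# First bit of v, Av, A^2 v, ...: 2d terms are enough for Berlekamp-Massey.
s, w = [], v.copy()
for _ in range(2 * d):
    s.append(int(w[0]))
    w = A @ w % 2

def berlekamp_massey(s):
    """Shortest LFSR for bit sequence s over GF(2): returns (coeffs C, length L), C[0]=1."""
    C, B, L, m = [1], [1], 0, 1
    for n in range(len(s)):
        delta = s[n]
        for i in range(1, L + 1):
            delta ^= C[i] & s[n - i]
        if delta == 0:
            m += 1
            continue
        T = C[:]
        while len(C) < len(B) + m:
            C.append(0)
        for i, bit in enumerate(B):
            C[i + m] ^= bit
        if 2 * L <= n:
            L, B, m = n + 1 - L, T, 1
        else:
            m += 1
    return (C + [0] * (L + 1 - len(C)))[:L + 1], L

C, L = berlekamp_massey(s)
# The recovered recurrence annihilates every window of the sequence, and its
# length is at most d (the degree of A's characteristic polynomial).
ok = all(sum(C[i] & s[n - i] for i in range(L + 1)) % 2 == 0
         for n in range(L, len(s)))
print("recurrence length:", L, "annihilates sequence:", ok)
```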

Page 20:

How to Perform the Sparse Matrix / Dense Vector Products

Daniel Bernstein’s observations (2001):

• On a single-processor computer, storage dominates cost but is poorly utilized.

• Sharing the input among multiple processors can lead to collisions and propagation delays.

• Solution: use a mesh-based architecture with a small processor attached to each memory cell, and use a mesh sorting algorithm to perform the matrix/vector multiplications.

Page 21:

Matrix-by-vector multiplication

(Figure: a 10×10 sparse 0/1 matrix multiplied by the bit vector 0101101010; each output bit is the mod-2 sum of the vector bits selected by the 1s of the corresponding row, giving the result 1011010001.)
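In software, each such product is one parity computation per row. A minimal sketch with a made-up 5×6 sparse matrix stored as per-row column indices, the representation that makes the hardware routing natural:

```python
# Sparse matrix / dense vector product mod 2: each output bit is the
# parity of the vector bits selected by that row's nonzero columns.
rows = [[0, 2, 5], [1, 3], [0, 4], [2, 3, 5], [1, 5]]   # 5x6 sparse 0/1 matrix
v = [0, 1, 0, 1, 1, 0]

out = [sum(v[j] for j in cols) % 2 for cols in rows]
print(out)   # → [0, 0, 1, 1, 1]
```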

Page 22:

Is the mesh sorting idea optimal?

• The fastest known mesh sorting algorithm for an m×m mesh requires about 3m steps, but it is too complicated to implement on simple processors.

• Bernstein used the Schimmler mesh sorting algorithm three times to complete each matrix-vector product. For an m×m mesh, this requires about 24m steps.

• We proposed to replace the mesh sorting by mesh routing, which is both simpler and faster. We use only one full routing, which requires 2m steps.

Page 23:

A routing-based circuit for the matrix step [Lenstra, Shamir, Tomlinson, Tromer 2002]

(Figure: the mesh, with the non-zero entries of the matrix distributed among its nodes.)

Model: two-dimensional mesh, nodes connected to ≤ 4 neighbours.

Preprocessing: load the non-zero entries of A into the mesh, one entry per node. The entries of each column are stored in a square block of the mesh, along with a “target cell” for the corresponding vector bit.

Page 24:

Operation of the routing-based circuit

To perform a multiplication:

• Initially the target cells contain the vector bits. These are locally broadcast within each block (i.e., within the matrix column).

• A cell containing a row index i that receives a “1” emits an ⟨i⟩ value (which corresponds to a 1 at row i).

• Each ⟨i⟩ value is routed to the target cell of the i-th block (which is collecting ⟨i⟩’s for row i).

• Each target cell counts the number of values it received.

• That’s it! Ready for next iteration.

Page 25:

How to perform the routing?

If the original sparse matrix A has size d×d, we have to fold the d vector entries into an m×m mesh where m = √d.

Routing dominates cost, so the choice of algorithm (time, circuit area) is critical.

There is extensive literature about mesh routing. Examples:

• Bounded-queue-size algorithms

• Hot-potato routing

• Off-line algorithms

None of these are ideal.

Page 26:

Clockwise transposition routing on the mesh

• One packet per cell.

• Only pairwise compare-exchange operations: compared pairs are swapped according to the preference of the packet that has the farthest to go along this dimension.

• Pairwise annihilation.

• Very simple schedule, can be realized implicitly by a pipeline.

• Worst-case: m² steps

• Average-case: ?

• Experimentally: 2m steps suffice for random inputs – optimal.

• The point: m² values handled in time O(m). [Bernstein]

Page 27:

Comparison to Bernstein’s design

• Time: a single routing operation (2m steps) vs. 3 sorting operations (8m steps each), about 1/12 of the time.

• Circuit area (roughly 1/3): only the routed values move; the matrix entries don’t.

• Simple routing logic and small routed values

• Matrix entries compactly stored in DRAM (~1/100 the area of “active” storage)

Page 28:

Further improvements

• Reduce the number of cells in the mesh (for small μ, decreasing #cells by a factor of μ decreases throughput cost by ~√μ)

• Use Coppersmith’s block Wiedemann

• Execute the separate multiplication chains of block Wiedemann simultaneously on one mesh (for small K, reduces cost by ~K)

Compared to Bernstein’s original design, this reduces the throughput cost by a constant factor of 45,000 (the individual improvements contribute factors of roughly 1/7, 1/15 and 1/6).

Page 29:

Does it always work?

• We simulated the algorithm thousands of times with large random m×m meshes, and it always routed the data correctly within 2m steps

• We failed to prove this (or any other) upper bound on the running time of the algorithm.

• We found a particular input for which the routing algorithm failed by entering a loop.

Page 30:

Hardware Fault Tolerance

• Any wafer scale design will contain defective cells.

• Cells found to be defective during the initial testing can be handled by modifying the routing algorithm.

Page 31:

Algorithmic fault tolerance

• Transient errors must be detected and corrected as soon as possible since they can lead to totally unusable results.

• This is particularly important in our design, since the routing is not guaranteed to stop after 2m steps, and packets may be dropped.

Page 32:

How to detect an error in the computation of A×V (mod 2)?

The original matrix-vector product: the 6×6 matrix with rows 010001, 100010, 001001, 010100, 000110, 101000, multiplied by the vector (1,1,0,1,0,1).

Page 33:

How to detect an error in the computation of A×V (mod 2)?

The same product with an extra row appended to the matrix: 011011, the mod-2 sum of some of the matrix rows. The corresponding extra output bit serves as a check bit.
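The check-row mechanism can be verified in a few lines, using the 6×6 matrix and vector from the slide; which rows to sum is an arbitrary choice made for this sketch.

```python
# Append a check row equal to the mod-2 sum of a subset of A's rows: the
# extra output bit must equal the mod-2 sum of those rows' output bits.
import numpy as np

A = np.array([[0, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 0, 0]])   # the 6x6 matrix from the slide
v = np.array([1, 1, 0, 1, 0, 1])     # the vector from the slide
subset = [0, 2, 3]                   # arbitrary choice of rows to sum

check_row = A[subset].sum(axis=0) % 2
y = A @ v % 2
check_bit = check_row @ v % 2
print(check_bit == y[subset].sum() % 2)      # True: consistent when error-free

y_err = y.copy()
y_err[0] ^= 1                        # inject a transient error into an output bit
print(check_bit == y_err[subset].sum() % 2)  # False: this error is caught
```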

Page 34:

Problems with this solution:

• If the new test vector at the bottom is the sum mod 2 of a few rows, it will remain sparse but have an extremely small chance of detecting a transient error in one of the output bits

• If the new test vector at the bottom is the sum mod 2 of a random subset of rows, it will become dense, but still miss errors with constant probability

Page 35:

Problems with this solution:

• To reduce the probability of missing an error to a negligible value, we can add hundreds of dense test vectors derived from independent random subsets of the rows of the matrix.

• However, in this case the cost of checking dominates the cost of the actual computation.

Page 36:

Our new solution:

• We add only one test vector, which adds only 1% to the cost of the computation, and still get a negligible probability of missing transient errors

• We achieve it by using the fact that in Wiedemann’s algorithm we compute all the products V1 = AV, V2 = A²V, V3 = A³V, ..., Vi = A^i V, ..., VD = A^D V

Page 37:

Our new solution:

• We choose a random row vector R, and precompute (on a reliable machine) the row vector W = RA^k for some small k (e.g., k=200).

• We add to the matrix the two row vectors W and R as the only additional test vectors. Note that these vectors are no longer the sum of subsets of rows of the matrix A.

Page 38:

Our new solution:

• Consider now the following equation, which is true for all i:

V_{i+k} = A^{i+k} V = A^k A^i V = A^k V_i

• Multiplying it on the left with R, we get that for all i:

R·V_{i+k} = R A^k V_i = W·V_i

• Since we added the two vectors R and W to the matrix, we get for each vector V_i the products R·V_i and W·V_i, which are two bits

Page 39:

Our new solution:

• We store the computed values of W·V_i in a short shift register of length k=200, and compare each after a delay of k steps with the computed value of R·V_{i+k}

• We periodically store the current vector V_i (say, once a day) in off-line storage. If any one of the consistency tests fails, we stop the computation, test for faulty components, and restart from the last good V_i
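The whole scheme can be simulated at toy scale. All sizes and the injected error below are made up, and detection is probabilistic: each of the k post-error consistency tests fires with probability about 1/2, so a single trial can miss.

```python
# Simulate the R/W consistency check: precompute W = R A^k, run the
# iteration V_{i+1} = A V_i, and verify R V_i == W V_{i-k} at every step.
import numpy as np

def run_trial(seed, d=16, k=6, err_step=10, steps=25):
    """One toy run; returns True iff an injected transient error is caught."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, (d, d))
    V = rng.integers(0, 2, d)
    R = rng.integers(0, 2, d)
    W = R.copy()
    for _ in range(k):               # W = R A^k, precomputed "on a reliable machine"
        W = W @ A % 2
    history, caught = [], False
    for i in range(steps):
        history.append(V.copy())
        # consistency test: R V_i must equal the delayed W V_{i-k}
        if i >= k and int(R @ V % 2) != int(W @ history[i - k] % 2):
            caught = True
        V = A @ V % 2
        if err_step is not None and i == err_step:
            V[0] ^= 1                # inject a one-bit transient error
    return caught

results = [run_trial(s) for s in range(64)]
print(sum(results), "of 64 trials caught the injected error")
```

With err_step=None no error is injected and, by linearity, no trial ever raises a false alarm.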

Page 40:

Why does it catch transient errors with overwhelming probability?

• Assume that at step i the algorithm computed the erroneous V′_i = V_i + E and that all the computations before and after step i were correct (wrt the wrong V′_i)

• Due to the linearity of the computation, for any j>0:

V′_{i+j} = A^j V′_i = A^j (V_i + E) = A^j V_i + A^j E = V_{i+j} + A^j E

and thus the difference between the correct and erroneous V_i develops as A^j E from time i onwards

Page 41:

Why does it catch transient errors with overwhelming probability?

• Each test checks whether W(A^j E) = 0 for some j < k

• Since the matrix A generated by the number field sieve is random looking, its first k=200 powers are likely to be random and dense, and thus each test has an independent probability of 0.5 to fail.

(Figure: timeline — the error occurs at step i; the first error detection can happen during the k following steps, after which there are no more detectable errors.)

Page 42:

-END OF PART ONE-