
A comparison of reductions from FACT to CNF-SAT

JOHN ERIKSSON JONAS HÖGLUND

Bachelor’s Thesis at CSC

Supervisor: Per Austrin

Examiner: Örjan Ekeberg


Abstract

The integer factorisation problem (FACT) is a well-known number-theoretic problem with many applications in areas such as cryptography. An instance of FACT (a number n such that n = p × q) can be reduced to an instance of the conjunctive normal form boolean satisfiability problem (CNF-SAT), a well-known NP-complete problem. Applications of this include using advances in SAT solving to solve FACT, and creating difficult CNF-SAT instances.

This report compares four different reductions from FACT to CNF-SAT, based on the full adder, array multiplier and Wallace tree multiplier circuits.

The comparisons were done by reducing a set of FACT instances to CNF-SAT instances with the different reductions. The resulting CNF-SAT instances were then compared with respect to the number of clauses and variables, as well as the time taken to solve the instances with the SAT solver MiniSat.

Contents

1 Introduction
2 Background
  2.1 Binary adders
    2.1.1 Half adders and full adders
    2.1.2 Adder circuits
  2.2 Binary multipliers
    2.2.1 Array multiplier
    2.2.2 Wallace tree multipliers
  2.3 Binary circuits and Tseitin transformation
3 Method
  3.1 Reductions
4 Results
5 Discussion
  5.1 Size of reduction instances
  5.2 Solving time for reduction instances
  5.3 Possible future directions for this study
Bibliography


Chapter 1

Introduction

The integer factorisation problem (FACT) is a well-known number-theoretic problem.

The objective is to determine, for a given integer, the set of prime numbers whose product is that integer. The precise computational class of the factorisation problem remains unknown, and no polynomial-time algorithm for solving it on classical computers is known. Note, however, that the problem can be solved in polynomial time on quantum computers [Sho99], although treatment of this is outside the scope of this report.

In particular, instances of FACT that arise in practice are often of a particular form, consisting of the product of precisely two prime numbers (semiprimes) [ARS83]. One such case is RSA-based encryption. The interest in studying the integer factorisation problem primarily stems from its relevance to modern cryptography: RSA-based encryption relies on the assumption that prime factorisation is a difficult problem to solve, and thus an efficient algorithm for solving FACT would compromise the security of systems based on such encryption.

The boolean satisfiability problem (SAT) is known to be NP-complete. Means of solving SAT have been studied in depth, with yearly competitions between programs accomplishing this (SAT solvers) to encourage and stimulate further research in this area [Sat11, Sat10]. Even though SAT may well be a theoretically more difficult problem than FACT, extensive research has been done in this area, and it is possible that advances in SAT solving can be used to quickly factorise numbers, given a suitable reduction to SAT. For the purposes of this report, we only consider the construction variant of the SAT problem, which yields a satisfying assignment as its output.

Another benefit of reducing FACT to SAT, apart from the cryptographic perspective given above, is to produce test cases for SAT solvers by reducing a factorisation instance consisting of the product of two large prime numbers. It is easy to vary the difficulty of the SAT instance by selecting appropriate prime factors.

A common method of structuring such reductions is to construct a boolean circuit for computing the product of two binary integers. This circuit is then reduced to a SAT instance. In circuit design, an important goal is to minimize propagation delay through gates [Par09, p. 111, 177]. For example, a basic ripple-carry adder is very simple, but takes linear time because the computation of each bit depends on the previous one. A carry-lookahead adder can be much faster, but is also more complex with its carry-lookahead logic.

In the context of using boolean circuits in SAT solvers, rather than computing the output quickly given a known input, the task is instead the opposite: finding an input that yields a given output. It is no longer clear whether propagation delay is an important consideration, or if other aspects of the circuit are more important to minimize.

This report compares a set of different reductions from FACT to SAT via such boolean circuits, with the goal of finding whether there is a connection between the complexity of the formula yielded by each reduction (in terms of the number of variables and clauses) and the runtime performance of a SAT solver on the same formula.


Chapter 2

Background

2.1 Binary adders

Binary adders are circuits that take two binary numbers and output the sum of these numbers. Binary adders can be implemented in many different ways; this section describes the implementations used in this report.

2.1.1 Half adders and full adders

The basic building blocks of adders are the half adder and the full adder. A half adder has two input bits, A and B, and outputs two bits, S and Cout, which are the least significant and most significant bit of the sum of the input bits. Cout is the carry output of this calculation, and is required when building adders for n-bit numbers. The equations describing a half adder are shown in figure 2.1.

A full adder adds three input bits, A, B and Cin, and outputs two bits, S and Cout, just like the half adder. Unlike half adders, full adders have a carry-in input, which is also required for building adders. A full adder can be designed in many different ways, and this report implements two different full adder designs.

Full adder 1 (see figure 2.2) uses 2 OR gates and 3 AND gates for its Cout, and a total of 7 gates including 2 XOR gates. Full adder 2 (see figure 2.3) is essentially two half adders chained together with an OR gate, and so uses a total of 5 gates. This means that full adder 1 uses two gates more than full adder 2, but it should also be faster when used as a circuit, as its Cout only needs to pass through two gates, while the Cout of full adder 2 has to pass through three gates.

S = A ⊕ B
Cout = A ∧ B

Figure 2.1. Binary equations for half adder.


S = A ⊕ B ⊕ Cin
Cout = (A ∨ B) ∧ (A ∨ Cin) ∧ (B ∨ Cin)

Figure 2.2. Binary equations for full adder 1.

S1 = A ⊕ B
C1 = A ∧ B
C2 = S1 ∧ Cin
S = S1 ⊕ Cin
Cout = C1 ∨ C2

Figure 2.3. Binary equations for full adder 2.

function RippleAdder(a[1..i], b[1..i])
    Input: Two non-negative i-bit numbers a and b, least significant bit first.
    Output: An (i + 1)-bit non-negative number which is the sum of a and b, least significant bit first.
    allocate output[1..i + 1]
    output[1], carry ← HalfAdder(a[1], b[1])
    for x ← 2..i do
        output[x], carry ← FullAdder(a[x], b[x], carry)
    end for
    output[i + 1] ← carry    ▷ Last carry is also saved
    return output
end function

Figure 2.4. Pseudo-code for the ripple-carry adder algorithm.

2.1.2 Adder circuits

The simplest adder implementation is the ripple-carry adder. This adder works by chaining full adders together sequentially, such that the least significant bit is computed before the second least significant bit, and so on. This is done by feeding the carry bit of one adder to the carry-in input of the next adder. This dependency leads to an adder algorithm with linear runtime [Toh]. Pseudo-code for the algorithm is included in figure 2.4.

In this report, two variants of the ripple-carry adder have been implemented. The only difference is the full adder design used for building the adder.
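As a hedged illustration (ours, not the thesis implementation), a ripple-carry adder can be written so that the full adder design is a parameter; bit lists are least significant bit first, as in figure 2.4.

```python
def full_adder(a, b, c_in):
    # Full adder 2 of the report: two half adders and an OR gate.
    s1, c1 = a ^ b, a & b
    s, c2 = s1 ^ c_in, s1 & c_in
    return s, c1 | c2

def ripple_adder(a, b, adder=full_adder):
    """Add two equal-length LSB-first bit lists; returns len(a) + 1 bits.
    `adder` can be swapped for either full adder design."""
    assert len(a) == len(b)
    s, carry = a[0] ^ b[0], a[0] & b[0]      # half adder for the first bit
    output = [s]
    for x in range(1, len(a)):
        s, carry = adder(a[x], b[x], carry)
        output.append(s)
    output.append(carry)                     # the last carry is also saved
    return output

# 6 + 3 = 9: [0, 1, 1] + [1, 1, 0] -> [1, 0, 0, 1]
print(ripple_adder([0, 1, 1], [1, 1, 0]))
```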


function ArrayMultiplier(a[1..i], b[1..j])
    Input: Two non-negative binary numbers a and b, which are i and j bits in length respectively, least significant bit first.
    Output: An (i + j)-bit non-negative number which is the product of a and b, least significant bit first.
    allocate output[1..i + j]
    line ← a ∧ b[1]
    for x ← 2..j do
        partialProduct ← 0 concat (a ∧ b[x])    ▷ Partial product is shifted 1 step
        line ← Adder(line, partialProduct)
        output[x − 1] ← line[1]
        line ← line[2..]    ▷ The lowest bit of line is now final
    end for
    output[j..j + i] ← remaining bits of line
    return output
end function

Figure 2.5. Pseudo-code for the array multiplier algorithm.

2.2 Binary multipliers

Binary multipliers are circuits that multiply two binary numbers. A binary multiplier takes two binary numbers a and b, of lengths i and j bits, and outputs the product of those numbers as an (i + j)-bit binary number.

2.2.1 Array multiplier

Binary multipliers can be created in many different ways, and much research has been done on efficient binary arithmetic circuits. The simplest multiplier is the array multiplier. It is very similar to the standard long multiplication algorithm, except with binary numbers.

When multiplying numbers a and b, the array multiplier first calculates partial products by multiplying a with each bit in b. Since a bit can only be 0 or 1, each partial product will either be exactly a or exactly 0, and this can be calculated easily with AND gates.

The partial products are then added. In the array multiplier, this is done very simply: first, an adder adds the first two partial products, then another adder adds the sum from the first adder to the third partial product, and so on until all the partial products have been added (figure 2.6, pseudo-code in figure 2.5) [Par09, p. 226ff].
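As a rough sketch (ours, not the thesis code), the same accumulation of shifted partial products can be written directly over LSB-first bit lists; `add_bits` is a hypothetical helper playing the role of the adder circuit.

```python
def add_bits(a, b):
    """Ripple-carry addition of two LSB-first bit lists of any lengths."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)
        carry = (x & y) | (x & carry) | (y & carry)
    out.append(carry)
    return out

def array_multiplier(a, b):
    """Accumulate the shifted partial products a AND b[x] with one adder each,
    in the spirit of figure 2.5."""
    line = [bit & b[0] for bit in a]
    for x in range(1, len(b)):
        partial = [0] * x + [bit & b[x] for bit in a]   # shifted partial product
        line = add_bits(line, partial)
    return line                                         # len(a) + len(b) bits

# The example of figure 2.6: 1011 x 1101 = 10001111 (11 x 13 = 143), LSB first.
print(array_multiplier([1, 1, 0, 1], [1, 0, 1, 1]))     # [1, 1, 1, 1, 0, 0, 0, 1]
```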


        1 0 1 1
      × 1 1 0 1
      ---------
        1 0 1 1
      0 0 0 0
    1 0 1 1
+   1 0 1 1
---------------
1 0 0 0 1 1 1 1

Figure 2.6. Array multiplier summation example.

2.2.2 Wallace tree multipliers

A standard array multiplier requires many adders to add all the partial products. If the numbers a and b are of length i and j respectively, the algorithm computes j partial products, each of length i, and needs j − 1 adders to sum them. The number of full or half adders required is then proportional to ij. Many gates are therefore required to build an array multiplier. A Wallace tree reduces the number of gates needed for adding many numbers together.

When the partial products are collected, the circuit ends up with many bits of different weights to handle. The purpose of a Wallace tree is to reduce these bits so that there are at most two bits of each weight. The remaining bits can then be added with a single adder circuit.

This reduction is done with trees of full adders and half adders. As mentioned earlier, a full adder takes three input bits and outputs two bits, S and Cout, which are the least significant and most significant bit respectively. A full adder can therefore be used to reduce three bits of the same weight to one bit of that weight and one bit of the next higher weight. A half adder can be used in the same way to reduce two bits of the same weight.

For each bit weight, if there are three or more bits, group the bits into groups of three and reduce them with full adders as described above. If there are two bits left after the grouping, use a half adder to reduce these two bits. If there is one bit left, leave it as it is. The reduced bits are removed, the S output bits are added to the current weight, and the Cout bits are added to the next weight. Figure 2.7 describes this algorithm in pseudo-code.

This reduction is repeated until no weight has three or more bits. When there are one or two bits per weight, the multiplication has been reduced to an addition of two numbers, which are added together with a regular adder [Par09, p. 158ff].


function WallaceMultiplier(a[1..i], b[1..j])
    Input: Two non-negative binary numbers a and b, which are i and j bits in length respectively, least significant bit first.
    Output: An (i + j)-bit non-negative number which is the product of a and b, least significant bit first.
    Let m be a map of lists, mapping each bit weight to all current bits with that weight
    for x ← 1..i do
        for y ← 1..j do
            Add a[x] ∧ b[y] to m, with bit weight x + y − 1
        end for
    end for
    while ∃ bit weight ∈ m with 3 or more bits do
        for each bit weight w ∈ m, with bit list z ← m[w] do
            Group as many bits in z as possible into groups of three
            for g ← each group of 3 bits do
                s, c ← FullAdder(g1, g2, g3)
                Remove group g from m[w]
                Add s to m[w]
                Add c to m[w + 1]
            end for
            if there is a remaining group g of 2 bits then
                s, c ← HalfAdder(g1, g2)
                Remove group g from m[w]
                Add s to m[w]
                Add c to m[w + 1]
            end if
        end for
    end while
    ▷ Each bit weight now has only 1 or 2 bits, so two long numbers remain
    num1 ← first number
    num2 ← second number
    output ← AnyAdder(num1, num2)
    return output
end function

Figure 2.7. Pseudo-code for the Wallace tree multiplier algorithm.
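To make the grouping loop of figure 2.7 concrete, here is a small sketch of ours that evaluates the reduction on concrete bits rather than building a circuit; weights are 0-based and the helper names are our own.

```python
from collections import defaultdict

def wallace_multiply(a, b):
    """Evaluate the Wallace reduction of figure 2.7 on LSB-first bit lists."""
    m = defaultdict(list)
    for x, ax in enumerate(a):
        for y, by in enumerate(b):
            m[x + y].append(ax & by)                 # partial-product bit

    # Reduce until every weight holds at most two bits.
    while any(len(bits) >= 3 for bits in m.values()):
        for w in sorted(list(m.keys())):
            bits, m[w] = m[w], []
            while len(bits) >= 3:                    # full adder on a group of 3
                p, q, r = bits.pop(), bits.pop(), bits.pop()
                m[w].append(p ^ q ^ r)
                m[w + 1].append((p & q) | (p & r) | (q & r))
            if len(bits) == 2:                       # half adder on a group of 2
                p, q = bits
                m[w].append(p ^ q)
                m[w + 1].append(p & q)
            elif bits:
                m[w].append(bits[0])

    # The two remaining rows are added with an ordinary adder.
    result, carry = [], 0
    for w in range(len(a) + len(b)):
        row = m.get(w, [])
        p = row[0] if len(row) > 0 else 0
        q = row[1] if len(row) > 1 else 0
        result.append(p ^ q ^ carry)
        carry = (p & q) | (p & carry) | (q & carry)
    return result

print(wallace_multiply([1, 1, 0, 1], [1, 0, 1, 1]))  # 11 x 13: [1, 1, 1, 1, 0, 0, 0, 1]
```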


2.3 Binary circuits and Tseitin transformation

One way of creating reductions to SAT is to create a circuit Cn(a1 a2 · · · am, b1 b2 · · · bk) with one output, where a and b are potential factors of n, and m and k are the bit lengths of a and b respectively. The circuit is constructed so that Cn(a1 a2 · · · am, b1 b2 · · · bk) = 1 if and only if a × b = n. The factorisation problem is then simply a matter of finding a and b such that a × b = n.

Typical SAT solvers cannot solve circuits directly. Instead, they operate on formulas in conjunctive normal form (CNF). A CNF formula is a conjunction of disjunctions of literals, where a literal is either a variable or the negation of a variable. Each disjunction is known as a clause. The CNF formula is true if and only if every clause is true.

The circuit must therefore be reduced to CNF-SAT. The Tseitin transformation is an efficient algorithm for reducing a circuit problem to a CNF-SAT problem; the number of clauses in the resulting CNF formula is linear in the number of gates in the circuit [Tse83].

The principle of the Tseitin transformation is that each gate has one output, and for each gate a new variable is added to the formula to represent that output value. Each gate has one or two inputs and one output. For each gate input, we find the corresponding variable, which is either an input variable of the original circuit or the output variable of the gate that feeds this input. Clauses are then added to force the output variable of the gate to match the gate's function, according to the patterns in table 2.1.

Gate          CNF clauses
C = A ∧ B     (¬A ∨ ¬B ∨ C) ∧ (A ∨ ¬C) ∧ (B ∨ ¬C)
C = A ∨ B     (A ∨ B ∨ ¬C) ∧ (¬A ∨ C) ∧ (¬B ∨ C)
C = A ⊕ B     (¬A ∨ ¬B ∨ ¬C) ∧ (A ∨ B ∨ ¬C) ∧ (A ∨ ¬B ∨ C) ∧ (¬A ∨ B ∨ C)
C = ¬A        (¬A ∨ ¬C) ∧ (A ∨ C)

Table 2.1. Tseitin transformation substitution rules. Each gate given to the left is translated into the corresponding CNF clauses to the right.
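A small illustrative sketch (ours, not the thesis code) of these substitution rules, using the DIMACS convention that a variable is a positive integer and its negation the corresponding negative integer:

```python
def tseitin_clauses(gate, a, b, c):
    """Clauses of table 2.1 for a gate with inputs a, b and output variable c.
    Literals are signed integers; a negative literal is a negated variable."""
    if gate == "AND":                    # C = A AND B
        return [[-a, -b, c], [a, -c], [b, -c]]
    if gate == "OR":                     # C = A OR B
        return [[a, b, -c], [-a, c], [-b, c]]
    if gate == "XOR":                    # C = A XOR B
        return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]
    if gate == "NOT":                    # C = NOT A (input b is unused)
        return [[-a, -c], [a, c]]
    raise ValueError("unknown gate: " + gate)

# An AND gate with input variables 1 and 2 and output variable 3:
print(tseitin_clauses("AND", 1, 2, 3))   # [[-1, -2, 3], [1, -3], [2, -3]]
```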


Chapter 3

Method

As mentioned in the introduction, we have studied a set of reductions from FACT to SAT. These reductions are all done by constructing boolean circuits for calculating the product of two factors. These circuits are then reduced to CNF-SAT with the Tseitin transformation.

For every reduction, the bit length of the two input numbers can be varied to fit the current problem instance. One requirement is that if we are trying to factorise the number n = p × q, a number with bit length b, the circuit has to be large enough to output at least b bits, and the inputs must not be shorter than the bit lengths of p and q (which are unknown so far). The factors p and q cannot be longer than the number n itself.

Since there will always be a trivial solution n = 1 × n, the circuit must disallow this trivial solution. The bit length of the input numbers must therefore be restricted so that they are at least 1 bit shorter than the bit length b of n [HW98, p. 3ff]. Since at least one of the factors must be less than or equal to √n, the bit length of the second input number can be restricted even further, to ⌈b/2⌉. We therefore chose the input lengths b − 1 and ⌈b/2⌉ for the two input numbers.

A SAT solver will attempt to find a truth assignment to the CNF instance which makes all clauses true. The clauses must then be true if and only if the multiplier outputs the number n that we are trying to factorise. This is enforced by adding unit clauses. The multiplier circuit outputs the result as a sequence of bits; once the circuit is reduced to CNF-SAT, unit clauses are added to force each output bit to be either true or false, so that the output must be equal to the bit sequence of n.
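A small sketch (ours; the variable numbering is hypothetical) of these two ingredients, the input bit lengths and the output unit clauses:

```python
def input_bit_lengths(n):
    """Input widths used by the reductions: b - 1 and ceil(b / 2),
    where b is the bit length of n."""
    b = n.bit_length()
    return b - 1, (b + 1) // 2

def output_unit_clauses(n, output_vars):
    """Unit clauses forcing the multiplier output to equal n.
    `output_vars` is a hypothetical list of the CNF variables assigned to
    the output bits, least significant bit first."""
    clauses = []
    for k, var in enumerate(output_vars):
        clauses.append([var] if (n >> k) & 1 else [-var])
    return clauses

# For n = 143 = 10001111 and output variables 101..108 (made-up numbering):
print(input_bit_lengths(143))                          # (7, 4)
print(output_unit_clauses(143, list(range(101, 109))))
```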

3.1 Reductions

The reductions differ only in how they choose to express the multiplication circuit, by varying the adder and multiplier circuits. In this report, we have implemented the two full adder variants mentioned above, as well as two multiplier algorithms, the array multiplier and the Wallace tree multiplier. Each pair of adder and multiplier algorithm yields a different reduction. Thus, the four reductions compared are:


• Array multiplier with full adder 1

• Array multiplier with full adder 2

• Wallace tree multiplier with full adder 1

• Wallace tree multiplier with full adder 2

For each testcase and circuit, the circuit is constructed, reduced to CNF-SAT, and the output unit clauses are added. The CNF-SAT instance is then tested with a CNF-SAT solver, which will try to solve the instance as fast as possible.

For comparing SAT solver performance, we have chosen the MiniSat SAT solver [Min14], as it has performed well in multiple SAT competitions. We do not vary the hardware on which the SAT solver is run.
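For reference, a minimal sketch (ours, not the thesis test harness) of how such an instance can be handed to MiniSat: the clauses are written in the standard DIMACS CNF format, and MiniSat is typically invoked as `minisat <input> <result>`.

```python
import subprocess

def write_dimacs(path, num_vars, clauses):
    """Write a clause list (lists of signed integers) as a DIMACS CNF file."""
    with open(path, "w") as f:
        f.write("p cnf %d %d\n" % (num_vars, len(clauses)))
        for clause in clauses:
            f.write(" ".join(str(lit) for lit in clause) + " 0\n")

# A made-up 2-variable instance: (x1 OR x2) AND (NOT x1).
write_dimacs("instance.cnf", 2, [[1, 2], [-1]])

# MiniSat writes the satisfying assignment (if one exists) to the result file.
subprocess.run(["minisat", "instance.cnf", "result.txt"])
```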

The test cases studied are semiprimes whose factors are given below.

• 17977 × 10619863

• 16769023 × 1073676287

• 2147483647 × 2147483647

• 1073676287 × 68718952447

The source code for the implementation used for this report can be found at https://bitbucket.org/migomipo/kexjobb. The reductions are implemented in Python.


Chapter 4

Results

We compared the 4 different reductions detailed above. Table 4.1 presents the test runs for each test case (factoring the product p × q) together with the reduction used, the statistics about this instance of the reduction (number of variables and clauses in CNF form) as well as SAT solver performance using MiniSat. The last two lines of the table are incomplete, as the SAT solver did not finish within feasible time (13000 seconds).

We have also presented the same data in bar plot form for each test case, in order to facilitate visual comparison between the different reductions. The plots present the different statistics for each test case, with each statistic scaled independently of the rest, so that e.g. the execution time can be compared with the clause count.

p            q            full adder  multiplier  #vars  #clauses  dur (s)  mem (MB)
17977        10619863     full1       array        5326     17123     82.3        30
                          full2       array        4032     13241     16.7        22
                          full1       wallace      5372     17288    141          38
                          full2       wallace      4082     13418    115          37
16769023     1073676287   full1       array       11022     35555      9.20       30
                          full2       array        8320     27449     78.6        42
                          full1       wallace     11174     36091   2331         136
                          full2       wallace      8476     27997    113          67
2147483647   2147483647   full1       array       14638     47267      8.71       30
                          full2       array       11040     36437     24.4        46
                          full1       wallace     14894     48167     10.2        26
                          full2       wallace     11300     36375     47.1        51
1073676287   68718952447  full1       array       16638     53747    107          78
                          full2       array       12544     41465   4343         229
                          full1       wallace     16848     54486      —           —
                          full2       wallace     12758     42216      —           —

Table 4.1. The statistics and performance of each test run.


Full adder 2 uses fewer gates than full adder 1, as is clearly shown in table 4.1. The reductions with full adder 2 result in CNF instances with about 25% fewer variables and about 23% fewer clauses. Both the array multiplier and Wallace tree reductions use full adders heavily, so the gate savings are multiplied in the large circuits created by the reductions.

Figure 4.1. Test case 1: 17977 × 10619863

Our first tests were with the number 17977 × 10619863, and the results can be seen in figure 4.1. The array multiplier reductions were solved faster than the Wallace tree reductions with both full adder designs. With both multipliers, reductions with full adder 2 were also solved faster than those with full adder 1. This difference was very big in the array multiplier case, where full adder 2 was nearly five times faster.

Figure 4.2. Test case 2: 16769023 × 1073676287

For the second test case, shown in figure 4.2, it is clear that the combination of the first full adder and the Wallace multiplier was by far the reduction that took the longest for MiniSat to solve. The fastest reduction in terms of execution time for this test case was full adder 1 + array multiplier. Considering memory consumption, the combination of full adder 1 and the Wallace multiplier is also worst by a considerable margin, and full adder 1 + array multiplier is best in this sense as well.

Figure 4.3. Test case 3: 2147483647 × 2147483647

In the third test case, shown in figure 4.3, it is very clear that the reductions with full adder 1 were solved significantly faster than the reductions with full adder 2. The array multiplier reductions were also faster than the Wallace tree reductions. This difference was more significant when full adder 2 was used, where the Wallace tree reduction took almost twice as long to solve.

Figure 4.4. Test case 4: 1073676287 × 68718952447

In the final test case (figure 4.4), the combination of full adder 1 + array multiplier is both the fastest reduction in terms of execution time and the leanest in terms of memory consumption. The two reductions involving Wallace tree multipliers were aborted after 13 000 seconds of execution time.


Chapter 5

Discussion

This report contains two significantly different reductions: the array multiplier and the Wallace tree multiplier. We also implemented two full adder algorithms as detailed above.

Each test case was a product of two prime numbers, and the task for the SAT solver was to find these numbers. It should be noted that the differences between the CNF instances generated for different test cases are fairly small. Since the instances are essentially multiplier circuits with an output, the only differences are the size (which is only affected by the size of the input factors) and the unit clauses checking whether the output is correct. Thus, the size of the CNF instances is a function of the size of the input factors.

5.1 Size of reduction instances

In all cases, the Wallace tree multiplier circuits were very close in size to the array multiplier circuits, measured both in the number of clauses and in the number of variables. This is expected, since both are different algorithms for reducing a set of partial product bits to a sum, using full adders and half adders.

The full adder design had a major impact on the size of the reduction instances. Reductions with full adder 2 generated CNF-SAT instances with about 25% fewer variables and about 23% fewer clauses. This is also expected, as the array multiplier and Wallace tree reductions use full adders heavily.

5.2 Solving time for reduction instances

In all the test cases, the array multiplier circuits were solved faster than the corresponding Wallace tree multipliers, in several cases a lot faster. We speculate that this could be because the circuits yielded by the array multiplier design have a lot of symmetry, as the design is simply a chain of adders for adding the partial products together, whereas the Wallace tree multiplier uses a more complicated tree structure. The differences were often very large: in the test case 1073676287 × 68718952447, the array multiplier with full adder 1 was solved in less than 2 minutes, while with the Wallace tree multiplier reduction, MiniSat had to be interrupted after more than 3 hours without finding a solution.

Another interesting observation is that simply changing the full adder implementation made a huge difference. In all cases, full adder 2 resulted in much smaller circuits than full adder 1. The difference in SAT solving time varied a lot: in test case 1, full adder 2 was faster with both multiplier types; in test case 2, full adder 2 was faster with the Wallace tree multiplier but slower with the array multiplier; and in the last two test cases, full adder 2 seemed to be slower than full adder 1 in all cases. Since the results varied so much, it is difficult to draw any conclusion about which full adder is best for SAT solving. However, it should be noted that small details such as the full adder layout can affect SAT solving time significantly.

We have not compared the general concept of solving FACT via reduction to SAT with other approaches for solving FACT (such as naive brute force), but we can still observe that even relatively small semiprimes (such as our biggest test case) took a considerable amount of time to solve.

5.3 Possible future directions for this study

This report only studies two fundamentally different FACT-to-CNF-SAT reductions.

Future studies of reductions would benefit from implementing more reductions than we had time to study here. For example, reductions could be implemented with carry-lookahead adders instead of simple ripple-carry adders. Carry-lookahead adders reduce the carry propagation delay significantly compared to a ripple-carry adder, but also add a lot of circuit complexity. More multiplier circuit designs could also be implemented. All these proposed ideas result in different multiplier circuits, but reductions that are fundamentally different from multiplier circuit reductions should also be studied.

Another area in which this study is lacking is the number of test cases. A study of bigger scope would benefit from testing against a bigger set of test cases, to minimize the impact that the choice of particular primes has on the execution time. One could also study whether different classes of primes favour different reductions.


Bibliography

[ARS83] L. M. Adleman, R. L. Rivest, and A. Shamir. Cryptographic communications system and method, September 20, 1983. US Patent 4,405,829.

[HW98] Satoshi Horie and Osamu Watanabe. Hard instance generation for SAT. CoRR, cs.CC/9809117, 1998.

[Min14] MiniSat page. http://minisat.se/, April 2014.

[Par09] Behrooz Parhami. Computer Arithmetic: Algorithms and Hardware Designs. Oxford University Press, Inc., New York, NY, USA, 2nd edition, 2009.

[Sat10] SAT-Race 2010. http://baldur.iti.uka.de/sat-race-2010/, July 2010.

[Sat11] SAT Competition 2011. http://www.satcompetition.org/2011/, June 2011.

[Sho99] P. W. Shor. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM Review, 41:303–332, January 1999.

[Toh] Hardware algorithms for arithmetic modules. http://www.aoki.ecei.tohoku.ac.jp/arith/mg/algorithm.html.

[Tse83] G. S. Tseitin. On the complexity of derivation in propositional calculus. In J. Siekmann and G. Wrightson, editors, Automation of Reasoning 2: Classical Papers on Computational Logic 1967–1970, pages 466–483. Springer, Berlin, Heidelberg, 1983.
