To speed up the calculation when one of the numbers is much larger than the other, one could use the property Gcd(a,b)=Gcd(b,Mod(a,b)). This introduces an additional modular division into the algorithm; this division is a slow operation when the numbers are large.
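For illustration, here is a minimal sketch in Python (not the Yacas routine) of a binary GCD that performs one such modular reduction when the arguments differ greatly in size:

    def binary_gcd(a, b):
        """Binary (Stein) GCD with one initial modular reduction.
        If one argument is much larger than the other, a single reduction
        Gcd(a,b) = Gcd(b, Mod(a,b)) makes the two numbers comparable in size;
        the rest of the work uses only shifts and subtractions."""
        a, b = abs(a), abs(b)
        if a == 0: return b
        if b == 0: return a
        # One (slow, for large numbers) modular division to equalize sizes.
        if a.bit_length() > 2 * b.bit_length():
            a %= b
            if a == 0: return b
        elif b.bit_length() > 2 * a.bit_length():
            b %= a
            if b == 0: return a
        shift = 0
        while (a | b) & 1 == 0:          # both even: factor out 2
            a >>= 1; b >>= 1; shift += 1
        while a & 1 == 0:
            a >>= 1
        while b != 0:
            while b & 1 == 0:
                b >>= 1
            if a > b:
                a, b = b, a
            b -= a
        return a << shift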
Primality of larger numbers is tested by the function IsPrime that uses the Miller-Rabin algorithm.
The idea of the Miller-Rabin algorithm is to improve the Fermat primality test. If n is prime, then for any x not divisible by n we have Gcd(n,x)=1. Then by Fermat's "little theorem", x^(n-1):=Mod(1,n). (This is really a simple statement; if n is prime, then the n-1 nonzero remainders modulo n, namely 1, 2, ..., n-1, form a cyclic multiplicative group.) Therefore we pick some "base" integer x and compute Mod(x^(n-1),n); this is a quick computation even if n is large. If this value is not equal to 1 for some base x, then n is definitely not prime. However, we cannot test every base x<n; instead we test only some x, so it may happen that we miss the right values of x that would expose the non-primality of n. So Fermat's test sometimes fails, i.e. says that n is a prime when n is in fact not a prime. Also there are infinitely many integers called "Carmichael numbers" which are not prime but pass the Fermat test for every base.
The Miller-Rabin algorithm improves on this by using the property that for prime n there are no nontrivial square roots of unity in the ring of integers modulo n (this is Lagrange's theorem). In other words, if x^2:=Mod(1,n) for some x, then x must be equal to 1 or -1 modulo n. (Since n-1 is equal to -1 modulo n, we have n-1 as a trivial square root of unity modulo n. Note that even if n is prime there may be nontrivial divisors of 1, for example, 2*49:=Mod(1,97).)
We can check that n is odd before applying any primality test. (A test n^2:=Mod(1,24) guarantees that n is not divisible by 2 or 3. For large n it is faster to first compute Mod(n,24) rather than n^2, or to test n directly.) Then we note that in Fermat's test the exponent n-1 is certainly a composite number because n-1 is even. So if we first find the largest power of 2 in n-1 and decompose n-1=2^r*q with q odd, then x^(n-1):=Mod(a^2^r,n) where a:=Mod(x^q,n). (Here r>=1 since n is odd.) In other words, the number Mod(x^(n-1),n) is obtained by r repeated squarings of the number a. We get the sequence a, a^2, ..., a^2^r. The last element of this sequence must be 1 if n passes the Fermat test. (If it does not pass, n is definitely a composite number.) If n passes the Fermat test, the last-but-one element a^2^(r-1) of the sequence of squares is a square root of unity modulo n. We can check whether this square root is non-trivial (i.e. not equal to 1 or -1 modulo n). If it is non-trivial, then n definitely cannot be a prime. If it is trivial and equal to 1, we can check the preceding element, and so on. If an element is equal to -1, we cannot say anything, i.e. the test passes (n is "probably a prime").
This procedure can be summarized like this:
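(What follows is a minimal sketch of this procedure in Python, given for illustration only; it is not the Yacas implementation.)

    def is_strong_probable_prime(n, b):
        """Strong (Miller-Rabin) probable-prime test of an odd n > 2 for base b."""
        q, r = n - 1, 0
        while q % 2 == 0:                # write n - 1 = q * 2^r with q odd
            q //= 2
            r += 1
        a = pow(b, q, n)                 # a = Mod(b^q, n)
        if a == 1 or a == n - 1:         # the test passes immediately
            return True
        for _ in range(r - 1):           # squarings: a, a^2, ..., a^(2^(r-1))
            a = pow(a, 2, n)
            if a == n - 1:               # reached the trivial root -1: test passes
                return True
            if a == 1:                   # previous a was a nontrivial root of 1
                return False
        return False                     # Fermat test failed, or nontrivial root of 1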
Here is a more formal definition. An odd integer n is called strongly-probably-prime for base b if b^q:=Mod(1,n) or b^(q*2^i):=Mod(n-1,n) for some i such that 0<=i<r, where q and r are such that q is odd and n-1=q*2^r.
A practical application of this procedure needs to select particular base numbers. It is advantageous (according to [Pomerance et al. 1980]) to choose prime numbers b as bases, because for a composite base b=p*q, if n is a strong pseudoprime for both p and q, then it is very probable that n is a strong pseudoprime also for b, so composite bases rarely give new information.
An additional check suggested by [Davenport 1992] is activated if r>2 (i.e. if n:=Mod(1,8) which is true for only 1/4 of all odd numbers). If i>=1 is found such that b^(q*2^i):=Mod(n-1,n), then b^(q*2^(i-1)) is a square root of -1 modulo n. If n is prime, there may be only two different square roots of -1. Therefore we should store the set of found values of roots of -1; if there are more than two such roots, then we will find some roots s1, s2 of -1 such that s1+s2!=Mod(0,n). But s1^2-s2^2:=Mod(0,n). Therefore n is definitely composite, and for example Gcd(s1+s2,n)>1 gives a nontrivial factor of n. This check costs very little computational effort but guards against some strong pseudoprimes.
Yet another small improvement comes from [Damgard et al. 1993]. They found that the strong primality test sometimes (rarely) passes on composite numbers n for more than 1/8 of all bases x<n if n is such that either 3*n+1 or 8*n+1 is a perfect square, or if n is a Carmichael number. Checking Carmichael numbers is slow, but it is easy to show that if n is a large enough prime number, then neither 3*n+1, nor 8*n+1, nor any s*n+1 with small integer s can be a perfect square. [If s*n+1=r^2, then s*n=(r-1)*(r+1).] Testing for a perfect square is quick and does not slow down the algorithm. This is however not implemented in Yacas because it seems that perfect squares are too rare for this improvement to be significant.
If an integer is not "strongly-probably-prime" for a given base b, then it is a composite number. However, the converse statement is false, i.e. "strongly-probably-prime" numbers can actually be composite. Composite strongly-probably-prime numbers for base b are called strong pseudoprimes for base b. There is a theorem that if n is composite, then among all numbers b such that 1<b<n, at most one fourth are such that n is a strong pseudoprime for base b. Therefore if n is strongly-probably-prime for many bases, then the probability for n to be composite is very small.
For numbers less than B=341550071728321, exhaustive computations have shown that there are no strong pseudoprimes simultaneously for the bases 2, 3, 5, 7, 11, 13, and 17; so the Miller-Rabin test with these seven bases gives a fast and completely reliable primality test for n<B.
In the implemented routine RabinMiller, the number of bases k is chosen to make the probability of erroneously passing the test p<10^(-25). (Note that this is not the same as the probability to give an incorrect answer, because all numbers that do not pass the test are definitely composite.) The probability for the test to pass mistakenly on a given number is found as follows. Suppose the number of bases k is fixed. Then the probability for a given composite number to pass the test is less than p[f]=4^(-k). The probability for a given number n to be prime is roughly p[p]=1/Ln(n) and to be composite p[c]=1-1/Ln(n). Prime numbers never fail the test. Therefore, the probability for the test to pass is p[f]*p[c]+p[p] and the probability to pass erroneously is p=(p[f]*p[c])/(p[f]*p[c]+p[p]).
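As an illustrative estimate (not a value taken from the implementation): since p[f]*p[c] is much smaller than p[p], this probability is approximately p[f]*p[c]/p[p], i.e. roughly 4^(-k)*Ln(n). For n around 10^100 we have Ln(n)<>230, and the requirement 4^(-k)*230<10^(-25) gives k of about 46 bases.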
Before calling MillerRabin, the function IsPrime performs two quick checks: first, for n>=4 it checks that n is not divisible by 2 or 3 (all primes larger than 3 must satisfy this); second, for n>257, it checks that n does not contain small prime factors p<=257. This is checked by evaluating the GCD of n with the precomputed product of all primes up to 257. The computation of the GCD is quick and saves time if a small prime factor is present.
There is also a function NextPrime(n) that returns the smallest prime number larger than n. This function uses a sequence 5,7,11,13,... generated by the function NextPseudoPrime. This sequence contains numbers not divisible by 2 or 3 (but perhaps divisible by 5,7,...). The function NextPseudoPrime is very fast because it does not perform a full primality test.
The function NextPrime however does check each of these pseudoprimes using IsPrime and finds the first prime number.
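A rough Python sketch of this structure (the names and the simple increment loop are only illustrative; the actual NextPseudoPrime is implemented differently):

    def next_pseudoprime(n):
        """Smallest number > n that is not divisible by 2 or 3."""
        m = n + 1
        while m % 2 == 0 or m % 3 == 0:
            m += 1
        return m

    def next_prime(n, is_prime):
        """Smallest prime > n: test only candidates from the 5, 7, 11, 13, ... sequence."""
        if n < 2: return 2
        if n < 3: return 3
        m = n
        while True:
            m = next_pseudoprime(m)      # cheap candidate generation
            if is_prime(m):              # full primality test only on candidates
                return m

    # Example (with a naive trial-division test standing in for IsPrime):
    # next_prime(100, lambda m: all(m % d for d in range(2, int(m**0.5) + 1)))  -> 101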
First we determine whether the number n contains "small" prime factors p<=257. A quick test is to find the GCD of n and the product of all primes up to 257: if the GCD is greater than 1, then n has at least one small prime factor. (The product of primes is precomputed.) If this is the case, the trial division algorithm is used: n is divided by all prime numbers p<=257 until a factor is found. NextPseudoPrime is used to generate the sequence of candidate divisors p.
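A small Python sketch of this first stage (purely illustrative; here the candidate divisors come from a precomputed list of small primes rather than from NextPseudoPrime):

    from math import gcd

    SMALL_PRIMES = [p for p in range(2, 258)
                    if all(p % d for d in range(2, int(p**0.5) + 1))]
    SMALL_PRIME_PRODUCT = 1
    for p in SMALL_PRIMES:
        SMALL_PRIME_PRODUCT *= p

    def extract_small_factors(n):
        """Divide out all prime factors p <= 257 of n, but only after one GCD with
        the precomputed product of small primes shows that such factors exist.
        Returns (list of small prime factors with repetition, remaining cofactor)."""
        factors = []
        if gcd(n, SMALL_PRIME_PRODUCT) > 1:
            for p in SMALL_PRIMES:
                while n % p == 0:
                    factors.append(p)
                    n //= p
        return factors, n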
After separating small prime factors, we test whether the number n is an integer power of a prime number, i.e. whether n=p^s for some prime number p and an integer s>=1. This is tested by the following algorithm. We already know that n is not prime and that n does not contain any small prime factors up to 257. Therefore if n=p^s, then p>257 and 2<=s<s[0]=Ln(n)/Ln(257). In other words, we only need to look for powers not greater than s[0]. This number can be approximated by the "integer logarithm" of n in base 257 (routine IntLog(n, 257)).
Now we need to check whether n is of the form p^s for s=2, 3, ..., s[0]. Note that if for example n=p^24 for some p, then the square root of n will already be an integer, n^(1/2)=p^12. Therefore it is enough to test whether n^(1/s) is an integer for all prime values of s up to s[0], and then we will definitely discover whether n is a power of some other integer. The testing is performed using the integer n-th root function IntNthRoot which quickly computes the integer part of the n-th root of an integer number. If we discover that n has an integer root p of order s, we have to check that p itself is a prime power (we use the same algorithm recursively). The number n is a prime power if and only if p is itself a prime power. If we find no integer roots of orders s<=s[0], then n is not a prime power.
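The following Python sketch illustrates this prime-power test; integer_nth_root plays the role of IntNthRoot, and is_prime stands for any primality test (both names are illustrative):

    import math

    def integer_nth_root(n, s):
        """Integer part of the s-th root of n, by Newton's iteration."""
        if n < 2:
            return n
        x = 1 << (n.bit_length() // s + 1)          # guaranteed overestimate
        while True:
            y = ((s - 1) * x + n // x ** (s - 1)) // s
            if y >= x:
                return x
            x = y

    def prime_power(n, is_prime, min_factor=2):
        """Return (p, k) if n = p^k with p prime and k >= 1, otherwise None.
        min_factor is a known lower bound on the prime factors of n; in the
        situation described above it would be 257, so that candidate exponents
        s run only up to about Ln(n)/Ln(257)."""
        if is_prime(n):
            return (n, 1)
        s_max = int(math.log(n) / math.log(min_factor)) + 1   # slack for rounding
        for s in range(2, s_max + 1):
            if not is_prime(s):                     # prime exponents are sufficient
                continue
            p = integer_nth_root(n, s)
            if p ** s == n:
                inner = prime_power(p, is_prime, min_factor)  # p must be a prime power
                return (inner[0], inner[1] * s) if inner else None
        return None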
If the number n is not a prime power, the Pollard "rho" algorithm is applied [Pollard 1978]. The Pollard "rho" algorithm takes an irreducible polynomial, e.g. p(x)=x^2+1, and builds a sequence of integers x[k+1]:=Mod(p(x[k]),n), starting from x[0]=2. For each k, the value x[2*k]-x[k] is tried as a candidate for having a common factor with n: the GCD of x[2*k]-x[k] with n is computed, and if Gcd(x[2*k]-x[k],n)>1, then this GCD is a nontrivial divisor of n (unless it is equal to n itself, which signals a loop; see below).
The idea behind the "rho" algorithm is to generate an effectively random sequence of trial numbers t[k] that may have a common factor with n. The efficiency of this algorithm is determined by the size of the smallest factor p of n. Suppose p is the smallest prime factor of n and suppose we generate a random sequence of integers t[k] such that 1<=t[k]<n. It is clear that, on the average, a fraction 1/p of these integers will be divisible by p. Therefore (if the t[k] were truly random) we should need on the average p tries until we find a t[k] which is accidentally divisible by p. In practice, of course, we do not use a truly random sequence, and the number of tries before we find a factor p may be significantly different from p. In fact, because t[k]=x[2*k]-x[k] compares two points of the same pseudo-random walk modulo p, a repetition modulo p (and hence a nontrivial GCD) is expected after roughly Sqrt(p) steps by the "birthday paradox"; this is what makes the method practical. The quadratic polynomial seems to help reduce the number of tries in most cases.
But the Pollard "rho" algorithm may actually enter an infinite loop when the sequence x[k] repeats itself without giving any factors of n. For example, the unmodified "rho" algorithm starting from x[0]=2 loops on the number 703. The loop is detected by comparing x[2*k] and x[k]. When these two quantities become equal to each other for the first time, the loop may not yet have occurred so the value of GCD is set to 1 and the sequence is continued. But when the equality of x[2*k] and x[k] occurs many times, it indicates that the algorithm has entered a loop. A solution is to randomly choose a different starting number x[0] when a loop occurs and try factoring again, and keep trying new random starting numbers between 1 and n until a non-looping sequence is found. The current implementation stops after 100 restart attempts and prints an error message, "failed to factorize number".
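A compact Python sketch of the "rho" method with restarts (this version restarts as soon as x[2*k] becomes equal to x[k], whereas the implementation described above is more tolerant and continues for a while before declaring a loop):

    import random
    from math import gcd

    def pollard_rho(n, max_restarts=100):
        """Find a nontrivial factor of a composite n that is not a prime power."""
        if n % 2 == 0:
            return 2
        x0 = 2                               # first attempt starts from x[0] = 2
        for _ in range(max_restarts):
            x = y = x0                       # y is the "fast" iterate x[2k]
            while True:
                x = (x * x + 1) % n
                y = (y * y + 1) % n
                y = (y * y + 1) % n
                d = gcd(abs(y - x), n)
                if d == n:                   # x[2k] = x[k]: the sequence has looped
                    break
                if d > 1:
                    return d                 # nontrivial factor found
            x0 = random.randrange(1, n)      # restart with a new random x[0]
        raise RuntimeError("failed to factorize number")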
A better (and faster) integer factoring algorithm needs to be implemented in Yacas.
Modern factoring algorithms are all probabilistic (i.e. they do not guarantee a particular finishing time) and fall into three categories:
There is ample literature describing these algorithms.
The Legendre symbol (m/n), defined for an odd prime n, is equal to +1 if m is a quadratic residue modulo n and -1 if it is a non-residue. The Legendre symbol is equal to 0 if m is divisible by n.
The Jacobi symbol [m/n;] is defined as the product of the Legendre symbols of m with respect to the prime factors f[i] of n=f[1]^p[1]*...*f[s]^p[s], namely [m/n;]:=(m/f[1])^p[1]*...*(m/f[s])^p[s].
The Jacobi symbol can be efficiently computed without knowing the full factorization of the number n. The currently used method is based on the following four identities for the Jacobi symbol:
Using these identities, we can recursively reduce the computation of the Jacobi symbol [a/b;] to the computation of the Jacobi symbol for numbers that are on the average half as large. This is similar to the fast "binary" Euclidean algorithm for the computation of the GCD. The number of levels of recursion is logarithmic in the arguments a, b.
More formally, the Jacobi symbol [a/b;] is computed by the following algorithm. (The number b must be an odd positive integer, otherwise the result is undefined.)
Note that the arguments a, b may be very large integers and we should avoid performing multiplications of these numbers. We can compute (-1)^((b-1)*(c-1)/4) without multiplications. This expression is equal to 1 if either b or c is equal to 1 mod 4; it is equal to -1 only if both b and c are equal to 3 mod 4. Also, (-1)^((b^2-1)/8) is equal to 1 if either b:=1 or b:=7 mod 8, and it is equal to -1 if b:=3 or b:=5 mod 8. Of course, if s is even, none of this needs to be computed.
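A Python sketch of such a binary-style computation of the Jacobi symbol (illustrative, not the Yacas code); it relies only on the standard identities mentioned above, namely the value of (2/b) and quadratic reciprocity:

    def jacobi(a, b):
        """Jacobi symbol [a/b;] for an odd positive integer b, without factoring b."""
        if b <= 0 or b % 2 == 0:
            raise ValueError("b must be an odd positive integer")
        a %= b
        result = 1
        while a != 0:
            while a % 2 == 0:                # extract factors of 2 from a
                a //= 2
                if b % 8 in (3, 5):          # (2/b) = -1 exactly when b = 3 or 5 mod 8
                    result = -result
            a, b = b, a                      # quadratic reciprocity
            if a % 4 == 3 and b % 4 == 3:    # sign changes only if both are 3 mod 4
                result = -result
            a %= b
        return result if b == 1 else 0       # value 0 when Gcd(a,b) > 1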
The first term of the series gives, at large n, the Hardy-Ramanujan asymptotic estimate, P(n)<>Exp(Pi*Sqrt((2*n)/3))/(4*n*Sqrt(3)).
There exist estimates of the error of this series, but they are complicated. The series is sufficiently well-behaved and it is easier to determine the truncation point heuristically. Each term of the series is either 0 (when all terms in A(k,n) happen to cancel) or has a magnitude which is not very much larger than the magnitude of the previous nonzero term. (But the series is not actually monotonic.) In the current implementation, the series is truncated when Abs(A(k,n)*S(n)*Sqrt(k)) becomes smaller than 0.1 for the first time; in any case, the maximum number of calculated terms is 5+Sqrt(n)/2. One can show that asymptotically for large n, the required number of terms is less than mu/Ln(mu), where mu:=Pi*Sqrt((2*n)/3).
[Ahlgren et al. 2001] mention that there exist explicit constants B[1] and B[2] such that
The floating-point precision necessary to obtain the integer result must be at least the number of digits in the first term P[0](n), i.e. about Pi*Sqrt((2*n)/3)/Ln(10) decimal digits.
The RHR algorithm requires O((n/Ln(n))^(3/2)) operations, of which O(n/Ln(n)) are long multiplications at precision Prec<>O(Sqrt(n)) digits. The computational cost is therefore O(n/Ln(n)*M(Sqrt(n))).
The sum is actually not over all k up to n but is truncated when the pentagonal sequence grows above n. Therefore, it contains only O(Sqrt(n)) terms. However, computing P(n) using the recurrence relation requires computing and storing P(k) for all 1<=k<=n. No long multiplications are necessary, but the number of long additions of numbers with Prec<>O(Sqrt(n)) digits is O(n^(3/2)). Therefore the computational cost is O(n^2). This is asymptotically slower than the RHR algorithm even if a slow O(n^2) multiplication is used. With internal Yacas math, the recurrence relation is faster for n<300 or so, and for larger n the RHR algorithm is faster.
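For comparison, here is a short Python sketch of the recurrence-relation method (Euler's pentagonal-number recurrence); it computes and stores the whole table P(0), ..., P(n):

    def partitions_upto(n):
        """P(0..n) from P(m) = Sum over k>=1 of (-1)^(k+1) *
        (P(m - k*(3*k-1)/2) + P(m - k*(3*k+1)/2)), with P(0) = 1."""
        P = [0] * (n + 1)
        P[0] = 1
        for m in range(1, n + 1):
            total, k = 0, 1
            while True:
                g1 = k * (3 * k - 1) // 2        # generalized pentagonal numbers
                g2 = k * (3 * k + 1) // 2
                if g1 > m:                       # truncate: pentagonal numbers exceed m
                    break
                sign = 1 if k % 2 == 1 else -1
                total += sign * P[m - g1]
                if g2 <= m:
                    total += sign * P[m - g2]
                k += 1
            P[m] = total
        return P

    # Example: partitions_upto(5) == [1, 1, 2, 3, 5, 7]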
Let p[1]^k[1]*...*p[r]^k[r] be the prime factorization of n, where r is the number of distinct prime factors and k[i] is the multiplicity of the i-th factor. Then
Divisors(n)=(k[1]+1)*...*(k[r]+1),
DivisorsSum(n)=((p[1]^(k[1]+1)-1)/(p[1]-1))*...*((p[r]^(k[r]+1)-1)/(p[r]-1)).
The functions ProperDivisors and ProperDivisorsSum do the same as the above functions, except that they do not count the number n itself as a divisor of n. These functions are defined by:
ProperDivisors(n)=Divisors(n)-1,
ProperDivisorsSum(n)=DivisorsSum(n)-n.
Another number-theoretic function is Moebius, defined as follows: Moebius(n)=(-1)^r if no prime factors of n are repeated, Moebius(n)=0 if some factors are repeated, and Moebius(n)=1 if n=1. (Here r is the number of distinct prime factors of n.) This again requires factoring the number n completely and investigating the properties of its prime factors. From the definition, it can be seen that if n is prime, then Moebius(n)=-1. The predicate IsSquareFree(n) then reduces to Moebius(n)!=0, which means that no prime factors of n are repeated.
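Given the complete factorization, these functions reduce to simple products over the prime factors. A small Python sketch (the factorization is passed in as a list of (prime, multiplicity) pairs; the function name is illustrative):

    def divisor_functions(factorization):
        """Return (number of divisors, sum of divisors, Moebius) of n,
        given the factorization of n as [(p[1], k[1]), ..., (p[r], k[r])]:
        Divisors(n)    = (k[1]+1)*...*(k[r]+1),
        DivisorsSum(n) = product of (p[i]^(k[i]+1) - 1)/(p[i] - 1),
        Moebius(n)     = (-1)^r if all k[i] = 1, and 0 otherwise."""
        num, total, moebius = 1, 1, 1
        for p, k in factorization:
            num *= k + 1
            total *= (p ** (k + 1) - 1) // (p - 1)
            moebius = 0 if k > 1 else -moebius
        return num, total, moebius

    # Example: 12 = 2^2 * 3 has 6 divisors (1,2,3,4,6,12) with sum 28:
    # divisor_functions([(2, 2), (3, 1)]) == (6, 28, 0)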
The function GaussianNorm computes the norm N(z)=a^2+b^2 of a Gaussian integer z=a+b*I. The norm plays a fundamental role in the arithmetic of Gaussian integers, since it has the multiplicative property N(z*w)=N(z)*N(w).
A unit of a ring is an element that divides any other element of the ring. There are four units in the Gaussian integers: 1, -1, I, -I. They are exactly the Gaussian integers whose norm is 1. The predicate IsGaussianUnit tests for a Gaussian unit.
Two Gaussian integers z and w are "associated" if z/w is a unit. For example, 2+I and -1+2*I are associated.
A Gaussian integer is called prime if it is divisible only by the units and by its own associates. It can be shown that the primes in the ring of Gaussian integers are: 1+I and its associates; the rational primes p with p:=Mod(3,4), together with their associates; and the Gaussian integers a+b*I with a^2+b^2=p for rational primes p with p:=Mod(1,4).
For example, 7 is prime as a Gaussian integer, while 5 is not, since 5=(2+I)*(2-I). Here 2+I is a Gaussian prime.
The ring of Gaussian integers is an example of a Euclidean ring, i.e. a ring in which there is a division algorithm. This makes it possible to compute the greatest common divisor using Euclid's algorithm. This is what the function GaussianGcd computes.
As a consequence, one can prove a version of the fundamental theorem of arithmetic for this ring: The expression of a Gaussian integer as a product of primes is unique, apart from the order of primes, the presence of units, and the ambiguities between associated primes.
The function GaussianFactors finds this expression of a Gaussian integer z as the product of Gaussian primes, and returns the result as a list of pairs {p,e}, where p is a Gaussian prime and e is the corresponding exponent. To do that, an auxiliary function called GaussianFactorPrime is used. This function finds a factor of a rational prime of the form p=4*n+1. We compute a:=Mod((2*n)!,p). By Wilson's theorem a^2 is congruent to -1 (mod p), and it follows that p divides (a+I)*(a-I)=a^2+1 in the Gaussian integers. The desired factor is then the GaussianGcd of a+I and p. If the result is c+d*I, then p=c^2+d^2.
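A small Python sketch of GaussianGcd and of this factor-finding trick, using exact integer arithmetic on (real, imaginary) pairs (the function names are illustrative):

    def gaussian_gcd(z, w):
        """Euclid's algorithm for Gaussian integers given as (re, im) pairs.
        Each step replaces (z, w) by (w, z - q*w), where q is the quotient z/w
        rounded to the nearest Gaussian integer, so that N(z - q*w) < N(w)."""
        while w != (0, 0):
            (a, b), (c, d) = z, w
            norm = c * c + d * d
            qr = (a * c + b * d + norm // 2) // norm   # rounded Re(z/w)
            qi = (b * c - a * d + norm // 2) // norm   # rounded Im(z/w)
            z, w = w, (a - qr * c + qi * d, b - qr * d - qi * c)
        return z

    def gaussian_factor_prime(p):
        """For a rational prime p = 4*n + 1, return (c, d) with c^2 + d^2 = p.
        Wilson's theorem gives a = (2*n)! mod p with a^2 = -1 (mod p), so p
        divides (a + I)*(a - I) and GaussianGcd(a + I, p) is the desired factor."""
        n = (p - 1) // 4
        a = 1
        for k in range(2, 2 * n + 1):        # a = (2n)! mod p
            a = (a * k) % p
        return gaussian_gcd((a, 1), (p, 0))

    # Example: gaussian_factor_prime(13) returns a pair (c, d) with c*c + d*d == 13.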
If z is a rational (i.e. real) integer, we factor z in the Gaussian integers by first factoring it in the rational integers, and after that by factoring each of the integer prime factors in the Gaussian integers.
If z is not a rational integer, we find its possible Gaussian prime factors by first factoring its norm N(z) and then computing the exponent of each of the factors of N(z) in the decomposition of z.