Slides from a recent presentation on homomorphic encryption schemes based on approximate GCD, learning with errors (LWE) and ring LWE assumptions:

Homomorphic Encryption from the LWE Assumption

Slides from a recent talk on secure computation and consensus:

Oblivious Transfer / Secure Multi Party Computation / Consensus

At some point, you may find yourself questioning the security of the NIST-specified prime curves. Such thoughts are not uncommon given the compromised status of Dual_EC_DRBG. More significantly, the NSA has dropped NIST P-256 and current guidance is to use P-384:

https://www.nsa.gov/ia/programs/suiteb_cryptography/

Interestingly, the agency has also stated that “With respect to IAD customers using large, unclassified PKI systems, remaining at 112 bits of security (i.e. 2048-bit RSA) may be preferable (or sometimes necessary due to budget constraints) for the near-term in anticipation of deploying quantum resistant asymmetric algorithms upon their first availability”.

The intent of this post is to examine some theories that weak curves were deliberately specified through the selection of “weak seed” inputs to the SHA-1 function as part of the ANSI / NIST curve generation process. The reasoning presented expands on the implied requirements around the number of weak curves that must exist over 256-bit prime fields in order for such theories to hold.

Weak Curves:

It is widely known that certain classes of “weak curves” exist for which the standard ECDLP is reduced to a simpler problem. For example, attacks on curves which leverage pairings are documented in the literature. Such attacks leverage bilinear maps \langle \cdot,\cdot\rangle: \mathbb{G}_{1}\times \mathbb{G}_{2} \rightarrow \mathbb{J} from abelian groups \mathbb{G}_{1} and \mathbb{G}_{2} that evaluate to elements in some other abelian group \mathbb{J}. For the Weil and Tate pairings, we map from groups in E(\mathbb{F}_{p^{k}}) to the multiplicative group \mathbb{F}^{*}_{p^{k}}, where \mathbb{F}_{p^{k}} is a finite extension of \mathbb{F}_{p} with k > 1. Pairings, when applied to so-called supersingular curves, which have embedding degree at most 6, reduce the standard ECDLP to the subexponential DLP in multiplicative groups. Another class of weak curves are the so-called prime-field anomalous curves with \sharp E(\mathbb{F}_{p}) = p, though in general we don’t expect such an equality to hold (the Hasse bound tells us that the most \sharp E(\mathbb{F}_{p}) can differ from p + 1 is by the trace of Frobenius t, where \mid t \mid \le 2 \sqrt{p}).
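As a concrete, toy-scale illustration of these checks, the following Python sketch (my own, with purely illustrative parameters p = 11, a = 0, b = 1) brute-forces the group order of a small curve, tests the anomalous condition \sharp E(\mathbb{F}_{p}) = p, and computes the embedding degree. In practice the embedding degree is computed with respect to the large prime-order subgroup rather than the full group order.

# Toy illustration of the weak-curve checks discussed above, on the small
# curve y^2 = x^3 + a*x + b over F_p. Parameters are chosen only so that the
# brute-force point count stays cheap; this is not production code.

def count_points(p, a, b):
    """Naive #E(F_p) via the Legendre symbol; includes the point at infinity."""
    count = 1
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            count += 1                        # one point with y = 0
        elif pow(rhs, (p - 1) // 2, p) == 1:
            count += 2                        # two points (y, -y)
    return count

def embedding_degree(p, n, k_max=50):
    """Smallest k with n | p^k - 1, i.e. the extension field degree that a
    Weil/Tate pairing maps an order-n subgroup into."""
    for k in range(1, k_max + 1):
        if pow(p, k, n) == 1:
            return k
    return None

p, a, b = 11, 0, 1                   # y^2 = x^3 + 1 over F_11 (supersingular)
n = count_points(p, a, b)
print("group order:", n)             # 12 = p + 1, i.e. trace of Frobenius t = 0
print("anomalous (#E == p):", n == p)
print("embedding degree:", embedding_degree(p, n))   # small -> pairing transfer applies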

Koblitz and Menezes (KM2015):

For the rest of this post, we’ll dissect some parts of a post-Snowden defense of ECC authored by Koblitz and Menezes, two mathematicians of high repute (Koblitz in particular is the co-inventor of elliptic curve cryptography, author of the landmark text “A Course in Number Theory and Cryptography”, and a noted sceptic of reductionist methods / provable security). The ECC defense is entitled “A Riddle Wrapped in an Enigma”, published here –

https://eprint.iacr.org/2015/1018.pdf

The line of thought we’re concerned with traces the lineage of the NIST prime curves to NSA involvement in the ANSI X9F1 committee. The exact nature of that involvement is not well outlined in the paper, but the implication is that the NSA was willing to generate curves on behalf of the ANSI effort and, at the time, had the technology to compute the orders of the resulting groups.

In order to ensure that known weak curves (known to the NSA, that is) were not selected, it was decided that curve coefficients be selected by passing a seed to the SHA-1 function. The security of this method would be assured given the fact that it’s hard to invert SHA-1, i.e., in the event that the NSA was aware of certain weak curves, it would be hard to find a SHA-1 pre-image that would result in the required coefficients for the weak curve.

So this brings us to one of the primary arguments against the security of the prime curves, centered on the provenance / legitimacy of the SHA-1 seeds used for curve generation. In the vein of classic conspiracy theories, the argument goes that the NSA could have known about “many” weak curves, and repeatedly picked random seeds until one of the seeds resulted in a weak curve. Koblitz and Menezes in the referenced paper refute this line of thought by arguing that in order to find a seed for a weak prime curve in 2^{48} SHA-1 trials, there would have to be approximately 2^{209} weak curves over 256-bit prime fields. The argument is as follows:

For a field \mathbb{F}_{p}, the number of “isomorphism classes” of elliptic curves over the field is about 2p. Therefore, for a 256-bit prime there are about 2^{257} isomorphism classes of elliptic curves over \mathbb{F}_{p}. Also, for a given curve, we would prefer \sharp E(\mathbb{F}_{p}) to be as close to a prime as possible, since the complexity of ECDLP is determined by the largest prime-order subgroup of E(\mathbb{F}_{p}). Put another way, an attacker can use the group order to solve ECDLP in subgroups whose orders are co-prime and then map back to the full group through the Chinese Remainder Theorem (CRT). It is therefore important to know the group order, which circa 1997 was purportedly a computation only within reach of the NSA; Koblitz and Menezes (KM2015) imply this as one of the reasons behind the NSA’s involvement in the ANSI process.

With E(\mathbb{F}_{p}) we’re interested primarily in groups of prime order (given the CRT-derived weaknesses). Picking up the analysis of KM2015 from where we left off, for a 256-bit prime p, the prime number theorem tells us that there are around p/\ln p = 2^{256}/\ln(2^{256}) = 2^{256}/(256 \ln 2) \approx 2^{248} primes less than p. Put another way, a proportion of about 2^{-8} of 256-bit numbers are prime. This tells us that the number of strong curves over the field is about 2^{-8} of 2^{257} (since there are about 2^{257} isomorphism classes of curves over a 256-bit field), which puts the number of strong curves at about 2^{249}. Now the argument goes that if the NSA knew how to break a proportion 2^{-40} of the strong curves, then there would be 2^{249}/2^{40} = 2^{209} curves that the NSA could break but that the rest of the world believes to be secure. In this case, 2^{209}/2^{257} = 1/2^{48} is the proportion of all curves that are insecure but known to be so only to the NSA. So if you go with the math presented, the NSA could have expected to find a known weak curve after about 2^{48} seed input trials to SHA-1.
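A quick numerical sanity check of this arithmetic (a sketch of my own, working in base-2 logarithms, not taken verbatim from KM2015):

# Back-of-envelope check of the counting argument above, in base-2 logarithms.
import math

log2_classes = 257                                   # ~2p isomorphism classes, p a 256-bit prime
log2_inv_prime_density = math.log2(256 * math.log(2))
print(log2_inv_prime_density)                        # ~7.47, i.e. roughly 2^-8 of values are prime

log2_strong = log2_classes - 8                       # ~2^249 curves with (near) prime order
log2_weak = log2_strong - 40                         # if a 2^-40 fraction of those were breakable
print(log2_weak)                                     # 209 -> 2^209 "secretly weak" curves
print(log2_classes - log2_weak)                      # 48  -> one weak curve per ~2^48 random seeds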

KM2015 then proceed to argue that, for a 256-bit prime, 2^{209} is an extremely large number of weak curves to have gone undetected by the community for 18 years. The argument goes: assume the NSA could conduct weak-seed searches with 2^{48} work; if we take this to be true, we can then work backward with the prime number theorem and the counts above to arrive at the number of weak curves required for the NSA to have “plausibly” found seeds producing desired weak curves in 2^{48} trials. A sketch of what such a search would look like follows.
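The sketch below is deliberately schematic: the coefficient derivation is a simplification (the real ANSI X9.62 / FIPS 186 routine expands the seed through several SHA-1 invocations and a consistency check), and curve_is_weak_to_us is a placeholder for cryptanalytic knowledge that, of course, nobody has publicly demonstrated.

# Hypothetical sketch of the "weak seed" search attributed to the NSA in the
# argument above. Nothing here is a real attack.
import hashlib
import os

P256_PRIME = 2**256 - 2**224 + 2**192 + 2**96 - 1    # the NIST P-256 field prime

def candidate_b(seed):
    """Schematic seed -> coefficient map: hash the seed and reduce mod p."""
    digest = hashlib.sha1(seed).digest()
    return int.from_bytes(digest, "big") % P256_PRIME

def curve_is_weak_to_us(b):
    """Placeholder for membership in a hypothetical class of 2^209 weak curves."""
    return False                                      # no such class is publicly known

def search(trials):
    for _ in range(trials):
        seed = os.urandom(20)                         # 160-bit seed, as for the NIST curves
        if curve_is_weak_to_us(candidate_b(seed)):
            return seed                               # a seed that "verifiably at random"
    return None                                       # yields a weak curve

# The argument caps the feasible number of trials at ~2^48; we run far fewer here.
print(search(trials=1000))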

Deep Crack: Bespoke Silicon Circa 1998

Though KM2015 go on to cite other possible reasons for dropping P-256, including adaptations of Bernstein and Lange’s so-called n^{1/3} precomputation-based attacks, our focus here was to understand the capping of possible NSA SHA-1 trials / hashing power at 2^{48}, and the derived requirement of around 2^{209} weak curves. Should such a bound on available hashing power be considered high or low given the state of 1997 technology? Alas, we must draw our own conclusions, but we present one important benchmark on the state of bespoke silicon in that era. Specifically, we cite the case of the EFF’s 1998 DES Cracker, colloquially called “Deep Crack”. This was a machine that brute-forced the 2^{56} DES key space in a matter of days. Deep Crack was designed and built by Kocher et al. at CRI with 1,856 custom ASICs. The cost of the DES cracker is stated at $250,000 in 1998 dollars.
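For a rough sense of scale (my own estimate, not from KM2015), one can compare the trial counts involved, assuming the commonly cited figure of roughly nine days for a full DES keyspace sweep. Note this only benchmarks raw trial throughput; a curve trial also requires counting points on the candidate curve, which is vastly more expensive than a single DES key test.

# Rough, assumption-laden comparison of 2^48 trials against Deep Crack's 2^56 sweep.
FULL_SWEEP_DAYS = 9                                   # assumed full-keyspace figure
keys_per_sec = 2**56 / (FULL_SWEEP_DAYS * 86400)
print(f"Deep Crack throughput ~ {keys_per_sec:.2e} keys/sec")       # ~9e10
print(f"2^48 trials at that rate ~ {2**48 / keys_per_sec / 3600:.1f} hours")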

Conclusions:

It is not the intent of the author to promote the view that there are a large number of weak curves, nor that the NSA exploited this fact to pre-determine SHA-1 seeds in order to ensure the selection of weak curves and their eventual standardization through NIST. The situation with weak curves is quite different from that of Dual_EC_DRBG, since in some sense eventual knowledge of such weak curves would confer an “equal opportunity” advantage on anyone with the cryptanalytic ability, rather than bestowing an asymmetric advantage on any one party. Rather, the point of this post is merely to trace the reasoning around the existence of weak curves and provide more insight into quantifying the size of the purported classes of weak curves. If one allows the NSA about 2^{80} hashing power in 1997 rather than 2^{48}, then by the reasoning outlined above this would imply the existence of about 2^{257}/2^{80} = 2^{177} weak curves, which is still a very large number and seems unlikely. In concluding, we echo the remarks of KM2015:

“Although there is no plausible reason to mistrust the NIST curves, there are two reasons why it is nevertheless preferable to use other curves .. The first reason is public perception — even though it is unlikely that the NIST curves are unsafe, there has been enough speculation to the contrary that to keep everybody happy one might as well avoid the NIST curves. The second reason is that the other curves do have some advantages … It is no discredit to the NIST curves that more efficient alternatives are now available — after all, it has been 18 years, and it would be surprising if no one had come up with anything better by now”

 

Fully homomorphic encryption has evolved from Craig Gentry’s original work, based on worst-case problems in ideal lattices and on sparse subset sums, to other schemes such as the BGV Ring-LWE scheme, which is based on approximate lattice SVP. This scheme is characterized by aggregate plaintext spaces obtained by reducing mod p the ring of integers \mathcal{O}_{\mathbb{Q}(\zeta_m)} \cong \mathbb{Z}[X]/\langle \Phi_m(X) \rangle of a cyclotomic number field extension \mathbb{Q}(\zeta_m)/\mathbb{Q}, namely \mathbb{A}_p := \mathbb{Z}[X]/\langle \Phi_m(X), p \rangle (where \zeta_m is a primitive m‘th root of unity and the associated lattice dimension is \phi(m), the degree of \Phi_m), together with the Galois group Gal(\mathbb{Q}(\zeta_m)/\mathbb{Q}). Other than the Ring-LWE based BGV, named after Brakerski, Gentry and Vaikuntanathan, there is also the “LTV” scheme of Lopez-Alt, Tromer and Vaikuntanathan, based on the NTRU problem, and Brakerski’s “scale invariant” scheme, which does not utilize modulus switching during homomorphic evaluation. With the rise of the above “second generation” schemes, we pause and note the most remarkable aspect of Gentry’s original work, which remains the bootstrapping theorem: efficient fully homomorphic encryption can be constructed from a somewhat homomorphic scheme that “compactly” evaluates its own decryption circuit. Formally this can be stated in the following way:

Consider a ‘somewhat’ homomorphic scheme \varepsilon with plaintext space P = {0,1}, a ciphertext \Psi_1 that encrypts a message \pi under a public key pk_1, and a secret key sk_1 associated with pk_1. The key sk_1 is then encrypted under a second public key pk_2. Denote the encryption of the jth bit of sk_1 under pk_2 as \bar{sk_1}_j.  Define a ‘re-encryption’ procedure for the ciphertext \Psi_1 as the following –

Recrypt_\varepsilon(pk_2,D_\varepsilon, \langle \bar{sk_1}_j \rangle, \Psi_1) :

1) \bar{\Psi_1}_j \gets Encrypt_\varepsilon(pk_2, {\Psi_1}_j)

2) \Psi_2 \gets Evaluate_\varepsilon(pk_2, D_\varepsilon, \langle \langle \bar{sk_1}_j \rangle, \langle \bar{\Psi_1}_j \rangle \rangle)

Where D_\varepsilon denotes the scheme’s decryption circuit. The algorithm takes the bits of the ciphertext \Psi_1 (which is an encryption of the message \pi under pk_1) and encrypts them under the ‘outer’ encryption key pk_2 to yield the ciphertext bits \bar{\Psi_1}_j. The decryption circuit is then homomorphically evaluated on the ciphered decryption secret \bar{sk_1} (the secret key sk_1 encrypted under pk_2) and the ‘doubly’ encrypted message ciphertext \bar{\Psi_1}. This latter step removes the ‘inner’ encryption of the message \pi under the key pk_1, thereby yielding the ciphertext \Psi_2, which is just the encryption of \pi under pk_2. The bootstrapping theorem states that such an algorithm can be used as an effective noise reduction mechanism to create a fully homomorphic encryption scheme from a somewhat homomorphic one, so long as the decryption circuit is “compact” and does not increase noise components to a level that results in ciphertext “wraparound” in the underlying ring. Another way to say this is that the scheme cannot work if the magnitude of the ciphertext noise results in reductions modulo q during decryption, where q is a parameter that characterizes the ciphertext space. Interestingly, the decryption circuit associated with Gentry’s original somewhat homomorphic scheme was not sufficiently compact for bootstrapping fully homomorphic encryption. The circuit was therefore “squashed”, along with an approach where the party encrypting data initiates the homomorphic decryption process.
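Schematically, the correctness of Recrypt can be summarized as follows (my own summary of the step just described). Homomorphic evaluation of the decryption circuit guarantees that

Decrypt_\varepsilon(sk_2, \Psi_2) = D_\varepsilon(sk_1, \Psi_1) = Decrypt_\varepsilon(sk_1, \Psi_1) = \pi

so \Psi_2 is indeed an encryption of \pi under pk_2, and its noise is determined by the fixed cost of evaluating D_\varepsilon rather than by whatever noise had accumulated in \Psi_1.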

In the context of privacy preserving outsourced computation, the Recrypt procedure described would typically be evaluated by an untrusted counterparty, and therefore it must not compromise the security of the secret keys. This is the case since \bar{sk_1}, as used by the decryption circuit, is an encryption of the secret key rather than the secret key itself. [Todo: discuss circular security and the security of key dependent messages]

Let’s take a step back and consider some essential aspects of the problem and of Gentry’s original scheme. Fully homomorphic encryption (FHE) schemes allow for the computation of arbitrary functions on encrypted data, viz. you can do anything to the data that can be efficiently expressed as a circuit. Such schemes had eluded cryptographers for at least three decades. In 2009 Craig Gentry published an initial scheme based on the ideal coset problem instantiated over ideal lattices, that is, lattices corresponding to ideals in polynomial rings of the form \mathbb{A} = \mathbb{Z}[X] / \Phi_m(X), consisting of residues of integer polynomials modulo the m'th cyclotomic polynomial \Phi_m(X) of degree \phi(m).

Specifically, in his initial somewhat homomorphic construction, Gentry employs computational assumptions around the ideal coset problem (ICP); when instantiated over ideal lattices, this is equivalent to the so-called bounded distance decoding problem (BDDP). The second computational assumption involves sparse subset sums, and arises out of the requirement to ‘squash’ the decryption circuit in order to address its essential non-compactness. At the time, lattices were also chosen to minimize the complexity of the decryption circuit and facilitate the bootstrap while maximizing the scheme’s evaluative capacity: decryption in lattice based schemes is typically dominated by matrix-vector multiplication (which lies in NC1). Contrast this with RSA, for instance, which requires evaluating modular exponentiations.

To recap, a central aspect of Gentry’s result is precisely bootstrapping, which brings us to US patent application 2011/0110525 A1, published May 12, 2011, “Fully homomorphic encryption method based on a bootstrappable encryption scheme, computer program and apparatus” (assignee IBM). Within the detailed description of the invention, the following is stated: “It is shown that if the decryption circuit of a somewhat homomorphic encryption scheme is shallow enough, in particular, if it is shallow enough to be evaluated homomorphically by the somewhat homomorphic scheme itself (a self-referential property), then this somewhat homomorphic scheme becomes “bootstrappable”, and can be used to construct a fully homomorphic scheme that can evaluate circuits of arbitrary depth.” The description within the remaining text of the patent is essentially a distillation of Gentry’s 2009 thesis. So on the face of it, it does look like bootstrapping is covered by the patent.

There is the lingering question of the applicability of this (or other) patents to the “leveled” scheme of Brakerski, Gentry and Vaikuntanathan (BGV) published in 2011. This is a fully homomorphic scheme based on the learning with errors (LWE) or Ring-LWE (RLWE) problems. Rather than using bootstrapping as a noise management technique, BGV starts with somewhat homomorphic encryption and uses tensor products along with processes called re-linearization and modulus switching to maintain the relative magnitude of ciphertext noise during homomorphic evaluation. It does, however, propose bootstrapping as an optimization for evaluating deeper circuits that would otherwise, in the purely leveled scheme, require very large parameter choices for the “aggregate” plaintext / ciphertext rings.

Concerning references and the attribution of results – the work discussed spans multiple groups and individuals including Craig Gentry, Vinod Vaikuntanathan, Shai Halevi, Zvika Brakerski, Nigel Smart, F. Vercauteren, Adriana Lopez-Alt and Eran Tromer to name a few.

This post picks up at the point of considering BGV (Brakerski, Gentry, Vaikuntanathan) encryption variants based on Ring-LWE. Though it may seem natural to start by considering the BGV enciphering function, I will avoid discussing it here, since it is actually quite simple in principle and essentially consists of polynomial mixing and sampling from an error distribution. It is also described well in a post on Windows on Theory by Boaz Barak and Zvika Brakerski, which can be found here:

http://windowsontheory.org/2012/05/02/building-the-swiss-army-knife/

Instead, I’d like to delve into the plaintext algebra of the “Ring-LWE” BGV variant in order to understand the encoding / representation of plaintext bits in the scheme. In this post, we consider the idea of batched homomorphic operations and so-called ‘plaintext slots’, along with some of the basic algebra that supports the specification of plaintext slots.

In particular, we consider the Ring-LWE BGV variant defined over rings of form:

\mathbb{A} = \mathbb{Z}[X] / \Phi_m(X) where m is a parameter and \Phi_m(X) is the m'th cyclotomic polynomial, whose roots are the primitive m'th roots of unity. The native plaintext space for this variant is taken to be the ring:

\mathbb{A}_2 = \mathbb{A}/2\mathbb{A}

Note that for odd m the cyclotomic polynomial factors mod 2 into distinct irreducible factors of equal degree (the equal degrees follow partially from the fact that the cyclotomic extension \mathbb{Q}(\zeta_m)/\mathbb{Q} is Galois):

\Phi_m(X) = F_1(X) \cdot F_2(X) \cdots F_r(X) \mod 2

Let’s cover some background around Chinese Remaindering and homomorphisms of rings. Consider F(X) \in \mathbb{F}_2[X], a monic polynomial of degree N that splits into r distinct irreducible factors of degree d = N/r, viz. F(X)=\prod_{i = 1}^{r}F_i(X). (In the case of F(X) = \Phi_m(X) mod 2 we have N = \phi(m), so d \times r = \phi(m).) Let A denote the ring

A = \mathbb{F}_2[X]/(F(X))

which is the quotient of \mathbb{F}_2[X] modulo the ideal generated by F(X)

By the Chinese remainder theorem, there is the isomorphism:

A \cong \Bbb{F}_2[X]/F_1(X) \times \Bbb{F}_2[X]/F_2(X) \times \dots \times \Bbb{F}_2[X]/F_r(X)

\cong \Bbb{F}_{2^d} \times \dots \times \Bbb{F}_{2^d}

So A is isomorphic to a product of r copies of \Bbb{F}_{2^d} (fields of d-bit strings).

This captures the central idea behind batching homomorphic operations. We start by encoding individual bits in the r distinct plaintext fields \Bbb{F}_{2^d}. More precisely, we are not confined to representing bits, but can encode any element of the field. For example, if 8 divides d, then we can represent a byte, since in this case there is an embedding of \mathbb{F}_{2^8} in \mathbb{F}_{2^d}. Then, rather than evaluating functions on the individual bits or field elements, we use the isomorphism above and work in the ring A. Evaluating a function once in A implicitly evaluates the function in parallel over the r distinct plaintext spaces \Bbb{F}_{2^d}. The ring \mathbb{A}_2 is typically referred to as the “aggregate” plaintext space in BGV, and the sub-rings \mathbb{Z}_2[X]/F_i(X) are referred to as the “embedded” plaintext spaces or “slots”.
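As a small worked example of the slot structure (with an illustrative choice of m = 15, not a parameter from any actual BGV instantiation), the following snippet factors \Phi_{15}(X) mod 2 and recovers r and d:

# Factor Phi_15(X) mod 2. Here phi(15) = 8 and the multiplicative order of 2
# mod 15 is d = 4, so we expect r = 8/4 = 2 slots, each a copy of F_{2^4}.
# (m = 15 is purely illustrative; real BGV parameters use much larger m.)
from sympy import symbols, cyclotomic_poly, factor_list, degree

x = symbols('x')
m = 15
phi_m = cyclotomic_poly(m, x)                 # Phi_15(x), degree phi(15) = 8
_, factors = factor_list(phi_m, modulus=2)    # factorization over F_2

for f, mult in factors:
    print(f, " degree:", degree(f, x), " multiplicity:", mult)
# Prints two distinct degree-4 factors (x^4 + x + 1 and x^4 + x^3 + 1),
# i.e. A_2 = F_2[X]/Phi_15(X) is isomorphic to F_16 x F_16: two slots.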

Some notes on Brakerski and Vaikuntanathan’s (BV) homomorphic encryption scheme, published in 2011, two years after Gentry’s publication of a fully homomorphic encryption scheme.

BV achieves fully homomorphic properties from the standard learning with errors (LWE) problem. Gentry’s original scheme was based on ideal lattices, motivated partly by the natural homomorphisms that arise from using ideals in rings, given that ideals are closed under addition and multiplication. BV, however, contend that the security of schemes based on ideal lattices rests on relatively untested cryptographic assumptions; in addition, the bootstrapping procedure in Gentry’s formulation requires “squashing”, or simplifying, the decryption circuit, which introduces an additional hardness assumption in the form of the sparse subset sum problem. The BV scheme, in contrast, is based on known worst-case, classical hardness assumptions for standard problems on arbitrary lattices. The specific problem employed in this scheme is learning with errors (LWE), which states that given an n-dimensional secret vector over \mathbb{Z}_q, any polynomial number of “noisy” random linear combinations of the coefficients of the vector is computationally indistinguishable from uniformly random elements of \mathbb{Z}_q. The best known algorithms for LWE run in time nearly exponential in the dimension n of the secret vector.
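For readers who have not seen LWE encryption before, here is a minimal toy sketch of Regev-style symmetric encryption of a single bit. It illustrates the general LWE encryption pattern that BV builds on rather than the exact BV construction, and the parameters are far too small to be secure.

# Toy Regev-style LWE encryption of one bit (illustrative only; not the BV scheme).
import numpy as np

rng = np.random.default_rng(0)
n, q, noise_bound = 16, 4093, 4                    # tiny illustrative parameters

def keygen():
    return rng.integers(0, q, size=n)              # secret vector s in Z_q^n

def encrypt(s, bit):
    a = rng.integers(0, q, size=n)                 # uniform random a in Z_q^n
    e = int(rng.integers(-noise_bound, noise_bound + 1))   # small noise term
    b = (int(a @ s) + e + bit * (q // 2)) % q      # b = <a,s> + e + m*floor(q/2)
    return a, b

def decrypt(s, ct):
    a, b = ct
    centered = (b - int(a @ s)) % q                # recover m*floor(q/2) + e
    return 1 if q // 4 < centered < 3 * q // 4 else 0

s = keygen()
for m in (0, 1, 1, 0):
    assert decrypt(s, encrypt(s, m)) == m          # round-trips while the noise stays small
print("toy LWE round-trip ok")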

An artifact of ciphertext multiplication in BV is that the resulting polynomial expression contains terms quadratic in the secret key. A key aspect of the scheme is therefore so-called “re-linearization”, which involves publishing, under a new secret key, encryptions of the linear and quadratic terms in the original secret key; the substituted expression is then linear in the new secret key. The scheme additionally relies on creating a chain of L secret keys, along with encryptions of the quadratic terms of each key in the chain under the next key. This allows for L levels of multiplication under the scheme.
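Schematically (my own summary, glossing over the error terms and scaling): if two ciphertexts decrypt via expressions f_1(\mathbf{s}) and f_2(\mathbf{s}) that are linear in the secret key \mathbf{s}, their product decrypts via

f_1(\mathbf{s}) \cdot f_2(\mathbf{s}) = h_0 + \sum_i h_i s_i + \sum_{i \le j} h_{ij} s_i s_j

which is quadratic in \mathbf{s}, with coefficients h_0, h_i, h_{ij} computable from the two ciphertexts. Re-linearization publishes encryptions of the monomials s_i and s_i s_j under the next key \mathbf{s}', allowing the evaluator to rewrite the quadratic expression as one that is linear in \mathbf{s}'.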

Similar to Gentry’s original blueprint, BV starts with a somewhat homomorphic scheme based on standard LWE, and then creates a bootstrappable scheme that inherits the initial scheme’s homomorphic properties and utilizes dimension-modulus reduction for managing ciphertext noise during successive evaluations up to the prescribed multiplicative depth. From a security standpoint, dimension-modulus reduction does not significantly affect the worst-case hardness guarantees that Regev’s reduction provides for decision LWE (DLWE) in terms of approximate short vector problems (SVP) on lattices of the reduced dimension, thereby maintaining the security of the initial scheme.

So what is the relationship to the existing patents on Gentry’s scheme? Not sure. Next we will discuss aspects of the Brakerski, Gentry, Vaikuntanathan (BGV) scheme which, among other things, is a “Ring-LWE” adaptation of BV that incorporates Smart-Vercauteren batch operations (“ciphertext packing”) by defining the message space to consist of elements of a polynomial ring.
