Zvika Brakerski


A lot of effort was made to port techniques from second-generation lattice-based FHE (using tensoring) to FHE over the integers (FHE-OI). Gentry, Sahai and Waters (Crypto 13) showed that third-generation techniques (which were later formalized using the ``gadget matrix'') can also be ported. In this work, we propose a comprehensive study of applying third-generation FHE techniques to the regime of FHE-OI. We present and analyze a third-generation FHE-OI scheme based on decisional AGCD without the noise-free assumption. We proceed to show a batch version of our scheme, where each ciphertext can encode a vector of messages and operations are performed coordinate-wise. We use an AGCD variant similar to that of Cheon et al. (Eurocrypt 13), who suggested the batch approach for second-generation FHE, but we do not require the noise-free component or a subset-sum assumption. However, like Cheon et al., we do require circular security for our scheme, even for bounded homomorphism. Lastly, we discuss some of the obstacles towards efficient implementation of our schemes and a number of possible optimizations.
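The integer setting can be illustrated with a toy DGHV-style scheme, in which a ciphertext is a near-multiple of a secret odd integer p. This is only a minimal sketch with hypothetical toy parameters, following the original DGHV blueprint rather than the batched, noise-free-assumption-free scheme described above:

```python
import random

# Toy DGHV-style "somewhat homomorphic" encryption over the integers.
# A ciphertext is a near-multiple of a secret odd integer p: c = q*p + 2r + m.
# Illustration only: real parameters must be far larger, and the scheme in
# this work (batched, based on decisional AGCD) differs in its details.

def keygen(bits=32):
    # secret odd integer p with its top bit set
    return random.getrandbits(bits) | (1 << (bits - 1)) | 1

def encrypt(p, m, noise_bits=8, mult_bits=64):
    assert m in (0, 1)
    q = random.getrandbits(mult_bits)   # random multiplier of p
    r = random.getrandbits(noise_bits)  # small noise, kept even via the factor 2
    return q * p + 2 * r + m

def decrypt(p, c):
    # reduce mod p to a centered representative, then read off the parity
    w = c % p
    if w > p // 2:
        w -= p
    return w % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0) == 0 and decrypt(p, c1) == 1
# integer addition/multiplication of ciphertexts is homomorphic for XOR/AND,
# as long as the accumulated noise stays below p/2
assert decrypt(p, c0 + c1) == 1
assert decrypt(p, c0 * c1) == 0
```

The noise term 2r + m grows with each homomorphic operation, which is exactly why only bounded homomorphism is available before bootstrapping.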
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key respective to a policy f with f(x_{i})=0 for all attributes involved in the computation. In our scheme, the target policy f needs to be known to the evaluator; we call this targeted homomorphism. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with subexponential modulus-to-noise ratio.
We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies f_{j} s.t. for any attribute x_{i} there exists f_{j} with f_{j}(x_{i})=0. In this scheme, the ciphertext size grows (quadratically) with the size of the set of policies (but is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with subexponential modulus-to-noise ratio.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use security models that are as conservative as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly, a la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only 3 ``plaintext slots'' (= subrings of the underlying ring), a number directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the Unique Representation Generic Graded Encoding model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al. (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of non-evasive functions (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
Prior works either supported only an a-priori bounded number of parties (López-Alt, Tromer and Vaikuntanathan, STOC 12), or only supported single-hop evaluation, where all inputs need to be known before the computation starts (Clear and McGoldrick, Crypto 15; Mukherjee and Wichs, Eurocrypt 16). In all aforementioned works, the ciphertext length grew at least quadratically with the number of parties.
Technically, our starting point is the LWE-based approach of previous works. Our result is achieved via a careful use of Gentry's bootstrapping technique, tailored to the specific scheme. Our hardness assumption is that the scheme of Mukherjee and Wichs is circular secure (and thus bootstrappable). A leveled scheme can be achieved under standard LWE.
We show that saiO does not exist, even for a minimal correctness requirement, if NP is not contained in the intersection of AM and coAM, and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a ``baseline'' for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small range of parameters that would still allow building public-key encryption from one-way functions, and thus deserves further investigation.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016-B, and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on a keyless collision-resistant hash function secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway's ``human-ignorance" approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the ``complexity leveraging'' technique is not applicable for unbounded attributes.)
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with a succinct representation.
Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing (somewhat surprisingly) that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various tradeoffs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various tradeoffs between security and efficiency.
Previous constructions of multi-input functional encryption schemes either relied on somewhat stronger assumptions and provided weaker security guarantees (Goldwasser et al., EUROCRYPT '14), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al., EUROCRYPT '15).
Independent and concurrent work by Ananth and Jain (ePrint '15) shows (among other contributions) how to construct a selectively-secure multi-input private-key functional encryption scheme based on any public-key functional encryption scheme. In this context, our result both relies on a weaker assumption and guarantees stronger security.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first standard-lattice-assumption-based constrained PRF (CPRF) for general bounded-description, bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e., we cannot provide more than a single constrained key to an adversary). Similarly to the aforementioned previous works, our PRF family is also key-homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly on either the size of the constraint circuit or the input length. We are not aware of any prior construction achieving this property, even under strong assumptions such as indistinguishability obfuscation.
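The notion of a constrained key can be conveyed by a folklore toy example: the GGM tree-based PRF supports constrained keys for the simple class of prefix predicates (an intermediate tree node lets you evaluate exactly the subtree below it). This hash-based sketch illustrates only the concept, not the lattice-based circuit-constrained construction of this work:

```python
import hashlib

# Toy GGM-style PRF: the key is the root of a binary tree, and evaluation
# walks down the tree according to the input bits. An intermediate node is
# a constrained key for all inputs beginning with that node's prefix.
# (Illustrative only; unrelated to the LWE/SIS construction of this work.)

def G(seed, bit):
    # length-doubling step, modeled here by a hash of (seed, bit)
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key, x_bits):
    node = key
    for b in x_bits:
        node = G(node, b)
    return node

def constrain(key, prefix_bits):
    # the subtree root: enables evaluation on inputs starting with the prefix,
    # and (for GGM) reveals nothing about values outside the subtree
    node = key
    for b in prefix_bits:
        node = G(node, b)
    return node

key = b"\x00" * 32
ck = constrain(key, [1, 0])
x = [1, 0, 1, 1]           # an input matching the prefix [1, 0]
assert prf(key, x) == prf(ck, x[2:])
```

The jump from prefix predicates to arbitrary bounded-depth circuits, with keys whose length is independent of the circuit, is precisely what the lattice techniques above provide.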
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated within a generic graded encoding model. We further consider a more robust model, which provides more power to the adversary, and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
In this paper we show that any sufficiently expressive selectively-secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows one to embed a secret execution thread in the functional keys of the underlying scheme, which is only activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m_{1}, ..., m_{T} together with decryption keys corresponding to functions f_{1}, ..., f_{T} reveal essentially no information other than the values { f_{i}(m_{j})}_{i,j=1,...,T}. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various tradeoffs between security and efficiency).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case hardness assumption about exponential hardness of the NP-complete problem 3SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing sub-assignments. We hope that this technique can find further application in constructing secure obfuscators.
Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: A simulator that, given any protocol $\pi$, is able to simulate it (i.e.\ produce its intended transcript) even in the presence of constant-rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or subexponential) in the communication complexity of $\pi$ (denoted $N$ in this work). Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time $\poly(N)$. Their simulator is randomized (each party flips private coins) and has failure probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive coding. While at least $N$ computational steps are required (even just to output the transcript of $\pi$), the BK simulator runs in time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public-key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption scheme suggested by Bogdanov and Lee (ePrint '11, henceforth BL) is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blowup in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1 - 2^{-\Omega(\cc)}$.
Previous solutions are either inefficient or are resilient only to stochastic noise.
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$ which carries obvious advantages for implementation. In addition, security can be classically reduced to the worstcase hardness of the GapSVP problem (with quasipolynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
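The implementation advantage of a power-of-two modulus mentioned above is, concretely, that reduction mod $q = 2^k$ is a single bit-mask rather than a division. A minimal illustration (the parameter choice here is arbitrary, not the scheme's):

```python
import random

# Reduction modulo a power of two is a bit-mask: for non-negative x,
# x % 2**k == x & (2**k - 1). This avoids division entirely, which is
# the "obvious advantage for implementation" of a power-of-two modulus.
k = 32
q = 1 << k
mask = q - 1

def reduce_mod_q(x):
    return x & mask

for _ in range(1000):
    x = random.getrandbits(96)
    assert reduce_mod_q(x) == x % q
```

On fixed-width hardware the mask is often free: arithmetic on 32-bit words is automatically reduction mod 2^32.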
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
Based on the ring-LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width (e.g., where a constant fraction of levels have width at least $k$), we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation. Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level of latticebased ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Our construction improves on previous works in two aspects:
Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, which is better than any known scheme (here, $k$ is a security parameter).
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is resilient to $\lambda$ bits of leakage, and yet its $2$-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counterexample, the repeated schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i - 1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, that the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption, which was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a, b = as + 2e), where s, a, e are all degree-(n-1) integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.
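The scheme's simplicity can be conveyed by a toy sketch of its syntax: everything is a degree-(n-1) polynomial over Z_q, multiplied modulo x^n + 1, and decryption reads a bit off the parity of the recovered noise. The parameters below (n=16, q=257, ternary noise) are hypothetical toy values chosen only so the arithmetic works out, not secure choices:

```python
import random

# Toy sketch of an RLWE-style scheme with public key (a, b = a*s + 2e).
# Ring: Z_q[x]/(x^n + 1), represented as coefficient lists of length n.
n, q = 16, 257

def ternary():
    return [random.choice((-1, 0, 1)) for _ in range(n)]

def uniform():
    return [random.randrange(q) for _ in range(n)]

def mul(a, b):
    # negacyclic convolution: multiplication modulo x^n + 1, coefficients mod q
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j, 1) if i + j < n else (i + j - n, -1)
            res[k] = (res[k] + sign * a[i] * b[j]) % q
    return res

def add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def scale(a, c):
    return [(c * x) % q for x in a]

# keygen: secret s, public key (a, b = a*s + 2e), all with small/uniform coeffs
s, e, a = ternary(), ternary(), uniform()
b = add(mul(a, s), scale(e, 2))

def encrypt(m_bit):
    r, e0, e1 = ternary(), ternary(), ternary()
    u = add(mul(a, r), scale(e1, 2))
    v = add(add(mul(b, r), scale(e0, 2)), [m_bit] + [0] * (n - 1))
    return u, v

def decrypt(u, v):
    # v - u*s = 2*(e*r + e0 - e1*s) + m, so the centered constant
    # coefficient's parity is the message
    w = add(v, scale(mul(u, s), -1))
    c = w[0] if w[0] <= q // 2 else w[0] - q
    return c % 2

assert decrypt(*encrypt(0)) == 0
assert decrypt(*encrypt(1)) == 1
```

With ternary noise, every error coefficient is bounded well below q/2 for these toy parameters, so decryption is always correct; real instantiations use much larger n and q and Gaussian-like noise.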
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2 - o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.
In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate $(1-o(1))$ of the secret key length, under the QR assumption. We note that leakage-resilient schemes under the DCR and the QR assumptions, for the restricted case where the composite modulus is a product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret key length.
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), and the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity-based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: The first is based on the learning with errors (LWE) assumption, and is similar to recently introduced IBE schemes; The second is based on the $d$-linear assumption over bilinear groups.
In the case of functions that can be expressed as degree-$d$ polynomials, we show that the resulting schemes are also secure with respect to key cycles of any length. Specifically, for any polynomial number $n$ of key pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables are the collection of coordinates of all $n$ secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.
Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t. some functions even when the secret keys are weak (i.e. chosen from an arbitrary distribution with entropy $k$), and outputs a scheme that is KDM secure w.r.t. a richer class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.
VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, there are many open questions in the study of VRFs, especially regarding their relation to more widely studied cryptographic primitives and their construction from a wider variety of cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives. We show:
We define the new notion and go on to give restorers for properties in the dense graph model: Bipartiteness and $\rho$-Clique, and for function monotonicity. We also show some general relations between property restoring and other notions such as property testing and tolerant property testing.