Publications 
Zvika Brakerski and Vinod Vaikuntanathan Constrained Key-Homomorphic PRFs from Standard Lattice Assumptions (or: How to Secretly Embed a Circuit in Your PRF) TCC 2015. [Abstract] [PDF]
Boneh et al. (Crypto 13) and Banerjee and Peikert (Crypto 14) constructed pseudorandom functions (PRFs) from the Learning with Errors (LWE) assumption by embedding combinatorial objects, a path and a tree respectively, in instances of the LWE problem. In this work, we show how to generalize this approach to embed circuits, inspired by recent progress in the study of Attribute Based Encryption.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first constrained PRF (CPRF) based on standard lattice assumptions for general bounded-description, bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x)=1, but reveals nothing on the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e., providing more than a single constrained key to an adversary). Similarly to the aforementioned previous works, our PRF family is also key homomorphic.
Interestingly, our constrained keys are very short. Their length does not depend directly either on the size of the constraint circuit or on the input length. We are not aware of any prior construction achieving this property, even relying on strong assumptions such as indistinguishability obfuscation.
Benny Applebaum and Zvika Brakerski Obfuscating Circuits via Composite-Order Graded Encoding TCC 2015. [Abstract] [PDF]
We present a candidate obfuscator based on composite-order Graded Encoding Schemes (GES), which are a generalization of multilinear maps. Our obfuscator operates on circuits directly without converting them into formulas or branching programs as was done in previous solutions. As a result, the time and size complexity of the obfuscated program, measured by the number of GES elements, is directly proportional to the circuit complexity of the program being obfuscated. This improves upon previous constructions whose complexity was related to the formula or branching program size. Known instantiations of Graded Encoding Schemes allow us to obfuscate circuit classes of polynomial degree, which include for example families of circuits of logarithmic depth.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.

In a functional encryption (FE) scheme, the owner of the secret key can generate restricted decryption keys that allow users to learn specific functions of the encrypted messages and nothing else. In many known constructions of FE schemes, such a notion of security is guaranteed only for messages that are fixed ahead of time (i.e., before the adversary even interacts with the system). This is called selective security, which is too restrictive for many realistic applications. Achieving adaptive security (also called full security), where security is guaranteed even for messages that are adaptively chosen at any point in time, seems significantly more challenging. The handful of known fully-secure schemes are based on specifically tailored techniques that rely on strong assumptions (such as obfuscation assumptions or multilinear maps assumptions).
In this paper we show that any sufficiently expressive selectively-secure FE scheme can be transformed into a fully secure one without introducing any additional assumptions. We present a direct black-box transformation, making novel use of hybrid encryption, a classical technique that was originally introduced for improving the efficiency of encryption schemes, combined with a new technique we call the Trojan Method. This method allows us to embed a secret execution thread in the functional keys of the underlying scheme, which will only be activated within the proof of security of the resulting scheme. As another application of the Trojan Method, we show how to construct functional encryption schemes for arbitrary circuits starting from ones for shallow circuits (NC1 or even TC0).
Zvika Brakerski and Gil Segev Function-Private Functional Encryption in the Private-Key Setting TCC 2015. [Abstract] [PDF]
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Whereas the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided.
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages m_{1}, ..., m_{T} together with decryption keys corresponding to functions f_{1}, ..., f_{T} reveal essentially no information other than the values { f_{i}(m_{j})}_{i,j=1,...,T}. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on obfuscation assumptions, on the Learning with Errors assumption, or even on general public-key encryption (offering various tradeoffs between security and efficiency).
Zvika Brakerski and Guy Rothblum Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding TCC 2014. [Abstract] [PDF]
We present a new general-purpose obfuscator for all polynomial-size circuits. The obfuscator uses graded encoding schemes, a generalization of multilinear maps. We prove that the obfuscator exposes no more information than the program's black-box functionality, and achieves virtual black-box security, in the generic graded encoding scheme model. This proof is under a plausible worst-case complexity-theoretic assumption related to the Exponential Time Hypothesis, in addition to standard cryptographic assumptions.
Zvika Brakerski and Guy Rothblum Black-Box Obfuscation for d-CNFs ITCS 2014. [Abstract] [PDF]
We show how to securely obfuscate a new class of functions: conjunctions of NC^{0}_{d} circuits. These are functions of the form C(x) = ∧_{i} C_{i}(x), where each C_{i} is a boolean NC^{0}_{d} circuit whose output bit is only a function of d = O(1) bits of the input x. For example, d-CNFs, where each clause is a disjunction of at most d variables, are in this class. Given such a function, we produce an obfuscated program that preserves the input-output functionality of the given function, but reveals nothing else. Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013).
We prove that the construction is a secure obfuscation in a generic multilinear group model, under the black-box definition of Barak et al. (CRYPTO 2001). Security is based on a new worst-case hardness assumption about exponential hardness of the NP-complete problem 3-SAT, the Bounded Speedup Hypothesis.
One of the new techniques we introduce is a method for enforcing input consistency, which we call randomizing sub-assignments. We hope that this technique can find further application in constructing secure obfuscators.
Zvika Brakerski and Vinod Vaikuntanathan Lattice-Based FHE as Secure as PKE ITCS 2014. [Abstract] [PDF]
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of Õ(n^{1.5+ε})-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε>0 (or Õ(n^{2+ε})-approximation under classical reductions). This matches the best known hardness for ``regular'' (non-homomorphic) lattice-based public-key encryption up to the ε term. A number of previous methods had hit a roadblock at quasi-polynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.)
Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Zvika Brakerski and Guy Rothblum Obfuscating Conjunctions CRYPTO 2013. [Abstract] [PDF]
We show how to securely obfuscate the class of conjunction functions (functions like f(x_{1}, ..., x_{n}) = x_{1} ∧ ¬x_{4} ∧ ¬x_{6} ∧ ... ∧ x_{n-2}). Given any function in the class, we produce an obfuscated program which preserves the input-output functionality of the given function, but reveals nothing else.
Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.

We show that the Learning with Errors (LWE) problem is classically at least as hard as
standard worst-case lattice problems, even with polynomial modulus.
Previously this was only known under quantum reductions.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances,
leading to a much better understanding of the
landscape of the problem. The proof is inspired by techniques from
several recent cryptographic constructions, most notably fully
homomorphic encryption schemes.
Zvika Brakerski and Moni Naor Fast Algorithms for Interactive Coding SODA 2013. [Abstract] [PDF]
Consider two parties who wish to communicate in order to execute some
interactive protocol $\pi$. However, the communication channel
between them is noisy: An adversary sees everything that is
transmitted over the channel and can change a constant
fraction of the bits as he pleases, thus interrupting the
execution of $\pi$ (which was designed for an errorless
channel). If $\pi$ only contains a single long message, then a
good error correcting code would overcome the noise with only
a constant overhead in communication. However, this solution
is not applicable to interactive protocols consisting of
many short messages.
Schulman (FOCS 92, STOC 93) presented the notion of interactive
coding: A simulator that, given any protocol $\pi$, is able to
simulate it (i.e.\ produce its intended transcript) even with
constant-rate adversarial channel errors, and with only constant
(multiplicative) communication overhead. Until recently, however,
the running time of all known simulators was exponential (or
subexponential) in the communication complexity of $\pi$ (denoted
$N$ in this work). Brakerski and Kalai (FOCS 12) recently presented
a simulator that runs in time $\poly(N)$. Their simulator is
randomized (each party flips private coins) and has failure
probability roughly $2^{-N}$.
In this work, we improve the computational complexity of interactive
coding. While at least $N$ computational steps are required (even
just to output the transcript of $\pi$), the BK simulator runs in
time $\tilde\Omega(N^2)$.
We present two efficient algorithms for interactive coding: The first with computational complexity $O(N \log N)$ and exponentially small (in $N$) failure probability; and the second with computational complexity $O(N)$, but failure probability $1/\poly(N)$. (Computational complexity is measured in the RAM model.)
Zvika Brakerski, Craig Gentry and Shai Halevi Packed Ciphertexts in LWE-based Homomorphic Encryption PKC 2013. [Abstract] [PDF]
We observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext can be used for performing SIMD homomorphic operations on packed ciphertexts. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of ``General-LWE'').
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with ``general-LWE'' schemes, suggesting yet another tradeoff that can be optimized for different settings.
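As a rough illustration of the packing idea (a toy sketch with illustrative parameters, not the PVW construction itself; the dimensions, modulus, and noise bounds below are assumptions chosen only so that decryption works), a Regev-type ciphertext can carry a whole vector of plaintext bits, and one component-wise addition of ciphertexts acts on all packed slots simultaneously:

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell, q = 32, 8, 1 << 15  # toy LWE dimension, number of packed slots, modulus

S = rng.integers(0, q, size=(ell, n))  # secret key: one row per plaintext slot

def enc(m):
    """Symmetric-key, Regev-type encryption packing the bit-vector m."""
    a = rng.integers(0, q, size=n)
    e = rng.integers(-4, 5, size=ell)   # small per-slot noise
    b = (S @ a + e + (q // 2) * m) % q  # each bit sits near 0 or q/2
    return a, b

def dec(ct):
    a, b = ct
    d = (b - S @ a) % q
    d = np.minimum(d, q - d)            # distance from 0 modulo q
    return (d > q // 4).astype(int)     # close to q/2  =>  bit is 1

m1 = rng.integers(0, 2, size=ell)
m2 = rng.integers(0, 2, size=ell)
c1, c2 = enc(m1), enc(m2)

# one component-wise ciphertext addition acts on all ell slots at once (SIMD)
csum = ((c1[0] + c2[0]) % q, (c1[1] + c2[1]) % q)
assert (dec(csum) == (m1 ^ m2)).all()
```

Each homomorphic addition also adds the noise terms, so the number of supported operations is limited by the gap between the noise bound and q/4.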
Zvika Brakerski When Homomorphism Becomes a Liability TCC 2013. [Abstract] [PDF]
We show that an encryption scheme cannot have a simple decryption circuit and be homomorphic at the same time. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption circuit cannot be a linear function of the secret key (or even a succinct polynomial), even if decryption error is allowed.
An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of noise with low Hamming weight cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public-key encryption.
An additional corollary is that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One by applying the aforementioned general statement, and another by directly attacking one of the components of the scheme.
Zvika Brakerski and Yael Tauman Kalai Efficient Interactive Coding Against Adversarial Noise FOCS 2012. [Abstract] [PDF]
In this work, we study the fundamental problem of constructing interactive protocols that are robust to noise,
a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error-correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error-correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blowup in the communication complexity ($\cc$).
Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$.
Previous solutions are either inefficient or are resilient only to stochastic noise.
Zvika Brakerski Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP CRYPTO 2012. [Abstract] [PDF]
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically ($B \to B^2\cdot\poly(n)$) with every multiplication (before ``refreshing''), our noise only grows linearly ($B \to B\cdot\poly(n)$).
We use this technique to construct a \emph{scale-invariant} fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus $q$ and the initial noise level $B$, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for ``modulus switching''), and this modulus can take arbitrary form, including a power of $2$, which carries obvious advantages for implementation. In addition, security can be classically reduced to the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction to GapSVP.
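To see why linear noise growth matters, here is a back-of-the-envelope comparison (our own rough sketch, not a computation from the paper) of the noise after evaluating a multiplication tree of depth $L$, starting from noise $B$:

```latex
% per-multiplication growth as stated in the abstract above
\begin{align*}
\text{quadratic growth: } & B \to B^2\cdot\poly(n)
  & \Rightarrow\ \text{after depth } L:\ & \approx B^{2^L}\cdot\poly(n)^{2^L},\\
\text{linear growth: } & B \to B\cdot\poly(n)
  & \Rightarrow\ \text{after depth } L:\ & \approx B\cdot\poly(n)^{L}.
\end{align*}
```

With linear growth, a modulus $q \approx B\cdot n^{O(L)}$ suffices for depth-$L$ evaluation, instead of one that is doubly exponential in the depth.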
Zvika Brakerski, Craig Gentry and Vinod Vaikuntanathan (Leveled) Fully Homomorphic Encryption without Bootstrapping ITCS 2012. [Abstract] [PDF]
We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and
bases security on weaker assumptions.
A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure.
Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE (RLWE) problems that have $2^k$ security against known attacks. For RLWE, we have:
- A leveled FHE scheme that can evaluate $L$-level arithmetic circuits with $\tilde{O}(k \cdot L^3)$ per-gate computation, i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in $L$. This construction does not use the bootstrapping procedure.
- A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is $\tilde{O}(k^2)$, independent of $L$. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes).
We obtain similar results to the above for LWE, but with worse performance.
Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width (e.g., where a constant fraction of levels have width at least $k$) we can reduce the per-gate computation of the bootstrapped version to $\tilde{O}(k)$, independent of $L$, by batching the bootstrapping operation.
Previous FHE schemes all required $\tilde{\Omega}(k^{3.5})$ computation per gate.
At the core of our construction is a much more effective approach for managing the noise level
of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).
Zvika Brakerski and Vinod Vaikuntanathan Efficient Fully Homomorphic Encryption from (Standard) LWE FOCS 2011. [Abstract] [PDF]
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Additionally, relying on LWE makes our scheme very natural to understand (and implement).
Our construction improves on previous works in two aspects:
- We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
- We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\em without introducing additional assumptions}. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).
Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, which is better than any known scheme (here, $k$ is a security parameter).
Zvika Brakerski and Yael Tauman Kalai A Parallel Repetition Theorem for Leakage Resilience TCC 2012. [Abstract] [PDF]
A leakage resilient encryption scheme is one which stays secure even against an attacker
that obtains a bounded amount of side information on the secret key (say
$\lambda$ bits of ``leakage''). A fundamental question is whether
parallel repetition amplifies leakage resilience. Namely, if we
secret share our message, and encrypt the shares under two independent keys,
will the resulting scheme be resilient to $2\lambda$ bits of leakage?
Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an
example of a publickey encryption scheme that is resilient to $\lambda$ bits
of leakage, and yet its $2$repetition is not resilient to even
$(1+\epsilon)\lambda$ bits of leakage. In their counterexample, the repeated
schemes share secretly generated public parameters.
In this work we show that under a reasonable strengthening of the definition of
leakage resilience (one that captures known proof techniques for achieving nontrivial
leakage resilience), parallel repetition does in fact amplify
leakage. In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then
their negative result does not hold, and leakage is amplified by parallel repetition.
More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum (\lambda_i - 1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
Zvika Brakerski and Vinod Vaikuntanathan Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages CRYPTO 2011. [Abstract] [PDF]
We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worst-case hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard ``squashing'' and ``bootstrapping'' techniques introduced by Gentry (STOC 2009).
One of the obstacles in going from ``somewhat'' to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.
Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a, b=as+2e), where s, a, e are all degree-(n-1) integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.
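To make the claimed simplicity concrete, here is a toy sketch of the basic public-key encryption mechanism over Z_q[x]/(x^n+1). This is our own illustrative rendering with insecure toy parameters and ternary noise polynomials (assumptions for readability, not the paper's parameter choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 16, 1 << 15  # toy ring R_q = Z_q[x]/(x^n + 1); real parameters are far larger

def polymul(f, g):
    """Multiply two degree-(n-1) polynomials in Z_q[x]/(x^n + 1)."""
    c = np.convolve(f, g)          # degree-(2n-2) product
    hi = np.zeros(n, dtype=c.dtype)
    hi[: c.size - n] = c[n:]       # fold x^n = -1 back into the low coefficients
    return (c[:n] - hi) % q

small = lambda: rng.integers(-1, 2, size=n)  # "small" ternary polynomials

# key generation: secret key s, public key (a, b = a*s + 2e)
s, e = small(), small()
a = rng.integers(0, q, size=n)
b = (polymul(a, s) + 2 * e) % q

def enc(m):
    """Encrypt a binary polynomial m (a vector of n bits)."""
    v, e0, e1 = small(), small(), small()
    return ((polymul(b, v) + 2 * e0 + m) % q,   # c0
            (polymul(a, v) + 2 * e1) % q)       # c1

def dec(ct):
    c0, c1 = ct
    t = (c0 - polymul(c1, s)) % q
    t = ((t + q // 2) % q) - q // 2   # centered: equals 2*(small noise) + m
    return t % 2                      # the noise is even, so parity recovers m

m = rng.integers(0, 2, size=n)
assert (dec(enc(m)) == m).all()

# additive homomorphism: adding ciphertexts XORs the underlying bit-vectors
m2 = rng.integers(0, 2, size=n)
ca, cb = enc(m), enc(m2)
csum = ((ca[0] + cb[0]) % q, (ca[1] + cb[1]) % q)
assert (dec(csum) == (m ^ m2)).all()
```

Decryption works as long as the accumulated noise stays below q/2; multiplication of ciphertexts, and the resulting noise growth, is where the somewhat-homomorphic analysis in the paper comes in.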
Zvika Brakerski and Gil Segev Better Security for Deterministic Public-Key Encryption: The Auxiliary-Input Setting CRYPTO 2011. [Abstract] [PDF]
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O'Neill (CRYPTO '07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary's point of view.
In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.
We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the $d$-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.
Zvika Brakerski, Jonathan Katz, Gil Segev and Arkady Yerukhimovich Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions TCC 2011. [Abstract] [PDF]
For over 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). A widely recognized limitation of such impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal.
With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.
Zvika Brakerski, Yael Tauman Kalai, Jonathan Katz and Vinod Vaikuntanathan Overcoming the Hole in the Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage FOCS 2010. [Abstract] [PDF]
In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.
We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, at each time period, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC '04]).
Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].
The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.
Zvika Brakerski and Shafi Goldwasser Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability (or: Quadratic Residuosity Strikes Back) CRYPTO 2010. [Abstract] [PDF]
The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret-key leakage and high resilience to the presence of auxiliary input information.
In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:
- Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t.\ affine ``key-cycles'' of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using previous techniques of [BGK, ePrint09] or [BHHI, Eurocrypt10].
- Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a ``leakage rate'' of $(1-o(1))$ of the length of the secret key.
- Auxiliary-input security. Remains secure even if any sufficiently \emph{hard to invert} (efficiently computable) function of the secret key is given to the adversary.
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate $(1-o(1))$ of the secret key length, under the QR assumption. We note that leakage-resilient schemes under the DCR and the QR assumptions, for the restricted case of composite modulus product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret key length.
Zvika Brakerski and Yael Tauman Kalai A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model Manuscript, 2010. [Abstract] [PDF]
In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity-based encryption schemes, all in the standard model (without relying on random oracles).
We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen-message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen-message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), and the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present simple constructions based on the RSA assumption, the short integer solution (SIS) assumption, and the computational Diffie-Hellman (CDH) assumption over bilinear groups.
Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the short integer solution (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call ring trapdoor functions. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.
Finally, we show a connection between ring signatures and identity-based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the learning with errors (LWE) assumption, and is similar to recently introduced IBE schemes; the second is based on the $d$-linear assumption over bilinear groups.
Zvika Brakerski, Shafi Goldwasser and Yael Tauman Kalai Black-Box Circular-Secure Encryption Beyond Affine Functions TCC 2011. [Abstract] [PDF]
We show how to achieve public-key encryption schemes that can securely encrypt
non-linear functions of their own secret key.
Specifically, we show that for any constant $d\in\mathbb{N}$, there exists a
public-key encryption scheme that can securely encrypt any function $f$ of its
own secret key, assuming $f$ can be expressed as a polynomial of total
degree $d$. Such a scheme is said to be key-dependent message (KDM) secure
w.r.t. degree-$d$ polynomials. We also show that for any constants $c,e$,
there exists a public-key encryption scheme that is KDM secure w.r.t. all
Turing machines with description size $c\log \lambda$ and running time
$\lambda^e$, where $\lambda$ is the security parameter. The security of such
public-key schemes can be based either on the standard decision Diffie-Hellman
(DDH) assumption or on the learning with errors (LWE) assumption (with certain
parameter settings).
In the case of functions that can be expressed as degree-$d$ polynomials, we
show that the resulting schemes are also secure with respect to key
cycles of any length. Specifically, for any polynomial number $n$ of key
pairs, our schemes can securely encrypt a degree-$d$ polynomial whose variables
are the collection of coordinates of all $n$ secret keys. Prior to this work,
it was not known how to achieve this for non-linear functions.
Our key idea is a general transformation that amplifies KDM security. The
transformation takes an encryption scheme that is KDM secure w.r.t.
some functions even when the secret keys are weak (i.e. chosen from an
arbitrary distribution with entropy $k$), and outputs a scheme that is KDM
secure w.r.t. a richer class of functions. The resulting scheme may no
longer be secure with weak keys. Thus, in some sense, this transformation
converts security with weak keys into amplified KDM security.
Zvika Brakerski and Boaz Patt-Shamir Distributed Discovery of Large Near-Cliques
PODC 2009 (Brief Announcement), DISC 2009, Distributed Computing 2011. [Abstract] [PDF]
Given an undirected graph and $0\le\epsilon\le1$, a set of nodes is called an $\epsilon$-near clique if all but an $\epsilon$ fraction of the pairs of nodes in the set have a link between them. In this paper we present a fast synchronous network algorithm that uses small messages and finds a near-clique. Specifically, we present a constant-time algorithm that finds, with constant probability of success, a linear-size $\epsilon$-near clique if there exists an $\epsilon^3$-near clique of linear size in the graph. The algorithm uses messages of $O(\log n)$ bits. The failure probability can be reduced to $n^{-\Omega(1)}$ in $O(\log n)$ time, and the algorithm also works if the graph contains a clique of size $\Omega(n/\log^{\alpha}\log n)$ for some $\alpha \in (0,1)$.
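The $\epsilon$-near-clique condition is easy to state in code. The following minimal sketch (ours, not from the paper; the edge set is an illustrative example) checks it for an explicit node set:

```python
from itertools import combinations

def is_near_clique(nodes, edges, eps):
    """Return True if `nodes` form an eps-near clique: all but an eps
    fraction of the pairs of nodes in the set have an edge between them."""
    pairs = list(combinations(sorted(nodes), 2))
    if not pairs:
        return True  # fewer than two nodes: trivially a clique
    missing = sum(1 for u, v in pairs
                  if (u, v) not in edges and (v, u) not in edges)
    return missing <= eps * len(pairs)

# Four nodes with one of the six possible edges missing: a 1/6-near clique.
edges = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)}  # edge (3, 4) absent
```

With eps = 0.2 the set {1, 2, 3, 4} qualifies (one missing pair out of six), while eps = 0.1 rejects it.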

Public-key
encryption schemes rely for their IND-CPA security on per-message fresh
randomness. In practice, randomness may be of poor quality for a variety of
reasons, leading to failure of the schemes. Expecting the systems to improve
is unrealistic. What we show in this paper is that we can, instead, improve
the cryptography to offset the lack of good randomness. We provide
public-key encryption schemes that achieve IND-CPA security when the randomness they
use is of high quality, but, when the latter is not the case, rather than
breaking completely, they achieve a weaker but still useful notion of
security that we call IND-CDA. This hedged public-key encryption provides
the best possible security guarantees in the face of bad randomness. We
provide simple RO-based ways to make in-practice IND-CPA schemes hedge
secure with minimal software changes. We also provide non-RO-model schemes
relying on lossy trapdoor functions (LTDFs) and techniques from
deterministic encryption. They achieve adaptive security by establishing
and exploiting the anonymity of LTDFs, which we believe is of independent
interest.
Zvika Brakerski and Oded Goldreich From Absolute Distinguishability to Positive Distinguishability
Studies in Complexity and
Cryptography 2011 (Book Chapter). [Abstract] [PDF]
We study methods of converting algorithms that distinguish pairs
of distributions with a gap whose absolute value is noticeable
into corresponding algorithms in which the gap is always positive.
Our focus is on designing algorithms that, in addition to the tested string,
obtain a fixed number of samples from each distribution.
Needless to say, such algorithms cannot provide a very reliable
guess for the sign of the original distinguishability gap;
still, we show that even guesses that are noticeably better than random
are useful in this setting.

Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan, are
pseudorandom functions in which the owner of the seed produces a public key
that constitutes a commitment to all values of the function and can then
produce, for any input $x$, a proof that the function has been evaluated
correctly on $x$, preserving pseudorandomness for all other inputs. No
public key (even a falsely generated one) should allow for proving more than
one value per input.
VRFs are both a natural and a useful primitive, and previous works have
suggested a variety of constructions and applications. Still, there are many
open questions in the study of VRFs, especially their relation to more widely
studied cryptographic primitives and constructing them from a wide variety of
cryptographic assumptions.
In this work we define a natural relaxation of VRFs that we call weak
verifiable random functions, where pseudorandomness is required to hold only
for randomly selected inputs. We conduct a study of weak VRFs, focusing on
applications, constructions, and their relationship to other cryptographic
primitives. We show:
- Constructions. We present constructions of weak VRFs based on
a variety of assumptions, including general assumptions such as (enhanced)
trapdoor permutations, as well as constructions based on specific
number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear
groups.
- Separations.
Verifiable random functions (both weak and standard) cannot be constructed from
one-way permutations in a black-box manner. This constitutes the first result
separating (standard) VRFs from any cryptographic primitive.
- Applications. Weak VRFs capture the essence of
constructing non-interactive zero-knowledge proofs for all NP languages.
Zvika Brakerski
Local Property Restoring
Manuscript, 2008. [Abstract] [PDF]
In this paper we present the new notion of local restoring of
properties. A restorer gets implicit access, in the form of oracle
queries, to a function (which may correspond to an object such as a graph, a
code or a matrix) that is close to having a property, such as function
monotonicity or graph bipartiteness. The restorer can then, using a small number of
queries to the function, locally compute the value of a ``corrected''
function at any desired point. The corrected function is required to be both
close to the original function and to strictly have the property. In the
case of error-correcting codes, this corresponds to local self-correcting; here
we extend the definition and study to general functions and properties.
We define the new notion and go on to give restorers for properties in the
dense graph model, namely bipartiteness and $\rho$-Clique, and for function
monotonicity. We also show some general relations between property restoring
and other notions such as property testing and tolerant property testing.
Zvika Brakerski and Boaz Patt-Shamir Jitter-Approximation Tradeoff for Periodic Scheduling
IPDPS 2004, Wireless Networks 2006. [Abstract] [PDF] We consider an asymmetric wireless communication setting, where a
server periodically broadcasts data items to different
mobile clients. The goal is to serve items at a prescribed
rate, while minimizing the energy consumption of the mobile
users. Abstractly, we are presented with a set of jobs, each with a
known execution time and a requested period, and the task is to
design a schedule for these jobs over a single shared resource
without preemption.
Given any solution
schedule, its period approximation is the
maximal factor by which the average period of a job in the schedule
is blown up w.r.t. its requested period, and the jitter is roughly
the maximal variability of times between two
consecutive occurrences of the same job. Schedules with
low jitter allow the mobile devices to save power by
having their receivers switched off longer.
In this paper we consider a
scenario where clients may be willing to settle for non-optimal
period approximation so that the jitter is improved.
We present a parametric jitter-approximation tradeoff algorithm that
allows us to choose various combinations of jitter optimality
and period optimality for any given set of jobs.
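The two measures can be made concrete with a small sketch (our illustration, not the paper's algorithm): given the occurrence times of one job in a cyclic schedule, we compute its period blow-up factor and a rough jitter as the spread of the gaps between consecutive occurrences.

```python
def period_approximation(occurrences, requested_period, horizon):
    """Factor by which the job's average period in a cyclic schedule of
    length `horizon` is blown up w.r.t. its requested period."""
    average_period = horizon / len(occurrences)
    return average_period / requested_period

def jitter(occurrences, horizon):
    """Rough jitter: the spread (max minus min) of the gaps between
    consecutive occurrences, including the cyclic wrap-around gap."""
    gaps = [b - a for a, b in zip(occurrences, occurrences[1:])]
    gaps.append(horizon - occurrences[-1] + occurrences[0])  # wrap-around
    return max(gaps) - min(gaps)
```

For example, a job served at times [0, 5, 10, 15] in a cycle of length 20 with requested period 4 has period approximation 1.25 and jitter 0; shifting one occurrence to [0, 4, 10, 15] keeps the average period but raises the jitter to 2.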
Zvika Brakerski, Aviv Nisgav and Boaz Patt-Shamir General Perfectly Periodic Scheduling
PODC 2002, Algorithmica 2006. [Abstract] [PDF]
In a perfectly periodic schedule, each job must be scheduled
precisely some fixed number of time units after its previous
occurrence. Traditionally, motivated by centralized systems, the
perfect periodicity requirement is relaxed, the main goal being to
attain the requested average rate. Recently, motivated by mobile
clients with limited power supply, perfect periodicity seems to be
an attractive alternative that allows clients to save energy by
reducing their ``busy waiting'' time. In this case, clients may be
willing to compromise their requested service rate in order to get
perfect periodicity. In this paper, we study a general model of
perfectly periodic schedules, where each job has a requested period
and a length; we assume that $m$ jobs can be served in parallel for
some given $m$. Job lengths may not be truncated, but granted
periods may differ from the requested periods.
We present an algorithm which computes schedules such that the
worst-case proportion between the requested period and the granted
period is guaranteed to be close to the lower bound. This algorithm
improves on previous algorithms for perfect schedules in providing a
worst-case guarantee rather than an average-case guarantee, in
generalizing unit-length jobs to arbitrary-length jobs, and in
generalizing the single-server model to multiple servers.
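For concreteness, the perfect-periodicity requirement for a single job in a cyclic schedule can be sketched as follows (a minimal illustration of ours, not code from the paper):

```python
def is_perfectly_periodic(occurrences, period, horizon):
    """True if consecutive occurrences of the job are exactly `period`
    slots apart, including the wrap-around of the cyclic schedule."""
    gaps = [b - a for a, b in zip(occurrences, occurrences[1:])]
    gaps.append(horizon - occurrences[-1] + occurrences[0])  # wrap-around
    return all(g == period for g in gaps)
```

A job granted period 3 in a cycle of 9 slots and served at slots 1, 4, 7 is perfectly periodic; serving it at slots 1, 4, 8 is not.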
Zvika Brakerski, Vladimir Dreizin and Boaz Patt-Shamir Dispatching in Perfectly-Periodic Schedules
J. Algorithms 2003. [Abstract] [PDF]
In a perfectly-periodic schedule, time is divided into time slots, and each client gets a time slot precisely every predefined number of time slots, called the period of that client. Periodic schedules are useful in mobile communication, where they can help save power in the mobile device, and they also enjoy the best possible smoothness. In this paper we study the question of dispatching in a perfectly-periodic schedule, namely how to find the next item to schedule, assuming that the schedule is already given. Simple dispatching algorithms suffer from either linear time complexity per slot or from exponential space requirements. We show that if the schedule is given in a natural tree representation, then there exists a way to get the best possible running time per slot for a given space parameter, or the best possible space (up to a polynomial) for a given time parameter. We show that in many practical cases, the running time is constant and the space complexity is polynomial.
