All numbers refer to sections in the two-volume book Foundations of Cryptography: Chapters 1-4 are in Volume 1 and Chapters 5-7 are in Volume 2. A section number such as X.0 (resp., X.Y.0 or X.Y.Z.0) refers to the preamble of Chapter X preceding Section X.1 (resp., the preamble of Section X.Y (or Sec X.Y.Z) preceding Section X.Y.1 (or X.Y.Z.1)).
Meetings 1-5 and 7-8 cover material that was covered in the reading group on introduction to complexity theory, although the perspective here is different. Hence, students who took that reading group may skip these parts and focus on meetings 6 and 9-10 (for the first volume) and on meetings 12-18 and 20-21 (for the second volume). (Meetings 11 and 19 are optional; they are best done one after the other.)
When reading the text, do not rush through conceptual discussions but rather reflect on them. The same holds for high-level overviews of proofs. When failing to understand some part of a proof, make sure that you understand the high-level picture (i.e., the notions that underlie the claim being proved, the high-level strategy of the proof, etc). Do not get stuck with low-level details, unless you are sure that they are essential to your understanding of the main idea(s) of the proof. The possibility of errors in the text should not be ignored; see lists of corrections and additions in both Volume 1 and Volume 2.
CRYPTOGRAPHY is concerned with the construction of schemes that withstand any abuse: Such schemes are constructed so as to maintain a desired functionality, even under malicious attempts aimed at making them deviate from their prescribed functionality.
The design of cryptographic schemes is a very difficult task. One cannot rely on intuitions regarding the typical state of the environment in which the system operates. For sure, the adversary attacking the system will try to manipulate the environment into untypical states. Nor can one be content with counter-measures designed to withstand specific attacks, since the adversary (which acts after the design of the system is completed) will try to attack the scheme in ways that are typically different from the ones the designer had envisioned. The validity of the above assertions seems self-evident; still, some people hope that in practice ignoring these tautologies will not result in actual damage. Experience shows that these hopes rarely come true; cryptographic schemes based on make-believe are broken, typically sooner rather than later.
In view of the above, we believe that it makes little sense to make assumptions regarding the specific strategy that the adversary may use. The only assumptions that can be justified refer to the computational abilities of the adversary. Furthermore, it is our opinion that the design of cryptographic systems has to be based on firm foundations; whereas ad-hoc approaches and heuristics are a very dangerous way to go. A heuristic may make sense when the designer has a very good idea about the environment in which a scheme is to operate, yet a cryptographic scheme has to operate in a maliciously selected environment which typically transcends the designer's view.
THIS BOOK is aimed at presenting firm foundations for cryptography. The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define and provide solutions to natural ``security concerns''. We will present some of these paradigms, approaches and techniques as well as some of the fundamental results obtained using them. Our emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems.
Solving a cryptographic problem (or addressing a security concern) is a two-stage process consisting of a definitional stage and a constructive stage. First, in the definitional stage, the functionality underlying the natural concern is to be identified, and an adequate cryptographic problem has to be defined. Trying to list all undesired situations is infeasible and prone to error. Instead, one should define the functionality in terms of operation in an imaginary ideal model, and require a candidate solution to emulate this operation in the real, clearly defined, model (which specifies the adversary's abilities). Once the definitional stage is completed, one proceeds to construct a system that satisfies the definition. Such a construction may use some simpler tools, and its security is proven relying on the features of these tools. In practice, of course, such a scheme may need to satisfy also some specific efficiency requirements.
THIS BOOK focuses on several archetypical cryptographic problems (e.g., encryption and signature schemes) and on several central tools (e.g., computational difficulty, pseudorandomness, and zero-knowledge proofs). For each of these problems (resp., tools), we start by presenting the natural concern underlying it (resp., its intuitive objective), then define the problem (resp., tool), and finally demonstrate that the problem may be solved (resp., the tool can be constructed). In the latter step, our focus is on demonstrating the feasibility of solving the problem, not on providing a practical solution. As a secondary concern, we typically discuss the level of practicality (or impracticality) of the given (or known) solution.
You must become comfortable with the notion of a probabilistic polynomial-time algorithm (and negligible probability). It is also important that you get comfortable with non-uniform polynomial-time and oracle machines, because they are used in much of the definitional treatment that follows.
Core material (mandatory):
Add'l material (optional): This too should be an easy read.
Core material (obligatory).
At this point you should be able to prove (in the middle of the night) that if P=NP, then there are no one-way functions. Hence, P-neq-NP is a necessary condition for most of the area, since almost all the tools and applications we consider cannot exist if one-way functions do not exist. You should also understand the difference between strong and weak one-way functions, and see that the various length conventions can be assumed w.l.o.g. (= without loss of generality).
Add'l material (optional): This too should be an easy read.
EXERCISES
At this point, you should know most of Sec 1.2-1.3 (background on probability and computation) and Sec 2.1-2.2 (basic definitions of OWF). It is crucial that you are in command of this material. Notation: OWF = One-Way Function.
This meeting focuses on the question of how we can prove implications of the form ``the computational assumption W implies the computational assumption S'' (in the current case: weak OWFs imply strong OWFs). The point is that the weak assumption is merely an assumption, and we must use it when trying to establish the strong assumption. How can this be done?
The only way known (so far) is showing that a violation of the strong assumption (i.e., the existence of an algorithm that contradicts it) implies a violation of the weak assumption (i.e., the existence of an algorithm that contradicts it). The process by which the first algorithm is transformed to the second (and is coupled with a probabilistic analysis of the behavior of the algorithms on the relevant input distributions) is called a reducibility argument.
NOTE: I use the term reducibility argument rather than reduction for several reasons. First, because I want to emphasize the preservation of the distribution on the instances (one may say that this is an average-case variant of the standard (worst-case) reductions one sees in complexity theory). Second, because what is being reduced is the violation of the assumption and conclusion that we have in mind; that is, we reduce the task of violating the assumed hardness of one task to the violation of the hardness that we want to conclude. (Typically, this comes after we show a construction of the latter primitive, which uses the given primitive as a black-box.)
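To make the reducibility argument concrete, here is a minimal Python sketch of the direct-product style construction underlying the weak-to-strong amplification of Sec 2.3; SHA-256 stands in for the weak OWF and all names are mine, for illustration only:

```python
import hashlib

T = 8  # number of independent copies; in the actual proof t = poly(n)

def f(x: bytes) -> bytes:
    """Stand-in for a weak one-way function (SHA-256 is only an illustration)."""
    return hashlib.sha256(x).digest()

def g(xs: list) -> list:
    """Direct product: g(x_1,...,x_t) = (f(x_1),...,f(x_t)).
    The reducibility argument shows that an algorithm inverting g with
    noticeable probability yields one inverting f on almost all inputs,
    contradicting the weak one-wayness of f."""
    assert len(xs) == T
    return [f(x) for x in xs]
```

The point, of course, is not the construction but the probabilistic analysis: the inverter for f plants its challenge in a random coordinate of a g-instance and invokes the assumed inverter for g.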
CORE MATERIAL (OBLIGATORY READING). In addition to Sec 2.3, read also the two short descriptive sections (i.e., 2.2.4 and 2.4.2).
ADD'L MATERIAL (OPTIONAL/RECOMMENDED):
EXERCISES
OPTIONAL EXERCISES
Some of you were annoyed/disappointed by the discovery that a one-way function (OWF) may not hide some of its bits. Indeed, if $f$ is a OWF, then so is $f'(x,r)=(f(x),r)$, where $|r|=|x|$ (or actually $r$ can be polynomially longer than $x$). See also a more dramatic example in the suggested exercise (Nr 24).
So the question is whether we can have a OWF with a "hidden bit"; first, we will need to define such a bit (called a hard-core predicate), and then we shall show that any OWF can be transformed into one that has such a bit. Again, the transformation is easy (which is good): Given $f$, consider $f'(x,r)=(f(x),r)$, where $|r|=|x|$, and the bit will be $b(x,r) = \sum_i x_ir_i \mod 2$ (the inner-product mod 2 of $x$ and $r$).
But the issue is to prove that $b$ is a hard-core of $f'$ (i.e., it is totally hidden by $f'$). We'll spend most of our time on this. Indeed, it will be a reducibility argument (i.e., assuming, toward contradiction, the existence of a bad algorithm $B$). The argument proceeds in two steps.
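For concreteness, the padded function $f'$ and the inner-product predicate can be written in a few lines; a minimal Python sketch, with names of my choosing:

```python
import os

def inner_product_bit(x: bytes, r: bytes) -> int:
    """b(x,r) = <x,r> mod 2: the XOR of the bits of x selected by the bits of r."""
    assert len(x) == len(r)
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= bin(xb & rb).count("1") & 1  # parity of the AND of these bytes
    return acc

def f_prime(f, x: bytes):
    """f'(x,r) = (f(x), r) with |r| = |x|; returns the image and the hard-core bit."""
    r = os.urandom(len(x))
    return (f(x), r), inner_product_bit(x, r)
```

The theorem (Goldreich-Levin) says that any $B$ predicting this bit from $(f(x),r)$ noticeably better than 1/2 can be converted into an inverter for $f$.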
READING ASSIGNMENT -- CORE MATERIAL (OBLIGATORY)
ADD'L MATERIAL (OPTIONAL/RECOMMENDED):
The following is intended only for those very interested in the material.
On the other hand, it may be good to be aware of the facts and glance
at the proof ideas.
OPTIONAL EXERCISES (NOT FOR SUBMISSION)
Here, we will get closer to cryptographic applications with pseudorandom generators (PRGs). While in the TOC (Theory of Computation) PRG may mean various things (i.e., it is actually a general paradigm or family name), in the context of Cryptography it means efficient deterministic algorithms stretching short random seeds into longer sequences that look random to any feasible observer (i.e., *any* probabilistic polynomial-time machine).
We shall thus focus on the notion of computational indistinguishability: its very definition, its relation to statistical indistinguishability, and its preservation under repeated sampling.
When presenting the hybrid method for the first time, I usually voice the algorithm's complaint, which says "But I was designed to distinguish X from Y, (so why the hell do you invoke me on these weird hybrids?)"
The answer, of course, is that horses do not speak Hebrew, algorithms don't speak and complain at all, and if they do -- we just answer "Shut up and say what you would do on these hybrids" (and note that if the algorithm really behaves differently on the intermediate hybrids that it was "not designed for", then it distinguishes some hybrid from the one next to it...)
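The counting step behind this answer fits in a few lines: if the extreme hybrids are distinguished with advantage eps, then some adjacent pair is distinguished with advantage at least eps/t. A toy Python illustration (the acceptance probabilities below are hypothetical, just to exercise the telescoping):

```python
def max_adjacent_gap(accept_probs):
    """accept_probs[i] = Pr[D accepts a sample of hybrid H_i].
    Returns the largest gap between adjacent hybrids, and checks the
    telescoping bound: it is at least |p_t - p_0| / t."""
    t = len(accept_probs) - 1
    total = abs(accept_probs[-1] - accept_probs[0])
    best = max(abs(accept_probs[i + 1] - accept_probs[i]) for i in range(t))
    assert best >= total / t  # the adjacent gaps sum to at least `total`
    return best

# Hypothetical acceptance probabilities on hybrids H_0,...,H_4:
gap = max_adjacent_gap([0.10, 0.12, 0.30, 0.31, 0.50])
```

So the algorithm's "complaint" is moot: its behavior on the hybrids is well-defined, and averaging does the rest.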
READING ASSIGNMENT -- CORE MATERIAL (OBLIGATORY)
ADD'L MATERIAL (OPTIONAL/RECOMMENDED): I repeat the recommendation of last time. It is intended only for those very interested in the material. It is good to be aware of the issues and facts covered here, and maybe also glance at the proof ideas.
OPTIONAL EXERCISES (NOT FOR SUBMISSION) (All exercises appear in Sec 3.8.4 of the book)
OBLIGATORY MATERIAL
ADDITIONAL/OPTIONAL MATERIAL (in order of priority)
EXERCISES
This meeting has two parts. The first part shows a simple construction of a PRG, whereas the second part deals with pseudorandom functions (PRFs).
In retrospect, I find the treatment at the beginning of Sec 3.6.1 (i.e., Def 3.6.3) unnecessarily complicated, and prefer the alternative presentation (as in Def 3.6.4). (Optional exercise: Show that nothing is lost by this; there is a hint in the text, but you may want to work out the details.)
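A minimal sketch of the PRG-to-PRF tree construction (GGM) covered in this meeting may help fix ideas; SHA-256 stands in for the length-doubling PRG, for illustration only:

```python
import hashlib

def prg(seed: bytes):
    """Stand-in for a length-doubling PRG G(s) = (G_0(s), G_1(s));
    SHA-256 here is only an illustration, not a proven PRG."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_prf(key: bytes, x_bits: str) -> bytes:
    """GGM: walk down the binary tree whose root is the key;
    f_key(x) = G_{x_n}( ... G_{x_1}(key) ... )."""
    node = key
    for bit in x_bits:
        node = prg(node)[int(bit)]
    return node
```

The hybrid argument for its security replaces, level by level, the PRG values at the tree's frontier by truly random values.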
OBLIGATORY MATERIAL
ADDITIONAL/OPTIONAL MATERIAL
OPTIONAL EXERCISES [All exercises appear at the end of chapter 3]
ADDITIONAL EXERCISES FOR CHAPTER 2 (which can be postponed to next week, because next week's load is lighter).
We are done dealing with pseudorandom generators and now move to zero-knowledge proofs.
We start with the definition of interactive proofs, which is needed for the non-triviality of zero-knowledge proofs. Here we stress the notion of efficient verification via randomized interaction, and the notions of completeness and soundness of any proof system (and of this type, in particular).
READING ASSIGNMENT
This reading discusses at length the definition of zero-knowledge (ZK), and then presents a perfect zero-knowledge proof system for Graph Isomorphism, which we don't know to be in BPP. We note that the "simulation paradigm", which underlies the def of ZK, is a special case of the real-vs-ideal paradigm.
READING ASSIGNMENT
EXERCISES. By "(C)ZK" I mean computational zero-knowledge. (Recall, we often omit the "C"; hence the parenthesis.)
A Q-and-A (from the 2013/14 class): What if the def of ZK is extended to require similarity of the distributions for all inputs (rather than only for inputs in the set $S$)? In this case we reach triviality (i.e., the set is in BPP). This is the case because the honest prover implicitly communicates whether the input is in $S$, since the honest verifier rules correctly on this bit. (Indeed, $(P,V)(x)$ is a 0/1-random variable that is 1 w.p. greater than 2/3 if $x\in S$ and 0 w.p. greater than 2/3 otherwise.) Hence, a simulator for the honest $V$ that works for all inputs (rather than only for inputs in $S$) yields a decision procedure for $S$.
A Q-n-A (from the 2013/14 class): Looking at the book, I noticed that auxiliary-input ZK (aiZK) is defined such that the potential distinguisher is only given $x$ and $z$ (not $x$, $z$ and $y$). This is actually more natural if we think of the distinguisher as representing a part of the verifier, but indeed the alternative definition is more streamlined with the notion of computational indistinguishability of ensembles. All simulators I know (for aiZK) satisfy the seemingly stronger definition, but I wonder whether the defs are equivalent. Proposed project: Can you prove or disprove equivalence? I have no clue how difficult this may be, so please contact me before you pass the "threshold" of a few hours of thinking on it.
READING ASSIGNMENT
Another Q-n-A (from the 2013/14 class): A question by Roei Tell made me realize that the treatment in the book neglects to clarify (say in Thm 4.4.11) that the "ZKIP for NP" is actually for any NP-witness relation, and not merely for any NP-set.
You might have seen the ``boxed'' implementation of "ZKIP for NP" (of Sec 4.4.2.1), but the actual content of this meeting is a digital implementation. For this we need to define and construct (based on OWF) a commitment scheme, and then use it instead of the boxes in the ZKIP.
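As a preview, here is a minimal Python sketch of Naor's PRG-based bit commitment (a construction of the kind covered in this meeting); SHA-256-based stretching stands in for the PRG, and the parameter names are mine:

```python
import hashlib, os

N = 16  # seed length n in bytes; the receiver's string r has length 3n

def prg_3n(seed: bytes) -> bytes:
    """Stand-in for a PRG stretching n bytes to 3n bytes (illustration only)."""
    return b"".join(hashlib.sha256(seed + bytes([i])).digest()
                    for i in range(2))[:3 * N]

def commit(bit: int, r: bytes):
    """Naor's scheme: the receiver first sends a random r with |r| = 3n;
    to commit to 0 the sender sends G(s), to commit to 1 it sends G(s) XOR r.
    Returns (commitment, opening)."""
    s = os.urandom(N)
    g = prg_3n(s)
    c = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c, s

def verify(bit: int, c: bytes, s: bytes, r: bytes) -> bool:
    """Open by revealing the seed s; recompute and compare."""
    g = prg_3n(s)
    expect = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c == expect
```

Hiding follows from the pseudorandomness of G; binding holds for almost all r, since equivocation requires $G(s) \oplus G(s') = r$ for some pair of seeds.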
READING ASSIGNMENT
So far, we covered all the mandatory material for this semester. If you have energy and time left, you may also cover the following advanced material.
When I was actually teaching this class (i.e., last in 2013/14), I did have this (end-of-semester) meeting in which I suggested to
RECOMMENDED READING (NOT MANDATORY):
Among the advanced Sections 4.5-4.11, I would give
the highest priority to the following:
RE SECTION 4.6 (WI): I have to withdraw two claims regarding *strong* witness indistinguishable proofs as defined in Definition 4.6.2. Specifically, in general, *strong* witness indistinguishability is not closed under parallel composition (and so Lemma 4.6.7 is wrong). Consequently, contrary to what is stated in Theorem 4.6.8, we do not know whether there exist constant-round public-coin proofs with negligible error that are *strong* witness indistinguishable for languages out of BPP. We stress that the flaws pointed out here refer only to *strong* witness indistinguishability and not to (regular) witness indistinguishability. That is, as stated in Lemma 4.6.6, (regular) witness indistinguishability is closed under parallel composition, and thus the part of Theorem 4.6.8 that refers to regular witness indistinguishability is valid (i.e., providing constant-round public-coin proofs with negligible error that are witness indistinguishable for NP).

RE SECTION 4.10 (NIZK): Remark 4.10.6, as well as Theorems 4.10.10, 4.10.14 and 4.10.16, seem to require trapdoor permutations in which the permutation's domain coincides with the set of all strings of a certain length. This special case of Definition 2.4.5 can be implemented by modifying the RSA or Factoring trapdoor permutations (cf. Canetti et al (in STOC'96) and [5]). (For further detail on Remark 4.10.6, the reader is referred to [23].) Unfortunately, the correction that appeared in Appendix C of Volume 2 is also incorrect. For details see http://www.wisdom.weizmann.ac.il/~oded/PSBookFrag/nizk-tdp.ps. For a discussion of *enhanced trapdoor permutations* see my paper with Ron Rothblum titled Enhancements of Trapdoor Permutations (at http://www.wisdom.weizmann.ac.il/~oded/p_tdp.html). For a few additional errors/corrections see the list at the bottom of the web-page of Volume 1.
In this semester and in Vol 2, we move to "actual" cryptography. We shall cover encryption, signatures, and general secure MPC (multi-party computation).
Starting with encryption, we discuss private-key and public-key schemes, which differ in the security definition (i.e., whether or not the adversary knows the encryption-key), which in turn affects the actual way of using the scheme. The basic syntax is of three probabilistic polynomial-time algorithms: key-generation, encryption, and decryption.
The focus is on the definition of security. We consider two natural definitions: (1) "semantic security", and (2) "security as indistinguishability of encryptions"; Def (1) is more intuitive, whereas Def (2) is more useful in some applications. We show that the definitions are in fact equivalent. For simplicity, implicitly in (1) and explicitly in (2), the definitional treatment is in terms of non-uniform complexity.
The basic treatment concerns the encryption of a single message, but it is extended to multiple messages (encrypted using the same key). For public-key schemes, this is equivalent to the case of a single message, but that's not the case for private-key schemes.
READING ASSIGNMENT
OPTIONAL READING
Having discussed the definition of secure encryption at great length, we now turn to the construction of such schemes.
The basic results are that (1) secure private-key schemes can be constructed based on any (non-uniformly hard) one-way function, whereas (2) secure public-key schemes can be constructed based on any (non-uniformly hard) collection of trapdoor permutations.
The first result is proved by using a pseudorandom function (PRF) to implement a "one-time pad" (where the counter that keeps track of the position in the pad is replaced by a randomly selected value). Note the methodology of proving security, which is based on a mental experiment in which the PRF is replaced by (oracle access to) a truly random function.
The second result is proved using a hard-core predicate of such a permutation for encrypting a single bit. A less wasteful version is also presented. The presentation uses a notion of "block cipher" that is different from the standard one (see Footnote 14).
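The first construction (a PRF used as a "one-time pad" at a random point) can be sketched as follows; SHA-256 applied to the key and the point stands in for the PRF, for illustration only:

```python
import hashlib, os

def prf(key: bytes, x: bytes) -> bytes:
    """Stand-in PRF f_key(x) (keyed SHA-256; an illustration, not the book's PRF)."""
    return hashlib.sha256(key + x).digest()

def encrypt(key: bytes, msg: bytes):
    """Pick a fresh random point r and XOR the message with the pad f_key(r);
    the ciphertext is the pair (r, msg XOR pad)."""
    assert len(msg) <= 32  # single PRF block, for simplicity of the sketch
    r = os.urandom(16)
    pad = prf(key, r)[:len(msg)]
    return r, bytes(m ^ p for m, p in zip(msg, pad))

def decrypt(key: bytes, ct):
    r, body = ct
    pad = prf(key, r)[:len(body)]
    return bytes(c ^ p for c, p in zip(body, pad))
```

The security proof proceeds exactly by the mental experiment mentioned above: with a truly random function in place of `prf`, each ciphertext is a fresh one-time pad (up to the negligible chance of a repeated r).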
READING ASSIGNMENT
OPTIONAL READING
EXERCISES (at the end of the chapter)
OPTIONAL EXERCISES
Being so happy with the notion and results regarding secure encryption schemes, we note that these schemes are secure only against an adversary that merely eavesdrops on the communication rather than tries to influence it in any way. Turning to such influences, we consider the following types of manipulation.
READING ASSIGNMENT: Parts of Sec 5.4 "Beyond eavesdropping security"; please read
HIGHLY RECOMMENDED (but not obligatory)
We start a new major topic; that of signature schemes, both in the private-key incarnation (usually called MAC) and in the public-key one (usually called "signatures"). We use a unified framework in which both appear as special cases, although their actual usage in applications is very different.
After presenting the basic definition, we turn to a seemingly odd restriction, called length-restricted signature schemes, which is motivated by being a useful step towards a full-fledged scheme. We describe one transformation of length-restricted schemes into full-fledged ones, but will actually use another (which will be presented next time). Still, the current construction, and even more so its analysis, is very instructive.
READING ASSIGNMENT
RECOMMENDED (but not obligatory)
We shall cover three important topics:
READING ASSIGNMENT
The corresponding reading material for these three topics is:
RECOMMENDED EXERCISES
This is the first of two meetings devoted to the construction of secure signature schemes (in the public-key model). In this part we shall cover three important paradigms.
The first paradigm is the notion of one-time signature schemes (OTS), which are signature schemes that are secure as long as one uses them (with a fresh signing-key) at most once. Although very restricted, it will turn out that they imply general signature schemes. We shall also discuss length-restricted OTS, see how to construct them based on any OWF, and get general OTS via the hash-and-sign paradigm.
The second paradigm is "refreshing" and the third is "authentication trees". The bulk of Sec 6.4.2 is showing how to combine "refreshed" instances of an OTS with an authentication tree in order to obtain a general signature scheme. This scheme is presented first in the memory-dependent model, but we will get rid of the memory/state later (in the next meeting).
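The OWF-based one-time scheme mentioned above is essentially Lamport's; here is a minimal Python sketch for messages of a fixed bit-length (SHA-256 stands in for the OWF, for illustration only):

```python
import hashlib, os

def f(x: bytes) -> bytes:
    """Stand-in one-way function (SHA-256 as an illustration)."""
    return hashlib.sha256(x).digest()

def keygen(n_bits: int = 8):
    """Lamport's one-time scheme: the secret key is 2*n random strings,
    the public key is their images under f."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(n_bits)]
    pk = [[f(s0), f(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, msg_bits: str):
    """Reveal, for each message bit b_i, the preimage sk[i][b_i].
    Signing two different messages with one key breaks security -- one-time only!"""
    return [sk[i][int(b)] for i, b in enumerate(msg_bits)]

def verify(pk, msg_bits: str, sig) -> bool:
    return all(f(sig[i]) == pk[i][int(b)] for i, b in enumerate(msg_bits))
```

Combined with hash-and-sign (to handle arbitrary message lengths) and an authentication tree of refreshed instances, this yields the general scheme of Sec 6.4.2.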
READING ASSIGNMENT
OPTIONAL READING:
EXERCISES
SUGGESTED DISCUSSION: Digest the memory-based scheme of Construction 6.4.14, focusing on the function of the authentication tree. Then note how this can be made memory-less by replacing the storage of random values (used to generate key pairs) by the use of a PRF (applied to the names of nodes in the auth-tree).
RECOMMENDED READING (Universal One-Way Hashing Function)
ADDITIONAL READING
Note: This is advanced and supplementary material, which actually fits right after Meeting 11.
All constructions we have seen in Chapter 4 are "black-box" in the sense that the simulator uses the verifier program as a black-box. Indeed, it is hard to imagine what else the simulator can do with the verifier program, given that reverse engineering it seems infeasible. Still, the simulator that underlies the technique that will be presented here does use the verifier program in a non-black-box manner (i.e., it uses it as a string; specifically, as an NP-witness for the authenticity of the verifier's next message). This yields constructions of protocols that cannot be proven ZK in a black-box manner, provided NP is not in BPP.
You have to decide if you are going to try to read my sketchy description of the construction. If you want to do so, then you will need the following prerequisites.
REQUIRED READING in preparation for the actual contents of the meeting
OPTIONAL READING (only needed for the very last paragraph (i.e., the stronger result))
Now, following is my sketch of the non-black-box technique. A wider perspective (as well as a less detailed sketch) is provided in Apdx C.5.2 of Volume 2.
ON BOAZ BARAK'S NON-BLACK-BOX TECHNIQUE AS APPLIED TO OBTAIN CONSTANT-ROUND, PUBLIC-COIN ZK ARGUMENTS FOR NP

We will present a constant-round public-coin zero-knowledge argument system for any set in NP, where the computational soundness error is negligible. It is known that proving such a result via black-box simulation is highly unlikely, because constant-round public-coin *black-box* zero-knowledge argument systems exist only for sets in BPP. (Recall that argument systems were defined in Sec 4.8.1.) While the ``standard'' definition of ZK refers to any PPT verifier that uses an auxiliary input of polynomial length, we shall warm up by proving ZK only wrt the basic definition (without aux-input). Actually, we shall start by establishing ZK wrt verifiers that use aux-input of sub-linear length.

Recall that a pair of interactive PPT machines $(P,V)$ is an argument system for $S$ (typically in $\NP$) if

* completeness: For every $x\in S$ there exists $w$ (of $poly(|x|)$-length) such that $\prob[(P(w),V)(x)=1] = 1$.

* (computational) soundness: For every $x\not\in S$ and every $poly(n)$-size circuit family $\{P_n\}$ it holds that $\prob[(P_{|x|},V)(x)=1]$ is negligible in $|x|$.

Note that the soundness error is negligible. The def extends to $\Ntime(t)$ by replacing poly with $t$ (in the completeness condition, and *maybe* poly by poly(t) in soundness). (In general, I'll try to mention the soundness error in each case.)

Throughout, we shall use witness-indistinguishable (WI) argument systems (as defined in Sec 4.6.1.1). Recall that these are interactive arguments $(P,V)$ for NP-sets $S$ such that for every $x\in S$, corresponding NP-witnesses $w,w'\in R(x)$, and every poly-size $V^*$, the distributions $(P(w),V^*)(x)$ and $(P(w'),V^*)(x)$ are computationally indistinguishable. Note that ZK implies WI, but not necessarily the other way around. Also note that WI is preserved under parallel executions. (Again, the notion of WI extends to NTIME(t).)
The warm-up: Zero-Knowledge wrt auxiliary-input of sub-linear length.
==============================================================

[Overview: The basic strategy.] Actually, on common input $x$, we shall consider (deterministic) verifiers having aux-input of length at most $|x|/2$. (Note that we can also handle randomized verifiers with aux-input of length less than $|x|/3$, by first reducing their randomness complexity to $|x|/6$ (by using a PRG).) The protocol proceeds as follows, on input $x\in\bitset^n$:

1. $V$ sends the prover a random $n$-bit string, denoted $r$.

2. The prover uses a WI proof to prove that *either* $x\in S$ *or* there exists a program $\Pi$ of length less than $n/2$ such that $\Pi(x)=r$ within poly(n) steps.

The idea is that *in the real interaction* the 2nd condition is unlikely to hold (because $\prob_r[\exists \Pi \mbox{ s.t. } \Pi(x)=r] \leq 2^{-n/2}$), which means that soundness is maintained, and yet the 2nd condition assists the simulation: Given a cheating verifier $V^*$ of length less than $n/2$, the simulator can use it as a program $\Pi$ in the 2nd condition (of the WI system invoked in Step 2). The WI feature implies that this simulation (of the WI proof) is indistinguishable from the real interaction (in which an NP-witness for $x\in S$ is used).

[An annoying but crucial detail: What is the running time of $V^*$?] Note that the 2nd condition in Step 2 asserts that the program $\Pi$, which we wish to set to $V^*$, runs in "polynomial time". But what is this polynomial? We need to fix it when we design our proof system, whereas $V^*$ is "selected" afterwards. So what polynomial should we use? (Note: it must be bigger than any poly bound for $V^*$.) So we cannot use any fixed polynomial; instead, we pick a super-poly function $t$ (e.g., $t(n)=n^{\log n}$). Hence, the 2nd condition (in Step 2) is revised to read: there exists a program $\Pi$ of length less than $n/2$ such that $\Pi(x)=r$ within $t(n)$ steps.
But then this is not an NP-assertion. Still, it is an Ntime(t)-assertion. So we need a WI argument for a set in NTIME(t), where (say) $t(n)=n^{\log n}$.

[Obtaining a WI argument for a set in NTIME(t), where $t(n)=n^{\log n}$.] Firstly, note that the low-communication argument system for NP (reviewed in Sec 4.8.4) generalizes to NTIME(t). Specifically, assuming a CFH (= collision-resistant hashing) that is resilient against poly(t)-size circuits, we obtain an argument system for NTIME(t). (It can be made to have communication $n^\epsilon$, for any $\epsilon > 0$, but we don't need that. All we care about is that verification is in poly-time.) Furthermore, in this argument system, the complexity of the honest prover, on common input $x$, is polynomial in the time of the fastest ND computation (i.e., computation with the ``best witness'') that leads to accepting the input $x$. (ND = non-deterministic.) (If $x$ is recognized in $t_x \leq t(|x|)$ steps, then we can construct the corresponding PCP oracle in time polynomial in $t_x$.) This instance-based/point-wise/local complexity measure is crucial here, since it will mean that the simulator, which runs this prover using the cheating $V^*$ as an NP-witness, will spend time that is polynomial in the running-time of $V^*$. (Note that this argument system is constant-round.)

So far we have presented an argument system for NTIME(t), whereas we need a WI one. We first note that the above argument system is of the public-coin type. Next, we use a generic technique for converting argument systems (or interactive proof systems) of the public-coin type with perfect completeness into zero-knowledge ones (and hence WI ones). Specifically, since the verifier's actions are oblivious of the prover's answers/msgs, we can just let the prover "encrypt" its msgs using a commitment scheme, and augment the "encrypted" interaction with a (ZK/WI) proof that the original verifier would have accepted the corresponding decrypted/de-committed transcript.
(That is, the NP-assertion being proved is that the sequence of commitments and verifier-random-msgs $(c_1,r_1,...,c_t,r_t)$ corresponds to a transcript $(v_1,r_1,...,v_t,r_t)$ that would have convinced the original verifier to accept the input, where $v_i$ is the value committed to in $c_i$ (and the NP-witness also contains de-commitment information).)

If we use a ZK proof (and do not use non-BB techniques), then we cannot hope to have a constant-round system with public coins, but it suffices to use a *strong*-WI proof (Def 4.6.2) with such features. Ignoring the difference between WI and strong-WI, let me re-iterate where we stand. We wish to construct a WI argument system for NTIME(t), and we sketched a construction that starts with an argument system for NTIME(t) and employs a (strong-)WI subprotocol (for NP) that transforms it into a WI argument system for NTIME(t). The resulting protocol uses commitments to the prover messages of the original argument system and a (strong-WI) subprotocol for proving that these commitments are to values that would have been accepted by the original verifier.

[Another annoying but crucial detail: we need strong-WI rather than WI.] Recall the definition of strong-WI (Def 4.6.2), and note (see more below) that we actually need the subprotocol to be strong-WI. [Note that the subprotocol, which is used to prove that the "encrypted" transcript encodes an accepting one, gets as input the said commitments/encryptions.]
Specifically, we need a (constant-round, public-coin) *strong-WI* for NP (with negligible error). The *strong* version is needed because the inputs to this subprotocol are sequences of commitments that correspond to different "encrypted" executions of the plain argument system, and different "encrypted" executions (on the same $x$) are most likely to yield different sequences of commitments (even when the committed values are equal, let alone that the committed values may be different, because we consider executions of the plain system with different NP-witnesses). These commitments are computationally indistinguishable, and using *strong*-WI implies that the transcripts (of the strong-WI subprotocols) are computationally indistinguishable.

Unfortunately, we do not have constant-round public-coin *strong-WI* systems for NP with negligible error, but we do have such systems with constant error. The gap can be bridged in either of two ways: (1) by using a constant-error strong-WI as a subprotocol, and running the entire resulting (constant-error) WI protocol in parallel; (2) by running parallel versions of the encrypted argument system and applying a constant-error strong-WI to each, in parallel, while relying on the fact that strong-WI is preserved in the case of product distributions on the sequence of inputs (see Apdx C.3.1). That is, option (2) uses the fact that the sequences of commitments generated in the "encrypted" argument system form a product distribution (i.e., the commitments are statistically independent).

[Stressing a subtle point: the role of perfect completeness. The foregoing transformation (which uses "encrypted" prover-messages) requires perfect completeness (in the original protocol), which we have in the current application. The point is that a cheating/adversarial verifier may learn whether a certain (rare) sequence of verifier-randomness is convincing.
It may even be that the all-zero randomness is convincing for one witness but not for another. This does not occur in case the protocol has perfect completeness; intuitively, whatever randomness the adversarial verifier chooses, it will see a sequence of hidden messages plus a ZK/WI proof/arg that this sequence encodes an accepting transcript...]

The real thing: Zero-Knowledge wrt aux-input of arbitrary polynomial length.
==========================================================================

So far we only handled cheating verifiers that have aux-input of length at most $|x|/2$ (on common input $x$). Note that this hypothesis was essential to our analysis of soundness, which relies on the fact that most $n$-bit strings cannot be generated by a program of length less than $n/2$. Hence, we modify the protocol as follows, while using a strong (as above) CFH $H_n$ (which maps arbitrary bit strings to $n$-bit long strings) and a commitment scheme $C$. The protocol proceeds as follows, on input $x\in\bitset^n$:

0. $V$ sends the prover a random $h\in H_n$, and the prover responds with $c = C(0^n)$.

1. $V$ sends the prover a random $n$-bit string, denoted $r$.

2. The prover uses a WI **argument of knowledge** (AOK rather than POK) to prove that it **knows** *either* a witness for $x\in S$ *or* a program $\Pi$ of length less than $t(n)$ and some $s$ such that $\Pi(x,c)=r$ within $t(n)$ steps and $c=C(h(\Pi),s)$, where $s$ denotes the randomness used in the commitment scheme $C$.

Note that the prover is required to prove its knowledge of $\Pi$ and not merely $\Pi$'s existence (which is almost a triviality (see Exer)). The definition of an AOK/POK should be revised here so to allow extraction in time that is polynomially related to the length of the witness (which may be $t(n)$), and polynomial in the inverse of the acceptance probability of the AOK.
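The message flow of the foregoing protocol can be sketched in a few lines of code. This is a toy illustration only, NOT a secure implementation: SHA-256 stands in for a member of the CFH family $H_n$, a hash-based commitment stands in for the perfectly binding scheme $C$, and the WI argument of knowledge of Step 2 is elided (all names below are illustrative assumptions).

```python
# Toy sketch of the protocol above (illustration only; NOT secure).
# Stand-ins: SHA-256 plays the role of a member h of the CFH family H_n,
# and a hash-based commitment plays the role of the perfectly binding
# scheme C.  The WI argument of knowledge of Step 2 is elided.
import hashlib
import os

N = 32  # security parameter n, in bytes

def h(data):
    """Stand-in for the verifier's randomly chosen hash function in H_n."""
    return hashlib.sha256(data).digest()

def commit(value):
    """Toy commitment C(value; s): returns the commitment and the coins s."""
    s = os.urandom(N)
    return hashlib.sha256(s + value).digest(), s

# Step 0: V sends (a description of) h; the prover commits to 0^n.
c, s = commit(b"\x00" * N)

# Step 1: V sends a random n-bit string r.
r = os.urandom(N)

# Step 2 (elided): the prover gives a WI argument of KNOWLEDGE that EITHER
# it knows a witness for x in S, OR it knows a program Pi and coins s such
# that Pi(x, c) = r within t(n) steps and c = C(h(Pi), s).  The honest
# prover uses the first branch; the simulator, holding the code of V*,
# commits to h(V*) in Step 0 and uses the second branch.
```

Note that all the cryptographic work happens in the elided Step 2; the sketch merely fixes the message flow and the role of the commitment made before $r$ is seen.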
Again, the idea is that in the real interaction the 2nd condition is unlikely to hold (essentially because the prover commits to a hash-value $c$ before seeing $r$), and yet it assists the simulation (as before). Details (on both issues) follow.

EXER: Prove that if $h$ is regular (i.e., $h^{-1}(0^n)$ has approximately $2^{m-n}$ elements of length $m$), then, for every $r$, there exists a $\Pi$ as in the 2nd condition.

Turning back to the foregoing protocol, we first establish the ZK feature. The simulator, which obtains the code of (a poly-size) $V^*$, just emulates the actions of a related prover as follows: In Step 0, rather than committing to $0^n$, it commits to $h(V^*)$. In Step 2, it uses the WI-prover and proves knowledge of a witness for the 2nd condition; that is, it runs this prover using $V^*$ (and the randomness $s$) as a witness. By the instance-based complexity measure, it follows that this simulation is completed in time polynomial in the size of $V^*$. The fact that this simulation is computationally indistinguishable from the real interaction follows by the WI feature (and the hiding property of $C$), but this is slightly less straightforward than it seems. To clarify the issue consider the following three distributions: (i) the real interaction: uses $C(0^n)$ and a witness to $x\in S$; (ii) a mental experiment: uses $C(h(V^*))$ and a witness to $x\in S$; (iii) the simulation: uses $C(h(V^*))$ and the witness $V^*$. WI implies that (ii) and (iii) are computationally indistinguishable, but we also need to show that (i) and (ii) are computationally indistinguishable. The latter follows using the hypothesis that $S\in NP$ and noting that the full transcript can be generated in polynomial-time by using the NP-witness (for $x$) as aux-input.
[Note the difference vis-a-vis the situation discussed in the construction of WI for NTIME(t) above, where strong-WI was not preserved by parallel executions (see Apdx C.3.1 in Vol 2); specifically, here the hybrid distribution (i.e., (ii)) is polynomial-time constructible. In general, for $(x,w),(x',w') \in R$, it need not hold that $(x',w)\in R$; in (ii) we capitalized on the fact that here $(x',w)\in R$.]

Next, we need to establish the computational soundness of the protocol. Suppose that for some $x\not\in S$ a poly-size strategy $P^*$ manages to convince the verifier with non-negligible probability $p(x)$. We start by fixing a random execution of Step 0, and considering the residual execution from that point on, noting that w.p. $\geq p(x)/2$ this residual execution accepts w.p. $\geq p(x)/2$. Let $h$ and $c$ be the hash-function and commitment value used (in Step 0). We consider two random executions of the residual protocol such that in the $i$th execution the verifier selects $r_i$ in Step 1, and note that with probability at least $(p(x)/4)^2$ these $r_i$'s are such that Step 2 accepts with probability at least $p(x)/4$ (for each $r_i$). Using the AOK feature of the argument system, we obtain (via extraction, in time $\poly(t(n)/p(x))$) two programs $\Pi_1$ and $\Pi_2$ such that $\Pi_i(x,c)=r_i$ and $c$ is in the range of $C(h(\Pi_i))$. Hence, $\Pi_1 \neq \Pi_2$ (since these $\Pi_i$'s yield different $r_i$'s on the same $(x,c)$) but $h(\Pi_1) = h(\Pi_2)$ (by the perfect binding of $C$), which contradicts the hypothesis regarding $H$ (i.e., its being a CFH resilient against size $\poly(t)$).

Finally, we need to show a WI AOK protocol for NTIME(t). Recall that we showed a WI argument for NTIME(t), but we need one with a POK feature. This can be shown by noting that our construction of a WI argument has a POK feature provided that its ingredients have this feature.
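The two averaging steps used in the soundness argument above are standard; for completeness, here is a sketch (with $p = p(x)$, using the fact that a random variable $A \in [0,1]$ with $E[A] \ge p$ satisfies $\Pr[A \ge p/2] \ge p/2$):

```latex
% Averaging over the Step-0 prefix:
\Pr_{\text{Step 0}}\bigl[\,\Pr[\text{residual execution accepts}] \ge p/2\,\bigr]
  \;\ge\; p/2 .
% Fixing such a prefix, with residual acceptance probability q \ge p/2,
% and averaging over the choice of r in Step 1:
\Pr_{r}\bigl[\,\Pr[\text{Step 2 accepts}] \ge q/2\,\bigr]
  \;\ge\; q/2 \;\ge\; p/4 ,
% so two independent choices r_1, r_2 are both "good" with probability
% at least (p/4)^2, while r_1 = r_2 occurs with probability only 2^{-n}.
```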
(Recall that the ingredients are a (strong) WI proof for NP, and a PCP system for NTIME(t); details are omitted.)

A stronger result: Establishing the above based on standard CFH
================================================================

Recall that the above results were established under the hypothesis that we have a CFH that is resilient against size $\poly(t)$, while the standard assumption refers to resilience wrt poly-size. We can prove the same result using the standard assumption by (1) introducing an adequate definition (i.e., "universal arguments"), which includes a local-extractability feature (i.e., extracting individual bits of the witness in poly-time), (2) constructing such universal arguments using standard CFH, and (3) using them together with tree-hashing (instead of $H$ above) and with error-correcting codes (applied before the hashing). For details, see http://www.wisdom.weizmann.ac.il/~oded/p_ua.html
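To illustrate the tree-hashing mentioned in item (3), here is a minimal Merkle-tree sketch (SHA-256 stands in for the collision-free hash; this is an illustration, not the construction of the universal-arguments paper): a single $n$-bit root commits to many blocks, yet any single block can be certified with a logarithmic-size authentication path, which is the kind of local access that underlies the local-extractability feature.

```python
# Minimal Merkle-tree ("tree-hashing") sketch: SHA-256 stands in for the
# collision-free hash.  One short root commits to many blocks, and any
# single block can be revealed and verified via a log-size path.
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def merkle_root(blocks):
    """Hash the leaves pairwise, layer by layer, up to a single root."""
    layer = [H(b) for b in blocks]          # number of blocks: a power of 2
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(blocks, i):
    """Authentication path for block i: the sibling hash at every layer."""
    layer, path = [H(b) for b in blocks], []
    while len(layer) > 1:
        path.append(layer[i ^ 1])           # sibling of the current node
        layer = [H(layer[j] + layer[j + 1]) for j in range(0, len(layer), 2)]
        i //= 2
    return path

def merkle_verify(root, block, i, path):
    """Recompute the root from block i and its path, and compare."""
    node = H(block)
    for sib in path:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node == root

blocks = [bytes([j]) * 4 for j in range(8)]  # eight 4-byte blocks
root = merkle_root(blocks)
path = merkle_path(blocks, 3)
```

Verifying block 3 against the root takes only `len(path) == 3` hash evaluations here, independently of how many blocks the root commits to.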
An account from the 2013/14 class: After a long (much longer than planned...) discussion about types of adversaries considered in cryptography (i.e., uniform vs non-uniform aux-inputs, randomized or deterministic adversaries), we started talking about "solving generic cryptography problems" (aka "general secure multi-party computations" (MPC)).
My guess is that the aforementioned account is related to the fact that Section 7.1.1 presents a host of definitions that are incomparable rather than a single definition that is advocated as ultimate. This stands in contrast to the situation in previous chapters, but it merely reflects the fact that in the current case there is no ultimate goal but rather trade-offs between the fraction of malicious parties that can be tolerated and the level of malicious behavior.
Nevertheless, all definitions are obtained as instantiations of a single definitional framework that views secure (real-model) protocols as emulating an ideal model (see Sec 7.1.1.0). The specific definitions are obtained by postulating limits on the malicious behavior in the real model, defining an ideal model, and postulating that security means that any malicious behavior in the real model is essentially reflected in the ideal model. This abstract description may be better understood after seeing two concrete instantiations; that is, those presented in Sections 7.1.1.2 and 7.1.1.3. The mind-blowing array of parameters that are used to derive a host of alternative definitions is presented in Section 7.1.1.1 (and may be better read later).
READING ASSIGNMENT
Sections 7.1.0--7.1.2:
The definitional part of the overview and statement of known results.
This means reading page 599 (i.e., 1st page of Chapter 7)
as well as Sections 7.1.1 (definitions) and 7.1.2 (results).
As hinted above, I suggest the following order.
It may be good to go over the definitions of secure MPC again as well as over some known results and constructions. This corresponds to Items 1-6 (below), whereas Items 7-8 provide overviews of constructions (to be read now).
READING ASSIGNMENT
ADD'L SUGGESTED READING (OPTIONAL!)