Guidelines for a reading group on FOUNDATIONS OF CRYPTOGRAPHY

(by Oded Goldreich)


Quick links: The notes on One-Way Functions (OWF), Pseudorandomness (PRG), Zero-Knowledge (ZK), Encryption, Signatures, Secure Multi-Party Computation (MPC).

Orientation

This page contains guidelines for a reading group on foundations of cryptography based on my two-volume book on the subject. Unlike the guidelines provided for a reading group on introduction to complexity theory, the following guidelines are actually my personal lecture notes (which are in the process of being adapted for the current use). This means that they are terser and more prone to errors.

All numbers refer to sections in the two-volume book Foundations of Cryptography: Chapters 1-4 are in Volume 1 and Chapters 5-7 are in Volume 2. A section number such as X.0 (resp., X.Y.0 or X.Y.Z.0) refers to the preamble of Chapter X preceding Section X.1 (resp., the preamble of Section X.Y (or Sec X.Y.Z) preceding Section X.Y.1 (or X.Y.Z.1)).

Meetings 1-5 and 7-8 cover material that was covered in the reading group on introduction to complexity theory, although the perspective is different. Hence, students who took that reading group can skip these parts and focus on meetings 6 and 9-10 (for the first volume) and on meetings 12-18 and 20-21 (for the second volume). (Meetings 11 and 19 are optional; they are better done one after the other.)

When reading the text, do not rush through conceptual discussions but rather reflect on them. The same holds for high-level overviews of proofs. When failing to understand some part of a proof, make sure that you understand the high-level picture (i.e., the notions that underlie the claim being proved, the high-level strategy of the proof, etc). Do not get stuck with low-level details, unless you are sure that they are essential to your understanding of the main idea(s) of the proof. The possibility of errors in the text should not be ignored; see lists of corrections and additions in both Volume 1 and Volume 2.

Preface

[The following six paragraphs are taken from the book's preface; I call attention to the 4th and 6th paragraphs, which describe the nature of the book and of this reading course.]

CRYPTOGRAPHY is concerned with the construction of schemes that withstand any abuse: Such schemes are constructed so as to maintain a desired functionality, even under malicious attempts aimed at making them deviate from their prescribed functionality.

The design of cryptographic schemes is a very difficult task. One cannot rely on intuitions regarding the typical state of the environment in which the system operates. For sure, the adversary attacking the system will try to manipulate the environment into untypical states. Nor can one be content with counter-measures designed to withstand specific attacks, since the adversary (which acts after the design of the system is completed) will try to attack the schemes in ways that are typically different from the ones the designer had envisioned. The validity of the above assertions seems self-evident; still, some people hope that in practice ignoring these tautologies will not result in actual damage. Experience shows that these hopes rarely come true; cryptographic schemes based on make-believe are broken, typically sooner rather than later.

In view of the above, we believe that it makes little sense to make assumptions regarding the specific strategy that the adversary may use. The only assumptions that can be justified refer to the computational abilities of the adversary. Furthermore, it is our opinion that the design of cryptographic systems has to be based on firm foundations; whereas ad-hoc approaches and heuristics are a very dangerous way to go. A heuristic may make sense when the designer has a very good idea about the environment in which a scheme is to operate, yet a cryptographic scheme has to operate in a maliciously selected environment which typically transcends the designer's view.

THIS BOOK is aimed at presenting firm foundations for cryptography. The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define and provide solutions to natural ``security concerns''. We will present some of these paradigms, approaches and techniques as well as some of the fundamental results obtained using them. Our emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems.

Solving a cryptographic problem (or addressing a security concern) is a two-stage process consisting of a definitional stage and a constructive stage. First, in the definitional stage, the functionality underlying the natural concern is to be identified, and an adequate cryptographic problem has to be defined. Trying to list all undesired situations is infeasible and prone to error. Instead, one should define the functionality in terms of operation in an imaginary ideal model, and require a candidate solution to emulate this operation in the real, clearly defined, model (which specifies the adversary's abilities). Once the definitional stage is completed, one proceeds to construct a system that satisfies the definition. Such a construction may use some simpler tools, and its security is proven relying on the features of these tools. In practice, of course, such a scheme may need to satisfy also some specific efficiency requirements.

THIS BOOK focuses on several archetypical cryptographic problems (e.g., encryption and signature schemes) and on several central tools (e.g., computational difficulty, pseudorandomness, and zero-knowledge proofs). For each of these problems (resp., tools), we start by presenting the natural concern underlying it (resp., its intuitive objective), then define the problem (resp., tool), and finally demonstrate that the problem may be solved (resp., the tool can be constructed). In the latter step, our focus is on demonstrating the feasibility of solving the problem, not on providing a practical solution. As a secondary concern, we typically discuss the level of practicality (or impracticality) of the given (or known) solution.


PLAN FOR THE 1ST MEETING: INTRODUCTION AND A START

Topics: Background material and a first taste of the actual material.

Part 1: Background material

Some of you may know some of this material, but do take a look at all parts to make sure you are comfortable with it. If necessary, consult add'l sources for more details on any of this.

You must become comfortable with the notion of a probabilistic polynomial-time algorithm (and negligible probability). It is also important that you get comfortable with non-uniform polynomial-time and oracle machines, because they are used in much of the definitional treatment that follows.

Core material (mandatory):

I expect that the aforementioned text (i.e., Sec 1.2 and 1.3) should be a very easy read, also for those who do not know this background. If this is not the case for you, then it may be that the course is not suitable for you.

Add'l material (optional): This too should be an easy read.

Part 2: A first taste of the actual material

Sec 2.1 motivates the definition of one-way functions, which is provided in Sec 2.2.

Core material (obligatory).

This should be an easy read. Sec 2.2.1 and 2.2.2 present basic definitions; just read them carefully and make sure you understand what they mean (e.g., how they differ). It is good to be aware of Sec 2.2.3, but unnecessary to suffer the tedious but quite straightforward proofs. Still, be warned that a proof is required; we shall further discuss this issue in the next reading assignment (i.e., transforming weak one-way functions into strong ones).

At this point you should be able to prove (in the middle of the night) that if P=NP, then there are no one-way functions. Hence, P-neq-NP is a necessary condition for most of the area, since almost all the tools and applications that we consider cannot exist if one-way functions do not exist. You should also understand the difference between strong and weak one-way functions, and see that the various length conventions can be assumed w.l.o.g. (= without loss of generality).
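For the record, here is a sketch of that middle-of-the-night proof (in my notation; it uses the standard fact that P=NP makes NP-search problems easy):

```latex
Suppose $P=NP$ and let $f$ be any polynomial-time computable function.
Then $R = \{((y,1^n),x) : |x|=n \wedge f(x)=y\}$ is an NP-relation,
and $P=NP$ implies that the search problem of any NP-relation is
solvable in deterministic polynomial time (e.g., by extending a
candidate witness bit-by-bit using the NP decision procedure).
Hence there is a polynomial-time algorithm that, on input
$(f(x),1^{|x|})$, outputs some $x'$ satisfying $f(x')=f(x)$;
that is, $f$ is not even weakly one-way.
```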

Add'l material (optional): This too should be an easy read.

EXERCISES


PLAN FOR THE 2ND MEETING: ONE-WAY FUNCTIONS FROM WEAK TO STRONG

Topic: From weak one-way functions to strong ones (or a first real encounter with reducibility arguments).

At this point, you should know most of Sec 1.2-1.3 (background on probability and computation) and Sec 2.1-2.2 (basic definitions of OWF). It is crucial that you are in command of this material. Notation: OWF = One-Way Function.

This meeting focuses on the question of how we can prove implications of the form "the computational assumption W implies the computational assumption S" (in the current case, that weak OWFs imply strong OWFs). The point is that the weak assumption is merely an assumption, and we must use it when trying to establish the strong assumption. How can this be done?

The only way known (so far) is showing that a violation of the strong assumption (i.e., the existence of an algorithm that contradicts it) implies a violation of the weak assumption (i.e., the existence of an algorithm that contradicts it). The process by which the first algorithm is transformed to the second (and is coupled with a probabilistic analysis of the behavior of the algorithms on the relevant input distributions) is called a reducibility argument.

NOTE: I use the term reducibility argument rather than reduction for several reasons. First, because I want to emphasize the preservation of the distribution on instances (one may say that this is merely an average-case variant of the standard (worst-case) reductions one sees in complexity theory). Second, because what is being reduced is the violation of the assumption and of the conclusion that we have in mind; that is, we reduce the task of violating the assumed hardness to the task of violating the hardness that we want to conclude. (Typically, this comes after we show a construction of the latter primitive that uses the given primitive as a black-box.)
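To make the shape of such an argument concrete, here is a toy sketch (in Python) of the construction underlying Sec 2.3: the direct product of many independent copies of the weak OWF. The function `weak_f` below is an arbitrary stand-in and is certainly not one-way; only the structure of the construction is illustrated.

```python
# Toy sketch of the direct-product amplification of Sec 2.3:
# given a weakly one-way f, define g(x_1,...,x_t) = (f(x_1),...,f(x_t)),
# where in the actual construction t = n * p(n) and f is hard to invert
# on a 1/p(n) fraction of the inputs.

def weak_f(x: bytes) -> bytes:
    # placeholder for a weakly one-way function (NOT actually one-way)
    return bytes(b ^ 0x5A for b in x)

def direct_product(xs: list[bytes]) -> list[bytes]:
    # g(x_1,...,x_t) = (f(x_1),...,f(x_t))
    return [weak_f(x) for x in xs]

t = 8  # stands in for t = n * p(n)
inputs = [bytes([i]) * 4 for i in range(t)]
image = direct_product(inputs)
assert len(image) == t
```

The reducibility argument then shows that any algorithm inverting `direct_product` with noticeable probability can be used (by planting its single challenge in a random coordinate) to invert `weak_f` on all but a $1/p(n)$ fraction of inputs, contradicting weak one-wayness.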

CORE MATERIAL (OBLIGATORY READING). In addition to Sec 2.3, read also the two short descriptive sections (i.e., 2.2.4 and 2.4.2).

ADD'L MATERIAL (OPTIONAL/RECOMMENDED):

EXERCISES

  1. Prove that if $f$ is a One-Way Function (OWF), then the range of $f$ cannot have polynomial size (i.e., it cannot be that $|\{f(x):x\in\{0,1\}^n\}|=\poly(n)$).
  2. Prove that if $f$ is a one-way *permutation* (OWP), then $f^2$ defined as $f^2(x)=f(f(x))$ is an OWP.
  3. In contrast to (2), assuming the existence of OWF, prove that there exists an OWF $f$ such that $f^2$ is easy to invert (i.e., $f^2$ is *not* an OWF).
    If you cannot do it, look at the following hint.
    Hint: Assuming the existence of OWF, consider the OWF $f$ defined by $f(x,y)=(1^{|y|},f'(x))$, where $|x|=|y|$ (and $f'$ is some length-preserving OWF).

OPTIONAL EXERCISES


PLAN FOR THE 3RD MEETING: HARD-CORE PREDICATES

Topic: Hard-core predicates

Some of you were annoyed/disappointed by the discovery that a one-way function (OWF) may not hide some of its bits. Indeed, if $f$ is a OWF, then so is $f'(x,r)=(f(x),r)$, where $|r|=|x|$ (or actually $r$ can be polynomially longer than $x$). See also a more dramatic example in the suggested exercise (Nr 24).

So the question is whether we can have an OWF with a "hidden bit"; first, we will need to define such a bit (called a hard-core predicate), and then we shall show that any OWF can be transformed into one that has such a bit. Again, the transformation is easy (which is good): Given $f$, consider $f'(x,r)=(f(x),r)$, where $|r|=|x|$, and the bit will be $b(x,r) = \sum_i x_ir_i \mod 2$ (the inner-product mod 2 of $x$ and $r$).
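For concreteness, here is a minimal sketch (in Python) of this construction; the function `f` below is an arbitrary stand-in (certainly not one-way), since only the definitions of $f'$ and $b$ are being illustrated.

```python
# Sketch of the Goldreich-Levin construction: given any OWF f, define
# f'(x,r) = (f(x), r) and the predicate b(x,r) = <x,r> mod 2.

def inner_product_mod2(x: list[int], r: list[int]) -> int:
    # b(x,r) = sum_i x_i * r_i mod 2
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

def f(x: list[int]) -> list[int]:
    # placeholder for an arbitrary OWF (NOT actually one-way)
    return x[::-1]

def f_prime(x: list[int], r: list[int]) -> tuple:
    # f'(x,r) = (f(x), r), with |r| = |x|
    assert len(x) == len(r)
    return (f(x), r)

x, r = [1, 0, 1, 1], [1, 1, 0, 1]
print(inner_product_mod2(x, r))  # 1*1 + 0*1 + 1*0 + 1*1 = 2, i.e., 0 mod 2
```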

But the issue is to prove that $b$ is a hard-core (i.e., that it is totally hidden by $f$). We'll spend most of the time on this. Indeed, it will be a reducibility argument (i.e., assuming, towards a contradiction, the existence of a bad algorithm $B'$). The argument proceeds in two steps.

  1. An (easy) averaging argument that "reduces" the case of $\Exp_x[\prob_r[B'(f(x),r) = b(x,r)]] \geq 0.5+\e$ to the case of (at least an $\e/2$ fraction of) $x$'s that satisfy $\prob_r[B_x(r) = b(x,r)] \geq 0.5+\e/2$, where $B_x(r) = B'(f(x),r)$ (from the contradiction hypothesis).
  2. The core of the proof is handling such $x$'s. We shall show that for each such $x$, using $B'$, we can find a small list of candidates that contains it (and then we find $x$ itself by applying $f$ to all candidates).
    First, we go to LaLa Land (a dream): suppose that $\prob_r[B_x(r) = b(x,r)] \geq 0.75+\e$, and show how to find $x$ in this case. Then, we wake up (which gets us to the actual proof)...
(A coding-theoretic perspective: The LaLa Land dream corresponds to unique decoding of the Hadamard code, where the sequence $(B_x(r):r\in\{0,1\}^n)$ is a corrupted version of the encoding of $x$ under the Hadamard code. The actual proof corresponds to list decoding of the Hadamard code.)
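For the record, the LaLa Land step amounts to the following standard unique-decoding calculation (with $e^i$ denoting the $i$-th unit vector):

```latex
By linearity, $b(x,r) \oplus b(x,r\oplus e^i) = b(x,e^i) = x_i$
for every $r$. Now suppose $\prob_r[B_x(r)=b(x,r)] \geq 0.75+\e$.
For a uniformly chosen $r$, both $r$ and $r\oplus e^i$ are
uniformly distributed (though not independent), so by a union bound
\[
  \prob_r\left[\, B_x(r)\oplus B_x(r\oplus e^i) = x_i \,\right]
  \;\geq\; 1 - 2\cdot(0.25-\e) \;=\; 0.5+2\e .
\]
Taking a majority vote over $O(n/\e^2)$ independent choices of $r$
recovers each bit $x_i$ with overwhelming probability.
```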

READING ASSIGNMENT -- CORE MATERIAL (OBLIGATORY)

ADD'L MATERIAL (OPTIONAL/RECOMMENDED):
The following is intended only for those very interested in the material. On the other hand, it may be good to be aware of the facts and glance at the proof ideas.

The concern of Sec 2.6 is the actual level of security obtained by various constructions and reducibility arguments. E.g., compare the security of "weak OWFs imply strong OWFs" to that of "OWFs imply OWFs coupled with hard-core predicates".

OPTIONAL EXERCISES (NOT FOR SUBMISSION)


PLAN FOR THE 4TH MEETING: COMPUTATIONAL INDISTINGUISHABILITY

Topic: computational indistinguishability (i.e., the main aspect of pseudorandomness)

Here, we will get closer to cryptographic applications with pseudorandom generators (PRGs). While in TOC (Theory of Computation) a PRG may mean various things (i.e., it is actually a general paradigm or family name), in the context of cryptography it means an efficient deterministic algorithm that stretches short random seeds into longer sequences that look random to any feasible observer (i.e., *any* probabilistic polynomial-time machine).

We shall thus focus on the notion of computational indistinguishability: its very definition, its relation to statistical indistinguishability, and its preservation under repeated sampling.

When presenting the hybrid method for the first time, I usually voice the algorithm's complaint, which says "But I was designed to distinguish X from Y, (so why the hell do you invoke me on these weird hybrids?)"
The answer, of course, is that horses do not speak Hebrew, algorithms don't speak and complain at all, and if they do -- we just answer "Shut up and say what you would do on these hybrids" (and note that if the algorithm distinguishes the two extreme hybrids, then it must behave differently on some pair of neighboring hybrids, even though it was "not designed for" them...)
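In symbols, the hybrid method boils down to a telescoping sum (a standard sketch, in my notation):

```latex
For $0 \leq i \leq t$, define the hybrid
$H^i = (Y^{(1)},\ldots,Y^{(i)},X^{(i+1)},\ldots,X^{(t)})$,
so that $H^0 = (X^{(1)},\ldots,X^{(t)})$ and
$H^t = (Y^{(1)},\ldots,Y^{(t)})$. For any distinguisher $D$,
\[
  \left| \Pr[D(H^0)=1] - \Pr[D(H^t)=1] \right|
  \;\leq\; \sum_{i=1}^{t} \left| \Pr[D(H^{i-1})=1] - \Pr[D(H^i)=1] \right| ,
\]
so if $D$ distinguishes the extreme hybrids with advantage $\delta$,
then it distinguishes some pair of neighboring hybrids with advantage
at least $\delta/t$; since $H^{i-1}$ and $H^i$ differ only in the
$i$-th sample, this yields a distinguisher for a single X-vs-Y sample
(with $D$ generating the other samples by itself).
```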

READING ASSIGNMENT -- CORE MATERIAL (OBLIGATORY)

ADD'L MATERIAL (OPTIONAL/RECOMMENDED): I repeat the recommendation of last time. It is intended only for those very interested in the material. It is good to be aware of the issues and facts covered here, and maybe also glance at the proof ideas.

OPTIONAL EXERCISES (NOT FOR SUBMISSION) (All exercises appear in Sec 3.8.4 of the book)


PLAN FOR THE 5TH MEETING: PSEUDORANDOM GENERATORS (PART 1)

Topic: On the definition of pseudorandom generators (PRGs).

OBLIGATORY MATERIAL

In Sec 3.3.2 you'll see another use of the hybrid method, but this time things are less "transparent"; an alternative proof (also via a hybrid argument) can be seen in Exercise 19 below (do take a look at it, even if you avoid actually working out all the details!).

ADDITIONAL/OPTIONAL MATERIAL (in order of priority)

EXERCISES


PLAN FOR THE 6TH MEETING: PSEUDORANDOM GENERATORS (PART 2)

Topics: A simple construction of PRG based on OWP, and pseudorandom functions (PRF)

This meeting has two parts. The first part shows a simple construction of a PRG, whereas the second part deals with pseudorandom functions (PRFs).
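As a preview of the first part, here is a toy sketch (in Python) of the iteration pattern of the simple PRG construction from a one-way permutation $f$ with hard-core predicate $b$; the stand-ins for $f$ and $b$ below are of course neither one-way nor hard-core.

```python
# Toy sketch of the simple PRG construction from a OWP f with
# hard-core predicate b:
#   G(s) = b(s), b(f(s)), b(f(f(s))), ..., b(f^{k-1}(s))

N = 2**16

def f(s: int) -> int:
    # placeholder permutation of Z_N (NOT one-way);
    # invertible since gcd(3, N) = 1, hence a permutation
    return (s * 3 + 1) % N

def b(s: int) -> int:
    # placeholder predicate (NOT hard-core)
    return s & 1

def prg(seed: int, k: int) -> list[int]:
    out, s = [], seed
    for _ in range(k):
        out.append(b(s))  # output the predicate of the current value
        s = f(s)          # then step the permutation
    return out

bits = prg(12345, 32)
assert len(bits) == 32 and set(bits) <= {0, 1}
```

Pseudorandomness of the output follows, via a hybrid argument, from the hard-core property of $b$.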

In retrospect, I find the treatment at the beginning of Sec 3.6.1 (i.e., Def 3.6.3) unnecessarily complicated, and prefer the alternative presentation (as in Def 3.6.4). (Optional exercise: Show that nothing is lost by this; there is a hint in the text, but you may want to work out the details.)

OBLIGATORY MATERIAL

ADDITIONAL/OPTIONAL MATERIAL

OPTIONAL EXERCISES [All exercises appear at the end of chapter 3]

In relation to Exer 28, note a correction in the book's webpage: The definitions in Sec 3.6.1 should be corrected to require that the function $\ell$ is upper bounded by a polynomial (and is easy to compute).

ADDITIONAL EXERCISES FOR CHAPTER 2 (which can be postponed to next week, because next week's load is lighter).



PLAN FOR THE 7TH MEETING: INTERACTIVE PROOF SYSTEMS

Topic: Interactive proof systems (the stage for defining zero-knowledge proofs).

We are done with pseudorandom generators and move on to zero-knowledge proofs.

We start with the definition of interactive proofs, which is needed for the non-triviality of zero-knowledge proofs. Here we stress the notion of efficient verification via randomized interaction, and the notions of completeness and soundness of any proof system (and this type in particular).

READING ASSIGNMENT


PLAN FOR THE 8TH MEETING: ZERO-KNOWLEDGE PROOF SYSTEMS (PART 1)

Topics: Defining zero-knowledge and an example

This reading discusses at length the definition of zero-knowledge (ZK), and then presents a perfect zero-knowledge proof system for Graph Isomorphism, which we don't know to be in BPP. We note that the "simulation paradigm", which underlies the definition of ZK, is a special case of the real-vs-ideal paradigm.

READING ASSIGNMENT

If you feel like it -- take a look at the coming sections 4.3.3-4.3.4, which present "auxiliary input ZK" and use it in sequential composition. It will be the next reading assignment.

EXERCISES. By "(C)ZK" I mean computational zero-knowledge. (Recall, we often omit the "C"; hence the parentheses.)

HINT: In both cases, use the simulator as a basis for a decision procedure, but be warned that you must examine the output provided by the prover (and rely on the completeness and soundness of the proof system in your analysis).
EXTRA HINT/GUIDANCE: You need to specify the "cheating verifier" $V^*$; in fact, a semi-honest verifier will do.

A Q-and-A (from the 2013/14 class): What if the def of ZK is extended to require similarity of the distributions for all inputs (rather than only for inputs in the set $S$)? In this case we reach triviality (i.e., the set is in BPP). This is the case because the honest prover implicitly communicates whether the input is in $S$, since the honest verifier rules correctly on this bit. (Indeed, $(P,V)(x)$ is a 0/1 random variable that is 1 w.p. greater than 2/3 if $x\in S$ and 0 w.p. greater than 2/3 otherwise.) Hence, a simulator for the honest $V$ that works for all inputs (rather than only for inputs in $S$) yields a decision procedure for $S$.


PLAN FOR THE 9TH MEETING: ZERO-KNOWLEDGE PROOF SYSTEMS (PART 2)

Topics: auxiliary-input ZK and its closure under sequential composition.

A Q-n-A (from the 2013/14 class): Looking at the book, I noticed that auxiliary-input ZK (aiZK) is defined such that the potential distinguisher is only given $x$ and $z$ (not $x$, $z$, and $y$). This is actually more natural if we think of the distinguisher as representing a part of the verifier, but indeed the alternative definition is more streamlined with the notion of computational indistinguishability of ensembles. All simulators I know (for aiZK) satisfy the seemingly stronger definition, but I wonder whether the defs are equivalent.
Proposed project: Can you prove or disprove the equivalence? I have no clue how difficult this may be, so please contact me before you pass the "threshold" of a few hours of thinking about it.

READING ASSIGNMENT

Another Q-n-A (from the 2013/14 class): In addition to answering lots of interesting questions, I explained why I'm not concerned about the gap between ZK and aux-input ZK: basically, it is because all the proofs we'll see are via black-box simulators, which in turn imply aux-input ZK. (Note: The definition of black-box simulators refers to any cheating verifier strategy that can be implemented by (possibly non-uniform) poly-size circuits.)


PLAN FOR THE 10TH MEETING: ZERO-KNOWLEDGE PROOF SYSTEMS (PART 3)

Topic: zero-knowledge proofs for any NP-statement

Another Q-n-A (from the 2013/14 class): A question by Roei Tell made me realize that the treatment in the book neglects to clarify (say in Thm 4.4.11) that the "ZKIP for NP" is actually for any NP-witness relation, and not merely for any NP-set.

You might have seen the ``boxed'' implementation of "ZKIP for NP" (of Sec 4.4.2.1), but the actual contents of this meeting is a digital implementation. For this we need to define and construct (based on OWF) a commitment scheme, and then use it instead of the boxes in the ZKIP.

READING ASSIGNMENT

(Optional:) You may also read Sec 4.4.4 (re "second-level considerations"), but this is *not* mandatory.


PLAN FOR THE 11TH MEETING: MORE ABOUT ZK -- OPTIONAL

Topic: advanced topics regarding zero-knowledge

So far, we covered all the mandatory material for this semester. If you have energy and time left, you may also cover the following advanced material.

When I was actually teaching this class (i.e., last in 2013/14), I did have this (end-of-semester) meeting in which I suggested to

  1. Discuss any questions the students may have re the basic material;
  2. Present high-level overviews of suggested reading;
  3. Discuss any issues the students may have wrt Weizmann, graduate school, research, and life....

RECOMMENDED READING (NOT MANDATORY):
Among the advanced Sections 4.5-4.11, I would give the highest priority to the following:

Note, however, that these parts contain two unfortunate errors.
RE SECTION 4.6 (WI)
I have to withdraw two claims regarding *strong* witness indistinguishable
proofs as defined in Definition 4.6.2.
Specifically, in general, *strong* witness indistinguishability is not
closed under parallel composition
(and so Lemma 4.6.7 is wrong). Consequently, contrary to what is stated
in Theorem 4.6.8,
we do not know whether there exist constant-round public-coin proofs with
negligible error that
are *strong* witness indistinguishable for languages out of BPP.
We stress that the flaws pointed out here only refer to *strong* witness
indistinguishability and not to (regular) witness indistinguishability.
That is, as stated in Lemma 4.6.6, (regular) witness indistinguishability
is closed under parallel composition and thus the part of Theorem 4.6.8
that refers to regular witness indistinguishability is valid (i.e.,
providing constant-round public-coin proofs with negligible error that are
witness indistinguishable for NP).

RE SECTION 4.10 (NIZK)
Remark 4.10.6 as well as Theorems 4.10.10, 4.10.14 and 4.10.16, seem to
require trapdoor permutations in which the permutation's domain coincides
with the set of all strings of certain length.
This special case of Definition 2.4.5 can be implemented by modifying the
RSA or the Factoring Trapdoors
(cf. Canetti et al (in STOC'96) and [5]). (For further detail on Remark
4.10.6, the reader is referred to [23].)
Unfortunately, the correction that appeared in Appendix C of Volume 2
is also incorrect. For details see
http://www.wisdom.weizmann.ac.il/~oded/PSBookFrag/nizk-tdp.ps.
For a discussion of *enhanced trapdoor permutations* 
see my paper with Ron Rothblum titled Enhancements of Trapdoor
Permutations (at http://www.wisdom.weizmann.ac.il/~oded/p_tdp.html).
For a few additional errors/corrections, see the list at the bottom of the web-page of Volume 1.


[This was originally the beginning of the 2nd semester]


PLAN FOR THE 12TH MEETING: ENCRYPTION SCHEMES (PART 1)

Topic: Defining secure encryption schemes.

In this semester and in Vol 2, we move to "actual" cryptography. We shall cover encryption, signatures, and general secure MPC (multi-party computation).

Starting with encryption, we discuss private-key and public-key schemes, which differ in the security definition (i.e., whether or not the adversary knows the encryption-key), which in turn affects the actual way of using the scheme. The basic syntax is of three probabilistic polynomial-time algorithms: key-generation, encryption, and decryption.

The focus is on the definition of security. We consider two natural definitions: (1) "semantic security", and (2) "security as indistinguishability of encryptions"; Def (1) is more intuitive, whereas Def (2) is more useful in some applications. We show that the definitions are in fact equivalent. For simplicity, implicitly in (1) and explicitly in (2), the definitional treatment is in terms of non-uniform complexity.

The basic treatment concerns the encryption of a single message, but it is extended to multiple messages (encrypted using the same key). For public-key schemes, this is equivalent to the case of a single message, but that is not the case for private-key schemes.

READING ASSIGNMENT

Actually, this assignment has a bit more material than usual.

OPTIONAL READING


PLAN FOR THE 13TH MEETING: ENCRYPTION SCHEMES (PART 2)

Topic: Constructing secure encryption schemes.

Having discussed the definition of secure encryption at great length, we now turn to the construction of such schemes.

The basic results are that (1) secure private-key schemes can be constructed based on any (non-uniformly hard) one-way function, whereas (2) secure public-key schemes can be constructed based on any (non-uniformly hard) collection of trapdoor permutations.

The first result is proved by using a pseudorandom function (PRF) to implement a "one-time pad" (where a counter that keeps track of the position is replaced by a randomly selected value). Note the methodology of proving security, which is based on a mental experiment in which the PRF is replaced by (oracle access to) a truly random function.
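The following is a minimal sketch (in Python) of this scheme; HMAC-SHA256 from the standard library is used as a stand-in for the PRF, for illustration only (the construction itself is generic in the PRF).

```python
# Sketch of the private-key scheme: encrypt m by XORing it with
# PRF_k(r) for a fresh random r (an "on-the-fly one-time pad").
import hmac, hashlib, os

def prf(key: bytes, r: bytes, nbytes: int) -> bytes:
    # counter-mode expansion of HMAC-SHA256, used as a stand-in PRF
    out, i = b"", 0
    while len(out) < nbytes:
        out += hmac.new(key, r + i.to_bytes(4, "big"), hashlib.sha256).digest()
        i += 1
    return out[:nbytes]

def enc(key: bytes, m: bytes) -> tuple[bytes, bytes]:
    r = os.urandom(16)                      # fresh randomness per message
    pad = prf(key, r, len(m))
    return (r, bytes(a ^ b for a, b in zip(m, pad)))

def dec(key: bytes, ct: tuple[bytes, bytes]) -> bytes:
    r, c = ct
    pad = prf(key, r, len(c))               # recompute the same pad
    return bytes(a ^ b for a, b in zip(c, pad))

k = os.urandom(32)
assert dec(k, enc(k, b"attack at dawn")) == b"attack at dawn"
```

The security proof proceeds exactly by the mental experiment above: replace `prf` by a truly random function, in which case each `pad` is a fresh one-time pad (up to collisions in `r`).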

The second result is proved using a hard-core predicate of such a permutation for encrypting a single bit. A less wasteful version is also presented. The presentation uses a notion of "block cipher" that is different from the standard one (see Footnote 14).
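Here is a toy sketch (in Python) of the single-bit version of this scheme, i.e., $E(\sigma)=(f(x),b(x)\oplus\sigma)$ for a random $x$. For illustration, textbook RSA with absurdly small parameters stands in for the trapdoor permutation, and the least-significant bit for the hard-core predicate; this is a toy, not a secure instantiation.

```python
# Toy sketch of single-bit public-key encryption via a hard-core
# predicate of a trapdoor permutation.
import random

p, q, e = 1009, 1013, 17           # toy parameters (far too small!)
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # the trapdoor

def f(x: int) -> int:
    return pow(x, e, n)            # public permutation of Z_n

def f_inv(y: int) -> int:
    return pow(y, d, n)            # inversion requires the trapdoor d

def b(x: int) -> int:
    return x & 1                   # stand-in hard-core predicate (lsb)

def enc_bit(sigma: int) -> tuple[int, int]:
    x = random.randrange(1, n)
    return (f(x), b(x) ^ sigma)    # E(sigma) = (f(x), b(x) XOR sigma)

def dec_bit(ct: tuple[int, int]) -> int:
    y, masked = ct
    return b(f_inv(y)) ^ masked    # recover x, then unmask

for sigma in (0, 1):
    assert dec_bit(enc_bit(sigma)) == sigma
```

Security of the bit $\sigma$ reduces directly to the hard-core property of $b$: predicting the mask $b(x)$ from $f(x)$ is exactly what a hard-core predicate forbids.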

READING ASSIGNMENT

OPTIONAL READING

EXERCISES (at the end of the chapter)

OPTIONAL EXERCISES


PLAN FOR THE 14TH MEETING: ENCRYPTION SCHEMES (PART 3)

Topic: Security (of encryption) wrt active attacks

Being so happy with the notion and results regarding secure encryption schemes, we note that these schemes are secure only against an adversary that merely eavesdrops on the communication, rather than one that tries to influence it in any way. Turning to such influences, we consider the following types of manipulation.

  1. Key-dependent passive attacks in which the adversary may affect the choice of "tested" plaintexts so that they depend on the public-key. Needless to say, this refers only to public-key encryption schemes, and by the "tested" plaintext we mean the one for which security is claimed.
    (Note: here we refer to dependence on the encryption-key, whereas recent studies of "key-dependent encryption" and/or "circular security" refer to encrypting information regarding the decryption key!)
  2. Chosen plaintext attacks in which the adversary $A$ may ask the user $U$ to encrypt additional plaintexts (that $A$ hands to $U$) before and/or after the tested plaintext is presented in its encrypted form. In other words, the adversary has oracle access to the encryption operator (i.e., the encryption algorithm coupled with the same encryption-key that is used for the encryption of the test plaintext). Of course, this may add power only in the private-key case. Note: We refer to a dynamic process in which $A$ first queries the encryption oracle, then affects the choice of the plaintext (that is given to it in encrypted form), and then queries the oracle again.
  3. Chosen ciphertext attacks (CCA) in which the adversary $A$ may ask the user $U$ to decrypt any ciphertext of its choice before and/or after the tested plaintext is presented in its encrypted form. Of course, $A$ is not allowed to ask for the decryption of the encrypted plaintext. In other words, the adversary has oracle access to the decryption operator (i.e., the decryption algorithm coupled with the same decryption-key that was generated along with the encryption-key that is used for the encryption of the test plaintext).

READING ASSIGNMENT: Parts of Sec 5.4 "Beyond eavesdropping security"; please read

HIGHLY RECOMMENDED (but not obligatory)

NOTE: Sec 5.4.2 (key-dependent passive attacks) and Sec 5.4.3 (chosen plaintext attacks) are quite easy. In contrast, the construction of public-key encryption secure under CCA (undertaken in Sec 5.4.4.4) is quite complex.


PLAN FOR THE 15TH MEETING: SIGNATURE SCHEMES (PART 1)

Topic: Unforgeable signature schemes - definitions and preparation for constructions

We start a new major topic; that of signature schemes, both in the private-key incarnation (usually called MAC) and in the public-key one (usually called "signatures"). We use a unified framework in which both appear as special cases, although their actual usage in applications is very different.

After presenting the basic definition, we turn to a seemingly odd restriction, called length-restricted signature schemes, which is motivated by being a useful step towards a full-fledged scheme. We describe one transformation of length-restricted schemes into full-fledged ones, but will actually use another (to be presented next time). Still, the current construction, and more so its analysis, is very instructive.

READING ASSIGNMENT

RECOMMENDED (but not obligatory)


PLAN FOR THE 16TH MEETING: SIGNATURE SCHEMES (PART 2)

Topics: Collision-resistant hashing and constructing unforgeable MACs

We shall cover three important topics:

  1. An alternative transformation of length-restricted signature schemes into full-fledged ones, using (collision-free/resistant) hashing (CFH). The advantage of this transformation is that it produces signatures of bounded length (i.e., independent of the length of the document).
  2. Three constructions of CFH: The first construction is based on claw-free pairs of permutations, which in turn can be constructed based on either factoring or DLP. The other two constructions convert length-restricted CFH into full-fledged CFH, where the second construction (i.e., tree-hashing) is of special interest.
  3. Constructing MACs based on OWF (or rather PRFs).
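The tree-hashing mentioned in topic 2 can be sketched (in Python) as follows; truncated SHA-256 stands in for the basic length-restricted CFH, for illustration only.

```python
# Sketch of tree-hashing: a length-restricted CFH h mapping 2m bits
# to m bits is extended to long inputs by placing the blocks at the
# leaves of a binary tree and h-compressing pairs level by level.
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    # stand-in length-restricted CFH: 2*16 bytes -> 16 bytes
    return hashlib.sha256(left + right).digest()[:16]

def tree_hash(blocks: list[bytes]) -> bytes:
    level = list(blocks)
    # pad the number of leaves up to a power of two
    while len(level) & (len(level) - 1):
        level.append(b"\x00" * 16)
    # compress adjacent pairs until a single root remains
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

msg_blocks = [i.to_bytes(16, "big") for i in range(5)]
digest = tree_hash(msg_blocks)
assert len(digest) == 16
```

Note that a real construction must also incorporate the input length (e.g., by hashing it into the root), so that inputs of different lengths cannot collide via the padding.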

READING ASSIGNMENT
The corresponding reading material for these three topics is:

  1. Sec 6.2.2.2 (the "hash and sign" paradigm).
  2. Sec 6.2.3 Constructing CFH:
  3. Sec 6.3.1.1 and Sec 6.3.1.2; the first suffices to show that OWF implies a MAC, but the second is more appealing for actual use.

RECOMMENDED EXERCISES


PLAN FOR THE 17TH MEETING: SIGNATURE SCHEMES (PART 3)

Topic: Constructing unforgeable signature schemes -- basic paradigms.

This is the first part of two meeting devoted to the construction of secure signature schemes (in the public-key model). In this part we shall cover three important paradigms.

The first paradigm is the notion of one-time signature schemes (OTS), which are signature schemes that are secure as long as one uses them (with a fresh signing-key) at most once. Although very restricted, it will turn out that they imply general signature schemes. We shall also discuss length-restricted OTS, see how to construct them based on any OWF, and get general OTS via the hash-and-sign paradigm.
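A length-restricted OTS in the style of Lamport's scheme can be sketched (in Python) as follows; SHA-256 stands in for the OWF, for illustration only.

```python
# Sketch of a length-restricted one-time signature (Lamport-style):
# the signing key is 2n random strings, the verification key their
# images under a OWF; a signature reveals one preimage per bit.
import os, hashlib

def owf(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()  # stand-in for a OWF

def keygen(n: int):
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(n)]
    vk = [[owf(s0), owf(s1)] for s0, s1 in sk]
    return sk, vk

def sign(sk, msg_bits: list[int]) -> list[bytes]:
    # for each position i, reveal the preimage matching the i-th bit
    return [sk[i][bit] for i, bit in enumerate(msg_bits)]

def verify(vk, msg_bits: list[int], sig: list[bytes]) -> bool:
    return all(owf(sig[i]) == vk[i][bit] for i, bit in enumerate(msg_bits))

sk, vk = keygen(4)
bits = [1, 0, 1, 1]
assert verify(vk, bits, sign(sk, bits))
```

Security is indeed one-time: signatures on two different messages reveal both preimages at some position, which allows forgery; hence the need for the "refreshing" paradigm discussed next.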

The second paradigm is "refreshing" and the third is "authentication trees". The bulk of Sec 6.4.2 is showing how to combine "refreshed" instances of a OTS with an authentication tree in order to obtain a general signature scheme. This scheme is presented first in the memory-dependent model, but we will get rid of the memory/state later (in the next meeting).

READING ASSIGNMENT

OPTIONAL READING:

It makes sense to read all of Sec 6.4.2 now, but I am suggesting a stopping point for those who find these 12 pages (of Sec 6.4.2) too much reading.

EXERCISES


PLAN FOR THE 18TH MEETING: SIGNATURE SCHEMES (PART 4)

Topic: Constructing unforgeable signature schemes -- using Universal One-Way Hash Functions

SUGGESTED DISCUSSION: Digest the memory-based scheme of Construction 6.4.14, focusing on the function of the authentication tree. Then note how this can be made memory-less by replacing the storage of random values (used to generate key pairs) by the use of a PRF (applied to the names of nodes in the auth-tree).

RECOMMENDED READING (Universal One-Way Hashing Function)

ADDITIONAL READING


PLAN FOR THE 19TH MEETING: NON-BLACK-BOX TECHNIQUES -- OPTIONAL

Topic: Boaz Barak's non-black-box technique applied to obtain constant-round, public-coin zero-knowledge arguments for NP.

Note: This is advanced and supplementary material, which actually fits right after Meeting 11.

All constructions we have seen in Chapter 4 are "black-box" in the sense that the simulator uses the verifier program as a black-box. Indeed, it is hard to imagine what else the simulator can do with the verifier program, given that reverse engineering it seems infeasible. Still, the simulator that underlies the technique that will be presented here does use the verifier program in a non-black-box manner (i.e., it uses it as a string; specifically, as an NP-witness for the authenticity of the verifier's next message). This yields constructions of protocols that cannot be proven ZK in a black-box manner, provided NP is not in BPP.

You have to decide if you are going to try to read my sketchy description of the construction. If you want to do so, then you will need the following prerequisites.

REQUIRED READING in preparation for the actual contents of the meeting

  1. The def of WI (Sec 4.6.1.1), the fact it is closed under parallel composition (Sec 4.6.2) and the fact that it is implied by ZK (see Sec 4.6.3.1).
    When reading Sec 4.6.1.1, do *not* skip the definition of strong-WI, and note that ZK implies strong-WI, but be warned that Lem 4.6.7 (in Sec 4.6.3) is false.
    Optional: See Apdx C.3 (in Volume 2) for an explanation of why Lem 4.6.7 is false (see C.3.1), and a patch that suffices for our purposes (see C.3.2).
  2. Proofs of Knowledge: Their def as presented in Sec 4.7.1.
    (You might have read it already...)
  3. Computationally-sound proofs (aka Arguments): Their def (Sec 4.8.1) and succinct communication (Sec 4.8.4).
    (The latter uses PCPs, but in a "black-box manner"...)

OPTIONAL READING (only needed for the very last paragraph (i.e., the stronger result))

Now, following is my sketch of the non-black-box technique. A wider perspective (as well as a less detailed sketch) is provided in Apdx C.5.2 of Volume 2.


ON BOAZ BARAK'S NON-BLACK-BOX TECHNIQUE
AS APPLIED TO OBTAIN CONSTANT-ROUND, PUBLIC-COIN ZK ARGUMENTS FOR NP

We will present a constant-round public-coin zero-knowledge
argument system for any set in NP, where the computational
soundness error is negligible. It is known that proving
such a result via black-box simulation is highly unlikely,
because constant-round public-coin *black-box* zero-knowledge
argument systems exist only for sets in BPP.
(Recall that argument systems were defined in Sec 4.8.1.) 

While the ``standard'' definition of ZK refers to any PPT verifier
that uses an auxiliary input of polynomial length, we shall warm up
by proving ZK only wrt the basic definition (without aux-input).
Actually, we shall start by establishing ZK wrt verifiers
that use aux-input of sub-linear length.

Recall that a pair of interactive PPT machines $(P,V)$
is an argument system for $S$ (typically in $\NP$) if
* completeness: For every $x\in S$ there exists $w$ (of $poly(|x|)$-length)
  such that $\prob[(P(w),V)(x)=1] = 1$.
* (computational) soundness: For every $x\not\in S$
  and every family of $poly(n)$-size circuits $\{P_n\}$
  it holds that $\prob[(P_{|x|},V)(x)=1]$ is negligible in $|x|$.
Note that the soundness error is negligible.
The def extends to $\Ntime(t)$ by replacing poly with $t$
(in the completeness condition, and *maybe* poly by poly(t) in soundness).
(In general, I'll try to mention the soundness error in each case.)

Throughout, we shall use witness-indistinguishable (WI) argument systems
(as defined in Sec 4.6.1.1).
Recall that these are interactive arguments $(P,V)$ for NP-sets $S$
such that for every $x\in S$ and corresponding NP-witnesses $w,w'\in R(x)$
and every poly-size $V^*$, the distributions $(P(w),V^*)(x)$
and $(P(w'),V^*)(x)$ are computationally indistinguishable.
Note that ZK implies WI, but not necessarily the other way around.
Also note that WI is preserved under parallel executions.
(Again, the notion of WI extends to NTIME(t).)


The warm-up: Zero-Knowledge wrt auxiliary-input of sub-linear length.
==============================================================

[Overview: The basic strategy.] 
Actually, on common input $x$, we shall consider (deterministic) verifiers
having aux-input of length at most $|x|/2$.
(Note that we can also handle randomized verifiers with aux-input
of length less than $|x|/3$, by first reducing their randomness complexity
to $|x|/6$ (by using a PRG).)
The protocol proceeds as follows, on input $x\in\bitset^n$:
1. $V$ sends the prover a random $n$-bit string, denoted $r$.
2. The prover uses a WI proof to prove that *either* $x\in S$
   *or* there exists a program $\Pi$ of length less than $n/2$
   such that $\Pi(x)=r$ within poly(n) steps.
The idea is that *in the real interaction* the 2nd condition is 
unlikely to hold (because $\prob_r[\exists \Pi ~{\rm s.t.}~ \Pi(x)=r] \leq 2^{-n/2}$), 
which means that soundness is maintained,
and yet the 2nd condition assists the simulation: 
Given a cheating verifier $V^*$ of length less than $n/2$,
the simulator can use it as a program $\Pi$ in the 2nd condition
(of the WI system invoked in Step 2).
The WI feature implies that this simulation (of the WI proof)
is indistinguishable from the real interaction
(in which an NP-witness for $x\in S$ is used).
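
For concreteness, the probability bound underlying the soundness claim is a simple union bound over programs:

```latex
\Pr_r\bigl[\exists \Pi,\ |\Pi| < n/2,\ \Pi(x)=r\bigr]
  \;\le\; \sum_{\Pi\,:\,|\Pi| < n/2} \Pr_r\bigl[\Pi(x)=r\bigr]
  \;\le\; 2^{n/2}\cdot 2^{-n}
  \;=\; 2^{-n/2},
```

since there are fewer than $2^{n/2}$ programs of length less than $n/2$, each of which determines a single string $\Pi(x)$, whereas $r$ is uniform over $\bitset^n$.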

[An annoying but crucial detail: What is the running time of $V^*$?]
Note that the 2nd condition in Step 2 asserts that the program $\Pi$,
which we wish to set to $V^*$, runs in "polynomial time". 
But what is this polynomial? We need to fix it when we design
our proof system, whereas $V^*$ is "selected" afterwards. 
So what polynomial should we use?
(Note: it must be bigger than any poly bound for $V^*$.)
Hence, we cannot use any fixed polynomial; instead, 
we pick a super-poly function $t$ (e.g., $t(n)=n^{\log n}$).
Hence, the 2nd condition (in Step 2) is revised to read
   there exists a program $\Pi$ of length less than $n/2$
   such that $\Pi(x)=r$ within $t(n)$ steps.
But then this is not an NP-assertion.
Still, it is an NTIME(t)-assertion.

So we need a WI argument for a set in NTIME(t), 
where (say) $t(n)=n^{\log n}$.

[Obtaining a WI argument for a set in NTIME(t), where $t(n)=n^{\log n}$.]
Firstly, note that the low-communication argument system for NP
(reviewed in Sec 4.8.4) generalizes to NTIME(t).
Specifically, assuming a CFH (= collision-resistant hashing)
that is resilient against poly(t)-size circuits, 
we obtain an argument system for NTIME(t).
(It can be made to have communication $n^\epsilon$, for any $\epsilon > 0$,
but we don't need that. All we care about is that verification is in poly-time.)
Furthermore, in this argument system, the complexity of the honest prover, 
on common input $x$, is polynomial in the time of the fastest ND computation
(i.e., computation with the ``best witness'') 
that leads to accepting the input $x$. (ND = non-deterministic)
(If $x$ is recognized in $t_x \leq t(|x|)$ steps, then we can construct
the corresponding PCP oracle in time polynomial in $t_x$.)
This instance-based/point-wise/local complexity measure is crucial here,
since it will mean that the simulator,
which runs this prover using the cheating $V^*$ as an NP-witness,
will spend time that is polynomial in the running-time of $V^*$.
(Note that this argument system is constant-round.)

So far we presented an argument system for NTIME(t),
whereas we need a WI one.
We first note that the above argument system is of the public-coin type.
Next, we use a generic technique for converting argument systems
(or interactive proof systems)
of the public-coin type with perfect completeness 
into zero-knowledge ones (and hence WI ones). 
Specifically, since the verifier's actions are oblivious of 
the prover's answers/msgs, we can just let the prover "encrypt" its msgs 
using a commitment scheme, and augment the "encrypted" interaction 
with a (ZK/WI) proof that the original verifier would have accepted 
the corresponding decrypted/de-committed transcript.
(That is, the NP-assertion being proved is that the sequence of
commitments and verifier-random-msgs $(c_1,r_1,...,c_t,r_t)$
corresponds to a transcript $(v_1,r_1,...,v_t,r_t)$ that
would have convinced the original verifier to accept the input,
where $v_i$ is the value committed to in $c_i$
(and the NP-witness also contains de-commitment information).)
If we use a ZK proof (and do not use non-BB techniques),
then we cannot hope to have a constant-round system with public-coins,
but it suffices to use a *strong*-WI proof (Def 4.6.2) with such features.

Ignoring the difference between WI and strong-WI, 
let me re-iterate where we stand.
We wish to construct a WI argument system for NTIME(t),
and we sketched a construction that 
starts with an argument system for NTIME(t) 
and employs a (strong)-WI subprotocol (for NP) that
transforms it into a WI argument system for NTIME(t).
The resulting protocol uses commitments to the prover messages
of the original argument system and a (strong-WI) subprotocol 
for proving that these commitments are to values that 
would have been accepted by the original verifier. 


[Another annoying but crucial detail: we need strong-WI rather than WI.]
Recall the definition of strong-WI (Def 4.6.2), and note (see more below) 
that we actually need the subprotocol to be strong-WI.
[Note that the subprotocol, which is used to prove that 
the "encrypted" transcript encodes an accepting one,
gets as input the said commitments/encryptions.] 
Specifically, we need a (constant-round, public-coin) *strong-WI* for NP
(with negligible error), where the *strong* version is needed 
because the input to this subprotocol (i.e., the sequence of commitments 
that corresponds to one "encrypted" execution of the plain argument system)
varies across executions: different "encrypted" executions of the argument system 
(on the same $x$) are most likely to yield different sequences of commitments 
(even when the committed values are equal,
let alone that the committed values may differ, because 
we consider executions of the plain system with different NP-witnesses). 
These commitments are computationally indistinguishable,
and using *strong*-WI implies that the transcripts 
(of the strong-WI subprotocols) are computationally indistinguishable.
Unfortunately, 
we do not have constant-round public-coin *strong-WI* systems for NP
with negligible error, but we do have such systems with constant error.
The gap can be bridged in any one of two ways: 
(1) by using a constant-error strong-WI as a subprotocol, 
and running the entire resulting (constant-error) WI protocol in parallel;
(2) by running parallel versions of the encrypted argument system
and applying a constant-error strong-WI to each, in parallel,
while relying on the fact that strong-WI is preserved in the case
of product distributions (on the sequence of inputs; see Apdx C.3.1).
That is, option (2) uses the fact that the sequence of commitments 
generated in the "encrypted" argument system forms a product distribution 
(i.e., the commitments are statistically independent).

[Stressing a subtle point: the role of perfect completeness.
The foregoing transformation (which uses "encrypted" prover-messages)
requires perfect completeness (in the original protocol), 
which we have in the current application. 
The point is that a cheating/adversary verifier may learn whether 
a certain (rare) sequence of verifier-randomness is convincing.
It may even be that the all-zero randomness is convincing for one
witness but not for another. This does not occur in case the protocol 
has perfect completeness; intuitively, for any randomness the adversary verifier 
chooses, it will see a sequence of hidden messages plus a ZK/WI proof/arg 
that this sequence encodes an accepting transcript...] 



The real thing: Zero-Knowledge wrt aux-input of arbitrary polynomial length.
==========================================================================
So far we have only handled cheating verifiers having aux-input of length
at most $|x|/2$ (on common input $x$). Note that this hypothesis was
essential to our analysis of soundness, which relies on the fact that
most $n$-bit strings cannot be generated by a program of length less than $n/2$.
Hence, we modify the protocol as follows, 
while using a strong (as above) CFH $H_n$
(which maps arbitrary bit strings to $n$-bit long bit strings)
and a commitment scheme $C$.

The protocol proceeds as follows, on input $x\in\bitset^n$:
0. $V$ sends the prover a random $h\in H_n$,
   and the prover responds with $c = C(0^n)$.
1. $V$ sends the prover a random $n$-bit string, denoted $r$.
2. The prover uses a WI **argument of knowledge** (AOK rather than POK)
   to prove that it **knows** *either* a witness for $x\in S$
   *or* a program $\Pi$ of length less than $t(n)$ and some $s$
   such that $\Pi(x,c)=r$ within $t(n)$ steps and $c=C(h(\Pi),s)$,
   where $s$ denotes randomness used in the commitment scheme $C$.
Note that the prover is required to prove its knowledge of $\Pi$
and not merely $\Pi$'s existence (which is almost a triviality (see Exer)).
The definition of an AOK/POK should be revised here so as to allow extraction 
in time that is polynomially related to the length of the witness
(which may be $t(n)$), and polynomial in the inverse of the acceptance
probability of the AOK.
Again, the idea is that in the real interaction the 2nd condition
is unlikely to hold (essentially because the prover commits to 
a hash-value $c$ before seeing $r$), 
and yet it assists the simulation (as before).
Details (on both issues) follow. 

EXER: Prove that if $h$ is regular (i.e., $h^{-1}(0^n)$
has approximately $2^{m-n}$ elements of length $m$),
then, for every $r$, there exists a $\Pi$ as in the 2nd condition.

Turning back to the foregoing protocol, we first establish the ZK feature. 
The simulator, which obtains the code of (a poly-size) $V^*$,
just emulates the actions of a related prover as follows:
In Step 0, rather than committing to $0^n$, it commits to $h(V^*)$.
In Step 2, it uses the WI-prover and proves knowledge of a witness
for the 2nd condition; that is, it runs this prover when using $V^*$
(and the randomness $s$) as a witness. 
By the instance-based complexity measure, it follows that 
this simulation is completed in time polynomial in the size of $V^*$.
The fact that this simulation is computationally indistinguishable
from the real interaction follows by the WI feature (and the hiding
property of $C$), but this is slightly less straightforward than it seems.

To clarify the issue consider the following three distributions:
(i) the real interaction: uses $C(0^n)$ and a witness to $x\in S$;
(ii) a mental experiment: uses $C(h(V^*))$ and a witness to $x\in S$;
(iii) the simulation: uses $C(h(V^*))$ and the witness $V^*$.
WI implies that (ii) and (iii) are computationally indistinguishable,
but we also need to show that (i) and (ii) are comput-indist.
The latter follows using the hypothesis that $S\in NP$ 
and noting that the full transcript can be generated in polynomial-time 
by using the NP-witness (for $x$) as aux-input. 
[Note the difference vis-a-vis the situation discussed in the 
construction of WI for NTIME(t) above, where strong-WI was not preserved 
by parallel executions (see Apdx C.3.1 in Vol 2); specifically, here 
the hybrid distribution (i.e., (ii)) is polynomial-time constructible. 
In general, for $(x,w),(x',w') \in R$, it need not hold that $(x',w)\in R$;
in (ii) we capitalized on the fact that here $(x',w)\in R$.]

Next, we need to establish the computational soundness of the protocol.
Suppose that for some $x\not\in S$ a poly-size strategy $P^*$ manages
to convince the verifier with non-negligible probability $p(x)$.
We start by fixing a random execution of Step 0,
and considering the residual execution from that point on, noting 
that w.p. $\geq p(x)/2$ this residual execution accepts w.p. $\geq p(x)/2$.
Let $h$ and $c$ be the hash-function and commitment value used (in Step 0).
We consider two random executions of the residual protocol such that
in the $i$th execution the verifier selected $r_i$ in Step (1),
and note that with probability $(p(x)/4)^2$ these $r_i$'s are such that
Step 2 accepts with probability at least $p(x)/4$ (with each $r_i$).
Using the POK feature of the argument system,
we obtain (via extraction, in time $\poly(t(n)/p(x))$)
two programs $\Pi_1$ and $\Pi_2$ such that $\Pi_i(x,c)=r_i$
and $c$ is in the range of $C(h(\Pi_i))$.
Hence, $\Pi_1 \neq \Pi_2$
(since these $\Pi_i$'s yield different $r_i$'s on the same $(x,c)$)
but $h(\Pi_1) = h(\Pi_2)$ (by the perfect binding of $C$),
which contradicts the hypothesis regarding $H$
(i.e., its being a CFH resilient against $\poly(t)$-size circuits).

Finally, we need to show a POK WI protocol for NTIME(t).
Recall that we showed a WI argument for NTIME(t),
but we need one with a POK feature.
This can be shown by noting that our construction of a WI argument
has a POK feature provided that its ingredients have this feature.
(Recall that the ingredients are a (strong) WI proof for NP,
and a PCP system for NTIME(t); details are omitted.)


A stronger result: Establishing the above based on standard CFH
================================================================
Recall that the above results were established under the hypothesis
that we have a CFH that is resilient against size $\poly(t)$,
while the standard assumption refers to resilience wrt poly-size.
We can prove the same result using the standard assumption by 
(1) introducing an adequate definition (i.e., "universal arguments"), 
which includes a local-extractability feature 
(i.e., extract individual bits of the witness in poly-time),
(2) constructing such universal arguments using standard CFH, and 
(3) using them together with tree-hashing (instead of $H$ above)
and with error correcting codes (applied before the hashing).
For details, see http://www.wisdom.weizmann.ac.il/~oded/p_ua.html


PLAN FOR THE 20TH MEETING: SECURE MULTI-PARTY COMPUTATION (PART 1)

Topic: Secure multi-party computation -- definitions and results.

An account from the 2013/14 class: After a long (much longer than planned...) discussion about types of adversaries considered in cryptography (i.e., uniform vs non-uniform aux-inputs, randomized or deterministic adversaries), we started talking about "solving generic cryptography problems" (aka "general secure multi-party computations" (MPC)).

My guess is that the aforementioned account is related to the fact that Section 7.1.1 presents a host of definitions that are incomparable rather than a single definition that is advocated as ultimate. This stands in contrast to the situation in previous chapters, but is merely a reflection of the fact that in the current case there is no ultimate goal but rather trade-offs between the fraction of malicious parties that can be tolerated and the level of malicious behavior.

Nevertheless, all definitions are obtained as instantiations of a single definitional framework that views secure (real model) protocols as emulating an ideal model (see Sec 7.1.1.0). The specific definitions are obtained by postulating limits on the malicious behavior in the real model, defining an ideal model, and postulating that security means that any malicious behavior in the real model is essentially reflected in the ideal model. This abstract description may be better understood after seeing two concrete instantiations; that is, those presented in Sections 7.1.1.2 and 7.1.1.3. The mind-blowing array of parameters that are used to derive a host of alternative definitions is presented in Section 7.1.1.1 (and may be better read later).

READING ASSIGNMENT
Sections 7.1.0--7.1.2: The definitional part of the overview and statement of known results.
This means reading page 599 (i.e., 1st page of Chapter 7) as well as Sections 7.1.1 (definitions) and 7.1.2 (results). As hinted above, I suggest the following order.


PLAN FOR THE 21ST MEETING: SECURE MULTI-PARTY COMPUTATION (PART 2)

Topic: Secure multi-party computation (MPC) -- overviews of constructions

It may be good to go over the definitions of secure MPC again as well as over some known results and constructions. This corresponds to Items 1-6 (below), whereas Items 7-8 provide overviews of constructions (to be read now).

  1. Section 7.1.1.2: Defining security in the malicious model; this is a simpler model (which is useful only for $t$ less than $m/2$ bad guys, out of $m$ total). Here the ideal model is straightforward.
  2. Section 7.1.1.3: Defining security in a malicious model allowing abort (for $m=2$). Here the ideal model is aimed to capture delivery of the output to the 1st party who may (if bad) block delivery of output to the 2nd party.
  3. Implicit in Section 7.1.1.0: The general paradigm captured by the distributions $\REAL_{A(z),\Pi}({\bar x})$ and $\IDEAL_{B(z),f}({\bar x})$, where $\Pi$ is a protocol for computing $f$ and $B$ is an ideal-model adversary that controls the same parties as the real-model adversary $A$ (and is of similar complexity, and gets the same aux-input/side-information $z$).
  4. Implicit in Section 7.1.3.1: The semi-honest model (no restriction on $t$ here), and a simplified formulation for the case of deterministic functionalities.
  5. Implicit in Section 7.1.1.3 (see Prop 7.2.11): Reducing the secure computation of general functionalities to the secure computation of deterministic ones.
  6. Section 7.1.2: Stating the results regarding secure MPC (in various models): Assuming "simple TDP", secure MPC is possible both in the plain model with an honest majority and in a model allowing abort for any number of dishonest parties.
  7. Section 7.1.3.1: Transforming MPC that are secure in the semi-honest model into MPC that are secure in the malicious model (i.e., "Forcing semi-honest behavior"). The key tool is ZK proofs, but we need to have the parties commit to their input first as well as commit to random pads generated by secure coin tossing, and then have them prove in ZK that their actions are proper wrt these committed values. We also need to prove (in ZK) knowledge of the committed inputs...
  8. Section 7.1.3.3: A high-level overview of MPC secure in the semi-honest model. We reduce general MPC to the private computation of a simple two-party functionality regarding four bits.
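
The four-bit functionality mentioned in item 8 can be made concrete as follows. The sketch below only specifies *what* is computed (an ideal box handing out random XOR-shares of an AND of two shared wire values); the actual semi-honest protocol of Sec 7.1.3.3 realizes it privately (via 1-out-of-4 OT), which is not shown here.

```python
import secrets

def four_bit_functionality(a1: int, b1: int, a2: int, b2: int):
    # Party 1 holds (a1, b1); Party 2 holds (a2, b2); these are XOR-shares
    # of two wire values a = a1 ^ a2 and b = b1 ^ b2.
    # The parties obtain random XOR-shares (c1, c2) of the AND of the wires,
    # i.e., c1 ^ c2 == (a1 ^ a2) & (b1 ^ b2), where c1 is uniform on its own
    # (so each party's share reveals nothing about the wire values).
    c1 = secrets.randbelow(2)
    c2 = c1 ^ ((a1 ^ a2) & (b1 ^ b2))
    return c1, c2
```

Evaluating a circuit gate-by-gate on XOR-shared wires (XOR gates are computed locally) reduces general MPC to securely realizing this single small functionality.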

READING ASSIGNMENT

ADD'L SUGGESTED READING (OPTIONAL!)


Back to the two-volume book's webpage.