Comments on Fifteen TCC'11 Presentations

[Brown U., March 28-30, 2011]

My impression is that the program of TCC'11 (see LNCS 6597) was marked by a significant number of works whose main contribution is conceptual. Some of them are explorative works that take a first step in a new and seemingly promising direction, while providing only some evidence of its feasibility. In my opinion, this is to be commended.

The following comments refer to a sample of TCC'11 presentations, where this sample is strongly biased towards my own taste. This sample contains the following presentations:

  1. Input Locality and Hardness Amplification by Andrej Bogdanov and Alon Rosen
  2. General Hardness Amplification of Predicates and Puzzles by Thomas Holenstein and Grant Schoenebeck
  3. Security Amplification for the Cascade of Arbitrarily Weak PRPs: Tight Bounds via the Interactive Hardcore Lemma by Stefano Tessaro
  4. Survey talk: Dense Model Theorems and Their Applications by Luca Trevisan
  5. Homomorphic Encryption: from Private-Key to Public-Key by Ron Rothblum
  6. Functional Encryption: Definitions and Challenges by Dan Boneh, Amit Sahai, and Brent Waters
  7. Bringing People of Different Beliefs Together to do UC by Sanjam Garg, Vipul Goyal, Abhishek Jain, and Amit Sahai
  8. PCPs and the Hardness of Generating Private Synthetic Data by Jonathan Ullman and Salil Vadhan
  9. Towards Privacy for Social Networks: A Zero-Knowledge Based Definition of Privacy by Johannes Gehrke, Edward Lui, and Rafael Pass
  10. On the Black-Box Complexity of Optimally-Fair Coin Tossing by Dana Dachman-Soled, Yehuda Lindell, Mohammad Mahmoody, and Tal Malkin
  11. On the Complexity of Non-Adaptively Increasing the Stretch of Pseudorandom Generators by Eric Miles and Emanuele Viola
  12. Survey Talk: Concurrent Security and Non-Malleability by Rafael Pass
  13. Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions by Zvika Brakerski, Jonathan Katz, Gil Segev, and Arkady Yerukhimovich
  14. Towards Non-Black-Box Lower Bounds in Cryptography by Rafael Pass, Wei-Lung Dustin Tseng, and Muthuramakrishnan Venkitasubramaniam
  15. On Black-Box Separations among Injective One-Way Functions by Takahiro Matsuda and Kanta Matsuura

The relative length of my comments does not reflect my relative ranking of these works but is rather related to what I have to say about them.

1. Input Locality and Hardness Amplification
[by Andrej Bogdanov and Alon Rosen]

My interpretation of the main result of this work is somewhat different from the interpretation offered by the authors. My view is that the work's main result asserts a time-versus-success trade-off for the task of inverting any injective input-local function: If one can invert such a function on $n$-bit inputs in time $T(n)$ with success probability $\epsilon(n)$, then one can invert this function in time $\exp(\tilde{O}(\sqrt{n\log(1/\epsilon(n))}))\cdot T(n)$ with success probability $1-\epsilon(n)$, where in both cases the probability is taken over the choice of a random $n$-bit argument to the function.
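
For concreteness, the claimed trade-off may be displayed as follows (in my own notation, with $f$ denoting the injective input-local function, $A$ the presumed inverter, and $A'$ the derived one):

  $\Pr_{x\in\{0,1\}^n}[A(f(x))=x] \ge \epsilon(n)$  implies  $\Pr_{x\in\{0,1\}^n}[A'(f(x))=x] \ge 1-\epsilon(n)$,

where $A$ runs in time $T(n)$ and $A'$ runs in time $\exp(\tilde{O}(\sqrt{n\log(1/\epsilon(n))}))\cdot T(n)$.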

2. General Hardness Amplification of Predicates and Puzzles
[by Thomas Holenstein and Grant Schoenebeck]

This may be the last simplification of the analysis of "hardness amplification", at least in the setting of predicates. The analysis proceeds by identifying hardcore regions (sets) with respect to the specific adversary that attacks the composed predicate (rather than identifying generic hardcore regions). The point is that proving the existence of such special-purpose regions is easier, and that is all that is needed in the rest of the analysis. This approach was implicitly used before in the context of the direct product problem (see, e.g., Sec 2.3 in Vol 1 of FOC), but not in the context of XOR-like lemmas.

3. Security Amplification for the Cascade of Arbitrarily Weak PRPs: Tight Bounds via the Interactive Hardcore Lemma
[by Stefano Tessaro]

The main focus of this work is the security amplification features of the cascade construction (i.e., the sequential application of different permutations), but it presents a general framework and techniques that are applicable also to many other related problems. The main result provides an almost optimal analysis of the cascade construction, improving over prior results that were significantly inferior.

4. Survey talk: Dense Model Theorems and Their Applications
[by Luca Trevisan]

The main message that a cryptography and/or complexity researcher should take from this talk is a result of Reingold et al (FOCS'08) (and Dziembowski and Pietrzak (FOCS'08)) that asserts the following. Let $G$ be a pseudorandom generator with stretch $\ell$ (i.e., it stretches $k$-bit seeds into $\ell(k)$-bit sequences), and let $t=O(\log k)$. Then, for every $(k,k-t)$-source $X$ (i.e., a distribution over $k$-bit strings having min-entropy at least $k-t$), the random variable $G(X)$ is computationally indistinguishable from some $(\ell(k),\ell(k)-t)$-source. (This result is optimal since $G(X)$ may have $t$ fixed bits; say, it may always start with $t$ zeros.) The aforementioned result has a clear application to "seed leakage" (but only for a logarithmic amount): Applying an adequate randomness extractor to $G(X)$ yields a pseudorandom distribution.
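
In symbols (my paraphrase, writing $H_\infty$ for min-entropy and $\approx_c$ for computational indistinguishability):

  $H_\infty(X)\ge k-t$  implies that there exists $Y$ over $\{0,1\}^{\ell(k)}$ with $H_\infty(Y)\ge \ell(k)-t$ such that $G(X)\approx_c Y$,

where $X$ ranges over distributions on $\{0,1\}^k$ and $t=O(\log k)$.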

5. Homomorphic Encryption: from Private-Key to Public-Key
[by Ron Rothblum]

Recent feasibility results regarding homomorphic encryption schemes beg for a systematic study of the various forms of such schemes. Parameters to consider include (1) the set of functions for which homomorphic evaluation is supported; (2) the quality of the output of the homomorphic operation (i.e., "distribution preserving" or merely "compact"); and (3) private-key schemes vs public-key schemes. The current work offers a transformation from private-key schemes to public-key schemes in the case of compact schemes (which include XOR among the supported homomorphic operations). This result mirrors an analogous transformation for the case of distribution-preserving schemes. It also indicates that the minimal level of homomorphism quality (i.e., compactness) suffices to bridge the gap (which exists in non-homomorphic schemes) between private-key schemes and public-key ones.

6. Functional Encryption: Definitions and Challenges
[by Dan Boneh, Amit Sahai, and Brent Waters]

This highly explorative (and somewhat speculative) work presents a unified framework that captures a host of notions of encryption schemes that offer restricted and/or adaptive decryption capabilities. Loosely speaking, we refer to schemes in which a single ciphertext, generated w.r.t. public parameters (which may be viewed as an encryption key), can be partially decrypted when using corresponding decryption keys, where different keys may yield different levels of decryption (e.g., full decryption, or just partial information regarding the plaintext such as whether it satisfies a predetermined predicate). The public parameters are generated by a trusted party (e.g., a system administrator) that hands corresponding decryption keys to the various parties.

The framework is called "functional encryption" because, for a generic functionality $F$, it supports encryption of a message $x$ such that an entity associated with a parameter $k$ can obtain from the ciphertext the value of $F(k,x)$, but nothing else. Special cases of this framework correspond to (1) the standard definition of secure encryption; (2) ID-based encryptions; and (3) various forms of attribute-based encryption (supporting arbitrary "key policies" or "message policies"). While most natural special cases seem impossible in the plain model, this work advocates a general study of the various functionalities and the feasibility of implementing them under some reasonable set-up assumptions.
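
As I understand the framework, its syntax consists of four algorithms (in my notation, paraphrasing the paper): the trusted party runs a setup algorithm and derives per-parameter decryption keys, anybody may encrypt w.r.t. the public parameters, and decrypting with the key derived for parameter $k$ yields $F(k,x)$; that is,

  $\mathrm{Setup}(1^n)\to(\mathrm{pp},\mathrm{msk})$,  $\mathrm{KeyGen}(\mathrm{msk},k)\to\mathrm{sk}_k$,  $\mathrm{Enc}(\mathrm{pp},x)\to c$,  and  $\mathrm{Dec}(\mathrm{sk}_k,c)=F(k,x)$.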

7. Bringing People of Different Beliefs Together to do UC
[by Sanjam Garg, Vipul Goyal, Abhishek Jain, and Amit Sahai]

Building on the work of Lin et al (STOC'09), this work aims at what can be called "set-up combiners" (indeed, compare this to "combiners" for various primitives); that is, the aim is to design protocols that are secure when one of several (combinations of) set-up assumptions holds. The foregoing description corresponds to the case of "common belief" and is extended to a setting of different beliefs, where security is guaranteed only for parties that hold true beliefs. The framework is further extended by allowing one to describe a "bound" on the type of violation of the main beliefs. This is done by identifying each set-up with a pair of functionalities such that the first functionality represents the main belief regarding the behaviour of this set-up, and the second functionality represents a "fall back" belief regarding what may happen when the main belief fails. Thus, the framework associates each set-up with a pair of beliefs (i.e., a main one and a fall-back one). Such pairs of beliefs are classified into types, and a general result regarding the feasibility of secure multi-party computation (MPC) in this setting is presented.

8. PCPs and the Hardness of Generating Private Synthetic Data
[by Jonathan Ullman and Salil Vadhan]

Assuming the existence of one-way functions, Dwork et al (STOC'09) showed that there exist efficiently computable predicates for which it is infeasible to produce a synthetic sanitized database that supports queries regarding the approximate number of records that satisfy these predicates. While these predicates have quadratic-size circuits, they are not natural and/or simple in any reasonable sense. The current work uses PCPs in order to obtain an analogous result with respect to very simple predicates (i.e., the conjunction of two attributes in the original database). (Still, one may argue that the database itself is not a natural one.)

9. Towards Privacy for Social Networks: A Zero-Knowledge Based Definition of Privacy
[by Johannes Gehrke, Edward Lui, and Rafael Pass]

The celebrated definition of differential privacy deviates from standard cryptographic definitions in three ways: (1) it refers to a noticeable (but small) error allowance rather than to a negligible one; (2) it considers pointwise differences (or rather ratios) rather than aggregated differences; and (3) it compares what can be (efficiently) deduced from the system to what could have been (efficiently) deduced from all information (database) except the partial information (record) that we wish to protect.

While the 1st deviation has appeared also in a few cryptographic settings (e.g., epsilon-knowledge, password-based security, weak fairness, etc.), the 2nd deviation is a strengthening (wrt the standard formulations) and so it need not alarm us (at TCC). There is, however, an issue with the 3rd deviation: Comparing the definition of differential privacy to, say, the one of secure multi-party computation, the former is analogous to requiring that the protocol be secure against attacks in which all parties, except one, take part. Note that a protocol can be secure wrt such attacks while being totally insecure otherwise (e.g., any $m$-party protocol that is invoked by providing each party with a share of a secret under a $t$-out-of-$m$ secret sharing scheme is secure against the corruption of any $t$ parties but may be insecure when only $t-1$ parties are corrupted). Indeed, the issue is which ideal model is used as a yardstick in the definition of security (noting that security wrt an ideal-model adversary that controls $t$ parties does not imply security wrt an ideal-model adversary that controls $t-1$ parties).
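
To make the parenthetical example concrete, here is a minimal Python sketch of a $t$-out-of-$m$ (Shamir-style) secret sharing scheme; this is standard material, not taken from the paper under discussion. Any $t$ shares determine the secret, whereas any $t-1$ shares are distributed independently of it; hence an ideal-model adversary controlling $t$ parties learns the secret from the corrupted parties' inputs alone (making the real/ideal comparison vacuous in that case), while one controlling $t-1$ parties learns nothing from its inputs.

  import random

  P = 2**61 - 1  # a prime modulus; secrets and shares live in GF(P)

  def share(secret, t, m):
      """Split `secret` into m shares such that any t of them determine it."""
      coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
      # The share of party x (x = 1..m) is the degree-(t-1) polynomial evaluated at x.
      return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
              for x in range(1, m + 1)]

  def reconstruct(shares):
      """Recover the secret from any t (or more) shares via Lagrange interpolation at 0."""
      secret = 0
      for xj, yj in shares:
          num, den = 1, 1
          for xk, _ in shares:
              if xk != xj:
                  num = num * (-xk) % P
                  den = den * (xj - xk) % P
          secret = (secret + yj * num * pow(den, P - 2, P)) % P
      return secret

  # Any t shares suffice; any t-1 shares are uniformly distributed, regardless of the secret.
  shares = share(123456789, t=3, m=5)
  assert reconstruct(shares[:3]) == 123456789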

This issue is at the focus of the current work, which offers a specific definitional framework for handling it. Rather than using the "one-record-reduced database" as a yardstick, it is suggested to use as a yardstick a (randomized) function (or a set of randomized functions) of this reduced database. Needless to say, this set of functions should include the desired utility function, but it may include more than that. Specifying the set of functions (to be used as a yardstick) is left to the system designer. One natural choice is a random sample of the database records, where the size of this sample is closely related to the sample complexity of the task of approximating the desired utility function. (That is, the yardstick is a projection of the database on a uniformly selected set of positions of adequate size.) This is natural since a sample of that size is necessary in order to compute the desired utility. Interestingly, it is shown that every utility function $u$ has a mechanism that allows one to compute the utility in a way that is private with respect to a random sample of size that is linearly related to the sample complexity of approximating $u$. Indeed, essentially, a utility $u$ that maps the database to the real interval $[0,1]$ is private wrt a random sample of $k$ points iff the sample complexity of approximating $u$ is $\Theta(k)$.

These considerations are extended from the simple setting of random samples to the setting that allows sampling augmented by local explorations (where locality is wrt the topology of the "social network"). It turns out that privacy wrt exploration-augmented samples is related to property testing in general graphs, but this relation calls for further study (see Sec 4 in the paper).

10. On the Black-Box Complexity of Optimally-Fair Coin Tossing
[by Dana Dachman-Soled, Yehuda Lindell, Mohammad Mahmoody, and Tal Malkin]

This work sheds more light on the known discrepancy between $r$-round coin tossing protocols obtaining bias inversely proportional to $r$ (i.e., bias $O(1/r)$) and protocols that obtain bias that is only the square root of the aforementioned optimal bias (i.e., bias $O(1/\sqrt{r})$). Recall that the optimal protocol of Moran et al (TCC'09) is based on Oblivious Transfer (OT), whereas the simple protocol obtaining the square root of this bias can be based on any one-way function (OWF). The current work provides evidence that this gap is no coincidence: A protocol with bias smaller than the square root of the optimal bias cannot be based on OWF via a black-box reduction, as long as $r$ is not too large (wrt the security parameter). The technique also implies a similar lower bound on the number of rounds in secure $O(1)$-party computation that is based on a black-box reduction to OWF.

11. On the Complexity of Non-Adaptively Increasing the Stretch of Pseudorandom Generators
[by Eric Miles and Emanuele Viola]

The issue at hand is increasing the stretch of pseudorandom generators (PRGs), and specifically the gap between sublinear (net) stretch and linear stretch. Recall that iterative application of any PRG (even one with a single bit of net stretch) yields PRGs of arbitrary (polynomial) stretch. The current work shows a black-box separation that refers to non-adaptive applications of the base PRG and a restricted type of post-processing. The latter restriction seems essential, and so the entire issue begs for further study.
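
For perspective, the classical iterative construction is sketched below in Python (with base_prg standing for an arbitrary one-bit-stretch generator; it is a placeholder, not a concrete construction). Note that each application of the base PRG is fed the output of the previous application, i.e., the applications are inherently adaptive, which is precisely what the model considered in this work disallows.

  def stretch(base_prg, seed_bits, out_len):
      """Extend a one-bit-stretch PRG to out_len output bits by iteration.
      `base_prg` maps a list of k bits to a list of k+1 bits (a placeholder)."""
      state = list(seed_bits)
      output = []
      while len(output) < out_len:
          expanded = base_prg(state)   # k bits -> k+1 bits
          output.append(expanded[0])   # emit one bit of output per iteration
          state = expanded[1:]         # the remaining k bits seed the next application
      return output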

12. Survey Talk: Concurrent Security and Non-Malleability
[by Rafael Pass]

Unfortunately, Rafael did not back his excellent talk by a written survey (and the proceedings contain only a laconic abstract). Hopefully, this may change in the future.

The pivot of the constructions presented is an ID-based model, which also facilitates the definitional treatment. Thus, Rafael started by arguing that this ID-based model can be implemented in the standard model by using signature schemes. Among the results that were highlighted are: (1) OWFs imply O(1)-round concurrent non-malleable (NM) commitment schemes; (2) a stronger notion that is also achieved is robustness wrt concurrent execution with any k-round protocol, where k is a constant (and the O(1)-round commitment scheme uses more rounds); and (3) combining any O(1)-round OT with such a robust NM commitment yields O(1)-round secure MPC.

13. Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions
[by Zvika Brakerski, Jonathan Katz, Gil Segev, and Arkady Yerukhimovich]

Black-box separations between cryptographic primitives provide some indication towards the complexity of the methods that must be employed when relating these primitives, but these separations may fail to indicate that the said implication is impossible. One notable example of this failure refers to the use of zero-knowledge (ZK) proofs for asserting claims regarding the proper application of the underlying primitive: In these cases, the ZK protocol is applied to the code of the underlying primitive and not only to its input-output behavior (which means that such constructions cannot use the base primitive in a black-box manner). The current work focuses on this specific issue, and suggests to resolve it by augmenting the oracle that represents the basic primitive with a ZK-functionality for proving "NP-assertions relative to the primary oracle". As a sanity check, it is shown that constructions that utilize the aforementioned paradigm (e.g., CCA-secure public-key encryption (PKE) schemes based on standard PKE augmented with NIZK) fall within the new model, whereas black-box separations that seem unrelated to ZK (e.g., key-exchange vs OWF) remain valid also in this ZK-augmented model.

14. Towards Non-Black-Box Lower Bounds in Cryptography
[by Rafael Pass, Wei-Lung Dustin Tseng, and Muthuramakrishnan Venkitasubramaniam]

A more radical approach, also directed at the black-box usage of the basic primitive (while keeping the black-box restriction wrt the security analysis), eliminates the former restriction altogether, but relies on (stronger) intractability assumptions as a basis for its conclusions. It turns out that some known separations can be extended to this setting.

15. On Black-Box Separations among Injective One-Way Functions
[by Takahiro Matsuda and Kanta Matsuura]

The main result of this work is a black-box separation between arbitrary 1-1 OWF and OWP (i.e., 1-1 length-preserving OWF). This separation holds even if the 1-1 OWF increases the length by only one bit. This implies a separation between 2-regular OWF and OWP.

