My focus in this posting is on open problems and useful observations that were made explicitly or implicitly in some TCC'12 talks. Additional comments regarding these and other talks can be found HERE.
Open Problem: Reducing the assumptions used for constructing Non-Interactive Zero-Knowledge proofs (NIZK) for any set in NP. Recall that (enhanced) Trapdoor Permutations (TDP) suffice for NIZK (for any set in NP), but there seems to be no reason to think that OWF per se does not suffice. Actually, I'd be more humble and ask, for starters, whether such NIZK can be constructed based on any TDP (rather than based on enhanced ones).
(N.B.: Here, ZK means statistical zero-knowledge.)
This paper refers to PCPs that remain Zero-Knowledge (ZK) also in the "unbounded model" (i.e., when the number of queries asked by the cheating verifier is not a priori bounded at system design time). In such a case, the proof length cannot be a priori bounded. The focus of Kilian et al [KPT'97] was on the bounded model, and their results for the unbounded model use exponentially long proofs that may not be produced by small circuits. The current paper seeks such proofs (i.e., generated by small circuits), called "efficient". It is shown that if the verifier is non-adaptive, then such efficient ZK PCPs exist only for sets in AM intersection coAM.
Open Problem: Characterize the class of sets having efficient ZK PCPs; for starters, do they extend beyond BPP? Are they contained in SZK?
Update (April 2012): The second question has been resolved by Mohammad Mahmoody and David Xiao; see ECCC TR12-052 (titled "Languages with Efficient Zero-Knowledge PCPs are in SZK"). Mohammad Mahmoody commented that the TCC'12 paper by Garg et al (on Resettable Statistical Zero Knowledge) provides some evidence that efficient ZK PCPs extend beyond BPP.
They show that a notion of point obfuscation, which in turn can be constructed under an assumption that has a non-Black-Box flavor (non-BB, see below), can be used to construct 3-round ZK for NP. This assumption and approach seem incomparable to the "extractability assumptions" used in prior studies.
The said non-BB flavor appears in assumptions that say that, for every distribution of hard (to solve) instances, something else is hard; this becomes clear in the contrapositive, which asserts that an efficient breaking of something yields an efficient solver, whereas the "yielding" is purely existential (i.e., no transformation, let alone a reduction, between the algorithms is given).
Observation: The non-BB flavor of some assumptions. The foregoing point might have been made before, but it occurred to me most clearly in the context of this talk.
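To make this logical structure explicit, here is a schematic rendering of such an assumption and of its contrapositive (the notation is mine, not the paper's):

    For every efficiently samplable distribution $D$ of instances:
      $(D \mbox{ is hard to solve}) \;\Longrightarrow\; (\mbox{task } T_D \mbox{ is hard to break})$;
    equivalently (contrapositive):
      $(\exists \mbox{ efficient } A \mbox{ breaking } T_D) \;\Longrightarrow\; (\exists \mbox{ efficient } A' \mbox{ solving } D)$.

Note that the second implication merely asserts the existence of $A'$; it does not provide any way of obtaining $A'$ from $A$.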
The said transformation is obtained by a simple construction, which does not preserve security but is rather targeted at a specific, known security level $S$. It hashes the argument to a set of size $\sqrt{S}$ and applies a non-adaptive PRF to the hashed value. Two nice ways are used to adapt the result to the standard setting, where the super-polynomial security $S$ is not known. The first uses combiners, and the second uses a non-uniform advice of arbitrarily small (yet unbounded) length.
Observation: Queries that can be answered according to their hashed value offer an efficient transformation between adaptive and non-adaptive queries. That is, suppose that the values of $F:[N]\to\bits^*$ can be obtained from $F':[M]\to\bits^*$ by hashing $x$ to $h(x)$, using the (random) hashing function $h$, and obtaining $F(x)$ from the value of $F'(h(x))$ (and $h$). Then, an algorithm that makes adaptive queries to $F$ can be emulated by an algorithm that makes non-adaptive queries to $F'$ (i.e., obtains the value of $F'$ on all points in $[M]$). Indeed, if $M$ is polynomially related to the number of adaptive queries, then the foregoing transformation has only a polynomial overhead.
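A minimal Python sketch of this emulation (the toy parameters and the names F, F_prime, h, and adaptive_strategy are mine, for illustration only; the simplest case $F(x)=F'(h(x))$ is used):

    import random

    N, M = 10**6, 100            # M should be polynomial in the number of adaptive queries

    # h: a lazily sampled random function [N] -> [M], playing the role of the hash.
    _h_table = {}
    def h(x):
        if x not in _h_table:
            _h_table[x] = random.randrange(M)
        return _h_table[x]

    # F': an arbitrary oracle on [M] (here a random table, standing in for the real one).
    _F_prime_table = [random.getrandbits(32) for _ in range(M)]
    def F_prime(y):
        return _F_prime_table[y]

    # F is determined by F' and h; here, in the simplest case, F(x) = F'(h(x)).
    def F(x):
        return F_prime(h(x))

    # Non-adaptive emulation: query F' on ALL of [M] in one batch, then answer
    # every (adaptive) query to F locally, without further oracle access.
    snapshot = [F_prime(y) for y in range(M)]
    def emulated_F(x):
        return snapshot[h(x)]

    # Any adaptive strategy runs identically against either oracle:
    def adaptive_strategy(query):
        x = 17
        for _ in range(5):        # each query depends on the previous answer
            x = (x + query(x)) % N
        return x

    assert adaptive_strategy(F) == adaptive_strategy(emulated_F)

The point is that the only oracle access of the emulator is the single non-adaptive batch of $M$ queries used to form the snapshot.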
The question addressed is improving the efficiency of the GGM construction (of PRFs based on PRGs), while maintaining security. For a limited range of security levels this can be done, and the construction is interesting. The $n$-bit argument is hashed to $m$ bits (using a $t$-wise independent function), then GGM is applied, but the output is used as a description of another hash function that is applied to the original argument. While basic instincts may warn against this double use of the argument, a hybrid argument, which first replaces the PRF by a truly random function, shows that this does work provided that no $(t-1)$-wise collision (on the 1st hashing) occurs in the attack.
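A schematic Python sketch of the shape of such a construction (not the paper's exact instantiation: the parameters, the polynomial hashing, and the keyed-hash placeholder for the GGM PRF are all my own simplifications):

    import random, hashlib

    # Schematic shape:  F(x) = h2_{ PRF_key( h1(x) ) }(x)
    # where h1: {0,1}^n -> {0,1}^m is t-wise independent, and the PRF output
    # is parsed as the description of a second hash h2 applied to x itself.

    n, m, t = 128, 32, 4                 # nominal input length, hashed length, independence
    P = (1 << 61) - 1                    # prime modulus for polynomial hashing

    def sample_poly_hash(degree, out_bits):
        """Random degree-(degree) polynomial over GF(P); its values on distinct
        points are (degree+1)-wise independent, here truncated to out_bits bits."""
        coeffs = [random.randrange(P) for _ in range(degree + 1)]
        def h(x):
            acc = 0
            for c in coeffs:
                acc = (acc * x + c) % P
            return acc % (1 << out_bits)
        return h

    h1 = sample_poly_hash(t - 1, m)      # the t-wise independent first hash

    key = random.getrandbits(128)
    def prf(k, y):
        """Placeholder for the GGM PRF on m-bit inputs (a keyed hash, for the sketch)."""
        return int.from_bytes(hashlib.sha256(f"{k}:{y}".encode()).digest()[:16], "big")

    def F(x):
        y = h1(x)                        # 1st use of x: argument of h1
        desc = prf(key, y)               # the PRF is applied only to the SHORT hashed value
        # 2nd use of x: the PRF output selects the second hash, applied to x itself
        a, b = desc >> 64, desc & ((1 << 64) - 1)
        return (a * x + b) % P           # h2_{desc}(x), a member of a pairwise-indep. family

    x = random.getrandbits(n)            # an n-bit argument, represented as an int
    value = F(x)

The gain, as described above, is that the (expensive) PRF is evaluated on $m$-bit rather than $n$-bit inputs.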
Observation: Using an input both as an argument to a hashing function and as a selector of this function. Such a double use is very dangerous if the foregoing selection is deterministic, but may work in a randomized setting. Specifically, suppose that a random process $R:\bits^n\to\bits^m$ is used and that $x$ is hashed to $H_{R(x)}(x)$, where $H$ is a $(t+1)$-wise independent hashing family. Then, as long as $o(2^{(1-(1/t))m})$ evaluations are allowed (which means that we expect no $t$-way collisions), the outcomes will be random.
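Indeed, a back-of-the-envelope union bound (assuming, for simplicity, that the $q$ values of $R$ are uniform and independent in $\bits^m$) gives

    $\Pr[\mbox{some } t \mbox{ of the } q \mbox{ values of } R \mbox{ collide}] \;\le\; \binom{q}{t}\cdot 2^{-(t-1)m} \;\le\; \frac{q^t}{t!\cdot 2^{(t-1)m}},$

which is $o(1)$ whenever $q = o(2^{(1-(1/t))m})$.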
The talk gave equal room to two extreme settings of the parameters. The first setting is the strong locality case, where one seeks to minimize the number of queries, denoted $r$ (typically, $r=O(1)$), and then seeks to minimize the length (i.e., have $n$ be as small as possible as a function of $k$). The second setting is the high rate case, where one seeks to have the rate of the code approach 1 (i.e., have $k/n\to1$), while using polylogarithmic locality. The main focus since 1995 was on the strong locality case (and currently 3-query codes of sub-exponential length are known), but the recent multiplicity codes refer to the high rate case (and in particular overcome the rate barrier of one half).
Open Problem: Make further progress towards closing the gap between the known upper and lower bounds for Locally Decodable Codes in the strong locality regime. Indeed, I have been annoyed by this for more than a decade, and made two conjectures (regarding lower bounds) that were disproved. I dare make no further conjectures...
The dichotomy (and characterization) is for predicates that may be used in local constructions (of candidate small-biased generators of *small* polynomial stretch): Bad predicates yield a biased sequence when the candidate PRG couples them with almost all possible graphs (yielding an NC0 construction), whereas good predicates yield small-bias sequences in such a case. The bad predicates are either linear or those whose output is correlated with fewer than three of their input bits. Although the above is only remotely related to the study of general pseudorandom generators (i.e., fooling any efficient distinguisher), it reminded me of a favorite open problem.
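For concreteness, here is a toy Python sketch of such a candidate generator: a fixed random graph assigns to each output bit a small set of seed bits, and a single predicate is applied to each such set. The specific predicates and parameters are mine, chosen only to illustrate the bad/good distinction (a linear predicate versus a nonlinear one that is uncorrelated with any one or two of its inputs):

    import random

    k, n, d = 64, 256, 5      # seed length, output length (small poly stretch), locality

    # The "graph": each output bit reads a random d-subset of the k seed bits.
    random.seed(0)
    graph = [random.sample(range(k), d) for _ in range(n)]

    def gen(predicate, seed_bits):
        """Candidate NC0 generator: output bit i is the predicate applied to the
        d seed bits indexed by graph[i]."""
        return [predicate([seed_bits[j] for j in S]) for S in graph]

    # A "bad" predicate: linear (XOR of its inputs); the output then satisfies
    # many linear relations and so cannot be small-biased.
    def xor5(b):
        return b[0] ^ b[1] ^ b[2] ^ b[3] ^ b[4]

    # A candidate "good" predicate: XOR of three bits plus an AND of the other two
    # (nonlinear, and uncorrelated with any one or two of its inputs).
    def xor3_and2(b):
        return b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])

    seed = [random.getrandbits(1) for _ in range(k)]
    bad_output  = gen(xor5, seed)
    good_output = gen(xor3_and2, seed)
    # One may now examine, empirically, the bias of XORs of output bits in each case.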
Open Problem: Construct pseudorandom generators of linear and/or polynomial stretch in NC0 based on as weak/widely-believed assumptions as possible. Indeed, Benny Applebaum's work Pseudorandom Generators with Long Stretch and Low Locality from Random Local One-Way Functions is currently the last word on this project.
This work initiates a study of condensers for seed-dependent sources, where each such source is defined by an efficient randomized process that is given the seed to be used by the condenser. Condensers are used here since extractors are (virtually) impossible in this setting, and it is shown that for typical applications their output is good enough (provided that their entropy deficiency is small, say logarithmic).
Observation: The fact that condensing with logarithmic (in the security parameter) entropy deficiency suffices for cryptographic applications is an important observation to keep in mind.
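Roughly, the reason (in my own rendering, for unpredictability-type applications) is that a distribution $Y$ over $\bits^m$ with entropy deficiency $d$ (i.e., min-entropy at least $m-d$) assigns to any "bad" event $E$ at most a $2^d$ factor more probability than the uniform distribution does:

    $\Pr_{y\sim Y}[y\in E] \;\le\; |E|\cdot 2^{-(m-d)} \;=\; 2^d\cdot\Pr_{u\sim U_m}[u\in E].$

Hence, an application that is $\epsilon$-secure when keyed by $U_m$ remains $2^d\cdot\epsilon$-secure when keyed by $Y$, which is meaningful as long as $d$ is logarithmic in the relevant security parameter. (Indistinguishability-type applications require a more careful treatment.)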
Open Problem: Construct such condensers based on as weak/widely-believed assumptions as possible.
Two decades and three years after the pioneering work of Impagliazzo and Rudich, a vast number of results that assert black-box separations among cryptographic primitives are known. Two such examples are listed below. Initially, such results were understood as asserting an inherent barrier (as is still believed to exist between one-way functions and trapdoor permutations), but with time a more careful interpretation, which only asserts a "technical limitation" (i.e., a limitation on the techniques), came to dominate. For an excellent clarification of the complex issues involved, see Reingold, Trevisan and Vadhan's study of Notions of Reducibility between Cryptographic Primitives. Still, my feeling is that there is more to be said about the subject. In particular, it would be great to see positive results that circumvent separations of the type presented in the following two works.