Comments regarding a few TCC'12 talks

This posting is aimed only at TCC fans; thus, I allowed myself many TCC-standard abbreviations.

Due to minor personal problems, I did not attend some of the talks that I had planned to attend, and took no notes for them (as well as for some other talks). Also, as usual, my selection of comments is very biased towards my own research taste and interests.

Oded

1. Computing on Authenticated Data

by Jae Hyun Ahn, Dan Boneh, Jan Camenisch, Susan Hohenberger, abhi shelat, and Brent Waters.

The work introduces $P$-homomorphic signatures as a unified framework that generalizes various prior notions. A binary (Boolean) predicate $P$ describes the desired derivation of signatures such that $P(S,m)=1$ iff having signatures for all messages in $S$ (possibly a singleton) should allow the derivation of a signature for the message $m$. The auxiliary algorithm DeriveSign produces such a signature when given the (public) verification-key and signatures for all messages in $S$. The security definition asserts the infeasibility of deriving signatures outside the $P$-closure of the signatures obtained via a chosen message attack.
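For concreteness, the forbidden set may be formalized (in my own notation, as a sketch rather than the paper's exact definition) as the closure of the queried set $Q$ under $P$:

\[ \mathrm{closure}_P(Q) = \bigcup_{i\geq 0} Q_i, \qquad Q_0 = Q, \qquad Q_{i+1} = Q_i \cup \{ m : \exists S \subseteq Q_i \text{ s.t. } P(S,m)=1 \}, \]

so that unforgeability means that no feasible attacker, having obtained signatures on the messages in $Q$, can produce a valid signature on any $m \notin \mathrm{closure}_P(Q)$.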

2. Identifying Cheaters Without an Honest Majority

by Yuval Ishai, Rafail Ostrovsky, and Hakan Seyalioglu.

Introduces the notion of local-identification secret sharing (LISS), in which each party is notified of the identity of the parties whose shares do not fit its own; this is weaker than (global) ISS, in which all parties are notified of the identity of the parties whose shares are inconsistent with the majority. The point is that LISS is relevant (and indeed can be meaningfully achieved) also when the honest parties are in the minority.
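In rough notation (my own paraphrase, not the paper's), the local guarantee is that each honest party $i$ outputs a list

\[ L_i \subseteq [n] \text{ such that } j \in L_i \text{ whenever party } j\text{'s share is inconsistent with party } i\text{'s own share}, \]

whereas global ISS requires a single, commonly held list of the parties whose shares are inconsistent with the majority, a requirement that loses its meaning once the honest parties are in the minority.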

3. Invited Talk: Non-Interactive Zero-Knowledge

by Jens Groth.

The focus was on efficient constructions, where efficiency refers mainly to the length of the CRS and the proof. Linear (or rather almost-linear) length is a barrier for the known constructions that use "standard assumptions". Recall that (enhanced) TDPs suffice for NIZK (for NP), but there seems to be no reason to think that OWFs per se do not suffice. Actually, I'd be more humble and ask, for starters, whether NIZK can be constructed based on any TDP (rather than based on enhanced ones).

4. On Efficient Zero-Knowledge PCPs

by Yuval Ishai, Mohammad Mahmoody, and Amit Sahai.

This paper refers to PCPs that remain ZK also in the "unbounded model" (i.e., when the number of queries asked by the cheating verifier is not a priori bounded at system design time). In such a case, the proof length cannot be a priori bounded. The focus of Kilian et al. [KPT'97] was on the bounded model, and their results for the unbounded model use exponentially long proofs that may not be produced by small circuits. The current paper seeks such proofs that are generated by small circuits, called "efficient". It is shown that if the verifier is non-adaptive, then such efficient ZK PCPs exist only for sets in AM intersection coAM. The begging question is to determine the complexity of this class (of sets having efficient ZK PCPs): For starters, does it extend beyond BPP? Is it contained in SZK?

5. Point Obfuscation and 3-round Zero-Knowledge

by Nir Bitansky and Omer Paneth.

They show that a notion of point obfuscation, which in turn can be constructed under an assumption that has a non-black-box flavor (see below), can be used to construct 3-round ZK for NP. This assumption and approach seem incomparable to the "extractability assumptions" used in prior studies.

The said non-BB flavor appears in assumptions that say that, for every distribution of hard (to solve) instances, something else is hard; this becomes clear in the contrapositive, which asserts that an efficient breaking of something yields an efficient solver, whereas the "yielding" is purely existential (i.e., no transformation, let alone a reduction, between the algorithms is given).
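Schematically (my own rendering of the general form of such assumptions, not the paper's exact statement):

\[ \forall \text{ efficient } D: \quad \big( \text{instances drawn from } D \text{ are hard to solve} \big) \;\Longrightarrow\; \big( \text{some related task } T_D \text{ is hard} \big), \]

and the contrapositive asserts that any efficient algorithm breaking $T_D$ implies the mere existence of an efficient solver for $D$-instances, with no transformation that constructs the latter from the former.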

6. From Non-Adaptive to Adaptive Pseudorandom Functions

by Itay Berman and Iftach Haitner.

The said transformation is obtained by a simple construction, which does not preserve security but is rather targeted at a specific, known security level $S$. It hashes the argument to a set of size $\sqrt{S}$ and applies a non-adaptive PRF to the hashed value. Two nice ways are used to adapt the result to the standard setting, in which the super-polynomial security level $S$ is not known. The first uses combiners, and the second uses a non-uniform advice of arbitrarily small (yet unbounded) length.
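A minimal sketch of the core step in Python (my own toy rendering; nonadaptive_prf, its key, and the hash family are placeholder assumptions, not the paper's interfaces): a distinguisher of complexity well below $\sqrt{S}$ is unlikely to cause a collision under the hashing, so its adaptive queries could just as well have been fixed non-adaptively in advance.

    import random
    from math import isqrt

    def make_adaptive_prf(nonadaptive_prf, key, S):
        # Hash queries into a domain of size about sqrt(S) using a
        # pairwise(-ish) independent function, then apply the given
        # non-adaptive PRF to the hashed value.
        p = (1 << 127) - 1              # a Mersenne prime, larger than the domain
        a = random.randrange(1, p)
        b = random.randrange(p)
        m = max(1, isqrt(S))            # target domain size ~ sqrt(S)

        def F(x: int) -> bytes:
            hx = ((a * x + b) % p) % m  # hashed query, in [0, m)
            return nonadaptive_prf(key, hx)

        return F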

7. Hardness Preserving Constructions of Pseudorandom Functions

by Abhishek Jain, Krzysztof Pietrzak, and Aris Tentes.

The question addressed is improving the efficiency of the GGM construction (of PRFs based on PRGs) while maintaining security. For a limited range of security levels this can be done, and the construction is interesting. The $n$-bit argument is hashed to $m$ bits (using a $t$-wise independent function), then GGM is applied, but the output is used as a description of another hash function that is applied to the original argument. While basic instincts may warn against this double use of the argument, a hybrid argument that first replaces the PRF by a truly random function shows that this does work, provided that no $(t-1)$-wise collision (on the first hashing) occurs during the attack.
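In formula form (my own paraphrase of the construction described above), the new PRF is

\[ F_{s,g}(x) = h_{\mathrm{GGM}_s(g(x))}(x), \]

where $g:\bits^n\to\bits^m$ is $t$-wise independent, $\mathrm{GGM}_s$ denotes the GGM PRF (with seed $s$) on the smaller domain $\bits^m$, and its output is parsed as the description of a hash function $h$ that is applied to the original argument $x$.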

8. Lossy Functions Do Not Amplify Well

by Krzysztof Pietrzak, Alon Rosen, and Gil Segev.

In contrast to my own intuition and/or hopes, this work shows that there exists no black-box construction for amplifying lossiness. That is, there is no poly-size oracle circuit $C$ that amplifies relative lossiness $r=\ell/n$ to relative lossiness greater than $r+O(\log n)/n$. The result is tight, since amplification by an additive $O(\log n)/n$ term is possible. Recall that we refer to functions $f:\bits^n\to\bits^*$ that are either injective or lose $\ell$ bits, and we desire $C^f:\bits^N\to\bits^*$ to correspondingly be either injective or lose significantly more than $(\ell/n)\cdot N$ bits.
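To fix notation (my own, consistent with the above), call $f$ $\ell$-lossy if $|\mathrm{image}(f)| \leq 2^{n-\ell}$; the amplification task is then

\[ f \text{ injective} \;\Rightarrow\; C^f \text{ injective}, \qquad f \text{ is } \ell\text{-lossy} \;\Rightarrow\; C^f \text{ is } \ell'\text{-lossy with } \ell'/N \gg \ell/n, \]

and the negative result says that any poly-size $C$ achieves at most $\ell'/N \leq \ell/n + O(\log n)/n$.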

9. Resettable Statistical Zero Knowledge

by Sanjam Garg, Rafail Ostrovsky, Ivan Visconti, and Akshay Wadia.

This paper gives evidence that rSZK proofs exist for all of SZK, and stronger evidence that rSZK contains sets that seem to be outside of BPP. They ask whether rSZK *arguments* exist for all of NP (in analogy to the non-resettable case).

10. The Knowledge Tightness of Parallel Zero-Knowledge

by Kai-Min Chung, Rafael Pass, Wei-Lung Dustin Tseng.

This work shows rather tight bounds on the knowledge tightness (roughly, the ratio between the running time of the black-box simulator and that of the verifier) of $(c+7)$-round black-box parallel ZK arguments: when $k$ parallel sessions are played, the tightness is $O(k^{1/c})$.

11. Invited Talk: Locally Decodable Codes

by Sergey Yekhanin.

The talk gave equal room to two extreme settings of the parameters. The first setting is the strong locality case, where one seeks to minimize the number of queries, denoted $r$, typically having $r=O(1)$, and then seeks to minimize the length (i.e., have the codeword length $n$ be as small as possible as a function of the message length $k$). The second setting is the high rate case, where one seeks to maximize the rate of the code (i.e., have $k/n$ approach 1) while using polylogarithmic locality. The main focus since 1995 was on strong locality (currently, 3-query codes of sub-exponential length are known), but the recent multiplicity codes refer to the high rate case (and in particular overcome the rate barrier of one half).
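In symbols (my own shorthand), the two regimes are

\[ \text{(i)} \;\; r=O(1), \text{ minimize } n=n(k); \qquad \text{(ii)} \;\; k/n \to 1, \text{ with } r = \mathrm{polylog}(n). \]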

12. A Dichotomy for Local Small-Bias Generators

by Benny Applebaum, Andrej Bogdanov, and Alon Rosen.

The dichotomy (and characterization) is for predicates that may be used in local constructions of candidate small-bias generators of *small* polynomial stretch: bad predicates yield a biased sequence when the candidate generator couples them with almost all possible graphs (yielding an NC0 construction), whereas good predicates yield small-bias sequences in such a case. The bad predicates are either linear or those whose output is correlated with fewer than three of their input bits.
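Recall the form of such local constructions (in my notation): each output bit applies the predicate to a small, fixed set of input bits, as specified by a graph; that is,

\[ G(x)_i = P\big(x_{\Gamma(i)}\big) \quad \text{for } i=1,\dots,m, \]

where $\Gamma(i)$ lists the (constantly many) input positions feeding the $i$-th output, and the stretch is $m = n^{1+\epsilon}$ for a small constant $\epsilon > 0$.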

13. Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources

by Yevgeniy Dodis, Thomas Ristenpart, and Salil Vadhan.

This work initiates a study of condensers for seed-dependent sources, where each such source is defined by an efficient randomized process that is given the very seed that may be used by the condenser. Condensers are used here because extractors are (virtually) impossible in this setting, and it is shown that for typical applications the condenser's output is good enough, provided that its entropy deficiency is small (say, logarithmic).
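Roughly (in my own phrasing of the notion), a condenser $\mathrm{Cond}$ with deficiency $d$ guarantees that

\[ \mathrm{Cond}(X,s) \text{ is (close to) a distribution of min-entropy at least } m-d, \]

even when the source $X = A(s)$ is produced by an efficient sampler $A$ that is given the seed $s$, where $m$ denotes the output length; an extractor would correspond to $d=0$ (an almost uniform output), which is unachievable in this seed-dependent setting.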

14. Uniqueness is a Different Story: Impossibility of Verifiable Random Functions from Trapdoor Permutations

by Dario Fiore and Dominique Schroder.

It was known that there is no black-box construction of VRFs based on OWFs; this work strengthens the result to TDPs. Actually, they show that signature schemes with unique valid signatures (viewed as unpredictable but verifiable functions) cannot be BB-constructed based on "adaptive TDPs" (i.e., TDPs that are hard to invert under a chosen image attack).