A small sample of TCC'13 articles
For a larger sample, see
my comments re some TCC'13 talks.
All articles appear in the proceedings of TCC'13, LNCS 7785.
A Counterexample to the Chain Rule for Conditional HILL Entropy:
And What Deniable Encryption Has to Do with It
by Stephan Krenn, Krzysztof Pietrzak, and Akshay Wadia
This refers to conditional pseudo-entropy (PE) as defined in HILL.
Recall that $X$ has pseudo-entropy at least $k$
if it is computationally indistinguishable from some $X'$
that has (Shannon) entropy $k$.
It is shown that $PE(X|Y) > t$
while $PE(X|Y,b)=0$, where $b$ is a random bit.
Let $f$ be a one-way function and $h$ be a hard-core of $f$.
Consider the following probability space:
- $b\in_R\{0,1\}$,
- For $i=1,...,2t$ select $x_i$ uniformly in $\{0,1\}^n$ s.t. $h(x_i)=b$,
and for $i=2t+1,...,3t$ select $x_i$ uniformly in $\{0,1\}^n$ s.t. $h(x_i)=1-b$.
- Let $X=sort(x_{2t+1},...,x_{3t})$ and $Y=sort(f(x_1),...,f(x_{3t}))$.
Then, $PE(X|Y) = \log_2 {{2t}\choose t}$,
because given $Y$ we need to present $t$ preimages
having the same $h$-value and this value may be arbitrary
(since $b$ cannot be approximated based on $Y$).
On the other hand, $PE(X|Y,b)=0$, since $Y$ and $b$ determine $X$.
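The probability space above can be sampled by a short script. This is a toy sketch only: as a stand-in, $f$ is instantiated by truncated SHA-256 and $h$ by the least-significant bit, neither of which is a proven one-way function or hard-core predicate.

```python
import hashlib
import random

def f(x, n):
    # toy stand-in for a one-way function (assumption: truncated SHA-256)
    return hashlib.sha256(x.to_bytes((n + 7) // 8, 'big')).hexdigest()[:16]

def h(x):
    # toy stand-in for a hard-core predicate: the least-significant bit
    return x & 1

def sample(n, t):
    b = random.randrange(2)
    xs = []
    for i in range(3 * t):
        target = b if i < 2 * t else 1 - b   # first 2t: h = b; last t: h = 1-b
        x = random.randrange(1 << n)
        while h(x) != target:                # rejection sampling
            x = random.randrange(1 << n)
        xs.append(x)
    X = sorted(xs[2 * t:])                   # the t preimages with h-value 1-b
    Y = sorted(f(x, n) for x in xs)          # all 3t images, sorted
    return b, X, Y
```

Given such a sample, $(Y, b)$ determines $X$ (it consists of the preimages of the $Y$-entries with $h$-value $1-b$), whereas given $Y$ alone any $t$ of the $3t$ preimages sharing an $h$-value look equally plausible.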
Languages with Efficient Zero-Knowledge PCPs are in SZK
by Mohammad Mahmoody and David Xiao
A PCP is called efficient if the proof oracle can be represented
by a small circuit (i.e., a circuit of size polynomial in the PCP-input).
N.B.: A zero-knowledge PCP (ZKPCP) for a set outside BPP
must use proofs of super-polynomial length,
and (non-efficient) ZKPCPs for NEXP were shown in [KPT, STOC'97].
An interesting result of [GOVW, TCC'12] implies that Quadratic Residuosity
has efficient ZK PCPs. The paper's main result is captured in its title,
and it leaves open the question of characterizing the class efficient-ZKPCP.
Unprovable Security of Perfect NIZK
and Non-interactive Non-malleable Commitments
by Rafael Pass
This paper advocates building a hierarchy of assumptions,
viewing possible constructions
also as assumptions, albeit of a more complex nature.
Building on the suggestion of Naor (2003),
the author presents a taxonomy of assumptions
and separates the two primitives in
the title from the set of "refutable assumptions",
where the separation is w.r.t. black-box proofs of security.
The results also provide an appealing example of natural primitives
that are securely implemented in the ROM but cannot be proved secure
(via a black-box reduction) in the standard model.
Invited Talk by Tal Malkin: Secure Computation for Big Data
This talk focuses on secure multi-party computation on huge data,
where the underlying computation can be performed in sublinear time
(e.g., binary search or any other common data base operation).
Using oblivious RAM and standard MPC (see next),
such secure computation can be performed with polylogarithmic overhead.
Indeed, a very important result of Ostrovsky and Shoup (1997),
which I managed to forget...,
is that secure multi-party computation can be implemented
with polylogarithmic overhead with respect to the time of
a plain RAM computation (rather than w.r.t. the time on a TM).
The solution relies on an Oblivious RAM (ORAM) and a secure MPC
of the reactive functionality performed by its CPU.
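The defining property of an ORAM is that the physical access pattern is independent of the logical addresses accessed. The following toy sketch illustrates that property in its crudest form: a linear-scan "ORAM" that touches every cell on every access. It has $O(n)$ overhead per access, not the polylogarithmic overhead of the real constructions discussed in the talk; it only demonstrates what obliviousness means.

```python
class TrivialORAM:
    # Toy "oblivious RAM": every logical access scans all physical cells
    # in a fixed order, so the sequence of physical addresses touched is
    # independent of the logical address. Overhead is O(n) per access,
    # unlike the polylog overhead of genuine ORAM constructions.
    def __init__(self, n):
        self.mem = [0] * n
        self.trace = []                      # physical addresses touched

    def access(self, addr, write=None):
        out = None
        for i in range(len(self.mem)):       # fixed scan, regardless of addr
            self.trace.append(i)
            if i == addr:
                out = self.mem[i]
                if write is not None:
                    self.mem[i] = write
        return out
```

Since `trace` is the same fixed sequence for every access, an observer of the physical memory learns nothing about which logical cell was read or written.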
Testing the Lipschitz Property over Product Distributions
with Applications to Data Privacy
by Kashyap Dixit, Madhav Jha, Sofya Raskhodnikova,
and Abhradeep Thakurta
This inspiring work starts by relating ("average case") differential privacy
to a Lipschitz condition regarding the utility provided by the algorithm.
For an algorithm $A$, and any possible output $z$,
define $f_z(x) = \log_2(Prob[A(x)=z])$.
Then, $A$ satisfies $e$-differential privacy
if and only if, for every $z$, the function $f_z$ is $e$-Lipschitz.
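This equivalence can be checked concretely for the simplest mechanism, randomized response on a single bit. The sketch below uses natural logarithms rather than $\log_2$; the choice of base only rescales the Lipschitz constant. Here the constant comes out to exactly $e$ (the privacy parameter), matching the claim.

```python
import math

def rr(eps):
    # Randomized response on one bit: output the true bit with
    # probability e^eps / (1 + e^eps). This mechanism is eps-DP.
    p = math.exp(eps) / (1 + math.exp(eps))
    def prob(x, z):                 # Prob[A(x) = z]
        return p if z == x else 1 - p
    return prob

def lipschitz_constant(prob):
    # max over outputs z of |f_z(0) - f_z(1)|, with f_z(x) = ln Prob[A(x)=z]
    return max(abs(math.log(prob(0, z)) - math.log(prob(1, z)))
               for z in (0, 1))
```

For randomized response, $|\ln p - \ln(1-p)| = \ln\frac{p}{1-p} = e$, so the Lipschitz constant of $f_z$ equals the privacy parameter exactly.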
The next idea is that a privacy preserving mechanism that is designed
to allow only questions that preserve privacy (wrt a distribution of databases)
may answer a query (presented as a function of the database)
iff the corresponding functions are Lipschitz.
Hence the role of testing the Lipschitz property.
The main result is a transformation that, given an arbitrary query,
translates it to an admissible one, while keeping admissible queries intact.
Invited Talk by Benny Applebaum:
Cryptographic Hardness of Random Local Functions - Survey
The talk touched on a variety of issues regarding the hardness
of random local functions, while only mentioning the alternative
structured construction of hard local functions by generically compiling
hard log-space functions. One concrete motivation for avoiding the
latter transformation is its effect on the input/output lengths.
Benny pointed out that in the context of local computation there
are implications that are not valid in the standard setting
(e.g., robustness to linear tests implies robustness to stronger tests).
I would also stress that things we take for granted in the standard
setting may not hold in the context of local computation
(or may be hard to establish in it): e.g., amplification
of the stretch of PRGs.
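For concreteness, a random local function maps $n$ input bits to $m$ output bits, where each output bit applies one fixed $d$-ary predicate to a uniformly chosen set of $d$ input positions. The sketch below is a toy illustration of this structure; the particular predicate (XOR of three bits plus an AND of two, mod 2) is one popular choice, not a hardness claim.

```python
import random

def random_local_function(n, m, d=5, seed=0):
    # Each of the m output bits applies the same fixed d-ary predicate
    # to a random d-subset of the n input positions (the local structure).
    rng = random.Random(seed)
    subsets = [rng.sample(range(n), d) for _ in range(m)]

    def P(bits):
        # example predicate (an assumption for illustration):
        # XOR of the first three bits, plus AND of the last two, mod 2
        return (bits[0] ^ bits[1] ^ bits[2]) ^ (bits[3] & bits[4])

    def F(x):
        return [P([x[i] for i in S]) for S in subsets]

    return F
```

Locality $d$ is a constant, so each output bit is computable in constant time given the (public) subset structure; the conjectured hardness lies in inverting or distinguishing $F$ on a random input.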
On the Power of Correlated Randomness in Secure Computation
by Yuval Ishai, Eyal Kushilevitz, Sigurd Meldgaard, Claudio Orlandi and Anat Paskin-Cherniavsky
The model of correlated randomness may be viewed as "extending"
a simple cryptographic primitive to performing more complex tasks
(i.e., general MPC). Alternatively, one may view the correlated randomness
as the outcome of a preprocessing stage,
which is performed before the actual inputs are available. In any case,
this model turns out to be very powerful:
It allows for general MPC with communication that is linear
in the input length (regardless of the circuit size), and it allows for
perfectly secure computation of any unidirectional two-party functionality
(rather than statistically secure computation in which the communication must
grow linearly with the security parameter that determines the error).
In both results the correlated randomness has exponential length,
and it is left open whether this is inherent.
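The basic flavor of these protocols can be illustrated by a "one-time truth table" sketch: a dealer hands out random shifts and an additive secret sharing of a shifted truth table, after which each party sends a single masked input. Communication is linear in the input length, while the correlated randomness has size exponential in it. This is a semi-honest two-party sketch under those assumptions, not the paper's full protocol.

```python
import random

def dealer(n, f, mod=2):
    # Correlated randomness: random shifts r, s and an additive secret
    # sharing M_A + M_B of the shifted truth table (size 2^n x 2^n,
    # i.e., exponential in the input length).
    N = 1 << n
    r, s = random.randrange(N), random.randrange(N)
    M_A = [[random.randrange(mod) for _ in range(N)] for _ in range(N)]
    M_B = [[(f((u - r) % N, (v - s) % N) - M_A[u][v]) % mod
            for v in range(N)] for u in range(N)]
    return (r, M_A), (s, M_B)

def run(n, f, x, y, mod=2):
    # Online phase: one short message per party, independent of |f|.
    (r, M_A), (s, M_B) = dealer(n, f, mod)
    N = 1 << n
    u = (x + r) % N                          # Alice's single message
    v = (y + s) % N                          # Bob's single message
    return (M_A[u][v] + M_B[u][v]) % mod     # reconstruct f(x, y)
```

Each message is a uniformly random element (the input is masked by a fresh shift), so the protocol is perfectly private against a semi-honest party, yet `M_A[u][v] + M_B[u][v] = f(x, y)` by construction.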