Comments regarding a few TCC'13 talks
This is a "farewell from TCC" post:
As I announced in the business meeting,
I am stepping down as chair and member
of the steering committee of TCC,
and also do not intend to attend future venues.
The reason is that, in the last decade,
my research interests and activities have been drifting away from cryptography.
I still find cryptography very interesting, but I believe one cannot
pursue all that one finds interesting: Painful choices to keep away from
many interesting issues (due to limited personal resources) are inevitable.
This posting is aimed only at TCC fans;
thus, I allowed myself many TCC-standard abbreviations.
As usual, I include summaries only of talks and/or extended abstracts
that I understood and was able to appreciate and/or "take home".
1.1 Overcoming Weak Expectations
by Yevgeniy Dodis and Yu Yu
Basically, the paper presents two simple probabilistic inequalities
and discusses their relevance to security w.r.t. deficient cryptographic keys.
Both inequalities refer to a source $R\in\{0,1\}^n$ of deficiency $d$
(i.e., min-entropy $n-d$), and the function $f:\{0,1\}^n\to\mathbb{R}$ represents
a measure of the effect of an adversarial behavior w.r.t. a given key.
For "one sided" security (aka "unpredictable type"), the following
trivial inequality should be applied: $E[|f(R)|] \leq 2^d E[|f(U)|]$,
where $U$ is the uniform distribution over $n$-bit strings.
For "two sided" security (aka "indistinguishability type"), the following
inequality should be applied: $E[f(R)] \leq 2^{d/2} \sqrt{E[f(U)^2]}$.
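The following is a minimal numerical sanity check of the two inequalities (a toy example of my own, not code from the paper): take $R$ uniform over a subset of size $2^{n-d}$, which is a source of deficiency $d$, and an arbitrary non-negative $f$.

    import itertools, math, random

    n, d = 8, 3
    domain = list(itertools.product([0, 1], repeat=n))
    support = random.sample(domain, 2 ** (n - d))          # R: uniform over 2^(n-d) points
    f = {x: random.random() for x in domain}               # an arbitrary f into [0,1]

    E_fR  = sum(f[x] for x in support) / len(support)      # E[f(R)]
    E_fU  = sum(f[x] for x in domain) / len(domain)        # E[f(U)]
    E_fU2 = sum(f[x] ** 2 for x in domain) / len(domain)   # E[f(U)^2]

    assert E_fR <= 2 ** d * E_fU + 1e-9                    # one-sided bound
    assert E_fR <= 2 ** (d / 2) * math.sqrt(E_fU2) + 1e-9  # two-sided bound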
1.2 A Counterexample to the Chain Rule for Conditional HILL Entropy:
And What Deniable Encryption Has to Do with It
by Stephan Krenn, Krzysztof Pietrzak, and Akshay Wadia
This refers to conditional pseudo-entropy (PE) as defined in HILL.
It is shown that $PE(X|Y) > t$ while $PE(X|Y,b)=0$, where $b$ is a random bit (defined in the construction below).
Let $f$ be a OWF and $h$ be a hard-core of $f$.
Consider the following probability space:
- $b\in_R\{0,1\}$,
- For $i=1,...,2t$ select $x_i$ uniformly in $\{0,1\}^n$ s.t. $h(x_i)=b$,
and for $i=2t+1,...,3t$ select $x_i$ uniformly in $\{0,1\}^n$ s.t. $h(x_i)=1-b$.
- Let $X=sort(x_{2t+1},...,x_{3t})$ and $Y=sort(f(x_1),...,f(x_{3t}))$.
Then, $PE(X|Y) = \log_2 {{2t}\choose t}$,
because given $Y$ we need to present $t$ preimages
having the same $h$-value and this value may be arbitrary
(since $b$ cannot be approximated based on $Y$).
On the other hand, $PE(X|Y,b)=0$, since $Y$ and $b$ determine $X$.
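Here is a minimal sketch of the sampling procedure above, with placeholder stand-ins of my own for the OWF $f$ and the hard-core predicate $h$ (SHA-256 and the parity of the input, respectively); it is meant only to make the probability space concrete, not to reproduce the paper's construction.

    import hashlib, random

    n, t = 16, 4

    def f(x):                        # placeholder for a one-way function
        return hashlib.sha256(bytes(x)).hexdigest()

    def h(x):                        # placeholder hard-core bit: parity of the input
        return sum(x) % 2

    def sample_with(bit):            # rejection-sample x in {0,1}^n with h(x) = bit
        while True:
            x = tuple(random.randint(0, 1) for _ in range(n))
            if h(x) == bit:
                return x

    b  = random.randint(0, 1)
    xs = [sample_with(b) for _ in range(2 * t)] + \
         [sample_with(1 - b) for _ in range(t)]
    X  = sorted(xs[2 * t:])                  # the t preimages with h-value 1-b
    Y  = sorted(f(x) for x in xs)            # the sorted images of all 3t samples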
1.3 Hardness Preserving Reductions via Cuckoo Hashing
by Itay Berman, Iftach Haitner, Ilan Komargodski, and Moni Naor
The point is going beyond the birthday bound, which is a natural
barrier in many applications in which (standard) hashing is applied.
Furthermore, the values obtained via Cuckoo hashing are suitable for
many of these applications.
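For readers unfamiliar with the data structure, here is a minimal, generic Cuckoo-hashing sketch (not the paper's construction): each item has two possible cells, one per table, and insertion evicts the current occupant when both are taken. The hash functions here are illustrative placeholders.

    import random

    class CuckooTable:
        def __init__(self, size, max_kicks=100):
            self.size, self.max_kicks = size, max_kicks
            self.t1, self.t2 = [None] * size, [None] * size
            self.s1, self.s2 = random.random(), random.random()  # salts for the two hash functions

        def _h1(self, key):
            return hash((self.s1, key)) % self.size

        def _h2(self, key):
            return hash((self.s2, key)) % self.size

        def insert(self, key):
            for _ in range(self.max_kicks):
                i = self._h1(key)
                if self.t1[i] is None:
                    self.t1[i] = key
                    return True
                key, self.t1[i] = self.t1[i], key    # evict the occupant of table 1
                j = self._h2(key)
                if self.t2[j] is None:
                    self.t2[j] = key
                    return True
                key, self.t2[j] = self.t2[j], key    # evict the occupant of table 2
            return False                             # in practice: rehash with new functions

    table = CuckooTable(64)
    print(sum(table.insert(k) for k in range(30)), "items inserted")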
2.2 Public-Coin Concurrent Zero-Knowledge in the Global Hash Model
by Ran Canetti, Huijia Lin, and Omer Paneth
The question addressed in this paper is whether a *public-coin* interactive
proof system (for NP) can be concurrent zero-knowledge.
(N.B.: The issue is "public coin" vs general interactive proofs.)
Not being able to prove the affirmative answer, they introduce
a relaxed model akin to the "global CRS" model; that is, the same
random hash function is used in all invocations of the protocol.
In this model, they obtain round complexity that is almost logarithmic.
A later work [by whom?] provides an affirmative answer in the standard model,
but uses a polynomial (in security parameter) number of rounds.
Open: can the number of rounds be reduced to (almost) logarithmic?
3.2 When Homomorphism Becomes a Liability
by Zvika Brakerski
Essentially, it is shown that if the decryption function can be weakly learned
in the presence of noise (corresponding to decryption error), then homomorphism
w.r.t. the majority function implies insecurity, since such a homomorphism
provides a way of reducing decryption errors. This yields a way of proving
insecurity of schemes, and was indeed applied to the scheme of Bogdanov and Lee.
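To illustrate the error-reduction point with a toy calculation of my own (not from the paper): if a single ciphertext decrypts incorrectly with probability $p<1/2$, then homomorphically taking the majority of $k$ independent encryptions of the same bit yields a ciphertext whose decryption error drops exponentially in $k$.

    from math import comb

    def majority_error(p, k):
        # probability that at least ceil(k/2) out of k independent decryptions err
        return sum(comb(k, i) * p**i * (1 - p)**(k - i)
                   for i in range((k + 1) // 2, k + 1))

    for k in (1, 9, 49, 99):
        print(k, majority_error(0.3, k))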
4.1 Why "Fiat-Shamir for Proofs" Lacks a Proof
by Nir Bitansky, Dana Dachman-Soled, Sanjam Garg, Abhishek Jain, Yael Tauman Kalai,
Adriana Lopez-Alt, and Daniel Wichs
The work of Goldwasser and Kalai explains why the Fiat-Shamir heuristic fails
for arguments (computationally sound proofs), and the question here is
what happens w.r.t. proofs (i.e., statistically sound proofs).
This work shows that the result of applying this heuristic
cannot be proved secure (based on "standard assumptions")
via a black-box reduction.
5.1 A Full Characterization of Functions that Imply Fair Coin Tossing and Ramifications to Fairness
by Gilad Asharov, Yehuda Lindell, and Tal Rabin
It is easy to reduce (fair) coin tossing to the fair computation of the XOR functionality:
Each party sends a random bit, and the outcome is defined as the XOR of these bits.
The question addressed in this work is what other two-party deterministic functionalities
can be used towards this end. The answer is that a function can be used towards this end
iff, when viewed as a matrix, there exists a linear combination of its rows (ditto columns)
that yields a uniform vector (equiv., the all-one vector).
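As a toy check of this criterion, under my reading of the statement above (an illustration of my own, not the paper's formal statement): for the XOR functionality, averaging the two rows of its truth-table matrix yields the uniform vector, in line with the fact that XOR does imply fair coin tossing.

    import numpy as np

    M = np.array([[0, 1],
                  [1, 0]])           # truth table of XOR: rows indexed by x, columns by y
    weights = np.array([0.5, 0.5])   # a linear combination of the rows
    print(weights @ M)               # -> [0.5 0.5], i.e., the uniform vector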
6.1 Languages with Efficient Zero-Knowledge PCPs are in SZK
by Mohammad Mahmoody and David Xiao
A PCP is called efficient if the proof oracle can be represented
by a small circuit (i.e., a circuit of size polynomial in the PCP-input).
N.B.: A ZK PCP for a set outside BPP must use proofs of super-polynomial length,
and (non-efficient) ZK PCPs for NEXP were shown in [KPT, STOC'97].
An interesting result of [GOVW, TCC12] implies that Quadratic Residuosity
has *efficient* ZK PCPs. The paper's main result is captured in its title,
and it leaves open the question of characterizing the class of sets having efficient ZK PCPs.
6.3 Unprovable Security of Perfect NIZK and Non-interactive Non-malleable Commitments
by Rafael Pass
This paper advocates building a hierarchy of assumptions, viewing possible constructions
also as assumptions, alas of a more complex nature. Building on the suggestion of Naor (2003),
the author presents a taxonomy of assumptions and separates the two primitives in
the title from the set of "refutable assumptions", where the separation is w.r.t. black-box
proofs of security. The result also provides an appealing example of natural primitives
that are securely implemented in the ROM but cannot be proved secure (via a black-box reduction)
in the standard model.
Invited Talk by Tal Malkin: Secure Computation for Big Data
This talk focuses on secure multi-party computation on huge data,
where the underlying computation can be performed in sublinear time
(e.g., binary search or any other common database operation).
Using oblivious RAM (see 7.2 (next)) and standard MPC,
such secure computation can be performed with polylogarithmic overhead.
7.2 Distributed Oblivious RAM for Secure Two-Party Computation
by Steve Lu and Rafail Ostrovsky
A very important result of Ostrovsky and Shoup (1997),
which I managed to forget...,
is that secure multi-party computation can be implemented
with polylogarithmic overhead with respect to the time of
a plain RAM computation (rather than w.r.t. the time on a TM).
The solution relies on an Oblivious RAM (ORAM) and a secure MPC
of the reactive functionality performed by its CPU.
In the current work, this construction is optimized
by introducing an auxiliary model (of "two-party RAM")
and capitalizing on additional features of the known ORAM constructions.
8.1 Testing the Lipschitz Property over Product Distributions with Applications to Data Privacy
by Kashyap Dixit, Madhav Jha, Sofya Raskhodnikova, and Abhradeep Thakurta
This inspiring work starts by relating ("average case") differential privacy
to a Lipschitz condition regarding the utility provided by the algorithm.
For an algorithm $A$, and any possible output $z$,
define $f_z(x) = \log_2(Prob[A(x)=z])$.
Then, $A$ has differential privacy $e$
iff, for every $z$, the function $f_z$ is $e$-Lipschitz.
The next idea is that a privacy-preserving mechanism that is designed
to allow only questions that preserve privacy (w.r.t. a distribution of databases)
may answer a query (presented as a function of the database) iff the corresponding
functions are Lipschitz. Hence the role of testing the Lipschitz property.
The main result is a transformation that, given an arbitrary query, translates
it to an admissible one, while keeping admissible queries intact.
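A small numerical illustration of the definition above (my own example, not taken from the paper): for the Laplace mechanism $A(x)=\mathrm{count}(x)+\mathrm{Lap}(1/e)$ applied to a counting query of sensitivity 1, the log-density $f_z(x)=\log\Pr[A(x)=z]$ (natural log, as in the standard $e$-differential-privacy formulation; using $\log_2$ as above only changes the constant) changes by at most $e$ when a single record of $x$ changes, i.e., $f_z$ is $e$-Lipschitz.

    import math

    def log_laplace_density(z, mean, scale):
        # log of the Laplace(mean, scale) density at z
        return -math.log(2 * scale) - abs(z - mean) / scale

    e = 0.5                                                    # privacy parameter
    for z in (-3.0, 0.0, 2.5, 10.0):
        f_z_x1 = log_laplace_density(z, mean=7, scale=1 / e)   # database with count 7
        f_z_x2 = log_laplace_density(z, mean=8, scale=1 / e)   # neighbouring database, count 8
        assert abs(f_z_x1 - f_z_x2) <= e + 1e-12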
8.2 Limits on the Usefulness of Random Oracles
by Iftach Haitner, Eran Omri, and Hila Zarosim
While it is known that the ROM is "useful" (i.e., "too useful")
in many cases, this work asks whether there are cases in which
the ROM is not useful; that is, it seeks to identify general classes
of functionalities for which a solution in the ROM implies the existence
of a solution in the plain model (without using any additional assumptions).
They show one such class -- the class of input-free functionalities
(e.g., coin tossing or KE). Indeed, the fact that KE cannot be based
on OWF follows as a corollary, since the ROM provides an OWF.
8.3 Analyzing Graphs with Node Differential Privacy
by Shiva Prasad Kasiviswanathan, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith
This work studies differential privacy in the context of data that is represented
as a graph (rather than as a plain array). This model was studied before mainly
in the context of "edge privacy" (i.e., databases are at distance 1 iff they
differ on one edge) and/or for dense graphs, but it seems that the current work
that focuses on "node privacy" (i.e., the removal of a vertex and its edges)
in the bounded-degree model is more appealing.
9.1 Universally Composable Synchronous Computation
by Jonathan Katz, Ueli Maurer, Bjorn Tackmann, and Vassilis Zikas
It is observed that the standard UC framework is inherently asynchronous,
and the standard protocols do not have a guaranteed termination time.
This paper focuses on integrating various notions of synchrony
into the UC framework.
9.3 On the Feasibility of Extending Oblivious Transfer
by Yehuda Lindell and Hila Zarosim
By "extending OT" one means implementing more than $n$ instances of OT based
on $n$ invocations of the OT protocol.
Such an extension can be done by using OWF,
but cannot be done in the information theoretic model [Beaver'96].
This work shows (1) that OWF (rather than a weaker computational assumption)
is indeed necessary for extending OT, (2) that in the malicious model
the minimal number of OTs being used must be super-logarithmic
(or else one gets an OT from scratch), and (3) that extending OT in
the malicious adaptive model requires assumptions at least as strong
as those needed for static OT.
10.2 Revisiting Lower and Upper Bounds for Selective Decommitments
by Rafail Ostrovsky, Vanishree Rao, Alessandra Scafuro, and Ivan Visconti
The question of selective decommitment first occurred to me in the
summer of 1986, in the wake of working on GMW2.
I also discussed it with Ran Canetti in the early 1990s, but it seems that
we did not mention it in our paper with Feige and Naor
(on "Adaptively Secure Multi-party Computation").
This may explain how pleased I was when seeing the elegant 5-round
protocol of this work. Ditto the other results (e.g., Thm 3).
10.3 On the Circular Security of Bit-Encryption
by Ron D. Rothblum
It seems to be a folklore belief that any bit-encryption scheme is circular secure,
but this work outlines two obstacles towards proving that conjecture:
Firstly, such a proof cannot use a black-box reduction (of circular
security to standard security, or even to the stronger CCA security).
Secondly, such a proof would imply that the multilinear map conjecture
is false. While the latter conjecture seems a significant extension
of the (already questionable) bilinear map conjecture, ruling it out
does not seem easy...
Invited Talk by Benny Applebaum:
Cryptographic Hardness of Random Local Functions - Survey
The talk touched on a variety of issues regarding the hardness
of random local functions, while only mentioning the alternative
structured construction of hard local functions by generically compiling
hard log-space functions. One concrete motivation for avoiding the
latter transformation is its effect on the input/output lengths.
Benny pointed out that in the context of local computation there
are implications that are not valid in the standard setting
(e.g., robustness to linear tests implies robustness to stronger tests).
I would also stress that things we take for granted in the standard
setting may not hold in the context of local computation
(or may be hard to establish in it): e.g., amplification
of the stretch of PRGs.
11.1 On the Power of Correlated Randomness in Secure Computation
by Yuval Ishai, Eyal Kushilevitz, Sigurd Meldgaard, Claudio Orlandi and Anat Paskin-Cherniavsky
The model of correlated randomness may be viewed as "extending" a simple cryptographic
primitive to performing more complex tasks (i.e., general MPC). Alternatively,
one may view the correlated randomness as the outcome of a preprocessing stage,
which is performed before the actual inputs are available. In any case,
this model turns out to be very powerful: It allows for general MPC with communication
that is linear in the input length (regardless of the circuit size), and it allows
for perfectly secure computation of any unidirectional two-party functionality
(rather than statistically secure computation in which the communication must
grow linearly with the security parameter that determines the error).
In both results the correlated randomness has exponential length,
and it is left open whether this is inherent.
12.3 Randomness-Dependent Message Security
by Eleanor Birrell, Kai-Min Chung, Rafael Pass, Sidharth Telang
This work introduces the notion of randomness-dependent message (RDM)
security, which has a spirit analogous to KDM (key-dependent message) security.
Also in this case, the motivation that I find more convincing is
that such schemes are a powerful primitive, having applications
that are not offered by standard encryption schemes.
(Indeed, also in this case, I'm unconvinced by the other motivation;
that is, the need to develop a theory of undoing damage caused by
careless implementations. Theory should insist that implementations
conform to basic cryptographic principles such as keeping the
keys and the encryption-randomness out of the reach of the
application level.)
This work provides a few basic results regarding this new notion.
[For a change, I have also attended the Rump session,
which featured relatively few talks and many announcements
(of conferences and positions).]
R1. How to Garble RAM Programs (Ostrovsky).
The main result was already reviewed by me
in my choices (Nr 106).
A very recent result, just reported, is a RAM analogue of
a recent result of Goldwasser et al.
on reusable garbled circuits.
R2. Malicious security Based on Garbled Circuits achieving $2^{-s}$ security
with just $s$ circuits (Lindell).
Consider the standard cut-and-choose technique applied for verifying the
proper construction of garbled circuits: $s$ garbled copies are constructed
and sent, and $s/2$ are selected at random and revealed for verification
of proper construction. But when computing with the remaining $s/2$ copies
and seeing inconsistent answers, what should we do? We should complain only
if there are many non-majority outcomes, which means that our detection
probability will be $1-2^{-\Omega(s)}$ rather than $1-2^{-s}$. The alternative
suggested in this talk is to construct $O(s)$ (small) auxiliary
circuits that will reveal the constructor's input if any inconsistency
is detected (via the secret associated with the circuit's outputs),
and to apply the wasteful analysis only to the small circuits.
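A back-of-the-envelope calculation of my own (under simplifying assumptions, and reflecting only my understanding of the talk) contrasting the two probabilities: in the majority-based analysis, an adversary who corrupts $t$ of the $s$ circuits succeeds if none of them is opened and they form a majority of the $s/2$ evaluated copies; in the improved approach a single honest evaluated circuit already suffices, so the adversary essentially must corrupt exactly the evaluated set, which happens with probability about $2^{-s}$ when each circuit is checked independently.

    from math import comb

    def majority_cheat(s):
        # best cheating probability when the adversary must evade the s/2 opened
        # circuits and corrupt a majority of the s/2 evaluated ones
        half = s // 2
        best = 0.0
        for t in range(half // 2 + 1, half + 1):
            p = comb(s - t, half) / comb(s, half)   # no corrupted circuit is opened
            best = max(best, p)
        return best

    for s in (8, 16, 32, 40):
        print(s, majority_cheat(s), 2.0 ** -s)      # 2^{-Omega(s)} vs 2^{-s}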
R3. Interactive Coding Revisited (Pass).
The starting point is the observation that communication-efficient
interactive coding schemes do not preserve security, since they involve rewinding.
Indeed, it is a theorem that if one wishes to preserve security,
then msg-by-msg encoding is optimal. However, we may consider
"computationally bounded noise" (as done for single-msg encoding).
Indeed, assuming OWF, one may get a corresponding result for
interactive coding, and the use of OWF is necessary.