Some highlights of ICALP25
The 52nd EATCS International Colloquium on Automata, Languages,
and Programming (ICALP)
took place on the campus of Aarhus University.
I really liked the conference center in which ICALP25 took place
and enjoyed visiting the charming city of Aarhus again.
Below are my comments on some of the talks I attended.
Dana Ron - On Tolerant Testing and Distance Approximation
Tolerant testing is a generalization of property testing.
It refers to the study of sublinear-complexity algorithms
for solving an approximate decision problem in which one
has to distinguish between
instances that are close to the predetermined property
and instances that are far from the property.
The actual definition specifies a way of accessing parts of the instance
as well as a distance measure and two parameters that specify what is far
and what is close. These parameters may be arbitrary in some cases,
and related in other cases (e.g., being far may mean being at distance
that is greater than twice the distance that is considered close).
Tolerant testing (in the arbitrary-parameters version)
is closely related to distance approximation,
but may be weaker when only specific pairs
of distance parameters are allowed.
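
To make the definition concrete, here is a minimal Python sketch
(my own illustration, not taken from the talk) of a tolerant tester
for the toy property of being the all-zeros string: under relative
Hamming distance, the distance from this property is simply the
fraction of 1s, so estimating that fraction by sampling
distinguishes the two cases.

    import random

    def tolerant_all_zeros_tester(query, n, eps1, eps2, trials=None):
        """Toy tolerant tester for the property 'all-zeros' under relative
        Hamming distance: the distance of a string from the property equals
        its fraction of 1s.  Accepts (w.h.p.) strings at distance <= eps1,
        rejects (w.h.p.) strings at distance > eps2; the sample complexity
        depends only on the gap eps2 - eps1, not on n.
        """
        assert eps1 < eps2
        if trials is None:
            trials = int(20 / (eps2 - eps1) ** 2)  # Chernoff-style bound
        ones = sum(query(random.randrange(n)) for _ in range(trials))
        # Accept iff the empirical fraction of 1s is below the midpoint.
        return ones / trials <= (eps1 + eps2) / 2

    # Usage: a 1000-bit string at distance 0.05 from all-zeros,
    # tested with (eps1, eps2) = (0.1, 0.3) -- should be accepted.
    x = [1 if i % 20 == 0 else 0 for i in range(1000)]
    print(tolerant_all_zeros_tester(lambda i: x[i], len(x), 0.1, 0.3))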
Approximating Dasgupta Cost in Sublinear Time from a Few Random Seeds
(by M. Kapralov, A. Kumar, S. Lattanzi, A. Mousavifar, W. Wrzos-Kaminska)
The Dasgupta Cost of a graph is the sum $\sum_{v\in V}\rk(v)\cdot \deg(v)$,
where $\rk(v)$ is the ranking of $v$ when ordering the vertices according
to their degree (such that $\rk(v) \leq \rk(w)$ iff $\deg(v) \geq \deg(w)$).
Can this quantity be approximated using a sublinear number of queries?
The paper provides a positive answer in the special case that the graph
is expanding.
Recall that the sum of vertex degrees, as well as the
sum of powers of degrees, can be approximated in sublinear time
(see work by
Eden, Ron, and Seshadhri).
A general formulation that covers all cases is given by
the sum $\sum_{v\in V} f(v)$,
where $f$ is an arbitrary ``nice'' function.
Note that one may consider the problem when allowing
degree queries only (as done in the current work),
but incidence queries may be useful, especially when
the function $f$ is not arbitrary
(as in the cases discussed above).
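
For concreteness, the following Python sketch (my own; all names and
parameters are illustrative) spells out the naive unbiased estimator
for sums of the form $\sum_{v\in V} f(\deg(v))$, which covers the
degree and power-of-degree examples, using degree queries only.
The cited sublinear-time algorithms are far more refined; plain
uniform sampling does not, in general, control the variance.

    import random

    def naive_degree_sum_estimator(deg_query, n, f, samples):
        """Naive unbiased estimator of sum_{v in V} f(deg(v)) using
        uniform vertex samples and degree queries only.  This merely
        illustrates the query model (each sample costs one degree
        query); it carries no variance guarantee in general.
        """
        total = sum(f(deg_query(random.randrange(n))) for _ in range(samples))
        return n * total / samples

    # Usage on a small explicit graph, with f(d) = d^2
    # (so the target quantity is the sum of squared degrees).
    degs = [3, 1, 4, 1, 5, 9, 2, 6]
    est = naive_degree_sum_estimator(lambda v: degs[v], len(degs),
                                     lambda d: d * d, samples=1000)
    print(est, sum(d * d for d in degs))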
A 0.51-Approximation of Maximum Matching in Sublinear $n^{1.5}$ Time
(by S. Mahabadi, M. Roghani, J. Tarnawski)
What I found appealing in this work is the extensive and clever
use of local computation. Specifically, a problem about graphs
is solved by local computation of auxiliary graphs, and application
of other sublinear-time algorithms to the resulting graphs.
Relative-error testing of conjunctions and decision lists
(by X. Chen, W. Pires, T. Pitassi, R. Servedio)
The relative-error model was introduced in
order to study the complexity of testing sparse Boolean functions.
This model deviates from the standard property testing model
by defining a function $f:[N]\to\{0,1\}$ as $\eps$-close to a set $F$
if there exists $f'\in F$ that differs from $f$ on
at most $\eps\cdot|f^{-1}(1)|$ (rather than $\eps\cdot N$) entries.
Testers in this model are also given access to a sampling device
that returns uniformly distributed elements of $f^{-1}(1)$.
(Unfortunately, the term `relative error' only captures
the first aspect but not the second one.)
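
As a small illustration of both aspects of the model, here is a
Python sketch (my own, with hypothetical helper names) that contrasts
the two distance measures on a sparse function and mimics the
sampling device, representing a function by its set of 1-entries.

    import random

    def rel_distance(f_ones, g_ones):
        """Relative-error distance of f from g, where each function
        f:[N]->{0,1} is represented by its set of 1-entries: the number
        of disagreements is normalized by |f^{-1}(1)| rather than by N."""
        return len(f_ones ^ g_ones) / len(f_ones)

    def sample_one(f_ones):
        """The model's sampling device: a uniform element of f^{-1}(1)."""
        return random.choice(sorted(f_ones))

    # A sparse f over [N]: its absolute distance to g is negligible
    # (2/N), but its relative distance is large (2/3).
    N = 10**6
    f_ones = {1, 2, 3}
    g_ones = {1, 2, 4}
    print(len(f_ones ^ g_ones) / N)      # standard (absolute) distance
    print(rel_distance(f_ones, g_ones))  # relative-error distance
    print(sample_one(f_ones))            # a uniform 1-entry of f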
One interesting result proved in this paper is that,
under some reasonable assumptions,
the query complexity of standard testing is upper-bounded by
the query complexity of testing in the relative-error model.
One may assume, wlog, that $F$ is not empty,
so the main condition is that the query complexity
of the given tester behaves nicely (essentially,
increases at least linearly in $1/\eps$).
As a corollary to the foregoing combined with their tester
for Monomials in the relative-error model, they obtain
a one-sided tester in the standard model
with complexity linear in $1/\eps$.
In fact, they can also obtain
a (constant-query) proximity-oblivious tester
for Monomials with polynomial rejection probability
(cf. Section 1.4 in HERE).
Sepehr Assadi - Presburger Award lecture
The focus was on recent lower bounds on the space complexity
of multi-pass graph streaming problems,
where the emphasis is on a small number of passes
(e.g., a constant, polylog, or even just sublinear).
A key technique that was highlighted is obtaining
a lower bound for $(p+1)$-pass algorithms by considering a related
problem and a lower bound that refers to $p$-pass algorithms.
This reminded me of a talk by Jukka Suomela
at the 3rd WOLA,
which referred to the (distributed computing) LOCAL model
(where locality lower bounds were obtained through round elimination).
Mary Wootters - List-Recovery and (Non)-Applications
List-recovery can be seen as a generalization of list decoding.
Recall that in the latter case, for parameters $\delta$ and $L$,
one is interested in the list of size at most $L$ that consists of
all codewords that are at relative distance at most $\delta$ from
a given word. In that case, the focus is on positive $\delta$
(since the case of $\delta=0$ is trivial).
In contrast, in the generalization also the case of $\delta=0$
is interesting, and an additional parameter $\ell$ is added
(where list decoding corresponds to the case $\ell=1$).
In that case, we consider codes $C\subseteq\Sigma^n$,
and for each sequence of sets $S_1,...,S_n\subseteq\Sigma$
such that $|S_i| \leq \ell$ for every $i$,
we seek a list (of size at most $L$) of all codewords $(c_1,...,c_n)\in C$
such that $c_i\in S_i$ for at least $(1-\delta)\cdot n$ of the $i$'s.
As in the case of list decoding (and unique decoding),
there is the combinatorial problem of existence of such codes,
and the algorithmic question of constructing them
and performing encoding and decoding efficiently.
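
To spell out the definition, here is a brute-force Python sketch
(my own illustration, feasible only for toy codes) that computes
the list-recovery output for a given code, input sets, and
parameter $\delta$.

    def list_recover(code, sets, delta):
        """Brute-force list recovery: given a code C over alphabet Sigma
        (a list of length-n codewords) and input sets S_1,...,S_n with
        |S_i| <= ell, return all codewords (c_1,...,c_n) satisfying
        c_i in S_i for at least (1-delta)*n of the coordinates i."""
        n = len(sets)
        return [c for c in code
                if sum(1 for i in range(n) if c[i] in sets[i]) >= (1 - delta) * n]

    # Toy code over Sigma = {0,1,2} with n = 4, queried with input sets
    # of size ell = 2 and delta = 0.25 (i.e., at least 3 agreements).
    code = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 1, 2, 0), (2, 2, 2, 2)]
    sets = [{0, 1}, {0, 1}, {1, 2}, {0, 2}]
    print(list_recover(code, sets, delta=0.25))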
References
See the program and the proceedings.