Let me seize the opportunity and publicize the new website of the RANDOM conference.
As usual, I am posting comments on a few presentations that I attended, with the usual caveat that any omissions reflect my own limitations rather than anything else. Here is a list of the presentations I will comment on.
I wish to highlight two technical observations.
To obtain an $n$-vertex expander when given an $m$-vertex expander, for $m\in[n,2n]$, pick $2(m-n)$ vertices, pair them, merge each pair into a single vertex, and add self-loops on the unpaired vertices to maintain regularity. Assuming the $m$-vertex graph has a second eigenvalue smaller than $L$, the resulting $n$-vertex graph has a second eigenvalue smaller than $(1+3L)/2$. (Indeed, one should first get $L<1/3$, so that $(1+3L)/2<1$.)
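Here is a minimal sketch (mine, just to make the merging step concrete) in Python; it assumes the input is a $d$-regular multigraph given as an edge list, with $d$ even, and pairs the first $2(m-n)$ vertices arbitrarily.

    def shrink_expander(m, n, d, edges):
        # Merge m-n disjoint pairs of vertices and pad the remaining vertices with
        # self-loops, so that the result is a 2d-regular multigraph on n vertices
        # (a self-loop contributes 2 to the degree of its vertex).
        assert n <= m <= 2 * n and d % 2 == 0
        k = m - n                                  # number of pairs to merge
        # vertices 2i and 2i+1 (for i < k) are merged into new vertex i;
        # the remaining (unpaired) vertices are shifted down by k
        def rename(v):
            return v // 2 if v < 2 * k else v - k
        new_edges = [(rename(u), rename(v)) for (u, v) in edges]
        # merged vertices now have degree 2d; give each unpaired vertex
        # d/2 self-loops to raise its degree from d to 2d as well
        for v in range(k, n):
            new_edges.extend([(v, v)] * (d // 2))
        return new_edges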
Added (on 17-Oct-19): I was reminded that I heard a different construction from Noga Alon many years ago; see my current memo.
This weird observation is, of course, made in retrospect. The actual story is that the authors suggested $0.5AB+0.5BA$ as an alternative notion of a product, aimed at overcoming technical problems with the straightforward notion of a product.
The property being tested consists of functions $F:[n]^d\to\GF(2)$ such that there exist functions $f_1,...,f_d:[n]\to\GF(2)$ satisfying $F(x_1,...,x_d) = \sum_{i\in[d]} f_i(x_i)$. Note that neither $[n]$ nor the $f_i$'s is endowed with any algebraic structure; such structure is present only in the sum.
The tester consists of selecting uniformly $x,y\in[n]^d$, defining $f_{x,y}:\GF(2)^d\to\GF(2)$ such that $f_{x,y}(a)=F(z_1,...,z_d)$, where $z_i=x_i$ if $a_i=1$ and $z_i=y_i$ otherwise, and running a 4-query (BLR-type) affinity test on $f_{x,y}$; that is, selecting uniformly $a,b\in\GF(2)^d$ and accepting iff $f_{x,y}(a)+f_{x,y}(b)=f_{x,y}(a+b)+f_{x,y}(0)$.
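For concreteness, here is a minimal sketch of this tester in Python (my own rendering, assuming oracle access to $F$ as a function that takes a $d$-tuple over $[n]$ and returns a bit).

    import random

    def direct_sum_test(F, n, d):
        # select x,y uniformly in [n]^d
        x = [random.randrange(n) for _ in range(d)]
        y = [random.randrange(n) for _ in range(d)]
        # f_{x,y}(a) = F(z), where z_i = x_i if a_i = 1 and z_i = y_i otherwise
        def f(a):
            return F(tuple(x[i] if a[i] else y[i] for i in range(d)))
        # 4-query BLR-type affinity test:
        # accept iff f(a) + f(b) = f(a+b) + f(0), with arithmetic over GF(2)
        a = [random.randrange(2) for _ in range(d)]
        b = [random.randrange(2) for _ in range(d)]
        a_plus_b = [(a[i] + b[i]) % 2 for i in range(d)]
        return (f(a) + f(b)) % 2 == (f(a_plus_b) + f([0] * d)) % 2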
The analysis starts by asserting (based on BCHKS) that if the test accepts with probability $1-\eps$, then $E_{x,y}[\delta(f_{x,y},AFFINE)] \leq \eps$, where $\delta$ denotes the relative Hamming distance. Hence, $E_{y}[\delta(f_{x,y},AFFINE)] \leq \eps$ for some fixed $x$. So, fixing such an $x$ and letting $g_y:\GF(2)^d\to\GF(2)$ denote the affine function closest to $f_{x,y}$, we have $E_{y}[\delta(f_{x,y},g_y)] \leq \eps$. Now, the main step is to use [DS] in order to argue that there exists a global function $G:[n]^d\to\GF(2)^d$ such that $Pr_{y}[G(y) \neq g_y] = O(\eps)$, where the $d$-sequence $G(y)$ is viewed as an affine function (or rather as its linear part).
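As a sanity check (my own, not part of the analysis above), note that if $F$ has the foregoing form, then $f_{x,y}(a) = \sum_{i\in[d]} (a_i\cdot f_i(x_i) + (1-a_i)\cdot f_i(y_i)) = \sum_{i\in[d]} a_i\cdot(f_i(x_i)+f_i(y_i)) + F(y)$, where arithmetic is over $\GF(2)$; that is, $f_{x,y}$ is indeed affine, with linear part $(f_i(x_i)+f_i(y_i))_{i\in[d]}$, which matches the fact that $G(y)$ is viewed as the linear part of an affine function.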
The focus of this paper is on the disjoint repair group property, where a code is said to have the $t$-disjoint RGP if for every bit location in the codeword there exist $t$ disjoint sets of locations, each of which allows one to reconstruct the value of the corresponding bit.
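For example (mine, just to illustrate the definition), in the Hadamard code, which encodes $x\in\GF(2)^k$ by the values $\langle x,a\rangle$ for all $a\in\GF(2)^k$, the bit at a nonzero location $a$ equals the sum of the bits at locations $b$ and $a+b$, and the pairs $\{b,a+b\}$ with $b\not\in\{0,a\}$ form $(2^k-2)/2$ disjoint repair groups of size two.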
I have been fascinated for a few years by the following related feature of Reed-Muller codes: the domain of the code contains many small subsets that are almost pairwise disjoint such that the restriction of the code to any of these subsets is a small code. Note that a $d$-wise Cartesian product of a linear code of block-length $n$ yields a code of block-length $N=n^d$ that has $d\cdot n^{d-1} = o(N)$ such sets, whereas taking curves of degree $t$ in $\GF(n)^d$ yields $n^{(t+1)\cdot d-1}$ such sets (of pairwise intersection $d$). I was wondering whether a similar feature can be present in combinatorially based codes.
In the standard (single source) recovery problem one is given many traces of a fixed string, where each trace is obtained by subjecting the string to some random corruption model (e.g., random omissions and substitutions), and is asked to recover the string. In the population problem one is given traces of several such source strings and is asked to reconstruct all strings (or strings close to them). That is, each sample is generated by selecting a source according to a fixed (but unknown) distribution over the sources, and then a trace is generated as in the single source case. This work reduces the population problem, in the average-case where the sources are distributed uniformly and independently, to the single source problem (also in the average-case). I was wondering whether such reductions can hold (in the worst case) for a rich class of sources that are characterized in some appealing way, or for other distributions (e.g., of sufficiently high min-entropy).
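To illustrate the sampling model (a generic sketch of mine; the actual corruption model and parameters are those of the paper, which I am not reproducing), a single trace in the population setting might be generated as follows.

    import random

    def generate_trace(sources, weights, delete_prob=0.1, flip_prob=0.1):
        # sources: list of source strings, each a list of 0/1 bits;
        # weights: the fixed (but unknown to the algorithm) distribution over sources
        s = random.choices(sources, weights=weights, k=1)[0]
        # subject the selected string to a simple corruption model:
        # random omissions and random substitutions
        trace = []
        for bit in s:
            if random.random() < delete_prob:
                continue                      # omission
            if random.random() < flip_prob:
                bit ^= 1                      # substitution
            trace.append(bit)
        return trace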
This paper addresses the most important problem that was left open in the area after the breakthrough result of Chattopadhyay and Zuckerman (2015); that is, constructing a two-source extractor that can handle low rates of min-entropy with negligible extraction error. (Another important problem, left open by CZ15, was resolved by Li.) (The current record for this setting is held by Lewko (2018), who handles any min-entropy rate above 4/9.)
The current paper handles a relaxed version of the problem, constructing a condenser, which outputs a distribution on $m$-bit strings that is close to a distribution with min-entropy $m-o(\log(1/\eps))$ (rather than being close to the uniform distribution). The other parameters are very good in comparison to what is known for two-source extractors, although I personally don't care about the difference between poly-logarithmic min-entropy and logarithmic min-entropy (let alone the constant in front of the log function, which is important only for the construction of Ramsey graphs).
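For clarity (my own paraphrase of the standard definitions, not the paper's exact formulation): a two-source extractor for min-entropy $k$ is a function $Ext:\{0,1\}^n\times\{0,1\}^n\to\{0,1\}^m$ such that $Ext(X,Y)$ is $\eps$-close to the uniform distribution over $\{0,1\}^m$ whenever $X$ and $Y$ are independent and each has min-entropy at least $k$; the corresponding condenser only requires that $Ext(X,Y)$ be $\eps$-close to some distribution of min-entropy at least $m-o(\log(1/\eps))$.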
The standard definition of (averaging) samplers refers to the approximation error that they achieve for the average value of a bounded function defined over a huge domain. Of course, the target function must satisfy some structural condition in order to allow for an efficient approximation (e.g., it is infeasible to distinguish the all-zero function from a function that obtains a huge value on a single element), but requiring the function to be strictly bounded is an overkill.
This work considers subgaussian functions; that is, functions $f:\{0,1\}^m\to\Reals$ that satisfy $E[e^{t\cdot f(U_m)}] \leq e^{O(t^2)}$ for every $t$, and shows that the parameters achieved for bounded functions can also be achieved in the subgaussian case. This improves over what one gains by restricting the range of such functions to an interval of length $O(m^{1/2})$. A key tool, which may be of independent interest, is an adaptation of much of the relevant theory, which is pivoted on error measured in terms of total variation distance, to error measured in terms of KL-divergence.
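As a baseline (my own illustration, not the paper's construction), recall that the trivial averaging sampler just takes independent uniform samples and outputs their empirical mean; the point of the theory referred to above is to achieve comparable accuracy and confidence while using far fewer random bits, and the current work extends such guarantees from bounded to subgaussian $f$.

    import random

    def naive_averaging_sampler(f, m, t):
        # estimate E[f(U_m)] by the empirical mean of f on t independent uniform
        # samples from {0,1}^m (each sample represented as an m-bit integer);
        # this uses t*m random bits, which is what better samplers aim to reduce
        total = 0.0
        for _ in range(t):
            x = random.getrandbits(m)
            total += f(x)
        return total / t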
The main contribution of this work is a derandomization of the multi-switching lemma (akin to the derandomization of the switching lemma by Trevisan and Xue).