On (Near-)Optimal Algorithms for Sparse Convolution

by Nick Fischer

Oded's comments

The talk actually focused on multiplying sparse polynomials, or rather on an optimal reduction of this problem to the problem of multiplying general (univariate) polynomials. Specifically, the input is a pair of univariate polynomials of degree $n$, and $t$ is a bound on the number of non-zero coefficients in the input and output polynomials. The main result is a linear-time reduction of this problem to the multiplication of univariate polynomials of degree $O(t)$.
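For concreteness, here is a minimal Python sketch of the setting (my own illustration, not code from the talk): a sparse polynomial is stored as an exponent-to-coefficient dictionary, and the school-book multiplication below is the quadratic-time baseline that the reduction is meant to beat.

```python
# A sparse polynomial is represented as an {exponent: coefficient} dict.
# The school-book multiplication takes O(t^2) coefficient operations when
# each factor has at most t non-zero terms; the reduction aims instead at
# the near-linear cost of multiplying dense polynomials of degree O(t).

def sparse_mult_naive(P, Q):
    """Multiply sparse polynomials given as {exponent: coefficient} dicts."""
    R = {}
    for i, p in P.items():
        for j, q in Q.items():
            R[i + j] = R.get(i + j, 0) + p * q
    # drop terms whose coefficients cancelled to zero
    return {e: c for e, c in R.items() if c != 0}
```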

As a warm-up, Nick presented an over-simplified randomized reduction, which is worse by a polylogarithmic factor. The basic idea is to use an adequate hashing of high-degree polynomials to polynomials whose degree relates to the sparsity parameter, $t$. Specifically, the polynomial $P(x)=\sum_{i=0}^n p_i x^i$ is mapped to the polynomial $HASH_h(P)=\sum_{i=0}^n p_i x^{h(i)}$, where $h(i)=i \bmod m$ and $m$ is a random prime of magnitude $O(t\log n)$. The key observations are:

1. The hashing is compatible with multiplication: since $h(i+j)\equiv h(i)+h(j) \pmod m$, multiplying $HASH_h(P)$ and $HASH_h(Q)$ and reducing the exponents modulo $m$ yields $HASH_h(P\cdot Q)$. Hence, the latter can be obtained via a single multiplication of polynomials of degree smaller than $m=O(t\log n)$.
2. Collisions are rare: for any two distinct exponents $i\neq j$ that appear in the input or output polynomials, a collision $h(i)=h(j)$ requires the random prime $m$ to divide $|i-j|\leq n$, which has at most $\log_2 n$ prime factors; hence, such a collision occurs with probability $O(1/t)$ over the choice of $m$.
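To give a feeling for the hashing, the following Python sketch (my own illustration; the helper names and the trial-division primality test are not from the talk) implements the map $P\mapsto HASH_h(P)$ and lets one check that reducing exponents modulo a prime $m$ commutes with polynomial multiplication.

```python
import random

def is_prime(n):
    """Trial division; adequate for the small moduli used in this sketch."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def random_prime(lo, hi):
    """A uniformly random prime in [lo, hi); in the reduction, hi = O(t log n)."""
    while True:
        m = random.randrange(lo, hi)
        if is_prime(m):
            return m

def hash_poly(P, m):
    """Map sum_i p_i x^i to sum_i p_i x^(i mod m), summing colliding terms."""
    H = {}
    for i, p in P.items():
        H[i % m] = H.get(i % m, 0) + p
    return {e: c for e, c in H.items() if c != 0}
```

Since $h(i+j)\equiv h(i)+h(j) \pmod m$, hashing the product $P\cdot Q$ gives the same polynomial as multiplying the two hashed polynomials and reducing the exponents modulo $m$; a collision-free bucket of this product thus reveals one coefficient of $P\cdot Q$.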

(The foregoing does not really work as is, but is rather meant to provide some feeling of the ideas involved.)


Back to list of Oded's choices.