## Lecture notes on Boolean Matrix Multiplication

by Virginia Vassilevska Williams

#### Oded's comments

Boolean Matrix Multiplication (BMM) is defined as integer MM
except that product is replaced by Boolean conjunction
and addition is replaced by Boolean disjunction.
Obviously, BMM can be reduced to integer matrix multiplication,
which implies that BMM for n-by-n matrices can be performed
in $O(n^\omega)$ time, where $\omega \in[2,2.373]$
is the exponent for integer matrix multiplication.
However, there are several reasons for seeking to avoid this reduction:

- Obtaining a more transparent and practical algorithm for BMM.
These issues become acute when moving from Strassen's original
algorithm (1969) to subsequent improvements.
- Handling generalizations and variants of BMM that are
not reducible to integer MM.
- Handling matrix multiplication in other semirings,
which are also not reducible to integer MM.
- Seeking alternative algorithmic techniques,
which may eventually lead to faster algorithms for BMM
or even for integer MM.
- Understanding the impact of various features of a field
on the complexity of matrix multiplication (over the field).
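For concreteness, the straightforward reduction of BMM to integer MM mentioned above can be sketched as follows (illustrative Python; the function name is mine, and the cubic inner product here stands in for a fast integer MM routine):

```python
# Reduce BMM to integer MM: multiply the 0/1 matrices over the
# integers, then map every nonzero entry of the product to 1.
def bmm_via_integer_mm(A, B):
    n = len(A)
    # Integer matrix product of the 0/1 matrices
    # (a fast MM algorithm would be used here instead).
    P = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    # An entry of the Boolean product is 1 iff the integer entry is nonzero.
    return [[int(P[i][j] > 0) for j in range(n)] for i in range(n)]
```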

It is common practice to identify the foregoing objectives
with the vague term *combinatorial algorithms*,
but I fail to understand why this is considered a good term.
(In light of the equivalence of BMM and Triangle Detection,
I would not object to viewing BMM as a combinatorial problem,
but this view does not clarify what a combinatorial algorithm is.)
In any case, lacking a good definition for a research direction
does not mean that the direction is not interesting and well-motivated.
In fact, I do believe that the search for alternative algorithms
for BMM is interesting and well-motivated.
The lecture notes present
two alternative algorithms for BMM,
but I suggest starting with the following simpler algorithm:
Partition the ($n$-by-$n$) matrices into blocks of size $s$-by-$s$,
where $s=\sqrt{\log_2n}$,
and consider the resulting $n/s$-by-$n/s$ matrices (of matrices).
Using the straightforward BMM algorithm (for the $n/s$-by-$n/s$ matrices),
along with precomputed tables for multiplication
and "addition" of $s$-by-$s$ (Boolean) matrices,
we get an algorithm for BMM of $n$-by-$n$ matrices.
Its complexity is $O((n/s)^3)=O(n^3/(\log n)^{3/2})$
in a model allowing $O(\log n)$-word operations at unit cost.
Observing that, when counting bit operations,
these word operations can be implemented in $O(\log n)$ time,
we still get a sub-cubic (i.e., $O(n^3/(\log n)^{1/2})$-time) algorithm.
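The simpler algorithm above can be sketched as follows (illustrative Python; the names are mine, and matrix "words" are modeled as $s^2$-bit integers so that table lookups and bitwise disjunction play the role of word operations):

```python
# Block algorithm for BMM: partition the n-by-n Boolean matrices into
# s-by-s blocks, precompute a lookup table for the product of s-by-s
# blocks, and run the cubic algorithm on the (n/s)-by-(n/s) block
# matrices, using bitwise OR for the "addition" of blocks.
def block_bmm(A, B, s):
    n = len(A)
    assert n % s == 0, "for simplicity, assume s divides n"
    m = n // s
    codes = 1 << (s * s)  # number of s-by-s Boolean matrices

    def decode(c):
        return [[(c >> (i * s + j)) & 1 for j in range(s)] for i in range(s)]

    def encode(M):
        return sum(M[i][j] << (i * s + j) for i in range(s) for j in range(s))

    # Table of Boolean products of all pairs of s-by-s blocks
    # (2^(2s^2) entries, each computable in poly(s) time).
    mult = [[0] * codes for _ in range(codes)]
    for a in range(codes):
        Ma = decode(a)
        for b in range(codes):
            Mb = decode(b)
            mult[a][b] = encode(
                [[int(any(Ma[i][k] and Mb[k][j] for k in range(s)))
                  for j in range(s)] for i in range(s)])

    def block_code(M, bi, bj):
        return encode([[M[bi * s + i][bj * s + j] for j in range(s)]
                       for i in range(s)])

    # Straightforward cubic algorithm on the (n/s)-by-(n/s) block matrices;
    # "addition" of blocks is bitwise OR of their codes.
    C = [[0] * n for _ in range(n)]
    for bi in range(m):
        for bj in range(m):
            acc = 0
            for bk in range(m):
                acc |= mult[block_code(A, bi, bk)][block_code(B, bk, bj)]
            for i in range(s):
                for j in range(s):
                    C[bi * s + i][bj * s + j] = (acc >> (i * s + j)) & 1
    return C
```

Here each of the $(n/s)^3$ block operations is a constant number of table lookups and OR operations on $s^2$-bit words, matching the stated complexity.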

The foregoing algorithm generalizes to any *fixed* finite semiring, $R$,
because the crucial observation is that we use tables of size $|R|^{2s^2}$,
whereas each entry in these tables can be computed in $\mathrm{poly}(s)$ time.
Hence, we rely on $|R|^{2s^2}=O(n^2)$,
which implies $s \leq (\log_{|R|} n)^{1/2}$.
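The constraint on the block size can be made concrete as follows (illustrative Python; the function name is mine, and integer arithmetic is used throughout to avoid floating-point issues):

```python
# Largest block size s for which the tables fit in O(n^2) space:
# the tables have |R|^(2s^2) entries, so requiring |R|^(2s^2) <= n^2
# gives s <= (log_|R| n)^(1/2).
def block_size(n, r):
    s = 1
    while r ** (2 * (s + 1) ** 2) <= n * n:
        s += 1
    return s
```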

#### The actual lecture notes

The actual lecture notes cover two topics:

- The computational equivalence of BMM and Triangle Detection.
- Alternative algorithms for solving BMM in (slightly) sub-cubic time;
that is, algorithms that do not reduce BMM to integer MM.
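One direction of this equivalence (reducing Triangle Detection to BMM) can be sketched as follows (illustrative Python; the function name is mine): a graph has a triangle iff, for some edge $\{i,j\}$, the Boolean square of the adjacency matrix has a 1 in entry $(i,j)$, i.e., $i$ and $j$ have a common neighbor.

```python
# Triangle Detection via (one Boolean product of) the adjacency matrix:
# the inner any(...) computes entry (i, j) of the Boolean square of adj.
def has_triangle(adj):
    n = len(adj)
    for i in range(n):
        for j in range(n):
            if adj[i][j] and any(adj[i][k] and adj[k][j] for k in range(n)):
                return True
    return False
```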

See lecture notes (with my comments).
