Boolean Matrix Multiplication (BMM) is defined analogously to integer matrix multiplication (MM), except that multiplication is replaced by Boolean conjunction and addition is replaced by Boolean disjunction. Obviously, BMM can be reduced to integer matrix multiplication, which implies that BMM for $n$-by-$n$ matrices can be performed in $O(n^\omega)$ time, where $\omega \in [2,2.373]$ is the exponent of integer matrix multiplication. However, there are several reasons for seeking to avoid this reduction.
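For concreteness, here is the definition in code: a minimal sketch, assuming the matrices are given as 0/1 lists of lists (the name bmm_naive is illustrative, not from the notes).

```python
# A minimal sketch of the BMM definition, assuming 0/1 matrices given as
# lists of lists; bmm_naive is an illustrative name, not from the notes.
def bmm_naive(A, B):
    """Entry (i,j) of the product is OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]
```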
The lecture notes present two alternative algorithms for BMM, but I suggest starting with the following simpler one: Partition the ($n$-by-$n$) matrices into blocks of size $s$-by-$s$, where $s=\sqrt{\log_2 n}$, and consider the resulting $n/s$-by-$n/s$ matrices (of matrices). Using the straightforward BMM algorithm (for the $n/s$-by-$n/s$ matrices), along with precomputed tables for multiplication and `addition' of $s$-by-$s$ (Boolean) matrices, we get an algorithm for BMM of $n$-by-$n$ matrices; see the sketch below. Its complexity is $O((n/s)^3)=O(n^3/(\log n)^{3/2})$ in a model that allows unit-cost operations on $O(\log n)$-bit words. Observing that, when counting bit operations, these word operations can be implemented in $O(\log n)$ time, we still get a sub-cubic (i.e., $O(n^3/(\log n)^{1/2})$-time) algorithm.
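The following runnable sketch implements this block scheme. It assumes, for simplicity, that $n$ is a power of two and that $s$ divides $n$; the bit-packed block encoding and all helper names are my own illustrative choices, not taken from the notes.

```python
from itertools import product
from math import isqrt, log2

def encode(block):
    """Pack an s-by-s 0/1 block into an integer key (row-major, MSB first)."""
    key = 0
    for row in block:
        for bit in row:
            key = (key << 1) | bit
    return key

def all_blocks(s):
    """Enumerate all 2^(s*s) Boolean s-by-s blocks."""
    for bits in product((0, 1), repeat=s * s):
        yield [list(bits[r * s:(r + 1) * s]) for r in range(s)]

def bmm_blocked(A, B):
    n = len(A)
    s = max(1, isqrt(int(log2(n))))          # s ~ sqrt(log2 n)
    assert n % s == 0, "sketch assumes s divides n"
    m = n // s

    # Precompute the multiplication table over all pairs of s-by-s blocks;
    # it has 2^(2*s*s) <= n^2 entries, each computed in poly(s) time.
    mult = {}
    blocks = list(all_blocks(s))
    for X in blocks:
        kx = encode(X)
        for Y in blocks:
            P = [[int(any(X[i][k] and Y[k][j] for k in range(s)))
                  for j in range(s)] for i in range(s)]
            mult[(kx, encode(Y))] = encode(P)

    def block_key(M, I, J):
        """Encoded (I,J)-th s-by-s block of the n-by-n matrix M."""
        return encode([M[I * s + r][J * s:(J + 1) * s] for r in range(s)])

    Ak = [[block_key(A, I, K) for K in range(m)] for I in range(m)]
    Bk = [[block_key(B, K, J) for J in range(m)] for K in range(m)]

    # Straightforward BMM on the m-by-m matrix of blocks: each step is one
    # table lookup plus one word-OR (entrywise disjunction of packed blocks).
    C = [[0] * n for _ in range(n)]
    for I in range(m):
        for J in range(m):
            acc = 0
            for K in range(m):
                acc |= mult[(Ak[I][K], Bk[K][J])]
            for r in range(s - 1, -1, -1):       # unpack; LSB is (s-1,s-1)
                for c in range(s - 1, -1, -1):
                    C[I * s + r][J * s + c] = acc & 1
                    acc >>= 1
    return C
```

Note that with this bit-packed encoding the `addition' table is not needed: entrywise disjunction of two encoded blocks is a single word-OR, matching the unit-cost word operation in the foregoing analysis. A sanity check is to compare the output against bmm_naive (above) on random 0/1 matrices.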
The foregoing algorithm generalizes to any fixed finite semiring $R$, because the crucial observation is that we use tables of size $|R|^{2s^2}$, whereas each entry in these tables can be computed in $\mathrm{poly}(s)$ time. Hence, we rely on $|R|^{2s^2}=O(n^2)$, which implies $s \leq (\log_{|R|} n)^{1/2}$.
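Spelled out, the parameter choice follows by requiring the tables to be of (at most) quadratic size:
\[ |R|^{2s^2} \leq n^2 \iff 2s^2 \log_2 |R| \leq 2 \log_2 n \iff s \leq \left(\log_{|R|} n\right)^{1/2}, \]
and the Boolean case ($|R|=2$) recovers the foregoing choice of $s=\sqrt{\log_2 n}$.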
The actual lecture notes cover two topics: