The computational model consists of $n$ processors connected by pairwise point-to-point channels. In each round, each processor can send an $O(\log n)$-bit message to each other processor, and the sole complexity measure is the number of rounds required to solve natural search problems in which the input is presented distributively. In particular, the focus is on solving graph-theoretic problems, where each processor gets the incidence list of a vertex in the graph (including the identity of this vertex).
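To make the round structure concrete, here is a minimal sketch of such a synchronous protocol, assuming each message fits in $O(\log n)$ bits (here, a single small integer); the names simulate, send, and receive are illustrative, not taken from the paper.

    def simulate(n, states, send, receive, rounds):
        """Run `rounds` synchronous rounds over a fully connected network.

        send(i, j, state_i)        -> the O(log n)-bit message i sends to j
        receive(j, state_j, inbox) -> j's new state, given all n incoming messages
        """
        for _ in range(rounds):
            msgs = [[send(i, j, states[i]) for j in range(n)] for i in range(n)]
            states = [receive(j, states[j], [msgs[i][j] for i in range(n)])
                      for j in range(n)]
        return states

    # example: every node learns the global minimum of the inputs in one round
    print(simulate(4, [7, 3, 9, 5],
                   send=lambda i, j, s: s,
                   receive=lambda j, s, inbox: min(inbox),
                   rounds=1))  # [3, 3, 3, 3]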
This model may be viewed as a natural model of multi-party computation in which the input is naturally distributed among the processors (akin to the number-in-hand model). Here the focus is on the number of rounds, where in each round only a logarithmic number of bits can be sent between each pair of processors.
A useful paradigm in the construction of such protocols is to try to gradually concentrate the work; that is, to concentrate the relevant information among fewer processors. Such concentration is beneficial for two reasons. The obvious reason is that we reduce a relevant parameter of the problem; specifically, the number of processors. But the second benefit is that each of these new centers has higher communication capacity, since each of the $m$ centers can assign $n/m$ processors to communicate in its place (see the calculation below).
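A back-of-the-envelope calculation of the second benefit (my own bookkeeping, not taken from the paper): a center that is assigned $n/m$ proxy processors commands an aggregate per-round receive capacity of
\[
  \frac{n}{m}\cdot(n-1)\cdot O(\log n) \ \text{bits},
\]
and, in particular, each pair of centers can exchange $\frac{n}{m}\cdot O(\log n)$ bits per round (e.g., by letting the proxies of one center each send one message to the other center). Thus, for $m=n/\mathrm{polylog}(n)$, each center effectively behaves like a processor of polylogarithmic communication capacity, which is exactly the setting of the constant-round protocol mentioned next.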
One result of the current paper is that an MST can be found in a constant number of rounds if the communication capacity of each processor is polylogarithmic rather than logarithmic. Hence, for $m=n/\mathrm{polylog}(n)$, if we can contract the input $n$-vertex graph into an $m$-vertex graph such that the information relevant to the MST is held by $m$ processors, then we are done. The bulk of the work is devoted to doing this. Specifically, this is done gradually, in iterations, each implementable in a constant number of rounds, such that each iteration decreases the number of centers from $n/f$ to $n/\exp(\sqrt{f})$.
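Assuming the per-iteration progress is indeed $f \mapsto \exp(\sqrt{f})$, the $O(\log^* n)$ iteration bound follows: writing $g_t=\log f_t$, where $f_t$ denotes the value of $f$ after $t$ iterations, we get
\[
  f_{t+1}=\exp\!\left(\sqrt{f_t}\right)
  \quad\Longrightarrow\quad
  g_{t+1}=\sqrt{f_t}=e^{g_t/2},
\]
so $g_t$ grows like a tower of exponentials of height $t$, and $f_t$ exceeds $\mathrm{polylog}(n)$ within $O(\log^* n)$ iterations.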
Note that this protocol, as well as the prior one that achieved $O(\log\log\log n)$ rounds, is randomized.
We present a randomized algorithm that computes a Minimum Spanning Tree (MST) in $O(\log^* n)$ rounds, with high probability, in the Congested Clique model of distributed computing. In this model, the input is a graph on $n$ nodes; initially each node knows only its incident edges, and in each round every pair of nodes can exchange $O(\log n)$ bits.
Our key technical novelty is an $O(\log^* n)$ Graph Connectivity algorithm, the heart of which is a (recursive) forest-growth method based on a combination of two ideas: a sparsity-sensitive sketching aimed at sparse graphs and a random edge sampling aimed at dense graphs.
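The paper's sparsity-sensitive sketches are not detailed here; as a toy illustration of the general XOR-sketch idea behind such connectivity algorithms (in the spirit of the Ahn-Guha-McGregor sketches, not the paper's specific construction), the following sketch shows how XOR-ing edge identifiers over a component cancels internal edges and exposes a cut edge; all names are illustrative.

    def edge_key(u, v):
        # canonical integer id for an undirected edge; assumes node ids < 2**20
        a, b = (u, v) if u < v else (v, u)
        return (a << 20) | b

    def component_cut_xor(component, adj, sampled):
        # XOR the sampled incident-edge ids over all vertices of `component`;
        # edges internal to the component appear twice and cancel, so the
        # result is the XOR of the sampled edges crossing the cut. If exactly
        # one cut edge was sampled, its id is recovered verbatim.
        acc = 0
        for v in component:
            for u in adj[v]:
                e = edge_key(u, v)
                if e in sampled:
                    acc ^= e
        return acc

    # toy demo: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    sampled = {edge_key(u, v) for v in adj for u in adj[v]}  # sample rate 1
    print(component_cut_xor({0, 1, 2}, adj, sampled) == edge_key(2, 3))  # True

In an actual connectivity algorithm, edges are sampled at geometrically decreasing rates so that, with high probability, some rate isolates exactly one outgoing edge per component; merging along these edges constitutes one step of forest growth.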
Our result improves significantly over the $O(\log \log \log n)$ algorithm of Hegeman et al. [PODC 2015] and the $O(\log \log n)$ algorithm of Lotker et al. [SPAA 2003; SICOMP 2005].