The notion of "reproducibility" is a relaxation of the notion of pseudo-determinism (see, e.g., Nr 206). Both notions refer to randomized algorithms that solve search problems. In the context of randomized log-space computation, two definitions of reproducibility are presented and shown equivalent. The weak definition requires outputting two copies of the same solution, whereas the strong definition requires a pair of algorithms: the first outputs a short identifier (of logarithmic length), and the second behaves as a pseudo-deterministic algorithm when given the input-identifier pair. The weak definition implies the strong one by setting the identifier to equal the instantaneous configuration of the machine after it outputs the first copy.
The main result is that "RL search" is reproducible; that is, any search problem that is solvable by a randomized log-space machine that never outputs a wrong solution (i.e., has one-sided error) is reproducible. This is shown by considering the probabilities $p_t(v)$ that the machine, when started in the instantaneous configuration $v$, completes producing an output within $t$ steps. The identifier is a random threshold, say, in the interval $[0.6,0.66]$, specified with logarithmically many bits of precision, and the machine advances along the path of lex-first configurations whose $p$-values exceed the threshold (where these values are computed by a randomized approximation).
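The threshold-following walk can be illustrated by a toy sketch. This is not the paper's construction: here the configuration graph is a tiny explicit branching process, the values $p_t(v)$ are computed exactly by recursion rather than by the randomized approximation the result actually relies on, and the names `toy`, `p`, and `reproduce` are ours. The key invariant it demonstrates is that since $p_t(v)$ is the average of the successors' $p_{t-1}$ values, $p_t(v) \geq \theta$ guarantees that some successor $u$ satisfies $p_{t-1}(u) \geq \theta$, so the lex-first such successor is always well defined and the walk, given the same threshold, is deterministic.

```python
import random

def toy(v):
    """A toy machine: maps a configuration to ("halt", output) or to
    ("step", successors), with successors listed in lex order.
    Two branches halt with an output; one branch loops forever."""
    if v == "start":
        return ("step", ["a", "b", "loop"])
    if v in ("a", "b"):
        return ("halt", "sol-" + v)
    return ("step", ["loop"])

def p(machine, v, t):
    """Probability that the machine, started at configuration v, halts
    with an output within t steps. Computed exactly here; the actual
    result only requires a randomized approximation of these values."""
    step = machine(v)
    if step[0] == "halt":
        return 1.0
    if t == 0:
        return 0.0
    succs = step[1]
    return sum(p(machine, u, t - 1) for u in succs) / len(succs)

def reproduce(machine, start, t, theta):
    """Advance along the lex-first successor whose p-value still meets
    the threshold theta; the averaging invariant ensures one exists
    whenever p_t(current) >= theta."""
    v, budget = start, t
    while True:
        step = machine(v)
        if step[0] == "halt":
            return step[1]
        v = next(u for u in step[1] if p(machine, u, budget - 1) >= theta)
        budget -= 1

# p_2(start) = 2/3, so any threshold in [0.6, 0.66] lies below it,
# and every such threshold yields the same output on this toy machine:
theta = random.uniform(0.6, 0.66)
assert reproduce(toy, "start", 2, theta) == "sol-a"
```

The threshold serves as the identifier: rerunning `reproduce` with the same $\theta$ (and exact $p$-values) retraces the same lex-first path, hence outputs the same solution.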
The strong notion of reproducibility is applicable also outside the domain of randomized computations of bounded space, and a host of natural questions arises.
A curious property of randomized log-space search algorithms is that their outputs are often longer than their workspace. One consequence is that there is no clear way to reproduce the same output when running the algorithm twice on the same input. It is not feasible to store the random bits (or the output) of the previous run in log-space, and using new random bits in another execution can result in a different output. This leads to the question: how can we reproduce the results of a randomized log-space computation of a search problem?
We will give a precise definition of this notion of "reproducibility". Then we will show that every problem in search-RL has a randomized log-space algorithm whose output can be reproduced. Reproducibility can be thought of as an extension of pseudo-determinism. Indeed, for some problems in search-RL we show pseudo-deterministic algorithms whose running time significantly improves on that of known deterministic algorithms.