Lecture 1: Probability and the Probabilistic Method. After a general introduction to the course, we recall some of the basic notions of probability theory, such as the expectation and the variance of a random variable, Chebyshev's inequality and the first law of large numbers. We also introduce the Probabilistic Method and demonstrate one application of it (specifically, to the Max3SAT problem). Finally, we develop a simple algorithm that, given a 3CNF formula, finds an assignment that satisfies many of its clauses.
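To make the last point concrete, here is a minimal Python sketch (ours, not the lecture's) of the simple randomized algorithm: a uniformly random assignment satisfies each clause with three distinct variables with probability 7/8, so sampling assignments and keeping the best one does well in expectation. The toy formula at the bottom is hypothetical.

```python
import random

def random_assignment_max3sat(clauses, n, trials=100):
    """Try random assignments; return the best one found.

    `clauses` is a list of 3-tuples of non-zero ints: literal +i means
    variable i appears positively, -i means it appears negated
    (variables are 1..n).  A uniformly random assignment satisfies each
    clause (with 3 distinct variables) with probability 7/8, so the
    expected number of satisfied clauses is (7/8)*len(clauses).
    """
    def satisfied(assignment, clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    best_count, best = -1, None
    for _ in range(trials):
        assignment = {v: random.random() < 0.5 for v in range(1, n + 1)}
        count = sum(satisfied(assignment, c) for c in clauses)
        if count > best_count:
            best_count, best = count, assignment
    return best, best_count

# Example: (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
print(random_assignment_max3sat(clauses, n=3))
```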
Lecture 2: Laws of Large Numbers and Existence of Expanders. We focus on various laws of large numbers. These laws bound the probability that a random variable deviates far from its expectation. Motivated by the problem of approximating the average of a function, we present bounds stronger than Chebyshev's inequality: moving from pairwise independent sampling to $2k$-wise independent sampling and then to totally independent sampling makes the error probability vanish exponentially (in the amount of independence). We also define families of graphs called expanders, and prove the existence of a family of 3-regular expanders (using the probabilistic method).
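As a small illustration of the motivating sampling problem, the following Python sketch (ours) estimates the average of a function over $\bitset^n$ by independent sampling; Chebyshev's inequality bounds its failure probability, and the stronger bounds discussed in the lecture improve on this as the amount of independence grows.

```python
import random

def estimate_average(f, n_bits, samples):
    """Estimate the average of f over {0,1}^n_bits by independent sampling.

    By Chebyshev's inequality, with t independent samples the estimate
    deviates from the true average by more than eps with probability at
    most Var[f]/(t*eps^2); fully independent samples actually enjoy an
    exponentially small error probability (Chernoff bound).
    """
    total = 0.0
    for _ in range(samples):
        x = random.getrandbits(n_bits)
        total += f(x)
    return total / samples

# Example: f(x) = parity of x; the true average over {0,1}^8 is 1/2.
f = lambda x: bin(x).count("1") % 2
print(estimate_average(f, n_bits=8, samples=10000))
```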
Lecture 3: Small Pairwise-Independent Sample Spaces. Often in applications that use randomness (i.e., coin tosses), we do not need all these tosses to be independent; it may suffice that they are $k$-wise independent for some $k$. In the case of totally independent random variables, the probability space has size at least $|S|^m$, where $m$ is the number of variables in the sequence and $S$ is the sample space of an individual variable. In contrast, we present a construction of a pairwise-independent sample space of size $\max(|S|,m)^2$, and its extension to the $k$-wise independent case. This will allow us to {\it derandomize} the algorithm (presented in the first lecture) that finds an assignment satisfying at least 7/8 of the clauses of a given 3CNF formula. The resulting algorithm conducts an exhaustive search of a relatively small 3-wise independent sample space (of Boolean assignments).
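For concreteness, the following Python sketch (ours) implements the classical pairwise-independent construction over $Z_p$: a seed $(a,b)$ defines the sequence $X_i = a + b\cdot i \bmod p$. The lecture's construction of size $\max(|S|,m)^2$ is of this flavor, and the $k$-wise extension uses random polynomials of degree $k-1$ in place of $a+b\cdot i$.

```python
import itertools
from collections import Counter

def pairwise_independent_space(p):
    """Enumerate the pairwise-independent sample space over Z_p.

    Each seed (a, b) in Z_p x Z_p defines the sequence X_i = a + b*i mod p
    (for i = 0..p-1).  For fixed i != j, as (a, b) ranges over all p^2
    seeds, the pair (X_i, X_j) takes every value in Z_p x Z_p exactly
    once, so X_i and X_j are uniform and pairwise independent.
    """
    for a, b in itertools.product(range(p), repeat=2):
        yield [(a + b * i) % p for i in range(p)]

# Sanity check of pairwise independence for p = 5, positions i = 1, j = 3:
counts = Counter((seq[1], seq[3]) for seq in pairwise_independent_space(5))
assert all(c == 1 for c in counts.values())  # each pair appears exactly once
print("pairwise independence verified for p = 5")
```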
Lecture 4: Small Pairwise-Independent Sample Spaces (cont.) and Hash Functions. A good construction of a $k$-wise independent sample space should be small and convenient to use in applications. The construction presented in the previous lecture satisfies the first requirement (i.e., it is small), but is rather complex, which raises a number of technical problems in applications. In this lecture a second, much more convenient construction is presented, based on affine transformations. This construction yields small sample spaces, but only for pairwise independence (i.e., $k=2$). We also start discussing hash functions, showing their relation to small pairwise-independent sample spaces, and presenting a Hashing Lemma. The latter asserts that pairwise-independent hash functions map sets in a ``smooth'' manner (i.e., each image has about the same number of preimages).
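A hedged Python sketch (ours) of the affine construction: picking a random matrix $A$ and vector $b$ over $GF(2)$ yields the family $h(x)=Ax+b$, which is pairwise independent and is also the standard example of a pairwise-independent hash family.

```python
import random

def random_affine_hash(n, m):
    """Sample h: {0,1}^n -> {0,1}^m from the affine family h(x) = Ax + b
    over GF(2).  Picking the m-by-n matrix A and the vector b uniformly
    gives a pairwise-independent family: for x != y, the pair
    (h(x), h(y)) is uniformly distributed over pairs of m-bit strings.
    """
    A = [[random.getrandbits(1) for _ in range(n)] for _ in range(m)]
    b = [random.getrandbits(1) for _ in range(m)]

    def h(x):  # x is a list of n bits
        return [(sum(A[i][j] & x[j] for j in range(n)) + b[i]) % 2
                for i in range(m)]
    return h

h = random_affine_hash(n=4, m=2)
print(h([1, 0, 1, 1]))
```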
Lecture 5: Approximate Counting. We discuss quantitative problems related to NP-relations. Specifically, given an element in an NP-language, we ask how many witnesses this element has. We use hash functions in order to present an efficient randomized procedure that, given an instance in an NP-language and access to an NP-oracle, approximates the number of witnesses for that instance.
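The following Python sketch (ours, with ad-hoc constants and a brute-force stand-in for the NP-oracle) conveys the hashing idea behind such a procedure: if there are roughly $2^m$ witnesses, a random pairwise-independent hash into $\bitset^m$ is likely to map some witness to $0^m$, while for $m$ well above the logarithm of the count it almost never does.

```python
import random

def approx_count(oracle, n, trials=20):
    """Very crude estimate of log2 of the number of n-bit witnesses.

    For increasing m, hash the witnesses into {0,1}^m with a random
    pairwise-independent (affine) hash and ask `oracle(h, m)` whether
    some witness lands in the bucket 0^m.  If witnesses number about
    2^m, a random hash hits 0^m with noticeable probability; for larger
    m it rarely does.  `oracle` stands in for the NP-oracle; on toy
    inputs it can be realized by brute force.  The majority threshold
    below is ad hoc.
    """
    for m in range(n + 1):
        hits = 0
        for _ in range(trials):
            # random affine hash h(x) = Ax + b over GF(2), as in Lecture 4
            A = [random.getrandbits(n) for _ in range(m)]
            b = [random.getrandbits(1) for _ in range(m)]
            h = lambda x: [(bin(A[i] & x).count("1") + b[i]) % 2
                           for i in range(m)]
            if oracle(h, m):
                hits += 1
        if hits <= trials // 2:        # bucket 0^m is usually empty now
            return m                   # #witnesses is roughly 2^m
    return n

# Toy instance: witnesses are the n-bit strings of even parity (2^(n-1) many).
n = 6
is_witness = lambda x: bin(x).count("1") % 2 == 0
oracle = lambda h, m: any(is_witness(x) and h(x) == [0] * m
                          for x in range(2 ** n))
print("estimated log2(#witnesses) ~", approx_count(oracle, n))
```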
Lecture 6: Uniform Generation. This lecture continues the discussion of approximate counting and uniform generation using an NP-oracle, begun in the previous lecture. Here we show the equivalence (up to Turing reductions) of approximate counting and uniform generation of NP-witnesses. Together with the approximate counting algorithm of the previous lecture, this yields a uniform generation algorithm that uses an NP-oracle. We conclude with an alternative (direct) uniform generation algorithm, which uses $n$-wise independent hash functions (as opposed to the pairwise-independent hash functions used for approximate counting).
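One direction of the equivalence can be sketched via self-reducibility: fix the witness bits one at a time, biasing each choice by the number of witnesses consistent with it. Below is a toy Python rendering (ours), using an exact brute-force counter in place of the approximate counter; with an exact counter the output is exactly uniform, and the lecture shows that approximate counts suffice.

```python
import random

def uniform_witness(count_ext, n):
    """Generate a uniform n-bit witness given a counting procedure.

    `count_ext(prefix)` returns the number of witnesses extending the
    bit-list `prefix`.  Choosing each bit with probability proportional
    to the count of consistent witnesses makes the product of the
    choice probabilities telescope to 1/(total #witnesses), i.e., the
    output is uniform over the witnesses.
    """
    prefix = []
    for _ in range(n):
        c0, c1 = count_ext(prefix + [0]), count_ext(prefix + [1])
        if c0 + c1 == 0:
            return None                 # no witnesses at all
        prefix.append(0 if random.random() < c0 / (c0 + c1) else 1)
    return prefix

# Toy instance: witnesses are the 4-bit strings with even parity.
n = 4
is_witness = lambda bits: sum(bits) % 2 == 0
def count_ext(prefix):
    free = n - len(prefix)
    return sum(is_witness(prefix + [(x >> i) & 1 for i in range(free)])
               for x in range(2 ** free))
print(uniform_witness(count_ext, n))
```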
Lecture 7: Small Bias Sample Spaces (Part 1). In this lecture we introduce the notion of $\e$-bias sample spaces. Informally, a random variable over $\bitset^n$ is $\e$-biased if, for every subset of the bits, the difference between the probability that the parity of these bits is zero and the probability that it is one is at most $\e$. We show a connection between $\e$-bias and the statistical distance to the uniform distribution. Next, we present an application of $\e$-bias sample spaces to generating ``approximately'' $k$-wise independent distributions (i.e., distributions over $\bitset^n$ in which every $k$ bits look almost uniformly distributed over $\bitset^k$).
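The definition can be checked by brute force on small examples; the following Python sketch (ours) computes the bias of a given sample space by maximizing over all nonempty subsets of the bit positions.

```python
def max_bias(sample_space, n):
    """Compute the bias of a multiset of n-bit strings (given as ints):
    the maximum, over all nonempty subsets T of the bit positions, of
    |Pr[parity of the bits in T is 0] - Pr[it is 1]|, which equals
    |E[(-1)^<T,x>]|.  A space is eps-biased iff this is at most eps.
    (Brute force, so only usable for tiny n.)
    """
    worst = 0.0
    for mask in range(1, 2 ** n):          # nonempty subsets of [n]
        s = sum((-1) ** bin(mask & x).count("1") for x in sample_space)
        worst = max(worst, abs(s) / len(sample_space))
    return worst

# The uniform distribution over {0,1}^3 has bias 0:
print(max_bias(list(range(8)), 3))         # -> 0.0
# A skewed space has a noticeably larger bias:
print(max_bias([0b000, 0b011, 0b101, 0b111], 3))
```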
Lecture 8: Small Bias Sample Spaces (Part 2). In this lecture we present a construction of small bias sample spaces; that is, we present an algorithm that, given $n$ and $\e$, outputs an explicit representation of an $\e$-bias sample space over $\bitset^n$, and does so within time $\poly(n/\e)$. We also present an application of such a construction to proving a tight result regarding the hardness of approximating MaxQE (where one is given a sequence of quadratic equations over $GF(2)$ and is asked to find an assignment that satisfies as many of these equations as possible).
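For concreteness, one classical construction achieving such parameters is the ``powering'' construction of Alon, Goldreich, Håstad, and Peralta (the lecture's construction may differ in its details): a sample point is specified by a uniformly chosen pair $(x,y) \in GF(2^m) \times GF(2^m)$, and its $i$-th bit is
$$ r_i(x,y) \;=\; \langle x^i , y \rangle \bmod 2, \qquad i=1,\dots,n, $$
where field elements are viewed as vectors in $\bitset^m$. For any nonempty $T \subseteq \{1,\dots,n\}$, the parity $\oplus_{i \in T}\, r_i(x,y) = \langle \sum_{i \in T} x^i , y \rangle$ is perfectly balanced (over a random $y$) unless $x$ is a root of the nonzero polynomial $\sum_{i \in T} z^i$, which happens with probability at most $n/2^m$. Hence the bias is at most $n/2^m$, and setting $m = \lceil \log_2(n/\e) \rceil$ yields a sample space of size $2^{2m} \le (2n/\e)^2 = \poly(n/\e)$.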
Lecture 9: Expansion and Eigenvalues. This lecture is about expander graphs, which are very useful in many probabilistic applications. First, we describe some combinatorial definitions and properties of expander graphs. Second, we present one application that uses expander graphs. Finally, we discuss some algebraic properties of such graphs.
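The algebraic view can be made concrete in a few lines of Python (ours, using numpy): for a $d$-regular graph the top eigenvalue of the adjacency matrix is $d$, and a large gap between $d$ and the second-largest eigenvalue (in absolute value) corresponds to good expansion.

```python
import numpy as np

def second_eigenvalue(adj):
    """Return the second-largest absolute eigenvalue of the (symmetric)
    adjacency matrix of a d-regular graph.  The largest is always d;
    a large gap between d and the second one certifies expansion.
    """
    eigs = np.sort(np.abs(np.linalg.eigvalsh(np.array(adj, dtype=float))))
    return eigs[-2]

# K4 (the complete graph on 4 vertices) is 3-regular
# with eigenvalues {3, -1, -1, -1}:
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(second_eigenvalue(K4))    # -> 1.0  (spectral gap 3 - 1 = 2)
```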
Lecture 10: Random Walks on Expanders. In this lecture we show that taking an $l$-step random walk on an expander graph is in a way similar to choosing $l$ vertices at random. The advantage of using a random walk over taking an independent sample is that each step of a random walk on a $d$-regular graph requires only $\log_2 d$ random bits, whereas selecting a random vertex of an $N$-vertex graph requires $\log_2 N$ random bits. As a warm-up, we show that, starting at any vertex of an $N$-vertex expander and taking $O(\log N)$ random steps, one reaches approximately the uniform distribution (on the vertices). We also recall the relation between two definitions of expanders and describe some known constructions of expanders.
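A short Python sketch (ours) of the warm-up claim: iterating the transition matrix of a regular graph from a point distribution converges to uniform at a rate governed by the second eigenvalue. $K_4$ below is a toy stand-in for an expander.

```python
import numpy as np

def walk_distribution(adj, steps, start=0):
    """Distribution of an l-step random walk on a d-regular graph,
    starting from a fixed vertex.  Each step costs only log2(d) random
    bits, yet on an expander the distribution approaches uniform within
    O(log N) steps.
    """
    A = np.array(adj, dtype=float)
    P = A / A[0].sum()                      # transition matrix A/d
    dist = np.zeros(len(adj))
    dist[start] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return dist

K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
for l in (1, 2, 5, 10):
    dist = walk_distribution(K4, l)
    print(l, np.abs(dist - 0.25).sum())     # distance to uniform shrinks
```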
Lecture 11: Primality Testing (via SQRT extraction). In this lecture we present two randomized algorithms in number theory. The first algorithm finds, in expected polynomial time, a square root of a given quadratic residue in $Z_p^*$, for any prime $p$. We then use this algorithm to construct a polynomial-time algorithm with two-sided error for testing primality.
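For the first task, a standard instantiation (ours; the lecture may present a different variant) is the Tonelli-Shanks procedure, in which randomness is used only to find a quadratic non-residue; since half the elements of $Z_p^*$ are non-residues, this takes an expected constant number of trials.

```python
import random

def sqrt_mod_p(a, p):
    """Find x with x^2 = a (mod p) for an odd prime p, or None if a is a
    non-residue.  Randomness enters only in the search for a quadratic
    non-residue z; the rest is deterministic (Tonelli-Shanks).
    """
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:       # Euler's criterion
        return None                        # a is a non-residue
    if p % 4 == 3:                         # easy case: x = a^((p+1)/4)
        return pow(a, (p + 1) // 4, p)
    # write p - 1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2; s += 1
    # pick a random quadratic non-residue z (expected O(1) attempts)
    z = random.randrange(2, p)
    while pow(z, (p - 1) // 2, p) != p - 1:
        z = random.randrange(2, p)
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t                       # least i with t^(2^i) = 1
        while t2 != 1:
            t2 = t2 * t2 % p; i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

p, a = 13, 10
x = sqrt_mod_p(a, p)
print(x, pow(x, 2, p))                     # x^2 == 10 (mod 13)
```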
Lecture 12: Hitters and Samplers. A sampler is an oracle Turing machine that estimates the average of any function $f: \{0,1\}^n \to [0,1]$ with bounded deviation and bounded probability of failure. A hitter is an oracle Turing machine that, given a Boolean function $f: \{0,1\}^n \to \{0,1\}$, finds, with bounded probability of failure, an $x$ such that $f(x)=1$, provided that $f$ evaluates to 1 on at least a constant fraction of $\{0,1\}^n$. These are fundamental and useful problems. We consider the randomness and query complexities of hitters and samplers whose running time is polynomially bounded, and recall lower bounds for each. We show several simple constructions for both, as well as improved constructions based on pairwise-independent sample spaces and on random walks on expander graphs. We then show composite constructions whose complexities match the lower bounds.
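The following Python sketch (ours) shows the simplest composition idea, a median-of-averages sampler: each inner average errs with small constant probability by Chebyshev's inequality, and taking the median of independent averages drives the failure probability down exponentially. The randomness-efficient variants from the lecture replace the inner independent samples by pairwise-independent ones, and the outer repetitions by a random walk on an expander; the constants below are ad hoc.

```python
import math
import random
from statistics import median

def median_of_averages_sampler(f, n_bits, eps, delta):
    """Estimate the average of f: {0,1}^n -> [0,1] to within eps, with
    failure probability at most delta.  Each inner average errs with
    small constant probability (Chebyshev, since Var[f] <= 1/4); the
    median of O(log(1/delta)) independent averages then fails with
    probability at most delta.
    """
    inner = int(4 / eps ** 2) + 1            # Chebyshev-based sample size
    outer = 8 * int(math.log(1 / delta)) + 1 # odd number of repetitions
    averages = []
    for _ in range(outer):
        s = sum(f(random.getrandbits(n_bits)) for _ in range(inner))
        averages.append(s / inner)
    return median(averages)

f = lambda x: bin(x).count("1") % 2          # average 1/2 over {0,1}^8
print(median_of_averages_sampler(f, 8, eps=0.05, delta=0.01))
```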
Lecture 13: Randomized Rounding. Some approximation algorithms use linear programming on a relaxation of the original problem. In general, the solution of a linear program is non-integral, whereas legitimate solutions to the original problem must be integral. One idea for turning such fractional solutions into integral ones is to use the (non-integral) linear-programming solution as a vector of probabilities for selecting an (integral) solution to the original problem. In this lecture we present this idea and demonstrate it in two cases.
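A minimal Python sketch (ours) of the rounding step itself, applied to a made-up fractional solution; the two applications treated in the lecture supply the actual LP and the per-constraint analysis.

```python
import random

def randomized_rounding(fractional):
    """Round a fractional LP solution to a 0/1 vector by setting each
    coordinate to 1 independently with probability equal to its
    fractional value.  By linearity of expectation, any linear objective
    is preserved in expectation; per-constraint guarantees follow from
    concentration bounds or direct calculation.
    """
    return [1 if random.random() < x else 0 for x in fractional]

# Hypothetical LP optimum for a toy covering problem:
x_frac = [0.7, 0.2, 0.5, 1.0, 0.1]
print(randomized_rounding(x_frac))
print("expected objective:", sum(x_frac))    # preserved in expectation
```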