
Faculty of Mathematics and Computer Science

The Weizmann Institute of Science


Students Theory Seminar - 2007


 


Time & Place

 

In the second semester, the seminar is on Thursdays, 2:30 - 4:30 pm (+break at 3:30), at Pekeris.

This year we also had Omer Reingold’s Pseudorandomness seminar.

 


 

November

 

5/11/2006

Ronen Gradwohl

Extractors and Condensers from Univariate Polynomials

 

12/11/2006 [at 4pm!]

Amir Yehudayoff

Lower Bounds for Multilinear Arithmetic Formulas

19/11/2006

 

 

No meeting due to the STOC’07 deadline

26/11/2006

 

Dana Moshkovitz

Linear time Encodable and List Decodable Codes

 

December

 

3/12/2006 [at Zi 261!]

Shachar Lovett

New Locally Decodable Codes and Private Information Retrieval Schemes

10/12/2006

Yael Tauman Kalai

New Results for Learning Noisy Parities and Halfspaces

17/12/2006

[at Zi 261!]

Ariel Gabizon

Derandomized Curve Samplers

24/12/2006

[at Zi 261!]

Ariel Gabizon

Derandomized Curve Samplers (Cont’d)

31/12/2006

Or Sheffet

On the Randomness Complexity of Samplers, and the Usefulness of Expanders

 

January

 

7/1/2007

Or Meir

Derandomizing Arthur-Merlin Games using Hitting Sets

14/1/2007

[at Zi 261]

Zeev Dvir

Valiant’s Rigidity Argument

21/1/2007

[at Zi 261]

Asaf Nussbaum

If NP languages are hard on the worst-case, then it is easy to find their hard instances

28/1/2007

[at Zi 261]

Asaf Nussbaum

Zero-One Laws

 

February

 

4/2/2007

[at Zi 261]

Alex Samorodnitsky

Gowers Uniformity norms, and why we should care

11/2/2007

Noam Livne

Relativization, diagonalization and stuff

18/2/2007

[at Zi 1]

Noam Livne

Relativization, diagonalization and stuff (Cont’d)

25/2/2007

 

Avi Wigderson’s lectures in Haifa later that week

 

March

 

4/3/2007

 

PURIM

15/3/2007

Guy Kindler

On the Embeddability of Negative Type Metrics into L1, and

integrality-gap preserving reductions

22/3/2007

[at Zi 261!]

Guy Kindler

On the Embeddability of Negative Type Metrics into L1, and

integrality-gap preserving reductions (Cont’d)

29/3/2007

[at Zi 261!]

Noam Livne

No better ways to generate hard NP instances than picking uniformly at random (+bonus talk on April 1st)

 

April

 

5/4/2007

 

Passover

12/4/2007

Shachar Lovett

PRIMES is in P

 

19/4/2007

 

No meeting due to the FOCS’07 deadline (April 20th)

26/4/2007

 

Faculty Excursion

 

May

 

3/5/2007

[at Zi 261!]

Sanjeev Arora

Geometry and Approximation Algorithms for NP-hard Optimization Problems

10/5/2007

[at Zi 261!]

Avi Wigderson

Old and New results & problems on multi-party communication

18/5/2007

 

IMU Meeting, May 17th -18th (Ben Gurion University)

24/5/2007

 

Erdős Lectures (Hebrew University)

31/5/2007

 

Random Structures and Algorithms, May 28th – June 1st (Tel-Aviv University)

 

June

 

7/6/2007

 

No meeting this week

14/6/2007

 

FCRC 2007 (including STOC’07 and CCC’07)

Different Approaches to Complexity in Mathematics (Technion, June 12-19)

21/6/2007

Eden Chlamtac

Near-Optimal Algorithms for Unique Games

28/6/2007

 

Complexity Workshop in Oberwolfach

 

July

 

5/7/2007

Eden Chlamtac

Coloring, Independent Sets and SDP Hierarchies

12/7/2007

Or Meir

Reconstructive Dispersers and Hitting Set Generators

 

August

 

30/8/2007

Zeev Dvir

Multipoint evaluation of multivariate polynomials in small characteristic

 


 

Title:

Extractors and Condensers from Univariate Polynomials

Speaker:

Ronen Gradwohl

 

In this talk we will discuss the new extractor and condenser constructions of Guruswami, Umans and Vadhan. The condenser, which is based on Parvaresh-Vardy codes, is relatively simple to describe, and its analysis is mostly self-contained. It is the first lossless condenser that is optimal up to constant factors in both the seed and output lengths, and it can thus be used directly to construct extractors matching the current best known construction, due to Lu, Reingold, Vadhan and Wigderson.
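For reference, the standard definitions, in my phrasing rather than the paper's: a function C: {0,1}^n x {0,1}^d -> {0,1}^m is a condenser if it compresses weak sources without destroying entropy, and it is lossless when even the d seed bits are preserved (k' = k + d):

    \[
      H_\infty(X) \ge k \;\Longrightarrow\; C(X, U_d) \ \text{is } \varepsilon\text{-close to some } Y \ \text{with } H_\infty(Y) \ge k' .
    \]
    % A (k, eps)-extractor is the special case in which the output is
    % required to be eps-close to the uniform distribution on {0,1}^m.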


 

Title:

Lower Bounds for Multilinear Arithmetic Formulas

Speaker:

Amir Yehudayoff

 

Arithmetic circuits are the standard model for computing polynomials.

Arithmetic formulas are circuits in which no intermediate computation is reused (every gate has fan-out one).

We will try to go over the main ideas of two papers [1,2] of Raz concerning multilinear arithmetic formulas.

 

[1] proves a super-polynomial lower bound on the size of any multilinear arithmetic formula computing the determinant (or the permanent).

 

[2] proves a super-polynomial separation between multilinear formula size and multilinear circuit size.

More specifically, [2] constructs a polynomial f in F[x_1,...,x_N] that is computed by a poly(N)-size, O(log^2 N)-depth multilinear circuit, but is not computed by any poly(N)-size multilinear arithmetic formula.
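As a reminder (an illustrative example of mine, not taken from the papers): a polynomial is multilinear if every variable has degree at most one in each monomial, and a formula is multilinear if every gate in it computes a multilinear polynomial. For instance,

    \[
      f(x_1,x_2,x_3,x_4) \;=\; (x_1 + x_2)\,(x_3 + x_4)
    \]
    % is multilinear and is computed by a multilinear formula with two sum
    % gates and one product gate; the determinant and the permanent are
    % multilinear polynomials as well, which makes the model natural for them.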

 

 

[1] Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size,
R. Raz,

  • Proceedings of the 36th STOC, 2004, pp. 633-641
  • Journal of the ACM (to appear)

 

[2] Separation of Multilinear Circuit and Formula Size,
R. Raz,

  • Proceedings of the 45th FOCS, 2004, pp. 344-351
    (title: ``Multilinear-NC^1 ≠ Multilinear-NC^2'')
  • Theory of Computing, Vol. 2, article 6 (2006)

 

Title:

Linear time Encodable and List Decodable Codes

Speaker:

Dana Moshkovitz

 

 

I will present a part of a beautiful paper by Guruswami and Indyk from STOC’03.

 

This paper deals with list decoding and focuses on the running time of the decoder.

The authors construct error-correcting codes that can be encoded and list decoded (from a fraction of errors arbitrarily close to 1) in time linear in the length of the message.
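For reference, the standard combinatorial definition, in my phrasing: a code C ⊆ Σ^n is (ρ, L)-list-decodable if

    \[
      \forall w \in \Sigma^n:\quad \bigl|\{\, c \in C \;:\; \Delta(w, c) \le \rho n \,\}\bigr| \;\le\; L ,
    \]
    % where Delta denotes Hamming distance. The decoder must output this
    % list of at most L codewords, and the point here is doing so in
    % linear time even when rho is close to 1.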

 

Prior to this work, it was only known how to construct codes that can be:

·        uniquely decoded in linear time

·        list decoded in n·polylog(n) time [e.g., Reed-Solomon codes]

·        list decoded from erasures in linear time

 

The construction is based on expanders.

 


 

Title:

New Locally Decodable Codes and Private Information Retrieval Schemes

Speaker:

Shachar Lovett

 

The paper, by Sergey Yekhanin, shows how to construct 3-query Locally Decodable Codes (LDCs) encoding n bits to N = exp(n^{1/t}) bits, for any value t such that 2^t - 1 is prime (such primes are called Mersenne primes). The largest known Mersenne prime is 2^{32582657} - 1. Under the conjecture that the number of Mersenne primes is infinite, the construction gives N = exp(n^{O(1/log log n)}).

The best constructions known prior to this paper had length exp(n^{O(log log q / (q log q))}) for q-query LDCs.
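For reference, the standard definition of an LDC, in my phrasing: a code C: {0,1}^n -> {0,1}^N is a q-query LDC if there is a randomized decoder D that reads at most q positions of a corrupted codeword y and satisfies

    \[
      \Delta\bigl(y, C(x)\bigr) \le \delta N
      \;\Longrightarrow\;
      \Pr\bigl[ D^{y}(i) = x_i \bigr] \;\ge\; \tfrac12 + \varepsilon
      \qquad \text{for every } i ,
    \]
    % for some constants delta, eps > 0. The parameters of interest in this
    % paper are q = 3 and the codeword length N as a function of n.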

 


 

Title:

New Results for Learning Noisy Parities and Halfspaces

Speaker:

Yael Tauman Kalai

 

I am going to present a paper by Vitaly Feldman, Subhash Khot, Parikshit Gopalan and Ashok Kumar Ponnuswami, from FOCS 2006.

 

This paper addresses well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise.  In this talk, I will focus only on one of the results in the paper.  The result is that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise.  In particular, together with the parity learning algorithm of Blum, Kalai, and Wasserman, this gives the first nontrivial algorithm for learning parities with adversarial noise.
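As a concrete illustration of the setting (parameter names are mine, not the paper's): in the random-noise model, examples are uniform x in {0,1}^n labeled by the parity of a hidden subset of coordinates, with each label flipped independently with probability eta; in the adversarial model, an eta-fraction of labels may be flipped arbitrarily. A minimal sketch:

    import random

    def noisy_parity_samples(n, secret, eta, num):
        """LPN examples: uniform x, label <x, secret> mod 2, and the label
        is flipped independently with probability eta (random noise)."""
        samples = []
        for _ in range(num):
            x = [random.randrange(2) for _ in range(n)]
            label = sum(xi * si for xi, si in zip(x, secret)) % 2
            if random.random() < eta:
                label ^= 1                   # noisy label
            samples.append((x, label))
        return samples

    # In the adversarial model, the eta-fraction of flipped labels is chosen
    # by an adversary; the reduction presented in the talk shows that, under
    # the uniform distribution, the two noise models are equally hard.
    secret = [random.randrange(2) for _ in range(10)]
    print(noisy_parity_samples(10, secret, eta=0.1, num=3))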

 


 

Title:

Derandomized Curve Samplers

Speaker:

Ariel Gabizon

 

 

Let F be a finite field, and let A be any subset of the vector space F^n.

Loosely speaking, a sampler is a randomized procedure that outputs a set of points s_1,…,s_m such that, with high probability, the number of these points that fall inside A is proportional to A's density, i.e., about m·|A|/|F^n|.

A curve sampler is a sampler with the additional property that the points it outputs lie on a low-degree curve in F^n. For a long time, the only known construction of a curve sampler was to simply pick a random low-degree curve in F^n. Recently, Amnon Ta-Shma and Chris Umans gave an incredibly elegant and simple construction of a curve sampler that is much more randomness-efficient (in fact, it is close to optimal in the number of random bits).

We will show their construction of randomness efficient curve samplers.
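To make the baseline concrete, here is a minimal sketch of the naive curve sampler mentioned above: pick a uniformly random degree-t curve in F^n (one random degree-t polynomial over F = GF(p) per coordinate) and output its values at m field points. All parameter names are illustrative; this is not the Ta-Shma-Umans construction.

    import random

    def random_curve_sampler(p, n, t, m):
        """Return m points of a uniformly random degree-t curve in F_p^n."""
        assert m <= p, "need m distinct evaluation points in F_p"
        # One random degree-t polynomial (t+1 coefficients) per coordinate.
        coeffs = [[random.randrange(p) for _ in range(t + 1)] for _ in range(n)]
        def eval_poly(cs, x):
            y = 0
            for c in reversed(cs):       # Horner's rule mod p
                y = (y * x + c) % p
            return y
        # The curve evaluated at the field points 0, 1, ..., m-1.
        return [tuple(eval_poly(cs, x) for cs in coeffs) for x in range(m)]

    # Example: estimate the density of a set A in F_101^2 from 20 curve points.
    p, n = 101, 2
    in_A = lambda pt: (pt[0] + pt[1]) % p < 30   # a test set of density ~0.3
    pts = random_curve_sampler(p, n, t=3, m=20)
    print(sum(in_A(pt) for pt in pts) / len(pts))

Note that the naive sampler spends n(t+1)·log p random bits on the coefficients; the point of the Ta-Shma-Umans construction is to get away with far fewer.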

 

Remark: In their paper, Ta-Shma and Umans use these new curve samplers to construct objects known as lossless condensers. We will not go into the connection between lossless condensers and curve samplers in the lecture.


 

Title:

On the Randomness Complexity of Samplers, and the Usefulness of Expanders

Speaker:

Or Sheffet

 

In this talk we will discuss several known results regarding the randomness complexity of samplers, derived by the use of expanders. Mainly, they are derived from the two main applications of expanders: the Expander Mixing Lemma (EML) and the Expander Random Walk theorem (ERW). Given a function f:[N]→[0,1], we wish to estimate its average, E[f]. Samplers are probabilistic Turing machines with oracle access to f that, with high probability, produce a close estimate of E[f] using only a few queries to f.

 

We discuss the importance of the randomness complexity of samplers, i.e., the number of random bits they use. As presented in [Gol06], the randomness complexity affects the number of queries a sampler makes when given access to a single weak random source.

We discuss the [CEG95] lower bound on the randomness complexity of samplers. Using the EML, we present a sampler that uses exactly log N random bits (a version of this sampler appears in [Gol97]). Using the ERW, we present Zuckerman's construction of a disperser (and an extractor) [Zuc06].
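As a toy illustration of the EML-based idea (my own simplification, not the exact sampler from [Gol97]): spend log N random bits on a single uniform vertex v of a d-regular expander on [N], and average f over the d neighbors of v; the Expander Mixing Lemma bounds the probability that this average is far from E[f] in terms of the expander's second eigenvalue. For concreteness, this sketch uses the classical 3-regular expander on Z_p with neighbors x+1, x-1, x^{-1} (in a real sampler the degree, i.e., the number of queries, would be much larger):

    import random

    def neighbors(x, p):
        # The classical 3-regular expander on Z_p; by convention 0^{-1} = 0.
        inv = pow(x, p - 2, p) if x != 0 else 0
        return [(x + 1) % p, (x - 1) % p, inv]

    def expander_sample(f, p):
        v = random.randrange(p)          # the only log p random bits used
        nbrs = neighbors(v, p)
        return sum(f(u) for u in nbrs) / len(nbrs)

    # Example: f is the indicator of the even residues in Z_101.
    p = 101
    f = lambda x: 1.0 if x % 2 == 0 else 0.0
    print(expander_sample(f, p))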

 

Students who are new to samplers and/or expanders and the two main tools (EML, ERW) are particularly invited to come and learn. Students who already know the subject are welcome to come, help me with the lecture, and feast (I am also in charge of the refreshments for this talk, and I can guarantee that they will be good...).

 

All results are taken from:

[CEG95]     R. Canetti, G. Even and O. Goldreich, Lower Bounds for Sampling Algorithms for Estimating the Average

[Gol97]      O. Goldreich, A Sample of Samplers: A Computational Perspective on Sampling (survey)

[Gol06]      O. Goldreich, Another Motivation for Reducing the Randomness Complexity of Algorithms

[Zuc06]      D. Zuckerman, Linear Degree Extractors and the Inapproximability of Max Clique and Chromatic Number

 


 

Title:

Derandomizing Arthur-Merlin Games using Hitting Sets

Speaker:

Or Meir



In the last few years there have been a few papers focusing on the derandomization of AM, that is, proving that AM = NP under some hardness assumption. Such a result would make it easier to prove that a given problem is in NP: it would suffice to design a constant-round interactive proof instead of a (non-interactive) NP proof. In particular, it would imply that the Graph Non-Isomorphism problem is in NP, and thus that Graph Isomorphism is in NP∩coNP.


In this talk we will show one such derandomization result for AM, due to Miltersen and Vinodchandran. Their proof is particularly interesting, since it takes a different approach than the usual one. While such derandomization results usually use the hardness assumption to construct sufficiently strong pseudorandom generators and then apply them to obtain the required derandomization, Miltersen and Vinodchandran use hitting set generators ("hitters") instead of pseudorandom generators. They also show that this approach could be used to prove BPP = P, although this would require stronger hardness assumptions than those in the known result of Impagliazzo and Wigderson.
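For reference, a standard definition of a hitting set generator, in my phrasing: G: {0,1}^l -> {0,1}^m is an HSG against size-s circuits if every size-s circuit that accepts at least half of its inputs accepts some output of G:

    \[
      \Pr_{x \in \{0,1\}^m}\bigl[C(x)=1\bigr] \ge \tfrac12
      \;\Longrightarrow\;
      \exists y \in \{0,1\}^l,\; C(G(y)) = 1 .
    \]
    % This is the one-sided analogue of a PRG: it only needs to hit every
    % large set, rather than fool the circuit.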

 

The paper is by P. B. Miltersen and N.V. Vinodchandran, originally from FOCS’99.


 

Title:

Valiant’s Rigidity Argument

Speaker:

Zeev Dvir

 

I will show an approach suggested by Valiant in 1977 for proving super-linear circuit lower bounds. The approach relies on the notion of matrix rigidity. I will define this notion and show how it is related to circuit lower bounds.
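For reference, the standard definition, in my phrasing: the rigidity of a matrix M over a field is the number of entries one must change to bring its rank down to r,

    \[
      R_M(r) \;=\; \min\bigl\{\, |\{(i,j) : M'_{ij} \neq M_{ij}\}| \;:\; \operatorname{rank}(M') \le r \,\bigr\} .
    \]
    % Valiant's observation, roughly: an explicit n x n matrix with
    % R_M(eps*n) >= n^{1+delta} cannot be computed by linear circuits of
    % size O(n) and depth O(log n).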


 

Title:

If NP languages are hard on the worst-case, then it is easy to find their hard instances

Speaker:

Asaf Nussbaum

 

The paper is by Danny Gutfreund, Ronen Shaltiel and Amnon Ta-Shma.

 

Assuming P≠NP, NP-complete problems are hard to solve on worst-case instances. But are there heuristics that successfully solve NP-complete problems on (most) inputs that occur 'in practice'?

 

Due to the difficulty of analyzing real-life distributions, such heuristics are required to perform well under arbitrary 'reasonable' distributions D, where the latter are modeled as (arbitrary) distributions that can be sampled efficiently.

 

The main result of [GSTS] rules out the existence of such heuristics: given an arbitrary algorithm A, they explicitly construct a simple distribution D_A that produces hard instances for A with constant probability. Their result holds both for deterministic heuristics (if NP≠P) and for probabilistic ones (assuming NP≠RP).

 


 

Title:

Zero-One Laws

Speaker:

Asaf Nussbaum

 

Random graphs exhibit remarkable combinatorial structure. In particular, many natural graph properties are satisfied either almost surely or hardly ever (namely, with probability tending to 1 or to 0 as the size of the random graph grows). The famed 0/1-law of random graphs establishes such a phenomenon, not for specific prescribed properties, but for an entire class of graph properties: first-order properties. This means that any graph property specified by a first-order formula (where variables stand for vertices and the only relation is adjacency) holds either almost surely or hardly ever on random G(n,p) graphs, for any constant choice of p. This unexpected connection between combinatorics and logic was proved independently by Fagin (in the West) and by Glebskii, Kogan, Liagonkii and Talanov (in the East). We describe a surprisingly simple variant of Fagin's proof due to Spencer.
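For example (an illustration of mine, not necessarily the one from the talk), "every two distinct vertices have a common neighbor" is a first-order property:

    \[
      \forall x\, \forall y\, \bigl( x \neq y \;\rightarrow\; \exists z\, ( E(x,z) \wedge E(y,z) ) \bigr) ,
    \]
    % and it indeed holds almost surely in G(n,p) for constant p: a fixed
    % pair has no common neighbor with probability (1 - p^2)^{n-2}, which
    % is small enough to survive a union bound over all n(n-1)/2 pairs.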

 


 

Title:

Gowers Uniformity norms, and why we should care

Speaker:

Alex Samorodnitsky

 

We will discuss the notion of Gowers Uniformity norms, and argue that they are natural extensions of the "usual" notion of maximal Fourier coefficient of a function. The latter notion quantifies the best approximation of the function by a linear function. Gowers norms (presumably) do the same for low-degree polynomials.
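For concreteness, the first nontrivial case, in my notation: for f: F_2^n -> {-1, 1},

    \[
      \|f\|_{U^2}^4
      \;=\; \mathbb{E}_{x,h_1,h_2}\bigl[ f(x)\, f(x+h_1)\, f(x+h_2)\, f(x+h_1+h_2) \bigr]
      \;=\; \sum_{\alpha} \hat{f}(\alpha)^4 ,
    \]
    % so max_alpha |f^(alpha)| <= ||f||_{U^2} <= (max_alpha |f^(alpha)|)^{1/2}:
    % the U^2 norm is large exactly when f correlates with some linear
    % character. The higher norms U^k average f over k-dimensional
    % combinatorial cubes, with degree-(k-1) polynomials playing the role
    % that linear functions play for U^2.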

 


 

Title:

Relativization, diagonalization and stuff

Speaker:

Noam Livne

 

In the talk we will try to understand the basic concept of relativization. We will start with a review of the notion of diagonalization and present some non-trivial diagonalizations: the nondeterministic hierarchy theorem and Ladner's theorem (if P≠NP then there is a set in NP \ P that is not NP-complete). We will then see the famous result of Baker, Gill and Solovay: P vs. NP cannot be resolved via diagonalization. We will see how to determine whether a proof relativizes. Finally, we will prove IP = PSPACE. This result surprised the community, since it had been shown before the proof that such a result cannot be obtained via relativizing techniques. The proof was the first truly non-relativizing proof.
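For reference, the Baker-Gill-Solovay theorem in its standard form:

    \[
      \exists A:\ \mathrm{P}^A = \mathrm{NP}^A
      \qquad \text{and} \qquad
      \exists B:\ \mathrm{P}^B \neq \mathrm{NP}^B ,
    \]
    % so any proof technique that relativizes (goes through unchanged when
    % all machines get the same oracle) can settle P vs. NP in neither
    % direction.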


 

Title:

On the Embeddability of Negative Type Metrics into L1, and

integrality-gap preserving reductions

Speaker:

Guy Kindler

 

It turns out that several interesting hard problems, most notably the sparsest cut problem, can be reduced to the problem of finding the best embedding of a metric into L1. In recent years, approximation algorithms for these problems were obtained by taking a metric that corresponds to an instance, embedding it into an L2^2 (negative-type) metric using an SDP, and then proving that any L2^2 metric can be embedded into L1 with small distortion. This kindled interest in the question of how well L2^2 metrics can be embedded into L1.
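For reference, the standard definitions, in my phrasing: a metric d on points x_1,...,x_n is of negative type (an L2^2 metric) if it is the square of a Euclidean metric while still satisfying the triangle inequality,

    \[
      d(x_i, x_j) \;=\; \|u_i - u_j\|_2^2 \quad \text{for some vectors } u_i ,
    \]
    % and an embedding phi into L1 has distortion at most c if, after
    % rescaling,  d(x,y) <= ||phi(x) - phi(y)||_1 <= c * d(x,y)  for all
    % pairs x, y.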

 

It was conjectured by Goemans and Linial that any L2^2 metric embeds into L1 with constant distortion (namely, distortion that does not depend on the number of points in the metric space). I will discuss a paper of Khot and Vishnoi that disproves the conjecture by showing a non-constant integrality gap for the sparsest cut problem. Interestingly, the paper first shows a hardness result for the sparsest cut problem, and then applies the reduction to a special instance to obtain the integrality gap. By applying other hardness reductions (such as the one for max-cut) to this instance, the paper obtains tight integrality gaps for other interesting problems.

 


 

Title:

No better ways to generate hard NP instances than picking uniformly at random

Speaker:

Noam Livne

 

 

In his seminal paper from '84, Levin showed that there exist 'average-case complete problems'. These are NP-complete problems such that, if some problem in NP is hard on average with respect to some distribution from a particular family of distributions (namely, the P-computable distributions), then the complete problem is also hard on average with respect to some distribution from that family.

 

One weakness of this result was that the family of distributions is quite restricted. In other words, in order to establish the hardness of such a complete problem, one has to assume the hardness of NP under some distribution from this restricted family, while it is possible that NP is hard on average only with respect to distributions outside it.
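For reference, the two families, in my phrasing: a distribution ensemble D = {D_n} is P-computable if its cumulative distribution function is efficiently computable,

    \[
      \mu_D(x) \;=\; \sum_{y \le x} \Pr_{D}[\,y\,] \quad \text{computable in time } \mathrm{poly}(|x|) ,
    \]
    % while D is P-samplable if some probabilistic polynomial-time machine
    % on input 1^n outputs a sample distributed according to D_n. Every
    % P-computable ensemble is P-samplable, but the converse is believed
    % to fail, which is why the extension below is meaningful.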

 

In a paper from '90 bearing the same title as this talk, Impagliazzo and Levin showed that it is in fact sufficient to assume the hardness of NP under samplable distributions (presumably a much wider family of distributions) in order to establish the hardness of these complete problems. That is, a problem complete for NP with P-computable distributions is also complete for NP with samplable distributions.

 


 

Title:

PRIMES is in P

Speaker:

Shachar Lovett

 

PRIMES is the following decision problem: Is the given number prime or composite?

 

The first deterministic polynomial-time algorithm for PRIMES was found in 1976 by Gary Miller, but his algorithm assumed the Generalized Riemann Hypothesis, an unproven conjecture in number theory.

In 1980, Michael Rabin found a randomized variant of Miller's algorithm that works under no unproven conjectures; the resulting Miller-Rabin algorithm shows that PRIMES is in coRP.

In 1983, a breakthrough was achieved when Adleman, Pomerance and Rumely gave a deterministic algorithm, based on Miller's, that on input n runs in time (log n)^{O(log log log n)}.

Yet no deterministic algorithm running in time poly(log n) was known.

 

This was the state of affairs until 2002, when Manindra Agrawal, Neeraj Kayal and Nitin Saxena discovered a deterministic polynomial-time algorithm for PRIMES that does not depend on any unproven conjectures, thus establishing that PRIMES is in P.
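As a concrete reference point, a minimal sketch of the Miller-Rabin test mentioned above (the standard textbook version; a composite input is accepted with probability at most 4^{-rounds}):

    import random

    def is_probable_prime(n, rounds=20):
        """Miller-Rabin: always accepts primes; rejects composites w.h.p."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = 2^s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False         # a is a witness that n is composite
        return True

    print(is_probable_prime(2**61 - 1))   # a Mersenne prime -> True

The AKS algorithm achieves the same answer deterministically, in time polynomial in log n.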

 

 


 

Title:

Geometry and Approximation Algorithms for NP-hard Optimization Problems

Speaker:

Sanjeev Arora, Princeton

 

Semidefinite programming (SDP) has been key to the design of approximation algorithms for NP-hard problems.

Analysis of SDP seems to call for geometric arguments, old and new.

In the first half of the talk I will survey this area broadly, focusing on what role geometry plays.

In the second half of the talk, I will give details of the main result from my joint paper with Rao and Vazirani in 2004, which resulted in new approximation algorithms for graph partitioning problems such as sparsest cut. I will also outline how this result has proved useful in other applications, including a new embedding of l1 into l2 that improves Bourgain's bound of O(log n) from 1985 to almost O(√log n).

 

The presentation will be self-contained.


 

Title:

Old and New results & problems on multi-party communication

Speaker:

Avi Wigderson, Institute for Advanced Study

 

In the talk I'll discuss the "number-on-forehead" model of communication, in which k players attempt to jointly compute a function of k arguments, where each player sees k-1 of them. This nontrivial generalization of the standard 2-party communication model, introduced by Chandra, Furst and Lipton, captures many computational models, and lower bounds for it had (and will have) important implications. I'll describe some of this motivation, old results, and some new work with Emanuele Viola, especially on the 1-way and bounded-round versions of this model, as well as the possibility of a strong direct-product theorem for it.

 


 

Title:

Near-Optimal Algorithms for Unique Games

Speaker:

Eden Chlamtac, Princeton

 

 

The Unique Games Problem is a constraint satisfaction problem over arbitrary domains that can be viewed as a generalization of MAX-CUT and of systems of two-variable linear equations over finite fields.
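Concretely, in standard notation (mine, not necessarily the speaker's): an instance consists of variables x_u ranging over [k], one per vertex of a constraint graph, and for each edge (u,v) a bijective constraint

    \[
      x_v \;=\; \pi_{uv}(x_u), \qquad \pi_{uv} \text{ a permutation of } [k] ,
    \]
    % the goal being to satisfy as many edges as possible. Constraints of
    % the form x_u - x_v = c (mod k) give two-variable linear equations,
    % and k = 2 captures MAX-CUT-like problems.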

The Unique Games Conjecture (UGC) states that it is NP-hard to distinguish between UG instances where almost all constraints are simultaneously satisfiable and ones where very few constraints are satisfiable. This conjecture has been shown to imply a number of inapproximability results which are not known to follow from more standard complexity assumptions.

 

I will present a recent paper (STOC 06) of Charikar, Makarychev and Makarychev, in which they give two approximation algorithms for Unique Games. The approximation guarantees in both algorithms come very close to hardness results which follow from UGC, as was shown by Khot, Kindler, Mossel and O'Donnell.

 


 

Title:

Coloring, Independent Sets and SDP Hierarchies

Speaker:

Eden Chlamtac, Princeton

 

 

In this talk I will discuss two related problems. First, given a graph with chromatic number k (but without an explicit k-coloring), find a legal coloring that uses as few colors as possible. Second, given a graph containing an independent set of size γn (for a positive constant γ), find as large an independent set as possible. Both problems may also be considered in the hypergraph setting.

 

Much of the work on these problems has involved the use of semidefinite programming (SDP). While this approach seems to have inherent limitations (as demonstrated by lower bounds on integrality gaps), there is hope that hierarchies of SDP relaxations may lead to better approximation algorithms. (This is not clear a priori: for other problems, such as Vertex Cover, this possibility has been strongly ruled out.)
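As one illustrative example of the SDP approach (the classical Karger-Motwani-Sudan relaxation, not necessarily one of the relaxations from the talk): a graph is vector k-colorable if one can assign unit vectors v_i to vertices with

    \[
      \langle v_i, v_j \rangle \;\le\; -\frac{1}{k-1} \qquad \text{for every edge } (i,j) .
    \]
    % Any k-colorable graph is vector k-colorable (send the k color classes
    % to the vertices of a regular simplex), and the relaxation is solvable
    % in polynomial time. Hierarchies such as Lasserre's strengthen it by
    % adding consistency constraints on higher-order moments.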

 

I will discuss some classical SDP-based algorithms for both problems in the graph and hypergraph setting, as well as recent work where I have shown some improvement over previous algorithms (in certain settings) using a constant level of the Lasserre SDP hierarchy.

 


 

Title:

Reconstructive Dispersers and Hitting Set Generators

Speaker:

Or Meir

 

Pseudorandom Generators are often viewed as the "computational analogue" of extractors. In fact, it is known that every "black-box" construction of a PRG yields an extractor. This raises the natural question of whether every construction of an extractor yields a PRG. In this talk we will give an answer to this question for Hitting Set Generators and dispersers, which are the one-sided error versions of PRGs and extractors, respectively. More specifically, the paper defines a broad family of dispersers (called "Reconstructive Dispersers"), such that any construction of a reconstructive disperser yields an HSG.
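For reference, the standard definition, in my phrasing: a function Disp: {0,1}^n x {0,1}^d -> {0,1}^m is a (k, ε)-disperser if, for every source X with min-entropy at least k,

    \[
      \bigl| \operatorname{supp}\bigl(\mathrm{Disp}(X, U_d)\bigr) \bigr| \;\ge\; (1 - \varepsilon)\, 2^m ,
    \]
    % i.e., the output hits all but an eps-fraction of {0,1}^m, whereas an
    % extractor's output must be eps-close to uniform; this is the same
    % one-sided weakening that takes one from PRGs to HSGs.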

 


 

Title:

Multipoint evaluation of multivariate polynomials in small characteristic

Speaker:

Zeev Dvir

 

 

I will present a new algorithm by Chris Umans that solves the following problem: given a multivariate polynomial f(X_1,...,X_m) with individual degrees < d, and N = d^m evaluation points a_1,...,a_N in (F_q)^m, compute all N values f(a_j). The new algorithm runs in time almost linear in N when the characteristic of the field is sufficiently small (previous algorithms for this problem required time >> N^{1.5}). One application of this new algorithm is an optimal (up to logarithmic factors) algorithm for modular composition of univariate polynomials (computing f(g(x)) mod h(x) for univariate f, g, h). This in turn improves the running time of the best polynomial factorization algorithm.
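To fix the problem statement, here is the naive baseline (illustrative code of mine): evaluating f monomial by monomial at each of the N points costs roughly N times the number of monomials, i.e., up to about N^2 field operations, which is the kind of bound the new algorithm beats.

    # Naive multipoint evaluation over F_q. f is a dict mapping exponent
    # tuples (e_1,...,e_m), each e_i < d, to coefficients in F_q.
    def evaluate_all(f, points, q):
        values = []
        for a in points:                   # N = d^m points
            total = 0
            for exps, coef in f.items():   # up to d^m monomials
                term = coef
                for x, e in zip(a, exps):
                    term = (term * pow(x, e, q)) % q
                total = (total + term) % q
            values.append(total)
        return values

    # Example: f(X1, X2) = X1*X2 + 3*X2^2 over F_5, on all of (F_5)^2.
    q = 5
    f = {(1, 1): 1, (0, 2): 3}
    points = [(x, y) for x in range(q) for y in range(q)]
    print(evaluate_all(f, points, q))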

The nice thing about this algorithm is that it uses the same trick that was used both in the list-decoding algorithm of Guruswami and Rudra and in the recent extractor construction of Guruswami, Umans and Vadhan. This makes it the third time this trick has been used to give an almost optimal solution to a long-standing open problem. The presentation of the algorithm will assume some familiarity with field extensions of F_q.