Many real-life data sets can be modeled as a collection of points together with the distances between them, a mathematical object called a "metric space". Indeed, the points often represent objects such as documents, images, or DNA sequences, and the distances represent the dissimilarity between objects. Typical computational tasks on such data, for instance matching a query object against the data set, had previously been solved by algorithms highly specialized to the particular choice of distances. In 2003, Weizmann scientists introduced a notion of dimension that applies to all metric spaces and generalizes many well-known special cases, such as the dimension of a Euclidean space. They further showed that this dimension controls the computational efficiency of matching a query against the data. Their work provided a solid mathematical foundation, on which Weizmann scientists and many other experts around the world have built over the last decade, creating a large body of work that extends this methodology to many other computational tasks, including routing messages in computer networks, optimizing delivery routes, and statistical inference and learning for classification.
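For concreteness, one standard notion fitting this description is the doubling dimension; whether this is the exact notion used in the 2003 work is an assumption here. A metric space $(X, d)$ has doubling constant $\lambda$ if every ball can be covered by $\lambda$ balls of half its radius, and its doubling dimension is then $\log_2 \lambda$:

\[
\lambda(X) = \min\bigl\{\lambda : \text{every ball } B(x, 2r) \text{ can be covered by } \lambda \text{ balls of radius } r\bigr\},
\qquad \dim(X) = \log_2 \lambda(X).
\]

For the Euclidean space $\mathbb{R}^k$ this yields a dimension proportional to $k$, recovering the usual notion, while for an arbitrary data set it measures how quickly the points spread out across scales.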
This methodology has proved to be a powerful algorithmic paradigm, useful not only for designing a host of new algorithms but also for explaining the empirical success of known ones.