Mini-Symposium on Multiscale and Diffusion
Abstracts
Adaptive Multiscale Image Segmentation and Anisotropic Diffusion
Meirav Galun, Weizmann Institute
Segmentation and denoising are important tasks in image analysis. One
approach to these tasks is via the Segmentation by Weighted Aggregation
algorithm (SWA). Inspired by algebraic multigrid (AMG), the SWA algorithm
constructs in an adaptive way multiscale representations of the input image,
useful for denoising and segmentation. In this talk we show that the
coarsening approach of AMG is intimately related to anisotropic
diffusion. Specifically, we show both theoretically and experimentally that
the composition of the (two-level) algebraic multigrid restriction and prolongation
operators implements a low-rank variation of anisotropic diffusion. We further compare
its denoising properties to those of the classical anisotropic diffusion on
smooth and step signals.
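For reference, the classical anisotropic diffusion against which the low-rank variant is compared can be sketched in 1D in the Perona-Malik form; the time step, edge threshold, and test signal below are illustrative choices, not parameters from the talk.

```python
import numpy as np

def perona_malik_1d(u, n_steps=50, dt=0.1, k=0.5):
    """Classical (Perona-Malik) anisotropic diffusion on a 1D signal.

    The diffusivity g = 1 / (1 + (u'/k)^2) shrinks near large
    gradients, so edges are smoothed far less than flat regions.
    """
    u = u.astype(float).copy()
    for _ in range(n_steps):
        du = np.diff(u)                   # forward differences, length n-1
        g = 1.0 / (1.0 + (du / k) ** 2)   # edge-stopping diffusivity
        flux = g * du
        # divergence of the flux with zero-flux (Neumann) boundaries
        u[1:-1] += dt * (flux[1:] - flux[:-1])
        u[0] += dt * flux[0]
        u[-1] -= dt * flux[-1]
    return u

# noisy step: the diffusion removes noise but preserves the jump
rng = np.random.default_rng(0)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + 0.05 * rng.standard_normal(100)
denoised = perona_malik_1d(noisy)
```

On the flat parts the diffusivity is near one and the noise is averaged out; across the jump the gradient exceeds k, the diffusivity collapses, and the step survives.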
Nonlinear Multigrid Revisited
Irad Yavneh, Technion
Multigrid algorithms for discretized nonlinear partial differential equations
and systems are nearly as old as multigrid itself. Over the years several approaches
and variants of nonlinear multigrid algorithms have been developed. Typically,
for relatively easy problems the different approaches exhibit similar performance.
However, for difficult problems the behavior varies, and it is not easy to predict
which approach may prevail. In this talk we will consider nonlinear multigrid,
focusing on the task of coarse-grid correction, in a general framework of variational
coarsening. Such a view reveals clear relations between the various existing
approaches and may suggest future variants. This study also sheds light on the
choice of inter-grid transfer operators, which are so important for obtaining
fast multigrid convergence, and which have received much attention in linear
multigrid algorithms but far less so in nonlinear multigrid.
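The variational-coarsening framework can be illustrated in its simplest linear form, where the coarse operator is the Galerkin product A_c = P^T A P and the coarse-grid correction minimizes the energy-norm error over the range of the prolongation P. The 1D Poisson model problem and grid sizes below are an illustrative stand-in, not material from the talk.

```python
import numpy as np

def galerkin_coarse_operator(A, P):
    """Variational (Galerkin) coarsening: A_c = P^T A P.

    With restriction R = P^T, the correction
    u <- u + P A_c^{-1} R (f - A u) makes the new error
    A-orthogonal to range(P) when A is symmetric positive definite.
    """
    return P.T @ A @ P

# 1D Poisson matrix on n interior fine-grid points (n odd)
n = 7
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# linear-interpolation prolongation from nc = (n-1)//2 coarse points
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    i = 2 * j + 1          # fine index of coarse point j
    P[i, j] = 1.0
    P[i - 1, j] = 0.5
    P[i + 1, j] = 0.5

Ac = galerkin_coarse_operator(A, P)

# one two-level coarse-grid correction of the residual equation
f = np.ones(n)
u = np.zeros(n)
u = u + P @ np.linalg.solve(Ac, P.T @ (f - A @ u))
```

For this model problem the Galerkin product reproduces the coarse Poisson operator (up to scaling), and after the correction the restricted residual vanishes identically.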
Blue-Noise Point Sampling Using Kernel Density Model
Raanan Fattal, Hebrew University
Stochastic point distributions with blue-noise spectrum are used extensively
in numerical analysis for function evaluation and computer graphics for
various applications such as avoiding aliasing artifacts in ray tracing,
halftoning, stippling, etc. In this talk we present a new approach for
generating point sets with high-quality blue noise properties that
formulates the problem as a statistical mechanics particle model and
produces the points by sampling this model. This formulation unifies
randomness with the requirement of equidistant point spacing, which leads to
the enhanced blue noise spectral properties. We derive a highly efficient
multi-scale sampling scheme to draw random point distributions from this
model and avoid the critical slowing-down phenomenon that plagues this type
of model. This derivation is accompanied by a model-specific analysis.
Altogether, our approach generates high-quality point distributions,
supports variable spatial point density, and runs in time that is linear in
the number of points generated.
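The minimum-spacing property that underlies blue-noise spectra can be illustrated with classical dart throwing, a quadratic-time baseline that simply rejects points closer than a given radius; it is not the statistical-mechanics sampler of the talk, whose multiscale scheme is what makes large point counts practical.

```python
import numpy as np

def dart_throwing(r_min, n_target, max_tries=20000, seed=0):
    """Naive Poisson-disk sampling in the unit square.

    Every accepted point lies at least r_min from all earlier points,
    the equidistant-spacing property shared by blue-noise
    distributions.  A quadratic-time baseline, for illustration only.
    """
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(max_tries):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= r_min for q in pts):
            pts.append(p)
            if len(pts) == n_target:
                break
    return np.array(pts)

pts = dart_throwing(r_min=0.05, n_target=100)
```

The rejection rate grows as the domain fills, which is one reason sequential schemes like this do not scale linearly in the number of points.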
Natural Image Denoising by Non-Local Means, Diffusion and Optimality
Boaz Nadler, Weizmann Institute
The non-local means algorithm is considered one of the
state-of-the-art image denoising algorithms. In this talk we will first
provide two different interpretations of non-local means: the first is a
probabilistic interpretation in terms of a diffusion process in patch
space. The second interpretation is as a proxy to the minimal mean squared
error estimator within a Bayesian framework. Both interpretations have
interesting consequences regarding the performance of NL-means and
fundamental limitations for natural image denoising.
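The diffusion-in-patch-space view can be made concrete: the row-normalized NL-means weight matrix is a Markov (diffusion) operator acting on the signal. A minimal 1D sketch follows; the patch size, bandwidth h, and test signal are illustrative choices, not values from the talk.

```python
import numpy as np

def nl_means_1d(y, patch=3, h=0.15):
    """Non-local means on a 1D signal.

    Each sample becomes a weighted average of all samples, with
    weights given by a Gaussian kernel on distances between the
    surrounding patches.  The row-stochastic weight matrix W is one
    step of a diffusion process in patch space.
    """
    n = len(y)
    pad = np.pad(y, patch, mode="reflect")
    # patches[i] is the window of width 2*patch+1 centered at sample i
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    W = np.exp(-d2 / h ** 2)
    W /= W.sum(axis=1, keepdims=True)   # row-stochastic diffusion operator
    return W @ y

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.1 * rng.standard_normal(200)
denoised = nl_means_1d(noisy)
```

Because similar patches recur across the signal, averaging over them suppresses the noise while introducing little bias.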
Variational Regularization of Lie Groups and their Cosets
Nir Sochen, Tel Aviv University
In some recent applications the objects of interest are fields of
matrices. Diffusion Tensor MRI, or DTI for short, is an example of
such an application. Sets of matrices with the usual matrix multiplication
are often Lie groups or quotients thereof. In this talk we will present
variational regularization of such objects via anisotropic diffusion and the
Beltrami framework. Applications to neuroimaging will be presented.
Heat Diffusion Descriptors for Deformable Shapes
Michael Bronstein, Technion
Large databases of 3D models available in the public domain have created the
demand for shape search and retrieval algorithms capable of finding similar
shapes in the same way a search engine responds to text queries. Since many
shapes manifest rich variability, shape retrieval is often required to be
invariant to different classes of transformations and shape variations. One
of the most challenging settings is the case of non-rigid shapes, in which
the class of transformations may be very wide due to the capability of such
shapes to bend and assume different forms. In this talk, we will explore
approaches to 3D shape retrieval analogous to feature-based representations
popular in the computer vision community. We will show how to construct
invariant local feature descriptors based on heat diffusion in order to
represent 3D shapes as collections of geometric "words" and "expressions"
and how to adopt methods employed in search engines for efficient indexing
and search of shapes. To conclude, we will show "Shape Google", a prototype
search engine for deformable objects. (Based on joint work with M.
Ovsjanikov, L. Guibas, A. Bronstein, I. Kokkinos, D. Raviv and R. Kimmel)
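One standard heat-diffusion descriptor of this kind is the heat kernel signature, HKS(x, t) = Σ_i e^{-λ_i t} φ_i(x)², built from eigenpairs of the Laplacian; since it depends only on the spectrum, it is invariant to isometric deformations. The sketch below uses a toy graph Laplacian rather than a mesh Laplace-Beltrami operator, and the graph and diffusion times are illustrative choices.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """Heat kernel signature HKS(x, t) = sum_i exp(-l_i t) phi_i(x)^2,

    where (l_i, phi_i) are the eigenpairs of the Laplacian L.  The
    value at (x, t) is the amount of heat remaining at x after time t
    when a unit of heat is placed at x -- an isometry-invariant
    multiscale description of the geometry around x.
    """
    lam, phi = np.linalg.eigh(L)
    # rows: vertices, columns: diffusion times
    return (phi ** 2) @ np.exp(-np.outer(lam, times))

# toy "shape": graph Laplacian of an 8-cycle with one chord
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A[0, 4] = A[4, 0] = 1.0      # chord distinguishes vertices from each other
L = np.diag(A.sum(axis=1)) - A

hks = heat_kernel_signature(L, times=np.array([0.1, 1.0, 10.0]))
```

Vertices related by a symmetry of the graph receive identical signatures, which is exactly the invariance that makes such descriptors usable as geometric "words".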
Detecting Faint Edges in Noisy Images
Ronen Basri, Weizmann Institute
One of the most intensively studied problems in image processing
concerns how to detect edges in images. Edges are important since they mark
the locations of discontinuities in depth, surface orientation, or
reflectance, and their detection can facilitate a variety of applications
including image segmentation and object recognition. Accurate detection of
faint, low-contrast edges in noisy images is challenging. Optimal detection
of such edges can potentially be achieved if we use filters that match
the shapes, lengths, and orientations of the sought edges. This however
requires search in the space of continuous curves. In this talk we explore
the limits of detectability, taking into account the lengths of edges and
their combinatorics. We further construct two efficient multi-level
algorithms for edge detection. The first algorithm uses a family of
rectangular filters of variable lengths and orientations. The second
algorithm uses a family of curved filters constructed through
a dynamic-programming-like procedure using a modified beamlet transform. We
demonstrate the power of these algorithms in applications to both noisy and
natural images, showing state-of-the-art results.
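The gain from length-matched filters can be illustrated with the simplest case, a straight vertical step: averaging the difference filter over a longer strip shrinks the noise standard deviation like 1/sqrt(length), so the detectable contrast falls accordingly. The image sizes, contrast, and filter lengths below are illustrative choices, not the algorithms of the talk.

```python
import numpy as np

def step_filter_response(img, r0, c, length):
    """Matched filter for a vertical step at column boundary c:
    the difference of the means of two vertical strips of `length`
    pixels.  For i.i.d. noise of std sigma the response has std
    sigma * sqrt(2 / length), so the SNR of an edge of contrast a
    grows like a * sqrt(length) / sigma.
    """
    left = img[r0:r0 + length, c - 1].mean()
    right = img[r0:r0 + length, c].mean()
    return right - left

# a clean faint step: the filter recovers the contrast exactly
clean = np.zeros((256, 8))
clean[:, 4:] = 0.2
resp_clean = step_filter_response(clean, 0, 4, 256)

# on pure noise, the response std shrinks as 1/sqrt(length)
rng = np.random.default_rng(0)
noise_resp = {L: [step_filter_response(rng.standard_normal((256, 8)), 0, 4, L)
                  for _ in range(400)]
              for L in (4, 256)}
```

Curved edges break this simple picture, which is why the curved filters of the second algorithm require a search over the space of curves rather than a single strip orientation.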