The Weizmann Institute of Science
Faculty of Mathematics and Computer Science
Computer Vision Lab



What is a Good Image Segment?
A Unified Approach to Segment Extraction

Shai Bagon, Oren Boiman, Michal Irani *

* Author names are ordered alphabetically due to equal contribution

This webpage presents the paper "What is a Good Image Segment? A Unified Approach to Segment Extraction" (ECCV 2008).
Paper [PDF] [bibtex]
Conference slides [PPS]

Abstract

There is a huge diversity in possible definitions of “visually meaningful” image segments, ranging from simple uniformly colored segments, textured segments, through symmetric patterns, and up to complex semantically meaningful objects. This diversity has led to a wide range of different approaches for image segmentation. In this paper we present a single unified framework for addressing this problem – “Segmentation by Composition”. We define a good image segment as one which can be easily composed using its own pieces, but is difficult to compose using pieces from other parts of the image. This non-parametric approach captures a large diversity of segment types, yet requires no pre-definition or modelling of segment types, nor prior training. Based on this definition, we develop a segment extraction algorithm – i.e., given a single point-of-interest, provide the “best” image segment containing that point. This induces a figure-ground image segmentation, which applies to a range of different segmentation tasks: single image segmentation, simultaneous co-segmentation of several images, and class-based segmentations.

What is a Good Image Segment?

There is a huge diversity in possible definitions of what is a good image segment, as can be seen in the following images:
In the simplest case, a uniformly colored region may be a good image segment (the flower in the rightmost image). In other cases, a good segment may be a textured region (second and third images from the right), and at the far end of the spectrum, a complex object (the puffins on the left and the butterfly).
This diversity in segment types has led to a wide range of approaches to image segmentation: algorithms for extracting uniformly colored regions, algorithms for extracting textured regions, and algorithms for extracting regions with a distinct empirical color distribution. Some algorithms employ symmetry cues for segmentation, while others use high-level semantic cues provided by object classes (i.e., class-based segmentation).

Segmentation by Composition

In this work we propose a single unified approach to define and extract visually meaningful image segments, without any explicit modelling. Our approach defines a “good image segment” as one which is “easy to compose” (like a puzzle) using its own parts, yet difficult to compose from other parts of the image:
“Easy to compose”: few large “puzzle pieces”
“Hard to compose”: many small “puzzle pieces”
The fewer and larger the “puzzle pieces” needed to compose a segment from its own parts, the easier the composition, and hence the more visually meaningful that segment is.
This approach captures a wide range of segment types: from uniformly colored segments, through textured segments, up to complex objects.
Our approach builds on the work of Boiman and Irani, “Similarity by Composition” (NIPS 2006).
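To make the intuition concrete, here is a highly simplified sketch of a composition-style score. It is not the algorithm from the paper (which composes a segment from large, arbitrarily shaped regions of itself); it only compares fixed-size patches, and all function names and parameters below are illustrative:

import numpy as np

def extract_patches(image, mask, patch_size=7, stride=7):
    # Collect fixed-size patches whose centers fall inside / outside the mask.
    half = patch_size // 2
    inside, outside = [], []
    h, w = image.shape[:2]
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            patch = image[y - half:y + half + 1, x - half:x + half + 1].ravel()
            (inside if mask[y, x] else outside).append(patch)
    return np.asarray(inside, dtype=float), np.asarray(outside, dtype=float)

def composition_score(image, mask, patch_size=7, stride=7):
    # Toy stand-in for "ease of composition": the candidate segment scores high
    # when each of its patches is explained better by other patches of the
    # segment (self-similarity) than by patches from the rest of the image.
    inside, outside = extract_patches(image, mask, patch_size, stride)
    if len(inside) < 2 or len(outside) == 0:
        return 0.0
    score = 0.0
    for i, q in enumerate(inside):
        d_in = np.delete(np.linalg.norm(inside - q, axis=1), i).min()  # best match within the segment
        d_out = np.linalg.norm(outside - q, axis=1).min()              # best match outside the segment
        score += d_out - d_in
    return score / len(inside)

A segment-extraction procedure could then grow a mask around the user-selected point-of-interest so as to maximize such a score; see the paper for the actual score and algorithm.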

Results

Single Image Segmentation

Images were taken from the Segmentation Evaluation Database of Alpert et al., the Berkeley Segmentation Dataset, and the GrabCut database. Our algorithm was evaluated on the benchmark database of Alpert et al. The total F-measure score of our algorithm was 0.87±0.01, which is currently state-of-the-art on this database.
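For reference, a minimal sketch of how an F-measure between a recovered figure-ground mask and a ground-truth mask can be computed (the benchmark's own evaluation protocol should be consulted for the exact scoring details):

import numpy as np

def f_measure(pred_mask, gt_mask):
    # Precision and recall of the predicted foreground pixels against ground truth.
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)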
Results (left to right): input image; user-selected point (green) and the recovered segmentation (red); recovered figure-ground segmentation.

Cosegmentation

Cosegmentation of image pairs, with comparison to the cosegmentation method of Rother et al.
Results (left to right): input image pair; user-selected point-of-interest; recovered figure-ground segmentation.

Class-Based Segmentation

Class-based segmentation is performed by appending example images to the foreground reference (Ref = S). See the paper for more details.
Results (left to right): input image; example images; point-of-interest; recovered figure-ground segmentation.
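As a rough illustration (continuing the toy patch representation sketched above, with illustrative names only), appending example images simply enlarges the foreground reference set from which the segment may be composed:

import numpy as np

def class_based_foreground_reference(segment_patches, example_patches_list):
    # Illustrative only: the foreground reference is no longer just the
    # segment's own patches (Ref = S), but also patches drawn from example
    # images of the class; the background reference remains the rest of the image.
    return np.vstack([segment_patches] + list(example_patches_list))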