In image analysis, segmentation is the partitioning of a digital image into multiple regions (sets of pixels) according to a given criterion. The goal of segmentation is typically to locate objects of interest, and it is sometimes considered a computer vision problem. Unfortunately, many important segmentation algorithms are too simple to solve this problem accurately; they compensate for this limitation with their predictability, generality, and efficiency.
Pixel-based segmentation
A simple example of this kind of segmentation is thresholding a grayscale image with a fixed threshold t: each pixel p is assigned to one of two classes, P0 or P1, depending on whether I(p) < t or I(p) ≥ t. Other segmentation algorithms divide images into regions of similar texture, for example according to wavelet or Fourier transforms.
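The fixed-threshold rule above can be sketched in a few lines of numpy; the image values and the threshold t = 100 here are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical 4x4 grayscale image with intensities in [0, 255].
image = np.array([
    [ 10,  20, 200, 210],
    [ 15,  25, 220, 230],
    [ 12, 180, 190,  30],
    [ 11,  14,  16,  18],
])

t = 100  # fixed threshold

# Each pixel p is assigned to class P1 (label 1) if I(p) >= t,
# otherwise to class P0 (label 0).
labels = (image >= t).astype(np.uint8)
```

In practice the threshold is often chosen automatically (e.g. from the image histogram) rather than fixed in advance.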
Region merging: quad-tree/oct-tree: a recursive algorithm: the picture is divided into 4 (or 8) parts; if a resulting subpicture does not meet a homogeneity criterion, it is divided further, and so on. The resulting data structure is a quadtree/octree, from which the merging process can be performed.
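The recursive splitting phase can be sketched as follows; the homogeneity criterion used here (intensity range at most a tolerance `tol`) is just one possible choice, and the leaf representation `(row, col, size)` is an illustrative simplification of a full quadtree node:

```python
import numpy as np

def quadtree_split(img, r, c, size, tol, leaves):
    """Recursively split a square block until it is homogeneous.

    A block counts as homogeneous when max - min intensity <= tol
    (other criteria, e.g. variance, work the same way)."""
    block = img[r:r+size, c:c+size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((r, c, size))   # leaf of the quadtree
        return
    h = size // 2
    for dr in (0, h):                 # recurse into the 4 quadrants
        for dc in (0, h):
            quadtree_split(img, r + dr, c + dc, h, tol, leaves)

img = np.array([
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])
leaves = []
quadtree_split(img, 0, 0, 4, tol=0, leaves=leaves)
# The root block is inhomogeneous, so it is split once into four
# 2x2 quadrants, each of which is homogeneous.
```

The subsequent merging phase would then walk these leaves and fuse adjacent blocks whose combined region still satisfies the criterion.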
Model-based segmentation
Models are influenced by inner forces (in the ideal case, a circle) and by forces computed from the image data, which pull the model towards the object boundary. Statistical models: if the object to be segmented is known beforehand, a statistical model can be used to serve as a template.
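A minimal sketch of the template idea, using plain sum-of-squared-differences matching as a crude stand-in for a learned statistical model; the image, template, and function name are all illustrative assumptions:

```python
import numpy as np

def match_template(img, tmpl):
    """Slide the template over the image and return the top-left
    position with the smallest sum of squared differences."""
    H, W = img.shape
    h, w = tmpl.shape
    best, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((img[r:r+h, c:c+w] - tmpl) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = np.zeros((5, 5))
img[2:4, 1:3] = 1.0          # the "object" to be located
tmpl = np.ones((2, 2))       # template of the known object shape
# match_template(img, tmpl) -> (2, 1)
```

A real statistical model would additionally encode allowed variation in shape and appearance rather than a single rigid template.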
Multi-scale segmentation
Image segmentations are computed at multiple scales in scale-space and sometimes propagated from coarse to fine scales; see scale-space segmentation.
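The coarse-to-fine idea can be illustrated with the same thresholding rule applied after smoothing; the box filter below is a crude stand-in for the Gaussian scale-space kernel, and the image and scale are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Smooth with a k x k box filter (a simple substitute for
    Gaussian smoothing at a coarser scale)."""
    H, W = img.shape
    out = np.empty((H, W))
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    for r in range(H):
        for c in range(W):
            out[r, c] = padded[r:r+k, c:c+k].mean()
    return out

noisy = np.zeros((8, 8))
noisy[2:6, 2:6] = 1.0        # the object
noisy[0, 7] = 1.0            # an isolated noise pixel

fine = noisy >= 0.5                 # segmentation at the finest scale
coarse = box_blur(noisy, 3) >= 0.5  # segmentation at a coarser scale
# The noise pixel survives at the fine scale but disappears at the
# coarse scale, while the object's interior is preserved.
```

Propagating such a coarse result back down to finer scales lets the fine segmentation keep detail near true boundaries while discarding spurious small regions.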
Segmentation criteria can be arbitrarily complex and may take into account global as well as local criteria. A common requirement is that each region must be connected in some sense.