In computer vision and image processing, feature detection refers to methods that compute abstractions of image information and make a local decision at every image point as to whether an image feature of a given type is present at that point.
Definition of a feature
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Broadly, a feature is an "interesting" part of an image, and features are used as a starting point for many computer vision algorithms. Since features serve as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property of a feature detector is repeatability: whether the same feature will be detected in two or more different images of the same scene.
Feature detection is a low-level image processing operation. That is, it is usually performed as the first operation on an image, and it examines every pixel to see whether a feature is present there. When feature detection is part of a larger algorithm, the larger algorithm will typically examine the image only in the region of the detected features.
Occasionally, when feature detection is computationally expensive and there are time constraints, a higher level algorithm may be used to guide the feature detection stage, so that only certain parts of the image are searched for features.
Many computer vision algorithms use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of features detected, computational complexity, and repeatability, but they can be divided (with some overlap) into several groups.
Types of image features
Edges
Edges are points where there is a boundary (or edge) between two image regions. In general, an edge can be of arbitrary shape, and may include junctions. In practice, edges are usually defined as points in the image which have a strong gradient. Furthermore, some common algorithms will then chain high-gradient points together to form a more complete description of an edge. These algorithms may place some constraints on the shape of an edge. Locally, edges have a one-dimensional structure.
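The "strong gradient" definition above can be sketched in a few lines: approximate the image gradient with central differences, then mark every pixel whose gradient magnitude exceeds a threshold. This is an illustrative sketch (the threshold and the difference scheme are arbitrary choices for this example), not any particular named edge detector.

```python
def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences (borders skipped)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def edge_points(img, threshold):
    """Local decision at every pixel: is there an edge at this point?"""
    mag = gradient_magnitude(img)
    return [(x, y)
            for y, row in enumerate(mag)
            for x, m in enumerate(row)
            if m > threshold]

# A tiny image with a vertical step: dark on the left, bright on the right.
step = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
print(edge_points(step, 1.0))
```

Note how the detected points line up along the step boundary, reflecting the locally one-dimensional structure of an edge.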
Corners / interest points
These terms are used somewhat interchangeably and refer to point-like features in an image which have a local two-dimensional structure. The name "corner" arose because early algorithms first performed edge detection and then analysed the edges to find rapid changes in direction (corners). These algorithms were later developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient. It was then noticed that the so-called corners were also being detected on parts of the image which were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.
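The idea of detecting corners from the image gradient without explicit edge detection can be sketched with the Harris & Stephens measure listed in the table below: accumulate the products of the gradient components over a small window and compute the response R = det(M) - k·trace(M)². The window size and the constant k = 0.04 are conventional choices, not prescribed here.

```python
def harris_response(img, k=0.04, win=1):
    """Harris & Stephens corner response over a (2*win+1)^2 window."""
    h, w = len(img), len(img[0])
    # Central-difference gradients (borders left at zero).
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    # R = det(M) - k * trace(M)^2, where M is the local structure tensor.
    R = [[0.0] * w for _ in range(h)]
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            sxx = syy = sxy = 0.0
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    ix, iy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    sxx += ix * ix
                    syy += iy * iy
                    sxy += ix * iy
            R[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return R

# A bright square occupying the lower-right quadrant of a dark image.
img = [[9 if x >= 4 and y >= 4 else 0 for x in range(9)] for y in range(9)]
R = harris_response(img)
# The square's corner vs. a point further along its straight edge.
print(R[4][4], R[6][4])
```

A point with two-dimensional structure (the corner of the square) gives a positive response, while a point on the straight edge gives a negative one, which is exactly why this measure does not need an explicit edge-detection pass.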
Blobs / regions of interest
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners, which are more point-like. Nevertheless, blob descriptors often contain a preferred point (a local maximum of an operator response or a center of gravity), which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector.
Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to its response properties to different types of image structures at different scales, the LoG blob detector is also mentioned in the article on corner detection.
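The role of scale described above can be illustrated in one dimension with the difference-of-Gaussians (DoG) operator from the table below, a common approximation of the Laplacian of Gaussian: a smooth bump that produces only weak gradients still gives a strong response when the filter scale matches the bump's size. The signal and the two scales used here are illustrative choices.

```python
import math

def gaussian_smooth(signal, sigma):
    """Convolve a 1D signal with a truncated, normalised Gaussian kernel."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(signal)
    out = []
    for x in range(n):
        acc = 0.0
        for i, k in enumerate(kernel):
            j = min(max(x + i - radius, 0), n - 1)  # clamp at the borders
            acc += k * signal[j]
        out.append(acc)
    return out

def dog_response(signal, sigma1, sigma2):
    """Difference of Gaussians: approximates the Laplacian of Gaussian."""
    a = gaussian_smooth(signal, sigma1)
    b = gaussian_smooth(signal, sigma2)
    return [x - y for x, y in zip(a, b)]

# A smooth bump centred at index 20 -- too gentle for a gradient-based detector.
bump = [math.exp(-((x - 20) ** 2) / 50.0) for x in range(41)]
resp = dog_response(bump, 2.0, 4.0)
centre = max(range(len(resp)), key=lambda i: resp[i])
print(centre)
```

The response peaks at the bump's centre, the "preferred point" that lets a blob detector double as an interest point operator; repeating this over a range of scales is the usual way to make the detection scale-aware.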
Feature detectors
| Feature detector | Edge | Corner | Blob |
|---|---|---|---|
| Canny | X | | |
| Sobel | X | | |
| Harris & Stephens / Plessey | X | X | |
| Shi & Tomasi | | X | |
| SUSAN | | X | |
| FAST | | X | |
| Laplacian of Gaussian | | X | X |
| Difference of Gaussians | | X | X |
| Determinant of Hessian | | X | X |
| MSER | | | X |
| Grey-level blobs | | | X |
Once features have been detected, the local image patch around each feature can be extracted. This extraction may involve considerable amounts of image processing, and the result is known as a feature descriptor or feature vector. Among the approaches used for feature description are N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor).
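A minimal sketch of a local-histogram descriptor, in the spirit of the gradient-orientation histograms mentioned above (a simplified illustration, not the actual SIFT descriptor): take the patch around a detected feature point and histogram the gradient orientations inside it, weighted by gradient magnitude. The patch radius and bin count are arbitrary choices for this example.

```python
import math

def orientation_histogram(img, cx, cy, radius=2, bins=8):
    """Describe the patch around (cx, cy) by its gradient orientations."""
    hist = [0.0] * bins
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            if mag == 0.0:
                continue
            angle = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    # Normalise so the descriptor is invariant to overall image contrast.
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Vertical step edge: every gradient in the patch points in the +x direction,
# so all the mass falls into the first orientation bin.
img = [[0] * 5 + [9] * 5 for _ in range(10)]
print(orientation_histogram(img, 5, 5))
```

Two such feature vectors can then be compared (for example by Euclidean distance) to match features between images, which is what makes descriptors the bridge from detection to the subsequent algorithms mentioned earlier.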
References
See the respective articles on edge detection, corner detection, blob detection and ridge detection for the main references within each feature category.
See also
- Edge detection
- Corner detection
- Blob detection
- Ridge detection
- Interest point detection
- Feature extraction
- Feature (computer vision)
- Computer vision