Feature detection (computer vision)

In [[computer vision]] and [[image processing]], the concept of '''feature detection''' refers to methods that aim at computing abstractions of image information and making local decisions at every image point as to whether an image feature of a given type is present at that point. The resulting features will be subsets of the image ___domain, often in the form of isolated points, continuous curves or connected regions.

== Definition of a feature ==
 
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of an image, and features are used as a starting point for many computer vision algorithms. Since features serve as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property of a feature detector is repeatability: whether or not the same feature will be detected in two or more different images of the same scene.
 
Feature detection is a low-level image processing operation. That is, it is usually performed as the first operation on an image, and it examines every pixel to see whether a feature is present at that pixel. If it is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in prerequisite to feature detection, the input image is usually smoothed by a Gaussian kernel in a [[scale space|scale-space representation]], and one or several feature images are computed, often expressed in terms of local derivative operations.
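
To make this pre-processing step concrete, the following is a minimal sketch in Python using SciPy's <code>ndimage</code> module; the function name, the choice of library and the value of <code>sigma</code> are illustrative assumptions rather than part of any standard.

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

def gradient_feature_image(image, sigma=1.5):
    """Smooth an image with a Gaussian kernel and return a feature image
    (here, the gradient magnitude) computed from local derivatives."""
    image = image.astype(float)
    # Gaussian smoothing corresponds to one level of a scale-space representation.
    smoothed = ndimage.gaussian_filter(image, sigma)
    # First-order Gaussian derivatives along the x and y axes.
    Lx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    Ly = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    return smoothed, np.hypot(Lx, Ly)
</syntaxhighlight>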
 
Occasionally, when feature detection is computationally expensive and there are time constraints, a higher level algorithm may be used to guide the feature detection stage, so that only certain parts of the image are searched for features.
 
Many computer vision algorithms use feature detection as the initial step; as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of features detected, in computational complexity and in repeatability. At an overview level, these feature detectors can (with some overlap) be divided into the following groups.
 
== Types of image features ==
 
=== [[Edge_detection|Edges]] ===
 
These are points where there is a boundary (or an edge) between two image regions. In general, an edge can have an arbitrary shape, and may include junctions. In practice, edges are usually defined as points in the image which have a strong [[gradient]] magnitude. Furthermore, some common algorithms will then chain high-gradient points together to form a more complete description of an edge. These algorithms may place some constraints on the shape of an edge.
 
Locally, edges have a one-dimensional structure.
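
As a hedged illustration of gradient-based edge detection (the input filename and the Canny hysteresis thresholds of 100 and 200 are arbitrary choices, not tuned values), a sketch using [[OpenCV]]:

<syntaxhighlight lang="python">
import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
# Sobel operators estimate the image gradient; points with a strong
# gradient magnitude are candidate edge points.
gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)
# The Canny detector additionally chains high-gradient points into curves,
# using two hysteresis thresholds.
edges = cv2.Canny(image, 100, 200)
</syntaxhighlight>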
 
=== [[Corner_detection|Corners / interest points]] ===
 
The terms "corner" and "interest point" are used somewhat interchangeably and refer to point-like features in an image which have a local two-dimensional structure. The name "corner" arose since early algorithms first performed edge detection and then analysed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels of [[curvature]] in the image gradient. It was then noticed that the so-called corners were also being detected on parts of the image which were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.
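
As one concrete example, the Harris & Stephens detector listed in the table below is available in OpenCV; the sketch below uses commonly cited parameter values (block size 2, Sobel aperture 3, k = 0.04) and an arbitrary response threshold.

<syntaxhighlight lang="python">
import cv2
import numpy as np

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
# The Harris response is large at points with a local two-dimensional structure.
response = cv2.cornerHarris(np.float32(image), blockSize=2, ksize=3, k=0.04)
# Keep points whose response exceeds a fraction of the maximum response.
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) coordinates
</syntaxhighlight>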
 
=== [[Blob_detection|Blobs / regions of interest or interest points]] ===
 
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector.
 
Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but which may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoH [[blob detection|blob detectors]] are also mentioned in the article on [[corner detection]].
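
For illustration, a multi-scale Laplacian-of-Gaussian blob detector is available in scikit-image; in the sketch below the input filename, scale range and threshold are arbitrary assumptions:

<syntaxhighlight lang="python">
from skimage import feature, io

image = io.imread("scene.png", as_gray=True)  # hypothetical input image
# Each returned row is (row, col, sigma); sigma encodes the detected blob scale,
# so the detector also acts as an interest point operator with a preferred point.
blobs = feature.blob_log(image, min_sigma=2, max_sigma=20,
                         num_sigma=10, threshold=0.05)
</syntaxhighlight>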
 
=== [[Ridge_detection|Ridges]] ===
 
For elongated objects, the notion of ''ridges'' is a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of the medial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry, and in addition has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge, corner or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images; see [[ridge detection]].
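
As a sketch of one possible approach (not the only one), scikit-image provides Hessian-based ridge filters; here the Sato filter is applied with an arbitrary range of scales, corresponding to the expected ridge widths:

<syntaxhighlight lang="python">
from skimage import filters, io

image = io.imread("aerial.png", as_gray=True)  # hypothetical aerial image
# The Sato filter combines Hessian eigenvalues over a range of scales to
# produce a "ridgeness" measure that is large along elongated structures.
ridgeness = filters.sato(image, sigmas=range(1, 10, 2), black_ridges=False)
</syntaxhighlight>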
 
== Feature detectors ==
 
{| class="wikitable"
|+ Common feature detectors and their classification:
!Feature detector!![[Edge_detection| Edge ]]!![[Corner_detection| Corner ]]!![[Blob_detection| Blob ]]
|-
| Canny
| X
|-
| Sobel
| X
|-
|Harris & Stephens / Plessey
| X
| X
|-
|Shi & Tomasi
|
| X
|-
|SUSAN
|
| X
|-
|FAST
|
| X
|-
|Laplacian of Gaussian
|
| X
| X
|-
| Difference of Gaussians
|
| X
| X
|-
|Determinant of Hessian
|
| X
| X
|-
| MSER
|
|
| X
|-
| Grey-level blobs
|
|
|X
|}
 
== [[Feature extraction]] ==
 
Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches that are used for feature description, one can mention [[N-jet]]s and local histograms (see [[scale-invariant feature transform]] for one example of a local histogram descriptor). In addition to such descriptor information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection.
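
As an illustration, the scale-invariant feature transform mentioned above combines detection and description in a single call in OpenCV (the input filename is a placeholder; SIFT is included in standard OpenCV builds since version 4.4):

<syntaxhighlight lang="python">
import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
sift = cv2.SIFT_create()
# Each keypoint carries attributes from the detection step (___location, scale,
# orientation); each descriptor is a 128-dimensional local histogram.
keypoints, descriptors = sift.detectAndCompute(image, None)
</syntaxhighlight>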
 
== References ==
 
See the respective articles on [[edge detection]], [[corner detection]], [[blob detection]] and [[ridge detection]] for the main references within each feature category.
 
== See also ==
 
* [[Edge detection]]
* [[Corner detection]]
* [[Blob detection]]
* [[Ridge detection]]
* [[Interest point detection]]
* [[Feature extraction]]
* [[Feature (computer vision)]]
* [[Computer vision]]
 
[[Category:Computer vision]]
[[Category:Image processing]]