In [[computer vision]] and [[image processing]] the concept of '''feature detection''' refers to methods that aim at computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image ___domain, often in the form of isolated points, continuous curves or connected regions.
== Definition of a feature ==
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Given that, a feature is defined as an "interesting" part of an image, and features are used as a starting point for many computer vision algorithms. Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector.
Feature detection is a low level image processing operation. That is, it is usually performed as the first operation on an image, and examines every pixel to see if there is a feature present. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in pre-requisite to feature detection, the input image is usually smoothed by a Gaussian kernel in a scale-space representation and one or several feature images are computed, often expressed in terms of local derivative operations.
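The smoothing-and-derivative pre-processing described above can be illustrated with a minimal sketch. It uses NumPy and SciPy (an assumed toolchain, not part of the article) to smooth an image with a Gaussian kernel at a given scale and compute first-order derivative feature images:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def derivative_feature_images(image, sigma=1.0):
    """Smooth `image` with a Gaussian of scale `sigma` and return the two
    first-order Gaussian-derivative images (Lx, Ly) together with the
    gradient magnitude, a common feature image."""
    # order=(0, 1): zeroth-order (plain smoothing) along rows,
    # first derivative along columns, i.e. d/dx.
    Lx = gaussian_filter(image, sigma, order=(0, 1))
    # order=(1, 0): first derivative along rows, i.e. d/dy.
    Ly = gaussian_filter(image, sigma, order=(1, 0))
    magnitude = np.hypot(Lx, Ly)
    return Lx, Ly, magnitude
```

For a vertical step edge, the x-derivative image responds strongly along the step while the y-derivative image stays near zero, so the gradient magnitude peaks on the edge.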
Occasionally, when feature detection is computationally expensive and there are time constraints, a higher level algorithm may be used to guide the feature detection stage, so that only certain parts of the image are searched for features.
Many computer vision algorithms use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability.
=== [[Edge_detection|Edges]] ===
These are points where there is a boundary (or an edge) between two image regions. In general, the edge can be of an arbitrary shape, and may include junctions. In practice, edges are usually defined as points in the image which have a strong [[gradient]] magnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms may place some constraints on the shape of an edge.
Locally, edges have a one dimensional structure.
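The definition of edges as points of strong gradient magnitude can be sketched as follows, again assuming NumPy and SciPy; this is a deliberately crude detector (real algorithms add non-maximum suppression, hysteresis thresholding and chaining, as in the Canny detector):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_points(image, sigma=1.0, threshold=0.2):
    """Return a boolean mask marking pixels whose Gaussian-derivative
    gradient magnitude exceeds `threshold` -- the set of candidate
    edge points before any chaining or thinning."""
    gx = gaussian_filter(image, sigma, order=(0, 1))  # d/dx
    gy = gaussian_filter(image, sigma, order=(1, 0))  # d/dy
    return np.hypot(gx, gy) > threshold
```

On a unit step edge the mask forms a thin band around the step, reflecting the locally one-dimensional structure of an edge.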
=== [[Corner_detection|Corners / interest points]] ===
=== [[Blob_detection|Blobs / regions of interest or interest points]] ===
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators.
Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoH [[blob detection|blob detectors]] are also relevant as interest point operators.
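The role of scale mentioned above can be made concrete with a scale-normalized Laplacian-of-Gaussian (LoG) sketch in NumPy/SciPy (an assumed toolchain): the response magnitude at a blob's center is strongest when the operator scale matches the blob's own scale.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_response(image, sigma):
    """Scale-normalized Laplacian-of-Gaussian response image.
    Multiplying by sigma**2 makes responses comparable across scales,
    so extrema over (x, y, sigma) localize blobs and their sizes."""
    return sigma ** 2 * gaussian_laplace(image, sigma)
```

For a Gaussian-shaped bright blob of standard deviation t, the normalized response at the center is maximal in magnitude near sigma = t, which is how LoG-based detectors select a characteristic scale.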
=== [[Ridge_detection|Ridges]] ===
== [[Feature extraction]] ==
Once features have been detected, a local image patch around the feature can be extracted.
== References ==