Scale co-occurrence matrix
 
'''Scale co-occurrence matrix (SCM)''' is a method for image feature extraction within scale space after [[Wavelet transform|wavelet transformation]], proposed by Wu Jun and Zhao Zhongming (Institute of Remote Sensing Application, [[China]]). In practice, the discrete wavelet transform is first applied to a gray-level image to obtain sub-images at different scales. A series of scale-based co-occurrence matrices is then constructed, each matrix describing the gray-level variation between two adjacent scales. Finally, selected functions (such as the Harris statistical approach) are used to calculate measurements from the SCM and to perform feature extraction and classification.
The method rests on the observation that the way texture information changes from one scale to another can itself characterize a texture, and can therefore be used as a criterion for feature extraction. The matrix captures the relation of features between different scales, rather than the features within a single scale space, and so represents the scale property of texture better. Several experiments have also shown that it yields more accurate texture classification results than traditional texture classification methods.<ref>{{cite journal|last1=Wu|first1=Jun|last2=Zhao|first2=Zhongming|title=Scale Co-occurrence Matrix for Texture Analysis using Wavelet Transformation|journal=Journal of Remote Sensing|date=Mar 2001|volume=5|issue=2|page=100}}</ref>
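
A minimal sketch of this pipeline is shown below. It assumes the PyWavelets library (<code>pywt</code>) and uses its stationary (undecimated) wavelet transform so that every sub-image keeps the size of the input, in line with the discrete wavelet frame described below; the pixel-wise pairing of adjacent scales, the quantization step, and the contrast/energy/entropy measurements are illustrative choices, not the exact definitions of Wu and Zhao.

<syntaxhighlight lang="python">
import numpy as np
import pywt  # PyWavelets


def scale_cooccurrence_features(image, wavelet="haar", levels=3, bins=16):
    """Illustrative sketch of a scale co-occurrence pipeline.

    The pixel-wise pairing of adjacent scales and the statistics below
    are assumptions for illustration, not the published definition.
    """
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    m = 2 ** levels
    image = image[: h - h % m, : w - w % m]  # swt2 needs sizes divisible by 2**levels

    # Undecimated (stationary) wavelet transform: every sub-image keeps
    # the size of the input, so adjacent scales can be compared pixel-wise.
    coeffs = pywt.swt2(image, wavelet, level=levels)
    approximations = [cA for cA, _details in coeffs]  # one sub-image per scale

    def quantize(band):
        lo, hi = band.min(), band.max()
        q = (band - lo) / (hi - lo + 1e-12)
        return np.minimum((q * bins).astype(int), bins - 1)

    features = []
    for scale_a, scale_b in zip(approximations[:-1], approximations[1:]):
        a, b = quantize(scale_a), quantize(scale_b)
        # Scale co-occurrence matrix: joint histogram of quantized gray
        # levels at the same pixel position in two adjacent scales.
        scm, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                   bins=bins, range=[[0, bins], [0, bins]])
        scm /= scm.sum()
        i, j = np.indices(scm.shape)
        features.append(np.sum((i - j) ** 2 * scm))                    # contrast
        features.append(np.sum(scm ** 2))                              # energy
        features.append(-np.sum(scm[scm > 0] * np.log(scm[scm > 0])))  # entropy
    return np.asarray(features)
</syntaxhighlight>

The resulting feature vector (three measurements per pair of adjacent scales) can then be fed to any standard classifier.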
 
SCM based on the discrete wavelet frame transformation makes use of both correlations and feature information, so it combines structural and statistical benefits.
 
=== [[Discrete wavelet transform|Discrete wavelet]] frame (DWF) ===
To compute an SCM, a discrete wavelet frame (DWF) transformation is applied first to obtain a series of sub-images. The discrete wavelet frame is nearly identical to the standard wavelet transform,<ref>{{cite journal|last1=Lund|first1=Kevin|last2=Burgess|first2=Curt|title=Producing high-dimensional semantic spaces from lexical co-occurrence|journal=Behavior Research Methods|date=June 1996|volume=28|issue=2|pages=203–208}}</ref> except that the filters are upsampled rather than the image downsampled. Given an image, the DWF decomposes each channel in the same way as the wavelet transform, but without the subsampling step, which yields four filtered images of the same size as the input image. The decomposition is then continued in the LL channel only, as in the wavelet transform, but since the image is not subsampled, the filter has to be upsampled by inserting zeros between its coefficients. The number of channels, and hence the number of features, for the DWF is given by 3&nbsp;×&nbsp;l&nbsp;−&nbsp;1.<ref>{{cite journal|last1=Mallat|first1=S.G.|title=A theory for multiresolution signal decomposition: The wavelet representation|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|date=1989|pages=674–693|doi=10.1109/34.192463}}</ref>
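
The defining step of the DWF, upsampling the filters by zero insertion instead of downsampling the image, can be sketched in one dimension as follows. The sketch assumes NumPy, SciPy and PyWavelets; the wavelet choice and the circular boundary handling are illustrative simplifications.

<syntaxhighlight lang="python">
import numpy as np
import pywt
from scipy.ndimage import convolve1d


def upsample(filt, level):
    """'A trous' upsampling: insert 2**level - 1 zeros between filter taps."""
    filt = np.asarray(filt, dtype=float)
    if level == 0:
        return filt
    up = np.zeros((len(filt) - 1) * 2 ** level + 1)
    up[:: 2 ** level] = filt
    return up


def dwf_1d(signal, wavelet="db2", levels=3):
    """One-dimensional discrete wavelet frame: the analysis filters are
    upsampled at each level instead of downsampling the signal, so every
    output channel has the same length as the input."""
    w = pywt.Wavelet(wavelet)
    approx = np.asarray(signal, dtype=float)
    details = []
    for level in range(levels):
        lo = upsample(w.dec_lo, level)  # upsampled low-pass filter
        hi = upsample(w.dec_hi, level)  # upsampled high-pass filter
        details.append(convolve1d(approx, hi, mode="wrap"))
        approx = convolve1d(approx, lo, mode="wrap")
    return approx, details  # one approximation channel + `levels` detail channels
</syntaxhighlight>

For an image, the same filters are applied separably along rows and columns to produce the LL, LH, HL and HH channels of the same size as the input, and only the LL channel is decomposed further, as described above.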
A one-dimensional discrete wavelet frame decomposes the image as follows: