|date= 2013|title=Classification Analysis of DNA Microarrays|publisher=John Wiley and Sons|isbn=978-0-470-17081-6|url=http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470170816.html}}</ref> This type of approach is not hypothesis-driven, but rather uses iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analysis methods include self-organizing maps, neural gas, k-means cluster analysis,<ref>De Souto M et al. (2008) Clustering cancer gene expression data: a comparative study, BMC Bioinformatics, 9(497).</ref> hierarchical cluster analysis, genomic signal processing based clustering<ref>Istepanian R, Sungoor A, Nebel J-C (2011) Comparative Analysis of Genomic Signal Processing for Microarray data Clustering, IEEE Transactions on NanoBioscience, 10(4): 225-238.</ref> and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects. Although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature.<ref>{{cite journal|last1=Jaskowiak|first1=Pablo A|last2=Campello|first2=Ricardo JGB|last3=Costa|first3=Ivan G|title=On the selection of appropriate distances for gene expression data clustering|journal=BMC Bioinformatics|volume=15|issue=Suppl 2|pages=S2|doi=10.1186/1471-2105-15-S2-S2|pmid=24564555|pmc=4072854|year=2014}}</ref> Input data for class discovery analyses are commonly lists of highly informative (low-noise) genes, selected for example on the basis of a low coefficient of variation or a high Shannon entropy. Determining the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity.
Commonly used metrics for cluster validity include the silhouette index, the Davies–Bouldin index,<ref>Bolshakova N, Azuaje F (2003) Cluster validation techniques for genome expression data, Signal Processing, Vol. 83, pp. 825–833.</ref> Dunn's index, and Hubert's <math>\Gamma</math> statistic.
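As an illustration of the ideas above, the silhouette index can be computed from any pairwise distance, here 1 − Pearson correlation, a common choice for expression profiles. This is a minimal pure-Python sketch; the profiles and cluster labels are invented for the example and are not from any real dataset:

```python
from math import sqrt

def pearson_distance(x, y):
    """1 - Pearson correlation: 0 for identically shaped profiles, 2 for opposite ones."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return 1.0 - cov / (sx * sy)

def silhouette_index(profiles, labels, dist=pearson_distance):
    """Mean silhouette width over all objects; values near 1 indicate well-separated clusters."""
    scores = []
    for i, (p, lab) in enumerate(zip(profiles, labels)):
        # Collect distances from object i to every other object, grouped by cluster
        by_cluster = {}
        for j, (q, lab_q) in enumerate(zip(profiles, labels)):
            if j != i:
                by_cluster.setdefault(lab_q, []).append(dist(p, q))
        a = sum(by_cluster[lab]) / len(by_cluster[lab])                      # own-cluster cohesion
        b = min(sum(d) / len(d) for c, d in by_cluster.items() if c != lab)  # nearest other cluster
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Hypothetical expression profiles over 4 arrays: three rising genes, three falling genes
profiles = [[0, 1, 2, 3], [0.1, 1.1, 2.0, 2.9], [0, 0.9, 2.1, 3.2],
            [3, 2, 1, 0], [2.9, 2.1, 1.0, 0.2], [3.1, 1.9, 0.8, 0]]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette_index(profiles, labels), 2))  # near 1: the 2-cluster partition fits well
```

Re-running with different candidate numbers of clusters and keeping the partition with the highest index is one simple way to estimate the "optimal" cluster count.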
* Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model that assigns future unknown test objects to their most likely class. Supervised analysis<ref name="Peterson"/> for class prediction involves techniques such as linear regression, k-nearest neighbors, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, [[mixture of experts]], and supervised neural gas. In addition, various metaheuristic methods are employed, such as [[genetic algorithm]]s, covariance matrix self-adaptation, [[particle swarm optimization]], and [[ant colony optimization]]. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), the Gini diversity index, or information gain (entropy).
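Of the supervised techniques listed above, k-nearest neighbors is the simplest to sketch. The following pure-Python example predicts the class of a new sample from its Euclidean distance to labelled training samples over a filtered gene list; all expression values and class names are made up for illustration:

```python
from collections import Counter
from math import sqrt

def knn_predict(train, labels, query, k=3):
    """Predict the class of a test sample by majority vote among its
    k nearest training samples (Euclidean distance over the gene list)."""
    dists = sorted(
        (sqrt(sum((a - b) ** 2 for a, b in zip(row, query))), lab)
        for row, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training data: expression of 3 filtered genes per sample
train = [[5.1, 0.2, 3.3], [4.8, 0.4, 3.1], [5.3, 0.1, 3.6],   # class "A"
         [1.0, 4.2, 0.9], [1.2, 4.0, 1.1], [0.8, 4.4, 0.7]]   # class "B"
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(train, labels, [5.0, 0.3, 3.4]))  # -> A
```

In practice the model would be built on a training set and its error estimated on held-out samples before any unknown test objects are classified.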
* Hypothesis-driven statistical analysis: Statistically significant changes in gene expression are commonly identified using the [[t-test]], [[ANOVA]], [[Bayesian method]]<ref name="Ben-GalShani2005">{{
<!-- {{Citation needed|date=July 2008}}as in many other cases where authorities disagree, a sound conservative approach is to directly compare different normalization methods to determine the effects of these different methods on the results obtained. This can be done, for example, by investigating the performance of various methods on data from "spike-in" experiments. {{Citation needed|date=July 2008}} -->
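A per-gene two-sample comparison, as in the hypothesis-driven analyses above, can be sketched with Welch's t statistic in pure Python; the log-expression values and group names below are invented for the example:

```python
from math import sqrt

def welch_t(x, y):
    """Welch's two-sample t statistic for one gene's expression in two groups
    (does not assume equal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)  # unbiased group variances
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / sqrt(vx / nx + vy / ny)

# Hypothetical log2 expression of one gene in tumour vs. normal samples
tumour = [7.9, 8.3, 8.1, 8.4, 8.0]
normal = [6.1, 5.8, 6.3, 6.0, 5.9]
t = welch_t(tumour, normal)  # a large |t| suggests differential expression
```

A real analysis would convert each statistic to a p-value against the appropriate t distribution and then correct for the thousands of genes tested simultaneously.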
* Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis.<ref name="Peterson"/> This may involve linear approaches such as principal component analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, locally linear embedding, locality preserving projections, and Sammon's mapping.
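To illustrate the linear case, the leading PCA direction can be found by power iteration on the covariance matrix. This is a minimal pure-Python sketch, with a made-up dataset in which two genes co-vary and a third is near-constant noise:

```python
from math import sqrt

def first_principal_component(data, iters=200):
    """Leading PCA direction (unit vector) via power iteration on the
    sample covariance matrix. data: list of samples, each a list of values."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # d x d sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]   # converges to the dominant eigenvector
    return v

# Hypothetical samples: genes 0 and 1 vary together, gene 2 is noise
data = [[1.0, 2.0, 0.1], [2.0, 4.1, 0.0], [3.0, 5.9, 0.1],
        [4.0, 8.2, 0.0], [5.0, 9.9, 0.1]]
pc1 = first_principal_component(data)
# pc1 is dominated by genes 0 and 1; the noisy gene 2 contributes almost nothing
```

Projecting each sample onto `pc1` (and onto further components, obtained e.g. by deflation) yields the reduced-dimension coordinates used in downstream analysis.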