{{short description|Machine learning algorithm}}
{{about|decision trees in machine learning|the use of the term in decision analysis|Decision tree}}
{{Machine learning|Supervised learning}}
'''Decision tree learning''' is a [[supervised learning]] approach used in [[statistics]], [[data mining]] and [[machine learning]]. In this formalism, a classification or regression [[decision tree]] is used as a [[predictive model]] to draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are called '''[[Statistical classification|classification]] [[decision tree|trees]]'''; in these tree structures, [[leaf node|leaves]] represent class labels and branches represent [[Logical conjunction|conjunction]]s of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically [[real numbers]]) are called '''[[regression analysis|regression]] [[decision tree|trees]]'''. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.<ref name=":1">{{Cite journal |
Decision trees are among the most popular machine learning algorithms because of their intelligibility and simplicity: they produce models that are easy to interpret and visualize, even for users without a statistical background.<ref>{{Cite journal|last1=Wu|first1=Xindong|last2=Kumar|first2=Vipin|last3=Ross Quinlan|first3=J.|last4=Ghosh|first4=Joydeep|last5=Yang|first5=Qiang|last6=Motoda|first6=Hiroshi|last7=McLachlan|first7=Geoffrey J.|last8=Ng|first8=Angus|last9=Liu|first9=Bing|last10=Yu|first10=Philip S.|last11=Zhou|first11=Zhi-Hua|date=2008-01-01|title=Top 10 algorithms in data mining|journal=Knowledge and Information Systems|language=en|volume=14|issue=1|pages=1–37|doi=10.1007/s10115-007-0114-2|s2cid=2367747|issn=0219-3116|hdl=10983/15329|hdl-access=free}}</ref>
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and [[decision making]]. In [[data mining]], a decision tree describes data (but the resulting classification tree can be an input for [[decision making]]).
==General==
Decision tree learning is a method commonly used in [[data mining]].<ref name="tdidt">{{Cite book
|title=Data mining with decision trees: theory and applications, 2nd Edition
|last1=Rokach |first1=Lior
|last2=Maimon |first2=O.
|year=2014
|publisher=World Scientific Pub Co Inc
|isbn=978-9814590075
|s2cid=44697571
}}</ref> The goal is to create a model that predicts the value of a target variable based on several input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all of the input [[Feature (machine learning)|feature]]s have finite discrete domains, and there is a single target feature called the "classification". Each element of the ___domain of the classification is called a ''class''.
A tree is built by splitting the source [[Set (mathematics)|set]], constituting the root node of the tree, into subsets—which constitute the successor children. The splitting is based on a set of splitting rules based on classification features.<ref>{{Cite book|title=Understanding Machine Learning|last1=Shalev-Shwartz|first1=Shai|publisher=Cambridge University Press|url=http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning|last2=Ben-David|first2=Shai|date=2014|chapter=18. Decision Trees}}</ref> This process is repeated on each derived subset in a recursive manner called [[recursive partitioning]].
The [[recursion]] is completed when the subset at a node has all the same values of the target variable, or when splitting no longer adds value to the predictions. This process of ''top-down induction of decision trees'' (TDIDT)<ref name="Quinlan86">{{Cite journal | url=https://link.springer.com/content/pdf/10.1007/BF00116251.pdf | doi=10.1007/BF00116251| title=Induction of decision trees| journal=Machine Learning| volume=1| pages=81–106| year=1986| last1=Quinlan| first1=J. R.| s2cid=189902138| doi-access=free}}</ref> is an example of a [[greedy algorithm]], and it is by far the most common strategy for learning decision trees from data.<ref name="top-downDT" />
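For illustration, the following is a minimal Python sketch of this top-down, greedy induction scheme for categorical features. It is not taken from any of the cited algorithms; the <code>impurity</code> argument stands for any of the node-impurity metrics discussed below, and the data are assumed to be a list of dictionaries.

<syntaxhighlight lang="python">
from collections import Counter

def tdidt(rows, target, features, impurity):
    """Greedy top-down induction of a decision tree from a list of dicts.
    `impurity` is any node-impurity function of a list of class labels."""
    labels = [row[target] for row in rows]
    if len(set(labels)) == 1 or not features:          # pure node, or no features left
        return Counter(labels).most_common(1)[0][0]    # leaf: predict the majority class

    def partition(feature):                            # group rows by the feature's value
        groups = {}
        for row in rows:
            groups.setdefault(row[feature], []).append(row)
        return groups

    def weighted_impurity(feature):                    # size-weighted impurity of children
        return sum(len(g) / len(rows) * impurity([r[target] for r in g])
                   for g in partition(feature).values())

    best = min(features, key=weighted_impurity)        # greedy, locally optimal split
    rest = [f for f in features if f != best]
    return (best, {value: tdidt(group, target, rest, impurity)
                   for value, group in partition(best).items()})
</syntaxhighlight>

In this sketch an internal node is represented as a pair (feature, branches) and a leaf as a class label; practical learners such as CART and C4.5 additionally provide stopping rules, pruning and handling of numeric attributes.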
In [[data mining]], decision trees can also be described as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data.
The dependent variable, <math>Y</math>, is the target variable that we are trying to understand, classify or generalize. The vector <math>\textbf{x}</math> is composed of the features, <math>x_1, x_2, x_3</math> etc., that are used for that task.
[[File:Cart tree kyphosis.png|thumb|
alt=Three different representations of a regression tree of kyphosis data|
An example tree which estimates the probability of [[kyphosis]] after spinal surgery, given the age of the patient and the vertebra at which surgery was started.
]]
== Decision tree types ==
Decision trees used in [[data mining]] are of two main types:
* '''[[Classification tree]]''' analysis is when the predicted outcome is the class (discrete) to which the data belongs.
* '''Regression tree''' analysis is when the predicted outcome can be considered a real number (e.g. the price of a house, or a patient's length of stay in a hospital).
The term '''classification and regression tree (CART)''' analysis is an [[umbrella term]] used to refer to either of the above procedures, first introduced by [[Leo Breiman|Breiman]] et al. in 1984.<ref name="bfos">{{Cite book
|isbn=978-0-412-04841-8
}}</ref> Trees used for regression and trees used for classification have some similarities – but also some differences, such as the procedure used to determine where to split.<ref name="bfos"/>
Some techniques, often called ''ensemble'' methods, construct more than one decision tree:
* '''[[Gradient boosted trees|Boosted trees]]''': incrementally building an ensemble by training each new instance to emphasize the training instances previously mis-modeled. A typical example is [[AdaBoost]]. These can be used for regression-type and classification-type problems.<ref>Friedman, J. H. (1999). ''[https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf Stochastic gradient boosting] {{Webarchive|url=https://web.archive.org/web/20181128041212/https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf |date=2018-11-28 }}.'' Stanford University.</ref><ref>Hastie, T., Tibshirani, R., Friedman, J. H. (2001). ''The Elements of Statistical Learning: Data Mining, Inference, and Prediction.'' New York: Springer Verlag.</ref>
* '''Committees of decision trees''' (also called k-DT<ref>Heath, D., Kasif, S. and Salzberg, S. (1993). ''k-DT: A multi-tree learning method.'' In ''Proceedings of the Second Intl. Workshop on Multistrategy Learning'', pp. 138-149.</ref>), an early method that used randomized decision tree algorithms to generate multiple different trees from the training data, and then combine them using majority voting to generate output.<ref>Heath, D., Kasif, S., and Salzberg, S. L. (1996). ''Committees of decision trees.'' In B. Gorayska and J. Mey (Eds.), ''Cognitive Technology: In Search of a Humane Interface'' (pp. 305–317). Amsterdam: Elsevier Science B.V.</ref>
* '''[[Bootstrap aggregating|Bootstrap aggregated]]''' (or bagged) decision trees, an early ensemble method, builds multiple decision trees by repeatedly [[Bootstrapping (statistics)|resampling training data with replacement]], and voting the trees for a consensus prediction.<ref>{{cite journal |last=Breiman |first=L. |year=1996 |title=Bagging Predictors |journal=Machine Learning |volume=24 |issue=2 |pages=123–140 |doi=10.1007/BF00058655 |doi-access=free }}</ref>
** A '''[[random forest]]''' classifier is a specific type of [[bootstrap aggregating]]
* '''Rotation forest''' – in which every decision tree is trained by first applying [[principal component analysis]] (PCA) on a random subset of the input features.<ref>{{cite journal |last1=Rodriguez |first1=J. J. |last2=Kuncheva |first2=L. I.|author2-link=Ludmila Kuncheva |last3=Alonso |first3=C. J. |year=2006 |title=Rotation forest: A new classifier ensemble method |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=28 |issue=10 |pages=1619–1630 |doi=10.1109/TPAMI.2006.211 |pmid=16986543 |citeseerx=10.1.1.156.8277 |s2cid=6847493 }}</ref>
A special case of a decision tree is a [[decision list]],<ref>{{cite journal|last1=Rivest|first1=Ron|title=Learning Decision Lists|journal=Machine Learning|date=Nov 1987|volume=3|issue=2|pages=229–246|doi=10.1023/A:1022607331053|s2cid=30625841|url=http://people.csail.mit.edu/rivest/pubs/Riv87b.pdf|doi-access=free}}</ref> which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity{{cn|date=December 2021}}, permit non-greedy learning methods<ref>{{cite journal|last1=Letham|first1=Ben|last2=Rudin|first2=Cynthia|author2-link= Cynthia Rudin |last3=McCormick|first3=Tyler|last4=Madigan|first4=David|title=Interpretable Classifiers Using Rules And Bayesian Analysis: Building A Better Stroke Prediction Model|journal=Annals of Applied Statistics|date=2015|volume=9|issue=3|pages=1350–1371|doi=10.1214/15-AOAS848|arxiv=1511.01644|s2cid=17699665}}</ref> and monotonic constraints to be imposed.<ref>{{cite journal|last1=Wang|first1=Fulton|last2=Rudin|first2=Cynthia|author2-link=Cynthia Rudin|title=Falling Rule Lists|journal=Journal of Machine Learning Research|date=2015|volume=38|url=http://www.jmlr.org/proceedings/papers/v38/wang15a.pdf|access-date=2016-01-22|archive-date=2016-01-28|archive-url=https://web.archive.org/web/20160128223950/http://www.jmlr.org/proceedings/papers/v38/wang15a.pdf|url-status=dead}}</ref>
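As an informal illustration (the feature names and thresholds below are hypothetical), a decision list corresponds to a single chain of tests in which every rule leads either to a prediction or to exactly one further rule:

<syntaxhighlight lang="python">
def decision_list(example):
    # One-sided tree: each test has one leaf child and one further test,
    # except the last test, whose "else" branch is also a leaf.
    if example["age"] < 30:
        return "low risk"
    elif example["income"] == "high":
        return "low risk"
    elif example["smoker"]:
        return "high risk"
    else:
        return "low risk"
</syntaxhighlight>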
Notable decision tree algorithms include:
* [[ID3 algorithm|ID3]] (Iterative Dichotomiser 3)
* [[C4.5 algorithm|C4.5]] (successor of ID3)
* CART (Classification And Regression Tree)
* OC1 (Oblique classifier 1). First method that created multivariate splits at each node.<ref>{{Cite journal | doi = 10.1613/jair.63 | last1 = Murthy | first1 = S. K. | year = 1994 | title = A System for Induction of Oblique Decision Trees | journal = Journal of Artificial Intelligence Research | volume = 2 | issue = 1 | pages = 1–32 | doi-access = free }}</ref>
* [[Chi-square automatic interaction detection]] (CHAID). Performs multi-level splits when computing classification trees.<ref>{{Cite journal | doi = 10.2307/2986296 | last1 = Kass | first1 = G. V. | year = 1980 | title = An exploratory technique for investigating large quantities of categorical data | jstor = 2986296| journal = Applied Statistics | volume = 29 | issue = 2| pages = 119–127 }}</ref><ref>{{Cite journal|last1=Biggs|first1=David|last2=De Ville|first2=Barry|last3=Suen|first3=Ed|date=1991|title=A method of choosing multiway partitions for classification and decision trees|url=https://doi.org/10.1080/02664769100000005|journal=Journal of Applied Statistics|volume=18|issue=1|pages=49–62|doi=10.1080/02664769100000005|bibcode=1991JApSt..18...49B |issn=0266-4763|url-access=subscription}}</ref><ref>Ritschard, G. (2013), "CHAID and Earlier Supervised Tree Methods", in J.J. McArdle and G. Ritschard (eds), ''Contemporary Issues in Exploratory Data Mining in the Behavioral Sciences'', Quantitative Methodology Series, New York: Routledge, pages 48-74. [https://www.researchgate.net/publication/315476407_CHAID_and_Earlier_Supervised_Tree_Methods Preprint]</ref>
* [[Multivariate adaptive regression splines|MARS]]: extends decision trees to handle numerical data better.
* Conditional Inference Trees. Statistics-based approach that uses non-parametric tests as splitting criteria, corrected for multiple testing to avoid overfitting. This approach results in unbiased predictor selection and does not require pruning.<ref name="Hothorn2006">{{Cite journal | doi = 10.1198/106186006X133933 | last1 = Hothorn | first1 = T.| last2 = Hornik | first2 = K.| last3 = Zeileis | first3 = A.| year = 2006 | title = Unbiased Recursive Partitioning: A Conditional Inference Framework | jstor = 27594202| journal = Journal of Computational and Graphical Statistics | volume = 15 | issue = 3| pages = 651–674 | citeseerx = 10.1.1.527.2935 | s2cid = 6074128 }}</ref><ref name="Strobl2009">{{Cite journal | doi = 10.1037/a0016973 | pmid = 19968396 | pmc = 2927982 | last1 = Strobl | first1 = C.| last2 = Malley | first2 = J.| last3 = Tutz | first3 = G.| year = 2009 | title = An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests | journal = Psychological Methods | volume = 14 | issue = 4| pages = 323–348 }}</ref>
ID3 and CART were invented independently at around the same time (between 1970 and 1980){{Citation needed|date=August 2014}}, yet follow a similar approach for learning a decision tree from training tuples.
It has also been proposed to leverage concepts of [[fuzzy set theory]] for the definition of a special version of decision tree, known as Fuzzy Decision Tree (FDT).<ref name="Janikow1998">{{Cite journal | doi=10.1109/3477.658573| title=Fuzzy decision trees: issues and methods| journal=IEEE Transactions on Systems, Man, and Cybernetics| year=1998| last1=Janikow| first1=C. Z.}}</ref>
In this type of fuzzy classification, generally, an input vector <math>\textbf{x}</math> is associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have recently been investigated as well, showing performance comparable to that of other very efficient fuzzy classifiers.<ref name="Barsacchi2020">{{Cite journal | url=http://www.sciencedirect.com/science/article/pii/S0957417420302608 | doi=10.1016/j.eswa.2020.113436|
title=An analysis of boosted ensembles of binary fuzzy decision trees| journal=Expert Systems with Applications|
volume=154| year=2020| last1=Barsacchi| first1=M.|last2=Bechini| first2=A.|last3=Marcelloni| first3=F.| page=113436| s2cid=216369273| url-access=subscription}}</ref>
==Metrics==
|publisher=Doctoral thesis
|url=https://d-nb.info/921171064
}}</ref>
=== Estimate of Positive Correctness ===
A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (see [[confusion matrix]]). This metric, the estimate of positive correctness, is defined as:
<math> E_P = TP - FP </math>
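For illustration, a minimal sketch of this metric computed from confusion-matrix counts (the counts below are hypothetical):

<syntaxhighlight lang="python">
def estimate_of_positive_correctness(tp, fp):
    """E_P = TP - FP: rewards true positives and penalizes false positives."""
    return tp - fp

# Hypothetical counts read off a candidate split's confusion matrix.
print(estimate_of_positive_correctness(tp=8, fp=2))  # prints 6
</syntaxhighlight>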
{|
|+
|'''Feature A Confusion Matrix'''
{| class="wikitable"
! {{diagonal split header|Actual Class|Predicted<br />Class}}
|5
|}
|style="padding-left: 4em;" | '''Feature B Confusion Matrix'''
{| class="wikitable"
! {{diagonal split header|Actual Class|Predicted<br />Class}}
===Gini impurity===
'''Gini impurity''', '''Gini's diversity index''',<ref>{{cite web |title=Growing Decision Trees |url=https://www.mathworks.com/help/stats/growing-decision-trees.html |website=MathWorks}}</ref> or '''Gini–Simpson impurity''' (by analogy with the [[Diversity index|Gini–Simpson index]]) is named after Italian mathematician [[Corrado Gini]] and used by the CART (classification and regression tree) algorithm for classification trees. Gini impurity measures how often a randomly chosen element of a set would be incorrectly labeled if it were labeled randomly and independently according to the distribution of labels in the set. It reaches its minimum (zero) when all cases in the node fall into a single target category.
For a set of items with <math>J</math> classes and relative frequencies <math>p_i</math>, <math>i \in \{1, 2, ...,J\}</math>, the probability of choosing an item with label <math>i</math> is <math>p_i</math>, and the probability of miscategorizing that item is <math>\sum_{k \ne i} p_k = 1-p_i</math>. The Gini impurity is computed by summing pairwise products of these probabilities for each class label:
:<math>\operatorname{I}_G(p) = \sum_{i=1}^J \left( p_i \sum_{k\neq i} p_k \right)
= \sum_{i=1}^J p_i (1 - p_i)
= \sum_{i=1}^J (p_i - p_i^2)
= \sum_{i=1}^J p_i - \sum_{i=1}^J p_i^2
= 1 - \sum^J_{i=1} p_i^2. </math>
The Gini impurity is also an information theoretic measure and corresponds to [[Tsallis Entropy]] with deformation coefficient <math>q=2</math>, which in physics is associated with the lack of information in out-of-equilibrium, non-extensive, dissipative and quantum systems. For the limit <math>q\to 1</math> one recovers the usual Boltzmann–Gibbs or Shannon entropy. In this sense, the Gini impurity is nothing but a variation of the usual entropy measure for decision trees.
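A minimal sketch of this computation (illustrative only, not tied to any particular library):

<syntaxhighlight lang="python">
from collections import Counter

def gini_impurity(labels):
    """I_G = 1 - sum_i p_i^2 over the class frequencies observed in `labels`."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

# Example: a node containing three items of one class and one of another.
print(gini_impurity(["yes", "yes", "yes", "no"]))  # 1 - (0.75**2 + 0.25**2) = 0.375
</syntaxhighlight>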
===Information gain===
Entropy is defined as follows:
:<math>\Eta(T) = \operatorname{I}_{E}\left(p_1, p_2, \ldots, p_J\right)
= - \sum^J_{i=1} p_i \log_2 p_i</math>
where <math>p_1, p_2, \ldots</math> are fractions that add up to 1 and represent the percentage of each class present in the child node that results from a split in the tree.<ref name="Witten 2011 102–103">{{Cite book|title=Data Mining|url=https://archive.org/details/dataminingpracti00witt_966|url-access=limited|last1=Witten|first1=Ian|last2=Frank|first2=Eibe|last3=Hall|first3=Mark|publisher=Morgan Kaufmann|year=2011|isbn=978-0-12-374856-0|___location=Burlington, MA|pages=[https://archive.org/details/dataminingpracti00witt_966/page/n136 102]–103}}</ref>
:<math display="block"> \overbrace{IG(T,a)}^\text{information gain}
= \overbrace{\Eta(T)}^\text{entropy (parent)}
- \overbrace{\Eta(T\mid a)}^\text{sum of entropies (children)} </math><math>=-\sum_{i=1}^J p_i\log_2 p_i - \sum_{i=1}^J - \Pr(i\mid a)\log_2 \Pr(i\mid a)</math>
:<math display="block"> \overbrace{E_A(\operatorname{IG}(T,a))}^\text{expected information gain}
= \overbrace{I(T; A)}^{\text{mutual information between } T \text{ and } A}
= \overbrace{\Eta(T)}^\text{entropy (parent)}
- \overbrace{\Eta(T\mid A)}^\text{weighted sum of entropies (children)} </math><math>=-\sum_{i=1}^J p_i\log_2 p_i - \sum_a p(a)\sum_{i=1}^J-\Pr(i\mid a) \log_2 \Pr(i\mid a) </math>
:where the weighted sum of entropies is given by
:<math>\Eta(T\mid A) = \sum_a p(a)\sum_{i=1}^J-\Pr(i\mid a) \log_2 \Pr(i\mid a).</math>
To build the tree, the information gain of each possible first split would need to be calculated. The best first split is the one that provides the most information gain. This process is repeated for each impure node until the tree is complete. This example is adapted from the example appearing in Witten et al.<ref name="Witten 2011 102–103"/>
Information gain is also known as the [[mutual information]] between the target variable and the splitting attribute, when averaged over the values of that attribute as above.
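For illustration, a minimal sketch of entropy and information gain for a candidate split (the class counts are hypothetical):

<syntaxhighlight lang="python">
import math
from collections import Counter

def entropy(labels):
    """H(T) = -sum_i p_i log2 p_i over the class distribution in `labels`."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy of the parent minus the size-weighted entropies of the children."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

# A parent node with 9 positive and 5 negative examples, split into two children.
parent = ["yes"] * 9 + ["no"] * 5
children = [["yes"] * 6 + ["no"] * 1, ["yes"] * 3 + ["no"] * 4]
print(information_gain(parent, children))
</syntaxhighlight>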
===Variance reduction===
Introduced in CART,<ref name="bfos"/> variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node <math>N</math> is defined as the total reduction of the variance of the target variable <math>Y</math> due to the split at this node:
:<math>
I_V(N) = \frac{1}{|S|^2}\sum_{i\in S} \sum_{j\in S} \frac{1}{2}(y_i - y_j)^2 - \left(\frac{|S_t|^2}{|S|^2}\frac{1}{|S_t|^2}\sum_{i\in S_t} \sum_{j\in S_t} \frac{1}{2}(y_i - y_j)^2 + \frac{|S_f|^2}{|S|^2}\frac{1}{|S_f|^2}\sum_{i\in S_f} \sum_{j\in S_f} \frac{1}{2}(y_i - y_j)^2\right),
</math>
where <math>S</math>, <math>S_t</math>, and <math>S_f</math> are the set of presplit sample indices, the set of sample indices for which the split test is true, and the set of sample indices for which the split test is false, respectively. Each of the above summands is a variance estimate, written in a form that does not directly refer to the mean.
By replacing <math>(y_i - y_j)^2</math> in the formula above with the dissimilarity <math>d_{ij}</math> between two objects <math>i</math> and <math>j</math>, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.<ref name=":1" />
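A minimal sketch of this criterion, following the formula above and weighting each child as written there (illustrative only); replacing the squared difference with an arbitrary dissimilarity function gives the generalized criterion:

<syntaxhighlight lang="python">
def pairwise_spread(values):
    """(1/|S|^2) sum_i sum_j (1/2)(y_i - y_j)^2, equal to the variance of `values`."""
    n = len(values)
    return sum(0.5 * (a - b) ** 2 for a in values for b in values) / n ** 2

def variance_reduction(parent, left, right):
    """Spread of the parent minus the (|child|/|parent|)^2-weighted spreads of the children."""
    n = len(parent)
    return pairwise_spread(parent) - (
        (len(left) / n) ** 2 * pairwise_spread(left)
        + (len(right) / n) ** 2 * pairwise_spread(right)
    )

# Hypothetical continuous targets: a split separating small from large values.
print(variance_reduction([1.0, 2.0, 8.0, 9.0], [1.0, 2.0], [8.0, 9.0]))
</syntaxhighlight>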
Used by CART in 1984,<ref name="ll">{{Cite book
|last=Larose
* '''Able to handle both numerical and [[Categorical variable|categorical]] data.'''<ref name=":0" /> Other techniques are usually specialized in analyzing datasets that have only one type of variable. (For example, relation rules can be used only with nominal variables while neural networks can be used only with numerical variables or categoricals converted to 0-1 values.) Early decision trees were only capable of handling categorical variables, but more recent versions, such as C4.5, do not have this limitation.<ref name="tdidt" />
* '''Requires little data preparation.''' Other techniques often require data normalization. Since trees can handle qualitative predictors, there is no need to create [[dummy variable (statistics)|dummy variables]].<ref name=":0" />
* '''Uses a [[white box (software engineering)|white box]] or open-box<ref name="tdidt" /> model.''' If a given situation is observable in a model, the explanation for the condition is easily explained by Boolean logic. By contrast, in a black box model, the explanation for the results is typically difficult to understand, for example with an [[artificial neural network]].
* '''Possible to validate a model using statistical tests.''' That makes it possible to account for the reliability of the model.
* '''Non-parametric approach''' that makes no assumptions about the training data or prediction residuals; e.g., no distributional, independence, or constant variance assumptions.
* '''Performs well with large datasets.''' Large amounts of data can be analyzed using standard computing resources in reasonable time.
* '''Accuracy with flexible modeling'''. These methods may be applied to healthcare research with increased accuracy.<ref>{{Cite journal |last1=Hu |first1=Liangyuan |last2=Li |first2=Lihua |date=2022-12-01 |title=Using Tree-Based Machine Learning for Health Studies: Literature Review and Case Series |journal=International Journal of Environmental Research and Public Health |language=en |volume=19 |issue=23 |pages=16080 |doi=10.3390/ijerph192316080 |issn=1660-4601 |pmc=9736500 |pmid=36498153|doi-access=free }}</ref>
* '''Mirrors human decision making more closely than other approaches.'''<ref name=":0" /> This could be useful when modeling human decisions/behavior.
* '''Robust against collinearity''', particularly boosting.
* '''Built-in [[feature selection]].''' Additional irrelevant features will be used less, so that they can be removed on subsequent runs. The hierarchy of attributes in a decision tree reflects the importance of attributes.<ref>{{Cite book|last=Provost, Foster, 1964-|title=Data science for business : [what you need to know about data mining and data-analytic thinking]|date=2013|publisher=O'Reilly|others=Fawcett, Tom.|isbn=978-1-4493-6132-7|edition= 1st|___location=Sebastopol, Calif.|oclc=844460899}}</ref> This means that the features at the top of the tree are the most informative.<ref>{{Cite journal|last1=Piryonesi S. Madeh|last2=El-Diraby Tamer E.|date=2020-06-01|title=Role of Data Analytics in Infrastructure Asset Management: Overcoming Data Size and Quality Problems|journal=Journal of Transportation Engineering, Part B: Pavements|volume=146|issue=2|pages=04020022|doi=10.1061/JPEODX.0000175| s2cid=216485629 }}</ref>
* '''Decision trees can approximate any [[Boolean function]], e.g. [[Exclusive or|XOR]].'''<ref>{{cite journal |first1=Dinesh |last1=Mehtaa |first2=Vijay |last2=Raghavan |title=Decision tree approximations of Boolean functions |journal=Theoretical Computer Science |volume=270 |issue=1–2 |year=2002 |pages=609–623 |doi=10.1016/S0304-3975(01)00011-1 |doi-access=free }}</ref>
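For example, a depth-2 tree is enough to represent XOR exactly; a minimal sketch assuming the scikit-learn library listed under Implementations below:

<syntaxhighlight lang="python">
from sklearn.tree import DecisionTreeClassifier

# XOR truth table: not linearly separable, but exactly representable by a small tree.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict(X))   # expected: [0 1 1 0]
print(tree.get_depth())  # expected: 2
</syntaxhighlight>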
===Limitations===
* Trees can be very non-robust. A small change in the [[Training, test, and validation sets|training data]] can result in a large change in the tree and consequently the final predictions.<ref name=":0" />
* The problem of learning an optimal decision tree is known to be [[NP-complete]] under several aspects of optimality and even for simple concepts.<ref>{{Cite journal | doi = 10.1016/0020-0190(76)90095-8 | last1 = Hyafil | first1 = Laurent | last2 = Rivest | first2 = RL | year = 1976 | title = Constructing Optimal Binary Decision Trees is NP-complete | journal = Information Processing Letters | volume = 5 | issue = 1| pages = 15–17 }}</ref><ref>Murthy S. (1998). [https://cs.nyu.edu/~roweis/csc2515-2006/readings/murthy_dt.pdf "Automatic construction of decision trees from data: A multidisciplinary survey"]. ''Data Mining and Knowledge Discovery''</ref> Consequently, practical decision-tree learning algorithms are based on heuristics such as the [[greedy algorithm]] where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. To reduce the greedy effect of local optimality, some methods such as the dual information distance (DID) tree were proposed.<ref>{{cite journal|url=
* Decision-tree learners can create over-complex trees that do not generalize well from the training data. (This is known as [[overfitting]].<ref>{{Cite book | title = Principles of Data Mining | doi = 10.1007/978-1-84628-766-4 | year = 2007 | isbn = 978-1-84628-765-7 | s2cid = 45746 }}</ref>) Mechanisms such as [[Pruning (decision trees)|pruning]] are necessary to avoid this problem (with the exception of some algorithms such as the Conditional Inference approach, that does not require pruning).<ref name="Hothorn2006" /><ref name="Strobl2009" />
* The average depth of the tree, as defined by the number of nodes or tests until classification, is not guaranteed to be minimal or small under various splitting criteria.<ref name="Tris">{{cite web|author = Ben-Gal I. and Trister C. (2015)|title = Parallel Construction of Decision Trees with Consistently Non Increasing Expected Number of Tests}}</ref>
* For data including categorical variables with different numbers of levels, [[information gain in decision trees]] is biased in favor of attributes with more levels.<ref>{{cite conference|author=Deng, H.|author2=Runger, G. |author3=Tuv, E. |title=Bias of importance measures for multi-valued attributes and solutions|conference=Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN)|year=2011|pages= 293–300|url=https://www.researchgate.net/publication/221079908}}</ref> To counter this problem, instead of choosing the attribute with highest [[information gain]], one can choose the attribute with the highest [[information gain ratio]] among the attributes whose information gain is greater than the mean information gain.
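For illustration, a minimal sketch of the gain-ratio correction used by C4.5, which divides the information gain by the split's intrinsic information (the counts are hypothetical):

<syntaxhighlight lang="python">
import math

def gain_ratio(information_gain, child_sizes):
    """Information gain divided by the split's intrinsic information, which grows
    with the number of branches and so penalizes many-valued attributes."""
    n = sum(child_sizes)
    intrinsic = -sum((s / n) * math.log2(s / n) for s in child_sizes if s > 0)
    return information_gain / intrinsic if intrinsic > 0 else 0.0

# A 14-example node split into branches of sizes 5, 4 and 5 (hypothetical values).
print(gain_ratio(0.25, [5, 4, 5]))
</syntaxhighlight>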
===Implementations===
Many data mining software packages provide implementations of one or more decision tree algorithms (e.g. random forest).
Open source examples include:
* [[ALGLIB]], a C++, C# and Java numerical analysis library with data analysis features (random forest)
* [[KNIME]], a free and open-source data analytics, reporting and integration platform (decision trees, random forest)
* [[Orange (software)|Orange]], an open-source data visualization, machine learning and data mining toolkit (random forest)
* [[R (programming language)|R]], an open-source software environment for statistical computing, which includes several CART implementations such as the rpart, party and randomForest packages
* [[scikit-learn]], a free and open-source machine learning library for the [[Python (programming language)|Python]] programming language
* [[Weka (machine learning)|Weka]], a free and open-source data-mining suite that contains many decision tree algorithms

Notable commercial software:
* [[Matlab]]
* [[Microsoft SQL Server]]
* [[RapidMiner]]
* [[SAS (software)#Components|SAS Enterprise Miner]]
* [[SPSS Modeler|IBM SPSS Modeler]]
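For example, a minimal usage sketch with scikit-learn, one of the open-source implementations listed above (data set and parameters chosen purely for illustration):

<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a CART-style classification tree on the iris data set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data
print(export_text(clf))           # the learned splits in readable form
</syntaxhighlight>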
==Extensions==
===Alternative search methods===
Evolutionary algorithms have been used to avoid local optimal decisions and search the decision tree space with little ''a priori'' bias.<ref>{{cite book |last1=Papagelis |first1=A. |last2=Kalles |first2=D. |year=2001 |chapter=Breeding Decision Trees Using Evolutionary Techniques |title=Proceedings of the Eighteenth International Conference on Machine Learning, June 28–July 1, 2001 |pages=393–400 |chapter-url=http://www.gatree.com/wordpress/wp-content/uploads/2010/04/BreedinDecisioTreeUsinEvo.pdf }}</ref><ref>{{cite journal |last1=Barros |first1=Rodrigo C. |last2=Basgalupp |first2=M. P. |last3=Carvalho |first3=A. C. P. L. F. |last4=Freitas |first4=Alex A. |year=2012 |doi=10.1109/TSMCC.2011.2157494 |title=A Survey of Evolutionary Algorithms for Decision-Tree Induction |journal=IEEE Transactions on Systems, Man, and Cybernetics |series=Part C: Applications and Reviews |volume=42 |issue=3 |pages=291–312 |citeseerx=10.1.1.308.9068 |s2cid=365692 }}</ref>
It is also possible for a tree to be sampled using [[Markov chain Monte Carlo|MCMC]].<ref>{{cite journal |last1=Chipman |first1=Hugh A. |first2=Edward I. |last2=George |first3=Robert E. |last3=McCulloch |title=Bayesian CART model search |journal=Journal of the American Statistical Association |volume=93 |issue=443 |year=1998 |pages=935–948 |doi=10.1080/01621459.1998.10473750 |citeseerx=10.1.1.211.5573 }}</ref>
The tree can be searched for in a bottom-up fashion.<ref>{{cite book |last1=Barros |first1=R. C. |last2=Cerri |first2=R. |last3=Jaskowiak |first3=P. A. |last4=Carvalho |first4=A. C. P. L. F. |doi=10.1109/ISDA.2011.6121697 |chapter=A bottom-up oblique decision tree induction algorithm |title=Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA 2011) |pages=450–456 |year=2011 |isbn=978-1-4577-1676-8 |s2cid=15574923 }}</ref> Alternatively, several trees can be constructed in parallel to reduce the expected number of tests until classification.<ref name="Tris" />
==See also==
* [[Binary decision diagram]]
* [[CHAID]]
* [[Predictive analytics#Classification and regression trees|Classification and regression trees]]
* [[ID3 algorithm]]
* [[C4.5 algorithm]]
==Further reading==
* {{cite book |first1=Gareth |last1=James |first2=Daniela |last2=Witten |first3=Trevor |last3=Hastie |first4=Robert |last4=Tibshirani |chapter=Tree-Based Methods |title=An Introduction to Statistical Learning: with Applications in R |___location=New York |publisher=Springer |year=2017 |isbn=978-1-4614-7137-0 |chapter-url=https://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf#page=317 |pages=303–336 }}
==External links==
* [https://www.cs.kent.ac.uk/people/staff/mg483/code/evoldectrees/ Evolutionary Learning of Decision Trees in C++]
* [http://christianherta.de/lehre/dataScience/machineLearning/decision-trees.html A very detailed explanation of information gain as splitting criterion]
[[Category:Decision trees]]