Decision tree learning

{{Machine learning|Supervised learning}}
 
'''Decision tree learning''' is a [[supervised learning]] approach used in [[statistics]], [[data mining]] and [[machine learning]]. In this formalism, a classification or regression [[decision tree]] is used as a [[predictive model]] to draw conclusions about a set of observations.
 
Tree models where the target variable can take a discrete set of values are called '''[[Statistical classification|classification]] [[decision tree|trees]]'''; in these tree structures, [[leaf node|leaves]] represent class labels and branches represent [[Logical conjunction|conjunction]]s of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically [[real numbers]]) are called '''[[regression analysis|regression]] [[decision tree|trees]]'''. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.<ref name=":1">{{Cite journal |last1=Studer |first1=Matthias |last2=Ritschard |first2=Gilbert |last3=Gabadinho |first3=Alexis |last4=Müller |first4=Nicolas S. |date=2011 |title=Discrepancy Analysis of State Sequences |url=http://journals.sagepub.com/doi/10.1177/0049124111415372 |journal=Sociological Methods & Research |language=en |volume=40 |issue=3 |pages=471–510 |doi=10.1177/0049124111415372 |s2cid=13307797 |issn=0049-1241}}</ref>
 
Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity because they produce models that are easy to interpret and visualize, even for users without a statistical background.<ref>{{Cite journal|last1=Wu|first1=Xindong|last2=Kumar|first2=Vipin|last3=Ross Quinlan|first3=J.|last4=Ghosh|first4=Joydeep|last5=Yang|first5=Qiang|last6=Motoda|first6=Hiroshi|last7=McLachlan|first7=Geoffrey J.|last8=Ng|first8=Angus|last9=Liu|first9=Bing|last10=Yu|first10=Philip S.|last11=Zhou|first11=Zhi-Hua|date=2008-01-01|title=Top 10 algorithms in data mining|journal=Knowledge and Information Systems|language=en|volume=14|issue=1|pages=1–37|doi=10.1007/s10115-007-0114-2|s2cid=2367747|issn=0219-3116|hdl=10983/15329|hdl-access=free}}</ref>
 
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and [[decision making]]. In [[data mining]], a decision tree describes data (but the resulting classification tree can be an input for [[decision making]]).
 
==General==
|isbn=978-9814590075
|s2cid=44697571
}}</ref> The goal is to create an algorithm that predicts the value of a target variable based on several input variables.
 
A decision tree is a simple representation for classifying examples. For this section, assume that all of the input [[Feature (machine learning)|feature]]s have finite discrete domains, and there is a single target feature called the "classification". Each element of the ___domain of the classification is called a ''class''.
The dependent variable, <math>Y</math>, is the target variable that we are trying to understand, classify or generalize. The vector <math>\textbf{x}</math> is composed of the features, <math>x_1, x_2, x_3</math> etc., that are used for that task.
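As a minimal illustration of this representation, the following [[Python (programming language)|Python]] sketch encodes a small tree over discrete-valued features as nested dictionaries and classifies one observation; the feature names, values and class labels are made up for the example and are not drawn from any real dataset:

<syntaxhighlight lang="python">
# Illustrative sketch: a classification tree over discrete-valued features,
# represented as nested dictionaries. Internal nodes test one feature;
# leaves hold a class label. All names and values here are hypothetical.
tree = {
    "feature": "outlook",                      # feature tested at the root
    "branches": {
        "sunny": {"feature": "humidity",       # internal node
                  "branches": {"high": {"label": "no"},
                               "normal": {"label": "yes"}}},
        "overcast": {"label": "yes"},          # leaf node
        "rain": {"label": "no"},
    },
}

def classify(node, x):
    """Follow the branch matching x's value for the tested feature
    until a leaf (a node with a 'label') is reached."""
    while "label" not in node:
        node = node["branches"][x[node["feature"]]]
    return node["label"]

print(classify(tree, {"outlook": "sunny", "humidity": "normal"}))  # -> 'yes'
</syntaxhighlight>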
 
[[File:Cart tree kyphosis.png|thumb|500x500px|
alt=Three different representations of a regression tree of kyphosis data|
An example tree which estimates the probability of kyphosis after spinal surgery, given the age of the patient and the vertebra at which surgery was started.
]]
 
== Decision tree types ==
Decision trees used in [[data mining]] are of two main types:
* '''[[Classification tree]]''' analysis is used when the predicted outcome is the class (discrete) to which the data belongs.
Some techniques, often called ''ensemble'' methods, construct more than one decision tree (see the illustrative sketch after this list):
* '''[[Gradient boosted trees|Boosted trees]]''': incrementally building an ensemble by training each new instance to emphasize the training instances previously mis-modeled. A typical example is [[AdaBoost]]. These can be used for regression-type and classification-type problems.<ref>Friedman, J. H. (1999). ''[https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf Stochastic gradient boosting] {{Webarchive|url=https://web.archive.org/web/20181128041212/https://astro.temple.edu/~msobel/courses_files/StochasticBoosting(gradient).pdf |date=2018-11-28 }}.'' Stanford University.</ref><ref>Hastie, T., Tibshirani, R., Friedman, J. H. (2001). ''The elements of statistical learning : Data mining, inference, and prediction.'' New York: Springer Verlag.</ref>
* '''Committees of decision trees''' (also called k-DT<ref>Heath, D., Kasif, S. and Salzberg, S. (1993). ''k-DT: A multi-tree learning method.'' In ''Proceedings of the Second Intl. Workshop on Multistrategy Learning'', pp. 138-149.</ref>), an early method that used randomized decision tree algorithms to generate multiple different trees from the training data, and then combine them using majority voting to generate output.<ref>Heath, D., Kasif, S., and Salzberg, S. L. (1996). ''Committees of decision trees.'' In B. Gorayska and J. Mey (Eds.), ''Cognitive Technology: In Search of a Humane Interface'' (pp. 305–317). Amsterdam: Elsevier Science B.V.</ref>
* '''[[Bootstrap aggregating|Bootstrap aggregated]]''' (or bagged) decision trees, an early ensemble method, builds multiple decision trees by repeatedly [[Bootstrapping (statistics)|resampling training data with replacement]], and voting the trees for a consensus prediction.<ref>{{cite journal |last=Breiman |first=L. |year=1996 |title=Bagging Predictors |journal=Machine Learning |volume=24 |issue=2 |pages=123–140 |doi=10.1007/BF00058655 |doi-access=free }}</ref>
** A '''[[random forest]]''' classifier is a specific type of [[bootstrap aggregating]]
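A minimal sketch of the three ensemble approaches above, assuming [[scikit-learn]] is available; the synthetic dataset and hyperparameters are placeholders, and the gradient-boosting estimator stands in here for the boosted-tree family (AdaBoost could be substituted via <code>AdaBoostClassifier</code>):

<syntaxhighlight lang="python">
# Illustrative sketch of the ensemble methods above using scikit-learn.
# The synthetic dataset and hyperparameters are placeholders, not recommendations.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    # Boosting: trees are added sequentially, each focusing on prior errors.
    "boosted trees": GradientBoostingClassifier(n_estimators=100, random_state=0),
    # Bagging: trees are trained on bootstrap resamples and vote on the output.
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(),
                                      n_estimators=100, random_state=0),
    # Random forest: bagging plus random feature selection at each split.
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
</syntaxhighlight>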
* [[ID3 algorithm|ID3]] (Iterative Dichotomiser 3)
* [[C4.5 algorithm|C4.5]] (successor of ID3)
* [[Predictive analytics#Classification and regression trees (CART)|CART]] (Classification And Regression Tree)<ref name="bfos" />
* OC1 (Oblique classifier 1). First method that created multivariate splits at each node.<ref>{{Cite journal | doi = 10.1613/jair.63 | last1 = Murthy | first1 = S. K. | year = 1994 | title = A System for Induction of Oblique Decision Trees | journal = Journal of Artificial Intelligence Research | volume = 2 | issue = 1 | pages = 1–32 | doi-access = free }}</ref>
* [[Chi-square automatic interaction detection]] (CHAID). Performs multi-level splits when computing classification trees.<ref>{{Cite journal | doi = 10.2307/2986296 | last1 = Kass | first1 = G. V. | year = 1980 | title = An exploratory technique for investigating large quantities of categorical data | jstor = 2986296| journal = Applied Statistics | volume = 29 | issue = 2| pages = 119–127 }}</ref><ref>{{Cite journal|last1=Biggs|first1=David|last2=De Ville|first2=Barry|last3=Suen|first3=Ed|date=1991|title=A method of choosing multiway partitions for classification and decision trees|url=https://doi.org/10.1080/02664769100000005|journal=Journal of Applied Statistics|volume=18|issue=1|pages=49–62|doi=10.1080/02664769100000005|bibcode=1991JApSt..18...49B |issn=0266-4763|url-access=subscription}}</ref><ref>Ritschard, G. (2013), "CHAID and Earlier Supervised Tree Methods", in J.J. McArdle and G. Ritschard (eds), ''Contemporary Issues in Exploratory Data Mining in the Behavioral Sciences'', Quantitative Methodology Series, New York: Routledge, pages 48-74. [https://www.researchgate.net/publication/315476407_CHAID_and_Earlier_Supervised_Tree_Methods Preprint]</ref>
* [[Multivariate adaptive regression splines|MARS]]: extends decision trees to handle numerical data better.
* Conditional Inference Trees. Statistics-based approach that uses non-parametric tests as splitting criteria, corrected for multiple testing to avoid overfitting. This approach results in unbiased predictor selection and does not require pruning.<ref name="Hothorn2006">{{Cite journal | doi = 10.1198/106186006X133933 | last1 = Hothorn | first1 = T.| last2 = Hornik | first2 = K.| last3 = Zeileis | first3 = A.| year = 2006 | title = Unbiased Recursive Partitioning: A Conditional Inference Framework | jstor = 27594202| journal = Journal of Computational and Graphical Statistics | volume = 15 | issue = 3| pages = 651–674 | citeseerx = 10.1.1.527.2935 | s2cid = 6074128 }}</ref><ref name="Strobl2009">{{Cite journal | doi = 10.1037/a0016973 | pmid = 19968396 | pmc = 2927982 | last1 = Strobl | first1 = C.| last2 = Malley | first2 = J.| last3 = Tutz | first3 = G.| year = 2009 | title = An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests | journal = Psychological Methods | volume = 14 | issue = 4| pages = 323–348 }}</ref>
ID3 and CART were invented independently at around the same time (between 1970 and 1980){{Citation needed|date=August 2014}}, yet follow a similar approach for learning a decision tree from training tuples.
 
It has also been proposed to leverage concepts of [[fuzzy set theory]] for the definition of a special version of decision tree, known as Fuzzy Decision Tree (FDT).<ref name="Janikow1998">{{Cite journal | doi=10.1109/3477.658573| title=Fuzzy decision trees: issues and methods| journal=IEEE Transactions on Systems, Man, and Cybernetics - Part B: (Cybernetics)| volume=28| pages=1–14| year=1998| last1=Janikow| first1=C. Z.| issue=1| pmid=18255917}}</ref>
In this type of fuzzy classification, generally, an input vector <math>\textbf{x}</math> is associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have been recently investigated as well, and they have shown performances comparable to those of other very efficient fuzzy classifiers.<ref name="Barsacchi2020">{{Cite journal | url=http://www.sciencedirect.com/science/article/pii/S0957417420302608 | doi=10.1016/j.eswa.2020.113436|
title=An analysis of boosted ensembles of binary fuzzy decision trees| journal=Expert Systems with Applications|
volume=154| year=2020| last1=Barsacchi| first1=M.|last2=Bechini| first2=A.|last3=Marcelloni| first3=F.| page=113436| s2cid=216369273| url-access=subscription}}</ref>
 
==Metrics==
 
=== Estimate of Positive Correctness ===
A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (see [[Confusion matrix]]). This metric, "Estimate of Positive Correctness", is defined below:
 
<math> E_P = TP - FP </math>
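As a worked illustration, the metric can be computed directly from counts of true and false positives; the following Python sketch uses made-up labels and predictions rather than data from any particular study:

<syntaxhighlight lang="python">
# Illustrative sketch: Estimate of Positive Correctness, E_P = TP - FP,
# computed from hypothetical label/prediction vectors (1 = positive class).
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives

E_P = tp - fp
print(tp, fp, E_P)  # 4 1 3 -> positive when true positives outweigh false positives
</syntaxhighlight>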
 
===Gini impurity===
'''Gini impurity''', '''Gini's diversity index''',<ref>{{cite web |title=Growing Decision Trees |url=https://www.mathworks.com/help/stats/growing-decision-trees.html |website=MathWorks }}</ref> or '''[[Diversity index#Gini–Simpson index|Gini-Simpson Index]]''' in biodiversity research, is named after Italian mathematician [[Corrado Gini]] and used by the CART (classification and regression tree) algorithm for classification trees. Gini impurity measures how often a randomly chosen element of a set would be incorrectly labeled if it were labeled randomly and independently according to the distribution of labels in the set. It reaches its minimum (zero) when all cases in the node fall into a single target category.
 
For a set of items with <math>J</math> classes and relative frequencies <math>p_i</math>, <math>i \in \{1, 2, ...,J\}</math>, the probability of choosing an item with label <math>i</math> is <math>p_i</math>, and the probability of miscategorizing that item is <math>\sum_{k \ne i} p_k = 1-p_i</math>. The Gini impurity is computed by summing pairwise products of these probabilities for each class label:
:<math>\operatorname{I}_G(p) = \sum_{i=1}^{J} \left( p_i \sum_{k \ne i} p_k \right) = \sum_{i=1}^{J} p_i (1 - p_i) = 1 - \sum_{i=1}^{J} p_i^2.</math>
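A short Python sketch of this computation on made-up class counts at a single node:

<syntaxhighlight lang="python">
# Illustrative sketch: Gini impurity of a node, 1 - sum_i p_i^2,
# computed from hypothetical class labels of the cases reaching that node.
from collections import Counter

labels = ["a", "a", "a", "b", "b", "c"]        # labels of cases at the node
counts = Counter(labels)
total = len(labels)

gini = 1.0 - sum((n / total) ** 2 for n in counts.values())
print(round(gini, 4))  # 0.6111; would be 0.0 if all cases shared one label
</syntaxhighlight>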
 
===Variance reduction===
Introduced in CART,<ref name="bfos"/> variance reduction is ofte
Introduced in CART,<ref name="bfos"/> variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node {{mvar|N}} is defined as the total reduction of the variance of the target variable {{mvar|Y}} due to the split at this node:
 
:<math>
By replacing <math>(y_i - y_j)^2</math> in the formula above with the dissimilarity <math>d_{ij}</math> between two objects <math>i</math> and <math>j</math>, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.<ref name=":1" />
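The following Python sketch illustrates a dissimilarity-based split criterion of this kind. It uses the standard weighting of child nodes by their share of cases, which may differ in normalization from the formula given above; the function names and data are made up for the example:

<syntaxhighlight lang="python">
# Illustrative sketch: a dissimilarity-based split criterion in the spirit of
# variance reduction. With d(i, j) = (y_i - y_j)^2 it reduces to the usual
# variance-based criterion for a numeric target.
def mean_pairwise_dissimilarity(indices, d):
    """Average dissimilarity over all ordered pairs in the given index set."""
    if not indices:
        return 0.0
    n = len(indices)
    return sum(d(i, j) for i in indices for j in indices) / (2 * n * n)

def reduction(parent, left, right, d):
    """Drop in within-node dispersion achieved by splitting `parent`
    into `left` and `right` (a larger value indicates a better split)."""
    n = len(parent)
    return (mean_pairwise_dissimilarity(parent, d)
            - len(left) / n * mean_pairwise_dissimilarity(left, d)
            - len(right) / n * mean_pairwise_dissimilarity(right, d))

# Numeric target: pairwise squared differences recover variance reduction.
y = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
d = lambda i, j: (y[i] - y[j]) ** 2
print(reduction(parent=list(range(6)), left=[0, 1, 2], right=[3, 4, 5], d=d))
</syntaxhighlight>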
 
===Measure of "goodness"===
Used by CART in 1984,<ref name="ll">{{Cite book
|last=Larose
* '''Able to handle both numerical and [[Categorical variable|categorical]] data.'''<ref name=":0" /> Other techniques are usually specialized in analyzing datasets that have only one type of variable. (For example, relation rules can be used only with nominal variables while neural networks can be used only with numerical variables or categoricals converted to 0-1 values.) Early decision trees were only capable of handling categorical variables, but more recent versions, such as C4.5, do not have this limitation.<ref name="tdidt" />
* '''Requires little data preparation.''' Other techniques often require data normalization. Since trees can handle qualitative predictors, there is no need to create [[dummy variable (statistics)|dummy variables]].<ref name=":0" />
* '''Uses a [[white box (software engineering)|white box]] or open-box<ref name="tdidt" /> model.''' If a given situation is observable in a model, the explanation for the condition is easily given by [[Boolean logic]]. By contrast, in a [[black box]] model, the explanation for the results is typically difficult to understand, for example with an [[artificial neural network]].
* '''Possible to validate a model using statistical tests.''' That makes it possible to account for the reliability of the model.
* Non-parametric approach that makes no assumptions of the training data or prediction residuals; e.g., no distributional, independence, or constant variance assumptions
===Limitations===
* Trees can be very non-robust. A small change in the [[Training, test, and validation sets|training data]] can result in a large change in the tree and consequently the final predictions.<ref name=":0" />
* The problem of learning an optimal decision tree is known to be [[NP-complete]] under several aspects of optimality and even for simple concepts.<ref>{{Cite journal | doi = 10.1016/0020-0190(76)90095-8 | last1 = Hyafil | first1 = Laurent | last2 = Rivest | first2 = RL | year = 1976 | title = Constructing Optimal Binary Decision Trees is NP-complete | journal = Information Processing Letters | volume = 5 | issue = 1| pages = 15–17 }}</ref><ref>Murthy S. (1998). [https://cs.nyu.edu/~roweis/csc2515-2006/readings/murthy_dt.pdf "Automatic construction of decision trees from data: A multidisciplinary survey"]. ''Data Mining and Knowledge Discovery''</ref> Consequently, practical decision-tree learning algorithms are based on heuristics such as the [[greedy algorithm]] where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. To reduce the greedy effect of local optimality, some methods such as the dual information distance (DID) tree were proposed.<ref>{{cite journal|url= http://www.eng.tau.ac.il/~bengal/DID.pdf|title=Efficient Construction of Decision Trees by the Dual Information Distance Method|author= Ben-Gal I. Dana A., Shkolnik N. and Singer|journal= Quality Technology & Quantitative Management | volume= 11 | issue=1 |pages= 133–147|year=2014|doi=10.1080/16843703.2014.11673330|s2cid=7025979|access-date=2014-02-13|archive-date=2016-06-04|archive-url=https://web.archive.org/web/20160604183738/http://www.eng.tau.ac.il/~bengal/DID.pdf|url-status=dead}}</ref>
* Decision-tree learners can create over-complex trees that do not generalize well from the training data. (This is known as [[overfitting]].<ref>{{Cite book | title = Principles of Data Mining | doi = 10.1007/978-1-84628-766-4 | year = 2007 | isbn = 978-1-84628-765-7 | s2cid = 45746 }}</ref>) Mechanisms such as [[Pruning (decision trees)|pruning]] are necessary to avoid this problem (with the exception of some algorithms such as the Conditional Inference approach, that does not require pruning).<ref name="Hothorn2006" /><ref name="Strobl2009" />
* The average depth of the tree that is defined by the number of nodes or tests till classification is not guaranteed to be minimal or small under various splitting criteria.<ref name="Tris">{{cite web|author = Ben-Gal I. and Trister C. (2015)|title = Parallel Construction of Decision Trees with Consistently Non Increasing Expected Number of Tests |url = http://www.eng.tau.ac.il/~bengal/Trist.pdf|publisher = Applied Stochastic Models in Business and Industry, Vol. 31(1) 64-78|access-date = 2021-01-30|archive-date = 2021-02-05|archive-url = https://web.archive.org/web/20210205043215/http://www.eng.tau.ac.il/~bengal/Trist.pdf|url-status = dead}}</ref>
* For data including categorical variables with different numbers of levels, [[information gain in decision trees]] is biased in favor of attributes with more levels.<ref>{{cite conference|author=Deng, H.|author2=Runger, G. |author3=Tuv, E. |title=Bias of importance measures for multi-valued attributes and solutions|conference=Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN)|year=2011|pages= 293–300|url=https://www.researchgate.net/publication/221079908}}</ref> To counter this problem, instead of choosing the attribute with highest [[information gain]], one can choose the attribute with the highest [[information gain ratio]] among the attributes whose information gain is greater than the mean information gain.<ref>{{cite journal |doi=10.1007/BF00116251 |last=Quinlan |first=J. Ross |title=Induction of Decision Trees |journal=[[Machine Learning (journal)|Machine Learning]] |volume=1 |issue=1 |year=1986 |pages=81–106 |doi-access=free }}</ref> This biases the decision tree against considering attributes with a large number of distinct values, while not giving an unfair advantage to attributes with very low information gain. Alternatively, the issue of biased predictor selection can be avoided by the Conditional Inference approach,<ref name="Hothorn2006" /> a two-stage approach,<ref>{{Cite journal|last1=Brandmaier|first1=Andreas M.|last2=Oertzen|first2=Timo von|last3=McArdle|first3=John J.|last4=Lindenberger|first4=Ulman|title=Structural equation model trees.|journal=Psychological Methods|language=en|volume=18|issue=1|pages=71–86|doi=10.1037/a0030001|pmid=22984789|pmc=4386908|year=2012|hdl=11858/00-001M-0000-0024-EA33-9}}</ref> or adaptive leave-one-out feature selection.<ref>{{cite journal|last1=Painsky|first1=Amichai|last2=Rosset|first2=Saharon|title=Cross-Validated Variable Selection in Tree-Based Methods Improves Predictive Performance|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|date=2017|volume=39|issue=11|pages=2142–2153|pmid=28114007|doi=10.1109/TPAMI.2016.2636831|arxiv=1512.03444|s2cid=5381516}}</ref>
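As an illustration of the gain-ratio correction described in the last point, the following Python sketch computes information gain and gain ratio for a single categorical attribute on made-up data; the selection rule then compares these quantities across candidate attributes:

<syntaxhighlight lang="python">
# Illustrative sketch: information gain vs. gain ratio for one categorical
# attribute, computed from hypothetical data.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Entropy of the labels minus the weighted entropy after
    partitioning the cases by the attribute's values."""
    n = len(labels)
    remainder = 0.0
    for v in set(values):
        subset = [lab for x, lab in zip(values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def gain_ratio(values, labels):
    """Information gain divided by the attribute's own (split) entropy,
    penalising attributes with many distinct values."""
    return information_gain(values, labels) / entropy(values)

attribute = ["red", "red", "blue", "blue", "green", "green"]
target    = ["yes", "yes", "no",  "no",   "yes",   "no"]
print(information_gain(attribute, target), gain_ratio(attribute, target))
</syntaxhighlight>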
 
===Implementations===
Many data mining software packages provide implementations of one or more decision tree algorithms (e.g. random forest).
 
Open source examples include:
 
* [[ALGLIB]], a C++, C# and Java numerical analysis library with data analysis features (random forest)
* [[KNIME]], a free and open-source data analytics, reporting and integration platform (decision trees, random forest)
* [[Orange (software)|Orange]], an open-source data visualization, machine learning and data mining toolkit (random forest)
* [[R (programming language)|R]] (an open-source software environment for statistical computing, which includes several CART implementations such as the rpart, party and randomForest packages)
* [[scikit-learn]] (a free and open-source machine learning library for the [[Python (programming language)|Python]] programming language)
* [[Weka (machine learning)|Weka]] (a free and open-source data-mining suite that contains many decision tree algorithms)
 
Notable commercial software:
 
* [[MATLAB]]
* [[Microsoft SQL Server]] [https://technet.microsoft.com/en-us/library/cc645868.aspx]
* [[RapidMiner]]
* Salford Systems CART (which licensed the proprietary code of the original CART authors)<ref name="bfos"/>
* [[SAS (software)#Components|SAS Enterprise Miner]]
* [[SPSS Modeler|IBM SPSS Modeler]]
 
==Extensions==