When is classification error rate preferable when pruning decision trees?
I'm going through Chapter 8 of "An Introduction to Statistical Learning", which introduces decision trees. My question is specific to the three approaches to pruning a classification tree (i.e., classification error rate, the Gini index, and cross-entropy). With regard to building classification trees, the chapter states that "classification error is not sufficiently sensitive for tree-growing, and in practice the Gini index and cross-entropy are preferred". However, it also states that "any of these three approaches might be used when pruning the tree, but the classification error rate is preferable if prediction accuracy of the final pruned tree is the goal". This raises two questions:

1. Given that classification error rate is not sensitive enough, why should it be used over the Gini index and cross-entropy if prediction accuracy is the goal? What advantage does it have over them?
2. If classification error rate is preferred in that case, in what instances would we use the Gini index and cross-entropy when pruning a decision tree?

Tags: cart. Asked Mar 8 '15 at 10:32 by Eugene Yan.

1 Answer (accepted):

It's generally the case that, if you're trying to optimize some loss function (classification accuracy, Brier score, log-loss, etc.), it's more effective to use modeling procedures (tree learning, tree pruning) that optimize it directly. So the default attitude would be that, if you're trying to maximize classification accuracy, you should both train and prune your tree based on classification accuracy. However, there are a couple of things that might motivate you to make exceptions to this and not train your tree based on classification accuracy: the tree …
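To make the book's sensitivity claim concrete, here is a minimal sketch; the split counts are invented for illustration and come from neither the book nor the thread. Two candidate splits of the same node have identical weighted classification error, yet the Gini index and cross-entropy both prefer the split that produces a pure child, which is why error rate is considered too insensitive for tree-growing.

```python
from math import log2

def misclass(counts):
    """Classification error rate: E = 1 - max_k(p_k)."""
    n = sum(counts)
    return 1 - max(counts) / n

def gini(counts):
    """Gini index: G = sum_k p_k * (1 - p_k)."""
    n = sum(counts)
    return sum((c / n) * (1 - c / n) for c in counts)

def entropy(counts):
    """Cross-entropy: D = -sum_k p_k * log2(p_k)."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

def split_impurity(measure, children):
    """Impurity of a split: child impurities weighted by child size."""
    total = sum(sum(c) for c in children)
    return sum(sum(c) / total * measure(c) for c in children)

# Parent node: 400 observations of each of two classes.
# Two candidate splits of those same 800 observations:
split_a = [(300, 100), (100, 300)]  # both children impure
split_b = [(200, 400), (200, 0)]    # second child is pure

for name, split in [("A", split_a), ("B", split_b)]:
    print(f"split {name}: error={split_impurity(misclass, split):.3f}  "
          f"gini={split_impurity(gini, split):.3f}  "
          f"entropy={split_impurity(entropy, split):.3f}")
```

Running this prints an identical error rate (0.250) for both splits, but a lower Gini index (0.333 vs 0.375) and cross-entropy (0.689 vs 0.811) for split B, the one with a pure child: error rate cannot tell the two splits apart, while the other two measures can.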
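On the pruning side, the usual cost-complexity recipe is to grow a large tree with a sensitive criterion such as Gini, then choose the pruning level by cross-validated prediction accuracy, i.e. by classification error rate. Below is a sketch under assumptions the thread does not make: scikit-learn as the library and its bundled breast-cancer dataset as toy data.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy data; the dataset choice is illustrative only.
X, y = load_breast_cancer(return_X_y=True)

# Grow with Gini (the sensitive criterion), then compute the sequence
# of candidate subtrees from minimal cost-complexity pruning.
path = DecisionTreeClassifier(
    criterion="gini", random_state=0
).cost_complexity_pruning_path(X, y)

# Select the pruning level alpha by cross-validated *accuracy*, so the
# final subtree is chosen on classification error rate, not on Gini.
cv_scores = [
    cross_val_score(
        DecisionTreeClassifier(criterion="gini", ccp_alpha=a, random_state=0),
        X, y, cv=5, scoring="accuracy",
    ).mean()
    for a in path.ccp_alphas
]
best = path.ccp_alphas[int(np.argmax(cv_scores))]
print(f"chosen ccp_alpha = {best:.5f}, CV accuracy = {max(cv_scores):.3f}")
```

Note the division of labor this implies: the growing criterion (Gini) decides the splits, while the choice among pruned subtrees (the ccp_alpha value) is made on held-out accuracy, which matches the book's advice to prune on classification error rate when prediction accuracy of the final tree is the goal.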