How To Calculate Mean Absolute Error In Weka
Mean absolute error in classification (from the Wekalist mailing list)

Question (G. Nam): How is the mean absolute error calculated in classification? In numeric prediction it is (|p1 - a1| + ... + |pn - an|)/n, but what is the difference between a predicted value pi and an actual value ai in classification? I am trying to figure out how Weka came up with the following value for the mean absolute error:

Correctly Classified Instances        4520    47.494 %
Incorrectly Classified Instances      4997    52.506 %
Mean absolute error                      0.2717
Total Number of Instances             9517

    a     b     c     d     e    <-- classified as
  566   589   226    31     1  |  a = '(-inf--0.0025]'
  209   647   364    71     8  |  b = '(-0.0025--0.0015]'
  181   681   708   390    60  |  c = '(-0.0015-0]'
   31   134   367  1178   823  |  d = '(0-0.0015]'
    1    17    66   747  1421  |  e = '(0.0015-inf)'

Answer (Peter Reutemann): For each instance in the test set, Weka obtains a class distribution (for each class label a value from 0 to 1, i.e., 0-100%). This distribution is compared against the expected distribution, in which the actual class label has the value 1 and all other labels have 0. For each class label the absolute difference between the predicted and the expected probability is computed, and these differences are averaged over all labels and all instances to give the mean absolute error.
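The per-label averaging described above can be sketched in a few lines of Python. This is an illustration of the idea, not Weka's actual source; the predicted distributions and labels below are made-up examples:

```python
def classification_mae(predicted_dists, actual_labels, num_classes):
    """Average absolute difference between each predicted class
    distribution and the one-hot expected distribution, taken over
    all labels and all instances."""
    total = 0.0
    for dist, actual in zip(predicted_dists, actual_labels):
        expected = [1.0 if c == actual else 0.0 for c in range(num_classes)]
        total += sum(abs(p - e) for p, e in zip(dist, expected))
    return total / (len(predicted_dists) * num_classes)

# Two instances, three classes (hypothetical values):
preds = [[0.7, 0.2, 0.1],   # instance 1, actual class 0
         [0.1, 0.3, 0.6]]   # instance 2, actual class 2
mae = classification_mae(preds, [0, 2], 3)
print(round(mae, 4))  # (0.6 + 0.8) / (2 * 3) = 0.2333
```

Note that a confident, correct prediction (distribution close to one-hot on the true class) contributes almost nothing to the sum, while a confident wrong prediction contributes heavily.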
Mean absolute error (MAE): Expressed in words, the MAE is the average over the verification sample of the absolute values of the differences between the forecasts and the corresponding observations: MAE = (1/n) * Σ |pi - ai|. The MAE is a linear score, which means that all the individual differences are weighted equally in the average.

Root mean squared error (RMSE): The RMSE is a quadratic scoring rule which measures the average magnitude of the error. Expressed in words, the differences between forecasts and corresponding observed values are each squared and then averaged over the sample; finally, the square root of the average is taken: RMSE = sqrt((1/n) * Σ (pi - ai)^2). Since the errors are squared before they are averaged, the RMSE gives relatively high weight to large errors. This means the RMSE is most useful when large errors are particularly undesirable.

The MAE and the RMSE can be used together to diagnose the variation in the errors in a set of forecasts. The RMSE will always be larger than or equal to the MAE; the greater the difference between them, the greater the variance in the individual errors in the sample. If RMSE = MAE, then all the errors are of the same magnitude. Both the MAE and the RMSE range from 0 to ∞ and are negatively oriented scores: lower values are better. For example, if a set of temperature forecasts shows an MAE of 1.5 degrees and an RMSE of 2.5 degrees, there is some variation in the individual errors, but the gap between RMSE and MAE is not large enough to indicate the presence of very large errors.
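The MAE and RMSE definitions above can be computed directly; the forecast and observation values below are made-up illustrations, not data from the text:

```python
import math

forecasts    = [20.0, 22.0, 19.0, 25.0, 18.0]
observations = [21.0, 20.5, 19.0, 22.0, 18.5]

errors = [f - o for f, o in zip(forecasts, observations)]

# MAE: average of absolute errors (all errors weighted equally).
mae = sum(abs(e) for e in errors) / len(errors)

# RMSE: square, average, then take the square root
# (large errors are penalized more heavily).
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(round(mae, 3), round(rmse, 3))  # 1.2 1.581
assert rmse >= mae  # always holds; a large gap signals a few big errors
```

Here the single 3-degree miss pulls the RMSE well above the MAE, illustrating the diagnostic use of the pair described above.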
A related Stack Overflow question ("How to interpret weka classification?") asks: How can we interpret the classification result in Weka using naive Bayes? How are the mean, standard deviation, weight sum and precision calculated? How are the kappa statistic, mean absolute error, root mean squared error, etc. calculated? What is the interpretation of the confusion matrix?

The accepted answer: Below is some sample output for a naive Bayes classifier, using 10-fold cross-validation. There's a lot of information there, and what you should focus on depends on your application. I'll explain some of the results below, to get you started.

=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances          71       71      %
Incorrectly Classified Instances        29       29      %
Kappa statistic                          0.3108
Mean absolute error                      0.3333
Root mean squared error                  0.4662
Relative absolute error                 69.9453 %
Root relative squared error             95.5466 %
Total Number of Instances              100

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                 0.967    0.692     0.686    0.967     0.803     0.709     0
                 0.308    0.033     0.857    0.308     0.453     0.708     1
Weighted Avg.    0.71     0.435     0.753    0.71      0.666     0.709

=== Confusion Matrix ===
  a  b   <-- classified as
 59  2 |  a = 0
 27 12 |  b = 1

The correctly and incorrectly classified instances show the percentage of test instances that were correctly and incorrectly classified. The raw numbers are shown in the confusion matrix, with a and b representing the class labels. Here there were 100 instances, so the percentages and raw numbers add up (59 + 12 = 71 correct, 2 + 27 = 29 incorrect).
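Several of the summary numbers can be recomputed directly from the confusion matrix, which is a good way to check one's understanding of the output. A minimal sketch, using the standard definitions of precision, recall, and Cohen's kappa:

```python
#            predicted a  predicted b
matrix = [[59, 2],    # actual a = 0
          [27, 12]]   # actual b = 1

total     = sum(sum(row) for row in matrix)   # 100 instances
correct   = matrix[0][0] + matrix[1][1]       # diagonal: 71
incorrect = total - correct                   # off-diagonal: 29

# Precision and recall for class 1 (label b):
tp = matrix[1][1]            # actually b, predicted b
fp = matrix[0][1]            # actually a, predicted b
fn = matrix[1][0]            # actually b, predicted a
precision = tp / (tp + fp)   # 12/14 ≈ 0.857
recall    = tp / (tp + fn)   # 12/39 ≈ 0.308

# Kappa statistic: observed agreement corrected for chance agreement.
row = [sum(r) for r in matrix]            # actual totals    [61, 39]
col = [sum(c) for c in zip(*matrix)]      # predicted totals [86, 14]
p_observed = correct / total
p_expected = sum(r * c for r, c in zip(row, col)) / total ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(correct, incorrect, round(precision, 3), round(recall, 3),
      round(kappa, 4))  # 71 29 0.857 0.308 0.3108
```

These values reproduce the 71/29 split, the precision and recall row for class 1, and the kappa statistic of 0.3108 from the summary above.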