Chapter 3, exercises from Section 3.11
5. Consider the following data set for a binary class problem.
A  B  Class Label
T  F      +
T  T      +
T  T      +
T  F      −
T  T      +
F  F      −
F  F      −
F  F      −
T  T      −
T  F      −
a. Calculate the information gain when splitting on A and B. Which attribute would the decision tree induction algorithm choose?
b. Calculate the gain in the Gini index when splitting on A and B. Which attribute would the decision tree induction algorithm choose?
c. Figure 3.11 shows that entropy and the Gini index are both monotonically increasing on the range [0, 0.5] and both monotonically decreasing on the range [0.5, 1]. Is it possible that information gain and the gain in the Gini index favor different attributes? Explain.
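The impurity calculations in parts (a) and (b) can be checked mechanically. The following Python sketch is not part of the original exercise; it is one way to compute the information gain and the gain in the Gini index for this data set, and the helper names entropy, gini, and gain are my own.

    from collections import Counter
    from math import log2

    # Data set from exercise 5: each record is (A, B, class label).
    records = [
        ("T", "F", "+"), ("T", "T", "+"), ("T", "T", "+"), ("T", "F", "-"),
        ("T", "T", "+"), ("F", "F", "-"), ("F", "F", "-"), ("F", "F", "-"),
        ("T", "T", "-"), ("T", "F", "-"),
    ]

    def entropy(labels):
        # Entropy of a collection of class labels.
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def gini(labels):
        # Gini index of a collection of class labels.
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def gain(data, attr_index, impurity):
        # Reduction in impurity obtained by splitting on the given attribute.
        parent = [r[-1] for r in data]
        children = {}
        for r in data:
            children.setdefault(r[attr_index], []).append(r[-1])
        weighted = sum(len(ch) / len(data) * impurity(ch)
                       for ch in children.values())
        return impurity(parent) - weighted

    for name, idx in (("A", 0), ("B", 1)):
        print(f"split on {name}: info gain = {gain(records, idx, entropy):.4f}, "
              f"Gini gain = {gain(records, idx, gini):.4f}")
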
7. Consider the following set of training examples.

X  Y  Z   No. of Class C1 Examples   No. of Class C2 Examples
0  0  0              5                          40
0  0  1              0                          15
0  1  0             10                           5
0  1  1             45                           0
1  0  0             10                           5
1  0  1             25                           0
1  1  0              5                          20
1  1  1              0                          15
a. Compute a two-level decision tree using the greedy approach described in this chapter. Use the classification error rate as the criterion for splitting. What is the overall error rate of the induced tree?
b. Repeat part (a) using X as the first splitting attribute and then choose the best remaining attribute for splitting at each of the two successor nodes. What is the error rate of the induced tree?
c. Compare the results of parts (a) and (b). Comment on the suitability of the greedy heuristic used for splitting attribute selection.
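For a quick check of parts (a) and (b), here is a minimal Python sketch of greedy two-level tree construction with classification error as the splitting criterion. The data layout and the helpers leaf_errors and best_split are my own choices, not from the text, and ties between attributes are broken arbitrarily.

    # Training counts from exercise 7: (X, Y, Z) -> (class C1 count, class C2 count).
    counts = {
        (0, 0, 0): (5, 40), (0, 0, 1): (0, 15), (0, 1, 0): (10, 5), (0, 1, 1): (45, 0),
        (1, 0, 0): (10, 5), (1, 0, 1): (25, 0), (1, 1, 0): (5, 20), (1, 1, 1): (0, 15),
    }
    ATTRS = ("X", "Y", "Z")

    def leaf_errors(rows):
        # Misclassifications if this subset is labeled with its majority class.
        c1 = sum(counts[r][0] for r in rows)
        c2 = sum(counts[r][1] for r in rows)
        return min(c1, c2)

    def best_split(rows, attrs):
        # Attribute index whose binary split leaves the fewest misclassifications.
        return min(attrs, key=lambda i: sum(
            leaf_errors([r for r in rows if r[i] == v]) for v in (0, 1)))

    root_rows = list(counts)
    first = best_split(root_rows, range(3))
    print("root split on", ATTRS[first])
    total_errors = 0
    for v in (0, 1):
        branch = [r for r in root_rows if r[first] == v]
        second = best_split(branch, [i for i in range(3) if i != first])
        print(f"  {ATTRS[first]} = {v}: split on {ATTRS[second]}")
        for w in (0, 1):
            total_errors += leaf_errors([r for r in branch if r[second] == w])
    print("overall errors:", total_errors, "out of",
          sum(c1 + c2 for c1, c2 in counts.values()))

Forcing X at the root (part b) amounts to replacing the first call to best_split with first = 0 and rerunning the second level.
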
8. The following table summarizes a data set with three attributes A, B, C and two class labels +, −. Build a two-level decision tree.

         Number of Instances
A  B  C      +        −
T  T  T      5        0
F  T  T      0       20
T  F  T     20        0
F  F  T      0        5
T  T  F      0        0
F  T  F     25        0
T  F  F      0        0
F  F  F      0       25
a. According to the classification error rate, which attribute would be chosen as the first splitting attribute? For each attribute, show the contingency table and the gain in classification error rate.
b. Repeat for the two children of the root node.
c. How many instances are misclassified by the resulting decision tree?
d. Repeat parts (a), (b), and (c) using C as the first splitting attribute.
e. Use the results in parts (c) and (d) to draw conclusions about the greedy nature of the decision tree induction algorithm.
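As with the previous exercise, the contingency tables and error-rate gains in parts (a) and (d) can be tabulated with a short script. The Python sketch below is illustrative only; the helpers contingency and error_gain are assumed names, not from the chapter.

    # Counts from exercise 8: (A, B, C) -> (number of +, number of -).
    counts = {
        ("T", "T", "T"): (5, 0),  ("F", "T", "T"): (0, 20),
        ("T", "F", "T"): (20, 0), ("F", "F", "T"): (0, 5),
        ("T", "T", "F"): (0, 0),  ("F", "T", "F"): (25, 0),
        ("T", "F", "F"): (0, 0),  ("F", "F", "F"): (0, 25),
    }
    ATTRS = ("A", "B", "C")
    rows = list(counts)

    def contingency(rows, i):
        # Contingency table {attribute value: (+ count, - count)} for attribute i.
        table = {}
        for r in rows:
            pos, neg = counts[r]
            p, n = table.get(r[i], (0, 0))
            table[r[i]] = (p + pos, n + neg)
        return table

    def error_gain(rows, i):
        # Parent error rate minus weighted error rate after splitting on attribute i.
        total_pos = sum(counts[r][0] for r in rows)
        total_neg = sum(counts[r][1] for r in rows)
        total = total_pos + total_neg
        child_errors = sum(min(p, n) for p, n in contingency(rows, i).values())
        return min(total_pos, total_neg) / total - child_errors / total

    for i, name in enumerate(ATTRS):
        print(name, contingency(rows, i),
              "error-rate gain =", round(error_gain(rows, i), 4))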