class TMVA::DecisionTree: public TMVA::BinaryTree


 Implementation of a Decision Tree

 In a decision tree, successive decision nodes are used to categorize the
 events of a sample as either signal or background. Each node
 uses only a single discriminating variable to decide if the event is
 signal-like ("goes right") or background-like ("goes left"). This
 forms a tree-like structure with "baskets" at the end (leaf nodes),
 and an event is classified as either signal or background according to
 whether the basket where it ends up has been classified as signal or
 background during the training. Training of a decision tree is the
 process of defining the "cut criteria" for each node. The training
 starts with the root node. Here one takes the full training event
 sample and selects the variable and corresponding cut value that give
 the best separation between signal and background at this stage. Using
 this cut criterion, the sample is then divided into two subsamples, a
 signal-like (right) and a background-like (left) sample. Two new nodes
 are then created, one for each of the two subsamples, and they are
 constructed using the same mechanism as described for the root
 node. The division is stopped once a node has reached either a
 minimum number of events, or a minimum or maximum signal purity. These
 leaf nodes are then called "signal" or "background", depending on
 whether they contain more signal or background events from the training sample.
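
 Below is a minimal usage sketch based on the constructor and method
 signatures listed on this page. The header locations and the way the
 vector of TMVA::Event pointers is obtained are assumptions, not part of
 this documentation; GiniIndex is the separation criterion the default
 constructor is documented to use.

   #include "TMVA/DecisionTree.h"   // assumed header paths
   #include "TMVA/GiniIndex.h"
   #include <vector>

   // Build a tree from a prepared training sample and classify one event;
   // filling the training sample is left out of this sketch.
   Double_t TrainAndClassify(std::vector<TMVA::Event*>& trainingSample,
                             const TMVA::Event& testEvent)
   {
      TMVA::SeparationBase* sepType = new TMVA::GiniIndex();
      // separation criterion, minimum node size, number of cut grid points
      TMVA::DecisionTree tree(sepType, /*minSize=*/20, /*nCuts=*/20);
      tree.BuildTree(trainingSample);      // recursive splitting from the root
      return tree.CheckEvent(testEvent);   // result of this tree's classification
   }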

Function Members (Methods)

public:
virtual ~DecisionTree()
Int_t BuildTree(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node = NULL)
Double_t CheckEvent(const TMVA::Event&, Bool_t UseYesNoLeaf = kFALSE)
static TClass* Class()
void CleanTree(TMVA::DecisionTreeNode* node = NULL)
void ClearTree()
UInt_t CountLeafNodes(TMVA::DecisionTreeNode* n = NULL)
UInt_t TMVA::BinaryTree::CountNodes(TMVA::Node* n = NULL)
virtual TMVA::Node* CreateNode()
TMVA::DecisionTree DecisionTree()
TMVA::DecisionTree DecisionTree(TMVA::DecisionTreeNode* n)
TMVA::DecisionTree DecisionTree(const TMVA::DecisionTree& d)
TMVA::DecisionTree DecisionTree(TMVA::SeparationBase* sepType, Int_t minSize, Int_t nCuts, TMVA::SeparationBase* qtype = NULL, Bool_t randomisedTree = kFALSE, Int_t useNvars = 0, Int_t iSeed = 0)
void DescendTree(TMVA::DecisionTreeNode* n = NULL)
void FillEvent(TMVA::Event& event, TMVA::DecisionTreeNode* node)
void FillTree(vector<TMVA::Event*>& eventSample)
TMVA::DecisionTreeNode* GetLeftDaughter(TMVA::DecisionTreeNode* n)
UInt_t TMVA::BinaryTree::GetNNodes() const
TMVA::DecisionTreeNode* GetNode(ULong_t sequence, UInt_t depth)
Double_t GetNodePurityLimit() const
TMVA::DecisionTreeNode* GetRightDaughter(TMVA::DecisionTreeNode* n)
TMVA::Node* TMVA::BinaryTree::GetRoot() const
UInt_t TMVA::BinaryTree::GetTotalTreeDepth() const
vector<Double_t> GetVariableImportance()
Double_t GetVariableImportance(Int_t ivar)
virtual TClass* IsA() const
void TMVA::BinaryTree::Print(ostream& os) const
void PruneNode(TMVA::DecisionTreeNode* node)
void PruneTree()
void PruneTreeCC()
void PruneTreeEEP(TMVA::DecisionTreeNode* node)
void TMVA::BinaryTree::Read(istream& istr)
void SetNodePurityLimit(Double_t p)
void SetParentTreeInNodes(TMVA::DecisionTreeNode* n = NULL)
void SetPruneMethod(TMVA::DecisionTree::EPruneMethod m = kCostComplexityPruning)
void SetPruneStrength(Double_t p)
void TMVA::BinaryTree::SetRoot(TMVA::Node* r)
void TMVA::BinaryTree::SetTotalTreeDepth(Int_t depth)
void TMVA::BinaryTree::SetTotalTreeDepth(TMVA::Node* n = NULL)
virtual void ShowMembers(TMemberInspector& insp, char* parent)
virtual void Streamer(TBuffer& b)
void StreamerNVirtual(TBuffer& b)
Double_t TrainNode(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
Double_t TrainNodeFast(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
Double_t TrainNodeFull(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
protected:
void TMVA::BinaryTree::DeleteNode(TMVA::Node*)
private:
void FindMinAndMax(vector<TMVA::Event*>& eventSample, vector<Double_t>& min, vector<Double_t>& max)
Double_t GetNodeError(TMVA::DecisionTreeNode* node)
Double_t GetSubTreeError(TMVA::DecisionTreeNode* node)
Double_t SamplePurity(vector<TMVA::Event*> eventSample)
void SetCutPoints(vector<Double_t>& cut_points, Double_t xmin, Double_t xmax, Int_t num_gridpoints)

Data Members

public:
enum EPruneMethod { kExpectedErrorPruning,
                    kCostComplexityPruning,
                    kNoPruning
};
protected:
UInt_t TMVA::BinaryTree::fDepth  maximal depth reached in the tree
TMVA::MsgLogger TMVA::BinaryTree::fLogger  message logger
Int_t TMVA::BinaryTree::fNNodes  total number of nodes in the tree (counted)
private:
Double_t fMinSepGain  minimum separation gain required to perform a node splitting
Double_t fMinSize  minimum number of events in a node
TRandom2* fMyTrandom  random number generator for randomised trees
Int_t fNCuts  number of grid points in the variable cut scans
Double_t fNodePurityLimit  purity limit used to decide whether a node is signal
Int_t fNvars  number of variables used to separate S and B
TMVA::DecisionTree::EPruneMethod fPruneMethod  method used for pruning
Double_t fPruneStrength  a parameter setting the "amount" of pruning; needs to be adjusted
TMVA::SeparationBase* fQualityIndex  separation/quality criterion for CC pruning
Bool_t fRandomisedTree  choose a random set of variables at each node splitting
TMVA::SeparationBase* fSepType  the separation criterion
Int_t fUseNvars  the number of variables used in randomised trees
Bool_t fUseSearchTree  cut scan done with binary search trees or a simple event loop
vector<Double_t> fVariableImportance  the relative importance of the different variables
static const Int_t fgDebugLevel  debug level determining some printout/control plots etc.

Function documentation

DecisionTree( void )
 default constructor using the GiniIndex as separation criterion,
 no restrictions on the minimum number of events in a leaf node or on the
 separation gain in the node splitting
DecisionTree( DecisionTreeNode* n )
 default constructor using the GiniIndex as separation criterion,
 no restrictions on the minimum number of events in a leaf node or on the
 separation gain in the node splitting
DecisionTree(TMVA::SeparationBase* sepType, Int_t minSize, Int_t nCuts, TMVA::SeparationBase* qtype = NULL, Bool_t randomisedTree = kFALSE, Int_t useNvars = 0, Int_t iSeed = 0)
 constructor specifying the separation type, the minimum number of
 events in a node that is still subjected to further splitting, and the
 number of bins in the grid used in applying the cut for the node
 splitting.
DecisionTree( const DecisionTree &d)
 copy constructor that creates a true copy, i.e. a completely independent tree;
 the node copy recursively copies all the nodes
~DecisionTree( void )
 destructor
void SetParentTreeInNodes(TMVA::DecisionTreeNode* n = NULL)
 descend a tree to find all its leaf nodes, fill max depth reached in the
 tree at the same time.
Int_t BuildTree(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node = NULL)
 building the decision tree by recursively calling the splitting of
 one (root-) node into two daughter nodes (returns the number of nodes)
void FillTree(vector<TMVA::Event*>& eventSample)
 fill the existing decision tree structure by passing events in
 from the top node and seeing where they end up
void FillEvent(TMVA::Event& event, TMVA::DecisionTreeNode* node)
 fill the existing decision tree structure by passing the event in
 from the top node and seeing where it ends up
void ClearTree()
 clear the tree nodes (their S/N, Nevents etc.), keeping only the structure of the tree
void CleanTree(TMVA::DecisionTreeNode* node = NULL)
 remove those last splits that result in two leaf nodes that
 are both of the type background
void PruneTree()
 prune (get rid of internal nodes of) the decision tree to avoid overtraining.
 Several different pruning methods can be applied, as selected by the
 variable "fPruneMethod". Currently, however, only the Expected Error Pruning
 is implemented.
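
 Continuing the sketch from the class description, pruning would be
 configured through the setters documented further below; the prune
 strength value is an arbitrary placeholder.

   tree.SetPruneMethod(TMVA::DecisionTree::kExpectedErrorPruning);
   tree.SetPruneStrength(2.0);   // placeholder; needs to be adjusted
   tree.PruneTree();             // removes statistically insignificant splits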

void PruneTreeEEP(TMVA::DecisionTreeNode* node)
 recursive pruning of nodes using the Expected Error Pruning (EEP);
 if it is an internal node, prune it
void PruneTreeCC()
UInt_t CountLeafNodes(TMVA::DecisionTreeNode* n = NULL)
 return the number of terminal nodes in the sub-tree below Node n
void DescendTree(TMVA::DecisionTreeNode* n = NULL)
 descend a tree to find all its leaf nodes
void PruneNode(TMVA::DecisionTreeNode* node)
 prune away the subtree below the node
Double_t GetNodeError(TMVA::DecisionTreeNode* node)
 calculate an UPPER limit on the error made by the classification done
 by this node. If the S/(S+B) of the node is f, then according to the
 training sample, the error rate (fraction of events misclassified by
 this node) is (1-f).
 Now f has a statistical error according to the binomial distribution,
 hence the error on f can be estimated (it is the same error as the binomial
 error in efficiency calculations: sigma = sqrt( eff(1-eff)/nEvts ))
Double_t GetSubTreeError(TMVA::DecisionTreeNode* node)
 calculate the expected statistical error on the subtree below "node"
 which is used in the expected error pruning
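
 A standalone sketch of the two error estimates and the resulting EEP
 decision. The binomial error on f is taken from the description above;
 the event-fraction weighting of the subtree error and the exact pruning
 condition are assumptions, and ErrNode is a stand-in for DecisionTreeNode.

   #include <cmath>

   struct ErrNode {
      double   f;        // purity S/(S+B) of the node
      double   nEvts;    // number of training events in the node
      ErrNode* left;     // left == right == NULL at a leaf
      ErrNode* right;
   };

   // Upper limit on the node's classification error: training error (1-f)
   // plus one binomial standard deviation, sigma = sqrt( f(1-f)/nEvts ).
   double NodeError(const ErrNode* n)
   {
      return (1.0 - n->f) + std::sqrt(n->f * (1.0 - n->f) / n->nEvts);
   }

   // Expected error of the subtree below n: the daughters' subtree errors
   // weighted by the fraction of events going into each branch.
   double SubTreeError(const ErrNode* n)
   {
      if (!n->left || !n->right) return NodeError(n);
      return (n->left->nEvts  / n->nEvts) * SubTreeError(n->left)
           + (n->right->nEvts / n->nEvts) * SubTreeError(n->right);
   }

   // EEP decision: prune if the node alone is not expected to perform
   // worse than the subtree below it.
   bool ShouldPrune(const ErrNode* n)
   {
      return NodeError(n) <= SubTreeError(n);
   }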
TMVA::DecisionTreeNode* GetLeftDaughter(TMVA::DecisionTreeNode* n)
 get the left daughter node of the current node "n"
TMVA::DecisionTreeNode* GetRightDaughter(TMVA::DecisionTreeNode* n)
 get the right daughter node of the current node "n"
TMVA::DecisionTreeNode* GetNode(ULong_t sequence, UInt_t depth)
 retrieve a node from the tree. Its position (up to a maximal tree depth of 64)
 is coded as a sequence of left-right moves starting from the root, stored as
 a 0-1 bit pattern in the "long integer" sequence (0: left, 1: right)
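
 A standalone sketch of this navigation; the bit order (least significant
 bit consumed first) is an assumption, and SeqNode is a stand-in for the
 TMVA node type.

   struct SeqNode { SeqNode* left; SeqNode* right; };

   // Follow "depth" left/right moves encoded as the bits of "sequence"
   // (0: left, 1: right), starting from the root node.
   SeqNode* GetNodeBySequence(SeqNode* root, unsigned long sequence,
                              unsigned int depth)
   {
      SeqNode* current = root;
      for (unsigned int i = 0; i < depth && current != 0; ++i)
         current = ((sequence >> i) & 1UL) ? current->right : current->left;
      return current;
   }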
void FindMinAndMax(vector<TMVA::Event*>& eventSample, vector<Double_t>& min, vector<Double_t>& max)
 helper function which determines the min and max values
 of the event variables in the current event sample
void SetCutPoints(vector<Double_t>& cut_points, Double_t xmin, Double_t xmax, Int_t num_gridpoints)
 helper function which calculates the grid points used for
 the cut scan
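
 A sketch matching the signature above, assuming an equidistant grid that
 excludes the interval endpoints (a cut at xmin or xmax would not split
 the sample).

   #include <vector>

   void SetCutPointsSketch(std::vector<double>& cut_points,
                           double xmin, double xmax, int num_gridpoints)
   {
      cut_points.clear();
      const double step = (xmax - xmin) / (num_gridpoints + 1);
      for (int i = 1; i <= num_gridpoints; ++i)
         cut_points.push_back(xmin + i * step);   // interior grid points only
   }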
Double_t TrainNodeFast(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
 decide how to split a node. At each node, ONE of the variables is
 chosen, namely the one which gives the best separation between signal and
 background on the sample which enters the node.
 In order to do this, for each variable a scan over the different cut
 values on a grid (with fNCuts grid points) is performed, and the resulting
 separation gains are compared. This cut scan uses either a binary search tree
 or a simple loop over the events, depending on the number of events
 in the sample.
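
 A standalone sketch of the grid scan for a single variable (the simple
 event-loop variant; the binary search tree variant is not shown).
 SimpleEvent and the Gini-based gain are illustrative stand-ins for the
 TMVA types and the configured separation criterion.

   #include <limits>
   #include <vector>

   struct SimpleEvent { double value; bool isSignal; };

   // Event-weighted Gini impurity s*b/(s+b); smaller means purer.
   static double Gini(double s, double b)
   {
      double n = s + b;
      return n > 0 ? s * b / n : 0;
   }

   // Scan nCuts equidistant cut values and return the one with the best
   // separation gain (parent impurity minus summed daughter impurities).
   double ScanVariable(const std::vector<SimpleEvent>& sample,
                       double xmin, double xmax, int nCuts, double& bestGain)
   {
      double sTot = 0, bTot = 0;
      for (size_t i = 0; i < sample.size(); ++i)
         (sample[i].isSignal ? sTot : bTot) += 1.0;
      const double parent = Gini(sTot, bTot);

      double bestCut = xmin;
      bestGain = -std::numeric_limits<double>::max();
      const double step = (xmax - xmin) / (nCuts + 1);
      for (int i = 1; i <= nCuts; ++i) {
         const double cut = xmin + i * step;
         double sR = 0, bR = 0;                   // events "going right" (>= cut)
         for (size_t j = 0; j < sample.size(); ++j)
            if (sample[j].value >= cut) (sample[j].isSignal ? sR : bR) += 1.0;
         const double gain = parent - Gini(sR, bR) - Gini(sTot - sR, bTot - bR);
         if (gain > bestGain) { bestGain = gain; bestCut = cut; }
      }
      return bestCut;
   }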
Double_t TrainNodeFull(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
 decide how to split a node. At each node, ONE of the variables is
 chosen, namely the one which gives the best separation between signal and
 background on the sample which enters the node.
 In this node-splitting routine the events are sorted in each
 variable, allowing the true optimal cut in each variable to be found by
 looping through all events, placing the cuts always in the middle between
 two of the sorted events, and finding the true maximal separation gain
 possible by cutting on this variable.
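
 A standalone sketch of the full scan for a single variable: sort the
 events, then try a cut in the middle between each pair of neighbours.
 SimpleEvent and the Gini-based gain are again illustrative stand-ins.

   #include <algorithm>
   #include <vector>

   struct SimpleEvent { double value; bool isSignal; };

   static double Gini(double s, double b)
   {
      double n = s + b;
      return n > 0 ? s * b / n : 0;
   }

   static bool ByValue(const SimpleEvent& a, const SimpleEvent& b)
   {
      return a.value < b.value;
   }

   // Returns the optimal cut on this variable; bestGain receives the
   // maximal separation gain found.
   double FindOptimalCut(std::vector<SimpleEvent> sample, double& bestGain)
   {
      std::sort(sample.begin(), sample.end(), ByValue);

      double sTot = 0, bTot = 0;
      for (size_t i = 0; i < sample.size(); ++i)
         (sample[i].isSignal ? sTot : bTot) += 1.0;
      const double parent = Gini(sTot, bTot);

      double sLeft = 0, bLeft = 0, bestCut = 0;
      bestGain = 0;
      for (size_t i = 0; i + 1 < sample.size(); ++i) {
         (sample[i].isSignal ? sLeft : bLeft) += 1.0;   // event i goes left
         const double gain = parent - Gini(sLeft, bLeft)
                                    - Gini(sTot - sLeft, bTot - bLeft);
         if (gain > bestGain) {
            bestGain = gain;
            bestCut  = 0.5 * (sample[i].value + sample[i + 1].value);
         }
      }
      return bestCut;
   }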
Double_t CheckEvent(const TMVA::Event& , Bool_t UseYesNoLeaf = kFALSE)
 the event e is put into the decision tree (starting at the root node)
 and the output is the NodeType (signal or background) of the final node (basket)
 in which the given event ends up, i.e. the result of the classification of
 the event by this decision tree.
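
 A standalone sketch of the descent; the Node structure is an illustrative
 stand-in for TMVA::DecisionTreeNode, and the ">= goes right" convention
 follows the class description above.

   #include <vector>

   struct Node {
      int    cutVariable;   // index of the discriminating variable
      double cutValue;      // events with var >= cutValue go right
      int    nodeType;      // at a leaf: +1 signal, -1 background
      Node*  left;
      Node*  right;         // left == right == NULL at a leaf
   };

   int Classify(const Node* root, const std::vector<double>& eventVars)
   {
      const Node* n = root;
      while (n->left && n->right)    // descend until a leaf is reached
         n = (eventVars[n->cutVariable] >= n->cutValue) ? n->right : n->left;
      return n->nodeType;            // the leaf's classification
   }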
Double_t SamplePurity(vector<TMVA::Event*> eventSample)
 calculates the purity S/(S+B) of a given event sample
vector<Double_t> GetVariableImportance()
 return the relative variable importance, normalized such that all
 variables together have importance 1. The importance is
 evaluated as the total separation gain that this variable achieved in
 the decision tree (weighted by the number of events)
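
 A sketch of the normalisation described here; the accumulation of the
 per-variable separation gains during training is not shown.

   #include <vector>

   std::vector<double> NormaliseImportance(std::vector<double> importance)
   {
      double sum = 0;
      for (size_t i = 0; i < importance.size(); ++i) sum += importance[i];
      if (sum > 0)
         for (size_t i = 0; i < importance.size(); ++i) importance[i] /= sum;
      return importance;   // entries now sum to 1
   }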
Double_t GetVariableImportance(Int_t ivar)
 returns the relative importance of variable ivar
Node * CreateNode()
{ return new DecisionTreeNode(); }
Double_t TrainNode(vector<TMVA::Event*>& eventSample, TMVA::DecisionTreeNode* node)
 determine how a node is split (which variable, which cut value)
{ return TrainNodeFast( eventSample, node ); }
void SetPruneMethod(TMVA::DecisionTree::EPruneMethod m = kCostComplexityPruning)
{ fPruneMethod = m; }
void SetPruneStrength(Double_t p)
{ fPruneStrength = p; }
void SetNodePurityLimit(Double_t p)
{ fNodePurityLimit = p; }
Double_t GetNodePurityLimit() const
{ return fNodePurityLimit; }

Author: Andreas Hoecker, Joerg Stelzer, Helge Voss, Kai Voss
Last change: root/tmva $Id: DecisionTree.h 26050 2008-11-01 09:18:41Z brun $
Last generated: 2008-11-01 10:21