MethodCFMlpANN.h
// @(#)root/tmva $Id$
// Author: Andreas Hoecker, Joerg Stelzer, Helge Voss, Kai Voss

/**********************************************************************************
 * Project: TMVA - a Root-integrated toolkit for multivariate data analysis
 * Package: TMVA
 * Class  : MethodCFMlpANN
 * Web    : http://tmva.sourceforge.net
 *
 * Description:
 *      Interface for the Clermont-Ferrand artificial neural network.
 *      The ANN code has been translated from FORTRAN77 (f2c);
 *      see files: MethodCFMlpANN_f2c_mlpl3.cpp
 *                 MethodCFMlpANN_f2c_datacc.cpp
 *
 *      --------------------------------------------------------------------
 *      Reference for the original FORTRAN version:
 *      Authors  : J. Proriol and contributions from ALEPH-Clermont-Fd
 *                 team members. Contact: gaypas@afal11.cern.ch
 *
 *      Copyright: Laboratoire Physique Corpusculaire
 *                 Universite de Blaise Pascal, IN2P3/CNRS
 *      --------------------------------------------------------------------
 *
 * Usage: options are given through the Factory:
 *
 *      factory->BookMethod( "MethodCFMlpANN", OptionsString );
 *
 *      where
 *
 *      TString OptionsString = "n_training_cycles:n_hidden_layers"
 *
 *      The default is: n_training_cycles = 5000, n_layers = 4.
 *      Note that the number of hidden layers in the NN is
 *
 *      n_hidden_layers = n_layers - 2
 *
 *      since there is one input and one output layer. The number of
 *      nodes (neurons) is predefined to be
 *
 *      n_nodes[i] = nvars + 1 - i   (where i = 1..n_layers)
 *
 *      with nvars being the number of variables used in the NN.
 *      Hence, the default case is:
 *
 *      n_neurons(layer 1 (input)) : nvars
 *      n_neurons(layer 2 (hidden)): nvars-1
 *      n_neurons(layer 3 (hidden)): nvars-1
 *      n_neurons(layer 4 (out))   : 2
 *
 *      This artificial neural network usually needs a relatively large
 *      number of cycles to converge (8000 and more). Overtraining can
 *      be efficiently tested by comparing the signal and background
 *      output of the NN for the events that were used for training and
 *      for an independent data sample (with equal properties). If the
 *      separation performance is significantly better for the training
 *      sample, the NN has picked up statistical fluctuations of the
 *      training sample and is hence overtrained. In this case, the number
 *      of cycles should be reduced, or the size of the training sample
 *      increased.
 *
 * Authors (alphabetical):
 *      Andreas Hoecker <Andreas.Hocker@cern.ch> - CERN, Switzerland
 *      Xavier Prudent  <prudent@lapp.in2p3.fr>  - LAPP, France
 *      Helge Voss      <Helge.Voss@cern.ch>     - MPI-K Heidelberg, Germany
 *      Kai Voss        <Kai.Voss@cern.ch>       - U. of Victoria, Canada
 *
 * Copyright (c) 2005:
 *      CERN, Switzerland
 *      U. of Victoria, Canada
 *      MPI-K Heidelberg, Germany
 *      LAPP, Annecy, France
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted according to the terms listed in LICENSE
 * (http://tmva.sourceforge.net/LICENSE)
 *
 **********************************************************************************/

#ifndef ROOT_TMVA_MethodCFMlpANN
#define ROOT_TMVA_MethodCFMlpANN

//////////////////////////////////////////////////////////////////////////
//                                                                      //
// MethodCFMlpANN                                                       //
//                                                                      //
// Interface for the Clermont-Ferrand artificial neural network         //
//                                                                      //
//////////////////////////////////////////////////////////////////////////

#include <iosfwd>

#include "TMVA/MethodBase.h"
#include "TMVA/MethodCFMlpANN_Utils.h"
#include "TMatrixF.h"

namespace TMVA {

   class MethodCFMlpANN : public MethodBase, MethodCFMlpANN_Utils {

   public:

      MethodCFMlpANN( const TString& jobName,
                      const TString& methodTitle,
                      DataSetInfo& theData,
                      const TString& theOption = "3000:N-1:N-2" );

      MethodCFMlpANN( DataSetInfo& theData,
                      const TString& theWeightFile );

      virtual ~MethodCFMlpANN( void );

      virtual Bool_t HasAnalysisType( Types::EAnalysisType type, UInt_t numberClasses, UInt_t /*numberTargets*/ );

      // training method
      void Train( void );
      using MethodBase::ReadWeightsFromStream;

      // write weights to file
      void AddWeightsXMLTo( void* parent ) const;

      // read weights from file
      void ReadWeightsFromStream( std::istream& istr );
      void ReadWeightsFromXML( void* wghtnode );
      // calculate the MVA value
      Double_t GetMvaValue( Double_t* err = 0, Double_t* errUpper = 0 );

      // data accessors for external functions
      Double_t GetData ( Int_t isel, Int_t ivar ) const { return (*fData)(isel, ivar); }
      Int_t    GetClass( Int_t ivar ) const { return (*fClass)[ivar]; }

      // ranking of input variables
      const Ranking* CreateRanking() { return 0; }

   protected:

      // make ROOT-independent C++ class for classifier response (classifier-specific implementation)
      void MakeClassSpecific( std::ostream&, const TString& ) const;

      // header and auxiliary classes
      void MakeClassSpecificHeader( std::ostream&, const TString& = "" ) const;

      // get help message text
      void GetHelpMessage() const;

      Int_t DataInterface( Double_t*, Double_t*, Int_t*, Int_t*, Int_t*, Int_t*,
                           Double_t*, Int_t*, Int_t* );

   private:

      void PrintWeights( std::ostream& o ) const;

      // the option handling methods
      void DeclareOptions();
      void ProcessOptions();

      // LUTs
      TMatrixF           *fData;  // the (data, var) matrix
      std::vector<Int_t> *fClass; // the event class (1=signal, 2=background)

      Int_t  fNlayers; // number of layers (including input and output layers)
      Int_t  fNcycles; // number of training cycles
      Int_t* fNodes;   // number of nodes per layer

      // additional member variables for the independent NN::Evaluation phase
      Double_t** fYNN;       // weights
      TString    fLayerSpec; // the hidden layer specification string

      // auxiliary member functions
      Double_t EvalANN( std::vector<Double_t>&, Bool_t& isOK );
      void     NN_ava ( Double_t* );
      Double_t NN_fonc( Int_t, Double_t ) const;

      // default initialisation
      void Init( void );

      ClassDef(MethodCFMlpANN,0); // Interface for the Clermont-Ferrand artificial neural network
   };

} // namespace TMVA

#endif
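The Usage block in the file header above books this classifier through the TMVA Factory with a colon-separated option string. As a rough, non-authoritative illustration, the sketch below books CFMlpANN against the ROOT 6.14 Factory/DataLoader interface; the input file, tree names, and variable names are hypothetical placeholders, and the keyword-style option string (NCycles, HiddenLayers; cf. DeclareOptions below) is an assumption rather than the exact string documented in this header.

// Sketch only: booking CFMlpANN through the TMVA Factory (placeholder names).
#include "TFile.h"
#include "TTree.h"
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void BookCFMlpANN()
{
   TFile* input  = TFile::Open("input.root");                      // hypothetical input file
   TFile* output = TFile::Open("TMVA_CFMlpANN.root", "RECREATE");  // training output

   TMVA::Factory    factory("TMVAClassification", output, "!V:AnalysisType=Classification");
   TMVA::DataLoader loader("dataset");

   loader.AddVariable("var1", 'F');                                // nvars = 2 in this sketch
   loader.AddVariable("var2", 'F');
   loader.AddSignalTree    ((TTree*)input->Get("TreeS"), 1.0);     // hypothetical tree names
   loader.AddBackgroundTree((TTree*)input->Get("TreeB"), 1.0);
   loader.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents:!V");

   // Assumed keyword options: NCycles = training cycles, HiddenLayers = nodes per hidden layer
   factory.BookMethod(&loader, TMVA::Types::kCFMlpANN, "CFMlpANN",
                      "!H:!V:NCycles=2000:HiddenLayers=N+1,N");

   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();
   output->Close();
}

Comparing the resulting classifier output for the training and test samples (for example with the standard TMVA GUI plots) provides the overtraining test described in the header comment.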
MethodCFMlpANN: interface to the Clermont-Ferrand artificial neural network.

Member documentation (brief descriptions collected from the implementation):
void Train(void)
training of the Clermont-Ferrand NN classifier
void DeclareOptions()
define the options (their key words) that can be set in the option string; known options: NCycles=xx :t...
void NN_ava(Double_t *)
auxiliary functions
void MakeClassSpecificHeader(std::ostream &, const TString &="") const
write specific classifier response for header
void ReadWeightsFromXML(void *wghtnode)
read weights from xml file
Int_t DataInterface(Double_t *, Double_t *, Int_t *, Int_t *, Int_t *, Int_t *, Double_t *, Int_t *, Int_t *)
data interface function
virtual ~MethodCFMlpANN(void)
destructor
virtual Bool_t HasAnalysisType(Types::EAnalysisType type, UInt_t numberClasses, UInt_t)
CFMlpANN can handle classification with 2 classes.
void PrintWeights(std::ostream &o) const
write the weights of the neural net
Double_t GetMvaValue(Double_t *err=0, Double_t *errUpper=0)
returns CFMlpANN output (normalised within [0,1])
Double_t NN_fonc(Int_t, Double_t) const
activation function
void GetHelpMessage() const
get help message text
Double_t EvalANN(std::vector< Double_t > &, Bool_t &isOK)
evaluates NN value as function of input variables
void ReadWeightsFromStream(std::istream &istr)
read back the training weights from file (stream)
void AddWeightsXMLTo(void *parent) const
write weights to xml file
void Init(void)
default initialisation called by all constructors
void ProcessOptions()
decode the options in the option string
MethodCFMlpANN(const TString &jobName, const TString &methodTitle, DataSetInfo &theData, const TString &theOption="3000:N-1:N-2")
standard constructor
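GetMvaValue, documented above as returning the CFMlpANN output normalised within [0,1], is normally not called directly; a trained classifier is applied to new data by reading the stored weights back through TMVA::Reader. A minimal sketch, assuming the weight file produced by the booking example above and the same hypothetical variable names:

// Sketch only: evaluating a trained CFMlpANN with TMVA::Reader (placeholder names).
#include "TMVA/Reader.h"

float EvaluateCFMlpANN(float v1, float v2)
{
   static TMVA::Reader reader("!Color:!Silent");
   static float var1 = 0, var2 = 0;
   static bool  booked = false;
   if (!booked) {
      reader.AddVariable("var1", &var1);   // must match the training variables
      reader.AddVariable("var2", &var2);
      // hypothetical weight-file path written by the Factory sketch above
      reader.BookMVA("CFMlpANN", "dataset/weights/TMVAClassification_CFMlpANN.weights.xml");
      booked = true;
   }
   var1 = v1;
   var2 = v2;
   // EvaluateMVA returns the classifier response; for CFMlpANN it lies within [0, 1]
   return reader.EvaluateMVA("CFMlpANN");
}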