TMultiLayerPerceptron Class Reference

This class describes a neural network.

There are facilities to train the network and use the output.

The input layer is made of inactive neurons (returning the optionally normalized input), and the output neurons are linear. The type of the hidden neurons is free, the default being sigmoids. (One should still try to pass normalized inputs, e.g. in [0., 1].)

The basic input is a TTree and two TEventLists (training and test). Input and output neurons are assigned a value computed for each event, with the same possibilities as for TTree::Draw(). Events may be weighted individually or via TTree::SetWeight(). Six learning methods are available: kStochastic, kBatch, kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS.

This implementation, written by C. Delaere, is inspired by the mlpfit package by J. Schwindling et al., with some extensions:

  • the algorithms are globally the same
  • in TMultiLayerPerceptron, there is no limitation on the number of layers/neurons, while MLPfit was limited to two hidden layers
  • TMultiLayerPerceptron allows you to save the network in a ROOT file, and provides more export functionality
  • TMultiLayerPerceptron gives more flexibility regarding the normalization of inputs/outputs
  • TMultiLayerPerceptron provides, thanks to Andrea Bocci, the possibility to use cross-entropy errors, which allows training a network for pattern classification based on Bayesian posterior probability.

Introduction

Neural networks are increasingly used in various fields for data analysis and classification, both by research and commercial institutions. Some randomly chosen examples are:

  • image analysis
  • financial movement prediction and analysis
  • sales forecasting and product shipping optimisation
  • in particle physics: mainly for classification tasks (signal over background discrimination)

More than 50% of neural networks in use are multilayer perceptrons. This implementation of multilayer perceptrons is inspired by the MLPfit package originally written by Jerome Schwindling. MLPfit remains one of the fastest tools for neural network studies, and this ROOT add-on will not try to compete on that ground. A clear and flexible object-oriented implementation has been chosen over faster but harder-to-maintain code. Nevertheless, the time penalty does not exceed a factor of two.

The MLP

The multilayer perceptron is a simple feed-forward network with the following structure: an input layer, one or more hidden layers and an output layer, each layer being fully connected to the next.

It is made of neurons characterized by a bias and weighted links between them (let's call those links synapses). The input neurons receive the inputs, normalize them and forward them to the first hidden layer.

Each neuron in any subsequent layer first computes a linear combination of the outputs of the previous layer. The output of the neuron is then a function f of that combination, with f being linear for output neurons and a sigmoid for hidden layers. This is useful because of two theorems:

  1. A linear combination of sigmoids can approximate any continuous function.
  2. Trained with output = 1 for the signal and 0 for the background, the approximated function of the inputs X is the probability of signal given X.

Learning methods

The aim of all learning methods is to minimize the total error on a set of weighted examples. The error is defined as half the sum of the squared errors on the individual output neurons. In all methods implemented, one needs to compute the first derivative of that error with respect to the weights. Exploiting the well-known properties of the derivative, especially the chain rule for compound functions, one can write:

  • for a neuron: the product of the local derivative with the weighted sum of the derivatives propagated back from the neurons it feeds.
  • for a synapse: the product of its input with the local derivative of its output neuron.

This computation is called back-propagation of the errors. A loop over all examples is called an epoch. Six learning methods are implemented.
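
As a minimal illustration of these two rules (a standalone sketch, not ROOT code; all names and values are invented), consider a single sigmoid neuron with two inputs:

#include <cmath>
#include <cstdio>

int main()
{
   double in[2]  = {0.3, 0.7};   // outputs of the previous layer
   double w[2]   = {0.5, -0.2};  // synapse weights
   double bias   = 0.1;
   double target = 1.0;

   // forward pass: linear combination followed by a sigmoid
   double sum = bias;
   for (int i = 0; i < 2; ++i) sum += w[i] * in[i];
   double out = 1.0 / (1.0 + std::exp(-sum));

   // error e = (out - target)^2 / 2, hence de/d(out) = out - target
   double dEdOut = out - target;

   // neuron rule: local derivative of the sigmoid times the error flowing in
   double dEdSum = dEdOut * out * (1.0 - out);

   // synapse rule: input value times the local derivative of the output neuron
   for (int i = 0; i < 2; ++i)
      std::printf("de/dw[%d] = %g\n", i, in[i] * dEdSum);
   return 0;
}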

Stochastic minimization:

This is the most trivial learning method: the Robbins-Monro stochastic approximation applied to multilayer perceptrons. The weights are updated after each example according to the formula \(w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij}(t)\)

with

\(\Delta w_{ij}(t) = -\eta\left(\frac{\partial e_p}{\partial w_{ij}} + \delta\right) + \epsilon\,\Delta w_{ij}(t-1)\)

The parameters for this method are Eta, EtaDecay, Delta and Epsilon.
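
As a hedged sketch of this update rule (plain C++, not the internal ROOT implementation; EtaDecay is omitted since it only rescales eta once per epoch):

// dedw[i] stands for de_p/dw_i evaluated on the current example;
// prevDw keeps the previous weight changes for the momentum term.
void StochasticStep(double *w, double *prevDw, const double *dedw,
                    int n, double eta, double delta, double epsilon)
{
   for (int i = 0; i < n; ++i) {
      double dw = -eta * (dedw[i] + delta) + epsilon * prevDw[i];
      w[i]     += dw;   // w_ij(t+1) = w_ij(t) + Delta w_ij(t)
      prevDw[i] = dw;   // remembered for the next step
   }
}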

Steepest descent with fixed step size (batch learning):

It is the same as the stochastic minimization, but the weights are updated after considering all the examples, with the total derivative dEdw. The parameters for this method are Eta, EtaDecay, Delta and Epsilon.

Steepest descent algorithm:

Weights are set to the minimum along the line defined by the gradient. The only parameter for this method is Tau. Lower tau = higher precision = slower search. A value Tau = 3 seems reasonable.

Conjugate gradients with the Polak-Ribiere updating formula:

Weights are set to the minimum along the line defined by the conjugate gradient. Parameters are Tau and Reset, which defines the epochs where the direction is reset to the steepest descent.

Conjugate gradients with the Fletcher-Reeves updating formula:

Weights are set to the minimum along the line defined by the conjugate gradient. Parameters are Tau and Reset, which defines the epochs where the direction is reset to the steepest descent.

Broyden, Fletcher, Goldfarb, Shanno (BFGS) method:

This method implies the computation of an NxN matrix, but seems more powerful, at least for networks with fewer than 300 weights. Parameters are Tau and Reset, which defines the epochs where the direction is reset to the steepest descent.

How to use it...

TMLP is built from three classes: TNeuron, TSynapse and TMultiLayerPerceptron. Only TMultiLayerPerceptron should be used explicitly by the user.

TMultiLayerPerceptron will take examples from a TTree given in the constructor. The network is described by a simple string: the input/output layers are defined by giving the expression for each neuron, separated by commas. Hidden layers are just described by the number of neurons. The layers are separated by colons. In addition, input/output layer formulas can be preceded by '@' (e.g. "@out") if one wants to also normalize the data from the TTree. Inputs and outputs are taken from the TTree given as second argument. Expressions are evaluated as for TTree::Draw(), and arrays are expanded into distinct neurons, one for each index. This can only be done for fixed-size arrays. If the formula ends with "!", softmax functions are used for the output layer. One defines the training and test datasets by TEventLists.

Example:

TMultiLayerPerceptron("x,y:10:5:f",inputTree);

Both the TTree and the TEventLists can be defined in the constructor, or later with the appropriate setter method. The lists used for training and test can be defined either explicitly, or via a string containing the formula to be used to define them, exactly as for a TCut.

The learning method is defined using TMultiLayerPerceptron::SetLearningMethod(). Available learning methods are kStochastic, kBatch, kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS.

A weight can be assigned to events, either in the constructor or with TMultiLayerPerceptron::SetEventWeight(). In addition, the TTree weight is taken into account.

Finally, one starts the training with TMultiLayerPerceptron::Train(Int_t nepoch, Option_t* options). The first argument is the number of epochs, while option is a string that can contain "text" (simple text output), "graph" (evolving graphical training curves), "update=X" (step for the text/graph output update) or "+" (will skip the randomisation and start from the previous values). All combinations are available.

Example:

net.Train(100,"text, graph, update=10");

When the neural net is trained, it can be used directly (TMultiLayerPerceptron::Evaluate()) or exported as standalone C++ code (TMultiLayerPerceptron::Export()).
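
Putting the pieces together, a hedged end-to-end sketch (the file data.root, the tree sim and its branches x, y and type are assumptions, not part of the class interface):

TFile f("data.root");
TTree *sim = (TTree*)f.Get("sim");
TMultiLayerPerceptron net("x,y:8:type", sim, "Entry$%2==0", "Entry$%2==1");
net.Train(200, "text, update=20");        // 200 epochs with text output

Double_t params[2] = {1.2, 3.4};
Double_t out = net.Evaluate(0, params);   // value of output neuron 0

net.Export("NNfunction", "C++");          // standalone code, base name NNfunction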

Finally, note that even though this implementation is inspired by the mlpfit code, the feature lists do not match exactly:

  • the mlpfit hybrid learning method is not implemented
  • output neurons can be normalized, which is not the case for mlpfit
  • the neural net is exported in C++, FORTRAN or Python
  • the DrawResult() method allows a fast check of the learning procedure

In addition, the PAW version of mlpfit had additional limitations on the number of neurons, hidden layers and inputs/outputs that do not apply to TMultiLayerPerceptron.

Definition at line 26 of file TMultiLayerPerceptron.h.

Public Types

enum  EDataSet { kTraining , kTest }
 
enum  ELearningMethod {
  kStochastic , kBatch , kSteepestDescent , kRibierePolak ,
  kFletcherReeves , kBFGS
}
 
- Public Types inherited from TObject
enum  {
  kIsOnHeap = 0x01000000 , kNotDeleted = 0x02000000 , kZombie = 0x04000000 , kInconsistent = 0x08000000 ,
  kBitMask = 0x00ffffff
}
 
enum  { kSingleKey = BIT(0) , kOverwrite = BIT(1) , kWriteDelete = BIT(2) }
 
enum  EDeprecatedStatusBits { kObjInCanvas = BIT(3) }
 
enum  EStatusBits {
  kCanDelete = BIT(0) , kMustCleanup = BIT(3) , kIsReferenced = BIT(4) , kHasUUID = BIT(5) ,
  kCannotPick = BIT(6) , kNoContextMenu = BIT(8) , kInvalidObject = BIT(13)
}
 

Public Member Functions

 TMultiLayerPerceptron ()
 Default constructor. More...
 
 TMultiLayerPerceptron (const char *layout, const char *weight, TTree *data, TEventList *training, TEventList *test, TNeuron::ENeuronType type=TNeuron::kSigmoid, const char *extF="", const char *extD="")
 The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas. More...
 
 TMultiLayerPerceptron (const char *layout, const char *weight, TTree *data=0, const char *training="Entry$%2==0", const char *test="", TNeuron::ENeuronType type=TNeuron::kSigmoid, const char *extF="", const char *extD="")
 The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas. More...
 
 TMultiLayerPerceptron (const char *layout, TTree *data, TEventList *training, TEventList *test, TNeuron::ENeuronType type=TNeuron::kSigmoid, const char *extF="", const char *extD="")
 The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas. More...
 
 TMultiLayerPerceptron (const char *layout, TTree *data=0, const char *training="Entry$%2==0", const char *test="", TNeuron::ENeuronType type=TNeuron::kSigmoid, const char *extF="", const char *extD="")
 The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas. More...
 
virtual ~TMultiLayerPerceptron ()
 Destructor. More...
 
void ComputeDEDw () const
 Compute the DEDw = sum on all training events of dedw for each weight normalized by the number of events. More...
 
virtual void Draw (Option_t *option="")
 Draws the network structure. More...
 
void DrawResult (Int_t index=0, Option_t *option="test") const
 Draws the neural net output. It produces a histogram with the output for the two datasets. More...
 
Bool_t DumpWeights (Option_t *filename="-") const
 Dumps the weights to a text file. More...
 
Double_t Evaluate (Int_t index, Double_t *params) const
 Returns the Neural Net output for a given set of input parameters; the number of parameters must equal the number of input neurons. More...
 
void Export (Option_t *filename="NNfunction", Option_t *language="C++") const
 Exports the NN as a function for any non-ROOT-dependent code. Supported languages are (only, for now) C++, FORTRAN and Python. This feature is also useful if you want to plot the NN as a function (TF1 or TF2). More...
 
Double_t GetDelta () const
 
Double_t GetEpsilon () const
 
Double_t GetError (Int_t event) const
 Error on the output for a given event. More...
 
Double_t GetError (TMultiLayerPerceptron::EDataSet set) const
 Error on the whole dataset. More...
 
Double_t GetEta () const
 
Double_t GetEtaDecay () const
 
TMultiLayerPerceptron::ELearningMethod GetLearningMethod () const
 
Int_t GetReset () const
 
TString GetStructure () const
 
Double_t GetTau () const
 
TNeuron::ENeuronType GetType () const
 
Bool_t LoadWeights (Option_t *filename="")
 Loads the weights from a text file conforming to the format defined by DumpWeights. More...
 
void Randomize () const
 Randomize the weights. More...
 
Double_t Result (Int_t event, Int_t index=0) const
 Computes the output for a given event. More...
 
void SetData (TTree *)
 Set the data source. More...
 
void SetDelta (Double_t delta)
 Sets Delta - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters) More...
 
void SetEpsilon (Double_t eps)
 Sets Epsilon - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters) More...
 
void SetEta (Double_t eta)
 Sets Eta - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters) More...
 
void SetEtaDecay (Double_t ed)
 Sets EtaDecay - Eta *= EtaDecay at each epoch (look at the constructor for the complete description of learning methods and parameters) More...
 
void SetEventWeight (const char *)
 Set the event weight. More...
 
void SetLearningMethod (TMultiLayerPerceptron::ELearningMethod method)
 Sets the learning method. More...
 
void SetReset (Int_t reset)
 Sets number of epochs between two resets of the search direction to the steepest descent. More...
 
void SetTau (Double_t tau)
 Sets Tau - used in line search (look at the constructor for the complete description of learning methods and parameters) More...
 
void SetTestDataSet (const char *test)
 Sets the Test dataset. More...
 
void SetTestDataSet (TEventList *test)
 Sets the Test dataset. More...
 
void SetTrainingDataSet (const char *train)
 Sets the Training dataset. More...
 
void SetTrainingDataSet (TEventList *train)
 Sets the Training dataset. More...
 
void Train (Int_t nEpoch, Option_t *option="text", Double_t minE=0)
 Train the network. More...
 
- Public Member Functions inherited from TObject
 TObject ()
 TObject constructor. More...
 
 TObject (const TObject &object)
 TObject copy ctor. More...
 
virtual ~TObject ()
 TObject destructor. More...
 
void AbstractMethod (const char *method) const
 Use this method to implement an "abstract" method that you don't want to leave purely abstract. More...
 
virtual void AppendPad (Option_t *option="")
 Append graphics object to current pad. More...
 
virtual void Browse (TBrowser *b)
 Browse object. May be overridden for another default action. More...
 
ULong_t CheckedHash ()
 Check and record whether this class has a consistent Hash/RecursiveRemove setup (*) and then return the regular Hash value for this object. More...
 
virtual const char * ClassName () const
 Returns name of class to which the object belongs. More...
 
virtual void Clear (Option_t *="")
 
virtual TObject * Clone (const char *newname="") const
 Make a clone of an object using the Streamer facility. More...
 
virtual Int_t Compare (const TObject *obj) const
 Compare abstract method. More...
 
virtual void Copy (TObject &object) const
 Copy this to obj. More...
 
virtual void Delete (Option_t *option="")
 Delete this object. More...
 
virtual Int_t DistancetoPrimitive (Int_t px, Int_t py)
 Computes distance from point (px,py) to the object. More...
 
virtual void Draw (Option_t *option="")
 Default Draw method for all objects. More...
 
virtual void DrawClass () const
 Draw class inheritance tree of the class to which this object belongs. More...
 
virtual TObject * DrawClone (Option_t *option="") const
 Draw a clone of this object in the current selected pad for instance with: gROOT->SetSelectedPad(gPad). More...
 
virtual void Dump () const
 Dump contents of object on stdout. More...
 
virtual void Error (const char *method, const char *msgfmt,...) const
 Issue error message. More...
 
virtual void Execute (const char *method, const char *params, Int_t *error=0)
 Execute method on this object with the given parameter string, e.g. More...
 
virtual void Execute (TMethod *method, TObjArray *params, Int_t *error=0)
 Execute method on this object with parameters stored in the TObjArray. More...
 
virtual void ExecuteEvent (Int_t event, Int_t px, Int_t py)
 Execute action corresponding to an event at (px,py). More...
 
virtual void Fatal (const char *method, const char *msgfmt,...) const
 Issue fatal error message. More...
 
virtual TObject * FindObject (const char *name) const
 Must be redefined in derived classes. More...
 
virtual TObject * FindObject (const TObject *obj) const
 Must be redefined in derived classes. More...
 
virtual Option_t * GetDrawOption () const
 Get option used by the graphics system to draw this object. More...
 
virtual const char * GetIconName () const
 Returns mime type name of object. More...
 
virtual const char * GetName () const
 Returns name of object. More...
 
virtual char * GetObjectInfo (Int_t px, Int_t py) const
 Returns string containing info about the object at position (px,py). More...
 
virtual Option_t * GetOption () const
 
virtual const char * GetTitle () const
 Returns title of object. More...
 
virtual UInt_t GetUniqueID () const
 Return the unique object id. More...
 
virtual Bool_t HandleTimer (TTimer *timer)
 Execute action in response of a timer timing out. More...
 
virtual ULong_t Hash () const
 Return hash value for this object. More...
 
Bool_t HasInconsistentHash () const
 Return true if the type of this object is known to have an inconsistent setup for Hash and RecursiveRemove (i.e. More...
 
virtual void Info (const char *method, const char *msgfmt,...) const
 Issue info message. More...
 
virtual Bool_t InheritsFrom (const char *classname) const
 Returns kTRUE if object inherits from class "classname". More...
 
virtual Bool_t InheritsFrom (const TClass *cl) const
 Returns kTRUE if object inherits from TClass cl. More...
 
virtual void Inspect () const
 Dump contents of this object in a graphics canvas. More...
 
void InvertBit (UInt_t f)
 
virtual Bool_t IsEqual (const TObject *obj) const
 Default equal comparison (objects are equal if they have the same address in memory). More...
 
virtual Bool_t IsFolder () const
 Returns kTRUE in case object contains browsable objects (like containers or lists of other objects). More...
 
R__ALWAYS_INLINE Bool_t IsOnHeap () const
 
virtual Bool_t IsSortable () const
 
R__ALWAYS_INLINE Bool_t IsZombie () const
 
virtual void ls (Option_t *option="") const
 The ls function lists the contents of a class on stdout. More...
 
void MayNotUse (const char *method) const
 Use this method to signal that a method (defined in a base class) may not be called in a derived class (in principle against good design since a child class should not provide less functionality than its parent, however, sometimes it is necessary). More...
 
virtual Bool_t Notify ()
 This method must be overridden to handle object notification. More...
 
void Obsolete (const char *method, const char *asOfVers, const char *removedFromVers) const
 Use this method to declare a method obsolete. More...
 
void operator delete (void *ptr)
 Operator delete. More...
 
void operator delete[] (void *ptr)
 Operator delete []. More...
 
void * operator new (size_t sz)
 
void * operator new (size_t sz, void *vp)
 
void * operator new[] (size_t sz)
 
void * operator new[] (size_t sz, void *vp)
 
TObject & operator= (const TObject &rhs)
 TObject assignment operator. More...
 
virtual void Paint (Option_t *option="")
 This method must be overridden if a class wants to paint itself. More...
 
virtual void Pop ()
 Pop on object drawn in a pad to the top of the display list. More...
 
virtual void Print (Option_t *option="") const
 This method must be overridden when a class wants to print itself. More...
 
virtual Int_t Read (const char *name)
 Read contents of object with specified name from the current directory. More...
 
virtual void RecursiveRemove (TObject *obj)
 Recursively remove this object from a list. More...
 
void ResetBit (UInt_t f)
 
virtual void SaveAs (const char *filename="", Option_t *option="") const
 Save this object in the file specified by filename. More...
 
virtual void SavePrimitive (std::ostream &out, Option_t *option="")
 Save a primitive as a C++ statement(s) on output stream "out". More...
 
void SetBit (UInt_t f)
 
void SetBit (UInt_t f, Bool_t set)
 Set or unset the user status bits as specified in f. More...
 
virtual void SetDrawOption (Option_t *option="")
 Set drawing option for object. More...
 
virtual void SetUniqueID (UInt_t uid)
 Set the unique object id. More...
 
virtual void SysError (const char *method, const char *msgfmt,...) const
 Issue system error message. More...
 
R__ALWAYS_INLINE Bool_t TestBit (UInt_t f) const
 
Int_t TestBits (UInt_t f) const
 
virtual void UseCurrentStyle ()
 Set current style settings in this object This function is called when either TCanvas::UseCurrentStyle or TROOT::ForceStyle have been invoked. More...
 
virtual void Warning (const char *method, const char *msgfmt,...) const
 Issue warning message. More...
 
virtual Int_t Write (const char *name=0, Int_t option=0, Int_t bufsize=0)
 Write this object to the current directory. More...
 
virtual Int_t Write (const char *name=0, Int_t option=0, Int_t bufsize=0) const
 Write this object to the current directory. More...
 

Protected Member Functions

void AttachData ()
 Connects the TTree to Neurons in input and output layers. More...
 
void BFGSDir (TMatrixD &, Double_t *)
 Computes the direction for the BFGS algorithm as the product between the Hessian estimate (bfgsh) and the dir. More...
 
void BuildNetwork ()
 Instantiates the network from the description. More...
 
void ConjugateGradientsDir (Double_t *, Double_t)
 Sets the search direction to the conjugate gradient direction; beta depends on the updating formula used. More...
 
Double_t DerivDir (Double_t *)
 Scalar product between gradient and direction = derivative along direction. More...
 
bool GetBFGSH (TMatrixD &, TMatrixD &, TMatrixD &)
 Computes the hessian matrix using the BFGS update algorithm. More...
 
Double_t GetCrossEntropy () const
 Cross entropy error for a softmax output neuron, for a given event. More...
 
Double_t GetCrossEntropyBinary () const
 Cross entropy error for sigmoid output neurons, for a given event. More...
 
void GetEntry (Int_t) const
 Load an entry into the network. More...
 
Double_t GetSumSquareError () const
 Error on the output for a given event. More...
 
Bool_t LineSearch (Double_t *, Double_t *)
 Search along the line defined by direction. More...
 
void MLP_Batch (Double_t *)
 One step for the batch (stochastic) method. More...
 
void MLP_Stochastic (Double_t *)
 One step for the stochastic method. The buffer should contain the previous dw vector and will be updated. More...
 
void SetGammaDelta (TMatrixD &, TMatrixD &, Double_t *)
 Sets the gamma \((g_{(t+1)}-g_{(t)})\) and delta \((w_{(t+1)}-w_{(t)})\) vectors. Gamma is computed here, so ComputeDEDw cannot have been called before, and delta is a direct translation of buffer into a TMatrixD. More...
 
void SteepestDir (Double_t *)
 Sets the search direction to steepest descent. More...
 
- Protected Member Functions inherited from TObject
virtual void DoError (int level, const char *location, const char *fmt, va_list va) const
 Interface to ErrorHandler (protected). More...
 
void MakeZombie ()
 

Private Member Functions

 TMultiLayerPerceptron (const TMultiLayerPerceptron &)
 
void BuildFirstLayer (TString &)
 Instantiates the input neurons. Inputs are normalised and the type is set to kOff (simple forwarding of the formula value). More...
 
void BuildHiddenLayers (TString &)
 Builds hidden layers. More...
 
void BuildLastLayer (TString &, Int_t)
 Builds the output layer. Neurons are linear combinations of their inputs, by default. More...
 
void BuildOneHiddenLayer (const TString &sNumNodes, Int_t &layer, Int_t &prevStart, Int_t &prevStop, Bool_t lastLayer)
 Builds a hidden layer, updates the number of layers. More...
 
void ExpandStructure ()
 Expand the structure of the first layer. More...
 
void MLP_Line (Double_t *, Double_t *, Double_t)
 Sets the weights to a point along a line. Weights are set to [origin + (dist * dir)]. More...
 
TMultiLayerPerceptron & operator= (const TMultiLayerPerceptron &)
 
void Shuffle (Int_t *, Int_t) const
 Shuffle the Int_t index[n] in input. More...
 

Private Attributes

Int_t fCurrentTree
 ! index of the current tree in a chain More...
 
Double_t fCurrentTreeWeight
 ! weight of the current tree in a chain More...
 
TTree * fData
 ! pointer to the tree used as datasource More...
 
Double_t fDelta
 ! Delta - used in stochastic minimisation - Default=0. More...
 
Double_t fEpsilon
 ! Epsilon - used in stochastic minimisation - Default=0. More...
 
Double_t fEta
 ! Eta - used in stochastic minimisation - Default=0.1 More...
 
Double_t fEtaDecay
 ! EtaDecay - Eta *= EtaDecay at each epoch - Default=1. More...
 
TTreeFormula * fEventWeight
 ! formula representing the event weight More...
 
TString fextD
 String containing the derivative name. More...
 
TString fextF
 String containing the function name. More...
 
TObjArray fFirstLayer
 Collection of the input neurons; subset of fNetwork. More...
 
Double_t fLastAlpha
 ! internal parameter used in line search More...
 
TObjArray fLastLayer
 Collection of the output neurons; subset of fNetwork. More...
 
ELearningMethod fLearningMethod
 ! The Learning Method More...
 
TTreeFormulaManager * fManager
 ! TTreeFormulaManager for the weight and neurons More...
 
TObjArray fNetwork
 Collection of all the neurons in the network. More...
 
TNeuron::ENeuronType fOutType
 Type of output neurons. More...
 
Int_t fReset
 ! number of epochs between two resets of the search direction to the steepest descent - Default=50 More...
 
TString fStructure
 String containing the network structure. More...
 
TObjArray fSynapses
 Collection of all the synapses in the network. More...
 
Double_t fTau
 ! Tau - used in line search - Default=3. More...
 
TEventList * fTest
 ! EventList defining the events in the test dataset More...
 
Bool_t fTestOwner
 ! internal flag whether one has to delete fTest or not More...
 
TEventList * fTraining
 ! EventList defining the events in the training dataset More...
 
Bool_t fTrainingOwner
 ! internal flag whether one has to delete fTraining or not More...
 
TNeuron::ENeuronType fType
 Type of hidden neurons. More...
 
TString fWeight
 String containing the event weight. More...
 

Friends

class TMLPAnalyzer
 

Additional Inherited Members

- Static Public Member Functions inherited from TObject
static Long_t GetDtorOnly ()
 Return destructor only flag. More...
 
static Bool_t GetObjectStat ()
 Get status of object stat flag. More...
 
static void SetDtorOnly (void *obj)
 Set destructor only flag. More...
 
static void SetObjectStat (Bool_t stat)
 Turn on/off tracking of objects in the TObjectTable. More...
 

#include <TMultiLayerPerceptron.h>

Inheritance diagram for TMultiLayerPerceptron: the class derives from TObject.

Member Enumeration Documentation

◆ EDataSet

Enumerator
kTraining 
kTest 

Definition at line 32 of file TMultiLayerPerceptron.h.

◆ ELearningMethod

Enumerator
kStochastic 
kBatch 
kSteepestDescent 
kRibierePolak 
kFletcherReeves 
kBFGS 

Definition at line 30 of file TMultiLayerPerceptron.h.

Constructor & Destructor Documentation

◆ TMultiLayerPerceptron() [1/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( )

Default constructor.

Definition at line 263 of file TMultiLayerPerceptron.cxx.

◆ TMultiLayerPerceptron() [2/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( const char *  layout,
TTree * data = 0,
const char *  training = "Entry$%2==0",
const char *  test = "",
TNeuron::ENeuronType  type = TNeuron::kSigmoid,
const char *  extF = "",
const char *  extD = "" 
)

The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas.

Hidden layers are just described by the number of neurons. The layers are separated by colons.

Ex: "x,y:10:5:f"

The output can be prepended by '@' if the variable has to be normalized. The output can be followed by '!' to use Softmax neurons for the output layer only.

Ex: "x,y:10:5:c1,c2,c3!"

Inputs and outputs are taken from the TTree given as second argument. training and test are two cuts (see TTreeFormula) defining events to be used during the neural net training and testing.

Example: "Entry$%2", "(Entry$+1)%2".

Both the TTree and the cut can be defined in the constructor, or later with the appropriate setter method.

Definition at line 445 of file TMultiLayerPerceptron.cxx.
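
For instance, a hedged call using the example cuts quoted above (the tree and the branch names are hypothetical):

TMultiLayerPerceptron mlp("x,y:10:5:f", tree, "Entry$%2", "(Entry$+1)%2");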

◆ TMultiLayerPerceptron() [3/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( const char *  layout,
const char *  weight,
TTree * data = 0,
const char *  training = "Entry$%2==0",
const char *  test = "",
TNeuron::ENeuronType  type = TNeuron::kSigmoid,
const char *  extF = "",
const char *  extD = "" 
)

The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas.

Hidden layers are just described by the number of neurons. The layers are separated by colons.

Ex: "x,y:10:5:f"

The output can be prepended by '@' if the variable has to be normalized. The output can be followed by '!' to use Softmax neurons for the output layer only.

Ex: "x,y:10:5:c1,c2,c3!"

Inputs and outputs are taken from the TTree given as second argument. training and test are two cuts (see TTreeFormula) defining events to be used during the neural net training and testing.

Example: "Entry$%2", "(Entry$+1)%2".

Both the TTree and the cut can be defined in the constructor, or later with the appropriate setter method.

Definition at line 517 of file TMultiLayerPerceptron.cxx.

◆ TMultiLayerPerceptron() [4/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( const char *  layout,
TTree * data,
TEventList * training,
TEventList * test,
TNeuron::ENeuronType  type = TNeuron::kSigmoid,
const char *  extF = "",
const char *  extD = "" 
)

The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas.

Hidden layers are just described by the number of neurons. The layers are separated by colons.

Ex: "x,y:10:5:f"

The output can be prepended by '@' if the variable has to be normalized. The output can be followed by '!' to use Softmax neurons for the output layer only.

Ex: "x,y:10:5:c1,c2,c3!"

Inputs and outputs are taken from the TTree given as second argument. training and test are the two TEventLists defining events to be used during the neural net training. Both the TTree and the TEventLists can be defined in the constructor, or later with the appropriate setter method.

Definition at line 317 of file TMultiLayerPerceptron.cxx.

◆ TMultiLayerPerceptron() [5/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( const char *  layout,
const char *  weight,
TTree * data,
TEventList * training,
TEventList * test,
TNeuron::ENeuronType  type = TNeuron::kSigmoid,
const char *  extF = "",
const char *  extD = "" 
)

The network is described by a simple string: the input/output layers are defined by giving the branch names separated by commas.

Hidden layers are just described by the number of neurons. The layers are separated by colons.

Ex: "x,y:10:5:f"

The output can be prepended by '@' if the variable has to be normalized. The output can be followed by '!' to use Softmax neurons for the output layer only.

Ex: "x,y:10:5:c1,c2,c3!"

Inputs and outputs are taken from the TTree given as second argument. training and test are the two TEventLists defining events to be used during the neural net training. Both the TTree and the TEventLists can be defined in the constructor, or later with the appropriate setter method.

Definition at line 379 of file TMultiLayerPerceptron.cxx.

◆ ~TMultiLayerPerceptron()

TMultiLayerPerceptron::~TMultiLayerPerceptron ( )
virtual

Destructor.

Definition at line 568 of file TMultiLayerPerceptron.cxx.

◆ TMultiLayerPerceptron() [6/6]

TMultiLayerPerceptron::TMultiLayerPerceptron ( const TMultiLayerPerceptron & )
private

Member Function Documentation

◆ AttachData()

void TMultiLayerPerceptron::AttachData ( )
protected

Connects the TTree to Neurons in input and output layers.

The formulas associated to each neuron are created and reported to the network formula manager. By default, the branch is not normalised since this would degrade performance for classification jobs. Normalisation can be requested by putting '@' in front of the formula.

Definition at line 1247 of file TMultiLayerPerceptron.cxx.

◆ BFGSDir()

void TMultiLayerPerceptron::BFGSDir ( TMatrixD & bfgsh,
Double_t * dir 
)
protected

Computes the direction for the BFGS algorithm as the product between the Hessian estimate (bfgsh) and the dir.

Definition at line 2475 of file TMultiLayerPerceptron.cxx.

◆ BuildFirstLayer()

void TMultiLayerPerceptron::BuildFirstLayer ( TString & input)
private

Instantiates the input neurons. Inputs are normalised and the type is set to kOff (simple forwarding of the formula value).

Definition at line 1382 of file TMultiLayerPerceptron.cxx.

◆ BuildHiddenLayers()

void TMultiLayerPerceptron::BuildHiddenLayers ( TString & hidden)
private

Builds hidden layers.

Definition at line 1400 of file TMultiLayerPerceptron.cxx.

◆ BuildLastLayer()

void TMultiLayerPerceptron::BuildLastLayer ( TString & output,
Int_t  prev 
)
private

Builds the output layer. Neurons are linear combinations of their inputs, by default.

If the structure ends with "!", neurons are set up for classification, ie. with a sigmoid (1 neuron) or softmax (more neurons) activation function.

Definition at line 1464 of file TMultiLayerPerceptron.cxx.

◆ BuildNetwork()

void TMultiLayerPerceptron::BuildNetwork ( )
protected

Instantiates the network from the description.

Definition at line 1351 of file TMultiLayerPerceptron.cxx.

◆ BuildOneHiddenLayer()

void TMultiLayerPerceptron::BuildOneHiddenLayer ( const TString & sNumNodes,
Int_t & layer,
Int_t & prevStart,
Int_t & prevStop,
Bool_t  lastLayer 
)
private

Builds a hidden layer, updates the number of layers.

Definition at line 1419 of file TMultiLayerPerceptron.cxx.

◆ ComputeDEDw()

void TMultiLayerPerceptron::ComputeDEDw ( ) const

Compute the DEDw = sum on all training events of dedw for each weight normalized by the number of events.

Definition at line 1144 of file TMultiLayerPerceptron.cxx.

◆ ConjugateGradientsDir()

void TMultiLayerPerceptron::ConjugateGradientsDir ( Double_t * dir,
Double_t  beta 
)
protected

Sets the search direction to the conjugate gradient direction, where beta should be:

\(||g_{(t+1)}||^2 / ||g_{(t)}||^2\) (Fletcher-Reeves)

\(g_{(t+1)} (g_{(t+1)}-g_{(t)}) / ||g_{(t)}||^2\) (Ribiere-Polak)

Definition at line 2360 of file TMultiLayerPerceptron.cxx.

◆ DerivDir()

Double_t TMultiLayerPerceptron::DerivDir ( Double_t * dir)
protected

Scalar product between gradient and direction = derivative along direction.

Definition at line 2451 of file TMultiLayerPerceptron.cxx.

◆ Draw()

void TMultiLayerPerceptron::Draw ( Option_t * option = "")
virtual

Draws the network structure.

Neurons are depicted by a blue disk, and synapses by lines connecting neurons. The line width is proportional to the weight.

Reimplemented from TObject.

Definition at line 2505 of file TMultiLayerPerceptron.cxx.

◆ DrawResult()

void TMultiLayerPerceptron::DrawResult ( Int_t  index = 0,
Option_t * option = "test" 
) const

Draws the neural net output. It produces a histogram with the output for the two datasets.

Index is the number of the desired output neuron. "option" can contain:

  • test or train to select a dataset
  • comp to produce a X-Y comparison plot
  • nocanv to not create a new TCanvas for the plot

Definition at line 1514 of file TMultiLayerPerceptron.cxx.
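
A hedged usage sketch, assuming net is an already trained network:

net.DrawResult(0, "test");        // histogram of output 0 on the test set
net.DrawResult(0, "train comp");  // X-Y comparison plot on the training set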

◆ DumpWeights()

Bool_t TMultiLayerPerceptron::DumpWeights ( Option_t * filename = "-") const

Dumps the weights to a text file.

Set filename to "-" (default) to dump to the standard output.

Definition at line 1588 of file TMultiLayerPerceptron.cxx.
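
A hedged usage sketch pairing DumpWeights with LoadWeights (the filename is an assumption):

net.DumpWeights("weights.txt");   // the default "-" prints to standard output
// ... later, on a network built with the same structure:
net.LoadWeights("weights.txt");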

◆ Evaluate()

Double_t TMultiLayerPerceptron::Evaluate ( Int_t  index,
Double_t * params 
) const

Returns the Neural Net output for a given set of input parameters; the number of parameters must equal the number of input neurons.

Definition at line 1694 of file TMultiLayerPerceptron.cxx.

◆ ExpandStructure()

void TMultiLayerPerceptron::ExpandStructure ( )
private

Expand the structure of the first layer.

Definition at line 1305 of file TMultiLayerPerceptron.cxx.

◆ Export()

void TMultiLayerPerceptron::Export ( Option_t * filename = "NNfunction",
Option_t * language = "C++" 
) const

Exports the NN as a function for any non-ROOT-dependent code. Supported languages are (only, for now) C++, FORTRAN and Python. This feature is also useful if you want to plot the NN as a function (TF1 or TF2).

Definition at line 1719 of file TMultiLayerPerceptron.cxx.
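
A hedged usage sketch; the base name is an assumption and the language strings follow the list above:

net.Export("myNet");             // C++ (default)
net.Export("myNet", "PYTHON");   // or "FORTRAN"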

◆ GetBFGSH()

bool TMultiLayerPerceptron::GetBFGSH ( TMatrixD & bfgsh,
TMatrixD & gamma,
TMatrixD & delta 
)
protected

Computes the hessian matrix using the BFGS update algorithm, from gamma \((g_{(t+1)}-g_{(t)})\) and delta \((w_{(t+1)}-w_{(t)})\).

It returns true if such a direction could not be found (i.e. if gamma and delta are orthogonal).

Definition at line 2386 of file TMultiLayerPerceptron.cxx.

◆ GetCrossEntropy()

Double_t TMultiLayerPerceptron::GetCrossEntropy ( ) const
protected

Cross entropy error for a softmax output neuron, for a given event.

Definition at line 1123 of file TMultiLayerPerceptron.cxx.

◆ GetCrossEntropyBinary()

Double_t TMultiLayerPerceptron::GetCrossEntropyBinary ( ) const
protected

Cross entropy error for sigmoid output neurons, for a given event.

Definition at line 1092 of file TMultiLayerPerceptron.cxx.

◆ GetDelta()

Double_t TMultiLayerPerceptron::GetDelta ( ) const
inline

Definition at line 78 of file TMultiLayerPerceptron.h.

◆ GetEntry()

void TMultiLayerPerceptron::GetEntry ( Int_t  entry) const
protected

Load an entry into the network.

Definition at line 740 of file TMultiLayerPerceptron.cxx.

◆ GetEpsilon()

Double_t TMultiLayerPerceptron::GetEpsilon ( ) const
inline

Definition at line 77 of file TMultiLayerPerceptron.h.

◆ GetError() [1/2]

Double_t TMultiLayerPerceptron::GetError ( Int_t  event) const

Error on the output for a given event.

Definition at line 1027 of file TMultiLayerPerceptron.cxx.

◆ GetError() [2/2]

Double_t TMultiLayerPerceptron::GetError ( TMultiLayerPerceptron::EDataSet  set) const

Error on the whole dataset.

Definition at line 1056 of file TMultiLayerPerceptron.cxx.

◆ GetEta()

Double_t TMultiLayerPerceptron::GetEta ( ) const
inline

Definition at line 76 of file TMultiLayerPerceptron.h.

◆ GetEtaDecay()

Double_t TMultiLayerPerceptron::GetEtaDecay ( ) const
inline

Definition at line 79 of file TMultiLayerPerceptron.h.

◆ GetLearningMethod()

TMultiLayerPerceptron::ELearningMethod TMultiLayerPerceptron::GetLearningMethod ( ) const
inline

Definition at line 80 of file TMultiLayerPerceptron.h.

◆ GetReset()

Int_t TMultiLayerPerceptron::GetReset ( ) const
inline

Definition at line 82 of file TMultiLayerPerceptron.h.

◆ GetStructure()

TString TMultiLayerPerceptron::GetStructure ( ) const
inline

Definition at line 83 of file TMultiLayerPerceptron.h.

◆ GetSumSquareError()

Double_t TMultiLayerPerceptron::GetSumSquareError ( ) const
protected

Error on the output for a given event.

Definition at line 1079 of file TMultiLayerPerceptron.cxx.

◆ GetTau()

Double_t TMultiLayerPerceptron::GetTau ( void  ) const
inline

Definition at line 81 of file TMultiLayerPerceptron.h.

◆ GetType()

TNeuron::ENeuronType TMultiLayerPerceptron::GetType ( ) const
inline

Definition at line 84 of file TMultiLayerPerceptron.h.

◆ LineSearch()

bool TMultiLayerPerceptron::LineSearch ( Double_t * direction,
Double_t * buffer 
)
protected

Search along the line defined by direction.

buffer is not used but is updated with the new dw so that it can be used by a later stochastic step. It returns true if the line search fails.

Definition at line 2255 of file TMultiLayerPerceptron.cxx.

◆ LoadWeights()

Bool_t TMultiLayerPerceptron::LoadWeights ( Option_t * filename = "")

Loads the weights from a text file conforming to the format defined by DumpWeights.

Definition at line 1638 of file TMultiLayerPerceptron.cxx.

◆ MLP_Batch()

void TMultiLayerPerceptron::MLP_Batch ( Double_t * buffer)
protected

One step for the batch (stochastic) method.

DEDw should have been updated before calling this.

Definition at line 2184 of file TMultiLayerPerceptron.cxx.

◆ MLP_Line()

void TMultiLayerPerceptron::MLP_Line ( Double_t * origin,
Double_t * dir,
Double_t  dist 
)
private

Sets the weights to a point along a line. Weights are set to [origin + (dist * dir)].

Definition at line 2212 of file TMultiLayerPerceptron.cxx.

◆ MLP_Stochastic()

void TMultiLayerPerceptron::MLP_Stochastic ( Double_t * buffer)
protected

One step for the stochastic method. The buffer should contain the previous dw vector and will be updated.

Definition at line 2139 of file TMultiLayerPerceptron.cxx.

◆ operator=()

TMultiLayerPerceptron & TMultiLayerPerceptron::operator= ( const TMultiLayerPerceptron & )
private

◆ Randomize()

void TMultiLayerPerceptron::Randomize ( ) const

Randomize the weights.

Definition at line 1220 of file TMultiLayerPerceptron.cxx.

◆ Result()

Double_t TMultiLayerPerceptron::Result ( Int_t  event,
Int_t  index = 0 
) const

Computes the output for a given event.

Look at the output neuron designated by index.

Definition at line 1014 of file TMultiLayerPerceptron.cxx.

◆ SetData()

void TMultiLayerPerceptron::SetData ( TTree * data)

Set the data source.

Definition at line 577 of file TMultiLayerPerceptron.cxx.

◆ SetDelta()

void TMultiLayerPerceptron::SetDelta ( Double_t  delta)

Sets Delta - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters)

Definition at line 701 of file TMultiLayerPerceptron.cxx.

◆ SetEpsilon()

void TMultiLayerPerceptron::SetEpsilon ( Double_t  eps)

Sets Epsilon - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters)

Definition at line 691 of file TMultiLayerPerceptron.cxx.

◆ SetEta()

void TMultiLayerPerceptron::SetEta ( Double_t  eta)

Sets Eta - used in stochastic minimisation (look at the constructor for the complete description of learning methods and parameters)

Definition at line 681 of file TMultiLayerPerceptron.cxx.

◆ SetEtaDecay()

void TMultiLayerPerceptron::SetEtaDecay ( Double_t  ed)

Sets EtaDecay - Eta *= EtaDecay at each epoch (look at the constructor for the complete description of learning methods and parameters)

Definition at line 711 of file TMultiLayerPerceptron.cxx.

◆ SetEventWeight()

void TMultiLayerPerceptron::SetEventWeight ( const char *  branch)

Set the event weight.

Definition at line 593 of file TMultiLayerPerceptron.cxx.

◆ SetGammaDelta()

void TMultiLayerPerceptron::SetGammaDelta ( TMatrixD & gamma,
TMatrixD & delta,
Double_t * buffer 
)
protected

Sets the gamma \((g_{(t+1)}-g_{(t)})\) and delta \((w_{(t+1)}-w_{(t)})\) vectors. Gamma is computed here, so ComputeDEDw cannot have been called before, and delta is a direct translation of buffer into a TMatrixD.

Definition at line 2412 of file TMultiLayerPerceptron.cxx.

◆ SetLearningMethod()

void TMultiLayerPerceptron::SetLearningMethod ( TMultiLayerPerceptron::ELearningMethod  method)

Sets the learning method.

Available methods are: kStochastic, kBatch, kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS. (look at the constructor for the complete description of learning methods and parameters)

Definition at line 671 of file TMultiLayerPerceptron.cxx.

◆ SetReset()

void TMultiLayerPerceptron::SetReset ( Int_t  reset)

Sets number of epochs between two resets of the search direction to the steepest descent.

(look at the constructor for the complete description of learning methods and parameters)

Definition at line 732 of file TMultiLayerPerceptron.cxx.

◆ SetTau()

void TMultiLayerPerceptron::SetTau ( Double_t  tau)

Sets Tau - used in line search (look at the constructor for the complete description of learning methods and parameters)

Definition at line 721 of file TMultiLayerPerceptron.cxx.

◆ SetTestDataSet() [1/2]

void TMultiLayerPerceptron::SetTestDataSet ( const char *  test)

Sets the Test dataset.

Those events will not be used for the minimization but for control. Note that the tree must be already defined.

Definition at line 650 of file TMultiLayerPerceptron.cxx.

◆ SetTestDataSet() [2/2]

void TMultiLayerPerceptron::SetTestDataSet ( TEventList * test)

Sets the Test dataset.

Those events will not be used for the minimization but for control.

Definition at line 620 of file TMultiLayerPerceptron.cxx.

◆ SetTrainingDataSet() [1/2]

void TMultiLayerPerceptron::SetTrainingDataSet ( const char *  train)

Sets the Training dataset.

Those events will be used for the minimization. Note that the tree must be already defined.

Definition at line 632 of file TMultiLayerPerceptron.cxx.

◆ SetTrainingDataSet() [2/2]

void TMultiLayerPerceptron::SetTrainingDataSet ( TEventList * train)

Sets the Training dataset.

Those events will be used for the minimization.

Definition at line 609 of file TMultiLayerPerceptron.cxx.

◆ Shuffle()

void TMultiLayerPerceptron::Shuffle ( Int_t * index,
Int_t  n 
) const
private

Shuffle the Int_t index[n] in input.

Input:

  • index: the array to shuffle
  • n: the size of the array

Output:

  • index: the shuffled indexes

This method is used for stochastic training.

Definition at line 2120 of file TMultiLayerPerceptron.cxx.

◆ SteepestDir()

void TMultiLayerPerceptron::SteepestDir ( Double_t * dir)
protected

Sets the search direction to steepest descent.

Definition at line 2234 of file TMultiLayerPerceptron.cxx.

◆ Train()

void TMultiLayerPerceptron::Train ( Int_t  nEpoch,
Option_t * option = "text",
Double_t  minE = 0 
)

Train the network.

nEpoch is the number of iterations. option can contain:

  • "text" (simple text output)
  • "graph" (evoluting graphical training curves)
  • "update=X" (step for the text/graph output update)
  • "+" will skip the randomisation and start from the previous values.
  • "current" (draw in the current canvas)
  • "minErrorTrain" (stop when NN error on the training sample gets below minE
  • "minErrorTest" (stop when NN error on the test sample gets below minE All combinations are available.

Definition at line 769 of file TMultiLayerPerceptron.cxx.
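
A hedged example combining these options:

// at most 1000 epochs, stopping earlier once the test error drops below 0.01
net.Train(1000, "text, graph, update=50, minErrorTest", 0.01);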

Friends And Related Function Documentation

◆ TMLPAnalyzer

friend class TMLPAnalyzer
friend

Definition at line 27 of file TMultiLayerPerceptron.h.

Member Data Documentation

◆ fCurrentTree

Int_t TMultiLayerPerceptron::fCurrentTree
private

! index of the current tree in a chain

Definition at line 124 of file TMultiLayerPerceptron.h.

◆ fCurrentTreeWeight

Double_t TMultiLayerPerceptron::fCurrentTreeWeight
private

! weight of the current tree in a chain

Definition at line 125 of file TMultiLayerPerceptron.h.

◆ fData

TTree* TMultiLayerPerceptron::fData
private

! pointer to the tree used as datasource

Definition at line 123 of file TMultiLayerPerceptron.h.

◆ fDelta

Double_t TMultiLayerPerceptron::fDelta
private

! Delta - used in stochastic minimisation - Default=0.

Definition at line 143 of file TMultiLayerPerceptron.h.

◆ fEpsilon

Double_t TMultiLayerPerceptron::fEpsilon
private

! Epsilon - used in stochastic minimisation - Default=0.

Definition at line 142 of file TMultiLayerPerceptron.h.

◆ fEta

Double_t TMultiLayerPerceptron::fEta
private

! Eta - used in stochastic minimisation - Default=0.1

Definition at line 141 of file TMultiLayerPerceptron.h.

◆ fEtaDecay

Double_t TMultiLayerPerceptron::fEtaDecay
private

! EtaDecay - Eta *= EtaDecay at each epoch - Default=1.

Definition at line 144 of file TMultiLayerPerceptron.h.

◆ fEventWeight

TTreeFormula* TMultiLayerPerceptron::fEventWeight
private

! formula representing the event weight

Definition at line 139 of file TMultiLayerPerceptron.h.

◆ fextD

TString TMultiLayerPerceptron::fextD
private

String containing the derivative name.

Definition at line 135 of file TMultiLayerPerceptron.h.

◆ fextF

TString TMultiLayerPerceptron::fextF
private

String containing the function name.

Definition at line 134 of file TMultiLayerPerceptron.h.

◆ fFirstLayer

TObjArray TMultiLayerPerceptron::fFirstLayer
private

Collection of the input neurons; subset of fNetwork.

Definition at line 127 of file TMultiLayerPerceptron.h.

◆ fLastAlpha

Double_t TMultiLayerPerceptron::fLastAlpha
private

! internal parameter used in line search

Definition at line 146 of file TMultiLayerPerceptron.h.

◆ fLastLayer

TObjArray TMultiLayerPerceptron::fLastLayer
private

Collection of the output neurons; subset of fNetwork.

Definition at line 128 of file TMultiLayerPerceptron.h.

◆ fLearningMethod

ELearningMethod TMultiLayerPerceptron::fLearningMethod
private

! The Learning Method

Definition at line 138 of file TMultiLayerPerceptron.h.

◆ fManager

TTreeFormulaManager* TMultiLayerPerceptron::fManager
private

! TTreeFormulaManager for the weight and neurons

Definition at line 140 of file TMultiLayerPerceptron.h.

◆ fNetwork

TObjArray TMultiLayerPerceptron::fNetwork
private

Collection of all the neurons in the network.

Definition at line 126 of file TMultiLayerPerceptron.h.

◆ fOutType

TNeuron::ENeuronType TMultiLayerPerceptron::fOutType
private

Type of output neurons.

Definition at line 133 of file TMultiLayerPerceptron.h.

◆ fReset

Int_t TMultiLayerPerceptron::fReset
private

! number of epochs between two resets of the search direction to the steepest descent - Default=50

Definition at line 147 of file TMultiLayerPerceptron.h.

◆ fStructure

TString TMultiLayerPerceptron::fStructure
private

String containing the network structure.

Definition at line 130 of file TMultiLayerPerceptron.h.

◆ fSynapses

TObjArray TMultiLayerPerceptron::fSynapses
private

Collection of all the synapses in the network.

Definition at line 129 of file TMultiLayerPerceptron.h.

◆ fTau

Double_t TMultiLayerPerceptron::fTau
private

! Tau - used in line search - Default=3.

Definition at line 145 of file TMultiLayerPerceptron.h.

◆ fTest

TEventList* TMultiLayerPerceptron::fTest
private

! EventList defining the events in the test dataset

Definition at line 137 of file TMultiLayerPerceptron.h.

◆ fTestOwner

Bool_t TMultiLayerPerceptron::fTestOwner
private

! internal flag whether one has to delete fTest or not

Definition at line 149 of file TMultiLayerPerceptron.h.

◆ fTraining

TEventList* TMultiLayerPerceptron::fTraining
private

! EventList defining the events in the training dataset

Definition at line 136 of file TMultiLayerPerceptron.h.

◆ fTrainingOwner

Bool_t TMultiLayerPerceptron::fTrainingOwner
private

! internal flag whether one has to delete fTraining or not

Definition at line 148 of file TMultiLayerPerceptron.h.

◆ fType

TNeuron::ENeuronType TMultiLayerPerceptron::fType
private

Type of hidden neurons.

Definition at line 132 of file TMultiLayerPerceptron.h.

◆ fWeight

TString TMultiLayerPerceptron::fWeight
private

String containing the event weight.

Definition at line 131 of file TMultiLayerPerceptron.h.


The documentation for this class was generated from the following files:

  • TMultiLayerPerceptron.h
  • TMultiLayerPerceptron.cxx