TMultiLayerPerceptron.cxx
1// @(#)root/mlp:$Id$
2// Author: Christophe.Delaere@cern.ch 20/07/03
3
4/*************************************************************************
5 * Copyright (C) 1995-2003, Rene Brun and Fons Rademakers. *
6 * All rights reserved. *
7 * *
8 * For the licensing terms see $ROOTSYS/LICENSE. *
9 * For the list of contributors see $ROOTSYS/README/CREDITS. *
10 *************************************************************************/
11
12/** \class TMultiLayerPerceptron
13
14
15This class describes a neural network.
16There are facilities to train the network and use the output.
17
18The input layer is made of inactive neurons (returning the
19optionally normalized input) and output neurons are linear.
20The type of hidden neurons is free, the default being sigmoids.
21(One should still try to pass normalized inputs, e.g. in [0.,1].)
22
23The basic input is a TTree and two (training and test) TEventLists.
24Input and output neurons are assigned a value computed for each event
25with the same possibilities as for TTree::Draw().
26Events may be weighted individually or via TTree::SetWeight().
27Six learning methods are available: kStochastic, kBatch,
28kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS.
29
30This implementation, written by C. Delaere, is *inspired* by
31the mlpfit package from J. Schwindling et al., with some extensions:
32
33 - the algorithms are globally the same
34 - in TMultiLayerPerceptron, there is no limitation on the number of
35 layers/neurons, while MLPFIT was limited to 2 hidden layers
36 - TMultiLayerPerceptron allows you to save the network in a ROOT file, and
37 provides more export functionalities
38 - TMultiLayerPerceptron gives more flexibility regarding the normalization of
39 inputs/outputs
40 - TMultiLayerPerceptron provides, thanks to Andrea Bocci, the possibility to
41 use cross-entropy errors, which makes it possible to train a network for
42 pattern classification based on Bayesian posterior probability.
43
44### Introduction
45
46Neural networks are increasingly used in various fields for data
47analysis and classification, both by research and commercial
48institutions. Some randomly chosen examples are:
49
50 - image analysis
51 - prediction and analysis of financial movements
52 - sales forecasting and product shipping optimisation
53 - in particle physics: mainly for classification tasks (signal
54 over background discrimination)
55
56More than 50% of neural networks are multilayer perceptrons. This
57implementation of multilayer perceptrons is inspired by the
58<A HREF="http://schwind.home.cern.ch/schwind/MLPfit.html">MLPfit
59package</A> originally written by Jerome Schwindling. MLPfit remains
60one of the fastest tools for neural network studies, and this ROOT
61add-on does not try to compete on that ground. A clear and flexible
62object-oriented implementation has been chosen over a faster but more
63difficult to maintain code. Nevertheless, the time penalty does not
64exceed a factor of 2.
65
66### The MLP
67
68The multilayer perceptron is a simple feed-forward network with
69the following structure:
70
71\image html mlp.png
72
73It is made of neurons characterized by a bias and weighted links
74between them (let's call those links synapses). The input neurons
75receive the inputs, normalize them and forward them to the first
76hidden layer.
77
78Each neuron in any subsequent layer first computes a linear
79combination of the outputs of the previous layer. The output of the
80neuron is then a function <I>f</I> of that combination, with <I>f</I>
81linear for output neurons or a sigmoid for hidden layers. This is
82useful because of two theorems:
83
84 1. A linear combination of sigmoids can approximate any
85 continuous function.
86 2. Trained with output = 1 for the signal and 0 for the
87 background, the approximated function of the inputs X is the probability
88 of signal, given X.
89
90### Learning methods
91
92The aim of all learning methods is to minimize the total error on
93a set of weighted examples. The error is defined as the sum in
94quadrature, divided by two, of the error on each individual output
95neuron.
96In all methods implemented, one needs to compute
97the first derivative of that error with respect to the weights.
98Exploiting the well-known properties of the derivative, especially the
99derivative of compound functions, one can write:
100
101 - for a neuron: the product of the local derivative with the
102 weighted sum of the derivatives of the outputs.
103 - for a synapse: the product of the input with the local derivative
104 of the output neuron.
105
106This computation is called back-propagation of the errors. A
107loop over all examples is called an epoch.
108Six learning methods are implemented.
109
110#### Stochastic minimization:
111
112is the most trivial learning method. This is the Robbins-Monro
113stochastic approximation applied to multilayer perceptrons. The
114weights are updated after each example according to the formula:
115\f$w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij}(t)\f$
116
117with
118
119\f$\Delta w_{ij}(t) = - \eta(d e_p / d w_{ij} + \delta) + \epsilon \Delta w_{ij}(t-1)\f$
120
121The parameters for this method are Eta, EtaDecay, Delta and
122Epsilon.
123
124#### Steepest descent with fixed step size (batch learning):
125
126It is the same as the stochastic
127minimization, but the weights are updated only after all the
128examples have been considered, using the total derivative dEdw.
129The parameters for this method are Eta, EtaDecay, Delta and Epsilon.
130
131#### Steepest descent algorithm:
132
133Weights are set to the minimum along the line defined by the gradient. The
134only parameter for this method is Tau. Lower tau = higher precision =
135slower search. A value Tau = 3 seems reasonable.
136
137#### Conjugate gradients with the Polak-Ribiere updating formula:
138
139Weights are set to the minimum along the line defined by the conjugate gradient.
140Parameters are Tau and Reset, which define the epochs at which the direction is
141reset to the steepest descent.
142
143#### Conjugate gradients with the Fletcher-Reeves updating formula:
144
145Weights are set to the minimum along the line defined by the conjugate gradient.
146Parameters are Tau and Reset, which define the epochs at which the direction is
147reset to the steepest descent.
148
149#### Broyden, Fletcher, Goldfarb, Shanno (BFGS) method:
150
151 Implies the computation of an NxN matrix,
152but seems more powerful, at least for fewer than 300
153weights. Parameters are Tau and Reset, which define the epochs at which
154the direction is reset to the steepest descent.
155
156### How to use it...
157
158TMLP is built from three classes: TNeuron, TSynapse and
159TMultiLayerPerceptron. Only TMultiLayerPerceptron should be used
160explicitly by the user.
161
162TMultiLayerPerceptron will take examples from a TTree
163given in the constructor. The network is described by a simple
164string: the input/output layers are defined by giving the expression for
165each neuron, separated by commas. Hidden layers are just described
166by the number of neurons. The layers are separated by colons.
167In addition, input/output layer formulas can be preceded by '@' (e.g. "@out")
168if one wants to also normalize the data from the TTree.
169Input and outputs are taken from the TTree given as second argument.
170Expressions are evaluated as for TTree::Draw(): arrays are expanded into
171distinct neurons, one for each index.
172This can only be done for fixed-size arrays.
173If the formula ends with "!", softmax functions are used for the output layer.
174One defines the training and test datasets by TEventLists.
175
176Example:
177~~~ {.cpp}
178TMultiLayerPerceptron("x,y:10:5:f",inputTree);
179~~~
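A layout string can also combine normalization and softmax outputs, for instance (the branch names here are hypothetical):

~~~ {.cpp}
// two normalized inputs, one hidden layer of 8 sigmoid neurons,
// three softmax outputs for mutually exclusive classes
TMultiLayerPerceptron net("@x,@y:8:type1,type2,type3!", inputTree);
~~~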
180
181Both the TTree and the TEventLists can be defined in
182the constructor, or later with the suited setter method. The lists
183used for training and test can be defined either explicitly, or via
184a string containing the formula to be used to define them, exactly as
185for a TCut.
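For instance, the entries can be split between even (training) and odd (test) entries with the cut-string constructor (assuming inputTree holds the branches used in the layout):

~~~ {.cpp}
TMultiLayerPerceptron net("x,y:10:5:f", inputTree,
                          "Entry$%2",       // training: even entries
                          "(Entry$+1)%2");  // test: odd entries
~~~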
186
187The learning method is defined using TMultiLayerPerceptron::SetLearningMethod().
188Learning methods are:
189
190 - TMultiLayerPerceptron::kStochastic,
191 - TMultiLayerPerceptron::kBatch,
192 - TMultiLayerPerceptron::kSteepestDescent,
193 - TMultiLayerPerceptron::kRibierePolak,
194 - TMultiLayerPerceptron::kFletcherReeves,
195 - TMultiLayerPerceptron::kBFGS
196
197A weight can be assigned to events, either in the constructor or
198with TMultiLayerPerceptron::SetEventWeight(). In addition, the TTree weight
199is taken into account.
200
201Finally, one starts the training with
202TMultiLayerPerceptron::Train(Int_t nepoch, Option_t* options). The
203first argument is the number of epochs while option is a string that
204can contain: "text" (simple text output), "graph"
205(evolving graphical training curves), "update=X" (step for
206the text/graph output update) or "+" (will skip the
207randomisation and start from the previous values). All combinations
208are available.
209
210Example:
211~~~ {.cpp}
212net.Train(100,"text, graph, update=10");
213~~~
214
215When the neural net is trained, it can be used
216directly ( TMultiLayerPerceptron::Evaluate() ) or exported to a
217standalone C++ code ( TMultiLayerPerceptron::Export() ).
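For instance, once the training has converged (the input values and the output file name below are made up):

~~~ {.cpp}
Double_t params[2] = {0.5, -1.2};             // one value per input neuron
Double_t nnOutput = net.Evaluate(0, params);  // value of output neuron 0
net.Export("myNetwork", "C++");               // standalone C++ code
~~~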
218
219Finally, note that even if this implementation is inspired by the mlpfit code,
220the feature lists do not match exactly:
221
222 - the mlpfit hybrid learning method is not implemented
223 - output neurons can be normalized; this is not the case for mlpfit
224 - the neural net is exported in C++, FORTRAN or Python
225 - the drawResult() method allows a fast check of the learning procedure
226
227In addition, the PAW version of mlpfit had additional limitations on the number of
228neurons, hidden layers and inputs/outputs that do not apply to TMultiLayerPerceptron.
229*/
230
231
233#include "TSynapse.h"
234#include "TNeuron.h"
235#include "TClass.h"
236#include "TTree.h"
237#include "TEventList.h"
238#include "TRandom3.h"
239#include "TTimeStamp.h"
240#include "TRegexp.h"
241#include "TCanvas.h"
242#include "TH2.h"
243#include "TGraph.h"
244#include "TLegend.h"
245#include "TMultiGraph.h"
246#include "TDirectory.h"
247#include "TSystem.h"
248#include <iostream>
249#include <fstream>
250#include "TMath.h"
251#include "TTreeFormula.h"
252#include "TTreeFormulaManager.h"
253#include "TMarker.h"
254#include "TLine.h"
255#include "TText.h"
256#include "TObjString.h"
257#include <cstdlib>
258
260
261////////////////////////////////////////////////////////////////////////////////
262/// Default constructor
263
264TMultiLayerPerceptron::TMultiLayerPerceptron()
265{
266 if(!TClass::GetClass("TTreePlayer")) gSystem->Load("libTreePlayer");
267 fNetwork.SetOwner(true);
268 fFirstLayer.SetOwner(false);
269 fLastLayer.SetOwner(false);
270 fSynapses.SetOwner(true);
271 fData = 0;
272 fCurrentTree = -1;
274 fStructure = "";
275 fWeight = "1";
276 fTraining = 0;
277 fTrainingOwner = false;
278 fTest = 0;
279 fTestOwner = false;
280 fEventWeight = 0;
281 fManager = 0;
283 fEta = .1;
284 fEtaDecay = 1;
285 fDelta = 0;
286 fEpsilon = 0;
287 fTau = 3;
288 fLastAlpha = 0;
289 fReset = 50;
292 fextF = "";
293 fextD = "";
294}
295
296////////////////////////////////////////////////////////////////////////////////
297/// The network is described by a simple string:
298/// The input/output layers are defined by giving
299/// the branch names separated by commas.
300/// Hidden layers are just described by the number of neurons.
301/// The layers are separated by colons.
302///
303/// Ex: "x,y:10:5:f"
304///
305/// The output can be prepended by '@' if the variable has to be
306/// normalized.
307/// The output can be followed by '!' to use Softmax neurons for the
308/// output layer only.
309///
310/// Ex: "x,y:10:5:c1,c2,c3!"
311///
312/// Input and outputs are taken from the TTree given as second argument.
313/// training and test are the two TEventLists defining events
314/// to be used during the neural net training.
315/// Both the TTree and the TEventLists can be defined in the constructor,
316/// or later with the suited setter method.
317
318TMultiLayerPerceptron::TMultiLayerPerceptron(const char * layout, TTree * data,
319 TEventList * training,
320 TEventList * test,
321 TNeuron::ENeuronType type,
322 const char* extF, const char* extD)
323{
324 if(!TClass::GetClass("TTreePlayer")) gSystem->Load("libTreePlayer");
325 fNetwork.SetOwner(true);
326 fFirstLayer.SetOwner(false);
327 fLastLayer.SetOwner(false);
328 fSynapses.SetOwner(true);
329 fStructure = layout;
330 fData = data;
331 fCurrentTree = -1;
333 fTraining = training;
334 fTrainingOwner = false;
335 fTest = test;
336 fTestOwner = false;
337 fWeight = "1";
338 fType = type;
340 fextF = extF;
341 fextD = extD;
342 fEventWeight = 0;
343 fManager = 0;
344 if (data) {
345 BuildNetwork();
346 AttachData();
347 }
349 fEta = .1;
350 fEpsilon = 0;
351 fDelta = 0;
352 fEtaDecay = 1;
353 fTau = 3;
354 fLastAlpha = 0;
355 fReset = 50;
356}
357
358////////////////////////////////////////////////////////////////////////////////
359/// The network is described by a simple string:
360/// The input/output layers are defined by giving
361/// the branch names separated by commas.
362/// Hidden layers are just described by the number of neurons.
363/// The layers are separated by colons.
364///
365/// Ex: "x,y:10:5:f"
366///
367/// The output can be prepended by '@' if the variable has to be
368/// normalized.
369/// The output can be followed by '!' to use Softmax neurons for the
370/// output layer only.
371///
372/// Ex: "x,y:10:5:c1,c2,c3!"
373///
374/// Input and outputs are taken from the TTree given as second argument.
375/// training and test are the two TEventLists defining events
376/// to be used during the neural net training.
377/// Both the TTree and the TEventLists can be defined in the constructor,
378/// or later with the suited setter method.
379
380TMultiLayerPerceptron::TMultiLayerPerceptron(const char * layout,
381 const char * weight, TTree * data,
382 TEventList * training,
383 TEventList * test,
384 TNeuron::ENeuronType type,
385 const char* extF, const char* extD)
386{
387 if(!TClass::GetClass("TTreePlayer")) gSystem->Load("libTreePlayer");
388 fNetwork.SetOwner(true);
389 fFirstLayer.SetOwner(false);
390 fLastLayer.SetOwner(false);
391 fSynapses.SetOwner(true);
392 fStructure = layout;
393 fData = data;
394 fCurrentTree = -1;
396 fTraining = training;
397 fTrainingOwner = false;
398 fTest = test;
399 fTestOwner = false;
400 fWeight = weight;
401 fType = type;
403 fextF = extF;
404 fextD = extD;
405 fEventWeight = 0;
406 fManager = 0;
407 if (data) {
408 BuildNetwork();
409 AttachData();
410 }
412 fEta = .1;
413 fEtaDecay = 1;
414 fDelta = 0;
415 fEpsilon = 0;
416 fTau = 3;
417 fLastAlpha = 0;
418 fReset = 50;
419}
420
421////////////////////////////////////////////////////////////////////////////////
422/// The network is described by a simple string:
423/// The input/output layers are defined by giving
424/// the branch names separated by commas.
425/// Hidden layers are just described by the number of neurons.
426/// The layers are separated by colons.
427///
428/// Ex: "x,y:10:5:f"
429///
430/// The output can be prepended by '@' if the variable has to be
431/// normalized.
432/// The output can be followed by '!' to use Softmax neurons for the
433/// output layer only.
434///
435/// Ex: "x,y:10:5:c1,c2,c3!"
436///
437/// Input and outputs are taken from the TTree given as second argument.
438/// training and test are two cuts (see TTreeFormula) defining events
439/// to be used during the neural net training and testing.
440///
441/// Example: "Entry$%2", "(Entry$+1)%2".
442///
443/// Both the TTree and the cut can be defined in the constructor,
444/// or later with the suited setter method.
445
446TMultiLayerPerceptron::TMultiLayerPerceptron(const char * layout, TTree * data,
447 const char * training,
448 const char * test,
449 TNeuron::ENeuronType type,
450 const char* extF, const char* extD)
451{
452 if(!TClass::GetClass("TTreePlayer")) gSystem->Load("libTreePlayer");
453 fNetwork.SetOwner(true);
454 fFirstLayer.SetOwner(false);
455 fLastLayer.SetOwner(false);
456 fSynapses.SetOwner(true);
457 fStructure = layout;
458 fData = data;
459 fCurrentTree = -1;
461 fTraining = new TEventList(Form("fTrainingList_%lu",(ULong_t)this));
462 fTrainingOwner = true;
463 fTest = new TEventList(Form("fTestList_%lu",(ULong_t)this));
464 fTestOwner = true;
465 fWeight = "1";
466 TString testcut = test;
467 if(testcut=="") testcut = Form("!(%s)",training);
468 fType = type;
470 fextF = extF;
471 fextD = extD;
472 fEventWeight = 0;
473 fManager = 0;
474 if (data) {
475 BuildNetwork();
476 data->Draw(Form(">>fTrainingList_%lu",(ULong_t)this),training,"goff");
477 data->Draw(Form(">>fTestList_%lu",(ULong_t)this),(const char *)testcut,"goff");
478 AttachData();
479 }
480 else {
481 Warning("TMultiLayerPerceptron::TMultiLayerPerceptron","Data not set. Cannot define datasets");
482 }
484 fEta = .1;
485 fEtaDecay = 1;
486 fDelta = 0;
487 fEpsilon = 0;
488 fTau = 3;
489 fLastAlpha = 0;
490 fReset = 50;
491}
492
493////////////////////////////////////////////////////////////////////////////////
494/// The network is described by a simple string:
495/// The input/output layers are defined by giving
496/// the branch names separated by commas.
497/// Hidden layers are just described by the number of neurons.
498/// The layers are separated by colons.
499///
500/// Ex: "x,y:10:5:f"
501///
502/// The output can be prepended by '@' if the variable has to be
503/// normalized.
504/// The output can be followed by '!' to use Softmax neurons for the
505/// output layer only.
506///
507/// Ex: "x,y:10:5:c1,c2,c3!"
508///
509/// Input and outputs are taken from the TTree given as second argument.
510/// training and test are two cuts (see TTreeFormula) defining events
511/// to be used during the neural net training and testing.
512///
513/// Example: "Entry$%2", "(Entry$+1)%2".
514///
515/// Both the TTree and the cut can be defined in the constructor,
516/// or later with the suited setter method.
517
518TMultiLayerPerceptron::TMultiLayerPerceptron(const char * layout,
519 const char * weight, TTree * data,
520 const char * training,
521 const char * test,
522 TNeuron::ENeuronType type,
523 const char* extF, const char* extD)
524{
525 if(!TClass::GetClass("TTreePlayer")) gSystem->Load("libTreePlayer");
526 fNetwork.SetOwner(true);
527 fFirstLayer.SetOwner(false);
528 fLastLayer.SetOwner(false);
529 fSynapses.SetOwner(true);
530 fStructure = layout;
531 fData = data;
532 fCurrentTree = -1;
534 fTraining = new TEventList(Form("fTrainingList_%lu",(ULong_t)this));
535 fTrainingOwner = true;
536 fTest = new TEventList(Form("fTestList_%lu",(ULong_t)this));
537 fTestOwner = true;
538 fWeight = weight;
539 TString testcut = test;
540 if(testcut=="") testcut = Form("!(%s)",training);
541 fType = type;
543 fextF = extF;
544 fextD = extD;
545 fEventWeight = 0;
546 fManager = 0;
547 if (data) {
548 BuildNetwork();
549 data->Draw(Form(">>fTrainingList_%lu",(ULong_t)this),training,"goff");
550 data->Draw(Form(">>fTestList_%lu",(ULong_t)this),(const char *)testcut,"goff");
551 AttachData();
552 }
553 else {
554 Warning("TMultiLayerPerceptron::TMultiLayerPerceptron","Data not set. Cannot define datasets");
555 }
557 fEta = .1;
558 fEtaDecay = 1;
559 fDelta = 0;
560 fEpsilon = 0;
561 fTau = 3;
562 fLastAlpha = 0;
563 fReset = 50;
564}
565
566////////////////////////////////////////////////////////////////////////////////
567/// Destructor
568
569TMultiLayerPerceptron::~TMultiLayerPerceptron()
570{
571 if(fTraining && fTrainingOwner) delete fTraining;
572 if(fTest && fTestOwner) delete fTest;
573}
574
575////////////////////////////////////////////////////////////////////////////////
576/// Set the data source
577
578void TMultiLayerPerceptron::SetData(TTree * data)
579{
580 if (fData) {
581 std::cerr << "Error: data already defined." << std::endl;
582 return;
583 }
584 fData = data;
585 if (data) {
586 BuildNetwork();
587 AttachData();
588 }
589}
590
591////////////////////////////////////////////////////////////////////////////////
592/// Set the event weight
593
594void TMultiLayerPerceptron::SetEventWeight(const char * branch)
595{
596 fWeight=branch;
597 if (fData) {
598 if (fEventWeight) {
600 delete fEventWeight;
601 }
602 fManager->Add((fEventWeight = new TTreeFormula("NNweight",fWeight.Data(),fData)));
603 }
604}
605
606////////////////////////////////////////////////////////////////////////////////
607/// Sets the Training dataset.
608/// Those events will be used for the minimization
609
610void TMultiLayerPerceptron::SetTrainingDataSet(TEventList* train)
611{
612 if(fTraining && fTrainingOwner) delete fTraining;
613 fTraining = train;
614 fTrainingOwner = false;
615}
616
617////////////////////////////////////////////////////////////////////////////////
618/// Sets the Test dataset.
619/// Those events will not be used for the minimization but for control
620
621void TMultiLayerPerceptron::SetTestDataSet(TEventList* test)
622{
623 if(fTest && fTestOwner) delete fTest;
624 fTest = test;
625 fTestOwner = false;
626}
627
628////////////////////////////////////////////////////////////////////////////////
629/// Sets the Training dataset.
630/// Those events will be used for the minimization.
631/// Note that the tree must be already defined.
632
633void TMultiLayerPerceptron::SetTrainingDataSet(const char * train)
634{
635 if(fTraining && fTrainingOwner) delete fTraining;
636 fTraining = new TEventList(Form("fTrainingList_%lu",(ULong_t)this));
637 fTrainingOwner = true;
638 if (fData) {
639 fData->Draw(Form(">>fTrainingList_%lu",(ULong_t)this),train,"goff");
640 }
641 else {
642 Warning("TMultiLayerPerceptron::TMultiLayerPerceptron","Data not set. Cannot define datasets");
643 }
644}
645
646////////////////////////////////////////////////////////////////////////////////
647/// Sets the Test dataset.
648/// Those events will not be used for the minimization but for control.
649/// Note that the tree must be already defined.
650
651void TMultiLayerPerceptron::SetTestDataSet(const char * test)
652{
653 if(fTest && fTestOwner) {delete fTest; fTest=0;}
654 if(fTest) if(strncmp(fTest->GetName(),Form("fTestList_%lu",(ULong_t)this),10)) delete fTest;
655 fTest = new TEventList(Form("fTestList_%lu",(ULong_t)this));
656 fTestOwner = true;
657 if (fData) {
658 fData->Draw(Form(">>fTestList_%lu",(ULong_t)this),test,"goff");
659 }
660 else {
661 Warning("TMultiLayerPerceptron::TMultiLayerPerceptron","Data not set. Cannot define datasets");
662 }
663}
664
665////////////////////////////////////////////////////////////////////////////////
666/// Sets the learning method.
667/// Available methods are: kStochastic, kBatch,
668/// kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS.
669/// (look at the constructor for the complete description
670/// of learning methods and parameters)
671
672void TMultiLayerPerceptron::SetLearningMethod(TMultiLayerPerceptron::ELearningMethod method)
673{
674 fLearningMethod = method;
675}
676
677////////////////////////////////////////////////////////////////////////////////
678/// Sets Eta - used in stochastic minimisation
679/// (look at the constructor for the complete description
680/// of learning methods and parameters)
681
682void TMultiLayerPerceptron::SetEta(Double_t eta)
683{
684 fEta = eta;
685}
686
687////////////////////////////////////////////////////////////////////////////////
688/// Sets Epsilon - used in stochastic minimisation
689/// (look at the constructor for the complete description
690/// of learning methods and parameters)
691
692void TMultiLayerPerceptron::SetEpsilon(Double_t eps)
693{
694 fEpsilon = eps;
695}
696
697////////////////////////////////////////////////////////////////////////////////
698/// Sets Delta - used in stochastic minimisation
699/// (look at the constructor for the complete description
700/// of learning methods and parameters)
701
702void TMultiLayerPerceptron::SetDelta(Double_t delta)
703{
704 fDelta = delta;
705}
706
707////////////////////////////////////////////////////////////////////////////////
708/// Sets EtaDecay - Eta *= EtaDecay at each epoch
709/// (look at the constructor for the complete description
710/// of learning methods and parameters)
711
712void TMultiLayerPerceptron::SetEtaDecay(Double_t ed)
713{
714 fEtaDecay = ed;
715}
716
717////////////////////////////////////////////////////////////////////////////////
718/// Sets Tau - used in line search
719/// (look at the constructor for the complete description
720/// of learning methods and parameters)
721
722void TMultiLayerPerceptron::SetTau(Double_t tau)
723{
724 fTau = tau;
725}
726
727////////////////////////////////////////////////////////////////////////////////
728/// Sets number of epochs between two resets of the
729/// search direction to the steepest descent.
730/// (look at the constructor for the complete description
731/// of learning methods and parameters)
732
733void TMultiLayerPerceptron::SetReset(Int_t reset)
734{
735 fReset = reset;
736}
737
738////////////////////////////////////////////////////////////////////////////////
739/// Load an entry into the network
740
741void TMultiLayerPerceptron::GetEntry(Int_t entry) const
742{
743 if (!fData) return;
744 fData->GetEntry(entry);
745 if (fData->GetTreeNumber() != fCurrentTree) {
746 ((TMultiLayerPerceptron*)this)->fCurrentTree = fData->GetTreeNumber();
747 fManager->Notify();
748 ((TMultiLayerPerceptron*)this)->fCurrentTreeWeight = fData->GetWeight();
749 }
750 Int_t nentries = fNetwork.GetEntriesFast();
751 for (Int_t i=0;i<nentries;i++) {
752 TNeuron *neuron = (TNeuron *)fNetwork.UncheckedAt(i);
753 neuron->SetNewEvent();
754 }
755}
756
757////////////////////////////////////////////////////////////////////////////////
758/// Train the network.
759/// nEpoch is the number of iterations.
760/// option can contain:
761/// - "text" (simple text output)
762/// - "graph" (evolving graphical training curves)
763/// - "update=X" (step for the text/graph output update)
764/// - "+" will skip the randomisation and start from the previous values.
765/// - "current" (draw in the current canvas)
766/// - "minErrorTrain" (stop when the NN error on the training sample gets below minE)
767/// - "minErrorTest" (stop when the NN error on the test sample gets below minE)
768/// All combinations are available.
769
770void TMultiLayerPerceptron::Train(Int_t nEpoch, Option_t * option, Double_t minE)
771{
772 Int_t i;
773 TString opt = option;
774 opt.ToLower();
775 // Decode options and prepare training.
776 Int_t verbosity = 0;
777 Bool_t newCanvas = true;
778 Bool_t minE_Train = false;
779 Bool_t minE_Test = false;
780 if (opt.Contains("text"))
781 verbosity += 1;
782 if (opt.Contains("graph"))
783 verbosity += 2;
784 Int_t displayStepping = 1;
785 if (opt.Contains("update=")) {
786 TRegexp reg("update=[0-9]*");
787 TString out = opt(reg);
788 displayStepping = atoi(out.Data() + 7);
789 }
790 if (opt.Contains("current"))
791 newCanvas = false;
792 if (opt.Contains("minerrortrain"))
793 minE_Train = true;
794 if (opt.Contains("minerrortest"))
795 minE_Test = true;
796 TVirtualPad *canvas = 0;
797 TMultiGraph *residual_plot = 0;
798 TGraph *train_residual_plot = 0;
799 TGraph *test_residual_plot = 0;
800 if ((!fData) || (!fTraining) || (!fTest)) {
801 Error("Train","Training/Test samples still not defined. Cannot train the neural network");
802 return;
803 }
804 Info("Train","Using %d train and %d test entries.",
805 fTraining->GetN(), fTest->GetN());
806 // Text and Graph outputs
807 if (verbosity % 2)
808 std::cout << "Training the Neural Network" << std::endl;
809 if (verbosity / 2) {
810 residual_plot = new TMultiGraph;
811 if(newCanvas)
812 canvas = new TCanvas("NNtraining", "Neural Net training");
813 else {
814 canvas = gPad;
815 if(!canvas) canvas = new TCanvas("NNtraining", "Neural Net training");
816 }
817 train_residual_plot = new TGraph(nEpoch);
818 test_residual_plot = new TGraph(nEpoch);
819 canvas->SetLeftMargin(0.14);
820 train_residual_plot->SetLineColor(4);
821 test_residual_plot->SetLineColor(2);
822 residual_plot->Add(train_residual_plot);
823 residual_plot->Add(test_residual_plot);
824 residual_plot->Draw("LA");
825 if (residual_plot->GetXaxis()) residual_plot->GetXaxis()->SetTitle("Epoch");
826 if (residual_plot->GetYaxis()) residual_plot->GetYaxis()->SetTitle("Error");
827 }
828 // If the option "+" is not set, one has to randomize the weights first
829 if (!opt.Contains("+"))
830 Randomize();
831 // Initialisation
832 fLastAlpha = 0;
833 Int_t els = fNetwork.GetEntriesFast() + fSynapses.GetEntriesFast();
834 Double_t *buffer = new Double_t[els];
835 Double_t *dir = new Double_t[els];
836 for (i = 0; i < els; i++)
837 buffer[i] = 0;
838 Int_t matrix_size = fLearningMethod==TMultiLayerPerceptron::kBFGS ? els : 1;
839 TMatrixD bfgsh(matrix_size, matrix_size);
840 TMatrixD gamma(matrix_size, 1);
841 TMatrixD delta(matrix_size, 1);
842 // Epoch loop. Here is the training itself.
843 Double_t training_E = 1e10;
844 Double_t test_E = 1e10;
845 for (Int_t iepoch = 0; (iepoch < nEpoch) && (!minE_Train || training_E>minE) && (!minE_Test || test_E>minE) ; iepoch++) {
846 switch (fLearningMethod) {
847 case TMultiLayerPerceptron::kStochastic:
848 {
849 MLP_Stochastic(buffer);
850 break;
851 }
852 case TMultiLayerPerceptron::kBatch:
853 {
854 ComputeDEDw();
855 MLP_Batch(buffer);
856 break;
857 }
858 case TMultiLayerPerceptron::kSteepestDescent:
859 {
860 ComputeDEDw();
861 SteepestDir(dir);
862 if (LineSearch(dir, buffer))
863 MLP_Batch(buffer);
864 break;
865 }
866 case TMultiLayerPerceptron::kRibierePolak:
867 {
868 ComputeDEDw();
869 if (!(iepoch % fReset)) {
870 SteepestDir(dir);
871 } else {
872 Double_t norm = 0;
873 Double_t onorm = 0;
874 for (i = 0; i < els; i++)
875 onorm += dir[i] * dir[i];
876 Double_t prod = 0;
877 Int_t idx = 0;
878 TNeuron *neuron = 0;
879 TSynapse *synapse = 0;
880 Int_t nentries = fNetwork.GetEntriesFast();
881 for (i=0;i<nentries;i++) {
882 neuron = (TNeuron *) fNetwork.UncheckedAt(i);
883 prod -= dir[idx++] * neuron->GetDEDw();
884 norm += neuron->GetDEDw() * neuron->GetDEDw();
885 }
886 nentries = fSynapses.GetEntriesFast();
887 for (i=0;i<nentries;i++) {
888 synapse = (TSynapse *) fSynapses.UncheckedAt(i);
889 prod -= dir[idx++] * synapse->GetDEDw();
890 norm += synapse->GetDEDw() * synapse->GetDEDw();
891 }
892 ConjugateGradientsDir(dir, (norm - prod) / onorm);
893 }
894 if (LineSearch(dir, buffer))
895 MLP_Batch(buffer);
896 break;
897 }
898 case TMultiLayerPerceptron::kFletcherReeves:
899 {
900 ComputeDEDw();
901 if (!(iepoch % fReset)) {
902 SteepestDir(dir);
903 } else {
904 Double_t norm = 0;
905 Double_t onorm = 0;
906 for (i = 0; i < els; i++)
907 onorm += dir[i] * dir[i];
908 TNeuron *neuron = 0;
909 TSynapse *synapse = 0;
910 Int_t nentries = fNetwork.GetEntriesFast();
911 for (i=0;i<nentries;i++) {
912 neuron = (TNeuron *) fNetwork.UncheckedAt(i);
913 norm += neuron->GetDEDw() * neuron->GetDEDw();
914 }
915 nentries = fSynapses.GetEntriesFast();
916 for (i=0;i<nentries;i++) {
917 synapse = (TSynapse *) fSynapses.UncheckedAt(i);
918 norm += synapse->GetDEDw() * synapse->GetDEDw();
919 }
920 ConjugateGradientsDir(dir, norm / onorm);
921 }
922 if (LineSearch(dir, buffer))
923 MLP_Batch(buffer);
924 break;
925 }
926 case TMultiLayerPerceptron::kBFGS:
927 {
928 SetGammaDelta(gamma, delta, buffer);
929 if (!(iepoch % fReset)) {
930 SteepestDir(dir);
931 bfgsh.UnitMatrix();
932 } else {
933 if (GetBFGSH(bfgsh, gamma, delta)) {
934 SteepestDir(dir);
935 bfgsh.UnitMatrix();
936 } else {
937 BFGSDir(bfgsh, dir);
938 }
939 }
940 if (DerivDir(dir) > 0) {
941 SteepestDir(dir);
942 bfgsh.UnitMatrix();
943 }
944 if (LineSearch(dir, buffer)) {
945 bfgsh.UnitMatrix();
946 SteepestDir(dir);
947 if (LineSearch(dir, buffer)) {
948 Error("TMultiLayerPerceptron::Train()","Line search fail");
949 iepoch = nEpoch;
950 }
951 }
952 break;
953 }
954 }
955 // Security: would the learning lead to non real numbers,
956 // the learning should stop now.
957 if (TMath::IsNaN(GetError(TMultiLayerPerceptron::kTraining))) {
958 Error("TMultiLayerPerceptron::Train()","Stop.");
959 iepoch = nEpoch;
960 }
961 // Process other ROOT events. Time penalty is less than
962 // 1/1000 sec/evt on a mobile AMD Athlon(tm) XP 1500+
963 gSystem->ProcessEvents();
964 training_E = TMath::Sqrt(GetError(TMultiLayerPerceptron::kTraining) / fTraining->GetN());
965 test_E = TMath::Sqrt(GetError(TMultiLayerPerceptron::kTest) / fTest->GetN());
966 // Intermediate graph and text output
967 if ((verbosity % 2) && ((!(iepoch % displayStepping)) || (iepoch == nEpoch - 1))) {
968 std::cout << "Epoch: " << iepoch
969 << " learn=" << training_E
970 << " test=" << test_E
971 << std::endl;
972 }
973 if (verbosity / 2) {
974 train_residual_plot->SetPoint(iepoch, iepoch,training_E);
975 test_residual_plot->SetPoint(iepoch, iepoch,test_E);
976 if (!iepoch) {
977 Double_t trp = train_residual_plot->GetY()[iepoch];
978 Double_t tep = test_residual_plot->GetY()[iepoch];
979 for (i = 1; i < nEpoch; i++) {
980 train_residual_plot->SetPoint(i, i, trp);
981 test_residual_plot->SetPoint(i, i, tep);
982 }
983 }
984 if ((!(iepoch % displayStepping)) || (iepoch == nEpoch - 1)) {
985 if (residual_plot->GetYaxis()) {
986 residual_plot->GetYaxis()->UnZoom();
987 residual_plot->GetYaxis()->SetTitleOffset(1.4);
988 residual_plot->GetYaxis()->SetDecimals();
989 }
990 canvas->Modified();
991 canvas->Update();
992 }
993 }
994 }
995 // Cleaning
996 delete [] buffer;
997 delete [] dir;
998 // Final Text and Graph outputs
999 if (verbosity % 2)
1000 std::cout << "Training done." << std::endl;
1001 if (verbosity / 2) {
1002 TLegend *legend = new TLegend(.75, .80, .95, .95);
1003 legend->AddEntry(residual_plot->GetListOfGraphs()->At(0),
1004 "Training sample", "L");
1005 legend->AddEntry(residual_plot->GetListOfGraphs()->At(1),
1006 "Test sample", "L");
1007 legend->Draw();
1008 }
1009}
1010
1011////////////////////////////////////////////////////////////////////////////////
1012/// Computes the output for a given event.
1013/// Looks at the output neuron designated by index.
1014
1015Double_t TMultiLayerPerceptron::Result(Int_t event, Int_t index) const
1016{
1017 GetEntry(event);
1018 TNeuron *out = (TNeuron *) (fLastLayer.At(index));
1019 if (out)
1020 return out->GetValue();
1021 else
1022 return 0;
1023}
1024
1025////////////////////////////////////////////////////////////////////////////////
1026/// Error on the output for a given event
1027
1028Double_t TMultiLayerPerceptron::GetError(Int_t event) const
1029{
1030 GetEntry(event);
1031 Double_t error = 0;
1032 // look at 1st output neuron to determine type and error function
1033 Int_t nEntries = fLastLayer.GetEntriesFast();
1034 if (nEntries == 0) return 0.0;
1035 switch (fOutType) {
1036 case (TNeuron::kSigmoid):
1037 error = GetCrossEntropyBinary();
1038 break;
1039 case (TNeuron::kSoftmax):
1040 error = GetCrossEntropy();
1041 break;
1042 case (TNeuron::kLinear):
1043 error = GetSumSquareError();
1044 break;
1045 default:
1046 // default to sum-of-squares error
1047 error = GetSumSquareError();
1048 }
1049 error *= fEventWeight->EvalInstance();
1050 error *= fCurrentTreeWeight;
1051 return error;
1052}
1053
1054////////////////////////////////////////////////////////////////////////////////
1055/// Error on the whole dataset
1056
1057Double_t TMultiLayerPerceptron::GetError(TMultiLayerPerceptron::EDataSet set) const
1058{
1059 TEventList *list =
1060 ((set == TMultiLayerPerceptron::kTraining) ? fTraining : fTest);
1061 Double_t error = 0;
1062 Int_t i;
1063 if (list) {
1064 Int_t nEvents = list->GetN();
1065 for (i = 0; i < nEvents; i++) {
1066 error += GetError(list->GetEntry(i));
1067 }
1068 } else if (fData) {
1069 Int_t nEvents = (Int_t) fData->GetEntries();
1070 for (i = 0; i < nEvents; i++) {
1071 error += GetError(i);
1072 }
1073 }
1074 return error;
1075}
1076
1077////////////////////////////////////////////////////////////////////////////////
1078/// Sum of the squared errors on the output, for the current event
1079
1080Double_t TMultiLayerPerceptron::GetSumSquareError() const
1081{
1082 Double_t error = 0;
1083 for (Int_t i = 0; i < fLastLayer.GetEntriesFast(); i++) {
1084 TNeuron *neuron = (TNeuron *) fLastLayer[i];
1085 error += neuron->GetError() * neuron->GetError();
1086 }
1087 return (error / 2.);
1088}
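The quantity accumulated above is E = ½ Σᵢ (outputᵢ − targetᵢ)². A minimal standalone sketch of the same computation without any ROOT types (the `sumSquareError` helper is illustrative only, not part of the class):

```cpp
#include <cstddef>
#include <vector>

// Half the sum of squared differences between network outputs and targets,
// accumulated term by term as in GetSumSquareError() above.
double sumSquareError(const std::vector<double>& output,
                      const std::vector<double>& target)
{
   double error = 0.;
   for (std::size_t i = 0; i < output.size(); i++) {
      const double e = output[i] - target[i];
      error += e * e;
   }
   return error / 2.;
}
```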
1089
1090////////////////////////////////////////////////////////////////////////////////
1091/// Cross entropy error for sigmoid output neurons, for a given event
1092
1093Double_t TMultiLayerPerceptron::GetCrossEntropyBinary() const
1094{
1095 Double_t error = 0;
1096 for (Int_t i = 0; i < fLastLayer.GetEntriesFast(); i++) {
1097 TNeuron *neuron = (TNeuron *) fLastLayer[i];
1098 Double_t output = neuron->GetValue(); // sigmoid output and target
1099 Double_t target = neuron->GetTarget(); // values lie in [0,1]
1100 if (target < DBL_EPSILON) {
1101 if (output == 1.0)
1102 error = DBL_MAX;
1103 else
1104 error -= TMath::Log(1 - output);
1105 } else
1106 if ((1 - target) < DBL_EPSILON) {
1107 if (output == 0.0)
1108 error = DBL_MAX;
1109 else
1110 error -= TMath::Log(output);
1111 } else {
1112 if (output == 0.0 || output == 1.0)
1113 error = DBL_MAX;
1114 else
1115 error -= target * TMath::Log(output / target) + (1-target) * TMath::Log((1 - output)/(1 - target));
1116 }
1117 }
1118 return error;
1119}
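The error used here is the cross-entropy in its "relative entropy" form, which vanishes when the output equals the target and saturates to DBL_MAX when the output sits on the wrong boundary. A standalone sketch of the per-neuron term, using plain doubles instead of TNeuron (the `crossEntropyBinary` name is illustrative):

```cpp
#include <cfloat>
#include <cmath>

// Relative-entropy form of the binary cross-entropy between a sigmoid
// output o and a target t, both in [0,1], matching the branches above.
double crossEntropyBinary(double o, double t)
{
   if (t < DBL_EPSILON)                       // target is 0
      return (o == 1.) ? DBL_MAX : -std::log(1. - o);
   if ((1. - t) < DBL_EPSILON)                // target is 1
      return (o == 0.) ? DBL_MAX : -std::log(o);
   if (o == 0. || o == 1.)                    // output on the wrong boundary
      return DBL_MAX;
   return -(t * std::log(o / t) + (1. - t) * std::log((1. - o) / (1. - t)));
}
```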
1120
1121////////////////////////////////////////////////////////////////////////////////
1122/// Cross entropy error for a softmax output neuron, for a given event
1123
1124Double_t TMultiLayerPerceptron::GetCrossEntropy() const
1125{
1126 Double_t error = 0;
1127 for (Int_t i = 0; i < fLastLayer.GetEntriesFast(); i++) {
1128 TNeuron *neuron = (TNeuron *) fLastLayer[i];
1129 Double_t output = neuron->GetValue(); // softmax output and target
1130 Double_t target = neuron->GetTarget(); // values lie in [0,1]
1131 if (target > DBL_EPSILON) { // (target == 0) => dE = 0
1132 if (output == 0.0)
1133 error = DBL_MAX;
1134 else
1135 error -= target * TMath::Log(output / target);
1136 }
1137 }
1138 return error;
1139}
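Same idea for the softmax case: terms with a zero target contribute nothing, and the sum vanishes when every output equals its target. A standalone sketch over a whole layer, with plain vectors instead of the fLastLayer container (`crossEntropySoftmax` is an illustrative name):

```cpp
#include <cfloat>
#include <cmath>
#include <cstddef>
#include <vector>

// Cross-entropy for a softmax layer, summed over output neurons as in
// GetCrossEntropy() above; targets equal to 0 are skipped.
double crossEntropySoftmax(const std::vector<double>& output,
                           const std::vector<double>& target)
{
   double error = 0.;
   for (std::size_t i = 0; i < output.size(); i++) {
      if (target[i] > DBL_EPSILON) {   // (target == 0) => no contribution
         if (output[i] == 0.)
            error = DBL_MAX;           // output on the wrong boundary
         else
            error -= target[i] * std::log(output[i] / target[i]);
      }
   }
   return error;
}
```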
1140
1141////////////////////////////////////////////////////////////////////////////////
1142/// Compute the DEDw = sum on all training events of dedw for each weight
1143/// normalized by the number of events.
1144
1145void TMultiLayerPerceptron::ComputeDEDw() const
1146{
1147 Int_t i,j;
1148 Int_t nentries = fSynapses.GetEntriesFast();
1149 TSynapse *synapse;
1150 for (i=0;i<nentries;i++) {
1151 synapse = (TSynapse *) fSynapses.UncheckedAt(i);
1152 synapse->SetDEDw(0.);
1153 }
1154 TNeuron *neuron;
1155 nentries = fNetwork.GetEntriesFast();
1156 for (i=0;i<nentries;i++) {
1157 neuron = (TNeuron *) fNetwork.UncheckedAt(i);
1158 neuron->SetDEDw(0.);
1159 }
1160 Double_t eventWeight = 1.;
1161 if (fTraining) {
1162 Int_t nEvents = fTraining->GetN();
1163 for (i = 0; i < nEvents; i++) {
1164 GetEntry(fTraining->GetEntry(i));
1165 eventWeight = fEventWeight->EvalInstance();
1166 eventWeight *= fCurrentTreeWeight;
1167 nentries = fSynapses.GetEntriesFast();
1168 for (j=0;j<nentries;j++) {
1169 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
1170 synapse->SetDEDw(synapse->GetDEDw() + (synapse->GetDeDw()*eventWeight));
1171 }
1172 nentries = fNetwork.GetEntriesFast();
1173 for (j=0;j<nentries;j++) {
1174 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
1175 neuron->SetDEDw(neuron->GetDEDw() + (neuron->GetDeDw()*eventWeight));
1176 }
1177 }
1178 nentries = fSynapses.GetEntriesFast();
1179 for (j=0;j<nentries;j++) {
1180 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
1181 synapse->SetDEDw(synapse->GetDEDw() / (Double_t) nEvents);
1182 }
1183 nentries = fNetwork.GetEntriesFast();
1184 for (j=0;j<nentries;j++) {
1185 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
1186 neuron->SetDEDw(neuron->GetDEDw() / (Double_t) nEvents);
1187 }
1188 } else if (fData) {
1189 Int_t nEvents = (Int_t) fData->GetEntries();
1190 for (i = 0; i < nEvents; i++) {
1191 GetEntry(i);
1192 eventWeight = fEventWeight->EvalInstance();
1193 eventWeight *= fCurrentTreeWeight;
1194 nentries = fSynapses.GetEntriesFast();
1195 for (j=0;j<nentries;j++) {
1196 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
1197 synapse->SetDEDw(synapse->GetDEDw() + (synapse->GetDeDw()*eventWeight));
1198 }
1199 nentries = fNetwork.GetEntriesFast();
1200 for (j=0;j<nentries;j++) {
1201 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
1202 neuron->SetDEDw(neuron->GetDEDw() + (neuron->GetDeDw()*eventWeight));
1203 }
1204 }
1205 nentries = fSynapses.GetEntriesFast();
1206 for (j=0;j<nentries;j++) {
1207 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
1208 synapse->SetDEDw(synapse->GetDEDw() / (Double_t) nEvents);
1209 }
1210 nentries = fNetwork.GetEntriesFast();
1211 for (j=0;j<nentries;j++) {
1212 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
1213 neuron->SetDEDw(neuron->GetDEDw() / (Double_t) nEvents);
1214 }
1215 }
1216}
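The scheme above is: zero the accumulators, add each event's dE/dw scaled by its event weight, then divide by the number of events. The accumulate-then-average step for a single weight, sketched without ROOT types (`averageDEDw` is an illustrative name, not part of the class):

```cpp
#include <cstddef>
#include <vector>

// Weighted average of per-event gradients dE/dw for one weight,
// mirroring the accumulate-then-divide scheme of ComputeDEDw() above.
double averageDEDw(const std::vector<double>& dedwPerEvent,
                   const std::vector<double>& eventWeight)
{
   double dedw = 0.;
   for (std::size_t i = 0; i < dedwPerEvent.size(); i++)
      dedw += dedwPerEvent[i] * eventWeight[i];
   return dedw / static_cast<double>(dedwPerEvent.size());
}
```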
1217
1218////////////////////////////////////////////////////////////////////////////////
1219/// Randomize the weights
1220
1221void TMultiLayerPerceptron::Randomize() const
1222{
1223 Int_t nentries = fSynapses.GetEntriesFast();
1224 Int_t j;
1225 TSynapse *synapse;
1226 TNeuron *neuron;
1227 TTimeStamp ts;
1228 TRandom3 gen(ts.GetSec());
1229 for (j=0;j<nentries;j++) {
1230 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
1231 synapse->SetWeight(gen.Rndm() - 0.5);
1232 }
1233 nentries = fNetwork.GetEntriesFast();
1234 for (j=0;j<nentries;j++) {
1235 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
1236 neuron->SetWeight(gen.Rndm() - 0.5);
1237 }
1238}
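Randomize() draws every weight uniformly in [-0.5, 0.5) from a TRandom3 seeded with the current time. An equivalent sketch using only the standard library (`randomWeights` is an illustrative helper; the seed is taken as a parameter here to keep the sketch reproducible):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Uniform weight initialisation in [-0.5, 0.5), as done by Randomize()
// above (which uses TRandom3 seeded from a TTimeStamp).
std::vector<double> randomWeights(std::size_t n, unsigned seed)
{
   std::mt19937 gen(seed);
   std::uniform_real_distribution<double> dist(-0.5, 0.5);
   std::vector<double> w(n);
   for (double& x : w)
      x = dist(gen);
   return w;
}
```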
1239
1240////////////////////////////////////////////////////////////////////////////////
1241/// Connects the TTree to Neurons in input and output
1242/// layers. The formulas associated with each neuron are created
1243/// and registered with the network formula manager.
1244/// By default, the branch is not normalised since this would degrade
1245/// performance for classification jobs.
1246/// Normalisation can be requested by putting '@' in front of the formula.
1247
1248void TMultiLayerPerceptron::AttachData()
1249{
1250 Int_t j = 0;
1251 TNeuron *neuron = 0;
1252 Bool_t normalize = false;
1253 fManager = new TTreeFormulaManager;
1254
1255 // Set the size of the internal array of parameters of the formula
1256 Int_t maxop, maxpar, maxconst;
1257 ROOT::v5::TFormula::GetMaxima(maxop, maxpar, maxconst);
1258 ROOT::v5::TFormula::SetMaxima(10, 10, 10);
1259
1260 //first layer
1261 const TString input = TString(fStructure(0, fStructure.First(':')));
1262 const TObjArray *inpL = input.Tokenize(", ");
1263 Int_t nentries = fFirstLayer.GetEntriesFast();
1264 // make sure nentries == entries in inpL
1265 R__ASSERT(nentries == inpL->GetLast()+1);
1266 for (j=0;j<nentries;j++) {
1267 normalize = false;
1268 const TString brName = ((TObjString *)inpL->At(j))->GetString();
1269 neuron = (TNeuron *) fFirstLayer.UncheckedAt(j);
1270 if (brName[0]=='@')
1271 normalize = true;
1272 fManager->Add(neuron->UseBranch(fData,brName.Data() + (normalize?1:0)));
1273 if(!normalize) neuron->SetNormalisation(0., 1.);
1274 }
1275 delete inpL;
1276
1277 // last layer
1278 const TString output = TString(
1279 fStructure(fStructure.Last(':') + 1,
1280 fStructure.Length() - fStructure.Last(':')));
1281 const TObjArray *outL = output.Tokenize(", ");
1282 nentries = fLastLayer.GetEntriesFast();
1283 // make sure nentries == entries in outL
1284 R__ASSERT(nentries == outL->GetLast()+1);
1285 for (j=0;j<nentries;j++) {
1286 normalize = false;
1287 const TString brName = ((TObjString *)outL->At(j))->GetString();
1288 neuron = (TNeuron *) fLastLayer.UncheckedAt(j);
1289 if (brName[0]=='@')
1290 normalize = true;
1291 fManager->Add(neuron->UseBranch(fData,brName.Data() + (normalize?1:0)));
1292 if(!normalize) neuron->SetNormalisation(0., 1.);
1293 }
1294 delete outL;
1295
1296 fManager->Add((fEventWeight = new TTreeFormula("NNweight",fWeight.Data(),fData)));
1297 //fManager->Sync();
1298
1299 // Set the old values
1300 ROOT::v5::TFormula::SetMaxima(maxop, maxpar, maxconst);
1301}
1302
1303////////////////////////////////////////////////////////////////////////////////
1304/// Expand the structure of the first layer
1305
1306void TMultiLayerPerceptron::ExpandStructure()
1307{
1308 TString input = TString(fStructure(0, fStructure.First(':')));
1309 const TObjArray *inpL = input.Tokenize(", ");
1310 Int_t nneurons = inpL->GetLast()+1;
1311
1312 TString hiddenAndOutput = TString(
1313 fStructure(fStructure.First(':') + 1,
1314 fStructure.Length() - fStructure.First(':')));
1315 TString newInput;
1316 Int_t i = 0;
1317 // loop on input neurons
1318 for (i = 0; i<nneurons; i++) {
1319 const TString name = ((TObjString *)inpL->At(i))->GetString();
1320 TTreeFormula f("sizeTestFormula",name,fData);
1321 // Variable size arrays are unreliable
1322 if(f.GetMultiplicity()==1 && f.GetNdata()>1) {
1323 Warning("TMultiLayerPerceptron::ExpandStructure()","Variable size arrays cannot be used to build implicitly an input layer. The index 0 will be assumed.");
1324 }
1325 // Check if we are dealing with an array... then expand
1326 // The array operator used is {}. It is detected in TNeuron, and
1327 // passed directly as instance index of the TTreeFormula,
1328 // so that complex compounds made of arrays can be used without
1329 // parsing the details.
1330 else if(f.GetNdata()>1) {
1331 for(Int_t j=0; j<f.GetNdata(); j++) {
1332 if(i||j) newInput += ",";
1333 newInput += name;
1334 newInput += "{";
1335 newInput += j;
1336 newInput += "}";
1337 }
1338 continue;
1339 }
1340 if(i) newInput += ",";
1341 newInput += name;
1342 }
1343 delete inpL;
1344
1345 // Save the result
1346 fStructure = newInput + ":" + hiddenAndOutput;
1347}
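For an array-typed input with n elements, the loop above turns a single name into n comma-separated entries carrying a "{index}" suffix, which TNeuron later interprets as the TTreeFormula instance index. The rewriting itself can be sketched as (`expandArrayInput` is an illustrative helper, not part of the class):

```cpp
#include <string>

// Expand one array-typed input name into ndata indexed entries,
// reproducing the "name{j}" naming used by ExpandStructure() above.
std::string expandArrayInput(const std::string& name, int ndata)
{
   std::string expanded;
   for (int j = 0; j < ndata; j++) {
      if (j) expanded += ",";
      expanded += name + "{" + std::to_string(j) + "}";
   }
   return expanded;
}
```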
1348
1349////////////////////////////////////////////////////////////////////////////////
1350/// Instantiates the network from the description
1351
1352void TMultiLayerPerceptron::BuildNetwork()
1353{
1355 TString input = TString(fStructure(0, fStructure.First(':')));
1356 TString hidden = TString(
1357 fStructure(fStructure.First(':') + 1,
1358 fStructure.Last(':') - fStructure.First(':') - 1));
1359 TString output = TString(
1360 fStructure(fStructure.Last(':') + 1,
1361 fStructure.Length() - fStructure.Last(':')));
1362 Int_t bll = atoi(TString(
1363 hidden(hidden.Last(':') + 1,
1364 hidden.Length() - (hidden.Last(':') + 1))).Data());
1365 if (input.Length() == 0) {
1366 Error("BuildNetwork()","malformed structure. No input layer.");
1367 return;
1368 }
1369 if (output.Length() == 0) {
1370 Error("BuildNetwork()","malformed structure. No output layer.");
1371 return;
1372 }
1373 BuildFirstLayer(input);
1374 BuildHiddenLayers(hidden);
1375 BuildLastLayer(output, bll);
1376}
1377
1378////////////////////////////////////////////////////////////////////////////////
1379/// Instantiates the neurons of the input layer.
1380/// Inputs are normalised and the type is set to kOff
1381/// (simple forward of the formula value)
1382
1383void TMultiLayerPerceptron::BuildFirstLayer(TString & input)
1384{
1385 const TObjArray *inpL = input.Tokenize(", ");
1386 const Int_t nneurons =inpL->GetLast()+1;
1387 TNeuron *neuron = 0;
1388 Int_t i = 0;
1389 for (i = 0; i<nneurons; i++) {
1390 const TString name = ((TObjString *)inpL->At(i))->GetString();
1391 neuron = new TNeuron(TNeuron::kOff, name);
1392 fFirstLayer.AddLast(neuron);
1393 fNetwork.AddLast(neuron);
1394 }
1395 delete inpL;
1396}
1397
1398////////////////////////////////////////////////////////////////////////////////
1399/// Builds hidden layers.
1400
1401void TMultiLayerPerceptron::BuildHiddenLayers(TString & hidden)
1402{
1403 Int_t beg = 0;
1404 Int_t end = hidden.Index(":", beg + 1);
1405 Int_t prevStart = 0;
1406 Int_t prevStop = fNetwork.GetEntriesFast();
1407 Int_t layer = 1;
1408 while (end != -1) {
1409 BuildOneHiddenLayer(hidden(beg, end - beg), layer, prevStart, prevStop, false);
1410 beg = end + 1;
1411 end = hidden.Index(":", beg + 1);
1412 }
1413
1414 BuildOneHiddenLayer(hidden(beg, hidden.Length() - beg), layer, prevStart, prevStop, true);
1415}
1416
1417////////////////////////////////////////////////////////////////////////////////
1418/// Builds a hidden layer, updates the number of layers.
1419
1420void TMultiLayerPerceptron::BuildOneHiddenLayer(const TString& sNumNodes, Int_t& layer,
1421 Int_t& prevStart, Int_t& prevStop,
1422 Bool_t lastLayer)
1423{
1424 TNeuron *neuron = 0;
1425 TSynapse *synapse = 0;
1426 TString name;
1427 if (!sNumNodes.IsAlnum() || sNumNodes.IsAlpha()) {
1428 Error("BuildOneHiddenLayer",
1429 "The specification '%s' for hidden layer %d must contain only numbers!",
1430 sNumNodes.Data(), layer - 1);
1431 } else {
1432 Int_t num = atoi(sNumNodes.Data());
1433 for (Int_t i = 0; i < num; i++) {
1434 name.Form("HiddenL%d:N%d",layer,i);
1435 neuron = new TNeuron(fType, name, "", (const char*)fextF, (const char*)fextD);
1436 fNetwork.AddLast(neuron);
1437 for (Int_t j = prevStart; j < prevStop; j++) {
1438 synapse = new TSynapse((TNeuron *) fNetwork[j], neuron);
1439 fSynapses.AddLast(synapse);
1440 }
1441 }
1442
1443 if (!lastLayer) {
1444 // tell each neuron which ones are in its own layer (for Softmax)
1445 Int_t nEntries = fNetwork.GetEntriesFast();
1446 for (Int_t i = prevStop; i < nEntries; i++) {
1447 neuron = (TNeuron *) fNetwork[i];
1448 for (Int_t j = prevStop; j < nEntries; j++)
1449 neuron->AddInLayer((TNeuron *) fNetwork[j]);
1450 }
1451 }
1452
1453 prevStart = prevStop;
1454 prevStop = fNetwork.GetEntriesFast();
1455 layer++;
1456 }
1457}
1458
1459////////////////////////////////////////////////////////////////////////////////
1460/// Builds the output layer
1461/// Neurons are linear combinations of their inputs, by default.
1462/// If the structure ends with "!", neurons are set up for classification,
1463/// i.e. with a sigmoid (1 neuron) or softmax (more neurons) activation function.
1464
1465void TMultiLayerPerceptron::BuildLastLayer(TString & output, Int_t prev)
1466{
1467 Int_t nneurons = output.CountChar(',')+1;
1468 if (fStructure.EndsWith("!")) {
1469 fStructure = TString(fStructure(0, fStructure.Length() - 1)); // remove "!"
1470 if (nneurons == 1)
1471 fOutType = TNeuron::kSigmoid;
1472 else
1473 fOutType = TNeuron::kSoftmax;
1474 }
1475 Int_t prevStop = fNetwork.GetEntriesFast();
1476 Int_t prevStart = prevStop - prev;
1477 Ssiz_t pos = 0;
1478 TNeuron *neuron;
1479 TSynapse *synapse;
1480 TString name;
1481 Int_t i,j;
1482 for (i = 0; i<nneurons; i++) {
1483 Ssiz_t nextpos=output.Index(",",pos);
1484 if (nextpos!=kNPOS)
1485 name=output(pos,nextpos-pos);
1486 else name=output(pos,output.Length());
1487 pos = nextpos + 1;
1488 neuron = new TNeuron(fOutType, name);
1489 for (j = prevStart; j < prevStop; j++) {
1490 synapse = new TSynapse((TNeuron *) fNetwork[j], neuron);
1491 fSynapses.AddLast(synapse);
1492 }
1493 fLastLayer.AddLast(neuron);
1494 fNetwork.AddLast(neuron);
1495 }
1496 // tell each neuron which ones are in its own layer (for Softmax)
1497 Int_t nEntries = fNetwork.GetEntriesFast();
1498 for (i = prevStop; i < nEntries; i++) {
1499 neuron = (TNeuron *) fNetwork[i];
1500 for (j = prevStop; j < nEntries; j++)
1501 neuron->AddInLayer((TNeuron *) fNetwork[j]);
1502 }
1503
1504}
1505
1506////////////////////////////////////////////////////////////////////////////////
1507/// Draws the neural net output
1508/// It produces a histogram with the output for the two datasets.
1509/// Index is the number of the desired output neuron.
1510/// "option" can contain:
1511/// - test or train to select a dataset
1512/// - comp to produce a X-Y comparison plot
1513/// - nocanv to not create a new TCanvas for the plot
1514
1515void TMultiLayerPerceptron::DrawResult(Int_t index, Option_t * option) const
1516{
1517 TString opt = option;
1518 opt.ToLower();
1519 TNeuron *out = (TNeuron *) (fLastLayer.At(index));
1520 if (!out) {
1521 Error("DrawResult()","no such output.");
1522 return;
1523 }
1524 //TCanvas *canvas = new TCanvas("NNresult", "Neural Net output");
1525 if (!opt.Contains("nocanv"))
1526 new TCanvas("NNresult", "Neural Net output");
1527 const Double_t *norm = out->GetNormalisation();
1528 TEventList *events = 0;
1529 TString setname;
1530 Int_t i;
1531 if (opt.Contains("train")) {
1532 events = fTraining;
1533 setname = Form("train%d",index);
1534 } else if (opt.Contains("test")) {
1535 events = fTest;
1536 setname = Form("test%d",index);
1537 }
1538 if ((!fData) || (!events)) {
1539 Error("DrawResult()","no dataset.");
1540 return;
1541 }
1542 if (opt.Contains("comp")) {
1543 //comparison plot
1544 TString title = "Neural Net Output control. ";
1545 title += setname;
1546 setname = "MLP_" + setname + "_comp";
1547 TH2D *hist = ((TH2D *) gDirectory->Get(setname.Data()));
1548 if (!hist)
1549 hist = new TH2D(setname.Data(), title.Data(), 50, -1, 1, 50, -1, 1);
1550 hist->Reset();
1551 Int_t nEvents = events->GetN();
1552 for (i = 0; i < nEvents; i++) {
1553 GetEntry(events->GetEntry(i));
1554 hist->Fill(out->GetValue(), (out->GetBranch() - norm[1]) / norm[0]);
1555 }
1556 hist->Draw();
1557 } else {
1558 //output plot
1559 TString title = "Neural Net Output. ";
1560 title += setname;
1561 setname = "MLP_" + setname;
1562 TH1D *hist = ((TH1D *) gDirectory->Get(setname.Data()));
1563 if (!hist)
1564 hist = new TH1D(setname, title, 50, 1, -1);
1565 hist->Reset();
1566 Int_t nEvents = events->GetN();
1567 for (i = 0; i < nEvents; i++)
1568 hist->Fill(Result(events->GetEntry(i), index));
1569 hist->Draw();
1570 if (opt.Contains("train") && opt.Contains("test")) {
1571 events = fTraining;
1572 setname = "train";
1573 hist = ((TH1D *) gDirectory->Get("MLP_test"));
1574 if (!hist)
1575 hist = new TH1D(setname, title, 50, 1, -1);
1576 hist->Reset();
1577 nEvents = events->GetN();
1578 for (i = 0; i < nEvents; i++)
1579 hist->Fill(Result(events->GetEntry(i), index));
1580 hist->Draw("same");
1581 }
1582 }
1583}
1584
1585////////////////////////////////////////////////////////////////////////////////
1586/// Dumps the weights to a text file.
1587/// Set filename to "-" (default) to dump to the standard output
1588
1589Bool_t TMultiLayerPerceptron::DumpWeights(Option_t * filename) const
1590{
1591 TString filen = filename;
1592 std::ostream * output;
1593 if (filen == "") {
1594 Error("TMultiLayerPerceptron::DumpWeights()","Invalid file name");
1595 return kFALSE;
1596 }
1597 if (filen == "-")
1598 output = &std::cout;
1599 else
1600 output = new std::ofstream(filen.Data());
1601 TNeuron *neuron = 0;
1602 *output << "#input normalization" << std::endl;
1603 Int_t nentries = fFirstLayer.GetEntriesFast();
1604 Int_t j=0;
1605 for (j=0;j<nentries;j++) {
1606 neuron = (TNeuron *) fFirstLayer.UncheckedAt(j);
1607 *output << neuron->GetNormalisation()[0] << " "
1608 << neuron->GetNormalisation()[1] << std::endl;
1609 }
1610 *output << "#output normalization" << std::endl;
1611 nentries = fLastLayer.GetEntriesFast();
1612 for (j=0;j<nentries;j++) {
1613 neuron = (TNeuron *) fLastLayer.UncheckedAt(j);
1614 *output << neuron->GetNormalisation()[0] << " "
1615 << neuron->GetNormalisation()[1] << std::endl;
1616 }
1617 *output << "#neurons weights" << std::endl;
1618 TObjArrayIter *it = (TObjArrayIter *) fNetwork.MakeIterator();
1619 while ((neuron = (TNeuron *) it->Next()))
1620 *output << neuron->GetWeight() << std::endl;
1621 delete it;
1622 it = (TObjArrayIter *) fSynapses.MakeIterator();
1623 TSynapse *synapse = 0;
1624 *output << "#synapses weights" << std::endl;
1625 while ((synapse = (TSynapse *) it->Next()))
1626 *output << synapse->GetWeight() << std::endl;
1627 delete it;
1628 if (filen != "-") {
1629 ((std::ofstream *) output)->close();
1630 delete output;
1631 }
1632 return kTRUE;
1633}
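The file written above is plain text: "#..." section headers followed by one value per line (the normalisation sections hold one pair per line). A sketch of the two weight sections with plain vectors and a string stream (`dumpWeights` here is an illustrative free function, not the class method):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Reproduce the weight sections of the DumpWeights() text layout:
// a commented header, then one weight per line.
std::string dumpWeights(const std::vector<double>& neuronW,
                        const std::vector<double>& synapseW)
{
   std::ostringstream out;
   out << "#neurons weights\n";
   for (double w : neuronW)
      out << w << "\n";
   out << "#synapses weights\n";
   for (double w : synapseW)
      out << w << "\n";
   return out.str();
}
```

LoadWeights() below reads the sections back in the same order, so the header lines act as separators that are skipped with getline.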
1634
1635////////////////////////////////////////////////////////////////////////////////
1636/// Loads the weights from a text file conforming to the format
1637/// defined by DumpWeights.
1638
1639Bool_t TMultiLayerPerceptron::LoadWeights(Option_t * filename)
1640{
1641 TString filen = filename;
1642 Double_t w;
1643 if (filen == "") {
1644 Error("TMultiLayerPerceptron::LoadWeights()","Invalid file name");
1645 return kFALSE;
1646 }
1647 char *buff = new char[100];
1648 std::ifstream input(filen.Data());
1649 // input normalization
1650 input.getline(buff, 100);
1651 TObjArrayIter *it = (TObjArrayIter *) fFirstLayer.MakeIterator();
1652 Float_t n1,n2;
1653 TNeuron *neuron = 0;
1654 while ((neuron = (TNeuron *) it->Next())) {
1655 input >> n1 >> n2;
1656 neuron->SetNormalisation(n2,n1);
1657 }
1658 input.getline(buff, 100);
1659 // output normalization
1660 input.getline(buff, 100);
1661 delete it;
1662 it = (TObjArrayIter *) fLastLayer.MakeIterator();
1663 while ((neuron = (TNeuron *) it->Next())) {
1664 input >> n1 >> n2;
1665 neuron->SetNormalisation(n2,n1);
1666 }
1667 input.getline(buff, 100);
1668 // neuron weights
1669 input.getline(buff, 100);
1670 delete it;
1671 it = (TObjArrayIter *) fNetwork.MakeIterator();
1672 while ((neuron = (TNeuron *) it->Next())) {
1673 input >> w;
1674 neuron->SetWeight(w);
1675 }
1676 delete it;
1677 input.getline(buff, 100);
1678 // synapse weights
1679 input.getline(buff, 100);
1681 TSynapse *synapse = 0;
1682 while ((synapse = (TSynapse *) it->Next())) {
1683 input >> w;
1684 synapse->SetWeight(w);
1685 }
1686 delete it;
1687 delete[] buff;
1688 return kTRUE;
1689}
1690
1691////////////////////////////////////////////////////////////////////////////////
1692/// Returns the Neural Net for a given set of input parameters
1693/// The number of parameters must equal the number of input neurons.
1694
1695Double_t TMultiLayerPerceptron::Evaluate(Int_t index, Double_t *params) const
1696{
1697 TObjArrayIter *it = (TObjArrayIter *) fNetwork.MakeIterator();
1698 TNeuron *neuron;
1699 while ((neuron = (TNeuron *) it->Next()))
1700 neuron->SetNewEvent();
1701 delete it;
1702 it = (TObjArrayIter *) fFirstLayer.MakeIterator();
1703 Int_t i=0;
1704 while ((neuron = (TNeuron *) it->Next()))
1705 neuron->ForceExternalValue(params[i++]);
1706 delete it;
1707 TNeuron *out = (TNeuron *) (fLastLayer.At(index));
1708 if (out)
1709 return out->GetValue();
1710 else
1711 return 0;
1712}
1713
1714////////////////////////////////////////////////////////////////////////////////
1715/// Exports the NN as a function for any non-ROOT-dependent code.
1716/// Supported languages are currently C++, FORTRAN and Python.
1717/// This feature is also useful if you want to plot the NN as
1718/// a function (TF1 or TF2).
1719
1720void TMultiLayerPerceptron::Export(Option_t * filename, Option_t * language) const
1721{
1722 TString lg = language;
1723 lg.ToUpper();
1724 Int_t i;
1725 if (fType == TNeuron::kExternal) {
1726 Warning("TMultiLayerPerceptron::Export","Request to export a network using an external function");
1727 }
1728 if (lg == "C++") {
1729 TString basefilename = filename;
1730 Int_t slash = basefilename.Last('/')+1;
1731 if (slash) basefilename = TString(basefilename(slash, basefilename.Length()-slash));
1732
1733 TString classname = basefilename;
1734 TString header = filename;
1735 header += ".h";
1736 TString source = filename;
1737 source += ".cxx";
1738 std::ofstream headerfile(header);
1739 std::ofstream sourcefile(source);
1740 headerfile << "#ifndef " << basefilename << "_h" << std::endl;
1741 headerfile << "#define " << basefilename << "_h" << std::endl << std::endl;
1742 headerfile << "class " << classname << " { " << std::endl;
1743 headerfile << "public:" << std::endl;
1744 headerfile << " " << classname << "() {}" << std::endl;
1745 headerfile << " ~" << classname << "() {}" << std::endl;
1746 sourcefile << "#include \"" << header << "\"" << std::endl;
1747 sourcefile << "#include <cmath>" << std::endl << std::endl;
1748 headerfile << " double Value(int index";
1749 sourcefile << "double " << classname << "::Value(int index";
1750 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++) {
1751 headerfile << ",double in" << i;
1752 sourcefile << ",double in" << i;
1753 }
1754 headerfile << ");" << std::endl;
1755 sourcefile << ") {" << std::endl;
1756 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++)
1757 sourcefile << " input" << i << " = (in" << i << " - "
1758 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[1] << ")/"
1759 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[0] << ";"
1760 << std::endl;
1761 sourcefile << " switch(index) {" << std::endl;
1762 TNeuron *neuron;
1763 TObjArrayIter *it = (TObjArrayIter *) fLastLayer.MakeIterator();
1764 Int_t idx = 0;
1765 while ((neuron = (TNeuron *) it->Next()))
1766 sourcefile << " case " << idx++ << ":" << std::endl
1767 << " return neuron" << neuron << "();" << std::endl;
1768 sourcefile << " default:" << std::endl
1769 << " return 0.;" << std::endl << " }"
1770 << std::endl;
1771 sourcefile << "}" << std::endl << std::endl;
1772 headerfile << " double Value(int index, double* input);" << std::endl;
1773 sourcefile << "double " << classname << "::Value(int index, double* input) {" << std::endl;
1774 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++)
1775 sourcefile << " input" << i << " = (input[" << i << "] - "
1776 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[1] << ")/"
1777 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[0] << ";"
1778 << std::endl;
1779 sourcefile << " switch(index) {" << std::endl;
1780 delete it;
1781 it = (TObjArrayIter *) fLastLayer.MakeIterator();
1782 idx = 0;
1783 while ((neuron = (TNeuron *) it->Next()))
1784 sourcefile << " case " << idx++ << ":" << std::endl
1785 << " return neuron" << neuron << "();" << std::endl;
1786 sourcefile << " default:" << std::endl
1787 << " return 0.;" << std::endl << " }"
1788 << std::endl;
1789 sourcefile << "}" << std::endl << std::endl;
1790 headerfile << "private:" << std::endl;
1791 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++)
1792 headerfile << " double input" << i << ";" << std::endl;
1793 delete it;
1794 it = (TObjArrayIter *) fNetwork.MakeIterator();
1795 idx = 0;
1796 while ((neuron = (TNeuron *) it->Next())) {
1797 if (!neuron->GetPre(0)) {
1798 headerfile << " double neuron" << neuron << "();" << std::endl;
1799 sourcefile << "double " << classname << "::neuron" << neuron
1800 << "() {" << std::endl;
1801 sourcefile << " return input" << idx++ << ";" << std::endl;
1802 sourcefile << "}" << std::endl << std::endl;
1803 } else {
1804 headerfile << " double input" << neuron << "();" << std::endl;
1805 sourcefile << "double " << classname << "::input" << neuron
1806 << "() {" << std::endl;
1807 sourcefile << " double input = " << neuron->GetWeight()
1808 << ";" << std::endl;
1809 TSynapse *syn = 0;
1810 Int_t n = 0;
1811 while ((syn = neuron->GetPre(n++))) {
1812 sourcefile << " input += synapse" << syn << "();" << std::endl;
1813 }
1814 sourcefile << " return input;" << std::endl;
1815 sourcefile << "}" << std::endl << std::endl;
1816
1817 headerfile << " double neuron" << neuron << "();" << std::endl;
1818 sourcefile << "double " << classname << "::neuron" << neuron << "() {" << std::endl;
1819 sourcefile << " double input = input" << neuron << "();" << std::endl;
1820 switch(neuron->GetType()) {
1821 case (TNeuron::kSigmoid):
1822 {
1823 sourcefile << " return ((input < -709. ? 0. : (1/(1+exp(-input)))) * ";
1824 break;
1825 }
1826 case (TNeuron::kLinear):
1827 {
1828 sourcefile << " return (input * ";
1829 break;
1830 }
1831 case (TNeuron::kTanh):
1832 {
1833 sourcefile << " return (tanh(input) * ";
1834 break;
1835 }
1836 case (TNeuron::kGauss):
1837 {
1838 sourcefile << " return (exp(-input*input) * ";
1839 break;
1840 }
1841 case (TNeuron::kSoftmax):
1842 {
1843 sourcefile << " return (exp(input) / (";
1844 Int_t nn = 0;
1845 TNeuron* side = neuron->GetInLayer(nn++);
1846 sourcefile << "exp(input" << side << "())";
1847 while ((side = neuron->GetInLayer(nn++)))
1848 sourcefile << " + exp(input" << side << "())";
1849 sourcefile << ") * ";
1850 break;
1851 }
1852 default:
1853 {
1854 sourcefile << " return (0.0 * ";
1855 }
1856 }
1857 sourcefile << neuron->GetNormalisation()[0] << ")+" ;
1858 sourcefile << neuron->GetNormalisation()[1] << ";" << std::endl;
1859 sourcefile << "}" << std::endl << std::endl;
1860 }
1861 }
1862 delete it;
1863 TSynapse *synapse = 0;
1864 it = (TObjArrayIter *) fSynapses.MakeIterator();
1865 while ((synapse = (TSynapse *) it->Next())) {
1866 headerfile << " double synapse" << synapse << "();" << std::endl;
1867 sourcefile << "double " << classname << "::synapse"
1868 << synapse << "() {" << std::endl;
1869 sourcefile << " return (neuron" << synapse->GetPre()
1870 << "()*" << synapse->GetWeight() << ");" << std::endl;
1871 sourcefile << "}" << std::endl << std::endl;
1872 }
1873 delete it;
1874 headerfile << "};" << std::endl << std::endl;
1875 headerfile << "#endif // " << basefilename << "_h" << std::endl << std::endl;
1876 headerfile.close();
1877 sourcefile.close();
1878 std::cout << header << " and " << source << " created." << std::endl;
1879 }
1880 else if(lg == "FORTRAN") {
1881 TString implicit = " implicit double precision (a-h,n-z)\n";
1882 std::ofstream sigmoid("sigmoid.f");
1883 sigmoid << " double precision FUNCTION SIGMOID(X)" << std::endl
1884 << implicit
1885 << " IF(X.GT.37.) THEN" << std::endl
1886 << " SIGMOID = 1." << std::endl
1887 << " ELSE IF(X.LT.-709.) THEN" << std::endl
1888 << " SIGMOID = 0." << std::endl
1889 << " ELSE" << std::endl
1890 << " SIGMOID = 1./(1.+EXP(-X))" << std::endl
1891 << " ENDIF" << std::endl
1892 << " END" << std::endl;
1893 sigmoid.close();
1894 TString source = filename;
1895 source += ".f";
1896 std::ofstream sourcefile(source);
1897
1898 // Header
1899 sourcefile << " double precision function " << filename
1900 << "(x, index)" << std::endl;
1901 sourcefile << implicit;
1902 sourcefile << " double precision x(" <<
1903 fFirstLayer.GetEntriesFast() << ")" << std::endl << std::endl;
1904
1905 // Last layer
1906 sourcefile << "C --- Last Layer" << std::endl;
1907 TNeuron *neuron;
1908 TObjArrayIter *it = (TObjArrayIter *) fLastLayer.MakeIterator();
1909 Int_t idx = 0;
1910 TString ifelseif = " if (index.eq.";
1911 while ((neuron = (TNeuron *) it->Next())) {
1912 sourcefile << ifelseif.Data() << idx++ << ") then" << std::endl
1913 << " " << filename
1914 << "=neuron" << neuron << "(x);" << std::endl;
1915 ifelseif = " else if (index.eq.";
1916 }
1917 sourcefile << " else" << std::endl
1918 << " " << filename << "=0.d0" << std::endl
1919 << " endif" << std::endl;
1920 sourcefile << " end" << std::endl;
1921
1922 // Network
1923 sourcefile << "C --- First and Hidden layers" << std::endl;
1924 delete it;
1925 it = (TObjArrayIter *) fNetwork.MakeIterator();
1926 idx = 0;
1927 while ((neuron = (TNeuron *) it->Next())) {
1928 sourcefile << " double precision function neuron"
1929 << neuron << "(x)" << std::endl
1930 << implicit;
1931 sourcefile << " double precision x("
1932 << fFirstLayer.GetEntriesFast() << ")" << std::endl << std::endl;
1933 if (!neuron->GetPre(0)) {
1934 sourcefile << " neuron" << neuron
1935 << " = (x(" << idx+1 << ") - "
1936 << ((TNeuron *) fFirstLayer[idx])->GetNormalisation()[1]
1937 << "d0)/"
1938 << ((TNeuron *) fFirstLayer[idx])->GetNormalisation()[0]
1939 << "d0" << std::endl;
1940 idx++;
1941 } else {
1942 sourcefile << " neuron" << neuron
1943 << " = " << neuron->GetWeight() << "d0" << std::endl;
1944 TSynapse *syn;
1945 Int_t n = 0;
1946 while ((syn = neuron->GetPre(n++)))
1947 sourcefile << " neuron" << neuron
1948 << " = neuron" << neuron
1949 << " + synapse" << syn << "(x)" << std::endl;
1950 switch(neuron->GetType()) {
1951 case (TNeuron::kSigmoid):
1952 {
1953 sourcefile << " neuron" << neuron
1954 << "= (sigmoid(neuron" << neuron << ")*";
1955 break;
1956 }
1957 case (TNeuron::kLinear):
1958 {
1959 break;
1960 }
1961 case (TNeuron::kTanh):
1962 {
1963 sourcefile << " neuron" << neuron
1964 << "= (tanh(neuron" << neuron << ")*";
1965 break;
1966 }
1967 case (TNeuron::kGauss):
1968 {
1969 sourcefile << " neuron" << neuron
1970 << "= (exp(-neuron" << neuron << "*neuron"
1971 << neuron << "))*";
1972 break;
1973 }
1974 case (TNeuron::kSoftmax):
1975 {
1976 Int_t nn = 0;
1977 TNeuron* side = neuron->GetInLayer(nn++);
1978 sourcefile << " div = exp(neuron" << side << "())" << std::endl;
1979 while ((side = neuron->GetInLayer(nn++)))
1980 sourcefile << " div = div + exp(neuron" << side << "())" << std::endl;
1981 sourcefile << " neuron" << neuron ;
1982 sourcefile << "= (exp(neuron" << neuron << ") / div * ";
1983 break;
1984 }
1985 default:
1986 {
1987 sourcefile << " neuron " << neuron << "= 0.";
1988 }
1989 }
1990 sourcefile << neuron->GetNormalisation()[0] << "d0)+" ;
1991 sourcefile << neuron->GetNormalisation()[1] << "d0" << std::endl;
1992 }
1993 sourcefile << " end" << std::endl;
1994 }
1995 delete it;
1996
1997 // Synapses
1998 sourcefile << "C --- Synapses" << std::endl;
1999 TSynapse *synapse = 0;
2000 it = (TObjArrayIter *) fSynapses.MakeIterator();
2001 while ((synapse = (TSynapse *) it->Next())) {
2002 sourcefile << " double precision function " << "synapse"
2003 << synapse << "(x)\n" << implicit;
2004 sourcefile << " double precision x("
2005 << fFirstLayer.GetEntriesFast() << ")" << std::endl << std::endl;
2006 sourcefile << " synapse" << synapse
2007 << "=neuron" << synapse->GetPre()
2008 << "(x)*" << synapse->GetWeight() << "d0" << std::endl;
2009 sourcefile << " end" << std::endl << std::endl;
2010 }
2011 delete it;
2012 sourcefile.close();
2013 std::cout << source << " created." << std::endl;
2014 }
2015 else if(lg == "PYTHON") {
2016 TString classname = filename;
2017 TString pyfile = filename;
2018 pyfile += ".py";
2019 std::ofstream pythonfile(pyfile);
2020 pythonfile << "from math import exp" << std::endl << std::endl;
2021 pythonfile << "from math import tanh" << std::endl << std::endl;
2022 pythonfile << "class " << classname << ":" << std::endl;
2023 pythonfile << "\tdef value(self,index";
2024 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++) {
2025 pythonfile << ",in" << i;
2026 }
2027 pythonfile << "):" << std::endl;
2028 for (i = 0; i < fFirstLayer.GetEntriesFast(); i++)
2029 pythonfile << "\t\tself.input" << i << " = (in" << i << " - "
2030 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[1] << ")/"
2031 << ((TNeuron *) fFirstLayer[i])->GetNormalisation()[0] << std::endl;
2032 TNeuron *neuron;
2033 TObjArrayIter *it = (TObjArrayIter *) fLastLayer.MakeIterator();
2034 Int_t idx = 0;
2035 while ((neuron = (TNeuron *) it->Next()))
2036 pythonfile << "\t\tif index==" << idx++
2037 << ": return self.neuron" << neuron << "();" << std::endl;
2038 pythonfile << "\t\treturn 0." << std::endl;
2039 delete it;
2040 it = (TObjArrayIter *) fNetwork.MakeIterator();
2041 idx = 0;
2042 while ((neuron = (TNeuron *) it->Next())) {
2043 pythonfile << "\tdef neuron" << neuron << "(self):" << std::endl;
2044 if (!neuron->GetPre(0))
2045 pythonfile << "\t\treturn self.input" << idx++ << std::endl;
2046 else {
2047 pythonfile << "\t\tinput = " << neuron->GetWeight() << std::endl;
2048 TSynapse *syn;
2049 Int_t n = 0;
2050 while ((syn = neuron->GetPre(n++)))
2051 pythonfile << "\t\tinput = input + self.synapse"
2052 << syn << "()" << std::endl;
2053 switch(neuron->GetType()) {
2054 case (TNeuron::kSigmoid):
2055 {
2056 pythonfile << "\t\tif input<-709. : return " << neuron->GetNormalisation()[1] << std::endl;
2057 pythonfile << "\t\treturn ((1/(1+exp(-input)))*";
2058 break;
2059 }
2060 case (TNeuron::kLinear):
2061 {
2062 pythonfile << "\t\treturn (input*";
2063 break;
2064 }
2065 case (TNeuron::kTanh):
2066 {
2067 pythonfile << "\t\treturn (tanh(input)*";
2068 break;
2069 }
2070 case (TNeuron::kGauss):
2071 {
2072 pythonfile << "\t\treturn (exp(-input*input)*";
2073 break;
2074 }
2075 case (TNeuron::kSoftmax):
2076 {
2077 pythonfile << "\t\treturn (exp(input) / (";
2078 Int_t nn = 0;
2079 TNeuron* side = neuron->GetInLayer(nn++);
2080 pythonfile << "exp(self.neuron" << side << "())";
2081 while ((side = neuron->GetInLayer(nn++)))
2082 pythonfile << " + exp(self.neuron" << side << "())";
2083 pythonfile << ") * ";
2084 break;
2085 }
2086 default:
2087 {
2088 pythonfile << "\t\treturn 0.";
2089 }
2090 }
2091 pythonfile << neuron->GetNormalisation()[0] << ")+" ;
2092 pythonfile << neuron->GetNormalisation()[1] << std::endl;
2093 }
2094 }
2095 delete it;
2096 TSynapse *synapse = 0;
2097 it = (TObjArrayIter *) fSynapses.MakeIterator();
2098 while ((synapse = (TSynapse *) it->Next())) {
2099 pythonfile << "\tdef synapse" << synapse << "(self):" << std::endl;
2100 pythonfile << "\t\treturn (self.neuron" << synapse->GetPre()
2101 << "()*" << synapse->GetWeight() << ")" << std::endl;
2102 }
2103 delete it;
2104 pythonfile.close();
2105 std::cout << pyfile << " created." << std::endl;
2106 }
2107}
2108
2109////////////////////////////////////////////////////////////////////////////////
2110/// Shuffle the Int_t index[n] in input.
2111///
2112/// Input:
2113/// - index: the array to shuffle
2114/// - n: the size of the array
2115///
2116/// Output:
2117/// - index: the shuffled indexes
2118///
2119/// This method is used for stochastic training
2120
2121 void TMultiLayerPerceptron::Shuffle(Int_t * index, Int_t n) const
2122{
2123 TTimeStamp ts;
2124 TRandom3 rnd(ts.GetSec());
2125 Int_t j, k;
2126 Int_t a = n - 1;
2127 for (Int_t i = 0; i < n; i++) {
2128 j = (Int_t) (rnd.Rndm() * a);
2129 k = index[j];
2130 index[j] = index[i];
2131 index[i] = k;
2132 }
2133 return;
2134}
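The swap loop above can be exercised outside ROOT; below is a minimal standalone sketch (illustrative names, plain `std::rand` standing in for `TRandom3`) that keeps the same scheme of swapping each slot with a random slot among the first `n-1`:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <numeric>
#include <vector>

// Same swap scheme as TMultiLayerPerceptron::Shuffle above: for each
// position i, draw a random position j in [0, n-2] and swap slots i and j.
inline void shuffleIndices(std::vector<int> &index, unsigned seed)
{
   std::srand(seed);
   const int n = static_cast<int>(index.size());
   const int a = n - 1;
   for (int i = 0; i < n; i++) {
      const int j = static_cast<int>(std::rand() / (RAND_MAX + 1.0) * a);
      std::swap(index[j], index[i]);
   }
}
```

Whatever the random draws, only swaps are performed, so the result remains a permutation: sorting it must give back 0..n-1.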
2135
2136////////////////////////////////////////////////////////////////////////////////
2137/// One step for the stochastic method
2138/// buffer should contain the previous dw vector and will be updated
2139
2140 void TMultiLayerPerceptron::MLP_Stochastic(Double_t * buffer)
2141{
2142 Int_t nEvents = fTraining->GetN();
2143 Int_t *index = new Int_t[nEvents];
2144 Int_t i,j,nentries;
2145 for (i = 0; i < nEvents; i++)
2146 index[i] = i;
2147 fEta *= fEtaDecay;
2148 Shuffle(index, nEvents);
2149 TNeuron *neuron;
2150 TSynapse *synapse;
2151 for (i = 0; i < nEvents; i++) {
2152 GetEntry(fTraining->GetEntry(index[i]));
2153 // First compute DeDw for all neurons: force calculation before
2154 // modifying the weights.
2155       nentries = fFirstLayer.GetEntriesFast();
2156 for (j=0;j<nentries;j++) {
2157 neuron = (TNeuron *) fFirstLayer.UncheckedAt(j);
2158 neuron->GetDeDw();
2159 }
2160 Int_t cnt = 0;
2161 // Step for all neurons
2162       nentries = fNetwork.GetEntriesFast();
2163 for (j=0;j<nentries;j++) {
2164 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2165 buffer[cnt] = (-fEta) * (neuron->GetDeDw() + fDelta)
2166 + fEpsilon * buffer[cnt];
2167 neuron->SetWeight(neuron->GetWeight() + buffer[cnt++]);
2168 }
2169 // Step for all synapses
2170       nentries = fSynapses.GetEntriesFast();
2171 for (j=0;j<nentries;j++) {
2172 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2173 buffer[cnt] = (-fEta) * (synapse->GetDeDw() + fDelta)
2174 + fEpsilon * buffer[cnt];
2175 synapse->SetWeight(synapse->GetWeight() + buffer[cnt++]);
2176 }
2177 }
2178 delete[]index;
2179}
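Each weight above moves by the same rule, a gradient step with momentum where `buffer` carries the previous step between calls: step(t) = -eta * (dE/dw + delta) + epsilon * step(t-1). A minimal sketch of that single-weight rule (illustrative names, not the ROOT API):

```cpp
#include <cassert>
#include <cmath>

// Momentum update applied per weight in MLP_Stochastic above:
// step(t) = -eta * (dedw + delta) + epsilon * step(t-1);
// the weight moves by step(t), which the caller stores back (the buffer).
inline double momentumStep(double &weight, double dedw, double prevStep,
                           double eta, double delta, double epsilon)
{
   const double step = -eta * (dedw + delta) + epsilon * prevStep;
   weight += step;
   return step;
}
```

With `epsilon = 0` this reduces to plain gradient descent; `delta` adds a constant bias to every derivative, as in the code above.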
2180
2181////////////////////////////////////////////////////////////////////////////////
2182/// One step for the batch (stochastic) method.
2183/// DEDw should have been updated before calling this.
2184
2185 void TMultiLayerPerceptron::MLP_Batch(Double_t * buffer)
2186{
2187 fEta *= fEtaDecay;
2188 Int_t cnt = 0;
2189 TObjArrayIter *it = (TObjArrayIter *) fNetwork.MakeIterator();
2190 TNeuron *neuron = 0;
2191 // Step for all neurons
2192 while ((neuron = (TNeuron *) it->Next())) {
2193 buffer[cnt] = (-fEta) * (neuron->GetDEDw() + fDelta)
2194 + fEpsilon * buffer[cnt];
2195 neuron->SetWeight(neuron->GetWeight() + buffer[cnt++]);
2196 }
2197 delete it;
2198 it = (TObjArrayIter *) fSynapses.MakeIterator();
2199 TSynapse *synapse = 0;
2200 // Step for all synapses
2201 while ((synapse = (TSynapse *) it->Next())) {
2202 buffer[cnt] = (-fEta) * (synapse->GetDEDw() + fDelta)
2203 + fEpsilon * buffer[cnt];
2204 synapse->SetWeight(synapse->GetWeight() + buffer[cnt++]);
2205 }
2206 delete it;
2207}
2208
2209////////////////////////////////////////////////////////////////////////////////
2210/// Sets the weights to a point along a line
2211/// Weights are set to [origin + (dist * dir)].
2212
2213 void TMultiLayerPerceptron::MLP_Line(Double_t * origin, Double_t * dir, Double_t dist)
2214{
2215 Int_t idx = 0;
2216 TNeuron *neuron = 0;
2217 TSynapse *synapse = 0;
2218 TObjArrayIter *it = (TObjArrayIter *) fNetwork.MakeIterator();
2219 while ((neuron = (TNeuron *) it->Next())) {
2220 neuron->SetWeight(origin[idx] + (dir[idx] * dist));
2221 idx++;
2222 }
2223 delete it;
2224 it = (TObjArrayIter *) fSynapses.MakeIterator();
2225 while ((synapse = (TSynapse *) it->Next())) {
2226 synapse->SetWeight(origin[idx] + (dir[idx] * dist));
2227 idx++;
2228 }
2229 delete it;
2230}
2231
2232////////////////////////////////////////////////////////////////////////////////
2233/// Sets the search direction to steepest descent.
2234
2235 void TMultiLayerPerceptron::SteepestDir(Double_t * dir)
2236{
2237 Int_t idx = 0;
2238 TNeuron *neuron = 0;
2239 TSynapse *synapse = 0;
2240 TObjArrayIter *it = (TObjArrayIter *) fNetwork.MakeIterator();
2241 while ((neuron = (TNeuron *) it->Next()))
2242 dir[idx++] = -neuron->GetDEDw();
2243 delete it;
2244 it = (TObjArrayIter *) fSynapses.MakeIterator();
2245 while ((synapse = (TSynapse *) it->Next()))
2246 dir[idx++] = -synapse->GetDEDw();
2247 delete it;
2248}
2249
2250////////////////////////////////////////////////////////////////////////////////
2251/// Search along the line defined by direction.
2252 /// buffer is not used as an input but is overwritten with the new dw
2253/// so that it can be used by a later stochastic step.
2254/// It returns true if the line search fails.
2255
2256 bool TMultiLayerPerceptron::LineSearch(Double_t * direction, Double_t * buffer)
2257{
2258 Int_t idx = 0;
2259 Int_t j,nentries;
2260 TNeuron *neuron = 0;
2261 TSynapse *synapse = 0;
2262 // store weights before line search
2263 Double_t *origin = new Double_t[fNetwork.GetEntriesFast() +
2264 fSynapses.GetEntriesFast()];
2265 nentries = fNetwork.GetEntriesFast();
2266 for (j=0;j<nentries;j++) {
2267 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2268 origin[idx++] = neuron->GetWeight();
2269 }
2270 nentries = fSynapses.GetEntriesFast();
2271 for (j=0;j<nentries;j++) {
2272 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2273 origin[idx++] = synapse->GetWeight();
2274 }
2275 // try to find a triplet (alpha1, alpha2, alpha3) such that
2276 // Error(alpha1)>Error(alpha2)<Error(alpha3)
2277 Double_t err1 = GetError(kTraining);
2278 Double_t alpha1 = 0.;
2279 Double_t alpha2 = fLastAlpha;
2280 if (alpha2 < 0.01)
2281 alpha2 = 0.01;
2282 if (alpha2 > 2.0)
2283 alpha2 = 2.0;
2284 Double_t alpha3 = alpha2;
2285 MLP_Line(origin, direction, alpha2);
2286 Double_t err2 = GetError(kTraining);
2287 Double_t err3 = err2;
2288 Bool_t bingo = false;
2289 Int_t icount;
2290 if (err1 > err2) {
2291 for (icount = 0; icount < 100; icount++) {
2292 alpha3 *= fTau;
2293 MLP_Line(origin, direction, alpha3);
2294 err3 = GetError(kTraining);
2295 if (err3 > err2) {
2296 bingo = true;
2297 break;
2298 }
2299 alpha1 = alpha2;
2300 err1 = err2;
2301 alpha2 = alpha3;
2302 err2 = err3;
2303 }
2304 if (!bingo) {
2305 MLP_Line(origin, direction, 0.);
2306 delete[]origin;
2307 return true;
2308 }
2309 } else {
2310 for (icount = 0; icount < 100; icount++) {
2311 alpha2 /= fTau;
2312 MLP_Line(origin, direction, alpha2);
2313 err2 = GetError(kTraining);
2314 if (err1 > err2) {
2315 bingo = true;
2316 break;
2317 }
2318 alpha3 = alpha2;
2319 err3 = err2;
2320 }
2321 if (!bingo) {
2322 MLP_Line(origin, direction, 0.);
2323 delete[]origin;
2324 fLastAlpha = 0.05;
2325 return true;
2326 }
2327 }
2328 // Sets the weights to the bottom of the parabola
2329 fLastAlpha = 0.5 * (alpha1 + alpha3 -
2330 (err3 - err1) / ((err3 - err2) / (alpha3 - alpha2)
2331 - (err2 - err1) / (alpha2 - alpha1)));
2332 fLastAlpha = fLastAlpha < 10000 ? fLastAlpha : 10000;
2333 MLP_Line(origin, direction, fLastAlpha);
2335 // Stores weight changes (can be used by a later stochastic step)
2336 idx = 0;
2337 nentries = fNetwork.GetEntriesFast();
2338 for (j=0;j<nentries;j++) {
2339 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2340 buffer[idx] = neuron->GetWeight() - origin[idx];
2341 idx++;
2342 }
2343 nentries = fSynapses.GetEntriesFast();
2344 for (j=0;j<nentries;j++) {
2345 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2346 buffer[idx] = synapse->GetWeight() - origin[idx];
2347 idx++;
2348 }
2349 delete[]origin;
2350 return false;
2351}
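As implemented above, the bracketing loops look for a triplet with E(alpha1) > E(alpha2) < E(alpha3); `fLastAlpha` is then the abscissa of the minimum of the parabola through the three sampled points:

```latex
\alpha^{*} \;=\; \frac{1}{2}\left[\,\alpha_1 + \alpha_3
  - \frac{E_3 - E_1}
         {\dfrac{E_3 - E_2}{\alpha_3 - \alpha_2}
          - \dfrac{E_2 - E_1}{\alpha_2 - \alpha_1}}\,\right]
```

with \f$E_i = E(\alpha_i)\f$; the result is clamped to at most 10000 before the weights are moved there.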
2352
2353////////////////////////////////////////////////////////////////////////////////
2354/// Sets the search direction to conjugate gradient direction
2355/// beta should be:
2356///
2357/// \f$||g_{(t+1)}||^2 / ||g_{(t)}||^2\f$ (Fletcher-Reeves)
2358///
2359/// \f$g_{(t+1)} (g_{(t+1)}-g_{(t)}) / ||g_{(t)}||^2\f$ (Ribiere-Polak)
2360
2361 void TMultiLayerPerceptron::ConjugateGradientsDir(Double_t * dir, Double_t beta)
2362{
2363 Int_t idx = 0;
2364 Int_t j,nentries;
2365 TNeuron *neuron = 0;
2366 TSynapse *synapse = 0;
2367 nentries = fNetwork.GetEntriesFast();
2368 for (j=0;j<nentries;j++) {
2369 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2370 dir[idx] = -neuron->GetDEDw() + beta * dir[idx];
2371 idx++;
2372 }
2373 nentries = fSynapses.GetEntriesFast();
2374 for (j=0;j<nentries;j++) {
2375 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2376 dir[idx] = -synapse->GetDEDw() + beta * dir[idx];
2377 idx++;
2378 }
2379}
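The update above is d(t+1) = -g(t+1) + beta * d(t). A standalone sketch of the Fletcher-Reeves choice beta = ||g(t+1)||^2 / ||g(t)||^2 together with the direction update (illustrative names, plain vectors instead of the network's weight collections):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// beta for the Fletcher-Reeves conjugate-gradient variant:
// ratio of the squared norms of the new and old gradients.
inline double fletcherReevesBeta(const std::vector<double> &gNew,
                                 const std::vector<double> &gOld)
{
   double num = 0., den = 0.;
   for (double g : gNew) num += g * g;
   for (double g : gOld) den += g * g;
   return num / den;
}

// dir <- -grad + beta * dir, as done element-wise for all
// neuron and synapse weights in ConjugateGradientsDir above.
inline void cgDirection(std::vector<double> &dir,
                        const std::vector<double> &grad, double beta)
{
   for (std::size_t i = 0; i < dir.size(); i++)
      dir[i] = -grad[i] + beta * dir[i];
}
```

With beta = 0 this degenerates to the steepest-descent direction, which is what the periodic reset (`fReset`) falls back to during training.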
2380
2381////////////////////////////////////////////////////////////////////////////////
2382 /// Computes the Hessian matrix using the BFGS update algorithm
2383/// from gamma (g_{(t+1)}-g_{(t)}) and delta (w_{(t+1)}-w_{(t)}).
2384/// It returns true if such a direction could not be found
2385/// (if gamma and delta are orthogonal).
2386
2387 bool TMultiLayerPerceptron::GetBFGSH(TMatrixD & bfgsh, TMatrixD & gamma, TMatrixD & delta)
2388{
2389 TMatrixD gd(gamma, TMatrixD::kTransposeMult, delta);
2390 if ((Double_t) gd[0][0] == 0.)
2391 return true;
2392 TMatrixD aHg(bfgsh, TMatrixD::kMult, gamma);
2393 TMatrixD tmp(gamma, TMatrixD::kTransposeMult, bfgsh);
2394 TMatrixD gHg(gamma, TMatrixD::kTransposeMult, aHg);
2395 Double_t a = 1 / (Double_t) gd[0][0];
2396 Double_t f = 1 + ((Double_t) gHg[0][0] * a);
2397 TMatrixD res( TMatrixD(delta, TMatrixD::kMult,
2398 TMatrixD(TMatrixD::kTransposed, delta)));
2399 res *= f;
2400 res -= (TMatrixD(delta, TMatrixD::kMult, tmp) +
2401 TMatrixD(aHg, TMatrixD::kMult,
2402 TMatrixD(TMatrixD::kTransposed, delta)));
2403 res *= a;
2404 bfgsh += res;
2405 return false;
2406}
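With \f$a = 1/(\gamma^{T}\delta)\f$ and the temporaries tmp \f$= \gamma^{T}H\f$ and aHg \f$= H\gamma\f$ of the code above, the update applied to the running inverse-Hessian estimate H (`bfgsh`) is the standard BFGS formula:

```latex
H \;\leftarrow\; H + \frac{1}{\gamma^{T}\delta}
  \left[\left(1 + \frac{\gamma^{T} H \gamma}{\gamma^{T}\delta}\right)
        \delta\delta^{T} \;-\; \delta\,\gamma^{T} H \;-\; H\gamma\,\delta^{T}\right]
```

The early return when \f$\gamma^{T}\delta = 0\f$ (gamma and delta orthogonal) avoids the division and signals that no update direction could be found.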
2407
2408////////////////////////////////////////////////////////////////////////////////
2409/// Sets the gamma \f$(g_{(t+1)}-g_{(t)})\f$ and delta \f$(w_{(t+1)}-w_{(t)})\f$ vectors
2410/// Gamma is computed here, so ComputeDEDw cannot have been called before,
2411/// and delta is a direct translation of buffer into a TMatrixD.
2412
2413 void TMultiLayerPerceptron::SetGammaDelta(TMatrixD & gamma, TMatrixD & delta,
2414 Double_t * buffer)
2415{
2416 Int_t els = fNetwork.GetEntriesFast() + fSynapses.GetEntriesFast();
2417 Int_t idx = 0;
2418 Int_t j,nentries;
2419 TNeuron *neuron = 0;
2420 TSynapse *synapse = 0;
2421 nentries = fNetwork.GetEntriesFast();
2422 for (j=0;j<nentries;j++) {
2423 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2424 gamma[idx++][0] = -neuron->GetDEDw();
2425 }
2426 nentries = fSynapses.GetEntriesFast();
2427 for (j=0;j<nentries;j++) {
2428 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2429 gamma[idx++][0] = -synapse->GetDEDw();
2430 }
2431 for (Int_t i = 0; i < els; i++)
2432 delta[i].Assign(buffer[i]);
2433 //delta.SetElements(buffer,"F");
2434 ComputeDEDw();
2435 idx = 0;
2436 nentries = fNetwork.GetEntriesFast();
2437 for (j=0;j<nentries;j++) {
2438 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2439 gamma[idx++][0] += neuron->GetDEDw();
2440 }
2441 nentries = fSynapses.GetEntriesFast();
2442 for (j=0;j<nentries;j++) {
2443 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2444 gamma[idx++][0] += synapse->GetDEDw();
2445 }
2446}
2447
2448////////////////////////////////////////////////////////////////////////////////
2449/// scalar product between gradient and direction
2450/// = derivative along direction
2451
2452 Double_t TMultiLayerPerceptron::DerivDir(Double_t * dir)
2453{
2454 Int_t idx = 0;
2455 Int_t j,nentries;
2456 Double_t output = 0;
2457 TNeuron *neuron = 0;
2458 TSynapse *synapse = 0;
2459 nentries = fNetwork.GetEntriesFast();
2460 for (j=0;j<nentries;j++) {
2461 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2462 output += neuron->GetDEDw() * dir[idx++];
2463 }
2464 nentries = fSynapses.GetEntriesFast();
2465 for (j=0;j<nentries;j++) {
2466 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2467 output += synapse->GetDEDw() * dir[idx++];
2468 }
2469 return output;
2470}
2471
2472////////////////////////////////////////////////////////////////////////////////
2473/// Computes the direction for the BFGS algorithm as the product
2474 /// between the Hessian estimate (bfgsh) and the gradient; the result is stored in dir.
2475
2476 void TMultiLayerPerceptron::BFGSDir(TMatrixD & bfgsh, Double_t * dir)
2477{
2478 Int_t els = fNetwork.GetEntriesFast() + fSynapses.GetEntriesFast();
2479 TMatrixD dedw(els, 1);
2480 Int_t idx = 0;
2481 Int_t j,nentries;
2482 TNeuron *neuron = 0;
2483 TSynapse *synapse = 0;
2484 nentries = fNetwork.GetEntriesFast();
2485 for (j=0;j<nentries;j++) {
2486 neuron = (TNeuron *) fNetwork.UncheckedAt(j);
2487 dedw[idx++][0] = neuron->GetDEDw();
2488 }
2489 nentries = fSynapses.GetEntriesFast();
2490 for (j=0;j<nentries;j++) {
2491 synapse = (TSynapse *) fSynapses.UncheckedAt(j);
2492 dedw[idx++][0] = synapse->GetDEDw();
2493 }
2494 TMatrixD direction(bfgsh, TMatrixD::kMult, dedw);
2495 for (Int_t i = 0; i < els; i++)
2496 dir[i] = -direction[i][0];
2497 //direction.GetElements(dir,"F");
2498}
2499
2500////////////////////////////////////////////////////////////////////////////////
2501/// Draws the network structure.
2502/// Neurons are depicted by a blue disk, and synapses by
2503/// lines connecting neurons.
2504/// The line width is proportional to the weight.
2505
2506 void TMultiLayerPerceptron::Draw(Option_t * /*option*/)
2507{
2508#define NeuronSize 2.5
2509
2510 Int_t nLayers = fStructure.CountChar(':')+1;
2511 Float_t xStep = 1./(nLayers+1.);
2512 Int_t layer;
2513 for(layer=0; layer< nLayers-1; layer++) {
2514 Float_t nNeurons_this = 0;
2515 if(layer==0) {
2516 TString input = TString(fStructure(0, fStructure.First(':')));
2517 nNeurons_this = input.CountChar(',')+1;
2518 }
2519 else {
2520 Int_t cnt=0;
2521 TString hidden = TString(fStructure(fStructure.First(':') + 1,fStructure.Last(':') - fStructure.First(':') - 1));
2522 Int_t beg = 0;
2523 Int_t end = hidden.Index(":", beg + 1);
2524 while (end != -1) {
2525 Int_t num = atoi(TString(hidden(beg, end - beg)).Data());
2526 cnt++;
2527 beg = end + 1;
2528 end = hidden.Index(":", beg + 1);
2529 if(layer==cnt) nNeurons_this = num;
2530 }
2531 Int_t num = atoi(TString(hidden(beg, hidden.Length() - beg)).Data());
2532 cnt++;
2533 if(layer==cnt) nNeurons_this = num;
2534 }
2535 Float_t nNeurons_next = 0;
2536 if(layer==nLayers-2) {
2537 TString output = TString(fStructure(fStructure.Last(':') + 1, fStructure.Length() - fStructure.Last(':')));
2538 nNeurons_next = output.CountChar(',')+1;
2539 }
2540 else {
2541 Int_t cnt=0;
2542 TString hidden = TString(fStructure(fStructure.First(':') + 1,fStructure.Last(':') - fStructure.First(':') - 1));
2543 Int_t beg = 0;
2544 Int_t end = hidden.Index(":", beg + 1);
2545 while (end != -1) {
2546 Int_t num = atoi(TString(hidden(beg, end - beg)).Data());
2547 cnt++;
2548 beg = end + 1;
2549 end = hidden.Index(":", beg + 1);
2550 if(layer+1==cnt) nNeurons_next = num;
2551 }
2552 Int_t num = atoi(TString(hidden(beg, hidden.Length() - beg)).Data());
2553 cnt++;
2554 if(layer+1==cnt) nNeurons_next = num;
2555 }
2556 Float_t yStep_this = 1./(nNeurons_this+1.);
2557 Float_t yStep_next = 1./(nNeurons_next+1.);
2558 TObjArrayIter *it = (TObjArrayIter *) fSynapses.MakeIterator();
2559 TSynapse *theSynapse = 0;
2560 Float_t maxWeight = 0;
2561 while ((theSynapse = (TSynapse *) it->Next()))
2562 maxWeight = maxWeight < theSynapse->GetWeight() ? theSynapse->GetWeight() : maxWeight;
2563 delete it;
2564 it = (TObjArrayIter *) fSynapses.MakeIterator();
2565 for(Int_t neuron1=0; neuron1<nNeurons_this; neuron1++) {
2566 for(Int_t neuron2=0; neuron2<nNeurons_next; neuron2++) {
2567 TLine* synapse = new TLine(xStep*(layer+1),yStep_this*(neuron1+1),xStep*(layer+2),yStep_next*(neuron2+1));
2568 synapse->Draw();
2569 theSynapse = (TSynapse *) it->Next();
2570 if (!theSynapse) continue;
2571 synapse->SetLineWidth(Int_t((theSynapse->GetWeight()/maxWeight)*10.));
2572 synapse->SetLineStyle(1);
2573 if(((TMath::Abs(theSynapse->GetWeight())/maxWeight)*10.)<0.5) synapse->SetLineStyle(2);
2574 if(((TMath::Abs(theSynapse->GetWeight())/maxWeight)*10.)<0.25) synapse->SetLineStyle(3);
2575 }
2576 }
2577 delete it;
2578 }
2579 for(layer=0; layer< nLayers; layer++) {
2580 Float_t nNeurons = 0;
2581 if(layer==0) {
2582 TString input = TString(fStructure(0, fStructure.First(':')));
2583 nNeurons = input.CountChar(',')+1;
2584 }
2585 else if(layer==nLayers-1) {
2586 TString output = TString(fStructure(fStructure.Last(':') + 1, fStructure.Length() - fStructure.Last(':')));
2587 nNeurons = output.CountChar(',')+1;
2588 }
2589 else {
2590 Int_t cnt=0;
2591 TString hidden = TString(fStructure(fStructure.First(':') + 1,fStructure.Last(':') - fStructure.First(':') - 1));
2592 Int_t beg = 0;
2593 Int_t end = hidden.Index(":", beg + 1);
2594 while (end != -1) {
2595 Int_t num = atoi(TString(hidden(beg, end - beg)).Data());
2596 cnt++;
2597 beg = end + 1;
2598 end = hidden.Index(":", beg + 1);
2599 if(layer==cnt) nNeurons = num;
2600 }
2601 Int_t num = atoi(TString(hidden(beg, hidden.Length() - beg)).Data());
2602 cnt++;
2603 if(layer==cnt) nNeurons = num;
2604 }
2605 Float_t yStep = 1./(nNeurons+1.);
2606 for(Int_t neuron=0; neuron<nNeurons; neuron++) {
2607 TMarker* m = new TMarker(xStep*(layer+1),yStep*(neuron+1),20);
2608 m->SetMarkerColor(4);
2609 m->SetMarkerSize(NeuronSize);
2610 m->Draw();
2611 }
2612 }
2613 const TString input = TString(fStructure(0, fStructure.First(':')));
2614 const TObjArray *inpL = input.Tokenize(" ,");
2615 const Int_t nrItems = inpL->GetLast()+1;
2616 Float_t yStep = 1./(nrItems+1);
2617 for (Int_t item = 0; item < nrItems; item++) {
2618 const TString brName = ((TObjString *)inpL->At(item))->GetString();
2619 TText* label = new TText(0.5*xStep,yStep*(item+1),brName.Data());
2620 label->Draw();
2621 }
2622 delete inpL;
2623
2624 Int_t numOutNodes=fLastLayer.GetEntriesFast();
2625 yStep=1./(numOutNodes+1);
2626 for (Int_t outnode=0; outnode<numOutNodes; outnode++) {
2627 TNeuron* neuron=(TNeuron*)fLastLayer[outnode];
2628 if (neuron && neuron->GetName()) {
2629 TText* label = new TText(xStep*nLayers,
2630 yStep*(outnode+1),
2631 neuron->GetName());
2632 label->Draw();
2633 }
2634 }
2635}
#define f(i)
Definition RSha256.hxx:104
#define a(i)
Definition RSha256.hxx:99
const Ssiz_t kNPOS
Definition RtypesCore.h:115
int Int_t
Definition RtypesCore.h:45
const Bool_t kFALSE
Definition RtypesCore.h:92
unsigned long ULong_t
Definition RtypesCore.h:55
double Double_t
Definition RtypesCore.h:59
float Float_t
Definition RtypesCore.h:57
const Bool_t kTRUE
Definition RtypesCore.h:91
const char Option_t
Definition RtypesCore.h:66
#define ClassImp(name)
Definition Rtypes.h:364
#define gDirectory
Definition TDirectory.h:290
#define R__ASSERT(e)
Definition TError.h:120
char name[80]
Definition TGX11.cxx:110
int type
Definition TGX11.cxx:121
int nentries
TMatrixT< Double_t > TMatrixD
Definition TMatrixDfwd.h:23
#define NeuronSize
char * Form(const char *fmt,...)
R__EXTERN TSystem * gSystem
Definition TSystem.h:559
#define gPad
static void SetMaxima(Int_t maxop=1000, Int_t maxpar=1000, Int_t maxconst=1000)
static function to set the maximum value of 3 parameters
static void GetMaxima(Int_t &maxop, Int_t &maxpar, Int_t &maxconst)
static function to get the maximum value of 3 parameters -maxop : maximum number of operations -maxpa...
virtual void SetTitleOffset(Float_t offset=1)
Set distance between the axis and the axis title.
Definition TAttAxis.cxx:293
virtual void SetLineStyle(Style_t lstyle)
Set the line style.
Definition TAttLine.h:42
virtual void SetLineWidth(Width_t lwidth)
Set the line width.
Definition TAttLine.h:43
virtual void SetLineColor(Color_t lcolor)
Set the line color.
Definition TAttLine.h:40
virtual void SetLeftMargin(Float_t leftmargin)
Set Pad left margin in fraction of the pad width.
Definition TAttPad.cxx:109
void SetDecimals(Bool_t dot=kTRUE)
Sets the decimals flag By default, blank characters are stripped, and then the label is correctly ali...
Definition TAxis.h:203
virtual void UnZoom()
Reset first & last bin to the full range.
Definition TAxis.cxx:1132
The Canvas class.
Definition TCanvas.h:23
static TClass * GetClass(const char *name, Bool_t load=kTRUE, Bool_t silent=kFALSE)
Static method returning pointer to TClass of the specified class name.
Definition TClass.cxx:2957
virtual void SetOwner(Bool_t enable=kTRUE)
Set whether this collection is the owner (enable==true) of its content.
A TEventList object is a list of selected events (entries) in a TTree.
Definition TEventList.h:31
virtual Long64_t GetEntry(Int_t index) const
Return value of entry at index in the list.
virtual Int_t GetN() const
Definition TEventList.h:56
A TGraph is an object made of two arrays X and Y with npoints each.
Definition TGraph.h:41
virtual void SetPoint(Int_t i, Double_t x, Double_t y)
Set x and y values for point number i.
Definition TGraph.cxx:2284
Double_t * GetY() const
Definition TGraph.h:132
1-D histogram with a double per channel (see TH1 documentation)}
Definition TH1.h:618
virtual void Reset(Option_t *option="")
Reset.
Definition TH1.cxx:10126
virtual Int_t Fill(Double_t x)
Increment bin with abscissa X by 1.
Definition TH1.cxx:3350
virtual void Draw(Option_t *option="")
Draw this histogram with options.
Definition TH1.cxx:3073
2-D histogram with a double per channel (see TH1 documentation)}
Definition TH2.h:292
virtual void Reset(Option_t *option="")
Reset this histogram: contents, errors, etc.
Definition TH2.cxx:3846
Int_t Fill(Double_t)
Invalid Fill method.
Definition TH2.cxx:294
This class displays a legend box (TPaveText) containing several legend entries.
Definition TLegend.h:23
TLegendEntry * AddEntry(const TObject *obj, const char *label="", Option_t *option="lpf")
Add a new entry to this legend.
Definition TLegend.cxx:330
virtual void Draw(Option_t *option="")
Draw this legend with its current attributes.
Definition TLegend.cxx:423
A simple line.
Definition TLine.h:22
virtual TObject * At(Int_t idx) const
Returns the object at position idx. Returns 0 if idx is out of range.
Definition TList.cxx:357
Manages Markers.
Definition TMarker.h:22
virtual TMatrixTBase< Element > & UnitMatrix()
Make a unit matrix (matrix need not be a square one).
A TMultiGraph is a collection of TGraph (or derived) objects.
Definition TMultiGraph.h:36
TList * GetListOfGraphs() const
Definition TMultiGraph.h:70
virtual void Add(TGraph *graph, Option_t *chopt="")
Add a new graph to the list of graphs.
virtual void Draw(Option_t *chopt="")
Draw this multigraph with its current attributes.
TAxis * GetYaxis()
Get y axis of the graph.
TAxis * GetXaxis()
Get x axis of the graph.
This class describes a neural network.
TTreeFormula * fEventWeight
! formula representing the event weight
void BuildOneHiddenLayer(const TString &sNumNodes, Int_t &layer, Int_t &prevStart, Int_t &prevStop, Bool_t lastLayer)
Builds a hidden layer, updates the number of layers.
void SteepestDir(Double_t *)
Sets the search direction to steepest descent.
void BuildNetwork()
Instantiates the network from the description.
TObjArray fNetwork
Collection of all the neurons in the network.
Double_t Evaluate(Int_t index, Double_t *params) const
Returns the Neural Net for a given set of input parameters #parameters must equal #input neurons.
TEventList * fTest
! EventList defining the events in the test dataset
bool GetBFGSH(TMatrixD &, TMatrixD &, TMatrixD &)
Computes the hessian matrix using the BFGS update algorithm.
void BuildHiddenLayers(TString &)
Builds hidden layers.
void BuildFirstLayer(TString &)
Instantiates the neurons in input Inputs are normalised and the type is set to kOff (simple forward o...
void SetTau(Double_t tau)
Sets Tau - used in line search (look at the constructor for the complete description of learning meth...
TMultiLayerPerceptron()
Default constructor.
Double_t GetSumSquareError() const
Error on the output for a given event.
void ConjugateGradientsDir(Double_t *, Double_t)
Sets the search direction to conjugate gradient direction beta should be:
Double_t fTau
! Tau - used in line search - Default=3.
TTree * fData
! pointer to the tree used as datasource
Double_t Result(Int_t event, Int_t index=0) const
Computes the output for a given event.
void SetGammaDelta(TMatrixD &, TMatrixD &, Double_t *)
Sets the gamma and delta vectors Gamma is computed here, so ComputeDEDw cannot have been called bef...
TEventList * fTraining
! EventList defining the events in the training dataset
TString fStructure
String containing the network structure.
Int_t fReset
! number of epochs between two resets of the search direction to the steepest descent - Default=50
Bool_t LoadWeights(Option_t *filename="")
Loads the weights from a text file conforming to the format defined by DumpWeights.
void MLP_Batch(Double_t *)
One step for the batch (stochastic) method.
TNeuron::ENeuronType fOutType
Type of output neurons.
Double_t fCurrentTreeWeight
! weight of the current tree in a chain
ELearningMethod fLearningMethod
! The Learning Method
Double_t fLastAlpha
! internal parameter used in line search
Int_t fCurrentTree
! index of the current tree in a chain
void Export(Option_t *filename="NNfunction", Option_t *language="C++") const
Exports the NN as a function for any non-ROOT-dependant code Supported languages are: only C++ ,...
Double_t fEpsilon
! Epsilon - used in stochastic minimisation - Default=0.
void Train(Int_t nEpoch, Option_t *option="text", Double_t minE=0)
Train the network.
TNeuron::ENeuronType GetType() const
void BFGSDir(TMatrixD &, Double_t *)
Computes the direction for the BFGS algorithm as the product between the Hessian estimate (bfgsh) and...
void SetTestDataSet(TEventList *test)
Sets the Test dataset.
Bool_t fTrainingOwner
! internal flag whether one has to delete fTraining or not
void SetLearningMethod(TMultiLayerPerceptron::ELearningMethod method)
Sets the learning method.
void SetTrainingDataSet(TEventList *train)
Sets the Training dataset.
void BuildLastLayer(TString &, Int_t)
Builds the output layer Neurons are linear combinations of input, by default.
Double_t fDelta
! Delta - used in stochastic minimisation - Default=0.
TTreeFormulaManager * fManager
! TTreeFormulaManager for the weight and neurons
void Randomize() const
Randomize the weights.
Bool_t LineSearch(Double_t *, Double_t *)
Search along the line defined by direction.
virtual void Draw(Option_t *option="")
Draws the network structure.
void ExpandStructure()
Expand the structure of the first layer.
Double_t fEta
! Eta - used in stochastic minimisation - Default=0.1
Double_t GetError(Int_t event) const
Error on the output for a given event.
Double_t fEtaDecay
! EtaDecay - Eta *= EtaDecay at each epoch - Default=1.
void SetEtaDecay(Double_t ed)
Sets EtaDecay - Eta *= EtaDecay at each epoch (look at the constructor for the complete description o...
void AttachData()
Connects the TTree to Neurons in input and output layers.
void SetData(TTree *)
Set the data source.
void SetEventWeight(const char *)
Set the event weight.
Bool_t DumpWeights(Option_t *filename="-") const
Dumps the weights to a text file.
TString fWeight
String containing the event weight.
void SetDelta(Double_t delta)
Sets Delta - used in stochastic minimisation (look at the constructor for the complete description of...
Double_t GetCrossEntropy() const
Cross entropy error for a softmax output neuron, for a given event.
void SetReset(Int_t reset)
Sets number of epochs between two resets of the search direction to the steepest descent.
Bool_t fTestOwner
! internal flag whether one has to delete fTest or not
void Shuffle(Int_t *, Int_t) const
Shuffle the Int_t index[n] in input.
virtual ~TMultiLayerPerceptron()
Destructor.
Double_t DerivDir(Double_t *)
scalar product between gradient and direction = derivative along direction
void MLP_Stochastic(Double_t *)
One step for the stochastic method buffer should contain the previous dw vector and will be updated.
TObjArray fSynapses
Collection of all the synapses in the network.
void MLP_Line(Double_t *, Double_t *, Double_t)
Sets the weights to a point along a line Weights are set to [origin + (dist * dir)].
TNeuron::ENeuronType fType
Type of hidden neurons.
TObjArray fLastLayer
Collection of the output neurons; subset of fNetwork.
TString fextD
String containing the derivative name.
void ComputeDEDw() const
Compute the DEDw = sum on all training events of dedw for each weight normalized by the number of eve...
Double_t GetCrossEntropyBinary() const
Cross entropy error for sigmoid output neurons, for a given event.
void DrawResult(Int_t index=0, Option_t *option="test") const
Draws the neural net output It produces an histogram with the output for the two datasets.
void SetEta(Double_t eta)
Sets Eta - used in stochastic minimisation (look at the constructor for the complete description of l...
TObjArray fFirstLayer
Collection of the input neurons; subset of fNetwork.
void GetEntry(Int_t) const
Load an entry into the network.
void SetEpsilon(Double_t eps)
Sets Epsilon - used in stochastic minimisation (look at the constructor for the complete description ...
TString fextF
String containing the function name.
virtual void SetTitle(const char *title="")
Set the title of the TNamed.
Definition TNamed.cxx:164
virtual const char * GetName() const
Returns name of object.
Definition TNamed.h:47
This class describes an elementary neuron, which is the basic element for a Neural Network.
Definition TNeuron.h:25
Double_t GetWeight() const
Definition TNeuron.h:48
void SetWeight(Double_t w)
Sets the neuron weight to w.
Definition TNeuron.cxx:1144
Double_t GetDEDw() const
Definition TNeuron.h:53
Double_t GetValue() const
Computes the output using the appropriate function and all the weighted inputs, or uses the branch as...
Definition TNeuron.cxx:944
void SetDEDw(Double_t in)
Sets the derivative of the total error wrt the neuron weight.
Definition TNeuron.cxx:1164
Double_t GetDeDw() const
Computes the derivative of the error wrt the neuron weight.
Definition TNeuron.cxx:1080
Double_t GetBranch() const
Returns the formula value.
Definition TNeuron.cxx:910
TNeuron * GetInLayer(Int_t n) const
Definition TNeuron.h:37
Double_t GetError() const
Computes the error for output neurons.
Definition TNeuron.cxx:1059
TTreeFormula * UseBranch(TTree *, const char *)
Sets a formula that can be used to make the neuron an input.
Definition TNeuron.cxx:874
TSynapse * GetPre(Int_t n) const
Definition TNeuron.h:35
void ForceExternalValue(Double_t value)
Uses the branch type to force an external value.
Definition TNeuron.cxx:1121
Double_t GetTarget() const
Computes the normalized target pattern for output neurons.
Definition TNeuron.cxx:1070
const Double_t * GetNormalisation() const
Definition TNeuron.h:50
ENeuronType GetType() const
Returns the neuron type.
Definition TNeuron.cxx:863
void SetNewEvent() const
Inform the neuron that inputs of the network have changed, so that the buffered values have to be rec...
Definition TNeuron.cxx:1153
void SetNormalisation(Double_t mean, Double_t RMS)
Sets the normalization variables.
Definition TNeuron.cxx:1133
void AddInLayer(TNeuron *)
Tells a neuron which neurons form its layer (including itself).
Definition TNeuron.cxx:853
ENeuronType
Definition TNeuron.h:29
@ kLinear
Definition TNeuron.h:29
@ kSigmoid
Definition TNeuron.h:29
@ kExternal
Definition TNeuron.h:29
@ kSoftmax
Definition TNeuron.h:29
@ kGauss
Definition TNeuron.h:29
@ kOff
Definition TNeuron.h:29
@ kTanh
Definition TNeuron.h:29
Iterator of object array.
Definition TObjArray.h:126
TObject * Next()
Returns the next object in the array, or 0 when there are no more objects.
An array of TObjects.
Definition TObjArray.h:37
Int_t GetEntriesFast() const
Definition TObjArray.h:64
virtual void AddLast(TObject *obj)
Add object in the next empty slot in the array.
TObject * UncheckedAt(Int_t i) const
Definition TObjArray.h:90
TIterator * MakeIterator(Bool_t dir=kIterForward) const
Returns an array iterator.
Int_t GetLast() const
Return index of last object in array.
TObject * At(Int_t idx) const
Definition TObjArray.h:166
Collectable string class.
Definition TObjString.h:28
virtual void Warning(const char *method, const char *msgfmt,...) const
Issue warning message.
Definition TObject.cxx:879
virtual void Error(const char *method, const char *msgfmt,...) const
Issue error message.
Definition TObject.cxx:893
virtual void Draw(Option_t *option="")
Default Draw method for all objects.
Definition TObject.cxx:197
virtual void Info(const char *method, const char *msgfmt,...) const
Issue info message.
Definition TObject.cxx:867
Random number generator class based on the Mersenne Twister of M. Matsumoto and T. Nishimura.
Definition TRandom3.h:27
virtual Double_t Rndm()
Machine-independent random number generator.
Definition TRandom3.cxx:99
Regular expression class.
Definition TRegexp.h:31
Basic string class.
Definition TString.h:136
Ssiz_t Length() const
Definition TString.h:410
void ToLower()
Change string to lower case.
Definition TString.cxx:1145
Bool_t EndsWith(const char *pat, ECaseCompare cmp=kExact) const
Return true if string ends with the specified string.
Definition TString.cxx:2197
Ssiz_t First(char c) const
Find first occurrence of a character c.
Definition TString.cxx:519
const char * Data() const
Definition TString.h:369
Bool_t IsAlpha() const
Returns true if all characters in string are alphabetic.
Definition TString.cxx:1751
Ssiz_t Last(char c) const
Find last occurrence of a character c.
Definition TString.cxx:912
void ToUpper()
Change string to upper case.
Definition TString.cxx:1158
TObjArray * Tokenize(const TString &delim) const
This function is used to isolate sequential tokens in a TString.
Definition TString.cxx:2217
Int_t CountChar(Int_t c) const
Return number of times character c occurs in the string.
Definition TString.cxx:496
Bool_t Contains(const char *pat, ECaseCompare cmp=kExact) const
Definition TString.h:624
Bool_t IsAlnum() const
Returns true if all characters in string are alphanumeric.
Definition TString.cxx:1766
Ssiz_t Index(const char *pat, Ssiz_t i=0, ECaseCompare cmp=kExact) const
Definition TString.h:639
This is a simple weighted bidirectional connection between two neurons.
Definition TSynapse.h:20
Double_t GetDeDw() const
Computes the derivative of the error with respect to the synapse weight.
Definition TSynapse.cxx:89
void SetWeight(Double_t w)
Sets the weight of the synapse.
Definition TSynapse.cxx:102
Double_t GetWeight() const
Definition TSynapse.h:30
void SetDEDw(Double_t in)
Sets the derivative of the total error with respect to the synapse weight.
Definition TSynapse.cxx:110
TNeuron * GetPre() const
Definition TSynapse.h:27
Double_t GetDEDw() const
Definition TSynapse.h:34
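A synapse is a single weighted link: it feeds (weight × pre-neuron output) into the post-neuron, and during backpropagation the error derivative with respect to that weight factorizes as (pre-neuron output) × (local error of the post-neuron). A self-contained sketch under those assumptions (not the actual TSynapse code; the member names are invented here):

```cpp
// Minimal synapse sketch: one weighted connection between two neurons.
struct Synapse {
   double weight;     // the trainable parameter
   double preValue;   // buffered output of the pre-neuron
   double postDelta;  // buffered local error term of the post-neuron

   // Contribution of this synapse to the post-neuron's weighted input sum.
   double Contribution() const { return weight * preValue; }

   // dE/dw: the gradient used by all the learning methods listed above.
   double GetDeDw() const { return preValue * postDelta; }

   void SetWeight(double w) { weight = w; }
};
```

This factorization is what makes batch methods cheap: the per-event gradients returned by GetDeDw can simply be accumulated into the total stored via SetDEDw.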
virtual int Load(const char *module, const char *entry="", Bool_t system=kFALSE)
Load a shared library.
Definition TSystem.cxx:1853
virtual Bool_t ProcessEvents()
Process pending events (GUI, timers, sockets).
Definition TSystem.cxx:417
Base class for several text objects.
Definition TText.h:22
The TTimeStamp encapsulates seconds and ns since EPOCH.
Definition TTimeStamp.h:71
time_t GetSec() const
Definition TTimeStamp.h:135
Used to coordinate one or more TTreeFormula objects.
virtual void Add(TTreeFormula *)
Add a new formula to the list of formulas managed. The manager of the formula will be changed and the ...
virtual void Remove(TTreeFormula *)
Remove a formula from this manager.
virtual Bool_t Notify()
This method must be overridden to handle object notification.
Used to pass a selection expression to the Tree drawing routine.
T EvalInstance(Int_t i=0, const char *stringStack[]=0)
Evaluate this treeformula.
A TTree represents a columnar dataset.
Definition TTree.h:79
virtual Double_t GetWeight() const
Definition TTree.h:541
virtual Long64_t GetEntries() const
Definition TTree.h:460
virtual Int_t GetEntry(Long64_t entry=0, Int_t getall=0)
Read all branches of entry and return total number of bytes read.
Definition TTree.cxx:5618
virtual Int_t GetTreeNumber() const
Definition TTree.h:516
virtual void Draw(Option_t *opt)
Default Draw method for all objects.
Definition TTree.h:428
TVirtualPad is an abstract base class for the Pad and Canvas classes.
Definition TVirtualPad.h:51
virtual void Modified(Bool_t flag=1)=0
virtual void Update()=0
Bool_t IsNaN(Double_t x)
Definition TMath.h:892
Double_t Log(Double_t x)
Definition TMath.h:760
Double_t Sqrt(Double_t x)
Definition TMath.h:691
Short_t Abs(Short_t d)
Definition TMathBase.h:120