MethodCFMlpANN.cxx
1// @(#)root/tmva $Id$
2// Author: Andreas Hoecker, Joerg Stelzer, Helge Voss, Kai Voss
3
4/**********************************************************************************
5 * Project: TMVA - a Root-integrated toolkit for multivariate Data analysis *
6 * Package: TMVA *
7 * Class : TMVA::MethodCFMlpANN *
8 * *
9 * *
10 * Description: *
11 * Implementation (see header for description) *
12 * *
13 * Authors (alphabetical): *
14 * Andreas Hoecker <Andreas.Hocker@cern.ch> - CERN, Switzerland *
15 * Xavier Prudent <prudent@lapp.in2p3.fr> - LAPP, France *
16 * Helge Voss <Helge.Voss@cern.ch> - MPI-K Heidelberg, Germany *
17 * Kai Voss <Kai.Voss@cern.ch> - U. of Victoria, Canada *
18 * *
19 * Copyright (c) 2005: *
20 * CERN, Switzerland *
21 * U. of Victoria, Canada *
22 * MPI-K Heidelberg, Germany *
23 * LAPP, Annecy, France *
24 * *
25 * Redistribution and use in source and binary forms, with or without *
26 * modification, are permitted according to the terms listed in LICENSE *
27 * (see tmva/doc/LICENSE) *
28 **********************************************************************************/
29
30/*! \class TMVA::MethodCFMlpANN
31\ingroup TMVA
32
33Interface to the Clermont-Ferrand artificial neural network
34
35
36The CFMlpANN belongs to the class of Multilayer Perceptrons (MLP), which are
37feed-forward networks according to the following propagation schema:
38
39\image html tmva_mlp.png Schema for artificial neural network.
40
41The input layer contains as many neurons as input variables used in the MVA.
42The output layer contains two neurons for the signal and background
43event classes. In between the input and output layers are a variable number
44of <i>k</i> hidden layers with arbitrary numbers of neurons. (While the
45structure of the input and output layers is determined by the problem, the
46hidden layers can be configured by the user through the option string
47of the method booking.)
48
49As indicated in the sketch, all neuron inputs to a layer are linear
50combinations of the neuron outputs of the previous layer. The transfer
51from input to output within a neuron is performed by means of an "activation
52function". In general, the activation function of a neuron can be
53zero (deactivated), one (linear), or non-linear. The above example uses
54a sigmoid activation function. The transfer function of the output layer
55is usually linear. As a consequence, an ANN without a hidden layer should
56give the same discrimination power as a linear discriminant analysis (Fisher).
57With one hidden layer, the ANN computes a linear combination of
58sigmoids.
59
60The CFMlpANN implements only stochastic learning.
61*/
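// A minimal booking sketch (an assumption, not taken from this file): the
// method is used through the standard TMVA Factory/DataLoader workflow, with
// the option names NCycles and HiddenLayers declared in DeclareOptions()
// below. "outputFile" and "dataloader" are placeholders.
//
//    TMVA::Factory factory("TMVAClassification", outputFile,
//                          "!V:AnalysisType=Classification");
//    factory.BookMethod(dataloader, TMVA::Types::kCFMlpANN, "CFMlpANN",
//                       "H:!V:NCycles=3000:HiddenLayers=N,N-1");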
62
63
64#include "TMVA/MethodCFMlpANN.h"
65
67#include "TMVA/Configurable.h"
68#include "TMVA/DataSet.h"
69#include "TMVA/DataSetInfo.h"
70#include "TMVA/IMethod.h"
71#include "TMVA/MethodBase.h"
73#include "TMVA/MsgLogger.h"
74#include "TMVA/Tools.h"
75#include "TMVA/Types.h"
76
77#include "TMatrix.h"
78#include "TMath.h"
79
80#include <cstdlib>
81#include <iostream>
82#include <string>
83
84
85
86REGISTER_METHOD(CFMlpANN)
87
88using std::stringstream;
89using std::make_pair;
90using std::atoi;
91
92
93
94
95////////////////////////////////////////////////////////////////////////////////
96/// standard constructor
97///
98/// option string: "n_training_cycles:n_hidden_layers"
99///
100/// default is: n_training_cycles = 3000, n_layers = 4
101///
102/// * note that the number of hidden layers in the NN is:
103/// n_hidden_layers = n_layers - 2
104///
105/// * since there is one input and one output layer. The number of
106/// nodes (neurons) is predefined to be:
107///
108/// n_nodes[i] = nvars + 1 - i (where i=1..n_layers)
109///
110/// with nvars being the number of variables used in the NN.
111///
112/// Hence, the default case is:
113///
114/// n_neurons(layer 1 (input)) : nvars
115/// n_neurons(layer 2 (hidden)): nvars-1
116/// n_neurons(layer 3 (hidden)): nvars-2
117/// n_neurons(layer 4 (out)) : 2
118///
119/// This artificial neural network usually needs a relatively large
120/// number of cycles to converge (8000 and more). Overtraining can
121/// be efficiently tested by comparing the signal and background
122/// output of the NN for the events that were used for training and
123/// an independent data sample (with equal properties). If the separation
124/// performance is significantly better for the training sample, the
125/// NN has learned statistical fluctuations, and is hence overtrained. In
126/// this case, the number of cycles should be reduced, or the size
127/// of the training sample increased.
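///
/// As an illustration of the formula above: for nvars = 4 the default
/// network has 4 : 3 : 2 : 2 neurons, i.e. 4 input nodes, two hidden
/// layers with 3 and 2 nodes, and the fixed 2-node output layer.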
128
129TMVA::MethodCFMlpANN::MethodCFMlpANN( const TString& jobName,
130 const TString& methodTitle,
131 DataSetInfo& theData,
132 const TString& theOption ) :
133 TMVA::MethodBase( jobName, Types::kCFMlpANN, methodTitle, theData, theOption),
134 fData(0),
135 fClass(0),
136 fNlayers(0),
137 fNcycles(0),
138 fNodes(0),
139 fYNN(0),
140 MethodCFMlpANN_nsel(0)
141{
143}
144
145////////////////////////////////////////////////////////////////////////////////
146/// constructor from weight file
147
148TMVA::MethodCFMlpANN::MethodCFMlpANN( DataSetInfo& theData,
149 const TString& theWeightFile):
150 TMVA::MethodBase( Types::kCFMlpANN, theData, theWeightFile),
151 fData(0),
152 fClass(0),
153 fNlayers(0),
154 fNcycles(0),
155 fNodes(0),
156 fYNN(0),
157 MethodCFMlpANN_nsel(0)
158{
159}
160
161////////////////////////////////////////////////////////////////////////////////
162/// CFMlpANN can handle classification with 2 classes
163
164Bool_t TMVA::MethodCFMlpANN::HasAnalysisType( Types::EAnalysisType type, UInt_t numberClasses, UInt_t /*numberTargets*/ )
165{
166 if (type == Types::kClassification && numberClasses == 2) return kTRUE;
167 return kFALSE;
168}
169
170////////////////////////////////////////////////////////////////////////////////
171/// define the options (their key words) that can be set in the option string
172/// known options: NCycles=xx : the number of training cycles
173/// HiddenLayers="N-1,N-2" : the specification of the hidden layers
174
175void TMVA::MethodCFMlpANN::DeclareOptions()
176{
177 DeclareOptionRef( fNcycles =3000, "NCycles", "Number of training cycles" );
178 DeclareOptionRef( fLayerSpec="N,N-1", "HiddenLayers", "Specification of hidden layer architecture" );
179}
180
181////////////////////////////////////////////////////////////////////////////////
182/// decode the options in the option string
183
184void TMVA::MethodCFMlpANN::ProcessOptions()
185{
186 fNodes = new Int_t[20]; // number of nodes per layer (maximum 20 layers)
187 fNlayers = 2;
188 Int_t currentHiddenLayer = 1;
189 TString layerSpec(fLayerSpec);
190 while(layerSpec.Length()>0) {
191 TString sToAdd = "";
192 if (layerSpec.First(',')<0) {
193 sToAdd = layerSpec;
194 layerSpec = "";
195 }
196 else {
197 sToAdd = layerSpec(0,layerSpec.First(','));
198 layerSpec = layerSpec(layerSpec.First(',')+1,layerSpec.Length());
199 }
200 Int_t nNodes = 0;
201 if (sToAdd.BeginsWith("N") || sToAdd.BeginsWith("n")) { sToAdd.Remove(0,1); nNodes = GetNvar(); }
202 nNodes += atoi(sToAdd);
203 fNodes[currentHiddenLayer++] = nNodes;
204 fNlayers++;
205 }
206 fNodes[0] = GetNvar(); // number of input nodes
207 fNodes[fNlayers-1] = 2; // number of output nodes
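 // Illustration (not from the original source): with the default option
 // HiddenLayers="N,N-1" and, say, 4 input variables, the loop above gives
 //    fNodes = { 4, 4, 3, 2 },  fNlayers = 4
 // i.e. input : hidden 1 : hidden 2 : output.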
208
209 if (IgnoreEventsWithNegWeightsInTraining()) {
210 Log() << kFATAL << "Mechanism to ignore events with negative weights in training not yet available for method: "
211 << GetMethodTypeName()
212 << " --> please remove \"IgnoreNegWeightsInTraining\" option from booking string."
213 << Endl;
214 }
215
216 Log() << kINFO << "Use configuration (nodes per layer): in=";
217 for (Int_t i=0; i<fNlayers-1; i++) Log() << kINFO << fNodes[i] << ":";
218 Log() << kINFO << fNodes[fNlayers-1] << "=out" << Endl;
219
220 // some info
221 Log() << "Use " << fNcycles << " training cycles" << Endl;
222
223 Int_t nEvtTrain = Data()->GetNTrainingEvents();
224
225 // note that one variable is type
226 if (nEvtTrain>0) {
227
228 // Data LUT
229 fData = new TMatrix( nEvtTrain, GetNvar() );
230 fClass = new std::vector<Int_t>( nEvtTrain );
231
232 // ---- fill LUTs
233
234 UInt_t ivar;
235 for (Int_t ievt=0; ievt<nEvtTrain; ievt++) {
236 const Event * ev = GetEvent(ievt);
237
238 // identify signal and background events
239 (*fClass)[ievt] = DataInfo().IsSignal(ev) ? 1 : 2;
240
241 // use normalized input Data
242 for (ivar=0; ivar<GetNvar(); ivar++) {
243 (*fData)( ievt, ivar ) = ev->GetValue(ivar);
244 }
245 }
246
247 //Log() << kVERBOSE << Data()->GetNEvtSigTrain() << " Signal and "
248 // << Data()->GetNEvtBkgdTrain() << " background" << " events in trainingTree" << Endl;
249 }
250
251}
252
253////////////////////////////////////////////////////////////////////////////////
254/// default initialisation called by all constructors
255
256void TMVA::MethodCFMlpANN::Init( void )
257{
258 // CFMlpANN prefers normalised input variables
259 SetNormalised( kTRUE );
260
261 // initialize dimensions
262 MethodCFMlpANN_nsel = 0;
263}
264
265////////////////////////////////////////////////////////////////////////////////
266/// destructor
267
268TMVA::MethodCFMlpANN::~MethodCFMlpANN( void )
269{
270 delete fData;
271 delete fClass;
272 delete[] fNodes;
273
274 if (fYNN!=0) {
275 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
276 delete[] fYNN;
277 fYNN=0;
278 }
279}
280
281////////////////////////////////////////////////////////////////////////////////
282/// training of the Clermont-Ferrand NN classifier
283
284void TMVA::MethodCFMlpANN::Train( void )
285{
286 Double_t dumDat(0);
287 Int_t ntrain(Data()->GetNTrainingEvents());
288 Int_t ntest(0);
289 Int_t nvar(GetNvar());
290 Int_t nlayers(fNlayers);
291 Int_t *nodes = new Int_t[nlayers];
292 Int_t ncycles(fNcycles);
293
294 for (Int_t i=0; i<nlayers; i++) nodes[i] = fNodes[i]; // full copy of class member
295
296 if (fYNN != 0) {
297 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
298 delete[] fYNN;
299 fYNN = 0;
300 }
301 fYNN = new Double_t*[nlayers];
302 for (Int_t layer=0; layer<nlayers; layer++)
303 fYNN[layer] = new Double_t[fNodes[layer]];
304
305 // please check
306#ifndef R__WIN32
307 Train_nn( &dumDat, &dumDat, &ntrain, &ntest, &nvar, &nlayers, nodes, &ncycles );
308#else
309 Log() << kWARNING << "<Train> sorry CFMlpANN does not run on Windows" << Endl;
310#endif
311
312 delete [] nodes;
313
314 ExitFromTraining();
315}
316
317////////////////////////////////////////////////////////////////////////////////
318/// returns CFMlpANN output (normalised within [0,1])
319
320Double_t TMVA::MethodCFMlpANN::GetMvaValue( Double_t* err, Double_t* errUpper )
321{
322 Bool_t isOK = kTRUE;
323
324 const Event* ev = GetEvent();
325
326 // copy of input variables
327 std::vector<Double_t> inputVec( GetNvar() );
328 for (UInt_t ivar=0; ivar<GetNvar(); ivar++) inputVec[ivar] = ev->GetValue(ivar);
329
330 Double_t myMVA = EvalANN( inputVec, isOK );
331 if (!isOK) Log() << kFATAL << "EvalANN returns (!isOK) for event " << Endl;
332
333 // cannot determine error
334 NoErrorCalc(err, errUpper);
335
336 return myMVA;
337}
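// Application sketch (an assumption, not taken from this file): a trained
// CFMlpANN is evaluated through the generic TMVA::Reader interface; variable
// names and the weight-file path are placeholders.
//
//    TMVA::Reader reader("!Color:!Silent");
//    Float_t var1 = 0, var2 = 0;
//    reader.AddVariable("var1", &var1);
//    reader.AddVariable("var2", &var2);
//    reader.BookMVA("CFMlpANN", "dataset/weights/TMVAClassification_CFMlpANN.weights.xml");
//    // ... fill var1, var2 for one event, then:
//    Double_t mva = reader.EvaluateMVA("CFMlpANN");   // in [0,1], cf. GetMvaValue()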
338
339////////////////////////////////////////////////////////////////////////////////
340/// evaluates NN value as function of input variables
341
342Double_t TMVA::MethodCFMlpANN::EvalANN( std::vector<Double_t>& inVar, Bool_t& isOK )
343{
344 // hardcopy of input variables (necessary because they are updated later)
345 Double_t* xeev = new Double_t[GetNvar()];
346 for (UInt_t ivar=0; ivar<GetNvar(); ivar++) xeev[ivar] = inVar[ivar];
347
348 // ---- now apply the weights: get NN output
349 isOK = kTRUE;
350 for (UInt_t jvar=0; jvar<GetNvar(); jvar++) {
351
352 if (fVarn_1.xmax[jvar] < xeev[jvar]) xeev[jvar] = fVarn_1.xmax[jvar];
353 if (fVarn_1.xmin[jvar] > xeev[jvar]) xeev[jvar] = fVarn_1.xmin[jvar];
354 if (fVarn_1.xmax[jvar] == fVarn_1.xmin[jvar]) {
355 isOK = kFALSE;
356 xeev[jvar] = 0;
357 }
358 else {
359 xeev[jvar] = xeev[jvar] - ((fVarn_1.xmax[jvar] + fVarn_1.xmin[jvar])/2);
360 xeev[jvar] = xeev[jvar] / ((fVarn_1.xmax[jvar] - fVarn_1.xmin[jvar])/2);
361 }
362 }
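 // In formula form, the clamping and scaling above map each input to [-1,1]:
 //    x' = ( x - (xmax + xmin)/2 ) / ( (xmax - xmin)/2 )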
363
364 NN_ava( xeev );
365
366 Double_t retval = 0.5*(1.0 + fYNN[fParam_1.layerm-1][0]);
367
368 delete [] xeev;
369
370 return retval;
371}
372
373////////////////////////////////////////////////////////////////////////////////
374/// auxiliary functions
375
376void TMVA::MethodCFMlpANN::NN_ava( Double_t* xeev )
377{
378 for (Int_t ivar=0; ivar<fNeur_1.neuron[0]; ivar++) fYNN[0][ivar] = xeev[ivar];
379
380 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
381 for (Int_t j=1; j<=fNeur_1.neuron[layer]; j++) {
382
383 Double_t x = Ww_ref(fNeur_1.ww, layer+1,j); // init with the bias layer
384
385 for (Int_t k=1; k<=fNeur_1.neuron[layer-1]; k++) { // neurons of originating layer
386 x += fYNN[layer-1][k-1]*W_ref(fNeur_1.w, layer+1, j, k);
387 }
388 fYNN[layer][j-1] = NN_fonc( layer, x );
389 }
390 }
391}
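// In formula form, the loops above propagate the activations layer by layer:
//    y_j(l) = f_l( ww_j(l) + sum_k w_jk(l) * y_k(l-1) )
// where ww is the bias term (Ww_ref) and f_l is the activation NN_fonc().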
392
393////////////////////////////////////////////////////////////////////////////////
394/// activation function
395
396Double_t TMVA::MethodCFMlpANN::NN_fonc( Int_t i, Double_t u ) const
397{
398 Double_t f(0);
399
400 if (u/fDel_1.temp[i] > 170) f = +1;
401 else if (u/fDel_1.temp[i] < -170) f = -1;
402 else {
403 Double_t yy = TMath::Exp(-u/fDel_1.temp[i]);
404 f = (1 - yy)/(1 + yy);
405 }
406
407 return f;
408}
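// Note: away from the +/-170 overflow guards, this is algebraically identical
// to a scaled hyperbolic tangent, f(u) = tanh( u / (2*T_i) ), with T_i the
// layer "temperature" fDel_1.temp[i].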
409
410////////////////////////////////////////////////////////////////////////////////
411/// read back the weights from the training file (stream)
412
413void TMVA::MethodCFMlpANN::ReadWeightsFromStream( std::istream& istr )
414{
415 TString var;
416
417 // read number of variables and classes
418 UInt_t nva(0), lclass(0);
419 istr >> nva >> lclass;
420
421 if (GetNvar() != nva) // wrong file
422 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in number of variables" << Endl;
423
424 // number of output classes must be 2
425 if (lclass != 2) // wrong file
426 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in number of classes" << Endl;
427
428 // check that we are not at the end of the file
429 if (istr.eof( ))
430 Log() << kFATAL << "<ReadWeightsFromStream> reached EOF prematurely " << Endl;
431
432 // read extrema of input variables
433 for (UInt_t ivar=0; ivar<GetNvar(); ivar++)
434 istr >> fVarn_1.xmax[ivar] >> fVarn_1.xmin[ivar];
435
436 // read number of layers (sum of: input + output + hidden)
437 istr >> fParam_1.layerm;
438
439 if (fYNN != 0) {
440 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
441 delete[] fYNN;
442 fYNN = 0;
443 }
444 fYNN = new Double_t*[fParam_1.layerm];
445 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
446 // read number of neurons for each layer
447 // coverity[tainted_data_argument]
448 istr >> fNeur_1.neuron[layer];
449 fYNN[layer] = new Double_t[fNeur_1.neuron[layer]];
450 }
451
452 // to read dummy lines
453 const Int_t nchar( 100 );
454 char* dumchar = new char[nchar];
455
456 // read weights
457 for (Int_t layer=1; layer<=fParam_1.layerm-1; layer++) {
458
459 Int_t nq = fNeur_1.neuron[layer]/10;
460 Int_t nr = fNeur_1.neuron[layer] - nq*10;
461
462 Int_t kk(0);
463 if (nr==0) kk = nq;
464 else kk = nq+1;
465
466 for (Int_t k=1; k<=kk; k++) {
467 Int_t jmin = 10*k - 9;
468 Int_t jmax = 10*k;
469 if (fNeur_1.neuron[layer]<jmax) jmax = fNeur_1.neuron[layer];
470 for (Int_t j=jmin; j<=jmax; j++) {
471 istr >> Ww_ref(fNeur_1.ww, layer+1, j);
472 }
473 for (Int_t i=1; i<=fNeur_1.neuron[layer-1]; i++) {
474 for (Int_t j=jmin; j<=jmax; j++) {
475 istr >> W_ref(fNeur_1.w, layer+1, j, i);
476 }
477 }
478 // skip two empty lines
479 istr.getline( dumchar, nchar );
480 }
481 }
482
483 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
484
485 // skip 2 empty lines
486 istr.getline( dumchar, nchar );
487 istr.getline( dumchar, nchar );
488
489 istr >> fDel_1.temp[layer];
490 }
491
492 // sanity check
493 if ((Int_t)GetNvar() != fNeur_1.neuron[0]) {
494 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in zeroth layer:"
495 << GetNvar() << " " << fNeur_1.neuron[0] << Endl;
496 }
497
498 fNlayers = fParam_1.layerm;
499 delete[] dumchar;
500}
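// Sketch of the plain-text weight stream parsed above (reconstructed from the
// reads, so the exact line breaks are an assumption):
//    nvar lclass
//    xmax[0] xmin[0] ... xmax[nvar-1] xmin[nvar-1]
//    layerm                          (total number of layers)
//    neuron[0] ... neuron[layerm-1]  (nodes per layer)
//    per layer, in blocks of up to 10 neurons: the bias terms, then the
//    weight matrix rows from the previous layer
//    one "temperature" per layer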
501
502////////////////////////////////////////////////////////////////////////////////
503/// data interface function
504
505Int_t TMVA::MethodCFMlpANN::DataInterface( Double_t* /*tout2*/, Double_t* /*tin2*/,
506 Int_t* /* icode*/, Int_t* /*flag*/,
507 Int_t* /*nalire*/, Int_t* nvar,
508 Double_t* xpg, Int_t* iclass, Int_t* ikend )
509{
510 // icode and ikend are dummies needed to match f2c mlpl3 functions
511 *ikend = 0;
512
513
514 // sanity checks
515 if (0 == xpg) {
516 Log() << kFATAL << "ERROR in MethodCFMlpANN_DataInterface zero pointer xpg" << Endl;
517 }
518 if (*nvar != (Int_t)this->GetNvar()) {
519 Log() << kFATAL << "ERROR in MethodCFMlpANN_DataInterface mismatch in num of variables: "
520 << *nvar << " " << this->GetNvar() << Endl;
521 }
522
523 // fill variables
524 *iclass = (int)this->GetClass( MethodCFMlpANN_nsel );
525 for (UInt_t ivar=0; ivar<this->GetNvar(); ivar++)
526 xpg[ivar] = (double)this->GetData( MethodCFMlpANN_nsel, ivar );
527
528 ++MethodCFMlpANN_nsel;
529
530 return 0;
531}
532
533////////////////////////////////////////////////////////////////////////////////
534/// write weights to xml file
535
536void TMVA::MethodCFMlpANN::AddWeightsXMLTo( void* parent ) const
537{
538 void *wght = gTools().AddChild(parent, "Weights");
539 gTools().AddAttr(wght,"NVars",fParam_1.nvar);
540 gTools().AddAttr(wght,"NClasses",fParam_1.lclass);
541 gTools().AddAttr(wght,"NLayers",fParam_1.layerm);
542 void* minmaxnode = gTools().AddChild(wght, "VarMinMax");
543 stringstream s;
544 s.precision( 16 );
545 for (Int_t ivar=0; ivar<fParam_1.nvar; ivar++)
546 s << std::scientific << fVarn_1.xmin[ivar] << " " << fVarn_1.xmax[ivar] << " ";
547 gTools().AddRawLine( minmaxnode, s.str().c_str() );
548 void* neurons = gTools().AddChild(wght, "NNeurons");
549 stringstream n;
550 n.precision( 16 );
551 for (Int_t layer=0; layer<fParam_1.layerm; layer++)
552 n << std::scientific << fNeur_1.neuron[layer] << " ";
553 gTools().AddRawLine( neurons, n.str().c_str() );
554 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
555 void* layernode = gTools().AddChild(wght, "Layer"+gTools().StringFromInt(layer));
556 gTools().AddAttr(layernode,"NNeurons",fNeur_1.neuron[layer]);
557 void* neuronnode=NULL;
558 for (Int_t neuron=0; neuron<fNeur_1.neuron[layer]; neuron++) {
559 neuronnode = gTools().AddChild(layernode,"Neuron"+gTools().StringFromInt(neuron));
560 stringstream weights;
561 weights.precision( 16 );
562 weights << std::scientific << Ww_ref(fNeur_1.ww, layer+1, neuron+1);
563 for (Int_t i=0; i<fNeur_1.neuron[layer-1]; i++) {
564 weights << " " << std::scientific << W_ref(fNeur_1.w, layer+1, neuron+1, i+1);
565 }
566 gTools().AddRawLine( neuronnode, weights.str().c_str() );
567 }
568 }
569 void* tempnode = gTools().AddChild(wght, "LayerTemp");
570 stringstream temp;
571 temp.precision( 16 );
572 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
573 temp << std::scientific << fDel_1.temp[layer] << " ";
574 }
575 gTools().AddRawLine(tempnode, temp.str().c_str() );
576}
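// The XML fragment written above has the following shape (reconstructed from
// the calls; attribute values are examples):
//
//    <Weights NVars="4" NClasses="2" NLayers="4">
//      <VarMinMax> xmin0 xmax0 xmin1 xmax1 ... </VarMinMax>
//      <NNeurons> 4 4 3 2 </NNeurons>
//      <Layer1 NNeurons="4">
//        <Neuron0> bias w_1 w_2 ... </Neuron0>
//        ...
//      </Layer1>
//      ...
//      <LayerTemp> T_0 T_1 ... </LayerTemp>
//    </Weights>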
577////////////////////////////////////////////////////////////////////////////////
578/// read weights from xml file
579
580void TMVA::MethodCFMlpANN::ReadWeightsFromXML( void* wghtnode )
581{
582 gTools().ReadAttr( wghtnode, "NLayers",fParam_1.layerm );
583 void* minmaxnode = gTools().GetChild(wghtnode);
584 const char* minmaxcontent = gTools().GetContent(minmaxnode);
585 stringstream content(minmaxcontent);
586 for (UInt_t ivar=0; ivar<GetNvar(); ivar++)
587 content >> fVarn_1.xmin[ivar] >> fVarn_1.xmax[ivar];
588 if (fYNN != 0) {
589 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
590 delete[] fYNN;
591 fYNN = 0;
592 }
593 fYNN = new Double_t*[fParam_1.layerm];
594 void* layernode = gTools().GetNextChild(minmaxnode);
595 const char* neuronscontent = gTools().GetContent(layernode);
596 stringstream ncontent(neuronscontent);
597 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
598 // read number of neurons for each layer;
599 // coverity[tainted_data_argument]
600 ncontent >> fNeur_1.neuron[layer];
601 fYNN[layer] = new Double_t[fNeur_1.neuron[layer]];
602 }
603 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
604 layernode = gTools().GetNextChild(layernode);
605 void* neuronnode=NULL;
606 neuronnode = gTools().GetChild(layernode);
607 for (Int_t neuron=0; neuron<fNeur_1.neuron[layer]; neuron++) {
608 const char* neuronweights = gTools().GetContent(neuronnode);
609 stringstream weights(neuronweights);
610 weights >> Ww_ref(fNeur_1.ww, layer+1, neuron+1);
611 for (Int_t i=0; i<fNeur_1.neuron[layer-1]; i++) {
612 weights >> W_ref(fNeur_1.w, layer+1, neuron+1, i+1);
613 }
614 neuronnode = gTools().GetNextChild(neuronnode);
615 }
616 }
617 void* tempnode = gTools().GetNextChild(layernode);
618 const char* temp = gTools().GetContent(tempnode);
619 stringstream t(temp);
620 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
621 t >> fDel_1.temp[layer];
622 }
623 fNlayers = fParam_1.layerm;
624}
625
626////////////////////////////////////////////////////////////////////////////////
627/// write the weights of the neural net
628
629void TMVA::MethodCFMlpANN::PrintWeights( std::ostream & o ) const
630{
631 // write number of variables and classes
632 o << "Number of vars " << fParam_1.nvar << std::endl;
633 o << "Output nodes " << fParam_1.lclass << std::endl;
634
635 // write extrema of input variables
636 for (Int_t ivar=0; ivar<fParam_1.nvar; ivar++)
637 o << "Var " << ivar << " [" << fVarn_1.xmin[ivar] << " - " << fVarn_1.xmax[ivar] << "]" << std::endl;
638
639 // write number of layers (sum of: input + output + hidden)
640 o << "Number of layers " << fParam_1.layerm << std::endl;
641
642 o << "Nodes per layer ";
643 for (Int_t layer=0; layer<fParam_1.layerm; layer++)
644 // write number of neurons for each layer
645 o << fNeur_1.neuron[layer] << " ";
646 o << std::endl;
647
648 // write weights
649 for (Int_t layer=1; layer<=fParam_1.layerm-1; layer++) {
650
651 Int_t nq = fNeur_1.neuron[layer]/10;
652 Int_t nr = fNeur_1.neuron[layer] - nq*10;
653
654 Int_t kk(0);
655 if (nr==0) kk = nq;
656 else kk = nq+1;
657
658 for (Int_t k=1; k<=kk; k++) {
659 Int_t jmin = 10*k - 9;
660 Int_t jmax = 10*k;
661 Int_t i, j;
662 if (fNeur_1.neuron[layer]<jmax) jmax = fNeur_1.neuron[layer];
663 for (j=jmin; j<=jmax; j++) {
664
665 //o << fNeur_1.ww[j*max_nLayers_ + layer - 6] << " ";
666 o << Ww_ref(fNeur_1.ww, layer+1, j) << " ";
667
668 }
669 o << std::endl;
670 //for (i=1; i<=fNeur_1.neuron[layer-1]; i++) {
671 for (i=1; i<=fNeur_1.neuron[layer-1]; i++) {
672 for (j=jmin; j<=jmax; j++) {
673 // o << fNeur_1.w[(i*max_nNodes_ + j)*max_nLayers_ + layer - 186] << " ";
674 o << W_ref(fNeur_1.w, layer+1, j, i) << " ";
675 }
676 o << std::endl;
677 }
678
679 // skip two empty lines
680 o << std::endl;
681 }
682 }
683 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
684 o << "Del.temp in layer " << layer << " : " << fDel_1.temp[layer] << std::endl;
685 }
686}
687
688////////////////////////////////////////////////////////////////////////////////
689
690void TMVA::MethodCFMlpANN::MakeClassSpecific( std::ostream& fout, const TString& className ) const
691{
692 // write specific classifier response
693 fout << " // not implemented for class: \"" << className << "\"" << std::endl;
694 fout << "};" << std::endl;
695}
696
697////////////////////////////////////////////////////////////////////////////////
698/// write specific classifier response for header
699
700void TMVA::MethodCFMlpANN::MakeClassSpecificHeader( std::ostream& , const TString& ) const
701{
702}
703
704////////////////////////////////////////////////////////////////////////////////
705/// get help message text
706///
707/// typical length of text line:
708/// "|--------------------------------------------------------------|"
709
710void TMVA::MethodCFMlpANN::GetHelpMessage() const
711{
712 Log() << Endl;
713 Log() << gTools().Color("bold") << "--- Short description:" << gTools().Color("reset") << Endl;
714 Log() << Endl;
715 Log() << "<None>" << Endl;
716 Log() << Endl;
717 Log() << gTools().Color("bold") << "--- Performance optimisation:" << gTools().Color("reset") << Endl;
718 Log() << Endl;
719 Log() << "<None>" << Endl;
720 Log() << Endl;
721 Log() << gTools().Color("bold") << "--- Performance tuning via configuration options:" << gTools().Color("reset") << Endl;
722 Log() << Endl;
723 Log() << "<None>" << Endl;
724}