Running with nthreads = 4
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 1000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 1000 events
Factory : Booking method: BDT
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Using variable vars[0] from array expression vars of size 256
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
: Using variable vars[0] from array expression vars of size 256
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 800
: Signal -- testing events : 200
: Signal -- training and testing events: 1000
: Background -- training events : 800
: Background -- testing events : 200
: Background -- training and testing events: 1000
:
Factory : Booking method: TMVA_DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: Layout: "DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: InputLayout: "0|0|0" [The Layout of the input]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
Factory : Booking method: TMVA_CNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|16|16" [The Layout of the input]
: Layout: "CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
Factory : Train all methods
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 800 bkg: 800
: #events: (unweighted) sig: 800 bkg: 800
: Training 400 Decision Trees ... patience please
: Elapsed time for training with 1600 events: 1.34 sec
BDT : [dataset] : Evaluation of BDT on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.0176 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_BDT.class.C
: TMVA_CNN_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: TMVA_DNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 8 Input = ( 1, 1, 256 ) Batch size = 100 Loss function = C
Layer 0 DENSE Layer: ( Input = 256 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 2 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 3 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 4 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 5 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 6 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 7 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 1280 events for training and 320 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 14.0803
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.939939 0.79883 0.116823 0.0115521 11399.2 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.709273 0.768999 0.119165 0.0123079 11229.9 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.610491 0.729211 0.120687 0.0113797 10978.2 0
: 4 | 0.542963 0.736765 0.117771 0.0117198 11315.3 1
: 5 | 0.466065 0.740315 0.125014 0.0117561 10595.3 2
: 6 | 0.422436 0.737276 0.120068 0.0118431 11088 3
: 7 | 0.361219 0.73756 0.120162 0.0117564 11069.6 4
: 8 | 0.338099 0.817913 0.118405 0.0116171 11237.2 5
: 9 | 0.291961 0.788546 0.119624 0.0116034 11109 6
:
: Elapsed time for training with 1600 events: 1.1 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.0627 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: TMVA_CNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 7 Input = ( 1, 16, 16 ) Batch size = 100 Loss function = C
Layer 0 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 10 , 256 , 100 ) Norm dim = 10 axis = 1
Layer 2 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 3 POOL Layer: ( W = 15 , H = 15 , D = 10 ) Filter ( W = 2 , H = 2 ) Output = ( 100 , 10 , 10 , 225 )
Layer 4 RESHAPE Layer Input = ( 10 , 15 , 15 ) Output = ( 1 , 100 , 2250 )
Layer 5 DENSE Layer: ( Input = 2250 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 6 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 1280 events for training and 320 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 89.511
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 1.8727 0.795331 0.914378 0.0750121 1429.65 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.76852 0.738468 0.910126 0.078191 1442.42 0
: 3 | 0.718273 0.744798 0.876324 0.0691804 1486.72 1
: 4 Minimum Test error found - save the configuration
: 4 | 0.686016 0.696356 0.88113 0.0769613 1492.22 0
: 5 | 0.657738 0.698131 0.900093 0.0759498 1456.06 1
: 6 Minimum Test error found - save the configuration
: 6 | 0.652634 0.679112 0.933566 0.0881071 1419.35 0
: 7 | 0.617941 0.686965 0.883596 0.0787711 1491.01 1
: 8 | 0.597027 0.681877 0.869577 0.071505 1503.62 2
: 9 Minimum Test error found - save the configuration
: 9 | 0.571234 0.674748 0.917084 0.0726142 1421.01 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.541556 0.671306 0.967583 0.0783594 1349.49 0
:
: Elapsed time for training with 1600 events: 9.13 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.399 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
BDT : Ranking result (top variable is best ranked)
: --------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------
: 1 : vars : 1.094e-02
: 2 : vars : 9.633e-03
: 3 : vars : 9.211e-03
: 4 : vars : 8.955e-03
: 5 : vars : 8.688e-03
: 6 : vars : 8.261e-03
: 7 : vars : 8.131e-03
: 8 : vars : 7.926e-03
: 9 : vars : 7.732e-03
: 10 : vars : 7.717e-03
: 11 : vars : 7.433e-03
: 12 : vars : 7.337e-03
: 13 : vars : 7.294e-03
: 14 : vars : 7.245e-03
: 15 : vars : 7.210e-03
: 16 : vars : 7.173e-03
: 17 : vars : 7.172e-03
: 18 : vars : 7.048e-03
: 19 : vars : 6.993e-03
: 20 : vars : 6.678e-03
: 21 : vars : 6.524e-03
: 22 : vars : 6.449e-03
: 23 : vars : 6.446e-03
: 24 : vars : 6.422e-03
: 25 : vars : 6.361e-03
: 26 : vars : 6.239e-03
: 27 : vars : 6.049e-03
: 28 : vars : 5.996e-03
: 29 : vars : 5.978e-03
: 30 : vars : 5.929e-03
: 31 : vars : 5.901e-03
: 32 : vars : 5.888e-03
: 33 : vars : 5.810e-03
: 34 : vars : 5.771e-03
: 35 : vars : 5.676e-03
: 36 : vars : 5.574e-03
: 37 : vars : 5.554e-03
: 38 : vars : 5.526e-03
: 39 : vars : 5.515e-03
: 40 : vars : 5.469e-03
: 41 : vars : 5.466e-03
: 42 : vars : 5.461e-03
: 43 : vars : 5.449e-03
: 44 : vars : 5.449e-03
: 45 : vars : 5.428e-03
: 46 : vars : 5.427e-03
: 47 : vars : 5.417e-03
: 48 : vars : 5.374e-03
: 49 : vars : 5.350e-03
: 50 : vars : 5.337e-03
: 51 : vars : 5.278e-03
: 52 : vars : 5.270e-03
: 53 : vars : 5.215e-03
: 54 : vars : 5.194e-03
: 55 : vars : 5.163e-03
: 56 : vars : 5.155e-03
: 57 : vars : 5.153e-03
: 58 : vars : 5.137e-03
: 59 : vars : 5.133e-03
: 60 : vars : 5.130e-03
: 61 : vars : 5.110e-03
: 62 : vars : 5.040e-03
: 63 : vars : 5.001e-03
: 64 : vars : 4.998e-03
: 65 : vars : 4.984e-03
: 66 : vars : 4.964e-03
: 67 : vars : 4.924e-03
: 68 : vars : 4.924e-03
: 69 : vars : 4.896e-03
: 70 : vars : 4.860e-03
: 71 : vars : 4.834e-03
: 72 : vars : 4.803e-03
: 73 : vars : 4.765e-03
: 74 : vars : 4.763e-03
: 75 : vars : 4.747e-03
: 76 : vars : 4.732e-03
: 77 : vars : 4.724e-03
: 78 : vars : 4.712e-03
: 79 : vars : 4.690e-03
: 80 : vars : 4.673e-03
: 81 : vars : 4.648e-03
: 82 : vars : 4.646e-03
: 83 : vars : 4.638e-03
: 84 : vars : 4.610e-03
: 85 : vars : 4.590e-03
: 86 : vars : 4.573e-03
: 87 : vars : 4.532e-03
: 88 : vars : 4.526e-03
: 89 : vars : 4.519e-03
: 90 : vars : 4.517e-03
: 91 : vars : 4.491e-03
: 92 : vars : 4.423e-03
: 93 : vars : 4.416e-03
: 94 : vars : 4.404e-03
: 95 : vars : 4.338e-03
: 96 : vars : 4.334e-03
: 97 : vars : 4.333e-03
: 98 : vars : 4.329e-03
: 99 : vars : 4.324e-03
: 100 : vars : 4.316e-03
: 101 : vars : 4.304e-03
: 102 : vars : 4.300e-03
: 103 : vars : 4.277e-03
: 104 : vars : 4.246e-03
: 105 : vars : 4.237e-03
: 106 : vars : 4.231e-03
: 107 : vars : 4.224e-03
: 108 : vars : 4.223e-03
: 109 : vars : 4.206e-03
: 110 : vars : 4.173e-03
: 111 : vars : 4.164e-03
: 112 : vars : 4.138e-03
: 113 : vars : 4.124e-03
: 114 : vars : 4.050e-03
: 115 : vars : 4.039e-03
: 116 : vars : 4.028e-03
: 117 : vars : 3.997e-03
: 118 : vars : 3.978e-03
: 119 : vars : 3.965e-03
: 120 : vars : 3.961e-03
: 121 : vars : 3.952e-03
: 122 : vars : 3.945e-03
: 123 : vars : 3.928e-03
: 124 : vars : 3.927e-03
: 125 : vars : 3.914e-03
: 126 : vars : 3.913e-03
: 127 : vars : 3.887e-03
: 128 : vars : 3.854e-03
: 129 : vars : 3.830e-03
: 130 : vars : 3.821e-03
: 131 : vars : 3.804e-03
: 132 : vars : 3.788e-03
: 133 : vars : 3.780e-03
: 134 : vars : 3.774e-03
: 135 : vars : 3.774e-03
: 136 : vars : 3.773e-03
: 137 : vars : 3.765e-03
: 138 : vars : 3.753e-03
: 139 : vars : 3.695e-03
: 140 : vars : 3.687e-03
: 141 : vars : 3.686e-03
: 142 : vars : 3.684e-03
: 143 : vars : 3.673e-03
: 144 : vars : 3.673e-03
: 145 : vars : 3.653e-03
: 146 : vars : 3.647e-03
: 147 : vars : 3.642e-03
: 148 : vars : 3.639e-03
: 149 : vars : 3.607e-03
: 150 : vars : 3.595e-03
: 151 : vars : 3.584e-03
: 152 : vars : 3.575e-03
: 153 : vars : 3.564e-03
: 154 : vars : 3.556e-03
: 155 : vars : 3.534e-03
: 156 : vars : 3.514e-03
: 157 : vars : 3.509e-03
: 158 : vars : 3.507e-03
: 159 : vars : 3.507e-03
: 160 : vars : 3.488e-03
: 161 : vars : 3.448e-03
: 162 : vars : 3.420e-03
: 163 : vars : 3.419e-03
: 164 : vars : 3.381e-03
: 165 : vars : 3.379e-03
: 166 : vars : 3.372e-03
: 167 : vars : 3.365e-03
: 168 : vars : 3.364e-03
: 169 : vars : 3.363e-03
: 170 : vars : 3.329e-03
: 171 : vars : 3.327e-03
: 172 : vars : 3.320e-03
: 173 : vars : 3.287e-03
: 174 : vars : 3.242e-03
: 175 : vars : 3.229e-03
: 176 : vars : 3.215e-03
: 177 : vars : 3.172e-03
: 178 : vars : 3.167e-03
: 179 : vars : 3.143e-03
: 180 : vars : 3.138e-03
: 181 : vars : 3.133e-03
: 182 : vars : 3.117e-03
: 183 : vars : 3.056e-03
: 184 : vars : 3.037e-03
: 185 : vars : 3.026e-03
: 186 : vars : 3.022e-03
: 187 : vars : 3.016e-03
: 188 : vars : 2.939e-03
: 189 : vars : 2.923e-03
: 190 : vars : 2.918e-03
: 191 : vars : 2.906e-03
: 192 : vars : 2.891e-03
: 193 : vars : 2.887e-03
: 194 : vars : 2.878e-03
: 195 : vars : 2.859e-03
: 196 : vars : 2.851e-03
: 197 : vars : 2.850e-03
: 198 : vars : 2.827e-03
: 199 : vars : 2.814e-03
: 200 : vars : 2.743e-03
: 201 : vars : 2.705e-03
: 202 : vars : 2.646e-03
: 203 : vars : 2.634e-03
: 204 : vars : 2.601e-03
: 205 : vars : 2.528e-03
: 206 : vars : 2.464e-03
: 207 : vars : 2.455e-03
: 208 : vars : 2.447e-03
: 209 : vars : 2.441e-03
: 210 : vars : 2.388e-03
: 211 : vars : 2.319e-03
: 212 : vars : 2.257e-03
: 213 : vars : 2.248e-03
: 214 : vars : 2.230e-03
: 215 : vars : 2.192e-03
: 216 : vars : 2.131e-03
: 217 : vars : 2.086e-03
: 218 : vars : 2.071e-03
: 219 : vars : 2.034e-03
: 220 : vars : 2.023e-03
: 221 : vars : 1.977e-03
: 222 : vars : 1.974e-03
: 223 : vars : 1.946e-03
: 224 : vars : 1.917e-03
: 225 : vars : 1.888e-03
: 226 : vars : 1.879e-03
: 227 : vars : 1.849e-03
: 228 : vars : 1.835e-03
: 229 : vars : 1.767e-03
: 230 : vars : 1.598e-03
: 231 : vars : 1.540e-03
: 232 : vars : 1.504e-03
: 233 : vars : 1.447e-03
: 234 : vars : 1.368e-03
: 235 : vars : 8.855e-04
: 236 : vars : 7.247e-04
: 237 : vars : 2.653e-04
: 238 : vars : 2.028e-04
: 239 : vars : 0.000e+00
: 240 : vars : 0.000e+00
: 241 : vars : 0.000e+00
: 242 : vars : 0.000e+00
: 243 : vars : 0.000e+00
: 244 : vars : 0.000e+00
: 245 : vars : 0.000e+00
: 246 : vars : 0.000e+00
: 247 : vars : 0.000e+00
: 248 : vars : 0.000e+00
: 249 : vars : 0.000e+00
: 250 : vars : 0.000e+00
: 251 : vars : 0.000e+00
: 252 : vars : 0.000e+00
: 253 : vars : 0.000e+00
: 254 : vars : 0.000e+00
: 255 : vars : 0.000e+00
: 256 : vars : 0.000e+00
: --------------------------------------
: No variable ranking supplied by classifier: TMVA_DNN_CPU
: No variable ranking supplied by classifier: TMVA_CNN_CPU
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_trainingError, Entries= 0, Total sum= 4.68245
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_valError, Entries= 0, Total sum= 6.85542
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_trainingError, Entries= 0, Total sum= 7.68364
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_valError, Entries= 0, Total sum= 7.06709
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
Factory : Test all methods
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.00473 sec
Factory : Test method: TMVA_DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 400
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.0144 sec
Factory : Test method: TMVA_CNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 400
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.102 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, larger than 200
Factory : Evaluate classifier: TMVA_DNN_CPU
:
TMVA_DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, larger than 200
Factory : Evaluate classifier: TMVA_CNN_CPU
:
TMVA_CNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, larger than 200
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset BDT : 0.772
: dataset TMVA_CNN_CPU : 0.685
: dataset TMVA_DNN_CPU : 0.596
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset BDT : 0.215 (0.405) 0.325 (0.770) 0.690 (0.900)
: dataset TMVA_CNN_CPU : 0.065 (0.065) 0.315 (0.396) 0.561 (0.655)
: dataset TMVA_DNN_CPU : 0.008 (0.012) 0.162 (0.265) 0.422 (0.529)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 400 events
:
Dataset:dataset : Created tree 'TrainTree' with 1600 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
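The evaluation summary above can be reproduced offline from the TestTree that the Factory writes into the output file. The short sketch below is not part of the tutorial itself; it reads that tree with RDataFrame and assumes the dataset/TestTree path shown in the log and the usual TMVA convention that classID is 0 for signal and 1 for background:

import ROOT

# the test sample is stored under <dataloader>/TestTree in the output file,
# with one response branch per booked method (here: "BDT")
df = ROOT.RDataFrame("dataset/TestTree", "TMVA_CNN_ClassificationOutput.root")

# compare the BDT response of true signal and true background events
hS = df.Filter("classID == 0").Histo1D(("hS", "BDT response (signal)", 40, -0.6, 0.6), "BDT")
hB = df.Filter("classID == 1").Histo1D(("hB", "BDT response (background)", 40, -0.6, 0.6), "BDT")
print("mean BDT response: signal =", hS.GetMean(), " background =", hB.GetMean())

The full tutorial script follows.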
import ROOT
import os
import importlib

TMVA = ROOT.TMVA
TFile = ROOT.TFile

# switch the individual methods on or off (1 = use, 0 = skip)
opt = [1, 1, 1, 1, 1]
useTMVACNN = opt[0] if len(opt) > 0 else False
useKerasCNN = opt[1] if len(opt) > 1 else False
useTMVADNN = opt[2] if len(opt) > 2 else False
useTMVABDT = opt[3] if len(opt) > 3 else False
usePyTorchCNN = opt[4] if len(opt) > 4 else False
def MakeImagesTree(n, nh, nw):
    # generate n signal and n background "images" of nh x nw pixels
    ntot = nh * nw
    fileOutName = "images_data_16x16.root"

    nRndmEvts = 10000  # events used to fill each image
    delta_sigma = 0.1  # difference in the Gaussian width between signal and background
    pixelNoise = 5     # noise added to each pixel

    sX1 = 3
    sY1 = 3
    sX2 = sX1 + delta_sigma
    sY2 = sY1 - delta_sigma

    h1 = ROOT.TH2D("h1", "h1", nh, 0, 10, nw, 0, 10)
    h2 = ROOT.TH2D("h2", "h2", nh, 0, 10, nw, 0, 10)
    f1 = ROOT.TF2("f1", "xygaus")
    f2 = ROOT.TF2("f2", "xygaus")

    sgn = ROOT.TTree("sig_tree", "signal_tree")
    bkg = ROOT.TTree("bkg_tree", "background_tree")
    f = TFile(fileOutName, "RECREATE")

    # one branch per tree: an std::vector<float> with the nh x nw pixel values
    x1 = ROOT.std.vector["float"](ntot)
    x2 = ROOT.std.vector["float"](ntot)
    bkg.Branch("vars", "std::vector<float>", x1)
    sgn.Branch("vars", "std::vector<float>", x2)
    sgn.SetDirectory(f)
    bkg.SetDirectory(f)

    f1.SetParameters(1, 5, sX1, 5, sY1)
    f2.SetParameters(1, 5, sX2, 5, sY2)
    ROOT.gRandom.SetSeed(0)
    ROOT.Info("TMVA_CNN_Classification", "Filling ROOT tree \n")
    for i in range(n):
        if i % 1000 == 0:
            print("Generating image event ...", i)

        # generate random means in [3, 7] to stay away from the image borders
        f1.SetParameter(1, ROOT.gRandom.Uniform(3, 7))
        f1.SetParameter(3, ROOT.gRandom.Uniform(3, 7))
        f2.SetParameter(1, ROOT.gRandom.Uniform(3, 7))
        f2.SetParameter(3, ROOT.gRandom.Uniform(3, 7))

        h1.Reset()
        h2.Reset()
        h1.FillRandom("f1", nRndmEvts)
        h2.FillRandom("f2", nRndmEvts)

        # copy the histogram contents into the image vectors, adding pixel noise
        for k in range(nh):
            for l in range(nw):
                m = k * nw + l
                x1[m] = h1.GetBinContent(k + 1, l + 1) + ROOT.gRandom.Gaus(0, pixelNoise)
                x2[m] = h2.GetBinContent(k + 1, l + 1) + ROOT.gRandom.Gaus(0, pixelNoise)

        sgn.Fill()
        bkg.Fill()

    sgn.Write()
    bkg.Write()

    print("Signal and background tree with images data written to the file %s" % f.GetName())
    sgn.Print()
    bkg.Print()
    f.Close()
nevt = 1000  # use a larger value to get better results

hasGPU = ROOT.gSystem.GetFromPipe("root-config --has-tmva-gpu") == "yes"
hasCPU = ROOT.gSystem.GetFromPipe("root-config --has-tmva-cpu") == "yes"

if (not hasCPU and not hasGPU) :
    ROOT.Warning(
        "TMVA_CNN_Classification",
        "ROOT was built without tmva-cpu and tmva-gpu support - skipping TMVA-DNN and TMVA-CNN")
    useTMVACNN = False
    useTMVADNN = False

# the PyMVA methods (Keras, PyTorch) need the tmva-pymva feature
if "tmva-pymva" not in ROOT.gROOT.GetConfigFeatures():
    useKerasCNN = False
    usePyTorchCNN = False
else:
    TMVA.PyMethodBase.PyInitialize()

# disable the PyMVA methods if the corresponding python packages are missing
if importlib.util.find_spec("tensorflow") is None:
    useKerasCNN = False
    ROOT.Warning("TMVA_CNN_Classification", "Skip using Keras since tensorflow is not installed")
if importlib.util.find_spec("torch") is None:
    usePyTorchCNN = False
    ROOT.Warning("TMVA_CNN_Classification", "Skip using PyTorch since torch is not installed")

if not useTMVACNN:
    ROOT.Warning(
        "TMVA_CNN_Classification",
        "TMVA is not built with GPU or CPU multi-thread support. Cannot use TMVA Deep Learning for CNN",
    )

writeOutputFile = True
num_threads = 4   # use at most 4 threads
max_epochs = 10   # maximum number of epochs used for training

# enable multi-threaded running if ROOT was built with MT support
if "imt" in ROOT.gROOT.GetConfigFeatures():
    ROOT.EnableImplicitMT(num_threads)
    print("Running with nthreads =", ROOT.GetThreadPoolSize())
else:
    print("Running in serial mode since ROOT does not support MT")

outputFile = None
if writeOutputFile:
    outputFile = TFile.Open("TMVA_CNN_ClassificationOutput.root", "RECREATE")

# create the TMVA Factory; all booked methods write their results to outputFile
factory = TMVA.Factory(
    "TMVA_CNN_Classification",
    outputFile,
    V=False,
    ROC=True,
    Silent=False,
    Color=True,
    AnalysisType="Classification",
    Transformations=None,
    Correlations=False,
)
# declare the DataLoader; the 16x16 image pixels are the input variables
loader = TMVA.DataLoader("dataset")

imgSize = 16 * 16
inputFileName = "images_data_16x16.root"

# generate the input image file if it does not exist yet
if ROOT.gSystem.AccessPathName(inputFileName):
    MakeImagesTree(nevt, 16, 16)

inputFile = TFile.Open(inputFileName)
if inputFile is None:
    ROOT.Warning("TMVA_CNN_Classification", "Error opening input file %s - exit" % inputFileName)

# register the signal and background trees with global weights of 1
signalTree = inputFile.Get("sig_tree")
backgroundTree = inputFile.Get("bkg_tree")
nEventsSig = signalTree.GetEntries()
nEventsBkg = backgroundTree.GetEntries()

signalWeight = 1.0
backgroundWeight = 1.0
loader.AddSignalTree(signalTree, signalWeight)
loader.AddBackgroundTree(backgroundTree, backgroundWeight)

# add all image pixels as a single variable array (available since ROOT 6.20)
loader.AddVariablesArray("vars", imgSize)

# optional cuts on the signal and background samples (none here)
mycuts = ""  # e.g. "abs(var1) < 0.5 && abs(var2 - 0.5) < 1"
mycutb = ""

# use 80% of the events for training and the remaining 20% for testing
nTrainSig = 0.8 * nEventsSig
nTrainBkg = 0.8 * nEventsBkg

loader.PrepareTrainingAndTestTree(
    mycuts,
    mycutb,
    nTrain_Signal=nTrainSig,
    nTrain_Background=nTrainBkg,
    SplitMode="Random",
    SplitSeed=100,
    NormMode="NumEvents",
    V=False,
    CalcCorrelations=False,
)
# Boosted Decision Trees
if useTMVABDT:
    factory.BookMethod(
        loader,
        TMVA.Types.kBDT,
        "BDT",
        V=False,
        NTrees=400,
        MinNodeSize="2.5%",
        MaxDepth=2,
        BoostType="AdaBoost",
        AdaBoostBeta=0.5,
        UseBaggedBoost=True,
        BaggedSampleFraction=0.5,
        SeparationType="GiniIndex",
        nCuts=20,
    )
# TMVA deep neural network: four dense layers with batch normalization
if useTMVADNN:
    layoutString = ROOT.TString(
        "DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR"
    )

    # single training phase using the ADAM optimizer
    trainingString1 = ROOT.TString(
        "LearningRate=1e-3,Momentum=0.9,Repetitions=1,"
        "ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,"
        "WeightDecay=1e-4,Regularization=None,"
        "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0."
    )
    trainingString1 += ",MaxEpochs=" + str(max_epochs)

    # run on the GPU if TMVA was built with GPU support
    dnnMethodName = "TMVA_DNN_CPU"
    dnnOptions = "CPU"
    if hasGPU :
        dnnOptions = "GPU"
        dnnMethodName = "TMVA_DNN_GPU"

    factory.BookMethod(
        loader,
        TMVA.Types.kDL,
        dnnMethodName,
        H=False,
        V=True,
        ErrorStrategy="CROSSENTROPY",
        VarTransform=None,
        WeightInitialization="XAVIER",
        Layout=layoutString,
        TrainingStrategy=trainingString1,
        Architecture=dnnOptions,
    )
# TMVA convolutional network: two 3x3 convolutions, max pooling, one dense layer
if useTMVACNN:
    trainingString1 = ROOT.TString(
        "LearningRate=1e-3,Momentum=0.9,Repetitions=1,"
        "ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,"
        "WeightDecay=1e-4,Regularization=None,"
        "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0"
    )
    trainingString1 += ",MaxEpochs=" + str(max_epochs)

    # run on the GPU if TMVA was built with GPU support
    cnnMethodName = "TMVA_CNN_CPU"
    cnnOptions = "CPU"
    if hasGPU:
        cnnOptions = "GPU"
        cnnMethodName = "TMVA_CNN_GPU"

    factory.BookMethod(
        loader,
        TMVA.Types.kDL,
        cnnMethodName,
        H=False,
        V=True,
        ErrorStrategy="CROSSENTROPY",
        VarTransform=None,
        WeightInitialization="XAVIER",
        InputLayout="1|16|16",
        Layout="CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR",
        TrainingStrategy=trainingString1,
        Architecture=cnnOptions,
    )
# convolutional network implemented in PyTorch, trained through PyMVA
if usePyTorchCNN:
    ROOT.Info("TMVA_CNN_Classification", "Using Convolutional PyTorch Model")
    pyTorchFileName = str(ROOT.gROOT.GetTutorialDir())
    pyTorchFileName += "/machine_learning/PyTorch_Generate_CNN_Model.py"
    # book the PyTorch method only if torch is available and the model script exists
    torch_spec = importlib.util.find_spec("torch")
    if torch_spec is not None and os.path.exists(pyTorchFileName):
        ROOT.Info("TMVA_CNN_Classification", "Booking PyTorch CNN model")
        factory.BookMethod(
            loader,
            TMVA.Types.kPyTorch,
            "PyTorch",
            H=True,
            V=False,
            VarTransform=None,
            FilenameModel="PyTorchModelCNN.pt",
            FilenameTrainedModel="PyTorchTrainedModelCNN.pt",
            NumEpochs=max_epochs,
            BatchSize=100,
            UserCode=str(pyTorchFileName),
        )
    else:
        ROOT.Warning(
            "TMVA_CNN_Classification",
            "PyTorch is not installed or the model generation file does not exist - skip using PyTorch",
        )
# convolutional network implemented in Keras, trained through PyMVA
if useKerasCNN:
    ROOT.Info("TMVA_CNN_Classification", "Building convolutional keras model")
    import tensorflow
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D, Reshape

    model = Sequential()
    model.add(Reshape((16, 16, 1), input_shape=(256,)))
    model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer="TruncatedNormal", activation="relu", padding="same"))
    model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer="TruncatedNormal", activation="relu", padding="same"))
    # the max-pool stride defaults to the pool size
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(256, activation="relu"))
    model.add(Dense(2, activation="sigmoid"))
    model.compile(loss="binary_crossentropy", optimizer=Adam(learning_rate=0.001), weighted_metrics=["accuracy"])
    model.save("model_cnn.h5")
    model.summary()

    if not os.path.exists("model_cnn.h5"):
        ROOT.Warning("TMVA_CNN_Classification", "Error creating Keras model file - skip using Keras")
    else:
        # book the PyKeras method only if the model file could be created
        ROOT.Info("TMVA_CNN_Classification", "Booking convolutional keras model")
        factory.BookMethod(
            loader,
            "PyKeras",
            H=True,
            V=False,
            VarTransform=None,
            FilenameModel="model_cnn.h5",
            FilenameTrainedModel="trained_model_cnn.h5",
            NumEpochs=max_epochs,
            BatchSize=100,
            GpuOptions="allow_growth=True",
        )

# train, test and evaluate all booked methods
factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()

# draw the ROC curves for all methods and close the output file
c1 = factory.GetROCCurve(loader)
c1.Draw()

if outputFile:
    outputFile.Close()
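Once TrainAllMethods has written the XML weight files listed in the training log, a trained method can also be applied outside the Factory. The sketch below is a minimal illustration, assuming the BDT weight-file path from the log; TMVA::Experimental::RReader parses the XML file and evaluates the method on a flat vector of the 256 input pixels:

import ROOT

# build a reader from the XML weight file written during training
reader = ROOT.TMVA.Experimental.RReader("dataset/weights/TMVA_CNN_Classification_BDT.weights.xml")

# evaluate one image: a flat std::vector<float> with 256 pixel values
# (an all-zero image is used here purely as a placeholder input)
x = ROOT.std.vector["float"](256)
y = reader.Compute(x)
print("BDT response:", y[0])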