TMVA_RNN_Classification.C File Reference

Detailed Description

View in nbviewer Open in SWAN
TMVA Classification Example Using a Recurrent Neural Network

This is an example of using an RNN in TMVA. We perform classification on a toy time-dependent data set that is generated when running this example macro.
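The macro follows the standard TMVA workflow: open an output file, create a Factory and a DataLoader, book the methods, then train, test and evaluate. A minimal sketch of the setup (object and file names taken from the output below; the exact code is in the source listing at the end of this page):

// Sketch of the setup; the Factory option string shown here is a
// typical choice, not necessarily the one used by the macro.
auto outputFile = TFile::Open("data_RNN_CPU.root", "RECREATE");
TMVA::Factory factory("TMVAClassification", outputFile,
                      "!V:!Silent:Color:AnalysisType=Classification");
TMVA::DataLoader dataloader("dataset");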

Running with nthreads = 4
--- RNNClassification : Using input file: time_data_t10_d30.root
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sgn of type Signal with 2000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg of type Background with 2000 events
number of variables is 300
vars_time0[0], vars_time0[1], ..., vars_time0[29], vars_time1[0], ..., vars_time1[29], ..., vars_time9[0], ..., vars_time9[29]
prepared DATA LOADER
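The 300 inputs are the ten 30-element arrays vars_time0 ... vars_time9, one array per time step. A hedged sketch of how the trees and array variables can be registered with the DataLoader:

// Sketch: register the signal/background trees and the per-time-step
// input arrays (ntime = 10 steps, ndim = 30 values per step).
auto inputFile = TFile::Open("time_data_t10_d30.root");
auto sgn = (TTree *) inputFile->Get("sgn");
auto bkg = (TTree *) inputFile->Get("bkg");
dataloader.AddSignalTree(sgn, 1.0);
dataloader.AddBackgroundTree(bkg, 1.0);
const int ntime = 10, ndim = 30;
for (int i = 0; i < ntime; ++i)
   dataloader.AddVariablesArray(TString::Format("vars_time%d", i), ndim);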
Factory : Booking method: TMVA_LSTM
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "10|30" [The Layout of the input]
: Layout: "LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
: RandomSeed: "1234" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "0.2" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: Will now use the CPU architecture with BLAS and IMT support !
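The option string parsed above is supplied when booking the deep-learning method; a sketch of the booking call (the option string is copied from the log, the call form is the standard Factory API):

// Sketch: book the TMVA deep-learning LSTM with the option string
// parsed in the log above.
TString rnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
                   "WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:"
                   "InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:"
                   "TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,"
                   "BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,"
                   "MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU");
factory.BookMethod(&dataloader, TMVA::Types::kDL, "TMVA_LSTM", rnnOptions);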
Factory : Booking method: TMVA_DNN
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|300" [The Layout of the input]
: Layout: "DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 10, 30) 0
lstm (LSTM) (None, 10, 10) 1640
flatten (Flatten) (None, 100) 0
dense (Dense) (None, 64) 6464
dense_1 (Dense) (None, 2) 130
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
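The summary above describes the Keras model that the macro saves as model_LSTM.h5 before booking PyKeras. A hedged sketch of how such a model can be generated from the macro, writing a small Python script with TMacro and running it with the Python executable reported below (the layer sizes reproduce the printed summary; the loss/activation choices are typical, not confirmed):

// Sketch: generate and run a Python script that builds and saves the
// Keras model summarized above.
TMacro m;
m.AddLine("from tensorflow.keras.models import Sequential");
m.AddLine("from tensorflow.keras.layers import Reshape, LSTM, Flatten, Dense");
m.AddLine("model = Sequential()");
m.AddLine("model.add(Reshape((10, 30), input_shape=(300,)))");   // (None, 10, 30)
m.AddLine("model.add(LSTM(10, return_sequences=True))");         // 1640 params
m.AddLine("model.add(Flatten())");                               // (None, 100)
m.AddLine("model.add(Dense(64, activation='tanh'))");            // 6464 params
m.AddLine("model.add(Dense(2, activation='sigmoid'))");          // 130 params
m.AddLine("model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])");
m.AddLine("model.save('model_LSTM.h5')");
m.SaveSource("make_model_lstm.py");
gSystem->Exec(TMVA::Python_Executable() + " make_model_lstm.py");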
(TString) "python3"[7]
Factory : Booking method: PyKeras_LSTM
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Loading Keras Model
: Loaded model from file: model_LSTM.h5
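Booking the PyKeras method then points TMVA at the saved model file; a sketch with assumed (typical MethodPyKeras) options whose values match the behaviour seen in the log:

// Sketch: book PyKeras on the saved model; NumEpochs and BatchSize
// match the training log below, GpuOptions matches the GPU message.
factory.BookMethod(&dataloader, TMVA::Types::kPyKeras, "PyKeras_LSTM",
                   "!H:!V:VarTransform=None:FilenameModel=model_LSTM.h5:"
                   "FilenameTrainedModel=trained_model_LSTM.h5:"
                   "NumEpochs=20:BatchSize=100:GpuOptions=allow_growth=True");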
Factory : Booking method: BDTG
:
: the option NegWeightTreatment=InverseBoostNegWeights does not exist for BoostType=Grad
: --> change to new default NegWeightTreatment=Pray
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sgn
: Using variable vars_time0[0] from array expression vars_time0 of size 30
: Using variable vars_time1[0] from array expression vars_time1 of size 30
: Using variable vars_time2[0] from array expression vars_time2 of size 30
: Using variable vars_time3[0] from array expression vars_time3 of size 30
: Using variable vars_time4[0] from array expression vars_time4 of size 30
: Using variable vars_time5[0] from array expression vars_time5 of size 30
: Using variable vars_time6[0] from array expression vars_time6 of size 30
: Using variable vars_time7[0] from array expression vars_time7 of size 30
: Using variable vars_time8[0] from array expression vars_time8 of size 30
: Using variable vars_time9[0] from array expression vars_time9 of size 30
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg
: Using variable vars_time0[0] from array expression vars_time0 of size 30
: Using variable vars_time1[0] from array expression vars_time1 of size 30
: Using variable vars_time2[0] from array expression vars_time2 of size 30
: Using variable vars_time3[0] from array expression vars_time3 of size 30
: Using variable vars_time4[0] from array expression vars_time4 of size 30
: Using variable vars_time5[0] from array expression vars_time5 of size 30
: Using variable vars_time6[0] from array expression vars_time6 of size 30
: Using variable vars_time7[0] from array expression vars_time7 of size 30
: Using variable vars_time8[0] from array expression vars_time8 of size 30
: Using variable vars_time9[0] from array expression vars_time9 of size 30
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 1600
: Signal -- testing events : 400
: Signal -- training and testing events: 2000
: Background -- training events : 1600
: Background -- testing events : 400
: Background -- training and testing events: 2000
:
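The 1600/400 split per class shown above follows from the data-preparation call; a sketch (the event counts match the table, the remaining options are common choices rather than confirmed ones):

// Sketch: request 1600 training events per class; the remaining
// 400 per class are kept for testing.
dataloader.PrepareTrainingAndTestTree("",
   "nTrain_Signal=1600:nTrain_Background=1600:SplitMode=Random:NormMode=NumEvents:!V");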
Factory : Train all methods
Factory : Train method: TMVA_LSTM for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 10, 1, 30 ) Batch size = 100 Loss function = C
Layer 0 LSTM Layer: (NInput = 30, NState = 10, NTime = 10 ) Output = ( 100 , 10 , 10 )
Layer 1 RESHAPE Layer Input = ( 1 , 10 , 10 ) Output = ( 1 , 100 , 100 )
Layer 2 DENSE Layer: ( Input = 100 , Width = 64 ) Output = ( 1 , 100 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 2560 events for training and 640 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.706595
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.695268 0.697051 0.649413 0.0442319 4130.99 0
: 2 | 0.686313 0.700862 0.698661 0.0466571 3834.33 1
: 3 | 0.680989 0.705696 0.714043 0.0468632 3747.12 2
: 4 Minimum Test error found - save the configuration
: 4 | 0.672151 0.696471 0.721485 0.0478552 3711.24 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.665104 0.685034 0.718595 0.047168 3723.41 0
: 6 Minimum Test error found - save the configuration
: 6 | 0.653614 0.677503 0.71989 0.0483257 3722.65 0
: 7 Minimum Test error found - save the configuration
: 7 | 0.64515 0.673546 0.735168 0.0488033 3642.38 0
: 8 Minimum Test error found - save the configuration
: 8 | 0.636309 0.665543 0.748161 0.0489943 3575.68 0
: 9 Minimum Test error found - save the configuration
: 9 | 0.623004 0.649426 0.739047 0.0496191 3626.2 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.612092 0.645156 0.75148 0.0521082 3574.64 0
: 11 Minimum Test error found - save the configuration
: 11 | 0.597131 0.643012 0.741908 0.0482947 3604.31 0
: 12 Minimum Test error found - save the configuration
: 12 | 0.582842 0.634933 0.740242 0.0481555 3612.26 0
: 13 Minimum Test error found - save the configuration
: 13 | 0.573252 0.62383 0.733326 0.0476341 3645.95 0
: 14 Minimum Test error found - save the configuration
: 14 | 0.565089 0.622502 0.718121 0.0460395 3719.79 0
: 15 Minimum Test error found - save the configuration
: 15 | 0.550469 0.601601 0.674835 0.0424641 3953.38 0
: 16 Minimum Test error found - save the configuration
: 16 | 0.533566 0.589293 0.617395 0.0419035 4344.11 0
: 17 | 0.527133 0.594883 0.604732 0.0403669 4429.75 1
: 18 Minimum Test error found - save the configuration
: 18 | 0.519148 0.588615 0.596471 0.0407222 4498.43 0
: 19 Minimum Test error found - save the configuration
: 19 | 0.509995 0.548657 0.607057 0.0407191 4414.33 0
: 20 | 0.495213 0.572425 0.603657 0.0411503 4444.39 1
:
: Elapsed time for training with 3200 events: 13.9 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_LSTM : [dataset] : Evaluation of TMVA_LSTM on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.215 sec
: Creating xml weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_TMVA_LSTM.class.C
Factory : Training finished
:
Factory : Train method: TMVA_DNN for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 1, 1, 300 ) Batch size = 256 Loss function = C
Layer 0 DENSE Layer: ( Input = 300 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 256 , 1 ) Activation Function = Identity
: Using 2560 events for training and 640 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 1.0221
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.785448 0.723541 0.191914 0.0154614 14508.2 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.701254 0.706834 0.192001 0.0152122 14480.5 0
: 3 | 0.690028 0.712367 0.190415 0.0152004 14610.6 1
: 4 Minimum Test error found - save the configuration
: 4 | 0.692333 0.698735 0.191706 0.0153653 14517.3 0
: 5 | 0.686311 0.699618 0.189787 0.0150078 14647.1 1
: 6 | 0.678428 0.699309 0.189694 0.015083 14661.2 2
: 7 | 0.675416 0.701598 0.190293 0.0151329 14615.2 3
: 8 Minimum Test error found - save the configuration
: 8 | 0.679167 0.690905 0.191877 0.0153131 14499 0
: 9 | 0.676598 0.69514 0.189933 0.0148608 14622.5 1
: 10 Minimum Test error found - save the configuration
: 10 | 0.680779 0.69038 0.191012 0.0153943 14577.1 0
: 11 | 0.676321 0.697593 0.191639 0.0148748 14482.6 1
: 12 Minimum Test error found - save the configuration
: 12 | 0.676159 0.686455 0.191533 0.0155488 14546.7 0
: 13 | 0.671214 0.696335 0.192385 0.0150791 14438.4 1
: 14 | 0.670954 0.689304 0.191531 0.0148595 14490.2 2
: 15 Minimum Test error found - save the configuration
: 15 | 0.669089 0.680148 0.191946 0.0160341 14552.8 0
: 16 | 0.663114 0.682748 0.196966 0.0157831 14129.4 1
: 17 | 0.66076 0.703162 0.192366 0.0153067 14458.4 2
: 18 | 0.6716 0.684674 0.193025 0.0152946 14403.8 3
: 19 Minimum Test error found - save the configuration
: 19 | 0.667279 0.677057 0.192848 0.015545 14438.5 0
: 20 Minimum Test error found - save the configuration
: 20 | 0.654641 0.675079 0.195755 0.0154026 14194.4 0
:
: Elapsed time for training with 3200 events: 3.86 sec
: Evaluate deep neural network on CPU using batches with size = 256
:
TMVA_DNN : [dataset] : Evaluation of TMVA_DNN on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.099 sec
: Creating xml weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_TMVA_DNN.class.C
Factory : Training finished
:
Factory : Train method: PyKeras_LSTM for Classification
:
: Split TMVA training data in 2560 training events and 640 validation events
: Training Model Summary
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 10, 30) 0
lstm (LSTM) (None, 10, 10) 1640
flatten (Flatten) (None, 100) 0
dense (Dense) (None, 64) 6464
dense_1 (Dense) (None, 2) 130
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.69315, saving model to trained_model_LSTM.h5
26/26 [==============================] - 3s 41ms/step - loss: 0.7070 - accuracy: 0.5133 - val_loss: 0.6931 - val_accuracy: 0.5453
Epoch 2/20
Epoch 2: val_loss improved from 0.69315 to 0.67458, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 12ms/step - loss: 0.6771 - accuracy: 0.5727 - val_loss: 0.6746 - val_accuracy: 0.5938
Epoch 3/20
Epoch 3: val_loss improved from 0.67458 to 0.65430, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 15ms/step - loss: 0.6534 - accuracy: 0.6176 - val_loss: 0.6543 - val_accuracy: 0.6328
Epoch 4/20
Epoch 4: val_loss improved from 0.65430 to 0.63631, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 16ms/step - loss: 0.6305 - accuracy: 0.6547 - val_loss: 0.6363 - val_accuracy: 0.6344
Epoch 5/20
Epoch 5: val_loss improved from 0.63631 to 0.61879, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.6074 - accuracy: 0.6836 - val_loss: 0.6188 - val_accuracy: 0.6453
Epoch 6/20
Epoch 6: val_loss improved from 0.61879 to 0.59531, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 16ms/step - loss: 0.5857 - accuracy: 0.6930 - val_loss: 0.5953 - val_accuracy: 0.6703
Epoch 7/20
Epoch 7: val_loss improved from 0.59531 to 0.58434, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 15ms/step - loss: 0.5569 - accuracy: 0.7191 - val_loss: 0.5843 - val_accuracy: 0.6859
Epoch 8/20
Epoch 8: val_loss improved from 0.58434 to 0.56294, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 16ms/step - loss: 0.5469 - accuracy: 0.7254 - val_loss: 0.5629 - val_accuracy: 0.6938
Epoch 9/20
Epoch 9: val_loss improved from 0.56294 to 0.53884, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.5190 - accuracy: 0.7480 - val_loss: 0.5388 - val_accuracy: 0.7500
Epoch 10/20
Epoch 10: val_loss improved from 0.53884 to 0.53290, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 15ms/step - loss: 0.4972 - accuracy: 0.7617 - val_loss: 0.5329 - val_accuracy: 0.7344
Epoch 11/20
Epoch 11: val_loss improved from 0.53290 to 0.51444, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 17ms/step - loss: 0.4771 - accuracy: 0.7715 - val_loss: 0.5144 - val_accuracy: 0.7344
Epoch 12/20
Epoch 12: val_loss did not improve from 0.51444
26/26 [==============================] - 0s 17ms/step - loss: 0.4713 - accuracy: 0.7723 - val_loss: 0.5169 - val_accuracy: 0.7531
Epoch 13/20
Epoch 13: val_loss improved from 0.51444 to 0.49422, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.4606 - accuracy: 0.7797 - val_loss: 0.4942 - val_accuracy: 0.7672
Epoch 14/20
Epoch 14: val_loss improved from 0.49422 to 0.47640, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 15ms/step - loss: 0.4346 - accuracy: 0.7977 - val_loss: 0.4764 - val_accuracy: 0.7859
Epoch 15/20
Epoch 15: val_loss did not improve from 0.47640
26/26 [==============================] - 0s 14ms/step - loss: 0.4258 - accuracy: 0.8039 - val_loss: 0.4777 - val_accuracy: 0.7797
Epoch 16/20
Epoch 16: val_loss improved from 0.47640 to 0.47259, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 15ms/step - loss: 0.4170 - accuracy: 0.8055 - val_loss: 0.4726 - val_accuracy: 0.7875
Epoch 17/20
Epoch 17: val_loss improved from 0.47259 to 0.46766, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.4067 - accuracy: 0.8137 - val_loss: 0.4677 - val_accuracy: 0.7859
Epoch 18/20
Epoch 18: val_loss improved from 0.46766 to 0.45678, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.3984 - accuracy: 0.8172 - val_loss: 0.4568 - val_accuracy: 0.7922
Epoch 19/20
Epoch 19: val_loss did not improve from 0.45678
26/26 [==============================] - 0s 12ms/step - loss: 0.3856 - accuracy: 0.8234 - val_loss: 0.4583 - val_accuracy: 0.7891
Epoch 20/20
Epoch 20: val_loss improved from 0.45678 to 0.45113, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 14ms/step - loss: 0.3845 - accuracy: 0.8223 - val_loss: 0.4511 - val_accuracy: 0.7828
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 3200 events: 10.2 sec
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_LSTM.h5
PyKeras_LSTM : [dataset] : Evaluation of PyKeras_LSTM on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.43 sec
: Creating xml weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_PyKeras_LSTM.class.C
Factory : Training finished
:
Factory : Train method: BDTG for Classification
:
BDTG : #events: (reweighted) sig: 1600 bkg: 1600
: #events: (unweighted) sig: 1600 bkg: 1600
: Training 100 Decision Trees ... patience please
: Elapsed time for training with 3200 events: 1.81 sec
BDTG : [dataset] : Evaluation of BDTG on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.0192 sec
: Creating xml weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_BDTG.class.C
: data_RNN_CPU.root:/dataset/Method_BDT/BDTG
Factory : Training finished
:
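For reference, the gradient-boosted tree trained above can be booked as follows; a sketch in which NTrees matches "Training 100 Decision Trees" in the log while the remaining options are common choices rather than confirmed ones:

// Sketch: book a gradient-boosted BDT; only NTrees is confirmed by
// the log, the other options are typical values.
factory.BookMethod(&dataloader, TMVA::Types::kBDT, "BDTG",
                   "!H:!V:NTrees=100:MinNodeSize=2.5%:BoostType=Grad:"
                   "Shrinkage=0.10:nCuts=20:MaxDepth=2");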
: Ranking input variables (method specific)...
: No variable ranking supplied by classifier: TMVA_LSTM
: No variable ranking supplied by classifier: TMVA_DNN
: No variable ranking supplied by classifier: PyKeras_LSTM
BDTG : Ranking result (top variable is best ranked)
: --------------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------------
: 1 : vars_time8 : 2.393e-02
: 2 : vars_time7 : 2.334e-02
: 3 : vars_time8 : 2.220e-02
: 4 : vars_time8 : 2.208e-02
: 5 : vars_time7 : 2.160e-02
: 6 : vars_time9 : 1.927e-02
: 7 : vars_time6 : 1.907e-02
: 8 : vars_time9 : 1.826e-02
: 9 : vars_time7 : 1.821e-02
: 10 : vars_time8 : 1.750e-02
: 11 : vars_time7 : 1.687e-02
: 12 : vars_time5 : 1.686e-02
: 13 : vars_time8 : 1.666e-02
: 14 : vars_time6 : 1.666e-02
: 15 : vars_time9 : 1.605e-02
: 16 : vars_time9 : 1.598e-02
: 17 : vars_time7 : 1.561e-02
: 18 : vars_time0 : 1.488e-02
: 19 : vars_time8 : 1.477e-02
: 20 : vars_time6 : 1.451e-02
: 21 : vars_time8 : 1.367e-02
: 22 : vars_time8 : 1.360e-02
: 23 : vars_time7 : 1.293e-02
: 24 : vars_time8 : 1.266e-02
: 25 : vars_time8 : 1.257e-02
: 26 : vars_time9 : 1.256e-02
: 27 : vars_time5 : 1.249e-02
: 28 : vars_time9 : 1.209e-02
: 29 : vars_time7 : 1.199e-02
: 30 : vars_time9 : 1.180e-02
: 31 : vars_time6 : 1.172e-02
: 32 : vars_time8 : 1.160e-02
: 33 : vars_time6 : 1.156e-02
: 34 : vars_time6 : 1.128e-02
: 35 : vars_time5 : 1.127e-02
: 36 : vars_time9 : 1.124e-02
: 37 : vars_time0 : 1.114e-02
: 38 : vars_time0 : 1.112e-02
: 39 : vars_time5 : 1.074e-02
: 40 : vars_time9 : 1.068e-02
: 41 : vars_time7 : 1.062e-02
: 42 : vars_time8 : 1.050e-02
: 43 : vars_time0 : 1.023e-02
: 44 : vars_time7 : 1.016e-02
: 45 : vars_time4 : 9.923e-03
: 46 : vars_time7 : 9.891e-03
: 47 : vars_time9 : 9.871e-03
: 48 : vars_time0 : 9.614e-03
: 49 : vars_time5 : 9.547e-03
: 50 : vars_time7 : 9.434e-03
: 51 : vars_time0 : 9.186e-03
: 52 : vars_time1 : 9.114e-03
: 53 : vars_time7 : 8.835e-03
: 54 : vars_time5 : 8.392e-03
: 55 : vars_time8 : 8.261e-03
: 56 : vars_time0 : 8.232e-03
: 57 : vars_time0 : 7.740e-03
: 58 : vars_time6 : 7.715e-03
: 59 : vars_time6 : 7.628e-03
: 60 : vars_time1 : 7.547e-03
: 61 : vars_time6 : 7.479e-03
: 62 : vars_time6 : 7.462e-03
: 63 : vars_time9 : 7.396e-03
: 64 : vars_time7 : 7.360e-03
: 65 : vars_time7 : 7.044e-03
: 66 : vars_time5 : 6.938e-03
: 67 : vars_time6 : 6.934e-03
: 68 : vars_time7 : 6.659e-03
: 69 : vars_time4 : 6.646e-03
: 70 : vars_time8 : 6.447e-03
: 71 : vars_time0 : 6.416e-03
: 72 : vars_time0 : 6.335e-03
: 73 : vars_time1 : 6.120e-03
: 74 : vars_time4 : 5.869e-03
: 75 : vars_time9 : 5.805e-03
: 76 : vars_time6 : 5.640e-03
: 77 : vars_time5 : 5.550e-03
: 78 : vars_time1 : 5.477e-03
: 79 : vars_time1 : 5.396e-03
: 80 : vars_time1 : 5.368e-03
: 81 : vars_time0 : 5.322e-03
: 82 : vars_time9 : 5.321e-03
: 83 : vars_time0 : 5.293e-03
: 84 : vars_time4 : 5.105e-03
: 85 : vars_time0 : 4.965e-03
: 86 : vars_time2 : 4.956e-03
: 87 : vars_time2 : 4.920e-03
: 88 : vars_time4 : 4.874e-03
: 89 : vars_time9 : 4.694e-03
: 90 : vars_time7 : 4.693e-03
: 91 : vars_time7 : 4.601e-03
: 92 : vars_time4 : 4.583e-03
: 93 : vars_time2 : 4.411e-03
: 94 : vars_time0 : 4.385e-03
: 95 : vars_time4 : 4.126e-03
: 96 : vars_time6 : 4.093e-03
: 97 : vars_time8 : 3.511e-03
: 98 : vars_time8 : 3.433e-03
: 99 : vars_time4 : 2.889e-03
: 100 : vars_time0 : 0.000e+00
: 101 : vars_time0 : 0.000e+00
: 102 : vars_time0 : 0.000e+00
: 103 : vars_time0 : 0.000e+00
: 104 : vars_time0 : 0.000e+00
: 105 : vars_time0 : 0.000e+00
: 106 : vars_time0 : 0.000e+00
: 107 : vars_time0 : 0.000e+00
: 108 : vars_time0 : 0.000e+00
: 109 : vars_time0 : 0.000e+00
: 110 : vars_time0 : 0.000e+00
: 111 : vars_time0 : 0.000e+00
: 112 : vars_time0 : 0.000e+00
: 113 : vars_time0 : 0.000e+00
: 114 : vars_time0 : 0.000e+00
: 115 : vars_time0 : 0.000e+00
: 116 : vars_time1 : 0.000e+00
: 117 : vars_time1 : 0.000e+00
: 118 : vars_time1 : 0.000e+00
: 119 : vars_time1 : 0.000e+00
: 120 : vars_time1 : 0.000e+00
: 121 : vars_time1 : 0.000e+00
: 122 : vars_time1 : 0.000e+00
: 123 : vars_time1 : 0.000e+00
: 124 : vars_time1 : 0.000e+00
: 125 : vars_time1 : 0.000e+00
: 126 : vars_time1 : 0.000e+00
: 127 : vars_time1 : 0.000e+00
: 128 : vars_time1 : 0.000e+00
: 129 : vars_time1 : 0.000e+00
: 130 : vars_time1 : 0.000e+00
: 131 : vars_time1 : 0.000e+00
: 132 : vars_time1 : 0.000e+00
: 133 : vars_time1 : 0.000e+00
: 134 : vars_time1 : 0.000e+00
: 135 : vars_time1 : 0.000e+00
: 136 : vars_time1 : 0.000e+00
: 137 : vars_time1 : 0.000e+00
: 138 : vars_time1 : 0.000e+00
: 139 : vars_time1 : 0.000e+00
: 140 : vars_time2 : 0.000e+00
: 141 : vars_time2 : 0.000e+00
: 142 : vars_time2 : 0.000e+00
: 143 : vars_time2 : 0.000e+00
: 144 : vars_time2 : 0.000e+00
: 145 : vars_time2 : 0.000e+00
: 146 : vars_time2 : 0.000e+00
: 147 : vars_time2 : 0.000e+00
: 148 : vars_time2 : 0.000e+00
: 149 : vars_time2 : 0.000e+00
: 150 : vars_time2 : 0.000e+00
: 151 : vars_time2 : 0.000e+00
: 152 : vars_time2 : 0.000e+00
: 153 : vars_time2 : 0.000e+00
: 154 : vars_time2 : 0.000e+00
: 155 : vars_time2 : 0.000e+00
: 156 : vars_time2 : 0.000e+00
: 157 : vars_time2 : 0.000e+00
: 158 : vars_time2 : 0.000e+00
: 159 : vars_time2 : 0.000e+00
: 160 : vars_time2 : 0.000e+00
: 161 : vars_time2 : 0.000e+00
: 162 : vars_time2 : 0.000e+00
: 163 : vars_time2 : 0.000e+00
: 164 : vars_time2 : 0.000e+00
: 165 : vars_time2 : 0.000e+00
: 166 : vars_time2 : 0.000e+00
: 167 : vars_time3 : 0.000e+00
: 168 : vars_time3 : 0.000e+00
: 169 : vars_time3 : 0.000e+00
: 170 : vars_time3 : 0.000e+00
: 171 : vars_time3 : 0.000e+00
: 172 : vars_time3 : 0.000e+00
: 173 : vars_time3 : 0.000e+00
: 174 : vars_time3 : 0.000e+00
: 175 : vars_time3 : 0.000e+00
: 176 : vars_time3 : 0.000e+00
: 177 : vars_time3 : 0.000e+00
: 178 : vars_time3 : 0.000e+00
: 179 : vars_time3 : 0.000e+00
: 180 : vars_time3 : 0.000e+00
: 181 : vars_time3 : 0.000e+00
: 182 : vars_time3 : 0.000e+00
: 183 : vars_time3 : 0.000e+00
: 184 : vars_time3 : 0.000e+00
: 185 : vars_time3 : 0.000e+00
: 186 : vars_time3 : 0.000e+00
: 187 : vars_time3 : 0.000e+00
: 188 : vars_time3 : 0.000e+00
: 189 : vars_time3 : 0.000e+00
: 190 : vars_time3 : 0.000e+00
: 191 : vars_time3 : 0.000e+00
: 192 : vars_time3 : 0.000e+00
: 193 : vars_time3 : 0.000e+00
: 194 : vars_time3 : 0.000e+00
: 195 : vars_time3 : 0.000e+00
: 196 : vars_time3 : 0.000e+00
: 197 : vars_time4 : 0.000e+00
: 198 : vars_time4 : 0.000e+00
: 199 : vars_time4 : 0.000e+00
: 200 : vars_time4 : 0.000e+00
: 201 : vars_time4 : 0.000e+00
: 202 : vars_time4 : 0.000e+00
: 203 : vars_time4 : 0.000e+00
: 204 : vars_time4 : 0.000e+00
: 205 : vars_time4 : 0.000e+00
: 206 : vars_time4 : 0.000e+00
: 207 : vars_time4 : 0.000e+00
: 208 : vars_time4 : 0.000e+00
: 209 : vars_time4 : 0.000e+00
: 210 : vars_time4 : 0.000e+00
: 211 : vars_time4 : 0.000e+00
: 212 : vars_time4 : 0.000e+00
: 213 : vars_time4 : 0.000e+00
: 214 : vars_time4 : 0.000e+00
: 215 : vars_time4 : 0.000e+00
: 216 : vars_time4 : 0.000e+00
: 217 : vars_time4 : 0.000e+00
: 218 : vars_time4 : 0.000e+00
: 219 : vars_time5 : 0.000e+00
: 220 : vars_time5 : 0.000e+00
: 221 : vars_time5 : 0.000e+00
: 222 : vars_time5 : 0.000e+00
: 223 : vars_time5 : 0.000e+00
: 224 : vars_time5 : 0.000e+00
: 225 : vars_time5 : 0.000e+00
: 226 : vars_time5 : 0.000e+00
: 227 : vars_time5 : 0.000e+00
: 228 : vars_time5 : 0.000e+00
: 229 : vars_time5 : 0.000e+00
: 230 : vars_time5 : 0.000e+00
: 231 : vars_time5 : 0.000e+00
: 232 : vars_time5 : 0.000e+00
: 233 : vars_time5 : 0.000e+00
: 234 : vars_time5 : 0.000e+00
: 235 : vars_time5 : 0.000e+00
: 236 : vars_time5 : 0.000e+00
: 237 : vars_time5 : 0.000e+00
: 238 : vars_time5 : 0.000e+00
: 239 : vars_time5 : 0.000e+00
: 240 : vars_time5 : 0.000e+00
: 241 : vars_time6 : 0.000e+00
: 242 : vars_time6 : 0.000e+00
: 243 : vars_time6 : 0.000e+00
: 244 : vars_time6 : 0.000e+00
: 245 : vars_time6 : 0.000e+00
: 246 : vars_time6 : 0.000e+00
: 247 : vars_time6 : 0.000e+00
: 248 : vars_time6 : 0.000e+00
: 249 : vars_time6 : 0.000e+00
: 250 : vars_time6 : 0.000e+00
: 251 : vars_time6 : 0.000e+00
: 252 : vars_time6 : 0.000e+00
: 253 : vars_time6 : 0.000e+00
: 254 : vars_time6 : 0.000e+00
: 255 : vars_time6 : 0.000e+00
: 256 : vars_time6 : 0.000e+00
: 257 : vars_time6 : 0.000e+00
: 258 : vars_time7 : 0.000e+00
: 259 : vars_time7 : 0.000e+00
: 260 : vars_time7 : 0.000e+00
: 261 : vars_time7 : 0.000e+00
: 262 : vars_time7 : 0.000e+00
: 263 : vars_time7 : 0.000e+00
: 264 : vars_time7 : 0.000e+00
: 265 : vars_time7 : 0.000e+00
: 266 : vars_time7 : 0.000e+00
: 267 : vars_time7 : 0.000e+00
: 268 : vars_time7 : 0.000e+00
: 269 : vars_time7 : 0.000e+00
: 270 : vars_time7 : 0.000e+00
: 271 : vars_time8 : 0.000e+00
: 272 : vars_time8 : 0.000e+00
: 273 : vars_time8 : 0.000e+00
: 274 : vars_time8 : 0.000e+00
: 275 : vars_time8 : 0.000e+00
: 276 : vars_time8 : 0.000e+00
: 277 : vars_time8 : 0.000e+00
: 278 : vars_time8 : 0.000e+00
: 279 : vars_time8 : 0.000e+00
: 280 : vars_time8 : 0.000e+00
: 281 : vars_time8 : 0.000e+00
: 282 : vars_time8 : 0.000e+00
: 283 : vars_time8 : 0.000e+00
: 284 : vars_time8 : 0.000e+00
: 285 : vars_time9 : 0.000e+00
: 286 : vars_time9 : 0.000e+00
: 287 : vars_time9 : 0.000e+00
: 288 : vars_time9 : 0.000e+00
: 289 : vars_time9 : 0.000e+00
: 290 : vars_time9 : 0.000e+00
: 291 : vars_time9 : 0.000e+00
: 292 : vars_time9 : 0.000e+00
: 293 : vars_time9 : 0.000e+00
: 294 : vars_time9 : 0.000e+00
: 295 : vars_time9 : 0.000e+00
: 296 : vars_time9 : 0.000e+00
: 297 : vars_time9 : 0.000e+00
: 298 : vars_time9 : 0.000e+00
: 299 : vars_time9 : 0.000e+00
: 300 : vars_time9 : 0.000e+00
: --------------------------------------------
TH1.Print Name = TrainingHistory_TMVA_LSTM_trainingError, Entries= 0, Total sum= 12.0238
TH1.Print Name = TrainingHistory_TMVA_LSTM_valError, Entries= 0, Total sum= 12.816
TH1.Print Name = TrainingHistory_TMVA_DNN_trainingError, Entries= 0, Total sum= 13.6269
TH1.Print Name = TrainingHistory_TMVA_DNN_valError, Entries= 0, Total sum= 13.891
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'accuracy', Entries= 0, Total sum= 14.6961
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'loss', Entries= 0, Total sum= 10.2428
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'val_accuracy', Entries= 0, Total sum= 14.3438
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'val_loss', Entries= 0, Total sum= 10.8776
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
nthreads = 4
Factory : Test all methods
Factory : Test method: TMVA_LSTM for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 800
:
TMVA_LSTM : [dataset] : Evaluation of TMVA_LSTM on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.0584 sec
Factory : Test method: TMVA_DNN for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 800
:
TMVA_DNN : [dataset] : Evaluation of TMVA_DNN on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.0236 sec
Factory : Test method: PyKeras_LSTM for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_LSTM.h5
PyKeras_LSTM : [dataset] : Evaluation of PyKeras_LSTM on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.264 sec
Factory : Test method: BDTG for Classification performance
:
BDTG : [dataset] : Evaluation of BDTG on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.00607 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: TMVA_LSTM
:
TMVA_LSTM : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: TMVA_DNN
:
TMVA_DNN : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: PyKeras_LSTM
:
PyKeras_LSTM : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: BDTG
:
BDTG : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset PyKeras_LSTM : 0.870
: dataset BDTG : 0.847
: dataset TMVA_LSTM : 0.802
: dataset TMVA_DNN : 0.633
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset PyKeras_LSTM : 0.215 (0.255) 0.655 (0.682) 0.845 (0.858)
: dataset BDTG : 0.205 (0.300) 0.565 (0.662) 0.789 (0.868)
: dataset TMVA_LSTM : 0.125 (0.155) 0.520 (0.501) 0.753 (0.775)
: dataset TMVA_DNN : 0.036 (0.028) 0.188 (0.198) 0.454 (0.463)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 800 events
:
Dataset:dataset : Created tree 'TrainTree' with 3200 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
/***
# TMVA Classification Example Using a Recurrent Neural Network
This is an example of using an RNN in TMVA.
The classification is performed on a toy data set containing a time series of `ntime`
samples, each of dimension `ndim`, generated by the provided function `MakeTimeData(nevents, ntime, ndim)`.
**/
#include <TROOT.h>
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Config.h"
#include "TMVA/MethodDL.h"
#include "TFile.h"
#include "TTree.h"
#include "TH1.h"
#include "TF1.h"
#include "TCanvas.h"
#include "TMacro.h"
#include "TMath.h"
/// Helper function to generate the time-dependent data set.
/// For each time step it fills a histogram of ndim bins from a Gaussian
/// whose mean (around 5) and width drift slowly with time, truncated to the range [0,10],
/// and then adds Gaussian noise to each bin.
///
void MakeTimeData(int n, int ntime, int ndim )
{
// const int ntime = 10;
// const int ndim = 30; // number of dim/time
TString fname = TString::Format("time_data_t%d_d%d.root", ntime, ndim);
std::vector<TH1 *> v1(ntime);
std::vector<TH1 *> v2(ntime);
for (int i = 0; i < ntime; ++i) {
v1[i] = new TH1D(TString::Format("h1_%d", i), "h1", ndim, 0, 10);
v2[i] = new TH1D(TString::Format("h2_%d", i), "h2", ndim, 0, 10);
}
auto f1 = new TF1("f1", "gaus");
auto f2 = new TF1("f2", "gaus");
TFile f(fname, "RECREATE");
TTree sgn("sgn", "sgn");
TTree bkg("bkg", "bkg");
std::vector<std::vector<float>> x1(ntime);
std::vector<std::vector<float>> x2(ntime);
for (int i = 0; i < ntime; ++i) {
x1[i] = std::vector<float>(ndim);
x2[i] = std::vector<float>(ndim);
}
for (auto i = 0; i < ntime; i++) {
bkg.Branch(Form("vars_time%d", i), "std::vector<float>", &x1[i]);
sgn.Branch(Form("vars_time%d", i), "std::vector<float>", &x2[i]);
}
sgn.SetDirectory(&f);
bkg.SetDirectory(&f);
std::vector<double> mean1(ntime);
std::vector<double> mean2(ntime);
std::vector<double> sigma1(ntime);
std::vector<double> sigma2(ntime);
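// The two classes follow the same generation model and differ only in the phase of
// the slow time modulation: one class uses a sine, the other a cosine, for both the
// mean and the width of the generating Gaussian.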
for (int j = 0; j < ntime; ++j) {
mean1[j] = 5. + 0.2 * sin(TMath::Pi() * j / double(ntime));
mean2[j] = 5. + 0.2 * cos(TMath::Pi() * j / double(ntime));
sigma1[j] = 4 + 0.3 * sin(TMath::Pi() * j / double(ntime));
sigma2[j] = 4 + 0.3 * cos(TMath::Pi() * j / double(ntime));
}
for (int i = 0; i < n; ++i) {
if (i % 1000 == 0)
std::cout << "Generating event ... " << i << std::endl;
for (int j = 0; j < ntime; ++j) {
auto h1 = v1[j];
auto h2 = v2[j];
h1->Reset();
h2->Reset();
f1->SetParameters(1, mean1[j], sigma1[j]);
f2->SetParameters(1, mean2[j], sigma2[j]);
h1->FillRandom("f1", 1000);
h2->FillRandom("f2", 1000);
for (int k = 0; k < ndim; ++k) {
// std::cout << j*10+k << " ";
x1[j][k] = h1->GetBinContent(k + 1) + gRandom->Gaus(0, 10);
x2[j][k] = h2->GetBinContent(k + 1) + gRandom->Gaus(0, 10);
}
}
// std::cout << std::endl;
sgn.Fill();
bkg.Fill();
if (n == 1) {
auto c1 = new TCanvas();
c1->Divide(ntime, 2);
for (int j = 0; j < ntime; ++j) {
c1->cd(j + 1);
v1[j]->Draw();
}
for (int j = 0; j < ntime; ++j) {
c1->cd(ntime + j + 1);
v2[j]->Draw();
}
gPad->Update();
}
}
if (n > 1) {
sgn.Write();
bkg.Write();
sgn.Print();
bkg.Print();
f.Close();
}
}
/// macro for performing a classification using a Recurrent Neural Network
/// @param nevts = 2000 Number of events used. (increase for better classification results)
/// @param use_type
/// use_type = 0 use Simple RNN network
/// use_type = 1 use LSTM network
/// use_type = 2 use GRU
/// use_type = 3 build 3 different networks with RNN, LSTM and GRU
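///
/// Example invocation (a usage sketch, assuming a ROOT build with TMVA enabled):
///   root -l -q 'TMVA_RNN_Classification.C(2000, 1)'   // 2000 events per class, LSTM network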
void TMVA_RNN_Classification(int nevts = 2000, int use_type = 1)
{
const int ninput = 30;
const int ntime = 10;
const int batchSize = 100;
const int maxepochs = 20;
int nTotEvts = nevts; // total events to be generated for signal or background
bool useKeras = true;
bool useTMVA_RNN = true;
bool useTMVA_DNN = true;
bool useTMVA_BDT = false;
std::vector<std::string> rnn_types = {"RNN", "LSTM", "GRU"};
std::vector<bool> use_rnn_type = {1, 1, 1};
if (use_type >=0 && use_type < 3) {
use_rnn_type = {0,0,0};
use_rnn_type[use_type] = 1;
}
bool useGPU = true; // use GPU for TMVA if available
#ifndef R__HAS_TMVAGPU
useGPU = false;
#ifndef R__HAS_TMVACPU
Warning("TMVA_RNN_Classification", "TMVA is not build with GPU or CPU multi-thread support. Cannot use TMVA Deep Learning for RNN");
useTMVA_RNN = false;
#endif
#endif
TString archString = (useGPU) ? "GPU" : "CPU";
bool writeOutputFile = true;
#ifdef R__HAS_PYMVA
gRandom->SetSeed(0);
TMVA::PyMethodBase::PyInitialize();
#else
useKeras = false;
#endif
#ifdef R__USE_IMT
int num_threads = 4; // use max 4 threads
// switch off MT in OpenBLAS to avoid conflict with tbb
gSystem->Setenv("OMP_NUM_THREADS", "1");
// do enable MT running
if (num_threads >= 0) {
ROOT::EnableImplicitMT(num_threads);
}
#endif
std::cout << "Running with nthreads = " << ROOT::GetThreadPoolSize() << std::endl;
TString inputFileName = "time_data_t10_d30.root";
bool fileExist = !gSystem->AccessPathName(inputFileName);
// if the file does not exist, create it
if (!fileExist) {
MakeTimeData(nTotEvts,ntime, ninput);
}
auto inputFile = TFile::Open(inputFileName);
if (!inputFile) {
Error("TMVA_RNN_Classification", "Error opening input file %s - exit", inputFileName.Data());
return;
}
std::cout << "--- RNNClassification : Using input file: " << inputFile->GetName() << std::endl;
// Create a ROOT output file where TMVA will store ntuples, histograms, etc.
TString outfileName(TString::Format("data_RNN_%s.root", archString.Data()));
TFile *outputFile = nullptr;
if (writeOutputFile) outputFile = TFile::Open(outfileName, "RECREATE");
/**
## Declare Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the main TMVA object you interact with. It takes the following parameters:
- The first argument is the base name of all the output
weight files in the weights/ directory that will be created with the
method parameters.
- The second argument is the output file for the training results.
- The third argument is a string option defining some general configuration for the TMVA session.
For example, all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in
the option string.
**/
// Creating the factory object
TMVA::Factory *factory = new TMVA::Factory("TMVAClassification", outputFile,
"!V:!Silent:Color:DrawProgressBar:Transformations=None:!Correlations:"
"AnalysisType=Classification:ModelPersistence");
TMVA::DataLoader *dataloader = new TMVA::DataLoader("dataset");
TTree *signalTree = (TTree *)inputFile->Get("sgn");
TTree *background = (TTree *)inputFile->Get("bkg");
const int nvar = ninput * ntime;
/// Add the input variables, using the AddVariablesArray function:
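/// AddVariablesArray declares ninput = 30 variables per time step, named vars_time<t>[0..29];
/// with ntime = 10 this yields the 300 input variables listed in the log above.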
for (auto i = 0; i < ntime; i++) {
dataloader->AddVariablesArray(Form("vars_time%d", i), ninput);
}
dataloader->AddSignalTree(signalTree, 1.0);
dataloader->AddBackgroundTree(background, 1.0);
// check given input
auto &datainfo = dataloader->GetDataSetInfo();
auto vars = datainfo.GetListOfVariables();
std::cout << "number of variables is " << vars.size() << std::endl;
for (auto &v : vars)
std::cout << v << ",";
std::cout << std::endl;
int nTrainSig = 0.8 * nTotEvts;
int nTrainBkg = 0.8 * nTotEvts;
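// with the default nevts = 2000 per class this gives 1600 training and 400 test events
// per class (cf. the TrainTree/TestTree sizes printed in the log above)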
// build the string options for DataLoader::PrepareTrainingAndTestTree
TString prepareOptions = TString::Format("nTrain_Signal=%d:nTrain_Background=%d:SplitMode=Random:SplitSeed=100:NormMode=NumEvents:!V:!CalcCorrelations", nTrainSig, nTrainBkg);
// Apply additional cuts on the signal and background samples (can be different)
TCut mycuts = ""; // for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
TCut mycutb = "";
dataloader->PrepareTrainingAndTestTree(mycuts, mycutb, prepareOptions);
std::cout << "prepared DATA LOADER " << std::endl;
/**
## Book TMVA recurrent models
Book the different types of recurrent models in TMVA (SimpleRNN, LSTM or GRU)
**/
if (useTMVA_RNN) {
for (int i = 0; i < 3; ++i) {
if (!use_rnn_type[i])
continue;
const char *rnn_type = rnn_types[i].c_str();
/// Define the InputLayout string for the RNN:
/// the input data are organized as ntime time steps of ninput values each
/// (input layout for an RNN: time x ndim).
TString inputLayoutString = TString::Format("InputLayout=%d|%d", ntime, ninput);
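// with ntime = 10 and ninput = 30 this expands to "InputLayout=10|30"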
/// Define the RNN layer layout. The fields are:
/// LayerType (RNN, LSTM or GRU) | number of units | number of inputs | time steps | remember state (typically no = 0) | return full output sequence (1)
TString rnnLayout = TString::Format("%s|10|%d|%d|0|1", rnn_type, ninput, ntime);
/// After the RNN, add a reshape layer (needed to flatten the output), a dense layer with 64 units and a final linear layer.
/// Note the last layer is linear because, when using CrossEntropy, a sigmoid is applied internally.
TString layoutString = TString("Layout=") + rnnLayout + TString(",RESHAPE|FLAT,DENSE|64|TANH,LINEAR");
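// e.g. for the LSTM case the full layout string becomes:
// "Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR"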
/// Define the training strategy. Several training strings, separated by "|", could be concatenated; here only one is used.
TString trainingString1 = TString::Format("LearningRate=1e-3,Momentum=0.0,Repetitions=1,"
"ConvergenceSteps=5,BatchSize=%d,TestRepetitions=1,"
"WeightDecay=1e-2,Regularization=None,MaxEpochs=%d,"
"Optimizer=ADAM,DropConfig=0.0+0.+0.+0.",
batchSize,maxepochs);
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString += trainingString1; // + "|" + trainingString2
/// Build the full RNN option string, appending the general network options.
TString rnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234");
rnnOptions.Append(":");
rnnOptions.Append(inputLayoutString);
rnnOptions.Append(":");
rnnOptions.Append(layoutString);
rnnOptions.Append(":");
rnnOptions.Append(trainingStrategyString);
rnnOptions.Append(":");
rnnOptions.Append(TString::Format("Architecture=%s", archString.Data()));
TString rnnName = "TMVA_" + TString(rnn_type);
factory->BookMethod(dataloader, TMVA::Types::kDL, rnnName, rnnOptions);
}
}
/**
## Book TMVA fully connected dense layer models
**/
if (useTMVA_DNN) {
// Method DL with Dense Layer
TString inputLayoutString = TString::Format("InputLayout=1|1|%d", ntime * ninput);
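// the same 300 inputs as for the RNN, presented as a flat vector: "InputLayout=1|1|300"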
TString layoutString("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,LINEAR");
// Training strategies.
TString trainingString1("LearningRate=1e-3,Momentum=0.0,Repetitions=1,"
"ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,"
"WeightDecay=1e-4,Regularization=None,MaxEpochs=20,"
"DropConfig=0.0+0.+0.+0.,Optimizer=ADAM");
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString += trainingString1; // + "|" + trainingString2
// General Options.
TString dnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIER:RandomSeed=0");
dnnOptions.Append(":");
dnnOptions.Append(inputLayoutString);
dnnOptions.Append(":");
dnnOptions.Append(layoutString);
dnnOptions.Append(":");
dnnOptions.Append(trainingStrategyString);
dnnOptions.Append(":");
dnnOptions.Append(TString::Format("Architecture=%s", archString.Data()));
TString dnnName = "TMVA_DNN";
factory->BookMethod(dataloader, TMVA::Types::kDL, dnnName, dnnOptions);
}
/**
## Book Keras recurrent models
Book the different types of recurrent models in Keras (SimpleRNN, LSTM or GRU)
**/
if (useKeras) {
for (int i = 0; i < 3; i++) {
if (use_rnn_type[i]) {
TString modelName = TString::Format("model_%s.h5", rnn_types[i].c_str());
TString trainedModelName = TString::Format("trained_model_%s.h5", rnn_types[i].c_str());
Info("TMVA_RNN_Classification", "Building recurrent keras model using a %s layer", rnn_types[i].c_str());
// create a python script that builds and saves the Keras recurrent model
TMacro m;
m.AddLine("import tensorflow");
m.AddLine("from tensorflow.keras.models import Sequential");
m.AddLine("from tensorflow.keras.optimizers import Adam");
m.AddLine("from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, SimpleRNN, GRU, LSTM, Reshape, "
"BatchNormalization");
m.AddLine("");
m.AddLine("model = Sequential() ");
m.AddLine("model.add(Reshape((10, 30), input_shape = (10*30, )))");
// add recurrent neural network depending on type / Use option to return the full output
if (rnn_types[i] == "LSTM")
m.AddLine("model.add(LSTM(units=10, return_sequences=True) )");
else if (rnn_types[i] == "GRU")
m.AddLine("model.add(GRU(units=10, return_sequences=True) )");
else
m.AddLine("model.add(SimpleRNN(units=10, return_sequences=True) )");
// m.AddLine("model.add(BatchNormalization())");
m.AddLine("model.add(Flatten())"); // needed if returning the full time output sequence
m.AddLine("model.add(Dense(64, activation = 'tanh')) ");
m.AddLine("model.add(Dense(2, activation = 'sigmoid')) ");
m.AddLine(
"model.compile(loss = 'binary_crossentropy', optimizer = Adam(learning_rate = 0.001), weighted_metrics = ['accuracy'])");
m.AddLine(TString::Format("modelName = '%s'", modelName.Data()));
m.AddLine("model.save(modelName)");
m.AddLine("model.summary()");
m.SaveSource("make_rnn_model.py");
// execute python script to make the model
auto ret = (TString *)gROOT->ProcessLine("TMVA::Python_Executable()");
TString python_exe = (ret) ? *(ret) : "python";
gSystem->Exec(python_exe + " make_rnn_model.py");
if (gSystem->AccessPathName(modelName)) {
Warning("TMVA_RNN_Classification", "Error creating Keras recurrent model file - Skip using Keras");
useKeras = false;
} else {
// book PyKeras method only if Keras model could be created
Info("TMVA_RNN_Classification", "Booking Keras %s model", rnn_types[i].c_str());
factory->BookMethod(dataloader, TMVA::Types::kPyKeras,
TString::Format("PyKeras_%s", rnn_types[i].c_str()),
TString::Format("!H:!V:VarTransform=None:FilenameModel=%s:tf.keras:"
"FilenameTrainedModel=%s:GpuOptions=allow_growth=True:"
"NumEpochs=%d:BatchSize=%d",
modelName.Data(), trainedModelName.Data(), maxepochs, batchSize));
}
}
}
}
// always book a BDT as a reference method (the original condition reduced to this)
useTMVA_BDT = true;
/**
## Book TMVA BDT
**/
if (useTMVA_BDT) {
factory->BookMethod(dataloader, TMVA::Types::kBDT, "BDTG",
"!H:!V:NTrees=100:MinNodeSize=2.5%:BoostType=Grad:Shrinkage=0.10:UseBaggedBoost:"
"BaggedSampleFraction=0.5:nCuts=20:"
"MaxDepth=2");
}
/// Train all methods
factory->TrainAllMethods();
std::cout << "nthreads = " << ROOT::GetThreadPoolSize() << std::endl;
// ---- Evaluate all MVAs using the set of test events
factory->TestAllMethods();
// ----- Evaluate and compare performance of all configured MVAs
factory->EvaluateAllMethods();
// plot the ROC curve comparing all booked methods
auto c1 = factory->GetROCCurve(dataloader);
c1->Draw();
if (outputFile) outputFile->Close();
}
Author
Lorenzo Moneta

Definition in file TMVA_RNN_Classification.C.