******************************************************************************
*Tree :sig_tree : tree *
*Entries : 10000 : Total = 1177229 bytes File Size = 785298 *
* : : Tree compression factor = 1.48 *
******************************************************************************
*Br 0 :Type : Type/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 307 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 130.54 *
*............................................................................*
*Br 1 :lepton_pT : lepton_pT/F *
*Entries : 10000 : Total Size= 40581 bytes File Size = 30464 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 2 :lepton_eta : lepton_eta/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 28650 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 3 :lepton_phi : lepton_phi/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 30508 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 4 :missing_energy_magnitude : missing_energy_magnitude/F *
*Entries : 10000 : Total Size= 40656 bytes File Size = 35749 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.12 *
*............................................................................*
*Br 5 :missing_energy_phi : missing_energy_phi/F *
*Entries : 10000 : Total Size= 40626 bytes File Size = 36766 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.09 *
*............................................................................*
*Br 6 :jet1_pt : jet1_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 32298 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 7 :jet1_eta : jet1_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28467 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.41 *
*............................................................................*
*Br 8 :jet1_phi : jet1_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30399 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 9 :jet1_b-tag : jet1_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5087 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.88 *
*............................................................................*
*Br 10 :jet2_pt : jet2_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 31561 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.27 *
*............................................................................*
*Br 11 :jet2_eta : jet2_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28616 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 12 :jet2_phi : jet2_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30547 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 13 :jet2_b-tag : jet2_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5031 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.97 *
*............................................................................*
*Br 14 :jet3_pt : jet3_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 30642 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 15 :jet3_eta : jet3_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28955 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.38 *
*............................................................................*
*Br 16 :jet3_phi : jet3_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30433 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 17 :jet3_b-tag : jet3_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4879 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.22 *
*............................................................................*
*Br 18 :jet4_pt : jet4_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 29189 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 19 :jet4_eta : jet4_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 29311 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 20 :jet4_phi : jet4_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30525 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 21 :jet4_b-tag : jet4_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4725 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.48 *
*............................................................................*
*Br 22 :m_jj : m_jj/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 34991 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.15 *
*............................................................................*
*Br 23 :m_jjj : m_jjj/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34460 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 24 :m_lv : m_lv/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 32232 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 25 :m_jlv : m_jlv/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34598 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 26 :m_bb : m_bb/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 35012 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.14 *
*............................................................................*
*Br 27 :m_wbb : m_wbb/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34493 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 28 :m_wwbb : m_wwbb/F *
*Entries : 10000 : Total Size= 40566 bytes File Size = 34410 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
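The block above is a standard TTree::Print() summary of the signal tree. A minimal PyROOT sketch that reproduces it; the input file name is an assumption, while the tree names sig_tree and bkg_tree are taken from the log:

    import ROOT

    # Open the tutorial input file (file name assumed) and inspect the trees.
    input_file = ROOT.TFile.Open("Higgs_data.root")
    sig_tree = input_file.Get("sig_tree")
    bkg_tree = input_file.Get("bkg_tree")
    sig_tree.Print()  # prints the branch/size/compression table shown above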
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 10000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 10000 events
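The four messages above correspond to registering the trees with a TMVA::DataLoader. A minimal sketch, assuming unit global event weights; the seven input variables are the ones listed in the correlation matrices below:

    loader = ROOT.TMVA.DataLoader("dataset")
    # Declare the seven float input variables used throughout this log.
    for name in ["m_jj", "m_jjj", "m_lv", "m_jlv", "m_bb", "m_wbb", "m_wwbb"]:
        loader.AddVariable(name, "F")
    # Attach the signal and background trees (global weight 1.0 is an assumption).
    loader.AddSignalTree(sig_tree, 1.0)
    loader.AddBackgroundTree(bkg_tree, 1.0)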
Factory : Booking method: Likelihood
:
Factory : Booking method: Fisher
:
Factory : Booking method: BDT
:
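Each "Booking method" line corresponds to a Factory::BookMethod call. A sketch of the booking, continuing the snippet above: the job name and output file follow from the weight-file and output-file names later in the log, and NTrees=200 is confirmed by the BDT training message, but the remaining option strings are abbreviated assumptions:

    output_file = ROOT.TFile.Open("Higgs_ClassificationOutput.root", "RECREATE")
    factory = ROOT.TMVA.Factory("TMVA_Higgs_Classification", output_file,
                                "!V:ROC:!Silent:Color:AnalysisType=Classification")
    factory.BookMethod(loader, ROOT.TMVA.Types.kLikelihood, "Likelihood", "H:!V")
    factory.BookMethod(loader, ROOT.TMVA.Types.kFisher, "Fisher", "H:!V:Fisher")
    factory.BookMethod(loader, ROOT.TMVA.Types.kBDT, "BDT",
                       "!V:NTrees=200:MaxDepth=2:BoostType=AdaBoost")
    # Later the pipeline runs factory.TrainAllMethods(), factory.TestAllMethods()
    # and factory.EvaluateAllMethods(), producing the output that follows.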
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 7000
: Signal -- testing events : 3000
: Signal -- training and testing events: 10000
: Background -- training events : 7000
: Background -- testing events : 3000
: Background -- training and testing events: 10000
:
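The 7000/3000 split per class reported above is fixed in PrepareTrainingAndTestTree; a sketch, where the split mode and normalisation options are assumptions and only the event counts are confirmed by the log:

    loader.PrepareTrainingAndTestTree(ROOT.TCut(""), ROOT.TCut(""),
        "nTrain_Signal=7000:nTrain_Background=7000:"
        "SplitMode=Random:NormMode=NumEvents:!V")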
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.774 -0.004 +0.096 +0.024 +0.512 +0.533
: m_jjj: +0.774 +1.000 -0.010 +0.073 +0.152 +0.674 +0.668
: m_lv: -0.004 -0.010 +1.000 +0.121 -0.027 +0.009 +0.021
: m_jlv: +0.096 +0.073 +0.121 +1.000 +0.313 +0.544 +0.552
: m_bb: +0.024 +0.152 -0.027 +0.313 +1.000 +0.445 +0.333
: m_wbb: +0.512 +0.674 +0.009 +0.544 +0.445 +1.000 +0.915
: m_wwbb: +0.533 +0.668 +0.021 +0.552 +0.333 +0.915 +1.000
: ----------------------------------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.808 +0.022 +0.150 +0.028 +0.407 +0.415
: m_jjj: +0.808 +1.000 +0.041 +0.206 +0.177 +0.569 +0.547
: m_lv: +0.022 +0.041 +1.000 +0.139 +0.037 +0.081 +0.085
: m_jlv: +0.150 +0.206 +0.139 +1.000 +0.309 +0.607 +0.557
: m_bb: +0.028 +0.177 +0.037 +0.309 +1.000 +0.625 +0.447
: m_wbb: +0.407 +0.569 +0.081 +0.607 +0.625 +1.000 +0.884
: m_wwbb: +0.415 +0.547 +0.085 +0.557 +0.447 +0.884 +1.000
: ----------------------------------------------------------------
DataSetFactory : [dataset] :
:
Factory : Booking method: DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|7" [The Layout of the input]
: BatchLayout: "1|128|7" [The Layout of the batch]
: Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
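The option string parsed above is exactly what gets passed when booking the deep-learning method; only the helper variable dnn_opts below is introduced for readability:

    dnn_opts = ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:"
                "WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:"
                "Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,"
                "DENSE|1|LINEAR:"
                "TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,"
                "BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,"
                "Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,"
                "ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU")
    factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DNN_CPU", dnn_opts)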
DNN_CPU : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
(TString) "python3"[7]
Factory : Booking method: PyKeras
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Loading Keras Model
: Loaded model from file: Higgs_model.h5
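The summary above (four Dense(64) hidden layers and a two-unit output, 13122 trainable parameters) matches a Keras model of the following shape. The activations and compile settings below are assumptions; the model file Higgs_model.h5, the trained-model file Higgs_trained_model.h5, NumEpochs=20, and BatchSize=100 (112 steps over 11200 training events) are all confirmed later in the log:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    model = Sequential()
    model.add(Dense(64, activation="relu", input_dim=7))  # activation is an assumption
    model.add(Dense(64, activation="relu"))
    model.add(Dense(64, activation="relu"))
    model.add(Dense(64, activation="relu"))
    model.add(Dense(2, activation="sigmoid"))             # 64*2 + 2 = 130 parameters
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  weighted_metrics=["accuracy"])
    model.save("Higgs_model.h5")

    factory.BookMethod(loader, ROOT.TMVA.Types.kPyKeras, "PyKeras",
                       "H:!V:FilenameModel=Higgs_model.h5:"
                       "FilenameTrainedModel=Higgs_trained_model.h5:"
                       "NumEpochs=20:BatchSize=100")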
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0318 0.65629 [ 0.15106 16.132 ]
: m_jjj: 1.0217 0.37420 [ 0.34247 8.9401 ]
: m_lv: 1.0507 0.16678 [ 0.26679 3.6823 ]
: m_jlv: 1.0161 0.40288 [ 0.38441 6.5831 ]
: m_bb: 0.97707 0.53961 [ 0.080986 8.2551 ]
: m_wbb: 1.0358 0.36856 [ 0.38503 6.4013 ]
: m_wwbb: 0.96265 0.31608 [ 0.43228 4.5350 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
IdTransformation : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Separation
: -------------------------------
: 1 : m_bb : 9.511e-02
: 2 : m_wbb : 4.268e-02
: 3 : m_wwbb : 4.178e-02
: 4 : m_jjj : 2.825e-02
: 5 : m_jlv : 1.999e-02
: 6 : m_jj : 3.834e-03
: 7 : m_lv : 3.699e-03
: -------------------------------
Factory : Train method: Likelihood for Classification
:
:
: ================================================================
: H e l p   f o r   M V A   m e t h o d   [ Likelihood ] :
:
: --- Short description:
:
: The maximum-likelihood classifier models the data with probability
: density functions (PDF) reproducing the signal and background
: distributions of the input variables. Correlations among the
: variables are ignored.
:
: --- Performance optimisation:
:
: Required for good performance are decorrelated input variables
: (PCA transformation via the option "VarTransform=Decorrelate"
: may be tried). Irreducible non-linear correlations may be reduced
: by precombining strongly correlated input variables, or by simply
: removing one of the variables.
:
: --- Performance tuning via configuration options:
:
: High-fidelity PDF estimates are mandatory, i.e., sufficient training
: statistics are required to populate the tails of the distributions.
: It would be a surprise if the default spline or KDE kernel parameters
: provided a satisfying fit to the data. The user is advised to tune
: the events-per-bin and smoothing options of the splines individually
: per variable. If the KDE kernel is used, the adaptive Gaussian kernel
: may lead to artefacts, so please always also try the non-adaptive one.
:
: All tuning parameters must be adjusted individually for each input
: variable!
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
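For reference, the projective likelihood response for an event i is the ratio

    y_L(i) = L_S(i) / ( L_S(i) + L_B(i) ),  with  L_{S/B}(i) = prod_k p_{S/B,k}( x_k(i) )

where p_{S/B,k} is the signal/background PDF estimate for input variable k; these are the PDFs built from the reference histograms below.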
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 14000 events: 0.119 sec
Likelihood : [dataset] : Evaluation of Likelihood on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.0196 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory : Training finished
:
Factory : Train method: Fisher for Classification
:
:
: ================================================================
: H e l p   f o r   M V A   m e t h o d   [ Fisher ] :
:
: --- Short description:
:
: Fisher discriminants select events by distinguishing the mean
: values of the signal and background distributions in a trans-
: formed variable space where linear correlations are removed.
:
: (More precisely: the "linear discriminator" determines
: an axis in the (correlated) hyperspace of the input
: variables such that, when projecting the output classes
: (signal and background) upon this axis, they are pushed
: as far away from each other as possible, while events
: of the same class are confined to a close vicinity. The
: linearity property of this classifier is reflected in the
: metric with which "far apart" and "close vicinity" are
: determined: the covariance matrix of the discriminating
: variable space.)
:
: --- Performance optimisation:
:
: Optimal performance for Fisher discriminants is obtained for
: linearly correlated Gaussian-distributed variables. Any deviation
: from this ideal reduces the achievable separation power. In
: particular, no discrimination at all is achieved for a variable
: that has the same sample mean for signal and background, even if
: the shapes of the distributions are very different. Thus, Fisher
: discriminants often benefit from suitable transformations of the
: input variables. For example, if a variable x in [-1,1] has
: a parabolic signal distribution and a uniform background
: distribution, the mean value is zero in both cases, leading
: to no separation. The simple transformation x -> |x| renders this
: variable powerful for use in a Fisher discriminant.
:
: --- Performance tuning via configuration options:
:
: <None>
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
Fisher : Results for Fisher coefficients:
: -----------------------
: Variable: Coefficient:
: -----------------------
: m_jj: -0.051
: m_jjj: +0.192
: m_lv: +0.045
: m_jlv: +0.059
: m_bb: -0.211
: m_wbb: +0.549
: m_wwbb: -0.778
: (offset): +0.136
: -----------------------
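These coefficients define the linear Fisher response applied to each event:

    y_Fisher = 0.136 - 0.051*m_jj + 0.192*m_jjj + 0.045*m_lv + 0.059*m_jlv
                     - 0.211*m_bb + 0.549*m_wbb - 0.778*m_wwbb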
: Elapsed time for training with 14000 events: 0.0103 sec
Fisher : [dataset] : Evaluation of Fisher on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.00378 sec
: <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
: Dataset[dataset] : Evaluation of Fisher on training sample
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory : Training finished
:
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 7000 bkg: 7000
: #events: (unweighted) sig: 7000 bkg: 7000
: Training 200 Decision Trees ... patience please
: Elapsed time for training with 14000 events: 0.681 sec
BDT : [dataset] : Evaluation of BDT on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.111 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: DNN_CPU for Classification
:
: Preparing the Gaussian transformation...
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: Start of deep neural network training on CPU using MT, nthreads = 1
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 5 Input = ( 1, 1, 7 ) Batch size = 128 Loss function = C
Layer 0 DENSE Layer: ( Input = 7 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 4 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 128 , 1 ) Activation Function = Identity
: Using 11200 events for training and 2800 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.788931
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.653802 0.620423 0.589588 0.0476164 20547.2 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.599961 0.596993 0.589861 0.0475929 20536 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.580627 0.58947 0.591644 0.047981 20483.3 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.576665 0.585694 0.590965 0.0477428 20499.9 0
: 5 | 0.568338 0.585743 0.590981 0.0476157 20494.5 1
: 6 Minimum Test error found - save the configuration
: 6 | 0.567414 0.581527 0.593362 0.0478694 20414.6 0
: 7 | 0.561758 0.585297 0.592022 0.0476293 20455.8 1
: 8 | 0.560553 0.583089 0.5918 0.0477064 20467.1 2
: 9 | 0.559316 0.585902 0.59228 0.04765 20446.9 3
: 10 Minimum Test error found - save the configuration
: 10 | 0.556386 0.578494 0.59254 0.0478811 20445.8 0
: 11 Minimum Test error found - save the configuration
: 11 | 0.557083 0.577278 0.592929 0.0479001 20432 0
: 12 | 0.555542 0.581668 0.594656 0.0477573 20362.1 1
: 13 | 0.550339 0.584055 0.593748 0.0477832 20396.9 2
: 14 | 0.549245 0.585316 0.594671 0.0481005 20374.3 3
: 15 | 0.548926 0.586883 0.594343 0.0480989 20386.5 4
: 16 | 0.548671 0.58353 0.595545 0.0483536 20351.2 5
: 17 | 0.5461 0.581832 0.596268 0.0482285 20319.7 6
: 18 | 0.543965 0.591275 0.594745 0.0481289 20372.6 7
: 19 | 0.54367 0.583791 0.595029 0.0481258 20361.9 8
: 20 | 0.540722 0.577538 0.59568 0.048192 20340.2 9
: 21 | 0.539904 0.582319 0.595427 0.0481289 20347.2 10
: 22 | 0.538642 0.583248 0.595348 0.048162 20351.4 11
:
: Elapsed time for training with 14000 events: 13.2 sec
: Evaluate deep neural network on CPU using batches with size = 128
:
DNN_CPU : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.248 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: PyKeras for Classification
:
:
: ================================================================
: H e l p   f o r   M V A   m e t h o d   [ PyKeras ] :
:
: Keras is a high-level API for the Theano and TensorFlow packages.
: This method wraps the training and prediction steps of the Keras
: Python package for TMVA, so that data loading, preprocessing and
: evaluation can be done within the TMVA system. To use this Keras
: interface, you have to generate a model with Keras first. Then,
: this model can be loaded and trained in TMVA.
:
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Split TMVA training data in 11200 training events and 2800 validation events
: Training Model Summary
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.65621, saving model to Higgs_trained_model.h5
112/112 [==============================] - 1s 5ms/step - loss: 0.6688 - accuracy: 0.5812 - val_loss: 0.6562 - val_accuracy: 0.6121
Epoch 2/20
Epoch 2: val_loss improved from 0.65621 to 0.63412, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6406 - accuracy: 0.6354 - val_loss: 0.6341 - val_accuracy: 0.6425
Epoch 3/20
Epoch 3: val_loss improved from 0.63412 to 0.62542, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6265 - accuracy: 0.6513 - val_loss: 0.6254 - val_accuracy: 0.6521
Epoch 4/20
Epoch 4: val_loss did not improve from 0.62542
112/112 [==============================] - 0s 2ms/step - loss: 0.6198 - accuracy: 0.6576 - val_loss: 0.6274 - val_accuracy: 0.6554
Epoch 5/20
Epoch 5: val_loss improved from 0.62542 to 0.61118, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6151 - accuracy: 0.6606 - val_loss: 0.6112 - val_accuracy: 0.6636
Epoch 6/20
Epoch 6: val_loss improved from 0.61118 to 0.60765, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6081 - accuracy: 0.6675 - val_loss: 0.6076 - val_accuracy: 0.6625
Epoch 7/20
Epoch 7: val_loss did not improve from 0.60765
112/112 [==============================] - 0s 2ms/step - loss: 0.6050 - accuracy: 0.6721 - val_loss: 0.6097 - val_accuracy: 0.6679
Epoch 8/20
Epoch 8: val_loss did not improve from 0.60765
112/112 [==============================] - 0s 2ms/step - loss: 0.6019 - accuracy: 0.6692 - val_loss: 0.6160 - val_accuracy: 0.6589
Epoch 9/20
Epoch 9: val_loss improved from 0.60765 to 0.60193, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5976 - accuracy: 0.6768 - val_loss: 0.6019 - val_accuracy: 0.6725
Epoch 10/20
Epoch 10: val_loss did not improve from 0.60193
112/112 [==============================] - 0s 2ms/step - loss: 0.6015 - accuracy: 0.6743 - val_loss: 0.6033 - val_accuracy: 0.6671
Epoch 11/20
Epoch 11: val_loss improved from 0.60193 to 0.59979, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5946 - accuracy: 0.6776 - val_loss: 0.5998 - val_accuracy: 0.6707
Epoch 12/20
Epoch 12: val_loss did not improve from 0.59979
112/112 [==============================] - 0s 2ms/step - loss: 0.5927 - accuracy: 0.6814 - val_loss: 0.6012 - val_accuracy: 0.6771
Epoch 13/20
Epoch 13: val_loss improved from 0.59979 to 0.59568, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5930 - accuracy: 0.6759 - val_loss: 0.5957 - val_accuracy: 0.6768
Epoch 14/20
Epoch 14: val_loss did not improve from 0.59568
112/112 [==============================] - 0s 2ms/step - loss: 0.5867 - accuracy: 0.6832 - val_loss: 0.5982 - val_accuracy: 0.6711
Epoch 15/20
Epoch 15: val_loss did not improve from 0.59568
112/112 [==============================] - 0s 2ms/step - loss: 0.5873 - accuracy: 0.6813 - val_loss: 0.5971 - val_accuracy: 0.6707
Epoch 16/20
Epoch 16: val_loss improved from 0.59568 to 0.59511, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5865 - accuracy: 0.6833 - val_loss: 0.5951 - val_accuracy: 0.6771
Epoch 17/20
Epoch 17: val_loss improved from 0.59511 to 0.59475, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5848 - accuracy: 0.6880 - val_loss: 0.5947 - val_accuracy: 0.6796
Epoch 18/20
Epoch 18: val_loss did not improve from 0.59475
112/112 [==============================] - 0s 2ms/step - loss: 0.5842 - accuracy: 0.6837 - val_loss: 0.6056 - val_accuracy: 0.6636
Epoch 19/20
Epoch 19: val_loss did not improve from 0.59475
112/112 [==============================] - 0s 2ms/step - loss: 0.5827 - accuracy: 0.6835 - val_loss: 0.5949 - val_accuracy: 0.6736
Epoch 20/20
Epoch 20: val_loss improved from 0.59475 to 0.58916, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5797 - accuracy: 0.6919 - val_loss: 0.5892 - val_accuracy: 0.6764
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 14000 events: 6.19 sec
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: Higgs_trained_model.h5
PyKeras : [dataset] : Evaluation of PyKeras on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.258 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_PyKeras.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
Likelihood : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : m_bb : 4.061e-02
: 2 : m_wbb : 3.765e-02
: 3 : m_wwbb : 3.119e-02
: 4 : m_jj : -1.589e-03
: 5 : m_jjj : -2.901e-03
: 6 : m_lv : -7.919e-03
: 7 : m_jlv : -8.293e-03
: -------------------------------------
Fisher : Ranking result (top variable is best ranked)
: ---------------------------------
: Rank : Variable : Discr. power
: ---------------------------------
: 1 : m_bb : 1.279e-02
: 2 : m_wwbb : 9.131e-03
: 3 : m_wbb : 2.668e-03
: 4 : m_jlv : 9.145e-04
: 5 : m_jjj : 1.769e-04
: 6 : m_lv : 6.617e-05
: 7 : m_jj : 6.707e-06
: ---------------------------------
BDT : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : m_bb : 2.089e-01
: 2 : m_wwbb : 1.673e-01
: 3 : m_wbb : 1.568e-01
: 4 : m_jlv : 1.560e-01
: 5 : m_jjj : 1.421e-01
: 6 : m_jj : 1.052e-01
: 7 : m_lv : 6.369e-02
: ----------------------------------------
: No variable ranking supplied by classifier: DNN_CPU
: No variable ranking supplied by classifier: PyKeras
TH1.Print Name = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 12.3476
TH1.Print Name = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 12.8914
TH1.Print Name = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 13.3759
TH1.Print Name = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 12.0569
TH1.Print Name = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 13.2914
TH1.Print Name = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 12.1642
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
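These weight files can also be used outside the Factory for standalone application with TMVA::Reader. A minimal sketch; variables must be registered in the same order as during training:

    from array import array

    reader = ROOT.TMVA.Reader("!Color:!Silent")
    buffers = {}
    for name in ["m_jj", "m_jjj", "m_lv", "m_jlv", "m_bb", "m_wbb", "m_wwbb"]:
        buffers[name] = array("f", [0.0])
        reader.AddVariable(name, buffers[name])
    reader.BookMVA("BDT", "dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml")

    # After filling the buffers with one event's values:
    score = reader.EvaluateMVA("BDT")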
Factory : Test all methods
Factory : Test method: Likelihood for Classification performance
:
Likelihood : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0112 sec
Factory : Test method: Fisher for Classification performance
:
Fisher : [dataset] : Evaluation of Fisher on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.00339 sec
: Dataset[dataset] : Evaluation of Fisher on testing sample
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.048 sec
Factory : Test method: DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
DNN_CPU : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0988 sec
Factory : Test method: PyKeras for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: Higgs_trained_model.h5
PyKeras : [dataset] : Evaluation of PyKeras on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.152 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: Likelihood
:
Likelihood : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_Likelihood : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: Fisher
:
Fisher : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Also filling probability and rarity histograms (on request)...
TFHandler_Fisher : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_BDT : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: DNN_CPU
:
DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: PyKeras
:
PyKeras : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_PyKeras : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset DNN_CPU : 0.765
: dataset PyKeras : 0.756
: dataset BDT : 0.754
: dataset Likelihood : 0.699
: dataset Fisher : 0.642
: -------------------------------------------------------------------------------------------------------------------
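The ROC integrals in this table can also be retrieved programmatically from the Factory, e.g.:

    for method in ["DNN_CPU", "PyKeras", "BDT", "Likelihood", "Fisher"]:
        print(method, factory.GetROCIntegral(loader, method))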
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset DNN_CPU : 0.132 (0.133) 0.417 (0.443) 0.680 (0.714)
: dataset PyKeras : 0.113 (0.093) 0.403 (0.410) 0.658 (0.660)
: dataset BDT : 0.098 (0.099) 0.393 (0.402) 0.657 (0.681)
: dataset Likelihood : 0.070 (0.075) 0.356 (0.363) 0.581 (0.597)
: dataset Fisher : 0.015 (0.015) 0.121 (0.131) 0.487 (0.506)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 6000 events
:
Dataset:dataset : Created tree 'TrainTree' with 14000 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html