ROOT 6.14/05 Reference Guide
TMVA::DNN::TDLGradientDescent< Architecture_t > Class Template Reference
Definition at line 65 of file DLMinimizers.h.
Public Types
using DeepNet_t = TDeepNet< Architecture_t >
using Matrix_t = typename Architecture_t::Matrix_t
using Scalar_t = typename Architecture_t::Scalar_t
Public Member Functions
TDLGradientDescent ()
TDLGradientDescent (Scalar_t learningRate, size_t convergenceSteps, size_t testInterval)
size_t GetConvergenceCount () const
    Getters.
size_t GetConvergenceSteps () const
Scalar_t GetTestError () const
size_t GetTestInterval () const
Scalar_t GetTrainingError () const
bool HasConverged ()
    Increases the minimization step counter by the test error evaluation period and uses the current internal value of the test error to determine if the minimization has converged.
bool HasConverged (Scalar_t testError)
    Increases the minimization step counter by the test error evaluation period and uses the provided test error value to determine if the minimization has converged.
void Reset ()
    Reset minimizer object to default state.
void SetBatchSize (Scalar_t rate)
void SetConvergenceSteps (size_t steps)
    Setters.
void SetLearningRate (Scalar_t rate)
void SetTestInterval (size_t interval)
void Step (DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights)
    Perform a single optimization step on a given batch.
void Step (DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches)
    Perform multiple optimization steps simultaneously.
Scalar_t StepLoss (DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights)
    Same as Step(...) but also evaluates the loss on the given training data.
void StepMomentum (DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches, Scalar_t momentum)
    Same as the Step(...) method for multiple batches but uses momentum.
void StepNesterov (DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches, Scalar_t momentum)
    Same as the Step(...) method for multiple batches but uses Nesterov momentum.
void StepReducedWeights (DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights)
    Does not evaluate the loss and therefore does not trigger a possible synchronization with the device.
Scalar_t StepReducedWeightsLoss (DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights)
    Similar to StepReducedWeights(...) but also evaluates the loss.
Private Attributes
size_t fBatchSize
    Batch size to use for the training.
size_t fConvergenceCount
    Current number of training epochs without a considerable decrease in the test error.
size_t fConvergenceSteps
    Number of training epochs without a considerable decrease in the test error required for convergence.
Scalar_t fLearningRate
    Learning rate \(\alpha\).
Scalar_t fMinimumError
    The minimum loss achieved on the training set during the current training session.
size_t fStepCount
    Number of steps performed in the current training session.
Scalar_t fTestError
    Holds the most recently computed test loss.
size_t fTestInterval
    Interval for the computation of the test error.
Scalar_t fTrainingError
    Holds the most recently computed training loss.
#include <TMVA/DNN/DLMinimizers.h>
using TMVA::DNN::TDLGradientDescent< Architecture_t >::DeepNet_t = TDeepNet<Architecture_t>
Definition at line 67 of file DLMinimizers.h.
using TMVA::DNN::TDLGradientDescent< Architecture_t >::Matrix_t = typename Architecture_t::Matrix_t
Definition at line 69 of file DLMinimizers.h.
using TMVA::DNN::TDLGradientDescent< Architecture_t >::Scalar_t = typename Architecture_t::Scalar_t
Definition at line 68 of file DLMinimizers.h.
TMVA::DNN::TDLGradientDescent< Architecture_t >::TDLGradientDescent ( )
Definition at line 164 of file DLMinimizers.h.
TMVA::DNN::TDLGradientDescent< Architecture_t >::TDLGradientDescent ( Scalar_t learningRate, size_t convergenceSteps, size_t testInterval )
Definition at line 173 of file DLMinimizers.h.
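As a usage illustration only (not taken from the reference), the sketch below instantiates the minimizer for an assumed CPU architecture backend; the choice of TMVA::DNN::TCpu<Double_t> for Architecture_t, the backend header path, and the parameter values are assumptions.

    // Illustrative sketch: instantiate the minimizer for an assumed CPU backend.
    #include "TMVA/DNN/Architectures/Cpu.h"   // assumed backend header
    #include "TMVA/DNN/DLMinimizers.h"

    using Architecture_t = TMVA::DNN::TCpu<Double_t>;
    using Minimizer_t    = TMVA::DNN::TDLGradientDescent<Architecture_t>;

    // Learning rate 1e-3, convergence after 10 test evaluations without
    // improvement, test error evaluated every 5 steps (illustrative values).
    Minimizer_t minimizer(1e-3, 10, 5);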
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::GetConvergenceCount ( ) const
inline
Getters.
Definition at line 147 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::GetConvergenceSteps ( ) const
inline
Definition at line 148 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::GetTestError ( ) const
inline
Definition at line 150 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::GetTestInterval ( ) const
inline
Definition at line 151 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::GetTrainingError ( ) const
inline
Definition at line 149 of file DLMinimizers.h.
bool TMVA::DNN::TDLGradientDescent< Architecture_t >::HasConverged ( )
Increases the minimization step counter by the test error evaluation period and uses the current internal value of the test error to determine if the minimization has converged.
Definition at line 277 of file DLMinimizers.h.
bool TMVA::DNN::TDLGradientDescent< Architecture_t >::HasConverged ( Scalar_t testError )
Increases the minimization step counter by the test error evaluation period and uses the provided test error value to determine if the minimization has converged.
Definition at line 291 of file DLMinimizers.h.
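The convergence bookkeeping itself is defined in DLMinimizers.h; the standalone sketch below only illustrates the kind of logic described above, with the step counter advanced by the test interval and the convergence counter reset whenever the test error improves on the best value seen so far. The improvement criterion and the counter increments are assumptions, not a quotation of the implementation.

    // Standalone illustration of the convergence bookkeeping (assumed policy).
    #include <cstddef>
    #include <limits>

    struct ConvergenceSketch {
       std::size_t fStepCount        = 0;  // minimization steps performed
       std::size_t fConvergenceCount = 0;  // evaluations without improvement
       std::size_t fConvergenceSteps = 10; // allowed evaluations without improvement
       std::size_t fTestInterval     = 5;  // steps between test-error evaluations
       double fMinimumError = std::numeric_limits<double>::infinity();

       bool HasConverged(double testError)
       {
          fStepCount += fTestInterval;      // advance by the evaluation period
          if (testError < fMinimumError) {  // test error improved
             fMinimumError     = testError;
             fConvergenceCount = 0;
          } else {                          // no improvement this evaluation
             fConvergenceCount += fTestInterval;
          }
          return fConvergenceCount >= fConvergenceSteps;
       }
    };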
void TMVA::DNN::TDLGradientDescent< Architecture_t >::Reset ( )
inline
Reset minimizer object to default state.
Definition at line 90 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::SetBatchSize ( Scalar_t rate )
inline
Definition at line 157 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::SetConvergenceSteps ( size_t steps )
inline
Setters.
Definition at line 154 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::SetLearningRate ( Scalar_t rate )
inline
Definition at line 156 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::SetTestInterval ( size_t interval )
inline
Definition at line 155 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::Step ( DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights )
Perform a single optimization step on a given batch.
Propagates the input matrix forward through the net, evaluates the loss and propagates the gradients backward through the net. The computed gradients are scaled by the learning rate \(\alpha\) and subtracted from the weights and bias values of each layer.
Definition at line 183 of file DLMinimizers.h.
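Written out explicitly, this is the plain gradient-descent update; the per-layer notation \(W^{(l)}\), \(b^{(l)}\) is introduced here only for illustration:

    \[ W^{(l)} \leftarrow W^{(l)} - \alpha \frac{\partial L}{\partial W^{(l)}}, \qquad b^{(l)} \leftarrow b^{(l)} - \alpha \frac{\partial L}{\partial b^{(l)}} \]

where \(L\) is the loss evaluated on the current batch and \(\alpha\) is the learning rate stored in fLearningRate.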
void TMVA::DNN::TDLGradientDescent< Architecture_t >::Step ( DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches )
Perform multiple optimization steps simultaneously.
Performs the backprop algorithm on the input batches given in batches on the neural networks given in nets. The forward and backward propagation steps are executed in an interleaved manner in order to exploit potential batch-level parallelism for asynchronous device calls.
Definition at line 247 of file DLMinimizers.h.
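As a structural illustration of this interleaving (the Forward/Backward method names and loop layout below are placeholders, not the TDeepNet API): the passes are issued phase by phase across all networks rather than finishing one network before starting the next, so asynchronous device calls for different batches can overlap.

    // Structural sketch of interleaved propagation over several nets/batches.
    // Net and Batch are placeholder types standing in for DeepNet_t and
    // TTensorBatch<Architecture_t>; Forward/Backward are assumed interfaces.
    #include <cstddef>
    #include <vector>

    template <typename Net, typename Batch>
    void InterleavedStep(std::vector<Net> &nets, std::vector<Batch> &batches)
    {
       // Phase 1: queue all forward passes (possibly asynchronous on a device).
       for (std::size_t i = 0; i < nets.size(); ++i) {
          nets[i].Forward(batches[i].GetInput());
       }
       // Phase 2: queue all backward passes once the forward work is in flight.
       for (std::size_t i = 0; i < nets.size(); ++i) {
          nets[i].Backward(batches[i].GetInput(), batches[i].GetOutput());
       }
       // The gradients would then be reduced into the master net and applied.
    }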
auto TMVA::DNN::TDLGradientDescent< Architecture_t >::StepLoss ( DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights )
Same as Step(...) but also evaluates the loss on the given training data.
Note that this requires synchronization between host and device.
Definition at line 213 of file DLMinimizers.h.
void TMVA::DNN::TDLGradientDescent< Architecture_t >::StepMomentum ( DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches, Scalar_t momentum )
Same as the Step(...) method for multiple batches but uses momentum.
Definition at line 257 of file DLMinimizers.h.
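The exact accumulation over the batches is defined in DLMinimizers.h; for orientation only, the textbook momentum (heavy-ball) form of such an update, with velocity \(v\) and momentum coefficient \(\mu\) given by the momentum argument, reads:

    \[ v^{(l)} \leftarrow \mu \, v^{(l)} - \alpha \frac{\partial L}{\partial W^{(l)}}, \qquad W^{(l)} \leftarrow W^{(l)} + v^{(l)} \]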
void TMVA::DNN::TDLGradientDescent< Architecture_t >::StepNesterov ( DeepNet_t &master, std::vector< DeepNet_t > &nets, std::vector< TTensorBatch< Architecture_t >> &batches, Scalar_t momentum )
Same as the Step(...) method for multiple batches but uses Nesterov momentum.
Definition at line 267 of file DLMinimizers.h.
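Again only as a reference formula (the accumulation details live in DLMinimizers.h), the textbook Nesterov variant evaluates the gradient at the look-ahead point \(W^{(l)} + \mu v^{(l)}\) before applying the same velocity update:

    \[ v^{(l)} \leftarrow \mu \, v^{(l)} - \alpha \frac{\partial L}{\partial W^{(l)}}\bigg|_{W^{(l)} + \mu v^{(l)}}, \qquad W^{(l)} \leftarrow W^{(l)} + v^{(l)} \]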
void TMVA::DNN::TDLGradientDescent< Architecture_t >::StepReducedWeights ( DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights )
Does not evaluate the loss and therefore does not trigger a possible synchronization with the device.
Trains the weights of each layer, but only the bias terms of the first layer for compatibility with the previous implementation.
Definition at line 194 of file DLMinimizers.h.
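A minimal sketch of this reduced update, assuming hypothetical per-layer accessors (GetWeights, GetBiases and their gradient counterparts are placeholders, not the TDeepNet interface): every layer's weight matrix is updated, but the bias update is applied to the first layer only.

    // Sketch of the reduced-weights update: weights of every layer, biases of
    // the first layer only. The layer accessors are assumed placeholders.
    #include <cstddef>
    #include <vector>

    template <typename Layer, typename Scalar>
    void ReducedUpdate(std::vector<Layer> &layers, Scalar learningRate)
    {
       for (std::size_t l = 0; l < layers.size(); ++l) {
          // W <- W - alpha * dL/dW for every layer
          layers[l].GetWeights() -= learningRate * layers[l].GetWeightGradients();
          if (l == 0) {
             // b <- b - alpha * dL/db only for the first layer
             layers[l].GetBiases() -= learningRate * layers[l].GetBiasGradients();
          }
       }
    }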
auto TMVA::DNN::TDLGradientDescent< Architecture_t >::StepReducedWeightsLoss ( DeepNet_t &deepNet, std::vector< Matrix_t > &input, const Matrix_t &output, const Matrix_t &weights )
Similar to StepReducedWeights(...) but also evaluates the loss.
May trigger synchronization with the device.
Definition at line 225 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fBatchSize
private
Batch size to use for the training.
Definition at line 72 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fConvergenceCount
private
Current number of training epochs without a considerable decrease in the test error.
Definition at line 76 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fConvergenceSteps
private
Number of training epochs without a considerable decrease in the test error required for convergence.
Definition at line 74 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fLearningRate
private
Learning rate \(\alpha\).
Definition at line 81 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fMinimumError
private
The minimum loss achieved on the training set during the current training session.
Definition at line 82 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fStepCount
private
Number of steps performed in the current training session.
Definition at line 73 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fTestError
private
Holds the most recently computed test loss.
Definition at line 80 of file DLMinimizers.h.
size_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fTestInterval
private
Interval for the computation of the test error.
Definition at line 78 of file DLMinimizers.h.
Scalar_t TMVA::DNN::TDLGradientDescent< Architecture_t >::fTrainingError
private
Holds the most recently computed training loss.
Definition at line 79 of file DLMinimizers.h.