ROOT Reference Guide
TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t > Class Template Reference

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
class TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >

RMSProp optimizer class.

This class implements the RMSProp optimizer, with an option to apply momentum.

Definition at line 45 of file RMSProp.h.
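
A minimal usage sketch (not taken from RMSProp.h): it assumes a TDeepNet that has already been configured and whose layers hold freshly computed weight and bias gradients; the TCpu architecture and the helper name RMSPropStep are chosen purely for illustration.

   #include "TMVA/DNN/Architectures/Cpu.h"
   #include "TMVA/DNN/DeepNet.h"
   #include "TMVA/DNN/RMSProp.h"

   using Architecture_t = TMVA::DNN::TCpu<Double_t>;
   using DeepNet_t      = TMVA::DNN::TDeepNet<Architecture_t>;

   // Hypothetical helper: apply one RMSProp update to a net whose gradients are filled.
   void RMSPropStep(DeepNet_t &deepNet)
   {
      // Constructor defaults documented below: learningRate = 0.001, momentum = 0.0,
      // rho = 0.9, epsilon = 1e-7.
      TMVA::DNN::TRMSProp<Architecture_t> optimizer(deepNet, 0.001, 0.0, 0.9, 1e-7);

      optimizer.Step();                // update all layer weights and biases
      optimizer.IncrementGlobalStep(); // advance the step counter, as a training loop would
   }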

Public Types

using Matrix_t = typename Architecture_t::Matrix_t
 
using Scalar_t = typename Architecture_t::Scalar_t
 
- Public Types inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >
using Matrix_t = typename Architecture_t::Matrix_t
 
using Scalar_t = typename Architecture_t::Scalar_t
 

Public Member Functions

 TRMSProp (DeepNet_t &deepNet, Scalar_t learningRate=0.001, Scalar_t momentum=0.0, Scalar_t rho=0.9, Scalar_t epsilon=1e-7)
 Constructor.
 
 ~TRMSProp ()=default
 Destructor.
 
std::vector< std::vector< Matrix_t > > & GetBiasUpdates ()
 
std::vector< Matrix_t > & GetBiasUpdatesAt (size_t i)
 
Scalar_t GetEpsilon () const
 
Scalar_t GetMomentum () const
 Getters.
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasGradients ()
 
std::vector< Matrix_t > & GetPastSquaredBiasGradientsAt (size_t i)
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightGradients ()
 
std::vector< Matrix_t > & GetPastSquaredWeightGradientsAt (size_t i)
 
Scalar_t GetRho () const
 
std::vector< std::vector< Matrix_t > > & GetWeightUpdates ()
 
std::vector< Matrix_t > & GetWeightUpdatesAt (size_t i)
 
- Public Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >
 VOptimizer (Scalar_t learningRate, DeepNet_t &deepNet)
 Constructor.
 
virtual ~VOptimizer ()=default
 Virtual Destructor.
 
size_t GetGlobalStep () const
 
Layer_t * GetLayerAt (size_t i)
 
std::vector< Layer_t * > & GetLayers ()
 
Scalar_t GetLearningRate () const
 Getters.
 
void IncrementGlobalStep ()
 Increments the global step.
 
void SetLearningRate (size_t learningRate)
 Setters.
 
void Step ()
 Performs one step of optimization.
 

Protected Member Functions

void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)
 Update the biases, given the current bias gradients.
 
void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)
 Update the weights, given the current weight gradients.
 

Protected Attributes

std::vector< std::vector< Matrix_t > > fBiasUpdates
 The accumulated past bias updates, used when applying momentum.
 
Scalar_t fEpsilon
 The smoothing term used to avoid division by zero.
 
Scalar_t fMomentum
 The momentum used for training.
 
std::vector< std::vector< Matrix_t > > fPastSquaredBiasGradients
 The exponentially decaying average of the squared past bias gradients of the deep net.
 
std::vector< std::vector< Matrix_t > > fPastSquaredWeightGradients
 The exponentially decaying average of the squared past weight gradients of the deep net.
 
Scalar_t fRho
 The rho decay constant used by the optimizer.
 
std::vector< std::vector< Matrix_t > > fWeightUpdates
 The accumulated past weight updates, used when applying momentum.
 
std::vector< std::vector< Matrix_t > > fWorkBiasTensor1
 Working tensor used to keep a temporary copy of the biases or bias gradients.
 
std::vector< std::vector< Matrix_t > > fWorkBiasTensor2
 Working tensor used to keep a temporary copy of the biases or bias gradients.
 
std::vector< std::vector< Matrix_t > > fWorkWeightTensor1
 Working tensor used to keep a temporary copy of the weights or weight gradients.
 
std::vector< std::vector< Matrix_t > > fWorkWeightTensor2
 Working tensor used to keep a temporary copy of the weights or weight gradients.
 
- Protected Attributes inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >
DeepNet_t & fDeepNet
 The reference to the deep net.
 
size_t fGlobalStep
 The current global step count during training.
 
Scalar_t fLearningRate
 The learning rate used for training.
 

#include <TMVA/DNN/RMSProp.h>

Inheritance: TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t > inherits from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >.

Member Typedef Documentation

◆ Matrix_t

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::Matrix_t = typename Architecture_t::Matrix_t

Definition at line 47 of file RMSProp.h.

◆ Scalar_t

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::Scalar_t = typename Architecture_t::Scalar_t

Definition at line 48 of file RMSProp.h.

Constructor & Destructor Documentation

◆ TRMSProp()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::TRMSProp ( DeepNet_t &  deepNet,
Scalar_t  learningRate = 0.001,
Scalar_t  momentum = 0.0,
Scalar_t  rho = 0.9,
Scalar_t  epsilon = 1e-7 
)

Constructor.

Definition at line 107 of file RMSProp.h.
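
For example, a sketch of constructing the optimizer with non-default hyperparameters (enabling the momentum term); deepNet and Architecture_t are assumed to be set up as in the usage sketch near the top of this page.

   // Sketch only: non-default hyperparameters; the momentum term is enabled.
   TMVA::DNN::TRMSProp<Architecture_t> optimizer(deepNet,
                                                 /*learningRate=*/0.01,
                                                 /*momentum=*/0.9,
                                                 /*rho=*/0.95,
                                                 /*epsilon=*/1e-6);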

◆ ~TRMSProp()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::~TRMSProp ( )
default

Destructor.

Member Function Documentation

◆ GetBiasUpdates()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetBiasUpdates ( )
inline

Definition at line 98 of file RMSProp.h.

◆ GetBiasUpdatesAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetBiasUpdatesAt ( size_t  i)
inline

Definition at line 99 of file RMSProp.h.

◆ GetEpsilon()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetEpsilon ( ) const
inline

Definition at line 87 of file RMSProp.h.

◆ GetMomentum()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetMomentum ( ) const
inline

Getters.

Definition at line 85 of file RMSProp.h.
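
A small sketch of reading the stored hyperparameters back, e.g. for logging; the helper PrintRMSPropConfig is hypothetical and Architecture_t is the alias used in the sketches above.

   #include <iostream>

   // Hypothetical helper: print the optimizer configuration.
   void PrintRMSPropConfig(const TMVA::DNN::TRMSProp<Architecture_t> &optimizer)
   {
      std::cout << "learning rate = " << optimizer.GetLearningRate() // inherited from VOptimizer
                << ", momentum = "    << optimizer.GetMomentum()
                << ", rho = "         << optimizer.GetRho()
                << ", epsilon = "     << optimizer.GetEpsilon() << std::endl;
   }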

◆ GetPastSquaredBiasGradients()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradients ( )
inline

Definition at line 92 of file RMSProp.h.

◆ GetPastSquaredBiasGradientsAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradientsAt ( size_t  i)
inline

Definition at line 93 of file RMSProp.h.

◆ GetPastSquaredWeightGradients()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradients ( )
inline

Definition at line 89 of file RMSProp.h.

◆ GetPastSquaredWeightGradientsAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradientsAt ( size_t  i)
inline

Definition at line 90 of file RMSProp.h.

◆ GetRho()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetRho ( ) const
inline

Definition at line 86 of file RMSProp.h.

◆ GetWeightUpdates()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetWeightUpdates ( )
inline

Definition at line 95 of file RMSProp.h.

◆ GetWeightUpdatesAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::GetWeightUpdatesAt ( size_t  i)
inline

Definition at line 96 of file RMSProp.h.

◆ UpdateBiases()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
auto TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::UpdateBiases ( size_t  layerIndex,
std::vector< Matrix_t > &  biases,
const std::vector< Matrix_t > &  biasGradients 
)
protectedvirtual

Update the biases, given the current bias gradients.

Implements TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >.

Definition at line 196 of file RMSProp.h.

◆ UpdateWeights()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
auto TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::UpdateWeights ( size_t  layerIndex,
std::vector< Matrix_t > &  weights,
const std::vector< Matrix_t > &  weightGradients 
)
protectedvirtual

Update the weights, given the current weight gradients.

Implements TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >.

Definition at line 152 of file RMSProp.h.
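
For orientation, the standalone sketch below spells out the per-element arithmetic of a standard RMSProp-with-momentum update of the kind UpdateWeights() and UpdateBiases() perform. It is illustrative only: the code in RMSProp.h operates on whole matrices through the Architecture_t backend, and details such as where epsilon enters the denominator may differ.

   #include <cmath>

   // Generic per-element RMSProp-with-momentum update (illustrative sketch):
   //   s     <- rho * s + (1 - rho) * g^2                                   (cf. fPastSquared*Gradients)
   //   delta <- momentum * delta + learningRate * g / (sqrt(s) + epsilon)   (cf. f*Updates)
   //   theta <- theta - delta
   double RMSPropElementUpdate(double theta, double g, double &s, double &delta,
                               double learningRate, double momentum,
                               double rho, double epsilon)
   {
      s     = rho * s + (1.0 - rho) * g * g;
      delta = momentum * delta + learningRate * g / (std::sqrt(s) + epsilon);
      return theta - delta;
   }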

Member Data Documentation

◆ fBiasUpdates

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fBiasUpdates
protected

The accumulated past bias updates, used when applying momentum.

Definition at line 60 of file RMSProp.h.

◆ fEpsilon

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fEpsilon
protected

The smoothing term used to avoid division by zero.

Definition at line 53 of file RMSProp.h.

◆ fMomentum

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fMomentum
protected

The momentum used for training.

Definition at line 51 of file RMSProp.h.

◆ fPastSquaredBiasGradients

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasGradients
protected

The exponentially decaying average of the squared past bias gradients of the deep net.

Definition at line 57 of file RMSProp.h.

◆ fPastSquaredWeightGradients

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightGradients
protected

The exponentially decaying average of the squared past weight gradients of the deep net.

Definition at line 55 of file RMSProp.h.

◆ fRho

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fRho
protected

The rho decay constant used by the optimizer.

Definition at line 52 of file RMSProp.h.

◆ fWeightUpdates

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fWeightUpdates
protected

The accumulated past weight updates, used when applying momentum.

Definition at line 59 of file RMSProp.h.

◆ fWorkBiasTensor1

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor1
protected

Working tensor used to keep a temporary copy of the biases or bias gradients.

Definition at line 64 of file RMSProp.h.

◆ fWorkBiasTensor2

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor2
protected

Working tensor used to keep a temporary copy of the biases or bias gradients.

Definition at line 68 of file RMSProp.h.

◆ fWorkWeightTensor1

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor1
protected

Working tensor used to keep a temporary copy of the weights or weight gradients.

Definition at line 62 of file RMSProp.h.

◆ fWorkWeightTensor2

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TRMSProp< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor2
protected

Working tensor used to keep a temporary copy of the weights or weight gradients.

Definition at line 66 of file RMSProp.h.

The documentation for this class was generated from the following file:

  • tmva/tmva/inc/TMVA/DNN/RMSProp.h