ROOT Reference Guide
TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t > Class Template Reference

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
class TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >

Adadelta Optimizer class.

This class implements the Adadelta optimizer, which adapts the step size of each parameter using exponentially decaying averages of its past squared gradients and past squared updates.

Definition at line 44 of file Adadelta.h.
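As a rough sketch of what the optimizer computes per parameter (plain C++ on a single scalar, not the TMVA types; all names below are illustrative, not part of the TMVA API), each step maintains a decaying average of squared gradients and of squared updates:

```cpp
#include <cmath>

// Decayed accumulators kept per parameter. The class keeps the analogous
// state per weight/bias matrix in fPastSquaredWeightGradients and friends.
struct AdadeltaState {
    double accumSqGrad = 0.0; // E[g^2]: decaying average of squared gradients
    double accumSqUpd  = 0.0; // E[dx^2]: decaying average of squared updates
};

// One Adadelta step for a single parameter w with gradient g.
// rho and eps defaults match the constructor defaults documented below.
inline double adadeltaStep(double &w, double g, AdadeltaState &s,
                           double lr = 1.0, double rho = 0.95,
                           double eps = 1e-8) {
    s.accumSqGrad = rho * s.accumSqGrad + (1.0 - rho) * g * g;
    double update = std::sqrt(s.accumSqUpd + eps) /
                    std::sqrt(s.accumSqGrad + eps) * g;
    s.accumSqUpd = rho * s.accumSqUpd + (1.0 - rho) * update * update;
    w -= lr * update;
    return update;
}
```

Because the numerator carries the same units as the parameter update itself, Adadelta's effective step size is largely self-tuning, which is consistent with the default learningRate of 1.0 below.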

Public Types

using Matrix_t = typename Architecture_t::Matrix_t
 
using Scalar_t = typename Architecture_t::Scalar_t
 
- Public Types inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >
using Matrix_t = typename Architecture_t::Matrix_t
 
using Scalar_t = typename Architecture_t::Scalar_t
 

Public Member Functions

 TAdadelta (DeepNet_t &deepNet, Scalar_t learningRate=1.0, Scalar_t rho=0.95, Scalar_t epsilon=1e-8)
 Constructor. More...
 
 ~TAdadelta ()=default
 Destructor. More...
 
Scalar_t GetEpsilon () const
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasGradients ()
 
std::vector< Matrix_t > & GetPastSquaredBiasGradientsAt (size_t i)
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasUpdates ()
 
std::vector< Matrix_t > & GetPastSquaredBiasUpdatesAt (size_t i)
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightGradients ()
 
std::vector< Matrix_t > & GetPastSquaredWeightGradientsAt (size_t i)
 
std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightUpdates ()
 
std::vector< Matrix_t > & GetPastSquaredWeightUpdatesAt (size_t i)
 
Scalar_t GetRho () const
 Getters. More...
 
- Public Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >
 VOptimizer (Scalar_t learningRate, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > &deepNet)
 Constructor. More...
 
virtual ~VOptimizer ()=default
 Virtual Destructor. More...
 
size_t GetGlobalStep () const
 
VGeneralLayer< Architecture_t > * GetLayerAt (size_t i)
 
std::vector< VGeneralLayer< Architecture_t > * > & GetLayers ()
 
Scalar_t GetLearningRate () const
 Getters. More...
 
void IncrementGlobalStep ()
 Increments the global step. More...
 
void SetLearningRate (size_t learningRate)
 Setters. More...
 
void Step ()
 Performs one step of optimization. More...
 

Protected Member Functions

void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)
 Update the biases, given the current bias gradients. More...
 
void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)
 Update the weights, given the current weight gradients. More...

- Protected Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >
 
virtual void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)=0
 Update the biases, given the current bias gradients. More...
 
virtual void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)=0
 Update the weights, given the current weight gradients. More...
 

Protected Attributes

Scalar_t fEpsilon
 The smoothing term used to avoid division by zero. More...
 
std::vector< std::vector< Matrix_t > > fPastSquaredBiasGradients
 The accumulation of the square of the past bias gradients associated with the deep net. More...
 
std::vector< std::vector< Matrix_t > > fPastSquaredBiasUpdates
 The accumulation of the square of the past bias updates associated with the deep net. More...
 
std::vector< std::vector< Matrix_t > > fPastSquaredWeightGradients
 The accumulation of the square of the past weight gradients associated with the deep net. More...
 
std::vector< std::vector< Matrix_t > > fPastSquaredWeightUpdates
 The accumulation of the square of the past weight updates associated with the deep net. More...
 
Scalar_t fRho
 The rho decay constant used by the optimizer. More...
 
std::vector< std::vector< Matrix_t > > fWorkBiasTensor1
 Working tensor used to keep a temporary copy of bias or bias gradients. More...
 
std::vector< std::vector< Matrix_t > > fWorkBiasTensor2
 Working tensor used to keep a temporary copy of bias or bias gradients. More...
 
std::vector< std::vector< Matrix_t > > fWorkWeightTensor1
 Working tensor used to keep a temporary copy of weights or weight gradients. More...
 
std::vector< std::vector< Matrix_t > > fWorkWeightTensor2
 Working tensor used to keep a temporary copy of weights or weight gradients. More...
 
- Protected Attributes inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >
TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > & fDeepNet
 The reference to the deep net. More...
 
size_t fGlobalStep
 The current global step count during training. More...
 
Scalar_t fLearningRate
 The learning rate used for training. More...
 

#include <TMVA/DNN/Adadelta.h>

Inheritance diagram for TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >:

Member Typedef Documentation

◆ Matrix_t

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::Matrix_t = typename Architecture_t::Matrix_t

Definition at line 46 of file Adadelta.h.

◆ Scalar_t

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::Scalar_t = typename Architecture_t::Scalar_t

Definition at line 47 of file Adadelta.h.

Constructor & Destructor Documentation

◆ TAdadelta()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::TAdadelta ( DeepNet_t &  deepNet,
Scalar_t  learningRate = 1.0,
Scalar_t  rho = 0.95,
Scalar_t  epsilon = 1e-8 
)

Constructor.

Definition at line 101 of file Adadelta.h.
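The role of the epsilon default can be illustrated with a small stand-alone sketch (plain C++, illustrative names only, not the TMVA implementation): on the very first step both accumulators are zero, and epsilon alone keeps the update finite and nonzero.

```cpp
#include <cmath>

// First Adadelta update for gradient g, starting from zero accumulators
// (illustrative only; defaults mirror the constructor above).
inline double firstAdadeltaUpdate(double g, double rho = 0.95,
                                  double eps = 1e-8) {
    double accumSqGrad = (1.0 - rho) * g * g;  // accumulators start at zero
    // eps keeps the numerator nonzero (sqrt(0 + eps)) and guards the
    // denominator against division by zero when gradients vanish.
    return std::sqrt(eps) / std::sqrt(accumSqGrad + eps) * g;
}
```

With the defaults and a first gradient of 0.5, this yields a small but nonzero first step, after which the accumulators take over.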

◆ ~TAdadelta()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::~TAdadelta ( )
default

Destructor.

Member Function Documentation

◆ GetEpsilon()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetEpsilon ( ) const
inline

Definition at line 81 of file Adadelta.h.

◆ GetPastSquaredBiasGradients()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradients ( )
inline

Definition at line 86 of file Adadelta.h.

◆ GetPastSquaredBiasGradientsAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradientsAt ( size_t  i)
inline

Definition at line 87 of file Adadelta.h.

◆ GetPastSquaredBiasUpdates()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasUpdates ( )
inline

Definition at line 92 of file Adadelta.h.

◆ GetPastSquaredBiasUpdatesAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasUpdatesAt ( size_t  i)
inline

Definition at line 93 of file Adadelta.h.

◆ GetPastSquaredWeightGradients()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradients ( )
inline

Definition at line 83 of file Adadelta.h.

◆ GetPastSquaredWeightGradientsAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradientsAt ( size_t  i)
inline

Definition at line 84 of file Adadelta.h.

◆ GetPastSquaredWeightUpdates()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightUpdates ( )
inline

Definition at line 89 of file Adadelta.h.

◆ GetPastSquaredWeightUpdatesAt()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightUpdatesAt ( size_t  i)
inline

Definition at line 90 of file Adadelta.h.

◆ GetRho()

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetRho ( ) const
inline

Getters.

Definition at line 80 of file Adadelta.h.

◆ UpdateBiases()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
auto TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::UpdateBiases ( size_t  layerIndex,
std::vector< Matrix_t > &  biases,
const std::vector< Matrix_t > &  biasGradients 
)
protected virtual

Update the biases, given the current bias gradients.

Implements TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >.

◆ UpdateWeights()

template<typename Architecture_t , typename Layer_t , typename DeepNet_t >
auto TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::UpdateWeights ( size_t  layerIndex,
std::vector< Matrix_t > &  weights,
const std::vector< Matrix_t > &  weightGradients 
)
protected virtual

Update the weights, given the current weight gradients.

Implements TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >.

Definition at line 146 of file Adadelta.h.
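The per-layer bookkeeping these update methods rely on can be sketched with plain nested std::vector<double> standing in for Matrix_t (the names deliberately mirror the data members documented below, but this is an illustration, not the TMVA implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;  // stand-in for one Matrix_t

// Per-layer state mirroring fPastSquaredWeightGradients / ...WeightUpdates.
struct LayerState {
    Vec pastSqGrad, pastSqUpd;
};

// Element-wise Adadelta update of one layer's weights in place.
inline void updateWeights(Vec &weights, const Vec &grads, LayerState &s,
                          double lr = 1.0, double rho = 0.95,
                          double eps = 1e-8) {
    if (s.pastSqGrad.empty()) {  // lazily initialize accumulators to zero
        s.pastSqGrad.assign(weights.size(), 0.0);
        s.pastSqUpd.assign(weights.size(), 0.0);
    }
    for (std::size_t i = 0; i < weights.size(); ++i) {
        s.pastSqGrad[i] =
            rho * s.pastSqGrad[i] + (1.0 - rho) * grads[i] * grads[i];
        double upd = std::sqrt(s.pastSqUpd[i] + eps) /
                     std::sqrt(s.pastSqGrad[i] + eps) * grads[i];
        s.pastSqUpd[i] = rho * s.pastSqUpd[i] + (1.0 - rho) * upd * upd;
        weights[i] -= lr * upd;
    }
}
```

UpdateBiases follows the same pattern with the bias accumulators; the work tensors documented below hold the intermediate copies that this scalar sketch computes inline.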

Member Data Documentation

◆ fEpsilon

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fEpsilon
protected

The smoothing term used to avoid division by zero.

Definition at line 51 of file Adadelta.h.

◆ fPastSquaredBiasGradients

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasGradients
protected

The accumulation of the square of the past bias gradients associated with the deep net.

Definition at line 54 of file Adadelta.h.

◆ fPastSquaredBiasUpdates

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasUpdates
protected

The accumulation of the square of the past bias updates associated with the deep net.

Definition at line 59 of file Adadelta.h.

◆ fPastSquaredWeightGradients

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightGradients
protected

The accumulation of the square of the past weight gradients associated with the deep net.

Definition at line 52 of file Adadelta.h.

◆ fPastSquaredWeightUpdates

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightUpdates
protected

The accumulation of the square of the past weight updates associated with the deep net.

Definition at line 57 of file Adadelta.h.

◆ fRho

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fRho
protected

The rho decay constant used by the optimizer.

Definition at line 50 of file Adadelta.h.

◆ fWorkBiasTensor1

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor1
protected

Working tensor used to keep a temporary copy of bias or bias gradients.

Definition at line 62 of file Adadelta.h.

◆ fWorkBiasTensor2

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor2
protected

Working tensor used to keep a temporary copy of bias or bias gradients.

Definition at line 64 of file Adadelta.h.

◆ fWorkWeightTensor1

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor1
protected

Working tensor used to keep a temporary copy of weights or weight gradients.

Definition at line 61 of file Adadelta.h.

◆ fWorkWeightTensor2

template<typename Architecture_t , typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor2
protected

Working tensor used to keep a temporary copy of weights or weight gradients.

Definition at line 63 of file Adadelta.h.


The documentation for this class was generated from the following file:
TMVA/DNN/Adadelta.h