ROOT 6.10/09 Reference Guide
TMVA::DNN::TCuda< AFloat > Class Template Reference

template<typename AFloat = Real_t>
class TMVA::DNN::TCuda< AFloat >

The TCuda architecture class.

Low-level interface class for CUDA computing architectures. Contains as public types the declaration of the scalar, matrix and buffer types for this architecture as well as the remaining functions in the low-level interface in the form of static members.

Definition at line 40 of file Cuda.h.
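
A minimal sketch of how this architecture class is typically consumed: it supplies the scalar, matrix and buffer types plus the static routines listed below. The single-precision instantiation and the alias names are illustrative choices, not part of the interface.

#include <TMVA/DNN/Architectures/Cuda.h>

// Illustrative aliases; AFloat = float selects the single-precision backend.
using Arch_t         = TMVA::DNN::TCuda<float>;
using Scalar_t       = Arch_t::Scalar_t;        // float
using Matrix_t       = Arch_t::Matrix_t;        // TCudaMatrix<float>
using HostBuffer_t   = Arch_t::HostBuffer_t;    // TCudaHostBuffer<float>
using DeviceBuffer_t = Arch_t::DeviceBuffer_t;  // TCudaDeviceBuffer<float>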

Public Types

using DeviceBuffer_t = TCudaDeviceBuffer< AFloat >
 
using HostBuffer_t = TCudaHostBuffer< AFloat >
 
using Matrix_t = TCudaMatrix< AFloat >
 
using Scalar_t = AFloat
 

Static Public Member Functions

Forward Propagation

Low-level functions required for the forward propagation of activations through the network.

static void MultiplyTranspose (TCudaMatrix< AFloat > &output, const TCudaMatrix< AFloat > &input, const TCudaMatrix< AFloat > &weights)
 Matrix-multiply input with the transpose of weights and write the results into output. More...
 
static void AddRowWise (TCudaMatrix< AFloat > &output, const TCudaMatrix< AFloat > &biases)
 Add the vectors biases row-wise to the matrix output. More...
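
A sketch of how these two routines combine into the forward pass of a fully connected layer; matrix allocation and shapes are assumed to be handled by the caller, and the helper name is hypothetical.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: output = input * weights^T, then add biases row-wise.
void ForwardLayer(Matrix_t &output, const Matrix_t &input,
                  const Matrix_t &weights, const Matrix_t &biases)
{
   Arch_t::MultiplyTranspose(output, input, weights);
   Arch_t::AddRowWise(output, biases);
}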
 
Backward Propagation

Low-level functions required for the backward propagation of activation gradients through the network.

static void Backward (TCudaMatrix< AFloat > &activationGradientsBackward, TCudaMatrix< AFloat > &weightGradients, TCudaMatrix< AFloat > &biasGradients, TCudaMatrix< AFloat > &df, const TCudaMatrix< AFloat > &activationGradients, const TCudaMatrix< AFloat > &weights, const TCudaMatrix< AFloat > &activationBackward)
 Perform the complete backward propagation step. More...
 
static void ScaleAdd (TCudaMatrix< AFloat > &A, const TCudaMatrix< AFloat > &B, Scalar_t beta=1.0)
 Adds the elements in matrix B, scaled by beta, to the elements in matrix A. More...
 
static void Copy (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 Copy the elements of matrix A into matrix B. More...
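
A sketch of one backward step followed by a plain gradient-descent update built from these routines; the helper name and the learningRate parameter are illustrative, and df is assumed to still hold the activation derivatives from the preceding forward pass.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: backward step for one layer plus SGD weight update.
void BackwardAndUpdate(Matrix_t &actGradBackward, Matrix_t &weightGradients,
                       Matrix_t &biasGradients, Matrix_t &df,
                       const Matrix_t &actGradients, Matrix_t &weights,
                       Matrix_t &biases, const Matrix_t &actBackward,
                       float learningRate)
{
   // Gradients w.r.t. previous-layer activations, weights and biases.
   Arch_t::Backward(actGradBackward, weightGradients, biasGradients, df,
                    actGradients, weights, actBackward);
   // W <- W - eta * dW  and  b <- b - eta * db  via ScaleAdd.
   Arch_t::ScaleAdd(weights, weightGradients, -learningRate);
   Arch_t::ScaleAdd(biases,  biasGradients,  -learningRate);
}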
 
Activation Functions

For each activation function, the low-level interface contains two routines.

One applies the activation function to a matrix; the other evaluates the derivative of the activation function at the elements of a given matrix and writes the results into the result matrix.

static void Identity (TCudaMatrix< AFloat > &B)
 
static void IdentityDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void Relu (TCudaMatrix< AFloat > &B)
 
static void ReluDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void Sigmoid (TCudaMatrix< AFloat > &B)
 
static void SigmoidDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void Tanh (TCudaMatrix< AFloat > &B)
 
static void TanhDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void SymmetricRelu (TCudaMatrix< AFloat > &B)
 
static void SymmetricReluDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void SoftSign (TCudaMatrix< AFloat > &B)
 
static void SoftSignDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 
static void Gauss (TCudaMatrix< AFloat > &B)
 
static void GaussDerivative (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
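 
A sketch of the usual calling pattern, assuming the derivatives should be evaluated at the pre-activations before the activation is applied in place (so that the derivative matrix can later serve as the df argument of Backward); the helper name is hypothetical.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: B holds pre-activations on entry, activations on exit;
// the derivatives evaluated at the pre-activations are written into dB.
void ApplyReluWithDerivative(Matrix_t &B, Matrix_t &dB)
{
   Arch_t::ReluDerivative(dB, B);  // f'(x), element-wise, into dB
   Arch_t::Relu(B);                // f(x) applied in place
}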
 
Loss Functions

Loss functions compute a scalar value from the output of the network for a given training input and the expected network prediction Y; this value quantifies the quality of the prediction.

For each loss function, a routine that computes the gradients (suffixed by Gradients) must also be provided in order to start the backpropagation algorithm.

static AFloat MeanSquaredError (const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
 
static void MeanSquaredErrorGradients (TCudaMatrix< AFloat > &dY, const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
 
static AFloat CrossEntropy (const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
 Sigmoid transformation is implicitly applied, thus output should hold the linear activations of the last layer in the net. More...
 
static void CrossEntropyGradients (TCudaMatrix< AFloat > &dY, const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
 
static AFloat SoftmaxCrossEntropy (const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
 Softmax transformation is implicitly applied, thus output should hold the linear activations of the last layer in the net. More...
 
static void SoftmaxCrossEntropyGradients (TCudaMatrix< AFloat > &dY, const TCudaMatrix< AFloat > &Y, const TCudaMatrix< AFloat > &output)
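 
A sketch combining a loss evaluation with its gradient routine; the helper name is hypothetical. Note that CrossEntropy and SoftmaxCrossEntropy expect the linear activations of the last layer, since the sigmoid or softmax is applied inside the loss.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: scalar loss plus dLoss/dOutput written into dY.
float MSEWithGradient(Matrix_t &dY, const Matrix_t &Y, const Matrix_t &output)
{
   float loss = Arch_t::MeanSquaredError(Y, output);
   Arch_t::MeanSquaredErrorGradients(dY, Y, output);
   return loss;
}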
 
Output Functions

Output functions transform the activations of the network's output layer into a valid prediction YHat for the desired usage of the network, e.g. the identity function for regression or the sigmoid transformation for two-class classification.

static void Sigmoid (TCudaMatrix< AFloat > &YHat, const TCudaMatrix< AFloat > &)
 
static void Softmax (TCudaMatrix< AFloat > &YHat, const TCudaMatrix< AFloat > &)
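 
A sketch of turning the linear activations of the last layer into predictions; the helper name and the boolean switch are illustrative.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: sigmoid for two-class problems, softmax for multi-class.
void Predict(Matrix_t &YHat, const Matrix_t &lastLayerActivations, bool multiclass)
{
   if (multiclass)
      Arch_t::Softmax(YHat, lastLayerActivations);
   else
      Arch_t::Sigmoid(YHat, lastLayerActivations);
}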
 
Regularization

For each regularization type two functions are required: one named <Type>Regularization that evaluates the corresponding regularization functional for a given weight matrix, and one named Add<Type>RegularizationGradients that adds the regularization component of the gradients to the provided matrix.

static AFloat L1Regularization (const TCudaMatrix< AFloat > &W)
 
static void AddL1RegularizationGradients (TCudaMatrix< AFloat > &A, const TCudaMatrix< AFloat > &W, AFloat weightDecay)
 
static AFloat L2Regularization (const TCudaMatrix< AFloat > &W)
 
static void AddL2RegularizationGradients (TCudaMatrix< AFloat > &A, const TCudaMatrix< AFloat > &W, AFloat weightDecay)
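 
A sketch of adding an L2 penalty and its gradient contribution; the helper name is hypothetical, weightDecay denotes the regularization strength, and the exact weighting of the penalty in the total loss is an assumption here.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: returns the penalty to be added to the data loss and
// adds the corresponding term to the weight gradients.
float AddL2Penalty(Matrix_t &weightGradients, const Matrix_t &weights,
                   float weightDecay)
{
   float penalty = weightDecay * Arch_t::L2Regularization(weights);
   Arch_t::AddL2RegularizationGradients(weightGradients, weights, weightDecay);
   return penalty;
}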
 
Initialization

For each initialization method, one function in the low-level interface is provided.

The naming scheme is

Initialize<Type>

for a given initialization method Type.

static void InitializeGauss (TCudaMatrix< AFloat > &A)
 
static void InitializeUniform (TCudaMatrix< AFloat > &A)
 
static void InitializeIdentity (TCudaMatrix< AFloat > &A)
 
static void InitializeZero (TCudaMatrix< AFloat > &A)
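 
A sketch following the Initialize<Type> naming scheme; Gaussian weights with zero biases are a common choice used here for illustration, not something the interface mandates.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: Gaussian-distributed weights, zeroed biases.
void InitializeLayer(Matrix_t &weights, Matrix_t &biases)
{
   Arch_t::InitializeGauss(weights);
   Arch_t::InitializeZero(biases);
}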
 
Dropout
static void Dropout (TCudaMatrix< AFloat > &A, AFloat p)
 Apply dropout with activation probability p to the given matrix A and scale the result by the reciprocal of p. More...
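 
A sketch of inverted dropout as described above: during training each element is kept with probability p and the surviving elements are scaled by 1/p, so no rescaling is needed at inference time. The helper name and the training flag are illustrative.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: apply dropout only while training.
void ApplyDropout(Matrix_t &activations, float keepProbability, bool training)
{
   if (training)
      Arch_t::Dropout(activations, keepProbability);
   // At inference time the activations are left untouched.
}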
 
Additional Arithmetic Functions

Additional arithmetic on CUDA matrices used to implement the low-level interface.

static void Multiply (TCudaMatrix< AFloat > &C, const TCudaMatrix< AFloat > &A, const TCudaMatrix< AFloat > &B)
 Standard multiplication of two matrices A and B with the result being written into C. More...
 
static void TransposeMultiply (TCudaMatrix< AFloat > &output, const TCudaMatrix< AFloat > &input, const TCudaMatrix< AFloat > &Weights)
 Matrix multiplication of two matrices A and B^T (transposed) with the result being written into C. More...
 
static void Hadamard (TCudaMatrix< AFloat > &A, const TCudaMatrix< AFloat > &B)
 In-place Hadamard (element-wise) product of matrices A and B with the result being written into A. More...
 
static void SumColumns (TCudaMatrix< AFloat > &B, const TCudaMatrix< AFloat > &A)
 Sum columns of the (m x n) matrix A and write the results into the first m elements of B. More...
 
static AFloat Sum (const TCudaMatrix< AFloat > &A)
 Compute the sum of all elements in A. More...
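 
A short sketch exercising a few of the auxiliary routines; the helper name is hypothetical and the matrices are assumed to have compatible shapes.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helper: C = A * B, then A <- A * B element-wise, then sum of A.
float ArithmeticDemo(Matrix_t &C, Matrix_t &A, const Matrix_t &B)
{
   Arch_t::Multiply(C, A, B);   // standard matrix product
   Arch_t::Hadamard(A, B);      // in-place element-wise product
   return Arch_t::Sum(A);       // reduction over all elements of A
}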
 

#include <TMVA/DNN/Architectures/Cuda.h>

Member Typedef Documentation

◆ DeviceBuffer_t

template<typename AFloat = Real_t>
using TMVA::DNN::TCuda< AFloat >::DeviceBuffer_t = TCudaDeviceBuffer<AFloat>

Definition at line 47 of file Cuda.h.

◆ HostBuffer_t

template<typename AFloat = Real_t>
using TMVA::DNN::TCuda< AFloat >::HostBuffer_t = TCudaHostBuffer<AFloat>

Definition at line 48 of file Cuda.h.

◆ Matrix_t

template<typename AFloat = Real_t>
using TMVA::DNN::TCuda< AFloat >::Matrix_t = TCudaMatrix<AFloat>

Definition at line 46 of file Cuda.h.

◆ Scalar_t

template<typename AFloat = Real_t>
using TMVA::DNN::TCuda< AFloat >::Scalar_t = AFloat

Definition at line 45 of file Cuda.h.

Member Function Documentation

◆ AddL1RegularizationGradients()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::AddL1RegularizationGradients ( TCudaMatrix< AFloat > &  A,
const TCudaMatrix< AFloat > &  W,
AFloat  weightDecay 
)
static

◆ AddL2RegularizationGradients()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::AddL2RegularizationGradients ( TCudaMatrix< AFloat > &  A,
const TCudaMatrix< AFloat > &  W,
AFloat  weightDecay 
)
static

◆ AddRowWise()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::AddRowWise ( TCudaMatrix< AFloat > &  output,
const TCudaMatrix< AFloat > &  biases 
)
static

Add the vectors biases row-wise to the matrix output.

◆ Backward()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Backward ( TCudaMatrix< AFloat > &  activationGradientsBackward,
TCudaMatrix< AFloat > &  weightGradients,
TCudaMatrix< AFloat > &  biasGradients,
TCudaMatrix< AFloat > &  df,
const TCudaMatrix< AFloat > &  activationGradients,
const TCudaMatrix< AFloat > &  weights,
const TCudaMatrix< AFloat > &  activationBackward 
)
static

Perform the complete backward propagation step.

If the provided activationGradientsBackward matrix is not empty, compute the gradients of the objective function with respect to the activations of the previous layer (backward direction). Also compute the weight and bias gradients. The values in df are modified in place, so the call produces a valid result only the first time it is applied after the corresponding forward propagation has been performed.
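
A sketch of the required call order for one layer: the activation derivatives are stored in df during the forward pass, and Backward is then called exactly once before df is overwritten. All helper names are hypothetical, and the choice of Tanh is only an example.

#include <TMVA/DNN/Architectures/Cuda.h>
using Arch_t   = TMVA::DNN::TCuda<float>;
using Matrix_t = Arch_t::Matrix_t;

// Hypothetical helpers outlining the forward/backward ordering.
void ForwardStep(Matrix_t &output, Matrix_t &df, const Matrix_t &input,
                 const Matrix_t &weights, const Matrix_t &biases)
{
   Arch_t::MultiplyTranspose(output, input, weights);
   Arch_t::AddRowWise(output, biases);
   Arch_t::TanhDerivative(df, output);  // keep f'(z) for the backward step
   Arch_t::Tanh(output);                // activation applied in place
}

void BackwardStep(Matrix_t &actGradBackward, Matrix_t &weightGradients,
                  Matrix_t &biasGradients, Matrix_t &df,
                  const Matrix_t &actGradients, const Matrix_t &weights,
                  const Matrix_t &input)
{
   // Valid only once per forward pass, since df is modified in place.
   Arch_t::Backward(actGradBackward, weightGradients, biasGradients, df,
                    actGradients, weights, input);
}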

◆ Copy()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Copy ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

Copy the elements of matrix A into matrix B.

◆ CrossEntropy()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::CrossEntropy ( const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

Sigmoid transformation is implicitly applied, thus output should hold the linear activations of the last layer in the net.

◆ CrossEntropyGradients()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::CrossEntropyGradients ( TCudaMatrix< AFloat > &  dY,
const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

◆ Dropout()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Dropout ( TCudaMatrix< AFloat > &  A,
AFloat  p 
)
static

Apply dropout with activation probability p to the given matrix A and scale the result by the reciprocal of p.

◆ Gauss()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Gauss ( TCudaMatrix< AFloat > &  B)
static

◆ GaussDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::GaussDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ Hadamard()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Hadamard ( TCudaMatrix< AFloat > &  A,
const TCudaMatrix< AFloat > &  B 
)
static

In-place Hadamard (element-wise) product of matrices A and B with the result being written into A.

◆ Identity()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Identity ( TCudaMatrix< AFloat > &  B)
static

◆ IdentityDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::IdentityDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ InitializeGauss()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::InitializeGauss ( TCudaMatrix< AFloat > &  A)
static

◆ InitializeIdentity()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::InitializeIdentity ( TCudaMatrix< AFloat > &  A)
static

◆ InitializeUniform()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::InitializeUniform ( TCudaMatrix< AFloat > &  A)
static

◆ InitializeZero()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::InitializeZero ( TCudaMatrix< AFloat > &  A)
static

◆ L1Regularization()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::L1Regularization ( const TCudaMatrix< AFloat > &  W)
static

◆ L2Regularization()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::L2Regularization ( const TCudaMatrix< AFloat > &  W)
static

◆ MeanSquaredError()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::MeanSquaredError ( const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

◆ MeanSquaredErrorGradients()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::MeanSquaredErrorGradients ( TCudaMatrix< AFloat > &  dY,
const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

◆ Multiply()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Multiply ( TCudaMatrix< AFloat > &  C,
const TCudaMatrix< AFloat > &  A,
const TCudaMatrix< AFloat > &  B 
)
static

Standard multiplication of two matrices A and B with the result being written into C.

◆ MultiplyTranspose()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::MultiplyTranspose ( TCudaMatrix< AFloat > &  output,
const TCudaMatrix< AFloat > &  input,
const TCudaMatrix< AFloat > &  weights 
)
static

Matrix-multiply input with the transpose of weights and write the results into output.

◆ Relu()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Relu ( TCudaMatrix< AFloat > &  B)
static

◆ ReluDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::ReluDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ ScaleAdd()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::ScaleAdd ( TCudaMatrix< AFloat > &  A,
const TCudaMatrix< AFloat > &  B,
Scalar_t  beta = 1.0 
)
static

Adds the elements in matrix B, scaled by beta, to the elements in matrix A.

This is required for the weight update in the gradient descent step.

◆ Sigmoid() [1/2]

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Sigmoid ( TCudaMatrix< AFloat > &  B)
static

◆ Sigmoid() [2/2]

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Sigmoid ( TCudaMatrix< AFloat > &  YHat,
const TCudaMatrix< AFloat > &   
)
static

◆ SigmoidDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SigmoidDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ Softmax()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Softmax ( TCudaMatrix< AFloat > &  YHat,
const TCudaMatrix< AFloat > &   
)
static

◆ SoftmaxCrossEntropy()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::SoftmaxCrossEntropy ( const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

Softmax transformation is implicitly applied, thus output should hold the linear activations of the last layer in the net.

◆ SoftmaxCrossEntropyGradients()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SoftmaxCrossEntropyGradients ( TCudaMatrix< AFloat > &  dY,
const TCudaMatrix< AFloat > &  Y,
const TCudaMatrix< AFloat > &  output 
)
static

◆ SoftSign()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SoftSign ( TCudaMatrix< AFloat > &  B)
static

◆ SoftSignDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SoftSignDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ Sum()

template<typename AFloat = Real_t>
static AFloat TMVA::DNN::TCuda< AFloat >::Sum ( const TCudaMatrix< AFloat > &  A)
static

Compute the sum of all elements in A.

◆ SumColumns()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SumColumns ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

Sum columns of the (m x n) matrix A and write the results into the first m elements of B.

◆ SymmetricRelu()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SymmetricRelu ( TCudaMatrix< AFloat > &  B)
static

◆ SymmetricReluDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::SymmetricReluDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ Tanh()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::Tanh ( TCudaMatrix< AFloat > &  B)
static

◆ TanhDerivative()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::TanhDerivative ( TCudaMatrix< AFloat > &  B,
const TCudaMatrix< AFloat > &  A 
)
static

◆ TransposeMultiply()

template<typename AFloat = Real_t>
static void TMVA::DNN::TCuda< AFloat >::TransposeMultiply ( TCudaMatrix< AFloat > &  output,
const TCudaMatrix< AFloat > &  input,
const TCudaMatrix< AFloat > &  Weights 
)
static

Matrix multiplication of two matrices A and B^T (transposed) with the result being written into C.


The documentation for this class was generated from the following file: