library: libMLP
#include "TNeuron.h"

TNeuron



class TNeuron : public TNamed

Inheritance Chart:
TObject <- TNamed <- TNeuron

Function Members (Methods)

    protected:
        void AddPost(TSynapse*)
        void AddPre(TSynapse*)
        Double_t DSigmoid(Double_t x) const
        Double_t Sigmoid(Double_t x) const

    public:
        TNeuron(TNeuron::NeuronType type = kSigmoid, const char* name = "", const char* title = "", const char* extF = "", const char* extD = "")
        TNeuron(const TNeuron&)
        virtual ~TNeuron()
        void AddInLayer(TNeuron*)
        static TClass* Class()
        void ForceExternalValue(Double_t value)
        Double_t GetBranch() const
        Double_t GetDeDw() const
        Double_t GetDEDw() const
        Double_t GetDerivative() const
        Double_t GetError() const
        TNeuron* GetInLayer(Int_t n) const
        Double_t GetInput() const
        const Double_t* GetNormalisation() const
        TSynapse* GetPost(Int_t n) const
        TSynapse* GetPre(Int_t n) const
        Double_t GetTarget() const
        TNeuron::NeuronType GetType() const
        Double_t GetValue() const
        Double_t GetWeight() const
        virtual TClass* IsA() const
        void SetDEDw(Double_t in)
        void SetNewEvent() const
        void SetNormalisation(Double_t mean, Double_t RMS)
        void SetWeight(Double_t w)
        virtual void ShowMembers(TMemberInspector& insp, char* parent)
        virtual void Streamer(TBuffer& b)
        void StreamerNVirtual(TBuffer& b)
        TTreeFormula* UseBranch(TTree*, const char*)

Data Members

    private:
        TObjArray fpre                pointers to the previous level in a network
        TObjArray fpost               pointers to the next level in a network
        TObjArray flayer              pointers to the current level in a network (neurons, not synapses)
        Double_t fWeight              weight used for computation
        Double_t fNorm[2]             normalisation to mean=0, RMS=1.
        TNeuron::NeuronType fType     neuron type
        TFormula* fExtF               function (external mode)
        TFormula* fExtD               derivative (external mode)
        TTreeFormula* fFormula        ! formula to be used for inputs and outputs
        Int_t fIndex                  ! index in the formula
        Bool_t fNewInput              ! do we need to compute fInput again?
        Double_t fInput               ! buffer containing the last neuron input
        Bool_t fNewValue              ! do we need to compute fValue again?
        Double_t fValue               ! buffer containing the last neuron output
        Bool_t fNewDeriv              ! do we need to compute fDerivative again?
        Double_t fDerivative          ! buffer containing the last neuron derivative
        Bool_t fNewDeDw               ! do we need to compute fDeDw again?
        Double_t fDeDw                ! buffer containing the last derivative of the error
        Double_t fDEDw                ! buffer containing the sum over all examples of DeDw

    public:
        static const TNeuron::NeuronType kOff
        static const TNeuron::NeuronType kLinear
        static const TNeuron::NeuronType kSigmoid
        static const TNeuron::NeuronType kTanh
        static const TNeuron::NeuronType kGauss
        static const TNeuron::NeuronType kSoftmax
        static const TNeuron::NeuronType kExternal

Class Description


 TNeuron

 This class describes an elementary neuron, the basic building
 block of a neural network.
 A network is built by connecting neurons with synapses.
 There are different types of neurons: linear (a+bx),
 sigmoid (1/(1+exp(-x))), tanh and gaussian.
 An external function can also be used, together with its derivative.
 In a Multi Layer Perceptron, the input layer is made of
 inactive neurons (returning the normalized input) and the output
 neurons are linear. Hidden neurons may be of any type, the
 default being sigmoid.

 This implementation contains several methods to compute the value,
 the derivative, the DeDw, etc.
 Values are stored in local buffers. The SetNewEvent() method
 signals that the buffered values are outdated and must be
 recomputed.
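
 A minimal, illustrative sketch (not how TMultiLayerPerceptron builds
 its networks internally) of wiring neurons together. It assumes that
 TSynapse::SetPre()/SetPost() perform the connection on both ends, as
 described for AddPre()/AddPost() below:

    #include <cstdio>
    #include "TNeuron.h"
    #include "TSynapse.h"

    void neuron_sketch()
    {
       TNeuron in1(TNeuron::kOff, "in1");      // inactive input neuron
       TNeuron in2(TNeuron::kOff, "in2");
       TNeuron hidden(TNeuron::kSigmoid, "h1");

       TSynapse s1, s2;                        // connect the inputs to the hidden neuron
       s1.SetPre(&in1);  s1.SetPost(&hidden);  // assumed to call AddPost()/AddPre()
       s2.SetPre(&in2);  s2.SetPost(&hidden);

       in1.ForceExternalValue(0.5);            // feed values directly (no TTree)
       in2.ForceExternalValue(-1.2);
       hidden.SetWeight(0.1);                  // the neuron weight is the bias

       printf("output = %g\n", hidden.GetValue());
       hidden.SetNewEvent();                   // invalidate buffers before the next event
    }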


Double_t Sigmoid(Double_t x)
 The Sigmoid.
 Fast computation of the values of the sigmoid function.
 Uses values of the function and of its derivatives up to the
 seventh order, tabulated at 700 points.
 Values were computed in long double precision (16 bytes,
 about 37 significant digits) on an HP computer.
 Some values were checked with Mathematica.
 The result should be correct to about 15 digits (roughly
 double precision).

 From the mlpfit package (J. Schwindling, 20-Jul-1999)
Double_t DSigmoid(Double_t x)
 The derivative of the sigmoid, which can be written in terms of
 the function itself: s'(x) = s(x) * (1 - s(x)).
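 For reference, the plain (non-tabulated) forms are simple; a sketch,
 not the fast implementation used by the class:

    #include <cmath>

    // Slow reference forms; TNeuron::Sigmoid() replaces the exp() call
    // with a table-based expansion for speed.
    double sigmoid(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

    // The derivative expressed through the function itself:
    // s'(x) = s(x) * (1 - s(x))
    double dsigmoid(double x) { double s = sigmoid(x); return s * (1.0 - s); }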
void AddPre(TSynapse * pre)
 Adds a synapse to the neuron as an input
 This method is used by the TSynapse while
 connecting two neurons.
void AddPost(TSynapse * post)
 Adds a synapse to the neuron as an output
 This method is used by the TSynapse while
 connecting two neurons.
void AddInLayer(TNeuron * nearP)
 Tells a neuron which neurons form its layer (including itself).
 This is needed for self-normalizing functions, like Softmax.
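 For illustration, a standalone version of the normalization a kSoftmax
 neuron performs (not the class internals): each output is exp(x_i)
 divided by the sum over the whole layer, which is why AddInLayer()
 must register every neuron of the layer:

    #include <cmath>
    #include <vector>

    std::vector<double> softmax(const std::vector<double>& x)
    {
       std::vector<double> out(x.size());
       double sum = 0.0;
       for (size_t i = 0; i < x.size(); ++i) { out[i] = std::exp(x[i]); sum += out[i]; }
       for (size_t i = 0; i < x.size(); ++i) out[i] /= sum;
       return out;   // entries are positive and sum to 1
    }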
TTreeFormula* UseBranch(TTree* input, const char* formula)
 Sets a formula that can be used to make the neuron an input.
 The formula is automatically normalized to mean=0, RMS=1.
 This normalisation is used by GetValue() (for input neurons)
 and GetError() (for output neurons).
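 A hedged usage sketch; the tree and the branch name "x" are
 illustrative, and any TTreeFormula expression may be used:

    #include <cstdio>
    #include "TTree.h"
    #include "TNeuron.h"

    void attach_input(TTree* t, TNeuron& input)
    {
       input.UseBranch(t, "x");       // also installs the mean/RMS normalisation
       t->GetEntry(0);                // load an event
       printf("normalized input = %g\n", input.GetValue());
       input.SetNewEvent();           // before moving to the next entry
    }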
Double_t GetBranch()
 Returns the formula value.
Double_t GetInput()
 Returns the neuron input.
Double_t GetValue()
 Computes the output using the appropriate activation function and
 all the weighted inputs, or, for an input neuron, uses the branch
 as input. In that case the branch normalisation is also applied.
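 Conceptually, for a non-input neuron this is the activation applied
 to the bias plus the weighted sum of the predecessors. An unbuffered
 sketch, assuming TSynapse::GetValue() returns the synapse weight times
 the preceding neuron's output (nPre is supplied by the caller, since
 no counter accessor appears in the synopsis above):

    #include <cmath>
    #include "TNeuron.h"
    #include "TSynapse.h"

    double neuron_value(const TNeuron& n, Int_t nPre)
    {
       Double_t input = n.GetWeight();         // the neuron weight acts as the bias
       for (Int_t i = 0; i < nPre; ++i)
          input += n.GetPre(i)->GetValue();    // weighted predecessor output (assumed)
       return 1.0 / (1.0 + std::exp(-input));  // for a kSigmoid neuron
    }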
Double_t GetDerivative()
 Computes the derivative of the activation function
 at the working point.
Double_t GetError()
 Computes the error for output neurons.
 Returns 0 for other neurons.
Double_t GetTarget()
 Computes the normalized target pattern for output neurons.
 Returns 0 for other neurons.
Double_t GetDeDw()
 Computes the derivative of the error wrt the neuron weight.
void ForceExternalValue(Double_t value)
 Uses the branch type to force an external value.
void SetNormalisation(Double_t mean, Double_t RMS)
 Sets the normalization variables.
 Any input neuron will return (branch-mean)/RMS.
 When UseBranch is called, mean and RMS are automatically set
 to the actual branch mean and RMS.
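 A common use is to override the automatic normalisation installed by
 UseBranch(); a hedged sketch:

    #include "TNeuron.h"

    // Restore raw (unnormalized) branch values on an input neuron:
    // (branch - 0) / 1 == the branch value itself.
    void disable_normalisation(TNeuron& input)
    {
       input.SetNormalisation(0.0, 1.0);
    }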
void SetWeight(Double_t w)
 Sets the neuron weight to w.
 The neuron weight corresponds to the bias in the
 linear combination of the inputs.
void SetNewEvent()
 Informs the neuron that the inputs of the network have changed,
 so that the buffered values have to be recomputed.
void SetDEDw(Double_t in)
 Sets the derivative of the total error wrt the neuron weight.
TNeuron(NeuronType type = kSigmoid, const char* name = "", const char* title = "", const char* extF = "", const char* extD = "" )
~TNeuron()
TSynapse* GetPre(Int_t n)
TSynapse* GetPost(Int_t n)
TNeuron* GetInLayer(Int_t n)
NeuronType GetType()
Double_t GetWeight()
const Double_t* GetNormalisation()
Double_t GetDEDw()

Author: Christophe.Delaere@cern.ch 20/07/03
Last update: root/mlp:$Name: $:$Id: TNeuron.cxx,v 1.20 2006/05/26 15:13:02 rdm Exp $
Copyright (C) 1995-2003, Rene Brun and Fons Rademakers.

