Log of /trunk/math/mlp/src/TMultiLayerPerceptron.cxx
Revision 48992
Modified Thu Mar 28 15:26:26 2013 UTC (21 months, 3 weeks ago) by rdm
File length: 97709 byte(s)
Diff to previous 44507
From Lifeng Sun:
The attached patchset fixes a bunch of typos in the source:
0001-succes-success.patch
0002-preceed-preced.patch
0003-informations-information.patch
0004-childs-children.patch
0005-avaliable-available.patch
0006-writeable-writable.patch
0007-comand-command.patch
0008-unkown-unknown.patch
0009-wierd-weird.patch
0010-wheter-whether.patch
0011-unecessary-unnecessary.patch
0012-splitted-split.patch
0013-registerd-registered.patch
0014-recieve-receive.patch
0015-processsing-processing.patch
0016-ouput-output.patch
0017-mutiple-multiple.patch
0018-lenght-length.patch
0019-interupted-interrupted.patch
0020-independant-independent.patch
0021-inconsistant-inconsistent.patch
0022-expresion-expression.patch
0023-explicitely-explicitly.patch
0024-enviroment-environment.patch
0025-deafult-default.patch
0026-continous-continuous.patch
0027-completly-completely.patch
0028-commited-committed.patch
0029-choosen-chosen.patch
0030-backgroud-background.patch
0031-auxilliary-auxiliary.patch
0032-authentification-authentication.patch
0033-appropiate-appropriate.patch
0034-an-other-another.patch
0035-environement-environment.patch
0036-targetting-targeting.patch
0037-suppported-supported.patch
0038-paramater-parameter.patch
Revision 44507
Modified Mon Jun 4 12:30:41 2012 UTC (2 years, 7 months ago) by axel
File length: 97713 byte(s)
Diff to previous 44143
Remove "using namespace std;" from Riostream.h, which has huge consequences for all of ROOT.
Riostream.h is now a simple wrapper for fstream, iostream, iomanip for backward compatibility; Riosfwd.h simply wraps iosfwd.
Because of templates and their inline functions, Riostream.h needed to be included in headers, too (e.g. TParameter.h), which violated the assumption that Riostream.h is not exposing its using namespace std to headers.
ROOT now requires R__ANSISTREAM, R__SSTREAM, which does not change the set of supported compilers.
Without "using namespace std", several identifiers are now prefixed by std::; e.g. roofit/* source files now have a using namespace std to keep their coding style.
TFile::MakeProject() now generates "using namespace std" to convert the CINT-style class names into C++ ones.
Revision 36832
Modified Mon Nov 22 08:53:49 2010 UTC (4 years, 2 months ago) by brun
File length: 96629 byte(s)
Diff to previous 35375
From Christophe Delaere:
Added a third parameter to set the limit; the behavior is then controlled
by extra options:
// - "minErrorTrain" (stop when NN error on the training sample gets below minE)
// - "minErrorTest" (stop when NN error on the test sample gets below minE)
The limit on epochs is always active but can be set to a high value. This
guarantees the stopping condition.
Revision 30749
Modified Thu Oct 15 16:33:04 2009 UTC (5 years, 3 months ago) by brun
File length: 95859 byte(s)
Diff to previous 29964
From Matthew Strait:
This patch fixes the spelling of "function" in the root source code
and documentation, which is misspelled (sometimes as part of larger
"function"-based words) at least 152 times:
* "funciton" 48 times
* "funcion" 36 times
* "funtion" 23 times
* "fucntion" 17 times
* "functionn" 6 times
* "fuction" 6 times
* "fuunction" 4 times
* "functioin" 3 times
* "fonction" 3 times
* "funstion" twice
* "fnuction" once
* "functiom" once
* "functio" once
* "funcition" once
Revision 22885
Modified Fri Mar 28 13:57:25 2008 UTC (6 years, 9 months ago) by rdm
File length: 94476 byte(s)
Diff to previous 22419
move the following directories under the new "math" meta directory:
mathcore
mathmore
fftw
foam
fumili
genvector
matrix
minuit
minuit2
mlp
physics
smatrix
splot
unuran
quadp
Revision 22419
Modified Mon Mar 3 00:25:01 2008 UTC (6 years, 10 months ago) by rdm
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 94476 byte(s)
Diff to previous 20882
From Andrew Savchenko:
ROOT cannot be compiled with gcc-4.3.
Some ROOT source files do not contain the required #include directives:
for example, they use strlen() but #include <string.h> is missing, or
malloc() is used and #include <stdlib.h> is missing.
Earlier versions of gcc allowed some headers to be included implicitly,
but issued a warning (-Wimplicit-function-declaration). The newer
gcc-4.3 rejects such behaviour: all required headers must be explicitly
included.
The attached patch fixes this. It also fixes another issue that prevents
ROOT from compiling under gcc-4.3: C functions do not belong to namespace std,
so expressions like std::memcpy() are no longer valid and plain memcpy()
should be used instead.
Revision 19561
Modified Tue Aug 7 07:48:44 2007 UTC (7 years, 5 months ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 94909 byte(s)
Diff to previous 18705
From Christophe Delaere:
When I moved BFGS matrices into the BFGS specific code, I overlooked the fact
that matrices are not to be reinitialized at each step of the loop.
The consequence is that BFGS training is far from optimal from release
v5-15-08, but other methods are not affected. When using MLP with ROOT
v5-15-08 -> v5-16-00, I recommend using the
TMultiLayerPerceptron::kFletcherReeves learning method.
Revision 17250
Modified Tue Jan 2 14:05:48 2007 UTC (8 years ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 96616 byte(s)
Diff to previous 16966
From Christophe Delaere:
Fix a problem reported by Herbert Greenlee
The fortran-generated code was indeed
protected, but not the C++/python versions.
Herbert Greenlee said:
"Neural network code generated by TMultiLayerPerceptron contains code
similar to the following in "neuron" methods:
return ((1/(1+exp(-input)))*1)+0;
This code is vulnerable to floating point overflows for moderately large
negative values of "input." The mathematical function itself is well
defined for all values of input. This is a computer arithmetic problem.
A better computer science implementation of this function is needed that
avoids floating point overflows in the intermediate results."
Revision 14745
Modified Wed Apr 19 08:22:26 2006 UTC (8 years, 9 months ago) by rdm
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 95941 byte(s)
Diff to previous 14363
Change the TError.h macros:
Assert -> R__ASSERT
Check -> R__CHECK
Change the TCollection.h macro:
ForEach -> R__FOR_EACH
This is to avoid potential problems due to the too-trivial macro names.
The old macros will be removed in the next release. Currently
they will print out warning messages with the advice to move
to the new macro names.
Revision 13823
Modified Fri Jan 13 09:10:12 2006 UTC (9 years ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 96258 byte(s)
Diff to previous 13804
From Christophe Delaere:
-When the option "current" is specified, TMultiLayerPerceptron::Draw
will show the learning curves in the pad. This makes it possible to run
in batch and produce an eps file.
-Small protection when normalizing data (if RMS=0).
-Fix a missing delete in the LineSearch function
Revision 13804
Modified Mon Jan 9 15:47:30 2006 UTC (9 years ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 95955 byte(s)
Diff to previous 12622
From Christophe Delaere and Andrea Bocci:
Andrea has extended ROOT's TMultiLayerPerceptron a bit to
optionally use cross-entropy errors, which makes it possible to train a
network for pattern classification based on Bayesian posterior probability.
Reference: [Bishop 1995 , Neural Networks for Pattern Recognition], in
particular chapter 6.
In order to achieve this, I had to add the softmax (generalized
sigmoid) neuron function, which in turn required a bit of changes to
the neuron itself.
Also, I added softmax and sigmoid as possible output neurons, requiring
some changes to how error back propagation is performed.
Currently, softmax neurons are used only in the output layer, but
everything is set up so that they should be OK as hidden units, too,
provided they form a whole layer.
Revision 12329
Modified Mon Jul 18 12:02:02 2005 UTC (9 years, 6 months ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 88950 byte(s)
Diff to previous 11101
From Christophe Delaere:
- input normalization is now optional. An "@" must be added at the beginning
of the input description to enforce normalization.
- the input/output normalization is now saved/loaded with the weight
(DumpWeights() / LoadWeights() )
- the input/output normalization is taken into account when a function is
exported (C++, FORTRAN, PYTHON)
- The neuron transfer function can now be chosen, either as a predefined
function (sigmoid (default), tanh, gauss, linear) or as an external function
(TFormula).
- arrays can now be used as input. If no index is specified, a neuron will be
created for each element in the array. Only fixed-size arrays are handled
this way.
- TChains can now be used without crash.
- bugfix in TMultiLayerPerceptron::DrawResult() (thanks to Axel): the training
sample was always used, ignoring the option field.
Revision 11024
Modified Thu Feb 3 07:29:32 2005 UTC (9 years, 11 months ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 81403 byte(s)
Diff to previous 10831
From Axel Naumann & Christophe Delaere:
This patch fixes a bug in DrawNetwork, where the hists' upper edge was smaller than the lower edge (this was causing the corrupted histos in the mlpHiggs tutorial). I updated the new regression methods (some null pointer checks, better labels) and their doc. I added the following comment to the doc of TMultiLayerPerceptron: "(One should still try to pass normalized inputs, e.g. between [0.,1])", and added labels for the output nodes in Draw.
Revision 10831
Modified Fri Dec 17 22:34:01 2004 UTC (10 years, 1 month ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 80903 byte(s)
Diff to previous 10822
From Axel Naumann:
As you pointed out, MSVC found a real problem. Your quick correction
still left the underlying logic problem. Fixed now, see attachment. I
also renamed the vars.
On the mlpHiggs result: the result is (within the randomization
variations) unchanged - I did not touch any of the internals of the MLP
algorithm. Only DrawNetwork didn't manage to display the stack of
histos. This looks like a problem with THStack, its side effect shows up
in mlpHiggs. I replaced the THStack->Draw by sigh->Draw(),
bgh->Draw("same") for now, leaving a reminder comment that this needs to
be fixed. I'll look into that later.
My new methods don't work on the mlpHiggs example (they do work on my
private use case, though). There is a problem creating the TProfile
histos for the mlpHiggs tutorial which I didn't find after chasing it
down for hours. I need more time for that.
I forgot one deletion; now the legends are only created if the options
don't contain "goff". The profiles are now filled with O-T:T, not
(O-T)/T:T (O: output value, T: truth value), to avoid div by 0. One
THStack had an invalid Form()'ed title.
TMLP::Train now tells how many test and train events are used.
Revision 10822
Modified Thu Dec 16 21:20:47 2004 UTC (10 years, 1 month ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 80804 byte(s)
Diff to previous 10287
From Axel Naumann
I've added four utility methods to TMLPAnalyzer. ANNs used to represent
an unknown function (i.e. not for classification) had no appropriate
tools for testing the output quality.
For the sake of clarity I made the TNeurons derive from TNamed; they now
get names assigned. First, last layer: their TTreeFormula, hidden layer:
Form("HiddenL%d:N%d",layer,i). This allows quick access to the nodes'
names when drawing their input / output for e.g. axis labels.
There was a small bug in the "vary the inputs by a bit, look what
happens to the output" algorithm. The input wasn't reset to its original
value before the next node's input was modified, creating "cross talk".
The loops are also a bit more efficient now.
The status graph is now only updated once per round (works for me, even
with the zoomed axis).
The example in the class descr is fixed (no type specifiers "/F").
The c'tor doc for const char* test/train cuts is fixed.
TMLP::DrawResult now has the option "nocanv" which doesn't create a new
canvas (it's this way around for backwards compatibility).
tutorials/mlpHiggs now has the proper orthogonal train and test cut.
TMLPAnalyzer now creates a TTree containing the input, true output and
real output - good for quick plots of any possible dependencies one can
think of. This is used to plot the relative difference of (output,true)
vs true (by DrawTruthDeviation) and (output,true) vs input (by
DrawTruthDeviationInOut). The former shows how dependent the error is on
the output value, the latter how much it depends on the input. These
histos are plotted (and returned) as TProfiles, showing the mean
deviation (and the std dev of the deviation) vs true or input value.
Revision 7219
Modified Fri Sep 5 10:40:01 2003 UTC (11 years, 4 months ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 56809 byte(s)
Diff to previous 7204
Update to the Neural Net package by Christophe Delaere
-Problem in writing TNeuron solved.
-Problem reading a TChain solved.
-Empty canvas appearing when normalizing a neuron does not appear anymore.
-Option to normalize output added.
-A few problems in the documentation corrected.
Revision 7159
Added Wed Aug 27 15:31:14 2003 UTC (11 years, 5 months ago) by brun
Original Path: trunk/mlp/src/TMultiLayerPerceptron.cxx
File length: 57694 byte(s)
New package "mlp" (MultiLayerPerceptron) by Christophe Delaere.
The package has 3 classes: TMultiLayerPerceptron, TNeuron, TSynapse.
// TMultiLayerPerceptron
//
// This class describes a neural network.
// There are facilities to train the network and use the output.
//
// The input layer is made of inactive neurons (returning the
// normalized input), hidden layers are made of sigmoids and output
// neurons are linear.
//
// The basic input is a TTree and two (training and test) TEventLists.
// For classification jobs, a branch (maybe in a TFriend) must contain
// the expected output.
// 6 learning methods are available: kStochastic, kBatch,
// kSteepestDescent, kRibierePolak, kFletcherReeves and kBFGS.
//
// This implementation is *inspired* by the mlpfit package from
// J.Schwindling et al.