ROOT  6.07/01
Reference Guide
TUnfold.cxx
1 // Author: Stefan Schmitt
2 // DESY, 13/10/08
3 
4 // Version 17.1, bug fixes in GetFoldedOutput, GetOutput
5 //
6 // History:
7 // Version 17.0, option to specify an error matrix with SetInput(), new ScanRho() method
8 // Version 16.2, in parallel to bug-fix in TUnfoldSys
9 // Version 16.1, fix bug with error matrix in case kEConstraintArea is used
10 // Version 16.0, fix calculation of global correlations, improved error messages
11 // Version 15, simplified L-curve scan, new tau definition, new error calc., area preservation
12 // Version 14, with changes in TUnfoldSys.cxx
13 // Version 13, new methods for derived classes and small bug fix
14 // Version 12, report singular matrices
15 // Version 11, reduce the amount of printout
16 // Version 10, more correct definition of the L curve, update references
17 // Version 9, faster matrix inversion and skip edge points for L-curve scan
18 // Version 8, replace all TMatrixSparse matrix operations by private code
19 // Version 7, fix problem with TMatrixDSparse,TMatrixD multiplication
20 // Version 6, replace class XY by std::pair
21 // Version 5, replace run-time dynamic arrays by new and delete[]
22 // Version 4, fix new bug from V3 with initial regularisation condition
23 // Version 3, fix bug with initial regularisation condition
24 // Version 2, with improved ScanLcurve() algorithm
25 // Version 1, added ScanLcurve() method
26 // Version 0, stable version of basic unfolding algorithm
27 
28 /** \class TUnfold
29  \ingroup Hist
30  TUnfold is used to decompose a measurement y into several sources x
31  given the measurement uncertainties and a matrix of migrations A
32 
33  **For most applications, it is better to use TUnfoldDensity
34  instead of using TUnfoldSys or TUnfold**
35 
36  If you use this software, please consider the following citation
37  S.Schmitt, JINST 7 (2012) T10003 [arXiv:1205.6201]
38 
39  More documentation and updates are available on
40  http://www.desy.de/~sschmitt
41 
42  A short summary of the algorithm is given in the following:
43  the "best" x matching the measurement y within errors
44  is determined by minimizing the function
45  L1+L2+L3
46 
47  where
48  L1 = (y-Ax)# Vyy^-1 (y-Ax)
49  L2 = tau^2 (L(x-x0))# L(x-x0)
50  L3 = lambda sum_i(y_i -(Ax)_i)
51 
52  [the notation # means that the matrix is transposed ]
53 
54  The term L1 is familiar from a least-squares minimisation.
55  The term L2 defines the regularisation (smoothness condition on x),
56  where the parameter tau^2 gives the strength of the regularisation.
57  The term L3 is an optional area constraint with Lagrangian parameter
58  lambda, ensuring that the normalisation of x is consistent with the
59  normalisation of y.
60 
61  The method can be applied to a very large number of problems,
62  where the measured distribution y is a linear superposition
63  of several Monte Carlo shapes
64 
65  ## Input from measurement:
66  y: vector of measured quantities (dimension ny)
67  Vyy: covariance matrix for y (dimension ny x ny)
68  in many cases Vyy is diagonal and calculated from the errors of y
69 
70  ## From simulation:
71  A: migration matrix (dimension ny x nx)
72 
73  ## Result
74  x: unknown underlying distribution (dimension nx)
75  The error matrix of x, V_xx, is also determined
76 
77  ## Regularisation
78  tau: parameter, defining the regularisation strength
79  L: matrix of regularisation conditions (dimension nl x nx)
80  depends on the structure of the input data
81  x0: bias distribution, from simulation
82 
83  ## Preservation of the area
84  lambda: Lagrangian multiplier
85  y_i: one component of the vector y
86  (Ax)_i: one component of the vector Ax
87 
88 
89  Determination of the unfolding result x:
90  - (a) not constrained: minimisation is performed as a function of x
91  for fixed lambda=0
92  - (b) constrained: stationary point is found as a function of x and lambda
93 
94  The constraint can be useful to reduce biases on the result x
95  in cases where the vector y follows non-Gaussian probability densities
96  (example: Poisson statistics at counting experiments in particle physics)
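 
 The constraint mode can also be changed after construction. A minimal
 sketch, where "unfold" stands for an existing TUnfold object (the name is
 illustrative only):
 
     unfold.SetConstraint(TUnfold::kEConstraintArea);   // enable the area constraint
     // unfold.SetConstraint(TUnfold::kEConstraintNone); // or switch it off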
97 
98  Some random examples:
99  1. measure a cross-section as a function of, say, E_T(detector)
100  and unfold it to obtain the underlying distribution E_T(generator)
101  2. measure a lifetime distribution and unfold the contributions from
102  different flavours
103  3. measure the transverse mass and decay angle
104  and unfold for the true mass distribution plus background
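 
 As an illustration, a minimal calling sequence is sketched below. The
 histogram names (histMC: TH2 migration matrix, histData: TH1 measurement,
 histResult: pre-booked TH1 for the result) and the value of tau are
 placeholders, not part of this file:
 
     TUnfold unfold(histMC,TUnfold::kHistMapOutputHoriz,
                    TUnfold::kRegModeCurvature,TUnfold::kEConstraintArea);
     unfold.SetInput(histData);     // define the measurement y
     unfold.DoUnfold(tau);          // unfold with a fixed regularisation strength
     unfold.GetOutput(histResult);  // fill the result histogram x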
105 
106  ## Documentation
107  Some technical documentation is available here:
108  http://www.desy.de/~sschmitt
109 
110  Note:
111 
112  For most applications it is better to use the derived class
113  TUnfoldSys or even better TUnfoldDensity
114 
115  TUnfoldSys extends the functionality of TUnfold
116  such that systematic errors are propagated to the result
117  and that the unfolding can be done with proper background
118  subtraction
119 
120  TUnfoldDensity further extends the functionality of TUnfoldSys:
121  complex binning schemes are supported and
122  the binning of input histograms is handled consistently:
123  (1) the regularisation may be done by density,
124  i.e. respecting the bin widths
125  (2) methods are provided which preserve the proper binning
126  of the result histograms
127  ## Implementation
128  The result of the unfolding is calculated as follows:
129 
130  Lsquared = L#L regularisation conditions squared
131 
132  epsilon_j = sum_i A_ij vector of efficiencies
133 
134  E^-1 = ((A# Vyy^-1 A)+tau^2 Lsquared)
135 
136  x = E (A# Vyy^-1 y + tau^2 Lsquared x0 +lambda/2 * epsilon) is the result
137 
138  The derivatives
139  dx_k/dy_i
140  dx_k/dA_ij
141  dx_k/d(tau^2)
142  are calculated for further usage.
143 
144  The covariance matrix V_xx is calculated as:
145  Vxx_ij = sum_kl dx_i/dy_k Vyy_kl dx_j/dy_l
146 
147  ## Warning:
148  The algorithm is based on "standard" matrix inversion, with the
149  known limitations in numerical accuracy and computing cost for
150  matrices with large dimensions.
151 
152  Thus the algorithm should not be used for large dimensions of x and y:
153  nx should not be much larger than 200,
154  ny should not be much larger than 1000.
155 
156 ## Proper choice of tau
157  One of the difficult questions is the choice of tau.
158  The method implemented in TUnfold is the L-curve method:
159  a two-dimensional curve is plotted
160  x-axis: log10(chisquare)
161  y-axis: log10(regularisation condition)
162  In many cases this curve has an L shape. The best choice of tau is at the
163  kink of the L.
164 
165  Within TUnfold a simple version of the L-curve analysis is available.
166  It tests a given number of points in a predefined tau-range and searches
167  for the maximum of the curvature in the L-curve (kink position).
168  If no tau range is given, the range of the scan is determined automatically.
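 
 A sketch of such a scan, reusing the "unfold" object from the example
 above (the number of scan points is an arbitrary choice):
 
     TGraph *lCurve=0;
     TSpline *logTauX=0,*logTauY=0;
     // with tauMin=tauMax=0. the tau range is chosen automatically
     Int_t iBest=unfold.ScanLcurve(30,0.,0.,&lCurve,&logTauX,&logTauY);
     // iBest is the index of the scan point closest to the kink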
169 
170  A nice overview of the L-curve method is given in:
171  The L-curve and Its Use in the Numerical Treatment of Inverse Problems
172  (2000) by P. C. Hansen, in Computational Inverse Problems in
173  Electrocardiology, ed. P. Johnston,
174  Advances in Computational Bioengineering
175  http://www.imm.dtu.dk/~pch/TR/Lcurve.ps
176 
177  ## Alternative Regularisation conditions
178  Regularisation is needed for most unfolding problems, in order to avoid
179  large oscillations and large correlations on the output bins.
180  It means that some extra conditions are applied on the output bins
181 
182  Within TUnfold these conditions are posed on the difference (x-x0), where
183  x: unfolding output
184  x0: the bias distribution, by default calculated from
185  the input matrix A. There is a method SetBias() to change the
186  bias distribution.
187  The 3rd argument to DoUnfold() is a scale factor applied to the bias:
188  bias_default[j] = sum_i A[i][j]
189  x0[j] = scaleBias*bias[j]
190  The scale factor can be used to
191  (a) completely suppress the bias by setting it to zero
192  (b) compensate differences in the normalisation between data
193  and Monte Carlo
194 
195  If the regularisation is strong, i.e. large parameter tau,
196  then the distribution x or its derivatives will look like the bias
197  distribution. If the parameter tau is small, the distribution x is
198  independent of the bias.
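 
 A short sketch of how the bias enters the calling sequence, again with
 placeholder names (histBias, scaleBias):
 
     unfold.SetBias(histBias);                 // optional: user-defined bias distribution
     unfold.DoUnfold(tau,histData,scaleBias);  // scaleBias=0.0 suppresses the bias term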
199 
200  Three basic types of regularisation are implemented in TUnfold
201 
202  condition | regularisation
203  -----------------|------------------------------------
204  kRegModeNone | none
205  kRegModeSize | minimize the size of (x-x0)
206  kRegModeDerivative | minimize the 1st derivative of (x-x0)
207  kRegModeCurvature | minimize the 2nd derivative of (x-x0)
208 
209  kRegModeSize is the regularisation scheme which often is found in
210  literature. The second derivative is often named curvature.
211  Sometimes the bias is not discussed, equivalent to a bias scale factor
212  of zero.
213 
214  The regularisation schemes kRegModeDerivative and
215  kRegModeCurvature have the nice feature that they create correlations
216  between x-bins, whereas the non-regularized unfolding tends to create
217  negative correlations between bins. For these regularisation schemes the
218  parameter tau could be tuned such that the correlations are smallest,
219  as an alternative to the L-curve method.
220 
221  If kRegModeSize is chosen or if x is a smooth function through all bins,
222  the regularisation condition can be set on all bins together by giving
223  the appropriate argument in the constructor (see examples above).
224 
225  If x is composed of independent groups of bins (for example,
226  signal and background binning in two variables), it may be necessary to
227  set regularisation conditions for the individual groups of bins.
228  In this case, give kRegModeNone in the constructor and specify
229  the bin grouping with calls to
230  RegularizeBins() specify a 1-dimensional group of bins
231  RegularizeBins2D() specify a 2-dimensional group of bins
232 
233  Note, the class TUnfoldDensity provides an automatic setup of complex
234  regularisation schemes
235 
236  For ultimate flexibility, the regularisation condition can be set on each
237  bin individually
238  -> give kRegModeNone in the constructor and use
239  RegularizeSize() regularize one bin
240  RegularizeDerivative() regularize the slope given by two bins
241  RegularizeCurvature() regularize the curvature given by three bins
242  AddRegularisationCondition()
243  define an arbitrary regularisation condition (a setup sketch follows below)
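 
 A sketch of such an explicit setup; the bin ranges are placeholders only:
 
     TUnfold unfold(histMC,TUnfold::kHistMapOutputHoriz,TUnfold::kRegModeNone);
     // regularise the curvature of bins 1..10 and 11..20 separately
     unfold.RegularizeBins( 1,1,10,TUnfold::kRegModeCurvature);
     unfold.RegularizeBins(11,1,10,TUnfold::kRegModeCurvature);
     // or regularise individual bins / pairs of bins:
     // unfold.RegularizeSize(3);
     // unfold.RegularizeDerivative(3,4);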
244 */
245 
246 /*
247  This file is part of TUnfold.
248 
249  TUnfold is free software: you can redistribute it and/or modify
250  it under the terms of the GNU General Public License as published by
251  the Free Software Foundation, either version 3 of the License, or
252  (at your option) any later version.
253 
254  TUnfold is distributed in the hope that it will be useful,
255  but WITHOUT ANY WARRANTY; without even the implied warranty of
256  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
257  GNU General Public License for more details.
258 
259  You should have received a copy of the GNU General Public License
260  along with TUnfold. If not, see <http://www.gnu.org/licenses/>.
261  */
262 
263 #include <iostream>
264 #include <TMatrixD.h>
265 #include <TMatrixDSparse.h>
266 #include <TMatrixDSym.h>
267 #include <TMatrixDSymEigen.h>
268 #include <TMath.h>
269 #include "TUnfold.h"
270 
271 #include <map>
272 #include <vector>
273 
274 // this option saves the spline of the L curve curvature to a file
275 // named splinec.ps for debugging
276 
277 //#define DEBUG
278 //#define DEBUG_DETAIL
279 //#define DEBUG_LCURVE
280 //#define FORCE_EIGENVALUE_DECOMPOSITION
281 
282 #ifdef DEBUG_LCURVE
283 #include <TCanvas.h>
284 #endif
285 
286 ClassImp(TUnfold)
287 //______________________________________________________________________________
288 
289 const char *TUnfold::GetTUnfoldVersion(void)
290 {
291  return TUnfold_VERSION;
292 }
293 
294 
295 void TUnfold::InitTUnfold(void)
296 {
297  // reset all data members
298  fXToHist.Set(0);
299  fHistToX.Set(0);
300  fSumOverY.Set(0);
301  fA = 0;
302  fL = 0;
303  fVyy = 0;
304  fY = 0;
305  fX0 = 0;
306  fTauSquared = 0.0;
307  fBiasScale = 0.0;
308  fNdf = 0;
311  // output
312  fVyyInv = 0;
313  fVxx = 0;
314  fX = 0;
315  fAx = 0;
316  fChi2A = 0.0;
317  fLXsquared = 0.0;
318  fRhoMax = 999.0;
319  fRhoAvg = -1.0;
320  fDXDAM[0] = 0;
321  fDXDAZ[0] = 0;
322  fDXDAM[1] = 0;
323  fDXDAZ[1] = 0;
324  fDXDtauSquared = 0;
325  fDXDY = 0;
326  fEinv = 0;
327  fE = 0;
328  fVxxInv = 0;
329  fEpsMatrix=1.E-13;
330  fIgnoredBins=0;
331 }
332 
333 void TUnfold::DeleteMatrix(TMatrixD **m)
334 {
335  if(*m) delete *m;
336  *m=0;
337 }
338 
339 void TUnfold::DeleteMatrix(TMatrixDSparse **m)
340 {
341  if(*m) delete *m;
342  *m=0;
343 }
344 
345 void TUnfold::ClearResults(void)
346 {
347  // delete old results (if any)
348  // this function is virtual, so derived classes may implement their own
349  // method to flag results as non-valid
350 
351  // note: the inverse of the input covariance is not cleared
352  // because it does not change until the input is changed
353 
354  DeleteMatrix(&fVxx);
355  DeleteMatrix(&fX);
356  DeleteMatrix(&fAx);
357  for(Int_t i=0;i<2;i++) {
358  DeleteMatrix(fDXDAM+i);
359  DeleteMatrix(fDXDAZ+i);
360  }
364  DeleteMatrix(&fE);
366  fChi2A = 0.0;
367  fLXsquared = 0.0;
368  fRhoMax = 999.0;
369  fRhoAvg = -1.0;
370 }
371 
372 TUnfold::TUnfold(void)
373 {
374  // set all matrix pointers to zero
375  InitTUnfold();
376 }
377 
378 Double_t TUnfold::DoUnfold(void)
379 {
380  // main unfolding algorithm. Declared virtual, because other algorithms
381  // could be implemented
382  //
383  // Purpose: unfold y -> x
384  // Data members required:
385  // fA: matrix to relate x and y
386  // fY: measured data points
387  // fX0: bias on x
388  // fBiasScale: scale factor for fX0
389  // fVyy: covariance matrix for y
390  // fL: regularisation conditions
391  // fTauSquared: regularisation strength
392  // fConstraint: whether the constraint is applied
393  // Data members modified:
394  // fVyyInv: inverse of input data covariance matrix
395  // fNdf: number of degrees of freedom
396  // fEinv: inverse of the matrix needed for unfolding calculations
397  // fE: the matrix needed for unfolding calculations
398  // fX: unfolded data points
399  // fDXDY: derivative of x wrt y (for error propagation)
400  // fVxx: error matrix (covariance matrix) on x
401  // fAx: estimate of distribution y from unfolded data
402  // fChi2A: contribution to chi**2 from y-Ax
403  // fLXsquared: contribution to chi**2 from L*(x-x0)
404  // fDXDtauSquared: derivative of x wrt tau
405  // fDXDAM[0,1]: matrix parts of derivative x wrt A
406  // fDXDAZ[0,1]: vector parts of derivative x wrt A
407  // fRhoMax: maximum global correlation coefficient
408  // fRhoAvg: average global correlation coefficient
409  // return code:
410  // fRhoMax if(fRhoMax>=1.0) then the unfolding has failed!
411 
412  ClearResults();
413 
414  // get pseudo-inverse matrix Vyyinv and NDF
415  if(!fVyyInv) {
418  fNdf--;
419  }
420  }
421  //
422  // get matrix
423  // T
424  // fA fV = mAt_V
425  //
427  //
428  // get
429  // T
430  // fA fVyyinv fY + fTauSquared fBiasScale Lsquared fX0 = rhs
431  //
432  TMatrixDSparse *rhs=MultiplyMSparseM(AtVyyinv,fY);
434  if (fBiasScale != 0.0) {
435  TMatrixDSparse *rhs2=MultiplyMSparseM(lSquared,fX0);
436  AddMSparse(rhs, fTauSquared * fBiasScale ,rhs2);
437  DeleteMatrix(&rhs2);
438  }
439 
440  //
441  // get matrix
442  // T
443  // (fA fV)fA + fTauSquared*fLsquared = fEinv
444  fEinv=MultiplyMSparseMSparse(AtVyyinv,fA);
445  AddMSparse(fEinv,fTauSquared,lSquared);
446 
447  //
448  // get matrix
449  // -1
450  // fEinv = fE
451  //
452  Int_t rank=0;
453  fE = InvertMSparseSymmPos(fEinv,&rank);
454  if(rank != GetNx()) {
455  Warning("DoUnfold","rank of matrix E %d expect %d",rank,GetNx());
456  }
457 
458  //
459  // get result
460  // fE rhs = x
461  //
463  fX = new TMatrixD(*xSparse);
464  DeleteMatrix(&rhs);
465  DeleteMatrix(&xSparse);
466 
467  // additional correction for constraint
468  Double_t lambda_half=0.0;
469  Double_t one_over_epsEeps=0.0;
471  TMatrixDSparse *Eepsilon=0;
473  // calculate epsilon: vector of efficiencies
474  const Int_t *A_rows=fA->GetRowIndexArray();
475  const Int_t *A_cols=fA->GetColIndexArray();
476  const Double_t *A_data=fA->GetMatrixArray();
477  TMatrixD epsilonNosparse(fA->GetNcols(),1);
478  for(Int_t i=0;i<A_rows[fA->GetNrows()];i++) {
479  epsilonNosparse(A_cols[i],0) += A_data[i];
480  }
481  epsilon=new TMatrixDSparse(epsilonNosparse);
482  // calculate vector EE*epsilon
483  Eepsilon=MultiplyMSparseMSparse(fE,epsilon);
484  // calculate scalar product epsilon#*Eepsilon
485  TMatrixDSparse *epsilonEepsilon=MultiplyMSparseTranspMSparse(epsilon,
486  Eepsilon);
487  // if epsilonEepsilon is zero, nothing works...
488  if(epsilonEepsilon->GetRowIndexArray()[1]==1) {
489  one_over_epsEeps=1./epsilonEepsilon->GetMatrixArray()[0];
490  } else {
491  Fatal("Unfold","epsilon#Eepsilon has dimension %d != 1",
492  epsilonEepsilon->GetRowIndexArray()[1]);
493  }
494  DeleteMatrix(&epsilonEepsilon);
495  // calculate sum(Y)
496  Double_t y_minus_epsx=0.0;
497  for(Int_t iy=0;iy<fY->GetNrows();iy++) {
498  y_minus_epsx += (*fY)(iy,0);
499  }
500  // calculate sum(Y)-epsilon#*X
501  for(Int_t ix=0;ix<epsilonNosparse.GetNrows();ix++) {
502  y_minus_epsx -= epsilonNosparse(ix,0) * (*fX)(ix,0);
503  }
504  // calculate lambda_half
505  lambda_half=y_minus_epsx*one_over_epsEeps;
506  // calculate final vector X
507  const Int_t *EEpsilon_rows=Eepsilon->GetRowIndexArray();
508  const Double_t *EEpsilon_data=Eepsilon->GetMatrixArray();
509  for(Int_t ix=0;ix<Eepsilon->GetNrows();ix++) {
510  if(EEpsilon_rows[ix]<EEpsilon_rows[ix+1]) {
511  (*fX)(ix,0) += lambda_half * EEpsilon_data[EEpsilon_rows[ix]];
512  }
513  }
514  }
515  //
516  // get derivative dx/dy
517  // for error propagation
518  // dx/dy = E A# Vyy^-1 ( = B )
519  fDXDY = MultiplyMSparseMSparse(fE,AtVyyinv);
520 
521  // additional correction for constraint
523  // transposed vector of dimension GetNy() all elements equal 1/epseEeps
524  Int_t *rows=new Int_t[GetNy()];
525  Int_t *cols=new Int_t[GetNy()];
526  Double_t *data=new Double_t[GetNy()];
527  for(Int_t i=0;i<GetNy();i++) {
528  rows[i]=0;
529  cols[i]=i;
530  data[i]=one_over_epsEeps;
531  }
533  (1,GetNy(),GetNy(),rows,cols,data);
534  delete[] data;
535  delete[] rows;
536  delete[] cols;
537  // B# * epsilon
539  // temp- one_over_epsEeps*Bepsilon
540  AddMSparse(temp, -one_over_epsEeps, epsilonB);
541  DeleteMatrix(&epsilonB);
542  // correction matrix
543  TMatrixDSparse *corr=MultiplyMSparseMSparse(Eepsilon,temp);
544  DeleteMatrix(&temp);
545  // determine new derivative
546  AddMSparse(fDXDY,1.0,corr);
547  DeleteMatrix(&corr);
548  }
549 
550  DeleteMatrix(&AtVyyinv);
551 
552  //
553  // get error matrix on x
554  // fDXDY * Vyy * fDXDY#
557 
558  DeleteMatrix(&fDXDYVyy);
559 
560  //
561  // get result
562  // fA x = fAx
563  //
565 
566  //
567  // calculate chi**2 etc
568 
569  // chi**2 contribution from (y-Ax)V(y-Ax)
571  TMatrixDSparse *VyyinvDy=MultiplyMSparseM(fVyyInv,&dy);
572 
573  const Int_t *VyyinvDy_rows=VyyinvDy->GetRowIndexArray();
574  const Double_t *VyyinvDy_data=VyyinvDy->GetMatrixArray();
575  fChi2A=0.0;
576  for(Int_t iy=0;iy<VyyinvDy->GetNrows();iy++) {
577  if(VyyinvDy_rows[iy]<VyyinvDy_rows[iy+1]) {
578  fChi2A += VyyinvDy_data[VyyinvDy_rows[iy]]*dy(iy,0);
579  }
580  }
581  TMatrixD dx( fBiasScale * (*fX0), TMatrixD::kMinus,(*fX));
582  TMatrixDSparse *LsquaredDx=MultiplyMSparseM(lSquared,&dx);
583  const Int_t *LsquaredDx_rows=LsquaredDx->GetRowIndexArray();
584  const Double_t *LsquaredDx_data=LsquaredDx->GetMatrixArray();
585  fLXsquared = 0.0;
586  for(Int_t ix=0;ix<LsquaredDx->GetNrows();ix++) {
587  if(LsquaredDx_rows[ix]<LsquaredDx_rows[ix+1]) {
588  fLXsquared += LsquaredDx_data[LsquaredDx_rows[ix]]*dx(ix,0);
589  }
590  }
591 
592  //
593  // get derivative dx/dtau
595 
598  Double_t f=0.0;
599  if(temp->GetRowIndexArray()[1]==1) {
600  f=temp->GetMatrixArray()[0]*one_over_epsEeps;
601  } else if(temp->GetRowIndexArray()[1]>1) {
602  Fatal("Unfold",
603  "epsilon#fDXDtauSquared has dimension %d != 1",
604  temp->GetRowIndexArray()[1]);
605  }
606  if(f!=0.0) {
607  AddMSparse(fDXDtauSquared, -f,Eepsilon);
608  }
609  DeleteMatrix(&temp);
610  }
611  DeleteMatrix(&epsilon);
612 
613  DeleteMatrix(&LsquaredDx);
614  DeleteMatrix(&lSquared);
615 
616  // calculate/store matrices defining the derivatives dx/dA
617  fDXDAM[0]=new TMatrixDSparse(*fE);
618  fDXDAM[1]=new TMatrixDSparse(*fDXDY); // create a copy
619  fDXDAZ[0]=VyyinvDy; // instead of deleting VyyinvDy
620  VyyinvDy=0;
621  fDXDAZ[1]=new TMatrixDSparse(*fX); // create a copy
622 
624  // add correction to fDXDAM[0]
626  (Eepsilon,Eepsilon,0);
627  AddMSparse(fDXDAM[0], -one_over_epsEeps,temp1);
628  DeleteMatrix(&temp1);
629  // add correction to fDXDAZ[0]
630  Int_t *rows=new Int_t[GetNy()];
631  Int_t *cols=new Int_t[GetNy()];
632  Double_t *data=new Double_t[GetNy()];
633  for(Int_t i=0;i<GetNy();i++) {
634  rows[i]=i;
635  cols[i]=0;
636  data[i]=lambda_half;
637  }
639  (GetNy(),1,GetNy(),rows,cols,data);
640  delete[] data;
641  delete[] rows;
642  delete[] cols;
643  AddMSparse(fDXDAZ[0],1.0,temp2);
644  DeleteMatrix(&temp2);
645  }
646 
647  DeleteMatrix(&Eepsilon);
648 
649  rank=0;
651  if(rank != GetNx()) {
652  Warning("DoUnfold","rank of output covariance is %d expect %d",
653  rank,GetNx());
654  }
655 
656  TVectorD VxxInvDiag(fVxxInv->GetNrows());
657  const Int_t *VxxInv_rows=fVxxInv->GetRowIndexArray();
658  const Int_t *VxxInv_cols=fVxxInv->GetColIndexArray();
659  const Double_t *VxxInv_data=fVxxInv->GetMatrixArray();
660  for (int ix = 0; ix < fVxxInv->GetNrows(); ix++) {
661  for(int ik=VxxInv_rows[ix];ik<VxxInv_rows[ix+1];ik++) {
662  if(ix==VxxInv_cols[ik]) {
663  VxxInvDiag(ix)=VxxInv_data[ik];
664  }
665  }
666  }
667 
668  // maximum global correlation coefficient
669  const Int_t *Vxx_rows=fVxx->GetRowIndexArray();
670  const Int_t *Vxx_cols=fVxx->GetColIndexArray();
671  const Double_t *Vxx_data=fVxx->GetMatrixArray();
672 
673  Double_t rho_squared_max = 0.0;
674  Double_t rho_sum = 0.0;
675  Int_t n_rho=0;
676  for (int ix = 0; ix < fVxx->GetNrows(); ix++) {
677  for(int ik=Vxx_rows[ix];ik<Vxx_rows[ix+1];ik++) {
678  if(ix==Vxx_cols[ik]) {
679  Double_t rho_squared =
680  1. - 1. / (VxxInvDiag(ix) * Vxx_data[ik]);
681  if (rho_squared > rho_squared_max)
682  rho_squared_max = rho_squared;
683  if(rho_squared>0.0) {
684  rho_sum += TMath::Sqrt(rho_squared);
685  n_rho++;
686  }
687  break;
688  }
689  }
690  }
691  fRhoMax = TMath::Sqrt(rho_squared_max);
692  fRhoAvg = (n_rho>0) ? (rho_sum/n_rho) : -1.0;
693 
694  return fRhoMax;
695 }
696 
697 TMatrixDSparse *TUnfold::CreateSparseMatrix
698 (Int_t nrow,Int_t ncol,Int_t nel,Int_t *row,Int_t *col,Double_t *data) const
699 {
700  // create a sparse matrix
701  // nrow,ncol : dimension of the matrix
702  // nel: number of non-zero elements
703  // row[nel],col[nel],data[nel] : indices and data of the non-zero elements
704  TMatrixDSparse *A=new TMatrixDSparse(nrow,ncol);
705  if(nel>0) {
706  A->SetMatrixArray(nel,row,col,data);
707  }
708  return A;
709 }
710 
711 TMatrixDSparse *TUnfold::MultiplyMSparseMSparse(const TMatrixDSparse *a,
712  const TMatrixDSparse *b) const
713 {
714  // calculate the product of two sparse matrices
715  // a,b: pointers to sparse matrices, where a->GetNcols()==b->GetNrows()
716  // this is a replacement for the call
717  // new TMatrixDSparse(*a,TMatrixDSparse::kMult,*b);
718  if(a->GetNcols()!=b->GetNrows()) {
719  Fatal("MultiplyMSparseMSparse",
720  "inconsistent matrix col/ matrix row %d !=%d",
721  a->GetNcols(),b->GetNrows());
722  }
723 
725  const Int_t *a_rows=a->GetRowIndexArray();
726  const Int_t *a_cols=a->GetColIndexArray();
727  const Double_t *a_data=a->GetMatrixArray();
728  const Int_t *b_rows=b->GetRowIndexArray();
729  const Int_t *b_cols=b->GetColIndexArray();
730  const Double_t *b_data=b->GetMatrixArray();
731  // maximum size of the output matrix
732  int nMax=0;
733  for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
734  if(a_rows[irow+1]>a_rows[irow]) nMax += b->GetNcols();
735  }
736  if((nMax>0)&&(a_cols)&&(b_cols)) {
737  Int_t *r_rows=new Int_t[nMax];
738  Int_t *r_cols=new Int_t[nMax];
739  Double_t *r_data=new Double_t[nMax];
740  Double_t *row_data=new Double_t[b->GetNcols()];
741  Int_t n=0;
742  for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
743  if(a_rows[irow+1]<=a_rows[irow]) continue;
744  // clear row data
745  for(Int_t icol=0;icol<b->GetNcols();icol++) {
746  row_data[icol]=0.0;
747  }
748  // loop over a-columns in this a-row
749  for(Int_t ia=a_rows[irow];ia<a_rows[irow+1];ia++) {
750  Int_t k=a_cols[ia];
751  // loop over b-columns in b-row k
752  for(Int_t ib=b_rows[k];ib<b_rows[k+1];ib++) {
753  row_data[b_cols[ib]] += a_data[ia]*b_data[ib];
754  }
755  }
756  // store nonzero elements
757  for(Int_t icol=0;icol<b->GetNcols();icol++) {
758  if(row_data[icol] != 0.0) {
759  r_rows[n]=irow;
760  r_cols[n]=icol;
761  r_data[n]=row_data[icol];
762  n++;
763  }
764  }
765  }
766  if(n>0) {
767  r->SetMatrixArray(n,r_rows,r_cols,r_data);
768  }
769  delete[] r_rows;
770  delete[] r_cols;
771  delete[] r_data;
772  delete[] row_data;
773  }
774 
775  return r;
776 }
777 
778 
779 TMatrixDSparse *TUnfold::MultiplyMSparseTranspMSparse
780 (const TMatrixDSparse *a,const TMatrixDSparse *b) const
781 {
782  // multiply a transposed Sparse matrix with another Sparse matrix
783  // a: pointer to sparse matrix (to be transposed)
784  // b: pointer to sparse matrix
785  // this is a replacement for the call
786  // new TMatrixDSparse(TMatrixDSparse(TMatrixDSparse::kTransposed,*a),
787  // TMatrixDSparse::kMult,*b)
788  if(a->GetNrows() != b->GetNrows()) {
789  Fatal("MultiplyMSparseTranspMSparse",
790  "inconsistent matrix row numbers %d!=%d",
791  a->GetNrows(),b->GetNrows());
792  }
793 
795  const Int_t *a_rows=a->GetRowIndexArray();
796  const Int_t *a_cols=a->GetColIndexArray();
797  const Double_t *a_data=a->GetMatrixArray();
798  const Int_t *b_rows=b->GetRowIndexArray();
799  const Int_t *b_cols=b->GetColIndexArray();
800  const Double_t *b_data=b->GetMatrixArray();
801  // maximum size of the output matrix
802 
803  // matrix multiplication
804  typedef std::map<Int_t,Double_t> MMatrixRow_t;
805  typedef std::map<Int_t, MMatrixRow_t > MMatrix_t;
806  MMatrix_t matrix;
807 
808  for(Int_t iRowAB=0;iRowAB<a->GetNrows();iRowAB++) {
809  for(Int_t ia=a_rows[iRowAB];ia<a_rows[iRowAB+1];ia++) {
810  for(Int_t ib=b_rows[iRowAB];ib<b_rows[iRowAB+1];ib++) {
811  // this creates a new row if necessary
812  MMatrixRow_t &row=matrix[a_cols[ia]];
813  MMatrixRow_t::iterator icol=row.find(b_cols[ib]);
814  if(icol!=row.end()) {
815  // update existing row
816  (*icol).second += a_data[ia]*b_data[ib];
817  } else {
818  // create new row
819  row[b_cols[ib]] = a_data[ia]*b_data[ib];
820  }
821  }
822  }
823  }
824 
825  Int_t n=0;
826  for(MMatrix_t::const_iterator irow=matrix.begin();
827  irow!=matrix.end();irow++) {
828  n += (*irow).second.size();
829  }
830  if(n>0) {
831  // pack matrix into arrays
832  Int_t *r_rows=new Int_t[n];
833  Int_t *r_cols=new Int_t[n];
834  Double_t *r_data=new Double_t[n];
835  n=0;
836  for(MMatrix_t::const_iterator irow=matrix.begin();
837  irow!=matrix.end();irow++) {
838  for(MMatrixRow_t::const_iterator icol=(*irow).second.begin();
839  icol!=(*irow).second.end();icol++) {
840  r_rows[n]=(*irow).first;
841  r_cols[n]=(*icol).first;
842  r_data[n]=(*icol).second;
843  n++;
844  }
845  }
846  // pack arrays into TMatrixDSparse
847  if(n>0) {
848  r->SetMatrixArray(n,r_rows,r_cols,r_data);
849  }
850  delete[] r_rows;
851  delete[] r_cols;
852  delete[] r_data;
853  }
854 
855  return r;
856 }
857 
858 TMatrixDSparse *TUnfold::MultiplyMSparseM(const TMatrixDSparse *a,
859  const TMatrixD *b) const
860 {
861  // multiply a Sparse matrix with a non-sparse matrix
862  // a: pointer to sparse matrix
863  // b: pointer to non-sparse matrix
864  // this is a replacement for the call
865  // new TMatrixDSparse(*a,TMatrixDSparse::kMult,*b);
866  if(a->GetNcols()!=b->GetNrows()) {
867  Fatal("MultiplyMSparseM","inconsistent matrix col /matrix row %d!=%d",
868  a->GetNcols(),b->GetNrows());
869  }
870 
872  const Int_t *a_rows=a->GetRowIndexArray();
873  const Int_t *a_cols=a->GetColIndexArray();
874  const Double_t *a_data=a->GetMatrixArray();
875  // maximum size of the output matrix
876  int nMax=0;
877  for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
878  if(a_rows[irow+1]-a_rows[irow]>0) nMax += b->GetNcols();
879  }
880  if(nMax>0) {
881  Int_t *r_rows=new Int_t[nMax];
882  Int_t *r_cols=new Int_t[nMax];
883  Double_t *r_data=new Double_t[nMax];
884 
885  Int_t n=0;
886  // fill matrix r
887  for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
888  if(a_rows[irow+1]-a_rows[irow]<=0) continue;
889  for(Int_t icol=0;icol<b->GetNcols();icol++) {
890  r_rows[n]=irow;
891  r_cols[n]=icol;
892  r_data[n]=0.0;
893  for(Int_t i=a_rows[irow];i<a_rows[irow+1];i++) {
894  Int_t j=a_cols[i];
895  r_data[n] += a_data[i]*(*b)(j,icol);
896  }
897  if(r_data[n]!=0.0) n++;
898  }
899  }
900  if(n>0) {
901  r->SetMatrixArray(n,r_rows,r_cols,r_data);
902  }
903  delete[] r_rows;
904  delete[] r_cols;
905  delete[] r_data;
906  }
907  return r;
908 }
909 
910 TMatrixDSparse *TUnfold::MultiplyMSparseMSparseTranspVector
911 (const TMatrixDSparse *m1,const TMatrixDSparse *m2,
912  const TMatrixTBase<Double_t> *v) const
913 {
914  // calculate M_ij = sum_k [m1_ik*m2_jk*v[k] ].
915  // m1: pointer to sparse matrix with dimension I*K
916  // m2: pointer to sparse matrix with dimension J*K
917  // v: pointer to vector (matrix) with dimension K*1
918  if((m1->GetNcols() != m2->GetNcols())||
919  (v && ((m1->GetNcols()!=v->GetNrows())||(v->GetNcols()!=1)))) {
920  if(v) {
921  Fatal("MultiplyMSparseMSparseTranspVector",
922  "matrix cols/vector rows %d!=%d!=%d or vector rows %d!=1\n",
923  m1->GetNcols(),m2->GetNcols(),v->GetNrows(),v->GetNcols());
924  } else {
925  Fatal("MultiplyMSparseMSparseTranspVector",
926  "matrix cols %d!=%d\n",m1->GetNcols(),m2->GetNcols());
927  }
928  }
929  const Int_t *rows_m1=m1->GetRowIndexArray();
930  const Int_t *cols_m1=m1->GetColIndexArray();
931  const Double_t *data_m1=m1->GetMatrixArray();
932  Int_t num_m1=0;
933  for(Int_t i=0;i<m1->GetNrows();i++) {
934  if(rows_m1[i]<rows_m1[i+1]) num_m1++;
935  }
936  const Int_t *rows_m2=m2->GetRowIndexArray();
937  const Int_t *cols_m2=m2->GetColIndexArray();
938  const Double_t *data_m2=m2->GetMatrixArray();
939  Int_t num_m2=0;
940  for(Int_t j=0;j<m2->GetNrows();j++) {
941  if(rows_m2[j]<rows_m2[j+1]) num_m2++;
942  }
943  const TMatrixDSparse *v_sparse=dynamic_cast<const TMatrixDSparse *>(v);
944  const Int_t *v_rows=0;
945  const Double_t *v_data=0;
946  if(v_sparse) {
947  v_rows=v_sparse->GetRowIndexArray();
948  v_data=v_sparse->GetMatrixArray();
949  }
950  Int_t num_r=num_m1*num_m2+1;
951  Int_t *row_r=new Int_t[num_r];
952  Int_t *col_r=new Int_t[num_r];
953  Double_t *data_r=new Double_t[num_r];
954  num_r=0;
955  for(Int_t i=0;i<m1->GetNrows();i++) {
956  for(Int_t j=0;j<m2->GetNrows();j++) {
957  data_r[num_r]=0.0;
958  Int_t index_m1=rows_m1[i];
959  Int_t index_m2=rows_m2[j];
960  while((index_m1<rows_m1[i+1])&&(index_m2<rows_m2[j+1])) {
961  Int_t k1=cols_m1[index_m1];
962  Int_t k2=cols_m2[index_m2];
963  if(k1<k2) {
964  index_m1++;
965  } else if(k1>k2) {
966  index_m2++;
967  } else {
968  if(v_sparse) {
969  Int_t v_index=v_rows[k1];
970  if(v_index<v_rows[k1+1]) {
971  data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
972  * v_data[v_index];
973  } else {
974  data_r[num_r] =0.0;
975  }
976  } else if(v) {
977  data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
978  * (*v)(k1,0);
979  } else {
980  data_r[num_r] += data_m1[index_m1] * data_m2[index_m2];
981  }
982  index_m1++;
983  index_m2++;
984  }
985  }
986  if(data_r[num_r] !=0.0) {
987  row_r[num_r]=i;
988  col_r[num_r]=j;
989  num_r++;
990  }
991  }
992  }
993  TMatrixDSparse *r=CreateSparseMatrix(m1->GetNrows(),m2->GetNrows(),
994  num_r,row_r,col_r,data_r);
995  delete[] row_r;
996  delete[] col_r;
997  delete[] data_r;
998  return r;
999 }
1000 
1001 void TUnfold::AddMSparse(TMatrixDSparse *dest,Double_t f,
1002  const TMatrixDSparse *src) const
1003 {
1004  // a replacement for
1005  // (*dest) += f*(*src)
1006  const Int_t *dest_rows=dest->GetRowIndexArray();
1007  const Int_t *dest_cols=dest->GetColIndexArray();
1008  const Double_t *dest_data=dest->GetMatrixArray();
1009  const Int_t *src_rows=src->GetRowIndexArray();
1010  const Int_t *src_cols=src->GetColIndexArray();
1011  const Double_t *src_data=src->GetMatrixArray();
1012 
1013  if((dest->GetNrows()!=src->GetNrows())||
1014  (dest->GetNcols()!=src->GetNcols())) {
1015  Fatal("AddMSparse","inconsistent matrix rows %d!=%d OR cols %d!=%d",
1016  src->GetNrows(),dest->GetNrows(),src->GetNcols(),dest->GetNcols());
1017  }
1018  Int_t nmax=dest->GetNrows()*dest->GetNcols();
1019  Double_t *result_data=new Double_t[nmax];
1020  Int_t *result_rows=new Int_t[nmax];
1021  Int_t *result_cols=new Int_t[nmax];
1022  Int_t n=0;
1023  for(Int_t row=0;row<dest->GetNrows();row++) {
1024  Int_t i_dest=dest_rows[row];
1025  Int_t i_src=src_rows[row];
1026  while((i_dest<dest_rows[row+1])||(i_src<src_rows[row+1])) {
1027  Int_t col_dest=(i_dest<dest_rows[row+1]) ?
1028  dest_cols[i_dest] : dest->GetNcols();
1029  Int_t col_src =(i_src <src_rows[row+1] ) ?
1030  src_cols [i_src] : src->GetNcols();
1031  result_rows[n]=row;
1032  if(col_dest<col_src) {
1033  result_cols[n]=col_dest;
1034  result_data[n]=dest_data[i_dest++];
1035  } else if(col_dest>col_src) {
1036  result_cols[n]=col_src;
1037  result_data[n]=f*src_data[i_src++];
1038  } else {
1039  result_cols[n]=col_dest;
1040  result_data[n]=dest_data[i_dest++]+f*src_data[i_src++];
1041  }
1042  if(result_data[n] !=0.0) {
1043  if(!TMath::Finite(result_data[n])) {
1044  Fatal("AddMSparse","Nan detected %d %d %d",
1045  row,i_dest,i_src);
1046  }
1047  n++;
1048  }
1049  }
1050  }
1051  if(n<=0) {
1052  n=1;
1053  result_rows[0]=0;
1054  result_cols[0]=0;
1055  result_data[0]=0.0;
1056  }
1057  dest->SetMatrixArray(n,result_rows,result_cols,result_data);
1058  delete[] result_data;
1059  delete[] result_rows;
1060  delete[] result_cols;
1061 }
1062 
1063 TMatrixDSparse *TUnfold::InvertMSparseSymmPos
1064 (const TMatrixDSparse *A,Int_t *rankPtr) const
1065 {
1066  // get the inverse or pseudo-inverse of a sparse matrix
1067  // A: the original matrix
1068  // rank:
1069  // if(rankPtr==0)
1070  // return the inverse if it exists, or return NULL
1071  // else
1072  // return the pseudo-inverse
1073  // and return the rank of the matrix in *rankPtr
1074  // the matrix inversion is optimized for the case
1075  // where a large submatrix of A is diagonal
1076 
1077  if(A->GetNcols()!=A->GetNrows()) {
1078  Fatal("InvertMSparseSymmPos","inconsistent matrix row/col %d!=%d",
1079  A->GetNcols(),A->GetNrows());
1080  }
1081 
1082  Bool_t *isZero=new Bool_t[A->GetNrows()];
1083  const Int_t *a_rows=A->GetRowIndexArray();
1084  const Int_t *a_cols=A->GetColIndexArray();
1085  const Double_t *a_data=A->GetMatrixArray();
1086 
1087  // determine diagonal elements
1088  // Aii: diagonals of A
1089  Int_t iDiagonal=0;
1090  Int_t iBlock=A->GetNrows();
1091  Bool_t isDiagonal=kTRUE;
1092  TVectorD aII(A->GetNrows());
1093  Int_t nError=0;
1094  for(Int_t iA=0;iA<A->GetNrows();iA++) {
1095  for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1096  Int_t jA=a_cols[indexA];
1097  if(iA==jA) {
1098  if(!(a_data[indexA]>=0.0)) nError++;
1099  aII(iA)=a_data[indexA];
1100  }
1101  }
1102  }
1103  if(nError>0) {
1104  delete [] isZero;
1105  Fatal("InvertMSparseSymmPos",
1106  "Matrix has %d negative elements on the diagonal", nError);
1107  return 0;
1108  }
1109 
1110  // reorder matrix such that the largest block of zeros is swapped
1111  // to the lowest indices
1112  // the result of this ordering:
1113  // swap[i] : for index i point to the row in A
1114  // swapBack[iA] : for index iA in A point to the swapped index i
1115  // the indices are grouped into three groups
1116  // 0..iDiagonal-1 : these rows only have diagonal elements
1117  // iDiagonal..iBlock : these rows contain a diagonal block matrix
1118  //
1119  Int_t *swap=new Int_t[A->GetNrows()];
1120  for(Int_t j=0;j<A->GetNrows();j++) swap[j]=j;
1121  for(Int_t iStart=0;iStart<iBlock;iStart++) {
1122  Int_t nZero=0;
1123  for(Int_t i=iStart;i<iBlock;i++) {
1124  Int_t iA=swap[i];
1125  Int_t n=A->GetNrows()-(a_rows[iA+1]-a_rows[iA]);
1126  if(n>nZero) {
1127  Int_t tmp=swap[iStart];
1128  swap[iStart]=swap[i];
1129  swap[i]=tmp;
1130  nZero=n;
1131  }
1132  }
1133  for(Int_t i=0;i<A->GetNrows();i++) isZero[i]=kTRUE;
1134  Int_t iA=swap[iStart];
1135  for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1136  Int_t jA=a_cols[indexA];
1137  isZero[jA]=kFALSE;
1138  if(iA!=jA) {
1139  isDiagonal=kFALSE;
1140  }
1141  }
1142  if(isDiagonal) {
1143  iDiagonal=iStart+1;
1144  } else {
1145  for(Int_t i=iStart+1;i<iBlock;) {
1146  if(isZero[swap[i]]) {
1147  i++;
1148  } else {
1149  iBlock--;
1150  Int_t tmp=swap[iBlock];
1151  swap[iBlock]=swap[i];
1152  swap[i]=tmp;
1153  }
1154  }
1155  }
1156  }
1157 
1158  // for tests uncomment this:
1159  // iBlock=iDiagonal;
1160 
1161 
1162  // conditioning of the iBlock part
1163 #ifdef CONDITION_BLOCK_PART
1164  Int_t nn=A->GetNrows()-iBlock;
1165  for(int inc=(nn+1)/2;inc;inc /= 2) {
1166  for(int i=inc;i<nn;i++) {
1167  int itmp=swap[i+iBlock];
1168  int j;
1169  for(j=i;(j>=inc)&&(aII(swap[j-inc+iBlock])<aII(itmp));j -=inc) {
1170  swap[j+iBlock]=swap[j-inc+iBlock];
1171  }
1172  swap[j+iBlock]=itmp;
1173  }
1174  }
1175 #endif
1176  // construct array for swapping back
1177  Int_t *swapBack=new Int_t[A->GetNrows()];
1178  for(Int_t i=0;i<A->GetNrows();i++) {
1179  swapBack[swap[i]]=i;
1180  }
1181 #ifdef DEBUG_DETAIL
1182  for(Int_t i=0;i<A->GetNrows();i++) {
1183  std::cout<<" "<<i<<" "<<swap[i]<<" "<<swapBack[i]<<"\n";
1184  }
1185  std::cout<<"after sorting\n";
1186  for(Int_t i=0;i<A->GetNrows();i++) {
1187  if(i==iDiagonal) std::cout<<"iDiagonal="<<i<<"\n";
1188  if(i==iBlock) std::cout<<"iBlock="<<i<<"\n";
1189  std::cout<<" "<<swap[i]<<" "<<aII(swap[i])<<"\n";
1190  }
1191  {
1192  // sanity check:
1193  // (1) sub-matrix swapped[0]..swapped[iDiagonal]
1194  // must not contain off-diagonals
1195  // (2) sub-matrix swapped[0]..swapped[iBlock-1] must be diagonal
1196  Int_t nDiag=0;
1197  Int_t nError1=0;
1198  Int_t nError2=0;
1199  Int_t nBlock=0;
1200  Int_t nNonzero=0;
1201  for(Int_t i=0;i<A->GetNrows();i++) {
1202  Int_t row=swap[i];
1203  for(Int_t indexA=a_rows[row];indexA<a_rows[row+1];indexA++) {
1204  Int_t j=swapBack[a_cols[indexA]];
1205  if(i==j) nDiag++;
1206  else if((i<iDiagonal)||(j<iDiagonal)) nError1++;
1207  else if((i<iBlock)&&(j<iBlock)) nError2++;
1208  else if((i<iBlock)||(j<iBlock)) nBlock++;
1209  else nNonzero++;
1210  }
1211  }
1212  if(nError1+nError2>0) {
1213  Fatal("InvertMSparseSymmPos","sparse matrix analysis failed %d %d %d %d %d",
1214  iDiagonal,iBlock,A->GetNrows(),nError1,nError2);
1215  }
1216  }
1217 #endif
1218 #ifdef DEBUG
1219  Info("InvertMSparseSymmPos","iDiagonal=%d iBlock=%d nRow=%d",
1220  iDiagonal,iBlock,A->GetNrows());
1221 #endif
1222 
1223  //=============================================
1224  // matrix inversion starts here
1225  //
1226  // the matrix is split into several parts
1227  //        ( D1  0   0  )
1228  //    A = ( 0   D2  B# )
1229  //        ( 0   B   C  )
1230  //
1231  // D1,D2 are diagonal matrices
1232  //
1233  // first, the D1 part is inverted by calculating 1/D1
1234  // dim(D1)= iDiagonal
1235  // next, the other parts are inverted using Schur complement
1236  //
1237  //           ( 1/D1  0   0    )
1238  //    Ainv = ( 0     E   G#   )
1239  //           ( 0     G   F^-1 )
1240  //
1241  // where F = C + (-B/D2) B#
1242  // G = (F^-1) (-B/D2)
1243  // E = 1/D2 + (-B#/D2) G
1244  //
1245  // the inverse of F is calculated using a Cholesky decomposition
1246  //
1247  // Error handling:
1248  // (a) if rankPtr==0: user requests inverse
1249  //
1250  // if D1 is not strictly positive, return NULL
1251  // if D2 is not strictly positive, return NULL
1252  // if F is not strictly positive (Cholesky decomposition failed)
1253  // return NULL
1254  // else
1255  // return Ainv as defined above
1256  //
1257  // (b) if rankPtr !=0: user requests pseudo-inverse
1258  // if D2 is not strictly positive or F is not strictly positive
1259  // calculate singular-value decomposition of
1260  //    ( D2  B# )
1261  //    (        ) = O D O^-1
1262  //    ( B   C  )
1263  // if some eigenvalues are negative, return NULL
1264  // else calculate pseudo-inverse
1265  // *rankPtr = rank(D1)+rank(D)
1266  // else
1267  // calculate pseudo-inverse of D1 (D1_ii==0) ? 0 : 1/D1_ii
1268  // *rankPtr=rank(D1)+nrow(D2)+nrow(C)
1269  // return Ainv as defined above
1270 
1271  Double_t *rEl_data=new Double_t[A->GetNrows()*A->GetNrows()];
1272  Int_t *rEl_col=new Int_t[A->GetNrows()*A->GetNrows()];
1273  Int_t *rEl_row=new Int_t[A->GetNrows()*A->GetNrows()];
1274  Int_t rNumEl=0;
1275 
1276  //====================================================
1277  // invert D1
1278  Int_t rankD1=0;
1279  for(Int_t i=0;i<iDiagonal;++i) {
1280  Int_t iA=swap[i];
1281  if(aII(iA)>0.0) {
1282  rEl_col[rNumEl]=iA;
1283  rEl_row[rNumEl]=iA;
1284  rEl_data[rNumEl]=1./aII(iA);
1285  ++rankD1;
1286  ++rNumEl;
1287  }
1288  }
1289  if((!rankPtr)&&(rankD1!=iDiagonal)) {
1290  Fatal("InvertMSparseSymmPos",
1291  "diagonal part 1 has rank %d != %d, matrix can not be inverted",
1292  rankD1,iDiagonal);
1293  rNumEl=-1;
1294  }
1295 
1296 
1297  //====================================================
1298  // invert D2
1299  Int_t nD2=iBlock-iDiagonal;
1300  TMatrixDSparse *D2inv=0;
1301  if((rNumEl>=0)&&(nD2>0)) {
1302  Double_t *D2inv_data=new Double_t[nD2];
1303  Int_t *D2inv_col=new Int_t[nD2];
1304  Int_t *D2inv_row=new Int_t[nD2];
1305  Int_t D2invNumEl=0;
1306  for(Int_t i=0;i<nD2;++i) {
1307  Int_t iA=swap[i+iDiagonal];
1308  if(aII(iA)>0.0) {
1309  D2inv_col[D2invNumEl]=i;
1310  D2inv_row[D2invNumEl]=i;
1311  D2inv_data[D2invNumEl]=1./aII(iA);
1312  ++D2invNumEl;
1313  }
1314  }
1315  if(D2invNumEl==nD2) {
1316  D2inv=CreateSparseMatrix(nD2,nD2,D2invNumEl,D2inv_row,D2inv_col,
1317  D2inv_data);
1318  } else if(!rankPtr) {
1319  Fatal("InvertMSparseSymmPos",
1320  "diagonal part 2 has rank %d != %d, matrix can not be inverted",
1321  D2invNumEl,nD2);
1322  rNumEl=-2;
1323  }
1324  delete [] D2inv_data;
1325  delete [] D2inv_col;
1326  delete [] D2inv_row;
1327  }
1328 
1329  //====================================================
1330  // invert F
1331 
1332  Int_t nF=A->GetNrows()-iBlock;
1333  TMatrixDSparse *Finv=0;
1334  TMatrixDSparse *B=0;
1335  TMatrixDSparse *minusBD2inv=0;
1336  if((rNumEl>=0)&&(nF>0)&&((nD2==0)||D2inv)) {
1337  // construct matrices F and B
1338  Int_t nFmax=nF*nF;
1339  Double_t epsilonF2=fEpsMatrix;
1340  Double_t *F_data=new Double_t[nFmax];
1341  Int_t *F_col=new Int_t[nFmax];
1342  Int_t *F_row=new Int_t[nFmax];
1343  Int_t FnumEl=0;
1344 
1345  Int_t nBmax=nF*(nD2+1);
1346  Double_t *B_data=new Double_t[nBmax];
1347  Int_t *B_col=new Int_t[nBmax];
1348  Int_t *B_row=new Int_t[nBmax];
1349  Int_t BnumEl=0;
1350 
1351  for(Int_t i=0;i<nF;i++) {
1352  Int_t iA=swap[i+iBlock];
1353  for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1354  Int_t jA=a_cols[indexA];
1355  Int_t j0=swapBack[jA];
1356  if(j0>=iBlock) {
1357  Int_t j=j0-iBlock;
1358  F_row[FnumEl]=i;
1359  F_col[FnumEl]=j;
1360  F_data[FnumEl]=a_data[indexA];
1361  FnumEl++;
1362  } else if(j0>=iDiagonal) {
1363  Int_t j=j0-iDiagonal;
1364  B_row[BnumEl]=i;
1365  B_col[BnumEl]=j;
1366  B_data[BnumEl]=a_data[indexA];
1367  BnumEl++;
1368  }
1369  }
1370  }
1371  TMatrixDSparse *F=0;
1372  if(FnumEl) {
1373 #ifndef FORCE_EIGENVALUE_DECOMPOSITION
1374  F=CreateSparseMatrix(nF,nF,FnumEl,F_row,F_col,F_data);
1375 #endif
1376  }
1377  delete [] F_data;
1378  delete [] F_col;
1379  delete [] F_row;
1380  if(BnumEl) {
1381  B=CreateSparseMatrix(nF,nD2,BnumEl,B_row,B_col,B_data);
1382  }
1383  delete [] B_data;
1384  delete [] B_col;
1385  delete [] B_row;
1386 
1387  if(B && D2inv) {
1388  minusBD2inv=MultiplyMSparseMSparse(B,D2inv);
1389  if(minusBD2inv) {
1390  Int_t mbd2_nMax=minusBD2inv->GetRowIndexArray()
1391  [minusBD2inv->GetNrows()];
1392  Double_t *mbd2_data=minusBD2inv->GetMatrixArray();
1393  for(Int_t i=0;i<mbd2_nMax;i++) {
1394  mbd2_data[i] = - mbd2_data[i];
1395  }
1396  }
1397  }
1398  if(minusBD2inv && F) {
1399  TMatrixDSparse *minusBD2invBt=
1400  MultiplyMSparseMSparseTranspVector(minusBD2inv,B,0);
1401  AddMSparse(F,1.,minusBD2invBt);
1402  DeleteMatrix(&minusBD2invBt);
1403  }
1404 
1405  if(F) {
1406  // cholesky decomposition with preconditioning
1407  const Int_t *f_rows=F->GetRowIndexArray();
1408  const Int_t *f_cols=F->GetColIndexArray();
1409  const Double_t *f_data=F->GetMatrixArray();
1410  // cholesky-type decomposition of F
1411  TMatrixD c(nF,nF);
1412  Int_t nErrorF=0;
1413  for(Int_t i=0;i<nF;i++) {
1414  for(Int_t indexF=f_rows[i];indexF<f_rows[i+1];indexF++) {
1415  if(f_cols[indexF]>=i) c(f_cols[indexF],i)=f_data[indexF];
1416  }
1417  // calculate diagonal element
1418  Double_t c_ii=c(i,i);
1419  for(Int_t j=0;j<i;j++) {
1420  Double_t c_ij=c(i,j);
1421  c_ii -= c_ij*c_ij;
1422  }
1423  if(c_ii<=0.0) {
1424  nErrorF++;
1425  break;
1426  }
1427  c_ii=TMath::Sqrt(c_ii);
1428  c(i,i)=c_ii;
1429  // off-diagonal elements
1430  for(Int_t j=i+1;j<nF;j++) {
1431  Double_t c_ji=c(j,i);
1432  for(Int_t k=0;k<i;k++) {
1433  c_ji -= c(i,k)*c(j,k);
1434  }
1435  c(j,i) = c_ji/c_ii;
1436  }
1437  }
1438  // check condition of dInv
1439  if(!nErrorF) {
1440  Double_t dmin=c(0,0);
1441  Double_t dmax=dmin;
1442  for(Int_t i=1;i<nF;i++) {
1443  dmin=TMath::Min(dmin,c(i,i));
1444  dmax=TMath::Max(dmax,c(i,i));
1445  }
1446 #ifdef DEBUG
1447  std::cout<<"dmin,dmax: "<<dmin<<" "<<dmax<<"\n";
1448 #endif
1449  if(dmin/dmax<epsilonF2*nF) nErrorF=2;
1450  }
1451  if(!nErrorF) {
1452  // here: F = c c#
1453  // construct inverse of c
1454  TMatrixD cinv(nF,nF);
1455  for(Int_t i=0;i<nF;i++) {
1456  cinv(i,i)=1./c(i,i);
1457  }
1458  for(Int_t i=0;i<nF;i++) {
1459  for(Int_t j=i+1;j<nF;j++) {
1460  Double_t tmp=-c(j,i)*cinv(i,i);
1461  for(Int_t k=i+1;k<j;k++) {
1462  tmp -= cinv(k,i)*c(j,k);
1463  }
1464  cinv(j,i)=tmp*cinv(j,j);
1465  }
1466  }
1467  TMatrixDSparse cInvSparse(cinv);
1468  Finv=MultiplyMSparseTranspMSparse
1469  (&cInvSparse,&cInvSparse);
1470  }
1471  DeleteMatrix(&F);
1472  }
1473  }
1474 
1475  // here:
1476  // rNumEl>=0: diagonal part has been inverted
1477  // (nD2==0)||D2inv : D2 part has been inverted or is empty
1478  // (nF==0)||Finv : F part has been inverted or is empty
1479 
1480  Int_t rankD2Block=0;
1481  if(rNumEl>=0) {
1482  if( ((nD2==0)||D2inv) && ((nF==0)||Finv) ) {
1483  // all matrix parts have been inverted, compose full matrix
1484  //           ( 1/D1  0   0    )
1485  //    Ainv = ( 0     E   G#   )
1486  //           ( 0     G   F^-1 )
1487  //
1488  // G = (F^-1) (-B/D2)
1489  // E = 1/D2 + (-B#/D2) G
1490 
1491  TMatrixDSparse *G=0;
1492  if(Finv && minusBD2inv) {
1493  G=MultiplyMSparseMSparse(Finv,minusBD2inv);
1494  }
1495  TMatrixDSparse *E=0;
1496  if(D2inv) E=new TMatrixDSparse(*D2inv);
1497  if(G && minusBD2inv) {
1498  TMatrixDSparse *minusBD2invTransG=
1499  MultiplyMSparseTranspMSparse(minusBD2inv,G);
1500  if(E) {
1501  AddMSparse(E,1.,minusBD2invTransG);
1502  DeleteMatrix(&minusBD2invTransG);
1503  } else {
1504  E=minusBD2invTransG;
1505  }
1506  }
1507  // pack matrix E to r
1508  if(E) {
1509  const Int_t *e_rows=E->GetRowIndexArray();
1510  const Int_t *e_cols=E->GetColIndexArray();
1511  const Double_t *e_data=E->GetMatrixArray();
1512  for(Int_t iE=0;iE<E->GetNrows();iE++) {
1513  Int_t iA=swap[iE+iDiagonal];
1514  for(Int_t indexE=e_rows[iE];indexE<e_rows[iE+1];++indexE) {
1515  Int_t jE=e_cols[indexE];
1516  Int_t jA=swap[jE+iDiagonal];
1517  rEl_col[rNumEl]=iA;
1518  rEl_row[rNumEl]=jA;
1519  rEl_data[rNumEl]=e_data[indexE];
1520  ++rNumEl;
1521  }
1522  }
1523  DeleteMatrix(&E);
1524  }
1525  // pack matrix G to r
1526  if(G) {
1527  const Int_t *g_rows=G->GetRowIndexArray();
1528  const Int_t *g_cols=G->GetColIndexArray();
1529  const Double_t *g_data=G->GetMatrixArray();
1530  for(Int_t iG=0;iG<G->GetNrows();iG++) {
1531  Int_t iA=swap[iG+iBlock];
1532  for(Int_t indexG=g_rows[iG];indexG<g_rows[iG+1];++indexG) {
1533  Int_t jG=g_cols[indexG];
1534  Int_t jA=swap[jG+iDiagonal];
1535  // G
1536  rEl_col[rNumEl]=iA;
1537  rEl_row[rNumEl]=jA;
1538  rEl_data[rNumEl]=g_data[indexG];
1539  ++rNumEl;
1540  // G#
1541  rEl_col[rNumEl]=jA;
1542  rEl_row[rNumEl]=iA;
1543  rEl_data[rNumEl]=g_data[indexG];
1544  ++rNumEl;
1545  }
1546  }
1547  DeleteMatrix(&G);
1548  }
1549  if(Finv) {
1550  // pack matrix Finv to r
1551  const Int_t *finv_rows=Finv->GetRowIndexArray();
1552  const Int_t *finv_cols=Finv->GetColIndexArray();
1553  const Double_t *finv_data=Finv->GetMatrixArray();
1554  for(Int_t iF=0;iF<Finv->GetNrows();iF++) {
1555  Int_t iA=swap[iF+iBlock];
1556  for(Int_t indexF=finv_rows[iF];indexF<finv_rows[iF+1];++indexF) {
1557  Int_t jF=finv_cols[indexF];
1558  Int_t jA=swap[jF+iBlock];
1559  rEl_col[rNumEl]=iA;
1560  rEl_row[rNumEl]=jA;
1561  rEl_data[rNumEl]=finv_data[indexF];
1562  ++rNumEl;
1563  }
1564  }
1565  }
1566  rankD2Block=nD2+nF;
1567  } else if(!rankPtr) {
1568  rNumEl=-3;
1569  Fatal("InvertMSparseSymmPos",
1570  "non-trivial part has rank < %d, matrix can not be inverted",
1571  nF);
1572  } else {
1573  // part of the matrix could not be inverted, get eigenvalue
1574  // decomposition
1575  Int_t nEV=nD2+nF;
1576  Double_t epsilonEV2=fEpsMatrix /* *nEV*nEV */;
1577  Info("InvertMSparseSymmPos",
1578  "cholesky-decomposition failed, try eigenvalue analysis");
1579 #ifdef DEBUG
1580  std::cout<<"nEV="<<nEV<<" iDiagonal="<<iDiagonal<<"\n";
1581 #endif
1582  TMatrixDSym EV(nEV);
1583  for(Int_t i=0;i<nEV;i++) {
1584  Int_t iA=swap[i+iDiagonal];
1585  for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1586  Int_t jA=a_cols[indexA];
1587  Int_t j0=swapBack[jA];
1588  if(j0>=iDiagonal) {
1589  Int_t j=j0-iDiagonal;
1590 #ifdef DEBUG
1591  if((i<0)||(j<0)||(i>=nEV)||(j>=nEV)) {
1592  std::cout<<" error "<<nEV<<" "<<i<<" "<<j<<"\n";
1593  }
1594 #endif
1595  if(!TMath::Finite(a_data[indexA])) {
1596  Fatal("InvertMSparseSymmPos",
1597  "non-finite number detected element %d %d\n",
1598  iA,jA);
1599  }
1600  EV(i,j)=a_data[indexA];
1601  }
1602  }
1603  }
1604  // EV.Print();
1605  TMatrixDSymEigen Eigen(EV);
1606 #ifdef DEBUG
1607  std::cout<<"Eigenvalues\n";
1608  // Eigen.GetEigenValues().Print();
1609 #endif
1610  Int_t errorEV=0;
1611  Double_t condition=1.0;
1612  if(Eigen.GetEigenValues()(0)<0.0) {
1613  errorEV=1;
1614  } else if(Eigen.GetEigenValues()(0)>0.0) {
1615  condition=Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0);
1616  }
1617  if(condition<-epsilonEV2) {
1618  errorEV=2;
1619  }
1620  if(errorEV) {
1621  if(errorEV==1) {
1622  Error("InvertMSparseSymmPos",
1623  "Largest Eigenvalue is negative %f",
1624  Eigen.GetEigenValues()(0));
1625  } else {
1626  Error("InvertMSparseSymmPos",
1627  "Some Eigenvalues are negative (EV%d/EV0=%g epsilon=%g)",
1628  nEV-1,
1629  Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0),
1630  epsilonEV2);
1631  }
1632  }
1633  // compose inverse matrix
1634  rankD2Block=0;
1635  Double_t setToZero=epsilonEV2*Eigen.GetEigenValues()(0);
1636  TMatrixD inverseEV(nEV,1);
1637  for(Int_t i=0;i<nEV;i++) {
1638  Double_t x=Eigen.GetEigenValues()(i);
1639  if(x>setToZero) {
1640  inverseEV(i,0)=1./x;
1641  ++rankD2Block;
1642  }
1643  }
1644  TMatrixDSparse V(Eigen.GetEigenVectors());
1645  TMatrixDSparse *VDVt=MultiplyMSparseMSparseTranspVector
1646  (&V,&V,&inverseEV);
1647 
1648  // pack matrix VDVt to r
1649  const Int_t *vdvt_rows=VDVt->GetRowIndexArray();
1650  const Int_t *vdvt_cols=VDVt->GetColIndexArray();
1651  const Double_t *vdvt_data=VDVt->GetMatrixArray();
1652  for(Int_t iVDVt=0;iVDVt<VDVt->GetNrows();iVDVt++) {
1653  Int_t iA=swap[iVDVt+iDiagonal];
1654  for(Int_t indexVDVt=vdvt_rows[iVDVt];
1655  indexVDVt<vdvt_rows[iVDVt+1];++indexVDVt) {
1656  Int_t jVDVt=vdvt_cols[indexVDVt];
1657  Int_t jA=swap[jVDVt+iDiagonal];
1658  rEl_col[rNumEl]=iA;
1659  rEl_row[rNumEl]=jA;
1660  rEl_data[rNumEl]=vdvt_data[indexVDVt];
1661  ++rNumEl;
1662  }
1663  }
1664  DeleteMatrix(&VDVt);
1665  }
1666  }
1667  if(rankPtr) {
1668  *rankPtr=rankD1+rankD2Block;
1669  }
1670 
1671 
1672  DeleteMatrix(&D2inv);
1673  DeleteMatrix(&Finv);
1674  DeleteMatrix(&B);
1675  DeleteMatrix(&minusBD2inv);
1676 
1677  delete [] swap;
1678  delete [] swapBack;
1679  delete [] isZero;
1680 
1681  TMatrixDSparse *r=(rNumEl>=0) ?
1682  CreateSparseMatrix(A->GetNrows(),A->GetNrows(),rNumEl,
1683  rEl_row,rEl_col,rEl_data) : 0;
1684  delete [] rEl_data;
1685  delete [] rEl_col;
1686  delete [] rEl_row;
1687 
1688 #ifdef DEBUG_DETAIL
1689  // sanity test
1690  if(r) {
1691  TMatrixDSparse *Ar=MultiplyMSparseMSparse(A,r);
1692  TMatrixDSparse *ArA=MultiplyMSparseMSparse(Ar,A);
1693  TMatrixDSparse *rAr=MultiplyMSparseMSparse(r,Ar);
1694 
1695  TMatrixD ar(*Ar);
1696  TMatrixD a(*A);
1697  TMatrixD ara(*ArA);
1698  TMatrixD R(*r);
1699  TMatrixD rar(*rAr);
1700 
1701  DeleteMatrix(&Ar);
1702  DeleteMatrix(&ArA);
1703  DeleteMatrix(&rAr);
1704 
1705  Double_t epsilonA2=fEpsMatrix /* *a.GetNrows()*a.GetNcols() */;
1706  for(Int_t i=0;i<a.GetNrows();i++) {
1707  for(Int_t j=0;j<a.GetNcols();j++) {
1708  // ar should be symmetric
1709  if(TMath::Abs(ar(i,j)-ar(j,i))>
1710  epsilonA2*(TMath::Abs(ar(i,j))+TMath::Abs(ar(j,i)))) {
1711  std::cout<<"Ar is not symmetric Ar("<<i<<","<<j<<")="<<ar(i,j)
1712  <<" Ar("<<j<<","<<i<<")="<<ar(j,i)<<"\n";
1713  }
1714  // ara should be equal a
1715  if(TMath::Abs(ara(i,j)-a(i,j))>
1716  epsilonA2*(TMath::Abs(ara(i,j))+TMath::Abs(a(i,j)))) {
1717  std::cout<<"ArA is not equal A ArA("<<i<<","<<j<<")="<<ara(i,j)
1718  <<" A("<<i<<","<<j<<")="<<a(i,j)<<"\n";
1719  }
1720  // rAr should be equal r
1721  if(TMath::Abs(rar(i,j)-R(i,j))>
1722  epsilonA2*(TMath::Abs(rar(i,j))+TMath::Abs(R(i,j)))) {
1723  std::cout<<"rAr is not equal r rAr("<<i<<","<<j<<")="<<rar(i,j)
1724  <<" r("<<i<<","<<j<<")="<<R(i,j)<<"\n";
1725  }
1726  }
1727  }
1728  if(rankPtr) std::cout<<"rank="<<*rankPtr<<"\n";
1729  } else {
1730  std::cout<<"Matrix is not positive\n";
1731  }
1732 #endif
1733  return r;
1734 
1735 }
1736 
1737 TString TUnfold::GetOutputBinName(Int_t iBinX) const
1738 {
1739  // given a bin number, return the name of the output bin
1740  // this method makes more sense for the class TUnfoldDensity,
1741  // where it is overridden
1742  return TString::Format("#%d",iBinX);
1743 }
1744 
1745 TUnfold::TUnfold(const TH2 *hist_A, EHistMap histmap, ERegMode regmode,
1746  EConstraint constraint)
1747 {
1748  // set up unfolding matrix and initial regularisation scheme
1749  // hist_A: matrix that describes the migrations
1750  // histmap: mapping of the histogram axes to the unfolding output
1751  // regmode: global regularisation mode
1752  // constraint: type of constraint to use
1753  // data members initialized to something different from zero:
1754  // fA: filled from hist_A
1755  // fDA: filled from hist_A
1756  // fX0: filled from hist_A
1757  // fL: filled depending on the regularisation scheme
1758  // Treatment of overflow bins
1759  // Bins where the unfolding input (Detector level) is in overflow
1760  // are used for the efficiency correction. They have to be filled
1761  // properly!
1762  // Bins where the unfolding output (Generator level) is in overflow
1763  // are treated as a part of the generator level distribution.
1764  // I.e. the unfolding output could have non-zero overflow bins if the
1765  // input matrix does have such bins.
1766 
1767  InitTUnfold();
1768  SetConstraint(constraint);
1769  Int_t nx0, nx, ny;
1770  if (histmap == kHistMapOutputHoriz) {
1771  // include overflow bins on the X axis
1772  nx0 = hist_A->GetNbinsX()+2;
1773  ny = hist_A->GetNbinsY();
1774  } else {
1775  // include overflow bins on the Y axis
1776  nx0 = hist_A->GetNbinsY()+2;
1777  ny = hist_A->GetNbinsX();
1778  }
1779  nx = 0;
1780  // fNx is expected to be nx0, but the input matrix may be ill-formed
1781  // -> all columns with zero events have to be removed
1782  // (because y does not contain any information on that bin in x)
1783  fSumOverY.Set(nx0);
1784  fXToHist.Set(nx0);
1785  fHistToX.Set(nx0);
1786  Int_t nonzeroA=0;
1787  // loop
1788  // - calculate bias distribution
1789  // sum_over_y
1790  // - count those generator bins which can be unfolded
1791  // fNx
1792  // - histogram bins are added to the lookup-tables
1793  // fHistToX[] and fXToHist[]
1794  // these convert histogram bins to matrix indices and vice versa
1795  // - the number of non-zero elements is counted
1796  // nonzeroA
1797  Int_t nskipped=0;
1798  for (Int_t ix = 0; ix < nx0; ix++) {
1799  // calculate sum over all detector bins
1800  // excluding the overflow bins
1801  Double_t sum = 0.0;
1802  Int_t nonZeroY = 0;
1803  for (Int_t iy = 0; iy < ny; iy++) {
1804  Double_t z;
1805  if (histmap == kHistMapOutputHoriz) {
1806  z = hist_A->GetBinContent(ix, iy + 1);
1807  } else {
1808  z = hist_A->GetBinContent(iy + 1, ix);
1809  }
1810  if (z != 0.0) {
1811  nonzeroA++;
1812  nonZeroY++;
1813  sum += z;
1814  }
1815  }
1816  // check whether there is any sensitivity to this generator bin
1817 
1818  if (nonZeroY) {
1819  // update mapping tables to convert bin number to matrix index
1820  fXToHist[nx] = ix;
1821  fHistToX[ix] = nx;
1822  // add overflow bins -> important to keep track of the
1823  // non-reconstructed events
1824  fSumOverY[nx] = sum;
1825  if (histmap == kHistMapOutputHoriz) {
1826  fSumOverY[nx] +=
1827  hist_A->GetBinContent(ix, 0) +
1828  hist_A->GetBinContent(ix, ny + 1);
1829  } else {
1830  fSumOverY[nx] +=
1831  hist_A->GetBinContent(0, ix) +
1832  hist_A->GetBinContent(ny + 1, ix);
1833  }
1834  nx++;
1835  } else {
1836  nskipped++;
1837  // histogram bin pointing to -1 (non-existing matrix entry)
1838  fHistToX[ix] = -1;
1839  }
1840  }
1841  Int_t underflowBin=0,overflowBin=0;
1842  for (Int_t ix = 0; ix < nx; ix++) {
1843  Int_t ibinx = fXToHist[ix];
1844  if(ibinx<1) underflowBin=1;
1845  if (histmap == kHistMapOutputHoriz) {
1846  if(ibinx>hist_A->GetNbinsX()) overflowBin=1;
1847  } else {
1848  if(ibinx>hist_A->GetNbinsY()) overflowBin=1;
1849  }
1850  }
1851  if(nskipped) {
1852  Int_t nprint=0;
1853  Int_t ixfirst=-1,ixlast=-1;
1854  TString binlist;
1855  int nDisconnected=0;
1856  for (Int_t ix = 0; ix < nx0; ix++) {
1857  if(fHistToX[ix]<0) {
1858  nprint++;
1859  if(ixlast<0) {
1860  binlist +=" ";
1861  binlist +=ix;
1862  ixfirst=ix;
1863  }
1864  ixlast=ix;
1865  nDisconnected++;
1866  }
1867  if(((fHistToX[ix]>=0)&&(ixlast>=0))||
1868  (nprint==nskipped)) {
1869  if(ixlast>ixfirst) {
1870  binlist += "-";
1871  binlist += ixlast;
1872  }
1873  ixfirst=-1;
1874  ixlast=-1;
1875  }
1876  if(nprint==nskipped) {
1877  break;
1878  }
1879  }
1880  if(nskipped==(2-underflowBin-overflowBin)) {
1881  Info("TUnfold","underflow and overflow bin "
1882  "do not depend on the input data");
1883  } else {
1884  Warning("TUnfold","%d output bins "
1885  "do not depend on the input data %s",nDisconnected,
1886  static_cast<const char *>(binlist));
1887  }
1888  }
1889  // store bias as matrix
1890  fX0 = new TMatrixD(nx, 1, fSumOverY.GetArray());
1891  // store non-zero elements in sparse matrix fA
1892  // construct sparse matrix fA
1893  Int_t *rowA = new Int_t[nonzeroA];
1894  Int_t *colA = new Int_t[nonzeroA];
1895  Double_t *dataA = new Double_t[nonzeroA];
1896  Int_t index=0;
1897  for (Int_t iy = 0; iy < ny; iy++) {
1898  for (Int_t ix = 0; ix < nx; ix++) {
1899  Int_t ibinx = fXToHist[ix];
1900  Double_t z;
1901  if (histmap == kHistMapOutputHoriz) {
1902  z = hist_A->GetBinContent(ibinx, iy + 1);
1903  } else {
1904  z = hist_A->GetBinContent(iy + 1, ibinx);
1905  }
1906  if (z != 0.0) {
1907  rowA[index]=iy;
1908  colA[index] = ix;
1909  dataA[index] = z / fSumOverY[ix];
1910  index++;
1911  }
1912  }
1913  }
1914  if(underflowBin && overflowBin) {
1915  Info("TUnfold","%d input bins and %d output bins (includes 2 underflow/overflow bins)",ny,nx);
1916  } else if(underflowBin) {
1917  Info("TUnfold","%d input bins and %d output bins (includes 1 underflow bin)",ny,nx);
1918  } else if(overflowBin) {
1919  Info("TUnfold","%d input bins and %d output bins (includes 1 overflow bin)",ny,nx);
1920  } else {
1921  Info("TUnfold","%d input bins and %d output bins",ny,nx);
1922  }
1923  fA = CreateSparseMatrix(ny,nx,index,rowA,colA,dataA);
1924  if(ny<nx) {
1925  Error("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1926  } else if(ny==nx) {
1927  Warning("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1928  }
1929  delete[] rowA;
1930  delete[] colA;
1931  delete[] dataA;
1932  // regularisation conditions squared
1933  fL = new TMatrixDSparse(1, GetNx());
1934  if (regmode != kRegModeNone) {
1935  Int_t nError=RegularizeBins(0, 1, nx0, regmode);
1936  if(nError>0) {
1937  if(nError>1) {
1938  Info("TUnfold",
1939  "%d regularisation conditions have been skipped",nError);
1940  } else {
1941  Info("TUnfold",
1942  "One regularisation condition has been skipped");
1943  }
1944  }
1945  }
1946 }
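A minimal usage sketch (not part of TUnfold.cxx; the histogram name histMC and the option choices are assumptions for illustration):

   // histMC: assumed TH2* migration histogram, generator-level bins on the
   // X axis and detector-level bins on the Y axis
   TUnfold unfold(histMC,
                  TUnfold::kHistMapOutputHoriz,  // truth is on the X axis
                  TUnfold::kRegModeCurvature,    // regularisation mode
                  TUnfold::kEConstraintArea);    // preserve the normalisation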
1947 
1948 TUnfold::~TUnfold(void)
1949 {
1950  // delete all data members
1951 
1952  DeleteMatrix(&fA);
1953  DeleteMatrix(&fL);
1954  DeleteMatrix(&fVyy);
1955  DeleteMatrix(&fY);
1956  DeleteMatrix(&fX0);
1957 
1958  ClearResults();
1959 }
1960 
1961 void TUnfold::SetBias(const TH1 *bias)
1962 {
1963  // initialize alternative bias from histogram
1964  // modifies data member fX0
1965  DeleteMatrix(&fX0);
1966  fX0 = new TMatrixD(GetNx(), 1);
1967  for (Int_t i = 0; i < GetNx(); i++) {
1968  (*fX0) (i, 0) = bias->GetBinContent(fXToHist[i]);
1969  }
1970 }
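A short sketch (assumptions: an object unfold set up as in the constructor sketch above, and a histogram histTruthAlt with the truth binning of hist_A):

   // replace the default bias (the truth projection of hist_A) by an
   // alternative truth distribution; it only enters the result if a
   // non-zero scaleBias is passed to SetInput() or DoUnfold()
   unfold.SetBias(histTruthAlt);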
1971 
1972 Bool_t TUnfold::AddRegularisationCondition
1973 (Int_t i0,Double_t f0,Int_t i1,Double_t f1,Int_t i2,Double_t f2)
1974 {
1975  // add regularisation condition for a triplet of bins and scale factors
1976  // the arguments are used to form one row (k) of the matrix L
1977  // L(k,i0)=f0
1978  // L(k,i1)=f1
1979  // L(k,i2)=f2
1980  // the indices i0,i1,i2 are bin numbers, transformed to matrix indices
1981  // if i2<0, do not fill L(k,i2)
1982  // if i1<0, do not fill L(k,i1)
1983 
1984  Int_t indices[3];
1985  Double_t data[3];
1986  Int_t nEle=0;
1987 
1988  if(i2>=0) {
1989  data[nEle]=f2;
1990  indices[nEle]=i2;
1991  nEle++;
1992  }
1993  if(i1>=0) {
1994  data[nEle]=f1;
1995  indices[nEle]=i1;
1996  nEle++;
1997  }
1998  if(i0>=0) {
1999  data[nEle]=f0;
2000  indices[nEle]=i0;
2001  nEle++;
2002  }
2003  return AddRegularisationCondition(nEle,indices,data);
2004 }
2005 
2006 Bool_t TUnfold::AddRegularisationCondition
2007 (Int_t nEle,const Int_t *indices,const Double_t *rowData)
2008 {
2009  // add a regularisation condition
2010  // the arguments are used to form one row (k) of the matrix L
2011  // L(k,indices[0])=rowData[0]
2012  // ...
2013  // L(k,indices[nEle-1])=rowData[nEle-1]
2014  //
2015  // the indices are taken as bin numbers,
2016  // transformed to internal bin numbers using the array fHistToX
2017 
2018 
2019  Bool_t r=kTRUE;
2020  const Int_t *l0_rows=fL->GetRowIndexArray();
2021  const Int_t *l0_cols=fL->GetColIndexArray();
2022  const Double_t *l0_data=fL->GetMatrixArray();
2023 
2024  Int_t nF=l0_rows[fL->GetNrows()]+nEle;
2025  Int_t *l_row=new Int_t[nF];
2026  Int_t *l_col=new Int_t[nF];
2027  Double_t *l_data=new Double_t[nF];
2028  // decode original matrix
2029  nF=0;
2030  for(Int_t row=0;row<fL->GetNrows();row++) {
2031  for(Int_t k=l0_rows[row];k<l0_rows[row+1];k++) {
2032  l_row[nF]=row;
2033  l_col[nF]=l0_cols[k];
2034  l_data[nF]=l0_data[k];
2035  nF++;
2036  }
2037  }
2038 
2039  // if the original matrix does not have any data, reset its row count
2040  Int_t rowMax=0;
2041  if(nF>0) {
2042  rowMax=fL->GetNrows();
2043  }
2044 
2045  // add the new regularisation conditions
2046  for(Int_t i=0;i<nEle;i++) {
2047  Int_t col=fHistToX[indices[i]];
2048  if(col<0) {
2049  r=kFALSE;
2050  break;
2051  }
2052  l_row[nF]=rowMax;
2053  l_col[nF]=col;
2054  l_data[nF]=rowData[i];
2055  nF++;
2056  }
2057 
2058  // replace the old matrix fL
2059  if(r) {
2060  DeleteMatrix(&fL);
2061  fL=CreateSparseMatrix(rowMax+1,GetNx(),nF,l_row,l_col,l_data);
2062  }
2063  delete [] l_row;
2064  delete [] l_col;
2065  delete [] l_data;
2066  return r;
2067 }
2068 
2069 Int_t TUnfold::RegularizeSize(int bin, Double_t scale)
2070 {
2071  // add regularisation on the size of bin i
2072  // bin: bin number
2073  // scale: size of the regularisation
2074  // return value: number of conditions which have been skipped
2075  // modifies data member fL
2076 
2077  if(fRegMode==kRegModeNone) fRegMode=kRegModeSize;
2078  if(fRegMode!=kRegModeSize) fRegMode=kRegModeMixed;
2079 
2080  return AddRegularisationCondition(bin,scale) ? 0 : 1;
2081 }
2082 
2083 Int_t TUnfold::RegularizeDerivative(int left_bin, int right_bin,
2084  Double_t scale)
2085 {
2086  // add regularisation on the difference of two bins
2087  // left_bin: 1st bin
2088  // right_bin: 2nd bin
2089  // scale: size of the regularisation
2090  // return value: number of conditions which have been skipped
2091  // modifies data member fL
2092 
2093  if(fRegMode==kRegModeNone) fRegMode=kRegModeDerivative;
2094  if(fRegMode!=kRegModeDerivative) fRegMode=kRegModeMixed;
2095 
2096  return AddRegularisationCondition(left_bin,-scale,right_bin,scale) ? 0 : 1;
2097 }
2098 
2099 Int_t TUnfold::RegularizeCurvature(int left_bin, int center_bin,
2100  int right_bin,
2101  Double_t scale_left,
2102  Double_t scale_right)
2103 {
2104  // add regularisation on the curvature through 3 bins (2nd derivative)
2105  // left_bin: 1st bin
2106  // center_bin: 2nd bin
2107  // right_bin: 3rd bin
2108  // scale_left: scale factor on center-left difference
2109  // scale_right: scale factor on right-center difference
2110  // return value: number of conditions which have been skipped
2111  // modifies data member fL
2112 
2113  if(fRegMode==kRegModeNone) fRegMode=kRegModeCurvature;
2114  if(fRegMode!=kRegModeCurvature) fRegMode=kRegModeMixed;
2115 
2116  return AddRegularisationCondition
2117  (left_bin,-scale_left,
2118  center_bin,scale_left+scale_right,
2119  right_bin,-scale_right)
2120  ? 0 : 1;
2121 }
2122 
2123 Int_t TUnfold::RegularizeBins(int start, int step, int nbin,
2124  ERegMode regmode)
2125 {
2126  // set regularisation on a 1-dimensional curve
2127  // start: first bin
2128  // step: distance between neighbouring bins
2129  // nbin: total number of bins
2130  // regmode: regularisation mode
2131  // return value:
2132  // number of errors (i.e. conditions which have been skipped)
2133  // modifies data member fL
2134 
2135  Int_t i0, i1, i2;
2136  i0 = start;
2137  i1 = i0 + step;
2138  i2 = i1 + step;
2139  Int_t nSkip = 0;
2140  Int_t nError= 0;
2141  if (regmode == kRegModeDerivative) {
2142  nSkip = 1;
2143  } else if (regmode == kRegModeCurvature) {
2144  nSkip = 2;
2145  } else if(regmode != kRegModeSize) {
2146  Error("RegularizeBins","regmode = %d is not valid",regmode);
2147  }
2148  for (Int_t i = nSkip; i < nbin; i++) {
2149  if (regmode == kRegModeSize) {
2150  nError += RegularizeSize(i0);
2151  } else if (regmode == kRegModeDerivative) {
2152  nError += RegularizeDerivative(i0, i1);
2153  } else if (regmode == kRegModeCurvature) {
2154  nError += RegularizeCurvature(i0, i1, i2);
2155  }
2156  i0 = i1;
2157  i1 = i2;
2158  i2 += step;
2159  }
2160  return nError;
2161 }
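As an illustration of the arguments, a hedged sketch (nGen is an assumed number of generator-level bins; the object is assumed to have been constructed with kRegModeNone):

   // curvature regularisation along truth bins 1..nGen
   Int_t nSkipped = unfold.RegularizeBins(1, 1, nGen, TUnfold::kRegModeCurvature);
   if(nSkipped > 0) {
      // some conditions referred to bins which are not part of the unfolding
   }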
2162 
2163 Int_t TUnfold::RegularizeBins2D(int start_bin, int step1, int nbin1,
2164  int step2, int nbin2, ERegMode regmode)
2165 {
2166  // set regularisation on a 2-dimensional grid of bins
2167  // start: first bin
2168  // step1: distance between bins in 1st direction
2169  // nbin1: number of bins in 1st direction
2170  // step2: distance between bins in 2nd direction
2171  // nbin2: number of bins in 2nd direction
2172  // return value:
2173  // number of errors (i.e. conditions which have been skipped)
2174  // modifies data member fL
2175 
2176  Int_t nError = 0;
2177  for (Int_t i1 = 0; i1 < nbin1; i1++) {
2178  nError += RegularizeBins(start_bin + step1 * i1, step2, nbin2, regmode);
2179  }
2180  for (Int_t i2 = 0; i2 < nbin2; i2++) {
2181  nError += RegularizeBins(start_bin + step2 * i2, step1, nbin1, regmode);
2182  }
2183  return nError;
2184 }
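A sketch for a two-dimensional truth distribution which has been unrolled onto one axis (the mapping bin = 1 + i1 + nGen1*i2 and the bin counts nGen1, nGen2 are assumptions):

   // regularise the curvature of an unrolled nGen1 x nGen2 truth grid
   unfold.RegularizeBins2D(1, 1, nGen1, nGen1, nGen2, TUnfold::kRegModeCurvature);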
2185 
2186 Double_t TUnfold::DoUnfold(Double_t tau_reg,const TH1 *input,
2187  Double_t scaleBias)
2188 {
2189  // Do unfolding of an input histogram
2190  // tau_reg: regularisation parameter
2191  // input: input distribution with errors
2192  // scaleBias: scale factor applied to the bias
2193  // Data members required:
2194  // fA, fX0, fL
2195  // Data members modified:
2196  // those documented in SetInput()
2197  // and those documented in DoUnfold(Double_t)
2198  // Return value:
2199  // maximum global correlation coefficient
2200  // NOTE!!! return value >=1.0 means error, and the result is junk
2201  //
2202  // Overflow bins of the input distribution are ignored!
2203 
2204  SetInput(input,scaleBias);
2205  return DoUnfold(tau_reg);
2206 }
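A hedged one-call sketch (histData and the value of tau are assumptions):

   // unfold the measured histogram with a fixed regularisation strength;
   // scaleBias=0 means the bias vector is not used
   Double_t tau    = 1.e-4;   // assumed value
   Double_t rhoMax = unfold.DoUnfold(tau, histData, 0.0);
   if(rhoMax >= 1.0) {
      // error condition, the result should not be used
   }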
2207 
2208 Int_t TUnfold::SetInput(const TH1 *input, Double_t scaleBias,
2209  Double_t oneOverZeroError,const TH2 *hist_vyy,
2210  const TH2 *hist_vyy_inv)
2211 {
2212  // Define the input data for subsequent calls to DoUnfold(Double_t)
2213  // input: input distribution with errors
2214  // scaleBias: scale factor applied to the bias
2215  // oneOverZeroError: for bins with zero error, this number defines 1/error.
2216  // hist_vyy: if non-zero, defines the data covariance matrix
2217  // otherwise it is calculated from the data errors
2218  // hist_vyy_inv: if non-zero and if hist_vyy is set, defines the inverse of the data covariance matrix
2219  // Return value: number of bins with bad error
2220  // +10000*number of unconstrained output bins
2221  // Note: return values>=10000 are fatal errors,
2222  // for the given input, the unfolding can not be done!
2223  // Data members modified:
2224  // fY, fVyy, fBiasScale
2225  // Data members cleared
2226  // fVyyInv, fNdf
2227  // + see ClearResults
2228 
2229  DeleteMatrix(&fVyyInv);
2230  fNdf=0;
2231 
2232  fBiasScale = scaleBias;
2233 
2234  // delete old results (if any)
2235  ClearResults();
2236 
2237  // construct error matrix and inverted error matrix of measured quantities
2238  // from errors of input histogram or use error matrix
2239 
2240  Int_t *rowVyyN=new Int_t[GetNy()*GetNy()+1];
2241  Int_t *colVyyN=new Int_t[GetNy()*GetNy()+1];
2242  Double_t *dataVyyN=new Double_t[GetNy()*GetNy()+1];
2243 
2244  Int_t *rowVyy1=new Int_t[GetNy()];
2245  Int_t *colVyy1=new Int_t[GetNy()];
2246  Double_t *dataVyy1=new Double_t[GetNy()];
2247  Double_t *dataVyyDiag=new Double_t[GetNy()];
2248 
2249  Int_t nError=0;
2250  Int_t nVyyN=0;
2251  Int_t nVyy1=0;
2252  for (Int_t iy = 0; iy < GetNy(); iy++) {
2253  // diagonals
2254  Double_t dy2;
2255  if(!hist_vyy) {
2256  Double_t dy = input->GetBinError(iy + 1);
2257  dy2=dy*dy;
2258  if (dy2 <= 0.0) {
2259  nError++;
2260  if(oneOverZeroError>0.0) {
2261  dy2 = 1./ ( oneOverZeroError*oneOverZeroError);
2262  }
2263  }
2264  } else {
2265  dy2 = hist_vyy->GetBinContent(iy+1,iy+1);
2266  }
2267  rowVyyN[nVyyN] = iy;
2268  colVyyN[nVyyN] = iy;
2269  rowVyy1[nVyy1] = iy;
2270  colVyy1[nVyy1] = 0;
2271  dataVyyDiag[iy] = dy2;
2272  if(dy2>0.0) {
2273  dataVyyN[nVyyN++] = dy2;
2274  dataVyy1[nVyy1++] = dy2;
2275  }
2276  }
2277  if(hist_vyy) {
2278  // non-diagonal elements
2279  for (Int_t iy = 0; iy < GetNy(); iy++) {
2280  // ignore rows where the diagonal is zero
2281  if(dataVyyDiag[iy]<=0.0) continue;
2282  for (Int_t jy = 0; jy < GetNy(); jy++) {
2283  // skip diagonal elements
2284  if(iy==jy) continue;
2285  // ignore columns where the diagonal is zero
2286  if(dataVyyDiag[jy]<=0.0) continue;
2287 
2288  rowVyyN[nVyyN] = iy;
2289  colVyyN[nVyyN] = jy;
2290  dataVyyN[nVyyN]= hist_vyy->GetBinContent(iy+1,jy+1);
2291  if(dataVyyN[nVyyN] == 0.0) continue;
2292  nVyyN ++;
2293  }
2294  }
2295  if(hist_vyy_inv) {
2296  Warning("SetInput",
2297  "inverse of input covariance is taken from user input");
2298  Int_t *rowVyyInv=new Int_t[GetNy()*GetNy()+1];
2299  Int_t *colVyyInv=new Int_t[GetNy()*GetNy()+1];
2300  Double_t *dataVyyInv=new Double_t[GetNy()*GetNy()+1];
2301  Int_t nVyyInv=0;
2302  for (Int_t iy = 0; iy < GetNy(); iy++) {
2303  for (Int_t jy = 0; jy < GetNy(); jy++) {
2304  rowVyyInv[nVyyInv] = iy;
2305  colVyyInv[nVyyInv] = jy;
2306  dataVyyInv[nVyyInv]= hist_vyy_inv->GetBinContent(iy+1,jy+1);
2307  if(dataVyyInv[nVyyInv] == 0.0) continue;
2308  nVyyInv ++;
2309  }
2310  }
2311  fVyyInv=CreateSparseMatrix
2312  (GetNy(),GetNy(),nVyyInv,rowVyyInv,colVyyInv,dataVyyInv);
2313  delete [] rowVyyInv;
2314  delete [] colVyyInv;
2315  delete [] dataVyyInv;
2316  }
2317  }
2318  DeleteMatrix(&fVyy);
2319  fVyy = CreateSparseMatrix
2320  (GetNy(),GetNy(),nVyyN,rowVyyN,colVyyN,dataVyyN);
2321 
2322  delete[] rowVyyN;
2323  delete[] colVyyN;
2324  delete[] dataVyyN;
2325 
2326  TMatrixDSparse *vecV=CreateSparseMatrix
2327  (GetNy(),1,nVyy1,rowVyy1,colVyy1, dataVyy1);
2328 
2329  delete[] rowVyy1;
2330  delete[] colVyy1;
2331  delete[] dataVyy1;
2332 
2333  //
2334  // get input vector
2335  DeleteMatrix(&fY);
2336  fY = new TMatrixD(GetNy(), 1);
2337  for (Int_t i = 0; i < GetNy(); i++) {
2338  (*fY) (i, 0) = input->GetBinContent(i + 1);
2339  }
2340  // simple check whether unfolding is possible, given the matrices fA and fV
2341  TMatrixDSparse *mAtV=MultiplyMSparseTranspMSparse(fA,vecV);
2342  DeleteMatrix(&vecV);
2343  Int_t nError2=0;
2344  for (Int_t i = 0; i <mAtV->GetNrows();i++) {
2345  if(mAtV->GetRowIndexArray()[i]==
2346  mAtV->GetRowIndexArray()[i+1]) {
2347  nError2 ++;
2348  }
2349  }
2350  if(nError>0) {
2351  if(oneOverZeroError !=0.0) {
2352  if(nError>1) {
2353  Warning("SetInput","%d/%d input bins have zero error,"
2354  " 1/error set to %lf.",nError,GetNy(),oneOverZeroError);
2355  } else {
2356  Warning("SetInput","One input bin has zero error,"
2357  " 1/error set to %lf.",oneOverZeroError);
2358  }
2359  } else {
2360  if(nError>1) {
2361  Warning("SetInput","%d/%d input bins have zero error,"
2362  " and are ignored.",nError,GetNy());
2363  } else {
2364  Warning("SetInput","One input bin has zero error,"
2365  " and is ignored.");
2366  }
2367  }
2368  fIgnoredBins=nError;
2369  }
2370  if(nError2>0) {
2371  // check whether data points with zero error are responsible
2372  if(oneOverZeroError<=0.0) {
2373  //const Int_t *a_rows=fA->GetRowIndexArray();
2374  //const Int_t *a_cols=fA->GetColIndexArray();
2375  for (Int_t col = 0; col <mAtV->GetNrows();col++) {
2376  if(mAtV->GetRowIndexArray()[col]==
2377  mAtV->GetRowIndexArray()[col+1]) {
2378  TString binlist("no data to constrain output bin ");
2379  binlist += GetOutputBinName(fXToHist[col]);
2380  /* binlist +=" depends on ignored input bins ";
2381  for(Int_t row=0;row<fA->GetNrows();row++) {
2382  if(dataVyyDiag[row]>0.0) continue;
2383  for(Int_t i=a_rows[row];i<a_rows[row+1];i++) {
2384  if(a_cols[i]!=col) continue;
2385  binlist +=" ";
2386  binlist +=row;
2387  }
2388  } */
2389  Warning("SetInput","%s",binlist.Data());
2390  }
2391  }
2392  }
2393  if(nError2>1) {
2394  Error("SetInput","%d/%d output bins are not constrained by any data.",
2395  nError2,mAtV->GetNrows());
2396  } else {
2397  Error("SetInput","One output bin is not constrained by any data.");
2398  }
2399  }
2400  DeleteMatrix(&mAtV);
2401 
2402  delete[] dataVyyDiag;
2403 
2404  return nError+10000*nError2;
2405 }
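A sketch of how the return value could be decoded (histData is an assumed input histogram):

   // pass the measurement and check the status word returned by SetInput()
   Int_t status = unfold.SetInput(histData);
   if(status >= 10000) {
      // fatal: status/10000 output bins are not constrained by any data
   } else if(status > 0) {
      // status input bins have zero error and are ignored here
   }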
2406 
2407 Double_t TUnfold::DoUnfold(Double_t tau)
2408 {
2409  // Unfold with given value of regularisation parameter tau
2410  // tau: new tau parameter
2411  // required data members:
2412  // fA: matrix to relate x and y
2413  // fY: measured data points
2414  // fX0: bias on x
2415  // fBiasScale: scale factor for fX0
2416  // fVyy: covariance matrix for y
2417  // fL: regularisation conditions
2418  // modified data members:
2419  // fTauSquared and those documented in DoUnfold(void)
2420  fTauSquared=tau*tau;
2421  return DoUnfold();
2422 }
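Because SetInput() and DoUnfold(Double_t) are decoupled, several tau values can be tried on the same input; a small sketch:

   // manual scan of a few tau values after a single SetInput() call
   for(Double_t logTau = -5.; logTau <= -2.; logTau += 0.5) {
      unfold.DoUnfold(TMath::Power(10., logTau));
      std::cout << "log10(tau)=" << logTau
                << " log10(chi**2)=" << unfold.GetLcurveX()
                << " log10(reg.cond)=" << unfold.GetLcurveY() << "\n";
   }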
2423 
2424 Int_t TUnfold::ScanLcurve(Int_t nPoint,
2425  Double_t tauMin,Double_t tauMax,
2426  TGraph **lCurve,TSpline **logTauX,
2427  TSpline **logTauY)
2428 {
2429  // scan the L curve
2430  // nPoint: number of points on the resulting curve
2431  // tauMin: smallest tau value to study
2432  // tauMax: largest tau value to study
2433  // lCurve: the L curve as graph
2434  // logTauX: output spline of x-coordinates vs tau for the L curve
2435  // logTauY: output spline of y-coordinates vs tau for the L curve
2436  // return value: the coordinate number (0..nPoint-1) with the "best" choice
2437  // of tau
2438  typedef std::map<Double_t,std::pair<Double_t,Double_t> > XYtau_t;
2439  XYtau_t curve;
2440 
2441  //==========================================================
2442  // algorithm:
2443  // (1) do the unfolding for nPoint-1 points
2444  // and store the results in the map
2445  // curve
2446  // (1a) store minimum and maximum tau to curve
2447  // (1b) insert additional points, until nPoint-1 values
2448  // have been calculated
2449  //
2450  // (2) determine the best choice of tau
2451  // do the unfolding for this point
2452  // and store the result in
2453  // curve
2454  // (3) return the result in
2455  // lCurve logTauX logTauY
2456 
2457  //==========================================================
2458  // (1) do the unfolding for nPoint-1 points
2459  // and store the results in
2460  // curve
2461  // (1a) store minimum and maximum tau to curve
2462 
2463  if((tauMin<=0)||(tauMax<=0.0)||(tauMin>=tauMax)) {
2464  // here no range is given, has to be determined automatically
2465  // the maximum tau is determined from the chi**2 values
2466  // observed from unfolding without regularisation
2467 
2468  // first unfolding, without regularisation
2469  DoUnfold(0.0);
2470 
2471  // if the number of degrees of freedom is too small, create an error
2472  if(GetNdf()<=0) {
2473  Error("ScanLcurve","too few input bins, NDF<=0 %d",GetNdf());
2474  }
2475 
2476  Double_t x0=GetLcurveX();
2477  Double_t y0=GetLcurveY();
2478  Info("ScanLcurve","logtau=-Infinity X=%lf Y=%lf",x0,y0);
2479  if(!TMath::Finite(x0)) {
2480  Fatal("ScanLcurve","problem (too few input bins?) X=%f",x0);
2481  }
2482  if(!TMath::Finite(y0)) {
2483  Fatal("ScanLcurve","problem (missing regularisation?) Y=%f",y0);
2484  }
2485  {
2486  // unfold at a guess of the maximum tau and store the result
2487  Double_t logTau=
2488  0.5*(TMath::Log10(fChi2A+3.*TMath::Sqrt(GetNdf()+1.0))
2489  -GetLcurveY());
2490  DoUnfold(TMath::Power(10.,logTau));
2491  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2492  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2493  GetLcurveX(),GetLcurveY());
2494  }
2495  curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2496  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2497  logTau,GetLcurveX(),GetLcurveY());
2498  }
2499  if((*curve.begin()).second.first<x0) {
2500  // if the point at tau==0 seems numerically unstable,
2501  // try to find the minimum chi**2 as start value
2502  //
2503  // "unstable" means that there is a finite tau where the
2504  // unfolding chi**2 is smaller than for the case of no
2505  // regularisation. Ideally this should never happen
2506  do {
2507  x0=GetLcurveX();
2508  Double_t logTau=(*curve.begin()).first-0.5;
2509  DoUnfold(TMath::Power(10.,logTau));
2510  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2511  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2512  GetLcurveX(),GetLcurveY());
2513  }
2514  curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2515  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2516  logTau,GetLcurveX(),GetLcurveY());
2517  }
2518  while(((int)curve.size()<(nPoint-1)/2)&&
2519  ((*curve.begin()).second.first<x0));
2520  } else {
2521  // minimum tau is chosen such that it is less than
2522  // 1% different from the case of no regularisation
2523  // log10(1.01) = 0.00432
2524 
2525  // here, more than one point are inserted if necessary
2526  while(((int)curve.size()<nPoint-1)&&
2527  (((*curve.begin()).second.first-x0>0.00432)||
2528  ((*curve.begin()).second.second-y0>0.00432)||
2529  (curve.size()<2))) {
2530  Double_t logTau=(*curve.begin()).first-0.5;
2531  DoUnfold(TMath::Power(10.,logTau));
2532  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2533  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2534  GetLcurveX(),GetLcurveY());
2535  }
2536  curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2537  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2538  logTau,GetLcurveX(),GetLcurveY());
2539  }
2540  }
2541  } else {
2542  Double_t logTauMin=TMath::Log10(tauMin);
2543  Double_t logTauMax=TMath::Log10(tauMax);
2544  if(nPoint>1) {
2545  // insert maximum tau
2546  DoUnfold(TMath::Power(10.,logTauMax));
2547  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2548  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2549  GetLcurveX(),GetLcurveY());
2550  }
2551  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2552  logTauMax,GetLcurveX(),GetLcurveY());
2553  curve[logTauMax]=std::make_pair(GetLcurveX(),GetLcurveY());
2554  }
2555  // insert minimum tau
2556  DoUnfold(TMath::Power(10.,logTauMin));
2557  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2558  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2559  GetLcurveX(),GetLcurveY());
2560  }
2561  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2562  logTauMin,GetLcurveX(),GetLcurveY());
2563  curve[logTauMin]=std::make_pair(GetLcurveX(),GetLcurveY());
2564  }
2565 
2566 
2567  //==========================================================
2568  // (1b) insert additional points, until nPoint-1 values
2569  // have been calculated
2570 
2571  while(int(curve.size())<nPoint-1) {
2572  // insert additional points, such that the sizes of the delta(XY) vectors
2573  // are getting smaller and smaller
2574  XYtau_t::const_iterator i0,i1;
2575  i0=curve.begin();
2576  i1=i0;
2577  Double_t logTau=(*i0).first;
2578  Double_t distMax=0.0;
2579  for(i1++;i1!=curve.end();i1++) {
2580  const std::pair<Double_t,Double_t> &xy0=(*i0).second;
2581  const std::pair<Double_t,Double_t> &xy1=(*i1).second;
2582  Double_t dx=xy1.first-xy0.first;
2583  Double_t dy=xy1.second-xy0.second;
2584  Double_t d=TMath::Sqrt(dx*dx+dy*dy);
2585  if(d>=distMax) {
2586  distMax=d;
2587  logTau=0.5*((*i0).first+(*i1).first);
2588  }
2589  i0=i1;
2590  }
2591  DoUnfold(TMath::Power(10.,logTau));
2592  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2593  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2594  GetLcurveX(),GetLcurveY());
2595  }
2596  Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",logTau,GetLcurveX(),GetLcurveY());
2597  curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2598  }
2599 
2600  //==========================================================
2601  // (2) determine the best choice of tau
2602  // do the unfolding for this point
2603  // and store the result in
2604  // curve
2605  XYtau_t::const_iterator i0,i1;
2606  i0=curve.begin();
2607  i1=i0;
2608  i1++;
2609  Double_t logTauFin=(*i0).first;
2610  if( ((int)curve.size())<nPoint) {
2611  // set up splines and determine (x,y) curvature in each point
2612  Double_t *cTi=new Double_t[curve.size()-1];
2613  Double_t *cCi=new Double_t[curve.size()-1];
2614  Int_t n=0;
2615  {
2616  Double_t *lXi=new Double_t[curve.size()];
2617  Double_t *lYi=new Double_t[curve.size()];
2618  Double_t *lTi=new Double_t[curve.size()];
2619  for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2620  lXi[n]=(*i).second.first;
2621  lYi[n]=(*i).second.second;
2622  lTi[n]=(*i).first;
2623  n++;
2624  }
2625  TSpline3 *splineX=new TSpline3("x vs tau",lTi,lXi,n);
2626  TSpline3 *splineY=new TSpline3("y vs tau",lTi,lYi,n);
2627  // calculate (x,y) curvature for all points
2628  // the curvature is stored in the array cCi[] as a function of cTi[]
2629  for(Int_t i=0;i<n-1;i++) {
2630  Double_t ltau,xy,bi,ci,di;
2631  splineX->GetCoeff(i,ltau,xy,bi,ci,di);
2632  Double_t tauBar=0.5*(lTi[i]+lTi[i+1]);
2633  Double_t dTau=0.5*(lTi[i+1]-lTi[i]);
2634  Double_t dx1=bi+dTau*(2.*ci+3.*di*dTau);
2635  Double_t dx2=2.*ci+6.*di*dTau;
2636  splineY->GetCoeff(i,ltau,xy,bi,ci,di);
2637  Double_t dy1=bi+dTau*(2.*ci+3.*di*dTau);
2638  Double_t dy2=2.*ci+6.*di*dTau;
2639  cTi[i]=tauBar;
2640  cCi[i]=(dy2*dx1-dy1*dx2)/TMath::Power(dx1*dx1+dy1*dy1,1.5);
2641  }
2642  delete splineX;
2643  delete splineY;
2644  delete[] lXi;
2645  delete[] lYi;
2646  delete[] lTi;
2647  }
2648  // create curvature Spline
2649  TSpline3 *splineC=new TSpline3("L curve curvature",cTi,cCi,n-1);
2650  // find the maximum of the curvature
2651  // if the parameter iskip is non-zero, then iskip points are
2652  // ignored when looking for the largest curvature
2653  // (there are problems with the curvature determined from the first
2654  // few points of splineX,splineY in the algorithm above)
2655  Int_t iskip=0;
2656  if(n>4) iskip=1;
2657  if(n>7) iskip=2;
2658  Double_t cCmax=cCi[iskip];
2659  Double_t cTmax=cTi[iskip];
2660  for(Int_t i=iskip;i<n-2-iskip;i++) {
2661  // find maximum on this spline section
2662  // check boundary conditions for x[i+1]
2663  Double_t xMax=cTi[i+1];
2664  Double_t yMax=cCi[i+1];
2665  if(cCi[i]>yMax) {
2666  yMax=cCi[i];
2667  xMax=cTi[i];
2668  }
2669  // find maximum for x[i]<x<x[i+1]
2670  // get spline coefficients and solve equation
2671  // derivative(x)==0
2672  Double_t x,y,b,c,d;
2673  splineC->GetCoeff(i,x,y,b,c,d);
2674  // coefficients of quadratic equation
2675  Double_t m_p_half=-c/(3.*d);
2676  Double_t q=b/(3.*d);
2677  Double_t discr=m_p_half*m_p_half-q;
2678  if(discr>=0.0) {
2679  // solution found
2680  discr=TMath::Sqrt(discr);
2681  Double_t xx;
2682  if(m_p_half>0.0) {
2683  xx = m_p_half + discr;
2684  } else {
2685  xx = m_p_half - discr;
2686  }
2687  Double_t dx=cTi[i+1]-x;
2688  // check first solution
2689  if((xx>0.0)&&(xx<dx)) {
2690  y=splineC->Eval(x+xx);
2691  if(y>yMax) {
2692  yMax=y;
2693  xMax=x+xx;
2694  }
2695  }
2696  // second solution
2697  if(xx !=0.0) {
2698  xx= q/xx;
2699  } else {
2700  xx=0.0;
2701  }
2702  // check second solution
2703  if((xx>0.0)&&(xx<dx)) {
2704  y=splineC->Eval(x+xx);
2705  if(y>yMax) {
2706  yMax=y;
2707  xMax=x+xx;
2708  }
2709  }
2710  }
2711  // check whether this local maximum is the global maximum
2712  if(yMax>cCmax) {
2713  cCmax=yMax;
2714  cTmax=xMax;
2715  }
2716  }
2717 #ifdef DEBUG_LCURVE
2718  {
2719  TCanvas lcc;
2720  lcc.Divide(1,1);
2721  lcc.cd(1);
2722  splineC->Draw();
2723  lcc.SaveAs("splinec.ps");
2724  }
2725 #endif
2726  delete splineC;
2727  delete[] cTi;
2728  delete[] cCi;
2729  logTauFin=cTmax;
2730  DoUnfold(TMath::Power(10.,logTauFin));
2731  if((!TMath::Finite(GetLcurveX())) ||(!TMath::Finite(GetLcurveY()))) {
2732  Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2733  GetLcurveX(),GetLcurveY());
2734  }
2735  Info("ScanLcurve","Result logtau=%lf X=%lf Y=%lf",
2736  logTauFin,GetLcurveX(),GetLcurveY());
2737  curve[logTauFin]=std::make_pair(GetLcurveX(),GetLcurveY());
2738  }
2739 
2740 
2741  //==========================================================
2742  // (3) return the result in
2743  // lCurve logTauX logTauY
2744 
2745  Int_t bestChoice=-1;
2746  if(curve.size()>0) {
2747  Double_t *x=new Double_t[curve.size()];
2748  Double_t *y=new Double_t[curve.size()];
2749  Double_t *logT=new Double_t[curve.size()];
2750  int n=0;
2751  for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2752  if(logTauFin==(*i).first) {
2753  bestChoice=n;
2754  }
2755  x[n]=(*i).second.first;
2756  y[n]=(*i).second.second;
2757  logT[n]=(*i).first;
2758  n++;
2759  }
2760  if(lCurve) {
2761  (*lCurve)=new TGraph(n,x,y);
2762  (*lCurve)->SetTitle("L curve");
2763  }
2764  if(logTauX) (*logTauX)=new TSpline3("log(chi**2)%log(tau)",logT,x,n);
2765  if(logTauY) (*logTauY)=new TSpline3("log(reg.cond)%log(tau)",logT,y,n);
2766  delete[] x;
2767  delete[] y;
2768  delete[] logT;
2769  }
2770 
2771  return bestChoice;
2772 }
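A usage sketch of the scan (the number of points is an assumption; tauMin=tauMax=0 requests the automatic range described above):

   TGraph  *lCurve  = 0;
   TSpline *logTauX = 0, *logTauY = 0;
   Int_t iBest = unfold.ScanLcurve(30, 0., 0., &lCurve, &logTauX, &logTauY);
   // iBest is the index of the chosen point; the chosen tau is GetTau()
   std::cout << "best point " << iBest << " tau=" << unfold.GetTau() << "\n";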
2773 
2774 void TUnfold::GetNormalisationVector(TH1 *out,const Int_t *binMap) const
2775 {
2776  // get vector of normalisation factors
2777  // out: output histogram
2778  // binMap: for each bin of the original output distribution
2779  // specify the destination bin. A value of -1 means that the bin
2780  // is discarded. 0 means underflow bin, 1 first bin, ...
2781  // binMap[0] : destination of underflow bin
2782  // binMap[1] : destination of first bin
2783  // ...
2784 
2785  ClearHistogram(out);
2786  for (Int_t i = 0; i < GetNx(); i++) {
2787  int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2788  if(dest>=0) {
2789  out->SetBinContent(dest, fSumOverY[i] + out->GetBinContent(dest));
2790  }
2791  }
2792 }
2793 
2794 void TUnfold::GetBias(TH1 *out,const Int_t *binMap) const
2795 {
2796  // get bias distribution, possibly with bin remapping
2797  // out: output histogram
2798  // binMap: for each bin of the original output distribution
2799  // specify the destination bin. A value of -1 means that the bin
2800  // is discarded. 0 means underflow bin, 1 first bin, ...
2801  // binMap[0] : destination of underflow bin
2802  // binMap[1] : destination of first bin
2803  // ...
2804 
2805  ClearHistogram(out);
2806  for (Int_t i = 0; i < GetNx(); i++) {
2807  int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2808  if(dest>=0) {
2809  out->SetBinContent(dest, fBiasScale*(*fX0) (i, 0) +
2810  out->GetBinContent(dest));
2811  }
2812  }
2813 }
2814 
2815 void TUnfold::GetFoldedOutput(TH1 *out,const Int_t *binMap) const
2816 {
2817  // get unfolding result, folded back through the matrix
2818  // out: output histogram
2819  // binMap: for each bin of the original output distribution
2820  // specify the destination bin. A value of -1 means that the bin
2821  // is discarded. 0 means underflow bin, 1 first bin, ...
2822  // binMap[0] : destination of underflow bin
2823  // binMap[1] : destination of first bin
2824  // ...
2825 
2826  ClearHistogram(out);
2827 
2828  TMatrixDSparse *AVxx=MultiplyMSparseMSparse(fA,fVxx);
2829 
2830  const Int_t *rows_A=fA->GetRowIndexArray();
2831  const Int_t *cols_A=fA->GetColIndexArray();
2832  const Double_t *data_A=fA->GetMatrixArray();
2833  const Int_t *rows_AVxx=AVxx->GetRowIndexArray();
2834  const Int_t *cols_AVxx=AVxx->GetColIndexArray();
2835  const Double_t *data_AVxx=AVxx->GetMatrixArray();
2836 
2837  for (Int_t i = 0; i < GetNy(); i++) {
2838  Int_t destI = binMap ? binMap[i+1] : i+1;
2839  if(destI<0) continue;
2840 
2841  out->SetBinContent(destI, (*fAx) (i, 0)+ out->GetBinContent(destI));
2842  Double_t e2=0.0;
2843  Int_t index_a=rows_A[i];
2844  Int_t index_av=rows_AVxx[i];
2845  while((index_a<rows_A[i+1])&&(index_av<rows_AVxx[i+1])) {
2846  Int_t j_a=cols_A[index_a];
2847  Int_t j_av=cols_AVxx[index_av];
2848  if(j_a<j_av) {
2849  index_a++;
2850  } else if(j_a>j_av) {
2851  index_av++;
2852  } else {
2853  e2 += data_AVxx[index_av] * data_A[index_a];
2854  index_a++;
2855  index_av++;
2856  }
2857  }
2858  out->SetBinError(destI,TMath::Sqrt(e2));
2859  }
2860  DeleteMatrix(&AVxx);
2861 }
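A closure-test sketch (histFolded is an assumed histogram with the detector-level binning):

   // fold the unfolded result back to detector level and compare it,
   // bin by bin, with the input histogram
   unfold.GetFoldedOutput(histFolded);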
2862 
2863 void TUnfold::GetProbabilityMatrix(TH2 *A,EHistMap histmap) const
2864 {
2865  // retrieve matrix of probabilities
2866  // histmap: on which axis to arrange the input/output vector
2867  // A: histogram to store the probability matrix
2868  const Int_t *rows_A=fA->GetRowIndexArray();
2869  const Int_t *cols_A=fA->GetColIndexArray();
2870  const Double_t *data_A=fA->GetMatrixArray();
2871  for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
2872  for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
2873  Int_t ix = cols_A[indexA];
2874  Int_t ih=fXToHist[ix];
2875  if (histmap == kHistMapOutputHoriz) {
2876  A->SetBinContent(ih, iy,data_A[indexA]);
2877  } else {
2878  A->SetBinContent(iy, ih,data_A[indexA]);
2879  }
2880  }
2881  }
2882 }
2883 
2884 void TUnfold::GetInput(TH1 *out,const Int_t *binMap) const
2885 {
2886  // retrieve input distribution
2887  // out: output histogram
2888  // binMap: for each bin of the original output distribution
2889  // specify the destination bin. A value of -1 means that the bin
2890  // is discarded. 0 means underflow bin, 1 first bin, ...
2891  // binMap[0] : destination of underflow bin
2892  // binMap[1] : destination of first bin
2893  // ...
2894 
2895  ClearHistogram(out);
2896 
2897  const Int_t *rows_Vyy=fVyy->GetRowIndexArray();
2898  const Int_t *cols_Vyy=fVyy->GetColIndexArray();
2899  const Double_t *data_Vyy=fVyy->GetMatrixArray();
2900 
2901  for (Int_t i = 0; i < GetNy(); i++) {
2902  Int_t destI=binMap ? binMap[i] : i;
2903  if(destI<0) continue;
2904 
2905  out->SetBinContent(destI, (*fY) (i, 0)+out->GetBinContent(destI));
2906 
2907  Double_t e=0.0;
2908  for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
2909  if(cols_Vyy[index]==i) {
2910  e=TMath::Sqrt(data_Vyy[index]);
2911  }
2912  }
2913  out->SetBinError(destI, e);
2914  }
2915 }
2916 
2917 void TUnfold::GetInputInverseEmatrix(TH2 *out)
2918 {
2919  // calculate the inverse of the contribution to the error matrix
2920  // corresponding to the input data
2921  if(!fVyyInv) {
2922  Int_t rank=0;
2923  fVyyInv=InvertMSparseSymmPos(fVyy,&rank);
2924  // and count number of degrees of freedom
2925  fNdf = rank-GetNpar();
2926 
2927  if(rank<GetNy()-fIgnoredBins) {
2928  Warning("GetInputInverseEmatrix","input covariance matrix has rank %d, expected %d",
2929  rank,GetNy());
2930  }
2931  if(fNdf<0) {
2932  Error("GetInputInverseEmatrix","number of parameters %d > %d (rank of input covariance). Problem can not be solved",GetNpar(),rank);
2933  } else if(fNdf==0) {
2934  Warning("GetInputInverseEmatrix","number of parameters %d = input rank %d. Problem is ill posed",GetNpar(),rank);
2935  }
2936  }
2937  if(out) {
2938  // return matrix as histogram
2939  const Int_t *rows_Vyy=fVyy->GetRowIndexArray();
2940  const Int_t *cols_Vyy=fVyy->GetColIndexArray();
2941  const Double_t *data_Vyy=fVyy->GetMatrixArray();
2942 
2943  for(int i=0;i<=out->GetNbinsX()+1;i++) {
2944  for(int j=0;j<=out->GetNbinsY()+1;j++) {
2945  out->SetBinContent(i,j,0.);
2946  }
2947  }
2948 
2949  for (Int_t i = 0; i < fVyy->GetNrows(); i++) {
2950  for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
2951  Int_t j=cols_Vyy[index];
2952  out->SetBinContent(i+1,j+1,data_Vyy[index]);
2953  }
2954  }
2955  }
2956 }
2957 
2958 void TUnfold::GetLsquared(TH2 *out) const
2959 {
2960  // retrieve matrix of regularisation conditions squared
2961  // out: prebooked matrix
2962 
2963  TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
2964  // loop over sparse matrix
2965  const Int_t *rows=lSquared->GetRowIndexArray();
2966  const Int_t *cols=lSquared->GetColIndexArray();
2967  const Double_t *data=lSquared->GetMatrixArray();
2968  for (Int_t i = 0; i < GetNx(); i++) {
2969  for (Int_t cindex = rows[i]; cindex < rows[i+1]; cindex++) {
2970  Int_t j=cols[cindex];
2971  out->SetBinContent(fXToHist[i], fXToHist[j],data[cindex]);
2972  }
2973  }
2974  DeleteMatrix(&lSquared);
2975 }
2976 
2977 void TUnfold::GetL(TH2 *out) const
2978 {
2979  // retrieve matrix of regularisation conditions
2980  // out: prebooked matrix
2981 
2982  // loop over sparse matrix
2983  const Int_t *rows=fL->GetRowIndexArray();
2984  const Int_t *cols=fL->GetColIndexArray();
2985  const Double_t *data=fL->GetMatrixArray();
2986  for (Int_t row = 0; row < GetNr(); row++) {
2987  for (Int_t cindex = rows[row]; cindex < rows[row+1]; cindex++) {
2988  Int_t col=cols[cindex];
2989  Int_t indexH=fXToHist[col];
2990  out->SetBinContent(indexH,row+1,data[cindex]);
2991  }
2992  }
2993 }
2994 
2995 void TUnfold::SetConstraint(EConstraint constraint)
2996 {
2997  // set type of constraint for the next unfolding
2998  if(fConstraint !=constraint) ClearResults();
2999  fConstraint=constraint;
3000  Info("SetConstraint","fConstraint=%d",fConstraint);
3001 }
3002 
3003 Double_t TUnfold::GetTau(void) const
3004 {
3005  // return regularisation parameter
3006  return TMath::Sqrt(fTauSquared);
3007 }
3008 
3009 Double_t TUnfold::GetChi2L(void) const
3010 {
3011  // return chi**2 contribution from regularisation conditions
3012  return fLXsquared*fTauSquared;
3013 }
3014 
3015 Int_t TUnfold::GetNpar(void) const
3016 {
3017  // return number of parameters
3018  return GetNx();
3019 }
3020 
3021 Double_t TUnfold::GetLcurveX(void) const
3022 {
3023  // return value on x axis of L curve
3024  return TMath::Log10(fChi2A);
3025 }
3026 
3027 Double_t TUnfold::GetLcurveY(void) const
3028 {
3029  // return value on y axis of L curve
3030  return TMath::Log10(fLXsquared);
3031 }
3032 
3033 void TUnfold::GetOutput(TH1 *output,const Int_t *binMap) const
3034 {
3035  // get output distribution, cumulated over several bins
3036  // output: output histogram
3037  // binMap: for each bin of the original output distribution
3038  // specify the destination bin. A value of -1 means that the bin
3039  // is discarded. 0 means underflow bin, 1 first bin, ...
3040  // binMap[0] : destination of underflow bin
3041  // binMap[1] : destination of first bin
3042  // ...
3043 
3044  ClearHistogram(output);
3045  /* Int_t nbin=output->GetNbinsX();
3046  Double_t *c=new Double_t[nbin+2];
3047  Double_t *e2=new Double_t[nbin+2];
3048  for(Int_t i=0;i<nbin+2;i++) {
3049  c[i]=0.0;
3050  e2[i]=0.0;
3051  } */
3052 
3053  std::map<Int_t,Double_t> e2;
3054 
3055  const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3056  const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3057  const Double_t *data_Vxx=fVxx->GetMatrixArray();
3058 
3059  Int_t binMapSize = fHistToX.GetSize();
3060  for(Int_t i=0;i<binMapSize;i++) {
3061  Int_t destBinI=binMap ? binMap[i] : i; // histogram bin
3062  Int_t srcBinI=fHistToX[i]; // matrix row index
3063  if((destBinI>=0)&&(srcBinI>=0)) {
3064  output->SetBinContent
3065  (destBinI, (*fX)(srcBinI,0)+ output->GetBinContent(destBinI));
3066  // here we loop over the columns of the error matrix
3067  // j: counts histogram bins
3068  // index: counts sparse matrix index
3069  // the algorithm makes use of the fact that fHistToX is ordered
3070  Int_t j=0; // histogram bin
3071  Int_t index_vxx=rows_Vxx[srcBinI];
3072  //Double_t e2=TMath::Power(output->GetBinError(destBinI),2.);
3073  while((j<binMapSize)&&(index_vxx<rows_Vxx[srcBinI+1])) {
3074  Int_t destBinJ=binMap ? binMap[j] : j; // histogram bin
3075  if(destBinI!=destBinJ) {
3076  // only diagonal elements are calculated
3077  j++;
3078  } else {
3079  Int_t srcBinJ=fHistToX[j]; // matrix column index
3080  if(srcBinJ<0) {
3081  // bin was not unfolded, check next bin
3082  j++;
3083  } else {
3084  if(cols_Vxx[index_vxx]<srcBinJ) {
3085  // index is too small
3086  index_vxx++;
3087  } else if(cols_Vxx[index_vxx]>srcBinJ) {
3088  // index is too large, skip bin
3089  j++;
3090  } else {
3091  // add this bin
3092  e2[destBinI] += data_Vxx[index_vxx];
3093  j++;
3094  index_vxx++;
3095  }
3096  }
3097  }
3098  }
3099  //output->SetBinError(destBinI,TMath::Sqrt(e2));
3100  }
3101  }
3102  for(std::map<Int_t,Double_t>::const_iterator i=e2.begin();
3103  i!=e2.end();i++) {
3104  //cout<<(*i).first<<" "<<(*i).second<<"\n";
3105  output->SetBinError((*i).first,TMath::Sqrt((*i).second));
3106  }
3107 }
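Two retrieval sketches (histUnfolded, histCoarse and the bin map contents are assumptions):

   // fill the result into histUnfolded, which has the truth binning of hist_A
   unfold.GetOutput(histUnfolded);

   // optional bin map for 4 truth bins: the index runs over underflow,1..4,overflow;
   // truth bins 1,2 are merged into output bin 1, bins 3,4 into output bin 2,
   // underflow and overflow are dropped (-1)
   Int_t binMap[6] = { -1, 1, 1, 2, 2, -1 };
   unfold.GetOutput(histCoarse, binMap);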
3108 
3109 void TUnfold::ErrorMatrixToHist(TH2 *ematrix,const TMatrixDSparse *emat,
3110  const Int_t *binMap,Bool_t doClear) const
3111 {
3112  // get an error matrix, cumulated over several bins
3113  // ematrix: output error matrix histogram
3114  // emat: error matrix
3115  // binMap: for each bin of the original output distribution
3116  // specify the destination bin. A value of -1 means that the bin
3117  // is discarded. 0 means underflow bin, 1 first bin, ...
3118  // binMap[0] : destination of underflow bin
3119  // binMap[1] : destination of first bin
3120  // ...
3121  Int_t nbin=ematrix->GetNbinsX();
3122  if(doClear) {
3123  for(Int_t i=0;i<nbin+2;i++) {
3124  for(Int_t j=0;j<nbin+2;j++) {
3125  ematrix->SetBinContent(i,j,0.0);
3126  ematrix->SetBinError(i,j,0.0);
3127  }
3128  }
3129  }
3130 
3131  if(emat) {
3132  const Int_t *rows_emat=emat->GetRowIndexArray();
3133  const Int_t *cols_emat=emat->GetColIndexArray();
3134  const Double_t *data_emat=emat->GetMatrixArray();
3135 
3136  Int_t binMapSize = fHistToX.GetSize();
3137  for(Int_t i=0;i<binMapSize;i++) {
3138  Int_t destBinI=binMap ? binMap[i] : i;
3139  Int_t srcBinI=fHistToX[i];
3140  if((destBinI>=0)&&(destBinI<nbin+2)&&(srcBinI>=0)) {
3141  // here we loop over the columns of the source matrix
3142  // j: counts histogram bins
3143  // index: counts sparse matrix index
3144  // the algorithm makes use of the fact that fHistToX is ordered
3145  Int_t j=0;
3146  Int_t index_vxx=rows_emat[srcBinI];
3147  while((j<binMapSize)&&(index_vxx<rows_emat[srcBinI+1])) {
3148  Int_t destBinJ=binMap ? binMap[j] : j;
3149  Int_t srcBinJ=fHistToX[j];
3150  if((destBinJ<0)||(destBinJ>=nbin+2)||(srcBinJ<0)) {
3151  // check next bin
3152  j++;
3153  } else {
3154  if(cols_emat[index_vxx]<srcBinJ) {
3155  // index is too small
3156  index_vxx++;
3157  } else if(cols_emat[index_vxx]>srcBinJ) {
3158  // index is too large, skip bin
3159  j++;
3160  } else {
3161  // add this bin
3162  Double_t e2= ematrix->GetBinContent(destBinI,destBinJ)
3163  + data_emat[index_vxx];
3164  ematrix->SetBinContent(destBinI,destBinJ,e2);
3165  j++;
3166  index_vxx++;
3167  }
3168  }
3169  }
3170  }
3171  }
3172  }
3173 }
3174 
3175 void TUnfold::GetEmatrix(TH2 *ematrix,const Int_t *binMap) const
3176 {
3177  // get output error matrix, cumulated over several bins
3178  // ematrix: output error matrix histogram
3179  // binMap: for each bin of the original output distribution
3180  // specify the destination bin. A value of -1 means that the bin
3181  // is discarded. 0 means underflow bin, 1 first bin, ...
3182  // binMap[0] : destination of underflow bin
3183  // binMap[1] : destination of first bin
3184  // ...
3185  ErrorMatrixToHist(ematrix,fVxx,binMap,kTRUE);
3186 }
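A sketch (histEmat is an assumed TH2 with the truth binning on both axes):

   // covariance matrix of the unfolded result
   unfold.GetEmatrix(histEmat);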
3187 
3188 void TUnfold::GetRhoIJ(TH2 *rhoij,const Int_t *binMap) const
3189 {
3190  // get correlation coefficient matrix, cumulated over several bins
3191  // rhoij: correlation coefficient matrix histogram
3192  // binMap: for each bin of the original output distribution
3193  // specify the destination bin. A value of -1 means that the bin
3194  // is discarded. 0 means underflow bin, 1 first bin, ...
3195  // binMap[0] : destination of underflow bin
3196  // binMap[1] : destination of first bin
3197  // ...
3198  GetEmatrix(rhoij,binMap);
3199  Int_t nbin=rhoij->GetNbinsX();
3200  Double_t *e=new Double_t[nbin+2];
3201  for(Int_t i=0;i<nbin+2;i++) {
3202  e[i]=TMath::Sqrt(rhoij->GetBinContent(i,i));
3203  }
3204  for(Int_t i=0;i<nbin+2;i++) {
3205  for(Int_t j=0;j<nbin+2;j++) {
3206  if((e[i]>0.0)&&(e[j]>0.0)) {
3207  rhoij->SetBinContent(i,j,rhoij->GetBinContent(i,j)/e[i]/e[j]);
3208  } else {
3209  rhoij->SetBinContent(i,j,0.0);
3210  }
3211  }
3212  }
3213  delete[] e;
3214 }
3215 
3216 Double_t TUnfold::GetRhoI(TH1 *rhoi,const Int_t *binMap,TH2 *invEmat) const
3217 {
3218  // get global correlation coefficients with arbitrary bin map
3219  // rhoi: global correlation histogram
3220  // binMap: for each bin of the original output distribution
3221  // specify the destination bin. A value of -1 means that the bin
3222  // is discarded. 0 means underflow bin, 1 first bin, ...
3223  // binMap[0] : destination of underflow bin
3224  // binMap[1] : destination of first bin
3225  // ...
3226  // return value: maximum global correlation
3227 
3228  ClearHistogram(rhoi,-1.);
3229 
3230  if(binMap) {
3231  // if there is a bin map, the matrix needs to be inverted
3232  // otherwise, use the existing inverse, such that
3233  // no matrix inversion is needed
3234  return GetRhoIFromMatrix(rhoi,fVxx,binMap,invEmat);
3235  } else {
3236  Double_t rhoMax=0.0;
3237 
3238  const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3239  const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3240  const Double_t *data_Vxx=fVxx->GetMatrixArray();
3241 
3242  const Int_t *rows_VxxInv=fVxxInv->GetRowIndexArray();
3243  const Int_t *cols_VxxInv=fVxxInv->GetColIndexArray();
3244  const Double_t *data_VxxInv=fVxxInv->GetMatrixArray();
3245 
3246  for(Int_t i=0;i<GetNx();i++) {
3247  Int_t destI=fXToHist[i];
3248  Double_t e_ii=0.0,einv_ii=0.0;
3249  for(int index_vxx=rows_Vxx[i];index_vxx<rows_Vxx[i+1];index_vxx++) {
3250  if(cols_Vxx[index_vxx]==i) {
3251  e_ii=data_Vxx[index_vxx];
3252  break;
3253  }
3254  }
3255  for(int index_vxxinv=rows_VxxInv[i];index_vxxinv<rows_VxxInv[i+1];
3256  index_vxxinv++) {
3257  if(cols_VxxInv[index_vxxinv]==i) {
3258  einv_ii=data_VxxInv[index_vxxinv];
3259  break;
3260  }
3261  }
3262  Double_t rho=1.0;
3263  if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3264  if(rho>=0.0) rho=TMath::Sqrt(rho);
3265  else rho=-TMath::Sqrt(-rho);
3266  if(rho>rhoMax) {
3267  rhoMax = rho;
3268  }
3269  rhoi->SetBinContent(destI,rho);
3270  }
3271  return rhoMax;
3272  }
3273 }
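A sketch (histRho is an assumed TH1 with the truth binning):

   // global correlation coefficients; the return value is their maximum
   Double_t rhoMax = unfold.GetRhoI(histRho);
   std::cout << "maximum global correlation " << rhoMax << "\n";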
3274 
3275 Double_t TUnfold::GetRhoIFromMatrix(TH1 *rhoi,const TMatrixDSparse *eOrig,
3276  const Int_t *binMap,TH2 *invEmat) const
3277 {
3278  // get global correlation coefficients with arbitrary bin map
3279  // rhoi: global correlation histogram
3280  // emat: error matrix
3281  // binMap: for each bin of the original output distribution
3282  // specify the destination bin. A value of -1 means that the bin
3283  // is discarded. 0 means underflow bin, 1 first bin, ...
3284  // binMap[0] : destination of underflow bin
3285  // binMap[1] : destination of first bin
3286  // ...
3287  // return value: maximum global correlation
3288 
3289  Double_t rhoMax=0.;
3290  // original number of bins:
3291  // fHistToX.GetSize()
3292  // loop over binMap and find number of used bins
3293 
3294  Int_t binMapSize = fHistToX.GetSize();
3295 
3296  // histToLocalBin[iBin] points to a local index
3297  // only bins iBin of the histogram rhoi which are referenced
3298  // in the bin map have a local index
3299  std::map<int,int> histToLocalBin;
3300  Int_t nBin=0;
3301  for(Int_t i=0;i<binMapSize;i++) {
3302  Int_t mapped_i=binMap ? binMap[i] : i;
3303  if(mapped_i>=0) {
3304  if(histToLocalBin.find(mapped_i)==histToLocalBin.end()) {
3305  histToLocalBin[mapped_i]=nBin;
3306  nBin++;
3307  }
3308  }
3309  }
3310  // number of local indices: nBin
3311  if(nBin>0) {
3312  // construct inverse mapping function
3313  // local index -> histogram bin
3314  Int_t *localBinToHist=new Int_t[nBin];
3315  for(std::map<int,int>::const_iterator i=histToLocalBin.begin();
3316  i!=histToLocalBin.end();i++) {
3317  localBinToHist[(*i).second]=(*i).first;
3318  }
3319 
3320  const Int_t *rows_eOrig=eOrig->GetRowIndexArray();
3321  const Int_t *cols_eOrig=eOrig->GetColIndexArray();
3322  const Double_t *data_eOrig=eOrig->GetMatrixArray();
3323 
3324  // remap error matrix
3325  // matrix row i -> origI (fXToHist[i])
3326  // origI -> destI (binMap)
3327  // destI -> ematBinI (histToLocalBin)
3328  TMatrixD eRemap(nBin,nBin);
3329  // i: row of the matrix eOrig
3330  for(Int_t i=0;i<GetNx();i++) {
3331  // origI: pointer in output histogram with all bins
3332  Int_t origI=fXToHist[i];
3333  // destI: pointer in the histogram rhoi
3334  Int_t destI=binMap ? binMap[origI] : origI;
3335  if(destI<0) continue;
3336  Int_t ematBinI=histToLocalBin[destI];
3337  for(int index_eOrig=rows_eOrig[i];index_eOrig<rows_eOrig[i+1];
3338  index_eOrig++) {
3339  // j: column of the matrix fVxx
3340  Int_t j=cols_eOrig[index_eOrig];
3341  // origJ: pointer in output histogram with all bins
3342  Int_t origJ=fXToHist[j];
3343  // destJ: pointer in the histogram rhoi
3344  Int_t destJ=binMap ? binMap[origJ] : origJ;
3345  if(destJ<0) continue;
3346  Int_t ematBinJ=histToLocalBin[destJ];
3347  eRemap(ematBinI,ematBinJ) += data_eOrig[index_eOrig];
3348  }
3349  }
3350  // invert remapped error matrix
3351  TMatrixDSparse eSparse(eRemap);
3352  Int_t rank=0;
3353  TMatrixDSparse *einvSparse=InvertMSparseSymmPos(&eSparse,&rank);
3354  if(rank!=einvSparse->GetNrows()) {
3355  Warning("GetRhoIFromMatrix","Covariance matrix has rank %d, expected %d",
3356  rank,einvSparse->GetNrows());
3357  }
3358  // fill to histogram
3359  const Int_t *rows_eInv=einvSparse->GetRowIndexArray();
3360  const Int_t *cols_eInv=einvSparse->GetColIndexArray();
3361  const Double_t *data_eInv=einvSparse->GetMatrixArray();
3362 
3363  for(Int_t i=0;i<einvSparse->GetNrows();i++) {
3364  Double_t e_ii=eRemap(i,i);
3365  Double_t einv_ii=0.0;
3366  for(Int_t index_einv=rows_eInv[i];index_einv<rows_eInv[i+1];
3367  index_einv++) {
3368  Int_t j=cols_eInv[index_einv];
3369  if(j==i) {
3370  einv_ii=data_eInv[index_einv];
3371  }
3372  if(invEmat) {
3373  invEmat->SetBinContent(localBinToHist[i],localBinToHist[j],
3374  data_eInv[index_einv]);
3375  }
3376  }
3377  Double_t rho=1.0;
3378  if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3379  if(rho>=0.0) rho=TMath::Sqrt(rho);
3380  else rho=-TMath::Sqrt(-rho);
3381  if(rho>rhoMax) {
3382  rhoMax = rho;
3383  }
3384  //std::cout<<i<<" "<<localBinToHist[i]<<" "<<rho<<"\n";
3385  rhoi->SetBinContent(localBinToHist[i],rho);
3386  }
3387 
3388  DeleteMatrix(&einvSparse);
3389  delete [] localBinToHist;
3390  }
3391  return rhoMax;
3392 }
3393 
3394 void TUnfold::ClearHistogram(TH1 *h,Double_t x) const
3395 {
3396  // clear histogram contents and error
3397  Int_t nxyz[3];
3398  nxyz[0]=h->GetNbinsX()+1;
3399  nxyz[1]=h->GetNbinsY()+1;
3400  nxyz[2]=h->GetNbinsZ()+1;
3401  for(int i=h->GetDimension();i<3;i++) nxyz[i]=0;
3402  Int_t ixyz[3];
3403  for(int i=0;i<3;i++) ixyz[i]=0;
3404  while((ixyz[0]<=nxyz[0])&&
3405  (ixyz[1]<=nxyz[1])&&
3406  (ixyz[2]<=nxyz[2])){
3407  Int_t ibin=h->GetBin(ixyz[0],ixyz[1],ixyz[2]);
3408  h->SetBinContent(ibin,x);
3409  h->SetBinError(ibin,0.0);
3410  for(Int_t i=0;i<3;i++) {
3411  ixyz[i] += 1;
3412  if(ixyz[i]<=nxyz[i]) break;
3413  if(i<2) ixyz[i]=0;
3414  }
3415  }
3416 }
3417 
3418 void TUnfold::SetEpsMatrix(Double_t eps) {
3419  // set accuracy for matrix inversion
3420  if((eps>0.0)&&(eps<1.0)) fEpsMatrix=eps;
3421 }
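A sketch (the value is an assumption; only values between 0 and 1 are accepted):

   // tighten the relative threshold used for the matrix inversions
   unfold.SetEpsMatrix(1.e-13);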
const int nx
Definition: kalman.C:16
const TVectorD & GetEigenValues() const
tuple row
Definition: mrt.py:26
void SetBias(const TH1 *bias)
Definition: TUnfold.cxx:1961
EConstraint fConstraint
Definition: TUnfold.h:125
static double B[]
static void DeleteMatrix(TMatrixD **m)
Definition: TUnfold.cxx:333
TMatrixDSparse * CreateSparseMatrix(Int_t nrow, Int_t ncol, Int_t nele, Int_t *row, Int_t *col, Double_t *data) const
Definition: TUnfold.cxx:698
virtual Double_t GetBinContent(Int_t bin) const
Return content of bin number bin.
Definition: TH1.cxx:4629
TMatrixDSparse * InvertMSparseSymmPos(const TMatrixDSparse *A, Int_t *rank) const
Definition: TUnfold.cxx:1064
TUnfold(void)
Definition: TUnfold.cxx:372
virtual Int_t ScanLcurve(Int_t nPoint, Double_t tauMin, Double_t tauMax, TGraph **lCurve, TSpline **logTauX=0, TSpline **logTauY=0)
Definition: TUnfold.cxx:2424
void GetOutput(TH1 *output, const Int_t *binMap=0) const
Definition: TUnfold.cxx:3033
void Fatal(const char *location, const char *msgfmt,...)
virtual const Element * GetMatrixArray() const
return c
Int_t fNdf
Definition: TUnfold.h:139
virtual Int_t GetDimension() const
Definition: TH1.h:283
TArrayI fHistToX
Definition: TUnfold.h:123
tuple f2
Definition: surfaces.py:24
TH1 * h
Definition: legend2.C:5
Base class for spline implementation containing the Draw/Paint methods //.
Definition: TSpline.h:22
virtual void Info(const char *method, const char *msgfmt,...) const
Issue info message.
Definition: TObject.cxx:892
TVirtualPad * cd(Int_t subpadnumber=0)
Set current canvas & pad.
Definition: TCanvas.cxx:659
TMatrixD * fX
Definition: TUnfold.h:130
virtual TString GetOutputBinName(Int_t iBinX) const
Definition: TUnfold.cxx:1737
Basic string class.
Definition: TString.h:137
virtual TMatrixTBase< Element > & SetMatrixArray(const Element *data, Option_t *="")
Copy array data to matrix .
Short_t Min(Short_t a, Short_t b)
Definition: TMathBase.h:170
Int_t GetNx(void) const
Definition: TUnfold.h:162
int Int_t
Definition: RtypesCore.h:41
bool Bool_t
Definition: RtypesCore.h:59
TArc * a
Definition: textangle.C:12
const Bool_t kFALSE
Definition: Rtypes.h:92
TMatrixDSparse * fDXDtauSquared
Definition: TUnfold.h:142
Int_t RegularizeBins2D(int start_bin, int step1, int nbin1, int step2, int nbin2, ERegMode regmode)
Definition: TUnfold.cxx:2163
virtual Int_t GetNbinsX() const
Definition: TH1.h:296
TMatrixD * fX0
Definition: TUnfold.h:119
void GetInputInverseEmatrix(TH2 *ematrix)
Definition: TUnfold.cxx:2917
TMatrixDSparse * fDXDAZ[2]
Definition: TUnfold.h:141
static double A[]
Short_t Abs(Short_t d)
Definition: TMathBase.h:110
#define G(x, y, z)
TFile * f
LongDouble_t Power(LongDouble_t x, LongDouble_t y)
Definition: TMath.h:501
TMatrixDSparse * fL
Definition: TUnfold.h:116
virtual Double_t GetLcurveY(void) const
Definition: TUnfold.cxx:3027
Int_t GetNpar(void) const
Definition: TUnfold.cxx:3015
const char * Data() const
Definition: TString.h:349
virtual void Fatal(const char *method, const char *msgfmt,...) const
Issue fatal error message.
Definition: TObject.cxx:946
Bool_t AddRegularisationCondition(Int_t i0, Double_t f0, Int_t i1=-1, Double_t f1=0., Int_t i2=-1, Double_t f2=0.)
Definition: TUnfold.cxx:1973
Double_t x[n]
Definition: legend1.C:17
void GetFoldedOutput(TH1 *folded, const Int_t *binMap=0) const
Definition: TUnfold.cxx:2815
ERegMode fRegMode
Definition: TUnfold.h:126
static void swap(double &a, double &b)
Definition: Rotation3D.cxx:104
static TString Format(const char *fmt,...)
Static method which formats a string using a printf style format descriptor and return a TString...
Definition: TString.cxx:2321
TMatrixDSparse * MultiplyMSparseM(const TMatrixDSparse *a, const TMatrixD *b) const
Definition: TUnfold.cxx:858
TMatrixDSparse * fDXDAM[2]
Definition: TUnfold.h:140
void ClearHistogram(TH1 *h, Double_t x=0.) const
Definition: TUnfold.cxx:3394
TArrayD fSumOverY
Definition: TUnfold.h:124
const int ny
Definition: kalman.C:17
int d
Definition: tornado.py:11
EHistMap
Definition: TUnfold.h:188
Double_t Eval(Double_t x) const
Eval this spline at x.
Definition: TSpline.cxx:818
EConstraint
Definition: TUnfold.h:103
Double_t Log10(Double_t x)
Definition: TMath.h:529
virtual void SaveAs(const char *filename="", Option_t *option="") const
Save Pad contents in a file in one of various formats.
Definition: TPad.cxx:5005
virtual Int_t GetBin(Int_t binx, Int_t biny=0, Int_t binz=0) const
Return Global bin number corresponding to binx,y,z.
Definition: TH1.cxx:4535
void Info(const char *location, const char *msgfmt,...)
Int_t RegularizeCurvature(int left_bin, int center_bin, int right_bin, Double_t scale_left=1.0, Double_t scale_right=1.0)
Definition: TUnfold.cxx:2099
Int_t Finite(Double_t x)
Definition: TMath.h:532
Int_t GetNr(void) const
Definition: TUnfold.h:231
TArrayI fXToHist
Definition: TUnfold.h:122
void GetRhoIJ(TH2 *rhoij, const Int_t *binMap=0) const
Definition: TUnfold.cxx:3188
virtual void Error(const char *method, const char *msgfmt,...) const
Issue error message.
Definition: TObject.cxx:918
Double_t fChi2A
Definition: TUnfold.h:135
void Set(Int_t n)
Set size of this array to n ints.
Definition: TArrayI.cxx:105
void SetEpsMatrix(Double_t eps)
Definition: TUnfold.cxx:3418
virtual void SetBinError(Int_t bin, Double_t error)
see convention for numbering bins in TH1::GetBin
Definition: TH1.cxx:8528
void Error(const char *location, const char *msgfmt,...)
TMatrixTSparse< Double_t > TMatrixDSparse
ERegMode
Definition: TUnfold.h:107
Int_t GetNrows() const
Definition: TMatrixTBase.h:134
virtual Double_t GetBinContent(Int_t bin) const
Return content of bin number bin.
Definition: TH2.h:90
void GetEmatrix(TH2 *ematrix, const Int_t *binMap=0) const
Definition: TUnfold.cxx:3175
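A minimal sketch (the histogram name and binning are illustrative): after DoUnfold(), GetEmatrix() copies the covariance matrix V_xx of the result into a TH2; its diagonal holds the squared uncertainties of the unfolded bins.

  #include "TH2D.h"
  #include "TUnfold.h"

  // Sketch: retrieve the covariance matrix of the unfolded result.
  void saveCovariance(const TUnfold &unfold)
  {
     Int_t nx = unfold.GetNx();
     TH2D *hEmat = new TH2D("ematrix", ";truth bin;truth bin",
                            nx, 0.5, nx + 0.5, nx, 0.5, nx + 0.5);
     unfold.GetEmatrix(hEmat);   // diagonal = squared errors of the result
  }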
TMatrixT< Double_t > TMatrixD
Definition: TMatrixDfwd.h:24
Double_t GetChi2L(void) const
Definition: TUnfold.cxx:3009
Double_t GetTau(void) const
Definition: TUnfold.cxx:3003
TMatrixDSparse * fVxxInv
Definition: TUnfold.h:133
Double_t fRhoMax
Definition: TUnfold.h:137
TMatrixDSparse * fAx
Definition: TUnfold.h:134
virtual Int_t SetInput(const TH1 *hist_y, Double_t scaleBias=0.0, Double_t oneOverZeroError=0.0, const TH2 *hist_vyy=0, const TH2 *hist_vyy_inv=0)
Definition: TUnfold.cxx:2208
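A minimal usage sketch (the histogram names are hypothetical): the migration matrix goes into the constructor and the measured spectrum into SetInput(); a non-zero return value flags problematic input bins, see the method documentation for the exact encoding.

  #include <cstdio>
  #include "TH1.h"
  #include "TH2.h"
  #include "TUnfold.h"

  // Sketch: set up TUnfold with a migration matrix (truth on the x axis)
  // and feed in the measured distribution.
  void setupUnfolding(const TH2 *hMigration, const TH1 *hData)
  {
     TUnfold unfold(hMigration, TUnfold::kHistMapOutputHoriz,
                    TUnfold::kRegModeCurvature, TUnfold::kEConstraintArea);
     Int_t status = unfold.SetInput(hData);   // default scaleBias and error handling
     if (status != 0) {
        printf("SetInput flagged potential problems (status=%d)\n", status);
     }
  }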
TMatrixDSparse * fVyyInv
Definition: TUnfold.h:131
Double_t GetRhoI(TH1 *rhoi, const Int_t *binMap=0, TH2 *invEmat=0) const
Definition: TUnfold.cxx:3216
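Another short sketch (names are hypothetical): GetRhoI() fills a histogram with the global correlation coefficients of the unfolded bins and returns their maximum, a quantity often used when tuning the regularisation strength.

  #include <cstdio>
  #include "TH1D.h"
  #include "TUnfold.h"

  // Sketch: inspect the global correlation coefficients after DoUnfold().
  void checkCorrelations(const TUnfold &unfold)
  {
     Int_t nx = unfold.GetNx();
     TH1D *hRho = new TH1D("rhoI", ";truth bin;global correlation",
                           nx, 0.5, nx + 0.5);
     Double_t rhoMax = unfold.GetRhoI(hRho);
     printf("maximum global correlation: %g\n", rhoMax);
  }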
Int_t GetNy(void) const
Definition: TUnfold.h:165
virtual Double_t DoUnfold(void)
Definition: TUnfold.cxx:378
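The entry above is the internal core of the algorithm; in user code the unfolding is normally run through the public overload taking the regularisation parameter. A sketch under that assumption ("hTruthTemplate" is a hypothetical histogram with the truth-level binning; the return value is documented as the maximum global correlation):

  #include <cstdio>
  #include "TH1.h"
  #include "TUnfold.h"

  // Sketch: unfold with a fixed regularisation strength and extract the result.
  void runUnfolding(TUnfold &unfold, const TH1 *hTruthTemplate)
  {
     Double_t rhoMax = unfold.DoUnfold(1.e-4);   // example tau value
     TH1 *hUnfolded = (TH1 *)hTruthTemplate->Clone("unfolded");
     hUnfolded->Reset();
     unfold.GetOutput(hUnfolded);                // result with propagated errors
     printf("tau=%g chi2A=%g chi2L=%g ndf=%d rhoMax=%g\n",
            unfold.GetTau(), unfold.GetChi2A(), unfold.GetChi2L(),
            unfold.GetNdf(), rhoMax);
  }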
TMatrixDSparse * fDXDY
Definition: TUnfold.h:143
virtual const Int_t * GetRowIndexArray() const
Service class for 2-Dim histogram classes.
Definition: TH2.h:36
ClassImp(TUnfold) const char * TUnfold::GetTUnfoldVersion(void)
Definition: TUnfold.cxx:286
Double_t fBiasScale
Definition: TUnfold.h:121
virtual void SetBinContent(Int_t bin, Double_t content)
Set bin content; see convention for numbering bins in TH1::GetBin. In case the bin number is greater th...
Definition: TH1.cxx:8543
void GetLsquared(TH2 *lsquared) const
Definition: TUnfold.cxx:2958
void InitTUnfold(void)
Definition: TUnfold.cxx:295
Double_t E()
Definition: TMath.h:54
virtual Int_t GetNbinsZ() const
Definition: TH1.h:298
virtual ~TUnfold(void)
Definition: TUnfold.cxx:1948
void GetNormalisationVector(TH1 *s, const Int_t *binMap=0) const
Definition: TUnfold.cxx:2774
void ErrorMatrixToHist(TH2 *ematrix, const TMatrixDSparse *emat, const Int_t *binMap, Bool_t doClear) const
Definition: TUnfold.cxx:3109
TMatrixD * fY
Definition: TUnfold.h:118
Double_t fTauSquared
Definition: TUnfold.h:120
TMatrixDSparse * fVyy
Definition: TUnfold.h:117
const Double_t * GetArray() const
Definition: TArrayD.h:45
Int_t GetSize() const
Definition: TArray.h:49
TMatrixDSparse * MultiplyMSparseTranspMSparse(const TMatrixDSparse *a, const TMatrixDSparse *b) const
Definition: TUnfold.cxx:780
The Canvas class.
Definition: TCanvas.h:48
double Double_t
Definition: RtypesCore.h:55
const TMatrixD & GetEigenVectors() const
TMatrixDSparse * fEinv
Definition: TUnfold.h:144
virtual void ClearResults(void)
Definition: TUnfold.cxx:345
Double_t fRhoAvg
Definition: TUnfold.h:138
virtual void Draw(Option_t *option="")
Draw this function with its current attributes.
Definition: TSpline.cxx:100
void SetConstraint(EConstraint constraint)
Definition: TUnfold.cxx:2995
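A one-line illustration (a sketch): SetConstraint() switches the optional area constraint on or off; with kEConstraintArea the normalisation of the unfolded result is tied to the normalisation of the data.

  #include "TUnfold.h"

  // Sketch: choose between the area constraint and no constraint.
  void chooseConstraint(TUnfold &unfold, bool useAreaConstraint)
  {
     unfold.SetConstraint(useAreaConstraint ? TUnfold::kEConstraintArea
                                            : TUnfold::kEConstraintNone);
  }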
Int_t GetNcols() const
Definition: TMatrixTBase.h:137
Double_t fLXsquared
Definition: TUnfold.h:136
virtual Double_t GetLcurveX(void) const
Definition: TUnfold.cxx:3021
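A sketch of a manual scan (the grid and range are illustrative): after each DoUnfold() call, GetLcurveX() and GetLcurveY() give the coordinates of the corresponding L-curve point (related to the fit chi**2 and to the size of the regularisation term, see the method documentation); ScanLcurve() automates this scan.

  #include <cmath>
  #include <vector>
  #include "TGraph.h"
  #include "TUnfold.h"

  // Sketch: collect L-curve points on a coarse logarithmic grid of tau values.
  TGraph *manualLcurve(TUnfold &unfold)
  {
     std::vector<double> lx, ly;
     for (int i = 0; i < 30; i++) {
        double tau = 1.e-7 * std::pow(10., 0.2 * i);
        unfold.DoUnfold(tau);
        lx.push_back(unfold.GetLcurveX());
        ly.push_back(unfold.GetLcurveY());
     }
     return new TGraph(lx.size(), lx.data(), ly.data());
  }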
The TH1 histogram class.
Definition: TH1.h:80
Int_t RegularizeBins(int start, int step, int nbin, ERegMode regmode)
Definition: TUnfold.cxx:2123
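A sketch (the bin range is hypothetical; when conditions are added by hand the object is typically constructed with kRegModeNone): RegularizeBins() applies the chosen regularisation mode to a contiguous group of output bins.

  #include <cstdio>
  #include "TUnfold.h"

  // Sketch: curvature regularisation on output bins 1..nTruthBins.
  void setupRegularisation(TUnfold &unfold, Int_t nTruthBins)
  {
     // the return value counts conditions that could not be added
     // (see the method documentation)
     Int_t nSkipped = unfold.RegularizeBins(1, 1, nTruthBins,
                                            TUnfold::kRegModeCurvature);
     if (nSkipped) {
        printf("%d regularisation condition(s) could not be added\n", nSkipped);
     }
  }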
virtual const Int_t * GetColIndexArray() const
TMatrixDSparse * fA
Definition: TUnfold.h:115
void GetCoeff(Int_t i, Double_t &x, Double_t &y, Double_t &b, Double_t &c, Double_t &d)
Definition: TSpline.h:233
virtual Int_t GetNbinsY() const
Definition: TH1.h:297
void AddMSparse(TMatrixDSparse *dest, Double_t f, const TMatrixDSparse *src) const
Definition: TUnfold.cxx:1001
Int_t fIgnoredBins
Definition: TUnfold.h:128
virtual void Divide(Int_t nx=1, Int_t ny=1, Float_t xmargin=0.01, Float_t ymargin=0.01, Int_t color=0)
Automatic pad generation by division.
Definition: TPad.cxx:1073
Int_t RegularizeDerivative(int left_bin, int right_bin, Double_t scale=1.0)
Definition: TUnfold.cxx:2083
TUnfold is used to decompose a measurement y into several sources x given the measurement uncertainti...
Definition: TUnfold.h:99
TMatrixDSparse * MultiplyMSparseMSparse(const TMatrixDSparse *a, const TMatrixDSparse *b) const
Definition: TUnfold.cxx:711
Short_t Max(Short_t a, Short_t b)
Definition: TMathBase.h:202
Double_t fEpsMatrix
Definition: TUnfold.h:129
void GetL(TH2 *l) const
Definition: TUnfold.cxx:2977
TMatrixDSparse * fE
Definition: TUnfold.h:145
A Graph is a graphics object made of two arrays X and Y with npoints each.
Definition: TGraph.h:53
TMatrixDSparse * fVxx
Definition: TUnfold.h:132
virtual void SetBinContent(Int_t bin, Double_t content)
Set bin content.
Definition: TH2.cxx:2699
Double_t Sqrt(Double_t x)
Definition: TMath.h:464
virtual Double_t GetBinError(Int_t bin) const
Return the value of the error associated with bin number bin.
Definition: TH1.cxx:8395
void GetBias(TH1 *bias, const Int_t *binMap=0) const
Definition: TUnfold.cxx:2794
const Bool_t kTRUE
Definition: Rtypes.h:91
Int_t GetNdf(void) const
Definition: TUnfold.h:243
Double_t GetRhoIFromMatrix(TH1 *rhoi, const TMatrixDSparse *eOrig, const Int_t *binMap, TH2 *invEmat) const
Definition: TUnfold.cxx:3275
Int_t RegularizeSize(int bin, Double_t scale=1.0)
Definition: TUnfold.cxx:2069
void Set(Int_t n)
Set size of this array to n doubles.
Definition: TArrayD.cxx:105
void GetProbabilityMatrix(TH2 *A, EHistMap histmap) const
Definition: TUnfold.cxx:2863
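A final sketch (the binning is schematic): GetProbabilityMatrix() exports the matrix of migration probabilities used internally, which is handy for plotting or for cross-checking the response.

  #include "TH2D.h"
  #include "TUnfold.h"

  // Sketch: export the probability (response) matrix with truth on the x axis.
  void dumpProbabilityMatrix(const TUnfold &unfold)
  {
     TH2D *hProb = new TH2D("probMatrix", ";truth bin;reco bin",
                            unfold.GetNx(), 0.5, unfold.GetNx() + 0.5,
                            unfold.GetNy(), 0.5, unfold.GetNy() + 0.5);
     unfold.GetProbabilityMatrix(hProb, TUnfold::kHistMapOutputHoriz);
  }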
void GetInput(TH1 *inputData, const Int_t *binMap=0) const
Definition: TUnfold.cxx:2884
TMatrixDSparse * MultiplyMSparseMSparseTranspVector(const TMatrixDSparse *m1, const TMatrixDSparse *m2, const TMatrixTBase< Double_t > *v) const
Definition: TUnfold.cxx:911
#define TUnfold_VERSION
Definition: TUnfold.h:95
virtual void Warning(const char *method, const char *msgfmt,...) const
Issue warning message.
Definition: TObject.cxx:904