TUnfold.cxx
1// @(#)root/unfold:$Id$
2// Author: Stefan Schmitt DESY, 13/10/08
3
4/** \class TUnfold
5\ingroup Unfold
6
7An algorithm to unfold distributions from detector to truth level
8
9TUnfold is used to decompose a measurement y into several sources x,
10given the measurement uncertainties and a matrix of migrations A.
11The method can be applied to a large number of problems,
12where the measured distribution y is a linear superposition
13of several Monte Carlo shapes. Beyond such a simple template fit,
14TUnfold has an adjustable regularisation term and also supports an
15optional constraint on the total number of events.
16
17<b>For most applications, it is better to use the derived class
18TUnfoldDensity instead of TUnfold</b>. TUnfoldDensity adds various
19features to TUnfold, such as:
20background subtraction, propagation of systematic uncertainties,
21complex multidimensional arrangements of the bins. For innocent
22users, the most notable improvement of TUnfoldDensity over TUnfold is
23the getter functions. For TUnfold, histograms have to be booked by the
24user and the getter functions fill the histogram bins. TUnfoldDensity
25simply returns a new, already filled histogram.
26
27If you use this software, please consider the following citation
28
29<b>S.Schmitt, JINST 7 (2012) T10003 [arXiv:1205.6201]</b>
30
31Detailed documentation and updates are available on
32http://www.desy.de/~sschmitt
33
34Brief recipe to use TUnfold (a minimal code sketch follows the list):
35
36 - a matrix (truth,reconstructed) is given as a two-dimensional histogram
37 as argument to the constructor of TUnfold
38 - a vector of measurements is given as one-dimensional histogram using
39 the SetInput() method
40 - The unfolding is performed
41
42 - either once with a fixed parameter tau, method DoUnfold(tau)
43 - or multiple times in a scan to determine the best choice of tau,
44 method ScanLcurve()
45
46 - Unfolding results are retrieved using various GetXXX() methods
47
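A minimal sketch of this recipe is shown below. The histogram names, the
binning of the output histogram and the ScanLcurve() arguments are
illustrative placeholders, not part of TUnfold:

~~~
// response matrix: truth on the x axis, reconstructed quantity on the y axis
TUnfold unfold(histMigrations,TUnfold::kHistMapOutputHoriz,
               TUnfold::kRegModeSize,TUnfold::kEConstraintArea);
unfold.SetInput(histMeasured);

// either: unfold once with a fixed regularisation strength tau
unfold.DoUnfold(0.001);

// or: scan the L curve to determine tau
TGraph *lCurve=0;
unfold.ScanLcurve(30,0.,0.,&lCurve);

// book a histogram with the truth binning and retrieve the result
TH1D *histOutput=new TH1D("unfolded",";x",nBinsTruth,xMin,xMax);
unfold.GetOutput(histOutput);
~~~
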
48Basic formulae:
49\f[
50\chi^{2}_{A}=(Ax-y)^{T}V_{yy}^{-1}(Ax-y) \\
51\chi^{2}_{L}=(x-f*x_{0})^{T}L^{T}L(x-f*x_{0}) \\
52\chi^{2}_{unf}=\chi^{2}_{A}+\tau^{2}\chi^{2}_{L}+\lambda\Sigma_{i}(Ax-y)_{i}
53\f]
54
55 - \f$ x \f$:result,
56 - \f$ A \f$:probabilities,
57 - \f$ y \f$:data,
58 - \f$ V_{yy} \f$:data covariance,
59 - \f$ f \f$:bias scale,
60 - \f$ x_{0} \f$:bias,
61 - \f$ L \f$:regularisation conditions,
62 - \f$ \tau \f$:regularisation strength,
63 - \f$ \lambda \f$:Lagrangian multiplier.
64
65 Without area constraint, \f$ \lambda \f$ is set to zero, and
66\f$ \chi^{2}_{unf} \f$ is minimized to determine \f$ x \f$.
67With area constraint, both \f$ x \f$ and \f$ \lambda \f$ are determined.
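Setting the derivative of \f$ \chi^{2}_{unf} \f$ with respect to \f$ x \f$
to zero (with \f$ \lambda=0 \f$) gives the explicit solution which the core
unfolding algorithm below evaluates:
\f[
x = E\,(A^{T}V_{yy}^{-1}y+\tau^{2}fL^{T}Lx_{0}), \qquad
E = (A^{T}V_{yy}^{-1}A+\tau^{2}L^{T}L)^{-1}
\f]
With the area constraint, \f$ x \f$ receives an additional correction
proportional to \f$ \lambda \f$.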
68
69--------------------------------------------------------------------------------
70This file is part of TUnfold.
71
72TUnfold is free software: you can redistribute it and/or modify
73it under the terms of the GNU General Public License as published by
74the Free Software Foundation, either version 3 of the License, or
75(at your option) any later version.
76
77TUnfold is distributed in the hope that it will be useful,
78but WITHOUT ANY WARRANTY; without even the implied warranty of
79MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
80GNU General Public License for more details.
81
82You should have received a copy of the GNU General Public License
83along with TUnfold. If not, see <http://www.gnu.org/licenses/>.
84
85<b>Version 17.6, updated doxygen-style comments, add one argument for ScanLcurve() </b>
86
87#### History:
88 - Version 17.5, fix memory leak with fVyyInv, bugs in GetInputInverseEmatrix(), GetInput(), bug in MultiplyMSparseMSparseTranspVector
89 - Version 17.4, in parallel to changes in TUnfoldBinning
90 - Version 17.3, in parallel to changes in TUnfoldBinning
91 - Version 17.2, bug fix with GetProbabilityMatrix
92 - Version 17.1, bug fixes in GetFoldedOutput, GetOutput
93 - Version 17.0, option to specify an error matrix with SetInput(), new ScanRho() method
94 - Version 16.2, in parallel to bug-fix in TUnfoldSys
95 - Version 16.1, fix bug with error matrix in case kEConstraintArea is used
96 - Version 16.0, fix calculation of global correlations, improved error messages
97 - Version 15, simplified L-curve scan, new tau definition, new error calc., area preservation
98 - Version 14, with changes in TUnfoldSys.cxx
99 - Version 13, new methods for derived classes and small bug fix
100 - Version 12, report singular matrices
101 - Version 11, reduce the amount of printout
102 - Version 10, more correct definition of the L curve, update references
103 - Version 9, faster matrix inversion and skip edge points for L-curve scan
104 - Version 8, replace all TMatrixSparse matrix operations by private code
105 - Version 7, fix problem with TMatrixDSparse,TMatrixD multiplication
106 - Version 6, replace class XY by std::pair
107 - Version 5, replace run-time dynamic arrays by new and delete[]
108 - Version 4, fix new bug from V3 with initial regularisation condition
109 - Version 3, fix bug with initial regularisation condition
110 - Version 2, with improved ScanLcurve() algorithm
111 - Version 1, added ScanLcurve() method
112 - Version 0, stable version of basic unfolding algorithm
113*/
114
115#include <iostream>
116#include <TMatrixD.h>
117#include <TMatrixDSparse.h>
118#include <TMatrixDSym.h>
119#include <TMatrixDSymEigen.h>
120#include <TMath.h>
121#include "TUnfold.h"
122#include "TGraph.h"
123
124#include <map>
125#include <vector>
126
127//#define DEBUG
128//#define DEBUG_DETAIL
129//#define FORCE_EIGENVALUE_DECOMPOSITION
130
132
134{
135 // delete all data members
136
143
144 ClearResults();
145}
146
147////////////////////////////////////////////////////////////////////////////////
148/// Initialize data members, for use in constructors.
149
151{
152 // reset all data members
153 fXToHist.Set(0);
154 fHistToX.Set(0);
155 fSumOverY.Set(0);
156 fA = 0;
157 fL = 0;
158 fVyy = 0;
159 fY = 0;
160 fX0 = 0;
161 fTauSquared = 0.0;
162 fBiasScale = 0.0;
165 // output
166 fX = 0;
167 fVyyInv = 0;
168 fVxx = 0;
169 fVxxInv = 0;
170 fAx = 0;
171 fChi2A = 0.0;
172 fLXsquared = 0.0;
173 fRhoMax = 999.0;
174 fRhoAvg = -1.0;
175 fNdf = 0;
176 fDXDAM[0] = 0;
177 fDXDAZ[0] = 0;
178 fDXDAM[1] = 0;
179 fDXDAZ[1] = 0;
180 fDXDtauSquared = 0;
181 fDXDY = 0;
182 fEinv = 0;
183 fE = 0;
184 fEpsMatrix=1.E-13;
185 fIgnoredBins=0;
186}
187
188////////////////////////////////////////////////////////////////////////////////
189/// Delete matrix and invalidate pointer.
190///
191/// \param[inout] m pointer to a matrix-pointer
192///
193/// If the matrix pointer is non-zero, the matrix is deleted. The matrix pointer
194/// is set to zero.
195
197{
198 if(*m) delete *m;
199 *m=0;
200}
201
202////////////////////////////////////////////////////////////////////////////////
203/// Delete sparse matrix and invalidate pointer.
204///
205/// \param[inout] m pointer to a matrix-pointer
206///
207/// If the matrix pointer is non-zero, the matrix is deleted. The matrix pointer
208/// is set to zero.
209
211{
212 if(*m) delete *m;
213 *m=0;
214}
215
216////////////////////////////////////////////////////////////////////////////////
217/// Reset all results.
218
220{
221 // delete old results (if any)
222 // this function is virtual, so derived classes may implement their own
223 // method to flag results as non-valid
224
225 // note: the inverse of the input covariance is not cleared
226 // because it does not change until the input is changed
227
231 for(Int_t i=0;i<2;i++) {
234 }
240 fChi2A = 0.0;
241 fLXsquared = 0.0;
242 fRhoMax = 999.0;
243 fRhoAvg = -1.0;
244}
245
246////////////////////////////////////////////////////////////////////////////////
247/// Only for use by the ROOT streamer or derived classes.
248
250{
251 // set all matrix pointers to zero
252 InitTUnfold();
253}
254
255////////////////////////////////////////////////////////////////////////////////
256/// Core unfolding algorithm.
257///
258/// Main unfolding algorithm. Declared virtual, because other algorithms
259/// could be implemented
260///
261/// Purpose: unfold y -> x
262///
263/// - Data members required:
264/// - fA: matrix to relate x and y
265/// - fY: measured data points
266/// - fX0: bias on x
267/// - fBiasScale: scale factor for fX0
268/// - fVyy: covariance matrix for y
269/// - fL: regularisation conditions
270/// - fTauSquared: regularisation strength
271/// - fConstraint: whether the constraint is applied
272/// - Data members modified:
273/// - fVyyInv: inverse of input data covariance matrix
274/// - fNdf: number of degrees of freedom
275/// - fEinv: inverse of the matrix needed for unfolding calculations
276/// - fE: the matrix needed for unfolding calculations
277/// - fX: unfolded data points
278/// - fDXDY: derivative of x wrt y (for error propagation)
279/// - fVxx: error matrix (covariance matrix) on x
280/// - fAx: estimate of distribution y from unfolded data
281/// - fChi2A: contribution to chi**2 from y-Ax
282/// - fLXsquared: contribution to chi**2 from L*(x-x0)
283/// - fDXDtauSquared: derivative of x wrt tau
284/// - fDXDAM[0,1]: matrix parts of derivative x wrt A
285/// - fDXDAZ[0,1]: vector parts of derivative x wrt A
286/// - fRhoMax: maximum global correlation coefficient
287/// - fRhoAvg: average global correlation coefficient
288/// - Return code:
289/// - fRhoMax. If fRhoMax>=1.0, the unfolding has failed.
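///
/// A short sketch of checking this return value from calling code (the
/// variable names are illustrative; the call typically goes through the
/// public DoUnfold(tau) method):
/// ~~~
/// Double_t rhoMax=unfold.DoUnfold(tau);
/// if(rhoMax>=1.0) {
///    // unfolding failed, the result and its covariance are not reliable
/// }
/// ~~~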
290
292{
293 ClearResults();
294
295 // get pseudo-inverse matrix Vyyinv and NDF
296 if(!fVyyInv) {
299 fNdf--;
300 }
301 }
302 //
303 // get matrix
304 // T
305 // fA fV = mAt_V
306 //
308 //
309 // get
310 // T
311 // fA fVyyinv fY + fTauSquared fBiasScale Lsquared fX0 = rhs
312 //
313 TMatrixDSparse *rhs=MultiplyMSparseM(AtVyyinv,fY);
315 if (fBiasScale != 0.0) {
316 TMatrixDSparse *rhs2=MultiplyMSparseM(lSquared,fX0);
317 AddMSparse(rhs, fTauSquared * fBiasScale ,rhs2);
318 DeleteMatrix(&rhs2);
319 }
320
321 //
322 // get matrix
323 // T
324 // (fA fV)fA + fTauSquared*fLsquared = fEinv
326 AddMSparse(fEinv,fTauSquared,lSquared);
327
328 //
329 // get matrix
330 // -1
331 // fEinv = fE
332 //
333 Int_t rank=0;
335 if(rank != GetNx()) {
336 Warning("DoUnfold","rank of matrix E %d expect %d",rank,GetNx());
337 }
338
339 //
340 // get result
341 // fE rhs = x
342 //
344 fX = new TMatrixD(*xSparse);
345 DeleteMatrix(&rhs);
346 DeleteMatrix(&xSparse);
347
348 // additional correction for constraint
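   // if the area constraint is enabled, the result x found above receives
   // the correction
   //     x -> x + (lambda/2) * E * epsilon
   // with
   //     lambda/2 = ( sum(y) - epsilon# * x ) / ( epsilon# * E * epsilon )
   // where epsilon_j = sum_i A_ij is the vector of efficiencies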
349 Double_t lambda_half=0.0;
350 Double_t one_over_epsEeps=0.0;
352 TMatrixDSparse *Eepsilon=0;
354 // calculate epsilon: vector of efficiencies
355 const Int_t *A_rows=fA->GetRowIndexArray();
356 const Int_t *A_cols=fA->GetColIndexArray();
357 const Double_t *A_data=fA->GetMatrixArray();
358 TMatrixD epsilonNosparse(fA->GetNcols(),1);
359 for(Int_t i=0;i<A_rows[fA->GetNrows()];i++) {
360 epsilonNosparse(A_cols[i],0) += A_data[i];
361 }
362 epsilon=new TMatrixDSparse(epsilonNosparse);
363 // calculate vector EE*epsilon
365 // calculate scalar product epsilon#*Eepsilon
367 Eepsilon);
368 // if epsilonEepsilon is zero, nothing works...
369 if(epsilonEepsilon->GetRowIndexArray()[1]==1) {
370 one_over_epsEeps=1./epsilonEepsilon->GetMatrixArray()[0];
371 } else {
372 Fatal("Unfold","epsilon#Eepsilon has dimension %d != 1",
373 epsilonEepsilon->GetRowIndexArray()[1]);
374 }
375 DeleteMatrix(&epsilonEepsilon);
376 // calculate sum(Y)
377 Double_t y_minus_epsx=0.0;
378 for(Int_t iy=0;iy<fY->GetNrows();iy++) {
379 y_minus_epsx += (*fY)(iy,0);
380 }
381 // calculate sum(Y)-epsilon#*X
382 for(Int_t ix=0;ix<epsilonNosparse.GetNrows();ix++) {
383 y_minus_epsx -= epsilonNosparse(ix,0) * (*fX)(ix,0);
384 }
385 // calculate lambda_half
386 lambda_half=y_minus_epsx*one_over_epsEeps;
387 // calculate final vector X
388 const Int_t *EEpsilon_rows=Eepsilon->GetRowIndexArray();
389 const Double_t *EEpsilon_data=Eepsilon->GetMatrixArray();
390 for(Int_t ix=0;ix<Eepsilon->GetNrows();ix++) {
391 if(EEpsilon_rows[ix]<EEpsilon_rows[ix+1]) {
392 (*fX)(ix,0) += lambda_half * EEpsilon_data[EEpsilon_rows[ix]];
393 }
394 }
395 }
396 //
397 // get derivative dx/dy
398 // for error propagation
399 // dx/dy = E A# Vyy^-1 ( = B )
400 fDXDY = MultiplyMSparseMSparse(fE,AtVyyinv);
401
402 // additional correction for constraint
404 // transposed vector of dimension GetNy(), all elements equal 1/(epsilon# E epsilon)
405 Int_t *rows=new Int_t[GetNy()];
406 Int_t *cols=new Int_t[GetNy()];
407 Double_t *data=new Double_t[GetNy()];
408 for(Int_t i=0;i<GetNy();i++) {
409 rows[i]=0;
410 cols[i]=i;
411 data[i]=one_over_epsEeps;
412 }
414 (1,GetNy(),GetNy(),rows,cols,data);
415 delete[] data;
416 delete[] rows;
417 delete[] cols;
418 // B# * epsilon
420 // temp -= one_over_epsEeps * epsilonB
421 AddMSparse(temp, -one_over_epsEeps, epsilonB);
422 DeleteMatrix(&epsilonB);
423 // correction matrix
424 TMatrixDSparse *corr=MultiplyMSparseMSparse(Eepsilon,temp);
425 DeleteMatrix(&temp);
426 // determine new derivative
427 AddMSparse(fDXDY,1.0,corr);
428 DeleteMatrix(&corr);
429 }
430
431 DeleteMatrix(&AtVyyinv);
432
433 //
434 // get error matrix on x
435 // fDXDY * Vyy * fDXDY#
438
439 DeleteMatrix(&fDXDYVyy);
440
441 //
442 // get result
443 // fA x = fAx
444 //
446
447 //
448 // calculate chi**2 etc
449
450 // chi**2 contribution from (y-Ax)# Vyyinv (y-Ax)
453
454 const Int_t *VyyinvDy_rows=VyyinvDy->GetRowIndexArray();
455 const Double_t *VyyinvDy_data=VyyinvDy->GetMatrixArray();
456 fChi2A=0.0;
457 for(Int_t iy=0;iy<VyyinvDy->GetNrows();iy++) {
458 if(VyyinvDy_rows[iy]<VyyinvDy_rows[iy+1]) {
459 fChi2A += VyyinvDy_data[VyyinvDy_rows[iy]]*dy(iy,0);
460 }
461 }
463 TMatrixDSparse *LsquaredDx=MultiplyMSparseM(lSquared,&dx);
464 const Int_t *LsquaredDx_rows=LsquaredDx->GetRowIndexArray();
465 const Double_t *LsquaredDx_data=LsquaredDx->GetMatrixArray();
466 fLXsquared = 0.0;
467 for(Int_t ix=0;ix<LsquaredDx->GetNrows();ix++) {
468 if(LsquaredDx_rows[ix]<LsquaredDx_rows[ix+1]) {
469 fLXsquared += LsquaredDx_data[LsquaredDx_rows[ix]]*dx(ix,0);
470 }
471 }
472
473 //
474 // get derivative dx/dtau
476
479 Double_t f=0.0;
480 if(temp->GetRowIndexArray()[1]==1) {
481 f=temp->GetMatrixArray()[0]*one_over_epsEeps;
482 } else if(temp->GetRowIndexArray()[1]>1) {
483 Fatal("Unfold",
484 "epsilon#fDXDtauSquared has dimension %d != 1",
485 temp->GetRowIndexArray()[1]);
486 }
487 if(f!=0.0) {
488 AddMSparse(fDXDtauSquared, -f,Eepsilon);
489 }
490 DeleteMatrix(&temp);
491 }
493
494 DeleteMatrix(&LsquaredDx);
495 DeleteMatrix(&lSquared);
496
497 // calculate/store matrices defining the derivatives dx/dA
498 fDXDAM[0]=new TMatrixDSparse(*fE);
499 fDXDAM[1]=new TMatrixDSparse(*fDXDY); // create a copy
500 fDXDAZ[0]=VyyinvDy; // instead of deleting VyyinvDy
501 VyyinvDy=0;
502 fDXDAZ[1]=new TMatrixDSparse(*fX); // create a copy
503
505 // add correction to fDXDAM[0]
507 (Eepsilon,Eepsilon,0);
508 AddMSparse(fDXDAM[0], -one_over_epsEeps,temp1);
509 DeleteMatrix(&temp1);
510 // add correction to fDXDAZ[0]
511 Int_t *rows=new Int_t[GetNy()];
512 Int_t *cols=new Int_t[GetNy()];
513 Double_t *data=new Double_t[GetNy()];
514 for(Int_t i=0;i<GetNy();i++) {
515 rows[i]=i;
516 cols[i]=0;
517 data[i]=lambda_half;
518 }
520 (GetNy(),1,GetNy(),rows,cols,data);
521 delete[] data;
522 delete[] rows;
523 delete[] cols;
524 AddMSparse(fDXDAZ[0],1.0,temp2);
525 DeleteMatrix(&temp2);
526 }
527
528 DeleteMatrix(&Eepsilon);
529
530 rank=0;
532 if(rank != GetNx()) {
533 Warning("DoUnfold","rank of output covariance is %d expect %d",
534 rank,GetNx());
535 }
536
537 TVectorD VxxInvDiag(fVxxInv->GetNrows());
538 const Int_t *VxxInv_rows=fVxxInv->GetRowIndexArray();
539 const Int_t *VxxInv_cols=fVxxInv->GetColIndexArray();
540 const Double_t *VxxInv_data=fVxxInv->GetMatrixArray();
541 for (int ix = 0; ix < fVxxInv->GetNrows(); ix++) {
542 for(int ik=VxxInv_rows[ix];ik<VxxInv_rows[ix+1];ik++) {
543 if(ix==VxxInv_cols[ik]) {
544 VxxInvDiag(ix)=VxxInv_data[ik];
545 }
546 }
547 }
548
549 // maximum global correlation coefficient
550 const Int_t *Vxx_rows=fVxx->GetRowIndexArray();
551 const Int_t *Vxx_cols=fVxx->GetColIndexArray();
552 const Double_t *Vxx_data=fVxx->GetMatrixArray();
553
554 Double_t rho_squared_max = 0.0;
555 Double_t rho_sum = 0.0;
556 Int_t n_rho=0;
557 for (int ix = 0; ix < fVxx->GetNrows(); ix++) {
558 for(int ik=Vxx_rows[ix];ik<Vxx_rows[ix+1];ik++) {
559 if(ix==Vxx_cols[ik]) {
560 Double_t rho_squared =
561 1. - 1. / (VxxInvDiag(ix) * Vxx_data[ik]);
562 if (rho_squared > rho_squared_max)
563 rho_squared_max = rho_squared;
564 if(rho_squared>0.0) {
565 rho_sum += TMath::Sqrt(rho_squared);
566 n_rho++;
567 }
568 break;
569 }
570 }
571 }
572 fRhoMax = TMath::Sqrt(rho_squared_max);
573 fRhoAvg = (n_rho>0) ? (rho_sum/n_rho) : -1.0;
574
575 return fRhoMax;
576}
577
578////////////////////////////////////////////////////////////////////////////////
579/// Create a sparse matrix, given the nonzero elements.
580///
581/// \param[in] nrow number of rows
582/// \param[in] ncol number of columns
583/// \param[in] nel number of non-zero elements
584/// \param[in] row row indexes of non-zero elements
585/// \param[in] col column indexes of non-zero elements
586/// \param[in] data non-zero elements data
587///
588/// return pointer to a new sparse matrix
589///
590/// shortcut to new TMatrixDSparse() followed by SetMatrixArray().
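///
/// A small illustrative sketch (the values are arbitrary):
/// ~~~
/// Int_t row[2]={0,1};
/// Int_t col[2]={1,0};
/// Double_t data[2]={0.5,2.0};
/// TMatrixDSparse *m=CreateSparseMatrix(2,2,2,row,col,data);
/// // ... m is a 2x2 matrix with m(0,1)=0.5 and m(1,0)=2.0 ...
/// DeleteMatrix(&m);
/// ~~~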
591
593(Int_t nrow,Int_t ncol,Int_t nel,Int_t *row,Int_t *col,Double_t *data) const
594{
595 // create a sparse matrix
596 // nrow,ncol : dimension of the matrix
597 // nel: number of non-zero elements
598 // row[nel],col[nel],data[nel] : indices and data of the non-zero elements
599 TMatrixDSparse *A=new TMatrixDSparse(nrow,ncol);
600 if(nel>0) {
601 A->SetMatrixArray(nel,row,col,data);
602 }
603 return A;
604}
605
606////////////////////////////////////////////////////////////////////////////////
607/// Multiply two sparse matrices.
608///
609/// \param[in] a sparse matrix
610/// \param[in] b sparse matrix
611///
612/// returns a new sparse matrix a*b.
613///
614/// A replacement for
615/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b);
616/// the ROOT implementation had problems in older versions of ROOT.
617
619 const TMatrixDSparse *b) const
620{
621 if(a->GetNcols()!=b->GetNrows()) {
622 Fatal("MultiplyMSparseMSparse",
623 "inconsistent matrix col/ matrix row %d !=%d",
624 a->GetNcols(),b->GetNrows());
625 }
626
627 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
628 const Int_t *a_rows=a->GetRowIndexArray();
629 const Int_t *a_cols=a->GetColIndexArray();
630 const Double_t *a_data=a->GetMatrixArray();
631 const Int_t *b_rows=b->GetRowIndexArray();
632 const Int_t *b_cols=b->GetColIndexArray();
633 const Double_t *b_data=b->GetMatrixArray();
634 // maximum size of the output matrix
635 int nMax=0;
636 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
637 if(a_rows[irow+1]>a_rows[irow]) nMax += b->GetNcols();
638 }
639 if((nMax>0)&&(a_cols)&&(b_cols)) {
640 Int_t *r_rows=new Int_t[nMax];
641 Int_t *r_cols=new Int_t[nMax];
642 Double_t *r_data=new Double_t[nMax];
643 Double_t *row_data=new Double_t[b->GetNcols()];
644 Int_t n=0;
645 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
646 if(a_rows[irow+1]<=a_rows[irow]) continue;
647 // clear row data
648 for(Int_t icol=0;icol<b->GetNcols();icol++) {
649 row_data[icol]=0.0;
650 }
651 // loop over a-columns in this a-row
652 for(Int_t ia=a_rows[irow];ia<a_rows[irow+1];ia++) {
653 Int_t k=a_cols[ia];
654 // loop over b-columns in b-row k
655 for(Int_t ib=b_rows[k];ib<b_rows[k+1];ib++) {
656 row_data[b_cols[ib]] += a_data[ia]*b_data[ib];
657 }
658 }
659 // store nonzero elements
660 for(Int_t icol=0;icol<b->GetNcols();icol++) {
661 if(row_data[icol] != 0.0) {
662 r_rows[n]=irow;
663 r_cols[n]=icol;
664 r_data[n]=row_data[icol];
665 n++;
666 }
667 }
668 }
669 if(n>0) {
670 r->SetMatrixArray(n,r_rows,r_cols,r_data);
671 }
672 delete[] r_rows;
673 delete[] r_cols;
674 delete[] r_data;
675 delete[] row_data;
676 }
677
678 return r;
679}
680
681////////////////////////////////////////////////////////////////////////////////
682/// Multiply a transposed Sparse matrix with another sparse matrix,
683///
684/// \param[in] a sparse matrix (to be transposed)
685/// \param[in] b sparse matrix
686///
687/// returns a new sparse matrix a^{T}*b
688///
689/// this is a replacement for the ROOT constructors
690/// new TMatrixDSparse(TMatrixDSparse(TMatrixDSparse::kTransposed,*a),
691/// TMatrixDSparse::kMult,*b)
692
694(const TMatrixDSparse *a,const TMatrixDSparse *b) const
695{
696 if(a->GetNrows() != b->GetNrows()) {
697 Fatal("MultiplyMSparseTranspMSparse",
698 "inconsistent matrix row numbers %d!=%d",
699 a->GetNrows(),b->GetNrows());
700 }
701
702 TMatrixDSparse *r=new TMatrixDSparse(a->GetNcols(),b->GetNcols());
703 const Int_t *a_rows=a->GetRowIndexArray();
704 const Int_t *a_cols=a->GetColIndexArray();
705 const Double_t *a_data=a->GetMatrixArray();
706 const Int_t *b_rows=b->GetRowIndexArray();
707 const Int_t *b_cols=b->GetColIndexArray();
708 const Double_t *b_data=b->GetMatrixArray();
709 // maximum size of the output matrix
710
711 // matrix multiplication
712 typedef std::map<Int_t,Double_t> MMatrixRow_t;
713 typedef std::map<Int_t, MMatrixRow_t > MMatrix_t;
714 MMatrix_t matrix;
715
716 for(Int_t iRowAB=0;iRowAB<a->GetNrows();iRowAB++) {
717 for(Int_t ia=a_rows[iRowAB];ia<a_rows[iRowAB+1];ia++) {
718 for(Int_t ib=b_rows[iRowAB];ib<b_rows[iRowAB+1];ib++) {
719 // this creates a new row if necessary
720 MMatrixRow_t &row=matrix[a_cols[ia]];
721 MMatrixRow_t::iterator icol=row.find(b_cols[ib]);
722 if(icol!=row.end()) {
723 // update existing row
724 (*icol).second += a_data[ia]*b_data[ib];
725 } else {
726 // create new row
727 row[b_cols[ib]] = a_data[ia]*b_data[ib];
728 }
729 }
730 }
731 }
732
733 Int_t n=0;
734 for (MMatrix_t::const_iterator irow = matrix.begin(); irow != matrix.end(); ++irow) {
735 n += (*irow).second.size();
736 }
737 if(n>0) {
738 // pack matrix into arrays
739 Int_t *r_rows=new Int_t[n];
740 Int_t *r_cols=new Int_t[n];
741 Double_t *r_data=new Double_t[n];
742 n=0;
743 for (MMatrix_t::const_iterator irow = matrix.begin(); irow != matrix.end(); ++irow) {
744 for (MMatrixRow_t::const_iterator icol = (*irow).second.begin(); icol != (*irow).second.end(); ++icol) {
745 r_rows[n]=(*irow).first;
746 r_cols[n]=(*icol).first;
747 r_data[n]=(*icol).second;
748 n++;
749 }
750 }
751 // pack arrays into TMatrixDSparse
752 if(n>0) {
753 r->SetMatrixArray(n,r_rows,r_cols,r_data);
754 }
755 delete[] r_rows;
756 delete[] r_cols;
757 delete[] r_data;
758 }
759
760 return r;
761}
762
763////////////////////////////////////////////////////////////////////////////////
764/// Multiply sparse matrix and a non-sparse matrix.
765///
766/// \param[in] a sparse matrix
767/// \param[in] b matrix
768///
769/// returns a new sparse matrix a*b.
770/// A replacement for
771/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b);
772/// the ROOT implementation had problems in older versions of ROOT.
773
775 const TMatrixD *b) const
776{
777 if(a->GetNcols()!=b->GetNrows()) {
778 Fatal("MultiplyMSparseM","inconsistent matrix col /matrix row %d!=%d",
779 a->GetNcols(),b->GetNrows());
780 }
781
782 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
783 const Int_t *a_rows=a->GetRowIndexArray();
784 const Int_t *a_cols=a->GetColIndexArray();
785 const Double_t *a_data=a->GetMatrixArray();
786 // maximum size of the output matrix
787 int nMax=0;
788 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
789 if(a_rows[irow+1]-a_rows[irow]>0) nMax += b->GetNcols();
790 }
791 if(nMax>0) {
792 Int_t *r_rows=new Int_t[nMax];
793 Int_t *r_cols=new Int_t[nMax];
794 Double_t *r_data=new Double_t[nMax];
795
796 Int_t n=0;
797 // fill matrix r
798 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
799 if(a_rows[irow+1]-a_rows[irow]<=0) continue;
800 for(Int_t icol=0;icol<b->GetNcols();icol++) {
801 r_rows[n]=irow;
802 r_cols[n]=icol;
803 r_data[n]=0.0;
804 for(Int_t i=a_rows[irow];i<a_rows[irow+1];i++) {
805 Int_t j=a_cols[i];
806 r_data[n] += a_data[i]*(*b)(j,icol);
807 }
808 if(r_data[n]!=0.0) n++;
809 }
810 }
811 if(n>0) {
812 r->SetMatrixArray(n,r_rows,r_cols,r_data);
813 }
814 delete[] r_rows;
815 delete[] r_cols;
816 delete[] r_data;
817 }
818 return r;
819}
820
821
822////////////////////////////////////////////////////////////////////////////////
823/// Calculate a sparse matrix product \f$ M1*V*M2^{T} \f$ where the diagonal matrix V is
824/// given by a vector.
825///
826/// \param[in] m1 pointer to sparse matrix with dimension I*K
827/// \param[in] m2 pointer to sparse matrix with dimension J*K
828/// \param[in] v pointer to vector (matrix) with dimension K*1
829///
830/// returns a sparse matrix R with elements
831/// \f$ r_{ij}=\Sigma_{k}M1_{ik}V_{k}M2_{jk} \f$
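///
/// If v is a null pointer, V is taken to be the unit matrix, i.e. the
/// result is \f$ r_{ij}=\Sigma_{k}M1_{ik}M2_{jk} \f$.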
832
834(const TMatrixDSparse *m1,const TMatrixDSparse *m2,
835 const TMatrixTBase<Double_t> *v) const
836{
837 if((m1->GetNcols() != m2->GetNcols())||
838 (v && ((m1->GetNcols()!=v->GetNrows())||(v->GetNcols()!=1)))) {
839 if(v) {
840 Fatal("MultiplyMSparseMSparseTranspVector",
841 "matrix cols/vector rows %d!=%d!=%d or vector rows %d!=1\n",
842 m1->GetNcols(),m2->GetNcols(),v->GetNrows(),v->GetNcols());
843 } else {
844 Fatal("MultiplyMSparseMSparseTranspVector",
845 "matrix cols %d!=%d\n",m1->GetNcols(),m2->GetNcols());
846 }
847 }
848 const Int_t *rows_m1=m1->GetRowIndexArray();
849 const Int_t *cols_m1=m1->GetColIndexArray();
850 const Double_t *data_m1=m1->GetMatrixArray();
851 Int_t num_m1=0;
852 for(Int_t i=0;i<m1->GetNrows();i++) {
853 if(rows_m1[i]<rows_m1[i+1]) num_m1++;
854 }
855 const Int_t *rows_m2=m2->GetRowIndexArray();
856 const Int_t *cols_m2=m2->GetColIndexArray();
857 const Double_t *data_m2=m2->GetMatrixArray();
858 Int_t num_m2=0;
859 for(Int_t j=0;j<m2->GetNrows();j++) {
860 if(rows_m2[j]<rows_m2[j+1]) num_m2++;
861 }
862 const TMatrixDSparse *v_sparse=dynamic_cast<const TMatrixDSparse *>(v);
863 const Int_t *v_rows=0;
864 const Double_t *v_data=0;
865 if(v_sparse) {
866 v_rows=v_sparse->GetRowIndexArray();
867 v_data=v_sparse->GetMatrixArray();
868 }
869 Int_t num_r=num_m1*num_m2+1;
870 Int_t *row_r=new Int_t[num_r];
871 Int_t *col_r=new Int_t[num_r];
872 Double_t *data_r=new Double_t[num_r];
873 num_r=0;
874 for(Int_t i=0;i<m1->GetNrows();i++) {
875 for(Int_t j=0;j<m2->GetNrows();j++) {
876 data_r[num_r]=0.0;
877 Int_t index_m1=rows_m1[i];
878 Int_t index_m2=rows_m2[j];
879 while((index_m1<rows_m1[i+1])&&(index_m2<rows_m2[j+1])) {
880 Int_t k1=cols_m1[index_m1];
881 Int_t k2=cols_m2[index_m2];
882 if(k1<k2) {
883 index_m1++;
884 } else if(k1>k2) {
885 index_m2++;
886 } else {
887 if(v_sparse) {
888 Int_t v_index=v_rows[k1];
889 if(v_index<v_rows[k1+1]) {
890 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
891 * v_data[v_index];
892 }
893 } else if(v) {
894 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
895 * (*v)(k1,0);
896 } else {
897 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2];
898 }
899 index_m1++;
900 index_m2++;
901 }
902 }
903 if(data_r[num_r] !=0.0) {
904 row_r[num_r]=i;
905 col_r[num_r]=j;
906 num_r++;
907 }
908 }
909 }
911 num_r,row_r,col_r,data_r);
912 delete[] row_r;
913 delete[] col_r;
914 delete[] data_r;
915 return r;
916}
917
918////////////////////////////////////////////////////////////////////////////////
919/// Add a sparse matrix, scaled by a factor, to another sparse matrix.
920///
921/// \param[inout] dest destination matrix
922/// \param[in] f scaling factor
923/// \param[in] src matrix to be added to dest
924///
925/// a replacement for
926/// ~~~
927/// (*dest) += f * (*src)
928/// ~~~
929/// which suffered from a bug in old ROOT versions.
930
932 const TMatrixDSparse *src) const
933{
934 const Int_t *dest_rows=dest->GetRowIndexArray();
935 const Int_t *dest_cols=dest->GetColIndexArray();
936 const Double_t *dest_data=dest->GetMatrixArray();
937 const Int_t *src_rows=src->GetRowIndexArray();
938 const Int_t *src_cols=src->GetColIndexArray();
939 const Double_t *src_data=src->GetMatrixArray();
940
941 if((dest->GetNrows()!=src->GetNrows())||
942 (dest->GetNcols()!=src->GetNcols())) {
943 Fatal("AddMSparse","inconsistent matrix rows %d!=%d OR cols %d!=%d",
944 src->GetNrows(),dest->GetNrows(),src->GetNcols(),dest->GetNcols());
945 }
946 Int_t nmax=dest->GetNrows()*dest->GetNcols();
947 Double_t *result_data=new Double_t[nmax];
948 Int_t *result_rows=new Int_t[nmax];
949 Int_t *result_cols=new Int_t[nmax];
950 Int_t n=0;
951 for(Int_t row=0;row<dest->GetNrows();row++) {
952 Int_t i_dest=dest_rows[row];
953 Int_t i_src=src_rows[row];
954 while((i_dest<dest_rows[row+1])||(i_src<src_rows[row+1])) {
955 Int_t col_dest=(i_dest<dest_rows[row+1]) ?
956 dest_cols[i_dest] : dest->GetNcols();
957 Int_t col_src =(i_src <src_rows[row+1] ) ?
958 src_cols [i_src] : src->GetNcols();
959 result_rows[n]=row;
960 if(col_dest<col_src) {
961 result_cols[n]=col_dest;
962 result_data[n]=dest_data[i_dest++];
963 } else if(col_dest>col_src) {
964 result_cols[n]=col_src;
965 result_data[n]=f*src_data[i_src++];
966 } else {
967 result_cols[n]=col_dest;
968 result_data[n]=dest_data[i_dest++]+f*src_data[i_src++];
969 }
970 if(result_data[n] !=0.0) {
971 if(!TMath::Finite(result_data[n])) {
972 Fatal("AddMSparse","Nan detected %d %d %d",
973 row,i_dest,i_src);
974 }
975 n++;
976 }
977 }
978 }
979 if(n<=0) {
980 n=1;
981 result_rows[0]=0;
982 result_cols[0]=0;
983 result_data[0]=0.0;
984 }
985 dest->SetMatrixArray(n,result_rows,result_cols,result_data);
986 delete[] result_data;
987 delete[] result_rows;
988 delete[] result_cols;
989}
990
991////////////////////////////////////////////////////////////////////////////////
992/// Get the inverse or pseudo-inverse of a positive, sparse matrix.
993///
994/// \param[in] A the sparse matrix to be inverted, has to be positive
995/// \param[inout] rankPtr if zero, suppress calculation of pseudo-inverse
996/// otherwise the rank of the matrix is returned in *rankPtr
997///
998/// return value: 0 or a new sparse matrix
999///
1000/// - if(rankPtr==0) return the inverse if it exists, or return 0
1001/// - else return a (pseudo-)inverse and store the rank of the matrix in
1002/// *rankPtr
1003///
1004///
1005/// the matrix inversion is optimized in performance for the case
1006/// where a large submatrix of A is diagonal
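///
/// A short sketch of the typical use inside this class (mirroring the core
/// unfolding algorithm above):
/// ~~~
/// Int_t rank=0;
/// TMatrixDSparse *inv=InvertMSparseSymmPos(fVxx,&rank);
/// if(rank!=GetNx()) {
///    // the matrix was singular, only a pseudo-inverse was calculated
/// }
/// ~~~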
1007
1009(const TMatrixDSparse *A,Int_t *rankPtr) const
1010{
1011
1012 if(A->GetNcols()!=A->GetNrows()) {
1013 Fatal("InvertMSparseSymmPos","inconsistent matrix row/col %d!=%d",
1014 A->GetNcols(),A->GetNrows());
1015 }
1016
1017 Bool_t *isZero=new Bool_t[A->GetNrows()];
1018 const Int_t *a_rows=A->GetRowIndexArray();
1019 const Int_t *a_cols=A->GetColIndexArray();
1020 const Double_t *a_data=A->GetMatrixArray();
1021
1022 // determine diagonal elements
1023 // Aii: diagonals of A
1024 Int_t iDiagonal=0;
1025 Int_t iBlock=A->GetNrows();
1026 Bool_t isDiagonal=kTRUE;
1027 TVectorD aII(A->GetNrows());
1028 Int_t nError=0;
1029 for(Int_t iA=0;iA<A->GetNrows();iA++) {
1030 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1031 Int_t jA=a_cols[indexA];
1032 if(iA==jA) {
1033 if(!(a_data[indexA]>=0.0)) nError++;
1034 aII(iA)=a_data[indexA];
1035 }
1036 }
1037 }
1038 if(nError>0) {
1039 Fatal("InvertMSparseSymmPos",
1040 "Matrix has %d negative elements on the diagonal", nError);
1041 delete[] isZero;
1042 return 0;
1043 }
1044
1045 // reorder matrix such that the largest block of zeros is swapped
1046 // to the lowest indices
1047 // the result of this ordering:
1048 // swap[i] : for index i point to the row in A
1049 // swapBack[iA] : for index iA in A point to the swapped index i
1050 // the indices are grouped into three groups
1051 // 0..iDiagonal-1 : these rows only have diagonal elements
1052 // iDiagonal..iBlock : these rows contain a diagonal block matrix
1053 //
1054 Int_t *swap=new Int_t[A->GetNrows()];
1055 for(Int_t j=0;j<A->GetNrows();j++) swap[j]=j;
1056 for(Int_t iStart=0;iStart<iBlock;iStart++) {
1057 Int_t nZero=0;
1058 for(Int_t i=iStart;i<iBlock;i++) {
1059 Int_t iA=swap[i];
1060 Int_t n=A->GetNrows()-(a_rows[iA+1]-a_rows[iA]);
1061 if(n>nZero) {
1062 Int_t tmp=swap[iStart];
1063 swap[iStart]=swap[i];
1064 swap[i]=tmp;
1065 nZero=n;
1066 }
1067 }
1068 for(Int_t i=0;i<A->GetNrows();i++) isZero[i]=kTRUE;
1069 Int_t iA=swap[iStart];
1070 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1071 Int_t jA=a_cols[indexA];
1072 isZero[jA]=kFALSE;
1073 if(iA!=jA) {
1074 isDiagonal=kFALSE;
1075 }
1076 }
1077 if(isDiagonal) {
1078 iDiagonal=iStart+1;
1079 } else {
1080 for(Int_t i=iStart+1;i<iBlock;) {
1081 if(isZero[swap[i]]) {
1082 i++;
1083 } else {
1084 iBlock--;
1085 Int_t tmp=swap[iBlock];
1086 swap[iBlock]=swap[i];
1087 swap[i]=tmp;
1088 }
1089 }
1090 }
1091 }
1092
1093 // for tests uncomment this:
1094 // iBlock=iDiagonal;
1095
1096
1097 // conditioning of the iBlock part
1098#ifdef CONDITION_BLOCK_PART
1099 Int_t nn=A->GetNrows()-iBlock;
1100 for(int inc=(nn+1)/2;inc;inc /= 2) {
1101 for(int i=inc;i<nn;i++) {
1102 int itmp=swap[i+iBlock];
1103 int j;
1104 for(j=i;(j>=inc)&&(aII(swap[j-inc+iBlock])<aII(itmp));j -=inc) {
1105 swap[j+iBlock]=swap[j-inc+iBlock];
1106 }
1107 swap[j+iBlock]=itmp;
1108 }
1109 }
1110#endif
1111 // construct array for swapping back
1112 Int_t *swapBack=new Int_t[A->GetNrows()];
1113 for(Int_t i=0;i<A->GetNrows();i++) {
1114 swapBack[swap[i]]=i;
1115 }
1116#ifdef DEBUG_DETAIL
1117 for(Int_t i=0;i<A->GetNrows();i++) {
1118 std::cout<<" "<<i<<" "<<swap[i]<<" "<<swapBack[i]<<"\n";
1119 }
1120 std::cout<<"after sorting\n";
1121 for(Int_t i=0;i<A->GetNrows();i++) {
1122 if(i==iDiagonal) std::cout<<"iDiagonal="<<i<<"\n";
1123 if(i==iBlock) std::cout<<"iBlock="<<i<<"\n";
1124 std::cout<<" "<<swap[i]<<" "<<aII(swap[i])<<"\n";
1125 }
1126 {
1127 // sanity check:
1128 // (1) sub-matrix swapped[0]..swapped[iDiagonal]
1129 // must not contain off-diagonals
1130 // (2) sub-matrix swapped[0]..swapped[iBlock-1] must be diagonal
1131 Int_t nDiag=0;
1132 Int_t nError1=0;
1133 Int_t nError2=0;
1134 Int_t nBlock=0;
1135 Int_t nNonzero=0;
1136 for(Int_t i=0;i<A->GetNrows();i++) {
1137 Int_t row=swap[i];
1138 for(Int_t indexA=a_rows[row];indexA<a_rows[row+1];indexA++) {
1139 Int_t j=swapBack[a_cols[indexA]];
1140 if(i==j) nDiag++;
1141 else if((i<iDiagonal)||(j<iDiagonal)) nError1++;
1142 else if((i<iBlock)&&(j<iBlock)) nError2++;
1143 else if((i<iBlock)||(j<iBlock)) nBlock++;
1144 else nNonzero++;
1145 }
1146 }
1147 if(nError1+nError2>0) {
1148 Fatal("InvertMSparseSymmPos","sparse matrix analysis failed %d %d %d %d %d",
1149 iDiagonal,iBlock,A->GetNrows(),nError1,nError2);
1150 }
1151 }
1152#endif
1153#ifdef DEBUG
1154 Info("InvertMSparseSymmPos","iDiagonal=%d iBlock=%d nRow=%d",
1155 iDiagonal,iBlock,A->GetNrows());
1156#endif
1157
1158 //=============================================
1159 // matrix inversion starts here
1160 //
1161 // the matrix is split into several parts
1162 // D1 0 0
1163 // A = ( 0 D2 B# )
1164 // 0 B C
1165 //
1166 // D1,D2 are diagonal matrices
1167 //
1168 // first, the D1 part is inverted by calculating 1/D1
1169 // dim(D1)= iDiagonal
1170 // next, the other parts are inverted using Schur complement
1171 //
1172 // 1/D1 0 0
1173 // Ainv = ( 0 E G# )
1174 // 0 G F^-1
1175 //
1176 // where F = C + (-B/D2) B#
1177 // G = (F^-1) (-B/D2)
1178 // E = 1/D2 + (-B#/D2) G
1179 //
1180 // the inverse of F is calculated using a Cholesky decomposition
1181 //
1182 // Error handling:
1183 // (a) if rankPtr==0: user requests inverse
1184 //
1185 // if D1 is not strictly positive, return NULL
1186 // if D2 is not strictly positive, return NULL
1187 // if F is not strictly positive (Cholesky decomposition failed)
1188 // return NULL
1189 // else
1190 // return Ainv as defined above
1191 //
1192 // (b) if rankPtr !=0: user requests pseudo-inverse
1193 // if D2 is not strictly positive or F is not strictly positive
1194 // calculate singular-value decomposition of
1195 // D2 B#
1196 // ( ) = O D O^-1
1197 // B C
1198 // if some eigenvalues are negative, return NULL
1199 // else calculate pseudo-inverse
1200 // *rankPtr = rank(D1)+rank(D)
1201 // else
1202 // calculate pseudo-inverse of D1 (D1_ii==0) ? 0 : 1/D1_ii
1203 // *rankPtr=rank(D1)+nrow(D2)+nrow(C)
1204 // return Ainv as defined above
1205
1206 Double_t *rEl_data=new Double_t[A->GetNrows()*A->GetNrows()];
1207 Int_t *rEl_col=new Int_t[A->GetNrows()*A->GetNrows()];
1208 Int_t *rEl_row=new Int_t[A->GetNrows()*A->GetNrows()];
1209 Int_t rNumEl=0;
1210
1211 //====================================================
1212 // invert D1
1213 Int_t rankD1=0;
1214 for(Int_t i=0;i<iDiagonal;++i) {
1215 Int_t iA=swap[i];
1216 if(aII(iA)>0.0) {
1217 rEl_col[rNumEl]=iA;
1218 rEl_row[rNumEl]=iA;
1219 rEl_data[rNumEl]=1./aII(iA);
1220 ++rankD1;
1221 ++rNumEl;
1222 }
1223 }
1224 if((!rankPtr)&&(rankD1!=iDiagonal)) {
1225 Fatal("InvertMSparseSymmPos",
1226 "diagonal part 1 has rank %d != %d, matrix can not be inverted",
1227 rankD1,iDiagonal);
1228 rNumEl=-1;
1229 }
1230
1231
1232 //====================================================
1233 // invert D2
1234 Int_t nD2=iBlock-iDiagonal;
1235 TMatrixDSparse *D2inv=0;
1236 if((rNumEl>=0)&&(nD2>0)) {
1237 Double_t *D2inv_data=new Double_t[nD2];
1238 Int_t *D2inv_col=new Int_t[nD2];
1239 Int_t *D2inv_row=new Int_t[nD2];
1240 Int_t D2invNumEl=0;
1241 for(Int_t i=0;i<nD2;++i) {
1242 Int_t iA=swap[i+iDiagonal];
1243 if(aII(iA)>0.0) {
1244 D2inv_col[D2invNumEl]=i;
1245 D2inv_row[D2invNumEl]=i;
1246 D2inv_data[D2invNumEl]=1./aII(iA);
1247 ++D2invNumEl;
1248 }
1249 }
1250 if(D2invNumEl==nD2) {
1251 D2inv=CreateSparseMatrix(nD2,nD2,D2invNumEl,D2inv_row,D2inv_col,
1252 D2inv_data);
1253 } else if(!rankPtr) {
1254 Fatal("InvertMSparseSymmPos",
1255 "diagonal part 2 has rank %d != %d, matrix can not be inverted",
1256 D2invNumEl,nD2);
1257 rNumEl=-2;
1258 }
1259 delete [] D2inv_data;
1260 delete [] D2inv_col;
1261 delete [] D2inv_row;
1262 }
1263
1264 //====================================================
1265 // invert F
1266
1267 Int_t nF=A->GetNrows()-iBlock;
1268 TMatrixDSparse *Finv=0;
1269 TMatrixDSparse *B=0;
1270 TMatrixDSparse *minusBD2inv=0;
1271 if((rNumEl>=0)&&(nF>0)&&((nD2==0)||D2inv)) {
1272 // construct matrices F and B
1273 Int_t nFmax=nF*nF;
1274 Double_t epsilonF2=fEpsMatrix;
1275 Double_t *F_data=new Double_t[nFmax];
1276 Int_t *F_col=new Int_t[nFmax];
1277 Int_t *F_row=new Int_t[nFmax];
1278 Int_t FnumEl=0;
1279
1280 Int_t nBmax=nF*(nD2+1);
1281 Double_t *B_data=new Double_t[nBmax];
1282 Int_t *B_col=new Int_t[nBmax];
1283 Int_t *B_row=new Int_t[nBmax];
1284 Int_t BnumEl=0;
1285
1286 for(Int_t i=0;i<nF;i++) {
1287 Int_t iA=swap[i+iBlock];
1288 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1289 Int_t jA=a_cols[indexA];
1290 Int_t j0=swapBack[jA];
1291 if(j0>=iBlock) {
1292 Int_t j=j0-iBlock;
1293 F_row[FnumEl]=i;
1294 F_col[FnumEl]=j;
1295 F_data[FnumEl]=a_data[indexA];
1296 FnumEl++;
1297 } else if(j0>=iDiagonal) {
1298 Int_t j=j0-iDiagonal;
1299 B_row[BnumEl]=i;
1300 B_col[BnumEl]=j;
1301 B_data[BnumEl]=a_data[indexA];
1302 BnumEl++;
1303 }
1304 }
1305 }
1306 TMatrixDSparse *F=0;
1307 if(FnumEl) {
1308#ifndef FORCE_EIGENVALUE_DECOMPOSITION
1309 F=CreateSparseMatrix(nF,nF,FnumEl,F_row,F_col,F_data);
1310#endif
1311 }
1312 delete [] F_data;
1313 delete [] F_col;
1314 delete [] F_row;
1315 if(BnumEl) {
1316 B=CreateSparseMatrix(nF,nD2,BnumEl,B_row,B_col,B_data);
1317 }
1318 delete [] B_data;
1319 delete [] B_col;
1320 delete [] B_row;
1321
1322 if(B && D2inv) {
1323 minusBD2inv=MultiplyMSparseMSparse(B,D2inv);
1324 if(minusBD2inv) {
1325 Int_t mbd2_nMax=minusBD2inv->GetRowIndexArray()
1326 [minusBD2inv->GetNrows()];
1327 Double_t *mbd2_data=minusBD2inv->GetMatrixArray();
1328 for(Int_t i=0;i<mbd2_nMax;i++) {
1329 mbd2_data[i] = - mbd2_data[i];
1330 }
1331 }
1332 }
1333 if(minusBD2inv && F) {
1334 TMatrixDSparse *minusBD2invBt=
1335 MultiplyMSparseMSparseTranspVector(minusBD2inv,B,0);
1336 AddMSparse(F,1.,minusBD2invBt);
1337 DeleteMatrix(&minusBD2invBt);
1338 }
1339
1340 if(F) {
1341 // cholesky decomposition with preconditioning
1342 const Int_t *f_rows=F->GetRowIndexArray();
1343 const Int_t *f_cols=F->GetColIndexArray();
1344 const Double_t *f_data=F->GetMatrixArray();
1345 // cholesky-type decomposition of F
1346 TMatrixD c(nF,nF);
1347 Int_t nErrorF=0;
1348 for(Int_t i=0;i<nF;i++) {
1349 for(Int_t indexF=f_rows[i];indexF<f_rows[i+1];indexF++) {
1350 if(f_cols[indexF]>=i) c(f_cols[indexF],i)=f_data[indexF];
1351 }
1352 // calculate diagonal element
1353 Double_t c_ii=c(i,i);
1354 for(Int_t j=0;j<i;j++) {
1355 Double_t c_ij=c(i,j);
1356 c_ii -= c_ij*c_ij;
1357 }
1358 if(c_ii<=0.0) {
1359 nErrorF++;
1360 break;
1361 }
1362 c_ii=TMath::Sqrt(c_ii);
1363 c(i,i)=c_ii;
1364 // off-diagonal elements
1365 for(Int_t j=i+1;j<nF;j++) {
1366 Double_t c_ji=c(j,i);
1367 for(Int_t k=0;k<i;k++) {
1368 c_ji -= c(i,k)*c(j,k);
1369 }
1370 c(j,i) = c_ji/c_ii;
1371 }
1372 }
1373 // check condition of dInv
1374 if(!nErrorF) {
1375 Double_t dmin=c(0,0);
1376 Double_t dmax=dmin;
1377 for(Int_t i=1;i<nF;i++) {
1378 dmin=TMath::Min(dmin,c(i,i));
1379 dmax=TMath::Max(dmax,c(i,i));
1380 }
1381#ifdef DEBUG
1382 std::cout<<"dmin,dmax: "<<dmin<<" "<<dmax<<"\n";
1383#endif
1384 if(dmin/dmax<epsilonF2*nF) nErrorF=2;
1385 }
1386 if(!nErrorF) {
1387 // here: F = c c#
1388 // construct inverse of c
1389 TMatrixD cinv(nF,nF);
1390 for(Int_t i=0;i<nF;i++) {
1391 cinv(i,i)=1./c(i,i);
1392 }
1393 for(Int_t i=0;i<nF;i++) {
1394 for(Int_t j=i+1;j<nF;j++) {
1395 Double_t tmp=-c(j,i)*cinv(i,i);
1396 for(Int_t k=i+1;k<j;k++) {
1397 tmp -= cinv(k,i)*c(j,k);
1398 }
1399 cinv(j,i)=tmp*cinv(j,j);
1400 }
1401 }
1402 TMatrixDSparse cInvSparse(cinv);
1404 (&cInvSparse,&cInvSparse);
1405 }
1406 DeleteMatrix(&F);
1407 }
1408 }
1409
1410 // here:
1411 // rNumEl>=0: diagonal part has been inverted
1412 // (nD2==0)||D2inv : D2 part has been inverted or is empty
1413 // (nF==0)||Finv : F part has been inverted or is empty
1414
1415 Int_t rankD2Block=0;
1416 if(rNumEl>=0) {
1417 if( ((nD2==0)||D2inv) && ((nF==0)||Finv) ) {
1418 // all matrix parts have been inverted, compose full matrix
1419 // 1/D1 0 0
1420 // Ainv = ( 0 E G# )
1421 // 0 G F^-1
1422 //
1423 // G = (F^-1) (-B/D2)
1424 // E = 1/D2 + (-B#/D2) G
1425
1426 TMatrixDSparse *G=0;
1427 if(Finv && minusBD2inv) {
1428 G=MultiplyMSparseMSparse(Finv,minusBD2inv);
1429 }
1430 TMatrixDSparse *E=0;
1431 if(D2inv) E=new TMatrixDSparse(*D2inv);
1432 if(G && minusBD2inv) {
1433 TMatrixDSparse *minusBD2invTransG=
1434 MultiplyMSparseTranspMSparse(minusBD2inv,G);
1435 if(E) {
1436 AddMSparse(E,1.,minusBD2invTransG);
1437 DeleteMatrix(&minusBD2invTransG);
1438 } else {
1439 E=minusBD2invTransG;
1440 }
1441 }
1442 // pack matrix E to r
1443 if(E) {
1444 const Int_t *e_rows=E->GetRowIndexArray();
1445 const Int_t *e_cols=E->GetColIndexArray();
1446 const Double_t *e_data=E->GetMatrixArray();
1447 for(Int_t iE=0;iE<E->GetNrows();iE++) {
1448 Int_t iA=swap[iE+iDiagonal];
1449 for(Int_t indexE=e_rows[iE];indexE<e_rows[iE+1];++indexE) {
1450 Int_t jE=e_cols[indexE];
1451 Int_t jA=swap[jE+iDiagonal];
1452 rEl_col[rNumEl]=iA;
1453 rEl_row[rNumEl]=jA;
1454 rEl_data[rNumEl]=e_data[indexE];
1455 ++rNumEl;
1456 }
1457 }
1458 DeleteMatrix(&E);
1459 }
1460 // pack matrix G to r
1461 if(G) {
1462 const Int_t *g_rows=G->GetRowIndexArray();
1463 const Int_t *g_cols=G->GetColIndexArray();
1464 const Double_t *g_data=G->GetMatrixArray();
1465 for(Int_t iG=0;iG<G->GetNrows();iG++) {
1466 Int_t iA=swap[iG+iBlock];
1467 for(Int_t indexG=g_rows[iG];indexG<g_rows[iG+1];++indexG) {
1468 Int_t jG=g_cols[indexG];
1469 Int_t jA=swap[jG+iDiagonal];
1470 // G
1471 rEl_col[rNumEl]=iA;
1472 rEl_row[rNumEl]=jA;
1473 rEl_data[rNumEl]=g_data[indexG];
1474 ++rNumEl;
1475 // G#
1476 rEl_col[rNumEl]=jA;
1477 rEl_row[rNumEl]=iA;
1478 rEl_data[rNumEl]=g_data[indexG];
1479 ++rNumEl;
1480 }
1481 }
1482 DeleteMatrix(&G);
1483 }
1484 if(Finv) {
1485 // pack matrix Finv to r
1486 const Int_t *finv_rows=Finv->GetRowIndexArray();
1487 const Int_t *finv_cols=Finv->GetColIndexArray();
1488 const Double_t *finv_data=Finv->GetMatrixArray();
1489 for(Int_t iF=0;iF<Finv->GetNrows();iF++) {
1490 Int_t iA=swap[iF+iBlock];
1491 for(Int_t indexF=finv_rows[iF];indexF<finv_rows[iF+1];++indexF) {
1492 Int_t jF=finv_cols[indexF];
1493 Int_t jA=swap[jF+iBlock];
1494 rEl_col[rNumEl]=iA;
1495 rEl_row[rNumEl]=jA;
1496 rEl_data[rNumEl]=finv_data[indexF];
1497 ++rNumEl;
1498 }
1499 }
1500 }
1501 rankD2Block=nD2+nF;
1502 } else if(!rankPtr) {
1503 rNumEl=-3;
1504 Fatal("InvertMSparseSymmPos",
1505 "non-trivial part has rank < %d, matrix can not be inverted",
1506 nF);
1507 } else {
1508 // part of the matrix could not be inverted, get eigenvalue
1509 // decomposition
1510 Int_t nEV=nD2+nF;
1511 Double_t epsilonEV2=fEpsMatrix /* *nEV*nEV */;
1512 Info("InvertMSparseSymmPos",
1513 "cholesky-decomposition failed, try eigenvalue analysis");
1514#ifdef DEBUG
1515 std::cout<<"nEV="<<nEV<<" iDiagonal="<<iDiagonal<<"\n";
1516#endif
1517 TMatrixDSym EV(nEV);
1518 for(Int_t i=0;i<nEV;i++) {
1519 Int_t iA=swap[i+iDiagonal];
1520 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1521 Int_t jA=a_cols[indexA];
1522 Int_t j0=swapBack[jA];
1523 if(j0>=iDiagonal) {
1524 Int_t j=j0-iDiagonal;
1525#ifdef DEBUG
1526 if((i<0)||(j<0)||(i>=nEV)||(j>=nEV)) {
1527 std::cout<<" error "<<nEV<<" "<<i<<" "<<j<<"\n";
1528 }
1529#endif
1530 if(!TMath::Finite(a_data[indexA])) {
1531 Fatal("InvertMSparseSymmPos",
1532 "non-finite number detected element %d %d\n",
1533 iA,jA);
1534 }
1535 EV(i,j)=a_data[indexA];
1536 }
1537 }
1538 }
1539 // EV.Print();
1540 TMatrixDSymEigen Eigen(EV);
1541#ifdef DEBUG
1542 std::cout<<"Eigenvalues\n";
1543 // Eigen.GetEigenValues().Print();
1544#endif
1545 Int_t errorEV=0;
1546 Double_t condition=1.0;
1547 if(Eigen.GetEigenValues()(0)<0.0) {
1548 errorEV=1;
1549 } else if(Eigen.GetEigenValues()(0)>0.0) {
1550 condition=Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0);
1551 }
1552 if(condition<-epsilonEV2) {
1553 errorEV=2;
1554 }
1555 if(errorEV) {
1556 if(errorEV==1) {
1557 Error("InvertMSparseSymmPos",
1558 "Largest Eigenvalue is negative %f",
1559 Eigen.GetEigenValues()(0));
1560 } else {
1561 Error("InvertMSparseSymmPos",
1562 "Some Eigenvalues are negative (EV%d/EV0=%g epsilon=%g)",
1563 nEV-1,
1564 Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0),
1565 epsilonEV2);
1566 }
1567 }
1568 // compose inverse matrix
1569 rankD2Block=0;
1570 Double_t setToZero=epsilonEV2*Eigen.GetEigenValues()(0);
1571 TMatrixD inverseEV(nEV,1);
1572 for(Int_t i=0;i<nEV;i++) {
1573 Double_t x=Eigen.GetEigenValues()(i);
1574 if(x>setToZero) {
1575 inverseEV(i,0)=1./x;
1576 ++rankD2Block;
1577 }
1578 }
1581 (&V,&V,&inverseEV);
1582
1583 // pack matrix VDVt to r
1584 const Int_t *vdvt_rows=VDVt->GetRowIndexArray();
1585 const Int_t *vdvt_cols=VDVt->GetColIndexArray();
1586 const Double_t *vdvt_data=VDVt->GetMatrixArray();
1587 for(Int_t iVDVt=0;iVDVt<VDVt->GetNrows();iVDVt++) {
1588 Int_t iA=swap[iVDVt+iDiagonal];
1589 for(Int_t indexVDVt=vdvt_rows[iVDVt];
1590 indexVDVt<vdvt_rows[iVDVt+1];++indexVDVt) {
1591 Int_t jVDVt=vdvt_cols[indexVDVt];
1592 Int_t jA=swap[jVDVt+iDiagonal];
1593 rEl_col[rNumEl]=iA;
1594 rEl_row[rNumEl]=jA;
1595 rEl_data[rNumEl]=vdvt_data[indexVDVt];
1596 ++rNumEl;
1597 }
1598 }
1599 DeleteMatrix(&VDVt);
1600 }
1601 }
1602 if(rankPtr) {
1603 *rankPtr=rankD1+rankD2Block;
1604 }
1605
1606
1607 DeleteMatrix(&D2inv);
1608 DeleteMatrix(&Finv);
1609 DeleteMatrix(&B);
1610 DeleteMatrix(&minusBD2inv);
1611
1612 delete [] swap;
1613 delete [] swapBack;
1614 delete [] isZero;
1615
1616 TMatrixDSparse *r=(rNumEl>=0) ?
1617 CreateSparseMatrix(A->GetNrows(),A->GetNrows(),rNumEl,
1618 rEl_row,rEl_col,rEl_data) : 0;
1619 delete [] rEl_data;
1620 delete [] rEl_col;
1621 delete [] rEl_row;
1622
1623#ifdef DEBUG_DETAIL
1624 // sanity test
1625 if(r) {
1629
1630 TMatrixD ar(*Ar);
1631 TMatrixD a(*A);
1632 TMatrixD ara(*ArA);
1633 TMatrixD R(*r);
1634 TMatrixD rar(*rAr);
1635
1636 DeleteMatrix(&Ar);
1637 DeleteMatrix(&ArA);
1638 DeleteMatrix(&rAr);
1639
1640 Double_t epsilonA2=fEpsMatrix /* *a.GetNrows()*a.GetNcols() */;
1641 for(Int_t i=0;i<a.GetNrows();i++) {
1642 for(Int_t j=0;j<a.GetNcols();j++) {
1643 // ar should be symmetric
1644 if(TMath::Abs(ar(i,j)-ar(j,i))>
1645 epsilonA2*(TMath::Abs(ar(i,j))+TMath::Abs(ar(j,i)))) {
1646 std::cout<<"Ar is not symmetric Ar("<<i<<","<<j<<")="<<ar(i,j)
1647 <<" Ar("<<j<<","<<i<<")="<<ar(j,i)<<"\n";
1648 }
1649 // ara should be equal a
1650 if(TMath::Abs(ara(i,j)-a(i,j))>
1651 epsilonA2*(TMath::Abs(ara(i,j))+TMath::Abs(a(i,j)))) {
1652 std::cout<<"ArA is not equal A ArA("<<i<<","<<j<<")="<<ara(i,j)
1653 <<" A("<<i<<","<<j<<")="<<a(i,j)<<"\n";
1654 }
1655 // rAr should be equal to r
1656 if(TMath::Abs(rar(i,j)-R(i,j))>
1657 epsilonA2*(TMath::Abs(rar(i,j))+TMath::Abs(R(i,j)))) {
1658 std::cout<<"rAr is not equal r rAr("<<i<<","<<j<<")="<<rar(i,j)
1659 <<" r("<<i<<","<<j<<")="<<R(i,j)<<"\n";
1660 }
1661 }
1662 }
1663 if(rankPtr) std::cout<<"rank="<<*rankPtr<<"\n";
1664 } else {
1665 std::cout<<"Matrix is not positive\n";
1666 }
1667#endif
1668 return r;
1669
1670}
1671
1672////////////////////////////////////////////////////////////////////////////////
1673/// Get bin name of an output bin.
1674///
1675/// \param[in] iBinX bin number
1676///
1677/// Return value: name of the bin
1678///
1679/// For TUnfold and TUnfoldSys, this function simply returns the bin
1680/// number as a string. This function really only makes sense in the
1681/// context of TUnfoldDensity, where binning schemes are implemented
1682/// using the class TUnfoldBinning, and non-trivial bin names are
1683/// returned.
1684
1686{
1687 return TString::Format("#%d",iBinX);
1688}
1689
1690////////////////////////////////////////////////////////////////////////////////
1691/// Set up response matrix and regularisation scheme.
1692///
1693/// \param[in] hist_A matrix of MC events that describes the migrations
1694/// \param[in] histmap mapping of the histogram axes
1695/// \param[in] regmode (default=kRegModeSize) global regularisation mode
1696/// \param[in] constraint (default=kEConstraintArea) type of constraint
1697///
1698/// Treatment of overflow bins in the matrix hist_A
1699///
1700/// - Events reconstructed in underflow or overflow bins are counted
1701/// as inefficiency. They have to be filled properly.
1702/// - Events where the truth level is in underflow or overflow bins are
1703/// treated as a part of the generator level distribution.
1704/// The full truth level distribution (including underflow and
1705/// overflow) is unfolded.
1706///
1707/// If unsure, do the following:
1708///
1709/// - store events where the truth is in underflow or overflow
1710/// (sometimes called "fakes") in a separate TH1. Ensure that the
1711/// truth-level underflow and overflow bins of hist_A are all zero.
1712/// - the fakes are background to the
1713/// measurement. Use the classes TUnfoldSys and TUnfoldDensity instead
1714/// of the plain TUnfold for subtracting background.
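///
/// A minimal sketch of the constructor call, assuming the truth variable is
/// on the x-axis of hist_A (histMigrations is a placeholder name):
/// ~~~
/// TUnfold unfold(histMigrations,TUnfold::kHistMapOutputHoriz);
/// ~~~
/// If the truth variable is on the y-axis, use TUnfold::kHistMapOutputVert
/// instead.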
1715
1716TUnfold::TUnfold(const TH2 *hist_A, EHistMap histmap, ERegMode regmode,
1717 EConstraint constraint)
1718{
1719 // data members initialized to something different from zero:
1720 // fA: filled from hist_A
1721 // fDA: filled from hist_A
1722 // fX0: filled from hist_A
1723 // fL: filled depending on the regularisation scheme
1724 InitTUnfold();
1725 SetConstraint(constraint);
1726 Int_t nx0, nx, ny;
1727 if (histmap == kHistMapOutputHoriz) {
1728 // include underflow and overflow bins on the X axis
1729 nx0 = hist_A->GetNbinsX()+2;
1730 ny = hist_A->GetNbinsY();
1731 } else {
1732 // include underflow and overflow bins on the Y axis
1733 nx0 = hist_A->GetNbinsY()+2;
1734 ny = hist_A->GetNbinsX();
1735 }
1736 nx = 0;
1737 // fNx is expected to be nx0, but the input matrix may be ill-formed
1738 // -> all columns with zero events have to be removed
1739 // (because y does not contain any information on that bin in x)
1740 fSumOverY.Set(nx0);
1741 fXToHist.Set(nx0);
1742 fHistToX.Set(nx0);
1743 Int_t nonzeroA=0;
1744 // loop
1745 // - calculate bias distribution
1746 // sum_over_y
1747 // - count those generator bins which can be unfolded
1748 // fNx
1749 // - histogram bins are added to the lookup-tables
1750 // fHistToX[] and fXToHist[]
1751 // these convert histogram bins to matrix indices and vice versa
1752 // - the number of non-zero elements is counted
1753 // nonzeroA
1754 Int_t nskipped=0;
1755 for (Int_t ix = 0; ix < nx0; ix++) {
1756 // calculate sum over all detector bins
1757 // excluding the overflow bins
1758 Double_t sum = 0.0;
1759 Int_t nonZeroY = 0;
1760 for (Int_t iy = 0; iy < ny; iy++) {
1761 Double_t z;
1762 if (histmap == kHistMapOutputHoriz) {
1763 z = hist_A->GetBinContent(ix, iy + 1);
1764 } else {
1765 z = hist_A->GetBinContent(iy + 1, ix);
1766 }
1767 if (z != 0.0) {
1768 nonzeroA++;
1769 nonZeroY++;
1770 sum += z;
1771 }
1772 }
1773 // check whether there is any sensitivity to this generator bin
1774
1775 if (nonZeroY) {
1776 // update mapping tables to convert bin number to matrix index
1777 fXToHist[nx] = ix;
1778 fHistToX[ix] = nx;
1779 // add overflow bins -> important to keep track of the
1780 // non-reconstructed events
1781 fSumOverY[nx] = sum;
1782 if (histmap == kHistMapOutputHoriz) {
1783 fSumOverY[nx] +=
1784 hist_A->GetBinContent(ix, 0) +
1785 hist_A->GetBinContent(ix, ny + 1);
1786 } else {
1787 fSumOverY[nx] +=
1788 hist_A->GetBinContent(0, ix) +
1789 hist_A->GetBinContent(ny + 1, ix);
1790 }
1791 nx++;
1792 } else {
1793 nskipped++;
1794 // histogram bin pointing to -1 (non-existing matrix entry)
1795 fHistToX[ix] = -1;
1796 }
1797 }
1798 Int_t underflowBin=0,overflowBin=0;
1799 for (Int_t ix = 0; ix < nx; ix++) {
1800 Int_t ibinx = fXToHist[ix];
1801 if(ibinx<1) underflowBin=1;
1802 if (histmap == kHistMapOutputHoriz) {
1803 if(ibinx>hist_A->GetNbinsX()) overflowBin=1;
1804 } else {
1805 if(ibinx>hist_A->GetNbinsY()) overflowBin=1;
1806 }
1807 }
1808 if(nskipped) {
1809 Int_t nprint=0;
1810 Int_t ixfirst=-1,ixlast=-1;
1811 TString binlist;
1812 int nDisconnected=0;
1813 for (Int_t ix = 0; ix < nx0; ix++) {
1814 if(fHistToX[ix]<0) {
1815 nprint++;
1816 if(ixlast<0) {
1817 binlist +=" ";
1818 binlist +=ix;
1819 ixfirst=ix;
1820 }
1821 ixlast=ix;
1822 nDisconnected++;
1823 }
1824 if(((fHistToX[ix]>=0)&&(ixlast>=0))||
1825 (nprint==nskipped)) {
1826 if(ixlast>ixfirst) {
1827 binlist += "-";
1828 binlist += ixlast;
1829 }
1830 ixfirst=-1;
1831 ixlast=-1;
1832 }
1833 if(nprint==nskipped) {
1834 break;
1835 }
1836 }
1837 if(nskipped==(2-underflowBin-overflowBin)) {
1838 Info("TUnfold","underflow and overflow bin "
1839 "do not depend on the input data");
1840 } else {
1841 Warning("TUnfold","%d output bins "
1842 "do not depend on the input data %s",nDisconnected,
1843 static_cast<const char *>(binlist));
1844 }
1845 }
1846 // store bias as matrix
1847 fX0 = new TMatrixD(nx, 1, fSumOverY.GetArray());
1848 // store non-zero elements in sparse matrix fA
1849 // construct sparse matrix fA
1850 Int_t *rowA = new Int_t[nonzeroA];
1851 Int_t *colA = new Int_t[nonzeroA];
1852 Double_t *dataA = new Double_t[nonzeroA];
1853 Int_t index=0;
1854 for (Int_t iy = 0; iy < ny; iy++) {
1855 for (Int_t ix = 0; ix < nx; ix++) {
1856 Int_t ibinx = fXToHist[ix];
1857 Double_t z;
1858 if (histmap == kHistMapOutputHoriz) {
1859 z = hist_A->GetBinContent(ibinx, iy + 1);
1860 } else {
1861 z = hist_A->GetBinContent(iy + 1, ibinx);
1862 }
1863 if (z != 0.0) {
1864 rowA[index]=iy;
1865 colA[index] = ix;
1866 dataA[index] = z / fSumOverY[ix];
1867 index++;
1868 }
1869 }
1870 }
1871 if(underflowBin && overflowBin) {
1872 Info("TUnfold","%d input bins and %d output bins (includes 2 underflow/overflow bins)",ny,nx);
1873 } else if(underflowBin) {
1874 Info("TUnfold","%d input bins and %d output bins (includes 1 underflow bin)",ny,nx);
1875 } else if(overflowBin) {
1876 Info("TUnfold","%d input bins and %d output bins (includes 1 overflow bin)",ny,nx);
1877 } else {
1878 Info("TUnfold","%d input bins and %d output bins",ny,nx);
1879 }
1880 fA = CreateSparseMatrix(ny,nx,index,rowA,colA,dataA);
1881 if(ny<nx) {
1882 Error("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1883 } else if(ny==nx) {
1884 Warning("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1885 }
1886 delete[] rowA;
1887 delete[] colA;
1888 delete[] dataA;
1889 // regularisation conditions squared
1890 fL = new TMatrixDSparse(1, GetNx());
1891 if (regmode != kRegModeNone) {
1892 Int_t nError=RegularizeBins(0, 1, nx0, regmode);
1893 if(nError>0) {
1894 if(nError>1) {
1895 Info("TUnfold",
1896 "%d regularisation conditions have been skipped",nError);
1897 } else {
1898 Info("TUnfold",
1899 "One regularisation condition has been skipped");
1900 }
1901 }
1902 }
1903}
1904
1905////////////////////////////////////////////////////////////////////////////////
1906/// Set bias vector.
1907///
1908/// \param[in] bias histogram with new bias vector
1909///
1910/// the initial bias vector is determined from the response matrix
1911/// but may be changed by using this method
1912
1913void TUnfold::SetBias(const TH1 *bias)
1914{
1915 DeleteMatrix(&fX0);
1916 fX0 = new TMatrixD(GetNx(), 1);
1917 for (Int_t i = 0; i < GetNx(); i++) {
1918 (*fX0) (i, 0) = bias->GetBinContent(fXToHist[i]);
1919 }
1920}
1921
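A sketch of how the default bias could be replaced by an independent prediction; histTheory, histData and tau are hypothetical, and the binning of histTheory is assumed to match the truth axis of the response matrix.

   TH1D *histTheory = new TH1D("histTheory","alternative truth prediction",
                               10,0.,100.);
   // ... fill histTheory ...
   unfold.SetBias(histTheory);         // x0 is now taken from histTheory
   unfold.DoUnfold(tau,histData,1.0);  // scaleBias=1.0 switches the bias term on
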
1922////////////////////////////////////////////////////////////////////////////////
1923/// Add a row of regularisation conditions to the matrix L.
1924///
1925/// \param[in] i0 truth histogram bin number
1926/// \param[in] f0 entry in the matrix L, column i0
1927/// \param[in] i1 truth histogram bin number
1928/// \param[in] f1 entry in the matrix L, column i1
1929/// \param[in] i2 truth histogram bin number
1930/// \param[in] f2 entry in the matrix L, column i2
1931///
1932/// the arguments are used to form one row (k) of the matrix L, where
1933/// \f$ L_{k,i0}=f0 \f$ and \f$ L_{k,i1}=f1 \f$ and \f$ L_{k,i2}=f2 \f$
1934/// negative indexes i0,i1,i2 are ignored.
1935
1936 Bool_t TUnfold::AddRegularisationCondition
1937 (Int_t i0,Double_t f0,Int_t i1,Double_t f1,Int_t i2,Double_t f2)
1938{
1939 Int_t indices[3];
1940 Double_t data[3];
1941 Int_t nEle=0;
1942
1943 if(i2>=0) {
1944 data[nEle]=f2;
1945 indices[nEle]=i2;
1946 nEle++;
1947 }
1948 if(i1>=0) {
1949 data[nEle]=f1;
1950 indices[nEle]=i1;
1951 nEle++;
1952 }
1953 if(i0>=0) {
1954 data[nEle]=f0;
1955 indices[nEle]=i0;
1956 nEle++;
1957 }
1958 return AddRegularisationCondition(nEle,indices,data);
1959}
1960
1961////////////////////////////////////////////////////////////////////////////////
1962/// Add a row of regularisation conditions to the matrix L.
1963///
1964/// \param[in] nEle number of valid entries in indices and rowData
1965/// \param[in] indices column numbers of L to fill
1966/// \param[in] rowData data to fill into the new row of L
1967///
1968/// returns true if a row was added, false otherwise
1969///
1970/// A new row k is added to the matrix L, its dimension is expanded.
1971/// The new elements \f$ L_{ki} \f$ are filled from the array rowData[]
1972 /// where the indices i are taken from the array indices[].
1973
1974 Bool_t TUnfold::AddRegularisationCondition
1975 (Int_t nEle,const Int_t *indices,const Double_t *rowData)
1976{
1977 Bool_t r=kTRUE;
1978 const Int_t *l0_rows=fL->GetRowIndexArray();
1979 const Int_t *l0_cols=fL->GetColIndexArray();
1980 const Double_t *l0_data=fL->GetMatrixArray();
1981
1982 Int_t nF=l0_rows[fL->GetNrows()]+nEle;
1983 Int_t *l_row=new Int_t[nF];
1984 Int_t *l_col=new Int_t[nF];
1985 Double_t *l_data=new Double_t[nF];
1986 // decode original matrix
1987 nF=0;
1988 for(Int_t row=0;row<fL->GetNrows();row++) {
1989 for(Int_t k=l0_rows[row];k<l0_rows[row+1];k++) {
1990 l_row[nF]=row;
1991 l_col[nF]=l0_cols[k];
1992 l_data[nF]=l0_data[k];
1993 nF++;
1994 }
1995 }
1996
1997 // if the original matrix does not have any data, reset its row count
1998 Int_t rowMax=0;
1999 if(nF>0) {
2000 rowMax=fL->GetNrows();
2001 }
2002
2003 // add the new regularisation conditions
2004 for(Int_t i=0;i<nEle;i++) {
2005 Int_t col=fHistToX[indices[i]];
2006 if(col<0) {
2007 r=kFALSE;
2008 break;
2009 }
2010 l_row[nF]=rowMax;
2011 l_col[nF]=col;
2012 l_data[nF]=rowData[i];
2013 nF++;
2014 }
2015
2016 // replace the old matrix fL
2017 if(r) {
2018 DeleteMatrix(&fL);
2019 fL=CreateSparseMatrix(rowMax+1,GetNx(),nF,l_row,l_col,l_data);
2020 }
2021 delete [] l_row;
2022 delete [] l_col;
2023 delete [] l_data;
2024 return r;
2025}
2026
2027////////////////////////////////////////////////////////////////////////////////
2028/// Add a regularisation condition on the magnitude of a truth bin.
2029///
2030/// \param[in] bin bin number
2031/// \param[in] scale (default=1) scale factor
2032///
2033/// this adds one row to L, where the element <b>bin</b> takes the
2034/// value <b>scale</b>
2035///
2036/// return value: 0 if ok, 1 if the condition has not been
2037/// added. Conditions which are not added typically correspond to bin
2038/// numbers where the truth can not be unfolded (either response
2039/// matrix is empty or the data do not constrain).
2040///
2041/// The RegularizeXXX() methods can be used to set up a custom matrix
2042/// of regularisation conditions. In this case, start with an empty
2043/// matrix L (argument regmode=kRegModeNone in the constructor)
2044
2046{
2047 // add regularisation on the size of bin i
2048 // bin: bin number
2049 // scale: size of the regularisation
2050 // return value: number of conditions which have been skipped
2051 // modifies data member fL
2052
2055
2056 return AddRegularisationCondition(bin,scale) ? 0 : 1;
2057}
2058
2059////////////////////////////////////////////////////////////////////////////////
2060 /// Add a regularisation condition on the difference of two truth bins.
2061///
2062/// \param[in] left_bin bin number
2063/// \param[in] right_bin bin number
2064/// \param[in] scale (default=1) scale factor
2065///
2066/// this adds one row to L, where the element <b>left_bin</b> takes the
2067/// value <b>-scale</b> and the element <b>right_bin</b> takes the
2068/// value <b>+scale</b>
2069///
2070/// return value: 0 if ok, 1 if the condition has not been
2071/// added. Conditions which are not added typically correspond to bin
2072/// numbers where the truth can not be unfolded (either response
2073/// matrix is empty or the data do not constrain).
2074///
2075/// The RegularizeXXX() methods can be used to set up a custom matrix
2076/// of regularisation conditions. In this case, start with an empty
2077/// matrix L (argument regmode=kRegModeNone in the constructor)
2078
2079Int_t TUnfold::RegularizeDerivative(int left_bin, int right_bin,
2080 Double_t scale)
2081{
2082 // add regularisation on the difference of two bins
2083 // left_bin: 1st bin
2084 // right_bin: 2nd bin
2085 // scale: size of the regularisation
2086 // return value: number of conditions which have been skipped
2087 // modifies data member fL
2088
2091
2092 return AddRegularisationCondition(left_bin,-scale,right_bin,scale) ? 0 : 1;
2093}
2094
2095////////////////////////////////////////////////////////////////////////////////
2096 /// Add a regularisation condition on the curvature of three truth bins.
2097///
2098/// \param[in] left_bin bin number
2099/// \param[in] center_bin bin number
2100/// \param[in] right_bin bin number
2101/// \param[in] scale_left (default=1) scale factor
2102/// \param[in] scale_right (default=1) scale factor
2103///
2104/// this adds one row to L, where the element <b>left_bin</b> takes the
2105/// value <b>-scale_left</b>, the element <b>right_bin</b> takes the
2106/// value <b>-scale_right</b> and the element <b>center_bin</b> takes
2107/// the value <b>scale_left+scale_right</b>
2108///
2109/// return value: 0 if ok, 1 if the condition has not been
2110/// added. Conditions which are not added typically correspond to bin
2111/// numbers where the truth can not be unfolded (either response
2112/// matrix is empty or the data do not constrain).
2113///
2114/// The RegularizeXXX() methods can be used to set up a custom matrix
2115/// of regularisation conditions. In this case, start with an empty
2116/// matrix L (argument regmode=kRegModeNone in the constructor)
2117
2118Int_t TUnfold::RegularizeCurvature(int left_bin, int center_bin,
2119 int right_bin,
2120 Double_t scale_left,
2121 Double_t scale_right)
2122{
2123 // add regularisation on the curvature through 3 bins (2nd derivative)
2124 // left_bin: 1st bin
2125 // center_bin: 2nd bin
2126 // right_bin: 3rd bin
2127 // scale_left: scale factor on center-left difference
2128 // scale_right: scale factor on right-center difference
2129 // return value: number of conditions which have been skipped
2130 // modifies data member fL
2131
2134
2135 return AddRegularisationCondition
2136 (left_bin,-scale_left,
2137 center_bin,scale_left+scale_right,
2138 right_bin,-scale_right)
2139 ? 0 : 1;
2140}
2141
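One possible way to build a custom regularisation matrix, sketched under the assumption of a ten-bin truth axis; this is not code from this file, only an illustration of the RegularizeXXX() calls described above.

   // start with an empty matrix L
   TUnfold unfold(histMC,TUnfold::kHistMapOutputHoriz,TUnfold::kRegModeNone);
   // regularise the size of the edge bins ...
   unfold.RegularizeSize(1);
   unfold.RegularizeSize(10);
   // ... and the curvature of the interior bins
   for(Int_t i=1;i<=8;i++) {
      unfold.RegularizeCurvature(i,i+1,i+2);
   }
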
2142////////////////////////////////////////////////////////////////////////////////
2143/// Add regularisation conditions for a group of bins.
2144///
2145/// \param[in] start first bin number
2146/// \param[in] step step size
2147/// \param[in] nbin number of bins
2148/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2149/// kRegModeDerivative, kRegModeCurvature)
2150///
2151/// add regularisation conditions for a group of equidistant
2152/// bins. There are <b>nbin</b> bins, starting with bin <b>start</b>
2153/// and with a distance of <b>step</b> between bins.
2154///
2155/// Return value: number of regularisation conditions which could not
2156/// be added.
2157///
2158/// Conditions which are not added typically correspond to bin
2159/// numbers where the truth can not be unfolded (either response
2160/// matrix is empty or the data do not constrain).
2161
2162Int_t TUnfold::RegularizeBins(int start, int step, int nbin,
2163 ERegMode regmode)
2164{
2165 // set regularisation on a 1-dimensional curve
2166 // start: first bin
2167 // step: distance between neighbouring bins
2168 // nbin: total number of bins
2169 // regmode: regularisation mode
2170 // return value:
2171 // number of errors (i.e. conditions which have been skipped)
2172 // modifies data member fL
2173
2174 Int_t i0, i1, i2;
2175 i0 = start;
2176 i1 = i0 + step;
2177 i2 = i1 + step;
2178 Int_t nSkip = 0;
2179 Int_t nError= 0;
2180 if (regmode == kRegModeDerivative) {
2181 nSkip = 1;
2182 } else if (regmode == kRegModeCurvature) {
2183 nSkip = 2;
2184 } else if(regmode != kRegModeSize) {
2185 Error("RegularizeBins","regmode = %d is not valid",regmode);
2186 }
2187 for (Int_t i = nSkip; i < nbin; i++) {
2188 if (regmode == kRegModeSize) {
2189 nError += RegularizeSize(i0);
2190 } else if (regmode == kRegModeDerivative) {
2191 nError += RegularizeDerivative(i0, i1);
2192 } else if (regmode == kRegModeCurvature) {
2193 nError += RegularizeCurvature(i0, i1, i2);
2194 }
2195 i0 = i1;
2196 i1 = i2;
2197 i2 += step;
2198 }
2199 return nError;
2200}
2201
2202////////////////////////////////////////////////////////////////////////////////
2203/// Add regularisation conditions for 2d unfolding.
2204///
2205/// \param[in] start_bin first bin number
2206/// \param[in] step1 step size, 1st dimension
2207/// \param[in] nbin1 number of bins, 1st dimension
2208/// \param[in] step2 step size, 2nd dimension
2209/// \param[in] nbin2 number of bins, 2nd dimension
2210/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2211/// kRegModeDerivative, kRegModeCurvature)
2212///
2213/// add regularisation conditions for a grid of bins. The start bin is
2214/// <b>start_bin</b>. Along the first (second) dimension, there are
2215/// <b>nbin1</b> (<b>nbin2</b>) bins and adjacent bins are spaced by
2216/// <b>step1</b> (<b>step2</b>) units.
2217///
2218/// Return value: number of regularisation conditions which could not
2219/// be added. Conditions which are not added typically correspond to bin
2220/// numbers where the truth can not be unfolded (either response
2221/// matrix is empty or the data do not constrain).
2222
2223Int_t TUnfold::RegularizeBins2D(int start_bin, int step1, int nbin1,
2224 int step2, int nbin2, ERegMode regmode)
2225{
2226 // set regularisation on a 2-dimensional grid of bins
2227 // start_bin: first bin
2228 // step1: distance between bins in 1st direction
2229 // nbin1: number of bins in 1st direction
2230 // step2: distance between bins in 2nd direction
2231 // nbin2: number of bins in 2nd direction
2232 // return value:
2233 // number of errors (i.e. conditions which have been skipped)
2234 // modifies data member fL
2235
2236 Int_t nError = 0;
2237 for (Int_t i1 = 0; i1 < nbin1; i1++) {
2238 nError += RegularizeBins(start_bin + step1 * i1, step2, nbin2, regmode);
2239 }
2240 for (Int_t i2 = 0; i2 < nbin2; i2++) {
2241 nError += RegularizeBins(start_bin + step2 * i2, step1, nbin1, regmode);
2242 }
2243 return nError;
2244}
2245
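A sketch of the intended use for a truth space that is really a two-dimensional grid mapped onto one histogram axis; the 3x4 grid and the global bin numbering are assumptions made for illustration only.

   // truth bins arranged as nbin1=3 times nbin2=4 cells,
   // global bin = start_bin + i1*step1 + i2*step2
   Int_t start_bin=1, step1=1, nbin1=3, step2=3, nbin2=4;
   Int_t nSkipped = unfold.RegularizeBins2D(start_bin,step1,nbin1,
                                            step2,nbin2,TUnfold::kRegModeCurvature);
   if(nSkipped>0) {
      printf("%d regularisation conditions could not be added\n",nSkipped);
   }
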
2246////////////////////////////////////////////////////////////////////////////////
2247/// Perform the unfolding for a given input and regularisation.
2248///
2249/// \param[in] tau_reg regularisation parameter
2250/// \param[in] input input distribution with uncertainties
2251/// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2252///
2253/// This is a shortcut for `{ SetInput(input,scaleBias); DoUnfold(tau); }`
2254///
2255/// Data members required:
2256/// - fA, fX0, fL
2257/// Data members modified:
2258/// - those documented in SetInput()
2259/// and those documented in DoUnfold(Double_t)
2260/// Return value:
2261/// - maximum global correlation coefficient
2262/// NOTE!!! return value >=1.0 means error, and the result is junk
2263///
2264/// Overflow bins of the input distribution are ignored!
2265
2266 Double_t TUnfold::DoUnfold(Double_t tau_reg,const TH1 *input,
2267 Double_t scaleBias)
2268{
2269
2270 SetInput(input,scaleBias);
2271 return DoUnfold(tau_reg);
2272}
2273
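A sketch of the shortcut, assuming a TUnfold object unfold and a measured histogram histData exist; the value of tau is a placeholder.

   Double_t tau = 1.e-4;                           // assumed regularisation strength
   Double_t rhoMax = unfold.DoUnfold(tau,histData);
   if(rhoMax>=1.0) {
      printf("unfolding failed, max. global correlation = %f\n",rhoMax);
   }
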
2274////////////////////////////////////////////////////////////////////////////////
2275/// Define input data for subsequent calls to DoUnfold(tau).
2276///
2277/// \param[in] input input distribution with uncertainties
2278/// \param[in] scaleBias (default=0) scale factor applied to the bias
2279/// \param[in] oneOverZeroError (default=0) for bins with zero error, this number defines 1/error.
2280/// \param[in] hist_vyy (default=0) if non-zero, this defines the data covariance matrix
2281/// \param[in] hist_vyy_inv (default=0) if non-zero and hist_vyy is
2282/// set, defines the inverse of the data covariance matrix. This
2283/// feature can be useful for repeated unfoldings in cases where the
2284/// inversion of the input covariance matrix is lengthy
2285///
2286/// Return value: nError1+10000*nError2
2287///
2288/// - nError1: number of bins where the uncertainty is zero.
2289/// these bins either are not used for the unfolding (if
2290/// oneOverZeroError==0) or 1/uncertainty is set to oneOverZeroError.
2291/// - nError2: return values>10000 are fatal errors, because the
2292/// unfolding can not be done. The number nError2 corresponds to the
2293/// number of truth bins which are not constrained by data points.
2294///
2295/// Data members modified:
2296 /// - fY, fVyy, fBiasScale
2297/// Data members cleared
2298/// - fVyyInv, fNdf
2299/// - + see ClearResults
2300
2301Int_t TUnfold::SetInput(const TH1 *input, Double_t scaleBias,
2302 Double_t oneOverZeroError,const TH2 *hist_vyy,
2303 const TH2 *hist_vyy_inv)
2304{
2305 DeleteMatrix(&fVyyInv);
2306 fNdf=0;
2307
2308 fBiasScale = scaleBias;
2309
2310 // delete old results (if any)
2311 ClearResults();
2312
2313 // construct error matrix and inverted error matrix of measured quantities
2314 // from errors of input histogram or use error matrix
2315
2316 Int_t *rowVyyN=new Int_t[GetNy()*GetNy()+1];
2317 Int_t *colVyyN=new Int_t[GetNy()*GetNy()+1];
2318 Double_t *dataVyyN=new Double_t[GetNy()*GetNy()+1];
2319
2320 Int_t *rowVyy1=new Int_t[GetNy()];
2321 Int_t *colVyy1=new Int_t[GetNy()];
2322 Double_t *dataVyy1=new Double_t[GetNy()];
2323 Double_t *dataVyyDiag=new Double_t[GetNy()];
2324
2325 Int_t nVarianceZero=0;
2326 Int_t nVarianceForced=0;
2327 Int_t nVyyN=0;
2328 Int_t nVyy1=0;
2329 for (Int_t iy = 0; iy < GetNy(); iy++) {
2330 // diagonals
2331 Double_t dy2;
2332 if(!hist_vyy) {
2333 Double_t dy = input->GetBinError(iy + 1);
2334 dy2=dy*dy;
2335 if (dy2 <= 0.0) {
2336 if(oneOverZeroError>0.0) {
2337 dy2 = 1./ ( oneOverZeroError*oneOverZeroError);
2338 nVarianceForced++;
2339 } else {
2340 nVarianceZero++;
2341 }
2342 }
2343 } else {
2344 dy2 = hist_vyy->GetBinContent(iy+1,iy+1);
2345 if (dy2 <= 0.0) {
2346 nVarianceZero++;
2347 }
2348 }
2349 rowVyyN[nVyyN] = iy;
2350 colVyyN[nVyyN] = iy;
2351 rowVyy1[nVyy1] = iy;
2352 colVyy1[nVyy1] = 0;
2353 dataVyyDiag[iy] = dy2;
2354 if(dy2>0.0) {
2355 dataVyyN[nVyyN++] = dy2;
2356 dataVyy1[nVyy1++] = dy2;
2357 }
2358 }
2359 if(hist_vyy) {
2360 // non-diagonal elements
2361 Int_t nOffDiagNonzero=0;
2362 for (Int_t iy = 0; iy < GetNy(); iy++) {
2363 // ignore rows where the diagonal is zero
2364 if(dataVyyDiag[iy]<=0.0) {
2365 for (Int_t jy = 0; jy < GetNy(); jy++) {
2366 if(hist_vyy->GetBinContent(iy+1,jy+1)!=0.0) {
2367 nOffDiagNonzero++;
2368 }
2369 }
2370 continue;
2371 }
2372 for (Int_t jy = 0; jy < GetNy(); jy++) {
2373 // skip diagonal elements
2374 if(iy==jy) continue;
2375 // ignore columns where the diagonal is zero
2376 if(dataVyyDiag[jy]<=0.0) continue;
2377
2378 rowVyyN[nVyyN] = iy;
2379 colVyyN[nVyyN] = jy;
2380 dataVyyN[nVyyN]= hist_vyy->GetBinContent(iy+1,jy+1);
2381 if(dataVyyN[nVyyN] == 0.0) continue;
2382 nVyyN ++;
2383 }
2384 }
2385 if(hist_vyy_inv) {
2386 Warning("SetInput",
2387 "inverse of input covariance is taken from user input");
2388 Int_t *rowVyyInv=new Int_t[GetNy()*GetNy()+1];
2389 Int_t *colVyyInv=new Int_t[GetNy()*GetNy()+1];
2390 Double_t *dataVyyInv=new Double_t[GetNy()*GetNy()+1];
2391 Int_t nVyyInv=0;
2392 for (Int_t iy = 0; iy < GetNy(); iy++) {
2393 for (Int_t jy = 0; jy < GetNy(); jy++) {
2394 rowVyyInv[nVyyInv] = iy;
2395 colVyyInv[nVyyInv] = jy;
2396 dataVyyInv[nVyyInv]= hist_vyy_inv->GetBinContent(iy+1,jy+1);
2397 if(dataVyyInv[nVyyInv] == 0.0) continue;
2398 nVyyInv ++;
2399 }
2400 }
2401 fVyyInv=CreateSparseMatrix
2402 (GetNy(),GetNy(),nVyyInv,rowVyyInv,colVyyInv,dataVyyInv);
2403 delete [] rowVyyInv;
2404 delete [] colVyyInv;
2405 delete [] dataVyyInv;
2406 } else {
2407 if(nOffDiagNonzero) {
2408 Error("SetInput",
2409 "input covariance has elements C(X,Y)!=0 where V(X)==0");
2410 }
2411 }
2412 }
2413 DeleteMatrix(&fVyy);
2414 fVyy = CreateSparseMatrix
2415 (GetNy(),GetNy(),nVyyN,rowVyyN,colVyyN,dataVyyN);
2416
2417 delete[] rowVyyN;
2418 delete[] colVyyN;
2419 delete[] dataVyyN;
2420
2421 TMatrixDSparse *vecV=CreateSparseMatrix
2422 (GetNy(),1,nVyy1,rowVyy1,colVyy1, dataVyy1);
2423
2424 delete[] rowVyy1;
2425 delete[] colVyy1;
2426 delete[] dataVyy1;
2427
2428 //
2429 // get input vector
2430 DeleteMatrix(&fY);
2431 fY = new TMatrixD(GetNy(), 1);
2432 for (Int_t i = 0; i < GetNy(); i++) {
2433 (*fY) (i, 0) = input->GetBinContent(i + 1);
2434 }
2435 // simple check whether unfolding is possible, given the matrices fA and fV
2436 TMatrixDSparse *mAtV=MultiplyMSparseTranspMSparse(fA,vecV);
2437 DeleteMatrix(&vecV);
2438 Int_t nError2=0;
2439 for (Int_t i = 0; i <mAtV->GetNrows();i++) {
2440 if(mAtV->GetRowIndexArray()[i]==
2441 mAtV->GetRowIndexArray()[i+1]) {
2442 nError2 ++;
2443 }
2444 }
2445 if(nVarianceForced) {
2446 if(nVarianceForced>1) {
2447 Warning("SetInput","%d/%d input bins have zero error,"
2448 " 1/error set to %lf.",
2449 nVarianceForced,GetNy(),oneOverZeroError);
2450 } else {
2451 Warning("SetInput","One input bin has zero error,"
2452 " 1/error set to %lf.",oneOverZeroError);
2453 }
2454 }
2455 if(nVarianceZero) {
2456 if(nVarianceZero>1) {
2457 Warning("SetInput","%d/%d input bins have zero error,"
2458 " and are ignored.",nVarianceZero,GetNy());
2459 } else {
2460 Warning("SetInput","One input bin has zero error,"
2461 " and is ignored.");
2462 }
2463 fIgnoredBins=nVarianceZero;
2464 }
2465 if(nError2>0) {
2466 // check whether data points with zero error are responsible
2467 if(oneOverZeroError<=0.0) {
2468 //const Int_t *a_rows=fA->GetRowIndexArray();
2469 //const Int_t *a_cols=fA->GetColIndexArray();
2470 for (Int_t col = 0; col <mAtV->GetNrows();col++) {
2471 if(mAtV->GetRowIndexArray()[col]==
2472 mAtV->GetRowIndexArray()[col+1]) {
2473 TString binlist("no data to constrain output bin ");
2474 binlist += GetOutputBinName(fXToHist[col]);
2475 /* binlist +=" depends on ignored input bins ";
2476 for(Int_t row=0;row<fA->GetNrows();row++) {
2477 if(dataVyyDiag[row]>0.0) continue;
2478 for(Int_t i=a_rows[row];i<a_rows[row+1];i++) {
2479 if(a_cols[i]!=col) continue;
2480 binlist +=" ";
2481 binlist +=row;
2482 }
2483 } */
2484 Warning("SetInput","%s",(char const *)binlist);
2485 }
2486 }
2487 }
2488 if(nError2>1) {
2489 Error("SetInput","%d/%d output bins are not constrained by any data.",
2490 nError2,mAtV->GetNrows());
2491 } else {
2492 Error("SetInput","One output bin is not constrained by any data.");
2493 }
2494 }
2495 DeleteMatrix(&mAtV);
2496
2497 delete[] dataVyyDiag;
2498
2499 return nVarianceForced+nVarianceZero+10000*nError2;
2500}
2501
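Because the return value mixes two kinds of problems, it is worth decoding it; a sketch, with histData a hypothetical measurement histogram:

   Int_t status = unfold.SetInput(histData);
   if(status>=10000) {
      // fatal: nError2 truth bins are not constrained by the data
      printf("SetInput: %d unconstrained output bins\n",status/10000);
   } else if(status>0) {
      // non-fatal: nError1 input bins with zero uncertainty
      printf("SetInput: %d input bins with zero error\n",status%10000);
   }
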
2502////////////////////////////////////////////////////////////////////////////////
2503/// Perform the unfolding for a given regularisation parameter tau.
2504///
2505/// \param[in] tau regularisation parameter
2506///
2507/// This method sets tau and then calls the core unfolding algorithm
2508/// required data members:
2509/// - fA: matrix to relate x and y
2510/// - fY: measured data points
2511/// - fX0: bias on x
2512/// - fBiasScale: scale factor for fX0
2513/// - fV: inverse of covariance matrix for y
2514/// - fL: regularisation conditions
2515/// modified data members:
2516/// - fTauSquared and those documented in DoUnfold(void)
2517
2519{
2520 fTauSquared=tau*tau;
2521 return DoUnfold();
2522}
2523
2524////////////////////////////////////////////////////////////////////////////////
2525/// Scan the L curve, determine tau and unfold at the final value of tau.
2526///
2527/// \param[in] nPoint number of points used for the scan
2528/// \param[in] tauMin smallest tau value to study
2529/// \param[in] tauMax largest tau value to study. If tauMin=tauMax=0,
2530/// a scan interval is determined automatically.
2531/// \param[out] lCurve if nonzero, a new TGraph is returned,
2532/// containing the L-curve
2533/// \param[out] logTauX if nonzero, a new TSpline is returned, to
2534/// parameterize the L-curve's x-coordinates as a function of log10(tau)
2535/// \param[out] logTauY if nonzero, a new TSpline is returned, to
2536/// parameterize the L-curve's y-coordinates as a function of log10(tau)
2537/// \param[out] logTauCurvature if nonzero, a new TSpline is returned
2538/// of the L-curve curvature as a function of log10(tau)
2539///
2540/// return value: the coordinate number in the logTauX,logTauY graphs
2541/// corresponding to the "final" choice of tau
2542///
2543/// Recommendation: always check <b>logTauCurvature</b>, it
2544/// should be a peaked function (similar to a Gaussian), the maximum
2545 /// corresponding to the final choice of tau. Also, check the <b>lCurve</b>;
2546/// it should be approximately L-shaped. If in doubt, adjust tauMin
2547/// and tauMax until the results are satisfactory.
2548
2549 Int_t TUnfold::ScanLcurve(Int_t nPoint,
2550 Double_t tauMin,Double_t tauMax,
2551 TGraph **lCurve,TSpline **logTauX,
2552 TSpline **logTauY,TSpline **logTauCurvature)
2553{
2554 typedef std::map<Double_t,std::pair<Double_t,Double_t> > XYtau_t;
2555 XYtau_t curve;
2556
2557 //==========================================================
2558 // algorithm:
2559 // (1) do the unfolding for nPoint-1 points
2560 // and store the results in the map
2561 // curve
2562 // (1a) store minimum and maximum tau to curve
2563 // (1b) insert additional points, until nPoint-1 values
2564 // have been calculated
2565 //
2566 // (2) determine the best choice of tau
2567 // do the unfolding for this point
2568 // and store the result in
2569 // curve
2570 // (3) return the result in
2571 // lCurve logTauX logTauY
2572
2573 //==========================================================
2574 // (1) do the unfolding for nPoint-1 points
2575 // and store the results in
2576 // curve
2577 // (1a) store minimum and maximum tau to curve
2578
2579 if((tauMin<=0)||(tauMax<=0.0)||(tauMin>=tauMax)) {
2580 // here no range is given, has to be determined automatically
2581 // the maximum tau is determined from the chi**2 values
2582 // observed from unfolding without regularisation
2583
2584 // first unfolding, without regularisation
2585 DoUnfold(0.0);
2586
2587 // if the number of degrees of freedom is too small, create an error
2588 if(GetNdf()<=0) {
2589 Error("ScanLcurve","too few input bins, NDF<=0 %d",GetNdf());
2590 }
2591
2592 Double_t x0=GetLcurveX();
2593 Double_t y0=GetLcurveY();
2594 Info("ScanLcurve","logtau=-Infinity X=%lf Y=%lf",x0,y0);
2595 if(!TMath::Finite(x0)) {
2596 Fatal("ScanLcurve","problem (too few input bins?) X=%f",x0);
2597 }
2598 if(!TMath::Finite(y0)) {
2599 Fatal("ScanLcurve","problem (missing regularisation?) Y=%f",y0);
2600 }
2601 {
2602 // unfolding guess maximum tau and store it
2603 Double_t logTau=
2604 0.5*(TMath::Log10(fChi2A+3.*TMath::Sqrt(GetNdf()+1.0))
2605 -GetLcurveY());
2606 DoUnfold(TMath::Power(10.,logTau));
2607 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2608 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2609 GetLcurveX(),GetLcurveY());
2610 }
2611 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2612 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2613 logTau,GetLcurveX(),GetLcurveY());
2614 }
2615 if((*curve.begin()).second.first<x0) {
2616 // if the point at tau==0 seems numerically unstable,
2617 // try to find the minimum chi**2 as start value
2618 //
2619 // "unstable" means that there is a finite tau where the
2620 // unfolding chi**2 is smaller than for the case of no
2621 // regularisation. Ideally this should never happen
2622 do {
2623 x0=GetLcurveX();
2624 Double_t logTau=(*curve.begin()).first-0.5;
2625 DoUnfold(TMath::Power(10.,logTau));
2626 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2627 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2628 GetLcurveX(),GetLcurveY());
2629 }
2630 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2631 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2632 logTau,GetLcurveX(),GetLcurveY());
2633 }
2634 while(((int)curve.size()<(nPoint-1)/2)&&
2635 ((*curve.begin()).second.first<x0));
2636 } else {
2637 // minimum tau is chosen such that it is less than
2638 // 1% different from the case of no regularization
2639 // log10(1.01) = 0.00432
2640
2641 // here, more than one point are inserted if necessary
2642 while(((int)curve.size()<nPoint-1)&&
2643 (((*curve.begin()).second.first-x0>0.00432)||
2644 ((*curve.begin()).second.second-y0>0.00432)||
2645 (curve.size()<2))) {
2646 Double_t logTau=(*curve.begin()).first-0.5;
2647 DoUnfold(TMath::Power(10.,logTau));
2648 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2649 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2650 GetLcurveX(),GetLcurveY());
2651 }
2652 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2653 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2654 logTau,GetLcurveX(),GetLcurveY());
2655 }
2656 }
2657 } else {
2658 Double_t logTauMin=TMath::Log10(tauMin);
2659 Double_t logTauMax=TMath::Log10(tauMax);
2660 if(nPoint>1) {
2661 // insert maximum tau
2662 DoUnfold(TMath::Power(10.,logTauMax));
2663 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2664 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2665 GetLcurveX(),GetLcurveY());
2666 }
2667 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2668 logTauMax,GetLcurveX(),GetLcurveY());
2669 curve[logTauMax]=std::make_pair(GetLcurveX(),GetLcurveY());
2670 }
2671 // insert minimum tau
2672 DoUnfold(TMath::Power(10.,logTauMin));
2673 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2674 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2675 GetLcurveX(),GetLcurveY());
2676 }
2677 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2678 logTauMin,GetLcurveX(),GetLcurveY());
2679 curve[logTauMin]=std::make_pair(GetLcurveX(),GetLcurveY());
2680 }
2681
2682
2683 //==========================================================
2684 // (1b) insert additional points, until nPoint-1 values
2685 // have been calculated
2686
2687 while(int(curve.size())<nPoint-1) {
2688 // insert additional points, such that the sizes of the delta(XY) vectors
2689 // are getting smaller and smaller
2690 XYtau_t::const_iterator i0,i1;
2691 i0=curve.begin();
2692 i1=i0;
2693 Double_t logTau=(*i0).first;
2694 Double_t distMax=0.0;
2695 for (++i1; i1 != curve.end(); ++i1) {
2696 const std::pair<Double_t, Double_t> &xy0 = (*i0).second;
2697 const std::pair<Double_t, Double_t> &xy1 = (*i1).second;
2698 Double_t dx = xy1.first - xy0.first;
2699 Double_t dy = xy1.second - xy0.second;
2700 Double_t d = TMath::Sqrt(dx * dx + dy * dy);
2701 if (d >= distMax) {
2702 distMax = d;
2703 logTau = 0.5 * ((*i0).first + (*i1).first);
2704 }
2705 i0 = i1;
2706 }
2707 DoUnfold(TMath::Power(10.,logTau));
2708 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2709 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2710 GetLcurveX(),GetLcurveY());
2711 }
2712 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",logTau,GetLcurveX(),GetLcurveY());
2713 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2714 }
2715
2716 //==========================================================
2717 // (2) determine the best choice of tau
2718 // do the unfolding for this point
2719 // and store the result in
2720 // curve
2721 XYtau_t::const_iterator i0,i1;
2722 i0=curve.begin();
2723 i1=i0;
2724 ++i1;
2725 Double_t logTauFin=(*i0).first;
2726 if( ((int)curve.size())<nPoint) {
2727 // set up splines and determine (x,y) curvature in each point
2728 Double_t *cTi=new Double_t[curve.size()-1]();
2729 Double_t *cCi=new Double_t[curve.size()-1]();
2730 Int_t n=0;
2731 {
2732 Double_t *lXi=new Double_t[curve.size()]();
2733 Double_t *lYi=new Double_t[curve.size()]();
2734 Double_t *lTi=new Double_t[curve.size()]();
2735 for (XYtau_t::const_iterator i = curve.begin(); i != curve.end(); ++i) {
2736 lXi[n] = (*i).second.first;
2737 lYi[n] = (*i).second.second;
2738 lTi[n] = (*i).first;
2739 n++;
2740 }
2741 TSpline3 *splineX=new TSpline3("x vs tau",lTi,lXi,n);
2742 TSpline3 *splineY=new TSpline3("y vs tau",lTi,lYi,n);
2743 // calculate (x,y) curvature for all points
2744 // the curvature is stored in the array cCi[] as a function of cTi[]
2745 for(Int_t i=0;i<n-1;i++) {
2746 Double_t ltau,xy,bi,ci,di;
2747 splineX->GetCoeff(i,ltau,xy,bi,ci,di);
2748 Double_t tauBar=0.5*(lTi[i]+lTi[i+1]);
2749 Double_t dTau=0.5*(lTi[i+1]-lTi[i]);
2750 Double_t dx1=bi+dTau*(2.*ci+3.*di*dTau);
2751 Double_t dx2=2.*ci+6.*di*dTau;
2752 splineY->GetCoeff(i,ltau,xy,bi,ci,di);
2753 Double_t dy1=bi+dTau*(2.*ci+3.*di*dTau);
2754 Double_t dy2=2.*ci+6.*di*dTau;
2755 cTi[i]=tauBar;
2756 cCi[i]=(dy2*dx1-dy1*dx2)/TMath::Power(dx1*dx1+dy1*dy1,1.5);
2757 }
2758 delete splineX;
2759 delete splineY;
2760 delete[] lXi;
2761 delete[] lYi;
2762 delete[] lTi;
2763 }
2764 // create curvature Spline
2765 TSpline3 *splineC=new TSpline3("L curve curvature",cTi,cCi,n-1);
2766 // find the maximum of the curvature
2767 // if the parameter iskip is non-zero, then iskip points are
2768 // ignored when looking for the largest curvature
2769 // (there are problems with the curvature determined from the first
2770 // few points of splineX,splineY in the algorithm above)
2771 Int_t iskip=0;
2772 if(n>4) iskip=1;
2773 if(n>7) iskip=2;
2774 Double_t cCmax=cCi[iskip];
2775 Double_t cTmax=cTi[iskip];
2776 for(Int_t i=iskip;i<n-2-iskip;i++) {
2777 // find maximum on this spline section
2778 // check boundary conditions for x[i+1]
2779 Double_t xMax=cTi[i+1];
2780 Double_t yMax=cCi[i+1];
2781 if(cCi[i]>yMax) {
2782 yMax=cCi[i];
2783 xMax=cTi[i];
2784 }
2785 // find maximum for x[i]<x<x[i+1]
2786 // get spline coefficients and solve equation
2787 // derivative(x)==0
2788 Double_t x,y,b,c,d;
2789 splineC->GetCoeff(i,x,y,b,c,d);
2790 // coefficients of quadratic equation
2791 Double_t m_p_half=-c/(3.*d);
2792 Double_t q=b/(3.*d);
2793 Double_t discr=m_p_half*m_p_half-q;
2794 if(discr>=0.0) {
2795 // solution found
2796 discr=TMath::Sqrt(discr);
2797 Double_t xx;
2798 if(m_p_half>0.0) {
2799 xx = m_p_half + discr;
2800 } else {
2801 xx = m_p_half - discr;
2802 }
2803 Double_t dx=cTi[i+1]-x;
2804 // check first solution
2805 if((xx>0.0)&&(xx<dx)) {
2806 y=splineC->Eval(x+xx);
2807 if(y>yMax) {
2808 yMax=y;
2809 xMax=x+xx;
2810 }
2811 }
2812 // second solution
2813 if(xx !=0.0) {
2814 xx= q/xx;
2815 } else {
2816 xx=0.0;
2817 }
2818 // check second solution
2819 if((xx>0.0)&&(xx<dx)) {
2820 y=splineC->Eval(x+xx);
2821 if(y>yMax) {
2822 yMax=y;
2823 xMax=x+xx;
2824 }
2825 }
2826 }
2827 // check whether this local maximum is the global maximum
2828 if(yMax>cCmax) {
2829 cCmax=yMax;
2830 cTmax=xMax;
2831 }
2832 }
2833 if(logTauCurvature) {
2834 *logTauCurvature=splineC;
2835 } else {
2836 delete splineC;
2837 }
2838 delete[] cTi;
2839 delete[] cCi;
2840 logTauFin=cTmax;
2841 DoUnfold(TMath::Power(10.,logTauFin));
2842 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2843 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2844 GetLcurveX(),GetLcurveY());
2845 }
2846 Info("ScanLcurve","Result logtau=%lf X=%lf Y=%lf",
2847 logTauFin,GetLcurveX(),GetLcurveY());
2848 curve[logTauFin]=std::make_pair(GetLcurveX(),GetLcurveY());
2849 }
2850
2851
2852 //==========================================================
2853 // (3) return the result in
2854 // lCurve logTauX logTauY
2855
2856 Int_t bestChoice=-1;
2857 if(curve.size()>0) {
2858 Double_t *x=new Double_t[curve.size()]();
2859 Double_t *y=new Double_t[curve.size()]();
2860 Double_t *logT=new Double_t[curve.size()]();
2861 int n=0;
2862 for (XYtau_t::const_iterator i = curve.begin(); i != curve.end(); ++i) {
2863 if (logTauFin == (*i).first) {
2864 bestChoice = n;
2865 }
2866 x[n] = (*i).second.first;
2867 y[n] = (*i).second.second;
2868 logT[n] = (*i).first;
2869 n++;
2870 }
2871 if(lCurve) {
2872 (*lCurve)=new TGraph(n,x,y);
2873 (*lCurve)->SetTitle("L curve");
2874 }
2875 if(logTauX) (*logTauX)=new TSpline3("log(chi**2)%log(tau)",logT,x,n);
2876 if(logTauY) (*logTauY)=new TSpline3("log(reg.cond)%log(tau)",logT,y,n);
2877 delete[] x;
2878 delete[] y;
2879 delete[] logT;
2880 }
2881
2882 return bestChoice;
2883}
2884
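A sketch of a complete L-curve scan; the number of scan points is an assumption, and tauMin=tauMax=0 requests the automatic range described above.

   Int_t nScan=30;
   TGraph *lCurve=nullptr;
   TSpline *logTauX=nullptr,*logTauY=nullptr,*logTauCurvature=nullptr;
   Int_t iBest = unfold.ScanLcurve(nScan,0.,0.,&lCurve,
                                   &logTauX,&logTauY,&logTauCurvature);
   printf("chosen tau=%g at scan point %d\n",unfold.GetTau(),iBest);
   // inspect lCurve and logTauCurvature before trusting the result
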
2885////////////////////////////////////////////////////////////////////////////////
2886/// Histogram of truth bins, determined from summing over the response matrix.
2887///
2888/// \param[out] out histogram to store the truth bins. The bin contents
2889/// are overwritten
2890/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2891///
2892/// This vector is also used to initialize the bias
2893/// x_{0}. However, the bias vector may be changed using the
2894/// SetBias() method.
2895///
2896/// The use of <b>binMap</b> is explained with the documentation of
2897/// the GetOutput() method.
2898
2899void TUnfold::GetNormalisationVector(TH1 *out,const Int_t *binMap) const
2900{
2901
2902 ClearHistogram(out);
2903 for (Int_t i = 0; i < GetNx(); i++) {
2904 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2905 if(dest>=0) {
2906 out->SetBinContent(dest, fSumOverY[i] + out->GetBinContent(dest));
2907 }
2908 }
2909}
2910
2911////////////////////////////////////////////////////////////////////////////////
2912/// Get bias vector including bias scale.
2913///
2914/// \param[out] out histogram to store the scaled bias vector. The bin
2915/// contents are overwritten
2916/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2917///
2918/// This method returns the bias vector times scaling factor, f*x_{0}
2919///
2920/// The use of <b>binMap</b> is explained with the documentation of
2921/// the GetOutput() method
2922
2923void TUnfold::GetBias(TH1 *out,const Int_t *binMap) const
2924{
2925
2926 ClearHistogram(out);
2927 for (Int_t i = 0; i < GetNx(); i++) {
2928 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2929 if(dest>=0) {
2930 out->SetBinContent(dest, fBiasScale*(*fX0) (i, 0) +
2931 out->GetBinContent(dest));
2932 }
2933 }
2934}
2935
2936////////////////////////////////////////////////////////////////////////////////
2937/// Get unfolding result on detector level.
2938///
2939 /// \param[out] out histogram to store the folded output (the vector Ax). The bin
2940 /// contents and errors are overwritten.
2941/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2942///
2943/// This method returns the unfolding output folded by the response
2944/// matrix, i.e. the vector Ax.
2945///
2946/// The use of <b>binMap</b> is explained with the documentation of
2947/// the GetOutput() method
2948
2949void TUnfold::GetFoldedOutput(TH1 *out,const Int_t *binMap) const
2950{
2951 ClearHistogram(out);
2952
2953 TMatrixDSparse *AVxx=MultiplyMSparseMSparse(fA,fVxx);
2954
2955 const Int_t *rows_A=fA->GetRowIndexArray();
2956 const Int_t *cols_A=fA->GetColIndexArray();
2957 const Double_t *data_A=fA->GetMatrixArray();
2958 const Int_t *rows_AVxx=AVxx->GetRowIndexArray();
2959 const Int_t *cols_AVxx=AVxx->GetColIndexArray();
2960 const Double_t *data_AVxx=AVxx->GetMatrixArray();
2961
2962 for (Int_t i = 0; i < GetNy(); i++) {
2963 Int_t destI = binMap ? binMap[i+1] : i+1;
2964 if(destI<0) continue;
2965
2966 out->SetBinContent(destI, (*fAx) (i, 0)+ out->GetBinContent(destI));
2967 Double_t e2=0.0;
2968 Int_t index_a=rows_A[i];
2969 Int_t index_av=rows_AVxx[i];
2970 while((index_a<rows_A[i+1])&&(index_av<rows_AVxx[i+1])) {
2971 Int_t j_a=cols_A[index_a];
2972 Int_t j_av=cols_AVxx[index_av];
2973 if(j_a<j_av) {
2974 index_a++;
2975 } else if(j_a>j_av) {
2976 index_av++;
2977 } else {
2978 e2 += data_AVxx[index_av] * data_A[index_a];
2979 index_a++;
2980 index_av++;
2981 }
2982 }
2983 out->SetBinError(destI,TMath::Sqrt(e2));
2984 }
2985 DeleteMatrix(&AVxx);
2986}
2987
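Folding the result back and comparing with the input is a common closure check; a sketch, where the histogram has to be booked by the user with one bin per input (detector) bin:

   Int_t nDet = unfold.GetNy();
   TH1D *histFolded = new TH1D("histFolded","folded result A*x",
                               nDet,0.5,nDet+0.5);
   unfold.GetFoldedOutput(histFolded);
   // compare bin-by-bin with the histogram filled by GetInput()
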
2988////////////////////////////////////////////////////////////////////////////////
2989/// Get matrix of probabilities.
2990///
2991/// \param[out] A two-dimensional histogram to store the
2992/// probabilities (normalized response matrix). The bin contents are
2993/// overwritten
2994/// \param[in] histmap specify axis along which the truth bins are
2995/// oriented
2996
2998{
2999 // retrieve matrix of probabilities
3000 // histmap: on which axis to arrange the input/output vector
3001 // A: histogram to store the probability matrix
3002 const Int_t *rows_A=fA->GetRowIndexArray();
3003 const Int_t *cols_A=fA->GetColIndexArray();
3004 const Double_t *data_A=fA->GetMatrixArray();
3005 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3006 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3007 Int_t ix = cols_A[indexA];
3008 Int_t ih=fXToHist[ix];
3009 if (histmap == kHistMapOutputHoriz) {
3010 A->SetBinContent(ih, iy+1,data_A[indexA]);
3011 } else {
3012 A->SetBinContent(iy+1, ih,data_A[indexA]);
3013 }
3014 }
3015 }
3016}
3017
3018////////////////////////////////////////////////////////////////////////////////
3019/// Input vector of measurements
3020///
3021/// \param[out] out histogram to store the measurements. Bin content
3022 /// and bin errors are overwritten.
3023/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3024///
3025/// Bins which had an uncertainty of zero in the call to SetInput()
3026/// may acquire bin contents or bin errors different from the
3027/// original settings in SetInput().
3028///
3029/// The use of <b>binMap</b> is explained with the documentation of
3030/// the GetOutput() method
3031
3032void TUnfold::GetInput(TH1 *out,const Int_t *binMap) const
3033{
3034 ClearHistogram(out);
3035
3036 const Int_t *rows_Vyy=fVyy->GetRowIndexArray();
3037 const Int_t *cols_Vyy=fVyy->GetColIndexArray();
3038 const Double_t *data_Vyy=fVyy->GetMatrixArray();
3039
3040 for (Int_t i = 0; i < GetNy(); i++) {
3041 Int_t destI=binMap ? binMap[i+1] : i+1;
3042 if(destI<0) continue;
3043
3044 out->SetBinContent(destI, (*fY) (i, 0)+out->GetBinContent(destI));
3045
3046 Double_t e=0.0;
3047 for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
3048 if(cols_Vyy[index]==i) {
3049 e=TMath::Sqrt(data_Vyy[index]);
3050 }
3051 }
3052 out->SetBinError(destI, e);
3053 }
3054}
3055
3056////////////////////////////////////////////////////////////////////////////////
3057/// Get inverse of the measurement's covariance matrix.
3058///
3059/// \param[out] out histogram to store the inverted covariance
3060
3062{
3063 // calculate the inverse of the contribution to the error matrix
3064 // corresponding to the input data
3065 if(!fVyyInv) {
3066 Int_t rank=0;
3067 fVyyInv=InvertMSparseSymmPos(fVyy,&rank);
3068 // and count number of degrees of freedom
3069 fNdf = rank-GetNpar();
3070
3071 if(rank<GetNy()-fIgnoredBins) {
3072 Warning("GetInputInverseEmatrix","input covariance matrix has rank %d expect %d",
3073 rank,GetNy());
3074 }
3075 if(fNdf<0) {
3076 Error("GetInputInverseEmatrix","number of parameters %d > %d (rank of input covariance). Problem can not be solved",GetNpar(),rank);
3077 } else if(fNdf==0) {
3078 Warning("GetInputInverseEmatrix","number of parameters %d = input rank %d. Problem is ill posed",GetNpar(),rank);
3079 }
3080 }
3081 if(out) {
3082 // return matrix as histogram
3083 const Int_t *rows_VyyInv=fVyyInv->GetRowIndexArray();
3084 const Int_t *cols_VyyInv=fVyyInv->GetColIndexArray();
3085 const Double_t *data_VyyInv=fVyyInv->GetMatrixArray();
3086
3087 for(int i=0;i<=out->GetNbinsX()+1;i++) {
3088 for(int j=0;j<=out->GetNbinsY()+1;j++) {
3089 out->SetBinContent(i,j,0.);
3090 }
3091 }
3092
3093 for (Int_t i = 0; i < fVyyInv->GetNrows(); i++) {
3094 for(int index=rows_VyyInv[i];index<rows_VyyInv[i+1];index++) {
3095 Int_t j=cols_VyyInv[index];
3096 out->SetBinContent(i+1,j+1,data_VyyInv[index]);
3097 }
3098 }
3099 }
3100}
3101
3102////////////////////////////////////////////////////////////////////////////////
3103/// Get matrix of regularisation conditions squared.
3104///
3105/// \param[out] out histogram to store the squared matrix of
3106/// regularisation conditions. the bin contents are overwritten
3107///
3108/// This returns the square matrix L^{T}L as a histogram
3109///
3110/// The histogram should have dimension nx times nx, where nx
3111/// corresponds to the number of histogram bins in the response matrix
3112/// along the truth axis.
3113
3115{
3116 // retrieve matrix of regularisation conditions squared
3117 // out: pre-booked matrix
3118
3119 TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
3120 // loop over sparse matrix
3121 const Int_t *rows=lSquared->GetRowIndexArray();
3122 const Int_t *cols=lSquared->GetColIndexArray();
3123 const Double_t *data=lSquared->GetMatrixArray();
3124 for (Int_t i = 0; i < GetNx(); i++) {
3125 for (Int_t cindex = rows[i]; cindex < rows[i+1]; cindex++) {
3126 Int_t j=cols[cindex];
3127 out->SetBinContent(fXToHist[i], fXToHist[j],data[cindex]);
3128 }
3129 }
3130 DeleteMatrix(&lSquared);
3131}
3132
3133////////////////////////////////////////////////////////////////////////////////
3134/// Get number of regularisation conditions.
3135///
3136/// This returns the number of regularisation conditions, useful for
3137/// booking a histogram for a subsequent call of GetL().
3138
3139 Int_t TUnfold::GetNr(void) const {
3140 return fL->GetNrows();
3141}
3142
3143////////////////////////////////////////////////////////////////////////////////
3144/// Get matrix of regularisation conditions.
3145///
3146/// \param[out] out histogram to store the regularisation conditions.
3147/// the bin contents are overwritten
3148///
3149 /// The histogram should have dimension nx (x-axis) times nr (y-axis).
3150/// nr corresponds to the number of regularisation conditions, it can
3151/// be obtained using the method GetNr(). nx corresponds to the number
3152/// of histogram bins in the response matrix along the truth axis.
3153
3154void TUnfold::GetL(TH2 *out) const
3155{
3156 // loop over sparse matrix
3157 const Int_t *rows=fL->GetRowIndexArray();
3158 const Int_t *cols=fL->GetColIndexArray();
3159 const Double_t *data=fL->GetMatrixArray();
3160 for (Int_t row = 0; row < GetNr(); row++) {
3161 for (Int_t cindex = rows[row]; cindex < rows[row+1]; cindex++) {
3162 Int_t col=cols[cindex];
3163 Int_t indexH=fXToHist[col];
3164 out->SetBinContent(indexH,row+1,data[cindex]);
3165 }
3166 }
3167}
3168
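Since TUnfold only fills user-booked histograms, GetNr() provides the number of rows needed to book a histogram for GetL(); a sketch, assuming a horizontal mapping and ignoring possible under/overflow truth bins:

   Int_t nr = unfold.GetNr();               // number of regularisation conditions
   Int_t nx = histMC->GetNbinsX();          // truth bins of the response matrix
   TH2D *histL = new TH2D("histL","matrix L",nx,0.5,nx+0.5,nr,0.5,nr+0.5);
   unfold.GetL(histL);
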
3169////////////////////////////////////////////////////////////////////////////////
3170/// Set type of area constraint.
3171///
3172/// results of a previous unfolding are reset
3173
3175{
3176 // set type of constraint for the next unfolding
3177 if(fConstraint !=constraint) ClearResults();
3178 fConstraint=constraint;
3179 Info("SetConstraint","fConstraint=%d",fConstraint);
3180}
3181
3182
3183////////////////////////////////////////////////////////////////////////////////
3184/// Return regularisation parameter.
3185
3187{
3188 // return regularisation parameter
3189 return TMath::Sqrt(fTauSquared);
3190}
3191
3192////////////////////////////////////////////////////////////////////////////////
3193/// Get \f$ chi^{2}_{L} \f$ contribution determined in recent unfolding.
3194
3196{
3197 // return chi**2 contribution from regularisation conditions
3198 return fLXsquared*fTauSquared;
3199}
3200
3201////////////////////////////////////////////////////////////////////////////////
3202/// Get number of truth parameters determined in recent unfolding.
3203///
3204/// empty bins of the response matrix or bins which can not be
3205/// unfolded due to rank deficits are not counted
3206
3208{
3209 return GetNx();
3210}
3211
3212////////////////////////////////////////////////////////////////////////////////
3213/// Get value on x-axis of L-curve determined in recent unfolding.
3214///
3215/// \f$ x=log_{10}(GetChi2A()) \f$
3216
3218{
3219 return TMath::Log10(fChi2A);
3220}
3221
3222////////////////////////////////////////////////////////////////////////////////
3223/// Get value on y-axis of L-curve determined in recent unfolding.
3224///
3225/// \f$ y=log_{10}(GetChi2L()) \f$
3226
3228{
3229 return TMath::Log10(fLXsquared);
3230}
3231
3232////////////////////////////////////////////////////////////////////////////////
3233/// Get output distribution, possibly cumulated over several bins.
3234///
3235/// \param[out] output existing output histogram. content and errors
3236/// will be updated.
3237/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3238///
3239/// If nonzero, the array <b>binMap</b> must have dimension n+2, where n
3240/// corresponds to the number of bins on the truth axis of the response
3241/// matrix (the histogram specified with the TUnfold
3242/// constructor). The indexes of <b>binMap</b> correspond to the truth
3243/// bins (including underflow and overflow) of the response matrix.
3244/// The element binMap[i] specifies the histogram number in
3245/// <b>output</b> where the corresponding truth bin will be stored. It is
3246/// possible to specify the same <b>output</b> bin number for multiple
3247/// indexes, in which case these bins are added. Set binMap[i]=-1 to
3248/// ignore an unfolded truth bin. The uncertainties are
3249/// calculated from the corresponding parts of the covariance matrix,
3250/// properly taking care of added truth bins.
3251///
3252/// If the pointer <b>binMap</b> is zero, the bins are mapped
3253/// one-to-one. Truth bin zero (underflow) is stored in the
3254/// <b>output</b> underflow, truth bin 1 is stored in bin number 1, etc.
3255///
3256/// - output: output histogram
3257/// - binMap: for each bin of the original output distribution
3258/// specify the destination bin. A value of -1 means that the bin
3259/// is discarded. 0 means underflow bin, 1 first bin, ...
3260/// - binMap[0] : destination of underflow bin
3261/// - binMap[1] : destination of first bin
3262/// ...
3263
3264void TUnfold::GetOutput(TH1 *output,const Int_t *binMap) const
3265{
3266 ClearHistogram(output);
3267 /* Int_t nbin=output->GetNbinsX();
3268 Double_t *c=new Double_t[nbin+2];
3269 Double_t *e2=new Double_t[nbin+2];
3270 for(Int_t i=0;i<nbin+2;i++) {
3271 c[i]=0.0;
3272 e2[i]=0.0;
3273 } */
3274
3275 std::map<Int_t,Double_t> e2;
3276
3277 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3278 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3279 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3280
3281 Int_t binMapSize = fHistToX.GetSize();
3282 for(Int_t i=0;i<binMapSize;i++) {
3283 Int_t destBinI=binMap ? binMap[i] : i; // histogram bin
3284 Int_t srcBinI=fHistToX[i]; // matrix row index
3285 if((destBinI>=0)&&(srcBinI>=0)) {
3286 output->SetBinContent
3287 (destBinI, (*fX)(srcBinI,0)+ output->GetBinContent(destBinI));
3288 // here we loop over the columns of the error matrix
3289 // j: counts histogram bins
3290 // index: counts sparse matrix index
3291 // the algorithm makes use of the fact that fHistToX is ordered
3292 Int_t j=0; // histogram bin
3293 Int_t index_vxx=rows_Vxx[srcBinI];
3294 //Double_t e2=TMath::Power(output->GetBinError(destBinI),2.);
3295 while((j<binMapSize)&&(index_vxx<rows_Vxx[srcBinI+1])) {
3296 Int_t destBinJ=binMap ? binMap[j] : j; // histogram bin
3297 if(destBinI!=destBinJ) {
3298 // only diagonal elements are calculated
3299 j++;
3300 } else {
3301 Int_t srcBinJ=fHistToX[j]; // matrix column index
3302 if(srcBinJ<0) {
3303 // bin was not unfolded, check next bin
3304 j++;
3305 } else {
3306 if(cols_Vxx[index_vxx]<srcBinJ) {
3307 // index is too small
3308 index_vxx++;
3309 } else if(cols_Vxx[index_vxx]>srcBinJ) {
3310 // index is too large, skip bin
3311 j++;
3312 } else {
3313 // add this bin
3314 e2[destBinI] += data_Vxx[index_vxx];
3315 j++;
3316 index_vxx++;
3317 }
3318 }
3319 }
3320 }
3321 //output->SetBinError(destBinI,TMath::Sqrt(e2));
3322 }
3323 }
3324 for (std::map<Int_t, Double_t>::const_iterator i = e2.begin(); i != e2.end(); ++i) {
3325 //cout<<(*i).first<<" "<<(*i).second<<"\n";
3326 output->SetBinError((*i).first,TMath::Sqrt((*i).second));
3327 }
3328}
3329
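A sketch of retrieving the result with a bin map that merges pairs of truth bins and drops under/overflow; the ten-bin truth axis and the output binning are assumptions.

   // truth axis has 10 bins -> binMap needs 10+2 entries
   Int_t binMap[12];
   binMap[0]=-1; binMap[11]=-1;                 // discard under/overflow
   for(Int_t i=1;i<=10;i++) binMap[i]=(i+1)/2;  // merge pairwise -> 5 output bins
   TH1D *histResult = new TH1D("histResult","unfolded",5,0.,100.);
   unfold.GetOutput(histResult,binMap);
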
3330////////////////////////////////////////////////////////////////////////////////
3331/// Add up an error matrix, also respecting the bin mapping.
3332///
3333/// \param[inout] ematrix error matrix histogram
3334/// \param[in] emat error matrix stored with internal mapping (member fXToHist)
3335/// \param[in] binMap mapping of histogram bins
3336/// \param[in] doClear if true, ematrix is cleared prior to adding
3337/// elements of emat to it.
3338///
3339/// the array <b>binMap</b> is explained with the method GetOutput(). The
3340/// matrix emat must have dimension NxN where N=fXToHist.size()
3341/// The flag <b>doClear</b> may be used to add covariance matrices from
3342/// several uncertainty sources.
3343
3344 void TUnfold::ErrorMatrixToHist(TH2 *ematrix,const TMatrixDSparse *emat,
3345 const Int_t *binMap,Bool_t doClear) const
3346{
3347 Int_t nbin=ematrix->GetNbinsX();
3348 if(doClear) {
3349 for(Int_t i=0;i<nbin+2;i++) {
3350 for(Int_t j=0;j<nbin+2;j++) {
3351 ematrix->SetBinContent(i,j,0.0);
3352 ematrix->SetBinError(i,j,0.0);
3353 }
3354 }
3355 }
3356
3357 if(emat) {
3358 const Int_t *rows_emat=emat->GetRowIndexArray();
3359 const Int_t *cols_emat=emat->GetColIndexArray();
3360 const Double_t *data_emat=emat->GetMatrixArray();
3361
3362 Int_t binMapSize = fHistToX.GetSize();
3363 for(Int_t i=0;i<binMapSize;i++) {
3364 Int_t destBinI=binMap ? binMap[i] : i;
3365 Int_t srcBinI=fHistToX[i];
3366 if((destBinI>=0)&&(destBinI<nbin+2)&&(srcBinI>=0)) {
3367 // here we loop over the columns of the source matrix
3368 // j: counts histogram bins
3369 // index: counts sparse matrix index
3370 // the algorithm makes use of the fact that fHistToX is ordered
3371 Int_t j=0;
3372 Int_t index_vxx=rows_emat[srcBinI];
3373 while((j<binMapSize)&&(index_vxx<rows_emat[srcBinI+1])) {
3374 Int_t destBinJ=binMap ? binMap[j] : j;
3375 Int_t srcBinJ=fHistToX[j];
3376 if((destBinJ<0)||(destBinJ>=nbin+2)||(srcBinJ<0)) {
3377 // check next bin
3378 j++;
3379 } else {
3380 if(cols_emat[index_vxx]<srcBinJ) {
3381 // index is too small
3382 index_vxx++;
3383 } else if(cols_emat[index_vxx]>srcBinJ) {
3384 // index is too large, skip bin
3385 j++;
3386 } else {
3387 // add this bin
3388 Double_t e2= ematrix->GetBinContent(destBinI,destBinJ)
3389 + data_emat[index_vxx];
3390 ematrix->SetBinContent(destBinI,destBinJ,e2);
3391 j++;
3392 index_vxx++;
3393 }
3394 }
3395 }
3396 }
3397 }
3398 }
3399}
3400
3401////////////////////////////////////////////////////////////////////////////////
3402/// Get output covariance matrix, possibly cumulated over several bins.
3403///
3404/// \param[out] ematrix histogram to store the covariance. The bin
3405/// contents are overwritten.
3406/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3407///
3408/// The use of <b>binMap</b> is explained with the documentation of
3409/// the GetOutput() method
3410
3411void TUnfold::GetEmatrix(TH2 *ematrix,const Int_t *binMap) const
3412{
3413 ErrorMatrixToHist(ematrix,fVxx,binMap,kTRUE);
3414}
3415
3416////////////////////////////////////////////////////////////////////////////////
3417/// Get correlation coefficients, possibly cumulated over several bins.
3418///
3419/// \param[out] rhoij histogram to store the correlation coefficients. The bin
3420/// contents are overwritten.
3421/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3422///
3423/// The use of <b>binMap</b> is explained with the documentation of
3424/// the GetOutput() method
3425
3426void TUnfold::GetRhoIJ(TH2 *rhoij,const Int_t *binMap) const
3427{
3428 GetEmatrix(rhoij,binMap);
3429 Int_t nbin=rhoij->GetNbinsX();
3430 Double_t *e=new Double_t[nbin+2];
3431 for(Int_t i=0;i<nbin+2;i++) {
3432 e[i]=TMath::Sqrt(rhoij->GetBinContent(i,i));
3433 }
3434 for(Int_t i=0;i<nbin+2;i++) {
3435 for(Int_t j=0;j<nbin+2;j++) {
3436 if((e[i]>0.0)&&(e[j]>0.0)) {
3437 rhoij->SetBinContent(i,j,rhoij->GetBinContent(i,j)/e[i]/e[j]);
3438 } else {
3439 rhoij->SetBinContent(i,j,0.0);
3440 }
3441 }
3442 }
3443 delete[] e;
3444}
3445
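The covariance and correlation matrices for the same mapping could be retrieved as in this sketch; binMap and the five-bin output binning are the assumptions made in the GetOutput() example above.

   TH2D *histEmat = new TH2D("histEmat","covariance",5,0.,100.,5,0.,100.);
   TH2D *histRho  = new TH2D("histRho","correlation",5,0.,100.,5,0.,100.);
   unfold.GetEmatrix(histEmat,binMap);
   unfold.GetRhoIJ(histRho,binMap);
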
3446////////////////////////////////////////////////////////////////////////////////
3447/// Get global correlation coefficients, possibly cumulated over several bins.
3448///
3449/// \param[out] rhoi histogram to store the global correlation
3450/// coefficients. The bin contents are overwritten.
3451/// \param[in] binMap (default=0) array for mapping truth bins to
3452/// histogram bins
3453/// \param[out] invEmat (default=0) histogram to store the inverted
3454/// covariance matrix
3455///
3456/// for a given bin, the global correlation coefficient is defined
3457/// as \f$ \rho_{i} = \sqrt{1-\frac{1}{(V_{ii}*V^{-1}_{ii})}} \f$
3458///
3459/// such that the calculation of global correlation coefficients
3460/// possibly involves the inversion of a covariance matrix.
3461///
3462/// return value: maximum global correlation coefficient
3463///
3464/// The use of <b>binMap</b> is explained with the documentation of
3465/// the GetOutput() method
3466
3467Double_t TUnfold::GetRhoI(TH1 *rhoi,const Int_t *binMap,TH2 *invEmat) const
3468{
3469 ClearHistogram(rhoi,-1.);
3470
3471 if(binMap) {
3472 // if there is a bin map, the matrix needs to be inverted
3473 // otherwise, use the existing inverse, such that
3474 // no matrix inversion is needed
3475 return GetRhoIFromMatrix(rhoi,fVxx,binMap,invEmat);
3476 } else {
3477 Double_t rhoMax=0.0;
3478
3479 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3480 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3481 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3482
3483 const Int_t *rows_VxxInv=fVxxInv->GetRowIndexArray();
3484 const Int_t *cols_VxxInv=fVxxInv->GetColIndexArray();
3485 const Double_t *data_VxxInv=fVxxInv->GetMatrixArray();
3486
3487 for(Int_t i=0;i<GetNx();i++) {
3488 Int_t destI=fXToHist[i];
3489 Double_t e_ii=0.0,einv_ii=0.0;
3490 for(int index_vxx=rows_Vxx[i];index_vxx<rows_Vxx[i+1];index_vxx++) {
3491 if(cols_Vxx[index_vxx]==i) {
3492 e_ii=data_Vxx[index_vxx];
3493 break;
3494 }
3495 }
3496 for(int index_vxxinv=rows_VxxInv[i];index_vxxinv<rows_VxxInv[i+1];
3497 index_vxxinv++) {
3498 if(cols_VxxInv[index_vxxinv]==i) {
3499 einv_ii=data_VxxInv[index_vxxinv];
3500 break;
3501 }
3502 }
3503 Double_t rho=1.0;
3504 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3505 if(rho>=0.0) rho=TMath::Sqrt(rho);
3506 else rho=-TMath::Sqrt(-rho);
3507 if(rho>rhoMax) {
3508 rhoMax = rho;
3509 }
3510 rhoi->SetBinContent(destI,rho);
3511 }
3512 return rhoMax;
3513 }
3514}
3515
3516////////////////////////////////////////////////////////////////////////////////
3517 /// Get global correlation coefficients with arbitrary bin map.
3518///
3519/// - rhoi: global correlation histogram
3520/// - emat: error matrix
3521/// - binMap: for each bin of the original output distribution
3522/// specify the destination bin. A value of -1 means that the bin
3523/// is discarded. 0 means underflow bin, 1 first bin, ...
3524/// - binMap[0] : destination of underflow bin
3525/// - binMap[1] : destination of first bin
3526/// ...
3527/// return value: maximum global correlation
3528
3529 Double_t TUnfold::GetRhoIFromMatrix(TH1 *rhoi,const TMatrixDSparse *eOrig,
3530 const Int_t *binMap,TH2 *invEmat) const
3531{
3532 Double_t rhoMax=0.;
3533 // original number of bins:
3534 // fHistToX.GetSize()
3535 // loop over binMap and find number of used bins
3536
3537 Int_t binMapSize = fHistToX.GetSize();
3538
3539 // histToLocalBin[iBin] points to a local index
3540 // only bins iBin of the histogram rhoi which are referenced
3541 // in the bin map have a local index
3542 std::map<int,int> histToLocalBin;
3543 Int_t nBin=0;
3544 for(Int_t i=0;i<binMapSize;i++) {
3545 Int_t mapped_i=binMap ? binMap[i] : i;
3546 if(mapped_i>=0) {
3547 if(histToLocalBin.find(mapped_i)==histToLocalBin.end()) {
3548 histToLocalBin[mapped_i]=nBin;
3549 nBin++;
3550 }
3551 }
3552 }
3553 // number of local indices: nBin
3554 if(nBin>0) {
3555 // construct inverse mapping function
3556 // local index -> histogram bin
3557 Int_t *localBinToHist=new Int_t[nBin];
3558 for (std::map<int, int>::const_iterator i = histToLocalBin.begin(); i != histToLocalBin.end(); ++i) {
3559 localBinToHist[(*i).second]=(*i).first;
3560 }
3561
3562 const Int_t *rows_eOrig=eOrig->GetRowIndexArray();
3563 const Int_t *cols_eOrig=eOrig->GetColIndexArray();
3564 const Double_t *data_eOrig=eOrig->GetMatrixArray();
3565
3566 // remap error matrix
3567 // matrix row i -> origI (fXToHist[i])
3568 // origI -> destI (binMap)
3569 // destI -> ematBinI (histToLocalBin)
3570 TMatrixD eRemap(nBin,nBin);
3571 // i: row of the matrix eOrig
3572 for(Int_t i=0;i<GetNx();i++) {
3573 // origI: pointer in output histogram with all bins
3574 Int_t origI=fXToHist[i];
3575 // destI: pointer in the histogram rhoi
3576 Int_t destI=binMap ? binMap[origI] : origI;
3577 if(destI<0) continue;
3578 Int_t ematBinI=histToLocalBin[destI];
3579 for(int index_eOrig=rows_eOrig[i];index_eOrig<rows_eOrig[i+1];
3580 index_eOrig++) {
3581 // j: column of the matrix fVxx
3582 Int_t j=cols_eOrig[index_eOrig];
3583 // origJ: pointer in output histogram with all bins
3584 Int_t origJ=fXToHist[j];
3585 // destJ: pointer in the histogram rhoi
3586 Int_t destJ=binMap ? binMap[origJ] : origJ;
3587 if(destJ<0) continue;
3588 Int_t ematBinJ=histToLocalBin[destJ];
3589 eRemap(ematBinI,ematBinJ) += data_eOrig[index_eOrig];
3590 }
3591 }
3592 // invert remapped error matrix
3593 TMatrixDSparse eSparse(eRemap);
3594 Int_t rank=0;
3595 TMatrixDSparse *einvSparse=InvertMSparseSymmPos(&eSparse,&rank);
3596 if(rank!=einvSparse->GetNrows()) {
3597 Warning("GetRhoIFromMatrix","Covariance matrix has rank %d, expected %d",
3598 rank,einvSparse->GetNrows());
3599 }
3600 // fill to histogram
3601 const Int_t *rows_eInv=einvSparse->GetRowIndexArray();
3602 const Int_t *cols_eInv=einvSparse->GetColIndexArray();
3603 const Double_t *data_eInv=einvSparse->GetMatrixArray();
3604
3605 for(Int_t i=0;i<einvSparse->GetNrows();i++) {
3606 Double_t e_ii=eRemap(i,i);
3607 Double_t einv_ii=0.0;
3608 for(Int_t index_einv=rows_eInv[i];index_einv<rows_eInv[i+1];
3609 index_einv++) {
3610 Int_t j=cols_eInv[index_einv];
3611 if(j==i) {
3612 einv_ii=data_eInv[index_einv];
3613 }
3614 if(invEmat) {
3615 invEmat->SetBinContent(localBinToHist[i],localBinToHist[j],
3616 data_eInv[index_einv]);
3617 }
3618 }
3619 Double_t rho=1.0;
3620 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3621 if(rho>=0.0) rho=TMath::Sqrt(rho);
3622 else rho=-TMath::Sqrt(-rho);
3623 if(rho>rhoMax) {
3624 rhoMax = rho;
3625 }
3626 //std::cout<<i<<" "<<localBinToHist[i]<<" "<<rho<<"\n";
3627 rhoi->SetBinContent(localBinToHist[i],rho);
3628 }
3629
3630 DeleteMatrix(&einvSparse);
3631 delete [] localBinToHist;
3632 }
3633 return rhoMax;
3634}
3635
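// Bin-map sketch (illustration only, not part of the TUnfold implementation):
// merge pairs of truth bins and discard the under- and overflow bins before
// extracting global correlation coefficients. The object "unfold", the number
// of truth bins "nTruth" and the name exampleRhoIMerged are assumptions made
// for this example; the bin-map conventions are those documented above
// (one entry per bin of the original output distribution, -1 = discard,
// 0 = underflow, 1 = first bin, ...).
static Double_t exampleRhoIMerged(const TUnfold &unfold,Int_t nTruth)
{
   Int_t *binMap=new Int_t[nTruth+2];
   binMap[0]=-1;             // discard underflow
   binMap[nTruth+1]=-1;      // discard overflow
   for(Int_t i=1;i<=nTruth;i++) {
      binMap[i]=(i+1)/2;     // map bins (1,2)->1, (3,4)->2, ...
   }
   Int_t nMerged=(nTruth+1)/2;
   TH1D *rho=new TH1D("rhoMerged",";merged truth bin;#rho_{i}",
                      nMerged,0.5,nMerged+0.5);
   // with a bin map, the covariance matrix is first summed into the merged
   // bins and then inverted, see GetRhoIFromMatrix() above
   Double_t rhoMax=unfold.GetRhoI(rho,binMap);
   delete [] binMap;
   return rhoMax;
}
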
3636////////////////////////////////////////////////////////////////////////////////
3637/// Initialize bin contents and bin errors for a given histogram.
3638///
3639/// \param[out] h histogram
3640/// \param[in] x new histogram content
3641///
3642/// all histogram errors are set to zero, all contents are set to <b>x</b>
3643
3644void TUnfold::ClearHistogram(TH1 *h,Double_t x) const
3645{
3646 Int_t nxyz[3];
3647 nxyz[0]=h->GetNbinsX()+1;
3648 nxyz[1]=h->GetNbinsY()+1;
3649 nxyz[2]=h->GetNbinsZ()+1;
3650 for(int i=h->GetDimension();i<3;i++) nxyz[i]=0;
3651 Int_t ixyz[3];
3652 for(int i=0;i<3;i++) ixyz[i]=0;
3653 while((ixyz[0]<=nxyz[0])&&
3654 (ixyz[1]<=nxyz[1])&&
3655 (ixyz[2]<=nxyz[2])){
3656 Int_t ibin=h->GetBin(ixyz[0],ixyz[1],ixyz[2]);
3657 h->SetBinContent(ibin,x);
3658 h->SetBinError(ibin,0.0);
3659 for(Int_t i=0;i<3;i++) {
3660 ixyz[i] += 1;
3661 if(ixyz[i]<=nxyz[i]) break;
3662 if(i<2) ixyz[i]=0;
3663 }
3664 }
3665}
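
// For a one-dimensional histogram, the odometer-style loop above is
// equivalent to the following standalone sketch (illustration only; the
// function name exampleClear1D is an assumption made for this example):
// every bin, including underflow (0) and overflow (GetNbinsX()+1), receives
// content x and zero error.
static void exampleClear1D(TH1 *h,Double_t x)
{
   for(Int_t ibin=0;ibin<=h->GetNbinsX()+1;ibin++) {
      h->SetBinContent(ibin,x);
      h->SetBinError(ibin,0.0);
   }
}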
3666
3667void TUnfold::SetEpsMatrix(Double_t eps) {
3668 // set accuracy for matrix inversion
3669 if((eps>0.0)&&(eps<1.0)) fEpsMatrix=eps;
3670}
3671
3672////////////////////////////////////////////////////////////////////////////////
3673/// Return a string describing the TUnfold version.
3674///
3675/// The version is reported in the form Vmajor.minor
3676/// Changes of the minor version number typically correspond to
3677/// bug-fixes. Changes of the major version may result in adding or
3678/// removing data attributes, such that the streamer methods are not
3679/// compatible between different major versions.
3680
3681const char *TUnfold::GetTUnfoldVersion(void)
3682{
3683 return TUnfold_VERSION;
3684}
3685
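// Configuration sketch (illustration only, not part of the TUnfold
// implementation): set the numerical accuracy used to decide which
// eigenvalues count as zero when determining the matrix rank, and query the
// library version. The object "unfold", the value 1.e-13 and the name
// exampleConfigure are assumptions made for this example; SetEpsMatrix()
// silently ignores values outside the interval (0,1).
static const char *exampleConfigure(TUnfold &unfold)
{
   unfold.SetEpsMatrix(1.e-13);          // accuracy for the rank determination
   return TUnfold::GetTUnfoldVersion();  // version string of the form Vmajor.minor
}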