TUnfold.cxx
1// Author: Stefan Schmitt
2// DESY, 13/10/08
3
4// Version 17.9, add new methods GetDF(), GetSURE(), ScanSURE(), GetSqrtEvEmatrix()
5//
6// History:
7// Version 17.8, add new method GetDXDY() for histograms
8// Version 17.7, change finite() -> TMath::Finite()
9// Version 17.6, updated doxygen-style comments, add one argument for scanLCurve
10// Version 17.5, fix memory leak with fVyyInv, bugs in GetInputInverseEmatrix(), GetInput(), bug in MultiplyMSparseMSparseTranspVector
11// Version 17.4, in parallel to changes in TUnfoldBinning
12// Version 17.3, in parallel to changes in TUnfoldBinning
13// Version 17.2, bug fix with GetProbabilityMatrix
14// Version 17.1, bug fixes in GetFoldedOutput, GetOutput
15// Version 17.0, option to specify an error matrix with SetInput(), new ScanRho() method
16// Version 16.2, in parallel to bug-fix in TUnfoldSys
17// Version 16.1, fix bug with error matrix in case kEConstraintArea is used
18// Version 16.0, fix calculation of global correlations, improved error messages
19// Version 15, simplified L-curve scan, new tau definition, new error calc., area preservation
20// Version 14, with changes in TUnfoldSys.cxx
21// Version 13, new methods for derived classes and small bug fix
22// Version 12, report singular matrices
23// Version 11, reduce the amount of printout
24// Version 10, more correct definition of the L curve, update references
25// Version 9, faster matrix inversion and skip edge points for L-curve scan
26// Version 8, replace all TMatrixSparse matrix operations by private code
27// Version 7, fix problem with TMatrixDSparse,TMatrixD multiplication
28// Version 6, replace class XY by std::pair
29// Version 5, replace run-time dynamic arrays by new and delete[]
30// Version 4, fix new bug from V3 with initial regularisation condition
31// Version 3, fix bug with initial regularisation condition
32// Version 2, with improved ScanLcurve() algorithm
33// Version 1, added ScanLcurve() method
34// Version 0, stable version of basic unfolding algorithm
35
36
37/** \class TUnfold
38An algorithm to unfold distributions from detector to truth level
39
40TUnfold is used to decompose a measurement y into several sources x,
41given the measurement uncertainties and a matrix of migrations A.
42The method can be applied to a large number of problems,
43where the measured distribution y is a linear superposition
44of several Monte Carlo shapes. Beyond such a simple template fit,
45TUnfold has an adjustable regularisation term and also supports an
46optional constraint on the total number of events.
47
48<b>For most applications, it is better to use the derived class
49TUnfoldDensity instead of TUnfold. TUnfoldDensity adds various
50features to TUnfold, such as:
51background subtraction, propagation of systematic uncertainties,
52complex multidimensional arrangements of the bins. For innocent
53users, the most notable improvement of TUnfoldDensity over TUnfold is
54the set of getter functions. For TUnfold, histograms have to be booked by the
55user and the getter functions fill the histogram bins. TUnfoldDensity
56simply returns a new, already filled histogram.</b>
57
58If you use this software, please consider the following citation
59<br/>
60<b>S.Schmitt, JINST 7 (2012) T10003 [arXiv:1205.6201]</b>
61<br/>
62Detailed documentation and updates are available on
63http://www.desy.de/~sschmitt
64
65Brief recipe for using TUnfold (see also the usage sketch below):
66<ul>
67<li>a matrix (truth,reconstructed) is given as a two-dimensional histogram
68 as argument to the constructor of TUnfold</li>
69<li>a vector of measurements is given as one-dimensional histogram using
70 the SetInput() method</li>
71<li>The unfolding is performed
72<ul>
73<li>either once with a fixed parameter tau, method DoUnfold(tau)</li>
74<li>or multiple times in a scan to determine the best choice of tau,
75 method ScanLcurve()</li>
76</ul>
77<li>Unfolding results are retrieved using various GetXXX() methods
78</ul>
79
80Basic formulae:<br/>
81&chi;<sup>2</sup><sub>A</sub>=(Ax-y)<sup>T</sup>V<sub>yy</sub><sup>-1</sup>(Ax-y)<br/>
82&chi;<sup>2</sup><sub>L</sub>=(x-f*x<sub>0</sub>)<sup>T</sup>L<sup>T</sup>L(x-f*x<sub>0</sub>)<br/>
83&chi;<sup>2</sup><sub>unf</sub>=&chi;<sup>2</sup><sub>A</sub>+&tau;<sup>2</sup>&chi;<sup>2</sup><sub>L</sub>+&lambda;&Sigma;<sub>i</sub>(Ax-y)<sub>i</sub><br/>
84x:result, A:probabilities, y:data, V<sub>yy</sub>:data
85covariance, f:bias scale, x<sub>0</sub>:bias, L:regularisation conditions,
86&tau;:regularisation strength, &lambda;:Lagrangian multiplier<br/>
87Without area constraint, &lambda; is set to zero, and
88&chi;<sup>2</sup><sub>unf</sub> is minimized to determine x. With area
89constraint, both x and &lambda; are determined.
90*/
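// A minimal usage sketch of the recipe above (illustrative only; the
// histograms histMigration (TH2) and histData (TH1), the value of tau and
// the output binning are hypothetical and have to be supplied by the user):
//
//    TUnfold unfold(histMigration,TUnfold::kHistMapOutputHoriz,
//                   TUnfold::kRegModeSize,TUnfold::kEConstraintArea);
//    unfold.SetInput(histData);
//    unfold.DoUnfold(tau);            // fixed tau; alternatively scan with ScanLcurve()
//    TH1D *hOutput=new TH1D("unfolded",";x",nBinsX,xMin,xMax);
//    unfold.GetOutput(hOutput);       // getter fills the user-booked histogram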
91
92
93/*
94 This file is part of TUnfold.
95
96 TUnfold is free software: you can redistribute it and/or modify
97 it under the terms of the GNU General Public License as published by
98 the Free Software Foundation, either version 3 of the License, or
99 (at your option) any later version.
100
101 TUnfold is distributed in the hope that it will be useful,
102 but WITHOUT ANY WARRANTY; without even the implied warranty of
103 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
104 GNU General Public License for more details.
105
106 You should have received a copy of the GNU General Public License
107 along with TUnfold. If not, see <http://www.gnu.org/licenses/>.
108*/
109
110#include <iostream>
111#include <TMatrixD.h>
112#include <TMatrixDSparse.h>
113#include <TMatrixDSym.h>
114#include <TMatrixDSymEigen.h>
115#include <TMath.h>
116#include "TUnfold.h"
117
118#include <map>
119#include <vector>
120
121//#define DEBUG
122//#define DEBUG_DETAIL
123//#define FORCE_EIGENVALUE_DECOMPOSITION
124
126
127TUnfold::~TUnfold(void)
128{
129 // delete all data members
130
137
138 ClearResults();
139}
140
141
142////////////////////////////////////////////////////////////////////////
143/// initialize data members, for use in constructors
144void TUnfold::InitTUnfold(void)
145{
146 // reset all data members
147 fXToHist.Set(0);
148 fHistToX.Set(0);
149 fSumOverY.Set(0);
150 fA = nullptr;
151 fL = nullptr;
152 fVyy = nullptr;
153 fY = nullptr;
154 fX0 = nullptr;
155 fTauSquared = 0.0;
156 fBiasScale = 0.0;
159 // output
160 fX = nullptr;
161 fVyyInv = nullptr;
162 fVxx = nullptr;
163 fVxxInv = nullptr;
164 fAx = nullptr;
165 fChi2A = 0.0;
166 fLXsquared = 0.0;
167 fRhoMax = 999.0;
168 fRhoAvg = -1.0;
169 fNdf = 0;
170 fDXDAM[0] = nullptr;
171 fDXDAZ[0] = nullptr;
172 fDXDAM[1] = nullptr;
173 fDXDAZ[1] = nullptr;
174 fDXDtauSquared = nullptr;
175 fDXDY = nullptr;
176 fEinv = nullptr;
177 fE = nullptr;
178 fEpsMatrix=1.E-13;
179 fIgnoredBins=0;
180}
181
182////////////////////////////////////////////////////////////////////////
183/// delete matrix and invalidate pointer
184///
185/// \param[inout] m pointer to a matrix-pointer
186///
187/// if the matrix pointer is non-zero, the matrix is deleted. The matrix pointer is set to zero.
188void TUnfold::DeleteMatrix(TMatrixD **m)
189{
190 if(*m) delete *m;
191 *m=nullptr;
192}
193
194////////////////////////////////////////////////////////////////////////
195/// delete sparse matrix and invalidate pointer
196///
197/// \param[inout] m pointer to a matrix-pointer
198///
199/// if the matrix pointer is non-zero, the matrix is deleted. The matrix pointer is set to zero.
200void TUnfold::DeleteMatrix(TMatrixDSparse **m)
201{
202 if(*m) delete *m;
203 *m=nullptr;
204}
205
206////////////////////////////////////////////////////////////////////////
207/// reset all results
208void TUnfold::ClearResults(void)
209{
210 // delete old results (if any)
211 // this function is virtual, so derived classes may implement their own
212 // method to flag results as non-valid
213
214 // note: the inverse of the input covariance is not cleared
215 // because it does not change until the input is changed
216
220 for(Int_t i=0;i<2;i++) {
223 }
229 fChi2A = 0.0;
230 fLXsquared = 0.0;
231 fRhoMax = 999.0;
232 fRhoAvg = -1.0;
233}
234
235////////////////////////////////////////////////////////////////////////
236/// only for use by root streamer or derived classes
237///
238TUnfold::TUnfold(void)
239{
240 // set all matrix pointers to zero
241 InitTUnfold();
242}
243
244////////////////////////////////////////////////////////////////////////
245/// core unfolding algorithm
246Double_t TUnfold::DoUnfold(void)
247{
248 // main unfolding algorithm. Declared virtual, because other algorithms
249 // could be implemented
250 //
251 // Purpose: unfold y -> x
252 // Data members required:
253 // fA: matrix to relate x and y
254 // fY: measured data points
255 // fX0: bias on x
256 // fBiasScale: scale factor for fX0
257 // fVyy: covariance matrix for y
258 // fL: regularisation conditions
259 // fTauSquared: regularisation strength
260 // fConstraint: whether the constraint is applied
261 // Data members modified:
262 // fVyyInv: inverse of input data covariance matrix
263 // fNdf: number of degrees of freedom
264 // fEinv: inverse of the matrix needed for unfolding calculations
265 // fE: the matrix needed for unfolding calculations
266 // fX: unfolded data points
267 // fDXDY: derivative of x wrt y (for error propagation)
268 // fVxx: error matrix (covariance matrix) on x
269 // fAx: estimate of distribution y from unfolded data
270 // fChi2A: contribution to chi**2 from y-Ax
271 // fChi2L: contribution to chi**2 from L*(x-x0)
272 // fDXDtauSquared: derivative of x wrt tau
273 // fDXDAM[0,1]: matrix parts of derivative x wrt A
274 // fDXDAZ[0,1]: vector parts of derivative x wrt A
275 // fRhoMax: maximum global correlation coefficient
276 // fRhoAvg: average global correlation coefficient
277 // return code:
278 // fRhoMax if(fRhoMax>=1.0) then the unfolding has failed!
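   // Sketch of the algebra evaluated below (notation as in the class
   // documentation, "#" denoting the transpose):
   //    E^-1 = A# Vyy^-1 A + tau^2 L# L
   //    x    = E ( A# Vyy^-1 y + tau^2 * biasScale * L# L x0 )
   // optionally followed by the area-constraint correction (Lagrangian
   // multiplier lambda) and the error propagation via dx/dy = E A# Vyy^-1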
279
280 ClearResults();
281
282 // get pseudo-inverse matrix Vyyinv and NDF
283 if(!fVyyInv) {
284 GetInputInverseEmatrix(nullptr);
286 fNdf--;
287 }
288 }
289 //
290 // get matrix
291 // T
292 // fA fV = mAt_V
293 //
295 //
296 // get
297 // T
298 // fA fVyyinv fY + fTauSquared fBiasScale Lsquared fX0 = rhs
299 //
300 TMatrixDSparse *rhs=MultiplyMSparseM(AtVyyinv,fY);
302 if (fBiasScale != 0.0) {
303 TMatrixDSparse *rhs2=MultiplyMSparseM(lSquared,fX0);
304 AddMSparse(rhs, fTauSquared * fBiasScale ,rhs2);
305 DeleteMatrix(&rhs2);
306 }
307
308 //
309 // get matrix
310 // T
311 // (fA fV)fA + fTauSquared*fLsquared = fEinv
313 AddMSparse(fEinv,fTauSquared,lSquared);
314
315 //
316 // get matrix
317 // -1
318 // fEinv = fE
319 //
320 Int_t rank=0;
322 if(rank != GetNx()) {
323 Warning("DoUnfold","rank of matrix E %d expect %d",rank,GetNx());
324 }
325
326 //
327 // get result
328 // fE rhs = x
329 //
331 fX = new TMatrixD(*xSparse);
332 DeleteMatrix(&rhs);
333 DeleteMatrix(&xSparse);
334
335 // additional correction for constraint
336 Double_t lambda_half=0.0;
337 Double_t one_over_epsEeps=0.0;
338 TMatrixDSparse *epsilon=nullptr;
339 TMatrixDSparse *Eepsilon=nullptr;
341 // calculate epsilon: vector of efficiencies
342 const Int_t *A_rows=fA->GetRowIndexArray();
343 const Int_t *A_cols=fA->GetColIndexArray();
344 const Double_t *A_data=fA->GetMatrixArray();
345 TMatrixD epsilonNosparse(fA->GetNcols(),1);
346 for(Int_t i=0;i<A_rows[fA->GetNrows()];i++) {
347 epsilonNosparse(A_cols[i],0) += A_data[i];
348 }
349 epsilon=new TMatrixDSparse(epsilonNosparse);
350 // calculate vector EE*epsilon
351 Eepsilon=MultiplyMSparseMSparse(fE,epsilon);
352 // calculate scalar product epsilon#*Eepsilon
353 TMatrixDSparse *epsilonEepsilon=MultiplyMSparseTranspMSparse(epsilon,
354 Eepsilon);
355 // if epsilonEepsilon is zero, nothing works...
356 if(epsilonEepsilon->GetRowIndexArray()[1]==1) {
357 one_over_epsEeps=1./epsilonEepsilon->GetMatrixArray()[0];
358 } else {
359 Fatal("Unfold","epsilon#Eepsilon has dimension %d != 1",
360 epsilonEepsilon->GetRowIndexArray()[1]);
361 }
362 DeleteMatrix(&epsilonEepsilon);
363 // calculate sum(Y)
364 Double_t y_minus_epsx=0.0;
365 for(Int_t iy=0;iy<fY->GetNrows();iy++) {
366 y_minus_epsx += (*fY)(iy,0);
367 }
368 // calculate sum(Y)-epsilon#*X
369 for(Int_t ix=0;ix<epsilonNosparse.GetNrows();ix++) {
370 y_minus_epsx -= epsilonNosparse(ix,0) * (*fX)(ix,0);
371 }
372 // calculate lambda_half
373 lambda_half=y_minus_epsx*one_over_epsEeps;
374 // calculate final vector X
375 const Int_t *EEpsilon_rows=Eepsilon->GetRowIndexArray();
376 const Double_t *EEpsilon_data=Eepsilon->GetMatrixArray();
377 for(Int_t ix=0;ix<Eepsilon->GetNrows();ix++) {
378 if(EEpsilon_rows[ix]<EEpsilon_rows[ix+1]) {
379 (*fX)(ix,0) += lambda_half * EEpsilon_data[EEpsilon_rows[ix]];
380 }
381 }
382 }
383 //
384 // get derivative dx/dy
385 // for error propagation
386 // dx/dy = E A# Vyy^-1 ( = B )
387 fDXDY = MultiplyMSparseMSparse(fE,AtVyyinv);
388
389 // additional correction for constraint
391 // transposed vector of dimension GetNy() all elements equal 1/epseEeps
392 Int_t *rows=new Int_t[GetNy()];
393 Int_t *cols=new Int_t[GetNy()];
394 Double_t *data=new Double_t[GetNy()];
395 for(Int_t i=0;i<GetNy();i++) {
396 rows[i]=0;
397 cols[i]=i;
398 data[i]=one_over_epsEeps;
399 }
401 (1,GetNy(),GetNy(),rows,cols,data);
402 delete[] data;
403 delete[] rows;
404 delete[] cols;
405 // B# * epsilon
407 // temp -= one_over_epsEeps * epsilonB
408 AddMSparse(temp, -one_over_epsEeps, epsilonB);
409 DeleteMatrix(&epsilonB);
410 // correction matrix
411 TMatrixDSparse *corr=MultiplyMSparseMSparse(Eepsilon,temp);
412 DeleteMatrix(&temp);
413 // determine new derivative
414 AddMSparse(fDXDY,1.0,corr);
415 DeleteMatrix(&corr);
416 }
417
418 DeleteMatrix(&AtVyyinv);
419
420 //
421 // get error matrix on x
422 // fDXDY * Vyy * fDXDY#
425
426 DeleteMatrix(&fDXDYVyy);
427
428 //
429 // get result
430 // fA x = fAx
431 //
433
434 //
435 // calculate chi**2 etc
436
437 // chi**2 contribution from (y-Ax)V(y-Ax)
440
441 const Int_t *VyyinvDy_rows=VyyinvDy->GetRowIndexArray();
442 const Double_t *VyyinvDy_data=VyyinvDy->GetMatrixArray();
443 fChi2A=0.0;
444 for(Int_t iy=0;iy<VyyinvDy->GetNrows();iy++) {
445 if(VyyinvDy_rows[iy]<VyyinvDy_rows[iy+1]) {
446 fChi2A += VyyinvDy_data[VyyinvDy_rows[iy]]*dy(iy,0);
447 }
448 }
450 TMatrixDSparse *LsquaredDx=MultiplyMSparseM(lSquared,&dx);
451 const Int_t *LsquaredDx_rows=LsquaredDx->GetRowIndexArray();
452 const Double_t *LsquaredDx_data=LsquaredDx->GetMatrixArray();
453 fLXsquared = 0.0;
454 for(Int_t ix=0;ix<LsquaredDx->GetNrows();ix++) {
455 if(LsquaredDx_rows[ix]<LsquaredDx_rows[ix+1]) {
456 fLXsquared += LsquaredDx_data[LsquaredDx_rows[ix]]*dx(ix,0);
457 }
458 }
459
460 //
461 // get derivative dx/dtau
463
466 Double_t f=0.0;
467 if(temp->GetRowIndexArray()[1]==1) {
468 f=temp->GetMatrixArray()[0]*one_over_epsEeps;
469 } else if(temp->GetRowIndexArray()[1]>1) {
470 Fatal("Unfold",
471 "epsilon#fDXDtauSquared has dimension %d != 1",
472 temp->GetRowIndexArray()[1]);
473 }
474 if(f!=0.0) {
475 AddMSparse(fDXDtauSquared, -f,Eepsilon);
476 }
477 DeleteMatrix(&temp);
478 }
479 DeleteMatrix(&epsilon);
480
481 DeleteMatrix(&LsquaredDx);
482 DeleteMatrix(&lSquared);
483
484 // calculate/store matrices defining the derivatives dx/dA
485 fDXDAM[0]=new TMatrixDSparse(*fE);
486 fDXDAM[1]=new TMatrixDSparse(*fDXDY); // create a copy
487 fDXDAZ[0]=VyyinvDy; // instead of deleting VyyinvDy
488 VyyinvDy=nullptr;
489 fDXDAZ[1]=new TMatrixDSparse(*fX); // create a copy
490
492 // add correction to fDXDAM[0]
494 (Eepsilon,Eepsilon,nullptr);
495 AddMSparse(fDXDAM[0], -one_over_epsEeps,temp1);
496 DeleteMatrix(&temp1);
497 // add correction to fDXDAZ[0]
498 Int_t *rows=new Int_t[GetNy()];
499 Int_t *cols=new Int_t[GetNy()];
500 Double_t *data=new Double_t[GetNy()];
501 for(Int_t i=0;i<GetNy();i++) {
502 rows[i]=i;
503 cols[i]=0;
504 data[i]=lambda_half;
505 }
507 (GetNy(),1,GetNy(),rows,cols,data);
508 delete[] data;
509 delete[] rows;
510 delete[] cols;
511 AddMSparse(fDXDAZ[0],1.0,temp2);
512 DeleteMatrix(&temp2);
513 }
514
515 DeleteMatrix(&Eepsilon);
516
517 rank=0;
519 if(rank != GetNx()) {
520 Warning("DoUnfold","rank of output covariance is %d expect %d",
521 rank,GetNx());
522 }
523
524 TVectorD VxxInvDiag(fVxxInv->GetNrows());
525 const Int_t *VxxInv_rows=fVxxInv->GetRowIndexArray();
526 const Int_t *VxxInv_cols=fVxxInv->GetColIndexArray();
527 const Double_t *VxxInv_data=fVxxInv->GetMatrixArray();
528 for (int ix = 0; ix < fVxxInv->GetNrows(); ix++) {
529 for(int ik=VxxInv_rows[ix];ik<VxxInv_rows[ix+1];ik++) {
530 if(ix==VxxInv_cols[ik]) {
531 VxxInvDiag(ix)=VxxInv_data[ik];
532 }
533 }
534 }
535
536 // maximum global correlation coefficient
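   // for bin i: rho_i = sqrt( 1 - 1/( (Vxx^-1)_ii * (Vxx)_ii ) ),
   // evaluated in the loop below from the diagonals of fVxxInv and fVxx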
537 const Int_t *Vxx_rows=fVxx->GetRowIndexArray();
538 const Int_t *Vxx_cols=fVxx->GetColIndexArray();
539 const Double_t *Vxx_data=fVxx->GetMatrixArray();
540
541 Double_t rho_squared_max = 0.0;
542 Double_t rho_sum = 0.0;
543 Int_t n_rho=0;
544 for (int ix = 0; ix < fVxx->GetNrows(); ix++) {
545 for(int ik=Vxx_rows[ix];ik<Vxx_rows[ix+1];ik++) {
546 if(ix==Vxx_cols[ik]) {
547 Double_t rho_squared =
548 1. - 1. / (VxxInvDiag(ix) * Vxx_data[ik]);
549 if (rho_squared > rho_squared_max)
550 rho_squared_max = rho_squared;
551 if(rho_squared>0.0) {
552 rho_sum += TMath::Sqrt(rho_squared);
553 n_rho++;
554 }
555 break;
556 }
557 }
558 }
559 fRhoMax = TMath::Sqrt(rho_squared_max);
560 fRhoAvg = (n_rho>0) ? (rho_sum/n_rho) : -1.0;
561
562 return fRhoMax;
563}
564
565////////////////////////////////////////////////////////////////////////
566/// create a sparse matrix, given the nonzero elements
567///
568/// \param[in] nrow number of rows
569/// \param[in] ncol number of columns
570/// \param[in] nel number of non-zero elements
571/// \param[in] row row indexes of non-zero elements
572/// \param[in] col column indexes of non-zero elements
573/// \param[in] data non-zero elements data
574///
575/// return pointer to a new sparse matrix
576///
577/// shortcut to new TMatrixDSparse() followed by SetMatrixArray()
578TMatrixDSparse *TUnfold::CreateSparseMatrix
579(Int_t nrow,Int_t ncol,Int_t nel,Int_t *row,Int_t *col,Double_t *data) const
580{
581 // create a sparse matrix
582 // nrow,ncol : dimension of the matrix
583 // nel: number of non-zero elements
584 // row[nel],col[nel],data[nel] : indices and data of the non-zero elements
585 TMatrixDSparse *A=new TMatrixDSparse(nrow,ncol);
586 if(nel>0) {
587 A->SetMatrixArray(nel,row,col,data);
588 }
589 return A;
590}
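// Illustrative use (hypothetical values): build a 2x3 sparse matrix with the
// two non-zero elements m(0,1)=1.5 and m(1,2)=-2.0,
//    Int_t row[2]={0,1};
//    Int_t col[2]={1,2};
//    Double_t data[2]={1.5,-2.0};
//    TMatrixDSparse *m=CreateSparseMatrix(2,3,2,row,col,data);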
591
592////////////////////////////////////////////////////////////////////////
593/// multiply two sparse matrices
594///
595/// \param[in] a sparse matrix
596/// \param[in] b sparse matrix
597///
598/// returns a new sparse matrix a*b.
599/// <br/>
600/// A replacement for:
601/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
602/// the ROOT implementation had problems in older versions of ROOT
603TMatrixDSparse *TUnfold::MultiplyMSparseMSparse(const TMatrixDSparse *a,
604 const TMatrixDSparse *b) const
605{
606 if(a->GetNcols()!=b->GetNrows()) {
607 Fatal("MultiplyMSparseMSparse",
608 "inconsistent matrix col/ matrix row %d !=%d",
609 a->GetNcols(),b->GetNrows());
610 }
611
612 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
613 const Int_t *a_rows=a->GetRowIndexArray();
614 const Int_t *a_cols=a->GetColIndexArray();
615 const Double_t *a_data=a->GetMatrixArray();
616 const Int_t *b_rows=b->GetRowIndexArray();
617 const Int_t *b_cols=b->GetColIndexArray();
618 const Double_t *b_data=b->GetMatrixArray();
619 // maximum size of the output matrix
620 int nMax=0;
621 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
622 if(a_rows[irow+1]>a_rows[irow]) nMax += b->GetNcols();
623 }
624 if((nMax>0)&&(a_cols)&&(b_cols)) {
625 Int_t *r_rows=new Int_t[nMax];
626 Int_t *r_cols=new Int_t[nMax];
627 Double_t *r_data=new Double_t[nMax];
628 Double_t *row_data=new Double_t[b->GetNcols()];
629 Int_t n=0;
630 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
631 if(a_rows[irow+1]<=a_rows[irow]) continue;
632 // clear row data
633 for(Int_t icol=0;icol<b->GetNcols();icol++) {
634 row_data[icol]=0.0;
635 }
636 // loop over a-columns in this a-row
637 for(Int_t ia=a_rows[irow];ia<a_rows[irow+1];ia++) {
638 Int_t k=a_cols[ia];
639 // loop over b-columns in b-row k
640 for(Int_t ib=b_rows[k];ib<b_rows[k+1];ib++) {
641 row_data[b_cols[ib]] += a_data[ia]*b_data[ib];
642 }
643 }
644 // store nonzero elements
645 for(Int_t icol=0;icol<b->GetNcols();icol++) {
646 if(row_data[icol] != 0.0) {
647 r_rows[n]=irow;
648 r_cols[n]=icol;
649 r_data[n]=row_data[icol];
650 n++;
651 }
652 }
653 }
654 if(n>0) {
655 r->SetMatrixArray(n,r_rows,r_cols,r_data);
656 }
657 delete[] r_rows;
658 delete[] r_cols;
659 delete[] r_data;
660 delete[] row_data;
661 }
662
663 return r;
664}
665
666////////////////////////////////////////////////////////////////////////
667/// multiply a transposed Sparse matrix with another Sparse matrix
668///
669/// \param[in] a sparse matrix (to be transposed)
670/// \param[in] b sparse matrix
671///
672/// returns a new sparse matrix a<sup>T</sup>*b
673/// <br/>
674/// this is a replacement for the root constructors
675/// new TMatrixDSparse(TMatrixDSparse(TMatrixDSparse::kTransposed,*a),TMatrixDSparse::kMult,*b)
676
677TMatrixDSparse *TUnfold::MultiplyMSparseTranspMSparse
678(const TMatrixDSparse *a,const TMatrixDSparse *b) const
679{
680 if(a->GetNrows() != b->GetNrows()) {
681 Fatal("MultiplyMSparseTranspMSparse",
682 "inconsistent matrix row numbers %d!=%d",
683 a->GetNrows(),b->GetNrows());
684 }
685
686 TMatrixDSparse *r=new TMatrixDSparse(a->GetNcols(),b->GetNcols());
687 const Int_t *a_rows=a->GetRowIndexArray();
688 const Int_t *a_cols=a->GetColIndexArray();
689 const Double_t *a_data=a->GetMatrixArray();
690 const Int_t *b_rows=b->GetRowIndexArray();
691 const Int_t *b_cols=b->GetColIndexArray();
692 const Double_t *b_data=b->GetMatrixArray();
693 // maximum size of the output matrix
694
695 // matrix multiplication
696 typedef std::map<Int_t,Double_t> MMatrixRow_t;
697 typedef std::map<Int_t, MMatrixRow_t > MMatrix_t;
698 MMatrix_t matrix;
699
700 for(Int_t iRowAB=0;iRowAB<a->GetNrows();iRowAB++) {
701 for(Int_t ia=a_rows[iRowAB];ia<a_rows[iRowAB+1];ia++) {
702 for(Int_t ib=b_rows[iRowAB];ib<b_rows[iRowAB+1];ib++) {
703 // this creates a new row if necessary
704 MMatrixRow_t &row=matrix[a_cols[ia]];
705 MMatrixRow_t::iterator icol=row.find(b_cols[ib]);
706 if(icol!=row.end()) {
707 // update existing row
708 (*icol).second += a_data[ia]*b_data[ib];
709 } else {
710 // create new row
711 row[b_cols[ib]] = a_data[ia]*b_data[ib];
712 }
713 }
714 }
715 }
716
717 Int_t n=0;
718 for(MMatrix_t::const_iterator irow=matrix.begin();
719 irow!=matrix.end();irow++) {
720 n += (*irow).second.size();
721 }
722 if(n>0) {
723 // pack matrix into arrays
724 Int_t *r_rows=new Int_t[n];
725 Int_t *r_cols=new Int_t[n];
726 Double_t *r_data=new Double_t[n];
727 n=0;
728 for(MMatrix_t::const_iterator irow=matrix.begin();
729 irow!=matrix.end();irow++) {
730 for(MMatrixRow_t::const_iterator icol=(*irow).second.begin();
731 icol!=(*irow).second.end();icol++) {
732 r_rows[n]=(*irow).first;
733 r_cols[n]=(*icol).first;
734 r_data[n]=(*icol).second;
735 n++;
736 }
737 }
738 // pack arrays into TMatrixDSparse
739 if(n>0) {
740 r->SetMatrixArray(n,r_rows,r_cols,r_data);
741 }
742 delete[] r_rows;
743 delete[] r_cols;
744 delete[] r_data;
745 }
746
747 return r;
748}
749
750////////////////////////////////////////////////////////////////////////
751/// multiply sparse matrix and a non-sparse matrix
752///
753/// \param[in] a sparse matrix
754/// \param[in] b matrix
755///
756/// returns a new sparse matrix a*b.
757/// <br/> A replacement for:
758/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
759/// the ROOT implementation had problems in older versions of ROOT
760TMatrixDSparse *TUnfold::MultiplyMSparseM(const TMatrixDSparse *a,
761 const TMatrixD *b) const
762{
763 if(a->GetNcols()!=b->GetNrows()) {
764 Fatal("MultiplyMSparseM","inconsistent matrix col /matrix row %d!=%d",
765 a->GetNcols(),b->GetNrows());
766 }
767
768 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
769 const Int_t *a_rows=a->GetRowIndexArray();
770 const Int_t *a_cols=a->GetColIndexArray();
771 const Double_t *a_data=a->GetMatrixArray();
772 // maximum size of the output matrix
773 int nMax=0;
774 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
775 if(a_rows[irow+1]-a_rows[irow]>0) nMax += b->GetNcols();
776 }
777 if(nMax>0) {
778 Int_t *r_rows=new Int_t[nMax];
779 Int_t *r_cols=new Int_t[nMax];
780 Double_t *r_data=new Double_t[nMax];
781
782 Int_t n=0;
783 // fill matrix r
784 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
785 if(a_rows[irow+1]-a_rows[irow]<=0) continue;
786 for(Int_t icol=0;icol<b->GetNcols();icol++) {
787 r_rows[n]=irow;
788 r_cols[n]=icol;
789 r_data[n]=0.0;
790 for(Int_t i=a_rows[irow];i<a_rows[irow+1];i++) {
791 Int_t j=a_cols[i];
792 r_data[n] += a_data[i]*(*b)(j,icol);
793 }
794 if(r_data[n]!=0.0) n++;
795 }
796 }
797 if(n>0) {
798 r->SetMatrixArray(n,r_rows,r_cols,r_data);
799 }
800 delete[] r_rows;
801 delete[] r_cols;
802 delete[] r_data;
803 }
804 return r;
805}
806
807
808////////////////////////////////////////////////////////////////////////
809/// calculate a sparse matrix product M1*V*M2<sup>T</sup> where the diagonal matrix V is
810/// given by a vector
811///
812/// \param[in] m1 pointer to sparse matrix with dimension I*K
813/// \param[in] m2 pointer to sparse matrix with dimension J*K
814/// \param[in] v pointer to vector (matrix) with dimension K*1
815///
816/// returns a sparse matrix R with elements
817/// r<sub>ij</sub>=&Sigma;<sub>k</sub>M1<sub>ik</sub>V<sub>k</sub>M2<sub>jk</sub>
818
819TMatrixDSparse *TUnfold::MultiplyMSparseMSparseTranspVector
820(const TMatrixDSparse *m1,const TMatrixDSparse *m2,
821 const TMatrixTBase<Double_t> *v) const
822{
823 if((m1->GetNcols() != m2->GetNcols())||
824 (v && ((m1->GetNcols()!=v->GetNrows())||(v->GetNcols()!=1)))) {
825 if(v) {
826 Fatal("MultiplyMSparseMSparseTranspVector",
827 "matrix cols/vector rows %d!=%d!=%d or vector rows %d!=1\n",
828 m1->GetNcols(),m2->GetNcols(),v->GetNrows(),v->GetNcols());
829 } else {
830 Fatal("MultiplyMSparseMSparseTranspVector",
831 "matrix cols %d!=%d\n",m1->GetNcols(),m2->GetNcols());
832 }
833 }
834 const Int_t *rows_m1=m1->GetRowIndexArray();
835 const Int_t *cols_m1=m1->GetColIndexArray();
836 const Double_t *data_m1=m1->GetMatrixArray();
837 Int_t num_m1=0;
838 for(Int_t i=0;i<m1->GetNrows();i++) {
839 if(rows_m1[i]<rows_m1[i+1]) num_m1++;
840 }
841 const Int_t *rows_m2=m2->GetRowIndexArray();
842 const Int_t *cols_m2=m2->GetColIndexArray();
843 const Double_t *data_m2=m2->GetMatrixArray();
844 Int_t num_m2=0;
845 for(Int_t j=0;j<m2->GetNrows();j++) {
846 if(rows_m2[j]<rows_m2[j+1]) num_m2++;
847 }
848 const TMatrixDSparse *v_sparse=dynamic_cast<const TMatrixDSparse *>(v);
849 const Int_t *v_rows=nullptr;
850 const Double_t *v_data=nullptr;
851 if(v_sparse) {
852 v_rows=v_sparse->GetRowIndexArray();
853 v_data=v_sparse->GetMatrixArray();
854 }
855 Int_t num_r=num_m1*num_m2+1;
856 Int_t *row_r=new Int_t[num_r];
857 Int_t *col_r=new Int_t[num_r];
858 Double_t *data_r=new Double_t[num_r];
859 num_r=0;
860 for(Int_t i=0;i<m1->GetNrows();i++) {
861 for(Int_t j=0;j<m2->GetNrows();j++) {
862 data_r[num_r]=0.0;
863 Int_t index_m1=rows_m1[i];
864 Int_t index_m2=rows_m2[j];
865 while((index_m1<rows_m1[i+1])&&(index_m2<rows_m2[j+1])) {
866 Int_t k1=cols_m1[index_m1];
867 Int_t k2=cols_m2[index_m2];
868 if(k1<k2) {
869 index_m1++;
870 } else if(k1>k2) {
871 index_m2++;
872 } else {
873 if(v_sparse) {
874 Int_t v_index=v_rows[k1];
875 if(v_index<v_rows[k1+1]) {
876 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
877 * v_data[v_index];
878 }
879 } else if(v) {
880 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
881 * (*v)(k1,0);
882 } else {
883 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2];
884 }
885 index_m1++;
886 index_m2++;
887 }
888 }
889 if(data_r[num_r] !=0.0) {
890 row_r[num_r]=i;
891 col_r[num_r]=j;
892 num_r++;
893 }
894 }
895 }
897 num_r,row_r,col_r,data_r);
898 delete[] row_r;
899 delete[] col_r;
900 delete[] data_r;
901 return r;
902}
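// Illustrative use (hypothetical names): for a sparse Jacobian J of dimension
// I*K and a diagonal covariance stored as a K*1 vector v, the propagated
// covariance J diag(v) J# could be obtained as
//    TMatrixDSparse *cov=MultiplyMSparseMSparseTranspVector(J,J,v);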
903
904////////////////////////////////////////////////////////////////////////
906/// add a sparse matrix, scaled by a factor, to another sparse matrix
906///
907/// \param[inout] dest destination matrix
908/// \param[in] f scaling factor
909/// \param[in] src matrix to be added to dest
910///
911/// a replacement for
912/// (*dest) += f * (*src)
913/// which suffered from a bug in old root versions
914
915void TUnfold::AddMSparse(TMatrixDSparse *dest,Double_t f,
916 const TMatrixDSparse *src) const
917{
918 const Int_t *dest_rows=dest->GetRowIndexArray();
919 const Int_t *dest_cols=dest->GetColIndexArray();
920 const Double_t *dest_data=dest->GetMatrixArray();
921 const Int_t *src_rows=src->GetRowIndexArray();
922 const Int_t *src_cols=src->GetColIndexArray();
923 const Double_t *src_data=src->GetMatrixArray();
924
925 if((dest->GetNrows()!=src->GetNrows())||
926 (dest->GetNcols()!=src->GetNcols())) {
927 Fatal("AddMSparse","inconsistent matrix rows %d!=%d OR cols %d!=%d",
928 src->GetNrows(),dest->GetNrows(),src->GetNcols(),dest->GetNcols());
929 }
930 Int_t nmax=dest->GetNrows()*dest->GetNcols();
931 Double_t *result_data=new Double_t[nmax];
932 Int_t *result_rows=new Int_t[nmax];
933 Int_t *result_cols=new Int_t[nmax];
934 Int_t n=0;
935 for(Int_t row=0;row<dest->GetNrows();row++) {
936 Int_t i_dest=dest_rows[row];
937 Int_t i_src=src_rows[row];
938 while((i_dest<dest_rows[row+1])||(i_src<src_rows[row+1])) {
939 Int_t col_dest=(i_dest<dest_rows[row+1]) ?
940 dest_cols[i_dest] : dest->GetNcols();
941 Int_t col_src =(i_src <src_rows[row+1] ) ?
942 src_cols [i_src] : src->GetNcols();
943 result_rows[n]=row;
944 if(col_dest<col_src) {
945 result_cols[n]=col_dest;
946 result_data[n]=dest_data[i_dest++];
947 } else if(col_dest>col_src) {
948 result_cols[n]=col_src;
949 result_data[n]=f*src_data[i_src++];
950 } else {
951 result_cols[n]=col_dest;
952 result_data[n]=dest_data[i_dest++]+f*src_data[i_src++];
953 }
954 if(result_data[n] !=0.0) {
955 if(!TMath::Finite(result_data[n])) {
956 Fatal("AddMSparse","Nan detected %d %d %d",
957 row,i_dest,i_src);
958 }
959 n++;
960 }
961 }
962 }
963 if(n<=0) {
964 n=1;
965 result_rows[0]=0;
966 result_cols[0]=0;
967 result_data[0]=0.0;
968 }
969 dest->SetMatrixArray(n,result_rows,result_cols,result_data);
970 delete[] result_data;
971 delete[] result_rows;
972 delete[] result_cols;
973}
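// Example as used in DoUnfold() above: add the scaled regularisation term to
// the inverse error matrix,
//    AddMSparse(fEinv,fTauSquared,lSquared);   // fEinv += tau^2 * (L# L)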
974
975////////////////////////////////////////////////////////////////////////
976/// get the inverse or pseudo-inverse of a positive, sparse matrix
977///
978/// \param[in] A the sparse matrix to be inverted, has to be positive
979/// \param[inout] rankPtr if zero, suppress calculation of pseudo-inverse
980/// otherwise the rank of the matrix is returned in *rankPtr
981///
982/// return value: 0 or a new sparse matrix
983/// <ul>
984/// <li>if(rankPtr==nullptr) return the inverse if it exists, or return 0</li>
985/// <li>else return a (pseudo-)inverse and store the rank of the matrix in
986/// *rankPtr </li>
987/// </ul>
988///
989/// the matrix inversion is optimized in performance for the case
990/// where a large submatrix of A is diagonal
991
992TMatrixDSparse *TUnfold::InvertMSparseSymmPos
993(const TMatrixDSparse *A,Int_t *rankPtr) const
994{
995
996 if(A->GetNcols()!=A->GetNrows()) {
997 Fatal("InvertMSparseSymmPos","inconsistent matrix row/col %d!=%d",
998 A->GetNcols(),A->GetNrows());
999 }
1000
1001 Bool_t *isZero=new Bool_t[A->GetNrows()];
1002 const Int_t *a_rows=A->GetRowIndexArray();
1003 const Int_t *a_cols=A->GetColIndexArray();
1004 const Double_t *a_data=A->GetMatrixArray();
1005
1006 // determine diagonal elements
1007 // Aii: diagonals of A
1008 Int_t iDiagonal=0;
1009 Int_t iBlock=A->GetNrows();
1010 Bool_t isDiagonal=kTRUE;
1011 TVectorD aII(A->GetNrows());
1012 Int_t nError=0;
1013 for(Int_t iA=0;iA<A->GetNrows();iA++) {
1014 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1015 Int_t jA=a_cols[indexA];
1016 if(iA==jA) {
1017 if(!(a_data[indexA]>=0.0)) nError++;
1018 aII(iA)=a_data[indexA];
1019 }
1020 }
1021 }
1022 if(nError>0) {
1023 Fatal("InvertMSparseSymmPos",
1024 "Matrix has %d negative elements on the diagonal", nError);
1025 return nullptr;
1026 }
1027
1028 // reorder matrix such that the largest block of zeros is swapped
1029 // to the lowest indices
1030 // the result of this ordering:
1031 // swap[i] : for index i point to the row in A
1032 // swapBack[iA] : for index iA in A point to the swapped index i
1033 // the indices are grouped into three groups
1034 // 0..iDiagonal-1 : these rows only have diagonal elements
1035 // iDiagonal..iBlock : these rows contain a diagonal block matrix
1036 //
1037 Int_t *swap=new Int_t[A->GetNrows()];
1038 for(Int_t j=0;j<A->GetNrows();j++) swap[j]=j;
1039 for(Int_t iStart=0;iStart<iBlock;iStart++) {
1040 Int_t nZero=0;
1041 for(Int_t i=iStart;i<iBlock;i++) {
1042 Int_t iA=swap[i];
1043 Int_t n=A->GetNrows()-(a_rows[iA+1]-a_rows[iA]);
1044 if(n>nZero) {
1045 Int_t tmp=swap[iStart];
1046 swap[iStart]=swap[i];
1047 swap[i]=tmp;
1048 nZero=n;
1049 }
1050 }
1051 for(Int_t i=0;i<A->GetNrows();i++) isZero[i]=kTRUE;
1052 Int_t iA=swap[iStart];
1053 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1054 Int_t jA=a_cols[indexA];
1055 isZero[jA]=kFALSE;
1056 if(iA!=jA) {
1057 isDiagonal=kFALSE;
1058 }
1059 }
1060 if(isDiagonal) {
1061 iDiagonal=iStart+1;
1062 } else {
1063 for(Int_t i=iStart+1;i<iBlock;) {
1064 if(isZero[swap[i]]) {
1065 i++;
1066 } else {
1067 iBlock--;
1068 Int_t tmp=swap[iBlock];
1069 swap[iBlock]=swap[i];
1070 swap[i]=tmp;
1071 }
1072 }
1073 }
1074 }
1075
1076 // for tests uncomment this:
1077 // iBlock=iDiagonal;
1078
1079
1080 // conditioning of the iBlock part
1081#ifdef CONDITION_BLOCK_PART
1082 Int_t nn=A->GetNrows()-iBlock;
1083 for(int inc=(nn+1)/2;inc;inc /= 2) {
1084 for(int i=inc;i<nn;i++) {
1085 int itmp=swap[i+iBlock];
1086 int j;
1087 for(j=i;(j>=inc)&&(aII(swap[j-inc+iBlock])<aII(itmp));j -=inc) {
1088 swap[j+iBlock]=swap[j-inc+iBlock];
1089 }
1090 swap[j+iBlock]=itmp;
1091 }
1092 }
1093#endif
1094 // construct array for swapping back
1095 Int_t *swapBack=new Int_t[A->GetNrows()];
1096 for(Int_t i=0;i<A->GetNrows();i++) {
1097 swapBack[swap[i]]=i;
1098 }
1099#ifdef DEBUG_DETAIL
1100 for(Int_t i=0;i<A->GetNrows();i++) {
1101 std::cout<<" "<<i<<" "<<swap[i]<<" "<<swapBack[i]<<"\n";
1102 }
1103 std::cout<<"after sorting\n";
1104 for(Int_t i=0;i<A->GetNrows();i++) {
1105 if(i==iDiagonal) std::cout<<"iDiagonal="<<i<<"\n";
1106 if(i==iBlock) std::cout<<"iBlock="<<i<<"\n";
1107 std::cout<<" "<<swap[i]<<" "<<aII(swap[i])<<"\n";
1108 }
1109 {
1110 // sanity check:
1111 // (1) sub-matrix swapped[0]..swapped[iDiagonal]
1112 // must not contain off-diagonals
1113 // (2) sub-matrix swapped[0]..swapped[iBlock-1] must be diagonal
1114 Int_t nDiag=0;
1115 Int_t nError1=0;
1116 Int_t nError2=0;
1117 Int_t nBlock=0;
1118 Int_t nNonzero=0;
1119 for(Int_t i=0;i<A->GetNrows();i++) {
1120 Int_t row=swap[i];
1121 for(Int_t indexA=a_rows[row];indexA<a_rows[row+1];indexA++) {
1122 Int_t j=swapBack[a_cols[indexA]];
1123 if(i==j) nDiag++;
1124 else if((i<iDiagonal)||(j<iDiagonal)) nError1++;
1125 else if((i<iBlock)&&(j<iBlock)) nError2++;
1126 else if((i<iBlock)||(j<iBlock)) nBlock++;
1127 else nNonzero++;
1128 }
1129 }
1130 if(nError1+nError2>0) {
1131 Fatal("InvertMSparseSymmPos","sparse matrix analysis failed %d %d %d %d %d",
1132 iDiagonal,iBlock,A->GetNrows(),nError1,nError2);
1133 }
1134 }
1135#endif
1136#ifdef DEBUG
1137 Info("InvertMSparseSymmPos","iDiagonal=%d iBlock=%d nRow=%d",
1138 iDiagonal,iBlock,A->GetNrows());
1139#endif
1140
1141 //=============================================
1142 // matrix inversion starts here
1143 //
1144 // the matrix is split into several parts
1145 // D1 0 0
1146 // A = ( 0 D2 B# )
1147 // 0 B C
1148 //
1149 // D1,D2 are diagonal matrices
1150 //
1151 // first, the D1 part is inverted by calculating 1/D1
1152 // dim(D1)= iDiagonal
1153 // next, the other parts are inverted using Schur complement
1154 //
1155 // 1/D1 0 0
1156 // Ainv = ( 0 E G# )
1157 // 0 G F^-1
1158 //
1159 // where F = C + (-B/D2) B#
1160 // G = (F^-1) (-B/D2)
1161 // E = 1/D2 + (-B#/D2) G)
1162 //
1163 // the inverse of F is calculated using a Cholesky decomposition
1164 //
1165 // Error handling:
1166 // (a) if rankPtr==nullptr: user requests inverse
1167 //
1168 // if D1 is not strictly positive, return NULL
1169 // if D2 is not strictly positive, return NULL
1170 // if F is not strictly positive (Cholesky decomposition failed)
1171 // return NULL
1172 // else
1173 // return Ainv as defined above
1174 //
1175 // (b) if rankPtr !=nullptr: user requests pseudo-inverse
1176 // if D2 is not strictly positive or F is not strictly positive
1177 // calculate singular-value decomposition of
1178 // D2 B#
1179 // ( ) = O D O^-1
1180 // B C
1181 // if some eigenvalues are negative, return NULL
1182 // else calculate pseudo-inverse
1183 // *rankPtr = rank(D1)+rank(D)
1184 // else
1185 // calculate pseudo-inverse of D1 (D1_ii==0) ? 0 : 1/D1_ii
1186 // *rankPtr=rank(D1)+nrow(D2)+nrow(C)
1187 // return Ainv as defined above
1188
1189 Double_t *rEl_data=new Double_t[A->GetNrows()*A->GetNrows()];
1190 Int_t *rEl_col=new Int_t[A->GetNrows()*A->GetNrows()];
1191 Int_t *rEl_row=new Int_t[A->GetNrows()*A->GetNrows()];
1192 Int_t rNumEl=0;
1193
1194 //====================================================
1195 // invert D1
1196 Int_t rankD1=0;
1197 for(Int_t i=0;i<iDiagonal;++i) {
1198 Int_t iA=swap[i];
1199 if(aII(iA)>0.0) {
1200 rEl_col[rNumEl]=iA;
1201 rEl_row[rNumEl]=iA;
1202 rEl_data[rNumEl]=1./aII(iA);
1203 ++rankD1;
1204 ++rNumEl;
1205 }
1206 }
1207 if((!rankPtr)&&(rankD1!=iDiagonal)) {
1208 Fatal("InvertMSparseSymmPos",
1209 "diagonal part 1 has rank %d != %d, matrix can not be inverted",
1210 rankD1,iDiagonal);
1211 rNumEl=-1;
1212 }
1213
1214
1215 //====================================================
1216 // invert D2
1217 Int_t nD2=iBlock-iDiagonal;
1218 TMatrixDSparse *D2inv=nullptr;
1219 if((rNumEl>=0)&&(nD2>0)) {
1220 Double_t *D2inv_data=new Double_t[nD2];
1221 Int_t *D2inv_col=new Int_t[nD2];
1222 Int_t *D2inv_row=new Int_t[nD2];
1223 Int_t D2invNumEl=0;
1224 for(Int_t i=0;i<nD2;++i) {
1225 Int_t iA=swap[i+iDiagonal];
1226 if(aII(iA)>0.0) {
1227 D2inv_col[D2invNumEl]=i;
1228 D2inv_row[D2invNumEl]=i;
1229 D2inv_data[D2invNumEl]=1./aII(iA);
1230 ++D2invNumEl;
1231 }
1232 }
1233 if(D2invNumEl==nD2) {
1234 D2inv=CreateSparseMatrix(nD2,nD2,D2invNumEl,D2inv_row,D2inv_col,
1235 D2inv_data);
1236 } else if(!rankPtr) {
1237 Fatal("InvertMSparseSymmPos",
1238 "diagonal part 2 has rank %d != %d, matrix can not be inverted",
1239 D2invNumEl,nD2);
1240 rNumEl=-2;
1241 }
1242 delete [] D2inv_data;
1243 delete [] D2inv_col;
1244 delete [] D2inv_row;
1245 }
1246
1247 //====================================================
1248 // invert F
1249
1250 Int_t nF=A->GetNrows()-iBlock;
1251 TMatrixDSparse *Finv=nullptr;
1252 TMatrixDSparse *B=nullptr;
1253 TMatrixDSparse *minusBD2inv=nullptr;
1254 if((rNumEl>=0)&&(nF>0)&&((nD2==0)||D2inv)) {
1255 // construct matrices F and B
1256 Int_t nFmax=nF*nF;
1257 Double_t epsilonF2=fEpsMatrix;
1258 Double_t *F_data=new Double_t[nFmax];
1259 Int_t *F_col=new Int_t[nFmax];
1260 Int_t *F_row=new Int_t[nFmax];
1261 Int_t FnumEl=0;
1262
1263 Int_t nBmax=nF*(nD2+1);
1264 Double_t *B_data=new Double_t[nBmax];
1265 Int_t *B_col=new Int_t[nBmax];
1266 Int_t *B_row=new Int_t[nBmax];
1267 Int_t BnumEl=0;
1268
1269 for(Int_t i=0;i<nF;i++) {
1270 Int_t iA=swap[i+iBlock];
1271 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1272 Int_t jA=a_cols[indexA];
1273 Int_t j0=swapBack[jA];
1274 if(j0>=iBlock) {
1275 Int_t j=j0-iBlock;
1276 F_row[FnumEl]=i;
1277 F_col[FnumEl]=j;
1278 F_data[FnumEl]=a_data[indexA];
1279 FnumEl++;
1280 } else if(j0>=iDiagonal) {
1281 Int_t j=j0-iDiagonal;
1282 B_row[BnumEl]=i;
1283 B_col[BnumEl]=j;
1284 B_data[BnumEl]=a_data[indexA];
1285 BnumEl++;
1286 }
1287 }
1288 }
1289 TMatrixDSparse *F=nullptr;
1290 if(FnumEl) {
1291#ifndef FORCE_EIGENVALUE_DECOMPOSITION
1292 F=CreateSparseMatrix(nF,nF,FnumEl,F_row,F_col,F_data);
1293#endif
1294 }
1295 delete [] F_data;
1296 delete [] F_col;
1297 delete [] F_row;
1298 if(BnumEl) {
1299 B=CreateSparseMatrix(nF,nD2,BnumEl,B_row,B_col,B_data);
1300 }
1301 delete [] B_data;
1302 delete [] B_col;
1303 delete [] B_row;
1304
1305 if(B && D2inv) {
1306 minusBD2inv=MultiplyMSparseMSparse(B,D2inv);
1307 if(minusBD2inv) {
1308 Int_t mbd2_nMax=minusBD2inv->GetRowIndexArray()
1309 [minusBD2inv->GetNrows()];
1310 Double_t *mbd2_data=minusBD2inv->GetMatrixArray();
1311 for(Int_t i=0;i<mbd2_nMax;i++) {
1312 mbd2_data[i] = - mbd2_data[i];
1313 }
1314 }
1315 }
1316 if(minusBD2inv && F) {
1317 TMatrixDSparse *minusBD2invBt=
1318 MultiplyMSparseMSparseTranspVector(minusBD2inv,B,nullptr);
1319 AddMSparse(F,1.,minusBD2invBt);
1320 DeleteMatrix(&minusBD2invBt);
1321 }
1322
1323 if(F) {
1324 // cholesky decomposition with preconditioning
1325 const Int_t *f_rows=F->GetRowIndexArray();
1326 const Int_t *f_cols=F->GetColIndexArray();
1327 const Double_t *f_data=F->GetMatrixArray();
1328 // cholesky-type decomposition of F
1329 TMatrixD c(nF,nF);
1330 Int_t nErrorF=0;
1331 for(Int_t i=0;i<nF;i++) {
1332 for(Int_t indexF=f_rows[i];indexF<f_rows[i+1];indexF++) {
1333 if(f_cols[indexF]>=i) c(f_cols[indexF],i)=f_data[indexF];
1334 }
1335 // calculate diagonal element
1336 Double_t c_ii=c(i,i);
1337 for(Int_t j=0;j<i;j++) {
1338 Double_t c_ij=c(i,j);
1339 c_ii -= c_ij*c_ij;
1340 }
1341 if(c_ii<=0.0) {
1342 nErrorF++;
1343 break;
1344 }
1345 c_ii=TMath::Sqrt(c_ii);
1346 c(i,i)=c_ii;
1347 // off-diagonal elements
1348 for(Int_t j=i+1;j<nF;j++) {
1349 Double_t c_ji=c(j,i);
1350 for(Int_t k=0;k<i;k++) {
1351 c_ji -= c(i,k)*c(j,k);
1352 }
1353 c(j,i) = c_ji/c_ii;
1354 }
1355 }
1356 // check condition of dInv
1357 if(!nErrorF) {
1358 Double_t dmin=c(0,0);
1359 Double_t dmax=dmin;
1360 for(Int_t i=1;i<nF;i++) {
1361 dmin=TMath::Min(dmin,c(i,i));
1362 dmax=TMath::Max(dmax,c(i,i));
1363 }
1364#ifdef DEBUG
1365 std::cout<<"dmin,dmax: "<<dmin<<" "<<dmax<<"\n";
1366#endif
1367 if(dmin/dmax<epsilonF2*nF) nErrorF=2;
1368 }
1369 if(!nErrorF) {
1370 // here: F = c c#
1371 // construct inverse of c
1372 TMatrixD cinv(nF,nF);
1373 for(Int_t i=0;i<nF;i++) {
1374 cinv(i,i)=1./c(i,i);
1375 }
1376 for(Int_t i=0;i<nF;i++) {
1377 for(Int_t j=i+1;j<nF;j++) {
1378 Double_t tmp=-c(j,i)*cinv(i,i);
1379 for(Int_t k=i+1;k<j;k++) {
1380 tmp -= cinv(k,i)*c(j,k);
1381 }
1382 cinv(j,i)=tmp*cinv(j,j);
1383 }
1384 }
1385 TMatrixDSparse cInvSparse(cinv);
1387 (&cInvSparse,&cInvSparse);
1388 }
1389 DeleteMatrix(&F);
1390 }
1391 }
1392
1393 // here:
1394 // rNumEl>=0: diagonal part has been inverted
1395 // (nD2==0)||D2inv : D2 part has been inverted or is empty
1396 // (nF==0)||Finv : F part has been inverted or is empty
1397
1398 Int_t rankD2Block=0;
1399 if(rNumEl>=0) {
1400 if( ((nD2==0)||D2inv) && ((nF==0)||Finv) ) {
1401 // all matrix parts have been inverted, compose full matrix
1402 // 1/D1 0 0
1403 // Ainv = ( 0 E G# )
1404 // 0 G F^-1
1405 //
1406 // G = (F^-1) (-B/D2)
1407 // E = 1/D2 + (-B#/D2) G)
1408
1409 TMatrixDSparse *G=nullptr;
1410 if(Finv && minusBD2inv) {
1411 G=MultiplyMSparseMSparse(Finv,minusBD2inv);
1412 }
1413 TMatrixDSparse *E=nullptr;
1414 if(D2inv) E=new TMatrixDSparse(*D2inv);
1415 if(G && minusBD2inv) {
1416 TMatrixDSparse *minusBD2invTransG=
1417 MultiplyMSparseTranspMSparse(minusBD2inv,G);
1418 if(E) {
1419 AddMSparse(E,1.,minusBD2invTransG);
1420 DeleteMatrix(&minusBD2invTransG);
1421 } else {
1422 E=minusBD2invTransG;
1423 }
1424 }
1425 // pack matrix E to r
1426 if(E) {
1427 const Int_t *e_rows=E->GetRowIndexArray();
1428 const Int_t *e_cols=E->GetColIndexArray();
1429 const Double_t *e_data=E->GetMatrixArray();
1430 for(Int_t iE=0;iE<E->GetNrows();iE++) {
1431 Int_t iA=swap[iE+iDiagonal];
1432 for(Int_t indexE=e_rows[iE];indexE<e_rows[iE+1];++indexE) {
1433 Int_t jE=e_cols[indexE];
1434 Int_t jA=swap[jE+iDiagonal];
1435 rEl_col[rNumEl]=iA;
1436 rEl_row[rNumEl]=jA;
1437 rEl_data[rNumEl]=e_data[indexE];
1438 ++rNumEl;
1439 }
1440 }
1441 DeleteMatrix(&E);
1442 }
1443 // pack matrix G to r
1444 if(G) {
1445 const Int_t *g_rows=G->GetRowIndexArray();
1446 const Int_t *g_cols=G->GetColIndexArray();
1447 const Double_t *g_data=G->GetMatrixArray();
1448 for(Int_t iG=0;iG<G->GetNrows();iG++) {
1449 Int_t iA=swap[iG+iBlock];
1450 for(Int_t indexG=g_rows[iG];indexG<g_rows[iG+1];++indexG) {
1451 Int_t jG=g_cols[indexG];
1452 Int_t jA=swap[jG+iDiagonal];
1453 // G
1454 rEl_col[rNumEl]=iA;
1455 rEl_row[rNumEl]=jA;
1456 rEl_data[rNumEl]=g_data[indexG];
1457 ++rNumEl;
1458 // G#
1459 rEl_col[rNumEl]=jA;
1460 rEl_row[rNumEl]=iA;
1461 rEl_data[rNumEl]=g_data[indexG];
1462 ++rNumEl;
1463 }
1464 }
1465 DeleteMatrix(&G);
1466 }
1467 if(Finv) {
1468 // pack matrix Finv to r
1469 const Int_t *finv_rows=Finv->GetRowIndexArray();
1470 const Int_t *finv_cols=Finv->GetColIndexArray();
1471 const Double_t *finv_data=Finv->GetMatrixArray();
1472 for(Int_t iF=0;iF<Finv->GetNrows();iF++) {
1473 Int_t iA=swap[iF+iBlock];
1474 for(Int_t indexF=finv_rows[iF];indexF<finv_rows[iF+1];++indexF) {
1475 Int_t jF=finv_cols[indexF];
1476 Int_t jA=swap[jF+iBlock];
1477 rEl_col[rNumEl]=iA;
1478 rEl_row[rNumEl]=jA;
1479 rEl_data[rNumEl]=finv_data[indexF];
1480 ++rNumEl;
1481 }
1482 }
1483 }
1484 rankD2Block=nD2+nF;
1485 } else if(!rankPtr) {
1486 rNumEl=-3;
1487 Fatal("InvertMSparseSymmPos",
1488 "non-trivial part has rank < %d, matrix can not be inverted",
1489 nF);
1490 } else {
1491 // part of the matrix could not be inverted, get eigenvalue
1492 // decomposition
1493 Int_t nEV=nD2+nF;
1494 Double_t epsilonEV2=fEpsMatrix /* *nEV*nEV */;
1495 Info("InvertMSparseSymmPos",
1496 "cholesky-decomposition failed, try eigenvalue analysis");
1497#ifdef DEBUG
1498 std::cout<<"nEV="<<nEV<<" iDiagonal="<<iDiagonal<<"\n";
1499#endif
1500 TMatrixDSym EV(nEV);
1501 for(Int_t i=0;i<nEV;i++) {
1502 Int_t iA=swap[i+iDiagonal];
1503 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1504 Int_t jA=a_cols[indexA];
1505 Int_t j0=swapBack[jA];
1506 if(j0>=iDiagonal) {
1507 Int_t j=j0-iDiagonal;
1508#ifdef DEBUG
1509 if((i<0)||(j<0)||(i>=nEV)||(j>=nEV)) {
1510 std::cout<<" error "<<nEV<<" "<<i<<" "<<j<<"\n";
1511 }
1512#endif
1513 if(!TMath::Finite(a_data[indexA])) {
1514 Fatal("InvertMSparseSymmPos",
1515 "non-finite number detected element %d %d\n",
1516 iA,jA);
1517 }
1518 EV(i,j)=a_data[indexA];
1519 }
1520 }
1521 }
1522 // EV.Print();
1523 TMatrixDSymEigen Eigen(EV);
1524#ifdef DEBUG
1525 std::cout<<"Eigenvalues\n";
1526 // Eigen.GetEigenValues().Print();
1527#endif
1528 Int_t errorEV=0;
1529 Double_t condition=1.0;
1530 if(Eigen.GetEigenValues()(0)<0.0) {
1531 errorEV=1;
1532 } else if(Eigen.GetEigenValues()(0)>0.0) {
1533 condition=Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0);
1534 }
1535 if(condition<-epsilonEV2) {
1536 errorEV=2;
1537 }
1538 if(errorEV) {
1539 if(errorEV==1) {
1540 Error("InvertMSparseSymmPos",
1541 "Largest Eigenvalue is negative %f",
1542 Eigen.GetEigenValues()(0));
1543 } else {
1544 Error("InvertMSparseSymmPos",
1545 "Some Eigenvalues are negative (EV%d/EV0=%g epsilon=%g)",
1546 nEV-1,
1547 Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0),
1548 epsilonEV2);
1549 }
1550 }
1551 // compose inverse matrix
1552 rankD2Block=0;
1553 Double_t setToZero=epsilonEV2*Eigen.GetEigenValues()(0);
1554 TMatrixD inverseEV(nEV,1);
1555 for(Int_t i=0;i<nEV;i++) {
1556 Double_t x=Eigen.GetEigenValues()(i);
1557 if(x>setToZero) {
1558 inverseEV(i,0)=1./x;
1559 ++rankD2Block;
1560 }
1561 }
1564 (&V,&V,&inverseEV);
1565
1566 // pack matrix VDVt to r
1567 const Int_t *vdvt_rows=VDVt->GetRowIndexArray();
1568 const Int_t *vdvt_cols=VDVt->GetColIndexArray();
1569 const Double_t *vdvt_data=VDVt->GetMatrixArray();
1570 for(Int_t iVDVt=0;iVDVt<VDVt->GetNrows();iVDVt++) {
1571 Int_t iA=swap[iVDVt+iDiagonal];
1572 for(Int_t indexVDVt=vdvt_rows[iVDVt];
1573 indexVDVt<vdvt_rows[iVDVt+1];++indexVDVt) {
1574 Int_t jVDVt=vdvt_cols[indexVDVt];
1575 Int_t jA=swap[jVDVt+iDiagonal];
1576 rEl_col[rNumEl]=iA;
1577 rEl_row[rNumEl]=jA;
1578 rEl_data[rNumEl]=vdvt_data[indexVDVt];
1579 ++rNumEl;
1580 }
1581 }
1582 DeleteMatrix(&VDVt);
1583 }
1584 }
1585 if(rankPtr) {
1586 *rankPtr=rankD1+rankD2Block;
1587 }
1588
1589
1590 DeleteMatrix(&D2inv);
1591 DeleteMatrix(&Finv);
1592 DeleteMatrix(&B);
1593 DeleteMatrix(&minusBD2inv);
1594
1595 delete [] swap;
1596 delete [] swapBack;
1597 delete [] isZero;
1598
1599 TMatrixDSparse *r=(rNumEl>=0) ?
1600 CreateSparseMatrix(A->GetNrows(),A->GetNrows(),rNumEl,
1601 rEl_row,rEl_col,rEl_data) : nullptr;
1602 delete [] rEl_data;
1603 delete [] rEl_col;
1604 delete [] rEl_row;
1605
1606#ifdef DEBUG_DETAIL
1607 // sanity test
1608 if(r) {
1612
1613 TMatrixD ar(*Ar);
1614 TMatrixD a(*A);
1615 TMatrixD ara(*ArA);
1616 TMatrixD R(*r);
1617 TMatrixD rar(*rAr);
1618
1619 DeleteMatrix(&Ar);
1620 DeleteMatrix(&ArA);
1621 DeleteMatrix(&rAr);
1622
1623 Double_t epsilonA2=fEpsMatrix /* *a.GetNrows()*a.GetNcols() */;
1624 for(Int_t i=0;i<a.GetNrows();i++) {
1625 for(Int_t j=0;j<a.GetNcols();j++) {
1626 // ar should be symmetric
1627 if(TMath::Abs(ar(i,j)-ar(j,i))>
1628 epsilonA2*(TMath::Abs(ar(i,j))+TMath::Abs(ar(j,i)))) {
1629 std::cout<<"Ar is not symmetric Ar("<<i<<","<<j<<")="<<ar(i,j)
1630 <<" Ar("<<j<<","<<i<<")="<<ar(j,i)<<"\n";
1631 }
1632 // ara should be equal a
1633 if(TMath::Abs(ara(i,j)-a(i,j))>
1634 epsilonA2*(TMath::Abs(ara(i,j))+TMath::Abs(a(i,j)))) {
1635 std::cout<<"ArA is not equal A ArA("<<i<<","<<j<<")="<<ara(i,j)
1636 <<" A("<<i<<","<<j<<")="<<a(i,j)<<"\n";
1637 }
1638 // ara should be equal a
1639 if(TMath::Abs(rar(i,j)-R(i,j))>
1640 epsilonA2*(TMath::Abs(rar(i,j))+TMath::Abs(R(i,j)))) {
1641 std::cout<<"rAr is not equal r rAr("<<i<<","<<j<<")="<<rar(i,j)
1642 <<" r("<<i<<","<<j<<")="<<R(i,j)<<"\n";
1643 }
1644 }
1645 }
1646 if(rankPtr) std::cout<<"rank="<<*rankPtr<<"\n";
1647 } else {
1648 std::cout<<"Matrix is not positive\n";
1649 }
1650#endif
1651 return r;
1652
1653}
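// Illustrative call pattern (variable names are hypothetical): request the
// pseudo-inverse and retrieve the rank,
//    Int_t rank=0;
//    TMatrixDSparse *inv=InvertMSparseSymmPos(Einv,&rank);
// or request a strict inverse (no rank pointer), which fails if the matrix
// is not positive definite,
//    TMatrixDSparse *invStrict=InvertMSparseSymmPos(Einv,nullptr);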
1654
1655////////////////////////////////////////////////////////////////////////
1656/// Get bin name of an output bin
1657///
1658/// \param[in] iBinX bin number
1659///
1660/// Return value: name of the bin
1661/// <br/>
1662/// For TUnfold and TUnfoldSys, this function simply returns the bin
1663/// number as a string. This function really only makes sense in the
1664/// context of TUnfoldDensity, where binning schemes are implemented
1665/// using the class TUnfoldBinning, and non-trivial bin names are
1666/// returned.
1667TString TUnfold::GetOutputBinName(Int_t iBinX) const
1668{
1669 return TString::Format("#%d",iBinX);
1670}
1671
1672////////////////////////////////////////////////////////////////////////
1673/// Set up response matrix and regularisation scheme
1674///
1675/// \param[in] hist_A matrix of MC events that describes the migrations
1676/// \param[in] histmap mapping of the histogram axes
1677/// \param[in] regmode (default=kRegModeSize) global regularisation mode
1678/// \param[in] constraint (default=kEConstraintArea) type of constraint
1679///
1680/// Treatment of overflow bins in the matrix hist_A
1681/// <ul>
1682/// <li>Events reconstructed in underflow or overflow bins are counted
1683/// as inefficiency. They have to be filled properly.</li>
1684/// <li>Events where the truth level is in underflow or overflow bins are
1685/// treated as a part of the generator level distribution.
1686/// The full truth level distribution (including underflow and
1687/// overflow) is unfolded.</li>
1688/// </ul>
1689/// If unsure, do the following:
1690/// <ul>
1691/// <li>store events where the truth is in underflow or overflow
1692/// (sometimes called "fakes") in a separate TH1. Ensure that the
1693/// truth-level underflow and overflow bins of hist_A are all zero.
1694/// <li>the fakes are background to the
1695/// measurement. Use the classes TUnfoldSys and TUnfoldDensity instead
1696/// of the plain TUnfold for subtracting background</li>
1697/// </ul>
1698
1699TUnfold::TUnfold(const TH2 *hist_A, EHistMap histmap, ERegMode regmode,
1700 EConstraint constraint)
1701{
1702 // data members initialized to something different from zero:
1703 // fA: filled from hist_A
1704 // fDA: filled from hist_A
1705 // fX0: filled from hist_A
1706 // fL: filled depending on the regularisation scheme
1707 InitTUnfold();
1708 SetConstraint(constraint);
1709 Int_t nx0, nx, ny;
1710 if (histmap == kHistMapOutputHoriz) {
1711 // include overflow bins on the X axis
1712 nx0 = hist_A->GetNbinsX()+2;
1713 ny = hist_A->GetNbinsY();
1714 } else {
1715 // include overflow bins on the X axis
1716 nx0 = hist_A->GetNbinsY()+2;
1717 ny = hist_A->GetNbinsX();
1718 }
1719 nx = 0;
1720 // fNx is expected to be nx0, but the input matrix may be ill-formed
1721 // -> all columns with zero events have to be removed
1722 // (because y does not contain any information on that bin in x)
1723 fSumOverY.Set(nx0);
1724 fXToHist.Set(nx0);
1725 fHistToX.Set(nx0);
1726 Int_t nonzeroA=0;
1727 // loop
1728 // - calculate bias distribution
1729 // sum_over_y
1730 // - count those generator bins which can be unfolded
1731 // fNx
1732 // - histogram bins are added to the lookup-tables
1733 // fHistToX[] and fXToHist[]
1734 // these convert histogram bins to matrix indices and vice versa
1735 // - the number of non-zero elements is counted
1736 // nonzeroA
1737 Int_t nskipped=0;
1738 for (Int_t ix = 0; ix < nx0; ix++) {
1739 // calculate sum over all detector bins
1740 // excluding the overflow bins
1741 Double_t sum = 0.0;
1742 Int_t nonZeroY = 0;
1743 for (Int_t iy = 0; iy < ny; iy++) {
1744 Double_t z;
1745 if (histmap == kHistMapOutputHoriz) {
1746 z = hist_A->GetBinContent(ix, iy + 1);
1747 } else {
1748 z = hist_A->GetBinContent(iy + 1, ix);
1749 }
1750 if (z != 0.0) {
1751 nonzeroA++;
1752 nonZeroY++;
1753 sum += z;
1754 }
1755 }
1756 // check whether there is any sensitivity to this generator bin
1757
1758 if (nonZeroY) {
1759 // update mapping tables to convert bin number to matrix index
1760 fXToHist[nx] = ix;
1761 fHistToX[ix] = nx;
1762 // add overflow bins -> important to keep track of the
1763 // non-reconstructed events
1764 fSumOverY[nx] = sum;
1765 if (histmap == kHistMapOutputHoriz) {
1766 fSumOverY[nx] +=
1767 hist_A->GetBinContent(ix, 0) +
1768 hist_A->GetBinContent(ix, ny + 1);
1769 } else {
1770 fSumOverY[nx] +=
1771 hist_A->GetBinContent(0, ix) +
1772 hist_A->GetBinContent(ny + 1, ix);
1773 }
1774 nx++;
1775 } else {
1776 nskipped++;
1777 // histogram bin pointing to -1 (non-existing matrix entry)
1778 fHistToX[ix] = -1;
1779 }
1780 }
1781 Int_t underflowBin=0,overflowBin=0;
1782 for (Int_t ix = 0; ix < nx; ix++) {
1783 Int_t ibinx = fXToHist[ix];
1784 if(ibinx<1) underflowBin=1;
1785 if (histmap == kHistMapOutputHoriz) {
1786 if(ibinx>hist_A->GetNbinsX()) overflowBin=1;
1787 } else {
1788 if(ibinx>hist_A->GetNbinsY()) overflowBin=1;
1789 }
1790 }
1791 if(nskipped) {
1792 Int_t nprint=0;
1793 Int_t ixfirst=-1,ixlast=-1;
1794 TString binlist;
1795 int nDisconnected=0;
1796 for (Int_t ix = 0; ix < nx0; ix++) {
1797 if(fHistToX[ix]<0) {
1798 nprint++;
1799 if(ixlast<0) {
1800 binlist +=" ";
1801 binlist +=ix;
1802 ixfirst=ix;
1803 }
1804 ixlast=ix;
1805 nDisconnected++;
1806 }
1807 if(((fHistToX[ix]>=0)&&(ixlast>=0))||
1808 (nprint==nskipped)) {
1809 if(ixlast>ixfirst) {
1810 binlist += "-";
1811 binlist += ixlast;
1812 }
1813 ixfirst=-1;
1814 ixlast=-1;
1815 }
1816 if(nprint==nskipped) {
1817 break;
1818 }
1819 }
1820 if(nskipped==(2-underflowBin-overflowBin)) {
1821 Info("TUnfold","underflow and overflow bin "
1822 "do not depend on the input data");
1823 } else {
1824 Warning("TUnfold","%d output bins "
1825 "do not depend on the input data %s",nDisconnected,
1826 static_cast<const char *>(binlist));
1827 }
1828 }
1829 // store bias as matrix
1830 fX0 = new TMatrixD(nx, 1, fSumOverY.GetArray());
1831 // store non-zero elements in sparse matrix fA
1832 // construct sparse matrix fA
1833 Int_t *rowA = new Int_t[nonzeroA];
1834 Int_t *colA = new Int_t[nonzeroA];
1835 Double_t *dataA = new Double_t[nonzeroA];
1836 Int_t index=0;
1837 for (Int_t iy = 0; iy < ny; iy++) {
1838 for (Int_t ix = 0; ix < nx; ix++) {
1839 Int_t ibinx = fXToHist[ix];
1840 Double_t z;
1841 if (histmap == kHistMapOutputHoriz) {
1842 z = hist_A->GetBinContent(ibinx, iy + 1);
1843 } else {
1844 z = hist_A->GetBinContent(iy + 1, ibinx);
1845 }
1846 if (z != 0.0) {
1847 rowA[index]=iy;
1848 colA[index] = ix;
1849 dataA[index] = z / fSumOverY[ix];
1850 index++;
1851 }
1852 }
1853 }
1854 if(underflowBin && overflowBin) {
1855 Info("TUnfold","%d input bins and %d output bins (includes 2 underflow/overflow bins)",ny,nx);
1856 } else if(underflowBin) {
1857 Info("TUnfold","%d input bins and %d output bins (includes 1 underflow bin)",ny,nx);
1858 } else if(overflowBin) {
1859 Info("TUnfold","%d input bins and %d output bins (includes 1 overflow bin)",ny,nx);
1860 } else {
1861 Info("TUnfold","%d input bins and %d output bins",ny,nx);
1862 }
1863 fA = CreateSparseMatrix(ny,nx,index,rowA,colA,dataA);
1864 if(ny<nx) {
1865 Error("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1866 } else if(ny==nx) {
1867 Warning("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1868 }
1869 delete[] rowA;
1870 delete[] colA;
1871 delete[] dataA;
1872 // regularisation conditions squared
1873 fL = new TMatrixDSparse(1, GetNx());
1874 if (regmode != kRegModeNone) {
1875 Int_t nError=RegularizeBins(0, 1, nx0, regmode);
1876 if(nError>0) {
1877 if(nError>1) {
1878 Info("TUnfold",
1879 "%d regularisation conditions have been skipped",nError);
1880 } else {
1881 Info("TUnfold",
1882 "One regularisation condition has been skipped");
1883 }
1884 }
1885 }
1886}
1887
1888////////////////////////////////////////////////////////////////////////
1889/// set bias vector
1890///
1891/// \param[in] bias histogram with new bias vector
1892///
1893/// the initial bias vector is determined from the response matrix
1894/// but may be changed by using this method
1895void TUnfold::SetBias(const TH1 *bias)
1896{
1897 DeleteMatrix(&fX0);
1898 fX0 = new TMatrixD(GetNx(), 1);
1899 for (Int_t i = 0; i < GetNx(); i++) {
1900 (*fX0) (i, 0) = bias->GetBinContent(fXToHist[i]);
1901 }
1902}
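// Usage sketch (illustrative; "unfold" and "hTruthPrediction" are placeholder
// names, not part of this file): replace the default bias, which is built from
// the response matrix, by an independent truth-level prediction booked with
// the same binning as the truth axis.
//
//    TH1D *hTruthPrediction=new TH1D("bias","model truth",nTruthBins,xMin,xMax);
//    // ... fill hTruthPrediction from the model ...
//    unfold.SetBias(hTruthPrediction);
//    // the bias only contributes if a non-zero scaleBias is passed to SetInput()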
1903
1904////////////////////////////////////////////////////////////////////////
1905/// add a row of regularisation conditions to the matrix L
1906///
1907/// \param[in] i0 truth histogram bin number
1908/// \param[in] f0 entry in the matrix L, column i0
1909/// \param[in] i1 truth histogram bin number
1910/// \param[in] f1 entry in the matrix L, column i1
1911/// \param[in] i2 truth histogram bin number
1912/// \param[in] f2 entry in the matrix L, column i2
1913///
1914/// the arguments are used to form one row (k) of the matrix L, where
1915/// <br/> L<sub>k,i0</sub>=f0 and L<sub>k,i1</sub>=f1 and L<sub>k,i2</sub>=f2
1916 /// <br>negative indices i0,i1,i2 are ignored
1917Bool_t TUnfold::AddRegularisationCondition
1918 (Int_t i0,Double_t f0,Int_t i1,Double_t f1,Int_t i2,Double_t f2)
1919{
1920 Int_t indices[3];
1921 Double_t data[3];
1922 Int_t nEle=0;
1923
1924 if(i2>=0) {
1925 data[nEle]=f2;
1926 indices[nEle]=i2;
1927 nEle++;
1928 }
1929 if(i1>=0) {
1930 data[nEle]=f1;
1931 indices[nEle]=i1;
1932 nEle++;
1933 }
1934 if(i0>=0) {
1935 data[nEle]=f0;
1936 indices[nEle]=i0;
1937 nEle++;
1938 }
1939 return AddRegularisationCondition(nEle,indices,data);
1940}
1941
1942////////////////////////////////////////////////////////////////////////
1943/// add a row of regularisation conditions to the matrix L
1944///
1945 /// \param[in] nEle number of valid entries in indices and rowData
1946/// \param[in] indices column numbers of L to fill
1947/// \param[in] rowData data to fill into the new row of L
1948///
1949/// returns true if a row was added, false otherwise
1950/// <br/>
1951/// A new row k is added to the matrix L, its dimension is expanded.
1952/// The new elements L<sub>ki</sub> are filled from the array rowData[]
1953 /// where the indices i are taken from the array indices[].
1954Bool_t TUnfold::AddRegularisationCondition
1955 (Int_t nEle,const Int_t *indices,const Double_t *rowData)
1956{
1957 Bool_t r=kTRUE;
1958 const Int_t *l0_rows=fL->GetRowIndexArray();
1959 const Int_t *l0_cols=fL->GetColIndexArray();
1960 const Double_t *l0_data=fL->GetMatrixArray();
1961
1962 Int_t nF=l0_rows[fL->GetNrows()]+nEle;
1963 Int_t *l_row=new Int_t[nF];
1964 Int_t *l_col=new Int_t[nF];
1965 Double_t *l_data=new Double_t[nF];
1966 // decode original matrix
1967 nF=0;
1968 for(Int_t row=0;row<fL->GetNrows();row++) {
1969 for(Int_t k=l0_rows[row];k<l0_rows[row+1];k++) {
1970 l_row[nF]=row;
1971 l_col[nF]=l0_cols[k];
1972 l_data[nF]=l0_data[k];
1973 nF++;
1974 }
1975 }
1976
1977 // if the original matrix does not have any data, reset its row count
1978 Int_t rowMax=0;
1979 if(nF>0) {
1980 rowMax=fL->GetNrows();
1981 }
1982
1983 // add the new regularisation conditions
1984 for(Int_t i=0;i<nEle;i++) {
1985 Int_t col=fHistToX[indices[i]];
1986 if(col<0) {
1987 r=kFALSE;
1988 break;
1989 }
1990 l_row[nF]=rowMax;
1991 l_col[nF]=col;
1992 l_data[nF]=rowData[i];
1993 nF++;
1994 }
1995
1996 // replace the old matrix fL
1997 if(r) {
1998 DeleteMatrix(&fL);
1999 fL=CreateSparseMatrix(rowMax+1,GetNx(),nF,l_row,l_col,l_data);
2000 }
2001 delete [] l_row;
2002 delete [] l_col;
2003 delete [] l_data;
2004 return r;
2005}
2006
2007////////////////////////////////////////////////////////////////////////
2008/// add a regularisation condition on the magnitude of a truth bin
2009///
2010/// \param[in] bin bin number
2011/// \param[in] scale (default=1) scale factor
2012///
2013/// this adds one row to L, where the element <b>bin</b> takes the
2014/// value <b>scale</b>
2015///
2016/// return value: 0 if ok, 1 if the condition has not been
2017/// added. Conditions which are not added typically correspond to bin
2018/// numbers where the truth can not be unfolded (either response
2019/// matrix is empty or the data do not constrain).
2020///
2021/// The RegularizeXXX() methods can be used to set up a custom matrix
2022/// of regularisation conditions. In this case, start with an empty
2023/// matrix L (argument regmode=kRegModeNone in the constructor)
2024
2025Int_t TUnfold::RegularizeSize(int bin, Double_t scale)
2026{
2027 // add regularisation on the size of bin i
2028 // bin: bin number
2029 // scale: size of the regularisation
2030 // return value: number of conditions which have been skipped
2031 // modifies data member fL
2032
2033 if(fRegMode==kRegModeNone) fRegMode=kRegModeSize;
2034 if(fRegMode!=kRegModeSize) fRegMode=kRegModeMixed;
2035 
2036 return AddRegularisationCondition(bin,scale) ? 0 : 1;
2037}
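// Usage sketch (illustrative; the constructor arguments shown are an assumed
// typical setup): build a fully custom matrix L by switching off the default
// regularisation and adding size conditions bin by bin.
//
//    TUnfold unfold(histA,TUnfold::kHistMapOutputHoriz,TUnfold::kRegModeNone);
//    for(Int_t bin=1;bin<=nTruthBins;bin++) {
//       unfold.RegularizeSize(bin); // one row of L per truth bin, scale=1
//    }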
2038
2039////////////////////////////////////////////////////////////////////////
2040 /// add a regularisation condition on the difference of two truth bins
2041///
2042/// \param[in] left_bin bin number
2043/// \param[in] right_bin bin number
2044/// \param[in] scale (default=1) scale factor
2045///
2046/// this adds one row to L, where the element <b>left_bin</b> takes the
2047/// value <b>-scale</b> and the element <b>right_bin</b> takes the
2048/// value <b>+scale</b>
2049///
2050/// return value: 0 if ok, 1 if the condition has not been
2051/// added. Conditions which are not added typically correspond to bin
2052/// numbers where the truth can not be unfolded (either response
2053/// matrix is empty or the data do not constrain).
2054///
2055/// The RegularizeXXX() methods can be used to set up a custom matrix
2056/// of regularisation conditions. In this case, start with an empty
2057/// matrix L (argument regmode=kRegModeNone in the constructor)
2058
2059Int_t TUnfold::RegularizeDerivative(int left_bin, int right_bin,
2060 Double_t scale)
2061{
2062 // add regularisation on the difference of two bins
2063 // left_bin: 1st bin
2064 // right_bin: 2nd bin
2065 // scale: size of the regularisation
2066 // return value: number of conditions which have been skipped
2067 // modifies data member fL
2068
2069 if(fRegMode==kRegModeNone) fRegMode=kRegModeDerivative;
2070 if(fRegMode!=kRegModeDerivative) fRegMode=kRegModeMixed;
2071 
2072 return AddRegularisationCondition(left_bin,-scale,right_bin,scale) ? 0 : 1;
2073}
2074
2075////////////////////////////////////////////////////////////////////////
2076 /// add a regularisation condition on the curvature of three truth bins
2077///
2078/// \param[in] left_bin bin number
2079/// \param[in] center_bin bin number
2080/// \param[in] right_bin bin number
2081/// \param[in] scale_left (default=1) scale factor
2082/// \param[in] scale_right (default=1) scale factor
2083///
2084/// this adds one row to L, where the element <b>left_bin</b> takes the
2085/// value <b>-scale_left</b>, the element <b>right_bin</b> takes the
2086/// value <b>-scale_right</b> and the element <b>center_bin</b> takes
2087/// the value <b>scale_left+scale_right</b>
2088///
2089/// return value: 0 if ok, 1 if the condition has not been
2090/// added. Conditions which are not added typically correspond to bin
2091/// numbers where the truth can not be unfolded (either response
2092/// matrix is empty or the data do not constrain).
2093///
2094/// The RegularizeXXX() methods can be used to set up a custom matrix
2095/// of regularisation conditions. In this case, start with an empty
2096/// matrix L (argument regmode=kRegModeNone in the constructor)
2097
2098Int_t TUnfold::RegularizeCurvature(int left_bin, int center_bin,
2099 int right_bin,
2100 Double_t scale_left,
2101 Double_t scale_right)
2102{
2103 // add regularisation on the curvature through 3 bins (2nd derivative)
2104 // left_bin: 1st bin
2105 // center_bin: 2nd bin
2106 // right_bin: 3rd bin
2107 // scale_left: scale factor on center-left difference
2108 // scale_right: scale factor on right-center difference
2109 // return value: number of conditions which have been skipped
2110 // modifies data member fL
2111
2112 if(fRegMode==kRegModeNone) fRegMode=kRegModeCurvature;
2113 if(fRegMode!=kRegModeCurvature) fRegMode=kRegModeMixed;
2114 
2115 return AddRegularisationCondition
2116 (left_bin,-scale_left,
2117 center_bin,scale_left+scale_right,
2118 right_bin,-scale_right)
2119 ? 0 : 1;
2120}
2121
2122////////////////////////////////////////////////////////////////////////
2123/// add regularisation conditions for a group of bins
2124///
2125/// \param[in] start first bin number
2126/// \param[in] step step size
2127/// \param[in] nbin number of bins
2128/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2129/// kRegModeDerivative, kRegModeCurvature)
2130///
2131/// add regularisation conditions for a group of equidistant
2132/// bins. There are <b>nbin</b> bins, starting with bin <b>start</b>
2133/// and with a distance of <b>step</b> between bins.
2134///
2135/// Return value: number of regularisation conditions which could not
2136/// be added.
2137/// <br/>
2138/// Conditions which are not added typically correspond to bin
2139/// numbers where the truth can not be unfolded (either response
2140/// matrix is empty or the data do not constrain).
2141///
2142
2143Int_t TUnfold::RegularizeBins(int start, int step, int nbin,
2144 ERegMode regmode)
2145{
2146 // set regularisation on a 1-dimensional curve
2147 // start: first bin
2148 // step: distance between neighbouring bins
2149 // nbin: total number of bins
2150 // regmode: regularisation mode
2151 // return value:
2152 // number of errors (i.e. conditions which have been skipped)
2153 // modifies data member fL
2154
2155 Int_t i0, i1, i2;
2156 i0 = start;
2157 i1 = i0 + step;
2158 i2 = i1 + step;
2159 Int_t nSkip = 0;
2160 Int_t nError= 0;
2161 if (regmode == kRegModeDerivative) {
2162 nSkip = 1;
2163 } else if (regmode == kRegModeCurvature) {
2164 nSkip = 2;
2165 } else if(regmode != kRegModeSize) {
2166 Error("RegularizeBins","regmode = %d is not valid",regmode);
2167 }
2168 for (Int_t i = nSkip; i < nbin; i++) {
2169 if (regmode == kRegModeSize) {
2170 nError += RegularizeSize(i0);
2171 } else if (regmode == kRegModeDerivative) {
2172 nError += RegularizeDerivative(i0, i1);
2173 } else if (regmode == kRegModeCurvature) {
2174 nError += RegularizeCurvature(i0, i1, i2);
2175 }
2176 i0 = i1;
2177 i1 = i2;
2178 i2 += step;
2179 }
2180 return nError;
2181}
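// Usage sketch (illustrative): apply curvature regularisation to a contiguous
// group of truth bins of a one-dimensional distribution (start=1, step=1).
//
//    Int_t nSkipped=unfold.RegularizeBins(1,1,nTruthBins,TUnfold::kRegModeCurvature);
//    if(nSkipped>0) {
//       // some conditions touch empty or unconstrained truth bins
//    }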
2182
2183////////////////////////////////////////////////////////////////////////
2184/// add regularisation conditions for 2d unfolding
2185///
2186/// \param[in] start_bin first bin number
2187/// \param[in] step1 step size, 1st dimension
2188/// \param[in] nbin1 number of bins, 1st dimension
2189/// \param[in] step2 step size, 2nd dimension
2190/// \param[in] nbin2 number of bins, 2nd dimension
2191/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2192/// kRegModeDerivative, kRegModeCurvature)
2193///
2194/// add regularisation conditions for a grid of bins. The start bin is
2195/// <b>start_bin</b>. Along the first (second) dimension, there are
2196/// <b>nbin1</b> (<b>nbin2</b>) bins and adjacent bins are spaced by
2197/// <b>step1</b> (<b>step2</b>) units.
2198///
2199/// Return value: number of regularisation conditions which could not
2200/// be added. Conditions which are not added typically correspond to bin
2201/// numbers where the truth can not be unfolded (either response
2202/// matrix is empty or the data do not constrain).
2203///
2204Int_t TUnfold::RegularizeBins2D(int start_bin, int step1, int nbin1,
2205 int step2, int nbin2, ERegMode regmode)
2206{
2207 // set regularisation on a 2-dimensional grid of bins
2208 // start_bin: first bin
2209 // step1: distance between bins in 1st direction
2210 // nbin1: number of bins in 1st direction
2211 // step2: distance between bins in 2nd direction
2212 // nbin2: number of bins in 2nd direction
2213 // return value:
2214 // number of errors (i.e. conditions which have been skipped)
2215 // modifies data member fL
2216
2217 Int_t nError = 0;
2218 for (Int_t i1 = 0; i1 < nbin1; i1++) {
2219 nError += RegularizeBins(start_bin + step1 * i1, step2, nbin2, regmode);
2220 }
2221 for (Int_t i2 = 0; i2 < nbin2; i2++) {
2222 nError += RegularizeBins(start_bin + step2 * i2, step1, nbin1, regmode);
2223 }
2224 return nError;
2225}
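// Usage sketch (illustrative; assumes the two-dimensional truth distribution
// is unrolled into one axis as bin = i1 + n1*i2 + firstBin): regularise the
// curvature along both directions of the grid.
//
//    unfold.RegularizeBins2D(firstBin,1,n1,n1,n2,TUnfold::kRegModeCurvature);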
2226
2227////////////////////////////////////////////////////////////////////////
2228/// perform the unfolding for a given input and regularisation
2229///
2230/// \param[in] tau_reg regularisation parameter
2231/// \param[in] input input distribution with uncertainties
2232/// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2233///
2234/// This is a shortcut for { SetInput(input,scaleBias); DoUnfold(tau); }
2235Double_t TUnfold::DoUnfold(Double_t tau_reg,const TH1 *input,
2236 Double_t scaleBias)
2237{
2238// Data members required:
2239 // fA, fX0, fL
2240 // Data members modified:
2241 // those documented in SetInput()
2242 // and those documented in DoUnfold(Double_t)
2243 // Return value:
2244 // maximum global correlation coefficient
2245 // NOTE!!! return value >=1.0 means error, and the result is junk
2246 //
2247 // Overflow bins of the input distribution are ignored!
2248 SetInput(input,scaleBias);
2249 return DoUnfold(tau_reg);
2250}
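// Usage sketch (illustrative; histMeasured is a placeholder): unfold with a
// fixed regularisation parameter and test the return value.
//
//    Double_t tau=1.e-3; // problem dependent, often taken from ScanLcurve()
//    Double_t rhoMax=unfold.DoUnfold(tau,histMeasured,0.0);
//    if(rhoMax>=1.0) {
//       // error condition, the result is not usable
//    }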
2251
2252////////////////////////////////////////////////////////////////////////
2253/// Define input data for subsequent calls to DoUnfold(tau)
2254/// \param[in] input input distribution with uncertainties
2255 /// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2256 /// \param[in] oneOverZeroError (default=0.0) for bins with zero error, this number defines 1/error.
2257/// \param[in] hist_vyy (default=nullptr) if non-zero, this defines the data covariance matrix
2258/// \param[in] hist_vyy_inv (default=nullptr) if non-zero and hist_vyy is
2259/// set, defines the inverse of the data covariance matrix. This
2260/// feature can be useful for repeated unfoldings in cases where the
2261/// inversion of the input covariance matrix is lengthy
2262///
2263/// Return value: nError1+10000*nError2
2264/// <ul>
2265/// <li>nError1: number of bins where the uncertainty is zero.
2266/// these bins either are not used for the unfolding (if
2267 /// oneOverZeroError==0) or 1/uncertainty is set to oneOverZeroError.</li>
2268/// <li>nError2: return values>10000 are fatal errors, because the
2269/// unfolding can not be done. The number nError2 corresponds to the
2270/// number of truth bins which are not constrained by data points.
2271/// </li>
2272/// </ul>
2273
2274Int_t TUnfold::SetInput(const TH1 *input, Double_t scaleBias,
2275 Double_t oneOverZeroError,const TH2 *hist_vyy,
2276 const TH2 *hist_vyy_inv)
2277{
2278 // Data members modified:
2279 // fY, fVyy, fBiasScale
2280 // Data members cleared
2281 // fVyyInv, fNdf
2282 // + see ClearResults
2283
2284 DeleteMatrix(&fVyyInv);
2285 fNdf=0;
2286
2287 fBiasScale = scaleBias;
2288
2289 // delete old results (if any)
2290 ClearResults();
2291
2292 // construct error matrix and inverted error matrix of measured quantities
2293 // from errors of input histogram or use error matrix
2294
2295 Int_t *rowVyyN=new Int_t[GetNy()*GetNy()+1];
2296 Int_t *colVyyN=new Int_t[GetNy()*GetNy()+1];
2297 Double_t *dataVyyN=new Double_t[GetNy()*GetNy()+1];
2298
2299 Int_t *rowVyy1=new Int_t[GetNy()];
2300 Int_t *colVyy1=new Int_t[GetNy()];
2301 Double_t *dataVyy1=new Double_t[GetNy()];
2302 Double_t *dataVyyDiag=new Double_t[GetNy()];
2303
2304 Int_t nVarianceZero=0;
2305 Int_t nVarianceForced=0;
2306 Int_t nVyyN=0;
2307 Int_t nVyy1=0;
2308 for (Int_t iy = 0; iy < GetNy(); iy++) {
2309 // diagonals
2310 Double_t dy2;
2311 if(!hist_vyy) {
2312 Double_t dy = input->GetBinError(iy + 1);
2313 dy2=dy*dy;
2314 if (dy2 <= 0.0) {
2315 if(oneOverZeroError>0.0) {
2316 dy2 = 1./ ( oneOverZeroError*oneOverZeroError);
2317 nVarianceForced++;
2318 } else {
2319 nVarianceZero++;
2320 }
2321 }
2322 } else {
2323 dy2 = hist_vyy->GetBinContent(iy+1,iy+1);
2324 if (dy2 <= 0.0) {
2325 nVarianceZero++;
2326 }
2327 }
2328 rowVyyN[nVyyN] = iy;
2329 colVyyN[nVyyN] = iy;
2330 rowVyy1[nVyy1] = iy;
2331 colVyy1[nVyy1] = 0;
2332 dataVyyDiag[iy] = dy2;
2333 if(dy2>0.0) {
2334 dataVyyN[nVyyN++] = dy2;
2335 dataVyy1[nVyy1++] = dy2;
2336 }
2337 }
2338 if(hist_vyy) {
2339 // non-diagonal elements
2340 Int_t nOffDiagNonzero=0;
2341 for (Int_t iy = 0; iy < GetNy(); iy++) {
2342 // ignore rows where the diagonal is zero
2343 if(dataVyyDiag[iy]<=0.0) {
2344 for (Int_t jy = 0; jy < GetNy(); jy++) {
2345 if(hist_vyy->GetBinContent(iy+1,jy+1)!=0.0) {
2346 nOffDiagNonzero++;
2347 }
2348 }
2349 continue;
2350 }
2351 for (Int_t jy = 0; jy < GetNy(); jy++) {
2352 // skip diagonal elements
2353 if(iy==jy) continue;
2354 // ignore columns where the diagonal is zero
2355 if(dataVyyDiag[jy]<=0.0) continue;
2356
2357 rowVyyN[nVyyN] = iy;
2358 colVyyN[nVyyN] = jy;
2359 dataVyyN[nVyyN]= hist_vyy->GetBinContent(iy+1,jy+1);
2360 if(dataVyyN[nVyyN] == 0.0) continue;
2361 nVyyN ++;
2362 }
2363 }
2364 if(hist_vyy_inv) {
2365 Warning("SetInput",
2366 "inverse of input covariance is taken from user input");
2367 Int_t *rowVyyInv=new Int_t[GetNy()*GetNy()+1];
2368 Int_t *colVyyInv=new Int_t[GetNy()*GetNy()+1];
2369 Double_t *dataVyyInv=new Double_t[GetNy()*GetNy()+1];
2370 Int_t nVyyInv=0;
2371 for (Int_t iy = 0; iy < GetNy(); iy++) {
2372 for (Int_t jy = 0; jy < GetNy(); jy++) {
2373 rowVyyInv[nVyyInv] = iy;
2374 colVyyInv[nVyyInv] = jy;
2375 dataVyyInv[nVyyInv]= hist_vyy_inv->GetBinContent(iy+1,jy+1);
2376 if(dataVyyInv[nVyyInv] == 0.0) continue;
2377 nVyyInv ++;
2378 }
2379 }
2380 fVyyInv=CreateSparseMatrix
2381 (GetNy(),GetNy(),nVyyInv,rowVyyInv,colVyyInv,dataVyyInv);
2382 delete [] rowVyyInv;
2383 delete [] colVyyInv;
2384 delete [] dataVyyInv;
2385 } else {
2386 if(nOffDiagNonzero) {
2387 Error("SetInput",
2388 "input covariance has elements C(X,Y)!=0 where V(X)==0");
2389 }
2390 }
2391 }
2392 DeleteMatrix(&fVyy);
2393 fVyy=CreateSparseMatrix
2394 (GetNy(),GetNy(),nVyyN,rowVyyN,colVyyN,dataVyyN);
2395
2396 delete[] rowVyyN;
2397 delete[] colVyyN;
2398 delete[] dataVyyN;
2399
2400 TMatrixDSparse *vecV=CreateSparseMatrix
2401 (GetNy(),1,nVyy1,rowVyy1,colVyy1, dataVyy1);
2402
2403 delete[] rowVyy1;
2404 delete[] colVyy1;
2405 delete[] dataVyy1;
2406
2407 //
2408 // get input vector
2409 DeleteMatrix(&fY);
2410 fY = new TMatrixD(GetNy(), 1);
2411 for (Int_t i = 0; i < GetNy(); i++) {
2412 (*fY) (i, 0) = input->GetBinContent(i + 1);
2413 }
2414 // simple check whether unfolding is possible, given the matrices fA and fV
2415 TMatrixDSparse *mAtV=MultiplyMSparseTranspMSparse(fA,vecV);
2416 DeleteMatrix(&vecV);
2417 Int_t nError2=0;
2418 Int_t error2bin=-1;
2419 for (Int_t i = 0; i <mAtV->GetNrows();i++) {
2420 if(mAtV->GetRowIndexArray()[i]==
2421 mAtV->GetRowIndexArray()[i+1]) {
2422 nError2 ++;
2423 error2bin=i;
2424 }
2425 }
2426 if(nVarianceForced) {
2427 if(nVarianceForced>1) {
2428 Warning("SetInput","%d/%d input bins have zero error,"
2429 " 1/error set to %lf.",
2430 nVarianceForced,GetNy(),oneOverZeroError);
2431 } else {
2432 Warning("SetInput","One input bin has zero error,"
2433 " 1/error set to %lf.",oneOverZeroError);
2434 }
2435 }
2436 if(nVarianceZero) {
2437 if(nVarianceZero>1) {
2438 Warning("SetInput","%d/%d input bins have zero error,"
2439 " and are ignored.",nVarianceZero,GetNy());
2440 } else {
2441 Warning("SetInput","One input bin has zero error,"
2442 " and is ignored.");
2443 }
2444 fIgnoredBins=nVarianceZero;
2445 }
2446 if(nError2>0) {
2447 // check whether data points with zero error are responsible
2448 if(oneOverZeroError<=0.0) {
2449 //const Int_t *a_rows=fA->GetRowIndexArray();
2450 //const Int_t *a_cols=fA->GetColIndexArray();
2451 for (Int_t col = 0; col <mAtV->GetNrows();col++) {
2452 if(mAtV->GetRowIndexArray()[col]==
2453 mAtV->GetRowIndexArray()[col+1]) {
2454 TString binlist("no data to constrain output bin ");
2455 binlist += GetOutputBinName(fXToHist[col]);
2456 /* binlist +=" depends on ignored input bins ";
2457 for(Int_t row=0;row<fA->GetNrows();row++) {
2458 if(dataVyyDiag[row]>0.0) continue;
2459 for(Int_t i=a_rows[row];i<a_rows[row+1];i++) {
2460 if(a_cols[i]!=col) continue;
2461 binlist +=" ";
2462 binlist +=row;
2463 }
2464 } */
2465 Warning("SetInput","%s",(char const *)binlist);
2466 }
2467 }
2468 }
2469 if(nError2>1) {
2470 Error("SetInput","%d/%d output bins are not constrained by any data.",
2471 nError2,mAtV->GetNrows());
2472 } else {
2473 Error("SetInput","One output bin [%d] is not constrained by any data.",
2474 error2bin);
2475 }
2476 }
2477 DeleteMatrix(&mAtV);
2478
2479 delete[] dataVyyDiag;
2480
2481 return nVarianceForced+nVarianceZero+10000*nError2;
2482}
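// Usage sketch (illustrative): decode the packed return value
// nError1+10000*nError2 of SetInput().
//
//    Int_t status=unfold.SetInput(histMeasured);
//    Int_t nUnconstrained=status/10000; // truth bins without data constraint (fatal)
//    Int_t nZeroError=status%10000;     // input bins with zero uncertainty
//    if(nUnconstrained>0) {
//       // the unfolding can not be done with this input
//    }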
2483
2484////////////////////////////////////////////////////////////////////////
2485/// perform the unfolding for a given regularisation parameter tau
2486///
2487/// \param[in] tau regularisation parameter
2488///
2489/// this method sets tau and then calls the core unfolding algorithm
2490
2491Double_t TUnfold::DoUnfold(Double_t tau)
2492{
2493 // required data members:
2494 // fA: matrix to relate x and y
2495 // fY: measured data points
2496 // fX0: bias on x
2497 // fBiasScale: scale factor for fX0
2498 // fV: inverse of covariance matrix for y
2499 // fL: regularisation conditions
2500 // modified data members:
2501 // fTauSquared and those documented in DoUnfold(void)
2502 fTauSquared=tau*tau;
2503 return DoUnfold();
2504}
2505
2506////////////////////////////////////////////////////////////////////////
2507// calculate square roots of the eigenvalues of the matrix E
2508// returns: vector of eigenvalues
2509TVectorD TUnfold::GetSqrtEvEmatrix(void) const {
2510 int nRow=GetE()->GetNrows();
2511 TMatrixDSym eSym(nRow);
2512 const Int_t *rows=GetE()->GetRowIndexArray();
2513 const Int_t *cols=GetE()->GetColIndexArray();
2514 const Double_t *data=GetE()->GetMatrixArray();
2515 for(Int_t irow=0;irow<nRow;irow++) {
2516 for(Int_t indexE=rows[irow];indexE<rows[irow+1];indexE++) {
2517 Int_t icol=cols[indexE];
2518 eSym(irow,icol)=data[indexE];
2519 }
2520 }
2521 TMatrixDSymEigen ev(eSym);
2522 TVectorD r=ev.GetEigenValues();
2523 for(int i=0;i<r.GetNrows();i++) {
2524 double evi=r(i);
2525 if(evi<0.) evi= - TMath::Sqrt(evi);
2526 else evi = TMath::Sqrt(evi);
2527 r(i)=evi;
2528 }
2529 return r;
2530}
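// Usage sketch (illustrative): the square roots of the eigenvalues give a
// quick measure of how well-conditioned the covariance matrix E is.
//
//    TVectorD sqrtEv=unfold.GetSqrtEvEmatrix();
//    Double_t spread=sqrtEv.Max()/sqrtEv.Min(); // assumes all eigenvalues are positive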
2531
2532////////////////////////////////////////////////////////////////////////
2533/// scan the L curve, determine tau and unfold at the final value of
2534/// tau
2535///
2536/// \param[in] nPoint number of points used for the scan
2537/// \param[in] tauMin smallest tau value to study
2538 /// \param[in] tauMax largest tau value to study. If tauMin=tauMax=0,
2539/// a scan interval is determined automatically.
2540/// \param[out] lCurve if nonzero, a new TGraph is returned,
2541/// containing the L-curve
2542/// \param[out] logTauX if nonzero, a new TSpline is returned, to
2543/// parameterize the L-curve's x-coordinates as a function of log10(tau)
2544/// \param[out] logTauY if nonzero, a new TSpline is returned, to
2545/// parameterize the L-curve's y-coordinates as a function of log10(tau)
2546/// \param[out] logTauCurvature if nonzero, a new TSpline is returned
2547/// of the L-curve curvature as a function of log10(tau)
2548///
2549/// return value: the coordinate number in the logTauX,logTauY graphs
2550/// corresponding to the "final" choice of tau
2551///
2552/// Recommendation: always check <b>logTauCurvature</b>, it
2553/// should be a peaked function (similar to a Gaussian), the maximum
2554/// corresponding to the final choice of tau. Also, check the <b>lCurve</b>
2555/// it should be approximately L-shaped. If in doubt, adjust tauMin
2556/// and tauMax until the results are satisfactory.
2557
2558Int_t TUnfold::ScanLcurve(Int_t nPoint,
2559 Double_t tauMin,Double_t tauMax,
2560 TGraph **lCurve,TSpline **logTauX,
2561 TSpline **logTauY,TSpline **logTauCurvature)
2562{
2563 typedef std::map<Double_t,std::pair<Double_t,Double_t> > XYtau_t;
2564 XYtau_t curve;
2565
2566 //==========================================================
2567 // algorithm:
2568 // (1) do the unfolding for nPoint-1 points
2569 // and store the results in the map
2570 // curve
2571 // (1a) store minimum and maximum tau to curve
2572 // (1b) insert additional points, until nPoint-1 values
2573 // have been calculated
2574 //
2575 // (2) determine the best choice of tau
2576 // do the unfolding for this point
2577 // and store the result in
2578 // curve
2579 // (3) return the result in
2580 // lCurve logTauX logTauY
2581
2582 //==========================================================
2583 // (1) do the unfolding for nPoint-1 points
2584 // and store the results in
2585 // curve
2586 // (1a) store minimum and maximum tau to curve
2587
2588 if((tauMin<=0)||(tauMax<=0.0)||(tauMin>=tauMax)) {
2589 // here no range is given, has to be determined automatically
2590 // the maximum tau is determined from the chi**2 values
2591 // observed from unfolding without regularisation
2592
2593 // first unfolding, without regularisation
2594 DoUnfold(0.0);
2595
2596 // if the number of degrees of freedom is too small, create an error
2597 if(GetNdf()<=0) {
2598 Error("ScanLcurve","too few input bins, NDF<=0 %d",GetNdf());
2599 }
2600
2601 Double_t x0=GetLcurveX();
2602 Double_t y0=GetLcurveY();
2603 Info("ScanLcurve","logtau=-Infinity X=%lf Y=%lf",x0,y0);
2604 if(!TMath::Finite(x0)) {
2605 Fatal("ScanLcurve","problem (too few input bins?) X=%f",x0);
2606 }
2607 if(!TMath::Finite(y0)) {
2608 Fatal("ScanLcurve","problem (missing regularisation?) Y=%f",y0);
2609 }
2610 {
2611 // unfolding guess maximum tau and store it
2612 Double_t logTau=
2613 0.5*(TMath::Log10(fChi2A+3.*TMath::Sqrt(GetNdf()+1.0))
2614 -GetLcurveY());
2615 DoUnfold(TMath::Power(10.,logTau));
2616
2618 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2619 GetLcurveX(),GetLcurveY());
2620 }
2621 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2622 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2623 logTau,GetLcurveX(),GetLcurveY());
2624 }
2625 if((*curve.begin()).second.first<x0) {
2626 // if the point at tau==0 seems numerically unstable,
2627 // try to find the minimum chi**2 as start value
2628 //
2629 // "unstable" means that there is a finite tau where the
2630 // unfolding chi**2 is smaller than for the case of no
2631 // regularisation. Ideally this should never happen
2632 do {
2633 x0=GetLcurveX();
2634 Double_t logTau=(*curve.begin()).first-0.5;
2635 DoUnfold(TMath::Power(10.,logTau));
2636 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2637 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2638 GetLcurveX(),GetLcurveY());
2639 }
2640 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2641 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2642 logTau,GetLcurveX(),GetLcurveY());
2643 }
2644 while(((int)curve.size()<(nPoint-1)/2)&&
2645 ((*curve.begin()).second.first<x0));
2646 } else {
2647 // minimum tau is chosen such that it is less than
2648 // 1% different from the case of no regularisation
2649 // log10(1.01) = 0.00432
2650
2651 // here, more than one point are inserted if necessary
2652 while(((int)curve.size()<nPoint-1)&&
2653 (((*curve.begin()).second.first-x0>0.00432)||
2654 ((*curve.begin()).second.second-y0>0.00432)||
2655 (curve.size()<2))) {
2656 Double_t logTau=(*curve.begin()).first-0.5;
2657 DoUnfold(TMath::Power(10.,logTau));
2658 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2659 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2660 GetLcurveX(),GetLcurveY());
2661 }
2662 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2663 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2664 logTau,GetLcurveX(),GetLcurveY());
2665 }
2666 }
2667 } else {
2668 Double_t logTauMin=TMath::Log10(tauMin);
2669 Double_t logTauMax=TMath::Log10(tauMax);
2670 if(nPoint>1) {
2671 // insert maximum tau
2672 DoUnfold(TMath::Power(10.,logTauMax));
2673 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2674 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2675 GetLcurveX(),GetLcurveY());
2676 }
2677 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2678 logTauMax,GetLcurveX(),GetLcurveY());
2679 curve[logTauMax]=std::make_pair(GetLcurveX(),GetLcurveY());
2680 }
2681 // insert minimum tau
2682 DoUnfold(TMath::Power(10.,logTauMin));
2683 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2684 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2685 GetLcurveX(),GetLcurveY());
2686 }
2687 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2688 logTauMin,GetLcurveX(),GetLcurveY());
2689 curve[logTauMin]=std::make_pair(GetLcurveX(),GetLcurveY());
2690 }
2691
2692
2693 //==========================================================
2694 // (1b) insert additional points, until nPoint-1 values
2695 // have been calculated
2696
2697 while(int(curve.size())<nPoint-1) {
2698 // insert additional points, such that the sizes of the delta(XY) vectors
2699 // are getting smaller and smaller
2700 XYtau_t::const_iterator i0,i1;
2701 i0=curve.begin();
2702 i1=i0;
2703 Double_t logTau=(*i0).first;
2704 Double_t distMax=0.0;
2705 for(i1++;i1!=curve.end();i1++) {
2706 const std::pair<Double_t,Double_t> &xy0=(*i0).second;
2707 const std::pair<Double_t,Double_t> &xy1=(*i1).second;
2708 Double_t dx=xy1.first-xy0.first;
2709 Double_t dy=xy1.second-xy0.second;
2710 Double_t d=TMath::Sqrt(dx*dx+dy*dy);
2711 if(d>=distMax) {
2712 distMax=d;
2713 logTau=0.5*((*i0).first+(*i1).first);
2714 }
2715 i0=i1;
2716 }
2717 DoUnfold(TMath::Power(10.,logTau));
2718 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2719 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2720 GetLcurveX(),GetLcurveY());
2721 }
2722 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",logTau,GetLcurveX(),GetLcurveY());
2723 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2724 }
2725
2726 //==========================================================
2727 // (2) determine the best choice of tau
2728 // do the unfolding for this point
2729 // and store the result in
2730 // curve
2731 XYtau_t::const_iterator i0,i1;
2732 i0=curve.begin();
2733 i1=i0;
2734 i1++;
2735 Double_t logTauFin=(*i0).first;
2736 if( ((int)curve.size())<nPoint) {
2737 // set up splines and determine (x,y) curvature in each point
2738 Double_t *cTi=new Double_t[curve.size()-1];
2739 Double_t *cCi=new Double_t[curve.size()-1];
2740 Int_t n=0;
2741 {
2742 Double_t *lXi=new Double_t[curve.size()];
2743 Double_t *lYi=new Double_t[curve.size()];
2744 Double_t *lTi=new Double_t[curve.size()];
2745 for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2746 lXi[n]=(*i).second.first;
2747 lYi[n]=(*i).second.second;
2748 lTi[n]=(*i).first;
2749 n++;
2750 }
2751 TSpline3 *splineX=new TSpline3("x vs tau",lTi,lXi,n);
2752 TSpline3 *splineY=new TSpline3("y vs tau",lTi,lYi,n);
2753 // calculate (x,y) curvature for all points
2754 // the curvature is stored in the array cCi[] as a function of cTi[]
2755 for(Int_t i=0;i<n-1;i++) {
2756 Double_t ltau,xy,bi,ci,di;
2757 splineX->GetCoeff(i,ltau,xy,bi,ci,di);
2758 Double_t tauBar=0.5*(lTi[i]+lTi[i+1]);
2759 Double_t dTau=0.5*(lTi[i+1]-lTi[i]);
2760 Double_t dx1=bi+dTau*(2.*ci+3.*di*dTau);
2761 Double_t dx2=2.*ci+6.*di*dTau;
2762 splineY->GetCoeff(i,ltau,xy,bi,ci,di);
2763 Double_t dy1=bi+dTau*(2.*ci+3.*di*dTau);
2764 Double_t dy2=2.*ci+6.*di*dTau;
2765 cTi[i]=tauBar;
2766 cCi[i]=(dy2*dx1-dy1*dx2)/TMath::Power(dx1*dx1+dy1*dy1,1.5);
2767 }
2768 delete splineX;
2769 delete splineY;
2770 delete[] lXi;
2771 delete[] lYi;
2772 delete[] lTi;
2773 }
2774 // create curvature Spline
2775 TSpline3 *splineC=new TSpline3("L curve curvature",cTi,cCi,n-1);
2776 // find the maximum of the curvature
2777 // if the parameter iskip is non-zero, then iskip points are
2778 // ignored when looking for the largest curvature
2779 // (there are problems with the curvature determined from the first
2780 // few points of splineX,splineY in the algorithm above)
2781 Int_t iskip=0;
2782 if(n>4) iskip=1;
2783 if(n>7) iskip=2;
2784 Double_t cCmax=cCi[iskip];
2785 Double_t cTmax=cTi[iskip];
2786 for(Int_t i=iskip;i<n-2-iskip;i++) {
2787 // find maximum on this spline section
2788 // check boundary conditions for x[i+1]
2789 Double_t xMax=cTi[i+1];
2790 Double_t yMax=cCi[i+1];
2791 if(cCi[i]>yMax) {
2792 yMax=cCi[i];
2793 xMax=cTi[i];
2794 }
2795 // find maximum for x[i]<x<x[i+1]
2796 // get spline coefficients and solve equation
2797 // derivative(x)==0
2798 Double_t x,y,b,c,d;
2799 splineC->GetCoeff(i,x,y,b,c,d);
2800 // coefficients of quadratic equation
2801 Double_t m_p_half=-c/(3.*d);
2802 Double_t q=b/(3.*d);
2803 Double_t discr=m_p_half*m_p_half-q;
2804 if(discr>=0.0) {
2805 // solution found
2806 discr=TMath::Sqrt(discr);
2807 Double_t xx;
2808 if(m_p_half>0.0) {
2809 xx = m_p_half + discr;
2810 } else {
2811 xx = m_p_half - discr;
2812 }
2813 Double_t dx=cTi[i+1]-x;
2814 // check first solution
2815 if((xx>0.0)&&(xx<dx)) {
2816 y=splineC->Eval(x+xx);
2817 if(y>yMax) {
2818 yMax=y;
2819 xMax=x+xx;
2820 }
2821 }
2822 // second solution
2823 if(xx !=0.0) {
2824 xx= q/xx;
2825 } else {
2826 xx=0.0;
2827 }
2828 // check second solution
2829 if((xx>0.0)&&(xx<dx)) {
2830 y=splineC->Eval(x+xx);
2831 if(y>yMax) {
2832 yMax=y;
2833 xMax=x+xx;
2834 }
2835 }
2836 }
2837 // check whether this local maximum is the global maximum
2838 if(yMax>cCmax) {
2839 cCmax=yMax;
2840 cTmax=xMax;
2841 }
2842 }
2843 if(logTauCurvature) {
2844 *logTauCurvature=splineC;
2845 } else {
2846 delete splineC;
2847 }
2848 delete[] cTi;
2849 delete[] cCi;
2850 logTauFin=cTmax;
2851 DoUnfold(TMath::Power(10.,logTauFin));
2852 if((!TMath::Finite(GetLcurveX()))||(!TMath::Finite(GetLcurveY()))) {
2853 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2854 GetLcurveX(),GetLcurveY());
2855 }
2856 Info("ScanLcurve","Result logtau=%lf X=%lf Y=%lf",
2857 logTauFin,GetLcurveX(),GetLcurveY());
2858 curve[logTauFin]=std::make_pair(GetLcurveX(),GetLcurveY());
2859 }
2860
2861
2862 //==========================================================
2863 // (3) return the result in
2864 // lCurve logTauX logTauY
2865
2866 Int_t bestChoice=-1;
2867 if(!curve.empty()) {
2868 Double_t *x=new Double_t[curve.size()];
2869 Double_t *y=new Double_t[curve.size()];
2870 Double_t *logT=new Double_t[curve.size()];
2871 int n=0;
2872 for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2873 if(logTauFin==(*i).first) {
2874 bestChoice=n;
2875 }
2876 x[n]=(*i).second.first;
2877 y[n]=(*i).second.second;
2878 logT[n]=(*i).first;
2879 n++;
2880 }
2881 if(lCurve) {
2882 (*lCurve)=new TGraph(n,x,y);
2883 (*lCurve)->SetTitle("L curve");
2884 }
2885 if(logTauX) (*logTauX)=new TSpline3("log(chi**2)%log(tau)",logT,x,n);
2886 if(logTauY) (*logTauY)=new TSpline3("log(reg.cond)%log(tau)",logT,y,n);
2887 delete[] x;
2888 delete[] y;
2889 delete[] logT;
2890 }
2891
2892 return bestChoice;
2893}
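// Usage sketch (illustrative): scan the L curve with an automatic tau range
// and keep the newly created diagnostic objects for inspection.
//
//    TGraph *lCurve=nullptr;
//    TSpline *logTauX=nullptr,*logTauY=nullptr,*logTauCurvature=nullptr;
//    Int_t iBest=unfold.ScanLcurve(30,0.,0.,&lCurve,&logTauX,&logTauY,&logTauCurvature);
//    // the unfolding result now corresponds to the tau of point iBest;
//    // inspect lCurve and logTauCurvature before trusting the automatic choice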
2894
2895////////////////////////////////////////////////////////////////////////
2896 /// histogram of truth bins, determined from summing over the response matrix
2897///
2898/// \param[out] out histogram to store the truth bins. The bin contents
2899/// are overwritten
2900/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2901///
2902/// This vector is also used to initialize the bias
2903/// x<sub>0</sub>. However, the bias vector may be changed using the
2904/// SetBias() method.
2905///
2906/// The use of <b>binMap</b> is explained with the documentation of
2907/// the GetOutput() method
2908///
2909
2910void TUnfold::GetNormalisationVector(TH1 *out,const Int_t *binMap) const
2911{
2912
2913 ClearHistogram(out);
2914 for (Int_t i = 0; i < GetNx(); i++) {
2915 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2916 if(dest>=0) {
2917 out->SetBinContent(dest, fSumOverY[i] + out->GetBinContent(dest));
2918 }
2919 }
2920}
2921
2922////////////////////////////////////////////////////////////////////////
2923/// get bias vector including bias scale
2924///
2925/// \param[out] out histogram to store the scaled bias vector. The bin
2926/// contents are overwritten
2927/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2928///
2929/// This method returns the bias vector times scaling factor, f*x<sub>0</sub>
2930///
2931/// The use of <b>binMap</b> is explained with the documentation of
2932/// the GetOutput() method
2933///
2934
2935void TUnfold::GetBias(TH1 *out,const Int_t *binMap) const
2936{
2937
2938 ClearHistogram(out);
2939 for (Int_t i = 0; i < GetNx(); i++) {
2940 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2941 if(dest>=0) {
2942 out->SetBinContent(dest, fBiasScale*(*fX0) (i, 0) +
2943 out->GetBinContent(dest));
2944 }
2945 }
2946}
2947
2948////////////////////////////////////////////////////////////////////////
2949/// get unfolding result on detector level
2950///
2951 /// \param[out] out histogram to store the folded output Ax. The bin
2952/// contents and errors are overwritten.
2953/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2954///
2955/// This method returns the unfolding output folded by the response
2956/// matrix, i.e. the vector Ax.
2957///
2958/// The use of <b>binMap</b> is explained with the documentation of
2959/// the GetOutput() method
2960///
2961
2962void TUnfold::GetFoldedOutput(TH1 *out,const Int_t *binMap) const
2963{
2964 ClearHistogram(out);
2965
2966 TMatrixDSparse *AVxx=MultiplyMSparseMSparse(fA,fVxx);
2967 
2968 const Int_t *rows_A=fA->GetRowIndexArray();
2969 const Int_t *cols_A=fA->GetColIndexArray();
2970 const Double_t *data_A=fA->GetMatrixArray();
2971 const Int_t *rows_AVxx=AVxx->GetRowIndexArray();
2972 const Int_t *cols_AVxx=AVxx->GetColIndexArray();
2973 const Double_t *data_AVxx=AVxx->GetMatrixArray();
2974
2975 for (Int_t i = 0; i < GetNy(); i++) {
2976 Int_t destI = binMap ? binMap[i+1] : i+1;
2977 if(destI<0) continue;
2978
2979 out->SetBinContent(destI, (*fAx) (i, 0)+ out->GetBinContent(destI));
2980 Double_t e2=0.0;
2981 Int_t index_a=rows_A[i];
2982 Int_t index_av=rows_AVxx[i];
2983 while((index_a<rows_A[i+1])&&(index_av<rows_AVxx[i+1])) {
2984 Int_t j_a=cols_A[index_a];
2985 Int_t j_av=cols_AVxx[index_av];
2986 if(j_a<j_av) {
2987 index_a++;
2988 } else if(j_a>j_av) {
2989 index_av++;
2990 } else {
2991 e2 += data_AVxx[index_av] * data_A[index_a];
2992 index_a++;
2993 index_av++;
2994 }
2995 }
2996 out->SetBinError(destI,TMath::Sqrt(e2));
2997 }
2998 DeleteMatrix(&AVxx);
2999}
3000
3001////////////////////////////////////////////////////////////////////////
3002/// get matrix of probabilities
3003///
3004/// \param[out] A two-dimensional histogram to store the
3005/// probabilities (normalized response matrix). The bin contents are
3006/// overwritten for those bins where A is nonzero
3007/// \param[in] histmap specify axis along which the truth bins are
3008/// oriented
3009
3010void TUnfold::GetProbabilityMatrix(TH2 *A,EHistMap histmap) const
3011{
3012 // retrieve matrix of probabilities
3013 // histmap: on which axis to arrange the input/output vector
3014 // A: histogram to store the probability matrix
3015 const Int_t *rows_A=fA->GetRowIndexArray();
3016 const Int_t *cols_A=fA->GetColIndexArray();
3017 const Double_t *data_A=fA->GetMatrixArray();
3018 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3019 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3020 Int_t ix = cols_A[indexA];
3021 Int_t ih=fXToHist[ix];
3022 if (histmap == kHistMapOutputHoriz) {
3023 A->SetBinContent(ih, iy+1,data_A[indexA]);
3024 } else {
3025 A->SetBinContent(iy+1, ih,data_A[indexA]);
3026 }
3027 }
3028 }
3029}
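// Usage sketch (illustrative; histA is the response matrix histogram passed to
// the constructor): book a histogram with the same binning and fill it with
// the normalised probabilities.
//
//    TH2D *histProb=(TH2D *)histA->Clone("histProb");
//    histProb->Reset();
//    unfold.GetProbabilityMatrix(histProb,TUnfold::kHistMapOutputHoriz);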
3030
3031////////////////////////////////////////////////////////////////////////
3032 /// get matrix describing how the result changes with the input data
3033///
3034/// \param[out] dxdy two-dimensional histogram to store the
3035/// matrix connecting the output and input data.
3036/// The bin contents are overwritten for those bins where dxdy is non-zero.
3037
3038void TUnfold::GetDXDY(TH2 *dxdy) const
3039{
3040 // retrieve matrix connecting input and output
3041 // dxdy: histogram to store the matrix of derivatives dx/dy
3042 const Int_t *rows_dxdy=fDXDY->GetRowIndexArray();
3043 const Int_t *cols_dxdy=fDXDY->GetColIndexArray();
3044 const Double_t *data_dxdy=fDXDY->GetMatrixArray();
3045 for (Int_t ix = 0; ix <fDXDY->GetNrows(); ix++) {
3046 for(Int_t indexDXDY=rows_dxdy[ix];indexDXDY<rows_dxdy[ix+1];indexDXDY++) {
3047 Int_t iy = cols_dxdy[indexDXDY];
3048 Int_t ih=fXToHist[ix];
3049 dxdy->SetBinContent(ih, iy+1,data_dxdy[indexDXDY]);
3050 }
3051 }
3052}
3053
3054////////////////////////////////////////////////////////////////////////
3055/// Input vector of measurements
3056///
3057/// \param[out] out histogram to store the measurements. Bin content
3058 /// and bin errors are overwritten.
3059/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3060///
3061/// Bins which had an uncertainty of zero in the call to SetInput()
3062 /// may acquire bin contents or bin errors different from the
3063/// original settings in SetInput().
3064///
3065/// The use of <b>binMap</b> is explained with the documentation of
3066/// the GetOutput() method
3067///
3068
3069void TUnfold::GetInput(TH1 *out,const Int_t *binMap) const
3070{
3071 ClearHistogram(out);
3072
3073 const Int_t *rows_Vyy=fVyy->GetRowIndexArray();
3074 const Int_t *cols_Vyy=fVyy->GetColIndexArray();
3075 const Double_t *data_Vyy=fVyy->GetMatrixArray();
3076
3077 for (Int_t i = 0; i < GetNy(); i++) {
3078 Int_t destI=binMap ? binMap[i+1] : i+1;
3079 if(destI<0) continue;
3080
3081 out->SetBinContent(destI, (*fY) (i, 0)+out->GetBinContent(destI));
3082
3083 Double_t e=0.0;
3084 for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
3085 if(cols_Vyy[index]==i) {
3086 e=TMath::Sqrt(data_Vyy[index]);
3087 }
3088 }
3089 out->SetBinError(destI, e);
3090 }
3091}
3092
3093////////////////////////////////////////////////////////////////////////
3094/// get inverse of the measurement's covariance matrix
3095///
3096/// \param[out] out histogram to store the inverted covariance
3097
3098void TUnfold::GetInputInverseEmatrix(TH2 *out)
3099{
3100 // calculate the inverse of the contribution to the error matrix
3101 // corresponding to the input data
3102 if(!fVyyInv) {
3103 Int_t rank=0;
3104 fVyyInv=InvertMSparseSymmPos(fVyy,&rank);
3105 // and count number of degrees of freedom
3106 fNdf = rank-GetNpar();
3107
3108 if(rank<GetNy()-fIgnoredBins) {
3109 Warning("GetInputInverseEmatrix","input covariance matrix has rank %d expect %d",
3110 rank,GetNy());
3111 }
3112 if(fNdf<0) {
3113 Error("GetInputInverseEmatrix","number of parameters %d > %d (rank of input covariance). Problem can not be solved",GetNpar(),rank);
3114 } else if(fNdf==0) {
3115 Warning("GetInputInverseEmatrix","number of parameters %d = input rank %d. Problem is ill posed",GetNpar(),rank);
3116 }
3117 }
3118 if(out) {
3119 // return matrix as histogram
3120 const Int_t *rows_VyyInv=fVyyInv->GetRowIndexArray();
3121 const Int_t *cols_VyyInv=fVyyInv->GetColIndexArray();
3122 const Double_t *data_VyyInv=fVyyInv->GetMatrixArray();
3123
3124 for(int i=0;i<=out->GetNbinsX()+1;i++) {
3125 for(int j=0;j<=out->GetNbinsY()+1;j++) {
3126 out->SetBinContent(i,j,0.);
3127 }
3128 }
3129
3130 for (Int_t i = 0; i < fVyyInv->GetNrows(); i++) {
3131 for(int index=rows_VyyInv[i];index<rows_VyyInv[i+1];index++) {
3132 Int_t j=cols_VyyInv[index];
3133 out->SetBinContent(i+1,j+1,data_VyyInv[index]);
3134 }
3135 }
3136 }
3137}
3138
3139////////////////////////////////////////////////////////////////////////
3140/// get matrix of regularisation conditions squared
3141///
3142/// \param[out] out histogram to store the squared matrix of
3143/// regularisation conditions. the bin contents are overwritten
3144///
3145/// This returns the square matrix L<sup>T</sup>L as a histogram
3146///
3147/// The histogram should have dimension nx times nx, where nx
3148/// corresponds to the number of histogram bins in the response matrix
3149/// along the truth axis.
3150
3151void TUnfold::GetLsquared(TH2 *out) const
3152{
3153 // retrieve matrix of regularisation conditions squared
3154 // out: prebooked matrix
3155
3156 TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
3157 // loop over sparse matrix
3158 const Int_t *rows=lSquared->GetRowIndexArray();
3159 const Int_t *cols=lSquared->GetColIndexArray();
3160 const Double_t *data=lSquared->GetMatrixArray();
3161 for (Int_t i = 0; i < GetNx(); i++) {
3162 for (Int_t cindex = rows[i]; cindex < rows[i+1]; cindex++) {
3163 Int_t j=cols[cindex];
3164 out->SetBinContent(fXToHist[i], fXToHist[j],data[cindex]);
3165 }
3166 }
3167 DeleteMatrix(&lSquared);
3168}
3169
3170////////////////////////////////////////////////////////////////////////
3171/// get number of regularisation conditions
3172///
3173 /// This returns the number of regularisation conditions, useful for
3174/// booking a histogram for a subsequent call of GetL().
3175
3176Int_t TUnfold::GetNr(void) const {
3177 return fL->GetNrows();
3178}
3179
3180////////////////////////////////////////////////////////////////////////
3181/// get matrix of regularisation conditions
3182///
3183/// \param[out] out histogram to store the regularisation conditions.
3184 /// the bin contents are overwritten
3185///
3186/// The histogram should have dimension nr (y-axis) times nx (x-axis).
3187/// nr corresponds to the number of regularisation conditions, it can
3188/// be obtained using the method GetNr(). nx corresponds to the number
3189/// of histogram bins in the response matrix along the truth axis.
3190
3191void TUnfold::GetL(TH2 *out) const
3192{
3193 // loop over sparse matrix
3194 const Int_t *rows=fL->GetRowIndexArray();
3195 const Int_t *cols=fL->GetColIndexArray();
3196 const Double_t *data=fL->GetMatrixArray();
3197 for (Int_t row = 0; row < GetNr(); row++) {
3198 for (Int_t cindex = rows[row]; cindex < rows[row+1]; cindex++) {
3199 Int_t col=cols[cindex];
3200 Int_t indexH=fXToHist[col];
3201 out->SetBinContent(indexH,row+1,data[cindex]);
3202 }
3203 }
3204}
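// Usage sketch (illustrative; nTruthBins is the number of truth bins of the
// response matrix): book a histogram with nx bins on the x-axis and GetNr()
// bins on the y-axis, then fill it with the matrix L.
//
//    Int_t nr=unfold.GetNr();
//    TH2D *histL=new TH2D("L","L",nTruthBins,0.5,nTruthBins+0.5,nr,0.5,nr+0.5);
//    unfold.GetL(histL);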
3205
3206////////////////////////////////////////////////////////////////////////
3207/// set type of area constraint
3208///
3209/// results of a previous unfolding are reset
3210
3211void TUnfold::SetConstraint(EConstraint constraint)
3212{
3213 // set type of constraint for the next unfolding
3214 if(fConstraint !=constraint) ClearResults();
3215 fConstraint=constraint;
3216 Info("SetConstraint","fConstraint=%d",fConstraint);
3217}
3218
3219
3220////////////////////////////////////////////////////////////////////////
3221/// return regularisation parameter
3222///
3223Double_t TUnfold::GetTau(void) const
3224{
3225 // return regularisation parameter
3226 return TMath::Sqrt(fTauSquared);
3227}
3228
3229////////////////////////////////////////////////////////////////////////
3230/// get &chi;<sup>2</sup><sub>L</sub> contribution determined in recent unfolding
3231Double_t TUnfold::GetChi2L(void) const
3232{
3233 // return chi**2 contribution from regularisation conditions
3234 return fLXsquared*fTauSquared;
3235}
3236
3237////////////////////////////////////////////////////////////////////////
3238/// get number of truth parameters determined in recent unfolding
3239///
3240/// empty bins of the response matrix or bins which can not be
3241/// unfolded due to rank deficits are not counted
3242Int_t TUnfold::GetNpar(void) const
3243{
3244 return GetNx();
3245}
3246
3247////////////////////////////////////////////////////////////////////////
3248/// get value on x-axis of L-curve determined in recent unfolding
3249///
3250/// x=log<sub>10</sub>(GetChi2A())
3251Double_t TUnfold::GetLcurveX(void) const
3252{
3253 return TMath::Log10(fChi2A);
3254}
3255
3256////////////////////////////////////////////////////////////////////////
3257/// get value on y-axis of L-curve determined in recent unfolding
3258///
3259/// y=log<sub>10</sub>(GetChi2L())
3260Double_t TUnfold::GetLcurveY(void) const
3261{
3262 return TMath::Log10(fLXsquared);
3263}
3264
3265////////////////////////////////////////////////////////////////////////
3266/// get output distribution, possibly cumulated over several bins
3267///
3268/// \param[out] output existing output histogram. content and errors
3269/// will be updated.
3270/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3271///
3272/// If nonzero, the array <b>binMap</b> must have dimension n+2, where n
3273/// corresponds to the number of bins on the truth axis of the response
3274/// matrix (the histogram specified with the TUnfold
3275/// constructor). The indexes of <b>binMap</b> correspond to the truth
3276/// bins (including underflow and overflow) of the response matrix.
3277/// The element binMap[i] specifies the histogram number in
3278/// <b>output</b> where the corresponding truth bin will be stored. It is
3279/// possible to specify the same <b>output</b> bin number for multiple
3280/// indexes, in which case these bins are added. Set binMap[i]=-1 to
3281/// ignore an unfolded truth bin. The uncertainties are
3282/// calculated from the corresponding parts of the covariance matrix,
3283/// properly taking care of added truth bins.
3284/// <br/>
3285/// If the pointer <b>binMap</b> is zero, the bins are mapped
3286/// one-to-one. Truth bin zero (underflow) is stored in the
3287/// <b>output</b> underflow, truth bin 1 is stored in bin number 1, etc.
3288
3289void TUnfold::GetOutput(TH1 *output,const Int_t *binMap) const
3290{
3291 //
3292 // output: output histogram
3293 // binMap: for each bin of the original output distribution
3294 // specify the destination bin. A value of -1 means that the bin
3295 // is discarded. 0 means underflow bin, 1 first bin, ...
3296 // binMap[0] : destination of underflow bin
3297 // binMap[1] : destination of first bin
3298 // ...
3299
3300 ClearHistogram(output);
3301 /* Int_t nbin=output->GetNbinsX();
3302 Double_t *c=new Double_t[nbin+2];
3303 Double_t *e2=new Double_t[nbin+2];
3304 for(Int_t i=0;i<nbin+2;i++) {
3305 c[i]=0.0;
3306 e2[i]=0.0;
3307 } */
3308
3309 std::map<Int_t,Double_t> e2;
3310
3311 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3312 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3313 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3314
3315 Int_t binMapSize = fHistToX.GetSize();
3316 for(Int_t i=0;i<binMapSize;i++) {
3317 Int_t destBinI=binMap ? binMap[i] : i; // histogram bin
3318 Int_t srcBinI=fHistToX[i]; // matrix row index
3319 if((destBinI>=0)&&(srcBinI>=0)) {
3320 output->SetBinContent
3321 (destBinI, (*fX)(srcBinI,0)+ output->GetBinContent(destBinI));
3322 // here we loop over the columns of the error matrix
3323 // j: counts histogram bins
3324 // index: counts sparse matrix index
3325 // the algorithm makes use of the fact that fHistToX is ordered
3326 Int_t j=0; // histogram bin
3327 Int_t index_vxx=rows_Vxx[srcBinI];
3328 //Double_t e2=TMath::Power(output->GetBinError(destBinI),2.);
3329 while((j<binMapSize)&&(index_vxx<rows_Vxx[srcBinI+1])) {
3330 Int_t destBinJ=binMap ? binMap[j] : j; // histogram bin
3331 if(destBinI!=destBinJ) {
3332 // only diagonal elements are calculated
3333 j++;
3334 } else {
3335 Int_t srcBinJ=fHistToX[j]; // matrix column index
3336 if(srcBinJ<0) {
3337 // bin was not unfolded, check next bin
3338 j++;
3339 } else {
3340 if(cols_Vxx[index_vxx]<srcBinJ) {
3341 // index is too small
3342 index_vxx++;
3343 } else if(cols_Vxx[index_vxx]>srcBinJ) {
3344 // index is too large, skip bin
3345 j++;
3346 } else {
3347 // add this bin
3348 e2[destBinI] += data_Vxx[index_vxx];
3349 j++;
3350 index_vxx++;
3351 }
3352 }
3353 }
3354 }
3355 //output->SetBinError(destBinI,TMath::Sqrt(e2));
3356 }
3357 }
3358 for(std::map<Int_t,Double_t>::const_iterator i=e2.begin();
3359 i!=e2.end();i++) {
3360 //cout<<(*i).first<<" "<<(*i).second<<"\n";
3361 output->SetBinError((*i).first,TMath::Sqrt((*i).second));
3362 }
3363}
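// Usage sketch (illustrative): combine truth bins with a binMap. For a response
// matrix with 4 truth bins the array has 4+2 entries; here bins 1+2 and 3+4 are
// added and the underflow/overflow bins are discarded.
//
//    Int_t binMap[6]={-1, 1,1, 2,2, -1};
//    TH1D *histResult=new TH1D("result","unfolded",2,0.,1.);
//    unfold.GetOutput(histResult,binMap);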
3364
3365////////////////////////////////////////////////////////////////////////
3366/// add up an error matrix, also respecting the bin mapping
3367///
3368/// \param[inout] ematrix error matrix histogram
3369/// \param[in] emat error matrix stored with internal mapping (member fXToHist)
3370/// \param[in] binMap mapping of histogram bins
3371/// \param[in] doClear if true, ematrix is cleared prior to adding
3372/// elements of emat to it.
3373///
3374/// the array <b>binMap</b> is explained with the method GetOutput(). The
3375/// matrix emat must have dimension NxN where N=fXToHist.size()
3376/// The flag <b>doClear</b> may be used to add covariance matrices from
3377/// several uncertainty sources.
3378///
3379void TUnfold::ErrorMatrixToHist(TH2 *ematrix,const TMatrixDSparse *emat,
3380 const Int_t *binMap,Bool_t doClear) const
3381{
3382 Int_t nbin=ematrix->GetNbinsX();
3383 if(doClear) {
3384 for(Int_t i=0;i<nbin+2;i++) {
3385 for(Int_t j=0;j<nbin+2;j++) {
3386 ematrix->SetBinContent(i,j,0.0);
3387 ematrix->SetBinError(i,j,0.0);
3388 }
3389 }
3390 }
3391
3392 if(emat) {
3393 const Int_t *rows_emat=emat->GetRowIndexArray();
3394 const Int_t *cols_emat=emat->GetColIndexArray();
3395 const Double_t *data_emat=emat->GetMatrixArray();
3396
3397 Int_t binMapSize = fHistToX.GetSize();
3398 for(Int_t i=0;i<binMapSize;i++) {
3399 Int_t destBinI=binMap ? binMap[i] : i;
3400 Int_t srcBinI=fHistToX[i];
3401 if((destBinI>=0)&&(destBinI<nbin+2)&&(srcBinI>=0)) {
3402 // here we loop over the columns of the source matrix
3403 // j: counts histogram bins
3404 // index: counts sparse matrix index
3405 // the algorithm makes use of the fact that fHistToX is ordered
3406 Int_t j=0;
3407 Int_t index_vxx=rows_emat[srcBinI];
3408 while((j<binMapSize)&&(index_vxx<rows_emat[srcBinI+1])) {
3409 Int_t destBinJ=binMap ? binMap[j] : j;
3410 Int_t srcBinJ=fHistToX[j];
3411 if((destBinJ<0)||(destBinJ>=nbin+2)||(srcBinJ<0)) {
3412 // check next bin
3413 j++;
3414 } else {
3415 if(cols_emat[index_vxx]<srcBinJ) {
3416 // index is too small
3417 index_vxx++;
3418 } else if(cols_emat[index_vxx]>srcBinJ) {
3419 // index is too large, skip bin
3420 j++;
3421 } else {
3422 // add this bin
3423 Double_t e2= ematrix->GetBinContent(destBinI,destBinJ)
3424 + data_emat[index_vxx];
3425 ematrix->SetBinContent(destBinI,destBinJ,e2);
3426 j++;
3427 index_vxx++;
3428 }
3429 }
3430 }
3431 }
3432 }
3433 }
3434}
3435
3436////////////////////////////////////////////////////////////////////////
3437/// get output covariance matrix, possibly cumulated over several bins
3438///
3439/// \param[out] ematrix histogram to store the covariance. The bin
3440/// contents are overwritten.
3441/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3442///
3443/// The use of <b>binMap</b> is explained with the documentation of
3444/// the GetOutput() method
3445
3446void TUnfold::GetEmatrix(TH2 *ematrix,const Int_t *binMap) const
3447{
3448 ErrorMatrixToHist(ematrix,fVxx,binMap,kTRUE);
3449}
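// Usage sketch (illustrative): retrieve the covariance matrix with the same
// binMap as used for GetOutput(), so that it matches the combined output bins.
//
//    TH2D *histEmat=new TH2D("emat","covariance",2,0.,1.,2,0.,1.);
//    unfold.GetEmatrix(histEmat,binMap);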
3450
3451////////////////////////////////////////////////////////////////////////
3452 /// get correlation coefficients, possibly cumulated over several bins
3453 ///
3454 /// \param[out] rhoij histogram to store the correlation coefficients. The bin
3455/// contents are overwritten.
3456/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3457///
3458/// The use of <b>binMap</b> is explained with the documentation of
3459/// the GetOutput() method
3460
3461void TUnfold::GetRhoIJ(TH2 *rhoij,const Int_t *binMap) const
3462{
3463 GetEmatrix(rhoij,binMap);
3464 Int_t nbin=rhoij->GetNbinsX();
3465 Double_t *e=new Double_t[nbin+2];
3466 for(Int_t i=0;i<nbin+2;i++) {
3467 e[i]=TMath::Sqrt(rhoij->GetBinContent(i,i));
3468 }
3469 for(Int_t i=0;i<nbin+2;i++) {
3470 for(Int_t j=0;j<nbin+2;j++) {
3471 if((e[i]>0.0)&&(e[j]>0.0)) {
3472 rhoij->SetBinContent(i,j,rhoij->GetBinContent(i,j)/e[i]/e[j]);
3473 } else {
3474 rhoij->SetBinContent(i,j,0.0);
3475 }
3476 }
3477 }
3478 delete[] e;
3479}
3480
3481////////////////////////////////////////////////////////////////////////
3482 /// get global correlation coefficients, possibly cumulated over several bins
3483///
3484/// \param[out] rhoi histogram to store the global correlation
3485/// coefficients. The bin contents are overwritten.
3486/// \param[in] binMap (default=nullptr) array for mapping truth bins to
3487/// histogram bins
3488/// \param[out] invEmat (default=nullptr) histogram to store the inverted
3489/// covariance matrix
3490///
3491/// for a given bin, the global correlation coefficient is defined
3492/// as<br/>
3493/// &rho;<sub>i</sub>=sqrt(1-1/(V<sub>ii</sub>*V<sup>-1</sup><sub>ii</sub>))
3494/// <br/>
3495/// such that the calculation of global correlation coefficients
3496/// possibly involves the inversion of a covariance matrix.
3497///
3498/// return value: maximum global correlation coefficient
3499///
3500/// The use of <b>binMap</b> is explained with the documentation of
3501/// the GetOutput() method
3502///
3503
3504Double_t TUnfold::GetRhoI(TH1 *rhoi,const Int_t *binMap,TH2 *invEmat) const
3505{
3506 ClearHistogram(rhoi,-1.);
3507
3508 if(binMap) {
3509 // if there is a bin map, the matrix needs to be inverted
3510 // otherwise, use the existing inverse, such that
3511 // no matrix inversion is needed
3512 return GetRhoIFromMatrix(rhoi,fVxx,binMap,invEmat);
3513 } else {
3514 Double_t rhoMax=0.0;
3515
3516 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3517 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3518 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3519
3520 const Int_t *rows_VxxInv=fVxxInv->GetRowIndexArray();
3521 const Int_t *cols_VxxInv=fVxxInv->GetColIndexArray();
3522 const Double_t *data_VxxInv=fVxxInv->GetMatrixArray();
3523
3524 for(Int_t i=0;i<GetNx();i++) {
3525 Int_t destI=fXToHist[i];
3526 Double_t e_ii=0.0,einv_ii=0.0;
3527 for(int index_vxx=rows_Vxx[i];index_vxx<rows_Vxx[i+1];index_vxx++) {
3528 if(cols_Vxx[index_vxx]==i) {
3529 e_ii=data_Vxx[index_vxx];
3530 break;
3531 }
3532 }
3533 for(int index_vxxinv=rows_VxxInv[i];index_vxxinv<rows_VxxInv[i+1];
3534 index_vxxinv++) {
3535 if(cols_VxxInv[index_vxxinv]==i) {
3536 einv_ii=data_VxxInv[index_vxxinv];
3537 break;
3538 }
3539 }
3540 Double_t rho=1.0;
3541 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3542 if(rho>=0.0) rho=TMath::Sqrt(rho);
3543 else rho=-TMath::Sqrt(-rho);
3544 if(rho>rhoMax) {
3545 rhoMax = rho;
3546 }
3547 rhoi->SetBinContent(destI,rho);
3548 }
3549 return rhoMax;
3550 }
3551}
3552
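A corresponding sketch for GetRhoI(), again assuming an unfolding object unfold with nGen truth bins; the return value is the largest global correlation coefficient found.

    TH1D *hRhoI = new TH1D("rhoI",";truth bin;#rho_{i}",nGen,0.5,nGen+0.5);
    Double_t rhoMax = unfold.GetRhoI(hRhoI);   // binMap=nullptr: uses the stored inverse, no extra inversion
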
3553 Double_t TUnfold::GetRhoIFromMatrix(TH1 *rhoi,const TMatrixDSparse *eOrig,
3554 const Int_t *binMap,TH2 *invEmat) const
3555{
3556 // get global correlation coefficients with arbitrary bin map
3557 // rhoi: global correlation histogram
3558 // emat: error matrix
3559 // binMap: for each bin of the original output distribution
3560 // specify the destination bin. A value of -1 means that the bin
3561 // is discarded. 0 means underflow bin, 1 first bin, ...
3562 // binMap[0] : destination of underflow bin
3563 // binMap[1] : destination of first bin
3564 // ...
3565 // return value: maximum global correlation
3566
3567 Double_t rhoMax=0.;
3568 // original number of bins:
3569 // fHistToX.GetSize()
3570 // loop over binMap and find number of used bins
3571
3572 Int_t binMapSize = fHistToX.GetSize();
3573
3574 // histToLocalBin[iBin] points to a local index
3575 // only bins iBin of the histogram rhoi which are referenced
3576 // in the bin map have a local index
3577 std::map<int,int> histToLocalBin;
3578 Int_t nBin=0;
3579 for(Int_t i=0;i<binMapSize;i++) {
3580 Int_t mapped_i=binMap ? binMap[i] : i;
3581 if(mapped_i>=0) {
3582 if(histToLocalBin.find(mapped_i)==histToLocalBin.end()) {
3583 histToLocalBin[mapped_i]=nBin;
3584 nBin++;
3585 }
3586 }
3587 }
3588 // number of local indices: nBin
3589 if(nBin>0) {
3590 // construct inverse mapping function
3591 // local index -> histogram bin
3592 Int_t *localBinToHist=new Int_t[nBin];
3593 for(std::map<int,int>::const_iterator i=histToLocalBin.begin();
3594 i!=histToLocalBin.end();i++) {
3595 localBinToHist[(*i).second]=(*i).first;
3596 }
3597
3598 const Int_t *rows_eOrig=eOrig->GetRowIndexArray();
3599 const Int_t *cols_eOrig=eOrig->GetColIndexArray();
3600 const Double_t *data_eOrig=eOrig->GetMatrixArray();
3601
3602 // remap error matrix
3603 // matrix row i -> origI (fXToHist[i])
3604 // origI -> destI (binMap)
3605 // destI -> ematBinI (histToLocalBin)
3606 TMatrixD eRemap(nBin,nBin);
3607 // i: row of the matrix eOrig
3608 for(Int_t i=0;i<GetNx();i++) {
3609 // origI: pointer in output histogram with all bins
3610 Int_t origI=fXToHist[i];
3611 // destI: pointer in the histogram rhoi
3612 Int_t destI=binMap ? binMap[origI] : origI;
3613 if(destI<0) continue;
3614 Int_t ematBinI=histToLocalBin[destI];
3615 for(int index_eOrig=rows_eOrig[i];index_eOrig<rows_eOrig[i+1];
3616 index_eOrig++) {
3617 // j: column of the matrix fVxx
3618 Int_t j=cols_eOrig[index_eOrig];
3619 // origJ: pointer in output histogram with all bins
3620 Int_t origJ=fXToHist[j];
3621 // destJ: pointer in the histogram rhoi
3622 Int_t destJ=binMap ? binMap[origJ] : origJ;
3623 if(destJ<0) continue;
3624 Int_t ematBinJ=histToLocalBin[destJ];
3625 eRemap(ematBinI,ematBinJ) += data_eOrig[index_eOrig];
3626 }
3627 }
3628 // invert remapped error matrix
3629 TMatrixDSparse eSparse(eRemap);
3630 Int_t rank=0;
3631 TMatrixDSparse *einvSparse=InvertMSparseSymmPos(&eSparse,&rank);
3632 if(rank!=einvSparse->GetNrows()) {
3633 Warning("GetRhoIFromMatrix","Covariance matrix has rank %d, expected %d",
3634 rank,einvSparse->GetNrows());
3635 }
3636 // fill to histogram
3637 const Int_t *rows_eInv=einvSparse->GetRowIndexArray();
3638 const Int_t *cols_eInv=einvSparse->GetColIndexArray();
3639 const Double_t *data_eInv=einvSparse->GetMatrixArray();
3640
3641 for(Int_t i=0;i<einvSparse->GetNrows();i++) {
3642 Double_t e_ii=eRemap(i,i);
3643 Double_t einv_ii=0.0;
3644 for(Int_t index_einv=rows_eInv[i];index_einv<rows_eInv[i+1];
3645 index_einv++) {
3646 Int_t j=cols_eInv[index_einv];
3647 if(j==i) {
3648 einv_ii=data_eInv[index_einv];
3649 }
3650 if(invEmat) {
3651 invEmat->SetBinContent(localBinToHist[i],localBinToHist[j],
3652 data_eInv[index_einv]);
3653 }
3654 }
3655 Double_t rho=1.0;
3656 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3657 if(rho>=0.0) rho=TMath::Sqrt(rho);
3658 else rho=-TMath::Sqrt(-rho);
3659 if(rho>rhoMax) {
3660 rhoMax = rho;
3661 }
3662 //std::cout<<i<<" "<<localBinToHist[i]<<" "<<rho<<"\n";
3663 rhoi->SetBinContent(localBinToHist[i],rho);
3664 }
3665
3666 DeleteMatrix(&einvSparse);
3667 delete [] localBinToHist;
3668 }
3669 return rhoMax;
3670}
3671
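The binMap convention described in the comments above (entry 0 is the underflow bin, a value of -1 discards a bin) can be illustrated with a hedged sketch; the pairwise merging scheme below is only an example and assumes nGen is even.

    std::vector<Int_t> binMap(nGen+2,-1);             // nGen truth bins plus under/overflow, default: discard
    for(Int_t i=1;i<=nGen;i++) binMap[i]=(i+1)/2;     // merge bins 1+2 -> 1, 3+4 -> 2, ...
    TH1D *hRhoICoarse = new TH1D("rhoICoarse","",nGen/2,0.5,nGen/2+0.5);
    unfold.GetRhoI(hRhoICoarse,binMap.data());        // cumulates bins, inverts the remapped covariance matrix
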
3672
3673////////////////////////////////////////////////////////////////////////
3674/// Initialize bin contents and bin errors for a given histogram
3675///
3676/// \param[out] h histogram
3677/// \param[in] x new histogram content
3678///
3679/// all histogram errors are set to zero, all contents are set to <b>x</b>
3680 void TUnfold::ClearHistogram(TH1 *h,Double_t x) const
3681{
3682 Int_t nxyz[3];
3683 nxyz[0]=h->GetNbinsX()+1;
3684 nxyz[1]=h->GetNbinsY()+1;
3685 nxyz[2]=h->GetNbinsZ()+1;
3686 for(int i=h->GetDimension();i<3;i++) nxyz[i]=0;
3687 Int_t ixyz[3];
3688 for(int i=0;i<3;i++) ixyz[i]=0;
3689 while((ixyz[0]<=nxyz[0])&&
3690 (ixyz[1]<=nxyz[1])&&
3691 (ixyz[2]<=nxyz[2])){
3692 Int_t ibin=h->GetBin(ixyz[0],ixyz[1],ixyz[2]);
3693 h->SetBinContent(ibin,x);
3694 h->SetBinError(ibin,0.0);
3695 for(Int_t i=0;i<3;i++) {
3696 ixyz[i] += 1;
3697 if(ixyz[i]<=nxyz[i]) break;
3698 if(i<2) ixyz[i]=0;
3699 }
3700 }
3701}
3702
3703 void TUnfold::SetEpsMatrix(Double_t eps) {
3704 // set accuracy for matrix inversion
3705 if((eps>0.0)&&(eps<1.0)) fEpsMatrix=eps;
3706}
3707
3708////////////////////////////////////////////////////////////////////////
3709/// return a string describing the TUnfold version
3710///
3711/// The version is reported in the form Vmajor.minor
3712/// Changes of the minor version number typically correspond to
3713/// bug-fixes. Changes of the major version may result in adding or
3714/// removing data attributes, such that the streamer methods are not
3715/// compatible between different major versions.
3716
3717 const char *TUnfold::GetTUnfoldVersion(void)
3718{
3719 return TUnfold_VERSION;
3720}
3721
3722
3723////////////////////////////////////////////////////////////////////////
3724/// return Stein's unbiased risk estimator
3725/// See e.g. arXiv:1612.09415
3726///
3727/// A minimum in the SURE variable is a good choice of regularisation strength
3728///
3729/// NOTE: the calculation of SURE depends on the calculation of DF.
3730/// See the method GetDF() for caveats with Poisson-distributed data.
3731
3732double TUnfold::GetSURE(void) const {
3733 return GetChi2A()+2.*GetDF();
3734}
3735
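As the code above shows, the SURE value is simply a combination of two public getters; an equivalent hand-written check (assuming an unfolding object named unfold) would be:

    unfold.DoUnfold(tau);
    double sure = unfold.GetChi2A() + 2.*unfold.GetDF();   // identical to unfold.GetSURE()
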
3736////////////////////////////////////////////////////////////////////////
3737/// return the effective number of degrees of freedom
3738/// See e.g. arXiv:1612.09415 and the references therein
3739///
3740/// Here, DF is calculated using the dependence of
3741/// the unfolding result x on the data y
3742///
3743/// This calculation is done assuming a CONSTANT data variance.
3744/// I.e. the uncertainties reported to TUnfold in "SetInput()"
3745/// ought to be independent of the measurements.
3746/// This is *NOT* true for standard Poisson-distributed data.
3747/// In practice the impact is expected to be small
3748
3749double TUnfold::GetDF(void) const {
3750
3751 double r=0.;
3752 const Int_t *rows_A=fA->GetRowIndexArray();
3753 const Int_t *cols_A=fA->GetColIndexArray();
3754 const Double_t *data_A=fA->GetMatrixArray();
3755 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3756 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3757 Int_t ix = cols_A[indexA];
3758 r += data_A[indexA]*(*fDXDY)(ix,iy);
3759 }
3760 }
3761 return r;
3762}
3763
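In matrix notation, the sum evaluated in GetDF() below is the trace of the product of the response matrix with the derivative matrix (a sketch using the symbols A and dx/dy as defined in this file):

    \mathrm{DF} = \sum_{i,j} A_{ij}\left(\frac{\partial x}{\partial y}\right)_{ji}
                = \operatorname{Tr}\!\left(A\,\frac{\partial x}{\partial y}\right)
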
3764////////////////////////////////////////////////////////////////////////
3765/// minimize Stein's unbiased risk estimator "SURE"
3766/// using successive calls to DoUnfold at various tau.
3767/// Optionally, also the L-curve and its curvature are calculated
3768/// for comparison. See description of GetSURE()
3769/// See e.g. arXiv:1612.09415 for the definition of SURE
3770///
3771/// \param[in] nPoint : number of points
3772/// \param[in] tauMin : lower end of scan-range
3773/// \param[in] tauMax : upper end of scan-range
3774/// \param[out] logTauSURE : scan result, SURE as a function of log(tau)
3775/// \param[out] df_chi2A : parametric plot of DF against chi2A
3776/// \param[out] lCurve : parametric plot (lCurve)
3777///
3778/// return value: index of the "best" point
3779///
3780/// if tauMin is less than zero or if tauMin is not lower than tauMax, then
3781/// the scan range is determined automatically
3782/// if tau=0 is included in the scan, then the first x-coordinate
3783/// of the result plot logTauSURE is set to -Infinity
3784
3785 Int_t TUnfold::ScanSURE
3786 (Int_t nPoint,Double_t tauMin,Double_t tauMax,
3787 TGraph **logTauSURE,TGraph **df_chi2A,TGraph **lCurve) {
3788
3789 struct ScanResult {
3790 double DF,SURE,chi2A,x,y;
3791 };
3792 std::map<double,ScanResult > scan;
3793
3794 DoUnfold(0.);
3795 TVectorD ev=GetSqrtEvEmatrix();
3796
3797 while((int)scan.size()<nPoint) {
3798 // add a point
3799 double tau;
3800 int n=scan.size();
3801 if(n==0) {
3802 if((tauMin>=0.)&&(tauMax>tauMin)) {
3803 tau=tauMin;
3804 } else {
3805 tau=0.0;
3806 }
3807 } else if(n==1) {
3808 if((tauMin>=0.)&&(tauMax>tauMin)) {
3809 tau=tauMax;
3810 } else {
3811 // estimate of maximum tau
3812 tau=1./ev(ev.GetNrows()-1);
3813 if(tau<=tauMin) tau=tauMin+1.;
3814 }
3815 } else {
3816 // interpolate/extrapolate
3817 std::vector<double> t,s;
3818 t.reserve(n);
3819 s.reserve(n);
3820 for(std::map<double,ScanResult>::const_iterator i=scan.begin();
3821 i!=scan.end();i++) {
3822 t.push_back((*i).first);
3823 s.push_back((*i).second.SURE);
3824 }
3825 if((n<nPoint-1)||(n<3)) {
3826 // add another point
3827 int where=0;
3828
3829 if(t.size()>2) {
3830 // check whether all intervals not including tau=0
3831 // have the same (logarithmic) size
3832 double s0=0.,s1=0.,s2=0.;
3833 double rMax=0.;
3834 for(size_t i=0;i<t.size()-1;i++) {
3835 if(t[i]>0.) {
3836 double r=TMath::Log(t[i+1]/t[i]);
3837 s0+=1.;
3838 s1+=r;
3839 s2+=r*r;
3840 if(r>rMax) {
3841 rMax=r;
3842 where=i;
3843 }
3844 }
3845 }
3846 double mean=s1/s0;
3847 double diffSq=s2-mean*mean*s0;
3848 if(diffSq<0.5*(rMax-mean)) {
3849 where=0;
3850 }
3851 }
3852
3853 if((where==0)&&(t[where]<=0.0)) {
3854 // automatic scan
3855 // change minimum by 1/2
3856 tau=t[where+1]/2.;
3857 } else {
3858 // geometric mean
3859 tau=TMath::Sqrt(t[where]*t[where+1]);
3860 }
3861 } else {
3862 // improve minimum
3863 int where=1;
3864 for(size_t i=2;i<t.size()-1;i++) {
3865 if(s[i]<s[where]) where=i;
3866 }
3867 // quadratic interpolation
3868 if(where!=1) {
3869 // log in tau
3870 for(int i=-1;i<2;i++) t[where+i]=TMath::Log10(t[where+i]);
3871 }
3872 Info("ScanSURE","minimum near: [%f,%f,%f] -> [%f,%f,%f]",
3873 t[where-1],t[where],t[where+1],s[where-1],s[where],s[where+1]);
3874 double s21=s[where+1]-s[where];
3875 double s01=s[where-1]-s[where];
3876 double t21=t[where+1]-t[where];
3877 double t01=t[where-1]-t[where];
3878 double t20=t[where+1]-t[where-1];
3879 double ct0t2=(s21*t01-s01*t21)/t20;
3880 double bt0t2=(s01*t21*t21-s21*t01*t01)/t20;
3881 if(ct0t2<0.0) {
3882 tau=t[where]-0.5*bt0t2/ct0t2;
3883 //Info("ScanSURE","try tau=%f",tau);
3884 } else {
3885 // not a minimum - not clear what to do
3886 if(s21<s01) {
3887 tau=t[where]+0.5*t21;
3888 } else {
3889 tau=t[where]+0.5*t01;
3890 }
3891 //Info("ScanSURE","half of interval: tau=%f",tau);
3892 }
3893 if(where!=1) tau=TMath::Power(10.,tau);
3894 }
3895 }
3896 DoUnfold(tau);
3897 ScanResult &r=scan[tau];
3898 r.SURE=GetSURE();
3899 r.DF=GetDF();
3900 r.chi2A=GetChi2A();
3901 r.x=GetLcurveX();
3902 r.y=GetLcurveY();
3903
3904 if((tau<=0.)&&(GetNdf()<=0)) {
3905 Error("ScanSURE","too few input bins, NDF<=0 %d",GetNdf());
3906 }
3907 if(tau<=0.) {
3908 Info("ScanSURE","logtau=-Infinity Chi2A=%lf SURE=%lf DF=%lf X=%lf Y=%lf",
3909 r.chi2A,r.SURE,r.DF,r.x,r.y);
3910 if(!TMath::Finite(r.x)) {
3911 Fatal("ScanSURE","problem (too few input bins?) x=%f",r.x);
3912 }
3913 if(!TMath::Finite(r.y)) {
3914 Fatal("ScanSURE","problem (missing regularisation?) y=%f",r.y);
3915 }
3916 } else {
3917 Info("ScanSURE","logtau=%lf Chi2A=%lf SURE=%lf DF=%lf X=%lf Y=%lf",
3918 TMath::Log10(tau),r.chi2A,r.SURE,r.DF,r.x,r.y);
3919 }
3920
3921 }
3922
3923 int iBest=-1;
3924 // store results
3925 std::vector<double> logTau,SURE,DF,chi2A,X,Y;
3926 for(std::map<double,ScanResult>::const_iterator i=scan.begin();
3927 i!=scan.end();i++) {
3928 if((*i).first>=0) {
3929 double log10tau;
3930 if((*i).first>0.0) {
3931 log10tau=TMath::Log10((*i).first);
3932 } else {
3933 continue; // log10tau=-HUGE_VAL;
3934 }
3935 double s=(*i).second.SURE;
3936 if((iBest<0)||(SURE[iBest]>s)) {
3937 iBest=SURE.size();
3938 }
3939 logTau.push_back(log10tau);
3940 SURE.push_back(s);
3941 DF.push_back((*i).second.DF);
3942 chi2A.push_back((*i).second.chi2A);
3943 X.push_back((*i).second.x);
3944 Y.push_back((*i).second.y);
3945 }
3946 }
3947
3948 int n=logTau.size();
3949 if(n) {
3950 // fill graphs and splines
3951 if(logTauSURE) {
3952 *logTauSURE=new TGraph(n,logTau.data(),SURE.data());
3953 }
3954 if(df_chi2A) {
3955 *df_chi2A=new TGraph(n,DF.data(),chi2A.data());
3956 }
3957 if(lCurve) {
3958 *lCurve=new TGraph(n,X.data(),Y.data());
3959 }
3960 }
3961 return iBest;
3962}
3963
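A hedged usage sketch for ScanSURE() (the object and graph names are assumptions): passing tauMin=tauMax=0 triggers the automatic scan range documented above, and the returned index points into the SURE graph, so the unfolding can be repeated at the selected regularisation strength.

    TGraph *logTauSURE=nullptr,*df_chi2A=nullptr,*lCurve=nullptr;
    Int_t iBest = unfold.ScanSURE(30,0.,0.,&logTauSURE,&df_chi2A,&lCurve);
    if(iBest>=0) {
       double logTauBest,sureBest;
       logTauSURE->GetPoint(iBest,logTauBest,sureBest);
       unfold.DoUnfold(TMath::Power(10.,logTauBest));   // unfold once more at the SURE minimum
    }
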
3964