TUnfold.cxx
1// @(#)root/unfold:$Id$
2// Author: Stefan Schmitt DESY, 13/10/08
3
4/** \class TUnfold
5\ingroup Unfold
6
7An algorithm to unfold distributions from detector to truth level
8
9TUnfold is used to decompose a measurement y into several sources x,
10given the measurement uncertainties and a matrix of migrations A.
11The method can be applied to a large number of problems,
12where the measured distribution y is a linear superposition
13of several Monte Carlo shapes. Beyond such a simple template fit,
14TUnfold has an adjustable regularisation term and also supports an
15optional constraint on the total number of events.
16
17<b>For most applications, it is better to use the derived class
18TUnfoldDensity instead of TUnfold</b>. TUnfoldDensity adds various
19features to TUnfold, such as:
20background subtraction, propagation of systematic uncertainties,
21complex multidimensional arrangements of the bins. For innocent
22users, the most notable improvement of TUnfoldDensity over TUnfold is
23the getter functions. For TUnfold, histograms have to be booked by the
24user and the getter functions fill the histogram bins. TUnfoldDensity
25simply returns a new, already filled histogram.
26
27If you use this software, please consider the following citation
28
29<b>S.Schmitt, JINST 7 (2012) T10003 [arXiv:1205.6201]</b>
30
31Detailed documentation and updates are available on
32http://www.desy.de/~sschmitt
33
34Brief recipe to use TUnfold:
35
36 - a matrix (truth,reconstructed) is given as a two-dimensional histogram
37 as argument to the constructor of TUnfold
38 - a vector of measurements is given as one-dimensional histogram using
39 the SetInput() method
40 - The unfolding is performed
41
42 - either once with a fixed parameter tau, method DoUnfold(tau)
43 - or multiple times in a scan to determine the best choice of tau,
44 method ScanLcurve()
45
46 - Unfolding results are retrieved using various GetXXX() methods
47
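A minimal sketch of this recipe (the histograms histMC and histData, the
binning and the value of tau are illustrative; error handling is omitted):

~~~
// migration matrix: truth on the x axis, reconstructed on the y axis
TUnfold unfold(histMC,TUnfold::kHistMapOutputHoriz,
               TUnfold::kRegModeCurvature,TUnfold::kEConstraintArea);
// set the measured distribution; a return value >=10000 signals a
// serious problem with the input (see the SetInput() documentation)
if(unfold.SetInput(histData)>=10000) {
   // input covariance matrix is unusable
}
// unfold with a fixed regularisation strength tau
unfold.DoUnfold(1.E-3);
// alternative: scan the L curve to choose tau
//   TGraph *lCurve=0;
//   Int_t iBest=unfold.ScanLcurve(30,0.,0.,&lCurve);
// book an output histogram with the truth binning and fill it
TH1D *histOutput=new TH1D("unfolded","unfolded",histMC->GetNbinsX(),
                          histMC->GetXaxis()->GetXmin(),
                          histMC->GetXaxis()->GetXmax());
unfold.GetOutput(histOutput);
~~~
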
48Basic formulae:
49\f[
50\chi^{2}_{A}=(Ax-y)^{T}V_{yy}^{-1}(Ax-y) \\
51\chi^{2}_{L}=(x-f*x_{0})^{T}L^{T}L(x-f*x_{0}) \\
52\chi^{2}_{unf}=\chi^{2}_{A}+\tau^{2}\chi^{2}_{L}+\lambda\Sigma_{i}(Ax-y)_{i}
53\f]
54
55 - \f$ x \f$:result,
56 - \f$ A \f$:probabilities,
57 - \f$ y \f$:data,
58 - \f$ V_{yy} \f$:data covariance,
59 - \f$ f \f$:bias scale,
60 - \f$ x_{0} \f$:bias,
61 - \f$ L \f$:regularisation conditions,
62 - \f$ \tau \f$:regularisation strength,
63 - \f$ \lambda \f$:Lagrangian multiplier.
64
65 Without area constraint, \f$ \lambda \f$ is set to zero, and
66\f$ \chi^{2}_{unf} \f$ is minimized to determine \f$ x \f$.
67With area constraint, both \f$ x \f$ and \f$ \lambda \f$ are determined.
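
For orientation, without the area constraint the minimum of
\f$ \chi^{2}_{unf} \f$ is given in closed form by the standard Tikhonov
solution (this is what the core DoUnfold() method implements, see the
in-code documentation below):
\f[
x = E\,(A^{T}V_{yy}^{-1}y+\tau^{2}f\,L^{T}L\,x_{0}) ,\qquad
E = (A^{T}V_{yy}^{-1}A+\tau^{2}L^{T}L)^{-1} .
\f]
With the area constraint, a correction proportional to the Lagrangian
multiplier \f$ \lambda \f$ is added to \f$ x \f$.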
68
69--------------------------------------------------------------------------------
70This file is part of TUnfold.
71
72TUnfold is free software: you can redistribute it and/or modify
73it under the terms of the GNU General Public License as published by
74the Free Software Foundation, either version 3 of the License, or
75(at your option) any later version.
76
77TUnfold is distributed in the hope that it will be useful,
78but WITHOUT ANY WARRANTY; without even the implied warranty of
79MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
80GNU General Public License for more details.
81
82You should have received a copy of the GNU General Public License
83along with TUnfold. If not, see <http://www.gnu.org/licenses/>.
84
85<b>Version 17.6, updated doxygen-style comments, added one argument to ScanLcurve() </b>
86
87#### History:
88 - Version 17.5, fix memory leak with fVyyInv, bugs in GetInputInverseEmatrix(), GetInput(), bug in MultiplyMSparseMSparseTranspVector
89 - Version 17.4, in parallel to changes in TUnfoldBinning
90 - Version 17.3, in parallel to changes in TUnfoldBinning
91 - Version 17.2, bug fix with GetProbabilityMatrix
92 - Version 17.1, bug fixes in GetFoldedOutput, GetOutput
93 - Version 17.0, option to specify an error matrix with SetInput(), new ScanRho() method
94 - Version 16.2, in parallel to bug-fix in TUnfoldSys
95 - Version 16.1, fix bug with error matrix in case kEConstraintArea is used
96 - Version 16.0, fix calculation of global correlations, improved error messages
97 - Version 15, simplified L-curve scan, new tau definition, new error calc., area preservation
98 - Version 14, with changes in TUnfoldSys.cxx
99 - Version 13, new methods for derived classes and small bug fix
100 - Version 12, report singular matrices
101 - Version 11, reduce the amount of printout
102 - Version 10, more correct definition of the L curve, update references
103 - Version 9, faster matrix inversion and skip edge points for L-curve scan
104 - Version 8, replace all TMatrixSparse matrix operations by private code
105 - Version 7, fix problem with TMatrixDSparse,TMatrixD multiplication
106 - Version 6, replace class XY by std::pair
107 - Version 5, replace run-time dynamic arrays by new and delete[]
108 - Version 4, fix new bug from V3 with initial regularisation condition
109 - Version 3, fix bug with initial regularisation condition
110 - Version 2, with improved ScanLcurve() algorithm
111 - Version 1, added ScanLcurve() method
112 - Version 0, stable version of basic unfolding algorithm
113*/
114
115#include <iostream>
116#include <TMatrixD.h>
117#include <TMatrixDSparse.h>
118#include <TMatrixDSym.h>
119#include <TMatrixDSymEigen.h>
120#include <TMath.h>
121#include "TUnfold.h"
122
123#include <map>
124#include <vector>
125
126//#define DEBUG
127//#define DEBUG_DETAIL
128//#define FORCE_EIGENVALUE_DECOMPOSITION
129
131
132TUnfold::~TUnfold(void)
133{
134 // delete all data members
135
142
143 ClearResults();
144}
145
146////////////////////////////////////////////////////////////////////////////////
147/// Initialize data members, for use in constructors.
148
149void TUnfold::InitTUnfold(void)
150{
151 // reset all data members
152 fXToHist.Set(0);
153 fHistToX.Set(0);
154 fSumOverY.Set(0);
155 fA = 0;
156 fL = 0;
157 fVyy = 0;
158 fY = 0;
159 fX0 = 0;
160 fTauSquared = 0.0;
161 fBiasScale = 0.0;
164 // output
165 fX = 0;
166 fVyyInv = 0;
167 fVxx = 0;
168 fVxxInv = 0;
169 fAx = 0;
170 fChi2A = 0.0;
171 fLXsquared = 0.0;
172 fRhoMax = 999.0;
173 fRhoAvg = -1.0;
174 fNdf = 0;
175 fDXDAM[0] = 0;
176 fDXDAZ[0] = 0;
177 fDXDAM[1] = 0;
178 fDXDAZ[1] = 0;
179 fDXDtauSquared = 0;
180 fDXDY = 0;
181 fEinv = 0;
182 fE = 0;
183 fEpsMatrix=1.E-13;
184 fIgnoredBins=0;
185}
186
187////////////////////////////////////////////////////////////////////////////////
188/// Delete matrix and invalidate pointer.
189///
190/// \param[inout] m pointer to a matrix-pointer
191///
192/// If the matrix pointer is non-zero, the matrix is deleted. The matrix pointer
193/// is set to zero.
194
195void TUnfold::DeleteMatrix(TMatrixD **m)
196{
197 if(*m) delete *m;
198 *m=0;
199}
200
201////////////////////////////////////////////////////////////////////////////////
202/// Delete sparse matrix and invalidate pointer
203///
204/// \param[inout] m pointer to a matrix-pointer
205///
206/// If the matrix pointer is non-zero, the matrix is deleted. The matrix pointer
207/// is set to zero.
208
209void TUnfold::DeleteMatrix(TMatrixDSparse **m)
210{
211 if(*m) delete *m;
212 *m=0;
213}
214
215////////////////////////////////////////////////////////////////////////////////
216/// Reset all results.
217
218void TUnfold::ClearResults(void)
219{
220 // delete old results (if any)
221 // this function is virtual, so derived classes may implement their own
222 // method to flag results as non-valid
223
224 // note: the inverse of the input covariance is not cleared
225 // because it does not change until the input is changed
226
230 for(Int_t i=0;i<2;i++) {
233 }
239 fChi2A = 0.0;
240 fLXsquared = 0.0;
241 fRhoMax = 999.0;
242 fRhoAvg = -1.0;
243}
244
245////////////////////////////////////////////////////////////////////////////////
246/// Only for use by root streamer or derived classes.
247
248TUnfold::TUnfold(void)
249{
250 // set all matrix pointers to zero
251 InitTUnfold();
252}
253
254////////////////////////////////////////////////////////////////////////////////
255/// Core unfolding algorithm.
256///
257/// Main unfolding algorithm. Declared virtual, because other algorithms
258/// could be implemented
259///
260/// Purpose: unfold y -> x
261///
262/// - Data members required:
263/// - fA: matrix to relate x and y
264/// - fY: measured data points
265/// - fX0: bias on x
266/// - fBiasScale: scale factor for fX0
267/// - fVyy: covariance matrix for y
268/// - fL: regularisation conditions
269/// - fTauSquared: regularisation strength
270/// - fConstraint: whether the constraint is applied
271/// - Data members modified:
272/// - fVyyInv: inverse of input data covariance matrix
273/// - fNdf: number of degrees of freedom
274/// - fEinv: inverse of the matrix needed for unfolding calculations
275/// - fE: the matrix needed for unfolding calculations
276/// - fX: unfolded data points
277/// - fDXDY: derivative of x wrt y (for error propagation)
278/// - fVxx: error matrix (covariance matrix) on x
279/// - fAx: estimate of distribution y from unfolded data
280/// - fChi2A: contribution to chi**2 from y-Ax
281/// - fChi2L: contribution to chi**2 from L*(x-x0)
282/// - fDXDtauSquared: derivative of x wrt tau
283/// - fDXDAM[0,1]: matrix parts of derivative x wrt A
284/// - fDXDAZ[0,1]: vector parts of derivative x wrt A
285/// - fRhoMax: maximum global correlation coefficient
286/// - fRhoAvg: average global correlation coefficient
287/// - Return code:
288/// - fRhoMax if(fRhoMax>=1.0) then the unfolding has failed!
289
290Double_t TUnfold::DoUnfold(void)
291{
292 ClearResults();
293
294 // get pseudo-inverse matrix Vyyinv and NDF
295 if(!fVyyInv) {
298 fNdf--;
299 }
300 }
301 //
302 // get matrix
303 // T
304 // fA fV = mAt_V
305 //
307 //
308 // get
309 // T
310 // fA fVyyinv fY + fTauSquared fBiasScale Lsquared fX0 = rhs
311 //
312 TMatrixDSparse *rhs=MultiplyMSparseM(AtVyyinv,fY);
314 if (fBiasScale != 0.0) {
315 TMatrixDSparse *rhs2=MultiplyMSparseM(lSquared,fX0);
316 AddMSparse(rhs, fTauSquared * fBiasScale ,rhs2);
317 DeleteMatrix(&rhs2);
318 }
319
320 //
321 // get matrix
322 // T
323 // (fA fV)fA + fTauSquared*fLsquared = fEinv
325 AddMSparse(fEinv,fTauSquared,lSquared);
326
327 //
328 // get matrix
329 // -1
330 // fEinv = fE
331 //
332 Int_t rank=0;
334 if(rank != GetNx()) {
335 Warning("DoUnfold","rank of matrix E %d expect %d",rank,GetNx());
336 }
337
338 //
339 // get result
340 // fE rhs = x
341 //
343 fX = new TMatrixD(*xSparse);
344 DeleteMatrix(&rhs);
345 DeleteMatrix(&xSparse);
346
347 // additional correction for constraint
348 Double_t lambda_half=0.0;
349 Double_t one_over_epsEeps=0.0;
351 TMatrixDSparse *Eepsilon=0;
353 // calculate epsilon: vector of efficiencies
354 const Int_t *A_rows=fA->GetRowIndexArray();
355 const Int_t *A_cols=fA->GetColIndexArray();
356 const Double_t *A_data=fA->GetMatrixArray();
357 TMatrixD epsilonNosparse(fA->GetNcols(),1);
358 for(Int_t i=0;i<A_rows[fA->GetNrows()];i++) {
359 epsilonNosparse(A_cols[i],0) += A_data[i];
360 }
361 epsilon=new TMatrixDSparse(epsilonNosparse);
362 // calculate vector EE*epsilon
364 // calculate scalar product epsilon#*Eepsilon
366 Eepsilon);
367 // if epsilonEepsilon is zero, nothing works...
368 if(epsilonEepsilon->GetRowIndexArray()[1]==1) {
369 one_over_epsEeps=1./epsilonEepsilon->GetMatrixArray()[0];
370 } else {
371 Fatal("Unfold","epsilon#Eepsilon has dimension %d != 1",
372 epsilonEepsilon->GetRowIndexArray()[1]);
373 }
374 DeleteMatrix(&epsilonEepsilon);
375 // calculate sum(Y)
376 Double_t y_minus_epsx=0.0;
377 for(Int_t iy=0;iy<fY->GetNrows();iy++) {
378 y_minus_epsx += (*fY)(iy,0);
379 }
380 // calculate sum(Y)-epsilon#*X
381 for(Int_t ix=0;ix<epsilonNosparse.GetNrows();ix++) {
382 y_minus_epsx -= epsilonNosparse(ix,0) * (*fX)(ix,0);
383 }
384 // calculate lambda_half
385 lambda_half=y_minus_epsx*one_over_epsEeps;
386 // calculate final vector X
387 const Int_t *EEpsilon_rows=Eepsilon->GetRowIndexArray();
388 const Double_t *EEpsilon_data=Eepsilon->GetMatrixArray();
389 for(Int_t ix=0;ix<Eepsilon->GetNrows();ix++) {
390 if(EEpsilon_rows[ix]<EEpsilon_rows[ix+1]) {
391 (*fX)(ix,0) += lambda_half * EEpsilon_data[EEpsilon_rows[ix]];
392 }
393 }
394 }
395 //
396 // get derivative dx/dy
397 // for error propagation
398 // dx/dy = E A# Vyy^-1 ( = B )
399 fDXDY = MultiplyMSparseMSparse(fE,AtVyyinv);
400
401 // additional correction for constraint
403 // transposed vector of dimension GetNy(), all elements equal 1/epsEeps
404 Int_t *rows=new Int_t[GetNy()];
405 Int_t *cols=new Int_t[GetNy()];
406 Double_t *data=new Double_t[GetNy()];
407 for(Int_t i=0;i<GetNy();i++) {
408 rows[i]=0;
409 cols[i]=i;
410 data[i]=one_over_epsEeps;
411 }
413 (1,GetNy(),GetNy(),rows,cols,data);
414 delete[] data;
415 delete[] rows;
416 delete[] cols;
417 // B# * epsilon
419 // temp -= one_over_epsEeps * epsilonB
420 AddMSparse(temp, -one_over_epsEeps, epsilonB);
421 DeleteMatrix(&epsilonB);
422 // correction matrix
423 TMatrixDSparse *corr=MultiplyMSparseMSparse(Eepsilon,temp);
424 DeleteMatrix(&temp);
425 // determine new derivative
426 AddMSparse(fDXDY,1.0,corr);
427 DeleteMatrix(&corr);
428 }
429
430 DeleteMatrix(&AtVyyinv);
431
432 //
433 // get error matrix on x
434 // fDXDY * Vyy * fDXDY#
437
438 DeleteMatrix(&fDXDYVyy);
439
440 //
441 // get result
442 // fA x = fAx
443 //
445
446 //
447 // calculate chi**2 etc
448
449 // chi**2 contribution from (y-Ax)# Vyyinv (y-Ax)
452
453 const Int_t *VyyinvDy_rows=VyyinvDy->GetRowIndexArray();
454 const Double_t *VyyinvDy_data=VyyinvDy->GetMatrixArray();
455 fChi2A=0.0;
456 for(Int_t iy=0;iy<VyyinvDy->GetNrows();iy++) {
457 if(VyyinvDy_rows[iy]<VyyinvDy_rows[iy+1]) {
458 fChi2A += VyyinvDy_data[VyyinvDy_rows[iy]]*dy(iy,0);
459 }
460 }
462 TMatrixDSparse *LsquaredDx=MultiplyMSparseM(lSquared,&dx);
463 const Int_t *LsquaredDx_rows=LsquaredDx->GetRowIndexArray();
464 const Double_t *LsquaredDx_data=LsquaredDx->GetMatrixArray();
465 fLXsquared = 0.0;
466 for(Int_t ix=0;ix<LsquaredDx->GetNrows();ix++) {
467 if(LsquaredDx_rows[ix]<LsquaredDx_rows[ix+1]) {
468 fLXsquared += LsquaredDx_data[LsquaredDx_rows[ix]]*dx(ix,0);
469 }
470 }
471
472 //
473 // get derivative dx/dtau
475
478 Double_t f=0.0;
479 if(temp->GetRowIndexArray()[1]==1) {
480 f=temp->GetMatrixArray()[0]*one_over_epsEeps;
481 } else if(temp->GetRowIndexArray()[1]>1) {
482 Fatal("Unfold",
483 "epsilon#fDXDtauSquared has dimension %d != 1",
484 temp->GetRowIndexArray()[1]);
485 }
486 if(f!=0.0) {
487 AddMSparse(fDXDtauSquared, -f,Eepsilon);
488 }
489 DeleteMatrix(&temp);
490 }
492
493 DeleteMatrix(&LsquaredDx);
494 DeleteMatrix(&lSquared);
495
496 // calculate/store matrices defining the derivatives dx/dA
497 fDXDAM[0]=new TMatrixDSparse(*fE);
498 fDXDAM[1]=new TMatrixDSparse(*fDXDY); // create a copy
499 fDXDAZ[0]=VyyinvDy; // instead of deleting VyyinvDy
500 VyyinvDy=0;
501 fDXDAZ[1]=new TMatrixDSparse(*fX); // create a copy
502
504 // add correction to fDXDAM[0]
506 (Eepsilon,Eepsilon,0);
507 AddMSparse(fDXDAM[0], -one_over_epsEeps,temp1);
508 DeleteMatrix(&temp1);
509 // add correction to fDXDAZ[0]
510 Int_t *rows=new Int_t[GetNy()];
511 Int_t *cols=new Int_t[GetNy()];
512 Double_t *data=new Double_t[GetNy()];
513 for(Int_t i=0;i<GetNy();i++) {
514 rows[i]=i;
515 cols[i]=0;
516 data[i]=lambda_half;
517 }
519 (GetNy(),1,GetNy(),rows,cols,data);
520 delete[] data;
521 delete[] rows;
522 delete[] cols;
523 AddMSparse(fDXDAZ[0],1.0,temp2);
524 DeleteMatrix(&temp2);
525 }
526
527 DeleteMatrix(&Eepsilon);
528
529 rank=0;
531 if(rank != GetNx()) {
532 Warning("DoUnfold","rank of output covariance is %d expect %d",
533 rank,GetNx());
534 }
535
536 TVectorD VxxInvDiag(fVxxInv->GetNrows());
537 const Int_t *VxxInv_rows=fVxxInv->GetRowIndexArray();
538 const Int_t *VxxInv_cols=fVxxInv->GetColIndexArray();
539 const Double_t *VxxInv_data=fVxxInv->GetMatrixArray();
540 for (int ix = 0; ix < fVxxInv->GetNrows(); ix++) {
541 for(int ik=VxxInv_rows[ix];ik<VxxInv_rows[ix+1];ik++) {
542 if(ix==VxxInv_cols[ik]) {
543 VxxInvDiag(ix)=VxxInv_data[ik];
544 }
545 }
546 }
547
548 // maximum global correlation coefficient
549 const Int_t *Vxx_rows=fVxx->GetRowIndexArray();
550 const Int_t *Vxx_cols=fVxx->GetColIndexArray();
551 const Double_t *Vxx_data=fVxx->GetMatrixArray();
552
553 Double_t rho_squared_max = 0.0;
554 Double_t rho_sum = 0.0;
555 Int_t n_rho=0;
556 for (int ix = 0; ix < fVxx->GetNrows(); ix++) {
557 for(int ik=Vxx_rows[ix];ik<Vxx_rows[ix+1];ik++) {
558 if(ix==Vxx_cols[ik]) {
559 Double_t rho_squared =
560 1. - 1. / (VxxInvDiag(ix) * Vxx_data[ik]);
561 if (rho_squared > rho_squared_max)
562 rho_squared_max = rho_squared;
563 if(rho_squared>0.0) {
564 rho_sum += TMath::Sqrt(rho_squared);
565 n_rho++;
566 }
567 break;
568 }
569 }
570 }
571 fRhoMax = TMath::Sqrt(rho_squared_max);
572 fRhoAvg = (n_rho>0) ? (rho_sum/n_rho) : -1.0;
573
574 return fRhoMax;
575}
576
577////////////////////////////////////////////////////////////////////////////////
578/// Create a sparse matrix, given the nonzero elements.
579///
580/// \param[in] nrow number of rows
581/// \param[in] ncol number of columns
582/// \param[in] nel number of non-zero elements
583/// \param[in] row row indexes of non-zero elements
584/// \param[in] col column indexes of non-zero elements
585/// \param[in] data non-zero elements data
586///
587/// return pointer to a new sparse matrix
588///
589/// shortcut to new TMatrixDSparse() followed by SetMatrixArray().
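///
/// For illustration, the equivalent direct construction with ROOT's
/// TMatrixDSparse (a sketch with made-up elements):
/// ~~~
/// Int_t row[]={0,1};
/// Int_t col[]={1,0};
/// Double_t data[]={2.5,-1.0};
/// TMatrixDSparse A(2,2);            // 2x2 matrix, all elements zero
/// A.SetMatrixArray(2,row,col,data); // set the two non-zero elements
/// ~~~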
590
591TMatrixDSparse *TUnfold::CreateSparseMatrix
592(Int_t nrow,Int_t ncol,Int_t nel,Int_t *row,Int_t *col,Double_t *data) const
593{
594 // create a sparse matrix
595 // nrow,ncol : dimension of the matrix
596 // nel: number of non-zero elements
597 // row[nel],col[nel],data[nel] : indices and data of the non-zero elements
598 TMatrixDSparse *A=new TMatrixDSparse(nrow,ncol);
599 if(nel>0) {
600 A->SetMatrixArray(nel,row,col,data);
601 }
602 return A;
603}
604
605////////////////////////////////////////////////////////////////////////////////
606/// Multiply two sparse matrices.
607///
608/// \param[in] a sparse matrix
609/// \param[in] b sparse matrix
610///
611/// returns a new sparse matrix a*b.
612///
613/// A replacement for:
614/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
615/// the root implementation had problems in older versions of root.
616
617TMatrixDSparse *TUnfold::MultiplyMSparseMSparse(const TMatrixDSparse *a,
618 const TMatrixDSparse *b) const
619{
620 if(a->GetNcols()!=b->GetNrows()) {
621 Fatal("MultiplyMSparseMSparse",
622 "inconsistent matrix col/ matrix row %d !=%d",
623 a->GetNcols(),b->GetNrows());
624 }
625
626 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
627 const Int_t *a_rows=a->GetRowIndexArray();
628 const Int_t *a_cols=a->GetColIndexArray();
629 const Double_t *a_data=a->GetMatrixArray();
630 const Int_t *b_rows=b->GetRowIndexArray();
631 const Int_t *b_cols=b->GetColIndexArray();
632 const Double_t *b_data=b->GetMatrixArray();
633 // maximum size of the output matrix
634 int nMax=0;
635 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
636 if(a_rows[irow+1]>a_rows[irow]) nMax += b->GetNcols();
637 }
638 if((nMax>0)&&(a_cols)&&(b_cols)) {
639 Int_t *r_rows=new Int_t[nMax];
640 Int_t *r_cols=new Int_t[nMax];
641 Double_t *r_data=new Double_t[nMax];
642 Double_t *row_data=new Double_t[b->GetNcols()];
643 Int_t n=0;
644 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
645 if(a_rows[irow+1]<=a_rows[irow]) continue;
646 // clear row data
647 for(Int_t icol=0;icol<b->GetNcols();icol++) {
648 row_data[icol]=0.0;
649 }
650 // loop over a-columns in this a-row
651 for(Int_t ia=a_rows[irow];ia<a_rows[irow+1];ia++) {
652 Int_t k=a_cols[ia];
653 // loop over b-columns in b-row k
654 for(Int_t ib=b_rows[k];ib<b_rows[k+1];ib++) {
655 row_data[b_cols[ib]] += a_data[ia]*b_data[ib];
656 }
657 }
658 // store nonzero elements
659 for(Int_t icol=0;icol<b->GetNcols();icol++) {
660 if(row_data[icol] != 0.0) {
661 r_rows[n]=irow;
662 r_cols[n]=icol;
663 r_data[n]=row_data[icol];
664 n++;
665 }
666 }
667 }
668 if(n>0) {
669 r->SetMatrixArray(n,r_rows,r_cols,r_data);
670 }
671 delete[] r_rows;
672 delete[] r_cols;
673 delete[] r_data;
674 delete[] row_data;
675 }
676
677 return r;
678}
679
680////////////////////////////////////////////////////////////////////////////////
681/// Multiply a transposed sparse matrix with another sparse matrix.
682///
683/// \param[in] a sparse matrix (to be transposed)
684/// \param[in] b sparse matrix
685///
686/// returns a new sparse matrix a^{T}*b
687///
688/// this is a replacement for the root constructors
689/// new TMatrixDSparse(TMatrixDSparse(TMatrixDSparse::kTransposed,*a),
690/// TMatrixDSparse::kMult,*b)
691
692TMatrixDSparse *TUnfold::MultiplyMSparseTranspMSparse
693(const TMatrixDSparse *a,const TMatrixDSparse *b) const
694{
695 if(a->GetNrows() != b->GetNrows()) {
696 Fatal("MultiplyMSparseTranspMSparse",
697 "inconsistent matrix row numbers %d!=%d",
698 a->GetNrows(),b->GetNrows());
699 }
700
701 TMatrixDSparse *r=new TMatrixDSparse(a->GetNcols(),b->GetNcols());
702 const Int_t *a_rows=a->GetRowIndexArray();
703 const Int_t *a_cols=a->GetColIndexArray();
704 const Double_t *a_data=a->GetMatrixArray();
705 const Int_t *b_rows=b->GetRowIndexArray();
706 const Int_t *b_cols=b->GetColIndexArray();
707 const Double_t *b_data=b->GetMatrixArray();
708 // maximum size of the output matrix
709
710 // matrix multiplication
711 typedef std::map<Int_t,Double_t> MMatrixRow_t;
712 typedef std::map<Int_t, MMatrixRow_t > MMatrix_t;
713 MMatrix_t matrix;
714
715 for(Int_t iRowAB=0;iRowAB<a->GetNrows();iRowAB++) {
716 for(Int_t ia=a_rows[iRowAB];ia<a_rows[iRowAB+1];ia++) {
717 for(Int_t ib=b_rows[iRowAB];ib<b_rows[iRowAB+1];ib++) {
718 // this creates a new row if necessary
719 MMatrixRow_t &row=matrix[a_cols[ia]];
720 MMatrixRow_t::iterator icol=row.find(b_cols[ib]);
721 if(icol!=row.end()) {
722 // update existing row
723 (*icol).second += a_data[ia]*b_data[ib];
724 } else {
725 // create new row
726 row[b_cols[ib]] = a_data[ia]*b_data[ib];
727 }
728 }
729 }
730 }
731
732 Int_t n=0;
733 for (MMatrix_t::const_iterator irow = matrix.begin(); irow != matrix.end(); ++irow) {
734 n += (*irow).second.size();
735 }
736 if(n>0) {
737 // pack matrix into arrays
738 Int_t *r_rows=new Int_t[n];
739 Int_t *r_cols=new Int_t[n];
740 Double_t *r_data=new Double_t[n];
741 n=0;
742 for (MMatrix_t::const_iterator irow = matrix.begin(); irow != matrix.end(); ++irow) {
743 for (MMatrixRow_t::const_iterator icol = (*irow).second.begin(); icol != (*irow).second.end(); ++icol) {
744 r_rows[n]=(*irow).first;
745 r_cols[n]=(*icol).first;
746 r_data[n]=(*icol).second;
747 n++;
748 }
749 }
750 // pack arrays into TMatrixDSparse
751 if(n>0) {
752 r->SetMatrixArray(n,r_rows,r_cols,r_data);
753 }
754 delete[] r_rows;
755 delete[] r_cols;
756 delete[] r_data;
757 }
758
759 return r;
760}
761
762////////////////////////////////////////////////////////////////////////////////
763/// Multiply sparse matrix and a non-sparse matrix.
764///
765/// \param[in] a sparse matrix
766/// \param[in] b matrix
767///
768/// returns a new sparse matrix a*b.
769/// A replacement for:
770/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
771/// the root implementation had problems in older versions of root.
772
773TMatrixDSparse *TUnfold::MultiplyMSparseM(const TMatrixDSparse *a,
774 const TMatrixD *b) const
775{
776 if(a->GetNcols()!=b->GetNrows()) {
777 Fatal("MultiplyMSparseM","inconsistent matrix col /matrix row %d!=%d",
778 a->GetNcols(),b->GetNrows());
779 }
780
781 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
782 const Int_t *a_rows=a->GetRowIndexArray();
783 const Int_t *a_cols=a->GetColIndexArray();
784 const Double_t *a_data=a->GetMatrixArray();
785 // maximum size of the output matrix
786 int nMax=0;
787 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
788 if(a_rows[irow+1]-a_rows[irow]>0) nMax += b->GetNcols();
789 }
790 if(nMax>0) {
791 Int_t *r_rows=new Int_t[nMax];
792 Int_t *r_cols=new Int_t[nMax];
793 Double_t *r_data=new Double_t[nMax];
794
795 Int_t n=0;
796 // fill matrix r
797 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
798 if(a_rows[irow+1]-a_rows[irow]<=0) continue;
799 for(Int_t icol=0;icol<b->GetNcols();icol++) {
800 r_rows[n]=irow;
801 r_cols[n]=icol;
802 r_data[n]=0.0;
803 for(Int_t i=a_rows[irow];i<a_rows[irow+1];i++) {
804 Int_t j=a_cols[i];
805 r_data[n] += a_data[i]*(*b)(j,icol);
806 }
807 if(r_data[n]!=0.0) n++;
808 }
809 }
810 if(n>0) {
811 r->SetMatrixArray(n,r_rows,r_cols,r_data);
812 }
813 delete[] r_rows;
814 delete[] r_cols;
815 delete[] r_data;
816 }
817 return r;
818}
819
820
821////////////////////////////////////////////////////////////////////////////////
822/// Calculate a sparse matrix product\f$ M1*V*M2^{T} \f$ where the diagonal matrix V is
823/// given by a vector.
824///
825/// \param[in] m1 pointer to sparse matrix with dimension I*K
826/// \param[in] m2 pointer to sparse matrix with dimension J*K
827/// \param[in] v pointer to vector (matrix) with dimension K*1
828///
829/// returns a sparse matrix R with elements
830/// \f$ r_{ij}=\Sigma_{k}M1_{ik}V_{k}M2_{jk} \f$
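///
/// If v is the null pointer, the diagonal matrix V is taken to be the unit
/// matrix, i.e. the product \f$ M1 M2^{T} \f$ is calculated.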
831
832TMatrixDSparse *TUnfold::MultiplyMSparseMSparseTranspVector
833(const TMatrixDSparse *m1,const TMatrixDSparse *m2,
834 const TMatrixTBase<Double_t> *v) const
835{
836 if((m1->GetNcols() != m2->GetNcols())||
837 (v && ((m1->GetNcols()!=v->GetNrows())||(v->GetNcols()!=1)))) {
838 if(v) {
839 Fatal("MultiplyMSparseMSparseTranspVector",
840 "matrix cols/vector rows %d!=%d!=%d or vector rows %d!=1\n",
841 m1->GetNcols(),m2->GetNcols(),v->GetNrows(),v->GetNcols());
842 } else {
843 Fatal("MultiplyMSparseMSparseTranspVector",
844 "matrix cols %d!=%d\n",m1->GetNcols(),m2->GetNcols());
845 }
846 }
847 const Int_t *rows_m1=m1->GetRowIndexArray();
848 const Int_t *cols_m1=m1->GetColIndexArray();
849 const Double_t *data_m1=m1->GetMatrixArray();
850 Int_t num_m1=0;
851 for(Int_t i=0;i<m1->GetNrows();i++) {
852 if(rows_m1[i]<rows_m1[i+1]) num_m1++;
853 }
854 const Int_t *rows_m2=m2->GetRowIndexArray();
855 const Int_t *cols_m2=m2->GetColIndexArray();
856 const Double_t *data_m2=m2->GetMatrixArray();
857 Int_t num_m2=0;
858 for(Int_t j=0;j<m2->GetNrows();j++) {
859 if(rows_m2[j]<rows_m2[j+1]) num_m2++;
860 }
861 const TMatrixDSparse *v_sparse=dynamic_cast<const TMatrixDSparse *>(v);
862 const Int_t *v_rows=0;
863 const Double_t *v_data=0;
864 if(v_sparse) {
865 v_rows=v_sparse->GetRowIndexArray();
866 v_data=v_sparse->GetMatrixArray();
867 }
868 Int_t num_r=num_m1*num_m2+1;
869 Int_t *row_r=new Int_t[num_r];
870 Int_t *col_r=new Int_t[num_r];
871 Double_t *data_r=new Double_t[num_r];
872 num_r=0;
873 for(Int_t i=0;i<m1->GetNrows();i++) {
874 for(Int_t j=0;j<m2->GetNrows();j++) {
875 data_r[num_r]=0.0;
876 Int_t index_m1=rows_m1[i];
877 Int_t index_m2=rows_m2[j];
878 while((index_m1<rows_m1[i+1])&&(index_m2<rows_m2[j+1])) {
879 Int_t k1=cols_m1[index_m1];
880 Int_t k2=cols_m2[index_m2];
881 if(k1<k2) {
882 index_m1++;
883 } else if(k1>k2) {
884 index_m2++;
885 } else {
886 if(v_sparse) {
887 Int_t v_index=v_rows[k1];
888 if(v_index<v_rows[k1+1]) {
889 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
890 * v_data[v_index];
891 }
892 } else if(v) {
893 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2]
894 * (*v)(k1,0);
895 } else {
896 data_r[num_r] += data_m1[index_m1] * data_m2[index_m2];
897 }
898 index_m1++;
899 index_m2++;
900 }
901 }
902 if(data_r[num_r] !=0.0) {
903 row_r[num_r]=i;
904 col_r[num_r]=j;
905 num_r++;
906 }
907 }
908 }
909 TMatrixDSparse *r=CreateSparseMatrix(m1->GetNrows(),m2->GetNrows(),
910 num_r,row_r,col_r,data_r);
911 delete[] row_r;
912 delete[] col_r;
913 delete[] data_r;
914 return r;
915}
916
917////////////////////////////////////////////////////////////////////////////////
918/// Add a sparse matrix, scaled by a factor, to another sparse matrix.
919///
920/// \param[inout] dest destination matrix
921/// \param[in] f scaling factor
922/// \param[in] src matrix to be added to dest
923///
924/// a replacement for
925/// ~~~
926/// (*dest) += f * (*src)
927/// ~~~
928/// which suffered from a bug in old root versions.
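///
/// Elements of the sum which are exactly zero are dropped from the sparse
/// structure; non-finite results abort with a fatal error.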
929
930void TUnfold::AddMSparse(TMatrixDSparse *dest,Double_t f,
931 const TMatrixDSparse *src) const
932{
933 const Int_t *dest_rows=dest->GetRowIndexArray();
934 const Int_t *dest_cols=dest->GetColIndexArray();
935 const Double_t *dest_data=dest->GetMatrixArray();
936 const Int_t *src_rows=src->GetRowIndexArray();
937 const Int_t *src_cols=src->GetColIndexArray();
938 const Double_t *src_data=src->GetMatrixArray();
939
940 if((dest->GetNrows()!=src->GetNrows())||
941 (dest->GetNcols()!=src->GetNcols())) {
942 Fatal("AddMSparse","inconsistent matrix rows %d!=%d OR cols %d!=%d",
943 src->GetNrows(),dest->GetNrows(),src->GetNcols(),dest->GetNcols());
944 }
945 Int_t nmax=dest->GetNrows()*dest->GetNcols();
946 Double_t *result_data=new Double_t[nmax];
947 Int_t *result_rows=new Int_t[nmax];
948 Int_t *result_cols=new Int_t[nmax];
949 Int_t n=0;
950 for(Int_t row=0;row<dest->GetNrows();row++) {
951 Int_t i_dest=dest_rows[row];
952 Int_t i_src=src_rows[row];
953 while((i_dest<dest_rows[row+1])||(i_src<src_rows[row+1])) {
954 Int_t col_dest=(i_dest<dest_rows[row+1]) ?
955 dest_cols[i_dest] : dest->GetNcols();
956 Int_t col_src =(i_src <src_rows[row+1] ) ?
957 src_cols [i_src] : src->GetNcols();
958 result_rows[n]=row;
959 if(col_dest<col_src) {
960 result_cols[n]=col_dest;
961 result_data[n]=dest_data[i_dest++];
962 } else if(col_dest>col_src) {
963 result_cols[n]=col_src;
964 result_data[n]=f*src_data[i_src++];
965 } else {
966 result_cols[n]=col_dest;
967 result_data[n]=dest_data[i_dest++]+f*src_data[i_src++];
968 }
969 if(result_data[n] !=0.0) {
970 if(!TMath::Finite(result_data[n])) {
971 Fatal("AddMSparse","Nan detected %d %d %d",
972 row,i_dest,i_src);
973 }
974 n++;
975 }
976 }
977 }
978 if(n<=0) {
979 n=1;
980 result_rows[0]=0;
981 result_cols[0]=0;
982 result_data[0]=0.0;
983 }
984 dest->SetMatrixArray(n,result_rows,result_cols,result_data);
985 delete[] result_data;
986 delete[] result_rows;
987 delete[] result_cols;
988}
989
990////////////////////////////////////////////////////////////////////////////////
991/// Get the inverse or pseudo-inverse of a positive, sparse matrix.
992///
993/// \param[in] A the sparse matrix to be inverted, has to be positive
994/// \param[inout] rankPtr if zero, suppress calculation of pseudo-inverse
995/// otherwise the rank of the matrix is returned in *rankPtr
996///
997/// return value: 0 or a new sparse matrix
998///
999/// - if(rankPtr==0) return the inverse if it exists, or return 0
1000/// - else return a (pseudo-)inverse and store the rank of the matrix in
1001/// *rankPtr
1002///
1003///
1004/// the matrix inversion is optimized in performance for the case
1005/// where a large submatrix of A is diagonal
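///
/// For orientation, the block decomposition applied below is the standard
/// Schur-complement identity (a sketch; in the code the diagonal block is
/// called D2 and the Schur complement is called F): for a symmetric matrix
/// \f[
/// A=\begin{pmatrix} D & B^{T} \\ B & C \end{pmatrix},\qquad
/// F=C-BD^{-1}B^{T},
/// \f]
/// with D diagonal and invertible, the inverse is
/// \f[
/// A^{-1}=\begin{pmatrix}
/// D^{-1}+D^{-1}B^{T}F^{-1}BD^{-1} & -D^{-1}B^{T}F^{-1} \\
/// -F^{-1}BD^{-1} & F^{-1}
/// \end{pmatrix} .
/// \f]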
1006
1007TMatrixDSparse *TUnfold::InvertMSparseSymmPos
1008(const TMatrixDSparse *A,Int_t *rankPtr) const
1009{
1010
1011 if(A->GetNcols()!=A->GetNrows()) {
1012 Fatal("InvertMSparseSymmPos","inconsistent matrix row/col %d!=%d",
1013 A->GetNcols(),A->GetNrows());
1014 }
1015
1016 Bool_t *isZero=new Bool_t[A->GetNrows()];
1017 const Int_t *a_rows=A->GetRowIndexArray();
1018 const Int_t *a_cols=A->GetColIndexArray();
1019 const Double_t *a_data=A->GetMatrixArray();
1020
1021 // determine diagonal elements
1022 // Aii: diagonals of A
1023 Int_t iDiagonal=0;
1024 Int_t iBlock=A->GetNrows();
1025 Bool_t isDiagonal=kTRUE;
1026 TVectorD aII(A->GetNrows());
1027 Int_t nError=0;
1028 for(Int_t iA=0;iA<A->GetNrows();iA++) {
1029 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1030 Int_t jA=a_cols[indexA];
1031 if(iA==jA) {
1032 if(!(a_data[indexA]>=0.0)) nError++;
1033 aII(iA)=a_data[indexA];
1034 }
1035 }
1036 }
1037 if(nError>0) {
1038 Fatal("InvertMSparseSymmPos",
1039 "Matrix has %d negative elements on the diagonal", nError);
1040 delete[] isZero;
1041 return 0;
1042 }
1043
1044 // reorder matrix such that the largest block of zeros is swapped
1045 // to the lowest indices
1046 // the result of this ordering:
1047 // swap[i] : for index i point to the row in A
1048 // swapBack[iA] : for index iA in A point to the swapped index i
1049 // the indices are grouped into three groups
1050 // 0..iDiagonal-1 : these rows only have diagonal elements
1051 // iDiagonal..iBlock : these rows contain a diagonal block matrix
1052 //    iBlock..nrow-1 : the remaining rows, with off-diagonal elements
1053 Int_t *swap=new Int_t[A->GetNrows()];
1054 for(Int_t j=0;j<A->GetNrows();j++) swap[j]=j;
1055 for(Int_t iStart=0;iStart<iBlock;iStart++) {
1056 Int_t nZero=0;
1057 for(Int_t i=iStart;i<iBlock;i++) {
1058 Int_t iA=swap[i];
1059 Int_t n=A->GetNrows()-(a_rows[iA+1]-a_rows[iA]);
1060 if(n>nZero) {
1061 Int_t tmp=swap[iStart];
1062 swap[iStart]=swap[i];
1063 swap[i]=tmp;
1064 nZero=n;
1065 }
1066 }
1067 for(Int_t i=0;i<A->GetNrows();i++) isZero[i]=kTRUE;
1068 Int_t iA=swap[iStart];
1069 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1070 Int_t jA=a_cols[indexA];
1071 isZero[jA]=kFALSE;
1072 if(iA!=jA) {
1073 isDiagonal=kFALSE;
1074 }
1075 }
1076 if(isDiagonal) {
1077 iDiagonal=iStart+1;
1078 } else {
1079 for(Int_t i=iStart+1;i<iBlock;) {
1080 if(isZero[swap[i]]) {
1081 i++;
1082 } else {
1083 iBlock--;
1084 Int_t tmp=swap[iBlock];
1085 swap[iBlock]=swap[i];
1086 swap[i]=tmp;
1087 }
1088 }
1089 }
1090 }
1091
1092 // for tests uncomment this:
1093 // iBlock=iDiagonal;
1094
1095
1096 // conditioning of the iBlock part
1097#ifdef CONDITION_BLOCK_PART
1098 Int_t nn=A->GetNrows()-iBlock;
1099 for(int inc=(nn+1)/2;inc;inc /= 2) {
1100 for(int i=inc;i<nn;i++) {
1101 int itmp=swap[i+iBlock];
1102 int j;
1103 for(j=i;(j>=inc)&&(aII(swap[j-inc+iBlock])<aII(itmp));j -=inc) {
1104 swap[j+iBlock]=swap[j-inc+iBlock];
1105 }
1106 swap[j+iBlock]=itmp;
1107 }
1108 }
1109#endif
1110 // construct array for swapping back
1111 Int_t *swapBack=new Int_t[A->GetNrows()];
1112 for(Int_t i=0;i<A->GetNrows();i++) {
1113 swapBack[swap[i]]=i;
1114 }
1115#ifdef DEBUG_DETAIL
1116 for(Int_t i=0;i<A->GetNrows();i++) {
1117 std::cout<<" "<<i<<" "<<swap[i]<<" "<<swapBack[i]<<"\n";
1118 }
1119 std::cout<<"after sorting\n";
1120 for(Int_t i=0;i<A->GetNrows();i++) {
1121 if(i==iDiagonal) std::cout<<"iDiagonal="<<i<<"\n";
1122 if(i==iBlock) std::cout<<"iBlock="<<i<<"\n";
1123 std::cout<<" "<<swap[i]<<" "<<aII(swap[i])<<"\n";
1124 }
1125 {
1126 // sanity check:
1127 // (1) sub-matrix swapped[0]..swapped[iDiagonal]
1128 // must not contain off-diagonals
1129 // (2) sub-matrix swapped[0]..swapped[iBlock-1] must be diagonal
1130 Int_t nDiag=0;
1131 Int_t nError1=0;
1132 Int_t nError2=0;
1133 Int_t nBlock=0;
1134 Int_t nNonzero=0;
1135 for(Int_t i=0;i<A->GetNrows();i++) {
1136 Int_t row=swap[i];
1137 for(Int_t indexA=a_rows[row];indexA<a_rows[row+1];indexA++) {
1138 Int_t j=swapBack[a_cols[indexA]];
1139 if(i==j) nDiag++;
1140 else if((i<iDiagonal)||(j<iDiagonal)) nError1++;
1141 else if((i<iBlock)&&(j<iBlock)) nError2++;
1142 else if((i<iBlock)||(j<iBlock)) nBlock++;
1143 else nNonzero++;
1144 }
1145 }
1146 if(nError1+nError2>0) {
1147 Fatal("InvertMSparseSymmPos","sparse matrix analysis failed %d %d %d %d %d",
1148 iDiagonal,iBlock,A->GetNrows(),nError1,nError2);
1149 }
1150 }
1151#endif
1152#ifdef DEBUG
1153 Info("InvertMSparseSymmPos","iDiagonal=%d iBlock=%d nRow=%d",
1154 iDiagonal,iBlock,A->GetNrows());
1155#endif
1156
1157 //=============================================
1158 // matrix inversion starts here
1159 //
1160 // the matrix is split into several parts
1161 // D1 0 0
1162 // A = ( 0 D2 B# )
1163 // 0 B C
1164 //
1165 // D1,D2 are diagonal matrices
1166 //
1167 // first, the D1 part is inverted by calculating 1/D1
1168 // dim(D1)= iDiagonal
1169 // next, the other parts are inverted using Schur complement
1170 //
1171 // 1/D1 0 0
1172 // Ainv = ( 0 E G# )
1173 // 0 G F^-1
1174 //
1175 // where F = C + (-B/D2) B#
1176 // G = (F^-1) (-B/D2)
1177 // E = 1/D2 + (-B#/D2) G
1178 //
1179 // the inverse of F is calculated using a Cholesky decomposition
1180 //
1181 // Error handling:
1182 // (a) if rankPtr==0: user requests inverse
1183 //
1184 // if D1 is not strictly positive, return NULL
1185 // if D2 is not strictly positive, return NULL
1186 // if F is not strictly positive (Cholesky decomposition failed)
1187 // return NULL
1188 // else
1189 // return Ainv as defined above
1190 //
1191 // (b) if rankPtr !=0: user requests pseudo-inverse
1192 // if D2 is not strictly positive or F is not strictly positive
1193 // calculate singular-value decomposition of
1194 // D2 B#
1195 // ( ) = O D O^-1
1196 // B C
1197 // if some eigenvalues are negative, return NULL
1198 // else calculate pseudo-inverse
1199 // *rankPtr = rank(D1)+rank(D)
1200 // else
1201 // calculate pseudo-inverse of D1 (D1_ii==0) ? 0 : 1/D1_ii
1202 // *rankPtr=rank(D1)+nrow(D2)+nrow(C)
1203 // return Ainv as defined above
1204
1205 Double_t *rEl_data=new Double_t[A->GetNrows()*A->GetNrows()];
1206 Int_t *rEl_col=new Int_t[A->GetNrows()*A->GetNrows()];
1207 Int_t *rEl_row=new Int_t[A->GetNrows()*A->GetNrows()];
1208 Int_t rNumEl=0;
1209
1210 //====================================================
1211 // invert D1
1212 Int_t rankD1=0;
1213 for(Int_t i=0;i<iDiagonal;++i) {
1214 Int_t iA=swap[i];
1215 if(aII(iA)>0.0) {
1216 rEl_col[rNumEl]=iA;
1217 rEl_row[rNumEl]=iA;
1218 rEl_data[rNumEl]=1./aII(iA);
1219 ++rankD1;
1220 ++rNumEl;
1221 }
1222 }
1223 if((!rankPtr)&&(rankD1!=iDiagonal)) {
1224 Fatal("InvertMSparseSymmPos",
1225 "diagonal part 1 has rank %d != %d, matrix can not be inverted",
1226 rankD1,iDiagonal);
1227 rNumEl=-1;
1228 }
1229
1230
1231 //====================================================
1232 // invert D2
1233 Int_t nD2=iBlock-iDiagonal;
1234 TMatrixDSparse *D2inv=0;
1235 if((rNumEl>=0)&&(nD2>0)) {
1236 Double_t *D2inv_data=new Double_t[nD2];
1237 Int_t *D2inv_col=new Int_t[nD2];
1238 Int_t *D2inv_row=new Int_t[nD2];
1239 Int_t D2invNumEl=0;
1240 for(Int_t i=0;i<nD2;++i) {
1241 Int_t iA=swap[i+iDiagonal];
1242 if(aII(iA)>0.0) {
1243 D2inv_col[D2invNumEl]=i;
1244 D2inv_row[D2invNumEl]=i;
1245 D2inv_data[D2invNumEl]=1./aII(iA);
1246 ++D2invNumEl;
1247 }
1248 }
1249 if(D2invNumEl==nD2) {
1250 D2inv=CreateSparseMatrix(nD2,nD2,D2invNumEl,D2inv_row,D2inv_col,
1251 D2inv_data);
1252 } else if(!rankPtr) {
1253 Fatal("InvertMSparseSymmPos",
1254 "diagonal part 2 has rank %d != %d, matrix can not be inverted",
1255 D2invNumEl,nD2);
1256 rNumEl=-2;
1257 }
1258 delete [] D2inv_data;
1259 delete [] D2inv_col;
1260 delete [] D2inv_row;
1261 }
1262
1263 //====================================================
1264 // invert F
1265
1266 Int_t nF=A->GetNrows()-iBlock;
1267 TMatrixDSparse *Finv=0;
1268 TMatrixDSparse *B=0;
1269 TMatrixDSparse *minusBD2inv=0;
1270 if((rNumEl>=0)&&(nF>0)&&((nD2==0)||D2inv)) {
1271 // construct matrices F and B
1272 Int_t nFmax=nF*nF;
1273 Double_t epsilonF2=fEpsMatrix;
1274 Double_t *F_data=new Double_t[nFmax];
1275 Int_t *F_col=new Int_t[nFmax];
1276 Int_t *F_row=new Int_t[nFmax];
1277 Int_t FnumEl=0;
1278
1279 Int_t nBmax=nF*(nD2+1);
1280 Double_t *B_data=new Double_t[nBmax];
1281 Int_t *B_col=new Int_t[nBmax];
1282 Int_t *B_row=new Int_t[nBmax];
1283 Int_t BnumEl=0;
1284
1285 for(Int_t i=0;i<nF;i++) {
1286 Int_t iA=swap[i+iBlock];
1287 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1288 Int_t jA=a_cols[indexA];
1289 Int_t j0=swapBack[jA];
1290 if(j0>=iBlock) {
1291 Int_t j=j0-iBlock;
1292 F_row[FnumEl]=i;
1293 F_col[FnumEl]=j;
1294 F_data[FnumEl]=a_data[indexA];
1295 FnumEl++;
1296 } else if(j0>=iDiagonal) {
1297 Int_t j=j0-iDiagonal;
1298 B_row[BnumEl]=i;
1299 B_col[BnumEl]=j;
1300 B_data[BnumEl]=a_data[indexA];
1301 BnumEl++;
1302 }
1303 }
1304 }
1305 TMatrixDSparse *F=0;
1306 if(FnumEl) {
1307#ifndef FORCE_EIGENVALUE_DECOMPOSITION
1308 F=CreateSparseMatrix(nF,nF,FnumEl,F_row,F_col,F_data);
1309#endif
1310 }
1311 delete [] F_data;
1312 delete [] F_col;
1313 delete [] F_row;
1314 if(BnumEl) {
1315 B=CreateSparseMatrix(nF,nD2,BnumEl,B_row,B_col,B_data);
1316 }
1317 delete [] B_data;
1318 delete [] B_col;
1319 delete [] B_row;
1320
1321 if(B && D2inv) {
1322 minusBD2inv=MultiplyMSparseMSparse(B,D2inv);
1323 if(minusBD2inv) {
1324 Int_t mbd2_nMax=minusBD2inv->GetRowIndexArray()
1325 [minusBD2inv->GetNrows()];
1326 Double_t *mbd2_data=minusBD2inv->GetMatrixArray();
1327 for(Int_t i=0;i<mbd2_nMax;i++) {
1328 mbd2_data[i] = - mbd2_data[i];
1329 }
1330 }
1331 }
1332 if(minusBD2inv && F) {
1333 TMatrixDSparse *minusBD2invBt=
1335 AddMSparse(F,1.,minusBD2invBt);
1336 DeleteMatrix(&minusBD2invBt);
1337 }
1338
1339 if(F) {
1340 // cholesky decomposition with preconditioning
1341 const Int_t *f_rows=F->GetRowIndexArray();
1342 const Int_t *f_cols=F->GetColIndexArray();
1343 const Double_t *f_data=F->GetMatrixArray();
1344 // cholesky-type decomposition of F
1345 TMatrixD c(nF,nF);
1346 Int_t nErrorF=0;
1347 for(Int_t i=0;i<nF;i++) {
1348 for(Int_t indexF=f_rows[i];indexF<f_rows[i+1];indexF++) {
1349 if(f_cols[indexF]>=i) c(f_cols[indexF],i)=f_data[indexF];
1350 }
1351 // calculate diagonal element
1352 Double_t c_ii=c(i,i);
1353 for(Int_t j=0;j<i;j++) {
1354 Double_t c_ij=c(i,j);
1355 c_ii -= c_ij*c_ij;
1356 }
1357 if(c_ii<=0.0) {
1358 nErrorF++;
1359 break;
1360 }
1361 c_ii=TMath::Sqrt(c_ii);
1362 c(i,i)=c_ii;
1363 // off-diagonal elements
1364 for(Int_t j=i+1;j<nF;j++) {
1365 Double_t c_ji=c(j,i);
1366 for(Int_t k=0;k<i;k++) {
1367 c_ji -= c(i,k)*c(j,k);
1368 }
1369 c(j,i) = c_ji/c_ii;
1370 }
1371 }
1372 // check condition of dInv
1373 if(!nErrorF) {
1374 Double_t dmin=c(0,0);
1375 Double_t dmax=dmin;
1376 for(Int_t i=1;i<nF;i++) {
1377 dmin=TMath::Min(dmin,c(i,i));
1378 dmax=TMath::Max(dmax,c(i,i));
1379 }
1380#ifdef DEBUG
1381 std::cout<<"dmin,dmax: "<<dmin<<" "<<dmax<<"\n";
1382#endif
1383 if(dmin/dmax<epsilonF2*nF) nErrorF=2;
1384 }
1385 if(!nErrorF) {
1386 // here: F = c c#
1387 // construct inverse of c
1388 TMatrixD cinv(nF,nF);
1389 for(Int_t i=0;i<nF;i++) {
1390 cinv(i,i)=1./c(i,i);
1391 }
1392 for(Int_t i=0;i<nF;i++) {
1393 for(Int_t j=i+1;j<nF;j++) {
1394 Double_t tmp=-c(j,i)*cinv(i,i);
1395 for(Int_t k=i+1;k<j;k++) {
1396 tmp -= cinv(k,i)*c(j,k);
1397 }
1398 cinv(j,i)=tmp*cinv(j,j);
1399 }
1400 }
1401 TMatrixDSparse cInvSparse(cinv);
1403 (&cInvSparse,&cInvSparse);
1404 }
1405 DeleteMatrix(&F);
1406 }
1407 }
1408
1409 // here:
1410 // rNumEl>=0: diagonal part has been inverted
1411 // (nD2==0)||D2inv : D2 part has been inverted or is empty
1412 // (nF==0)||Finv : F part has been inverted or is empty
1413
1414 Int_t rankD2Block=0;
1415 if(rNumEl>=0) {
1416 if( ((nD2==0)||D2inv) && ((nF==0)||Finv) ) {
1417 // all matrix parts have been inverted, compose full matrix
1418 // 1/D1 0 0
1419 // Ainv = ( 0 E G# )
1420 // 0 G F^-1
1421 //
1422 // G = (F^-1) (-B/D2)
1423 // E = 1/D2 + (-B#/D2) G
1424
1425 TMatrixDSparse *G=0;
1426 if(Finv && minusBD2inv) {
1427 G=MultiplyMSparseMSparse(Finv,minusBD2inv);
1428 }
1429 TMatrixDSparse *E=0;
1430 if(D2inv) E=new TMatrixDSparse(*D2inv);
1431 if(G && minusBD2inv) {
1432 TMatrixDSparse *minusBD2invTransG=
1433 MultiplyMSparseTranspMSparse(minusBD2inv,G);
1434 if(E) {
1435 AddMSparse(E,1.,minusBD2invTransG);
1436 DeleteMatrix(&minusBD2invTransG);
1437 } else {
1438 E=minusBD2invTransG;
1439 }
1440 }
1441 // pack matrix E to r
1442 if(E) {
1443 const Int_t *e_rows=E->GetRowIndexArray();
1444 const Int_t *e_cols=E->GetColIndexArray();
1445 const Double_t *e_data=E->GetMatrixArray();
1446 for(Int_t iE=0;iE<E->GetNrows();iE++) {
1447 Int_t iA=swap[iE+iDiagonal];
1448 for(Int_t indexE=e_rows[iE];indexE<e_rows[iE+1];++indexE) {
1449 Int_t jE=e_cols[indexE];
1450 Int_t jA=swap[jE+iDiagonal];
1451 rEl_col[rNumEl]=iA;
1452 rEl_row[rNumEl]=jA;
1453 rEl_data[rNumEl]=e_data[indexE];
1454 ++rNumEl;
1455 }
1456 }
1457 DeleteMatrix(&E);
1458 }
1459 // pack matrix G to r
1460 if(G) {
1461 const Int_t *g_rows=G->GetRowIndexArray();
1462 const Int_t *g_cols=G->GetColIndexArray();
1463 const Double_t *g_data=G->GetMatrixArray();
1464 for(Int_t iG=0;iG<G->GetNrows();iG++) {
1465 Int_t iA=swap[iG+iBlock];
1466 for(Int_t indexG=g_rows[iG];indexG<g_rows[iG+1];++indexG) {
1467 Int_t jG=g_cols[indexG];
1468 Int_t jA=swap[jG+iDiagonal];
1469 // G
1470 rEl_col[rNumEl]=iA;
1471 rEl_row[rNumEl]=jA;
1472 rEl_data[rNumEl]=g_data[indexG];
1473 ++rNumEl;
1474 // G#
1475 rEl_col[rNumEl]=jA;
1476 rEl_row[rNumEl]=iA;
1477 rEl_data[rNumEl]=g_data[indexG];
1478 ++rNumEl;
1479 }
1480 }
1481 DeleteMatrix(&G);
1482 }
1483 if(Finv) {
1484 // pack matrix Finv to r
1485 const Int_t *finv_rows=Finv->GetRowIndexArray();
1486 const Int_t *finv_cols=Finv->GetColIndexArray();
1487 const Double_t *finv_data=Finv->GetMatrixArray();
1488 for(Int_t iF=0;iF<Finv->GetNrows();iF++) {
1489 Int_t iA=swap[iF+iBlock];
1490 for(Int_t indexF=finv_rows[iF];indexF<finv_rows[iF+1];++indexF) {
1491 Int_t jF=finv_cols[indexF];
1492 Int_t jA=swap[jF+iBlock];
1493 rEl_col[rNumEl]=iA;
1494 rEl_row[rNumEl]=jA;
1495 rEl_data[rNumEl]=finv_data[indexF];
1496 ++rNumEl;
1497 }
1498 }
1499 }
1500 rankD2Block=nD2+nF;
1501 } else if(!rankPtr) {
1502 rNumEl=-3;
1503 Fatal("InvertMSparseSymmPos",
1504 "non-trivial part has rank < %d, matrix can not be inverted",
1505 nF);
1506 } else {
1507 // part of the matrix could not be inverted, get eigenvalue
1508 // decomposition
1509 Int_t nEV=nD2+nF;
1510 Double_t epsilonEV2=fEpsMatrix /* *nEV*nEV */;
1511 Info("InvertMSparseSymmPos",
1512 "cholesky-decomposition failed, try eigenvalue analysis");
1513#ifdef DEBUG
1514 std::cout<<"nEV="<<nEV<<" iDiagonal="<<iDiagonal<<"\n";
1515#endif
1516 TMatrixDSym EV(nEV);
1517 for(Int_t i=0;i<nEV;i++) {
1518 Int_t iA=swap[i+iDiagonal];
1519 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1520 Int_t jA=a_cols[indexA];
1521 Int_t j0=swapBack[jA];
1522 if(j0>=iDiagonal) {
1523 Int_t j=j0-iDiagonal;
1524#ifdef DEBUG
1525 if((i<0)||(j<0)||(i>=nEV)||(j>=nEV)) {
1526 std::cout<<" error "<<nEV<<" "<<i<<" "<<j<<"\n";
1527 }
1528#endif
1529 if(!TMath::Finite(a_data[indexA])) {
1530 Fatal("InvertMSparseSymmPos",
1531 "non-finite number detected element %d %d\n",
1532 iA,jA);
1533 }
1534 EV(i,j)=a_data[indexA];
1535 }
1536 }
1537 }
1538 // EV.Print();
1539 TMatrixDSymEigen Eigen(EV);
1540#ifdef DEBUG
1541 std::cout<<"Eigenvalues\n";
1542 // Eigen.GetEigenValues().Print();
1543#endif
1544 Int_t errorEV=0;
1545 Double_t condition=1.0;
1546 if(Eigen.GetEigenValues()(0)<0.0) {
1547 errorEV=1;
1548 } else if(Eigen.GetEigenValues()(0)>0.0) {
1549 condition=Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0);
1550 }
1551 if(condition<-epsilonEV2) {
1552 errorEV=2;
1553 }
1554 if(errorEV) {
1555 if(errorEV==1) {
1556 Error("InvertMSparseSymmPos",
1557 "Largest Eigenvalue is negative %f",
1558 Eigen.GetEigenValues()(0));
1559 } else {
1560 Error("InvertMSparseSymmPos",
1561 "Some Eigenvalues are negative (EV%d/EV0=%g epsilon=%g)",
1562 nEV-1,
1563 Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0),
1564 epsilonEV2);
1565 }
1566 }
1567 // compose inverse matrix
1568 rankD2Block=0;
1569 Double_t setToZero=epsilonEV2*Eigen.GetEigenValues()(0);
1570 TMatrixD inverseEV(nEV,1);
1571 for(Int_t i=0;i<nEV;i++) {
1572 Double_t x=Eigen.GetEigenValues()(i);
1573 if(x>setToZero) {
1574 inverseEV(i,0)=1./x;
1575 ++rankD2Block;
1576 }
1577 }
1580 (&V,&V,&inverseEV);
1581
1582 // pack matrix VDVt to r
1583 const Int_t *vdvt_rows=VDVt->GetRowIndexArray();
1584 const Int_t *vdvt_cols=VDVt->GetColIndexArray();
1585 const Double_t *vdvt_data=VDVt->GetMatrixArray();
1586 for(Int_t iVDVt=0;iVDVt<VDVt->GetNrows();iVDVt++) {
1587 Int_t iA=swap[iVDVt+iDiagonal];
1588 for(Int_t indexVDVt=vdvt_rows[iVDVt];
1589 indexVDVt<vdvt_rows[iVDVt+1];++indexVDVt) {
1590 Int_t jVDVt=vdvt_cols[indexVDVt];
1591 Int_t jA=swap[jVDVt+iDiagonal];
1592 rEl_col[rNumEl]=iA;
1593 rEl_row[rNumEl]=jA;
1594 rEl_data[rNumEl]=vdvt_data[indexVDVt];
1595 ++rNumEl;
1596 }
1597 }
1598 DeleteMatrix(&VDVt);
1599 }
1600 }
1601 if(rankPtr) {
1602 *rankPtr=rankD1+rankD2Block;
1603 }
1604
1605
1606 DeleteMatrix(&D2inv);
1607 DeleteMatrix(&Finv);
1608 DeleteMatrix(&B);
1609 DeleteMatrix(&minusBD2inv);
1610
1611 delete [] swap;
1612 delete [] swapBack;
1613 delete [] isZero;
1614
1615 TMatrixDSparse *r=(rNumEl>=0) ?
1616 CreateSparseMatrix(A->GetNrows(),A->GetNrows(),rNumEl,
1617 rEl_row,rEl_col,rEl_data) : 0;
1618 delete [] rEl_data;
1619 delete [] rEl_col;
1620 delete [] rEl_row;
1621
1622#ifdef DEBUG_DETAIL
1623 // sanity test
1624 if(r) {
1628
1629 TMatrixD ar(*Ar);
1630 TMatrixD a(*A);
1631 TMatrixD ara(*ArA);
1632 TMatrixD R(*r);
1633 TMatrixD rar(*rAr);
1634
1635 DeleteMatrix(&Ar);
1636 DeleteMatrix(&ArA);
1637 DeleteMatrix(&rAr);
1638
1639 Double_t epsilonA2=fEpsMatrix /* *a.GetNrows()*a.GetNcols() */;
1640 for(Int_t i=0;i<a.GetNrows();i++) {
1641 for(Int_t j=0;j<a.GetNcols();j++) {
1642 // ar should be symmetric
1643 if(TMath::Abs(ar(i,j)-ar(j,i))>
1644 epsilonA2*(TMath::Abs(ar(i,j))+TMath::Abs(ar(j,i)))) {
1645 std::cout<<"Ar is not symmetric Ar("<<i<<","<<j<<")="<<ar(i,j)
1646 <<" Ar("<<j<<","<<i<<")="<<ar(j,i)<<"\n";
1647 }
1648 // ara should be equal a
1649 if(TMath::Abs(ara(i,j)-a(i,j))>
1650 epsilonA2*(TMath::Abs(ara(i,j))+TMath::Abs(a(i,j)))) {
1651 std::cout<<"ArA is not equal A ArA("<<i<<","<<j<<")="<<ara(i,j)
1652 <<" A("<<i<<","<<j<<")="<<a(i,j)<<"\n";
1653 }
1654 // ara should be equal a
1655 if(TMath::Abs(rar(i,j)-R(i,j))>
1656 epsilonA2*(TMath::Abs(rar(i,j))+TMath::Abs(R(i,j)))) {
1657 std::cout<<"rAr is not equal r rAr("<<i<<","<<j<<")="<<rar(i,j)
1658 <<" r("<<i<<","<<j<<")="<<R(i,j)<<"\n";
1659 }
1660 }
1661 }
1662 if(rankPtr) std::cout<<"rank="<<*rankPtr<<"\n";
1663 } else {
1664 std::cout<<"Matrix is not positive\n";
1665 }
1666#endif
1667 return r;
1668
1669}
1670
1671////////////////////////////////////////////////////////////////////////////////
1672/// Get bin name of an output bin.
1673///
1674/// \param[in] iBinX bin number
1675///
1676/// Return value: name of the bin
1677///
1678/// For TUnfold and TUnfoldSys, this function simply returns the bin
1679/// number as a string. This function really only makes sense in the
1680/// context of TUnfoldDensity, where binning schemes are implemented
1681/// using the class TUnfoldBinning, and non-trivial bin names are
1682/// returned.
1683
1684TString TUnfold::GetOutputBinName(Int_t iBinX) const
1685{
1686 return TString::Format("#%d",iBinX);
1687}
1688
1689////////////////////////////////////////////////////////////////////////////////
1690/// Set up response matrix and regularisation scheme.
1691///
1692/// \param[in] hist_A matrix of MC events that describes the migrations
1693/// \param[in] histmap mapping of the histogram axes
1694/// \param[in] regmode (default=kRegModeSize) global regularisation mode
1695/// \param[in] constraint (default=kEConstraintArea) type of constraint
1696///
1697/// Treatment of overflow bins in the matrix hist_A
1698///
1699/// - Events reconstructed in underflow or overflow bins are counted
1700/// as inefficiency. They have to be filled properly.
1701/// - Events where the truth level is in underflow or overflow bins are
1702/// treated as a part of the generator level distribution.
1703/// The full truth level distribution (including underflow and
1704/// overflow) is unfolded.
1705///
1706/// If unsure, do the following:
1707///
1708/// - store events where the truth is in underflow or overflow
1709/// (sometimes called "fakes") in a separate TH1. Ensure that the
1710/// truth-level underflow and overflow bins of hist_A are all zero.
1711/// - the fakes are background to the
1712/// measurement. Use the classes TUnfoldSys and TUnfoldDensity instead
1713/// of the plain TUnfold for subtracting background.
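///
/// A minimal sketch of filling hist_A in a toy Monte Carlo loop
/// (the binning variables, gen, rec and the flag isReconstructed are
/// illustrative; the truth is mapped to the x axis, as assumed by
/// kHistMapOutputHoriz):
/// ~~~
/// TH2D *histMC=new TH2D("A",";truth;reconstructed",
///                       nGen,genMin,genMax,nRec,recMin,recMax);
/// for(Int_t i=0;i<nEvent;i++) {
///    // ... generate gen, simulate rec and isReconstructed ...
///    if(isReconstructed) {
///       histMC->Fill(gen,rec);        // migration truth -> reconstructed
///    } else {
///       histMC->Fill(gen,recMin-1.);  // lost event: fill the reconstructed
///                                     // underflow bin (inefficiency)
///    }
/// }
/// TUnfold unfold(histMC,TUnfold::kHistMapOutputHoriz);
/// ~~~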
1714
1715TUnfold::TUnfold(const TH2 *hist_A, EHistMap histmap, ERegMode regmode,
1716 EConstraint constraint)
1717{
1718 // data members initialized to something different from zero:
1719 // fA: filled from hist_A
1720 // fDA: filled from hist_A
1721 // fX0: filled from hist_A
1722 // fL: filled depending on the regularisation scheme
1723 InitTUnfold();
1724 SetConstraint(constraint);
1725 Int_t nx0, nx, ny;
1726 if (histmap == kHistMapOutputHoriz) {
1727 // include underflow and overflow bins of the X (truth) axis
1728 nx0 = hist_A->GetNbinsX()+2;
1729 ny = hist_A->GetNbinsY();
1730 } else {
1731 // include underflow and overflow bins of the Y (truth) axis
1732 nx0 = hist_A->GetNbinsY()+2;
1733 ny = hist_A->GetNbinsX();
1734 }
1735 nx = 0;
1736 // fNx is expected to be nx0, but the input matrix may be ill-formed
1737 // -> all columns with zero events have to be removed
1738 // (because y does not contain any information on that bin in x)
1739 fSumOverY.Set(nx0);
1740 fXToHist.Set(nx0);
1741 fHistToX.Set(nx0);
1742 Int_t nonzeroA=0;
1743 // loop
1744 // - calculate bias distribution
1745 // sum_over_y
1746 // - count those generator bins which can be unfolded
1747 // fNx
1748 // - histogram bins are added to the lookup-tables
1749 // fHistToX[] and fXToHist[]
1750 // these convert histogram bins to matrix indices and vice versa
1751 // - the number of non-zero elements is counted
1752 // nonzeroA
1753 Int_t nskipped=0;
1754 for (Int_t ix = 0; ix < nx0; ix++) {
1755 // calculate sum over all detector bins
1756 // excluding the overflow bins
1757 Double_t sum = 0.0;
1758 Int_t nonZeroY = 0;
1759 for (Int_t iy = 0; iy < ny; iy++) {
1760 Double_t z;
1761 if (histmap == kHistMapOutputHoriz) {
1762 z = hist_A->GetBinContent(ix, iy + 1);
1763 } else {
1764 z = hist_A->GetBinContent(iy + 1, ix);
1765 }
1766 if (z != 0.0) {
1767 nonzeroA++;
1768 nonZeroY++;
1769 sum += z;
1770 }
1771 }
1772 // check whether there is any sensitivity to this generator bin
1773
1774 if (nonZeroY) {
1775 // update mapping tables to convert bin number to matrix index
1776 fXToHist[nx] = ix;
1777 fHistToX[ix] = nx;
1778 // add overflow bins -> important to keep track of the
1779 // non-reconstructed events
1780 fSumOverY[nx] = sum;
1781 if (histmap == kHistMapOutputHoriz) {
1782 fSumOverY[nx] +=
1783 hist_A->GetBinContent(ix, 0) +
1784 hist_A->GetBinContent(ix, ny + 1);
1785 } else {
1786 fSumOverY[nx] +=
1787 hist_A->GetBinContent(0, ix) +
1788 hist_A->GetBinContent(ny + 1, ix);
1789 }
1790 nx++;
1791 } else {
1792 nskipped++;
1793 // histogram bin pointing to -1 (non-existing matrix entry)
1794 fHistToX[ix] = -1;
1795 }
1796 }
1797 Int_t underflowBin=0,overflowBin=0;
1798 for (Int_t ix = 0; ix < nx; ix++) {
1799 Int_t ibinx = fXToHist[ix];
1800 if(ibinx<1) underflowBin=1;
1801 if (histmap == kHistMapOutputHoriz) {
1802 if(ibinx>hist_A->GetNbinsX()) overflowBin=1;
1803 } else {
1804 if(ibinx>hist_A->GetNbinsY()) overflowBin=1;
1805 }
1806 }
1807 if(nskipped) {
1808 Int_t nprint=0;
1809 Int_t ixfirst=-1,ixlast=-1;
1810 TString binlist;
1811 int nDisconnected=0;
1812 for (Int_t ix = 0; ix < nx0; ix++) {
1813 if(fHistToX[ix]<0) {
1814 nprint++;
1815 if(ixlast<0) {
1816 binlist +=" ";
1817 binlist +=ix;
1818 ixfirst=ix;
1819 }
1820 ixlast=ix;
1821 nDisconnected++;
1822 }
1823 if(((fHistToX[ix]>=0)&&(ixlast>=0))||
1824 (nprint==nskipped)) {
1825 if(ixlast>ixfirst) {
1826 binlist += "-";
1827 binlist += ixlast;
1828 }
1829 ixfirst=-1;
1830 ixlast=-1;
1831 }
1832 if(nprint==nskipped) {
1833 break;
1834 }
1835 }
1836 if(nskipped==(2-underflowBin-overflowBin)) {
1837 Info("TUnfold","underflow and overflow bin "
1838 "do not depend on the input data");
1839 } else {
1840 Warning("TUnfold","%d output bins "
1841 "do not depend on the input data %s",nDisconnected,
1842 static_cast<const char *>(binlist));
1843 }
1844 }
1845 // store bias as matrix
1846 fX0 = new TMatrixD(nx, 1, fSumOverY.GetArray());
1847 // store non-zero elements in sparse matrix fA
1848 // construct sparse matrix fA
1849 Int_t *rowA = new Int_t[nonzeroA];
1850 Int_t *colA = new Int_t[nonzeroA];
1851 Double_t *dataA = new Double_t[nonzeroA];
1852 Int_t index=0;
1853 for (Int_t iy = 0; iy < ny; iy++) {
1854 for (Int_t ix = 0; ix < nx; ix++) {
1855 Int_t ibinx = fXToHist[ix];
1856 Double_t z;
1857 if (histmap == kHistMapOutputHoriz) {
1858 z = hist_A->GetBinContent(ibinx, iy + 1);
1859 } else {
1860 z = hist_A->GetBinContent(iy + 1, ibinx);
1861 }
1862 if (z != 0.0) {
1863 rowA[index]=iy;
1864 colA[index] = ix;
1865 dataA[index] = z / fSumOverY[ix];
1866 index++;
1867 }
1868 }
1869 }
1870 if(underflowBin && overflowBin) {
1871 Info("TUnfold","%d input bins and %d output bins (includes 2 underflow/overflow bins)",ny,nx);
1872 } else if(underflowBin) {
1873 Info("TUnfold","%d input bins and %d output bins (includes 1 underflow bin)",ny,nx);
1874 } else if(overflowBin) {
1875 Info("TUnfold","%d input bins and %d output bins (includes 1 overflow bin)",ny,nx);
1876 } else {
1877 Info("TUnfold","%d input bins and %d output bins",ny,nx);
1878 }
1879 fA = CreateSparseMatrix(ny,nx,index,rowA,colA,dataA);
1880 if(ny<nx) {
1881 Error("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1882 } else if(ny==nx) {
1883 Warning("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1884 }
1885 delete[] rowA;
1886 delete[] colA;
1887 delete[] dataA;
1888 // regularisation conditions squared
1889 fL = new TMatrixDSparse(1, GetNx());
1890 if (regmode != kRegModeNone) {
1891 Int_t nError=RegularizeBins(0, 1, nx0, regmode);
1892 if(nError>0) {
1893 if(nError>1) {
1894 Info("TUnfold",
1895 "%d regularisation conditions have been skipped",nError);
1896 } else {
1897 Info("TUnfold",
1898 "One regularisation condition has been skipped");
1899 }
1900 }
1901 }
1902}
1903
1904////////////////////////////////////////////////////////////////////////////////
1905/// Set bias vector.
1906///
1907/// \param[in] bias histogram with new bias vector
1908///
1909/// the initial bias vector is determined from the response matrix
1910/// but may be changed by using this method
1911
1912void TUnfold::SetBias(const TH1 *bias)
1913{
1914 DeleteMatrix(&fX0);
1915 fX0 = new TMatrixD(GetNx(), 1);
1916 for (Int_t i = 0; i < GetNx(); i++) {
1917 (*fX0) (i, 0) = bias->GetBinContent(fXToHist[i]);
1918 }
1919}
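// --- editorial example, not part of TUnfold.cxx -----------------------------
// A minimal sketch of how SetBias() might be used; "histResponse" and
// "histBias" are assumed user-booked histograms with the same truth binning:
//
//    TUnfold unfold(histResponse,TUnfold::kHistMapOutputHoriz);
//    unfold.SetBias(histBias); // replace the default bias taken from the response matrix
// -----------------------------------------------------------------------------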
1920
1921////////////////////////////////////////////////////////////////////////////////
1922/// Add a row of regularisation conditions to the matrix L.
1923///
1924/// \param[in] i0 truth histogram bin number
1925/// \param[in] f0 entry in the matrix L, column i0
1926/// \param[in] i1 truth histogram bin number
1927/// \param[in] f1 entry in the matrix L, column i1
1928/// \param[in] i2 truth histogram bin number
1929/// \param[in] f2 entry in the matrix L, column i2
1930///
1931/// the arguments are used to form one row (k) of the matrix L, where
1932/// \f$ L_{k,i0}=f0 \f$ and \f$ L_{k,i1}=f1 \f$ and \f$ L_{k,i2}=f2 \f$
1933/// negative indexes i0,i1,i2 are ignored.
1934
1935 Bool_t TUnfold::AddRegularisationCondition
1936 (Int_t i0,Double_t f0,Int_t i1,Double_t f1,Int_t i2,Double_t f2)
1937{
1938 Int_t indices[3];
1939 Double_t data[3];
1940 Int_t nEle=0;
1941
1942 if(i2>=0) {
1943 data[nEle]=f2;
1944 indices[nEle]=i2;
1945 nEle++;
1946 }
1947 if(i1>=0) {
1948 data[nEle]=f1;
1949 indices[nEle]=i1;
1950 nEle++;
1951 }
1952 if(i0>=0) {
1953 data[nEle]=f0;
1954 indices[nEle]=i0;
1955 nEle++;
1956 }
1957 return AddRegularisationCondition(nEle,indices,data);
1958}
1959
1960////////////////////////////////////////////////////////////////////////////////
1961/// Add a row of regularisation conditions to the matrix L.
1962///
1963/// \param[in] nEle number of valid entries in indices and rowData
1964/// \param[in] indices column numbers of L to fill
1965/// \param[in] rowData data to fill into the new row of L
1966///
1967/// returns true if a row was added, false otherwise
1968///
1969 /// A new row k is added to the matrix L and its dimension is expanded.
1970 /// The new elements \f$ L_{ki} \f$ are filled from the array rowData[],
1971 /// where the indices i are taken from the array indices[].
1972
1973 Bool_t TUnfold::AddRegularisationCondition
1974 (Int_t nEle,const Int_t *indices,const Double_t *rowData)
1975{
1976 Bool_t r=kTRUE;
1977 const Int_t *l0_rows=fL->GetRowIndexArray();
1978 const Int_t *l0_cols=fL->GetColIndexArray();
1979 const Double_t *l0_data=fL->GetMatrixArray();
1980
1981 Int_t nF=l0_rows[fL->GetNrows()]+nEle;
1982 Int_t *l_row=new Int_t[nF];
1983 Int_t *l_col=new Int_t[nF];
1984 Double_t *l_data=new Double_t[nF];
1985 // decode original matrix
1986 nF=0;
1987 for(Int_t row=0;row<fL->GetNrows();row++) {
1988 for(Int_t k=l0_rows[row];k<l0_rows[row+1];k++) {
1989 l_row[nF]=row;
1990 l_col[nF]=l0_cols[k];
1991 l_data[nF]=l0_data[k];
1992 nF++;
1993 }
1994 }
1995
1996 // if the original matrix does not have any data, reset its row count
1997 Int_t rowMax=0;
1998 if(nF>0) {
1999 rowMax=fL->GetNrows();
2000 }
2001
2002 // add the new regularisation conditions
2003 for(Int_t i=0;i<nEle;i++) {
2004 Int_t col=fHistToX[indices[i]];
2005 if(col<0) {
2006 r=kFALSE;
2007 break;
2008 }
2009 l_row[nF]=rowMax;
2010 l_col[nF]=col;
2011 l_data[nF]=rowData[i];
2012 nF++;
2013 }
2014
2015 // replace the old matrix fL
2016 if(r) {
2017 DeleteMatrix(&fL);
2018 fL=CreateSparseMatrix(rowMax+1,GetNx(),nF,l_row,l_col,l_data);
2019 }
2020 delete [] l_row;
2021 delete [] l_col;
2022 delete [] l_data;
2023 return r;
2024}
2025
2026////////////////////////////////////////////////////////////////////////////////
2027/// Add a regularisation condition on the magnitude of a truth bin.
2028///
2029/// \param[in] bin bin number
2030/// \param[in] scale (default=1) scale factor
2031///
2032/// this adds one row to L, where the element <b>bin</b> takes the
2033/// value <b>scale</b>
2034///
2035/// return value: 0 if ok, 1 if the condition has not been
2036/// added. Conditions which are not added typically correspond to bin
2037/// numbers where the truth can not be unfolded (either response
2038/// matrix is empty or the data do not constrain).
2039///
2040/// The RegularizeXXX() methods can be used to set up a custom matrix
2041/// of regularisation conditions. In this case, start with an empty
2042/// matrix L (argument regmode=kRegModeNone in the constructor)
2043
2045{
2046 // add regularisation on the size of bin i
2047 // bin: bin number
2048 // scale: size of the regularisation
2049 // return value: number of conditions which have been skipped
2050 // modifies data member fL
2051
2054
2055 return AddRegularisationCondition(bin,scale) ? 0 : 1;
2056}
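// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch of a custom regularisation set-up: construct with an empty matrix L
// (kRegModeNone) and add one size condition per truth bin by hand;
// "histResponse" is an assumed response matrix with the truth axis along x:
//
//    TUnfold unfold(histResponse,TUnfold::kHistMapOutputHoriz,
//                   TUnfold::kRegModeNone);
//    for(Int_t bin=1;bin<=histResponse->GetNbinsX();bin++) {
//       unfold.RegularizeSize(bin); // adds a row of L with L(k,bin)=1
//    }
// -----------------------------------------------------------------------------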
2057
2058////////////////////////////////////////////////////////////////////////////////
2059 /// Add a regularisation condition on the difference of two truth bins.
2060///
2061/// \param[in] left_bin bin number
2062/// \param[in] right_bin bin number
2063/// \param[in] scale (default=1) scale factor
2064///
2065/// this adds one row to L, where the element <b>left_bin</b> takes the
2066/// value <b>-scale</b> and the element <b>right_bin</b> takes the
2067/// value <b>+scale</b>
2068///
2069/// return value: 0 if ok, 1 if the condition has not been
2070/// added. Conditions which are not added typically correspond to bin
2071/// numbers where the truth can not be unfolded (either response
2072/// matrix is empty or the data do not constrain).
2073///
2074/// The RegularizeXXX() methods can be used to set up a custom matrix
2075/// of regularisation conditions. In this case, start with an empty
2076/// matrix L (argument regmode=kRegModeNone in the constructor)
2077
2078Int_t TUnfold::RegularizeDerivative(int left_bin, int right_bin,
2079 Double_t scale)
2080{
2081 // add regularisation on the difference of two bins
2082 // left_bin: 1st bin
2083 // right_bin: 2nd bin
2084 // scale: size of the regularisation
2085 // return value: number of conditions which have been skipped
2086 // modifies data member fL
2087
2090
2091 return AddRegularisationCondition(left_bin,-scale,right_bin,scale) ? 0 : 1;
2092}
2093
2094////////////////////////////////////////////////////////////////////////////////
2095 /// Add a regularisation condition on the curvature of three truth bins.
2096///
2097/// \param[in] left_bin bin number
2098/// \param[in] center_bin bin number
2099/// \param[in] right_bin bin number
2100/// \param[in] scale_left (default=1) scale factor
2101/// \param[in] scale_right (default=1) scale factor
2102///
2103/// this adds one row to L, where the element <b>left_bin</b> takes the
2104/// value <b>-scale_left</b>, the element <b>right_bin</b> takes the
2105/// value <b>-scale_right</b> and the element <b>center_bin</b> takes
2106/// the value <b>scale_left+scale_right</b>
2107///
2108/// return value: 0 if ok, 1 if the condition has not been
2109/// added. Conditions which are not added typically correspond to bin
2110/// numbers where the truth can not be unfolded (either response
2111/// matrix is empty or the data do not constrain).
2112///
2113/// The RegularizeXXX() methods can be used to set up a custom matrix
2114/// of regularisation conditions. In this case, start with an empty
2115/// matrix L (argument regmode=kRegModeNone in the constructor)
2116
2117Int_t TUnfold::RegularizeCurvature(int left_bin, int center_bin,
2118 int right_bin,
2119 Double_t scale_left,
2120 Double_t scale_right)
2121{
2122 // add regularisation on the curvature through 3 bins (2nd derivative)
2123 // left_bin: 1st bin
2124 // center_bin: 2nd bin
2125 // right_bin: 3rd bin
2126 // scale_left: scale factor on center-left difference
2127 // scale_right: scale factor on right-center difference
2128 // return value: number of conditions which have been skipped
2129 // modifies data member fL
2130
2133
2134 return AddRegularisationCondition
2135 (left_bin,-scale_left,
2136 center_bin,scale_left+scale_right,
2137 right_bin,-scale_right)
2138 ? 0 : 1;
2139}
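// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch: curvature conditions on adjacent truth bins of an assumed TUnfold
// instance "unfold"; with the default scales the row of L reads (-1,+2,-1)
// in the columns of bins 2,3,4:
//
//    unfold.RegularizeCurvature(2,3,4);
//    unfold.RegularizeCurvature(3,4,5,1.0,2.0); // asymmetric scales, e.g. unequal bin widths
// -----------------------------------------------------------------------------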
2140
2141////////////////////////////////////////////////////////////////////////////////
2142/// Add regularisation conditions for a group of bins.
2143///
2144/// \param[in] start first bin number
2145/// \param[in] step step size
2146/// \param[in] nbin number of bins
2147/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2148/// kRegModeDerivative, kRegModeCurvature)
2149///
2150/// add regularisation conditions for a group of equidistant
2151/// bins. There are <b>nbin</b> bins, starting with bin <b>start</b>
2152/// and with a distance of <b>step</b> between bins.
2153///
2154/// Return value: number of regularisation conditions which could not
2155/// be added.
2156///
2157/// Conditions which are not added typically correspond to bin
2158/// numbers where the truth can not be unfolded (either response
2159/// matrix is empty or the data do not constrain).
2160
2161Int_t TUnfold::RegularizeBins(int start, int step, int nbin,
2162 ERegMode regmode)
2163{
2164 // set regularisation on a 1-dimensional curve
2165 // start: first bin
2166 // step: distance between neighbouring bins
2167 // nbin: total number of bins
2168 // regmode: regularisation mode
2169 // return value:
2170 // number of errors (i.e. conditions which have been skipped)
2171 // modifies data member fL
2172
2173 Int_t i0, i1, i2;
2174 i0 = start;
2175 i1 = i0 + step;
2176 i2 = i1 + step;
2177 Int_t nSkip = 0;
2178 Int_t nError= 0;
2179 if (regmode == kRegModeDerivative) {
2180 nSkip = 1;
2181 } else if (regmode == kRegModeCurvature) {
2182 nSkip = 2;
2183 } else if(regmode != kRegModeSize) {
2184 Error("RegularizeBins","regmode = %d is not valid",regmode);
2185 }
2186 for (Int_t i = nSkip; i < nbin; i++) {
2187 if (regmode == kRegModeSize) {
2188 nError += RegularizeSize(i0);
2189 } else if (regmode == kRegModeDerivative) {
2190 nError += RegularizeDerivative(i0, i1);
2191 } else if (regmode == kRegModeCurvature) {
2192 nError += RegularizeCurvature(i0, i1, i2);
2193 }
2194 i0 = i1;
2195 i1 = i2;
2196 i2 += step;
2197 }
2198 return nError;
2199}
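// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch: curvature conditions on a group of equidistant truth bins, starting
// at bin 1 with step 1; "nTruthBins" is an assumed bin count and "unfold" an
// assumed TUnfold instance:
//
//    Int_t nSkipped=unfold.RegularizeBins(1,1,nTruthBins,
//                                         TUnfold::kRegModeCurvature);
//    if(nSkipped>0) { /* some conditions could not be added */ }
// -----------------------------------------------------------------------------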
2200
2201////////////////////////////////////////////////////////////////////////////////
2202/// Add regularisation conditions for 2d unfolding.
2203///
2204/// \param[in] start_bin first bin number
2205/// \param[in] step1 step size, 1st dimension
2206/// \param[in] nbin1 number of bins, 1st dimension
2207/// \param[in] step2 step size, 2nd dimension
2208/// \param[in] nbin2 number of bins, 2nd dimension
2209/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2210/// kRegModeDerivative, kRegModeCurvature)
2211///
2212/// add regularisation conditions for a grid of bins. The start bin is
2213/// <b>start_bin</b>. Along the first (second) dimension, there are
2214/// <b>nbin1</b> (<b>nbin2</b>) bins and adjacent bins are spaced by
2215/// <b>step1</b> (<b>step2</b>) units.
2216///
2217/// Return value: number of regularisation conditions which could not
2218/// be added. Conditions which are not added typically correspond to bin
2219/// numbers where the truth can not be unfolded (either response
2220/// matrix is empty or the data do not constrain).
2221
2222Int_t TUnfold::RegularizeBins2D(int start_bin, int step1, int nbin1,
2223 int step2, int nbin2, ERegMode regmode)
2224{
2225 // set regularisation on a 2-dimensional grid of bins
2226 // start_bin: first bin
2227 // step1: distance between bins in 1st direction
2228 // nbin1: number of bins in 1st direction
2229 // step2: distance between bins in 2nd direction
2230 // nbin2: number of bins in 2nd direction
2231 // return value:
2232 // number of errors (i.e. conditions which have been skipped)
2233 // modifies data member fL
2234
2235 Int_t nError = 0;
2236 for (Int_t i1 = 0; i1 < nbin1; i1++) {
2237 nError += RegularizeBins(start_bin + step1 * i1, step2, nbin2, regmode);
2238 }
2239 for (Int_t i2 = 0; i2 < nbin2; i2++) {
2240 nError += RegularizeBins(start_bin + step2 * i2, step1, nbin1, regmode);
2241 }
2242 return nError;
2243}
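// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch: a two-dimensional truth distribution with n1 x n2 bins unrolled onto
// one axis as bin = 1 + i1 + n1*i2 (an assumed mapping); along the first
// dimension neighbours are 1 bin apart, along the second n1 bins apart:
//
//    unfold.RegularizeBins2D(1,1,n1,n1,n2,TUnfold::kRegModeCurvature);
// -----------------------------------------------------------------------------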
2244
2245////////////////////////////////////////////////////////////////////////////////
2246/// Perform the unfolding for a given input and regularisation.
2247///
2248/// \param[in] tau_reg regularisation parameter
2249/// \param[in] input input distribution with uncertainties
2250/// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2251///
2252/// This is a shortcut for `{ SetInput(input,scaleBias); DoUnfold(tau); }`
2253///
2254/// Data members required:
2255/// - fA, fX0, fL
2256/// Data members modified:
2257/// - those documented in SetInput()
2258/// and those documented in DoUnfold(Double_t)
2259/// Return value:
2260/// - maximum global correlation coefficient
2261/// NOTE!!! return value >=1.0 means error, and the result is junk
2262///
2263/// Overflow bins of the input distribution are ignored!
2264
2265 Double_t TUnfold::DoUnfold(Double_t tau_reg,const TH1 *input,
2266 Double_t scaleBias)
2267{
2268
2269 SetInput(input,scaleBias);
2270 return DoUnfold(tau_reg);
2271}
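// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch of the shortcut: unfold an assumed measured histogram "histData"
// with a fixed regularisation parameter and check the return value:
//
//    Double_t rhoMax=unfold.DoUnfold(1.e-4,histData);
//    if(rhoMax>=1.0) { /* error: the result should not be used */ }
// -----------------------------------------------------------------------------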
2272
2273////////////////////////////////////////////////////////////////////////////////
2274/// Define input data for subsequent calls to DoUnfold(tau).
2275///
2276/// \param[in] input input distribution with uncertainties
2277/// \param[in] scaleBias (default=0) scale factor applied to the bias
2278/// \param[in] oneOverZeroError (default=0) for bins with zero error, this number defines 1/error.
2279/// \param[in] hist_vyy (default=0) if non-zero, this defines the data covariance matrix
2280/// \param[in] hist_vyy_inv (default=0) if non-zero and hist_vyy is
2281/// set, defines the inverse of the data covariance matrix. This
2282/// feature can be useful for repeated unfoldings in cases where the
2283/// inversion of the input covariance matrix is lengthy
2284///
2285/// Return value: nError1+10000*nError2
2286///
2287/// - nError1: number of bins where the uncertainty is zero.
2288/// these bins either are not used for the unfolding (if
2289/// oneOverZeroError==0) or 1/uncertainty is set to oneOverZeroError.
2290/// - nError2: return values>10000 are fatal errors, because the
2291/// unfolding can not be done. The number nError2 corresponds to the
2292/// number of truth bins which are not constrained by data points.
2293///
2294/// Data members modified:
2295 /// - fY, fVyy, fBiasScale
2296/// Data members cleared
2297/// - fVyyInv, fNdf
2298/// - + see ClearResults
2299
2300Int_t TUnfold::SetInput(const TH1 *input, Double_t scaleBias,
2301 Double_t oneOverZeroError,const TH2 *hist_vyy,
2302 const TH2 *hist_vyy_inv)
2303{
2305 fNdf=0;
2306
2307 fBiasScale = scaleBias;
2308
2309 // delete old results (if any)
2310 ClearResults();
2311
2312 // construct error matrix and inverted error matrix of measured quantities
2313 // from errors of input histogram or use error matrix
2314
2315 Int_t *rowVyyN=new Int_t[GetNy()*GetNy()+1];
2316 Int_t *colVyyN=new Int_t[GetNy()*GetNy()+1];
2317 Double_t *dataVyyN=new Double_t[GetNy()*GetNy()+1];
2318
2319 Int_t *rowVyy1=new Int_t[GetNy()];
2320 Int_t *colVyy1=new Int_t[GetNy()];
2321 Double_t *dataVyy1=new Double_t[GetNy()];
2322 Double_t *dataVyyDiag=new Double_t[GetNy()];
2323
2324 Int_t nVarianceZero=0;
2325 Int_t nVarianceForced=0;
2326 Int_t nVyyN=0;
2327 Int_t nVyy1=0;
2328 for (Int_t iy = 0; iy < GetNy(); iy++) {
2329 // diagonals
2330 Double_t dy2;
2331 if(!hist_vyy) {
2332 Double_t dy = input->GetBinError(iy + 1);
2333 dy2=dy*dy;
2334 if (dy2 <= 0.0) {
2335 if(oneOverZeroError>0.0) {
2336 dy2 = 1./ ( oneOverZeroError*oneOverZeroError);
2337 nVarianceForced++;
2338 } else {
2339 nVarianceZero++;
2340 }
2341 }
2342 } else {
2343 dy2 = hist_vyy->GetBinContent(iy+1,iy+1);
2344 if (dy2 <= 0.0) {
2345 nVarianceZero++;
2346 }
2347 }
2348 rowVyyN[nVyyN] = iy;
2349 colVyyN[nVyyN] = iy;
2350 rowVyy1[nVyy1] = iy;
2351 colVyy1[nVyy1] = 0;
2352 dataVyyDiag[iy] = dy2;
2353 if(dy2>0.0) {
2354 dataVyyN[nVyyN++] = dy2;
2355 dataVyy1[nVyy1++] = dy2;
2356 }
2357 }
2358 if(hist_vyy) {
2359 // non-diagonal elements
2360 Int_t nOffDiagNonzero=0;
2361 for (Int_t iy = 0; iy < GetNy(); iy++) {
2362 // ignore rows where the diagonal is zero
2363 if(dataVyyDiag[iy]<=0.0) {
2364 for (Int_t jy = 0; jy < GetNy(); jy++) {
2365 if(hist_vyy->GetBinContent(iy+1,jy+1)!=0.0) {
2366 nOffDiagNonzero++;
2367 }
2368 }
2369 continue;
2370 }
2371 for (Int_t jy = 0; jy < GetNy(); jy++) {
2372 // skip diagonal elements
2373 if(iy==jy) continue;
2374 // ignore columns where the diagonal is zero
2375 if(dataVyyDiag[jy]<=0.0) continue;
2376
2377 rowVyyN[nVyyN] = iy;
2378 colVyyN[nVyyN] = jy;
2379 dataVyyN[nVyyN]= hist_vyy->GetBinContent(iy+1,jy+1);
2380 if(dataVyyN[nVyyN] == 0.0) continue;
2381 nVyyN ++;
2382 }
2383 }
2384 if(hist_vyy_inv) {
2385 Warning("SetInput",
2386 "inverse of input covariance is taken from user input");
2387 Int_t *rowVyyInv=new Int_t[GetNy()*GetNy()+1];
2388 Int_t *colVyyInv=new Int_t[GetNy()*GetNy()+1];
2389 Double_t *dataVyyInv=new Double_t[GetNy()*GetNy()+1];
2390 Int_t nVyyInv=0;
2391 for (Int_t iy = 0; iy < GetNy(); iy++) {
2392 for (Int_t jy = 0; jy < GetNy(); jy++) {
2393 rowVyyInv[nVyyInv] = iy;
2394 colVyyInv[nVyyInv] = jy;
2395 dataVyyInv[nVyyInv]= hist_vyy_inv->GetBinContent(iy+1,jy+1);
2396 if(dataVyyInv[nVyyInv] == 0.0) continue;
2397 nVyyInv ++;
2398 }
2399 }
2401 (GetNy(),GetNy(),nVyyInv,rowVyyInv,colVyyInv,dataVyyInv);
2402 delete [] rowVyyInv;
2403 delete [] colVyyInv;
2404 delete [] dataVyyInv;
2405 } else {
2406 if(nOffDiagNonzero) {
2407 Error("SetInput",
2408 "input covariance has elements C(X,Y)!=0 where V(X)==0");
2409 }
2410 }
2411 }
2414 (GetNy(),GetNy(),nVyyN,rowVyyN,colVyyN,dataVyyN);
2415
2416 delete[] rowVyyN;
2417 delete[] colVyyN;
2418 delete[] dataVyyN;
2419
2420 TMatrixDSparse *vecV=CreateSparseMatrix
2421 (GetNy(),1,nVyy1,rowVyy1,colVyy1, dataVyy1);
2422
2423 delete[] rowVyy1;
2424 delete[] colVyy1;
2425 delete[] dataVyy1;
2426
2427 //
2428 // get input vector
2429 DeleteMatrix(&fY);
2430 fY = new TMatrixD(GetNy(), 1);
2431 for (Int_t i = 0; i < GetNy(); i++) {
2432 (*fY) (i, 0) = input->GetBinContent(i + 1);
2433 }
2434 // simple check whether unfolding is possible, given the matrices fA and fV
2435 TMatrixDSparse *mAtV=MultiplyMSparseTranspMSparse(fA,vecV);
2436 DeleteMatrix(&vecV);
2437 Int_t nError2=0;
2438 for (Int_t i = 0; i <mAtV->GetNrows();i++) {
2439 if(mAtV->GetRowIndexArray()[i]==
2440 mAtV->GetRowIndexArray()[i+1]) {
2441 nError2 ++;
2442 }
2443 }
2444 if(nVarianceForced) {
2445 if(nVarianceForced>1) {
2446 Warning("SetInput","%d/%d input bins have zero error,"
2447 " 1/error set to %lf.",
2448 nVarianceForced,GetNy(),oneOverZeroError);
2449 } else {
2450 Warning("SetInput","One input bin has zero error,"
2451 " 1/error set to %lf.",oneOverZeroError);
2452 }
2453 }
2454 if(nVarianceZero) {
2455 if(nVarianceZero>1) {
2456 Warning("SetInput","%d/%d input bins have zero error,"
2457 " and are ignored.",nVarianceZero,GetNy());
2458 } else {
2459 Warning("SetInput","One input bin has zero error,"
2460 " and is ignored.");
2461 }
2462 fIgnoredBins=nVarianceZero;
2463 }
2464 if(nError2>0) {
2465 // check whether data points with zero error are responsible
2466 if(oneOverZeroError<=0.0) {
2467 //const Int_t *a_rows=fA->GetRowIndexArray();
2468 //const Int_t *a_cols=fA->GetColIndexArray();
2469 for (Int_t col = 0; col <mAtV->GetNrows();col++) {
2470 if(mAtV->GetRowIndexArray()[col]==
2471 mAtV->GetRowIndexArray()[col+1]) {
2472 TString binlist("no data to constrain output bin ");
2473 binlist += GetOutputBinName(fXToHist[col]);
2474 /* binlist +=" depends on ignored input bins ";
2475 for(Int_t row=0;row<fA->GetNrows();row++) {
2476 if(dataVyyDiag[row]>0.0) continue;
2477 for(Int_t i=a_rows[row];i<a_rows[row+1];i++) {
2478 if(a_cols[i]!=col) continue;
2479 binlist +=" ";
2480 binlist +=row;
2481 }
2482 } */
2483 Warning("SetInput","%s",(char const *)binlist);
2484 }
2485 }
2486 }
2487 if(nError2>1) {
2488 Error("SetInput","%d/%d output bins are not constrained by any data.",
2489 nError2,mAtV->GetNrows());
2490 } else {
2491 Error("SetInput","One output bins is not constrained by any data.");
2492 }
2493 }
2494 DeleteMatrix(&mAtV);
2495
2496 delete[] dataVyyDiag;
2497
2498 return nVarianceForced+nVarianceZero+10000*nError2;
2499}
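// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch of decoding the return value documented above; "histData" is an
// assumed TH1 holding the measurement:
//
//    Int_t status=unfold.SetInput(histData);
//    Int_t nError1=status%10000;  // bins with zero uncertainty
//    Int_t nError2=status/10000;  // truth bins not constrained by any data
//    if(nError2>0) { /* fatal: the unfolding cannot be done */ }
// -----------------------------------------------------------------------------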
2500
2501////////////////////////////////////////////////////////////////////////////////
2502/// Perform the unfolding for a given regularisation parameter tau.
2503///
2504/// \param[in] tau regularisation parameter
2505///
2506/// This method sets tau and then calls the core unfolding algorithm
2507/// required data members:
2508/// - fA: matrix to relate x and y
2509/// - fY: measured data points
2510/// - fX0: bias on x
2511/// - fBiasScale: scale factor for fX0
2512/// - fV: inverse of covariance matrix for y
2513/// - fL: regularisation conditions
2514/// modified data members:
2515/// - fTauSquared and those documented in DoUnfold(void)
2516
2518{
2519 fTauSquared=tau*tau;
2520 return DoUnfold();
2521}
2522
2523////////////////////////////////////////////////////////////////////////////////
2524/// Scan the L curve, determine tau and unfold at the final value of tau.
2525///
2526/// \param[in] nPoint number of points used for the scan
2527/// \param[in] tauMin smallest tau value to study
2528/// \param[in] tauMax largest tau value to study. If tauMin=tauMax=0,
2529/// a scan interval is determined automatically.
2530/// \param[out] lCurve if nonzero, a new TGraph is returned,
2531/// containing the L-curve
2532/// \param[out] logTauX if nonzero, a new TSpline is returned, to
2533/// parameterize the L-curve's x-coordinates as a function of log10(tau)
2534/// \param[out] logTauY if nonzero, a new TSpline is returned, to
2535/// parameterize the L-curve's y-coordinates as a function of log10(tau)
2536/// \param[out] logTauCurvature if nonzero, a new TSpline is returned
2537/// of the L-curve curvature as a function of log10(tau)
2538///
2539/// return value: the coordinate number in the logTauX,logTauY graphs
2540/// corresponding to the "final" choice of tau
2541///
2542/// Recommendation: always check <b>logTauCurvature</b>, it
2543/// should be a peaked function (similar to a Gaussian), the maximum
2544 /// corresponding to the final choice of tau. Also check the <b>lCurve</b>;
2545 /// it should be approximately L-shaped. If in doubt, adjust tauMin
2546/// and tauMax until the results are satisfactory.
2547
2548 Int_t TUnfold::ScanLcurve(Int_t nPoint,
2549 Double_t tauMin,Double_t tauMax,
2550 TGraph **lCurve,TSpline **logTauX,
2551 TSpline **logTauY,TSpline **logTauCurvature)
2552{
2553 typedef std::map<Double_t,std::pair<Double_t,Double_t> > XYtau_t;
2554 XYtau_t curve;
2555
2556 //==========================================================
2557 // algorithm:
2558 // (1) do the unfolding for nPoint-1 points
2559 // and store the results in the map
2560 // curve
2561 // (1a) store minimum and maximum tau to curve
2562 // (1b) insert additional points, until nPoint-1 values
2563 // have been calculated
2564 //
2565 // (2) determine the best choice of tau
2566 // do the unfolding for this point
2567 // and store the result in
2568 // curve
2569 // (3) return the result in
2570 // lCurve logTauX logTauY
2571
2572 //==========================================================
2573 // (1) do the unfolding for nPoint-1 points
2574 // and store the results in
2575 // curve
2576 // (1a) store minimum and maximum tau to curve
2577
2578 if((tauMin<=0)||(tauMax<=0.0)||(tauMin>=tauMax)) {
2579 // here no range is given, it has to be determined automatically
2580 // the maximum tau is determined from the chi**2 values
2581 // observed from unfolding without regularisation
2582
2583 // first unfolding, without regularisation
2584 DoUnfold(0.0);
2585
2586 // if the number of degrees of freedom is too small, create an error
2587 if(GetNdf()<=0) {
2588 Error("ScanLcurve","too few input bins, NDF<=0 %d",GetNdf());
2589 }
2590
2591 Double_t x0=GetLcurveX();
2592 Double_t y0=GetLcurveY();
2593 Info("ScanLcurve","logtau=-Infinity X=%lf Y=%lf",x0,y0);
2594 if(!TMath::Finite(x0)) {
2595 Fatal("ScanLcurve","problem (too few input bins?) X=%f",x0);
2596 }
2597 if(!TMath::Finite(y0)) {
2598 Fatal("ScanLcurve","problem (missing regularisation?) Y=%f",y0);
2599 }
2600 {
2601 // unfold for a first guess of the maximum tau and store it
2602 Double_t logTau=
2603 0.5*(TMath::Log10(fChi2A+3.*TMath::Sqrt(GetNdf()+1.0))
2604 -GetLcurveY());
2605 DoUnfold(TMath::Power(10.,logTau));
2607 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2609 }
2610 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2611 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2612 logTau,GetLcurveX(),GetLcurveY());
2613 }
2614 if((*curve.begin()).second.first<x0) {
2615 // if the point at tau==0 seems numerically unstable,
2616 // try to find the minimum chi**2 as start value
2617 //
2618 // "unstable" means that there is a finite tau where the
2619 // unfolding chi**2 is smaller than for the case of no
2620 // regularisation. Ideally this should never happen
2621 do {
2622 x0=GetLcurveX();
2623 Double_t logTau=(*curve.begin()).first-0.5;
2624 DoUnfold(TMath::Power(10.,logTau));
2626 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2628 }
2629 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2630 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2631 logTau,GetLcurveX(),GetLcurveY());
2632 }
2633 while(((int)curve.size()<(nPoint-1)/2)&&
2634 ((*curve.begin()).second.first<x0));
2635 } else {
2636 // the minimum tau is chosen such that it is less than
2637 // 1% different from the case of no regularisation
2638 // log10(1.01) = 0.00432
2639
2640 // here, more than one point are inserted if necessary
2641 while(((int)curve.size()<nPoint-1)&&
2642 (((*curve.begin()).second.first-x0>0.00432)||
2643 ((*curve.begin()).second.second-y0>0.00432)||
2644 (curve.size()<2))) {
2645 Double_t logTau=(*curve.begin()).first-0.5;
2646 DoUnfold(TMath::Power(10.,logTau));
2648 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2650 }
2651 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2652 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2653 logTau,GetLcurveX(),GetLcurveY());
2654 }
2655 }
2656 } else {
2657 Double_t logTauMin=TMath::Log10(tauMin);
2658 Double_t logTauMax=TMath::Log10(tauMax);
2659 if(nPoint>1) {
2660 // insert maximum tau
2661 DoUnfold(TMath::Power(10.,logTauMax));
2663 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2665 }
2666 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2667 logTauMax,GetLcurveX(),GetLcurveY());
2668 curve[logTauMax]=std::make_pair(GetLcurveX(),GetLcurveY());
2669 }
2670 // insert minimum tau
2671 DoUnfold(TMath::Power(10.,logTauMin));
2673 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2675 }
2676 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2677 logTauMin,GetLcurveX(),GetLcurveY());
2678 curve[logTauMin]=std::make_pair(GetLcurveX(),GetLcurveY());
2679 }
2680
2681
2682 //==========================================================
2683 // (1b) insert additional points, until nPoint-1 values
2684 // have been calculated
2685
2686 while(int(curve.size())<nPoint-1) {
2687 // insert additional points, such that the sizes of the delta(XY) vectors
2688 // are getting smaller and smaller
2689 XYtau_t::const_iterator i0,i1;
2690 i0=curve.begin();
2691 i1=i0;
2692 Double_t logTau=(*i0).first;
2693 Double_t distMax=0.0;
2694 for (++i1; i1 != curve.end(); ++i1) {
2695 const std::pair<Double_t, Double_t> &xy0 = (*i0).second;
2696 const std::pair<Double_t, Double_t> &xy1 = (*i1).second;
2697 Double_t dx = xy1.first - xy0.first;
2698 Double_t dy = xy1.second - xy0.second;
2699 Double_t d = TMath::Sqrt(dx * dx + dy * dy);
2700 if (d >= distMax) {
2701 distMax = d;
2702 logTau = 0.5 * ((*i0).first + (*i1).first);
2703 }
2704 i0 = i1;
2705 }
2706 DoUnfold(TMath::Power(10.,logTau));
2708 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2710 }
2711 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",logTau,GetLcurveX(),GetLcurveY());
2712 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2713 }
2714
2715 //==========================================================
2716 // (2) determine the best choice of tau
2717 // do the unfolding for this point
2718 // and store the result in
2719 // curve
2720 XYtau_t::const_iterator i0,i1;
2721 i0=curve.begin();
2722 i1=i0;
2723 ++i1;
2724 Double_t logTauFin=(*i0).first;
2725 if( ((int)curve.size())<nPoint) {
2726 // set up splines and determine (x,y) curvature in each point
2727 Double_t *cTi=new Double_t[curve.size()-1]();
2728 Double_t *cCi=new Double_t[curve.size()-1]();
2729 Int_t n=0;
2730 {
2731 Double_t *lXi=new Double_t[curve.size()]();
2732 Double_t *lYi=new Double_t[curve.size()]();
2733 Double_t *lTi=new Double_t[curve.size()]();
2734 for (XYtau_t::const_iterator i = curve.begin(); i != curve.end(); ++i) {
2735 lXi[n] = (*i).second.first;
2736 lYi[n] = (*i).second.second;
2737 lTi[n] = (*i).first;
2738 n++;
2739 }
2740 TSpline3 *splineX=new TSpline3("x vs tau",lTi,lXi,n);
2741 TSpline3 *splineY=new TSpline3("y vs tau",lTi,lYi,n);
2742 // calculate (x,y) curvature for all points
2743 // the curvature is stored in the array cCi[] as a function of cTi[]
2744 for(Int_t i=0;i<n-1;i++) {
2745 Double_t ltau,xy,bi,ci,di;
2746 splineX->GetCoeff(i,ltau,xy,bi,ci,di);
2747 Double_t tauBar=0.5*(lTi[i]+lTi[i+1]);
2748 Double_t dTau=0.5*(lTi[i+1]-lTi[i]);
2749 Double_t dx1=bi+dTau*(2.*ci+3.*di*dTau);
2750 Double_t dx2=2.*ci+6.*di*dTau;
2751 splineY->GetCoeff(i,ltau,xy,bi,ci,di);
2752 Double_t dy1=bi+dTau*(2.*ci+3.*di*dTau);
2753 Double_t dy2=2.*ci+6.*di*dTau;
2754 cTi[i]=tauBar;
2755 cCi[i]=(dy2*dx1-dy1*dx2)/TMath::Power(dx1*dx1+dy1*dy1,1.5);
2756 }
2757 delete splineX;
2758 delete splineY;
2759 delete[] lXi;
2760 delete[] lYi;
2761 delete[] lTi;
2762 }
2763 // create curvature Spline
2764 TSpline3 *splineC=new TSpline3("L curve curvature",cTi,cCi,n-1);
2765 // find the maximum of the curvature
2766 // if the parameter iskip is non-zero, then iskip points are
2767 // ignored when looking for the largest curvature
2768 // (there are problems with the curvature determined from the first
2769 // few points of splineX,splineY in the algorithm above)
2770 Int_t iskip=0;
2771 if(n>4) iskip=1;
2772 if(n>7) iskip=2;
2773 Double_t cCmax=cCi[iskip];
2774 Double_t cTmax=cTi[iskip];
2775 for(Int_t i=iskip;i<n-2-iskip;i++) {
2776 // find maximum on this spline section
2777 // check boundary conditions for x[i+1]
2778 Double_t xMax=cTi[i+1];
2779 Double_t yMax=cCi[i+1];
2780 if(cCi[i]>yMax) {
2781 yMax=cCi[i];
2782 xMax=cTi[i];
2783 }
2784 // find maximum for x[i]<x<x[i+1]
2785 // get spline coefficients and solve equation
2786 // derivative(x)==0
2787 Double_t x,y,b,c,d;
2788 splineC->GetCoeff(i,x,y,b,c,d);
2789 // coefficients of quadratic equation
2790 Double_t m_p_half=-c/(3.*d);
2791 Double_t q=b/(3.*d);
2792 Double_t discr=m_p_half*m_p_half-q;
2793 if(discr>=0.0) {
2794 // solution found
2795 discr=TMath::Sqrt(discr);
2796 Double_t xx;
2797 if(m_p_half>0.0) {
2798 xx = m_p_half + discr;
2799 } else {
2800 xx = m_p_half - discr;
2801 }
2802 Double_t dx=cTi[i+1]-x;
2803 // check first solution
2804 if((xx>0.0)&&(xx<dx)) {
2805 y=splineC->Eval(x+xx);
2806 if(y>yMax) {
2807 yMax=y;
2808 xMax=x+xx;
2809 }
2810 }
2811 // second solution
2812 if(xx !=0.0) {
2813 xx= q/xx;
2814 } else {
2815 xx=0.0;
2816 }
2817 // check second solution
2818 if((xx>0.0)&&(xx<dx)) {
2819 y=splineC->Eval(x+xx);
2820 if(y>yMax) {
2821 yMax=y;
2822 xMax=x+xx;
2823 }
2824 }
2825 }
2826 // check whether this local maximum is the global maximum
2827 if(yMax>cCmax) {
2828 cCmax=yMax;
2829 cTmax=xMax;
2830 }
2831 }
2832 if(logTauCurvature) {
2833 *logTauCurvature=splineC;
2834 } else {
2835 delete splineC;
2836 }
2837 delete[] cTi;
2838 delete[] cCi;
2839 logTauFin=cTmax;
2840 DoUnfold(TMath::Power(10.,logTauFin));
2842 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2844 }
2845 Info("ScanLcurve","Result logtau=%lf X=%lf Y=%lf",
2846 logTauFin,GetLcurveX(),GetLcurveY());
2847 curve[logTauFin]=std::make_pair(GetLcurveX(),GetLcurveY());
2848 }
2849
2850
2851 //==========================================================
2852 // (3) return the result in
2853 // lCurve logTauX logTauY
2854
2855 Int_t bestChoice=-1;
2856 if(curve.size()>0) {
2857 Double_t *x=new Double_t[curve.size()]();
2858 Double_t *y=new Double_t[curve.size()]();
2859 Double_t *logT=new Double_t[curve.size()]();
2860 int n=0;
2861 for (XYtau_t::const_iterator i = curve.begin(); i != curve.end(); ++i) {
2862 if (logTauFin == (*i).first) {
2863 bestChoice = n;
2864 }
2865 x[n] = (*i).second.first;
2866 y[n] = (*i).second.second;
2867 logT[n] = (*i).first;
2868 n++;
2869 }
2870 if(lCurve) {
2871 (*lCurve)=new TGraph(n,x,y);
2872 (*lCurve)->SetTitle("L curve");
2873 }
2874 if(logTauX) (*logTauX)=new TSpline3("log(chi**2)%log(tau)",logT,x,n);
2875 if(logTauY) (*logTauY)=new TSpline3("log(reg.cond)%log(tau)",logT,y,n);
2876 delete[] x;
2877 delete[] y;
2878 delete[] logT;
2879 }
2880
2881 return bestChoice;
2882}
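// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch of an L-curve scan with an automatically chosen tau range
// (tauMin=tauMax=0) for an assumed TUnfold instance "unfold":
//
//    TGraph *lCurve=0;
//    TSpline *logTauX=0,*logTauY=0,*logTauCurvature=0;
//    Int_t iBest=unfold.ScanLcurve(30,0.,0.,&lCurve,
//                                  &logTauX,&logTauY,&logTauCurvature);
//    Double_t tauFinal=unfold.GetTau(); // tau at the curvature maximum
//    // lCurve->Draw("AL"); iBest indexes the chosen point on the graph
// -----------------------------------------------------------------------------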
2883
2884////////////////////////////////////////////////////////////////////////////////
2885/// Histogram of truth bins, determined from summing over the response matrix.
2886///
2887/// \param[out] out histogram to store the truth bins. The bin contents
2888/// are overwritten
2889/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2890///
2891/// This vector is also used to initialize the bias
2892/// x_{0}. However, the bias vector may be changed using the
2893/// SetBias() method.
2894///
2895/// The use of <b>binMap</b> is explained with the documentation of
2896/// the GetOutput() method.
2897
2898void TUnfold::GetNormalisationVector(TH1 *out,const Int_t *binMap) const
2899{
2900
2901 ClearHistogram(out);
2902 for (Int_t i = 0; i < GetNx(); i++) {
2903 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2904 if(dest>=0) {
2905 out->SetBinContent(dest, fSumOverY[i] + out->GetBinContent(dest));
2906 }
2907 }
2908}
2909
2910////////////////////////////////////////////////////////////////////////////////
2911/// Get bias vector including bias scale.
2912///
2913/// \param[out] out histogram to store the scaled bias vector. The bin
2914/// contents are overwritten
2915/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2916///
2917/// This method returns the bias vector times scaling factor, f*x_{0}
2918///
2919/// The use of <b>binMap</b> is explained with the documentation of
2920/// the GetOutput() method
2921
2922void TUnfold::GetBias(TH1 *out,const Int_t *binMap) const
2923{
2924
2925 ClearHistogram(out);
2926 for (Int_t i = 0; i < GetNx(); i++) {
2927 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2928 if(dest>=0) {
2929 out->SetBinContent(dest, fBiasScale*(*fX0) (i, 0) +
2930 out->GetBinContent(dest));
2931 }
2932 }
2933}
2934
2935////////////////////////////////////////////////////////////////////////////////
2936/// Get unfolding result on detector level.
2937///
2938 /// \param[out] out histogram to store the folded output (Ax). The bin
2939/// contents and errors are overwritten.
2940/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
2941///
2942/// This method returns the unfolding output folded by the response
2943/// matrix, i.e. the vector Ax.
2944///
2945/// The use of <b>binMap</b> is explained with the documentation of
2946/// the GetOutput() method
2947
2948void TUnfold::GetFoldedOutput(TH1 *out,const Int_t *binMap) const
2949{
2950 ClearHistogram(out);
2951
2952 TMatrixDSparse *AVxx=MultiplyMSparseMSparse(fA,fVxx);
2953
2954 const Int_t *rows_A=fA->GetRowIndexArray();
2955 const Int_t *cols_A=fA->GetColIndexArray();
2956 const Double_t *data_A=fA->GetMatrixArray();
2957 const Int_t *rows_AVxx=AVxx->GetRowIndexArray();
2958 const Int_t *cols_AVxx=AVxx->GetColIndexArray();
2959 const Double_t *data_AVxx=AVxx->GetMatrixArray();
2960
2961 for (Int_t i = 0; i < GetNy(); i++) {
2962 Int_t destI = binMap ? binMap[i+1] : i+1;
2963 if(destI<0) continue;
2964
2965 out->SetBinContent(destI, (*fAx) (i, 0)+ out->GetBinContent(destI));
2966 Double_t e2=0.0;
2967 Int_t index_a=rows_A[i];
2968 Int_t index_av=rows_AVxx[i];
2969 while((index_a<rows_A[i+1])&&(index_av<rows_AVxx[i+1])) {
2970 Int_t j_a=cols_A[index_a];
2971 Int_t j_av=cols_AVxx[index_av];
2972 if(j_a<j_av) {
2973 index_a++;
2974 } else if(j_a>j_av) {
2975 index_av++;
2976 } else {
2977 e2 += data_AVxx[index_av] * data_A[index_a];
2978 index_a++;
2979 index_av++;
2980 }
2981 }
2982 out->SetBinError(destI,TMath::Sqrt(e2));
2983 }
2984 DeleteMatrix(&AVxx);
2985}
2986
2987////////////////////////////////////////////////////////////////////////////////
2988/// Get matrix of probabilities.
2989///
2990/// \param[out] A two-dimensional histogram to store the
2991/// probabilities (normalized response matrix). The bin contents are
2992/// overwritten
2993/// \param[in] histmap specify axis along which the truth bins are
2994/// oriented
2995
2997{
2998 // retrieve matrix of probabilities
2999 // histmap: on which axis to arrange the input/output vector
3000 // A: histogram to store the probability matrix
3001 const Int_t *rows_A=fA->GetRowIndexArray();
3002 const Int_t *cols_A=fA->GetColIndexArray();
3003 const Double_t *data_A=fA->GetMatrixArray();
3004 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3005 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3006 Int_t ix = cols_A[indexA];
3007 Int_t ih=fXToHist[ix];
3008 if (histmap == kHistMapOutputHoriz) {
3009 A->SetBinContent(ih, iy+1,data_A[indexA]);
3010 } else {
3011 A->SetBinContent(iy+1, ih,data_A[indexA]);
3012 }
3013 }
3014 }
3015}
3016
3017////////////////////////////////////////////////////////////////////////////////
3018 /// Get the input vector of measurements.
3019 ///
3020 /// \param[out] out histogram to store the measurements. Bin contents
3021 /// and bin errors are overwritten.
3022/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3023///
3024/// Bins which had an uncertainty of zero in the call to SetInput()
3025/// may acquire bin contents or bin errors different from the
3026/// original settings in SetInput().
3027///
3028/// The use of <b>binMap</b> is explained with the documentation of
3029/// the GetOutput() method
3030
3031void TUnfold::GetInput(TH1 *out,const Int_t *binMap) const
3032{
3033 ClearHistogram(out);
3034
3035 const Int_t *rows_Vyy=fVyy->GetRowIndexArray();
3036 const Int_t *cols_Vyy=fVyy->GetColIndexArray();
3037 const Double_t *data_Vyy=fVyy->GetMatrixArray();
3038
3039 for (Int_t i = 0; i < GetNy(); i++) {
3040 Int_t destI=binMap ? binMap[i+1] : i+1;
3041 if(destI<0) continue;
3042
3043 out->SetBinContent(destI, (*fY) (i, 0)+out->GetBinContent(destI));
3044
3045 Double_t e=0.0;
3046 for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
3047 if(cols_Vyy[index]==i) {
3048 e=TMath::Sqrt(data_Vyy[index]);
3049 }
3050 }
3051 out->SetBinError(destI, e);
3052 }
3053}
3054
3055////////////////////////////////////////////////////////////////////////////////
3056/// Get inverse of the measurement's covariance matrix.
3057///
3058/// \param[out] out histogram to store the inverted covariance
3059
3061{
3062 // calculate the inverse of the contribution to the error matrix
3063 // corresponding to the input data
3064 if(!fVyyInv) {
3065 Int_t rank=0;
3066 fVyyInv=InvertMSparseSymmPos(fVyy,&rank);
3067 // and count number of degrees of freedom
3068 fNdf = rank-GetNpar();
3069
3070 if(rank<GetNy()-fIgnoredBins) {
3071 Warning("GetInputInverseEmatrix","input covariance matrix has rank %d expect %d",
3072 rank,GetNy());
3073 }
3074 if(fNdf<0) {
3075 Error("GetInputInverseEmatrix","number of parameters %d > %d (rank of input covariance). Problem can not be solved",GetNpar(),rank);
3076 } else if(fNdf==0) {
3077 Warning("GetInputInverseEmatrix","number of parameters %d = input rank %d. Problem is ill posed",GetNpar(),rank);
3078 }
3079 }
3080 if(out) {
3081 // return matrix as histogram
3082 const Int_t *rows_VyyInv=fVyyInv->GetRowIndexArray();
3083 const Int_t *cols_VyyInv=fVyyInv->GetColIndexArray();
3084 const Double_t *data_VyyInv=fVyyInv->GetMatrixArray();
3085
3086 for(int i=0;i<=out->GetNbinsX()+1;i++) {
3087 for(int j=0;j<=out->GetNbinsY()+1;j++) {
3088 out->SetBinContent(i,j,0.);
3089 }
3090 }
3091
3092 for (Int_t i = 0; i < fVyyInv->GetNrows(); i++) {
3093 for(int index=rows_VyyInv[i];index<rows_VyyInv[i+1];index++) {
3094 Int_t j=cols_VyyInv[index];
3095 out->SetBinContent(i+1,j+1,data_VyyInv[index]);
3096 }
3097 }
3098 }
3099}
3100
3101////////////////////////////////////////////////////////////////////////////////
3102/// Get matrix of regularisation conditions squared.
3103///
3104/// \param[out] out histogram to store the squared matrix of
3105/// regularisation conditions. the bin contents are overwritten
3106///
3107/// This returns the square matrix L^{T}L as a histogram
3108///
3109/// The histogram should have dimension nx times nx, where nx
3110/// corresponds to the number of histogram bins in the response matrix
3111/// along the truth axis.
3112
3114{
3115 // retrieve matrix of regularisation conditions squared
3116 // out: pre-booked matrix
3117
3118 TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
3119 // loop over sparse matrix
3120 const Int_t *rows=lSquared->GetRowIndexArray();
3121 const Int_t *cols=lSquared->GetColIndexArray();
3122 const Double_t *data=lSquared->GetMatrixArray();
3123 for (Int_t i = 0; i < GetNx(); i++) {
3124 for (Int_t cindex = rows[i]; cindex < rows[i+1]; cindex++) {
3125 Int_t j=cols[cindex];
3126 out->SetBinContent(fXToHist[i], fXToHist[j],data[cindex]);
3127 }
3128 }
3129 DeleteMatrix(&lSquared);
3130}
3131
3132////////////////////////////////////////////////////////////////////////////////
3133/// Get number of regularisation conditions.
3134///
3135/// This returns the number of regularisation conditions, useful for
3136/// booking a histogram for a subsequent call of GetL().
3137
3139 return fL->GetNrows();
3140}
3141
3142////////////////////////////////////////////////////////////////////////////////
3143/// Get matrix of regularisation conditions.
3144///
3145/// \param[out] out histogram to store the regularisation conditions.
3146/// the bin contents are overwritten
3147///
3148 /// The histogram should have dimension nx (x-axis) times nr (y-axis).
3149/// nr corresponds to the number of regularisation conditions, it can
3150/// be obtained using the method GetNr(). nx corresponds to the number
3151/// of histogram bins in the response matrix along the truth axis.
3152
3153void TUnfold::GetL(TH2 *out) const
3154{
3155 // loop over sparse matrix
3156 const Int_t *rows=fL->GetRowIndexArray();
3157 const Int_t *cols=fL->GetColIndexArray();
3158 const Double_t *data=fL->GetMatrixArray();
3159 for (Int_t row = 0; row < GetNr(); row++) {
3160 for (Int_t cindex = rows[row]; cindex < rows[row+1]; cindex++) {
3161 Int_t col=cols[cindex];
3162 Int_t indexH=fXToHist[col];
3163 out->SetBinContent(indexH,row+1,data[cindex]);
3164 }
3165 }
3166}
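// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch: book a histogram of the proper size and retrieve L; "nx" is the
// assumed number of truth bins of the response matrix and "unfold" an assumed
// TUnfold instance:
//
//    Int_t nr=unfold.GetNr();
//    TH2D *histL=new TH2D("L","L",nx,0.5,nx+0.5,nr,0.5,nr+0.5);
//    unfold.GetL(histL);
// -----------------------------------------------------------------------------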
3167
3168////////////////////////////////////////////////////////////////////////////////
3169/// Set type of area constraint.
3170///
3171/// results of a previous unfolding are reset
3172
3174{
3175 // set type of constraint for the next unfolding
3176 if(fConstraint !=constraint) ClearResults();
3177 fConstraint=constraint;
3178 Info("SetConstraint","fConstraint=%d",fConstraint);
3179}
3180
3181
3182////////////////////////////////////////////////////////////////////////////////
3183/// Return regularisation parameter.
3184
3186{
3187 // return regularisation parameter
3188 return TMath::Sqrt(fTauSquared);
3189}
3190
3191////////////////////////////////////////////////////////////////////////////////
3192/// Get \f$ chi^{2}_{L} \f$ contribution determined in recent unfolding.
3193
3195{
3196 // return chi**2 contribution from regularisation conditions
3197 return fLXsquared*fTauSquared;
3198}
3199
3200////////////////////////////////////////////////////////////////////////////////
3201/// Get number of truth parameters determined in recent unfolding.
3202///
3203/// empty bins of the response matrix or bins which can not be
3204/// unfolded due to rank deficits are not counted
3205
3207{
3208 return GetNx();
3209}
3210
3211////////////////////////////////////////////////////////////////////////////////
3212/// Get value on x-axis of L-curve determined in recent unfolding.
3213///
3214/// \f$ x=log_{10}(GetChi2A()) \f$
3215
3217{
3218 return TMath::Log10(fChi2A);
3219}
3220
3221////////////////////////////////////////////////////////////////////////////////
3222/// Get value on y-axis of L-curve determined in recent unfolding.
3223///
3224/// \f$ y=log_{10}(GetChi2L()) \f$
3225
3227{
3228 return TMath::Log10(fLXsquared);
3229}
3230
3231////////////////////////////////////////////////////////////////////////////////
3232/// Get output distribution, possibly cumulated over several bins.
3233///
3234/// \param[out] output existing output histogram. content and errors
3235/// will be updated.
3236/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3237///
3238/// If nonzero, the array <b>binMap</b> must have dimension n+2, where n
3239/// corresponds to the number of bins on the truth axis of the response
3240/// matrix (the histogram specified with the TUnfold
3241/// constructor). The indexes of <b>binMap</b> correspond to the truth
3242/// bins (including underflow and overflow) of the response matrix.
3243/// The element binMap[i] specifies the histogram number in
3244/// <b>output</b> where the corresponding truth bin will be stored. It is
3245/// possible to specify the same <b>output</b> bin number for multiple
3246/// indexes, in which case these bins are added. Set binMap[i]=-1 to
3247/// ignore an unfolded truth bin. The uncertainties are
3248/// calculated from the corresponding parts of the covariance matrix,
3249/// properly taking care of added truth bins.
3250///
3251/// If the pointer <b>binMap</b> is zero, the bins are mapped
3252/// one-to-one. Truth bin zero (underflow) is stored in the
3253/// <b>output</b> underflow, truth bin 1 is stored in bin number 1, etc.
3254///
3255/// - output: output histogram
3256/// - binMap: for each bin of the original output distribution
3257/// specify the destination bin. A value of -1 means that the bin
3258/// is discarded. 0 means underflow bin, 1 first bin, ...
3259/// - binMap[0] : destination of underflow bin
3260/// - binMap[1] : destination of first bin
3261/// ...
3262
3263void TUnfold::GetOutput(TH1 *output,const Int_t *binMap) const
3264{
3266 /* Int_t nbin=output->GetNbinsX();
3267 Double_t *c=new Double_t[nbin+2];
3268 Double_t *e2=new Double_t[nbin+2];
3269 for(Int_t i=0;i<nbin+2;i++) {
3270 c[i]=0.0;
3271 e2[i]=0.0;
3272 } */
3273
3274 std::map<Int_t,Double_t> e2;
3275
3276 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3277 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3278 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3279
3280 Int_t binMapSize = fHistToX.GetSize();
3281 for(Int_t i=0;i<binMapSize;i++) {
3282 Int_t destBinI=binMap ? binMap[i] : i; // histogram bin
3283 Int_t srcBinI=fHistToX[i]; // matrix row index
3284 if((destBinI>=0)&&(srcBinI>=0)) {
3285 output->SetBinContent
3286 (destBinI, (*fX)(srcBinI,0)+ output->GetBinContent(destBinI));
3287 // here we loop over the columns of the error matrix
3288 // j: counts histogram bins
3289 // index: counts sparse matrix index
3290 // the algorithm makes use of the fact that fHistToX is ordered
3291 Int_t j=0; // histogram bin
3292 Int_t index_vxx=rows_Vxx[srcBinI];
3293 //Double_t e2=TMath::Power(output->GetBinError(destBinI),2.);
3294 while((j<binMapSize)&&(index_vxx<rows_Vxx[srcBinI+1])) {
3295 Int_t destBinJ=binMap ? binMap[j] : j; // histogram bin
3296 if(destBinI!=destBinJ) {
3297 // only diagonal elements are calculated
3298 j++;
3299 } else {
3300 Int_t srcBinJ=fHistToX[j]; // matrix column index
3301 if(srcBinJ<0) {
3302 // bin was not unfolded, check next bin
3303 j++;
3304 } else {
3305 if(cols_Vxx[index_vxx]<srcBinJ) {
3306 // index is too small
3307 index_vxx++;
3308 } else if(cols_Vxx[index_vxx]>srcBinJ) {
3309 // index is too large, skip bin
3310 j++;
3311 } else {
3312 // add this bin
3313 e2[destBinI] += data_Vxx[index_vxx];
3314 j++;
3315 index_vxx++;
3316 }
3317 }
3318 }
3319 }
3320 //output->SetBinError(destBinI,TMath::Sqrt(e2));
3321 }
3322 }
3323 for (std::map<Int_t, Double_t>::const_iterator i = e2.begin(); i != e2.end(); ++i) {
3324 //cout<<(*i).first<<" "<<(*i).second<<"\n";
3325 output->SetBinError((*i).first,TMath::Sqrt((*i).second));
3326 }
3327}
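// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch of the binMap convention described above, for an assumed response
// matrix with 4 truth bins: merge truth bins 1 and 2 into output bin 1, keep
// bins 3 and 4 as output bins 2 and 3, discard underflow and overflow:
//
//    Int_t binMap[6]={-1,1,1,2,3,-1}; // size nx+2, index = truth bin number
//    TH1D *histOutput=new TH1D("x","unfolded",3,0.,3.);
//    unfold.GetOutput(histOutput,binMap);
//    unfold.GetEmatrix(histEmat,binMap); // histEmat: assumed TH2D booked with 3x3 bins
// -----------------------------------------------------------------------------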
3328
3329////////////////////////////////////////////////////////////////////////////////
3330/// Add up an error matrix, also respecting the bin mapping.
3331///
3332/// \param[inout] ematrix error matrix histogram
3333/// \param[in] emat error matrix stored with internal mapping (member fXToHist)
3334/// \param[in] binMap mapping of histogram bins
3335/// \param[in] doClear if true, ematrix is cleared prior to adding
3336/// elements of emat to it.
3337///
3338/// the array <b>binMap</b> is explained with the method GetOutput(). The
3339/// matrix emat must have dimension NxN where N=fXToHist.size()
3340/// The flag <b>doClear</b> may be used to add covariance matrices from
3341/// several uncertainty sources.
3342
3343 void TUnfold::ErrorMatrixToHist(TH2 *ematrix,const TMatrixDSparse *emat,
3344 const Int_t *binMap,Bool_t doClear) const
3345{
3346 Int_t nbin=ematrix->GetNbinsX();
3347 if(doClear) {
3348 for(Int_t i=0;i<nbin+2;i++) {
3349 for(Int_t j=0;j<nbin+2;j++) {
3350 ematrix->SetBinContent(i,j,0.0);
3351 ematrix->SetBinError(i,j,0.0);
3352 }
3353 }
3354 }
3355
3356 if(emat) {
3357 const Int_t *rows_emat=emat->GetRowIndexArray();
3358 const Int_t *cols_emat=emat->GetColIndexArray();
3359 const Double_t *data_emat=emat->GetMatrixArray();
3360
3361 Int_t binMapSize = fHistToX.GetSize();
3362 for(Int_t i=0;i<binMapSize;i++) {
3363 Int_t destBinI=binMap ? binMap[i] : i;
3364 Int_t srcBinI=fHistToX[i];
3365 if((destBinI>=0)&&(destBinI<nbin+2)&&(srcBinI>=0)) {
3366 // here we loop over the columns of the source matrix
3367 // j: counts histogram bins
3368 // index: counts sparse matrix index
3369 // the algorithm makes use of the fact that fHistToX is ordered
3370 Int_t j=0;
3371 Int_t index_vxx=rows_emat[srcBinI];
3372 while((j<binMapSize)&&(index_vxx<rows_emat[srcBinI+1])) {
3373 Int_t destBinJ=binMap ? binMap[j] : j;
3374 Int_t srcBinJ=fHistToX[j];
3375 if((destBinJ<0)||(destBinJ>=nbin+2)||(srcBinJ<0)) {
3376 // check next bin
3377 j++;
3378 } else {
3379 if(cols_emat[index_vxx]<srcBinJ) {
3380 // index is too small
3381 index_vxx++;
3382 } else if(cols_emat[index_vxx]>srcBinJ) {
3383 // index is too large, skip bin
3384 j++;
3385 } else {
3386 // add this bin
3387 Double_t e2= ematrix->GetBinContent(destBinI,destBinJ)
3388 + data_emat[index_vxx];
3389 ematrix->SetBinContent(destBinI,destBinJ,e2);
3390 j++;
3391 index_vxx++;
3392 }
3393 }
3394 }
3395 }
3396 }
3397 }
3398}
3399
3400////////////////////////////////////////////////////////////////////////////////
3401/// Get output covariance matrix, possibly cumulated over several bins.
3402///
3403/// \param[out] ematrix histogram to store the covariance. The bin
3404/// contents are overwritten.
3405/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3406///
3407/// The use of <b>binMap</b> is explained with the documentation of
3408/// the GetOutput() method
3409
3410void TUnfold::GetEmatrix(TH2 *ematrix,const Int_t *binMap) const
3411{
3412 ErrorMatrixToHist(ematrix,fVxx,binMap,kTRUE);
3413}
3414
3415////////////////////////////////////////////////////////////////////////////////
3416/// Get correlation coefficients, possibly cumulated over several bins.
3417///
3418/// \param[out] rhoij histogram to store the correlation coefficients. The bin
3419/// contents are overwritten.
3420/// \param[in] binMap (default=0) array for mapping truth bins to histogram bins
3421///
3422/// The use of <b>binMap</b> is explained with the documentation of
3423/// the GetOutput() method
3424
3425void TUnfold::GetRhoIJ(TH2 *rhoij,const Int_t *binMap) const
3426{
3427 GetEmatrix(rhoij,binMap);
3428 Int_t nbin=rhoij->GetNbinsX();
3429 Double_t *e=new Double_t[nbin+2];
3430 for(Int_t i=0;i<nbin+2;i++) {
3431 e[i]=TMath::Sqrt(rhoij->GetBinContent(i,i));
3432 }
3433 for(Int_t i=0;i<nbin+2;i++) {
3434 for(Int_t j=0;j<nbin+2;j++) {
3435 if((e[i]>0.0)&&(e[j]>0.0)) {
3436 rhoij->SetBinContent(i,j,rhoij->GetBinContent(i,j)/e[i]/e[j]);
3437 } else {
3438 rhoij->SetBinContent(i,j,0.0);
3439 }
3440 }
3441 }
3442 delete[] e;
3443}
3444
3445////////////////////////////////////////////////////////////////////////////////
3446/// Get global correlation coefficients, possibly cumulated over several bins.
3447///
3448/// \param[out] rhoi histogram to store the global correlation
3449/// coefficients. The bin contents are overwritten.
3450/// \param[in] binMap (default=0) array for mapping truth bins to
3451/// histogram bins
3452/// \param[out] invEmat (default=0) histogram to store the inverted
3453/// covariance matrix
3454///
3455/// for a given bin, the global correlation coefficient is defined
3456/// as \f$ \rho_{i} = \sqrt{1-\frac{1}{(V_{ii}*V^{-1}_{ii})}} \f$
3457///
3458/// such that the calculation of global correlation coefficients
3459/// possibly involves the inversion of a covariance matrix.
3460///
3461/// return value: maximum global correlation coefficient
3462///
3463/// The use of <b>binMap</b> is explained with the documentation of
3464/// the GetOutput() method
3465
3466Double_t TUnfold::GetRhoI(TH1 *rhoi,const Int_t *binMap,TH2 *invEmat) const
3467{
3468 ClearHistogram(rhoi,-1.);
3469
3470 if(binMap) {
3471 // if there is a bin map, the matrix needs to be inverted
3472 // otherwise, use the existing inverse, such that
3473 // no matrix inversion is needed
3474 return GetRhoIFromMatrix(rhoi,fVxx,binMap,invEmat);
3475 } else {
3476 Double_t rhoMax=0.0;
3477
3478 const Int_t *rows_Vxx=fVxx->GetRowIndexArray();
3479 const Int_t *cols_Vxx=fVxx->GetColIndexArray();
3480 const Double_t *data_Vxx=fVxx->GetMatrixArray();
3481
3482 const Int_t *rows_VxxInv=fVxxInv->GetRowIndexArray();
3483 const Int_t *cols_VxxInv=fVxxInv->GetColIndexArray();
3484 const Double_t *data_VxxInv=fVxxInv->GetMatrixArray();
3485
3486 for(Int_t i=0;i<GetNx();i++) {
3487 Int_t destI=fXToHist[i];
3488 Double_t e_ii=0.0,einv_ii=0.0;
3489 for(int index_vxx=rows_Vxx[i];index_vxx<rows_Vxx[i+1];index_vxx++) {
3490 if(cols_Vxx[index_vxx]==i) {
3491 e_ii=data_Vxx[index_vxx];
3492 break;
3493 }
3494 }
3495 for(int index_vxxinv=rows_VxxInv[i];index_vxxinv<rows_VxxInv[i+1];
3496 index_vxxinv++) {
3497 if(cols_VxxInv[index_vxxinv]==i) {
3498 einv_ii=data_VxxInv[index_vxxinv];
3499 break;
3500 }
3501 }
3502 Double_t rho=1.0;
3503 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3504 if(rho>=0.0) rho=TMath::Sqrt(rho);
3505 else rho=-TMath::Sqrt(-rho);
3506 if(rho>rhoMax) {
3507 rhoMax = rho;
3508 }
3509 rhoi->SetBinContent(destI,rho);
3510 }
3511 return rhoMax;
3512 }
3513}
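// --- editorial example, not part of TUnfold.cxx -----------------------------
// Sketch: retrieve the global correlation coefficients into an assumed
// histogram "histRho" with the same truth binning as the response matrix:
//
//    Double_t rhoMax=unfold.GetRhoI(histRho);
//    // rhoMax close to 1 signals strong correlations; a larger tau or a
//    // coarser binMap may help
// -----------------------------------------------------------------------------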
3514
3515////////////////////////////////////////////////////////////////////////////////
3516/// Get global correlation coefficients with arbitrary min map.
3517///
3518/// - rhoi: global correlation histogram
3519/// - emat: error matrix
3520/// - binMap: for each bin of the original output distribution
3521/// specify the destination bin. A value of -1 means that the bin
3522/// is discarded. 0 means underflow bin, 1 first bin, ...
3523/// - binMap[0] : destination of underflow bin
3524/// - binMap[1] : destination of first bin
3525/// ...
3526/// return value: maximum global correlation
3527
3528 Double_t TUnfold::GetRhoIFromMatrix(TH1 *rhoi,const TMatrixDSparse *eOrig,
3529 const Int_t *binMap,TH2 *invEmat) const
3530{
3531 Double_t rhoMax=0.;
3532 // original number of bins:
3533 // fHistToX.GetSize()
3534 // loop over binMap and find number of used bins
3535
3536 Int_t binMapSize = fHistToX.GetSize();
3537
3538 // histToLocalBin[iBin] points to a local index
3539 // only bins iBin of the histogram rhoi which are referenced
3540 // in the bin map have a local index
3541 std::map<int,int> histToLocalBin;
3542 Int_t nBin=0;
3543 for(Int_t i=0;i<binMapSize;i++) {
3544 Int_t mapped_i=binMap ? binMap[i] : i;
3545 if(mapped_i>=0) {
3546 if(histToLocalBin.find(mapped_i)==histToLocalBin.end()) {
3547 histToLocalBin[mapped_i]=nBin;
3548 nBin++;
3549 }
3550 }
3551 }
3552 // number of local indices: nBin
3553 if(nBin>0) {
3554 // construct inverse mapping function
3555 // local index -> histogram bin
3556 Int_t *localBinToHist=new Int_t[nBin];
3557 for (std::map<int, int>::const_iterator i = histToLocalBin.begin(); i != histToLocalBin.end(); ++i) {
3558 localBinToHist[(*i).second]=(*i).first;
3559 }
3560
3561 const Int_t *rows_eOrig=eOrig->GetRowIndexArray();
3562 const Int_t *cols_eOrig=eOrig->GetColIndexArray();
3563 const Double_t *data_eOrig=eOrig->GetMatrixArray();
3564
3565 // remap error matrix
3566 // matrix row i -> origI (fXToHist[i])
3567 // origI -> destI (binMap)
3568 // destI -> ematBinI (histToLocalBin)
3569 TMatrixD eRemap(nBin,nBin);
3570 // i: row of the matrix eOrig
3571 for(Int_t i=0;i<GetNx();i++) {
3572 // origI: pointer in output histogram with all bins
3573 Int_t origI=fXToHist[i];
3574 // destI: pointer in the histogram rhoi
3575 Int_t destI=binMap ? binMap[origI] : origI;
3576 if(destI<0) continue;
3577 Int_t ematBinI=histToLocalBin[destI];
3578 for(int index_eOrig=rows_eOrig[i];index_eOrig<rows_eOrig[i+1];
3579 index_eOrig++) {
3580 // j: column of the matrix eOrig
3581 Int_t j=cols_eOrig[index_eOrig];
3582 // origJ: pointer in output histogram with all bins
3583 Int_t origJ=fXToHist[j];
3584 // destJ: pointer in the histogram rhoi
3585 Int_t destJ=binMap ? binMap[origJ] : origJ;
3586 if(destJ<0) continue;
3587 Int_t ematBinJ=histToLocalBin[destJ];
3588 eRemap(ematBinI,ematBinJ) += data_eOrig[index_eOrig];
3589 }
3590 }
3591 // invert remapped error matrix
3592 TMatrixDSparse eSparse(eRemap);
3593 Int_t rank=0;
3594 TMatrixDSparse *einvSparse=InvertMSparseSymmPos(&eSparse,&rank);
3595 if(rank!=einvSparse->GetNrows()) {
3596 Warning("GetRhoIFromMatrix","Covariance matrix has rank %d, expected %d",
3597 rank,einvSparse->GetNrows());
3598 }
3599 // fill to histogram
3600 const Int_t *rows_eInv=einvSparse->GetRowIndexArray();
3601 const Int_t *cols_eInv=einvSparse->GetColIndexArray();
3602 const Double_t *data_eInv=einvSparse->GetMatrixArray();
3603
3604 for(Int_t i=0;i<einvSparse->GetNrows();i++) {
3605 Double_t e_ii=eRemap(i,i);
3606 Double_t einv_ii=0.0;
3607 for(Int_t index_einv=rows_eInv[i];index_einv<rows_eInv[i+1];
3608 index_einv++) {
3609 Int_t j=cols_eInv[index_einv];
3610 if(j==i) {
3611 einv_ii=data_eInv[index_einv];
3612 }
3613 if(invEmat) {
3614 invEmat->SetBinContent(localBinToHist[i],localBinToHist[j],
3615 data_eInv[index_einv]);
3616 }
3617 }
3618 Double_t rho=1.0;
3619 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3620 if(rho>=0.0) rho=TMath::Sqrt(rho);
3621 else rho=-TMath::Sqrt(-rho);
3622 if(rho>rhoMax) {
3623 rhoMax = rho;
3624 }
3625 //std::cout<<i<<" "<<localBinToHist[i]<<" "<<rho<<"\n";
3626 rhoi->SetBinContent(localBinToHist[i],rho);
3627 }
3628
3629 DeleteMatrix(&einvSparse);
3630 delete [] localBinToHist;
3631 }
3632 return rhoMax;
3633}
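A minimal usage sketch of the bin-map convention documented above. All names below (unfold, rhoCombined, myBinMap) are hypothetical; the sketch assumes a configured TUnfold object after a call to DoUnfold(), whose one-dimensional output has four truth bins, of which the first two are merged into a single destination bin:

    // hypothetical example: 4 original bins mapped to 3 destination bins
    // index = original histogram bin (0 = underflow), value = destination bin (-1 = discard)
    Int_t myBinMap[6];
    myBinMap[0] = -1;  // discard underflow
    myBinMap[1] =  1;  // original bin 1 -> destination bin 1
    myBinMap[2] =  1;  // original bin 2 -> destination bin 1 (merged)
    myBinMap[3] =  2;  // original bin 3 -> destination bin 2
    myBinMap[4] =  3;  // original bin 4 -> destination bin 3
    myBinMap[5] = -1;  // discard overflow
    TH1D *rhoCombined = new TH1D("rhoCombined",";destination bin;#rho_{i}",3,0.5,3.5);
    Double_t rhoMax = unfold.GetRhoI(rhoCombined,myBinMap);

When a bin map is given, GetRhoI() forwards to this function with the covariance matrix fVxx, so the remapped error matrix is built and inverted exactly as shown in the code above.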
3634
3635////////////////////////////////////////////////////////////////////////////////
3636/// Initialize bin contents and bin errors for a given histogram.
3637///
3638/// \param[out] h histogram
3639/// \param[in] x new histogram content
3640///
3641/// all histgram errors are set to zero, all contents are set to <b>x</b>
3642
3644{
3645 Int_t nxyz[3];
3646 nxyz[0]=h->GetNbinsX()+1;
3647 nxyz[1]=h->GetNbinsY()+1;
3648 nxyz[2]=h->GetNbinsZ()+1;
3649 for(int i=h->GetDimension();i<3;i++) nxyz[i]=0;
3650 Int_t ixyz[3];
3651 for(int i=0;i<3;i++) ixyz[i]=0;
3652 while((ixyz[0]<=nxyz[0])&&
3653 (ixyz[1]<=nxyz[1])&&
3654 (ixyz[2]<=nxyz[2])){
3655 Int_t ibin=h->GetBin(ixyz[0],ixyz[1],ixyz[2]);
3656 h->SetBinContent(ibin,x);
3657 h->SetBinError(ibin,0.0);
3658 for(Int_t i=0;i<3;i++) {
3659 ixyz[i] += 1;
3660 if(ixyz[i]<=nxyz[i]) break;
3661 if(i<2) ixyz[i]=0;
3662 }
3663 }
3664}
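The odometer-style loop above visits every bin of a 1-, 2- or 3-dimensional histogram, including the underflow and overflow bins. For the two-dimensional case it is equivalent to the following explicit nested loops (a sketch only; h2 and x stand for the histogram and the new content):

    // equivalent explicit loops for a TH2 (sketch, not part of TUnfold)
    for(Int_t ix=0;ix<=h2->GetNbinsX()+1;ix++) {
       for(Int_t iy=0;iy<=h2->GetNbinsY()+1;iy++) {
          Int_t ibin=h2->GetBin(ix,iy);   // global bin number, includes under/overflow
          h2->SetBinContent(ibin,x);
          h2->SetBinError(ibin,0.0);
       }
    }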
3665
3666 void TUnfold::SetEpsMatrix(Double_t eps) {
3667 // set accuracy for matrix inversion
3668 if((eps>0.0)&&(eps<1.0)) fEpsMatrix=eps;
3669}
3670
3671////////////////////////////////////////////////////////////////////////////////
3672/// Return a string describing the TUnfold version.
3673///
3674/// The version is reported in the form Vmajor.minor
3675/// Changes of the minor version number typically correspond to
3676/// bug-fixes. Changes of the major version may result in adding or
3677/// removing data attributes, such that the streamer methods are not
3678/// compatible between different major versions.
3679
3681{
3682 return TUnfold_VERSION;
3683}
3684