Confusion applying BayesDivide to projected 1D histogram

From: James Jackson <james.jackson_at_cern.ch>
Date: Wed, 18 Mar 2009 15:23:16 +0000


Hi,

I have two 2D histograms containing pass and total counts respectively. Performing a simple loop over these in 2D returns the correct efficiencies:

    for(Int_t i = 1; i <= 10; ++i)
    {
       // Eta range of this slice (for labelling the printed row)
       Double_t etaMin = clusterEffAllPtEta->GetYaxis()->GetBinLowEdge(i);
       Double_t etaMax = etaMin + clusterEffAllPtEta->GetYaxis()->GetBinWidth(i);

       for(Int_t j = 1; j <= 10; ++j)
       {
          Double_t pass = clusterEffPassingPtEta->GetBinContent(j, i);
          Double_t all = clusterEffAllPtEta->GetBinContent(j, i);
          std::cout << (pass / all) << "\t";
       }
       std::cout << std::endl;
    }

However, I would like the Bayesian errors on these efficiencies. It seems sensible to take 1D slices of each 2D histogram, compute the efficiencies with TGraphAsymmErrors::BayesDivide, and then read off the efficiencies and errors from the graph:

    for(Int_t i = 1; i <= 10; ++i)
    {
       Double_t x, y;
       TH1D *allProj = clusterEffAllPtEta->ProjectionX("all", i, i);
       TH1D *passProj = clusterEffPassingPtEta->ProjectionX("pass", i, i);
       TGraphAsymmErrors g(10);
       g.BayesDivide(passProj, allProj);
       for(Int_t j = 0; j < 10; ++j)
       {
          g.GetPoint(j, x, y);
          Double_t erUp = g.GetErrorYhigh(j);
          Double_t erDown = g.GetErrorYlow(j);
          std::cout << y << " +" << erUp << " -" << erDown << "\t";
       }
    }

However, when I do this, it appears that the first column (as printed) has vanished: all values are shifted one bin to the left, and the last bin is filled with junk. Can anybody suggest what I'm doing wrong?

Regards,
James.

Received on Wed Mar 18 2009 - 16:23:21 CET
