Hi Steve et al,

On Wed, 10 Oct 2001 14:05:13 -0500 "Stephen J. Sanders"
<ssanders@falcon.cc.ukans.edu> wrote concerning "Energy calibration of
multiplicity array":

> Hi,
> I have written a short document describing the energy calibration of
> the si and tile elements of the multiplicity array which can be
> accessed at
>
> http://www.phsx.ukans.edu/~sanders/multCalib.

Great! Shouldn't it really be an analysis note?

> The program used for these calibrations is in the brahms cvs area at
> brahms_app/sjs_app/multCalib.
>
> You can see the program output (described in the document linked
> above) in my account on the rcas machines in the
> ~sanders/brahms_app/sjs_app/bin directory.
>
> This is the calibration that Hiro has been using to produce his
> recent dN/deta plots.

When will the calibrations hit the database?

Several comments on your code [I know this is rather long, but please
read it carefully]:

General:

* Please document all methods (it's methods, not routines -
  Fortranitis).
* Please document important steps in each method too.

multCalib.cxx:

* Use BrAppOptionManager instead of parsing the command line yourself.
  Many examples of this exist in BRAT.

    int main(int argc, char** argv)
    {
      BrAppOptionManager* optionManager = BrAppOptionManager::Instance();
      optionManager->SetCommandLine(argc, argv);
      optionManager->SetVersion(0, 1, 0, "Steve Sanders");
      optionManager->SetHelp("make TMA/SMA calibrations");
      BrIntAppOption* runnoOption =
        new BrIntAppOption('r', "runno", "The run number", 0);
      optionManager->AddOption(runnoOption);
      ...

* Use ifstream instead of the old C fopen, fread, ...

    if (runnoOption->GetValue() == 0) {
      ifstream file("dataDirectory.dat", ios::in);
      if (!file) {
        cerr << "Include run numbers on command line...STOP" << endl;
        return 1;
      }
      cout << "Scanning..." << endl;
      Int_t  tempRun;
      Int_t  tempSeq;
      Char_t c[3];
      while (true) {
        file >> c;
        file >> tempRun;
        file >> c;
        file >> tempSeq;
        if (file.eof()) break;
        if (file.fail()) {
          cerr << "Error while reading file" << endl;
          return 1;
        }
        if (tempRun > LastRun) LastRun = tempRun;
        if (tempRun == LastRun && tempSeq > LastSeq) LastSeq = tempSeq;
      }
      cout << " Last run(seq) found: " << LastRun << "("
           << LastSeq << ")" << endl;
      seq0  = (LastSeq - 1 > 0) ? LastSeq - 1 : 0;
      runno = LastRun;
      file.close();
    }

* Use TString for string manipulations instead of the old C functions.

    TString line("MultNtuple *ntup = new MultNtuple();");
    app->ProcessLine(line, 1);
    line = Form("ntup->Init(%d);", runno);
    ...
    TString fname(Form("ntuples/multNtuple_%d.root", rname));
    ifstream ntuplefile(fname);
    ...

BTW, why the heck do you create all those objects via the interpreter?
Why don't you just create them directly in the program?

MultRaw:

This class must be replaced by proper modules and configuration
scripts.

MultRaw::Raw:

What you do here is essentially redo the reconstruction of the ZDC and
BB data, and then fill the TMA/SMA ADCs, run number, triggers, BB
vertex, and ZDC vertex into an ntuple. That's a bit of a waste of
time, since the data is already there. Instead, you should make a
simple module that accumulates histograms in the Event method, and
then does the various operations in the Finish method. [Refer to
BrTofPedCalModule.]

MultRaw::FindPedestals:

    class BrMultPedCalModule : public BrModule {
    private:
      TH1D** fRaw; // ADC histograms
    public:
      BrMultPedCalModule(const char* name, const char* title) { ... }
      void DefineHistograms() {
        fRaw = new TH1D*[nSingles];
        for (Int_t i = 0; i < nSingles; i++)
          fRaw[i] = new TH1D(Form("raw%03d", i),
                             Form("Raw ADC channel %d", i),
                             fNBins, 0, fNBins);
      }
      void Event(BrEventNode* inNode, BrEventNode* outNode) {
        BrTileDig* dig = (BrTileDig*)inNode->GetObject(...);
        ...
        for (Int_t i = 0; i < nSingles; i++)
          fRaw[i]->Fill(dig->GetAdc(i+1));
      }
      void Finish() {
        ofstream file(fFileName, ios::out);
        if (!file) {
          ...
          return;
        }
        for (Int_t i = 0; i < nSingles; i++) {
          Int_t    maxBin = fRaw[i]->GetMaximumBin();
          Double_t maxAdc = fRaw[i]->GetBinCenter(maxBin);
          TF1* fit = new TF1("fit", "gaus",
                             TMath::Max(maxAdc - 6, 0.), maxAdc + 10);
          fRaw[i]->Fit("fit", "QR", "SAME");
          Float_t pedestal      = fit->GetParameter(1); // gaus mean
          Float_t pedestalWidth = fit->GetParameter(2); // gaus sigma
          file << i << "\t" << pedestal << "\t" << pedestalWidth << endl;

          // For the tiles, you may want to find the gap here too.
          TString n(GetName());
          if (n != BrDetectorList::GetDetectorName(kBrTile))
            continue;
          Int_t gapTop = 0;
          Int_t gapBot = 0;
          for (Int_t j = fNBins; j > maxBin; j--) {
            // Are we in the gap?
            if (fRaw[i]->GetBinContent(j) == 0) {
              // Is this the first bin in the gap?
              if (gapTop == 0)
                // Set the upper limit of the gap
                gapTop = j;
              else
                // Otherwise we set the lower limit
                gapBot = j;
            }
            // If not in the gap, but we have been there, we can break out.
            else if (gapBot != 0)
              break;
          }
        }
      }
    };

For pedestals, one must of course make sure that only pulser events
are read. That is done by setting up a BrTriggerFilter in the module
pipeline in your configuration script. An alternative - though not
recommended - is to make a module that reads the data from the outNode
into an ntuple and stores that in the histogram file.

MultPulser:

It seems to me that in this module you basically read in the
calibrations from the file PTQCalibrations.dat, but I see nowhere
where you create the actual calibrations. Is this right? Is it because
PTQCalibrations.dat contains data from a special measurement done in a
test lab or something?

MultGeant:

Again, this class must be a module that can go into a regular module
pipeline. I appreciate the fact that this is sort of 'non-standard',
and so we need to do something special. Anyway, make a module that
fills the appropriate histograms in the Event method, and then in
Finish does all the fits and writes the appropriate
parameters/histograms to disk.
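BTW, the gap scan above is the kind of logic that is easy to get
subtly wrong, so it is worth checking in isolation. Here is a minimal
standalone sketch (plain C++, with a simple array standing in for the
TH1D bin contents - the function name and signature are mine, for
illustration only, not BRAT API):

```cpp
// Scan downward from the top bin towards the pedestal peak (maxBin);
// the first empty stretch of bins found is taken to be the ADC gap.
// content[j] stands in for fRaw[i]->GetBinContent(j).
void FindGap(const int* content, int nBins, int maxBin,
             int& gapTop, int& gapBot)
{
  gapTop = 0;
  gapBot = 0;
  for (int j = nBins - 1; j > maxBin; j--) {
    if (content[j] == 0) {           // we are in the gap
      if (gapTop == 0) gapTop = j;   // first empty bin: upper edge
      else             gapBot = j;   // keep lowering the lower edge
    }
    else if (gapBot != 0)            // left the gap again: done
      break;
  }
}
```

Feeding it a spectrum with a hole, e.g. bin contents
{10, 50, 8, 3, 0, 0, 0, 2, 1, 1} with the peak in bin 1, should give
gapTop = 6 and gapBot = 4 - easy to verify by hand.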
Those parameters/histograms can then subsequently be read in by the
module that does the raw data calibrations. Perhaps you want to derive
this module from Br[Tile|Si]DigModule, as these modules already
contain the digitisation (c.f.
brahms_app/cholm_app/jobs/global/SimulCentModule.{h,cxx} as well as
SimulConfig.C in the same directory).

MultDataFits + MultRescale:

Again, this must be a module. This is obviously a bit more tricky than
the previous ones, since for this you need the results of the previous
steps. However, that is easily overcome by having the other steps
write out a file (ROOT or ASCII), and this step read it in.

So what this module should do is, in Init, make sure it has all the
previous calibration parameters, like pedestal, gap, pulser, and GEANT
spectra. In Event, it fills appropriate histograms, taking care to
apply the previous calibrations, so that what I believe should be
filled into the histograms is

  E = (ADC - pedestal - X * pedestalWidth
       - gapWidth * Theta(ADC - gapTop)) * gain

Incidentally, that's the SingleEnergy output of BrMultRdoModule, so
you could just use that in front of this module in the pipeline, pick
up the BrMultRdo object from the outNode, and use the SingleEnergy
fields. Of course this requires that you have previously committed the
pedestal, pedestalWidth, gap, pulser, and gain to the database - which
makes sense, since it's quite possible to do those calibrations
independently of the energy calibrations.

Finally, in Finish, it should fit the histograms to a gaus+pol0, and
then rescale so that the BRAG and RAW spectra line up, and then write
to disk. Notice that this is the only place the BRAG spectra are used!

In summary, the above - 'recommendations' is too loose a word - will
require you to do separate steps:

  1) Make pulser calibrations
  2) Make pedestal, pedestalWidth, and gap calibrations
  3) Obtain the BRAG spectra
  4) Obtain the raw data spectra and rescale to the BRAG spectra

Each of these steps requires a configuration script of its own, and
should produce some output in a file (either ROOT or ASCII), which can
be checked and committed to the database, with the exception of the
BRAG step.

The fact that you need several steps is _not_ a bad thing. In fact
it's a Good Thing - and more importantly the Right Thing. You see,
what we'd like to do is to make pedestals, and so on, for most (if not
all) detectors in one go. Then you do another pass where you do gains,
slewing, and so on. The fact that the TMA and SMA are a bit
"non-standard" in that you need external data in the form of BRAG
spectra is easily handled by a single step that may be done in
parallel with (or prior to) the calibration passes.

Again, I'd like to stress the importance of implementing calibration,
analysis, etc. classes so that they fit into the general design of
BRAT - that is, classes that do stuff are derived from BrModule, they
deal with single events in the Event method, deal with a full job's
data in Finish, and so on. If we start using different strategies for
various stuff pertaining to data (whatever kind it may be), we quickly
end up with a bloody mess. If, on the other hand, everything follows
the same strategy, then it's very easy for everyone to navigate, use,
and improve the code.

In terms of calibration code, this is in fact crucial, as we would at
some point very much like to set up 'calibration shifts', so that one
person for X days will be responsible for doing all the calibrations.
If this shifter has to deal with Y different approaches, life would
become very hard for that poor bugger. The effect of many approaches
is likely to be that some calibrations are never done!
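To be concrete about the energy formula quoted above: here is a
hypothetical standalone sketch of the single-channel energy
computation (the function and parameter names are mine, not the actual
BrMultRdoModule interface), which makes the role of the Theta step
explicit:

```cpp
// Sketch of E = (ADC - pedestal - X*pedestalWidth
//                - gapWidth * Theta(ADC - gapTop)) * gain
// Theta is the Heaviside step function: channels above the ADC gap are
// shifted down by the gap width before the gain is applied, stitching
// the two halves of the spectrum back together.
double SingleEnergy(double adc, double pedestal, double pedestalWidth,
                    double x, double gapWidth, double gapTop, double gain)
{
  double theta = (adc > gapTop) ? 1.0 : 0.0; // Theta(ADC - gapTop)
  return (adc - pedestal - x * pedestalWidth - gapWidth * theta) * gain;
}
```

E.g. with pedestal = 10, pedestalWidth = 2, X = 3, gapWidth = 50,
gapTop = 1000, and gain = 0.5, an ADC of 100 (below the gap) gives
(100 - 10 - 6) * 0.5 = 42, while an ADC of 1200 (above the gap) gives
(1200 - 10 - 6 - 50) * 0.5 = 567.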
A single approach to things makes life much easier for a shifter, and
has the effect that people will not mind too much doing shifts, and
will do them properly. Finally, if you insist on doing things your own
way, there'll be very few people who understand what you're doing, so
support will approach zero exponentially! That means more work for
you.

The essence of this entire mail can more or less be summarised in:
Conform to the BRAT design!

Yours,

Christian Holm Christensen
-------------------------------------------
Address: Sankt Hansgade 23, 1. th.   Phone:  (+45) 35 35 96 91
         DK-2200 Copenhagen N        Cell:   (+45) 28 82 16 23
         Denmark                     Office: (+45) 353 25 305
Email:   cholm@nbi.dk                Web:    www.nbi.dk/~cholm
This archive was generated by hypermail 2b30 : Thu Oct 11 2001 - 10:31:17 EDT