Re: Updates and questions for Dry Run

From: Flemming Videbaek (videbaek@sgs1.hirg.bnl.gov)
Date: Wed Feb 16 2000 - 10:32:33 EST


    Kris,
    
    I observed the same thing, and I also inserted the DebugLevel() checks.
    I will try to have the changes merged today.
    
    I do not think we are quite in agreement on the simpler structures. Take a
    look at what I did to
    BrDigRICH - easy to deal with, no detector yet. I made
    BrDigRICH : BrDataObject rather than TObject - it can then be placed
    directly in the event node and not in a table. I talked to JH and YK and
    they agree they would like to do this
    too for MULTTiles. So if you agree we will do this this morning.
    
    On the ZDC, it turns out KO has not done his end, but he will do so today.
    
    I have looked a bit at Christian's stuff but have some comments. I will
    include here a message I sent to them a while back but never got a
    response to.
    
    Cheers
    Flemming
    -- Will likely spend time in CH/FEHut.
    
    
    ----- Original Message -----
    From: <hagel@comp.tamu.edu>
    To: BRAT Development List <brahms-dev-l@bnl.gov>
    Sent: Tuesday, February 15, 2000 9:37 PM
    Subject: Updates and questions for Dry Run
    
    
    > Hello,
    > I ran SuperMon from here (Texas A & M) and I found a few things wrong as
    > well as some things I don't understand.
    >
    > Fixed Stuff:
    > 1. While getting data from the event builder, BrRawDataInput returned
    > many numbers.  I traced this to a cout statement in there, so I put the
    > condition on that print statement that the DebugLevel() needed to be
    > other than 0.  That quietened BrRawDataInput considerably.
    >
    > 2. The number of analyzed events in BrSuperMonitor started off with a
    > large number.  This was fixed.
    >
    > 3. The list boxes for spectra and pictures were widened (in a sloppy
    > way) to aid in viewing longer spectra and picture names.
    >
    > What I don't understand.
    > 1. I changed BrRawDataInput last week to reflect (among other things)
    > changes in the recordId's of ZDC's since we agreed to read the ZDC left
    > and right in a single record.  I left the unpacking of the old ones
    > (10010 and 10011; now labeled obsolete in
    > http://opus.brahms.bnl.gov/Documentation/records.txt ) and print out a
    > message if I get some of them.  I get lots of them.  I also appear to
    > get some of the new ones (10020), at least I think I get pedestals in
    > the spectra.  Is there something I don't understand???
    >
    > Anyway, the changes I made are checked into the repository.
    >
    > Regards
    >
    > Kris
    >
    
    Dear Christian and Anders,
    
    I am not sending this comment to the general list server at this point,
    mainly since Christian gave me a draft, preliminary version of his
    write-up. I do not want to comment publicly on something people have not
    seen yet.
    
    First, on the general words from Anders: these are well taken, and I will
    address some general issues illustrated by example.
    
    The structure of calibration data has at least two aspects:
    
    The filling pattern, i.e. the structure and information available when a
    table is created and filled, and
    the usage pattern, i.e. the code using the calibration.
    This may look like a redundant remark, but let me clarify with an example,
    as usual from TOF. It could have been any other detector with a small
    number of modules. It also relates to part of what Christian is working on.
    
    I believe the following set of parameters is a fairly complete list of
    what is needed for each PMT and SLAT. The actual order of calibration may
    be different, and Ian can surely comment on this - the concern here is the
    need for separation.
    
    PMT Top/Bottom - pedestal, gain, 2nd-order term (a0, a1, a2) and estimated
    error(s).
    
    PMT Bottom - pedestal, gain.
    
    PMT Top/Bottom - time pedestal (t0 relative to other tubes),
    psec/channel, ...
    
    PMT Top/Bottom - slewing correction, i.e. a set of constants that are
    applied to the normalized/calibrated times and ADC values.
    
    SLAT - offset, gain to convert Time(top)-Time(bottom) to slat position.
    
    Offset - to convert (Time(up)+Time(down))/2 (may be taken care of by the
    individual tubes).
    
    Detector - overall offset in T0.
    
    My reason for the division into the individual parts is that they will
    likely be created at different times using different programs, and will
    have different validity periods. Example: the psec/channel in the TDC will
    be measured once per year. The relative time offset between tubes is most
    likely constant as long as no cabling is changed. On the other hand, ADC
    gains etc. depend on HV setting, temperature, etc., and the overall offset
    in t0 for the detector will probably need to be fine-tuned per run (i.e.
    on an hourly basis). The different creation times imply that you would not
    want the ADC gain and the slewing constants in the same table, because
    that forces updates of both together, while their validity periods are
    likely different.
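A minimal sketch of what I mean by separate tables with separate validity periods (all names here are hypothetical, not BRAT or database code): each calibration quantity gets its own table, so inserting an hourly ADC-gain entry never touches the yearly TDC-slope entry.

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch: one table per calibration quantity, rows keyed by
// module number, each row carrying its own validity interval.
struct Validity {
  long begin, end;  // e.g. seconds; entry is valid for begin <= t < end
  bool Contains(long t) const { return begin <= t && t < end; }
};

struct CalEntry {
  Validity valid;
  double value;
};

class CalTable {
public:
  void Insert(int module, Validity v, double value) {
    fRows.insert({module, CalEntry{v, value}});
  }
  // Fetch the value valid at time t; returns false if none applies.
  bool Get(int module, long t, double& out) const {
    auto range = fRows.equal_range(module);
    for (auto it = range.first; it != range.second; ++it)
      if (it->second.valid.Contains(t)) { out = it->second.value; return true; }
    return false;
  }
private:
  std::multimap<int, CalEntry> fRows;
};
```

With this split, the yearly psec/channel table gets one long-lived row per module, while the gain table accumulates short-lived rows; neither update invalidates the other.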
    
    The point I am trying to make here is the need for a detector component
    (PMT) to have multiple tables. I realize that this can of course be set up
    by defining the components differently, e.g.
    
    PMTADCT1, PMTADCB1, PMTTIMET1, ..., and having different tables
    PMTADCTable, PMTTIMETable etc., and probably should be, because of the
    generation pattern.
    
    On the other hand, when the time comes to apply the calibration values in
    e.g. the reconstruction code, the user code will most likely want to do
    something like the following, taken (and expanded) from Kris'/Ian's sample
    code in BrCalibrateTof (a better name is probably
    TofApplyCalibrationModule). Note this is not working code.
    
    
    
    for (is = 0; is < NumHits; is++) {
      digtof_p = (BrDigTof*)DigitizedTof->At(is);
    
      // Raw TDC/ADC values for this hit
      tdcBot = digtof_p->GetTdcDown();
      tdcTop = digtof_p->GetTdcUp();
      adcTop = digtof_p->GetAdcUp();
      adcBot = digtof_p->GetAdcDown();
      slatNo = digtof_p->GetModule();
    
      // Calibration constants, possibly from different tables
      adcGainUp = fParamsTof_p->GetADCGainUp(slatNo);
      adcOffUp  = fParamsTof_p->GetADCOffsetUp(slatNo);
    
      ...
    
      // Time of flight and position along the slat
      tof  = (tdcBot + tdcTop - (Float_t)2.*offset) * slope / (Float_t)2.
             - transversal_time;
      ypos = (tdcBot - tdcTop) * slope / ((Float_t)2. * propagation_time)
             + fParamsTof_p->GetYoffset(slatNo);
    
      // Pedestal-subtracted, gain-corrected ADC sum
      adcSum = (adcTop - adcOffUp)*adcGainUp + (adcBot - adcOffBot)*adcGainBot;
      adcAvg = adcSum / 2;
    
      caltof_p = new BrCalTof();
      CalibratedTof->Add(caltof_p);
      caltof_p->SetTof(tof);
      caltof_p->SetPosition(ypos);
      caltof_p->SetSlatno(slatNo);
      caltof_p->SetSlEnergy(adcAvg);
    }
    
    The point of including this incomplete code snippet is to illustrate that
    the user code in this case wants to deal with all (or most) of the
    calibration constants at a given time, taken from different tables and
    with different validity periods. I find it important that we hide (at this
    point) in the code the fact that this information actually comes from
    quite different SQL tables.
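A sketch of the kind of accessor I have in mind (hypothetical names, not the actual DetectorParams API): the reconstruction code asks one object for all the constants it needs for a slat, and the object hides which table each number came from.

```cpp
#include <cassert>

// Hypothetical facade: one parameter object per slat as seen by the user
// code; internally the fields may come from different SQL tables with
// different validity periods.
struct TofSlatParams {
  double adcGainUp;  // from an ADC-gain table (HV/temperature dependent)
  double adcOffUp;   // from a pedestal table
  double tdcSlope;   // from a TDC-calibration table (measured yearly)
  double t0;         // from a per-run offset table
};

class TofParamsFacade {
public:
  // In real code this would query the database for the entries valid at
  // the current time; here the values are fixed for illustration only.
  TofSlatParams GetSlatParams(int /*slatNo*/) const {
    return TofSlatParams{1.1, 12.0, 25.0, 0.5};
  }
};
```

The reconstruction loop then never needs to know how many tables exist or how their validity intervals differ; only the facade does.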
    
    I am aware this is not of direct concern for the design of the tables; it
    comments more on the API (inside BRAT) to the user code. The
    DetectorParams classes as they exist right now have to develop in parallel
    with the development of the database(s) to be functional. At present some
    geometry and pure simulation parameters are kept together with
    calibration-like data, something that should be changed.
    
    For the code where calibrations are generated and tables are created and
    filled, the code could and should in fact be much closer to the
    implementation.
    
    For the database it is also important to have access to simulation
    calibrations (and geometry). These differ in a couple of aspects but are
    mostly parallel to the 'real' data. E.g. the values are probably fixed
    almost forever in the calibrations, while the geometry depends on the
    experiment setting rather than the run. It seems, though, that this can
    easily be accommodated by the revision_type in the intervals.
    
    Deep down I have a concern about the efficiency of retrieval due to the
    segmentation into very small entities (e.g. the RICH has 320 PMTs); it
    could make more sense to have 'arrays' of 320 numbers rather than
    individual PMTs. At this point let us not worry, but move forward along
    the lines you are proposing, and come up with concrete examples.
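To make the array idea concrete (again a hypothetical sketch, not a proposed schema): instead of 320 single-PMT rows fetched with 320 lookups, one array-valued entry per validity interval is stored and fetched once.

```cpp
#include <cassert>
#include <array>
#include <vector>

// Hypothetical sketch: all 320 RICH PMT gains stored as one array-valued
// row per validity interval, so a whole detector's constants come back
// from a single retrieval.
constexpr int kNRichPmt = 320;
using RichGains = std::array<double, kNRichPmt>;

struct RichGainTable {
  std::vector<RichGains> revisions;  // one entry per validity interval
  const RichGains& GetLatest() const { return revisions.back(); }
};
```

Whether the per-channel or per-array layout wins depends on how often single channels are updated versus how often whole detectors are read; the sketch only illustrates the retrieval side.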
    
    I should also say that at this point MySQL is the database we will use, as
    said by Anders but left open in the draft.
    
    Jens Jorgen asked me to say something on geometry, but this will have to
    wait. It is not completely trivial, and consists of much more than survey
    points. It has intrinsic measures for detectors, calibrated offsets,
    magnetic field maps (or pointers to magnetic field files), ...
    
    best regards and thanks for the work so far
    
        Flemming
    



    This archive was generated by hypermail 2b29 : Wed Feb 16 2000 - 09:24:47 EST