Re: Bdst analysis

From: Djamel Ouerdane (ouerdane@nbi.dk)
Date: Tue Nov 19 2002 - 16:43:28 EST

    I like this idea of probability. I started something based on slice
    fitting (e.g. 1/beta vs p, ring radius vs p, etc.), where you get the
    sigmas of the observables. Then, once you know what a true n-sigma cut
    on these observables means, you can start generating probabilities.
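
    Roughly what I have in mind for the slice fits, as a ROOT sketch (the
    histogram name and the details are invented here, this is not code from
    the bdst modules):

        // Sketch: extract the sigma of 1/beta as a function of p by fitting
        // gaussians in momentum slices.  The histogram is assumed to be filled
        // in the preloop with x = p and y = 1/beta (names are made up).
        #include "TH2F.h"
        #include "TH1D.h"
        #include "TDirectory.h"
        #include "TString.h"

        void FitOneOverBetaSlices(TH2F* hOneOverBetaVsP)
        {
          // Gaussian fit of the 1/beta distribution in every momentum bin
          // with at least 10 entries; results are written to gDirectory.
          hOneOverBetaVsP->FitSlicesY(0, 0, -1, 10);

          // FitSlicesY stores the fitted parameters vs p as <name>_1 (mean)
          // and <name>_2 (sigma).
          TH1D* mean  = (TH1D*)gDirectory->Get(Form("%s_1", hOneOverBetaVsP->GetName()));
          TH1D* sigma = (TH1D*)gDirectory->Get(Form("%s_2", hOneOverBetaVsP->GetName()));

          // mean and sigma vs p can then be fitted with smooth functions and
          // used to count sigmas from the expectation curve of each species.
          sigma->Draw();
          mean->Draw("same");
        }

    The same mechanics would apply to ring radius vs p and the other
    observables.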
    
    Djam
    
    
    >    Dear Claus,
    >              your scheme seems very reasonable. Do you plan to decide
    > whether a given track is a pion, a kaon, or a proton? An alternative
    > is to statistically identify each track and say that it is 85% a pion,
    > 14% a kaon and 1% a proton. If one cuts on "orthogonal" variables such
    > as mass**2 and the confidence level of the track, then the total
    > efficiency is just the product of the individual efficiencies. These
    > are surely best evaluated from the fits and often leave some room for
    > interpretation. This wiggle room ends up in your systematic error. If
    > one cuts on variables that are correlated, things are a lot harder.
    >    However, the efficiencies can be evaluated from histograms after all
    > of your loops.
    >                 Yours, Michael
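
    To make the 85% / 14% / 1% idea concrete: once the expectation curves and
    the momentum-dependent sigmas are known from the preloop fits, the
    probabilities could be generated along these lines (a sketch only; the
    expectation and sigma parametrisations below are placeholders, not
    fitted values):

        // Sketch: relative pi/K/p probabilities from a measured mass**2.
        // ExpectedM2() and SigmaM2() stand in for whatever the preloop fits give.
        #include "TMath.h"

        static const Double_t kM2[3] = { 0.0195, 0.2437, 0.8804 };  // pi, K, p mass^2 [GeV^2]

        Double_t ExpectedM2(Int_t species, Double_t /*p*/) { return kM2[species]; }
        Double_t SigmaM2(Int_t /*species*/, Double_t p)    { return 0.01 + 0.02 * p * p; }  // made-up shape

        // Fill prob[0..2] with the relative probability that the track is a pi, K or p.
        void PidProbabilities(Double_t m2, Double_t p, Double_t prob[3])
        {
          Double_t sum = 0;
          for (Int_t i = 0; i < 3; i++) {
            prob[i] = TMath::Gaus(m2, ExpectedM2(i, p), SigmaM2(i, p), kTRUE);  // normalised gaussian
            sum += prob[i];
          }
          for (Int_t i = 0; i < 3; i++)
            prob[i] = (sum > 0) ? prob[i] / sum : 0;
        }

    And on the orthogonal cuts: if one cuts at, say, 3 sigma in mass**2 and
    independently on the track confidence level, the combined efficiency is
    simply the product of the two efficiencies read off from the individual
    fits.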
    > 
    > 
    > Quoting "Claus O. E. Jorgensen" <ekman@nbi.dk>:
    > 
    > > 
    > > Hi analyzer,
    > > 
    > > After some correspondence on the analysis of the bdsts, I have
    > > now written up some thoughts on how to get on with the analysis
    > > from the bdsts. The ambition is to end up with some kind of
    > > standard analysis method in the future.
    > > As Jens-Ivar pointed out, this discussion should be on the dev-list,
    > > and, as Flemming pointed out, we should (think a bit and) agree
    > > before we do the coding (thanks for the comments).
    > > 
    > > What we want to do:
    > > 
    > > - Select events
    > > - Select good tracks
    > > - Identify particles
    > > - Evaluate efficiencies
    > > - ???
    > > 
    > > I like the way it is done in the Bdst{Mrs,Fs}Ana modules (the new
    > > versions of Br{Mrs,Fs}DstAna). The basic idea is to scan the data and
    > > determine various constants for event selection, momentum
    > > determination, track selection and pid selection (in the code these
    > > scans are called the prepreloop and the preloop). The constants found
    > > there are then used when selecting events, tracks and identified
    > > particles in the final loop over all events and tracks.
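
    (Just to have something concrete to look at, the flow could be pictured
    roughly like this; all class, method and type names below are invented
    for illustration and are not the actual Bdst{Mrs,Fs}Ana interface.)

        // Sketch of the scan-then-select structure: scan passes fix the
        // constants, the "real" loop applies them.  Names are made up.
        #include <vector>

        struct BdstEvent { /* whatever the bdst provides per event */ };

        class BdstAnaSketch {
        public:
          void FillScanHistograms(const BdstEvent&) { /* prepreloop + preloop histograms */ }
          void DetermineConstants()                 { /* fit them, keep means and sigmas  */ }
          void ProcessEvent(const BdstEvent&)       { /* select events/tracks, do the pid */ }
        };

        void RunSketch(const std::vector<BdstEvent>& events)
        {
          BdstAnaSketch ana;
          for (const BdstEvent& ev : events) ana.FillScanHistograms(ev);  // scan pass(es)
          ana.DetermineConstants();                                       // cuts are fixed here
          for (const BdstEvent& ev : events) ana.ProcessEvent(ev);        // the final loop
        }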
    > > 
    > > What are these constants and how are they determined?
    > > 
    > > Event Selection:
    > > 
    > >    - Difference in bb and zdc vertex (mean and sigma).
    > >      Determined by fitting a simple histogram of (bbVtxZ - zdcVtxZ)
    > >      which is filled in the preloop.
    > >      An nSigma cut can then be applied in the "real" loop
    > >      (see the sketch after this list).
    > > 
    > >    - Vertex range. Set by the user.
    > > 
    > >    - Centrality range. Set by the user.
    > > 
    > >    - Good trigger.
    > > 
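    For the bb-zdc vertex difference above, the mechanics could be as simple
    as this (sketch only; histogram and variable names are made up):

        // Sketch: gaussian fit of the vertex difference filled in the preloop,
        // then an nSigma cut in the real loop.  All names are invented.
        #include "TH1F.h"
        #include "TF1.h"
        #include "TMath.h"

        TH1F gVtxDiff("gVtxDiff", "bbVtxZ - zdcVtxZ (cm)", 200, -20., 20.);
        Double_t gVtxDiffMean  = 0;
        Double_t gVtxDiffSigma = 1;

        // preloop: gVtxDiff.Fill(bbVtxZ - zdcVtxZ) for every event

        void FitVertexDifference()
        {
          gVtxDiff.Fit("gaus", "Q");                                      // simple gaussian fit
          gVtxDiffMean  = gVtxDiff.GetFunction("gaus")->GetParameter(1);  // mean
          gVtxDiffSigma = gVtxDiff.GetFunction("gaus")->GetParameter(2);  // sigma
        }

        // "real" loop:
        Bool_t AcceptVertex(Double_t bbVtxZ, Double_t zdcVtxZ, Double_t nSigma = 3)
        {
          return TMath::Abs((bbVtxZ - zdcVtxZ) - gVtxDiffMean) < nSigma * gVtxDiffSigma;
        }

    The track-to-vertex constants (meanZ, sigmaZ, meanY, sigmaY) in the next
    section would follow exactly the same recipe, just with two histograms.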
    > > 
    > > Track Selection:
    > > 
    > >    - Track to vertex (meanZ, sigmaZ, meanY, sigmaY)
    > >      Determined by fitting histograms filled in the preloop.
    > > 
    > >    - Good track status.
    > > 
    > >    - Fiducial cuts?
    > > 
    > > 
    > > Pid Selection:
    > > 
    > >    - Constants determining the expectation curves (for example mass2)
    > >      for each particle and each pid detector. Found by fitting
    > >      histograms filled in the preloop.
    > > 
    > >    - Sigma from expectation curve (momentum dependent).
    > > 
    > >    We can maybe have a method that returns the number of sigmas from
    > >    the expectation curve for a given particle species, or?
    > >    I don't think there's anything wrong with "fine-tuning the
    > >    calibrations" at this point (the alternative is several iterations
    > >    of the calibrations to get the 100% correct values).
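
    Such a method could look roughly like this, here for the 1/beta
    expectation curve (the sigma parametrisation is a placeholder for the
    fitted, momentum-dependent sigma from the preloop):

        // Sketch: number of sigmas from the 1/beta expectation curve for an
        // assumed mass.  SigmaOneOverBeta() is a placeholder, not a fit result.
        #include "TMath.h"

        Double_t ExpectedOneOverBeta(Double_t mass, Double_t p)
        {
          return TMath::Sqrt(p * p + mass * mass) / p;   // 1/beta = E/p for the assumed mass
        }

        Double_t SigmaOneOverBeta(Double_t /*p*/)
        {
          return 0.01;   // placeholder; would come from the fitted sigma(p) of the slices
        }

        Double_t NSigmaOneOverBeta(Double_t oneOverBeta, Double_t p, Double_t mass)
        {
          return (oneOverBeta - ExpectedOneOverBeta(mass, p)) / SigmaOneOverBeta(p);
        }

        // e.g. a 3 sigma pion selection in the real loop:
        //   if (TMath::Abs(NSigmaOneOverBeta(invBeta, p, 0.1396)) < 3) { /* pion */ }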
    > > 
    > > Efficiency:
    > > 
    > >    - Pawel is the expert in this and I'll not try to describe it, but
    > >      I think it should be in a separate module.
    > > 		
    > > The stuff described above is actually not so far from what is in
    > > the code now. Maybe we can split the tasks into several modules, or?
    > > I would like to try to put the skeleton together, and then we can
    > > always discuss the details... any comments or suggestions?
    > > 
    > > How can we implement the pp (or d+Au) analysis in such a framework?
    > > How do we want to pass on the information (at this point it's saved
    > > in an ntuple, which adds another step in the analysis chain)?
    > > 
    > > Cheers,
    > > 
    > > Claus
    > > 
    > > +------------------------------------------------------------+
    > > | Claus E. Jørgensen                                         |
    > > | Cand. Scient.                  Phone  : (+45) 33 32 49 49  |
    > > |                                Cell   : (+45) 27 28 49 49  |
    > > | Niels Bohr Institute, Ta-2,    Office : (+45) 35 32 54 04  |
    > > | Blegdamsvej 17, DK-2100,       E-mail : ekman@nbi.dk       |
    > > | University of Copenhagen       Home   : www.nbi.dk/~ekman/ |
    > > +------------------------------------------------------------+
    > > 
    > 
    > 
    > 
    > 
    > -------------------------------------------------
    > This mail sent through IMP: http://horde.org/imp/
    > 
    
    -- 
    Djamel Ouerdane ------------------------------------------o
    |  Niels Bohr Institute      |  Home:                     |
    |  Blegdamsvej 17, DK-2100 Ø |  Jagtvej 141 2D,           |
    |  Fax: +45 35 32 50 16      |  DK-2200 Copenhagen N      |
    |  Tel: +45 35 32 52 69      |  +45 35 86 19 74           |
    |                  http://www.nbi.dk/~ouerdane            |
    |                  ouerdane@nbi.dk                        |
    o---------------------------------------------------------o
    

