Re: AnNote23: dN/dEta from TPM1

From: Bjorn H Samset (bjornhs@rcf.rhic.bnl.gov)
Date: Tue Feb 20 2001 - 05:30:15 EST

  • Next message: Fouad Rami: "Re: AnNote23: dN/dEta from TPM1"

    Ho :-) A quick reply before I have to run:
    
    >   - Looking at Fig.4 in the report of Claus and Christian (AnNote#22),
    >     one can see clearly that around the nominal intersection point
    >     there is no significant dependence of Mcut on the vertex (all
    >     histograms and curves are absolutely flat). Therefore, I would
    >     expect that both centrality recipes (JH and NBI) should give
    >     exactly the same results if a narrow cut is imposed on the vertex.
    >     It would be therefore very nice if you could compare the results
    >     (centrality dependence of dN/deta) using both methods under
    >     (for example) a +-5cm vertex cut (statistics should be sufficient,
    >     in my old analysis I even used +-3cm).
    
    Not if you look at their figure 5. There you can see that the different
    selections actually pick out different percentages of the total
    MinBias. The 0-6% cut in GetCentrality actually looks like it picks close
    to 10% of the events, even for a narrow vertex cut. Also the 0-50% cut
    only selects a total of 40% of the events. The NBI cuts also have a few
    similar problems, but not as severe. I do have enough statistics to do the
    check you propose, and I can check it pretty quickly once I can sit down
    with it (there's a lot going on this week, so I'm a bit stressed), but I
    really don't think the results will be comparable.
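    That consistency check is easy to state in code. As a toy sketch (the
    exponential multiplicity distribution and the function names are invented
    for illustration - this is not real TPM1 data or BRAHMS code):

```python
import random

random.seed(2)

# Hypothetical MinBias multiplicity values (not real TPM1 data)
mcut = [random.expovariate(1 / 100.0) for _ in range(100000)]

def check_fraction(threshold):
    """Fraction of MinBias events passing a high-multiplicity cut."""
    return sum(m > threshold for m in mcut) / len(mcut)

def tune_threshold(percent):
    """Threshold that selects the top `percent`% of MinBias events."""
    ordered = sorted(mcut, reverse=True)
    return ordered[int(len(ordered) * percent / 100)]

# A cut labelled "0-6%" should select ~6% of MinBias; a mis-tuned fixed
# threshold can easily select ~10% instead, as suspected for GetCentrality.
for pct in (6, 50):
    thr = tune_threshold(pct)
    print(f"0-{pct}% class: threshold {thr:6.1f} selects "
          f"{100 * check_fraction(thr):4.1f}% of MinBias")
```

    The point is just that the selected fraction should be re-derived from the
    MinBias sample itself, not taken on faith from the cut's label.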
    
    >  - <Npart> doesn't depend linearly on centrality (I am not sure
    >     that this is only a few-percent effect for all centrality cuts!).
    >     I don't know how this would affect the results but it is clear
    >     that it should be done more carefully. One can calculate <Npart>
    >     directly from Glauber model (or any other geometrical model). 
    
    I know <Npart> doesn't depend linearly on centrality, and the
    "real" dependence is there in the numbers we take from our ref. 1 (see
    below). The only "linear" assumption we make is in converting these
    numbers, which are integrated from 0-xx%, to our centrality classes, which
    are yy%-xx%. This effect should not be large. I also agree that it is easy
    to calculate Npart from a Glauber calculation, but someone must sit down
    and do it. You have made a calculation using a hard sphere - is this easily
    extendable to a Woods-Saxon profile and a Glauber model?
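    For what it's worth, a minimal Glauber Monte Carlo along those lines could
    look like the sketch below. The Au+Au Woods-Saxon parameters and the NN
    cross section are standard illustrative values, not numbers from the note:

```python
import math
import random

random.seed(1)

# Illustrative Au+Au parameters (assumptions, not taken from the note)
A = 197
R = 6.38          # Woods-Saxon radius [fm]
a = 0.54          # surface diffuseness [fm]
SIGMA_NN = 4.0    # inelastic NN cross section [fm^2] (~40 mb)
D2_MAX = SIGMA_NN / math.pi  # nucleons interact if transverse dist^2 < this

def sample_nucleus():
    """Sample A nucleon (x, y) positions from a Woods-Saxon density
    via rejection sampling on r^2 * rho(r)."""
    pts = []
    while len(pts) < A:
        r = random.uniform(0.0, 3.0 * R)
        # the envelope R^2 bounds r^2 / (1 + exp((r - R)/a)) on [0, 3R]
        if random.random() * R * R < r * r / (1.0 + math.exp((r - R) / a)):
            cost = random.uniform(-1.0, 1.0)
            phi = random.uniform(0.0, 2.0 * math.pi)
            sint = math.sqrt(1.0 - cost * cost)
            pts.append((r * sint * math.cos(phi), r * sint * math.sin(phi)))
    return pts  # only the transverse (x, y) coordinates matter here

def npart(b):
    """Count participants for one event at impact parameter b [fm]."""
    nucl_a = [(x + b / 2, y) for x, y in sample_nucleus()]
    nucl_b = [(x - b / 2, y) for x, y in sample_nucleus()]
    hit_a = [False] * A
    hit_b = [False] * A
    for i, (xa, ya) in enumerate(nucl_a):
        for j, (xb, yb) in enumerate(nucl_b):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < D2_MAX:
                hit_a[i] = hit_b[j] = True
    return sum(hit_a) + sum(hit_b)

# <Npart> for a handful of impact parameters
for b in (0.0, 6.0, 12.0):
    vals = [npart(b) for _ in range(10)]
    print(f"b = {b:4.1f} fm: <Npart> ~ {sum(vals) / len(vals):.0f}")
```

    Averaging npart over b (weighted by 2*pi*b*db, restricted to each
    centrality class) would then give the geometrical <Npart> per class.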
    
    >     In your note, you say that <Npart> should come from simulating    
    >     the tiles. This is not really clear to me! All you need
    >     to calculate <Npart> is the percent of the cross section
    >     for a given centrality cut and a geometrical model (Glauber
    >     for example).
    
    Yes, if we assume that our centrality is correct. What I would like to see
    is a simulation of the Tiles subjected to our centrality cuts: take the
    events accepted in each centrality bin, plot Npart (from HIJING or
    whatever model was used), and take <Npart> to be the statistical mean
    of that distribution. This _should_ of course correspond to the simple
    geometrical <Npart>, but if it doesn't, we clearly don't understand our
    detectors.
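    The bookkeeping for that check is simple; here is a toy sketch, where the
    event tuples, the Npart(c) shape, and the bin edges are all hypothetical
    stand-ins for real HIJING + tile-simulation output:

```python
import random

random.seed(0)

# Hypothetical simulated events: (centrality percentile, Npart from the
# model). In the real check these would be HIJING events passed through
# the tile simulation and the GetCentrality / NBI cuts.
events = []
for _ in range(10000):
    c = random.uniform(0.0, 100.0)                     # true percentile
    n = max(2, int(400.0 * (1.0 - c / 100.0) ** 1.5))  # toy Npart(c) shape
    events.append((c, n))

# Centrality classes of the yy%-xx% form discussed in the note
bins = [(0, 6), (6, 11), (11, 18), (18, 28), (28, 39), (39, 50)]

for lo, hi in bins:
    sample = [n for c, n in events if lo <= c < hi]
    mean = sum(sample) / len(sample)
    print(f"{lo:2d}-{hi:2d}%: <Npart> = {mean:6.1f}  ({len(sample)} events)")
```

    Comparing these per-bin means against the geometrical <Npart> is then the
    detector-understanding test described above.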
    
    >     Is Reference 1 available somewhere on the WEB ?
    
    Yes, look at
    xxx.lanl.gov/abs/nucl-th/0012025
    
    Ping :-)
    
    ------------------------------------------------
    Bjorn H. Samset
    Master-student in Heavy Ion physics
    Mob: +47 92 05 19 98  Office: +47 22 85 77 62  
    Adr: Kri 2A709 Sognsveien 218 0864 Oslo
    



    This archive was generated by hypermail 2b29 : Tue Feb 20 2001 - 05:30:35 EST