Dear Brahmins,
            I just read Christian's analysis note 37. The Single Detector Energy
method reminds me of what was done in NA44^1, where we used a small
plastic scintillator behind the target to estimate centrality. We defined
10% central as the top 10% of the integral of the energy distribution.
In principle this method required knowing the total cross section, but
in practice quantities such as the pbar dN/dy did not change much with
10% changes in the total cross section^2.
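For concreteness, here is a minimal sketch of how such a cut can be
defined. This is a toy Python illustration with made-up numbers, not
the actual NA44 calibration:

  import numpy as np

  # Toy minimum-bias sample: one scintillator energy per event. In the
  # real analysis these come from the detector; here we just draw from
  # an arbitrary falling distribution.
  rng = np.random.default_rng(0)
  energies = rng.exponential(scale=100.0, size=200_000)

  # "10% central" = the 10% of the minimum-bias cross section with the
  # highest energy, i.e. everything above the 90th percentile of the
  # measured distribution.
  cut = np.percentile(energies, 90.0)
  print("cut =", round(cut, 1), "fraction above:", (energies > cut).mean())

  # Hidden assumption: the sample really is minimum bias, i.e. we know
  # the total cross section; an error there shifts the percentile and
  # hence the cut.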
I believe that Christian is correct when he says that both a constant
level of secondaries and a component proportional to the true
multiplicity do not affect the final centrality. However, the variance
in these quantities is a problem for both the energy and multiplicity
methods, since it tends to wash out any interesting physics variations
with centrality. Ideally we should evaluate our centrality resolutions
and deconvolute for them, or at least quote them in our papers. Note
that it does not make sense to quote the top 2% of centrality if our
error from the fluctuation in secondaries and detector response is 4%.
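To see why a constant plus a proportional response is harmless while
the fluctuations are not, suppose the measured signal is E = a + b*N
for true multiplicity N (my toy notation, not Christian's). The
mapping is monotonic, so a percentile cut on E selects exactly the
same events as the corresponding cut on N; only the event-by-event
fluctuations mix events across centrality classes. A short Python
sketch:

  import numpy as np

  rng = np.random.default_rng(1)
  n_true = rng.exponential(scale=200.0, size=100_000)  # "true" multiplicity

  def measured(noise_sigma):
      # constant secondaries + a term proportional to the true
      # multiplicity + event-by-event fluctuations
      return 50.0 + 1.3 * n_true + rng.normal(0.0, noise_sigma, n_true.size)

  cut_n = np.percentile(n_true, 90.0)
  for sigma in (0.0, 40.0):
      e = measured(sigma)
      cut_e = np.percentile(e, 90.0)
      # fraction of the "top 10% in E" sample that is truly top 10% in N
      overlap = np.mean(n_true[e > cut_e] > cut_n)
      print("sigma =", sigma, "overlap with true top 10% =", round(overlap, 2))

With sigma = 0 the overlap is exactly 1; with sigma > 0 it drops, which
is the washing out described above.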
While the energy method can be used for most analyses, I think it is
essential that Hiro and Steve pursue their determination of
multiplicity. This is useful both as a physics measurement in itself
and as a check against other experiments. A good example of this was
the measurement of pion HBT source sizes for SS, SAg, SPb and PbPb^3.
We linked these together with a measurement of charged-particle
multiplicity in our silicon detector. Of course this multiplicity was
much harder to extract than the centrality from our scintillator.
I actually prefer this way of linking different systems to
"measurements" of the number of participants, which seem to me to be
more model dependent.
 
  One thing I don't like about Christian's analysis is the comparison
to models. What I want to know from these models is the answer to
questions such as:
"If Fritiof were a complete description of these collisions, how well
would BRAHMS measure centrality, number of participants, number of
collisions, etc.?"
Therefore we should not use the centrality cuts from the real data but
rather scale them to match Fritiof. Alternatively, one could scale the
energy or multiplicity from Fritiof to match the data and use the same
centrality cuts for both data and Monte Carlo.
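A toy sketch of that second option, with made-up distributions
standing in for the data and for Fritiof:

  import numpy as np

  rng = np.random.default_rng(2)
  data_e = rng.exponential(scale=100.0, size=100_000)  # stand-in for data
  mc_e = rng.exponential(scale=80.0, size=100_000)     # stand-in for the model

  # Rescale the Monte Carlo so that a chosen point of its minimum-bias
  # distribution lands on the same value as in the data; the median is
  # used here, but the mean or another robust point would also do.
  scale = np.percentile(data_e, 50.0) / np.percentile(mc_e, 50.0)
  mc_scaled = scale * mc_e

  # One set of centrality cuts now serves both samples.
  cut = np.percentile(data_e, 90.0)
  print("data above cut:", round(float(np.mean(data_e > cut)), 3))
  print("MC   above cut:", round(float(np.mean(mc_scaled > cut)), 3))

If the shapes differ, a single scale factor is not enough, and one
would match the cuts percentile by percentile instead.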
   Finally I would like to comment on how we think about centrality.
For AuAu collisions at sqrt(s_NN) = 200 GeV the initial state is not
two bags of ping-pong balls but rather the overlap of two coherent
gluon fields^4. As these fields decohere, entropy is produced^5, which
eventually shows up as multiplicity in our detectors.
 Thus it seems to me that multiplicity, or the energy deposited in a
given detector, is what we want to measure, not the number of
participants. While the ZDCs may help us measure the number of
spectators (and hence the number of participants), I think it is
better to use them to compare with other experiments rather than to
plot variables such as kaon dN/dy versus the number of participants.
                   Michael
 
0) A Brahmin is a person who knows 'Brahma', i.e. the whole universe.
1) NA44 was a fixed-target heavy-ion experiment in the last millennium.
2) If one plots dN/dy versus centrality then the error on the total
cross section becomes an error on the scale of the centrality axis: if,
say, the assumed total cross section is 5% too large, events labelled
0-10% central are really 0-10.5% central.
3) Eur. Phys. J. C 18, 317 (2000).
4) E.g. hep-ph/0104168, Raju Venugopalan, "Small x physics and the
initial conditions in heavy ion collisions".
5) Of course more entropy is produced later on in the collision.
 
Michael Murray, Cyclotron TAMU, 979 845 1411 x 273, Fax 1899