Hi all,

I'd like to comment on the centrality determination in general. I should point out that this is work in progress (Claus, Erik, and I are looking into it) and should not be taken as final. I also make no claim to correctness, and will appreciate any thoughts/input/corrections any of you may have.

There are several things to consider when trying to determine the centrality at BRAHMS. These have to do with the infamous vertex location, minimum bias, triggers, and so on - in short, not easy.

First off, we'd like to say something like "this event is 0-5% central, and that one is within 10-15%". This obviously requires that you scan a large number of events and in some way characterise the centrality of each event. The current consensus is to use the total ADC of the scintillator tiles in the multiplicity array (PTMA - Photo multiplier Tube Multiplicity Array).

Then one should say what "0-5%" really means. That is, I believe, sort of obvious: histogram the number of minimum bias events based on the ADC sum from the PTMA, integrate that, find the topmost 5% of that integral, and make a cut there. That is the threshold ADC from the PTMA characterising the 5% most central events.

Then comes the question "What is a minimum bias trigger?". Flemming has previously stated that trigger 4 (ZDC coincidence) is the minimum bias trigger. However, trigger 4 is drastically down scaled in later runs, giving very poor statistics for the histogram(s) (more on the histograms below), so one is tempted to include all triggers (1, 4, 5, and 6). That, however, isn't trivial, since one needs to correct for the down scaling. What one can do is take runs where trigger 4 isn't down scaled, find the ratio of trigger 6 to trigger 4, use only trigger 6 events on later runs, and correct with the factor previously found.

Also, one should note the format of the trigger word: it is a 16 bit number composed of two 8 bit masks; the lower 8 bits constitute the first mask (0x00XX), and the upper 8 bits the second (0xYY00). The first mask holds the triggers the DAQ actually got - i.e., before scale down - and the second the triggers left after the scale down was performed. E.g., if the 16 bit trigger word is 8240 (dec) = 0x2030 (hex) = 00100000 00110000 (bin), then the DAQ got triggers 5 and 6, but trigger 5 was scaled away. Note that the class BrEventManager tests only the first 8 bits (0x00XX), which I believe can be misleading (see the decoding sketch below).

I'd like to take this opportunity to encourage Flemming to put something on the web describing the triggers, Konstantin to put something up on down scaling and the trigger words, and everyone to think about how to normalise down scaled triggers - if anyone has an idea, please let me know, since I'm not sure I've got it yet.

So assuming we've got min bias events - currently, Claus takes trigger 6, plus trigger 4 without trigger 6 (scaled), as min bias events and histograms those - we're not home free yet, and this has to do with the infamous wide distribution of the vertex! The point is, an event with a numerically large z-component of the vertex (Vz) will have a different ADC response in the PTMA than an event with small Vz and the same centrality. This is evident if one makes the 2D histogram of events in ADC and Vz. Hence, the 5% cut depends on where the vertex is for a given event! This is not a small effect: the 5% limit varies by 33% from |Vz| ~ 0-6 cm to 33-45 cm.
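Going back to the trigger word for a moment: below is a minimal sketch (plain C++, not BRAT code) of how I read the layout described above. The mapping of trigger number n to bit n-1 is my assumption, and the function names are of course made up.

  #include <cstdio>

  // Sketch of the trigger word layout described above; the mapping
  // of trigger number n to bit n-1 is my assumption.
  bool gotTrigger(unsigned short word, int n)    // before scale down
  { return (word >> (n - 1)) & 0x1; }

  bool keptTrigger(unsigned short word, int n)   // after scale down
  { return (word >> (n - 1 + 8)) & 0x1; }

  int main()
  {
    unsigned short word = 0x2030;  // the example from the text
    for (int n = 1; n <= 8; n++)
      if (gotTrigger(word, n))
        printf("trigger %d fired%s\n", n,
               keptTrigger(word, n) ? "" : " but was scaled away");
    // prints: trigger 5 fired but was scaled away
    //         trigger 6 fired
    return 0;
  }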
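And for the down scaling, the correction I have in mind amounts to nothing more than the following (all numbers invented for illustration):

  #include <cstdio>

  // Minimal sketch of the normalisation idea: from a reference run where
  // trigger 4 is NOT scaled down, find f = N(trig 6)/N(trig 4).  On later
  // runs each trigger 6 event then counts as 1/f trigger 4 equivalents.
  // All numbers below are made up.
  int main()
  {
    double n6ref = 12000., n4ref = 30000.;  // counts in the reference run
    double f = n6ref / n4ref;               // trig 6 per trig 4
    double n6later = 5000.;                 // trig 6 counts in a later run
    double minBias = n6later / f;           // estimated min bias yield
    printf("f = %.3f -> %.0f min bias equivalents\n", f, minBias);
    return 0;
  }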
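To make the procedure concrete, here is a sketch - again plain C++, not the BRAT module - of the vertex dependent 5% cut: slice the min bias sample in Vz and find, in each slice, the ADC sum above which the top 5% of the events lie. The slicing, the binning, and the fake input in main() are all mine.

  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <cstdlib>
  #include <vector>

  struct MbEvent { double adcSum; double vz; };

  // For each Vz slice, return the ADC sum above which the top 'frac'
  // (e.g. 0.05) of the minimum bias events lie.
  std::vector<double> thresholds(const std::vector<MbEvent>& events,
                                 double vzMin, double vzMax, int nSlices,
                                 double frac)
  {
    std::vector<std::vector<double> > slices(nSlices);
    double width = (vzMax - vzMin) / nSlices;
    for (size_t i = 0; i < events.size(); i++) {
      int s = int((events[i].vz - vzMin) / width);
      if (s >= 0 && s < nSlices) slices[s].push_back(events[i].adcSum);
    }
    std::vector<double> cuts(nSlices, 0);
    for (int s = 0; s < nSlices; s++) {
      if (slices[s].empty()) continue;
      std::sort(slices[s].begin(), slices[s].end());
      size_t idx = size_t((1 - frac) * slices[s].size());
      if (idx >= slices[s].size()) idx = slices[s].size() - 1;
      cuts[s] = slices[s][idx];            // the frac most central cut
    }
    return cuts;
  }

  int main()
  {
    // Fake input purely to make the sketch runnable; the real input is
    // the (ADC sum, Vz) of each minimum bias event.
    std::vector<MbEvent> evts;
    for (int i = 0; i < 100000; i++) {
      MbEvent e;
      e.vz     = -45 + 90 * (std::rand() / (double)RAND_MAX);
      e.adcSum = 2000 * (std::rand() / (double)RAND_MAX)
                      * (1 - std::fabs(e.vz) / 100);  // crude Vz dependence
      evts.push_back(e);
    }
    std::vector<double> cuts = thresholds(evts, -45, 45, 9, 0.05);
    for (size_t s = 0; s < cuts.size(); s++)
      printf("slice %d: 5%% cut at ADC sum %.0f\n", (int)s, cuts[s]);
    return 0;
  }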
This vertex dependent threshold is - we believe - the most assumption-free way of obtaining the 5% limit - or any other, for that matter - since it only assumes translational invariance and rotational invariance around the z-axis!

So we also need to determine Vz before we can determine the centrality. Here we _must_ use the ZDCs to determine Vz, so that we do not introduce a further bias, as would be the case had we chosen the Beam-Beam method to obtain Vz.

Finally, we need to correct the ADC values from the individual tiles for the variation of the path length in the tiles, as outlined by J. H., Jens Jorgen & Erik. This correction is probably not very big in terms of the 5% threshold, but for a single event it may be extremely important, and Erik has some code that does this (a naive sketch of the idea follows below).

I should also like to point out the statistical difficulties in this. One may feel tempted to do all sorts of weird corrections, like reflecting tiles to create pseudo-tiles, ignoring some tiles either because of geometry or high response, and so on. All such cuts will inevitably introduce unwanted bias into the result.

Again, this is work in progress and any comments will be greatly appreciated. Most of the code is written, but it is generally in the form of scripts and not really ready for inclusion into BRAT. We anticipate two modules: one that calculates the threshold at certain points in (Vz, centrality)-space - basically a sort of calibration - and one that, given an event, determines the lower and upper bounds of the centrality - based on the calibration - and stores them in the output event node for other modules to inspect.
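I do not know the details of Erik's path length code, but a naive geometric picture of the correction could look like the following: assume a straight track from the vertex to the tile, and a tile whose normal is along x; the path length in the tile then grows as 1/cos(theta), so the ADC is scaled back by cos(theta). The geometry and numbers here are entirely invented.

  #include <cmath>
  #include <cstdio>

  // NOT Erik's code - just a naive geometric sketch.  Straight track
  // from the vertex (0,0,vz) to the tile centre (x,y,z); tile normal
  // taken along x, so the ADC is scaled back by cos(theta).
  double correctedAdc(double adc, double x, double y, double z, double vz)
  {
    double dx = x, dy = y, dz = z - vz;
    double len = std::sqrt(dx*dx + dy*dy + dz*dz);
    double cosTheta = std::fabs(dx) / len;  // angle to the tile normal
    return adc * cosTheta;                  // undo the 1/cos(theta) excess
  }

  int main()
  {
    // The same hit at z = 30 cm seen from vz = -20 cm vs vz = +20 cm
    printf("%.1f vs %.1f\n",
           correctedAdc(100, 15, 0, 30, -20),
           correctedAdc(100, 15, 0, 30, 20));
    return 0;
  }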
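And to give an idea of what the second module could look like - names invented, not actual BRAT classes - something like this, where the calibration is a table of ADC thresholds on a grid of Vz slices and centrality edges:

  #include <cstdio>
  #include <vector>

  // Rough sketch of the anticipated lookup: cuts[s][c] is the ADC
  // threshold for centrality edge c (e.g. 5, 10, 15, ... percent)
  // in Vz slice s.  Higher ADC sum means more central.
  struct CentralityCalib {
    double vzMin, vzMax;
    std::vector<double> edges;               // centrality edges in %
    std::vector<std::vector<double> > cuts;  // [vz slice][edge]

    // Returns the [lower, upper] centrality bounds in percent.
    void centrality(double adcSum, double vz, double& lo, double& hi) const
    {
      int s = int((vz - vzMin) / (vzMax - vzMin) * cuts.size());
      if (s < 0) s = 0;
      if (s >= (int)cuts.size()) s = cuts.size() - 1;
      lo = 0;
      hi = 100;
      for (size_t c = 0; c < edges.size(); c++) {
        hi = edges[c];
        if (adcSum >= cuts[s][c]) return;  // more central than this edge
        lo = hi;
      }
      hi = 100;                            // least central bin
    }
  };

  int main()
  {
    CentralityCalib cal;
    cal.vzMin = -45; cal.vzMax = 45;
    cal.edges.push_back(5);  cal.edges.push_back(10);
    // One Vz slice with made-up cuts: top 5% above 1500, top 10% above 1200
    cal.cuts.push_back(std::vector<double>());
    cal.cuts[0].push_back(1500); cal.cuts[0].push_back(1200);
    double lo, hi;
    cal.centrality(1300, 0, lo, hi);
    printf("event is %g-%g%% central\n", lo, hi);  // 5-10%
    return 0;
  }

The first module would then essentially be the threshold finder sketched earlier, run once as a calibration and stored where this lookup can find it.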
Yours,

Christian
-----------------------------------------------------------
Holm Christensen           Phone:  (+45) 35 35 96 91
Sankt Hansgade 23, 1. th.  Office: (+45) 353 25 305
DK-2200 Copenhagen N       Web:    www.nbi.dk/~cholm
Denmark                    Email:  cholm@nbi.dk