run and analysis plan

From: Flemming Videbaek (videbaek@sgs1.hirg.bnl.gov)
Date: Sun Oct 07 2001 - 19:05:26 EDT


    Dear Collaborator,
    
    We have now entered the 8th week of the Au running; the beam is steadily
    improving, though occasionally there are hiccups. There are two issues for
    the plan for the remaining part of the run. The aim of this mail is to get
    feedback from the collaboration, particularly on the overall goal, so that
    the more routine execution of the data-taking can be carried out with this
    clear goal in mind.
    
    One issue is the strategic overall goal: what physics do we want to obtain
    in the 2001 run? The other is the practical issue of how we reach this
    goal, i.e. how many settings, how much time at each of them, and how the
    runs are scheduled. Information about the present status of the machine is
    added at the end of the mail. Please comment on the plans put forward
    here.
    
    
    
    The primary goal is to ensure that we measure hadron spectra (with good
    statistics) at a number of rapidities. Initial measurements should be
    around the expected mean pt of pions/protons, and these measurements
    should then be extended to higher pt. With the current and expected
    luminosity it is not possible to achieve continuous rapidity
    distributions, though we can do pt spectra for a few selected rapidities
    (y = 0, 1, and at large y, 2-3).
    
    A plan that would cover this reasonably well is to collect data with the
    forward spectrometer at theta = 2.3, 3, 4, 8, 12 and 20 deg, with the
    2.3-8 deg angles at 2-3 field settings (and two polarities) and the 12 and
    20 deg angles at one field setting (two polarities), and with the MRS at
    90, 35, 40 and 45 deg with two field settings (and two polarities). The
    minimum goal is to collect > 2,000 protons for the 0-10% centrality bin in
    a ~30% region around the reference momentum. A more extended goal is to
    get good statistics for less central collisions, which easily requires 5-8
    times the running time per setting. The strategy should be to ensure the
    coverage and statistics for central collisions before moving on to the
    longer runs needed for less central collisions.
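    
    For orientation, a rough tally of the number of settings this plan implies
    is sketched below. This is only my own bookkeeping: where the text says
    2-3 field settings the upper value of 3 is assumed, and the run time per
    setting is of course still to be decided.
    
        # Rough tally of the spectrometer settings implied by the plan above.
        # Where the plan says "2-3 field settings" the value 3 is assumed.

        fs_small_angles = [2.3, 3, 4, 8]    # FS angles with 2-3 field settings
        fs_large_angles = [12, 20]          # FS angles with 1 field setting
        mrs_angles      = [90, 35, 40, 45]  # MRS angles with 2 field settings
        polarities      = 2                 # both field polarities everywhere

        n_fs  = (len(fs_small_angles) * 3 * polarities
                 + len(fs_large_angles) * 1 * polarities)
        n_mrs = len(mrs_angles) * 2 * polarities

        print("FS settings :", n_fs)          # 4*3*2 + 2*1*2 = 28
        print("MRS settings:", n_mrs)         # 4*2*2         = 16
        print("Total       :", n_fs + n_mrs)  # 44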
    
    The actual running up to this point has aimed to a) ensure we get data at
    higher rapidities to overlap last year's settings, b) explore a wider y
    and pt range by taking data at 8 and 3 degrees, and c) collect
    high-statistics data mainly at 90 deg, but also at 45 deg.
    For the day-to-day data taking other considerations come into play, such
    as a) detector conditions, b) machine background, and c) the need for
    calibration runs, specific requests for zero-field runs, voltage settings,
    etc. A practical consideration is that the 2.3 deg running, as well as
    low- to intermediate-momentum runs where the full FS is in use, should be
    done with C1 out of the acceptance. Such runs should be done at a time
    when the background is (hopefully) lower, since T5, H2 and the RICH have
    to work well for such runs. Access is required before and after such a
    switch-over. Tactically we also need to know about the performance of the
    detectors, the quality of the runs, and good information about the actual
    number of tracks/particles collected.
    
    
    This brings us to the second issue. As discussed at a collaboration
    meeting a long time ago, and brought up by Jens Jorgen recently with a
    specific proposal, we need to analyze the data as we go along. We are also
    at a point in time where this has become feasible. A lot of effort has
    gone into developing code to perform calibrations, overall tracking, and
    particle identification using the TOF and Cherenkov systems. Certainly
    these will have to be developed further, but they are in such shape that
    they should be used in a combined way for several purposes:
    a)    get feedback to the experiment on data quality and quantity, and
    help detect problems with detector components and performance;
    b)    further check the quality of data, code and calibrations;
    c)    create output data (ROOT files, trees (ntuples), histograms, log
    files) that can be used by the collaboration for further, though most
    likely preliminary, analysis and for code development.
    
    The plan brought forward is to have this routine data processing run
    concurrently with the experiment, with people at their home institutions
    taking responsibility for a given period (on the order of a week at a
    time) for processing the data on the RCF farms and storing the output on
    the data disks. The necessary programs and scripts are being put together
    by Ian; the system has been checked out. The next step is to make the more
    detailed plan, including which other institutions are involved and how the
    output is made available to all, and to process the data in a systematic
    way.
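    
    Purely to illustrate the idea (Ian's actual scripts are of course what
    will be used; the job-submission command, reconstruction executable and
    directory layout below are hypothetical placeholders, not the real
    RCF/BRAHMS tools), a weekly processing pass could look roughly like this:
    
        # Hypothetical sketch of a weekly processing pass on the farm.
        # The submit command, reconstruction executable and paths are
        # placeholders, not the actual RCF/BRAHMS tools.
        import subprocess
        from pathlib import Path

        RAW_DIR  = Path("/brahms/data/raw")    # hypothetical raw-data area
        OUT_DIR  = Path("/brahms/data/dst")    # hypothetical output data disk
        RUN_LIST = Path("runs_this_week.txt")  # runs assigned to this week

        def submit(run: int) -> None:
            """Submit one reconstruction job for a given run (placeholder)."""
            cmd = ["farm_submit", "reco.exe",
                   "-i", str(RAW_DIR / f"run{run:05d}.raw"),
                   "-o", str(OUT_DIR / f"run{run:05d}.root"),
                   "-l", str(OUT_DIR / f"run{run:05d}.log")]
            subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            for token in RUN_LIST.read_text().split():
                submit(int(token))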
    
    This kind of coordinated effort is also a model for how we can later
    perform 'final' calibration and analysis passes on the data. I have talked
    to Ian and he is preparing a detailed implementation plan for data
    reduction and calibration. Just as the efforts of all collaborators are
    needed to take the data, such an effort is needed for the analysis, with
    hopefully much more data to come in the coming weeks.
    
    
    // status ..
    
    The machine is now improving a lot, particularly in regard to
    reproducibility. RHIC may have reached a limit on ions per bunch, with a
    maximum current around 25-30 x 10**9 ions, so the focus is on the beta*
    of 2 and on increasing to 120 (112) bunches. Presently RHIC fairly
    routinely gets to 300 ZDC counts/sec and can go to ~700 with beta* = 2.
    This implies that under these operating conditions, with a 1 h re-fill
    time, one can achieve ~300-400K events/hour, and with 50% availability for
    beam about 5M events/day, which will be sufficient for the program.
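    
    As a quick back-of-the-envelope check of these numbers (the average
    recorded rate of ~100 events/sec over a fill cycle is my own assumption,
    meant to fold in the luminosity decay over a store and the re-fill dead
    time; it is not an official machine number):
    
        # Back-of-the-envelope check of the event-rate estimate quoted above.
        # The average rate over a fill cycle is an assumption (peak ~300 ZDC/s,
        # falling over the store, plus re-fill dead time).

        avg_rate_hz  = 100   # assumed average recorded rate over a fill cycle
        availability = 0.5   # fraction of the day with usable beam

        per_hour = avg_rate_hz * 3600             # ~360K events/hour
        per_day  = per_hour * 24 * availability   # ~4.3M events/day

        print("events/hour: %.0fK" % (per_hour / 1e3))  # consistent with 300-400K
        print("events/day : %.1fM" % (per_day / 1e6))   # roughly the quoted 5M/day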
    With such rates it is necessary to impose more stringent triggers. My
    suggestion is to implement the vertex trigger rather than the centrality
    trigger. The vertex trigger will sample a region of +-20 cm with high
    efficiency, rather than attempting to utilize a wide vertex range, which
    can result in quite varying normalizations from run to run.
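    
    To make the point concrete, a minimal sketch of the corresponding offline
    selection is given below; the event list and field layout are made up for
    illustration, and only the +-20 cm window comes from the proposal above.
    
        # Minimal sketch of the offline counterpart of the proposed vertex
        # selection: only events with a reconstructed vertex inside +-20 cm are
        # kept, so every run samples the same well-defined region.

        VERTEX_WINDOW_CM = 20.0

        def in_vertex_window(z_vertex_cm):
            """True if the vertex falls in the +-20 cm region the trigger samples."""
            return abs(z_vertex_cm) <= VERTEX_WINDOW_CM

        # example with made-up vertex positions (cm)
        events_z = [-35.2, -12.4, 0.8, 5.5, 18.9, 41.0]
        selected = [z for z in events_z if in_vertex_window(z)]
        print("selected %d of %d events" % (len(selected), len(events_z)))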
    
    best regards
        Flemming
    
    ------------------------------------------------------
    Flemming Videbaek
    Physics Department
    Brookhaven National Laboratory
    
    tel: 631-344-4106
    fax: 631-344-1334
    e-mail: videbaek@bnl.gov
    


