Atlas Overview Week Bern, July 2008

The ATLAS week in Bern was, for us relative newcomers to ATLAS, a good way to get an overview of all the different aspects of the experiment. It also gave us an idea of the current state of the detector, which, so close to the promised start of the beam, is of special interest. For our own benefit and for all people interested, we want to summarize the most relevant points of the different talks and sessions.

For those who prefer the full talks, or for use as reference material, the agenda of the week can be found here. The introductory material and a guest talk on climate change can also be found there.

1. Atlas detector hardware

1. The 2008 Detector Commissioning

Magnet testing
  • Magnet testing began in 2006, but was interrupted for inner detector and calorimeter repairs until May 2008
  • Prior to the interruption, the central solenoid and barrel toroid were tested at full field, up to their nominal currents of 7.73 and 20.5 kA
  • The two other toroids, ECT-A and ECT-C (ECT = End Cap Toroid), had not finished testing
  • In 2008, ECT-C tested at full field OK, with three training quenches (19.9, 20.3, and 21.0 kA), and the coil heating only to 54 K at fast dump
  • ECT-A had a leak in a cooling line during cool-down, so its testing was delayed
  • The barrel toroid and central solenoid retested fine, and also ran fine together. The potential for magnetic coupling in the solenoid chimney was tested; the forces are OK
  • Notes: testing was done while cooled to 4 K. All magnets are currently expected to be operational in mid-August 2008

ATLAS Positioning
More information can be found through the survey database, http://atlassurvey3d.web.cern.ch/AtlasSurvey3D/
  • The floor moves- currently measured at about 0.25mm per year
  • Bed plates have movement of less than 0.4mm per year (2007-2008 data shown in slides)
  • ATLAS axis at about -1.5mm, about 0.5mm lower than expected
  • TX1STM had some movement between flanges, shims inserted to correct
  • TX1S now has some small deviations (~5mm) well within range of TAS ability to correct (~30mm).
  • Nominal beam line now expected to be at -0.5mm from expected location (note that actual beam can deviate 2-3mm from nominal beam line)
  • In barrel calorimeter, position in experiment hall: tile off by 3.1mm, flange by 2.2mm, solenoid by 0.7mm, and IWV by 0.7mm; calorimeters are within envelopes
  • Maximum deviation in Tile barrel calorimeter is 8mm
  • There are shims between the EC (end caps) and the barrel, leaving a gap of about 2-5mm
  • However, end cap alignment of the forward calorimeter with the beam is better than 1mm
  • Gaps between the barrel calorimeter fingers and the extended barrel are estimated at an average of 5.3mm (7.8mm max, 2.5mm min, 1mm at the 1 sigma level)
  • The center of the barrel calorimeter shifted unexpectedly by 1.5mm with respect to the beam line after inner detector installation
  • The barrel IWV was originally at +2.0mm but then sagged by -1.5mm, probably due to compression of the feet, so the solenoid is now lower than the nominal beam by 1.5-2mm. The forward calorimeter position is OK; they will try to adjust the end cap Z position by a few millimeters at the next opening to close the gap
  • Inner detector well positioned with solenoid and within 1mm of barrel, but low by about 2mm compared to the beam line
  • JD/SW Z displacement of 15-20mm after several adjustments, may want to make further adjustments at next opening
  • Barrel toroid off by 9mm max, and possible rotation
  • End cap toroid built lower by about 10mm
  • End cap A needed rotation of 0.2deg to align magnetic axis, but only 0.1deg achieved

Muon Alignment
  • 99.3% of the 5817 optical lines in barrel are working, goal to get to 100%
  • Survey targets are mounted on the outer MDTs, but only 97 out of a potential 2100 are working because of obstructions. They show 5mm positioning accuracy of the MDTs
  • in absolute mode, barrel alignment model gives precision of 200-300 microns
  • need to record a sample of straight tracks to help determine the relative initial geometry and use optical sensors to adjust at later times
  • will need a special run to do this: 6 days at 10^31 cm-2s-1, with the toroid off but the solenoid and inner detector on to select high momentum tracks (>10GeV)
  • Endcap OK, although installation of the EE MDTs and alignment bars will have to be finished at the next shutdown. The others are OK; fits show that 40 micron absolute alignment is possible
  • track alignment is done by the large sectors getting information from the optical lines and then transmitting it to the small sectors via optical links (CCC)
  • to increase the accuracy from 200-300 microns to 40 microns, we can use overlap tracks
  • have not found good way yet to align barrel and endcap relative to each other, needed to accurately reconstruct tracks of muons that go through both

Current installation status
  • detector closed, mechanical installation should be complete at this point
  • beam pipe (our 50m) closed and under vacuum; may get to 10^-7 bar after a few weeks, with bake-out for 5-6 days at the end of July 2008
  • The 50C test is done; the 100C test on the external pipes (not VI or VA) is still to be done. Also, LUCID is undergoing commissioning
  • LHC at 10^-11 bar due to a better pump
  • beam pipe and LUCID aligned well, within 2mm on side C
  • plans to change material in VA, VT, VJ sections to aluminum before optimal luminosity reached
  • Inner detector endplates closed, and TRT operational, and in test mode
  • SCT, pixel are expected to be in stand-alone mode for the next few months
  • An incident with the cooling plant prevented PIXEL from cooling sooner; it should be cooling at this point though (necessary to get the beam pipe bake-out going)
  • Inner detector is basically ok, with problems like dead channels on the order of a few percent at most
  • LAr calorimeters basically OK; new problem: the dedicated B-field shielding of about half of the LV power supplies of the endcaps is not as good as anticipated; this is being worked on
  • 20 cells in tile calorimeter not usable for physics, but otherwise installed and working
  • Muon spectrometer basically OK, although a gas accident damaged 4 EIL4 TGC chambers. One has been replaced; the others are to be addressed during the shutdown
  • RPC still needs to finish commissioning, and the EE chambers are to be installed over the next few shutdowns
  • Shielding is installed, with the exception of an octagonal forward piece, JF, to ease opening at the next shutdown (not expected to be important until luminosity reaches 10^33)
  • Gas, electric, safety systems, and the control room are all converging; ventilation tests begin soon

2. Detector system configuration and tests

Calorimeters
  • As mentioned above, there are new B-field shielding issues that prevent the LV power supplies from operating correctly (this talk says about 2/3), which means LAr can't be operated with BT current above 20kA
  • Tile is basically running fine, some small percentage of dead channels
  • Plans in place for getting the calibration constants
  • Work is ongoing to reduce the plots needed for monitoring to the essentials, but so far they seem to be working OK
Muon systems
  • MDT basically ok, expected resolution of 150 micrometers
  • CSC, DCS ok
  • RPC had many gas leaks, about 20 chambers still to be repaired at next shut down. Two sectors may not be fully tested at cavern closure
  • TGC has some dead chambers, 1 BW chamber (TGC2 C10) and 3 EIL4 chambers (A) to be fixed at next shutdown
  • There is still some work to be done on data readouts, as well as shift/monitoring information
Inner Detector
  • some instability in the pixel heaters; about 50% of the vertical heaters behave OK
  • plans to do bake out at end of July 2008, paying attention to not harm PIXEL
  • only 6 days of commissioning because of cooling plant problems; these showed 1 disk cooling loop leaking significantly, and 2 less so
  • SCT had two leaking cooling loops in endcap C
  • A small percentage of the laser matrix transmitting data on the PIXEL and SCT fibers has stopped working
  • dead channels are 0.5% or less, except for SCT ECC (1.6%) and PIXEL ECA (4.2%), rising to about 8% if the very leaky cooling loop is not fixed
  • TRT has some electronics problems in various stacks that need to be fixed
  • Some debate over whether to use Xenon with TRT (now quite expensive) or N2 or dry air
  • Original plan (from powerpoint):
    • Silicon detectors have N2 inside the active volumes and CO2 outside
    • TRT uses Xe based “active” gas mixture and CO2 cooling
    • ID volume (i.e. all the rest inside the solenoid bore) is flushed with N2.
  • PIXEL will probably join common ATLAS running about 4 weeks after bake-out, SCT periodically depending on its progress

Computing issues
  • Quite a bit of computing infrastructure:
    • 3 gateways & 2 Windows Terminal Server nodes, 2 DNS, 3 CFS, 59 LFS (incl. 9 spares and 2 in failover mode for control room machines).
    • 8 laptops as "mobile control rooms"
    • About 1350 netbooted nodes & about 75 LFC nodes
    • 2 TDAQ, 5 Offline & HLT releases being served
    • Mail / SMS relay, Castor, Backup, etc...
    • Windows Terminal Server, Nice FC, disk space from Linux to windows nodes
    • SysAdmin ‘hotline’ for emergencies, shifts during commissioning and technical runs
  • Recently commissioned hardware: 23 XPU racks (of 27), 15 extra LFS (to do), 10 monitoring nodes (4 online + 6 monitoring), 5 for pixel calibration, 2 for Muon alignment, 1 XPU rack (31 nodes + 1 LFS) for preseries (to do), recommissioning 31 XPUs (E4 nodes) as dual SFI machines (15 more), 2 more CR machines
  • The 23 XPU rack commissioning has about 3% broken nodes, reported back to Dell; overall it is going faster than expected
  • iMacs installed for SCRs and public machines
  • For data network, configuration for all NIC's done in generic way based on info in IT LanDB, using DHCP servers instead of config scripts
  • IT has a LFC server copy, to be used as backup
  • DNS server is now a mirror with pushed updates from IT (not cache as before)
  • Bios upgrade to all ROS: to allow use of hyperthreading and SMP kernel (better performance)
  • Password unification: NICE credentials can be used on Linux PCs, windows, Twiki, Web. Awaiting our own Active Directory server to deploy.
  • DHCRelay: ATLAS relays for DHCP requests, done
  • All but one of the 28 newly installed HLT node racks have been connected to the network
  • This week (mid July 2008) the last changes to the SFI connectivity are being completed (the last connectivity change before data taking)
  • Ongoing work
    • Evaluation of CFS file serving candidates and SAN/NAS (web, GW)
    • Unified DB User interface: combined IPMI command execution from web interface: under test
    • Multiseat X for ACR (allows multiple people to use a single PC by splitting the screen between the number of seats and adding keyboard/mouse, still unstable)
    • SL5 investigations for multi seat and servers (GWs, CFS 3): expect SLC5 for Sept with Linux For Controls automations
    • Hardware inventory tool (keep track of HW failures, repairs, maintenance)
  • Next six months
    • test possible candidates and place order for central file server
    • buy three more web servers, two more gateways and some shared disk and memory for web
    • work on redundancy of network by IT
    • Local File Server active failover: operational for ACR, will be later for USA15 LFSs
    • automate the shutdown/bring-up procedure via web
    • documentation work
    • OS testing: SLC5 as it becomes available
    • Support for test labs (Data collection and HLT mainly)
    • Control Room: Consolidate SW for display of any window to projector, tool to use 2 PCs with 1 keyboard/mouse
    • Centralized authentication scheme to be rolled out.
    • Roles and responsibilities (Authorization): Oracle Identity Management software to finish testing, implement for P1 if no big problems
  • Single Points of Failure
    • Linux For Controls: No longer dependent on the ITLFC group as tightly, still improving this
    • Data network: redundant DHCP servers configured and working
    • CFS to be improved in near future
    • LFS failover can be achieved in 1 hour (procedure being documented)
    • Only some services are critical (running on a single node): AM server, mySQL server (redundant servers in place), global IPC server
    • ADS (Active Directory Server), OIM (Oracle Identity Manager)
  • SysAdmin can be contacted with questions, see talk for more information
  • Networking
    • running for two years, and is a big part of commissioning
    • core routers installed
    • there is redundancy on the control and data collection networks; the control redundancy implementation needs a bug fix in the router from the manufacturer
  • currently focusing on monitoring/diagnosis work, later on error detection and recovery
  • remote monitoring being discussed. The concept:
    • public community: web monitoring
    • Remote experts Secure login- by request of Run Coordinator
    • Remote shifters- proposed to allow direct connections to P1
  • it is suggested to copy data outside of P1 for remote shifts; there are concerns about traffic control and security that are being addressed
  • A simple prototype has been set up and evaluated during M7; Tile, LAr and Pixel users were involved
  • Based on the result of this evaluation, it has been proposed to set up an initial system composed of 3 PCs hosting the mirroring infrastructure and 3 PCs hosting up to 12 remote user sessions
  • This proposal has been adopted by TMB and will be implemented in August 2008
  • Open issues: control of access to the remote monitoring facility, project accounts, institute accounts, schedule/control of usage

3. Global commissioning and hardware readiness of Atlas

TDAQ system readiness
  • Detector and trigger input:
    • Read Out System (ROS), Region of Interest Builder (RoIB) installed and commissioned
    • Extra ROS computers and read out links added this year for new detectors
    • ATLAS design performance reached
  • Event builder has Sub Farm Input (SFI) and could collect higher rate of events by reducing data size
    • has reached ATLAS design performance in terms of network bandwidth and event rates
  • HLT (High Level Trigger): a farm of multicore PCs; shifts are organized for commissioning of new nodes, 70% done. This is ~35% of the final ATLAS system
  • Data streaming and recording: Sub Farm Output (SFO), Used by detectors, FDR and TDAQ tests
    • Scripts to transfer data to Castor now rewritten in Python
    • ATLAS design performance reached
  • Monitoring: 30 nodes, with 10 more purchased but temporarily used for other things. Are several display tools that run in the control room
  • Configuration and Control: 25 PCs
    • as the number of HLT racks increases, there are some scaling issues that are being addressed
    • also work on fault tolerance, optimization of timing to reach the running state
    • Largest “real” partition up to now: ~ 7000 sw processes on ~1300 computers
  • Control room tools:
    • DAQ panel revised to be more shifter friendly
    • ATLAS logbook upgraded to address user feedback on message structure and for world wide write access
    • Will soon use LDAP for authentication- single username/password in P1
  • P1, Tier 0 communication
    • SFO – T0 handshake based on an Oracle database since April
    • Mechanism successfully used during M7, FDR-2, detector weeks, fine tuning carried out
    • Database will be included in AMI after M8, Requires DB schema and software upgrade
    • Useful tool, also for monitoring, bookkeeping and debugging
  • Access Model
    • Should be able to identify who performs operations and have a strict access control while data are being taken
    • Need to allow experts to commission, debug, prepare the experiment and have a shifter friendly environment, with all needed tools available from the desktop
    • Status display of the experiment should not require access to P1 (should not load the computers in P1)
    • Plan to implement remote monitoring scheme compatible with access model
    • Phasing out of shared accounts almost complete
    • Login restrictions based on roles implemented but still detailed discussion with all sub-systems to define permissions and protections
    • Aim of being able to test the complete model by the end of July, but M8 seems out of reach
  • Releases
    • SW (software) suite always consists of a chain of 6 compatible releases, from LCG up to HLT
    • TDAQ-01-09-00 + HLT-14-0-10, used for M7
    • TDAQ-01-09-01 + HLT-14-2-10, being tested in P1 right now and detectors making the transition to it. This SW planned for M8
    • DAQ/HLT need to continue development, especially to improve reliability, stability and scalability
    • Plan to continue with the scheme of regular release cycles, synchronized with the Offline SW. Will try to limit impact of changes on users (detector SW)
    • Frequency, to be discussed with all parties, depends also on LHC machine schedule
  • Completing installation of EB and HLT for first collisions
  • Developments still required for smooth operations
  • Capability of avoiding cold starts
  • Access Management scheme is in place and is being progressively deployed but taking longer than expected
  • Improvements in documentation for shifters ongoing
  • TDAQ support active both via shifters and on call numbers
  • Overall, DAQ/HLT system is in a good shape

Calorimeters, L1Calo: single beam and collisions
  • want to do a run with the DSP in transparent+physics mode (5 samples: event size x 3) and a run with L1Calo rather than with MB triggers
    • to get the energy in the calorimeters
  • with single beam, want to collect high energy showers
    • to time the endcaps and to measure pulse shape (5 & 32 samples)
  • with collisions, want to collect high energy showers
    • to time the whole detector and to measure pulse shape on the whole calorimeter (5 & 32 samples)
  • want to run in 32 samples mode to measure drift time and obtain the long range intrinsic uniformity: with 10^6 evts (E>10 GeV) could reach 3x10^-3 precision
  • Data Sets for TileCal
    • Between fills: Pedestal runs for noise calculation, calibration for the FE electronics (CIS ramps), Linearity of the PMTs and gain stability followup with laser
      • Only in >8h stops: Cs that follows the stability of optics+PMTs
    • In physics runs, in empty bunches: Laser and Charge Injection events to follow PMT gain and FE stability and to spot intermittent or new problems.
    • Physics events interesting for detector checks: Minimum Bias triggers (low pT tracks/hadrons, global calorimeter checks like phi-uniformity), isolated muons to cross check the expected E deposition and rough intercalibration, muons from beam halo for the same as above (not studied yet), and “single” tracks from tau triggers to check the hadron response.
  • L1 Calo, final beam timing
    • 1 BC from pulser & cosmic muons + align partitions
    • First MB to time in the whole of ATLAS
    • To reach ~1ns: 10 evts/TT ➔ 10^5 events
    • Need to iterate a few times before turning on L1 Calo
    • Time to achieve stable triggering is difficult to estimate: a few days, possibly more. Then energy calibration will be necessary

Muons: single beam and collisions
  • Single Beam runs
    • beam halo muons in endcap: efficiency and background measurements, with B-field on and off to check detector geometry
    • TGC trigger studies: rough timing adjustments, make trigger road eta-phi coincidence maps, check trigger logic, looking at the very forward (minimum bias scintillator trigger) and forward regions
    • beam-gas events: RPC- compare barrel events to cosmics, TGC- transition to beam-like timing
  • Collisions under different B-field conditions
    • B-field off: background measurements, MDT calibration for spectrometer, efficiency studies, and chamber alignment studies with 500K events
    • Nominal B-field: check out the curved tracks, work on RPC TGC timing between sectors, and check alignment system
  • Maximum B-Field
    • check MDT calibration and do level 1 trigger efficiency studies
Inner Detector: single beam and collisions
  • still need to bake out the beam pipe, and it will be four weeks after that before SCT and PIXEL join combined data taking
  • working on operational stability, timing, and detector performance
  • Cosmics useful for barrel commissioning
  • Trying to use beam halo (interaction of the beam with steering magnets, collimators) and beam gas (interaction of the beam with atoms in the beam pipe) for EC commissioning
  • TRT needs particle data- calibration based on tracks
  • Single beam work
    • TRT debugging from the point of view of the detector response to the particles (mostly ECs)
    • Detector timing
    • TRT EC alignment
    • r-T calibration
    • EC HL threshold calibration
    • Would like data sample ~1 Mtracks
  • Collision work
    • Final detector timing
    • Final detector r-T calibration
    • Final alignment of all the parts
    • Final HL threshold
    • Again, would like data sample ~1 Mtracks

Trigger: single beam and collisions
  • This involves L1 Calo and Muon triggers, HLT (L2 and EF), as well as inputs to the central trigger processor (CTP) and internal triggers
    • CTP inputs include BPTX (Beam pickup system), MBTS (Minimum Bias Trigger Scintillator), BCM (Beam Condition Monitor), LUCID, and other scintillators
    • internal triggers include random trigger, bunch group trigger, etc.
    • MBTS particularly important in early running
  • BPTX provides the timing reference of the beam- timing calibration
  • triggers on collision events with MBTS and/or BPTX (also with beam gas events in single beam)
    • to study performance of L1 muon and calo triggers
    • HLT in pass-through mode initially
  • Need to think about timing of trigger and detector systems
  • Physics menus (L=10^31 cm-2s-1) with HLT selections as needed
  • Want enhanced bias sample (trigger lowest LVL1 thresholds)
    • Allows to study the performance and rates of the physics trigger menu quickly and to optimize it
    • Useful in longer term when there’s a significant change in the luminosity or beam condition
  • More information about specific planned studies (particularly for commissioning) is available in the slides

4. Forward detectors

In General
  • Are several forward detectors in ATLAS:
    • ALFA (Absolute Luminosity for ATLAS) at 240m, ZDC (Zero Degree Calorimeter) at 140m, and LUCID (LUminosity Cerenkov Integrating Detector) at 17m
    • There is a proposal for new detectors at 220m and 420m, ATLAS FP
  • ALFA status- test beam this summer (2008). There was not a specific talk on this detector
LUCID
  • LUCID is a long cylindrically shaped detector, with an inner cylinder (96mm radius) and an outer cylinder (115mm radius)
  • It provides luminosity measurements from the ATLAS pp collisions
  • It has 20 Cerenkov tubes on each end, a water cooling system and a readout from photomultipliers and fibers
  • Approved for construction in January 2007, it is located at 5.61 < eta < 5.93
  • Both LUCID detectors are installed and cabled, with commissioning beginning June 16 2008
  • In general, it is running fine, with small problems expected to be fixed in time for the beam pipe bake-out
  • Still need to finalize commissioning, fully integrate into TDAQ and DCS, and want MC simulation to measure the performance of an interaction trigger
ZDC
  • Measures the production of neutral particles in the forward direction (zero degrees from the beam line)
  • See talk for description of detector with pictures
  • Installation in LHC tunnel of hadronic module completed for both arms before tunnel closing – including tunnel electronics
  • Waiting for LHCf completion before installing EM modules.
  • Signals received at USA15 consistent with specifications
  • DAQ system mostly acquired but needs some more PPMs. Currently DAQ being programmed, and they expect system to be at CERN in the fall.
  • DCS installed in tunnel before tunnel closed.
ATLAS FP
  • It would be a spectrometer using LHC magnets to bend protons with small momentum loss out of the beam
  • There is a proposal to upgrade the 220m and 420m region, to add proton detectors
  • If proposal accepted, could be installed as early as winter of 2010
  • See talk for more information about the detector, with pictures
  • An R&D report is published and the R&D phase ends with a complete cryostat design and a prototyped, tested concept for high precision near-beam detectors at LHC
  • The idea is to get significant physics potential (from the forward proton tagging) for a relatively low cost and no effect on LHC operation

2. Data Taking and Trigger

1. Operation and data taking organization at point 1

The running schedule can be found at ATLAS>Main Web>AtlasOperation>RunningSchedules or here.
On the Atlas Operation page one can find the control panels for the different detectors (read only)

Plans for M8 and future activities in the control room in general:
  • use of RunCom Tool
  • on-call phones carried by experts
  • start daily meetings from 28 July (9:30 am, SCX1 1st floor)
  • night shifts will start ~ 1-2 weeks before beam expected (agreed notice to fill night shifts: 1 month)
  • trying to improve error message generation
  • minimizing start-up time

  • default state should be: combined running with all systems that are not busy otherwise
  • BUT: many systems are still busy with commissioning
  • will be mix of shifter and expert work, still try to avoid chaos!

Operation Management Support
  • Important phone numbers can be found here
  • 1st level: CRM phone (24/7, shift around few experienced people, receive alarm and manage crisis)
  • 2nd level: "System piquet phone" (24/7, called by CRM)
  • Service piquets (not yet 24/7, called via CCC or by CRM, 1-2 hours intervention time)
  • SLIMOS: Shift Leader In Matters Of Safety (monitor alarms related to safety systems, technical alarms (P1), take action)

LVL 1 Alarm - part of normal operations
  • interaction from Atlas Control Room (and via remote access), Viewing via Web
  • DCS: Detector Control System
    • state (what is it doing), status (how well is it doing)
    • navigation tool, overview field, alarm window...
  • alarm screen: filter, e-log, details...
  • operated by: Shift Leader, DCS Operator, SLIMOS, Subdetector Expert

LVL 2 Alarm - machine in danger
  • no immediate reaction from fire brigade
  • managed by DSS: Detector Safety System
    • alarm/action system with own independent signals/sensors, connected to Diesel power
    • automatic system
    • handled by the SLIMOS
    • if possible action by DCS (usually easier than DSS)
  • web interface allows any user to see status of any alarm, the actions related...and to export to excel file
  • all requested alarms implemented except the new Pixel alarm

LVL 3 Alarm - life in danger
  • cover: fire, lack of oxygen, flammable gas detection, CO_2 detection, electrocution, flooding
  • implemented in all ATLAS experimental areas: surface buildings, underground buildings, control room
  • alarm transmission to fire brigade and to SLIMOS, GLIMOS
  • SLIMOS: monitoring from Control Room, source of information for fire brigade
  • other systems: RAMSES (radiation), FPIAA (finding people), LASS (access), seismographs
  • special training for fire brigade

2. Trigger

General
  • Trigger reduces data rate from 40 MHz to 200 Hz recording rate
  • organization in slices: muons, electron, photon, jet, Bphysics, Bjets, Minimum bias, cosmics
  • preparation for 2008 run:
    • adding missing components
    • robustness in case of pile up, displaced beamspot
    • flexibility to adapt to unexpected conditions
    • initial trigger menu
  • EDM: Event Data Model
    • StoreGate now used online
    • flat trigger containers

Trigger menus
  • twiki page TriggerPhysicsMenu
  • select events for physics and calibration, assign them to streams
  • challenge: large number of signatures, "What is interesting physics?"
  • TriggerMenuConvention, example: trigger item EF_2e15i (a small parsing sketch follows after this list)
    • Trigger Level
    • Object Multiplicity
    • type of object
    • ET threshold
    • Isolation criteria
  • trigger menus are written in Python for software development and in XML for production; for online running, the XML files are uploaded to the trigger configuration database; in the future the DB will be the primary source
  • Cosmic Runs: important commissioning step, trigger menu largely based on subsystem needs
  • Single Beam running: start with beam pickup trigger, use TGC to trigger on beam halo
  • first collisions: run simple L1 menu => add HLT algorithms in pass through mode => commission HLT => run 10^31 cm-2s-1 physics menu
  • 10^31 cm-2s-1 physics menu: detector performance, SM physics; first 1, finally 6 output streams
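As an aside for newer members: a trigger item name following the convention above (EF_2e15i = Event Filter, 2 electrons, ET > 15 GeV, isolated) can be decoded mechanically. Below is a small, purely illustrative Python sketch of such a decoder; the regular expression and the object-type handling are our own guesses, not the official trigger configuration tools.

    import re

    # Illustrative decoder for trigger item names such as "EF_2e15i".
    # Assumed pattern: <level>_<multiplicity?><object type><ET threshold><"i" if isolated>
    TRIGGER_ITEM = re.compile(r"^(L1|L2|EF)_(\d*)([a-zA-Z]+?)(\d+)(i?)$")

    def parse_trigger_item(name):
        m = TRIGGER_ITEM.match(name)
        if m is None:
            raise ValueError("not a recognised trigger item: %s" % name)
        level, mult, obj, threshold, iso = m.groups()
        return {
            "level": level,                      # trigger level (L1, L2, EF)
            "multiplicity": int(mult) if mult else 1,
            "object": obj,                       # e.g. 'e', 'mu', 'g', 'j' (assumed labels)
            "et_threshold_gev": int(threshold),  # ET threshold in GeV
            "isolated": iso == "i",              # isolation criterion applied
        }

    # Example from the talk: EF_2e15i = Event Filter, 2 electrons, ET > 15 GeV, isolated
    print(parse_trigger_item("EF_2e15i"))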

Trigger operation
  • tests done during Commissioning runs (cosmics): LVL1, HLT (mostly in pass through) => need tools to adapt quickly to unforeseen conditions
  • tests done during Technical runs (MC data): largest possible DAQ and HLT system => timing issues, bottlenecks, stability
  • need to minimize endless loop: problem occurs => diagnosis => propose solution => verify solution => problem occurs => ...
  • TriP: Trigger Presenter
    • overview over all chains
    • rate overview LVL1, HLT
  • DQMF: Data Quality Monitoring Framework => control stability and quality by physics motivated histograms
  • Offline Monitoring: feedback on the data taken several hours before, check correct transfer to all analysis formats (ESD, AOD)
  • Menu Changes when in data taking mode
    • data thoroughly analyzed
    • very conservative, but also "Trigger Commissioning" periods with more flexibility
    • trigger shifter monitors individual chain rates and trigger conditions, informs (and consults with) shift leader in case of a problem
    • predefined changes caused by falling luminosity, trigger rates, collection of specific samples, defects like noisy cells
    • there might be an unforeseen problem that requires a menu change (trigger shifter, shift leader; if a change in strategy: Trigger coordination group)

Trigger efficiency measurements
  • Trigger efficiency: w.r.t. offline reconstruction, depends on (at least) ET, η, φ
  • important trigger feature for physics analysis and data taking monitoring
  • need to be calculated from real data => need a clean data sample
  • standard candles like Z->ee, but also other probes: well defined standard model processes, "orthogonal" trigger signatures and "boot-strap" techniques as control samples; different detectors can provide control samples (a minimal tag-and-probe sketch follows after this list)
  • beginning of data taking: must be fast => maybe less clean samples?
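The "standard candle" and "orthogonal trigger" ideas above are essentially tag-and-probe: take a well-identified object that fired the trigger (the tag) and measure how often an unbiased second object (the probe) also passes. Here is a minimal Python sketch of the bookkeeping, with invented inputs; it is not the ATLAS trigger-efficiency machinery.

    import math
    from collections import defaultdict

    def tag_and_probe_efficiency(pairs, et_bin_width=10.0):
        """pairs: (probe_et_gev, probe_passed_trigger) for events where a clean 'tag'
        (e.g. one electron from Z->ee) already fired the trigger.  Returns the trigger
        efficiency of the probe leg in ET bins, with a simple binomial error."""
        counts = defaultdict(lambda: [0, 0])            # ET bin edge -> [n_pass, n_total]
        for probe_et, passed in pairs:
            edge = et_bin_width * int(probe_et // et_bin_width)
            counts[edge][1] += 1
            if passed:
                counts[edge][0] += 1
        result = {}
        for edge, (n_pass, n_tot) in sorted(counts.items()):
            eff = n_pass / float(n_tot)
            err = math.sqrt(eff * (1.0 - eff) / n_tot)  # crude away from eff = 0 or 1
            result[edge] = (eff, err)
        return result

    # toy usage with invented numbers
    print(tag_and_probe_efficiency([(32.0, True), (25.0, True), (18.0, False), (41.0, True)]))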

3. Data preparation - Data quality and monitoring

Online
  • aims: stop taking faulty data as quickly as possible, provide parameters for offline DQ assessment
  • approach: automatic checks (OMD, Monalisa) and visualization of important information
  • Tools for visual DQ checking (either alerts or periodical checks):
    • DQMF display (online and offline): checks histograms, writes results automatically to the COOL database (a toy example of such a check follows after this list)
    • OMD: Operational Monitor Display
    • TriP: Trigger Presenter
    • OHP: Online Histogram Presenter
    • Event Displays (Atlantis, VP1)
  • Histogram archiving: CoCa (Collection and Cache), Monitoring Data Archiving; both will send data to CASTOR
  • Remote DQ Monitoring
    • Public Monitoring via Web and Monitoring via the mirror partition
    • Atlantis and VP1 in remote mode allow browsing through recent events via HTTP server
    • WMI: Web Monitoring Interface framework produces pages which can be seen on the Atlas Operation page
  • still need shifter documentation for some of the subsystems
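To make the automatic-check idea concrete: a DQMF-style check typically compares a monitored histogram against a reference and records a colour flag. The toy Python sketch below does this for the mean of a distribution; the thresholds are invented and the "database" is a plain dictionary standing in for COOL.

    # Toy data-quality check: compare the mean of a monitored distribution with a
    # reference value and assign a green/yellow/red flag.  Thresholds and the
    # "database" (a plain dict) are illustrative only.
    conditions_db = {}

    def dq_check_mean(values, reference_mean, yellow_tol, red_tol):
        if not values:
            return "red"                      # an empty histogram is itself suspicious
        mean = sum(values) / float(len(values))
        deviation = abs(mean - reference_mean)
        if deviation < yellow_tol:
            return "green"
        if deviation < red_tol:
            return "yellow"
        return "red"

    def record_flag(run, subsystem, flag):
        conditions_db[(run, subsystem)] = flag   # stand-in for writing to COOL

    flag = dq_check_mean([0.1, -0.2, 0.05], reference_mean=0.0, yellow_tol=0.5, red_tol=1.0)
    record_flag(1, "ExampleSubsystem", flag)
    print(conditions_db)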

DCS: Detector Control System
  • subdetectors monitor conditions, some of the conditions important for monitoring data quality
  • information written to PVSS oracle archive, subset copied to COOL
  • Data Quality Status Calculator summarizes detector conditions, comparison with DAQ configuration, status flag for each subsystem written to COOL
  • writes detector status and dead fraction to OFLP200 database
  • status:
    • ready to run, speed depends on input
    • need more information from subdetector experts
    • can be used for offline good/bad decisions

Offline
  • runs and Offline Tools can be found on http://atlas-service-runinformation.web.cern.ch/atlas-service-runinformation/
    • Histogram view: DQM offline web browse
    • Database view: DBQuery
    • Overview: RunSummary
  • tested mainly in FDR (Full Dress Rehearsals)
  • SFO (Subfarm Output) stream events into different datasets, data quality in express streams
  • results of DQ check written to conditions database folders
  • status:
    • test with L=10^31 and 10^32; hot, noisy, dead cells
    • DQMF decision now written automatically to COOL
    • still too many shift histograms
    • still too slow

Atlas Offline Commissioning
  • offline reconstruction software and computing infrastructure commissioned with cosmics
  • helps detector commissioning
  • several cosmic runs this year (2008), included P1, tier-0, CAF
  • old runs used for commissioning of tier-1s
  • detailed studies for each subsystem, including alignment, bad channels
  • status:
    • we reached memory limit => need more efficient reconstruction
    • reconstruction of some high multiplicity events still takes long
    • BUT: cosmic and physics runs not fully comparable
    • at the moment CBNT (Combined NTuples) is used for commissioning analysis, but by the end of the year ESD will be required
    • next challenge: cosmic runs with B-field, single beam data

Luminosity measurements
  • luminosity group founded in April 2008
  • aim: provide instantaneous and integrated luminosity for any data sample with sufficient data quality
  • TDAQ: run control initiates LB transition, DCS: online exchange of parameters with LHC
  • luminosity determination:
    • relative luminosity: LUCID, other detectors as crosschecks
    • absolute luminosity: first estimates from LHC parameters, ~5% from physics processes, ultimately the pp cross section from ALFA (the machine-parameter formula is sketched after this list)
  • storage in COOL: DCS, TDAQ parameters, DQM, offline & online luminosity
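For reference, the "first estimates from LHC parameters" mentioned above use the standard head-on Gaussian-beam expression L = f_rev n_b N1 N2 / (4 pi sigma_x sigma_y). The sketch below just evaluates that formula; the beam parameters are placeholders, not the actual 2008 machine settings.

    import math

    def luminosity(f_rev_hz, n_bunches, n1, n2, sigma_x_cm, sigma_y_cm):
        """Instantaneous luminosity (cm^-2 s^-1) for head-on Gaussian beams:
        L = f_rev * n_b * N1 * N2 / (4 * pi * sigma_x * sigma_y)."""
        return f_rev_hz * n_bunches * n1 * n2 / (4.0 * math.pi * sigma_x_cm * sigma_y_cm)

    # placeholder parameters, NOT the actual 2008 LHC settings
    L = luminosity(f_rev_hz=11245.0,                    # LHC revolution frequency
                   n_bunches=43,                        # colliding bunch pairs (assumed)
                   n1=4e10, n2=4e10,                    # protons per bunch (assumed)
                   sigma_x_cm=45e-4, sigma_y_cm=45e-4)  # transverse beam sizes (assumed)
    print("L ~ %.1e cm^-2 s^-1" % L)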

4. Data preparation - Calibration and alignment

Calibration Loop readiness
  • after data taking and before starting tier-0 reconstruction
  • strategy, status
    • Pixel: noisy pixels flagged out at end of each fill, dead pixels extracted after merging maps of a whole week; not written to database until detector known with reasonable precision
    • SCT (Semiconductor Tracker): online calibration via standalone runs in RODs between fills; offline calibration via express stream on CAF => mostly noisy and dead modules
    • TRT (Transition Radiation Tracker): based on ID alignment stream data; not successful in tests due to high initial misalignments
    • ID: alignment not yet perfect, cosmic and halo stream building not yet implemented
    • LAr calorimeter: electronics calibration in runs between fills, calibration stream not tested yet, nothing done yet for sporadically noisy channels
    • Tile calorimeter: calibration chain; calibration constants from laser not implemented yet
    • MDT (monitored Drift Tube): calibration with tracks from calibration stream built at LVL2, calibration data flow tested
    • Muon Spectrometer: alignment with tracks not implemented yet
  • Mandate of the PROC (Prompt Reconstruction Operation Coordinators):
    • coordination of software and conditions data
    • tier-0 processing (must include appropriate data quality monitoring and must use up to date conditions)
    • Atlas Tier0 release coordination
    • 2 people, term 1 year
    • must keep close contact to all the related groups

ID alignment
  • alignment: determination of position and orientation of the detector components
  • track based alignment: minimize residuals, with iterations needed for non-linear residuals (a toy version is sketched after this list)
  • alignment levels:
    • LEVEL1: Subdetector components (like barrel/endcap)
    • LEVEL2: Layers/Disks (Si), Modules (TRT)
    • LEVEL2.5: Pixel Barrel Staves/Disks, SCT Barrel Raws...
    • LEVEL3: Modules (Si), Straws (TRT)
  • use CoG (Center of Gravity) of the ID as a fixed reference for the alignment (with flexibility in CoG definition: apply weights)
  • full scale test of alignment in CSC note
    • minimizing residuals not enough
    • problems with weak modes (detector movements corresponding to poorly constrained degrees of freedom) => need extra information from other detectors and from physics
  • first data taking:
    • still need some optimization for maximum speed
    • before collisions use cosmics, beam halo to get idea of "real" initial misalignment
    • only when confident enough, switch on LEVEL3
  • later data taking
    • after a few months, reprocessing data at Tier-1 with more precise calibration/alignment
    • group should be able to provide best alignment constants
    • need feedback from physics groups
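To illustrate the "minimize residuals and iterate" idea (and nothing more), here is a deliberately over-simplified 1D toy in Python: straight tracks with known parameters cross a few modules whose offsets are recovered from the mean hit residuals. It ignores track refitting, correlations and the weak modes discussed above, and is not the ATLAS alignment code.

    import random

    random.seed(1)

    # Toy 1D alignment: each "module" has an unknown offset; straight tracks with
    # known parameters leave hits at (true position + module offset + noise).
    # Aligning = estimating each offset from the mean residual, then iterating.
    n_modules = 5
    true_offsets = [random.uniform(-0.5, 0.5) for _ in range(n_modules)]   # mm
    alignment = [0.0] * n_modules                                          # current constants

    def make_tracks(n_tracks=2000, noise=0.1):
        tracks = []
        for _ in range(n_tracks):
            a, b = random.uniform(-10, 10), random.uniform(-0.2, 0.2)      # intercept, slope
            hits = [(i, a + b * i + true_offsets[i] + random.gauss(0, noise))
                    for i in range(n_modules)]
            tracks.append(((a, b), hits))
        return tracks

    tracks = make_tracks()
    for iteration in range(3):
        residual_sums = [0.0] * n_modules
        counts = [0] * n_modules
        for (a, b), hits in tracks:
            for module, measured in hits:
                predicted = a + b * module + alignment[module]
                residual_sums[module] += measured - predicted
                counts[module] += 1
        # update each alignment constant by its mean residual
        for module in range(n_modules):
            alignment[module] += residual_sums[module] / counts[module]
        print("iteration", iteration, [round(x, 3) for x in alignment])

    print("true offsets:   ", [round(x, 3) for x in true_offsets])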

Magnetic Field Map Reconstruction
  • measurements: a set of 3D Hall Probes
  • simulation: B-field created by the coils and magnetic perturbations (minimize Χ2 by fitting coil displacement and deformations)
  • Magnet test, June 2008 without ECT (End Cap Toroid) A because of leak, full test planned for August
  • full chain working: sensors => online display and oracle database => offline analysis => TTree => field reconstruction => field map
  • code in CVS
  • need to be more automated
  • still need to adjust models better

5. Data bases and Computing Operations

Data Bases
  • shift from development to deployment and operation
  • COOL: conditions data base
  • server infrastructure:
    • ATONR - online Oracle server, at Cern: primary repository for data including DCS, PVSS, TriggerDB, COOL, HLT, ROD
    • ATLR - offline Oracle server, at Cern: primary repository for data including detector description, magnetic field ATLASDD, Conditions data, TAGs
    • Tier1 Oracle servers
    • Oracle at some Tier2s
    • data replication between all servers with ORACLE streams
  • 3D: Distributed Deployment of Databases
    • ATLAS multi-grid infrastructure
    • subset of application must be distributed world wide: Geometry DB (ATLASDD), Conditions DB (COOL), TAG DB (event-level meta-data for physics)
    • ATLAS Jobs: units for data processing workflow management => grouped in Tasks => ATLAS Tasks DB
    • ATLAS File: units for data management => grouped in Datasets => Central File Catalogs
  • database monitoring is important
  • TAG: minimal event data

Simulation production, reprocessing, computing shifts
  • simulation production:
    • running stably for 4 years, ready to cope with ATLAS requirements
    • job and walltime efficiency continuously improving
    • usage of schedulers to control work flow and latencies
    • correct validation important
  • reprocessing system (recall from tape to Tier1)
    • test of data flow and access
    • problems with conditions data access
    • problems with some of the sites
  • Computing Offline Shifts
    • since January 2008, at the moment 15h/6d
    • monitoring of data management, simulation production

Data distribution system
  • DDM (Distributed Data Management), webpage at http://dashb-atlas-data.cern.ch/dashboard/request.py/site
  • 3 GRIDs, 10 Tier1s, ~70 Tier2s
  • Tier1s and associated Tier-ns form clouds
  • centralized data distribution:
    • data replication from Cern to Tier1s
    • data replication within clouds
    • data replication to Cern and between Tier1s
  • Software DQ (Don Quichotte) under heavy testing
  • Central Services at Cern and Local Services at BNL: computers under 24/7 maintenance
  • status
    • software improved, stable
    • monitoring stable
    • scenarios tested well beyond 2008
    • on-call and regional shift teams operated (now shift training)

Tier0
  • functional requirements:
    • ESD (Event Summary data), AOD (Analysis Object Data), DPD (Derived Physics Data), TAG production
    • calibration, alignment
    • express stream reconstruction
    • archiving of RAW to tape, replication of selected data to CAF (Cern Analysis Facility)
  • quantitative requirements
    • O(10K) jobs, permanent files, temporary files per day
    • disk writing 880 MB/s, reading 1900 MB/s, tape writing 540 MB/s
    • approx. 3000 reconstruction jobs in parallel (a back-of-the-envelope throughput sketch follows after this section)
  • Tier0 software
    • operational
    • "handshake" with event filter farm output
    • interface with DDM (see talk above)
    • tests with meta data catalogue, offline software, TAG database, conditions database, Data Preparation, Offline Data Quality community
  • Tier0 shift system:
    • bug tracker
    • shift organization
    • will develop overview page with histograms
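As promised above, a back-of-the-envelope sketch of what these numbers imply, combining the 200 Hz recording rate quoted elsewhere in these notes with the ~3000 parallel reconstruction jobs; the resulting per-event wall-time budget is only an order-of-magnitude illustration.

    # Rough Tier0 throughput arithmetic (illustrative; event sizes deliberately omitted).
    recording_rate_hz = 200.0      # recording rate quoted elsewhere in these notes
    seconds_per_day = 86400.0
    parallel_jobs = 3000.0         # approx. parallel reconstruction jobs (from the talk)

    events_per_day = recording_rate_hz * seconds_per_day           # ~1.7e7
    events_per_job_per_day = events_per_day / parallel_jobs        # ~5760
    wall_time_budget_s = seconds_per_day / events_per_job_per_day  # ~15 s per event

    print("events/day           : %.2e" % events_per_day)
    print("events/job/day       : %.0f" % events_per_job_per_day)
    print("wall-time budget/evt : ~%.0f s" % wall_time_budget_s)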

6. Software releases and analysis model

Offline Release Schedule
  • Glossary
    • Package: coherent set of C++ or Python Classes, evolves => version is a snapshot
    • Project: coherent set of packages providing similar functionality
    • Release: coherent set of projects and versions
    • Patch: special project that contains version overrides to fix bugs; alternative name for patch release: Cache
    • EDM (Event Data Model): from BS (Byte Stream) => ESD => AOD to DPD and TAG
  • Release Strategy
    • baseline release 2-3 months before first physics data =>stable!
    • short branches for bug fixes
    • longer term development moved off to side
    • in spite of tests: will find deficiencies
    • Physics Analysis Partial Release: for development for physics analysis against a stable release
  • Patch Projects: fix within 24 hrs, want to isolate data taking from e.g. simulations

Tier0 offline software
  • for running the EDM from BS to Tag & DPD, base release is 14.2.10
  • bug tracking: Savannah bug tracker; information for reproducing a bug will be available
  • patches
    • added when bugs found and after being validated
    • incremental => if too big => new release
    • together with validation in nightly builds
    • cannot handle header file changes => new full bugfix release needed
    • for fixing bugs not for increasing functionality
    • propagated to other nightlies to integrate with fixes from event generation and simulation
  • validation
  • status:
    • need all validation steps
    • system working
    • shifts organized soon

Performance tuning and Reconstruction Descoping
  • requirements:
    • reconstruction of all ATLAS events (200 Hz) within 24 hrs
    • crash rate less than 1 per 1 million events
  • if one algorithm is crashing => should be able to disable it without a cascade of failures
  • first data: if problems cannot be solved within few hours => organized descoping
  • as not possible to protect against all problems => quick reaction important
  • priorities for Tier0 reco (this year)
    • provide enough information for debugging and calibrating the detectors
    • reconstruction of robust objects
    • measurements of electrons, muons, jets and prompt tracks; then missingET, tau, B-tagging
  • descoping
    • possibility of simplification of algorithms
    • disable algorithms one by one or by local group
    • different DB tag
    • give up reconstruction of one stream (would typically be jet)
    • define priority lists for the different detectors (already done)
    • might have to reduce ESD/AOD size
  • plans
    • fine tuning of priority lists
    • introduce handles for easy descoping
    • avoid cascade failures

Analysis Model
  • ESD (Event Summary data)
    • detailed output of detector reconstruction
    • produced from RAW data
    • allows particle identification, track-refitting, jet finding, calibration...
    • at the moment bigger, but target size 500 kB/event
  • AOD (Analysis Object Data)
    • summary of reconstructed events sufficient for common analysis
    • produced from ESDs
    • target size 100 kB/event
  • DPD (Derived Physics Data)
    • distilled version of AOD or ESD plus UserData
    • dedicated DPD makers for many physics groups like top group
    • three steps: primary D1PDs, D2PDs and D3PDs (final ROOT files with histograms and ntuples)
  • ESD, AOD, D1,2PD share the same ROOT/POOL format
  • UserData
    • want to store intermediate analysis results (like combined particle masses)
    • events can be "decorated" with the UserDataSvc: label, object (anything that can be put into a TTree)
    • want a common solution outside EventView
  • analysis frameworks more modular
  • PyAthena framework improves working with python (https://twiki.cern.ch/twiki/bin/view/Atlas/PyAthena)
  • ARA (Athena Root Access):
    • direct access from ROOT without the athena framework
    • re-using code in here needs some wrapping
  • EDM further simplified to flat structure in release 14

3. Physics

1. Preparation and organization for the first data

e/gamma Performance
  • tracks are reconstructed by merging three algorithms: inside-out (seed with PIXEL/SCT, extrapolate to TRT), outside-in (seed with TRT, extrapolate to PIXEL/SCT), and TRT standalone
  • good efficiency over most of detector for tracks of moderate momenta
  • 10-50% of photons convert before reaching the calorimeter; these can be reconstructed at ~80% efficiency for conversion radii up to 80cm
  • reconstruction basically involves picking a window size in eta and phi, clustering the cells around local energy maxima, and trying to match with a track to hypothesize the particle type (a toy sketch of this follows after this list)
  • one then rebuilds the cluster, calibrates it, determines its direction, and starts doing things like figuring out discriminating variables for analyses to determine the particle type
  • EM calibration involves
    • online: convert ADC samples to energy, uses time dependent calibration constants obtained in dedicated runs
    • offline: correct for electronics nonlinearities and variations in HV, apply results of Z to ee intercalibration
    • offline cluster calibration: correct the cluster energy and position for effects of the clustering algorithm and detector geometry; derived from ideal geometry detector simulations
  • Monitoring is in place, including variables used with triggers and IsEM: energy in calorimeter samplings, hadronic calorimeter leakage, isolation, and TRT, SCT and PIXEL variables.
  • Early running: must calibrate and align the detector, validate algorithms, and measure efficiencies. With data, want to determine fake rates, validate trigger algorithms, etc.
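A cartoon of the "window around a local energy maximum" step described above: a toy eta-phi grid of cell energies, seeds above a threshold that are local maxima, and the energy summed in a fixed window around each seed. Grid, threshold and window size are invented; the real sliding-window algorithm, calibration and track matching are far more involved.

    # Toy sliding-window clustering on a small eta-phi grid of cell energies (GeV).
    # Seed = cell above threshold that is a local maximum; cluster = energy summed
    # in a fixed window around the seed.  All numbers are purely illustrative.
    grid = [
        [0.1, 0.2, 0.1, 0.0, 0.3],
        [0.2, 5.0, 1.2, 0.1, 0.2],
        [0.1, 1.1, 0.8, 0.0, 4.1],
        [0.0, 0.2, 0.1, 0.3, 0.9],
    ]
    SEED_THRESHOLD = 2.0   # GeV, invented
    WINDOW = 1             # cells on each side of the seed (3x3 window)

    def is_local_max(grid, i, j):
        e = grid[i][j]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < len(grid) and 0 <= nj < len(grid[0]):
                    if grid[ni][nj] > e:
                        return False
        return True

    clusters = []
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if grid[i][j] > SEED_THRESHOLD and is_local_max(grid, i, j):
                energy = sum(grid[ni][nj]
                             for ni in range(max(0, i - WINDOW), min(len(grid), i + WINDOW + 1))
                             for nj in range(max(0, j - WINDOW), min(len(grid[0]), j + WINDOW + 1)))
                clusters.append({"seed": (i, j), "energy": energy})

    print(clusters)   # the next steps would be calibration and track matching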

Muon Performance
  • There are various reconstructed muon "types" for different situations:
    • Precisely measure High Pt Muons by matching tracks from ID (inner detector) and MS (muon spectrometer)
      • STACO: combines ID and Muon parameters
      • MuID and other modular approaches: fit ID+Muon track, use Calo energy loss
    • High and robust Muon efficiency by tagging ID tracks with MS segments
      • Mutag, MutagIMO: match ID track with unassociated or any segment
    • Low Pt Muons and regions without a lot of instrumentation by pattern recognition outside ID
      • CaloTag, CaloLR: energy deposits and ID track
      • MuGirl: seed MS pattern recognition with an ID track
    • MuonSpectrometer standalone for areas beyond the ID acceptance (|eta| > 2.5)
      • MoMuMoore and Muonboy: with full reco chain
  • Working to finalize MS part of AtlasTrackingGeometry
  • There are common material effects for the ID and MS track fitters
  • Working to be prepared for potential detector failures
    • a run with middle MDTs of reduced overall efficiency from 95% to ~80%
    • they expect segment and calo muon taggers to recover lost efficiency
  • Also working on detector and software robustness, as well as robustness against mis-alignment
  • Work being done on triggers using pile-up and cavern background- more specific information on triggers in the slides
  • Currently working on determination of fake rates and efficiencies

Jet/EtMiss Performance
  • Work has been done on the Jet/EtMiss TWiki page, includes rel 13 and 14 documentation: https://twiki.cern.ch/twiki/bin/view/Atlas/JetEtMiss
  • Software change: signal states are available for jets, so there are now two four-vectors inside each jet; this allows access to the fully calibrated and "raw" signal of a given jet in the same data object
  • Signal states are also available for jet constituents (rel 14), so one can access calibrated and uncalibrated TopoClusters in the same code; also, no separate collection of uncalibrated TopoClusters is needed anymore
  • Full EtMiss calibration may not be robust enough for early data, so other, simpler, calculations may be used
    • use calocells + muons, use calocells inside TopoClusters + muons, add cryostat correction, apply refined calibration
  • There are several variables that will be monitored- see talk
  • Need to reduce the number of histograms, improve naming, and implement proper ESD monitoring, and, in the longer term, work on trigger-aware monitoring
  • Early data Reconstruction:
    • need a flat jet response quickly- with straightforward energy scale calibration and uncertainty estimate; needed for detector performance and data/MC comparison
    • need a consistent EtMiss definition
    • provide jet calibration based on simple scaling with eta/phi map for initial physics analysis
    • work on tuning MC to data
  • to get from a jet at the EM scale to a calibrated jet, one must do a baseline subtraction, relative corrections, and an absolute calibration (a toy version follows after this list)
  • EtMiss in early data
    • emphasis on instrumental effects, as it is sensitive to these (like beam gas, cosmics, etc)
    • need to validate using standard model processes like tt or w->lv, determine absolute scale in-situ, and check the resolution
  • Still need to address detector problem scenarios, such as the LAr calorimeter being turned off
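To make the calibration chain in the bullet above concrete, here is a deliberately simple Python sketch: an EM-scale jet gets a baseline (offset) subtraction, a relative eta-dependent correction, and an absolute scale factor. All constants and functional forms are invented for illustration and are not the ATLAS jet calibration.

    # Toy jet calibration chain: EM-scale energy -> baseline subtraction ->
    # relative (eta-dependent) correction -> absolute scale.  All constants invented.
    def baseline_subtraction(e_em_gev, offset_gev=1.5):
        """Subtract an average offset (e.g. noise / pile-up contribution)."""
        return max(e_em_gev - offset_gev, 0.0)

    def relative_correction(e_gev, eta):
        """Flatten the response in eta using a lookup of relative factors (invented)."""
        eta_map = {0: 1.00, 1: 1.03, 2: 1.08}          # per |eta| bin
        return e_gev * eta_map.get(min(int(abs(eta)), 2), 1.0)

    def absolute_calibration(e_gev, scale=1.12):
        """Overall EM-to-hadronic scale factor (invented)."""
        return e_gev * scale

    def calibrate_jet(e_em_gev, eta):
        e = baseline_subtraction(e_em_gev)
        e = relative_correction(e, eta)
        return absolute_calibration(e)

    print(calibrate_jet(e_em_gev=40.0, eta=1.4))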

B-Tagging and tracking
  • Definition: d0 is the impact parameter in the transverse plane, z0 is the impact parameter longitudinally
  • Track reconstruction efficiencies: muons ~100%, pions ~ 94% (central), 85% (forward), electrons ~ 94% (central), 84% (forward)
  • There are some jet reconstruction issues to be solved going from rel. 13 to 14, with particular emphasis on fake rates
    • increase in fake rates probably due to simulation bugs, which need to be addressed
  • resolution has improved in d0, z0, and somewhat less in 1/Pt, probably from refined pixel cluster treatment
  • having done some work on cosmics, starting beam gas and beam halo work, which is particularly important for aligning the endcaps
  • the alignment procedure is OK, but suffers from "weak modes" (see the Z width plot in the talk), which are hard to address with track based alignment
  • track reconstruction efficiencies and impact parameter resolutions are better, but b-tagging performance is worse: fake rates again? old calibration information?
  • Early data:
    • need to work on reconstruction of beam gas and beam halo events
    • need to optimize alignment before collision data
    • should determine beamspot
    • work on data quality monitoring and on reconstructing collision data with a flawed detector
  • B-tagging algorithms:
    • JetProb- Impact Parameter significance based on LEP/ALEPH
    • IP2D, IP3D- Impact Parameter significance using likelihood ratio
    • SV1- “classical” Secondary Vertex Likelihood Ratio
    • JetFitter- b-c Decay chain; uses likelihood ratio or neural network
  • Primary vertex reconstruction: find signal vertex and determine position in z. Used for impact parameters and flight distances
  • Secondary vertex reconstruction uses two algorithms:
    • BTagVrtSec (SV0, SV1, SV2)- this is a “classical” algorithm: fits a single geometrical vertex
    • JetFitter- uses the kinematics of the b-c decay chain. It has good performance but is more complex
  • Still doing work on the b-tagging efficiency (dry run in the CSC note) and on the light jet mistag rate, as well as the effects of misalignment on b-tagging (up to ~25% rejection loss) with alignment and error scaling (a toy impact-parameter weight is sketched below)
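As a toy version of the impact-parameter idea behind JetProb/IP2D mentioned above: compute the d0 significance of each track in the jet and sum log-likelihood ratios of a b-jet versus a light-jet template. The Gaussian templates below are placeholders, not calibrated ATLAS distributions, and the proper signing of d0 with respect to the jet axis is skipped.

    import math

    # Toy impact-parameter tagger: per-track d0 significance combined into a jet
    # weight via a log-likelihood ratio of b-jet versus light-jet templates.
    # The Gaussian templates are placeholders, not ATLAS calibrations.
    def gauss(x, mean, sigma):
        return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def track_llr(d0_mm, sigma_d0_mm):
        s = d0_mm / sigma_d0_mm                       # d0 significance
        pdf_b = gauss(s, mean=1.5, sigma=2.5)         # assumed b-jet template
        pdf_light = gauss(s, mean=0.0, sigma=1.0)     # assumed light-jet template
        return math.log(max(pdf_b, 1e-12) / max(pdf_light, 1e-12))

    def jet_weight(tracks):
        """tracks: list of (d0_mm, sigma_d0_mm); larger weight = more b-like."""
        return sum(track_llr(d0, sd0) for d0, sd0 in tracks)

    print(jet_weight([(0.08, 0.02), (0.01, 0.02), (0.15, 0.03)]))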

2. Detector performance calibration and strategy with the first data

Standard Model (SM) Physics
  • As of July 2008, expect 20pb-1 of integrated luminosity from the LHC, assuming 10% efficiency and a peak luminosity of 5x10^31 cm-2s-1 over 40 days of physics running (year 2008) at √s = 10 TeV (the arithmetic is sketched after this list)
    • This leads to at least 1 million minimum bias events, 50,000 W's, 5,000 Z's, 20 million triggered jets, etc.
  • Minimum bias- important for pileup, in particular
    • done by looking at tracks with pTmin>150MeV and then applying track-to-particle, vertex reconstruction and trigger bias corrections
  • Underlying events- used for jet and lepton isolation, energy flow, jet tagging, etc. Uncertainties from dependence on multiple interactions, PDFs, and gluon radiation. Jet measurements in early data should help
  • Double parton interactions
    • measured by AFS, UA2 and CDF
    • done by looking at four jet production and accounting for correlations between jets
  • Jet physics
    • measurements of jet cross-sections, first look at jets above 1 TeV
    • Much work is being done on the JES and jet algorithms; need work on efficiencies, luminosity, and determining the jet spectrum we would have with a perfect detector
  • W/Z measurement
    • one of the first measurements- useful to understand detector performance since the processes are well understood theoretically; several groups are involved and use this data for various things (see talk)
  • will try data driven background subtraction for w->ev events
  • W/Z + jets- need more statistics compared to inclusive production. Used for detector performance, as background to several samples (like Higgs), and to test perturbative QCD. JES is the largest systematic error
  • Photons: difficult to understand at start up, and these are important for things like isolation. May not be able to do much with these at start up.
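A quick arithmetic check of the integrated-luminosity figure quoted at the top of this subsection, under the simplifying assumption that the machine always delivers the quoted peak luminosity during the 10% useful fraction of the 40 days:

    # Arithmetic behind the "about 20 pb^-1 in 2008" estimate quoted above.
    peak_luminosity = 5e31   # cm^-2 s^-1
    days_of_running = 40
    efficiency = 0.10        # fraction of the time actually delivering physics

    seconds = days_of_running * 86400 * efficiency
    integrated_cm2 = peak_luminosity * seconds        # cm^-2
    integrated_pb = integrated_cm2 / 1e36             # 1 pb^-1 = 1e36 cm^-2
    print("integrated luminosity ~ %.0f pb^-1" % integrated_pb)   # ~17, i.e. of order 20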

Top Physics (mostly ttbar process)
  • focus on single and di-lepton events, but these are sensitive to the isolation trigger and to fake rates
  • triggers have some overlap (see plot in slides)
  • interesting jet multiplicity region also corresponds to low Pt jets (4 jet bin peaks at about 20 GeV), which comes along with JES and jet finding difficulties
  • JES uses a template method, with a di-jet mass sample- determined to 2% accuracy at 50 pb-1, if full b-tagging is available
  • There is also a data driven method that is stable with 200pb-1 of data
  • An MET cut helps to reduce the QCD background, and the shape of the W transverse mass is sensitive to EtMiss fakes
  • W+Jets is a particularly important background- and accepted cross-sections vary by up to a factor of two depending on parton multiplicities
  • single lepton ttbar cross-section: there are three methods: fully inclusive jet counting, and reconstructing the top kinematics either by counting events that pass the selection or by fitting the three jet invariant mass (the counting arithmetic is sketched after this list)
    • the first was the discovery method at the tevatron
  • use of b-tagging is helpful for improving S/B
  • Di-lepton ttbar cross-section- also multiple methods such as scan and count, template fit, and maximum likelihood
  • difficult to determine top mass- need to understand reconstructed objects well
    • but interesting- could be resonances in it due to new physics
  • the top quark decays before hadronization, so it is interesting to look at spin, as this information is conserved
  • there is also some work being done on high Pt top reconstruction
  • In general, would like to determine trigger efficiencies, measure jet energy scales and MET calibration and determine B-tagging efficiencies for early data
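For orientation, the "count events that pass the selection" method mentioned above reduces to sigma = (N_observed - N_background) / (efficiency x integrated luminosity). A small sketch with invented numbers:

    import math

    # Cross-section by counting: sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity).
    # All inputs below are invented placeholders, just to show the arithmetic.
    def counting_cross_section(n_obs, n_bkg, efficiency, lumi_pb):
        sigma_pb = (n_obs - n_bkg) / (efficiency * lumi_pb)
        stat_err_pb = math.sqrt(n_obs) / (efficiency * lumi_pb)   # Poisson error on N_obs only
        return sigma_pb, stat_err_pb

    print(counting_cross_section(n_obs=1200, n_bkg=400, efficiency=0.05, lumi_pb=50.0))
    # -> roughly (320 pb, 14 pb) with these made-up numbers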

B-Physics
  • should get early measurements at 500 microbarns- important since many groups will use this information
  • B-physics involves many different measurements from QCD HF production to rare b decays
  • Goals for early data:
    • test and understand the trigger (with J/psi, for instance) and detector, as well as detector performance, particularly with mass and lifetime measurements (e.g. help with alignment tests after 10pb-1)
    • measure cross-sections at the new energy, work on testing QCD, optimize b-trigger strategies
    • make preparatory measurements for later work on more sensitive or discovery b-measurements
  • first measurements will include cross-sections and polarizations (particularly with Onia), and B-hadron lifetimes
  • J/psi and upsilon processes important for systematics removal. Both are useful for calibration of tracking, trigger, and muon systems
  • With Onia, can look at mass shifts versus Pt, curvature difference, eta, and phi to get an idea of the source of detector effects
  • FDR1 and 2 both allowed practice runs of determining these things
  • how to get trigger efficiency with J/psi: find a tagged muon associated with Level1 muon RoI, then look for another muon that is a decay product from J/Ψ and measure trigger efficiency using this probe muon.
  • J/Psi are also useful for solenoid field validation- aim is 0.05% precision in 2 tesla field
  • All the LHC experiments will measure B cross-section from pp collisions, but over different phase spaces. Partial overlaps should be good for cross-checks
  • will also look at rare decays. Bs→μμ, for instance, is sensitive to new physics and expected to be viewable within the first few years

Heavy Ions
  • This will cover very briefly the 50 page talk. It is interesting and has many illustrations. See the talk for more information
  • Heavy ion collisions involve several stages: colliding nuclei, hard collisions, dynamical evolution, and hadron freezeout where the middle stages are inferred
  • bulk property studies done using "hydrodynamic flow"
  • can study things like jet quenching (jet modification)- multiple scattering of partons
  • impact parameter strongly related to number of participating nucleons and overall shape- will get info from the ZDC (Zero Degree Calorimeter); see the sketch after this list
  • Problems
    • hydrodynamics has no direct observation of the primary degrees of freedom, 3-d models are still in development, and the initial conditions are poorly understood
    • Jet quenching- data challenges this, and it is hard to study if we don't understand how the primary hard-scattered partons are affected by it
    • Early thermalization- we don't understand what the system is made of, how and when it thermalizes, and how it achieves low viscosity
  • Are overlaps with other groups, such as minimum bias p+p as well as jet and direct photon measurements
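
A Glauber-type Monte Carlo is the usual way to relate impact parameter to the number of participants; below is a minimal sketch with Woods-Saxon nucleon positions and hard-sphere nucleon-nucleon collisions. The Pb parameters and the NN cross-section are rough textbook values, not the ATLAS heavy-ion group's configuration.

  # Minimal Glauber-style Monte Carlo: impact parameter -> number of participants.
  # Woods-Saxon and cross-section parameters are rough values for illustration.
  import numpy as np

  rng = np.random.default_rng(3)
  A, R, a = 208, 6.62, 0.546   # Pb: mass number, radius (fm), diffuseness (fm)
  SIGMA_NN = 7.0               # inelastic NN cross-section ~70 mb = 7.0 fm^2 (assumed)
  D2 = SIGMA_NN / np.pi        # squared max transverse distance for a collision

  def sample_nucleus():
      """Sample A nucleon positions from a Woods-Saxon density (rejection sampling)."""
      pos = []
      while len(pos) < A:
          r = rng.uniform(0, 3 * R)
          # accept with probability proportional to r^2 times the Woods-Saxon profile
          if rng.uniform() < r**2 / (1.0 + np.exp((r - R) / a)) / R**2:
              costh, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
              sinth = np.sqrt(1.0 - costh**2)
              pos.append([r * sinth * np.cos(phi), r * sinth * np.sin(phi), r * costh])
      return np.array(pos)

  def n_part(b):
      """Number of participants for one event at impact parameter b (fm)."""
      nucl_a = sample_nucleus() + np.array([b / 2.0, 0.0, 0.0])
      nucl_b = sample_nucleus() - np.array([b / 2.0, 0.0, 0.0])
      # transverse (x, y) distances between every pair of nucleons
      d2 = ((nucl_a[:, None, :2] - nucl_b[None, :, :2]) ** 2).sum(axis=2)
      hit_a = (d2 < D2).any(axis=1)
      hit_b = (d2 < D2).any(axis=0)
      return int(hit_a.sum() + hit_b.sum())

  for b in (0.0, 5.0, 10.0, 14.0):
      print(f"b = {b:4.1f} fm  ->  N_part ~ {n_part(b)}")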

4. Collaboration issues

Collaboration Tools

Publication Committee
  • Publication Committee changes every year
  • Publication policy document can be found at http://www.physto.se/~ker/CB/Update_Publ_Author_June_2008.html
    • publication policy
    • authorship paper
    • style guide
  • testing of CDS, the system CMS uses
  • when analysis is final, draft has to exist and be endorsed by an EdBoard
  • project publications appear first as COM, then INT, PUB, SN (PUB and SN are not internal; so far these are expectations, and their role will diminish)
  • preliminary results appear as CONF => what is presented at conferences and what is expected to appear at publication level
  • proceedings appear as PROC
  • CSC notes:
    • expected performance of the ATLAS experiment
    • will appear as book with full author list
    • only book may be cited

Authorship Committee
  • information is in the authorship paper (see above)
  • ATLAS papers and notes - authorship
    • CTB (Combined Test Beam) Papers: people who have actively contributed to preparation, operation, data analysis; Project Leader should propose list in consultation with CTB
    • INT (internal notes): one or a few individuals
    • PUB (public notes): one or a few individuals or ATLAS Collaboration
    • SN (Scientific Notes): one or a few individuals
    • CONF (Conference Notes): The ATLAS Collaboration
    • PROC (Conference Proceedings Notes): speaker on behalf of ATLAS Collaboration
  • ATLAS authorship (see authorship paper)
    • team Leader has to apply to the Chairperson of the Authorship Committee with a short explaining email
    • chairperson will make recommendation to spokesperson (consults with AC and Collaboration Board Chairperson in case of problems)

Speaker's Committee
  • from October on: 6 members
  • function
    • contact point for conference organizers with ATLAS => identifying speakers, participating in planning sessions
    • solicit additional opportunities for conference talks
    • maintain record of given talks, copy of presented material
  • works together with SCAB (Speakers Committee Advisory Board)
    • helps to provide priority list (who should be giving a talk)
    • call to CB members for suggestions
    • looking for data base solution with additional information
  • responsibilities of speakers
    • may accept personal invitations, but need to inform Speakers Committee
    • all presented results should have been approved as ATLAS public results (either final or as CONF)
    • proceedings should be written as an ATLAS note at least 10 days before deadline (reviewed by PubCom)
    • preliminary results must be approved by analysis group and its Physics Convenors in agreement with the Physics Coordinator and the Editorial Board appointed by PubCom
    • Conference Note must be available on the web at least a week before the conference
    • talks for major conferences must undergo a rehearsal in an open physics meeting (slides must be available on web)

Assessment of Scientific Achievement
  • how to objectively record and recognize achievements by individual researchers
  • ATLAS: experience considered important; good documentation of notes, papers, talks; OUTMOU for operational tasks

Operation Task Planning

From the Collaboration board
  • initial communication of results through official talk or paper
  • care should be taken, e.g. with blogs, not to pre-date the release of results through official channels
  • refer to official document, highlight any personal interpretation as "personal"
  • as a general matter of politeness, don't cite people without asking them

Protecting information
  • it is the responsibility of every ATLAS member to ensure that physics results not yet approved are not propagated outside the collaboration
  • areas of concern:
    • notes and papers: CDS (Central Document Server)
    • communication exchange: mailing lists, HyperNews
    • meetings: indico, presentations (protection based on membership of group ready)
    • webpages: standard and twiki (twiki has its own security system, twiki protected sub-web on its way)
    • code and documents: CVS (Concurrent Versions System) repository (protection based on membership of group ready)
  • access granted on authentication, authorization
  • two groups used today: atlas-gen and atlas-current-physicists, want to automate the building of groups from existing databases
  • mismatch for some persons between ATLAS and CERN database => email ATLAS secretariat

5. Upgrades

Overview
  • Currently, a shutdown is planned for winter 2012-2013 to switch from Linac2 to the new Linac4 (brighter beam, ultimate current) and to install new large-aperture focussing quadrupoles (from 55 cm to 25 cm); see the sketch after this list
  • This shutdown will be 6-8 months. Pixel b-layer will probably also be improved
  • There are plans for the sLHC in 2017, which has more injector chain improvements and/or machine elements to give the potential for >= 10^35 cm-2 s-1
  • Experiments (not machine) will have a long shutdown ~18 months probably after 2016 run for major changes before sLHC
  • In general, sLHC will involve such improvements as:
    • Fully replacing ID with an all-silicon tracker
    • Replacing LAr and tiles electronics and readout (damage from radiation, for instance)
    • Work on forward LAr Calorimeter
    • Work on forward muon chambers, maybe more; Be beampipe; more shielding
    • Magnets and most detectors remain in place
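
If the 55 cm to 25 cm figure above refers to β* at the interaction point (our assumption, not stated explicitly in the talk), the gain from the new quadrupoles alone can be estimated from the rough scaling L ∝ 1/β*, ignoring crossing-angle and hourglass corrections:

  # Rough illustration only, assuming the 55 cm -> 25 cm numbers are beta* values
  # and ignoring crossing-angle and hourglass corrections.
  beta_star_old, beta_star_new = 0.55, 0.25  # metres
  print(f"approximate peak-luminosity gain ~ {beta_star_old / beta_star_new:.1f}x")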

Work towards sLHC
  • target date 2016, a few years later than originally planned. Funding approval needed simultaneously from all LHC experiments by ~end of 2010
  • paperwork is ongoing; various documents are anticipated, starting with a letter of intent in summer 2009 (should be finalized in September 2009). These need quite a bit of preparation
  • current organization ok, but needs to evolve with increasing workload
  • two meetings planned for 2009 with more as needed

TDAQ upgrade
  • the trigger system has three levels, each of which takes time to process events- and the technology is old by now. Designed for 10^34 luminosities
  • want to add FTK, Fast TracK finder. Will reconstruct tracks above 1 GeV/c at the beginning of level two trigger processing. This should help to reduce QCD jets
  • Also beginning work towards sLHC conditions (10^35) which involve both new discovery limits and precision measurements so still need open triggering, but more events to process
    • will also have to deal with increasing problems like pileup noise in calorimeter and fake rates
  • phase 1 Level 1 upgrades: bump accept rate from 75kHz to 100kHz (electronics limitations beyond this), try to improve latency, possibly allow more criteria
  • phase 2 Level 1 upgrades: algorithm improvements- finer granularity prompts a move of some level 2 rejection to level 1 and an extension of latency
  • There is an EoI (expression of interest) on a calorimeter trigger upgrade (includes algorithm and technology upgrades)
  • level 1 muon trigger- there are uncertainties in the cavern background (early data) and the trigger chambers themselves may need replacement at some point (this would also improve resolution)
  • DAQ/HLT: may introduce some partially switch-based ROSs to reduce bottlenecks (phase 1) and add new subdetector FEE to allow interface changes, also increasing HLT CPU power (phase two)
  • may need to increase bandwidth at some point and replace all ROBs, ROSs, ROLs, RODs (see the sketch after this list)
  • HLT: needs lots of CPU power, extra given to level 1, but then the others get less. Technology improvements should help
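
A back-of-the-envelope view of why bandwidth and the ROB/ROS/ROL/ROD replacements come up: readout throughput is roughly the Level 1 accept rate times the event size. The ~1.5 MB event size below is an assumed round number for illustration, not a figure quoted in the talk.

  # Readout throughput ~ Level 1 accept rate x event size.
  # The 1.5 MB event size is an assumption for illustration only.
  EVENT_SIZE_MB = 1.5
  for l1_rate_khz in (75, 100):
      throughput_gb_s = l1_rate_khz * 1e3 * EVENT_SIZE_MB / 1e3  # Hz * MB -> GB/s
      print(f"L1 accept rate {l1_rate_khz} kHz -> readout ~ {throughput_gb_s:.0f} GB/s")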

B-layer task force
  • working on upgrade to b-layer and PIXEL system
  • opening and closing detector adds two months. Also considering beam pipe replacement in case of an accident. Radiation is a concern in all of these activities
  • replacing b-layer currently estimated to take 1 year- not a short shutdown
  • b-layer ok until 2013, but not until 2016, when long shutdown for full inner detector replacement planned. Radiation dose ok at 2013 but not 2016
  • b-layer insert with new technology is favored by the task force: needs small radius beam pipe and stave development, engineering feasibility needs to be understood- but looks ok at the moment
  • will hopefully take less time than a full replacement
  • a two layer replacement with new technology thought of for sLHC if research and development continues as it is now. Final sLHC pixel detector should have a modular concept for ease in the future

B-layer plans from PIXEL group
  • Some repeat from task force. Also, more specific technical information in the talk
  • Looking at a buffer in the pixel system to help deal with event rate before reaching trigger
  • also considering a chip size change from FE-I3 to FE-I4 for the b-layer replacement and sLHC upgrade. May also use FE-I5 for the front end, and layer one in the sLHC

-- SarahHeim - 13 Jul 2008 -- JennyHolzbauer - 29 Jul 2008 -- JennyHolzbauer - 21 Aug 2008 -- JennyHolzbauer - 25 Aug 2008