Common Terms

This is a collection of some of the common terms and definitions that I have come across. There are some references to the ATLAS TDR (Technical Design Report), which is where most of the detector design information came from. Another useful source of information, at least for ATLAS acronyms, is http://twiki.hep.manchester.ac.uk/bin/view/Atlas/AtlasAcRonymGlossary#AcronymsT. Please update if something is incorrect.

Particles

Particle - One particle type from an event.

Electron - Leptonic particle detected in the calorimeters and the inner detector. Electrons produce an electromagnetic shower as they pass through material, which is typically shorter and narrower than the hadronic showers produced by jets.

Muon - Leptonic particle that differs from electrons only in mass (which is larger). They are detected in all parts of the detector, including, notably, the Muon Spectrometer. Muons have the longest tracks and are usually the only particles (other than the hard-to-detect neutrinos) that make it to the outer muon detection system.

Taus - The heaviest of the three charged leptons; they decay quickly relative to the other two. Taus decay in two ways, hadronically and leptonically. We see the leptonic decay products as muons and electrons, so we don't really see these taus directly. We do see the hadronically decaying taus, in the form of jets, which the reconstruction software picks out from the jets that originated as quarks.

Jet - A jet is a group of collimated colorless particles formed through hadronization of colored partons produced in the collision. It is often visualized as an ever-widening decay chain forming a cone (the "jet"), with the parton at the tip and the base being what the detector sees. The relation between partons and jets allows hadron-level information to be determined from the properties of the jets.

MET - Missing Transverse Energy. This is the energy missing at the end of the reaction, which has probably gone into a neutrino, which we don't detect directly with ATLAS. Instead, we reconstruct MET by taking the vector sum of the energy deposits in each of our calorimeter cells, and then we correct that for the presence of electrons, muons, and jets.
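
The vector-sum idea can be sketched as a toy (this is not the ATLAS reconstruction; the list of (Et, phi) cell deposits is a made-up input format, and the object-level corrections are omitted):

```python
import math

def missing_et(cells):
    """Toy missing-ET sketch: cells is a list of (Et, phi) calorimeter
    deposits; MET is minus the vector sum of their transverse components."""
    sum_x = sum(et * math.cos(phi) for et, phi in cells)
    sum_y = sum(et * math.sin(phi) for et, phi in cells)
    met_x, met_y = -sum_x, -sum_y
    # magnitude and direction of the missing transverse energy
    return math.hypot(met_x, met_y), math.atan2(met_y, met_x)

# A single unbalanced 50 GeV deposit gives 50 GeV of MET pointing
# opposite to it; back-to-back deposits of equal Et would cancel.
met, phi = missing_et([(50.0, 0.0)])
```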

Analysis Terms, Different Versions of Particles

MC - Monte Carlo, a technique that models a process using weighted distributions of random numbers, where the weights are determined by the process being modeled. This does not necessarily have anything to do with a computer. Note that we have both MC simulations and NLO calculations: both are computer generated and simulate the data, but the difference is that the MC is based on a random number generator.
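
As a toy illustration of the random-number side of MC (not ATLAS code), accept-reject sampling draws random numbers and keeps them with a probability set by the distribution being modeled:

```python
import random

def accept_reject(pdf, x_min, x_max, pdf_max, n):
    """Toy Monte Carlo: draw x uniformly, keep it with probability
    proportional to pdf(x). The kept sample is distributed like pdf."""
    kept = []
    while len(kept) < n:
        x = random.uniform(x_min, x_max)
        if random.uniform(0.0, pdf_max) < pdf(x):
            kept.append(x)
    return kept

random.seed(0)
# linearly rising pdf on [0, 1]; the sample mean should come out near 2/3
sample = accept_reject(lambda x: x, 0.0, 1.0, 1.0, 1000)
```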

RECO (reconstruction) - any MC or data samples that have gone through the full detector simulation. After this and the trigger simulation, real data and MC should look very similar.

Triggers - These are sets of hardware and software that decide whether an event that has gone through the detector should be kept and written to tape. For MC samples the trigger is simulated.

L1 Trigger - The first trigger level, which does a coarse reconstruction of interesting objects and makes a fast decision.

L2 Trigger - The second trigger level, which looks only at the regions selected by L1 and then refines the particle position and momentum measurement.

EF Trigger - the Event Filter, which is the final level of trigger object reconstruction.

Tagged Jets - These are jets that the reconstruction software believes originated from b quarks. They are tagged based on the presence of a secondary decay vertex or tracks with a large impact parameter relative to the primary vertex.

UnTagged Jets - These are jets that the reconstruction software believes originated from c, s, u, or d quarks; that is, all jets that are not b-tagged.

Isolated Muon - This is a muon that is believed to not be within a jet and so is most likely from a W decay.

UnIsolated Muon - This is a muon that is believed to be within a jet and probably is part of the decay chain, typically from a heavy quark (b, c, or s) decay.

Event - Everything recorded in the detector for one proton-proton collision, including the trigger information and reconstructed objects. A MC event also includes information about the whole decay chain starting with the colliding quarks to the final decay products detected by the detector.

Common Detector Information, Variables

Eta - Pseudorapidity, an angular coordinate in the detector, based on the rapidity (y), which is a function of the polar angle theta. The rapidity is fairly uniform for inelastic collisions, and differences in y are Lorentz invariant. Pseudorapidity equals the rapidity in the limit of massless particles.

y = 1/2 * ln( (E + p_z) / (E - p_z) )

eta = - ln( tan(theta/2) )

For reference, a particle with eta = 0 will head straight up through the center of the detector. A particle with eta = +/- 4 or so will head in a direction more closely parallel to the beam. The particles may have an eta of up to +/-5, but the particles, particularly leptons, that reach the reconstructed set often have a more restricted range.
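
Both definitions, with eta = -ln(tan(theta/2)), can be checked in a few lines of Python; for a massless particle (E = |p|) the two agree exactly:

```python
import math

def rapidity(E, pz):
    # y = 1/2 * ln((E + pz) / (E - pz))
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudorapidity(theta):
    # eta = -ln(tan(theta/2)); eta = 0 at theta = pi/2 (straight up)
    return -math.log(math.tan(theta / 2.0))

# For a massless particle E = |p|, rapidity equals pseudorapidity exactly:
theta = 0.4
p = 10.0
pz = p * math.cos(theta)
# rapidity(p, pz) and pseudorapidity(theta) give the same number here
```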

Phi - the azimuthal angle, measured in the plane perpendicular to the beam direction.

Z - the distance out from the center of the detector along the beam direction.

Pt - the transverse momentum

Et - the transverse energy

DeltaR - the square root of (eta1 - eta2)^2 + (phi1 - phi2)^2. It measures how far apart two particles are in eta-phi space.

TRF: Tag Rate Function, the probability that a random event will be b-tagged (http://www.atlas.uni-wuppertal.de/paper/fermilab-conf-04-403-e.pdf)

Detector Design

NOTE: Much of the following information is from the Technical Design Report (TDR, 1997). Some of this information may have changed since the report was written.

ATLAS Detector: made of an inner detector, calorimeters, muon spectrometer, and several magnets. Data is collected by different systems, rejected/accepted by triggers, and then sent out for final analysis.

Muon Spectrometer: Detection system for muon position and Pt. This is the last detector that muons pass through. Since most particles, other than neutrinos which interact rarely, don't make it this far, the hits in this detector should be basically all muons. Particles are bent by the barrel toroid for eta < 1, and the endcap magnets for 1.4 < eta < 2.7. There is also the Transition Region from 1 < eta < 1.4, where the magnetic field is a combination of the barrel and endcap fields (TDR 17). The Muon Spectrometer includes the following parts:

CSC: Cathode Strip Chambers

MDT: Monitored Drift Tubes

RPC: Resistive Plate Chambers

TGC: Thin Gap Chambers

MDTs are used over most of the eta region. CSCs, with their higher granularity, are used at large eta and closer to the interaction point (so at the innermost plane, 2 < eta < 2.7). The trigger system covers eta <= 2.4 and includes the RPCs and TGCs. RPCs are used in the barrel, and TGCs are used in the endcap. More precisely, RPCs are used for eta < 1.05 and TGCs are used for 1.05 < eta < 2.4 (TDR 183). Also see figure 1-1 on TDR page 18 for the visual layout. There is a gap in coverage at eta = 0 for service cables and such for the inner detector (ID), central solenoid (CS), and calorimeters (TDR 17, 19).

naming: The first letter indicates the general region. For example, EMS, EML for MDT chambers (E for endcap) and TM1, TM2, etc. for the TGCs in the three trigger planes (TDR 177). There is a gap at eta = 0 of 300 mm in the BIL, BML, BMS, and BOL chambers (B for barrel) (TDR 178).

EM Calorimeter:

This is the last detector that electrons usually pass through, and the magnetic field curves these charged particles, making their tracks easier to see. It uses highly granular liquid argon (LAr) for eta < 3.2, with the barrel covering eta < 1.5 and the endcap calorimeter the rest. It has a barrel (eta < 1.475), an outer endcap wheel (1.375 < eta < 2.5), and an inner endcap wheel (2.5 < eta < 3.2). There is also a lot of material at 1.37 < eta < 1.52, which sharply reduces the resolution there.

Forward Calorimeter (FCAL): This is used to assist in the detection of particles in the forward part of the detector. It is at 3.1 < eta < 4.9, and uses LAr (liquid argon), because it is radiation hard (TDR 12)

Hadronic Calorimeter: This is the last detector that jets usually pass through. It uses LAr (liquid argon) for the endcaps and forward calorimeter, but is mostly made of the scintillator tile calorimeter, which covers the main barrel and two smaller barrels on each side (TDR 4). Tile is used in the barrel (eta < 1) and extended barrels (0.8 < eta < 1.7), and LAr is used in the endcaps (1.5 < eta < 3.2) (TDR 13).

Magnets: These are used to bend the tracks of charged particles, to make them easier to identify. The CS (central solenoid) serves the inner detector; air-core toroids serve the muon spectrometer; the endcap toroids (ECT) are inserted in the barrel toroid (BT), and they contribute to the middle (eta ~ 0) and edges (eta ~ 2 or 3).

Inner Detector (ID): This is the first chance to see particles, and it is as close to the beam as possible so that we can reconstruct tracks better, particularly for taus and jets. It uses silicon microstrips (SCT) over 1.4 to 2.5 in eta and pixels (1.4 to 2.5 in eta) in layers, as well as the straw tube tracker (TRT) from 0.07 to 2.5 in eta. The pixel detector is as close to the interaction point as possible, the SCT is at an intermediate radius, and the TRT is the outer part of the inner detector(?)

Pixel B-Layer: innermost layer of the Pixel Detector barrel region

Multivariate Analysis Terms (and related items and acronyms)

resources used in creating this section: http://www-group.slac.stanford.edu/sluo/Lectures/Stat2006_Lectures.html, http://www.autonlab.org/tutorials/index.html, and http://www.hep.caltech.edu/%7Enarsky/spr.html, as well as the instructions accompanying the SPR and TMVA programs.

Classifier: This just refers to a particular analysis technique/type

MVA: MultiVariate Analysis: statistical tools to extract a low-level signal (in particular) from background. These can involve many dimensions, so they can be more complex than a simple cut-based analysis, and will likely also take more time to run.

PDE: Probability Density Estimator

FDA: Function Discriminant Analysis

FOM: Figure of Merit - this indicates how well the classifier makes the data and the function we are fitting to it agree. The smaller this number is, the better the agreement.

Overfitting: This is when we start fitting the training data too well. In this case, the validation sample will not be fit as well, and our error will be higher in this sample. The goal should be to train a parameter to the point that the errors on training and validation samples are the same.

Fisher: related to the log ratio of signal and background densities. This generally performs better if the background and signal centers are not located at the same (or a similar) point (for instance, not having the peaks of both Gaussians at zero), since this method tries to maximize the distance between the means while minimizing the widths of the signal and background distributions.
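
The means-versus-widths trade-off can be made concrete with a toy one-dimensional Fisher criterion (a sketch, not the TMVA/SPR implementation):

```python
def fisher_separation(signal, background):
    """Toy 1-D Fisher criterion: (difference of means)^2 over the sum of
    variances. Larger values mean easier linear separation; if the means
    coincide, the criterion is zero no matter how the widths differ,
    which is why Fisher needs the centers to be apart."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(signal) - mean(background)) ** 2 / (var(signal) + var(background))

well_separated = fisher_separation([4.0, 5.0, 6.0], [0.0, 1.0, 2.0])
overlapping    = fisher_separation([0.0, 1.0, 2.0], [0.5, 1.0, 1.5])
# well_separated comes out large; overlapping (same means) is zero
```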

LDA: Linear Discriminant Analysis - like Fisher, but assumes a multivariate Gaussian as the underlying distribution

LogitR: similar to LDA, but makes no assumptions about the underlying function; it just maximizes the likelihood

Neural Network: put some events into the network, compute the classification output, adjust the free model parameters, repeat.

ANN: Artificial Neural Network

Decision Tree: data nodes are split by cuts until some stopping criterion is reached

Stopping Criterion: maximum number of nodes in the tree is reached or minimum number of events per node is reached

Splitting Criterion: purity, Gini index, cross entropy

Purity: p, probability that an event is correctly classified (related to Bayes Theorem)

Gini index: -2p(1-p)

Cross entropy: (p)log(p) + (1-p)log(1-p)
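
The three splitting criteria above can be written out directly; the signs follow this glossary's convention (textbooks often quote Gini as p(1-p) and cross entropy with the opposite sign):

```python
import math

def purity(n_signal, n_background):
    # p: fraction of events in a node that are signal
    return n_signal / (n_signal + n_background)

def gini(p):
    # -2p(1-p), as written above
    return -2.0 * p * (1.0 - p)

def cross_entropy(p):
    # p*log(p) + (1-p)*log(1-p), with the convention 0*log(0) = 0
    def plogp(x):
        return x * math.log(x) if x > 0 else 0.0
    return plogp(p) + plogp(1.0 - p)

# With these signs, both criteria are minimized at p = 0.5 (a maximally
# mixed node) and rise to 0 for a pure node (p = 0 or p = 1), so a split
# that increases purity increases the criterion.
```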

Significance Level: Probability of rejecting the signal from your sample when there actually is signal

ROC: Receiver Operating Characteristic - this is just a curve of (1 - background efficiency) vs. signal efficiency. This efficiency is related to how well the classifier separates the signal and the background.
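
A minimal sketch of how such a curve is built from classifier output scores (made-up scores, not any particular classifier):

```python
def roc_points(signal_scores, background_scores, thresholds):
    """For each cut threshold: signal efficiency = fraction of signal
    passing the cut, background rejection = 1 - fraction of background
    passing. Scanning the threshold traces out the ROC curve."""
    points = []
    for t in thresholds:
        sig_eff = sum(s > t for s in signal_scores) / len(signal_scores)
        bkg_rej = 1.0 - sum(b > t for b in background_scores) / len(background_scores)
        points.append((sig_eff, bkg_rej))
    return points

sig = [0.9, 0.8, 0.7, 0.4]   # hypothetical classifier outputs for signal
bkg = [0.6, 0.3, 0.2, 0.1]   # and for background
curve = roc_points(sig, bkg, [0.5])  # one point: (0.75, 0.75)
```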

Boosting: combine a set of classifiers to get a new classifier that is more stable than the others. This can be applied to any classifier, but is typically applied to Decision Trees, to reduce their sensitivity to statistical fluctuation.

AdaBoost: Adaptive Boosting. Re-weights the training events, giving larger weight to events that were misclassified (to pay more attention to them in the next round), and then combines all the weighted weak classifiers to form a strong classifier.
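
One round of the standard AdaBoost re-weighting can be sketched as follows (a toy, assuming a weak classifier has already labeled each event right or wrong):

```python
import math

def adaboost_update(weights, correct):
    """One AdaBoost round: given current event weights and a boolean per
    event saying whether the weak classifier got it right, compute the
    classifier's vote (alpha) and re-weight the events so misclassified
    ones count more in the next round."""
    err = sum(w for w, ok in zip(weights, correct) if not ok) / sum(weights)
    alpha = 0.5 * math.log((1.0 - err) / err)   # weight in the final vote
    new_w = [w * math.exp(-alpha if ok else alpha)
             for w, ok in zip(weights, correct)]
    norm = sum(new_w)
    return alpha, [w / norm for w in new_w]

# Four equal-weight events, one misclassified: after the update the
# misclassified event carries half the total weight.
alpha, w = adaboost_update([0.25] * 4, [True, True, True, False])
```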

Bagging: randomly choose events to form a subset and train the classifier. Repeat many times and combine- the result is "smoothed out", somewhat like boosting.

arc-x4: a simpler algorithm, based on arc-fs. It derives its heritage from boosting, and is usually referred to as a boosting algorithm. However, it is different. Performance-wise, it tends to do better than bagging at reducing variance (see [[http://citeseer.ist.psu.edu/rd/72332607%2C84089%2C1%2C0.25%2CDownload/http://citeseer.ist.psu.edu/cache/papers/cs/958/ftp:zSzzSzftp.stat.berkeley.eduzSzuserszSzbreimanzSzarcall.pdf/breiman98arcing.pdf][Breiman paper]] for more information)

SVM: Support Vector Machine. This basically takes data and signal separated by a nonlinear boundary and transforms them into a higher-dimensional space where a linear boundary can work

Trigger and Software Terms

trigger slice -

Kolmogorov Test (Kolmogorov-Smirnov test) - comparison of two probability distributions
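
The two-sample KS statistic itself is simple to write down (a sketch; real analyses would use a library routine and also compute the p-value):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    points = sorted(set(sample_a) | set(sample_b))
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in points)

same     = ks_statistic([1, 2, 3], [1, 2, 3])     # 0.0: identical samples
disjoint = ks_statistic([1, 2, 3], [10, 20, 30])  # 1.0: no overlap at all
```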

pool file -

weak modes - global detector deformations to which a track-based chi-squared alignment has limited sensitivity

underlying event - beam-beam remnants plus initial and final state radiation

template method - only the skeleton of an algorithm is defined, some steps are done by subclasses, which themselves can redefine certain steps of the algorithm

Data flow:

-- JennyHolzbauer - 16 Jun 2008 -- JennyHolzbauer - 05 Jun 2008 -- JennyHolzbauer - 18 Sep 2007
Topic revision: r23 - 16 Oct 2009, TomRockwell
 
