ATLAS HLT/DAQ
Transcript
CSN1 Aprile 2006 — ATLAS HLT/DAQ
Valerio Vercesi (INFN Pavia), on behalf of all the people working on the project

Italian responsibilities:
- S. Falciano (Roma1): HLT Commissioning Coordinator
- A. Negri (Irvine, Pavia): Event Filter Dataflow Coordinator
- A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
- F. Parodi (Genova): PESA b-tagging Coordinator
- V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator

Italian activities:
- Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
- Level-2 muon trigger (Pisa, Roma1)
- Level-2 pixel trigger (Genova)
- Event Filter Dataflow (LNF, Pavia)
- Selection software steering (Genova)
- Event Filter muons (Lecce, Napoli, Pavia, Roma1)
- DAQ (LNF, Pavia, Roma1)
- DCS (Napoli, Roma1, Roma2)
- Monitoring (Cosenza, Napoli, Pavia, Pisa)
- Pre-series commissioning and exploitation (everybody)

ATLAS Trigger & DAQ
- LVL1: hardware based (FPGA, ASIC); calorimeter/muon information at coarse granularity; 40 MHz input, 2.5 μs latency, ~100 kHz accept rate; pipeline memories and Read-Out Drivers (RODs).
- LVL2 (High Level Trigger): software with specialised algorithms; uses the LVL1 Regions of Interest (RoIs); all sub-detectors at full granularity; emphasis on early rejection; ~10 ms latency, ~3 kHz accept rate; Read-Out Subsystems hosting the Read-Out Buffers (ROBs).
- Event Filter (High Level Trigger): offline-like algorithms, possibly seeded by the LVL2 result; works with the full event and full calibration/alignment information; ~1 s latency, ~200 Hz output; event-builder cluster and Event Filter farm; local storage ~300 MB/s.

TDAQ Networks and Processing
- Data of events accepted by the first-level trigger (UX15/USA15, Timing Trigger Control): event data pushed at ≤100 kHz as 1600 fragments of ~1 kByte each, from the ReadOut Drivers (RODs) over dedicated links into the Read-Out Subsystems (ROSs).
- The RoI Builder and LVL2 Supervisor pass the Regions of Interest to the LVL2 farm; the pROS stores the LVL2 output.
- Event data pulled: partial events at ≤100 kHz (LVL2), full events at ~3 kHz (Event Builder, via the SubFarm Inputs, SFIs, and the DataFlow Manager).
- Event Filter (EF) connected over Gigabit Ethernet; the SubFarm Outputs (SFOs) write to local storage at an event rate of ~200 Hz.
- Scale (dual/quad-CPU nodes in SDX1): ~1600 EF, ~100 Event Builder, ~500 LVL2, ~30 SFO nodes; ~150 ROS PCs.

Pre-series system in ATLAS Point-1
- 8 racks, corresponding to 10% of the final dataflow and 2% of the EF.
- Underground (USA15): one ROS rack (TC rack + horizontal cooling; 12 ROS, 48 ROBINs); one RoIB rack (TC rack + horizontal cooling; 50% of the RoIB).
- Surface (SDX1): one full LVL2 rack (TDAQ rack, 30 HLT PCs); partial Supervisor rack (3 HE PCs); partial EFIO rack (10 HE PCs: 6 SFI, 2 SFO, 2 DFM); one switch rack (128-port GEth for L2+EB); partial EF rack (12 HLT PCs); partial ONLINE rack (4 HLT PCs for monitoring, 2 LE PCs for control, 2 central file servers).
- ROS, L2, EFIO and EF racks: one Local File Server, one or more Local Switches.
- Machine park: dual Opteron and Xeon nodes, uniprocessor ROS nodes.
- Operating system: net-booted, diskless nodes running SLC3.

Commissioning and exploitation
- The pre-series is a fully functional, small-scale version of the complete HLT/DAQ, equivalent to a detector's "module 0".
- Pre-commissioning phase: validate the complete, integrated HLT/DAQ functionality; validate the infrastructure needed by HLT/DAQ at Point-1.
- Commissioning phase: validate a component (e.g. a ROS) or a deliverable (e.g. a Level-2 rack) prior to its installation and commissioning.
- TDAQ post-commissioning development system: validate new components (e.g. their functionality when integrated into a fully functional system); validate new software elements or software releases before moving them to the experiment.

Pre-series tests at Point 1
- Used an integrated software release (→ installation image) with offline release 10.0.6, Event Format version 2.4, TDAQ release 01-02-00, HLT release 02-01-01.
- First time the e/γ and μ selections were run in a combined menu with algorithms.
- 8 ROS emulators with preloaded data; data with Level-1 simulation: di-jets (17 GeV), single e (25 GeV), single μ (100 GeV).
- Dataflow applications with instrumentation → measure execution times, network access times and transferred data sizes.
- Used recently up to 20 Level-2 processors, each with up to 4 applications.
- muFast run at Point 1 on 1-20 L2PU nodes with 20k muon events (1-4 L2PU applications per node): factor 1.9 throughput improvement with respect to one application per node.
- [Plots: throughput (Hz) vs number of L2PU nodes for 1-4 L2PUs/node; LVL2 farm load balancing, LVL2 decision rate (Hz) vs CPU index for 15 L2PU nodes.]
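The trigger-level rates quoted above (40 MHz bunch-crossing rate → ~100 kHz at LVL1 → ~3 kHz at LVL2 → ~200 Hz at the EF, with ~300 MB/s to storage) imply specific per-level rejection factors and an event size. A quick illustrative sketch (not ATLAS software) of that arithmetic:

```python
# Illustrative sketch: rejection factors and storage bandwidth implied by
# the nominal trigger-level rates quoted on the slides.

LEVELS = [
    ("bunch crossings", 40_000_000.0),  # 40 MHz
    ("LVL1 accept", 100_000.0),         # ~100 kHz
    ("LVL2 accept", 3_000.0),           # ~3 kHz
    ("EF accept", 200.0),               # ~200 Hz
]

def rejection_chain(levels):
    """Return (name, output rate in Hz, rejection w.r.t. previous level)."""
    out, prev = [], None
    for name, rate in levels:
        rej = prev / rate if prev else 1.0
        out.append((name, rate, rej))
        prev = rate
    return out

for name, rate, rej in rejection_chain(LEVELS):
    print(f"{name:18s} {rate:12.0f} Hz   rejection x{rej:g}")

# Storage side: ~200 Hz written at ~300 MB/s implies ~1.5 MB per raw event.
event_size_mb = 300.0 / 200.0
print(f"implied raw event size ~ {event_size_mb} MB")
```

The product of the per-level rejections reproduces the overall factor of 200 000 between bunch crossings and stored events.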
Event Filter infrastructure
Main characteristics of the EF infrastructure software:
- Complete decoupling between the data flow (EFD) and the data processing (PTs) → safe handling of the data.
- Maximum exploitation of SMP architectures.
- Flexible, fully configurable design.
- [Diagram: SFI → EFD (Input, Sorting, ExtPTs, Trash, Output) with Processing Tasks PT#1…PT#b attached; outputs to SFO [debug], SFO [std] and SFO [calib]; event-builder network with SFIs/SFOs and a calibration node; storage ~300 MB/s.]

EF tests
- Checks and studies on the infrastructure: optimisation of the communication protocol between EF and SFI/SFO, improving performance for small (calibration) events and for remote farms; additional functionality added.
- Integration and validation of the selection algorithms: the algorithms are derived from the offline ones but run under different operating conditions, e.g. job options adapted to the online environment, and concurrent access to the DB.
- The muon slice has been integrated and validated; the other slices are being validated.
- Timing test: EF-only, 9 EFDs with 2 PTs each, TrigMoore algorithm, 1 MySQL server (CERN site). All 9 nodes connect to MySQL simultaneously; all 18 PTs make not 1 but 3 connections to CDI (3×18 = 54 — fast scaling). Measured times: 6.90±0.20 s geometry; 0.10±0.03 s AMDCsimRecAthena; 0.06±0.03 s magnetic field. DB caching was used.

HLT Core Software
- Work plan defined for the 2005 design review (https://uimon.cern.ch/twiki/bin/view/Atlas/HLTReviewsPage).
- HLT compliant with trigger operation: steering and sequencing of algorithms; integration with the most recent TDAQ software; cycling through the TDAQ state machine (start/stop/reinitialize/…); HLT trigger configuration from the database; use of the conditions DB in the HLT; integration with the online services for error reporting and system monitoring.
- Many of these issues have a direct impact on the selection algorithms → the functionality needs to be available early in the core software to give time to the algorithm developers.
- System performance optimisation → instrumentation for measuring network transfer times, data volumes and ROS access patterns (complementary to the work in the PESA group).
- For commissioning and readout tests: basic fault tolerance, stability.

Software Installation Image
- Originally developed for the Large Scale Test 2005: a consistent set of all the software needed to run a standalone HLT setup, in one file — setup/installation scripts, software repositories (TDAQ, TDAQ Common, HLT, Offline), project builds, example partitions/data files, test suites.
- Completely tested before deployment by PESA, HLT and DAQ specialists; used for the first exploitation of the pre-series.
- Useful for installations outside CERN and for new test-bed setups.
- P1 installation procedure presently being worked out; ~6.5 GByte of software → future images will be snapshots of the P1 installation.
- https://twiki.cern.ch/twiki/bin/view/Atlas/HltImage

Trigger Configuration Data Base
- LVL1 + HLT treated as an integrated system, serving the offline user, the shift crew and the expert (DB population scripts, compilers).
- TriggerTool: GUI for DB population; menu changes for experts (HLT and LVL1).
- TriggerDB: stores all the information needed to configure the trigger — LVL1 menu, HLT menu, HLT algorithm parameters, HLT release information; versions are identified with a key → Configuration and Conditions DB.
- Retrieval of information for running: the configuration read-only interface gets the information via a key, either as XML/JobOption files or as a direct DB read-out, for both online and offline running.
- http://indico.cern.ch/getFile.py/access?contribId=72&sessionId=2&resId=7&materialId=slides&confId=048
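The keyed-configuration idea above — every frozen trigger configuration stored under a single version key, retrieved identically online and offline — can be sketched as follows. The schema and names below are invented for illustration; only the key-based retrieval pattern follows the slide:

```python
# Hypothetical sketch (names invented) of keyed trigger-configuration
# retrieval: LVL1 menu, HLT menu, algorithm parameters and release are
# frozen together under one version key.

MOCK_TRIGGER_DB = {
    1042: {
        "lvl1_menu": ["MU6", "MU20", "EM25i"],
        "hlt_menu": ["mu6", "mu20", "e25i"],
        "algo_params": {"muFast": {"pt_threshold_gev": 6.0}},
        "hlt_release": "02-01-01",
    }
}

def fetch_configuration(key: int) -> dict:
    """Retrieve one frozen, versioned trigger configuration by its key."""
    try:
        return MOCK_TRIGGER_DB[key]
    except KeyError:
        raise KeyError(f"unknown configuration key {key}") from None

cfg = fetch_configuration(1042)
print(cfg["hlt_release"], cfg["lvl1_menu"])
```

The point of the single key is reproducibility: an offline analysis quoting the key retrieves exactly the menu and parameters the online system ran with.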
Global Monitoring Scheme
[Diagram: the ROD and ROS feed GNAM and the Event Builder; the Event Monitoring Service and Online Histogramming Service connect detector-specific Athena algorithms, the Athena monitoring analysis framework, the Gatherer, detector-specific plug-ins, event displays, OHP and the monitoring data storage.]

GNAM Monitoring
Principle: decouple the common actions from the monitoring algorithms and hide them.
- GNAM core (common actions, dataflow side): synchronisation with the DAQ; event sampling; decoding of the detector-independent part; publication and saving of the histograms; handling of commands (update, reset, rebin); tools for the algorithms (circular buffer, histogram flags, histogram metadata, …).
- Monitoring algorithms (user libraries, dynamic libraries loaded at run time): detector-dependent decoding; booking and filling of the histograms; handling of specific commands.
- Histograms are published to the Online Histogramming Service; events come from the Event Monitoring Service; presenter, viewer and checker applications attach downstream.

Online Histogram Presenter (OHP)
- Interactive presenter developed in close connection with the GNAM monitoring, but usable to display histograms published on the OHS by any producer.
- Designed for two modes: expert mode — a browser of all the histograms on the OHS; shifter mode — a presenter showing only predefined sets of histograms in configured tabs.
- Fully interactive with the GNAM core (commands to the core: rebinning, reset, …).
- Completely redesigned after the CTB experience to minimise network traffic and to scale to the whole of ATLAS.
- A very useful collaboration with Computer Science students has been established.
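The GNAM split described above — a core owning the common actions, detector libraries providing only decode/book/fill — can be sketched with a minimal plugin interface. All class and histogram names here are invented for illustration:

```python
# Illustrative sketch (invented names) of the GNAM core/user-library split:
# the core handles sampling, publication and commands; user libraries only
# decode, book and fill.

class UserLib:
    """Interface a detector-specific monitoring library must implement."""
    def book(self, publish): ...
    def fill(self, fragment): ...
    def on_command(self, cmd): ...

class MDTLib(UserLib):
    def book(self, publish):
        self.hits = []                      # stand-in for a booked histogram
        publish("MDT/hits", self.hits)      # publish to the histogram service
    def fill(self, fragment):
        self.hits.append(fragment["tdc"])   # detector-dependent decode + fill
    def on_command(self, cmd):
        if cmd == "reset":
            self.hits.clear()

class GnamCore:
    """Common actions: sampling loop, histogram publication, commands."""
    def __init__(self, libs):
        self.libs, self.published = libs, {}
        for lib in libs:
            lib.book(lambda name, h: self.published.__setitem__(name, h))
    def sample(self, event):                # event sampling (common)
        for lib in self.libs:
            lib.fill(event)
    def command(self, cmd):                 # e.g. update, reset, rebin
        for lib in self.libs:
            lib.on_command(cmd)

core = GnamCore([MDTLib()])
core.sample({"tdc": 417})
print(core.published["MDT/hits"])           # the filled "histogram"
core.command("reset")
```

Because the libraries are loaded at run time in the real system, a new detector only supplies its decode/book/fill code and inherits sampling, publication and command handling for free.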
Monitoring: commissioning
- An online detector monitoring/analysis/validation system based on GNAM has been developed: production of histograms displayed with the Online Histogram Presenter (OHP), plus an online event display (in collaboration with Saclay).
- In use for commissioning since September 2005.
- Under development: retrieval of the detector configuration from the DB; automatic checks and alarm generation.
- Used by Tile and MDT; interest expressed by other detectors.

ROD Crate DAQ
RCD is used as the interface towards the RODs for control, configuration, monitoring and data readout (via VME). RCD development has had essentially two phases:
- The ReadoutApplication (the application underlying the ROD Crate DAQ, the ROS and the data-driven Event Builder) was substantially modified to accommodate all the detector requests and to be ready with all the functionality needed for commissioning: standardised access to the Information Service and Online Histogramming; data access in response to interrupts; simplified construction of the classes for module control and acquisition; definition and implementation of a data-driven event builder; libraries for standardised handling of error conditions.
- Detector support for commissioning.
- New development was needed to guarantee, through a simple common interface to RAL/CORAL, that access to the configuration database is thread safe (initialization phase).

RCD activities
- Detector-specific part of the ROD Crate DAQ for MDT and RPC.
- Databases: cabling database (a lot of work!),
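The thread-safety requirement mentioned above (many clients reading the configuration database during initialization through one common interface) amounts to serializing the backend access. A minimal sketch of that idea, with invented names and a dict standing in for the real RAL/CORAL backend:

```python
# Minimal sketch (invented names): serializing concurrent configuration
# reads behind one lock, as a stand-in for a thread-safe common interface.

import threading

class ConfigDBAccess:
    """Serializes concurrent reads of a (mock) configuration source."""
    def __init__(self, backend):
        self._backend = backend
        self._lock = threading.Lock()
    def read(self, key):
        with self._lock:              # one reader at a time
            return self._backend[key]

db = ConfigDBAccess({"ros-01": {"robins": 4}})
results = []
threads = [threading.Thread(target=lambda: results.append(db.read("ros-01")))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results), results[0])       # all 8 threads read the same record
```

The real problem was concurrent initialization rather than steady-state reads, but the pattern is the same: the shared access path, not each caller, owns the lock.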
- configuration database, and the online and monitoring interfaces to these.

Detector Control System (DCS)
- All of the RPC DCS, and the HV and LV control of the MDTs, is Italian.
- Muon Sector 13: combined MDT-Tile runs triggered by scintillators; synchronisation studies.

MDT online calibration
- The precision required for the t0 and r-t autocalibration needs inclusive muon rates of 0.3-3 kHz: not suitable for the EF calibration streams.
- A different event building and streaming scheme is needed (under study); it is already possible using the LVL2 infrastructure with some modifications.
- [Diagram: L2PUs (~25 threads each, over TCP/IP, UDP, etc.) send ~480 kB/s to a local server; ~20 local servers feed a gatherer with a memory queue at ~9.6 MB/s, which dequeues to the calibration server, the calibration farm and disk.]

SDX1 — TDAQ Room at Point 1
- A total of 99 racks can be placed in SDX: 49 on the lower level (LVL2, EB, …) and 50 on the upper level (EF).

ROS Overview
- In total ~150 ROS PCs will have to be installed in USA15.
- Each ROS PC will be equipped with 3 or 4 ROBIN cards and one 4-port Gbit Ethernet NIC.
- [Diagram: the full readout chain from the RODs over dedicated links through the ROSs, RoI Builder, LVL2 supervisor and farm (~500 nodes), pROS, DataFlow Manager, Event Builder (~100 nodes), SFIs, EF (~1600 dual-CPU nodes) and SFOs (~30, local storage at ~200 Hz), over Gigabit and 10-Gigabit Ethernet.]

ROS Hardware Procurement
- ROS PCs: 1st batch (50 PCs) ordered and received; 2nd batch (60 PCs) ordered.
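The bandwidth figures in the MDT calibration scheme above are mutually consistent; a one-line cross-check of the aggregation from the local servers into the gatherer:

```python
# Cross-check of the calibration-stream bandwidths quoted on the slide:
# ~20 local servers at ~480 kB/s each should feed the gatherer at ~9.6 MB/s.

n_local_servers = 20
per_server_kb_s = 480.0
gatherer_mb_s = n_local_servers * per_server_kb_s / 1000.0
print(f"gatherer input ~ {gatherer_mb_s} MB/s")   # matches the ~9.6 MB/s quoted
```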
Delivery scheduled for May; the remaining ROS PCs + spares will be ordered later.
- ROBINs: German production (350 cards) ordered and received (~20 cards did not pass the production test and still need to be repaired); UK production (350 cards) ordered, delivery scheduled for March.
- 4-port NICs (Silicom): ordered, delivery scheduled for May.

Current status of ROS racks in USA15
- Racks Y.09-16.A2 through Y.04-16.A2 (Liquid Argon and TileCal) plus Y.09-14.A1 (control switch).
- ROS PCs installed and power & network cables in place in all racks; commissioning at the ROS level about 50% complete; ROD-ROS commissioning not yet started.

Physics and Event Selection Architecture
- PESA Core SW is responsible for the implementation of the Steering and Control (built around standard Athena components).
- PESA Algorithms develops HLT software using realistic data access and handling: specialised LVL2 and Event Filter algorithms adapted from offline ones, deployed in HLT testbeds.
- PESA Validation and Performance evaluates the algorithms on data samples to extract efficiency, rates, rejection factors and physics coverage.
- The structure stems from the original one, laid out in parallel with the organisation of the Combined Performance working groups, in "vertical slices" (LVL1+LVL2+EF): electrons and photons; muons; jets/taus/ETmiss; b-jet tagging; B-physics.

HLT Reconstruction Algorithms
HLT feature-extraction algorithms are available for each slice:
- Calorimeter: LVL2 and EF algorithms ready for e/γ; τ implementation ready at LVL2; an offline tool adapted to the EF is ready for JetCone.
- Muon: LVL2 and EF algorithms available for the barrel region; work has started on extending the LVL2 algorithm to the endcap; ID-to-muon track-matching tools are available at LVL2 and EF; muon isolation studies using the calorimeters are being performed.
- ID tracking: tracking with Si data ready at LVL2 and EF, with more approaches studied in parallel; tools available both for track extension to the TRT and for stand-alone TRT reconstruction; emphasis on providing a robust tool for commissioning and early running.

Selections: LVL2 μ
- Implemented curvature radius instead of sagitta: more suitable for the endcap, recovers efficiency in the barrel; the same algorithm is used across ±2.4 in η; new LUTs for the radius.
- Resolution slightly worse than in 10.0.3, but OK for standard sectors; 11.0.3 turn-on curves comparable with 10.0.3; worse efficiency in the feet region (special sectors).
- Endcap extension in progress.
- Combined reconstruction (μComb) with the ID: refining the μFast pT by means of ID data gives a sharper 6 GeV threshold.

LVL2 cosmic μ
- [Plots: Z-R (bending plane) and X-Y views of a cosmic muon track from the surface; straight-line extrapolation from y = +98.3 m; MDT hits (station centres in X-Y) and RPC hits (pairs of φ/η strips).]
- Monte Carlo sample: /castor/cern.ch/user/m/muonprod/cosmics/cosmics.dig.atlas-dc3-02._0004.pool.root
- The MDT and RPC hits are there and look fine; the conversion of RDOs to coordinates seems fine too. Next step: MuFast modifications.

Selections: EF μ
Studies on single-muon selections have been performed for two scenarios: a 6 GeV threshold at 10^33 cm^-2 s^-1 luminosity and 20 GeV at 10^34 cm^-2 s^-1. Cuts are defined so that a 95% efficiency is achieved at the threshold values.

Muon sources at L = 10^34 (rates for: no background, safety factor x1, safety factor x5):
  pi/K    54 Hz     54 Hz     48 Hz
  b       77 Hz     77 Hz     68 Hz
  c       30 Hz     30 Hz     26 Hz
  W       22 Hz     22 Hz     19 Hz
  t       negligible in all cases
  Total   ~185 Hz   ~190 Hz   ~180 Hz

- Lower values of the efficiency plateau and less sharp curves near the thresholds; more points are needed for a better curve definition.
- Layout Q (barrel only); MuId Combined used at the EF; the MuComb rate reduction at LVL2 is still to be included.
- Fake rates are expected to be ~1% (~12%) of the total rate for s.f. x1 (s.f. x5) with this threshold (seeded mode).

Selections: b-tagging
- Two classes of tagging variables can be used: track variables (xT) and collective (vertex) variables (xV). The weight of each RoI is computed using the likelihood-ratio method, where Ssig and Sbkg are the probability densities for signal (b-jets) and background.
- WT: transverse (d0/σd0) and longitudinal (z0) impact parameters; WV: secondary vertex energy and mass (statistical approach).
- Recent work to combine SimpleVertex (a 1-dimensional fit) and VKalVrt (an offline algorithm adapted to LVL2).
- [Plots compare: impact parameters only; impact parameters + probabilistic vertex; impact parameters + VKalVrt/SimpleVertex combined.]

Trigger-aware analysis
- Data taking → production → analysis: analyses use trigger information as a "preprocessor" to correctly evaluate efficiencies, physics reach, etc.
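The likelihood-ratio weight in the b-tagging slice above combines per-variable signal/background density ratios into one RoI discriminant. A toy sketch of that combination — the densities below are invented stand-ins, and mapping the ratio product W to X = W/(1+W) is a common convention, not necessarily the exact ATLAS one:

```python
# Toy sketch of the likelihood-ratio b-tag weight: per-variable ratios of
# signal/background densities are multiplied, then mapped into [0, 1].
# The densities here are invented; only the combination follows the slide.

def roi_weight(values, s_sig, s_bkg):
    """Product of per-variable likelihood ratios, mapped to [0, 1]."""
    w = 1.0
    for x in values:
        w *= s_sig(x) / s_bkg(x)
    return w / (1.0 + w)          # X = W / (1 + W); > 0.5 means b-like

# Toy densities in impact-parameter significance d0/sigma(d0): b-jets tend
# to larger values than light-quark background.
s_sig = lambda d0sig: 0.2 + 0.1 * min(d0sig, 5.0)
s_bkg = lambda d0sig: 0.7 - 0.1 * min(d0sig, 5.0)

print(roi_weight([3.0, 2.5], s_sig, s_bkg))
```

In the real selection the same scheme is applied separately to the track-variable weight WT and the vertex-variable weight WV described above.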
- The reconstructed objects used by the trigger are saved in the ESD/AOD file and can be used for comparison with truth/reconstructed information.
- It is possible to re-play the trigger decision by running the hypothesis algorithms on these objects; only the settings of the hypothesis algorithms can be changed in the analysis, so the effect of different threshold settings can be measured.

Trigger & Physics Weeks
- Aim: bring together trigger, detector performance and physics studies, and expose trigger issues and strategy to a broad ATLAS audience.
- Focus on the initial scenario: 10^31 cm^-2 s^-1, 200 Hz.

Ideas @ 10^31
Electrons/photons: sketch some pre-scale factors at the HLT; crude estimates "to guide the eye", keeping the total e/γ output rate constant. Photons not yet worked out: the assessment of both the di-photon thresholds and the high-pT single-photon one is to be revisited; photons are useful to obtain an unbiased jet sample.

  HLT item   Rate (Hz)   Pre-scale factor
  e10i       2           250
  e15i       5           40
  e20i       36          1
  2e10i      ~Hz         1

Muons: the absent (or very low) cavern background makes LVL1 commissioning "easier" (full shielding, 75 ns bunch spacing).

  Item        Rate (Hz)   Pre-scale factor
  LVL1 μ6     40          6
  HLT μ6      20          3
  LVL1 μ20    14          1
  LVL1 di-μ   3           1

Build menus allowing to:
- measure cross sections from (very) low pT — can go as low as ~5 GeV;
- check W, Z, J/ψ, Y, …;
- study ways to increase the trigger acceptance.

Accounting
- INFN contribution to the pre-series: 140 kCHF (ROS racks, monitoring, operations, switches, file server), completely spent by the end of 2005. For this and for the rest, VV receives a copy of all invoices.
- CORE 2005-2006 contributions:
- Online Computing System: 45+135 kCHF (monitoring, operations).
- Read-Out System: 275+275 kCHF (ROS racks). The CERN tender was completed with a considerable delay for the first tranche (50 ROS); the remaining part is being delivered (60 ROS in May). About 200 kCHF charged to INFN so far (through Roma1).
- LVL2 processors, Event Building, Event Filter processors: 65+50+170 kCHF. 45 kCHF sent to CERN in May 2005; four file servers already purchased; the detailed specifications are being finalised (above all for the HLT processors); a market survey made by CERN-IT may be usable; studies are also ongoing to evaluate the latest technologies (Moore's-law failures…).
- Infrastructure: 80 kCHF (cables, racks, cooling, …).

Cost Profile (kCHF)

                       2004   2005   2006   2007   2008   2009   Total
  Pre-series            140      0      0      0      0      0     140
  Detector R/O            0    275    275      0      0      0     550
  LVL2 Proc               0      0     65    195    230    160     650
  Event Builder           0      0     50     50    110     70     280
  Event Filter            0      0    170    180    570    380    1300
  Online                  0     45    135      0      0      0     180
  Infrastructure          0      0     80     80     20     20     200
  INFN Total            140    320    775    505    930    630    3300
  TDR Total            1048   3357   4087   4544   7522   4543   25101
  INFN Percentage (%)  13.4    9.5   19.0   11.1   12.4   13.9    13.1

INFN Milestones
- 30/06/2005 TDAQ — installation, test and use of the pre-series (~10% TDAQ slice): "fully" achieved in October, the delays having accumulated mainly on the procurement of components; proposal to mark it 100% and modify the matching date.
- 24/12/2005 TDAQ — installation and test of the Pixel, LAr, Tile and Muon ROSs (interfacing to the ROD crate and integration into the DAQ): strong dependence on the ROS delivery date (slow tender, etc.); no problem "in principle" — the work plan is clear and the pre-series experience is directly transferable; proposal to mark 50% at the foreseen date.
- 30/04/2006: completion of the tests on the pre-series and definition of the functionality to support TDAQ commissioning.
- 31/08/2006: commissioning of the detector ROS slices using the pre-series functionality (module-0 of the final system).
- 31/12/2006: integrated data taking of the detectors in the pit with cosmic rays.

Conclusions
- The TDAQ project is entering a phase of full maturity: make available to the detectors all the infrastructure needed for the cosmic runs, and prepare the complete commissioning of the system in preparation for the LHC start-up.
- The Italian contributions are clearly visible and well recognised at the level of the Collaboration: hardware integration, algorithmic developments, positions of responsibility, finances.
- The time available for TDAQ commissioning is very compressed; it is fundamental in order to guarantee the necessary data flow also at start-up.

Goal of Early Commissioning…
Prepare for Unexpected Events…

Spares

LVL2 tests

  Data file   Rate (Hz)   Latency (ms)   Proc. time (ms)   RoI coll. (ms)   DAQ fraction   Data rate (MB/s)   Data size (bytes)   #Reqs/event   Data/req (bytes)
  mu          293.1       3.41           2.78              0.62             0.19           0.084              287                 1.3           223
  jet         280.3       3.57           3.26              0.28             0.09           0.781              2785                1.2           2283
  e           58.2        17.18          15.48             1.66             0.10           0.921              15820               7.4           2147

- [Plot: fraction of events passing LVL2 as a function of the decision latency (ms), for the mu, jet, prefiltered jet and e samples.]

Example: Data Base Schema
- Keys are stored in the CondDB to retrieve the information (online and offline) for the LVL1 and HLT parts: algorithms, trigger menu, jobOptions, software release.
- An early prototype of the HLT part has already been run on a 6-node system with the muon selection algorithm.

Routing μ calibration data
[Diagram only.]

Selections: e/γ
- Rate and efficiency studies performed for the main physics triggers: e25i, 2e15i, e60, γ60, 2γ20i. Results for 11.0.4 are perfectly in agreement with the Rome results.
- Tools have been developed to optimise the selections. In the future, results will be provided as efficiency-vs-rejection curves, to provide a continuous set of working points: essential for trigger bandwidth optimisation.

  Step       Eff (%)   Rate
  L1         95.5      4.7 kHz
  L2 Calo    94.9      890 Hz
  L2 ID      91.0      280 Hz
  L2 Match   89.7      98 Hz
  EF Calo    87.6      65 Hz
  EF ID      81.8      35 Hz
  EF Match   81.0      35 Hz

- Cluster composition: W→eν 21%; Z→ee 5%; direct photons or quark bremsstrahlung 5%; e from b, c decays 37%; rest 32%.

Jets/Taus/ETmiss
- The LVL2 calorimeter algorithm for taus was recently separated from egamma; performance studies are ongoing for the selection strategies on the variables.
- At present only the EM calibration is applied to the cluster energies: a tau calibration is needed (also for the EF — H1 style, as in the offline mode?).
- A first implementation of the EF "seeded" TrigTauRec is already working, making use of offline tools. Once the selection strategies are defined, trigger-aware physics analyses (studying the effect of the hadronic tau trigger) can be performed.
- For ETmiss, three different data-preparation strategies are being considered: read out the calorimeter and unpack the cells (the unpacking time may dominate); read out the calorimeter and take the Ex/Ey calculated in the ROD (faster, but what about the resolution?); read out the TriggerTowers from the LVL1 Preprocessor.
- Ongoing work to define and study a general strategy for pre-scales, in particular for jet objects.

Jet triggers and prescales
[Plots only.]

RoI-based B-physics
- Aim: use the calorimeter to identify regions of the event containing B-decay products: EM RoIs for e and γ, jet RoIs for hadronic B decays.
- Keep the multiplicity low (1-2), to minimise data transfer and CPU whilst maximising the efficiency for events used in the physics studies.
- The effect of different thresholds (EM & HAD) and of the jet RoI size on this multiplicity was studied using Rome data (1x10^33) with the new TTL LVL1 simulation and pile-up.
- The requirement on the multiplicity implies an ET threshold of ~2 GeV for the LVL1 EM RoI.
- [Plot: mean LVL1 EM RoI multiplicity vs ET cut for B→μX events, for em=500/750/1000 MeV tower thresholds (default: em=500, had=750, tower threshold 500 MeV).]

2006 PESA Milestones
- LVL1/HLT AODs fully available in Rel 12 for trigger-aware analyses — Apr 06 (T&P Week). Very preliminary AOD information available in Rel 11; a detailed description of the Rel 12 deliverables has been prepared by Simon.
- HLT algorithm reviews complete — Jun 06 (T&P Week). A detailed review of the ID LVL2 algorithms has already taken place, with a focus on system performance and implementation; results fed back into Rel 13.
- Online tests of the selection slices with preloaded mixed files and large menus — Sep 06. First production version of the trigger configuration.
- Selection software ready for the cosmic run — Oct 06. Already in PPT; the meaning needs to be refined.
- Blind test of the HLT selection — Dec 06 (T&P Week). In discussion with physics coordination: take a sample of representative events from the initial ATLAS output and run the full menu.

PESA Planning
- Several interactions with the PESA slice coordinators and with the algorithm developers, trying to bring together something that helps reinforce the content of the proposed milestones and monitor the development process. Only the first iteration has been gone through until now.
- The work is always described in a "task-oriented" fashion, to help identify weak areas as well as to facilitate job assignment.
- An attempt is being made to build a full PESA planning (Excel) starting from this information, to monitor progress and allow for updates, suggestions and improvements; clearly there is more detail on the near-future objectives than on the far-away ones.
- http://agenda.cern.ch/askArchive.php?base=agenda&categ=a057236&id=a057236s1t0/schedule

PESA Planning (extract)

  Task                                                         Comments                                                  Expected        PPT workpackage
  Definition of EDM                                            Done?                                                     Dec 05
  e/gamma implementation in common framework                   RTT, ESD, Root Analysis Framework                         February 2006   DH-W101
  Develop tools for automatic optimisation of e/gamma
  selections                                                   scanning of parameter space; minuit fitting, neural net
                                                               and multi-variate methods being developed                 March 2006      DH-W101
  Check trigger selection w.r.t. offline selection for
  electrons/photons                                            need new evaluations from offline groups                  March 2006      DH-W101
  Establish set of pre-scaled e-triggers using Rome datasets   photons as well                                           February 2006   DH-W101
  First evaluation of trigger efficiencies from data           for electrons, photons and muons                          March 2006      DH-W101
  Strategies for ETmiss calculations                                                                                     March 2006      DH-W110
  Revised Steering Configuration                                                                                         February 2006
  Prototype LVL2 hypothesis algorithm for all slices           examples to be further developed in validation            February 2006
  Provide documentation and examples to physics community      for all selections                                        March 2006      DH-W147

  Milestone, April 2006: LVL1/HLT AODs completely available in version 12 for trigger-aware analyses.
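The pre-scale factors in the "Ideas @ 10^31" menu above have a simple operational meaning: a pre-scale of N keeps every N-th accepted event, so the output rate is the input rate divided by N. A minimal sketch, using the electron-table factors from the slide:

```python
# Sketch of the prescale mechanism: a prescale of N keeps every N-th
# accepted event, so the output rate is input rate / N. The factors below
# are the electron-menu values from the "Ideas @ 10^31" slide.

class Prescaler:
    def __init__(self, factor):
        self.factor, self.count = factor, 0
    def accept(self):
        self.count += 1
        return self.count % self.factor == 0   # keep every N-th event

menu = {"e10i": 250, "e15i": 40, "e20i": 1}    # prescale factors from the slide
ps = Prescaler(menu["e15i"])
kept = sum(ps.accept() for _ in range(4000))
print(kept)   # 4000 accepts / prescale 40 -> 100 kept
```

This is what "keeping the total e/γ output rate constant" means in practice: as the low-threshold rates grow, their prescale factors are raised so the summed output stays within the bandwidth budget.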