High Energy Physics @ Fermilab
Two physics detectors (5 stories tall each) to understand the smallest scales of matter
Each experiment has ~500 people doing science
Each experiment handles millions of particle collisions per second - a HUGE amount of data!
Data volume and analysis rates
Total data stored: 4.3 petabytes
Analysis rate: >1 petabyte/month
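As a rough sanity check (arithmetic only, not a figure from the slides), that analysis rate translates into a substantial sustained throughput:

```python
# Back-of-the-envelope: ">1 PB/month" expressed as a sustained transfer rate,
# assuming decimal petabytes and a 30-day month.
PB = 1e15                   # bytes per petabyte
month_s = 30 * 24 * 3600    # seconds in a 30-day month

rate = PB / month_s
print(f"1 PB/month ~= {rate / 1e6:.0f} MB/s sustained")  # -> ~386 MB/s, around the clock
```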
The data challenge
Use resources at participating institutions
Ship and analyze data around the world!
Get the data fast, process it, and immediately store the results back at Fermilab
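A minimal sketch of that fetch, process, and store-back loop; every function body here is a hypothetical stand-in for the real grid transfer and reconstruction tools, shown only to make the data flow concrete.

```python
# Hypothetical sketch of the remote-processing loop described above.
# fetch_file, reconstruct, and store_at_fermilab are placeholders,
# not the experiments' actual tools.

def fetch_file(file_id: str) -> bytes:
    # placeholder: would pull a raw-data file from Fermilab mass storage
    return b"raw detector data for " + file_id.encode()

def reconstruct(raw: bytes) -> bytes:
    # placeholder: the CPU-heavy step, run at the remote site
    return raw.upper()

def store_at_fermilab(file_id: str, results: bytes) -> None:
    # placeholder: derived data is shipped straight back to Fermilab
    print(f"stored {len(results)} bytes of results for {file_id}")

def process_remotely(file_ids: list[str]) -> None:
    for fid in file_ids:
        store_at_fermilab(fid, reconstruct(fetch_file(fid)))

process_remotely(["run1234_part001", "run1234_part002"])
```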
Strategy
Common business model for data cataloguing, tracking, and mining.
Streamlined support of the underlying machinery
Shared expertise solves issues at the user level.
Follow grid standards, use grid middleware and shared resources (OSG and LCG grids).
Contribute to grid projects: OSG resource selection, SRM, security…
How we handle data
Sequential Access via Metadata (SAM)
Data storage, directly from the detector or from remote data processing facilities
Data cataloguing and mining
Distributed resources management to optimize usage and data throughput, and to enforce the policies of the experiments
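To make the metadata-driven idea concrete, here is a toy catalog query in the spirit of SAM; the schema and the define_dataset helper are invented for illustration and are not SAM's actual interface.

```python
# Toy metadata catalog: files are described by metadata, and datasets
# are defined as queries over that metadata (the core SAM idea).
catalog = [
    {"file": "d0_run100_raw.dat",  "run": 100, "tier": "raw",  "site": "FNAL"},
    {"file": "d0_run100_reco.dat", "run": 100, "tier": "reco", "site": "IN2P3"},
    {"file": "d0_run101_raw.dat",  "run": 101, "tier": "raw",  "site": "FNAL"},
]

def define_dataset(**criteria):
    """Select files whose metadata matches all given key/value criteria."""
    return [f["file"] for f in catalog
            if all(f.get(k) == v for k, v in criteria.items())]

print(define_dataset(run=100, tier="raw"))   # -> ['d0_run100_raw.dat']
```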
Use of a variety of storage service providers: dCache, Enstore, HPSS, SRM, in-house disk resources
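One way to picture that variety of providers is a single lookup interface over interchangeable back ends; the class names and URLs below are illustrative only, not the real dCache or Enstore APIs.

```python
# Sketch of hiding several storage back ends behind one interface,
# so the data-handling layer can pick a provider per file.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def url_for(self, file_id: str) -> str: ...

class DCacheBackend(StorageBackend):
    def url_for(self, file_id: str) -> str:
        # hypothetical SRM-style URL for a dCache-managed file
        return f"srm://dcache.example.org/pnfs/{file_id}"

class LocalDiskBackend(StorageBackend):
    def url_for(self, file_id: str) -> str:
        return f"file:///data/{file_id}"

def locate(file_id: str, backend: StorageBackend) -> str:
    # Callers never need to know which provider holds the file.
    return backend.url_for(file_id)

print(locate("d0_run100_raw.dat", DCacheBackend()))
```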
Success story: D0 refixing
Problem: correct a processing mistake within 6 weeks; 85 TB of data and 4 million hours of 1 GHz CPU time
Plenty of network bandwidth, but no free CPU at Fermilab to do the job (see the CPU arithmetic after the site list)
Solution: involve CPU resources elsewhere
Ship detector data directly to the analysis
Cache re-usable data near the computing sites
Data distributed across sites: Fermilab 40 TB, OSG grid 20 TB, WestGrid 10 TB, IN2P3 10 TB, LCG grid 5 TB
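The slide's own numbers explain why the job could not stay at Fermilab; a quick calculation (assuming the six-week deadline is wall-clock time) shows the implied concurrency:

```python
# How many 1 GHz CPUs must run nonstop to deliver 4M CPU hours in 6 weeks?
cpu_hours = 4_000_000       # CPU hours needed (from the slide)
deadline_h = 6 * 7 * 24     # six weeks = 1008 wall-clock hours

print(f"{cpu_hours / deadline_h:.0f} CPUs needed around the clock")
# -> ~3968 concurrent 1 GHz CPUs for the full six weeks
```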
Conclusion
Store a petabyte a year
Process a petabyte a month
Computing that meets the growing demands of the HEP experiments, one step ahead of the physics needs