Managing Petabyte-Scale Storage for the ATLAS Tier-1 Centre at TRIUMF
Deatrich, D., Liu, S., Payne, C., Tafirout, R., Walker, R., Wong, A. and Vetterli, M. (2008) Managing Petabyte-Scale Storage for the ATLAS Tier-1 Centre at TRIUMF. In: 22nd International Symposium on High Performance Computing Systems and Applications (HPCS 2008), 9-11 June 2008, Quebec City, Canada, pp. 167-171.
The ATLAS experiment at the Large Hadron Collider (LHC), located in Geneva, will collect 3 to 4 petabytes or PB (10^15 bytes) of data for each year of its operation, when fully commissioned. Secondary data sets produced by event reconstruction, reprocessing and calibration will add another 2.5 PB for each year of data taking. Simulated data sets also require significant resources, approaching 1 PB per year. The data will be distributed worldwide to ten Tier-1 computing centres within the Worldwide LHC Computing Grid (WLCG), which will operate around the clock. One of these centres is hosted at TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, located in Vancouver, BC. By the year 2010, the storage capacity at TRIUMF will consist of about 3 PB of disk storage and 2 PB of tape storage. At present, the installed disk capacity is 750 terabytes or TB (10^12 bytes), while the tape capacity is 560 TB, both using state-of-the-art technology. dCache (www.dcache.org) is used to manage the entire storage in order to provide a common file namespace; it is a highly scalable and configurable solution. In this paper we describe and review the storage infrastructure and configuration currently in place at the Tier-1 centre at TRIUMF, for both disk and tape, as well as the management software and tools that have been developed.
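The annual data volumes quoted in the abstract can be summed as a quick sanity check. The sketch below is purely illustrative arithmetic using the figures above (taking the midpoint of the quoted 3 to 4 PB raw-data range); the constant names are hypothetical, not from the paper.

```python
# Illustrative arithmetic only: annual ATLAS data volumes quoted in the
# abstract, in petabytes (1 PB = 10**15 bytes).
RAW_PB_PER_YEAR = 3.5        # midpoint of the quoted 3-4 PB of raw data
DERIVED_PB_PER_YEAR = 2.5    # reconstruction, reprocessing and calibration
SIMULATED_PB_PER_YEAR = 1.0  # simulated data sets, nearing 1 PB per year

def total_pb_per_year():
    """Total new ATLAS data per year of operation, in PB."""
    return RAW_PB_PER_YEAR + DERIVED_PB_PER_YEAR + SIMULATED_PB_PER_YEAR

def total_bytes_per_year():
    """Same total expressed in bytes."""
    return total_pb_per_year() * 10**15

if __name__ == "__main__":
    print(f"~{total_pb_per_year():.1f} PB/year "
          f"({total_bytes_per_year():.2e} bytes/year)")
```

Under these assumptions the experiment produces roughly 7 PB of new data per year, to be replicated across the ten Tier-1 centres.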
|Publication Type:||Conference Paper|
|Copyright:||© 2008 IEEE.|