Data management demonstrators
Ian Bird; WLCG MB, 18th January 2011

- March 2010: experiments express concern over data management and access; first brainstorming; agree on a Jamboree in June 2010
- Amsterdam Jamboree: ~15 demonstrator projects proposed
  - Process: follow up at WLCG meetings to ensure projects had effort and interest; follow up in GDBs; by end of year, decide which would continue based on demonstrated usefulness/feasibility
- 2nd half of 2010: initial follow-up at the WLCG workshop; GDB status report
- Jan 2011: GDB status report (last week); close of the process started in Amsterdam (today)

Status:
- 12 projects scheduled at GDB
- 2 had no progress reported (CDN + Cassandra/Fuse)
- 10 either driven by, or with interest expressed by, the experiments
- Assume these 10 will progress and be regularly reported on in GDB
- Scope for collaboration between several: to be encouraged/pushed
- Several use xrootd technology: must ensure we arrange adequate support
- Which (and how) to be wrapped into WLCG sw distributions?
- The process and initiatives have been very useful
- MB endorses continued progress on these 10 projects

Summary of demonstrators:
- ATLAS PD2P: in use; linked to the LST demonstrator. The implementation is ATLAS-specific, but the ideas can be re-used with other central task queues
- ARC caching: used to improve ATLAS use of ARC sites; could also help others use ARC. More general use of the cache needs input from developers (and interest/need from elsewhere)
- Speed-up of SRM getturl (make it synchronous): essentially done, but important mainly with lots of small files (then other measures should be taken too)
- Catalogue/SE sync + ACL propagation with MSG: prototype exists; interest in testing from ATLAS; ideas for other uses
- CHIRP: seems to work well for the use case of a personal SE (grid home directory). Used by ATLAS, tested by CMS
Xrootd-related:
- Xrootd (EOS, LST): well advanced, tested by ATLAS and CMS; strategy for Castor evolution at CERN
- Xrootd ATLAS: augments DDM; commonality with CMS
- Xrootd-global CMS: global xrootd federation, integrates with local SEs and filesystems
- Many commonalities: can we converge on a common set of tools?
- Proxy-caches in ROOT: require validation before production; continue to study file caching in experiment frameworks
- NFS 4.1: a lot of progress in implementations and testing; needs some xrootd support (CMSD); should the MB push for a pNFS kernel in SL?
Amsterdam was not only the demonstrators
- It was a recognition that the network is a resource:
  - could use remote access
  - should not rely on 100% accuracy of catalogues, etc.
  - can use the network to access remote services
- Network planning group set up; work with the NRENs etc. is ongoing (they got our message)
- Also understood where the data management model should change:
  - separate tape and disk caches (logically at least)
  - access to disk caches does not need SRM
  - SRM for tape can be of minimal functionality
  - disk to be treated as a cache: move away from data placement, for analysis at least
  - re-think …

Amsterdam: we have changed direction
- There are a number of very active efforts driven by experiment need/interest for the future
- Not just what was in the demonstrators
- Should continue to monitor and support these, and look for commonalities: an opportunity to reduce duplication and improve support
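The "disk to be treated as a cache" point above — pulling datasets to a site on first access rather than pre-placing them, and evicting old replicas when space runs out — can be illustrated with a toy sketch. This is not WLCG or PD2P code; the class, dataset names, and sizes are all hypothetical, and a simple LRU policy stands in for whatever policy a real system would use:

```python
from collections import OrderedDict

class ToyDiskCache:
    """Toy pull-based replica cache: datasets are copied in on first
    access instead of being pre-placed, and the least recently used
    dataset is evicted when the disk fills up."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0
        self.cache = OrderedDict()  # dataset name -> size in GB

    def access(self, name, size_gb, fetch):
        """Return 'hit' or 'miss'; on a miss, call fetch(name) to pull
        the dataset from remote storage, evicting LRU entries first."""
        if name in self.cache:
            self.cache.move_to_end(name)  # mark as recently used
            return "hit"
        while self.used_gb + size_gb > self.capacity_gb and self.cache:
            _, evicted_size = self.cache.popitem(last=False)  # evict LRU
            self.used_gb -= evicted_size
        fetch(name)  # e.g. trigger a transfer from tape or a remote SE
        self.cache[name] = size_gb
        self.used_gb += size_gb
        return "miss"

# Hypothetical usage: a 100 GB disk serving 60 GB datasets.
transfers = []
cache = ToyDiskCache(capacity_gb=100)
cache.access("data10_7TeV.A", 60, transfers.append)  # miss: A pulled in
cache.access("data10_7TeV.B", 60, transfers.append)  # miss: A evicted
cache.access("data10_7TeV.A", 60, transfers.append)  # miss: A re-fetched
cache.access("data10_7TeV.A", 60, transfers.append)  # hit: no transfer
```

The contrast with managed data placement is that nothing is copied until a job actually asks for it, so popular datasets stay resident and unused replicas age out on their own.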