We had an interesting experience with the CASTOR upgrade to 2.1.12: the link between the storage area (SA) and the tape pool disappeared in the upgrade. In GLUE speak, the SA is a storage space of sorts, which may be shared between collaborators; we use it to publish dynamic usage data.
In CASTOR, we have used the "service class" as the SA; there is then a many-to-many link to disk pools and tape pools, something like this:
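As a rough sketch of that association (all names and numbers here are invented for illustration, not taken from a real CASTOR instance), each service class maps many-to-many onto disk pools and tape pools, and a pool's dynamic usage is shared between every service class it belongs to:

```python
# Hypothetical sketch of the SvcClass <-> pool associations: a service
# class (our SA) links many-to-many to disk pools and tape pools.
svc_class_pools = {
    "atlasStripInput": {"disk": ["atlasStripDisk"], "tape": ["atlasStripTape"]},
    "atlasSimStrip":   {"disk": ["atlasStripDisk"], "tape": ["atlasSimTape"]},
}

# Dynamic usage data per pool, in TB (made-up numbers).
pool_usage = {
    "atlasStripDisk": 120.0,
    "atlasStripTape": 800.0,
    "atlasSimTape":   450.0,
}

def published_usage(svc_class):
    """Sum the usage of every pool linked to this service class."""
    pools = svc_class_pools[svc_class]
    return sum(pool_usage[p] for p in pools["disk"] + pools["tape"])

print(published_usage("atlasStripInput"))  # 920.0
```

Note that `atlasStripDisk` appears under both service classes: that is the sharing the information provider has to account for when it publishes per-SA usage.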
The dynamic data of each pool then gets shared accordingly between all the SvcClasses, which is (was) the Right Thing™. Now that the second association link has gone away, we are wondering how to keep publishing data correctly in the short term, and the upgrade got postponed by a week amidst much scratching of heads.
The information provider may just have enough information (in its config files) to restore the link, but it would be a bit hairy to code, and we are still working on that; it may simply be better to rethink what the SA should be (which we will). We also tried a supermassive query which examined disk copies of files from tape pools to see which disk pools they were on, and then linked those with service classes. That was quite enlightening, as we discovered those disk copies were all over the place, not just where they were supposed to be...
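The logic of that query can be sketched in a few lines over in-memory data (again, all names and rows here are invented, not real catalogue contents): for each disk copy of a file belonging to a tape pool, find the disk pool it actually lives on, then map that disk pool back to service classes.

```python
# Invented rows: (file, tape_pool, disk_pool) for each disk copy.
disk_copies = [
    ("f1", "atlasTape", "atlasDisk"),
    ("f2", "atlasTape", "scratchDisk"),  # a copy where it "shouldn't" be
    ("f3", "cmsTape",   "cmsDisk"),
]

# Invented disk pool -> service class associations.
disk_pool_to_svc = {
    "atlasDisk":   ["atlasProd"],
    "scratchDisk": ["genScratch"],
    "cmsDisk":     ["cmsProd"],
}

from collections import defaultdict

# Which service classes actually hold copies from each tape pool?
tape_to_svc = defaultdict(set)
for _file, tape_pool, disk_pool in disk_copies:
    tape_to_svc[tape_pool].update(disk_pool_to_svc[disk_pool])

print(dict(tape_to_svc))
```

A result like `atlasTape -> {atlasProd, genScratch}` is exactly the "all over the place" surprise: copies turning up in service classes they were never meant to be associated with.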
In the interest of getting it working, we decided simply to remember and adjust which data publishes where; meanwhile, we shall rethink what the SA should be in the future.
19 September 2012
14 September 2012
Large Read/Write rates allow T2 storage to fill quickly.
In the last six months, the UK T2s have ingested on average nearly 590MB/s, peaking at 2.0GB/s. As a source for transfers, the T2s have averaged 230MB/s with a peak of 1GB/s. For ATLAS alone, the write rate has averaged 473MB/s.
That equates to a volume over the last 23 weeks of 6.58PB of data. Now ATLAS "only" have access to 7.9PB of storage capacity, so the time it takes ATLAS to fill its available storage is 28 weeks. N.B. this week the average rate has been double that, at 953MB/s, so it would take only ~3 months to fill from empty. Also remember that this fill rate assumes that files are only transferred into the storage from across the WAN. Since many files are actually created at the T2s (either MC production or derived data), the actual fill rate will be much quicker.
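The arithmetic can be sanity-checked in a few lines (decimal units assumed, 1PB = 10^15 bytes, using the 473MB/s ATLAS write rate from above):

```python
SECONDS_PER_WEEK = 7 * 24 * 3600

atlas_rate = 473e6  # B/s, average ATLAS write rate into the UK T2s
volume_pb = atlas_rate * 23 * SECONDS_PER_WEEK / 1e15
print(round(volume_pb, 2))  # -> 6.58 PB ingested over 23 weeks

capacity = 7.9e15  # B, ATLAS-accessible T2 storage
weeks_to_fill = capacity / atlas_rate / SECONDS_PER_WEEK
print(round(weeks_to_fill))  # -> 28 weeks to fill from empty

doubled_rate = 953e6  # B/s, this week's average
months_to_fill = capacity / doubled_rate / SECONDS_PER_WEEK / 4.345
print(round(months_to_fill, 1))  # ~3 months at the doubled rate
```

This confirms the quoted figures are internally consistent: 6.58PB at 473MB/s, and roughly a quarter of a year to fill at this week's doubled rate.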
11 September 2012
The net for UK to ROW ATLAS sites.
So I was trying to find out which SEs were close (in a routing sense) to other SEs for ATLAS from RAL. I ended up producing a network diagram for the net. Not surprisingly, SEs within a European country all seem to follow a similar route. However, traffic to the U.S. seems to take multiple different routes.