26 January 2011

Dirk gets mentioned in Nature

So at least one person other than my avatar is aware of my existence. One of my children is mentioned in the article (even though the majority of the article is really about me and ALL my children; can't be having favourites amongst them now, can I??).

http://www.nature.com/news/2011/110119/full/469282a.html

An interesting point to note is that only 0.02% of the total data collected by ATLAS is represented. That's to say, if I were people and ATLAS were all the people in the world, then I would represent 1.2 million people.
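As a quick sanity check on that arithmetic: the 1.2 million figure implies a world population of roughly 6 billion, which is an assumed round number on my part rather than anything stated in the article.

```python
# Assumed world population of ~6 billion (a round number the post's
# 1.2 million figure implies, not a sourced value).
world_population = 6e9
fraction = 0.02 / 100  # 0.02%

print(round(fraction * world_population))  # → 1200000
```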

ATLAS have also now changed the way they send my children out. Interestingly, I am now in 70 of the 120 houses. The breakdown of where these rooms are is as follows:

9 rooms at BNL-OSG2.
8 rooms at CERN-PROD.
6 rooms at IN2P3-CC.
4 rooms at SLACXRD, LRZ-LMU, INFN-MILANO-ATLASC and AGLT2.
3 rooms at UKI-NORTHGRID-SHEF-HEP, UKI-LT2-QMUL, TRIUMF-LCG2, SWT2, RU-PROTVINO-IHEP, RAL-LCG2, PRAGUELCG2, NDGF-T1, MWT2, INFN-NAPOLI-ATLAS and DESY-HH.
2 rooms at WUPPERTALPROD, UNI-FREIBURG, UKI-SCOTGRID-GLASGOW, UKI-NORTHGRID-MAN-HEP, TW-FTT, TOKYO-LCG2, NIKHEF-ELPROD, NET2, MPPMU, LIP-COIMBRA, INFN-T1, GRIF-LAL, FZK-LCG2 and DESY-ZN.
1 room at AUSTRALIA-ATLAS, WISC, WEIZMANN-LCG2, UPENN, UNICPH-NBI, UKI-SOUTHGRID-RALPP, UKI-SOUTHGRID-OX-HEP, UKI-SOUTHGRID-BHAM-HEP, UKI-NORTHGRID-LIV-HEP, UKI-NORTHGRID-LANCS-HEP, UKI-LT2-RHUL, TAIWAN-LCG2, SMU, SFU-LCG2, SARA-MATRIX, RU-PNPI, RRC-KI, RO-07-NIPNE, PIC, NCG-INGRID-PT, JINR-LCG2, INFN-ROMA3, INFN-ROMA1, IN2P3-LPSC, IN2P3-LAPP, IN2P3-CPPM, IL-TAU-HEP, ILLINOISHEP, IFIC-LCG2, IFAE, HEPHY-UIBK, GRIF-LPNHE, GRIF-IRFU, GOEGRID, CSCS-LCG2, CA-SCINET-T2, CA-ALBERTA-WESTGRID-T2 and BEIJING-LCG2.


This is 139 of the 781 rooms ATLAS have in total.
The number and type of rooms are:

56 rooms of type DATADISK.
26 rooms of type LOCALGROUPDISK.
17 rooms of type SCRATCHDISK.
7 rooms of type USERDISK.
5 rooms of type PHYS-SM and PERF-JETS.
4 rooms of type PERF-FLAVTAG.
3 rooms of type PERF-MUONS, PERF-EGAMMA, MCDISK, DATATAPE and CALIBDISK.
1 room of type TZERO, PHYS-HIGGS, PHYS-BEAUTY and EOSDATADISK.



11 January 2011

Who cares about TCP anyway....

Don't worry, I haven't injured myself and need a cut sterilising; I mean TCP window sizes!!!
So, as part of my work looking at how to speed up individual transfers, I thought I would go back and see what the effect of changing some of our favourite TCP window settings would be. These are documented at http://fasterdata.es.net/TCP-tuning/
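For reference, the knobs in question on a Linux disk server are the kernel's socket-buffer limits. A sketch of the sort of settings the fasterdata pages describe, using our post-change numbers of a 128kB default and 4MB maximum (the min value and exact layout here are illustrative, not our production config):

```
# /etc/sysctl.conf -- illustrative sketch, apply with `sysctl -p`
net.core.rmem_max = 4194304               # max receive socket buffer (4MB)
net.core.wmem_max = 4194304               # max send socket buffer (4MB)
net.ipv4.tcp_rmem = 4096 131072 4194304   # min / default (128kB) / max
net.ipv4.tcp_wmem = 4096 131072 4194304   # min / default (128kB) / max
```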

Our CMS instance of Castor is nice, since CMS have separate disk pools for incoming WAN transfers, outgoing WAN transfers, and internal transfers between WNs and the SE. This is a great feature, as it means the disk servers in WanIn and WanOut will never have 100s of local connections (a worry I have about setting TCP settings too high), so we experimented to see what the effect of changing our TCP settings would be.

I decided to study international transfers, as these have large RTTs and are the most likely to benefit from tweaking. Our settings before the change were a 64kB default and a 1MB maximum window size.
This led to a maximum transfer rate per transfer of ~60MB/s and an average of ~7.0MB/s.
This appears to be hardware dependent across the different generations of kit.
We changed the settings to 128kB and 4MB. This led to an increase to ~90MB/s maximum data transfer rate per transfer and an average of ~11MB/s, so roughly a 50% increase in performance. This might not seem a lot, given that we doubled and quadrupled our settings... However, further analysis improves matters: changing TCP settings is only going to help with transfers where the settings at RAL were the bottleneck.
For channels where the settings at the source site are already the limiting factor, these changes will have a limited effect. However, looking at transfers from FNAL to RAL for CMS, we see a much greater improvement.

Before the tweak, the maximum file transfer rate was ~20MB/s with an average of 6.2MB/s. After the TCP tweak, these increased to 50MB/s and 12.9MB/s respectively.

Another set of sites where the changes dramatically helped was transfers from the US Tier-2s to RAL (over the production network rather than the OPN). Before the tweaks, the transfers peaked at 10MB/s and averaged 4.9MB/s. After the tweaks, these values were 40MB/s and 10.8MB/s respectively.
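A rough sanity check on these numbers: a single TCP stream can carry at most one window per round trip, so the maximum window size caps the per-transfer throughput at window/RTT. The ~100ms RTT below is an illustrative assumption for a transatlantic path, not a measured value.

```python
def max_throughput_mb_s(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on a single TCP stream: one window per round trip."""
    return window_bytes / rtt_s / 1e6

rtt = 0.100  # assumed ~100ms transatlantic RTT (illustrative)
old = max_throughput_mb_s(1 * 1024**2, rtt)  # old 1MB maximum window
new = max_throughput_mb_s(4 * 1024**2, rtt)  # new 4MB maximum window

print(f"1MB window: ~{old:.0f} MB/s, 4MB window: ~{new:.0f} MB/s")
# → 1MB window: ~10 MB/s, 4MB window: ~42 MB/s
```

Under that assumed RTT, the window-limited ceilings line up quite well with the observed US Tier-2 peaks of 10MB/s before and 40MB/s after.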

Now, putting all these values into a spreadsheet and looking at other values, we get:

[Graph of peak and average transfer rates before and after the tweak.]
Solid line is peak. Dotted line is average.
Green is total transfers.
Red is transfers from FNAL.
Blue is transfers to US T2 sites.
Tests on a pre-production system at RAL also show that the effects of these changes on LAN transfers are acceptable.