17 March 2017

What rates can we get for single file transfers?

Recently I had a conversation regarding the rates we can expect for single file data transfers, so I went to have a look... For recalling a single 275GB file (we have them at the Tier 1 for some VOs) I got the following results. These are all just examples with no statistical basis to them, but as a first step they give interesting results. When recalling from our tape system I get the following graph showing over 300MB/s:

Of course, I am also interested to see what happens when I copy a file into Castor. The following is an example of a similar 275GB file being copied across the WAN and then written to tape. As you can see, the initial write phase (~75MB/s) is slower than the rate at which the file is then written to tape (~260MB/s). Copy across the network for a similar sized file and then upload into Castor:

The end of the log file for this transfer is shown here:

N.B. This transfer was using four concurrent streams within the gsiftp transfer.
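For reference, a multi-stream gsiftp transfer of this kind can be requested with the `-p` (parallelism) option of globus-url-copy; the hosts and paths below are placeholders, not the real endpoints:

```
# Four parallel data streams (-p 4), with verbose performance output (-vb).
globus-url-copy -p 4 -vb \
    gsiftp://source.example.org/path/to/file \
    gsiftp://dest.example.org/path/to/file
```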

It is (I think) interesting to look at the theoretical rate limit for a single stream between the two hosts in this transfer, using some predictions (from https://www.switch.ch/network/tools/tcp_throughput/):
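For those who want to reproduce the sort of numbers the calculator gives, the two standard single-stream limits are the window/RTT ceiling and the Mathis et al. loss-based approximation. The window size, RTT and loss rate below are illustrative assumptions, not measurements from the two hosts in this transfer:

```python
def window_limit(window_bytes, rtt_s):
    """Max throughput when limited by the TCP window: window / RTT."""
    return window_bytes / rtt_s

def mathis_limit(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation: rate < (MSS / RTT) * 1/sqrt(p)."""
    return (mss_bytes / rtt_s) * (1.0 / loss_rate ** 0.5)

# e.g. a 16 MiB window over a 20 ms RTT gives a ceiling of ~839 MB/s...
print(window_limit(16 * 2**20, 0.020) / 1e6)   # MB/s
# ...but a 1460-byte MSS, 20 ms RTT and 1e-5 packet loss caps a single
# stream at roughly 23 MB/s, which is why multiple streams help.
print(mathis_limit(1460, 0.020, 1e-5) / 1e6)   # MB/s
```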

So we may need to work on this...
The higher level monitoring worried me for this file until I realised the display options greatly affect the perceived rate. In this (possibly atypical) example, solely by changing the bin size I was able to change the perceived rate from 75MB/s to 460MB/s, as seen in the two pictures below:
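The bin-size effect is easy to demonstrate with a toy bursty transfer (the numbers below are made up for illustration, not taken from the real monitoring data):

```python
# Bytes transferred per 1-second interval: a 1 s burst, then 5 s idle,
# repeated for a minute.
samples = ([460_000_000] + [0] * 5) * 10

def binned_rates(samples, bin_s):
    """Average rate in MB/s within each bin of bin_s seconds."""
    return [sum(samples[i:i + bin_s]) / bin_s / 1e6
            for i in range(0, len(samples), bin_s)]

print(max(binned_rates(samples, 1)))   # 1 s bins: peak looks like 460 MB/s
print(max(binned_rates(samples, 6)))   # 6 s bins: the same data shows ~77 MB/s
```

Same data, very different headline number, which is exactly what the two monitoring plots show.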

So I will rely on log file info from now on... (well mainly)

Now my site has the advantage (hindrance) of two storage systems using different hardware configurations and separate implementations of a gsiftp server; so I decided to see what rate I could get between the two.... And I managed over 100MB/s. This 100MB/s was for a 5 second poll time in the transfer, as seen here:

What I also find interesting is that there seems to be a systematic difference in the average rate depending on transfer direction (70-80MB/s in one direction, 90-95MB/s in reverse). Whether this difference is worth investigating is a question I will leave to the reader to decide. It may also be of interest to see what effect changing the data transfer protocol has; but that is for another day...

16 March 2017

Happy Birthday To ME! (almost) The continuing adventures of Dave the dataset

With my 6th birthday nearly upon me (how the last six years have flown by....), I thought I should give an update. I and my children still exist in 50 rooms across 29 houses.
The list of houses where my children and I reside is:

Nationalities are Canadian, Czech, Dutch, French, German, Israeli, Italian, Nordic, Japanese, Portuguese, Spanish, Swiss, Turkish, UK and USA
And the types of room are:

There are 284 unique individuals in my family tree: 4 are triplets; 56 are twins. 224 individuals have no replicas, so are at risk of extinction if any particular room is destroyed. Of the 284, 64 are Ursulas, 7 are Gavins and 195 are Dirks.

Each child varies from 1 to 3531 files (103 are only a single file). Size varies from 450B to 3.13TB.
In total, there are 7.96515TB of unique data spread across 22841 files (giving an average file size, for the dataset and its children, of ~348MB).
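A quick sanity check on those numbers, using only the figures quoted above:

```python
total_bytes = 7.96515e12   # 7.96515 TB of unique data
total_files = 22841

# The replica counts add up to the 284 unique individuals.
triplets, twins, singles = 4, 56, 224
assert triplets + twins + singles == 284

# Average file size across the dataset and its children.
avg_mb = total_bytes / total_files / 1e6
print(round(avg_mb, 1))   # ~348.7 MB, consistent with the ~348MB quoted
```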

02 March 2017

When not to optimise for the best network settings (and why you should be satisfied with good...)

Just a quick note... Whilst reviewing the current recommendations for network settings on our disk servers, following the advice of our friends at fasterdata.es.net, I noticed they also had a section of settings for machines running network performance tests similar to those run on our machines as part of our WLCG work using the perfSONAR monitoring tools.

Firstly I was interested to see that there were possible differences in settings between these machine types, but it also got me thinking: should we apply the perfSONAR settings to our perfSONAR machines? Counter-intuitively, the answer is NO! The gist of the advice from my friendly sys-admin, who has greater knowledge on this matter than I, was as follows: "You want to have similar settings on the performance node (perfSONAR) as you do on your production disk servers, so that you have a good representation of the expected performance in reality."
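For context, the settings in question are the usual Linux TCP tuning knobs. The fragment below is illustrative of the kind of sysctl.conf tuning fasterdata.es.net discusses for data transfer nodes; the values are examples from that style of guidance, not our actual production settings:

```
# Illustrative TCP buffer and congestion-control tuning (example values only).
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.ipv4.tcp_congestion_control = htcp
net.core.default_qdisc = fq
```

The point of the sys-admin's advice is that whatever values end up here should match between the perfSONAR node and the disk servers it is meant to represent.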

Data Recall Tests from Tape for ATLAS successful... ...ish

The ATLAS VO wanted to test the rates achievable when recalling real data from the tape system, to see what we could achieve. We (RAL) decided to make this even harder on ourselves by getting them to move the data to our new CEPH storage system, so as to further test its deployment. This had some success, and we discovered issues which we have been working to fix. I won't go into all of them here, and will leave it to others to expand on them if they wish, other than to say the following.

The data rate was improved by fixing the number of concurrent transfers between the two storage systems, by disabling the auto-configure feature in FTS for the particular FTS "link". We set it to 32, as it had been downgraded to only two due to other issues. This value of 32 should be investigated in further tests.

What I will give are the headline figures we achieved. When things were working well, we were recalling from tape at ~900MB/s and then copying from the Castor disk cache into our object store (using the gsiftp protocol) via FTS at ~450MB/s. (The total data volume expected to be moved was 150TB, but we seem to have seen 230TB of data moved.) This used three gateway machines on the CEPH cluster and 12 disk servers of various quality in the Castor disk cache (with a capability of up to 160 concurrent transfers). This was connected to our tape library, which had up to 10 out of its 16 tape drives available (this limit is due to an internal tape family setting which we shan't worry about now).
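A back-of-the-envelope check on what those headline figures imply for the expected 150TB volume (rates as quoted above, sustained perfectly, which of course they were not):

```python
copy_rate = 450e6   # B/s, Castor disk cache -> CEPH object store via FTS
volume = 150e12     # 150 TB expected to be moved

days = volume / copy_rate / 86400
print(round(days, 1))   # ~3.9 days at a sustained 450 MB/s
```

The extra ~80TB apparently moved on top of the expected 150TB would stretch that proportionally, so a multi-day campaign either way.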

We also had to contend with other VOs using the system, and with internal ATLAS contention for resources during the test. There are probably earlier tests that my colleagues have done in the background on the Q.T., but this is the first large volume data recall and movement combining CEPH and Castor, so I thought I would comment on it here for posterity (and as a reference for the next time the test is run).