15 March 2019

Some success in testing sites with no local production data storage for the ATLAS VO.

For a while we have had a few of the smaller sites (T3s) in the UK running for ATLAS without any storage at the site. We recently tried to run Birmingham as a completely diskless site, using storage at Manchester. This was mostly successful; however, saturation of the WAN connection at Manchester, which had always been considered a worrying possibility, was indeed seen. This has helped inform ATLAS's opinions on how to implement diskless sites, which were then presented at the ATLAS Site Jamboree this month.

We intend to try using XCache at Birmingham instead, to see if that is an alternative approach which might succeed. We are also looking into using the ARC Control Tower to pre-place data for ARC-CEs. The main issue is how this conflicts with the VO's wish for last-minute payload changes within pilot jobs.

I would also just like to remind readers why (IMHO) we are looking into optimising ATLASDATADISK storage.

From a small site's perspective, storage requires a substantial amount of effort to maintain. Compared to the volume of storage provided, this effort could be more efficiently used in other activities. Below is a plot of the percentage of current ATLASDATADISK provided by each site. The VO also benefits from not using smaller sites, as it then has fewer logical endpoints to track.
 

This plot shows that dropping the 10 smallest sites (of which 5 are in the UK) would still allow ATLAS to use 99% of the space from only 88% of the sites. ATLASSCRATCHDISK and ATLASLOCALGROUPDISK usage/requirements also need to be taken into consideration when deciding if a site should become a fully diskless or caching/buffering site.
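The trade-off above can be sketched with a toy calculation: given per-site capacities, how much space survives if the N smallest sites are dropped? (The numbers below are made up for illustration, not real ATLASDATADISK figures.)

```python
def space_kept_dropping_smallest(capacities_tb, n_drop):
    """Fraction of total capacity kept after removing the n_drop smallest sites."""
    vals = sorted(capacities_tb)
    return sum(vals[n_drop:]) / sum(vals)

# Illustrative (made-up) capacities in TB: a few big sites, several small T3s.
capacities = [5000, 4000, 3000, 2500, 200, 150, 120, 100, 80, 50]
print(space_kept_dropping_smallest(capacities, 5))  # drop the 5 smallest
```

With this (hypothetical) distribution, dropping half the sites still keeps about 97% of the space, which is the shape of the argument the plot makes for ATLASDATADISK.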

08 March 2019

ATLAS Jamboree 2019: a view from the offsite perspective.

I didn't go in person to the ATLAS Jamboree this year held at CERN. For those who are allowed to view I suggest looking at: https://indico.cern.ch/event/770307/

But I did join for some of it via Vidyo!
Here are my musings about the talks I saw. (Shame I couldn't get involved in the coffee/dinner discussions, which are often the most fruitful moments of these meetings.)

Even before the main meeting started, there was an interesting talk regarding HPC data access in the US at ANL.


In particular, I like the thought of Globus usage and of incorporating Rucio into the DTNs at the sites. This is similar to what was discussed by other sites at the Rucio community workshop last week.

In the preview talk, I picked out that the switch to Fast Sim rather than Full Sim will increase the output rate by a factor of 10. A good reminder that user workflow changes could drastically alter computing requirements.
From the main meeting, the following talks will be of interest from a data storage perspective:

Data Organization and Management Activities: Third Party Copy and (storage) Quality of Service
TPC: details on DPM
DOMA ACCESS: Caches 
DDM Ops overview
Diskless and lightweight sites: consolidation of storage
Data Carousel
Networking - best practice for sites, and evolution
WLCG evolution strategy
 
One thing it did do was cause me to think: what if (and I stress the "if" is me musing, not ATLAS) ATLAS wanted to read 1PB of data a day from tape at RAL and then distribute it across the world?
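As a back-of-envelope sketch (my own arithmetic, not an ATLAS figure), 1PB/day implies a substantial sustained rate, before even considering tape drive counts and recall scheduling:

```python
# Back-of-envelope: sustained bandwidth needed to move 1 PB in one day.
pb = 1e15                # 1 PB in bytes (decimal definition)
seconds_per_day = 86400

bytes_per_s = pb / seconds_per_day
gbps = bytes_per_s * 8 / 1e9   # gigabits per second

print(f"{bytes_per_s / 1e9:.1f} GB/s sustained, i.e. {gbps:.0f} Gb/s")
```

That is close to saturating a 100Gb/s link on its own, which suggests any such "data carousel" recall from tape would need careful throttling and scheduling.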

 
 

06 March 2019

Rucio 2nd Community Workshop roundup

There was an interesting workshop for members of the rucio community last week:
https://indico.cern.ch/event/773489/timetable/#all.detailed

Here is the summary from the workshop:
Summary
● Presentation from 25 communities, 66 attendees!
● Many different use-cases have been presented
○ Please join us on Slack and the rucio-users mailing list for follow-ups!
● Documentation!
○ Examples
○ Operations documentation
○ Easy way for communities to contribute to the documentation
○ Documentation/Support on Monitoring (Setup, Interpretation, Knowledge)
○ Recommendations on data layout/scheme → Very difficult decision for new communities
● Databases
○ Larger-Scale evaluation of non-Oracle databases would be very beneficial for the community
● Drop of Python 2.6 support for Rucio clients
● LTS/Gold release model
○ Will propose a release model with LTS/gold releases with Security/Critical bug fixes
● Archive support
● Metadata support
○ Existing metadata features (generic metadata) need more evaluation/support
○ More documentation/examples needed
● Additional authentication methods needed
○ OpenID, Edugain, …
● Interfacing/Integration with DIRAC
○ Many communities interested
○ Possibility for joint effort?

Here is my summary of these and other snippets from the talks that I found interesting/thought-provoking. I'll point out that I was not at the meeting, so the tone of the talks may have been lost on me.

Network talk from GEANT:
Lots of 100Gb links!



DUNE talk:
Replacing SAM (THE product where my data management journey started...)
Unfortunately not able to use significant amount of storage at RAL Echo
– Dynafed WebDAV interface can’t handle files larger than 5GB
– Latest Davix can do direct upload of large files (as a multi-part upload), but not third-party
transfers
– Maybe use the S3 interface directly instead?
• Recent work done on improving Rucio S3 URL signing


CMS talk:
Some concerns called out by review panel which we will have to address or solve:
▪ Automated and centralized consistency checking — person is assigned
▪ No verification that files are on tape before they are allowed to be deleted from buffer
★ FTS has agreed to address this


BelleII talk:
My thoughts:
using a BelleII specific version of DIRAC
They are evaluating rucio
naming schema is "interesting"
power users seem a GoodIdea (TM)
BelleII thoughts:
• Looking ahead, the main challenge is achieving balance between conflicting requirements:
• bringing Rucio into Belle II operations quickly enough to avoid duplication of development effort
• supporting the old system for a running experiment


FTS talk:
I didn't know about multi hop support:
Multi-hop transfers support – Transfers from A->C, but also A->B->C


XENONnT talk:
This is a dark matter experiment with some familiar  HEP sites using rucio.


ICE cube experiment :
Interesting data movement issues when based at the South Pole!
●Raw data is ~1 TB/day, sent via cargo ship; 1 shipment/year
●Filtered data is ~80 GB/day, sent via satellite; daily transfer


CTA talk:
CTA is an experiment for cosmic rays, as well as the name of CERN's new tape system!


Rucio DB at CERN  talk:
rucio DB numbers for ATLAS are "impressive" (1014M DIDs)


SKA talk:
RAL members get a thank you specifically.


NSLSII talk:
Similar needs to the Diamond Light Source (DLS); possible collaboration?
Another site which does both HEP and photonics.
Has tested using globus endpoints.


XDC talk:
rucio and dynafed and storage all in one!


CTA (tape system) talk:
Initial deployments: they predict a need of 70PB of disk space just in the disk cache! (Am I reading this slide correctly?)


LCLSII talk:
A linear version of NSLSII based at SLAC, similar to BNL. Use of FTS still needs to be tested; a production system is needed in the next year.


LSST talk:
Using docker release of rucio
Nice set of setup tests. Things look promising!
– FTS has proven its efficiency for data transfers, either standalone or paired with Rucio
– Rucio makes data management easier in a multi-site context, and tasks can be highly automated
– These features could prove beneficial to LSST
● Evaluation is still ongoing
– discussions with the LSST DM team at NCSA are taking place


Dynafed talk:
Dynafed as a Storage Element is work in progress
– Not the original design purpose of Dynafed


RAL/IRIS talk:
I would be interested to hear how this talk went down with the people present.



ARC at Nordugrid talk:
I still think the ARC Control Tower (aCT) is the future. Rucio integration with volatile RSEs is nice.

28 February 2019

Fixing an iptables problem with DPM at Edinburgh


After spending some time examining the output from iptables on our DPM servers in Edinburgh, I came across a small problem combining our iptables rules with SRM.

For brevity the iptables rule which caused the problems is:

 *filter  
 :INPUT ACCEPT [0:0]  
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  
 -A INPUT -i lo -j ACCEPT  
 -A INPUT -p icmp --icmp-type any -j ACCEPT  
 ...  
 -A INPUT -p tcp -m multiport --dports 8446 -m comment --comment "allow srmv2.2" -m state --state NEW -j ACCEPT  
 ...  

The problem this causes is that dropped packets appear in the logs looking similar to the following:

 IN=eth0 OUT= MAC=aa:bb:cc:dd:ee:ff:gg:hh:ii:jj:kk:ll:mm:nn SRC=192.168.12.34 DST=192.168.123.45 LEN=52 TOS=0x00 PREC=0x00 TTL=47 ID=19246 DF PROTO=TCP SPT=55012 DPT=8446 WINDOW=27313 RES=0x00 ACK FIN URGP=0

Here, ACK FIN shows how the dropped packet appears to be associated with closing a connection which iptables has already seen as closed.
(This is the case at least with the DPM 1.10 srmv2.2 builds on both the latest security SL6 and CentOS7 kernels.)

In Edinburgh we have historically had problems with many connections which don't appear to close correctly, in particular with the SRM protocol. If uncorrected, the service would run for several hours and then appear to hang, not accepting any further connections.

We now suspect that this dropping of packets was potentially causing the issues we were seeing.
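To see how common these drops were, a quick scan of the kernel log helps. The sketch below (my own quick script; the log format is assumed from the example line above) tallies dropped packets by destination port and TCP flag combination, which makes the ACK FIN pattern on port 8446 stand out:

```python
import re
from collections import Counter

# Match iptables log lines and extract the destination port; flags such as
# "ACK FIN" appear as bare words later on the same line.
LINE_RE = re.compile(r"DPT=(?P<dpt>\d+)")
FLAGS_RE = re.compile(r"\b(SYN|ACK|FIN|RST|PSH|URG)\b")

def tally_drops(lines):
    """Count logged drops keyed by (destination port, TCP flags)."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        flags = " ".join(FLAGS_RE.findall(line))
        counts[(m.group("dpt"), flags)] += 1
    return counts

sample = [
    "IN=eth0 OUT= SRC=192.168.12.34 DST=192.168.123.45 PROTO=TCP "
    "SPT=55012 DPT=8446 WINDOW=27313 RES=0x00 ACK FIN URGP=0",
]
print(tally_drops(sample))
```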

In order to fix this, the above rule should either be changed to:
 ...  
 -A INPUT -p tcp -m multiport --dports 8446 -m comment --comment "allow srmv2.2" -m state --state NEW,INVALID -j ACCEPT  
 ...  

or, the state module shouldn't be used to filter only NEW packets associated with the srmv2.2 protocol.

With this in mind we've now removed the firewall requirement that packets be NEW to be accepted by our srmv2.2 service. This has enjoyed an active uptime of several days without hanging and refusing further connections.

An advantage of this is that most of the packets rejected by the firewall on our DPM head node were actually associated with this rule. Now that the number of packets being rejected by our firewall has dropped significantly, examining the connections which are still rejected for further patterns/problems becomes much easier.

18 February 2019

Understanding Globus connect/online... is it doing a lot??

I have made further progress in understanding the Globus transfer tool (one thing I still struggle with is what to call it...). What I know I still need to understand is its authentication and authorization mechanisms. Of interest (to me at least) was to look at the usage of our Globus endpoints at RAL: 20TB in the last 3 months. Now to work out if that is a lot or not compared to other similar Globus endpoints and/or other communities...
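For scale, my own rough arithmetic (assuming the 20TB was spread evenly over roughly 90 days):

```python
# Average sustained rate implied by 20 TB transferred over ~3 months.
tb = 20e12           # 20 TB in bytes (decimal definition)
seconds = 90 * 86400

mb_per_s = tb / seconds / 1e6
print(f"{mb_per_s:.2f} MB/s average")
```

A couple of MB/s averaged is tiny next to a typical WLCG FTS channel, although Globus transfers are presumably bursty, so peak rates may be the more interesting number.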

07 February 2019

PerfSonar, Network Latencies and Grid Storage

One of the future developments mooted under the DOMA umbrella of working groups has been distribution of storage. Doing this well requires understanding how "close" our various sites are, in network terms, so we can intelligently distribute. (For example: modern Erasure Coding resilience provides "tiered" successions of parity blocks, which provide for locating some parity "close" (in network terms) to subsets of the stripe; and other parity in a "global" pool with no locality assumptions.)
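The tiered-parity idea can be shown with a toy XOR sketch (this is my illustration, not any particular erasure-coding library): each half of the stripe gets a "local" parity block that can live near it on the network, plus one "global" parity with no locality assumption.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A toy "tiered" layout: 4 data blocks in 2 local groups.
data = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
local_parity = [xor_blocks(data[:2]), xor_blocks(data[2:])]  # kept near each group
global_parity = xor_blocks(data)                             # can live anywhere

# Losing one block in a group is repaired locally, without touching
# the remote half of the stripe:
recovered = xor_blocks([data[1], local_parity[0]])
assert recovered == data[0]
```

The repair cost of a single failure is then bounded by the group size, which is exactly why knowing which sites are "close" in latency terms matters for placing the local parity.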

Of course, for us, the relevant latency and bandwidth is the pure storage-to-storage measurement, but this is limited by the lower/upper bound (respectively) of the site's own latency and bandwidth. We already have a system which measures this bounding limit, in the PerfSonar network which all WLCG sites have had installed for some time.

Whilst PerfSonar sites do record the (one-way) packet latency to all of their peers, the display of this doesn't seem to be a "standard" visualisation from the central repository for PerfSonar. So, I spent a few hours pulling data from the UK's sites with public PerfSonar services, and making a spreadsheet. (Doing it this way also means that I can make my own "averages" - using a 10% truncated mean to remove outliers - rather than the "standard" mean used by PerfSonar itself.)
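The 10% truncated mean is simple enough to sketch (this is my own re-implementation of the idea, not the exact spreadsheet formula):

```python
def truncated_mean(values, frac=0.10):
    """Mean after discarding the lowest and highest `frac` of samples."""
    vals = sorted(values)
    k = int(len(vals) * frac)
    trimmed = vals[k:len(vals) - k] if k else vals
    return sum(trimmed) / len(trimmed)

# One outlier (say, a 100 ms delayed packet) no longer skews the estimate.
samples = [1.4, 1.5, 1.4, 1.6, 1.5, 1.4, 1.5, 1.6, 1.5, 100.0]
print(truncated_mean(samples))
```

On the sample above the plain mean would be dragged above 11 ms by the single outlier, while the truncated mean stays at 1.5 ms.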
The result, in raw form, looks like this (where, for comparison, I have also added the ballpark latencies for access to spinning media, solid state storage via SATA, and solid state storage via PCI-E).
All of these results are for IPv4 packets: for some sites, it looks like switching transport protocol to IPv6 has very significant effects on the numbers!

One-way packet latencies between UK sites with public PerfSonar services (10% truncated mean, all values in ms; "x" = same site, "-" = no measurement available):

       GLA   BHAM  SUSX  ECDF  RALPP LIV   UCL   RAL   DUR   RHUL  QMUL  LANCS CAM   OX    MAN
GLA    x     2.5   5.5   3.9   -     1.4   3.6   4.8   2.6   4.1   -     0.7   3.5   4.9   -
BHAM   4.4   x     3.6   3.7   -     1.2   1.5   2.5   3.0   2.7   -     4.7   2.2   2.9   -
SUSX   8.1   4.0   x     7.2   -     5.2   1.8   3.1   7.0   3.0   -     8.9   6.2   3.2   -
ECDF   6.7   5.0   7.6   x     -     4.0   6.0   7.0   3.6   5.3   -     7.8   4.3   7.5   -
RALPP  7.0   -     -     -     x     3.4   -     0.1   6.0   1.4   -     7.0   -     1.6   -
LIV    4.4   2.0   5.3   3.6   -     x     3.6   4.7   -     5.0   -     4.8   4.2   4.5   -
UCL    6.5   2.5   2.5   3.8   -     3.9   x     1.6   3.8   1.0   -     6.8   3.5   1.6   -
RAL    7.0   2.5   2.5   5.5   -     3.4   0.7   x     5.0   1.8   -     7.7   4.5   1.6   -
DUR    5.0   3.3   6.5   2.7   -     2.7   5.3   6.0   x     4.5   -     5.5   3.8   6.1   -
RHUL   7.0   2.7   2.2   6.3   -     3.6   0.5   1.6   3.0   x     -     6.8   3.5   1.6   -
QMUL   6.0   2.3   2.0   3.9   -     3.6   0.4   1.2   4.0   0.5   x     6.6   3.3   1.2   -
LANCS  3.2   5.3   8.5   6.8   -     4.5   7.1   7.8   5.7   7.1   -     x     6.4   7.9   -
CAM    5.3   2.0   5.3   3.2   -     3.1   3.5   4.2   3.1   2.9   -     5.9   x     4.5   -
OX     7.2   2.9   2.5   6.7   -     4.2   0.9   2.0   6.0   1.8   -     7.6   5.0   x     -
MAN    -     -     -     -     -     -     -     -     -     4.1   -     -     -     -     x

For comparison, ballpark storage access latencies:

Spinning media: 2 to 20 ms
SSD (SATA):     0.2 ms
NVMe:           0.06 ms
The first take-away is that these latencies are all really very good - the largest value is still less than 10ms, which is exceptional. There is still measurable, consistent variation in the latencies, though, so we can construct an adjacency graph from the data, using NetworkX and a force-directed layout (with the Kamada-Kawai algorithm) to visualise:

Kamada-Kawai (force-directed) layout of the UK sites with public latency measurements (minus MAN), with very close clusters annotated.


As you can see, this reproduces the JANET networking structure - rather than the geographical distance - with, for example, Lancaster further away from Liverpool than Glasgow is, because Lancaster's packets actually pass through Glasgow, before routing back down south.
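The graph construction itself is only a few lines. The sketch below feeds a hypothetical subset of the latency table into NetworkX's Kamada-Kawai layout, using latency as the edge distance (site pairs and values here are illustrative, not the full dataset):

```python
import networkx as nx

# Treat each measured one-way latency (ms) as an edge "distance" between sites.
latency_ms = {
    ("GLA", "LANCS"): 3.2,
    ("GLA", "LIV"): 1.4,
    ("LANCS", "LIV"): 4.5,
    ("UCL", "RHUL"): 0.5,
    ("UCL", "QMUL"): 0.4,
    ("RHUL", "QMUL"): 0.5,
    ("LIV", "UCL"): 3.6,
}

G = nx.Graph()
for (a, b), ms in latency_ms.items():
    G.add_edge(a, b, weight=ms)

# Kamada-Kawai places nodes so that geometric distance approximates the
# (shortest-path) latency between sites.
pos = nx.kamada_kawai_layout(G, weight="weight")
for site in sorted(pos):
    x, y = pos[site]
    print(f"{site:6s} {x:+.2f} {y:+.2f}")
```

Nodes joined by low-latency edges (like the London cluster of UCL/RHUL/QMUL) end up plotted close together, which is how the JANET routing structure, rather than geography, emerges in the figure.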

The next thing to do is to test the point-to-point SE latency for test examples.