One of the advantages of the RAL Echo service having "gone first" in setting up a Ceph object store with direct connections to xrootd and gridftp services is that, when we do the same thing at Scotgrid-Glasgow, we can try new things.
One such change for us is how we do account authorisation and mapping.
The RAL Echo system is deliberately conservative, and has a two stage process:
- User DNs are mapped via a simple grid-mapfile to a specific account.
- That account name is then associated with a set of capabilities via an xrootd authdb file.
(These capabilities correspond to access permissions for a small number of Ceph pools on the backend, usually one per VO; a sketch of stage one follows.)
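For illustration, the first stage looks something like this in the grid-mapfile (the DN and account name here are hypothetical):

"/C=UK/O=eScience/OU=SomeSite/L=SomeDept/CN=some user" dteamaccount

with one line per user DN; the authdb side is sketched near the end of this post.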
We know that works, but it's unwieldy - you need big grid-mapfiles full of DNs for all the users, and users are hard to map to more than one account.
Additionally, privacy and security concerns have led to policies for VOMS servers being restricted - it's hard or impossible to even request a list of member DNs for some VOs now.
It would be nice if we could do something more modern, using the VOMS extensions in the certificates. (It would be even nicer if we could, whilst we're doing this, call out to an ARGUS server for banning, as that's a cheap way to provide central banning for our SE.)
It turns out that we can do this, with the magic of a >6-year-old technology from Nikhef called LCMAPS. The below replaces the grid-mapfile parts of the RAL configuration -
you still need the authdb part to map the resulting account names to the underlying capabilities.
(And in the magical world where we just pass capability tokens around, we can probably make this a single step mapping.)
Doing this needs a bit of work, but since we're already compiling our own version of xrootd, and our own gridftp-ceph plugin, a bit more compilation never hurts.
The underlying LCMAPS configuration we're using (in /etc/lcmaps/lcmaps.db) looks like this, with a bit of unique data obscured:
vomsmapfile2 = "lcmaps_voms_localaccount.mod"
               "-gridmap /etc/grid-security/voms-mapfile"

verifyproxynokey = "lcmaps_verify_proxy.mod"
                   "--allow-limited-proxy"
                   "--discard_private_key_absence"
                   "-certdir /etc/grid-security/certificates"

pepc = "lcmaps_c_pep.mod"
       "--pep-daemon-endpoint-url https://ourargusserverauthzpoint"
       "--resourceid ourcephresourceid"
       "--actionid http://glite.org/xacml/action/execute"
       "--capath /etc/grid-security/certificates/"
       "--certificate /etc/grid-security/hostcert.pem"
       "--key /etc/grid-security/hostkey.pem"
       "--banning-only-mode"

good = "lcmaps_dummy_good.mod"
bad = "lcmaps_dummy_bad.mod"

mapping_pol:
verifyproxynokey -> pepc | bad
pepc -> vomsmapfile2 | bad
vomsmapfile2 -> good
Here, the vomsmapfile2 stanza uses the lcmaps_voms_localaccount plugin to map by VOMS extension only, to a small number of accounts. So our local services don't need to maintain a large and brittle grid-mapfile, or call out anywhere with a cron to update it. (In the mapping_pol policy at the bottom, a -> b | c means "if a succeeds, run b, otherwise run c" - so a proxy has to pass verification and the ARGUS check before it is mapped at all.)
(The voms-mapfile is as simple as, for example:

/dteam* dteamaccount

to map all /dteam* VOMS extensions to the single dteamaccount.)
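Because the matching is on VOMS FQANs, this also fixes the "hard to map users to more than one account" problem - something like the following (hypothetical account names, and assuming first-match ordering, which is how it behaved for us) maps production-role users to a separate account:

/atlas/Role=production* atlprdaccount
/atlas* atlasaccount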
The pepc stanza uses the lcmaps_c_pep plugin to call out to our local ARGUS server. Unlike for glExec on workernodes, or CEs, the only thing we care about here is whether the ARGUS server returns a "Permit" or not. As a result, the policy on the local PAP in our ARGUS server has no obligations in it - in fact, including the local_environment_mapping obligation breaks our chain, since we don't have (or need) pool accounts on these servers by design. We still need to add a policy for the corresponding resourceid we pass, and remember to reload the config on the PDP and PEP afterwards.
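As a sketch, in the ARGUS simplified policy language, that policy might look something like this (the resource id must match the resourceid passed in lcmaps.db; the VO names are illustrative):

resource "ourcephresourceid" {
    action "http://glite.org/xacml/action/execute" {
        rule permit { vo = "dteam" }
        rule permit { vo = "atlas" }
    }
}

Loading it with pap-admin add-policies-from-file, then running pdpctl reloadPolicy and pepdctl clearResponseCache, should (if our experience generalises) get it picked up on the PDP and PEP.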
So far, so easy (and all the packages needed are in UMD4 and easy to get).
Getting LCMAPS to work with the vanilla versions of globus-gridftp-server and xrootd is not completely trivial, however.
In gridftp's case:
globus-gridftp-server is perfectly capable of interfacing with LCMAPS, but all of the shipped versions in EPEL and UMD come without the necessary configuration to do so. (In particular, a set of environment variables needs to be present in the environment of the gridftp server daemon; without them set, the configured LCMAPS will fail with odd errors about gridftp still being mapped to the root user.)
We can fix this with the addition of a /etc/sysconfig/globus-gridftp-server file containing:
export LCMAPS_DB_FILE=/etc/lcmaps/lcmaps.db
export LLGT_LIFT_PRIVILEGED_PROTECTION=1
export LLGT_RUN_LCAS=no
export conf=/etc/gridftp.conf
where the LLGT_RUN_LCAS=no line also prevents the configured gridftp service from trying to load LCAS (which we don't need here, since banning is being farmed out to ARGUS).
We also need to install the lcas-lcmaps-gt4-interface rpm, which provides the glue to let GSI call out via LCMAPS, and finally, install an /etc/grid-security/gsi-authz.conf file to tell gridftp how to handle GSI authorisation:
globus_mapping liblcas_lcmaps_gt4_mapping lcmaps_callout
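Concretely, the gridftp side then amounts to something like this (assuming the UMD4 repos are already configured, and a systemd-based host):

yum install lcas-lcmaps-gt4-interface
systemctl restart globus-gridftp-server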
(The more exciting thing with gridftp is getting the ceph and authdb stuff to work, about which more in another post.)
In xrootd's case:
This needs a little more work: xrootd does not have an officially packaged security plugin for interfacing with LCMAPS.
Luckily, however, OSG have done some sterling work on this (in fact, most of this blog post is based on their documentation, plus the Nikhef LCMAPS docs), and there's a git repository containing a working xrootd-lcmaps plugin, here:
https://github.com/opensciencegrid/xrootd-lcmaps.git
In order to build this, we also need the development libraries for the underlying technologies - voms-devel, lcmaps-devel and lcmaps-common-devel - as well as a host of globus libs that you probably already have installed (and the xrootd development headers, which we already have since we build xrootd locally too).
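The build itself is short - roughly the following (a sketch, assuming an EL7 host with EPEL and UMD4 enabled, and that the repository still builds with cmake as it did when we tried it):

yum install voms-devel lcmaps-devel lcmaps-common-devel xrootd-devel
git clone https://github.com/opensciencegrid/xrootd-lcmaps.git
cd xrootd-lcmaps
cmake .
make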
Having built this, and installed the resulting libXrdLcmaps.so into a suitable place, we just need to add the following to our xrootd config for the externally visible service:
sec.protocol /opt/xrootd/lib64 gsi -certdir:/etc/grid-security/certificates \
-cert:/etc/grid-security/hostcert.pem \
-key:/etc/grid-security/xrd/hostkey.pem \
-crl:1 \
-authzfun:libXrdLcmaps.so \
-authzfunparms:lcmapscfg=/etc/lcmaps/lcmaps.db,loglevel=1,policy=mapping_pol \
-gmapopt:10 -gmapto:0
where we configure the xrootd service to call out to the library we built (and, unlike with gridftp, we have to specify which policy to use from the file - gridftp will just use the only policy present if there's exactly one).
You'll notice we need a second copy of the hostkey, because the xrootd service doesn't run as the same user as the gridftp service, but gridftp won't let you have a hostkey which is accessible by more than one user - so each service gets its own copy.
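Something like this does the trick (a sketch, assuming the xrootd daemon runs as the xrootd user):

mkdir -p /etc/grid-security/xrd
install -o xrootd -g xrootd -m 400 /etc/grid-security/hostkey.pem /etc/grid-security/xrd/hostkey.pem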
An example:
Once you configure your authdb for the capability mapping (something like the sketch below), you're ready to go!
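For the dteam example in the logs below, the relevant authdb entry might look something like this (a sketch - the exact path form depends on how your ceph backend presents pool names):

u dteamaccount dteam lr

which grants dteamaccount lookup (l) and read (r) on the dteam pool, matching the RETR operation being allowed below.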
You can see it all working in the LCMAPS logs when I do a transfer with a VOMS-enabled proxy (using, in this case, globus-url-copy, but it's the same with xrdcp):
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: Starting policy: mapping_pol
... (some certificate verification) ...
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_plugin_verify_proxy-plugin_run(): verify proxy plugin succeeded
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_plugin_c_pep-plugin_run(): Using endpoint OURARGUSENDPOINT, try #1
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_plugin_c_pep-plugin_run(): c_pep plugin succeeded
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_gridmapfile: Found mapping dteamaccount for "/dteam/*" (line 1)
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_voms_localaccount-plugin_run(): voms_localaccount plugin succeeded
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: lcmaps_dummy_good-plugin_run(): good plugin succeeded
Oct 3 15:33:25 cephs02 globus-gridftp-server: lcmaps: LCMAPS CRED FINAL: mapped uid:'xxx',pgid:'xxx',sgid:'xxx',sgid:'xxx'
Oct 3 15:33:25 cephs02 globus-gridftp-server: Callout to "LCMAPS" returned local user (service file): "dteamaccount"
and then we go into the gridftp.log for the authdb check:
[344940] Thu Oct 3 15:33:25 2019 :: globus_l_gfs_ceph_send: started
[344940] Thu Oct 3 15:33:25 2019 :: globus_l_gfs_ceph_send: rolename is dteamaccount
[344940] Thu Oct 3 15:33:25 2019 :: globus_l_gfs_ceph_send: pathname: dteam:testfile1/
[344940] Thu Oct 3 15:33:25 2019 :: INFO globus_l_gfs_ceph_send: acc.success: 'RETR' operation allowed
[344940] Thu Oct 3 15:33:25 2019 :: ceph_posix_stat64 : pathname = /dteam:testfile1
where our capabilities are checked (and the dteamaccount is, indeed, allowed to READ from objects in the dteam pool).