From: Steve Dickson <SteveD@redhat.com>
To: NeilBrown <neilb@suse.de>
Cc: Linux NFS Mailing list <linux-nfs@vger.kernel.org>
Subject: Re: A couple systemd questions
Date: Mon, 12 Jan 2015 07:27:15 -0500 [thread overview]
Message-ID: <54B3BDA3.2010009@RedHat.com> (raw)
In-Reply-To: <20150112085144.6460dc27@notabene.brown>
On 01/11/2015 02:51 PM, NeilBrown wrote:
> On Sun, 11 Jan 2015 10:24:39 -0500 Steve Dickson <SteveD@redhat.com> wrote:
>
>> Hey Neil,
>>
>> You being the architect of the systemd scripts ;-) I have a
>> couple of questions.
>>
>> The nfs-server service brings both the rpc.mountd and rpc.idmapd
>> daemons up when the service is started, but only
>> brings rpc.mountd down when the service is stopped.
>>
>> I'm assuming that was done because the client was
>> using rpc.idmapd to do its id mapping, but that is no
>> longer the case with some clients. They use the
>> keyrings via the nfsidmap command to do the id mapping.
>
> Your assumption is correct.
> Having nfs-client only require nfs-idmap on some configs is not easy to do
> with systemd... at least I don't know an easy way.
> We probably need to distribute two versions of some config file(s), one where
> the dependency exists, one where it doesn't.
>
> How does one tell whether idmapd is needed or not?
The existence of /etc/request-key.d/id_resolver.conf and /usr/sbin/nfsidmap?
The kernel is hard-coded to do a keyring upcall first; then, if that
fails, it will try an upcall to rpc.idmapd. So at this point it
makes sense to use the keyring upcalls...
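For reference, the keyring upcall is wired up through request-key(8); a
minimal /etc/request-key.d/id_resolver.conf would look something like
this (exact formatting may differ per distribution; check your packaging):

```
# /etc/request-key.d/id_resolver.conf
# Route id_resolver keyring upcalls from the kernel to nfsidmap.
# %k is the key serial number, %d the key description to resolve.
create	id_resolver	*	*	/usr/sbin/nfsidmap %k %d
```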
>
>
>>
>> So I'm thinking the nfs-server service should bring
>> both daemons up and down when the server is started
>> and stopped since the server is the only service using
>> the rpc.idmapd.
>>
>> My attempt at doing this was to change the
>> nfs-idmap service to the following:
>>
>> [Unit]
>> Description=NFSv4 ID-name mapping service
>> Requires=var-lib-nfs-rpc_pipefs.mount
>> After=var-lib-nfs-rpc_pipefs.mount
>> After=network.target
>> PartOf=nfs-server.service
>> PartOf=nfs-utils.service
>>
>> Wants=nfs-config.service
>> After=nfs-config.service
>>
>> [Service]
>> EnvironmentFile=-/run/sysconfig/nfs-utils
>> Type=forking
>> ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS
>>
>> Almost exactly what the rpc-mountd service does.
>>
>> Now I thought the "PartOf=nfs-server.service" would cause
>> rpc.idmapd to come down when the server came down, but that
>> does not seem to be the case... The only way I can get
>> nfs-idmap service to come down is to explicitly stop it...
>> What am I missing?
>
> You are missing the same thing that I am missing.
> I wonder if systemd gets confused by multiple PartOf directives.
>
> The man page says:
>
> When systemd stops or restarts the units listed here, the action
> is propagated to this unit.
>
> So if systemd is told to stop nfs-server.service, it should also stop
> nfs-idmap.service...
That's how I interpreted it as well... I'll talk with the systemd guys
to see what they say
> Maybe try with only one PartOf? Or with
> PartOf=nfs-server.service nfs-utils.service
I did try this and it didn't seem to matter...
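For what it's worth, the propagation can also be expressed as a drop-in,
which makes it easy to test one PartOf at a time without touching the
shipped unit (the drop-in path and unit name here are assumptions):

```
# /etc/systemd/system/nfs-idmap.service.d/partof.conf (hypothetical drop-in)
[Unit]
# Stopping or restarting nfs-server.service should propagate
# a stop to this unit via PartOf.
PartOf=nfs-server.service
```

After a `systemctl daemon-reload`, `systemctl show nfs-idmap.service -p PartOf`
shows what systemd actually parsed out of the unit plus drop-ins.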
>
>
>>
>> Secondly, in all of the services where rpcbind is needed you
>> reference the rpcbind.target instead of the rpcbind.service.
>> Why is that?
>
> Because rpcbind.target is a public name (even mentioned in doco). I had this
> idea that when systemd unit files from one package need to interact with
> those from another package, they should be careful to only use "public"
> interfaces in case one package changes.
> And I thought "rpcbind.target" was like a public interface.
>
> I no longer think that. The ".target" concept doesn't seem to be nearly as
> useful as I thought it would be.
>
> I would not object to changing nfs-utils to use "rpcbind.service" if that
> would help anyone at all.
I've got this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1171603
talking about how the nfs-server service should be using rpcbind.service
instead of rpcbind.target. The reasoning in the bz might be a
bit off... but it did get me thinking about it...
>
>>
>> The reason I'm asking is, I'm seeing a problem where rpc.statd fails
>> to start because nfs_svc_create() fails. Meaning it was unable
>> either to create its UDP/TCP sockets or the registration with rpcbind failed.
>>
>> I'm thinking it's the latter, due to a race between rpcbind and statd
>> starting. I've seen races like this before with systemd services...
>
> but but but, the whole point of before/after is to avoid races...
Understood... Maybe targets are handled differently than services?
>
>>
>> So I'm thinking that race would not exist if the rpc-statd service
>> would use rpcbind.service in the "Requires=" and "After="
>> statements instead of rpcbind.target. Right??
>
> Don't know. Try it and see. If it works, use it.
Here is the race I'm seeing:
https://bugzilla.redhat.com/show_bug.cgi?id=1175005#c4
It's not very reproducible, but I have seen it a couple of other times
since we moved to the new systemd scripts...
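The rpcbind.service change being discussed can be sketched as a drop-in
(a sketch; the drop-in path and file name are assumptions):

```
# /etc/systemd/system/rpc-statd.service.d/rpcbind.conf (hypothetical)
[Unit]
# Order statd after the rpcbind service itself rather than
# rpcbind.target, so statd only starts once rpcbind is actually up
# and able to accept registrations.
Requires=rpcbind.service
After=rpcbind.service
```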
Thanks for the time!!
steved.
Thread overview: 3+ messages
2015-01-11 15:24 A couple systemd questions Steve Dickson
2015-01-11 19:51 ` NeilBrown
2015-01-12 12:27 ` Steve Dickson [this message]