From: Peter Lieven <pl@kamp.de>
To: ronnie sahlberg <ronniesahlberg@gmail.com>
Cc: Kevin Wolf <kwolf@redhat.com>, qemu-devel <qemu-devel@nongnu.org>,
"open list:Block layer core" <qemu-block@nongnu.org>,
Max Reitz <mreitz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] block/nfs: add support for nfs_umount
Date: Thu, 5 Sep 2019 12:32:12 +0200
Message-ID: <024ab961-472a-76ee-03d2-abf3737e350e@kamp.de>
In-Reply-To: <CAN05THSJBYuG=GiZ1J10qci+8mT0qcd=mcG-n4sGZRjbdUMq4Q@mail.gmail.com>
On 05.09.19 at 12:28, ronnie sahlberg wrote:
> On Thu, Sep 5, 2019 at 8:16 PM Peter Lieven <pl@kamp.de> wrote:
>> On 05.09.19 at 12:05, ronnie sahlberg wrote:
>>> On Thu, Sep 5, 2019 at 7:43 PM Peter Lieven <pl@kamp.de> wrote:
>>>> On 04.09.19 at 11:34, Kevin Wolf wrote:
>>>>> On 03.09.2019 at 21:52, Peter Lieven wrote:
>>>>>>> On 03.09.2019 at 16:56, Kevin Wolf <kwolf@redhat.com> wrote:
>>>>>>>
>>>>>>>> On 03.09.2019 at 15:44, Peter Lieven wrote:
>>>>>>>> libnfs recently added support for unmounting. Add support
>>>>>>>> in Qemu too.
>>>>>>>>
>>>>>>>> Signed-off-by: Peter Lieven <pl@kamp.de>
>>>>>>> Looks trivial enough to review even for me. :-)
>>>>>>>
>>>>>>> Thanks, applied to the block branch.
>>>>>>>
>>>>>>> Kevin
>>>>>> I am not sure what the reason is, but with this patch I sometimes run
>>>>>> into nfs_process_read being called for a cdrom mounted from nfs after
>>>>>> I ejected it (and the whole nfs client context is already destroyed).
>>>>> Does this mean that nfs_umount() gets some response, but we don't
>>>>> properly wait for it? Or is some older request still in flight?
>>>> nfs_umount itself is a sync call and should only return when
>>>> the call is done. But there is an independent I/O handler in that
>>>> function polling on the fd (wait_for_nfs_reply in libnfs-sync.c).
>>>> This is why I thought the right solution is to stop the Qemu I/O
>>>> handler before calling nfs_close and nfs_umount. nfs_close also uses
>>>> this sync I/O handler, but for some reason it does not seem to cause
>>>> trouble.
>>>>
>>>> The other solution would be to use the async versions of close and
>>>> umount, but that would make the code in Qemu more complex.
>>>>
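
Just to illustrate what the async route would involve, here is a rough,
untested outline. I am assuming that libnfs exposes nfs_umount_async()
with its usual (nfs, cb, private_data) callback convention, and the
poll() loop is only a stand-in for the extra plumbing that would be
needed to drive the callback from Qemu's own event loop:

/* Untested outline, not a finished implementation.  Assumes
 * nfs_umount_async() follows the usual libnfs async convention;
 * needs <poll.h> and the NFSClient from block/nfs.c. */
static void umount_cb(int status, struct nfs_context *nfs,
                      void *data, void *private_data)
{
    *(bool *)private_data = true;   /* the UMNT call has completed */
}

static void nfs_client_umount_sketch(NFSClient *client)
{
    bool done = false;

    if (nfs_umount_async(client->context, umount_cb, &done) != 0) {
        return;                     /* could not even queue the call */
    }
    while (!done) {
        struct pollfd pfd = {
            .fd     = nfs_get_fd(client->context),
            .events = nfs_which_events(client->context),
        };
        if (poll(&pfd, 1, -1) < 0 ||
            nfs_service(client->context, pfd.revents) < 0) {
            break;                  /* transport error, give up */
        }
    }
}
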
>>> NFS umount is pretty messy, so I think you should continue using the
>>> sync version.
>>> In NFSv3 (there is no mount protocol in v4) the Mount call (fetch the
>>> root filehandle) and the Umount call (tell the server we should no
>>> longer show up in showmount -a output) are not part of the NFS
>>> protocol but a different service running on a separate port.
>>>
>>> This does not map well to libnfs since it is centered around a "struct
>>> nfs_context".
>>>
>>> To use nfs_umount() from QEMU I would suggest:
>>> 1, make sure all commands in flight have finished, because you will
>>> soon disconnect from the NFS server and will never receive any
>>> in-flight responses.
>>> 2, unregister the nfs->fh file descriptor from your event system,
>>> because the fd is about to be closed and there is a great chance it
>>> will be recycled for a completely different purpose if you open any
>>> other files from qemu.
>>> 3, call nfs_umount(). Internally this will close the socket to the
>>> NFS server, then go through the process of opening a new socket to
>>> the portmapper to discover the mount server, then close that socket
>>> and connect a new socket to the mount server and perform the UMNT
>>> call.
>>
>> What we currently do in Qemu is:
>>
>> 1) bdrv_drain
>> 2) bdrv_close, which in the end calls nfs_client_close from block/nfs.c.
>>
>> There we call:
>> 2a) nfs_close(client->fh)
>> 2b) aio_set_fd_handler(NULL)
>> 2c) nfs_destroy_context(client->context);
>>
>> My first patch added an nfs_umount between 2a) and 2b) so that we have:
>> 2a) nfs_close(client->fh)
>> 2b) nfs_umount(client->context)
>> 2c) aio_set_fd_handler(NULL)
>> 2d) nfs_destroy_context(client->context);
>>
>> This leads to triggering the assertion for an uninitialized
>> client->mutex, which is hit in an invocation of nfs_process_read after
>> nfs_destroy_context was called.
>>
>> If I change the order as follows, I see no more assertions:
>> 2a) aio_set_fd_handler(NULL)
>> 2b) nfs_close(client->fh)
>> 2c) nfs_umount(client->context)
>> 2d) nfs_destroy_context(client->context);
> That makes sense and looks correct to me.
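
Concretely, nfs_client_close in block/nfs.c would then end up roughly
like the sketch below (a sketch only, not the final patch; the exact
aio_set_fd_handler() argument list depends on the QEMU version, and the
NULL/false arguments just mean "remove the handler"):

static void nfs_client_close(NFSClient *client)
{
    if (client->context) {
        /* stop Qemu from polling the libnfs fd before any of the calls
         * below close or reuse sockets */
        aio_set_fd_handler(client->aio_context,
                           nfs_get_fd(client->context),
                           false, NULL, NULL, NULL, NULL);

        if (client->fh) {
            /* sync call, drives libnfs' own internal poll loop */
            nfs_close(client->context, client->fh);
        }

        /* also a sync call: reconnects to the mount daemon and sends
         * the UMNT call */
        nfs_umount(client->context);

        nfs_destroy_context(client->context);
        client->context = NULL;
    }
}
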
I am unsure whether we should acquire the client->mutex before we fiddle
with the aio handler, because in theory something could still happen on
the nfs_fd in the background.
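
Taking the lock there would look roughly like this (just a sketch;
whether it is actually needed is exactly what I am unsure about):

    /* Sketch: hold client->mutex while the fd handler is removed so a
     * concurrent nfs_process_read() cannot run against a client that is
     * being torn down; drop it again before the sync libnfs calls, which
     * poll the fd themselves. */
    qemu_mutex_lock(&client->mutex);
    aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
                       false, NULL, NULL, NULL, NULL);
    qemu_mutex_unlock(&client->mutex);
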
Best,
Peter