From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Juan Quintela <quintela@redhat.com>,
Fabiano Rosas <farosas@suse.de>,
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>,
Hanna Reitz <hreitz@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
Fam Zheng <fam@euphon.net>, Eric Blake <eblake@redhat.com>,
qemu-block@nongnu.org
Subject: Re: [PATCH v2 6/6] nbd/server: introduce NBDClient->lock to protect fields
Date: Thu, 21 Dec 2023 18:38:16 +0100
Message-ID: <ZYR4CKeZD7EmCYuv@redhat.com>
In-Reply-To: <20231221153548.1752005-7-stefanha@redhat.com>
On 21.12.2023 16:35, Stefan Hajnoczi wrote:
> NBDClient has a number of fields that are accessed by both the export
> AioContext and the main loop thread. When the AioContext lock is removed
> these fields will need another form of protection.
>
> Add NBDClient->lock and protect fields that are accessed by both
> threads. Also add assertions where possible; where that isn't
> possible, add doc comments stating the assumptions about which
> threads access the fields and which locks must be held.
>
> Note this patch moves the client->recv_coroutine assertion from
> nbd_co_receive_request() to nbd_trip() where client->lock is held.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
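
Just to state the pattern I understand this to establish: any field
that both the export AioContext and the main loop thread touch is now
only accessed with client->lock held. A hypothetical accessor (not
from this patch, just for illustration) would look like:

/*
 * Hypothetical helper, not part of this series: client->nb_requests is
 * accessed by both threads, so it is only read with client->lock held.
 */
static int nbd_client_pending_requests(NBDClient *client)
{
    QEMU_LOCK_GUARD(&client->lock);
    return client->nb_requests;
}
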
> +/* Runs in export AioContext */
> +static void nbd_wake_read_bh(void *opaque)
> +{
> +    NBDClient *client = opaque;
> +    qio_channel_wake_read(client->ioc);
> +}
> +
>  static bool nbd_drained_poll(void *opaque)
>  {
>      NBDExport *exp = opaque;
>      NBDClient *client;
> 
> +    assert(qemu_in_main_thread());
> +
>      QTAILQ_FOREACH(client, &exp->clients, next) {
> -        if (client->nb_requests != 0) {
> -            /*
> -             * If there's a coroutine waiting for a request on nbd_read_eof()
> -             * enter it here so we don't depend on the client to wake it up.
> -             */
> -            if (client->recv_coroutine != NULL && client->read_yielding) {
> -                qio_channel_wake_read(client->ioc);
> +        WITH_QEMU_LOCK_GUARD(&client->lock) {
> +            if (client->nb_requests != 0) {
> +                /*
> +                 * If there's a coroutine waiting for a request on nbd_read_eof()
> +                 * enter it here so we don't depend on the client to wake it up.
> +                 *
> +                 * Schedule a BH in the export AioContext to avoid missing the
> +                 * wake up due to the race between qio_channel_wake_read() and
> +                 * qio_channel_yield().
> +                 */
> +                if (client->recv_coroutine != NULL && client->read_yielding) {
> +                    aio_bh_schedule_oneshot(nbd_export_aio_context(client->exp),
> +                                            nbd_wake_read_bh, client);
> +                }
Doesn't the condition have to move inside the BH to avoid the race?
Checking client->recv_coroutine != NULL could work here because I don't
think it can go from NULL to something while we're quiescing, but
client->read_yielding can still change until the BH runs and we know
that the nbd_trip() coroutine has yielded. It seems easiest to just
move the whole condition to the BH.
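
Untested sketch of what I mean, with the whole condition re-checked
under client->lock in the BH itself, and the wakeup done after dropping
the lock so we don't enter the coroutine while holding it (the names
are from this patch, but the code below is mine, not from the series):

/* Runs in export AioContext */
static void nbd_wake_read_bh(void *opaque)
{
    NBDClient *client = opaque;
    bool wake;

    WITH_QEMU_LOCK_GUARD(&client->lock) {
        /*
         * nbd_drained_poll() ran in the main loop; by the time this BH
         * runs, recv_coroutine/read_yielding may have changed, so look
         * at them again under the lock.
         */
        wake = client->recv_coroutine != NULL && client->read_yielding;
    }

    if (wake) {
        qio_channel_wake_read(client->ioc);
    }
}
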
> +                return true;
>              }
> -
> -            return true;
>          }
>      }
The rest looks good to me.
Kevin