From: Hanna Czenczek <hreitz@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org,
Ilya Dryomov <idryomov@gmail.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [PATCH] block/rbd: Do not use BDS's AioContext
Date: Wed, 12 Feb 2025 15:27:26 +0100
Message-ID: <807a4908-d087-4c69-ba56-07c60545405e@redhat.com>
In-Reply-To: <Z6yhczeIJ1XIej3M@redhat.com>
On 12.02.25 14:26, Kevin Wolf wrote:
> On 12.02.2025 at 10:32, Hanna Czenczek wrote:
>> RBD schedules the request completion code (qemu_rbd_finish_bh()) to run
>> in the BDS's AioContext. The intent seems to be to run it in the same
>> context that the original request coroutine ran in, i.e. the thread on
>> whose stack the RBDTask object exists (see qemu_rbd_start_co()).
>>
>> However, with multiqueue, that thread is not necessarily the same as the
>> BDS's AioContext. Instead, we need to remember the actual AioContext
>> and schedule the completion BH there.
>>
>> Buglink: https://issues.redhat.com/browse/RHEL-67115
> Please add a short summary of what actually happens to the commit
> message. I had to check the link to remember what the symptoms are.
Sure. The problem is, I don’t know exactly what’s going wrong (it looked
like a coroutine being rescheduled after it was already done), and I don’t
know how this patch fixes it.
>> Reported-by: Junyao Zhao <junzhao@redhat.com>
>> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
>> ---
>> I think I could also drop RBDTask.ctx and just use
>> `qemu_coroutine_get_aio_context(RBDTask.co)` instead, but this is the
>> version of the patch that was tested and confirmed to fix the issue (I
>> don't have a local reproducer), so I thought I'd post this first.
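For illustration, that alternative would look roughly like this (a sketch
only, not the version that was tested):

    static void qemu_rbd_completion_cb(rbd_completion_t c, RBDTask *task)
    {
        task->ret = rbd_aio_get_return_value(c);
        rbd_aio_release(c);
        /*
         * Untested variant: derive the AioContext from the coroutine
         * instead of storing it in RBDTask.ctx.
         */
        aio_bh_schedule_oneshot(qemu_coroutine_get_aio_context(task->co),
                                qemu_rbd_finish_bh, task);
    }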
> Did you figure out why it even makes a difference in which thread
> qemu_rbd_finish_bh() runs? For context:
>
> static void qemu_rbd_finish_bh(void *opaque)
> {
>     RBDTask *task = opaque;
>     task->complete = true;
>     aio_co_wake(task->co);
> }
>
> This looks as if it should be working in any thread, except maybe for a
> missing barrier after updating task->complete - but I think the failure
> mode for that would be a hang in qemu_rbd_start_co().
Yes, I thought the same thing. All I could imagine was that reading
task->co might return the wrong value, but given how long ago that field
must have been set, that seems quite unlikely (to say the least). In
addition, qemu_rbd_completion_cb() already reads the object from a
different thread, and that seems to work fine.
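For reference, the waiting side in qemu_rbd_start_co() is essentially just
this (paraphrased from block/rbd.c):

    /* Wait for qemu_rbd_finish_bh() to set task.complete and wake us. */
    while (!task.complete) {
        qemu_coroutine_yield();
    }

So if the store to task->complete were not visible here, the coroutine
would just yield again with nobody left to wake it, i.e. a hang, as you
say, rather than the symptoms we are seeing.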
Really, all I know is that the notion of a BDS’s AioContext no longer
makes sense in a multiqueue I/O path, so this should be scheduled in the
I/O’s AioContext (just conceptually speaking), and that this seems to
fix the bug.
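In code terms, the distinction is roughly this (a sketch, not actual
driver code):

    /* In the request coroutine, e.g. qemu_rbd_start_co(): */
    AioContext *bds_ctx = bdrv_get_aio_context(bs);       /* the BDS's "home" context */
    AioContext *req_ctx = qemu_get_current_aio_context(); /* where this request runs */

    /*
     * Without multiqueue, bds_ctx == req_ctx.  With multiqueue, requests
     * can be submitted from several iothreads, so req_ctx may differ from
     * bds_ctx, and the completion BH has to target req_ctx.
     */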
>>  block/rbd.c | 10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/block/rbd.c b/block/rbd.c
>> index af984fb7db..9d4e0817e0 100644
>> --- a/block/rbd.c
>> +++ b/block/rbd.c
>> @@ -102,7 +102,7 @@ typedef struct BDRVRBDState {
>>  } BDRVRBDState;
>>
>>  typedef struct RBDTask {
>> -    BlockDriverState *bs;
>> +    AioContext *ctx;
>>      Coroutine *co;
>>      bool complete;
>>      int64_t ret;
>> @@ -1269,8 +1269,7 @@ static void qemu_rbd_completion_cb(rbd_completion_t c, RBDTask *task)
>>  {
>>      task->ret = rbd_aio_get_return_value(c);
>>      rbd_aio_release(c);
>> -    aio_bh_schedule_oneshot(bdrv_get_aio_context(task->bs),
>> -                            qemu_rbd_finish_bh, task);
>> +    aio_bh_schedule_oneshot(task->ctx, qemu_rbd_finish_bh, task);
>>  }
>>
>>  static int coroutine_fn qemu_rbd_start_co(BlockDriverState *bs,
>> @@ -1281,7 +1280,10 @@ static int coroutine_fn qemu_rbd_start_co(BlockDriverState *bs,
>>                                            RBDAIOCmd cmd)
>>  {
>>      BDRVRBDState *s = bs->opaque;
>> -    RBDTask task = { .bs = bs, .co = qemu_coroutine_self() };
>> +    RBDTask task = {
>> +        .ctx = qemu_get_current_aio_context(),
>> +        .co = qemu_coroutine_self(),
>> +    };
>>      rbd_completion_t c;
>>      int r;
> Nothing wrong I can see about the change, but I don't understand why it
> fixes the problem.
Me neither. But if this patch had been part of one of the original
multiqueue series (without pointing out the linked bug), would there
have been any argument against it?
It is indeed a problem that I don’t understand what’s happening. But to be
even more honest, I have to admit that I can never claim to truly
understand what’s happening in a multi-threaded asynchronous C
environment; even more so when the reproducer is installing Windows on RBD.
Hanna