From: Eric Blake <eblake@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>, qemu-block@nongnu.org
Cc: dplotnikov@virtuozzo.com, vsementsov@virtuozzo.com,
den@virtuozzo.com, mreitz@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 4/4] block-backend: Queue requests while drained
Date: Thu, 25 Jul 2019 12:06:28 -0500
Message-ID: <dc36c911-377a-924b-c5b7-a11f6022b765@redhat.com>
In-Reply-To: <20190725162704.12622-5-kwolf@redhat.com>
On 7/25/19 11:27 AM, Kevin Wolf wrote:
> This fixes device like IDE that can still start new requests from I/O
> handlers in the CPU thread while the block backend is drained.
>
> The basic assumption is that in a drain section, no new requests should
> be allowed through a BlockBackend (blk_drained_begin/end don't exist,
> we get drain sections only on the node level). However, there are two
> special cases where requests should not be queued:
>
> 1. Block jobs: We already make sure that block jobs are paused in a
> drain section, so they won't start new requests. However, if the
> drain_begin is called on the job's BlockBackend first, it can happen
> that we deadlock because the job stays busy until it reaches a pause
> point - which it can't if it's requests aren't processed any more.
its (remember, "it's" is only okay if "it is" works as well)
>
> The proper solution here would be to make all requests through the
> job's filter node instead of using a BlockBackend. For now, just
> disabling request queuin on the job BlockBackend is simpler.
queuing
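(Side note for readers who don't have the patch body in front of them: I take it this opt-out ends up as a per-BlockBackend flag that the job flips on its own blk, along the lines of the sketch below -- the names here are my guesses, not quoted from the patch:

    /* sketch only: a per-BlockBackend switch that lets a block job's own
     * requests bypass the drain-time queuing described above */
    void blk_set_disable_request_queuing(BlockBackend *blk, bool disable)
    {
        blk->disable_request_queuing = disable;
    }

    /* ...called during block job setup, e.g.: */
    blk_set_disable_request_queuing(job->blk, true);

so the job's own I/O is never parked and it can still reach its pause point.)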
>
> 2. In test cases where making requests through bdrv_* would be
> cumbersome because we'd need a BdrvChild. As we already got the
> functionality to disable request queuing from 1., use it in tests,
> too, for convenience.
>
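For anyone skimming just this commit message, the queuing itself presumably amounts to parking an incoming coroutine request on a CoQueue while the BlockBackend is quiesced, roughly like this sketch (again, field and helper names are my guesses rather than quotes from the patch):

    /* sketch only -- illustrates the idea, not copied from the patch */
    static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
    {
        if (blk->quiesce_counter && !blk->disable_request_queuing) {
            /* drop our in-flight count so the drain section can complete */
            blk_dec_in_flight(blk);
            qemu_co_queue_wait(&blk->queued_requests, NULL);
            blk_inc_in_flight(blk);
        }
    }

with the matching drained_end then waking the queue back up so the parked requests resume once the drain section is over.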
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org