From: Kevin Wolf <kwolf@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Hanna Czenczek <hreitz@redhat.com>,
qemu-block@nongnu.org, qemu-devel@nongnu.org,
Fiona Ebner <f.ebner@proxmox.com>, John Snow <jsnow@redhat.com>
Subject: Re: [PATCH for-8.0] ide: Fix manual in-flight count for TRIM BH
Date: Thu, 9 Mar 2023 18:46:35 +0100
Message-ID: <ZAobe/wtsf//YGHJ@redhat.com>
In-Reply-To: <3e695f64-13bb-1311-6cd6-09bffc312873@redhat.com>
On 09.03.2023 at 14:59, Paolo Bonzini wrote:
> On 3/9/23 13:31, Hanna Czenczek wrote:
> > On 09.03.23 13:08, Paolo Bonzini wrote:
> > > On Thu, Mar 9, 2023 at 1:05 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
> > > > I think having to do this is problematic, because blk_drain() should
> > > > leave no pending operations behind.
> > > >
> > > > Here it seems okay because you do it in a controlled situation, but the
> > > > main thread can also issue blk_drain(), or worse bdrv_drained_begin(),
> > > > and there would be pending I/O operations when it returns.
> >
> > Not really. We would stop in the middle of a trim that processes a list
> > of discard requests. So I see it more like stopping in the middle of
> > anything that processes guest requests. Once drain ends, we continue
> > processing them, and that’s not exactly pending I/O.
> >
> > There is a pending object in s->bus->dma->aiocb on the IDE side, so
> > there is a pending DMA operation, but naïvely, I don’t see that as a
> > problem.
>
> What about the bdrv_drain_all() when a VM stops? Would the guest continue to
> access memory and disks after bdrv_drain() returns?
That one shouldn't be a problem because the devices are stopped before
the backends.
> Migration could also be a problem, because the partial TRIM would not be
> recorded in the s->bus->error_status field of IDEState (no surprise there,
> it's not an error). Also, errors happening after bdrv_drain() might not be
> migrated correctly.
Yes, I think it would be good to have the I/O operation fully completed
on the IDE level rather than just in the block layer.
> > Or the issue is generally that IDE uses dma_* functions, which might
> > cause I/O functions to be run from new BHs (I guess through
> > reschedule_dma()?).
None of those complicated scenarios actually. The problem solved by the
request queuing is simply that nothing else stops the guest from
submitting new requests to drained nodes if the CPUs are still running.
Drain uses aio_disable_external() to disable processing of external
events, in particular the ioeventfd used by virtio-blk and virtio-scsi.
But IDE submits requests through MMIO or port I/O, i.e. the vcpu thread
exits to userspace and calls directly into the IDE code, so it's
completely unaffected by aio_disable_external().
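To make that concrete, the request queuing on the BlockBackend level amounts
to roughly the following (an untested sketch from memory; the exact function
and field names may not match what is actually in block-backend.c):

static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
{
    assert(blk->in_flight > 0);

    if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
        /* Park the request until the drained section ends, so it never
         * reaches a drained node even though the guest can still submit
         * it through MMIO or port I/O while the vcpus are running. */
        blk_dec_in_flight(blk);
        qemu_co_queue_wait(&blk->queued_requests, NULL);
        blk_inc_in_flight(blk);
    }
}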
> Ah, you mean that you can have pending I/O operations while blk->in_flight
> is zero? That would be a problem indeed. We already have BlockDevOps for
> ide-cd and ide-hd; should we add a .drained_poll callback there?
To be more precise, you suggested in the call that .drained_poll should
return that IDE is still busy while aiocb != NULL. Without having looked
at the code in detail yet, that seems to make sense to me. And maybe
even the blk_inc/dec_in_flight() pair can then go away completely, because
IDE then takes care of its drain state itself.
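Something along these lines, I imagine (untested, just to make the idea
concrete; it assumes BlockDevOps grows a .drained_poll callback as you
suggest, and the struct layout is from memory):

static bool ide_drained_poll(void *opaque)
{
    IDEState *s = opaque;

    /* Keep the drained section polling while IDE still has a DMA/TRIM
     * AIOCB pending, so drain only returns once the request has fully
     * completed on the IDE level as well. */
    return s->bus->dma->aiocb != NULL;
}

static const BlockDevOps ide_block_ops = {
    /* ... existing callbacks for ide-hd/ide-cd ... */
    .drained_poll = ide_drained_poll,
};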
Kevin