From: Stefan Hajnoczi <stefanha@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Emanuele Giuseppe Esposito <eesposit@redhat.com>,
Kevin Wolf <kwolf@redhat.com>,
qemu-block@nongnu.org, Hanna Reitz <hreitz@redhat.com>,
Stefan Weil <sw@weilnetz.de>, Fam Zheng <fam@euphon.net>,
Paolo Bonzini <pbonzini@redhat.com>,
qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [PATCH 2/2] thread-pool: use ThreadPool from the running thread
Date: Mon, 24 Oct 2022 14:49:47 -0400
Message-ID: <Y1beS+QAuNx/Zdck@fedora>
In-Reply-To: <Y1F1uU5bAQw80mG0@work-vm>
On Thu, Oct 20, 2022 at 05:22:17PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > On Mon, Oct 03, 2022 at 10:52:33AM +0200, Emanuele Giuseppe Esposito wrote:
> > >
> > >
> > > Am 30/09/2022 um 17:45 schrieb Kevin Wolf:
> > > > Am 30.09.2022 um 14:17 hat Emanuele Giuseppe Esposito geschrieben:
> > > >> Am 29/09/2022 um 17:30 schrieb Kevin Wolf:
> > > >>> Am 09.06.2022 um 15:44 hat Emanuele Giuseppe Esposito geschrieben:
> > > >>>> Remove usage of aio_context_acquire by always submitting work items
> > > >>>> to the current thread's ThreadPool.
> > > >>>>
> > > >>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > >>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > > >>>
> > > >>> The thread pool is used by things outside of the file-* block drivers,
> > > >>> too. Even outside the block layer. Not all of these seem to submit work
> > > >>> in the same thread.
> > > >>>
> > > >>>
> > > >>> For example:
> > > >>>
> > > >>> postcopy_ram_listen_thread() -> qemu_loadvm_state_main() ->
> > > >>> qemu_loadvm_section_start_full() -> vmstate_load() ->
> > > >>> vmstate_load_state() -> spapr_nvdimm_flush_post_load(), which has:
> > > >>>
> > > >>> ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> > ^^^^^^^^^^^^^^^^^^^
> >
> > aio_get_thread_pool() isn't thread safe either:
> >
> > ThreadPool *aio_get_thread_pool(AioContext *ctx)
> > {
> >     if (!ctx->thread_pool) {
> >         ctx->thread_pool = thread_pool_new(ctx);
> >         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > Two threads could race in aio_get_thread_pool().
> >
> > I think post-copy is broken here: it's calling code that was only
> > designed to be called from the main loop thread.
> >
> > I have CCed Juan and David.
>
> In theory the path that you describe there shouldn't happen - although
> there is perhaps not enough protection on the load side to stop it
> happening if presented with a bad stream.
> This is documented in docs/devel/migration.rst under 'Destination
> behaviour'; but to recap, during postcopy load we have a problem that we
> need to be able to load incoming iterative (ie. RAM) pages during the
> loading of normal devices, because the loading of a device may access
> RAM that's not yet been transferred.
>
> To do that, the device state of all the non-iterative devices (which I
> think includes your spapr_nvdimm) is serialised into a separate
> migration stream and sent as a 'package'.
>
> We read the package off the stream on the main thread, but don't process
> it until we fire off the 'listen' thread - which you spotted the
> creation of above; the listen thread now takes over reading the
> migration stream to process RAM pages, and since it's in the same
> format, it calls qemu_loadvm_state_main() - but it doesn't expect that
> stream to contain any devices other than the RAM devices; it's just
> expecting RAM.
>
> In parallel with that, the main thread carries on loading the contents
> of the 'package' - and that contains your spapr_nvdimm device (and any
> other 'normal' devices); but that's OK because that's the main thread.
>
> Now if something was very broken and sent a header for the spapr-nvdimm
> down the main thread rather than into the package then, yes, we'd
> trigger your case, but that shouldn't happen.
Thanks for explaining that. A way to restrict the listen thread to only
process RAM pages would be good, both as documentation and to prevent
invalid migration streams from causing problems.

As for Emanuele and Kevin's original question about this code: it seems
the thread pool won't be called from the listen thread.
Stefan