From: Fabiano Rosas <farosas@suse.de>
To: "Daniel P. Berrangé" <berrange@redhat.com>,
"Peter Xu" <peterx@redhat.com>
Cc: qemu-devel@nongnu.org, armbru@redhat.com,
Claudio Fontana <cfontana@suse.de>
Subject: Re: [PATCH v6 00/23] migration: File based migration with multifd and mapped-ram
Date: Mon, 04 Mar 2024 10:09:25 -0300 [thread overview]
Message-ID: <87bk7unny2.fsf@suse.de> (raw)
In-Reply-To: <ZeXBsR0ctl4evdYb@redhat.com>
Daniel P. Berrangé <berrange@redhat.com> writes:
> On Mon, Mar 04, 2024 at 08:35:36PM +0800, Peter Xu wrote:
>> Fabiano,
>>
>> On Thu, Feb 29, 2024 at 12:29:54PM -0300, Fabiano Rosas wrote:
>> > => guest: 128 GB RAM - 120 GB dirty - 1 vcpu in tight loop dirtying memory
>>
>> I'm curious how long the final fdatasync() normally takes for you
>> when you run this test.

I haven't looked at the fdatasync() in isolation. I'll do some
measurements soon.
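For reference, the fdatasync() cost can be measured in isolation with something as simple as the sketch below (plain Python, not QEMU code; the file and the 64 MiB size are made up for illustration):

```python
import os
import tempfile
import time

def timed_fdatasync(fd):
    """Time a single fdatasync() call on an already-written fd."""
    start = time.perf_counter()
    os.fdatasync(fd)
    return time.perf_counter() - start

# Dirty some data through the page cache, then measure the flush alone.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"\0" * (64 * 1024 * 1024))  # 64 MiB of buffered writes
    f.flush()                            # push Python's buffer to the kernel
    elapsed = timed_fdatasync(f.fileno())
    print(f"fdatasync took {elapsed:.3f}s")
```

The interesting number is how this scales with the amount of dirty page cache, which is what the big mapped-ram snapshot exercises.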
>>
>> I finally got hold of a relatively large system today and gave it a quick
>> shot: a 128G (100G busy dirty) mapped-ram snapshot with 8 multifd channels.
>> The migration save/load works fine, so I don't think there's anything wrong
>> with the patchset. However, when the save completes (I'll need to stop the
>> workload, as my disk isn't fast enough I guess..) I always hit a very long
>> hang of QEMU in fdatasync() on XFS, during which the main thread is in
>> UNINTERRUPTIBLE state.
> That isn't very surprising. If you don't have O_DIRECT enabled, then
> all the disk I/O from the migration is going to sit in the page cache,
> and the fdatasync() is then likely to trigger writing out a lot of data.
>
> Blocking the main QEMU thread, though, is pretty unhelpful. That suggests
> the data sync needs to be moved to a non-main thread.
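As an illustration of that idea (hypothetical names, plain Python threading as a stand-in for QEMU's actual thread and channel APIs), the blocking sync can be handed to a worker thread that reports completion, so the main loop only polls a flag instead of sleeping uninterruptibly in the kernel:

```python
import os
import threading

class BackgroundSync:
    """Run fdatasync() off the calling thread; poll done() from a main loop."""

    def __init__(self, fd):
        self._fd = fd
        self._done = threading.Event()
        self.error = None
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        try:
            os.fdatasync(self._fd)  # may block for a long time, but not us
        except OSError as e:
            self.error = e
        finally:
            self._done.set()

    def done(self):
        """Non-blocking check, suitable for an event loop."""
        return self._done.is_set()

    def wait(self):
        """Block until the sync finishes; returns the error, if any."""
        self._thread.join()
        return self.error
```

The main loop would keep servicing monitor commands and only check done() (or take a completion callback) once the flush finishes.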

Perhaps we could move the fsync to the same spot as the multifd thread
sync, instead of having one big one at the end? I'm not sure how that
looks with concurrency in the mix.
I'll have to experiment a bit.
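The shape of that experiment might be something like the sketch below (a rough illustration only: the chunk sizes and sync interval are made up, and os.fdatasync stands in for whatever flush primitive the file channel ends up using):

```python
import os

def write_with_interleaved_sync(fd, chunks, sync_every=4):
    """Write chunks, flushing at periodic 'sync points' so that dirty
    pages are written back throughout the migration instead of piling
    up for one huge fdatasync() at the very end."""
    for i, chunk in enumerate(chunks, 1):
        os.write(fd, chunk)
        if i % sync_every == 0:
            os.fdatasync(fd)  # amortized flush at the sync point
    os.fdatasync(fd)  # final sync should now have little left to write back
```

With multifd in the mix, each channel writing its own file region would presumably need its own flush, or the flushes would have to be serialized against the channel writes.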
Thread overview: 41+ messages
2024-02-29 15:29 [PATCH v6 00/23] migration: File based migration with multifd and mapped-ram Fabiano Rosas
2024-02-29 15:29 ` [PATCH v6 01/23] migration/multifd: Cleanup multifd_recv_sync_main Fabiano Rosas
2024-02-29 15:29 ` [PATCH v6 02/23] io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file Fabiano Rosas
2024-02-29 15:29 ` [PATCH v6 03/23] io: Add generic pwritev/preadv interface Fabiano Rosas
2024-02-29 15:29 ` [PATCH v6 04/23] io: implement io_pwritev/preadv for QIOChannelFile Fabiano Rosas
2024-02-29 15:29 ` [PATCH v6 05/23] io: fsync before closing a file channel Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 06/23] migration/qemu-file: add utility methods for working with seekable channels Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 07/23] migration/ram: Introduce 'mapped-ram' migration capability Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 08/23] migration: Add mapped-ram URI compatibility check Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 09/23] migration/ram: Add outgoing 'mapped-ram' migration Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 10/23] migration/ram: Add incoming " Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 11/23] tests/qtest/migration: Add tests for mapped-ram file-based migration Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 12/23] migration/multifd: Rename MultiFDSend|RecvParams::data to compress_data Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 13/23] migration/multifd: Decouple recv method from pages Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 14/23] migration/multifd: Allow multifd without packets Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 15/23] migration/multifd: Allow receiving pages " Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 16/23] migration/multifd: Add a wrapper for channels_created Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 17/23] migration/multifd: Add outgoing QIOChannelFile support Fabiano Rosas
2024-03-01 1:43 ` Peter Xu
2024-02-29 15:30 ` [PATCH v6 18/23] migration/multifd: Add incoming " Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 19/23] migration/multifd: Prepare multifd sync for mapped-ram migration Fabiano Rosas
2024-03-01 1:45 ` Peter Xu
2024-02-29 15:30 ` [PATCH v6 20/23] migration/multifd: Support outgoing mapped-ram stream format Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 21/23] migration/multifd: Support incoming " Fabiano Rosas
2024-02-29 15:30 ` [PATCH v6 22/23] migration/multifd: Add mapped-ram support to fd: URI Fabiano Rosas
2024-03-01 1:47 ` Peter Xu
2024-02-29 15:30 ` [PATCH v6 23/23] tests/qtest/migration: Add a multifd + mapped-ram migration test Fabiano Rosas
2024-03-01 1:50 ` [PATCH v6 00/23] migration: File based migration with multifd and mapped-ram Peter Xu
2024-03-01 7:18 ` Markus Armbruster
2024-03-01 8:11 ` Daniel P. Berrangé
2024-03-01 8:37 ` Peter Xu
2024-03-04 12:35 ` Peter Xu
2024-03-04 12:42 ` Daniel P. Berrangé
2024-03-04 12:53 ` Peter Xu
2024-03-04 13:12 ` Peter Xu
2024-03-04 20:15 ` Fabiano Rosas
2024-03-04 21:04 ` Daniel P. Berrangé
2024-03-05 1:51 ` Peter Xu
2024-03-05 15:23 ` Fabiano Rosas
2024-03-04 13:09 ` Fabiano Rosas [this message]
2024-03-04 13:17 ` Peter Xu