From: Fabiano Rosas <farosas@suse.de>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org, "Claudio Fontana" <cfontana@suse.de>,
jfehlig@suse.com, dfaggioli@suse.com, dgilbert@redhat.com,
"Daniel P . Berrangé" <berrange@redhat.com>,
"Juan Quintela" <quintela@redhat.com>
Subject: Re: [RFC PATCH v1 00/26] migration: File based migration with multifd and fixed-ram
Date: Fri, 31 Mar 2023 11:37:50 -0300
Message-ID: <87edp5oukh.fsf@suse.de>
In-Reply-To: <ZCYCE0llX9WANK18@x1n>
Peter Xu <peterx@redhat.com> writes:
> On Thu, Mar 30, 2023 at 03:03:10PM -0300, Fabiano Rosas wrote:
>> Hi folks,
>
> Hi,
>
>>
>> I'm continuing the work done last year to add a new format of
>> migration stream that can be used to migrate large guests to a single
>> file in a performant way.
>>
>> This is an early RFC with the previous code + my additions to support
>> multifd and direct IO. Let me know what you think!
>>
>> Here are the reference links for previous discussions:
>>
>> https://lists.gnu.org/archive/html/qemu-devel/2022-08/msg01813.html
>> https://lists.gnu.org/archive/html/qemu-devel/2022-10/msg01338.html
>> https://lists.gnu.org/archive/html/qemu-devel/2022-10/msg05536.html
>>
>> The series has 4 main parts:
>>
>> 1) File migration: A new "file:" migration URI. So "file:mig" does the
>> same as "exec:cat > mig". Patches 1-4 implement this;
>>
>> 2) Fixed-ram format: A new format for the migration stream. Puts guest
>> pages at their relative offsets in the migration file. This saves
>> space in the worst case of RAM utilization, because every page has a
>> fixed offset in the migration file, and (potentially) saves us time
>> because we can write pages independently in parallel. It also
>> gives alignment guarantees so we could use O_DIRECT. Patches 5-13
>> implement this;
>>
>> With patches 1-13 these two^ can be used with:
>>
>> (qemu) migrate_set_capability fixed-ram on
>> (qemu) migrate[_incoming] file:mig
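
To make the offset mapping a bit more concrete, here is a rough sketch of
the write side. The function and parameter names are made up for
illustration, not the API from the patches, and this ignores the stream
headers and metadata (e.g. the shadow bitmap from patch 21) that the real
format also has to place in the file:

    #include <errno.h>
    #include <unistd.h>

    /*
     * Illustrative only: in a fixed-ram layout the file position of a
     * page is a pure function of its position within its RAMBlock, so
     * pages can be written in any order, and by any number of threads,
     * without coordinating a shared stream position.
     */
    static int fixed_ram_save_page(int fd, const void *page,
                                   size_t page_size,
                                   off_t block_file_offset,
                                   size_t page_index)
    {
        off_t offset = block_file_offset + (off_t)(page_index * page_size);

        if (pwrite(fd, page, page_size, offset) != (ssize_t)page_size) {
            return -errno;
        }
        return 0;
    }
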
>
> Have you considered enabling the new fixed-ram format with postcopy when
> loading?
>
> Due to the linear offsetting of pages, I think it can achieve super fast VM
> loads thanks to O(1) lookup of pages and local page fault resolution.
>
I don't think we have looked that much at the loading side yet. Good to
know that it has potential to be faster. I'll look into it. Thanks for
the suggestion.
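
Just to spell out why the lookup would be O(1), here is a purely
illustrative sketch with made-up names, assuming a userfaultfd-driven
destination and a page-aligned host mapping: the faulting address gives
the page index within its RAMBlock, the index gives the file offset, so
one pread plus one UFFDIO_COPY could resolve the fault locally, without
scanning any stream:

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    /*
     * Illustrative only: resolve one postcopy fault straight from a
     * fixed-ram file.  'scratch' is a page_size buffer; host_base and
     * block_file_offset describe the faulting page's RAMBlock.
     */
    static int fixed_ram_resolve_fault(int uffd, int file_fd,
                                       void *fault_addr, void *host_base,
                                       off_t block_file_offset,
                                       size_t page_size, void *scratch)
    {
        size_t idx = ((uintptr_t)fault_addr - (uintptr_t)host_base)
                     / page_size;
        off_t offset = block_file_offset + (off_t)(idx * page_size);
        struct uffdio_copy copy = { 0 };

        /* One read at a known offset: no stream scanning needed. */
        if (pread(file_fd, scratch, page_size, offset)
            != (ssize_t)page_size) {
            return -errno;
        }

        copy.dst = (uintptr_t)host_base + idx * page_size;
        copy.src = (uintptr_t)scratch;
        copy.len = page_size;

        return ioctl(uffd, UFFDIO_COPY, &copy) ? -errno : 0;
    }
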
>>
>> --> new in this series:
>>
>> 3) MultiFD support: This is about making use of the parallelism
>> allowed by the new format. We just need the threading and page
>> queuing infrastructure that is already in place for
>> multifd. Patches 14-24 implement this;
>>
>> (qemu) migrate_set_capability fixed-ram on
>> (qemu) migrate_set_capability multifd on
>> (qemu) migrate_set_parameter multifd-channels 4
>> (qemu) migrate_set_parameter max-bandwidth 0
>> (qemu) migrate[_incoming] file:mig
>>
>> 4) Add a new "direct-io" parameter and enable O_DIRECT for the
>> properly aligned segments of the migration (mostly RAM). Patch 25.
>>
>> (qemu) migrate_set_parameter direct-io on
>>
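
For reference, the alignment requirement behind this is roughly what the
sketch below shows. It is not the patch code: it assumes fd was opened
with O_DIRECT (e.g. open(path, O_WRONLY | O_CREAT | O_DIRECT, 0600)) and
that page_size is a multiple of the filesystem block size:

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Illustrative only: O_DIRECT requires the buffer address, length
     * and file offset to be suitably aligned.  Because fixed-ram keeps
     * guest pages at page-aligned offsets, the RAM region of the file
     * can satisfy this; the unaligned parts of the stream would not go
     * through O_DIRECT, matching "mostly RAM" above.
     */
    static int save_page_odirect(int fd, const void *page,
                                 size_t page_size, off_t aligned_offset)
    {
        void *buf;
        int ret = 0;

        /* Bounce through a page-aligned buffer for O_DIRECT. */
        if (posix_memalign(&buf, page_size, page_size)) {
            return -ENOMEM;
        }
        memcpy(buf, page, page_size);

        if (pwrite(fd, buf, page_size, aligned_offset)
            != (ssize_t)page_size) {
            ret = -errno;
        }
        free(buf);
        return ret;
    }

In practice the guest RAM pages are already page-aligned in the host
mapping, so a bounce copy like this might not even be needed; it is only
here to make the alignment requirement explicit.
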
>> Thanks! Some data below:
>> =====
>>
>> Outgoing migration to file. NVMe disk. XFS filesystem.
>>
>> - Single migration runs of a stopped 32G guest with ~90% RAM usage. Guest
>> running `stress-ng --vm 4 --vm-bytes 90% --vm-method all --verify -t
>> 10m -v`:
>>
>> migration type | MB/s | pages/s | ms
>> ----------------+------+---------+------
>> savevm io_uring | 434 | 102294 | 71473
>
> So I assume this is the non-live migration scenario. Could you explain
> what io_uring means here?
>
This table is all non-live migration. This particular line is a snapshot
(hmp_savevm->save_snapshot). I thought it could be relevant because it
is another way by which we write RAM to disk.

The io_uring part is noise; I was initially under the impression that the
block device aio configuration affected this scenario.
>> file: | 3017 | 855862 | 10301
>> fixed-ram | 1982 | 330686 | 15637
>> ----------------+------+---------+------
>> fixed-ram + multifd + O_DIRECT
>> 2 ch. | 5565 | 1500882 | 5576
>> 4 ch. | 5735 | 1991549 | 5412
>> 8 ch. | 5650 | 1769650 | 5489
>> 16 ch. | 6071 | 1832407 | 5114
>> 32 ch. | 6147 | 1809588 | 5050
>> 64 ch. | 6344 | 1841728 | 4895
>> 128 ch. | 6120 | 1915669 | 5085
>> ----------------+------+---------+------
>
> Thanks,
Thread overview: 65+ messages
2023-03-30 18:03 [RFC PATCH v1 00/26] migration: File based migration with multifd and fixed-ram Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 01/26] migration: Add support for 'file:' uri for source migration Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 02/26] migration: Add support for 'file:' uri for incoming migration Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 03/26] tests/qtest: migration: Add migrate_incoming_qmp helper Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 04/26] tests/qtest: migration-test: Add tests for file-based migration Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 05/26] migration: Initial support of fixed-ram feature for analyze-migration.py Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 06/26] io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 07/26] io: Add generic pwritev/preadv interface Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 08/26] io: implement io_pwritev/preadv for QIOChannelFile Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 09/26] migration/qemu-file: add utility methods for working with seekable channels Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 10/26] migration/ram: Introduce 'fixed-ram' migration stream capability Fabiano Rosas
2023-03-30 22:01 ` Peter Xu
2023-03-31 7:56 ` Daniel P. Berrangé
2023-03-31 14:39 ` Peter Xu
2023-03-31 15:34 ` Daniel P. Berrangé
2023-03-31 16:13 ` Peter Xu
2023-03-31 15:05 ` Fabiano Rosas
2023-03-31 5:50 ` Markus Armbruster
2023-03-30 18:03 ` [RFC PATCH v1 11/26] migration: Refactor precopy ram loading code Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 12/26] migration: Add support for 'fixed-ram' migration restore Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 13/26] tests/qtest: migration-test: Add tests for fixed-ram file-based migration Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 14/26] migration: Add completion tracepoint Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 15/26] migration/multifd: Remove direct "socket" references Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 16/26] migration/multifd: Allow multifd without packets Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 17/26] migration/multifd: Add outgoing QIOChannelFile support Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 18/26] migration/multifd: Add incoming " Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 19/26] migration/multifd: Add pages to the receiving side Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 20/26] io: Add a pwritev/preadv version that takes a discontiguous iovec Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 21/26] migration/ram: Add a wrapper for fixed-ram shadow bitmap Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 22/26] migration/multifd: Support outgoing fixed-ram stream format Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 23/26] migration/multifd: Support incoming " Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 24/26] tests/qtest: Add a multifd + fixed-ram migration test Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 25/26] migration: Add direct-io parameter Fabiano Rosas
2023-03-30 18:03 ` [RFC PATCH v1 26/26] tests/migration/guestperf: Add file, fixed-ram and direct-io support Fabiano Rosas
2023-03-30 21:41 ` [RFC PATCH v1 00/26] migration: File based migration with multifd and fixed-ram Peter Xu
2023-03-31 14:37 ` Fabiano Rosas [this message]
2023-03-31 14:52 ` Peter Xu
2023-03-31 15:30 ` Fabiano Rosas
2023-03-31 15:55 ` Peter Xu
2023-03-31 16:10 ` Daniel P. Berrangé
2023-03-31 16:27 ` Peter Xu
2023-03-31 18:18 ` Fabiano Rosas
2023-03-31 21:52 ` Peter Xu
2023-04-03 7:47 ` Claudio Fontana
2023-04-03 19:26 ` Peter Xu
2023-04-04 8:00 ` Claudio Fontana
2023-04-04 14:53 ` Peter Xu
2023-04-04 15:10 ` Claudio Fontana
2023-04-04 15:56 ` Peter Xu
2023-04-06 16:46 ` Fabiano Rosas
2023-04-07 10:36 ` Claudio Fontana
2023-04-11 15:48 ` Peter Xu
2023-04-18 16:58 ` Daniel P. Berrangé
2023-04-18 19:26 ` Peter Xu
2023-04-19 17:12 ` Daniel P. Berrangé
2023-04-19 19:07 ` Peter Xu
2023-04-20 9:02 ` Daniel P. Berrangé
2023-04-20 19:19 ` Peter Xu
2023-04-21 7:48 ` Daniel P. Berrangé
2023-04-21 13:56 ` Peter Xu
2023-03-31 15:46 ` Daniel P. Berrangé
2023-04-03 7:38 ` David Hildenbrand
2023-04-03 14:41 ` Fabiano Rosas
2023-04-03 16:24 ` David Hildenbrand
2023-04-03 16:36 ` Fabiano Rosas