From: Peter Xu <peterx@redhat.com>
To: Marco Cavenati <Marco.Cavenati@eurecom.fr>
Cc: "Fabiano Rosas" <farosas@suse.de>,
qemu-devel@nongnu.org, "Daniel P. Berrangé" <berrange@redhat.com>,
"Prasad Pandit" <ppandit@redhat.com>
Subject: Re: [PATCH] migration: add FEATURE_SEEKABLE to QIOChannelBlock
Date: Fri, 9 May 2025 18:04:53 -0400
Message-ID: <aB58BQQ12aosCalh@x1.local>
In-Reply-To: <1b54a0-681e7080-273-3299e580@146025174>
On Fri, May 09, 2025 at 11:14:41PM +0200, Marco Cavenati wrote:
> On Friday, May 09, 2025 18:21 CEST, Peter Xu <peterx@redhat.com> wrote:
>
> > So you don't really need to take a sequence of snapshots? Hmm, that sounds
> > like a completely different use case from what I originally thought.
>
> Correct
>
> > Have you thought of leveraging ignore-shared and MAP_PRIVATE for the major
> > chunk of guest mem?
> >
> > Let me explain; it's a very rough idea, but maybe you can collect something
> > useful.
> >
> > So.. if you keep reloading one VM state thousands of times, it's better
> > to first have a shmem file (let's imagine one is enough.. you could have
> > more backends) holding the major chunk of the VM RAM image that you
> > migrated into it beforehand.
> >
> > Say, the major part of guest mem is stored here:
> >
> > PATH_RAM=/dev/shm/XXX
> >
> > Then you migrate (with ignore-shared=on) to a file here (NOTE: I _think_
> > you really can use file migration in this case, with the VM stopped first,
> > rather than snapshot save/load):
> >
> > PATH_VM_IMAGE=/tmp/VM_IMAGE_YYY
> >
> > Then, the two files above should contain all info you need to start a new
> > VM.
> >
> > When you want to recover that VM state, boot a VM using this cmdline:
> >
> > $qemu ... \
> >   -object memory-backend-file,mem-path=$PATH_RAM,share=off \
> >   -incoming file:$PATH_VM_IMAGE
> >
> > That'll boot a VM directly backed by the shmem page cache (which is always
> > present on the host and occupies RAM even outside the VM's lifecycle, but
> > that's part of the design..). Loading the VM image would be lightning fast
> > because it's tiny when there's almost no RAM inside it. No concern about
> > mapped-ram at all, as the remaining RAM is trivial enough to just be
> > streamed.
> >
> > The important bit is share=off - that will mmap() the VM's main RAM as
> > MAP_PRIVATE, so it'll do CoW on the "snapshot" you made before: whenever
> > you write to some guest pages while fuzzing, the corresponding shmem page
> > cache pages are copied over. The shmem page cache itself should never
> > change its content.
> >
> > Does that sound workable to you?
>
> I didn't know much about these options, cool, thanks for the explanation.
>
> My only concern is that I'd have to restart the QEMU process for each iteration.
> Honestly, I've never measured the impact it would have, but I fear it would
> be noticeable, since the goal is to restore many times per second. What do you
> think?
It may depend on how "many times" is defined. :) IIUC, booting QEMU could
still be pretty fast, but yes, it's worth measuring.
If that works at least functionally (which also needs some double-checking,
I guess..), it would be great if you could compare the performance against
your solution; that would be very helpful material for reviewers to read
when/if you propose the feature.
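For reference, here's a rough (untested) sketch of what the whole flow could
look like. The paths, the 4G size, and the q35 memory-backend wiring are just
placeholders, and IIUC the x-ignore-shared capability needs to be enabled on
both sides, hence -incoming defer on the loading side:

  # Saving side: run the guest with its RAM backed by the shmem file
  # (share=on here, so the guest RAM actually lands in the page cache
  # behind $PATH_RAM), then stop the VM and migrate the tiny remaining
  # state to a file.
  $qemu ... \
    -object memory-backend-file,id=pc.ram,size=4G,mem-path=$PATH_RAM,share=on \
    -machine q35,memory-backend=pc.ram
  (qemu) stop
  (qemu) migrate_set_capability x-ignore-shared on
  (qemu) migrate file:$PATH_VM_IMAGE

  # Loading side: each iteration boots a fresh QEMU with share=off, so the
  # guest RAM is mapped MAP_PRIVATE on top of the shmem "snapshot" and every
  # write is CoW'ed, leaving the snapshot file untouched.
  $qemu ... \
    -object memory-backend-file,id=pc.ram,size=4G,mem-path=$PATH_RAM,share=off \
    -machine q35,memory-backend=pc.ram \
    -incoming defer
  (qemu) migrate_set_capability x-ignore-shared on
  (qemu) migrate_incoming file:$PATH_VM_IMAGE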
> (Also, snapshots conveniently take care of the disk as well, but this shouldn't
> be too big of a deal.)
True, I didn't take disks into consideration. Maybe the disk files can be
snapshotted and restored separately, using either qcow2's internal snapshots
or snapshots on a modern file system like btrfs. Good to know you seem to
have ways to work it out in either case.
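E.g., something like this (just an illustration; "base" and disk.qcow2 are
placeholder names, and qemu-img shouldn't touch the image while a QEMU
instance still has it open):

  # Take an internal snapshot of the disk when the baseline state is saved:
  qemu-img snapshot -c base disk.qcow2
  # Roll the disk back to it before each restore:
  qemu-img snapshot -a base disk.qcow2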
--
Peter Xu
Thread overview: 31+ messages
2025-03-27 14:14 [PATCH] migration: add FEATURE_SEEKABLE to QIOChannelBlock Marco Cavenati
2025-04-04 8:19 ` Prasad Pandit
2025-04-04 9:04 ` Marco Cavenati
2025-04-04 10:14 ` Prasad Pandit
2025-04-04 12:05 ` Marco Cavenati
2025-04-07 6:47 ` Prasad Pandit
2025-04-07 9:00 ` Marco Cavenati
2025-04-08 5:25 ` Prasad Pandit
2025-04-08 15:03 ` Marco Cavenati
2025-04-15 10:21 ` Daniel P. Berrangé
2025-04-15 10:44 ` Prasad Pandit
2025-04-15 11:03 ` Daniel P. Berrangé
2025-04-15 11:57 ` Prasad Pandit
2025-04-15 12:03 ` Daniel P. Berrangé
2025-04-10 19:52 ` Fabiano Rosas
2025-04-11 8:48 ` Marco Cavenati
2025-04-11 12:24 ` Fabiano Rosas
2025-04-15 10:15 ` Marco Cavenati
2025-04-15 13:50 ` Fabiano Rosas
2025-04-17 9:10 ` Marco Cavenati
2025-04-17 15:12 ` Fabiano Rosas
2025-04-24 13:44 ` Marco Cavenati
2025-05-08 20:23 ` Peter Xu
2025-05-09 12:51 ` Marco Cavenati
2025-05-09 16:21 ` Peter Xu
2025-05-09 21:14 ` Marco Cavenati
2025-05-09 22:04 ` Peter Xu [this message]
2025-09-16 16:06 ` Marco Cavenati
2025-09-19 21:24 ` Fabiano Rosas
2025-09-22 15:51 ` Marco Cavenati
2025-09-30 20:12 ` Fabiano Rosas