* [PATCH] migration/multifd: Document two places for mapped-ram
@ 2024-03-01 9:15 peterx
2024-03-01 12:22 ` Fabiano Rosas
2024-03-01 17:47 ` Prasad Pandit
0 siblings, 2 replies; 6+ messages in thread
From: peterx @ 2024-03-01 9:15 UTC (permalink / raw)
To: qemu-devel; +Cc: Fabiano Rosas, Daniel P . Berrangé, peterx
From: Peter Xu <peterx@redhat.com>
Add documentation for mapped-ram migration in two spots that may not
be entirely clear.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
Based-on: <20240229153017.2221-1-farosas@suse.de>
---
migration/multifd.c | 12 ++++++++++++
migration/ram.c | 8 +++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index b4e5a9dfcc..2942395ce2 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -709,6 +709,18 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
{
if (p->c) {
migration_ioc_unregister_yank(p->c);
+ /*
+ * An explicitly close() on the channel here is normally not
+ * required, but can be helpful for "file:" iochannels, where it
+ * will include an fdatasync() to make sure the data is flushed to
+ * the disk backend.
+ *
+ * The object_unref() cannot guarantee that because: (1) finalize()
+ * of the iochannel is only triggered on the last reference, and
+ * it's not guaranteed that we always hold the last refcount when
+ * reaching here, and, (2) even if finalize() is invoked, it only
+ * does a close(fd) without data flush.
+ */
qio_channel_close(p->c, &error_abort);
object_unref(OBJECT(p->c));
p->c = NULL;
diff --git a/migration/ram.c b/migration/ram.c
index 1f1b5297cf..c79e3de521 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -4258,7 +4258,13 @@ static int ram_load_precopy(QEMUFile *f)
switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
case RAM_SAVE_FLAG_MEM_SIZE:
ret = parse_ramblocks(f, addr);
-
+ /*
+ * For mapped-ram migration (to a file) using multifd, we sync
+ * once and for all here to make sure all tasks we queued to
+ * multifd threads are completed, so that all the ramblocks
+ * (including all the guest memory pages within) are fully
+ * loaded after this sync returns.
+ */
if (migrate_mapped_ram()) {
multifd_recv_sync_main();
}
--
2.44.0
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH] migration/multifd: Document two places for mapped-ram
2024-03-01 9:15 [PATCH] migration/multifd: Document two places for mapped-ram peterx
@ 2024-03-01 12:22 ` Fabiano Rosas
2024-03-01 17:47 ` Prasad Pandit
1 sibling, 0 replies; 6+ messages in thread
From: Fabiano Rosas @ 2024-03-01 12:22 UTC (permalink / raw)
To: peterx, qemu-devel; +Cc: Daniel P . Berrangé, peterx
peterx@redhat.com writes:
> From: Peter Xu <peterx@redhat.com>
>
> Add documentation for mapped-ram migration in two spots that may not
> be entirely clear.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] migration/multifd: Document two places for mapped-ram
2024-03-01 9:15 [PATCH] migration/multifd: Document two places for mapped-ram peterx
2024-03-01 12:22 ` Fabiano Rosas
@ 2024-03-01 17:47 ` Prasad Pandit
2024-03-01 17:50 ` Daniel P. Berrangé
1 sibling, 1 reply; 6+ messages in thread
From: Prasad Pandit @ 2024-03-01 17:47 UTC (permalink / raw)
To: peterx; +Cc: qemu-devel, Fabiano Rosas, Daniel P . Berrangé
Hello Peter,
On Fri, 1 Mar 2024 at 14:46, <peterx@redhat.com> wrote:
> + * An explicitly close() on the channel here is normally not
explicitly -> explicit
> + * required, but can be helpful for "file:" iochannels, where it
> + * will include an fdatasync() to make sure the data is flushed to
> + * the disk backend.
* an fdatasync() -> fdatasync()
* qio_channel_close
-> ioc_klass->io_close = qio_channel_file_close;
-> qemu_close(fioc->fd)
-> close(fd);
It does not seem to call fdatasync() before close(fd);
- qio_channel_file_new_path(filename, O_CREAT | O_WRONLY | O_TRUNC, ...)
Maybe the qio_channel..() calls above should include the 'O_DSYNC'
flag as well? But that will do fdatasync() work at each write(2) call,
I think; not sure if that is okay.
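For comparison, here is a minimal plain-POSIX sketch (not the QEMU code
path; the function names are made up for illustration) of the two options
being discussed -- a single fdatasync() right before close(), versus
opening with O_DSYNC so that every write(2) is synced:

    /* Minimal POSIX sketch for comparison only, not QEMU code. */
    #include <fcntl.h>
    #include <unistd.h>

    /* Option 1: write normally, flush once right before close(). */
    static int save_then_sync(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0) {
            return -1;
        }
        (void)write(fd, buf, len);   /* error handling omitted for brevity */
        fdatasync(fd);               /* one flush covers all queued data */
        return close(fd);
    }

    /* Option 2: O_DSYNC makes every write(2) wait for the data to reach
     * the disk, which avoids the explicit flush but can be much slower
     * when there are many writes. */
    static int save_with_odsync(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC | O_DSYNC, 0600);
        if (fd < 0) {
            return -1;
        }
        (void)write(fd, buf, len);
        return close(fd);
    }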
> + *
> + * The object_unref() cannot guarantee that because: (1) finalize()
> + * of the iochannel is only triggered on the last reference, and
> + * it's not guaranteed that we always hold the last refcount when
> + * reaching here, and, (2) even if finalize() is invoked, it only
> + * does a close(fd) without data flush.
> + */
* object_unref
-> object_finalize
-> object_deinit
-> type->instance_finalize
-> qio_channel_file_finalize
-> qemu_close(ioc->fd);
* I hope I'm looking at the right code here. (Sorry if I'm not)
Thank you.
---
- Prasad
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] migration/multifd: Document two places for mapped-ram
2024-03-01 17:47 ` Prasad Pandit
@ 2024-03-01 17:50 ` Daniel P. Berrangé
2024-03-04 0:30 ` Peter Xu
0 siblings, 1 reply; 6+ messages in thread
From: Daniel P. Berrangé @ 2024-03-01 17:50 UTC (permalink / raw)
To: Prasad Pandit; +Cc: peterx, qemu-devel, Fabiano Rosas
On Fri, Mar 01, 2024 at 11:17:10PM +0530, Prasad Pandit wrote:
> Hello Peter,
>
> On Fri, 1 Mar 2024 at 14:46, <peterx@redhat.com> wrote:
> > + * An explicitly close() on the channel here is normally not
>
> explicitly -> explicit
>
> > + * required, but can be helpful for "file:" iochannels, where it
> > + * will include an fdatasync() to make sure the data is flushed to
> > + * the disk backend.
>
> * an fdatasync() -> fdatasync()
>
> * qio_channel_close
> -> ioc_klass->io_close = qio_channel_file_close;
> -> qemu_close(fioc->fd)
> -> close(fd);
>
> It does not seem to call fdatasync() before close(fd);
>
> - qio_channel_file_new_path(filename, O_CREAT | O_WRONLY | O_TRUNC, ...)
The documented behaviour relies on another pending patch:
https://lists.nongnu.org/archive/html/qemu-devel/2024-02/msg07046.html
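Roughly, the idea of that pending change is to flush in the channel's own
close path. A sketch of the intent (not the actual patch -- see the link
above for the real details):

    /* Sketch only; the real patch may differ in scope and error handling. */
    #include "qemu/osdep.h"
    #include "io/channel-file.h"
    #include "qapi/error.h"

    static int qio_channel_file_close(QIOChannel *ioc, Error **errp)
    {
        QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);

        /* Flush file data to the backing storage before closing the fd. */
        if (fdatasync(fioc->fd) < 0) {
            error_setg_errno(errp, errno, "Unable to synchronize file data");
            return -1;
        }
        if (qemu_close(fioc->fd) < 0) {
            error_setg_errno(errp, errno, "Unable to close file");
            return -1;
        }
        fioc->fd = -1;
        return 0;
    }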
>
> Maybe the qio_channel..() calls above should include the 'O_DSYNC'
> flag as well? But that will do fdatasync() work at each write(2) call,
> I think; not sure if that is okay.
>
> > + *
> > + * The object_unref() cannot guarantee that because: (1) finalize()
> > + * of the iochannel is only triggered on the last reference, and
> > + * it's not guaranteed that we always hold the last refcount when
> > + * reaching here, and, (2) even if finalize() is invoked, it only
> > + * does a close(fd) without data flush.
> > + */
>
> * object_unref
> -> object_finalize
> -> object_deinit
> -> type->instance_finalize
> -> qio_channel_file_finalize
> -> qemu_close(ioc->fd);
>
> * I hope I'm looking at the right code here. (Sorry if I'm not)
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] migration/multifd: Document two places for mapped-ram
2024-03-01 17:50 ` Daniel P. Berrangé
@ 2024-03-04 0:30 ` Peter Xu
2024-03-04 5:21 ` Prasad Pandit
0 siblings, 1 reply; 6+ messages in thread
From: Peter Xu @ 2024-03-04 0:30 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: Prasad Pandit, qemu-devel, Fabiano Rosas
On Fri, Mar 01, 2024 at 05:50:24PM +0000, Daniel P. Berrangé wrote:
> On Fri, Mar 01, 2024 at 11:17:10PM +0530, Prasad Pandit wrote:
> > Hello Peter,
Hey Prasad!
Thanks for taking a look.
> >
> > On Fri, 1 Mar 2024 at 14:46, <peterx@redhat.com> wrote:
> > > + * An explicitly close() on the channel here is normally not
> >
> > explicitly -> explicit
> >
> > > + * required, but can be helpful for "file:" iochannels, where it
> > > + * will include an fdatasync() to make sure the data is flushed to
> > > + * the disk backend.
> >
> > * an fdatasync() -> fdatasync()
I'll fix both when applying the patch.
> >
> > * qio_channel_close
> > -> ioc_klass->io_close = qio_channel_file_close;
> > -> qemu_close(fioc->fd)
> > -> close(fd);
> >
> > It does not seem to call fdatasync() before close(fd);
> >
> > - qio_channel_file_new_path(filename, O_CREAT | O_WRONLY | O_TRUNC, ...)
>
> The documented behaviour relies on another pending patch:
>
> https://lists.nongnu.org/archive/html/qemu-devel/2024-02/msg07046.html
Right, as Dan helped to answer.
The other comment section in the same patch relies on this other patch:
https://lore.kernel.org/all/20240229153017.2221-20-farosas@suse.de/
>
> >
> > Maybe the qio_channel..() calls above should include the 'O_DSYNC'
> > flag as well? But that will do fdatasync() work at each write(2) call,
> > I think; not sure if that is okay.
>
>
>
> >
> > > + *
> > > + * The object_unref() cannot guarantee that because: (1) finalize()
> > > + * of the iochannel is only triggered on the last reference, and
> > > + * it's not guaranteed that we always hold the last refcount when
> > > + * reaching here, and, (2) even if finalize() is invoked, it only
> > > + * does a close(fd) without data flush.
> > > + */
> >
> > * object_unref
> > -> object_finalize
> > -> object_deinit
> > -> type->instance_finalize
> > -> qio_channel_file_finalize
> > -> qemu_close(ioc->fd);
> >
> > * I hope I'm looking at the right code here. (Sorry if I'm not)
Yes, I think you're looking at the right path; it's just that the relevant
patches haven't landed upstream yet but are planned. I normally use a
"Based-on" tag for a patch that has a dependency outside master, like:
Based-on: <20240229153017.2221-1-farosas@suse.de>
I believe many others on the qemu list do the same. I think the rationale
is that it will be recognized both by humans and by patchew, e.g. patchew
will report a good apply status here:
https://patchew.org/QEMU/20240301091524.39900-1-peterx@redhat.com/
And for the same patch, if you fetch the tree patchew provided:
git fetch https://github.com/patchew-project/qemu tags/patchew/20240301091524.39900-1-peterx@redhat.com
You can also directly fetch the whole tree with this patch applied
correctly on top of the dependency series.
Personally I don't use patchew, but I keep the habit of declaring patch
dependencies in case it helps others who do use it (e.g., I think patchew
has other review tools like version comparisons, which I also don't use
myself).
Thanks,
--
Peter Xu
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] migration/multifd: Document two places for mapped-ram
2024-03-04 0:30 ` Peter Xu
@ 2024-03-04 5:21 ` Prasad Pandit
0 siblings, 0 replies; 6+ messages in thread
From: Prasad Pandit @ 2024-03-04 5:21 UTC (permalink / raw)
To: Peter Xu; +Cc: Daniel P. Berrangé, qemu-devel, Fabiano Rosas
On Mon, 4 Mar 2024 at 06:00, Peter Xu <peterx@redhat.com> wrote:
> Yes, I think you're looking at the right path; it's just that the relevant
> patches haven't landed upstream yet but are planned. I normally use a
> "Based-on" tag for a patch that has a dependency outside master, like:
>
> Based-on: <20240229153017.2221-1-farosas@suse.de>
>
> I believe many others on the qemu list do the same. I think the rationale
> is that it will be recognized both by humans and by patchew, e.g. patchew
> will report a good apply status here:
>
> https://patchew.org/QEMU/20240301091524.39900-1-peterx@redhat.com/
>
> And for the same patch, if you fetch the tree patchew provided:
>
> git fetch https://github.com/patchew-project/qemu tags/patchew/20240301091524.39900-1-peterx@redhat.com
>
> You can also directly fetch the whole tree with this patch applied
> correctly on top of the dependency series.
>
> Personally I don't use patchew, but I keep the habit of declaring patch
> dependencies in case it helps others who do use it (e.g., I think patchew
> has other review tools like version comparisons, which I also don't use
> myself).
* Interesting. Thank you.
---
- Prasad
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2024-03-04 5:22 UTC | newest]
Thread overview: 6+ messages
2024-03-01 9:15 [PATCH] migration/multifd: Document two places for mapped-ram peterx
2024-03-01 12:22 ` Fabiano Rosas
2024-03-01 17:47 ` Prasad Pandit
2024-03-01 17:50 ` Daniel P. Berrangé
2024-03-04 0:30 ` Peter Xu
2024-03-04 5:21 ` Prasad Pandit