* [PATCH] migration: ram block cpr blockers
@ 2025-01-17 17:46 Steve Sistare
2025-01-17 18:16 ` Peter Xu
0 siblings, 1 reply; 9+ messages in thread
From: Steve Sistare @ 2025-01-17 17:46 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini, Steve Sistare
Unlike cpr-reboot mode, cpr-transfer mode cannot save volatile ram blocks
in the migration stream file and recreate them later, because the physical
memory for the blocks is pinned and registered for vfio. Add a blocker
for volatile ram blocks.
Also add a blocker for RAM_GUEST_MEMFD. Preserving guest_memfd may be
sufficient for CPR, but it has not been tested yet.
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
---
include/exec/memory.h | 3 +++
include/exec/ramblock.h | 1 +
migration/savevm.c | 2 ++
system/physmem.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 60 insertions(+)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 0ac21cc..088330a 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -3185,6 +3185,9 @@ bool ram_block_discard_is_disabled(void);
*/
bool ram_block_discard_is_required(void);
+void ram_block_add_cpr_blocker(RAMBlock *rb, Error **errp);
+void ram_block_del_cpr_blocker(RAMBlock *rb);
+
#endif
#endif
diff --git a/include/exec/ramblock.h b/include/exec/ramblock.h
index 0babd10..64484cd 100644
--- a/include/exec/ramblock.h
+++ b/include/exec/ramblock.h
@@ -39,6 +39,7 @@ struct RAMBlock {
/* RCU-enabled, writes protected by the ramlist lock */
QLIST_ENTRY(RAMBlock) next;
QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
+ Error *cpr_blocker;
int fd;
uint64_t fd_offset;
int guest_memfd;
diff --git a/migration/savevm.c b/migration/savevm.c
index c929da1..264bc06 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3341,12 +3341,14 @@ void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
qemu_ram_set_idstr(mr->ram_block,
memory_region_name(mr), dev);
qemu_ram_set_migratable(mr->ram_block);
+ ram_block_add_cpr_blocker(mr->ram_block, &error_fatal);
}
void vmstate_unregister_ram(MemoryRegion *mr, DeviceState *dev)
{
qemu_ram_unset_idstr(mr->ram_block);
qemu_ram_unset_migratable(mr->ram_block);
+ ram_block_del_cpr_blocker(mr->ram_block);
}
void vmstate_register_ram_global(MemoryRegion *mr)
diff --git a/system/physmem.c b/system/physmem.c
index 67c9db9..0bcfc6c 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -70,7 +70,10 @@
#include "qemu/pmem.h"
+#include "qapi/qapi-types-migration.h"
+#include "migration/blocker.h"
#include "migration/cpr.h"
+#include "migration/options.h"
#include "migration/vmstate.h"
#include "qemu/range.h"
@@ -1899,6 +1902,14 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
qemu_mutex_unlock_ramlist();
goto out_free;
}
+
+ error_setg(&new_block->cpr_blocker,
+ "Memory region %s uses guest_memfd, "
+ "which is not supported with CPR.",
+ memory_region_name(new_block->mr));
+ migrate_add_blocker_modes(&new_block->cpr_blocker, errp,
+ MIG_MODE_CPR_TRANSFER,
+ -1);
}
ram_size = (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS;
@@ -4059,3 +4070,46 @@ bool ram_block_discard_is_required(void)
return qatomic_read(&ram_block_discard_required_cnt) ||
qatomic_read(&ram_block_coordinated_discard_required_cnt);
}
+
+/*
+ * Return true if ram contents would be lost during CPR.
+ * Return false for ram_device because it is remapped in new QEMU. Do not
+ * exclude rom, even though it is readonly, because the rom file could change
+ * in new QEMU. Return false for non-migratable blocks. They are either
+ * re-created in new QEMU, or are handled specially, or are covered by a
+ * device-level CPR blocker. Return false for an fd, because it is visible and
+ * can be remapped in new QEMU.
+ */
+static bool ram_is_volatile(RAMBlock *rb)
+{
+ MemoryRegion *mr = rb->mr;
+
+ return mr &&
+ memory_region_is_ram(mr) &&
+ !memory_region_is_ram_device(mr) &&
+ (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
+ qemu_ram_is_migratable(rb) &&
+ rb->fd < 0;
+}
+
+/*
+ * Add a blocker for each volatile ram block.
+ */
+void ram_block_add_cpr_blocker(RAMBlock *rb, Error **errp)
+{
+ if (!ram_is_volatile(rb)) {
+ return;
+ }
+
+ error_setg(&rb->cpr_blocker,
+ "Memory region %s is volatile. share=on is required for "
+ "memory-backend objects, and aux-ram-share=on is required.",
+ memory_region_name(rb->mr));
+ migrate_add_blocker_modes(&rb->cpr_blocker, errp, MIG_MODE_CPR_TRANSFER,
+ -1);
+}
+
+void ram_block_del_cpr_blocker(RAMBlock *rb)
+{
+ migrate_del_blocker(&rb->cpr_blocker);
+}
--
1.8.3.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] migration: ram block cpr blockers
2025-01-17 17:46 [PATCH] migration: ram block cpr blockers Steve Sistare
@ 2025-01-17 18:16 ` Peter Xu
2025-01-17 19:10 ` Steven Sistare
0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2025-01-17 18:16 UTC (permalink / raw)
To: Steve Sistare
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
> +/*
> + * Return true if ram contents would be lost during CPR.
> + * Return false for ram_device because it is remapped in new QEMU. Do not
> + * exclude rom, even though it is readonly, because the rom file could change
> + * in new QEMU. Return false for non-migratable blocks. They are either
> + * re-created in new QEMU, or are handled specially, or are covered by a
> + * device-level CPR blocker. Return false for an fd, because it is visible and
> + * can be remapped in new QEMU.
> + */
> +static bool ram_is_volatile(RAMBlock *rb)
> +{
> + MemoryRegion *mr = rb->mr;
> +
> + return mr &&
> + memory_region_is_ram(mr) &&
> + !memory_region_is_ram_device(mr) &&
> + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
> + qemu_ram_is_migratable(rb) &&
> + rb->fd < 0;
> +}
Blocking guest_memfd looks ok, but comparing to add one more block
notifier, can we check all ramblocks once in migrate_prepare(), and fail
that command directly if it fails the check?
OTOH, is there any simpler way to simplify the check conditions? It'll be
at least nice to break these checks into smaller if conditions for
readability..
I wonder if we could stick with looping over all ramblocks, then make sure
each of them is on the cpr saved fd list. It may need to make
cpr_save_fd() always register with the name of ramblock to do such lookup,
or maybe we could also cache the ramblock pointer in CprFd, then the lookup
will be a pointer match check.
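The lookup sketched above might look roughly like this (a standalone model, not the real QEMU structures: `CprFd` here is a stand-in extended with a hypothetical cached RAMBlock pointer, as suggested):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a saved-fd list entry. The 'rb' field is the
 * hypothetical cached RAMBlock pointer, so the lookup reduces to a
 * pointer comparison instead of a name lookup. */
typedef struct RAMBlock { const char *idstr; } RAMBlock;

typedef struct CprFd {
    int fd;
    RAMBlock *rb;          /* hypothetical: cached at cpr_save_fd() time */
    struct CprFd *next;
} CprFd;

/* Return true if rb was registered on the saved-fd list. */
static bool cpr_fd_list_contains(const CprFd *head, const RAMBlock *rb)
{
    for (const CprFd *e = head; e; e = e->next) {
        if (e->rb == rb) {
            return true;
        }
    }
    return false;
}
```

A migrate_prepare()-time check would then loop over all ramblocks and fail the command if any block is missing from the list.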
--
Peter Xu
* Re: [PATCH] migration: ram block cpr blockers
2025-01-17 18:16 ` Peter Xu
@ 2025-01-17 19:10 ` Steven Sistare
2025-01-17 23:57 ` Peter Xu
0 siblings, 1 reply; 9+ messages in thread
From: Steven Sistare @ 2025-01-17 19:10 UTC (permalink / raw)
To: Peter Xu
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On 1/17/2025 1:16 PM, Peter Xu wrote:
> On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
>> +/*
>> + * Return true if ram contents would be lost during CPR.
>> + * Return false for ram_device because it is remapped in new QEMU. Do not
>> + * exclude rom, even though it is readonly, because the rom file could change
>> + * in new QEMU. Return false for non-migratable blocks. They are either
>> + * re-created in new QEMU, or are handled specially, or are covered by a
>> + * device-level CPR blocker. Return false for an fd, because it is visible and
>> + * can be remapped in new QEMU.
>> + */
>> +static bool ram_is_volatile(RAMBlock *rb)
>> +{
>> + MemoryRegion *mr = rb->mr;
>> +
>> + return mr &&
>> + memory_region_is_ram(mr) &&
>> + !memory_region_is_ram_device(mr) &&
>> + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
>> + qemu_ram_is_migratable(rb) &&
>> + rb->fd < 0;
>> +}
>
> Blocking guest_memfd looks ok, but comparing to add one more block
> notifier, can we check all ramblocks once in migrate_prepare(), and fail
> that command directly if it fails the check?
In an upcoming patch, I will be adding an option analogous to only-migratable which
prevents QEMU from starting if anything would block cpr-transfer. That option
will be checked when blockers are added, like for only-migratable. migrate_prepare
is too late.
> OTOH, is there any simpler way to simplify the check conditions? It'll be
> at least nice to break these checks into smaller if conditions for
> readability..
I thought the function header comments made it clear, but I could move each
comment next to each condition:
...
/*
* Return false for an fd, because it is visible and can be remapped in
* new QEMU.
*/
if (rb->fd >= 0) {
return false;
}
...
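Fully expanded that way, the predicate might read as follows (a standalone model of the logic only: the QEMU helper calls are replaced by stub fields, so this mirrors the checks, not the real API):

```c
#include <stdbool.h>

/* Stand-in for the state the real checks consult via QEMU helpers. */
typedef struct RAMBlockStub {
    bool has_mr;        /* rb->mr != NULL */
    bool is_ram;        /* memory_region_is_ram(mr) */
    bool is_ram_device; /* memory_region_is_ram_device(mr) */
    bool is_shared;     /* qemu_ram_is_shared(rb) */
    bool is_named_file; /* qemu_ram_is_named_file(rb) */
    bool is_migratable; /* qemu_ram_is_migratable(rb) */
    int fd;             /* rb->fd */
} RAMBlockStub;

static bool ram_is_volatile(const RAMBlockStub *rb)
{
    if (!rb->has_mr || !rb->is_ram) {
        return false;
    }
    /* ram_device is remapped in new QEMU. */
    if (rb->is_ram_device) {
        return false;
    }
    /* A shared named file survives CPR; its contents are not lost. */
    if (rb->is_shared && rb->is_named_file) {
        return false;
    }
    /* Non-migratable blocks are re-created, handled specially, or
     * covered by a device-level CPR blocker. */
    if (!rb->is_migratable) {
        return false;
    }
    /* An fd is visible and can be remapped in new QEMU. */
    if (rb->fd >= 0) {
        return false;
    }
    return true;
}
```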
> I wonder if we could stick with looping over all ramblocks, then make sure
> each of them is on the cpr saved fd list. It may need to make
> cpr_save_fd() always register with the name of ramblock to do such lookup,
> or maybe we could also cache the ramblock pointer in CprFd, then the lookup
> will be a pointer match check.
Some ramblocks are not on the list, such as named files. Plus looping in
migrate_prepare is too late as noted above.
IMO what I have already implemented using blockers is clean and elegant.
- Steve
* Re: [PATCH] migration: ram block cpr blockers
2025-01-17 19:10 ` Steven Sistare
@ 2025-01-17 23:57 ` Peter Xu
2025-01-29 18:20 ` Steven Sistare
0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2025-01-17 23:57 UTC (permalink / raw)
To: Steven Sistare
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On Fri, Jan 17, 2025 at 02:10:14PM -0500, Steven Sistare wrote:
> On 1/17/2025 1:16 PM, Peter Xu wrote:
> > On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
> > > +/*
> > > + * Return true if ram contents would be lost during CPR.
> > > + * Return false for ram_device because it is remapped in new QEMU. Do not
> > > + * exclude rom, even though it is readonly, because the rom file could change
> > > + * in new QEMU. Return false for non-migratable blocks. They are either
> > > + * re-created in new QEMU, or are handled specially, or are covered by a
> > > + * device-level CPR blocker. Return false for an fd, because it is visible and
> > > + * can be remapped in new QEMU.
> > > + */
> > > +static bool ram_is_volatile(RAMBlock *rb)
> > > +{
> > > + MemoryRegion *mr = rb->mr;
> > > +
> > > + return mr &&
> > > + memory_region_is_ram(mr) &&
> > > + !memory_region_is_ram_device(mr) &&
> > > + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
> > > + qemu_ram_is_migratable(rb) &&
> > > + rb->fd < 0;
> > > +}
> >
> > Blocking guest_memfd looks ok, but comparing to add one more block
> > notifier, can we check all ramblocks once in migrate_prepare(), and fail
> > that command directly if it fails the check?
>
> In an upcoming patch, I will be adding an option analogous to only-migratable which
> prevents QEMU from starting if anything would block cpr-transfer. That option
> will be checked when blockers are added, like for only-migratable. migrate_prepare
> is too late.
>
> > OTOH, is there any simpler way to simplify the check conditions? It'll be
> > at least nice to break these checks into smaller if conditions for
> > readability..
>
> I thought the function header comments made it clear, but I could move each
> comment next to each condition:
>
> ...
> /*
> * Return false for an fd, because it is visible and can be remapped in
> * new QEMU.
> */
> if (rb->fd >= 0) {
> return false;
> }
> ...
>
> > I wonder if we could stick with looping over all ramblocks, then make sure
> > each of them is on the cpr saved fd list. It may need to make
> > cpr_save_fd() always register with the name of ramblock to do such lookup,
> > or maybe we could also cache the ramblock pointer in CprFd, then the lookup
> > will be a pointer match check.
>
> Some ramblocks are not on the list, such as named files. Plus looping in
> migrate_prepare is too late as noted above.
>
> IMO what I have already implemented using blockers is clean and elegant.
OK if we need to fail it early at boot, then yes blockers are probably
better.
We'll need one more cmdline parameter. I've no objection, but I don't know
how to judge when it's ok to add, when it's better not.. I'll leave others
to comment on this.
But still, could we check it when ramblocks are created? So in that way
whatever is forbidden is clear in its own path, I feel like that could be
clearer (like what you did with gmemfd).
For example, if I start to convert some of your requirements above, then
memory_region_is_ram_device() implies RAM_PREALLOC. Actually, ram_device
is not the only RAM_PREALLOC user.. Say, would it also not work with all
memory_region_init_ram_ptr() users (even if they're not ram_device)? An
example is, looks like virtio-gpu can create random ramblocks on the fly
with prealloced buffers. I am not sure whether they can be pinned by VFIO
too. You may know better.
So, to me ram_is_volatile() is harder to follow, meanwhile it may miss
something to me? IMO it's still better to explicitly add cpr blockers in
the ram block add() path if possible, but maybe you still have good reasons
to do it only until vmstate_register_ram() which I overlooked..
--
Peter Xu
* Re: [PATCH] migration: ram block cpr blockers
2025-01-17 23:57 ` Peter Xu
@ 2025-01-29 18:20 ` Steven Sistare
2025-01-30 17:01 ` Peter Xu
0 siblings, 1 reply; 9+ messages in thread
From: Steven Sistare @ 2025-01-29 18:20 UTC (permalink / raw)
To: Peter Xu
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On 1/17/2025 6:57 PM, Peter Xu wrote:
> On Fri, Jan 17, 2025 at 02:10:14PM -0500, Steven Sistare wrote:
>> On 1/17/2025 1:16 PM, Peter Xu wrote:
>>> On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
>>>> +/*
>>>> + * Return true if ram contents would be lost during CPR.
>>>> + * Return false for ram_device because it is remapped in new QEMU. Do not
>>>> + * exclude rom, even though it is readonly, because the rom file could change
>>>> + * in new QEMU. Return false for non-migratable blocks. They are either
>>>> + * re-created in new QEMU, or are handled specially, or are covered by a
>>>> + * device-level CPR blocker. Return false for an fd, because it is visible and
>>>> + * can be remapped in new QEMU.
>>>> + */
>>>> +static bool ram_is_volatile(RAMBlock *rb)
>>>> +{
>>>> + MemoryRegion *mr = rb->mr;
>>>> +
>>>> + return mr &&
>>>> + memory_region_is_ram(mr) &&
>>>> + !memory_region_is_ram_device(mr) &&
>>>> + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
>>>> + qemu_ram_is_migratable(rb) &&
>>>> + rb->fd < 0;
>>>> +}
>>>
>>> Blocking guest_memfd looks ok, but comparing to add one more block
>>> notifier, can we check all ramblocks once in migrate_prepare(), and fail
>>> that command directly if it fails the check?
>>
>> In an upcoming patch, I will be adding an option analogous to only-migratable which
>> prevents QEMU from starting if anything would block cpr-transfer. That option
>> will be checked when blockers are added, like for only-migratable. migrate_prepare
>> is too late.
>>
>>> OTOH, is there any simpler way to simplify the check conditions? It'll be
>>> at least nice to break these checks into smaller if conditions for
>>> readability..
>>
>> I thought the function header comments made it clear, but I could move each
>> comment next to each condition:
>>
>> ...
>> /*
>> * Return false for an fd, because it is visible and can be remapped in
>> * new QEMU.
>> */
>> if (rb->fd >= 0) {
>> return false;
>> }
>> ...
>>
>>> I wonder if we could stick with looping over all ramblocks, then make sure
>>> each of them is on the cpr saved fd list. It may need to make
>>> cpr_save_fd() always register with the name of ramblock to do such lookup,
>>> or maybe we could also cache the ramblock pointer in CprFd, then the lookup
>>> will be a pointer match check.
>>
>> Some ramblocks are not on the list, such as named files. Plus looping in
>> migrate_prepare is too late as noted above.
>>
>> IMO what I have already implemented using blockers is clean and elegant.
>
> OK if we need to fail it early at boot, then yes blockers are probably
> better.
>
> We'll need one more cmdline parameter. I've no objection, but I don't know
> how to judge when it's ok to add, when it's better not.. I'll leave others
> to comment on this.
>
> But still, could we check it when ramblocks are created? So in that way
> whatever is forbidden is clear in its own path, I feel like that could be
> clearer (like what you did with gmemfd).
When the ramblock is created, we don't yet know if it is migratable. A
ramblock that is not migratable does not block cpr. Migratable is not known
until vmstate_register_ram calls qemu_ram_set_migratable. Hence that is
where I evaluate conditions and install a blocker.
Because that is the only place where ram_block_add_cpr_blocker is called,
the test qemu_ram_is_migratable() inside ram_block_add_cpr_blocker is
redundant, and I should delete it.
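The ordering that makes the check redundant can be modeled in a few lines (a simplified sketch with stub types, not the real signatures):

```c
#include <stdbool.h>

/* Minimal model: the blocker helper is reached only through
 * vmstate_register_ram(), which sets the migratable flag first, so
 * re-checking the flag inside the helper adds nothing. */
typedef struct {
    bool migratable;
    bool has_blocker;
} BlockStub;

static void ram_block_add_cpr_blocker(BlockStub *rb)
{
    /* No migratable check needed: the sole caller guarantees it. */
    rb->has_blocker = true;
}

static void vmstate_register_ram(BlockStub *rb)
{
    rb->migratable = true;          /* qemu_ram_set_migratable() */
    ram_block_add_cpr_blocker(rb);  /* blocker installed afterwards */
}
```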
> For example, if I start to convert some of your requirements above, then
> memory_region_is_ram_device() implies RAM_PREALLOC. Actually, ram_device
> is not the only RAM_PREALLOC user.. Say, would it also not work with all
> memory_region_init_ram_ptr() users (even if they're not ram_device)? An
> example is, looks like virtio-gpu can create random ramblocks on the fly
> with prealloced buffers. I am not sure whether they can be pinned by VFIO
> too. You may know better.
That memory is not visible to the guest. It is not part of system_memory,
and is not marked migratable.
- Steve
> So, to me ram_is_volatile() is harder to follow, meanwhile it may miss
> something to me? IMO it's still better to explicitly add cpr blockers in
> the ram block add() path if possible, but maybe you still have good reasons
> to do it only until vmstate_register_ram() which I overlooked..
* Re: [PATCH] migration: ram block cpr blockers
2025-01-29 18:20 ` Steven Sistare
@ 2025-01-30 17:01 ` Peter Xu
2025-02-14 20:12 ` Steven Sistare
0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2025-01-30 17:01 UTC (permalink / raw)
To: Steven Sistare
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On Wed, Jan 29, 2025 at 01:20:13PM -0500, Steven Sistare wrote:
> On 1/17/2025 6:57 PM, Peter Xu wrote:
> > On Fri, Jan 17, 2025 at 02:10:14PM -0500, Steven Sistare wrote:
> > > On 1/17/2025 1:16 PM, Peter Xu wrote:
> > > > On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
> > > > > +/*
> > > > > + * Return true if ram contents would be lost during CPR.
> > > > > + * Return false for ram_device because it is remapped in new QEMU. Do not
> > > > > + * exclude rom, even though it is readonly, because the rom file could change
> > > > > + * in new QEMU. Return false for non-migratable blocks. They are either
> > > > > + * re-created in new QEMU, or are handled specially, or are covered by a
> > > > > + * device-level CPR blocker. Return false for an fd, because it is visible and
> > > > > + * can be remapped in new QEMU.
> > > > > + */
> > > > > +static bool ram_is_volatile(RAMBlock *rb)
> > > > > +{
> > > > > + MemoryRegion *mr = rb->mr;
> > > > > +
> > > > > + return mr &&
> > > > > + memory_region_is_ram(mr) &&
> > > > > + !memory_region_is_ram_device(mr) &&
> > > > > + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
> > > > > + qemu_ram_is_migratable(rb) &&
> > > > > + rb->fd < 0;
> > > > > +}
> > > >
> > > > Blocking guest_memfd looks ok, but comparing to add one more block
> > > > notifier, can we check all ramblocks once in migrate_prepare(), and fail
> > > > that command directly if it fails the check?
> > >
> > > In an upcoming patch, I will be adding an option analogous to only-migratable which
> > > prevents QEMU from starting if anything would block cpr-transfer. That option
> > > will be checked when blockers are added, like for only-migratable. migrate_prepare
> > > is too late.
> > >
> > > > OTOH, is there any simpler way to simplify the check conditions? It'll be
> > > > at least nice to break these checks into smaller if conditions for
> > > > readability..
> > >
> > > I thought the function header comments made it clear, but I could move each
> > > comment next to each condition:
> > >
> > > ...
> > > /*
> > > * Return false for an fd, because it is visible and can be remapped in
> > > * new QEMU.
> > > */
> > > if (rb->fd >= 0) {
> > > return false;
> > > }
> > > ...
> > >
> > > > I wonder if we could stick with looping over all ramblocks, then make sure
> > > > each of them is on the cpr saved fd list. It may need to make
> > > > cpr_save_fd() always register with the name of ramblock to do such lookup,
> > > > or maybe we could also cache the ramblock pointer in CprFd, then the lookup
> > > > will be a pointer match check.
> > >
> > > Some ramblocks are not on the list, such as named files. Plus looping in
> > > migrate_prepare is too late as noted above.
> > >
> > > IMO what I have already implemented using blockers is clean and elegant.
> >
> > OK if we need to fail it early at boot, then yes blockers are probably
> > better.
> >
> > We'll need one more cmdline parameter. I've no objection, but I don't know
> > how to judge when it's ok to add, when it's better not.. I'll leave others
> > to comment on this.
> >
> > But still, could we check it when ramblocks are created? So in that way
> > whatever is forbidden is clear in its own path, I feel like that could be
> > clearer (like what you did with gmemfd).
>
> When the ramblock is created, we don't yet know if it is migratable. A
> ramblock that is not migratable does not block cpr. Migratable is not known
> until vmstate_register_ram calls qemu_ram_set_migratable. Hence that is
> where I evaluate conditions and install a blocker.
>
> Because that is the only place where ram_block_add_cpr_blocker is called,
> the test qemu_ram_is_migratable() inside ram_block_add_cpr_blocker is
> redundant, and I should delete it.
Hmm.. sounds reasonable.
>
> > For example, if I start to convert some of your requirements above, then
> > memory_region_is_ram_device() implies RAM_PREALLOC. Actually, ram_device
> > is not the only RAM_PREALLOC user.. Say, would it also not work with all
> > memory_region_init_ram_ptr() users (even if they're not ram_device)? An
> > example is, looks like virtio-gpu can create random ramblocks on the fly
> > with prealloced buffers. I am not sure whether they can be pinned by VFIO
> > too. You may know better.
>
> That memory is not visible to the guest. It is not part of system_memory,
> and is not marked migratable.
I _think_ that can still be visible at least for the virtio-gpu use case,
which hangs under VirtIOGPUBase.hostmem. Relevant code for reference:
virtio_gpu_virgl_map_resource_blob:
memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
memory_region_add_subregion(&b->hostmem, offset, mr);
memory_region_set_enabled(mr, true);
virtio_gpu_pci_base_realize:
memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
g->conf.hostmem);
pci_register_bar(&vpci_dev->pci_dev, 4,
PCI_BASE_ADDRESS_SPACE_MEMORY |
PCI_BASE_ADDRESS_MEM_PREFETCH |
PCI_BASE_ADDRESS_MEM_TYPE_64,
&g->hostmem);
pci_update_mappings:
memory_region_add_subregion_overlap(r->address_space,
r->addr, r->memory, 1);
but indeed I'm not sure how it works with migration so far, because it
doesn't have RAM_MIGRATABLE set. So at least cpr didn't make it more
special. I assume this isn't something we must figure out as of now
then.. but if you do, please kindly share.
Thanks,
--
Peter Xu
* Re: [PATCH] migration: ram block cpr blockers
2025-01-30 17:01 ` Peter Xu
@ 2025-02-14 20:12 ` Steven Sistare
2025-02-18 16:10 ` Peter Xu
0 siblings, 1 reply; 9+ messages in thread
From: Steven Sistare @ 2025-02-14 20:12 UTC (permalink / raw)
To: Peter Xu
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On 1/30/2025 12:01 PM, Peter Xu wrote:
> On Wed, Jan 29, 2025 at 01:20:13PM -0500, Steven Sistare wrote:
>> On 1/17/2025 6:57 PM, Peter Xu wrote:
>>> On Fri, Jan 17, 2025 at 02:10:14PM -0500, Steven Sistare wrote:
>>>> On 1/17/2025 1:16 PM, Peter Xu wrote:
>>>>> On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
>>>>>> +/*
>>>>>> + * Return true if ram contents would be lost during CPR.
>>>>>> + * Return false for ram_device because it is remapped in new QEMU. Do not
>>>>>> + * exclude rom, even though it is readonly, because the rom file could change
>>>>>> + * in new QEMU. Return false for non-migratable blocks. They are either
>>>>>> + * re-created in new QEMU, or are handled specially, or are covered by a
>>>>>> + * device-level CPR blocker. Return false for an fd, because it is visible and
>>>>>> + * can be remapped in new QEMU.
>>>>>> + */
>>>>>> +static bool ram_is_volatile(RAMBlock *rb)
>>>>>> +{
>>>>>> + MemoryRegion *mr = rb->mr;
>>>>>> +
>>>>>> + return mr &&
>>>>>> + memory_region_is_ram(mr) &&
>>>>>> + !memory_region_is_ram_device(mr) &&
>>>>>> + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
>>>>>> + qemu_ram_is_migratable(rb) &&
>>>>>> + rb->fd < 0;
>>>>>> +}
>>>>>
>>>>> Blocking guest_memfd looks ok, but comparing to add one more block
>>>>> notifier, can we check all ramblocks once in migrate_prepare(), and fail
>>>>> that command directly if it fails the check?
>>>>
>>>> In an upcoming patch, I will be adding an option analogous to only-migratable which
>>>> prevents QEMU from starting if anything would block cpr-transfer. That option
>>>> will be checked when blockers are added, like for only-migratable. migrate_prepare
>>>> is too late.
>>>>
>>>>> OTOH, is there any simpler way to simplify the check conditions? It'll be
>>>>> at least nice to break these checks into smaller if conditions for
>>>>> readability..
>>>>
>>>> I thought the function header comments made it clear, but I could move each
>>>> comment next to each condition:
>>>>
>>>> ...
>>>> /*
>>>> * Return false for an fd, because it is visible and can be remapped in
>>>> * new QEMU.
>>>> */
>>>> if (rb->fd >= 0) {
>>>> return false;
>>>> }
>>>> ...
>>>>
>>>>> I wonder if we could stick with looping over all ramblocks, then make sure
>>>>> each of them is on the cpr saved fd list. It may need to make
>>>>> cpr_save_fd() always register with the name of ramblock to do such lookup,
>>>>> or maybe we could also cache the ramblock pointer in CprFd, then the lookup
>>>>> will be a pointer match check.
>>>>
>>>> Some ramblocks are not on the list, such as named files. Plus looping in
>>>> migrate_prepare is too late as noted above.
>>>>
>>>> IMO what I have already implemented using blockers is clean and elegant.
>>>
>>> OK if we need to fail it early at boot, then yes blockers are probably
>>> better.
>>>
>>> We'll need one more cmdline parameter. I've no objection, but I don't know
>>> how to judge when it's ok to add, when it's better not.. I'll leave others
>>> to comment on this.
>>>
>>> But still, could we check it when ramblocks are created? So in that way
>>> whatever is forbidden is clear in its own path, I feel like that could be
>>> clearer (like what you did with gmemfd).
>>
>> When the ramblock is created, we don't yet know if it is migratable. A
>> ramblock that is not migratable does not block cpr. Migratable is not known
>> until vmstate_register_ram calls qemu_ram_set_migratable. Hence that is
>> where I evaluate conditions and install a blocker.
>>
>> Because that is the only place where ram_block_add_cpr_blocker is called,
>> the test qemu_ram_is_migratable() inside ram_block_add_cpr_blocker is
>> redundant, and I should delete it.
>
> Hmm.. sounds reasonable.
>
>>
>>> For example, if I start to convert some of your requirements above, then
>>> memory_region_is_ram_device() implies RAM_PREALLOC. Actually, ram_device
>>> is not the only RAM_PREALLOC user.. Say, would it also not work with all
>>> memory_region_init_ram_ptr() users (even if they're not ram_device)? An
>>> example is, looks like virtio-gpu can create random ramblocks on the fly
>>> with prealloced buffers. I am not sure whether they can be pinned by VFIO
>>> too. You may know better.
>>
>> That memory is not visible to the guest. It is not part of system_memory,
>> and is not marked migratable.
>
> I _think_ that can still be visible at least for the virtio-gpu use case,
> which hangs under VirtIOGPUBase.hostmem. Relevant code for reference:
>
> virtio_gpu_virgl_map_resource_blob:
> memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
> memory_region_add_subregion(&b->hostmem, offset, mr);
> memory_region_set_enabled(mr, true);
>
> virtio_gpu_pci_base_realize:
> memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
> g->conf.hostmem);
> pci_register_bar(&vpci_dev->pci_dev, 4,
> PCI_BASE_ADDRESS_SPACE_MEMORY |
> PCI_BASE_ADDRESS_MEM_PREFETCH |
> PCI_BASE_ADDRESS_MEM_TYPE_64,
> &g->hostmem);
>
> pci_update_mappings:
> memory_region_add_subregion_overlap(r->address_space,
> r->addr, r->memory, 1);
>
> but indeed I'm not sure how it works with migration so far, because it
> doesn't have RAM_MIGRATABLE set. So at least cpr didn't make it more
> special. I assume this isn't something we must figure out as of now
> then.. but if you do, please kindly share.
AFAICT this memory cannot be pinned, because it is not attached to the
"system_memory" mr and the "address_space_memory" address space. The
listener that maps/pins is attached to address_space_memory.
I am submitting V2 of this patch with more embedded comments.
- Steve
* Re: [PATCH] migration: ram block cpr blockers
2025-02-14 20:12 ` Steven Sistare
@ 2025-02-18 16:10 ` Peter Xu
2025-02-25 15:46 ` Steven Sistare
0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2025-02-18 16:10 UTC (permalink / raw)
To: Steven Sistare
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On Fri, Feb 14, 2025 at 03:12:22PM -0500, Steven Sistare wrote:
> On 1/30/2025 12:01 PM, Peter Xu wrote:
> > On Wed, Jan 29, 2025 at 01:20:13PM -0500, Steven Sistare wrote:
> > > On 1/17/2025 6:57 PM, Peter Xu wrote:
> > > > On Fri, Jan 17, 2025 at 02:10:14PM -0500, Steven Sistare wrote:
> > > > > On 1/17/2025 1:16 PM, Peter Xu wrote:
> > > > > > On Fri, Jan 17, 2025 at 09:46:11AM -0800, Steve Sistare wrote:
> > > > > > > +/*
> > > > > > > + * Return true if ram contents would be lost during CPR.
> > > > > > > + * Return false for ram_device because it is remapped in new QEMU. Do not
> > > > > > > + * exclude rom, even though it is readonly, because the rom file could change
> > > > > > > + * in new QEMU. Return false for non-migratable blocks. They are either
> > > > > > > + * re-created in new QEMU, or are handled specially, or are covered by a
> > > > > > > + * device-level CPR blocker. Return false for an fd, because it is visible and
> > > > > > > + * can be remapped in new QEMU.
> > > > > > > + */
> > > > > > > +static bool ram_is_volatile(RAMBlock *rb)
> > > > > > > +{
> > > > > > > + MemoryRegion *mr = rb->mr;
> > > > > > > +
> > > > > > > + return mr &&
> > > > > > > + memory_region_is_ram(mr) &&
> > > > > > > + !memory_region_is_ram_device(mr) &&
> > > > > > > + (!qemu_ram_is_shared(rb) || !qemu_ram_is_named_file(rb)) &&
> > > > > > > + qemu_ram_is_migratable(rb) &&
> > > > > > > + rb->fd < 0;
> > > > > > > +}
> > > > > >
> > > > > > Blocking guest_memfd looks ok, but comparing to add one more block
> > > > > > notifier, can we check all ramblocks once in migrate_prepare(), and fail
> > > > > > that command directly if it fails the check?
> > > > >
> > > > > In an upcoming patch, I will be adding an option analogous to only-migratable which
> > > > > prevents QEMU from starting if anything would block cpr-transfer. That option
> > > > > will be checked when blockers are added, like for only-migratable. migrate_prepare
> > > > > is too late.
> > > > >
> > > > > > OTOH, is there any simpler way to structure the check conditions? It'll be
> > > > > > at least nice to break these checks into smaller if conditions for
> > > > > > readability..
> > > > >
> > > > > I thought the function header comments made it clear, but I could move each
> > > > > comment next to each condition:
> > > > >
> > > > > ...
> > > > > /*
> > > > > * Return false for an fd, because it is visible and can be remapped in
> > > > > * new QEMU.
> > > > > */
> > > > > if (rb->fd >= 0) {
> > > > > return false;
> > > > > }
> > > > > ...
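[Expanded into a self-contained sketch, the per-condition layout Steve describes might look like the following. StubRAMBlock and its boolean fields are stand-ins for QEMU's real RAMBlock and the memory_region_is_*() / qemu_ram_is_*() accessors; this is an illustration of the restructuring, not the patch's actual code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for RAMBlock; each field mirrors one accessor used in the patch. */
typedef struct {
    bool is_ram;         /* memory_region_is_ram(mr) */
    bool is_ram_device;  /* memory_region_is_ram_device(mr) */
    bool is_shared;      /* qemu_ram_is_shared(rb) */
    bool is_named_file;  /* qemu_ram_is_named_file(rb) */
    bool is_migratable;  /* qemu_ram_is_migratable(rb) */
    int fd;              /* rb->fd, -1 if none */
} StubRAMBlock;

/* Same predicate as the patch's ram_is_volatile(), with the rationale
 * placed next to each condition instead of in a header comment. */
static bool ram_is_volatile(const StubRAMBlock *rb)
{
    if (!rb->is_ram) {
        return false;
    }
    /* ram_device is remapped in new QEMU. */
    if (rb->is_ram_device) {
        return false;
    }
    /* A shared, named file survives across exec and can be remapped. */
    if (rb->is_shared && rb->is_named_file) {
        return false;
    }
    /* Non-migratable blocks are re-created in new QEMU, handled
     * specially, or covered by a device-level CPR blocker. */
    if (!rb->is_migratable) {
        return false;
    }
    /* An fd is visible and can be remapped in new QEMU. */
    if (rb->fd >= 0) {
        return false;
    }
    return true;
}
```

The early-return form is term-for-term equivalent to the original single boolean expression, including the `(!shared || !named_file)` clause, which becomes "return false when shared *and* named-file".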
> > > > >
> > > > > > I wonder if we could stick with looping over all ramblocks, then make sure
> > > > > > each of them is on the cpr saved fd list. It may need to make
> > > > > > cpr_save_fd() always register with the name of ramblock to do such lookup,
> > > > > > or maybe we could also cache the ramblock pointer in CprFd, then the lookup
> > > > > > will be a pointer match check.
> > > > >
> > > > > Some ramblocks are not on the list, such as named files. Plus looping in
> > > > > migrate_prepare is too late as noted above.
> > > > >
> > > > > IMO what I have already implemented using blockers is clean and elegant.
> > > >
> > > > OK if we need to fail it early at boot, then yes blockers are probably
> > > > better.
> > > >
> > > > We'll need one more cmdline parameter. I've no objection, but I don't know
> > > > how to judge when it's ok to add, when it's better not.. I'll leave others
> > > > to comment on this.
> > > >
> > > > But still, could we check it when ramblocks are created? So in that way
> > > > whatever is forbidden is clear in its own path, I feel like that could be
> > > > clearer (like what you did with gmemfd).
> > >
> > > When the ramblock is created, we don't yet know if it is migratable. A
> > > ramblock that is not migratable does not block cpr. Migratable is not known
> > > until vmstate_register_ram calls qemu_ram_set_migratable. Hence that is
> > > where I evaluate conditions and install a blocker.
> > >
> > > Because that is the only place where ram_block_add_cpr_blocker is called,
> > > the test qemu_ram_is_migratable() inside ram_block_add_cpr_blocker is
> > > redundant, and I should delete it.
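[The add/delete pair being discussed can be modeled with a toy registry. In real QEMU the blocker is an Error * built with error_setg() and registered through the migration blocker API; the types, registry, and message below are simplified stand-ins, shown only to illustrate the lifecycle Steve describes.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for RAMBlock; cpr_blocker models the Error *cpr_blocker
 * field the patch adds to the real struct. */
typedef struct {
    const char *idstr;
    const char *cpr_blocker;
} StubRAMBlock;

/* Stands in for the migration-core list of registered blockers. */
static int num_blockers;

/* Install a blocker once per block; called from the migratable path
 * (vmstate_register_ram -> qemu_ram_set_migratable in the patch). */
static void ram_block_add_cpr_blocker(StubRAMBlock *rb)
{
    if (rb->cpr_blocker) {
        return;  /* already registered */
    }
    rb->cpr_blocker = "ram block contents would be lost during CPR";
    num_blockers++;
}

/* Remove the blocker when the block goes away. */
static void ram_block_del_cpr_blocker(StubRAMBlock *rb)
{
    if (rb->cpr_blocker) {
        rb->cpr_blocker = NULL;
        num_blockers--;
    }
}
```

Because the add function is only reached from the set-migratable path, the redundant migratable re-check Steve mentions can be dropped from it.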
> >
> > Hmm.. sounds reasonable.
> >
> > >
> > > > For example, if I start to convert some of your requirements above, then
> > > > memory_region_is_ram_device() implies RAM_PREALLOC. Actually, ram_device
> > > > is not the only RAM_PREALLOC user.. Say, would it also not work with all
> > > > memory_region_init_ram_ptr() users (even if they're not ram_device)? An
> > > > example is, looks like virtio-gpu can create random ramblocks on the fly
> > > > with prealloced buffers. I am not sure whether they can be pinned by VFIO
> > > > too. You may know better.
> > >
> > > That memory is not visible to the guest. It is not part of system_memory,
> > > and is not marked migratable.
> >
> > I _think_ that can still be visible at least for the virtio-gpu use case,
> > which hangs under VirtIOGPUBase.hostmem. Relevant code for reference:
> >
> > virtio_gpu_virgl_map_resource_blob:
> > memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
> > memory_region_add_subregion(&b->hostmem, offset, mr);
> > memory_region_set_enabled(mr, true);
[1]
> >
> > virtio_gpu_pci_base_realize:
> > memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
> > g->conf.hostmem);
> > pci_register_bar(&vpci_dev->pci_dev, 4,
> > PCI_BASE_ADDRESS_SPACE_MEMORY |
> > PCI_BASE_ADDRESS_MEM_PREFETCH |
> > PCI_BASE_ADDRESS_MEM_TYPE_64,
> > &g->hostmem);
[2]
> >
> > pci_update_mappings:
> > memory_region_add_subregion_overlap(r->address_space,
> > r->addr, r->memory, 1);
[3]
> >
> > but indeed I'm not sure how it works with migration so far, because it
> > doesn't have RAM_MIGRATABLE set. So at least cpr didn't make it more
> > special. I assume this isn't something we must figure out as of now
> > then.. but if you do, please kindly share.
>
> AFAICT this memory cannot be pinned, because it is not attached to the
> "system_memory" mr and the "address_space_memory" address space. The
listener that maps/pins is attached to address_space_memory.
I still think it's part of system_memory - every PCI BAR that gets mapped
as an MMIO region can become part of guest system memory, and IIUC can be
pinned.
Normally these MMIOs are IO regions, so "pinning" may not make much
difference IIUC, but here it looks like real RAM backs it when emulated,
even though they're "MMIO regions"..
The code clips above are the paths by which I think it gets attached to
system_memory. It isn't any more special than a generic PCI device's BARs
getting mapped to system_memory, though:
- First, at [1], the blob is added to VirtIOGPUBase.hostmem as a subregion.
- Then, at [2], VirtIOGPUBase.hostmem is registered as a PCI BAR, in
this case BAR4.
- Then, it's the same as any other PCI device BAR: it can logically
be mapped into guest physical address space and become part of system_memory.
Above, [3] is where the BAR memory gets added into PCIIORegion.address_space.
Then, to fill in the last missing piece: taking i440fx as an example, the
PCI bus memory address space is ultimately attached to system_memory here:
i440fx_pcihost_realize:
/* setup pci memory mapping */
pc_pci_as_mapping_init(s->system_memory, s->pci_address_space);
Again, not requesting to resolve this immediately: I have no idea how
migration works at all with random new ramblocks being added.. but I'd
still like to check that we're on the same page on this specific case.
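[The containment chain Peter traces - blob into VirtIOGPUBase.hostmem, hostmem into BAR4, the BAR into the PCI address space, and that into system_memory - can be checked with a toy parent-link model. StubMR stands in for QEMU's MemoryRegion, and add_subregion() reduces memory_region_add_subregion() to just the container link; this is a sketch of the reachability argument, not the real memory API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for MemoryRegion: name plus a parent ("container") link. */
typedef struct StubMR {
    const char *name;
    struct StubMR *container;  /* NULL at the root */
} StubMR;

/* memory_region_add_subregion(), reduced to setting the parent link. */
static void add_subregion(StubMR *parent, StubMR *child)
{
    child->container = parent;
}

/* True if mr is (transitively) contained in root. */
static bool contained_in(const StubMR *mr, const StubMR *root)
{
    for (; mr; mr = mr->container) {
        if (mr == root) {
            return true;
        }
    }
    return false;
}
```

Walking the container links from the blob reaches system_memory, which is the condition under which a VFIO listener on address_space_memory would see, and could pin, the region.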
--
Peter Xu
* Re: [PATCH] migration: ram block cpr blockers
2025-02-18 16:10 ` Peter Xu
@ 2025-02-25 15:46 ` Steven Sistare
0 siblings, 0 replies; 9+ messages in thread
From: Steven Sistare @ 2025-02-25 15:46 UTC (permalink / raw)
To: Peter Xu
Cc: qemu-devel, Fabiano Rosas, David Hildenbrand,
Philippe Mathieu-Daude, Paolo Bonzini
On 2/18/2025 11:10 AM, Peter Xu wrote:
> On Fri, Feb 14, 2025 at 03:12:22PM -0500, Steven Sistare wrote:
> [...]
>
> Again, not requesting to resolve this immediately: I have no idea how
> migration works at all with random new ramblocks being added.. but I'd
> still like to check that we're on the same page on this specific case.
I agree with your analysis; thanks for connecting the dots. The vfio memory
listener callback vfio_listener_region_add would be called and would pin the
memory for DMA.
The memory is allocated by an external source, the virglrenderer library, so cpr
is not supported for this device. virtio_gpu_base_device_realize already calls
migrate_add_blocker, which blocks all migration modes.
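[The distinction Steve relies on here - a device-level blocker that blocks all migration modes versus the per-RAMBlock blockers that only need to block cpr-transfer - can be sketched with a mode bitmask. The mode names and mask representation below are illustrative stand-ins, not QEMU's MigMode enum or blocker API.]

```c
#include <assert.h>

/* Illustrative migration modes; real QEMU uses the MigMode enum. */
enum {
    MODE_NORMAL       = 1 << 0,
    MODE_CPR_REBOOT   = 1 << 1,
    MODE_CPR_TRANSFER = 1 << 2,
    MODE_ALL = MODE_NORMAL | MODE_CPR_REBOOT | MODE_CPR_TRANSFER,
};

/* Union of modes blocked by all registered blockers. */
static unsigned blocked_modes;

/* A mode-specific blocker, like the patch's ram block cpr blocker. */
static void add_blocker_modes(unsigned modes)
{
    blocked_modes |= modes;
}

/* An all-mode blocker, like virtio-gpu's device-level one. */
static void add_blocker(void)
{
    blocked_modes |= MODE_ALL;
}

static int migration_is_blocked(unsigned mode)
{
    return (blocked_modes & mode) != 0;
}
```

In this model virtio-gpu's realize-time blocker already covers cpr-transfer, so no separate ram block blocker is needed for its externally allocated memory.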
- Steve
end of thread, other threads: [~2025-02-25 15:47 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-01-17 17:46 [PATCH] migration: ram block cpr blockers Steve Sistare
2025-01-17 18:16 ` Peter Xu
2025-01-17 19:10 ` Steven Sistare
2025-01-17 23:57 ` Peter Xu
2025-01-29 18:20 ` Steven Sistare
2025-01-30 17:01 ` Peter Xu
2025-02-14 20:12 ` Steven Sistare
2025-02-18 16:10 ` Peter Xu
2025-02-25 15:46 ` Steven Sistare