* [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-09 23:51 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
` (5 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Vikram Garhwal, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Virtio devices should never be unplugged at boot time, as they are
similar to PCI passthrough devices.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
hw/i386/xen/xen_platform.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 17457ff3de..3560eaf8c8 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -28,6 +28,7 @@
#include "hw/ide/pci.h"
#include "hw/pci/pci.h"
#include "migration/vmstate.h"
+#include "hw/virtio/virtio-bus.h"
#include "net/net.h"
#include "trace.h"
#include "sysemu/xen.h"
@@ -132,7 +133,8 @@ static void unplug_nic(PCIBus *b, PCIDevice *d, void *o)
/* We have to ignore passthrough devices */
if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
PCI_CLASS_NETWORK_ETHERNET
- && !pci_device_is_passthrough(d)) {
+ && !pci_device_is_passthrough(d)
+ && !qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
object_unparent(OBJECT(d));
}
}
@@ -208,6 +210,10 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *opaque)
/* We have to ignore passthrough devices */
if (pci_device_is_passthrough(d))
return;
+ /* Ignore virtio devices */
+ if (qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
+ return;
+ }
switch (pci_get_word(d->config + PCI_CLASS_DEVICE)) {
case PCI_CLASS_STORAGE_IDE:
--
2.17.1
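For readers skimming the diff, the unplug decision the patch arrives at can be modeled outside QEMU. Below is a standalone sketch, not QEMU code: the struct fields stand in for pci_get_word(PCI_CLASS_DEVICE), pci_device_is_passthrough() and qdev_get_child_bus(..., TYPE_VIRTIO_BUS), and the helper name should_unplug_nic() is ours, chosen for illustration.

```c
/*
 * Standalone sketch of the unplug condition in unplug_nic() after this
 * patch.  The stub fields below stand in for the real QEMU helpers:
 *   class_device    <- pci_get_word(d->config + PCI_CLASS_DEVICE)
 *   is_passthrough  <- pci_device_is_passthrough(d)
 *   has_virtio_bus  <- qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS) != NULL
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PCI_CLASS_NETWORK_ETHERNET 0x0200

struct fake_pci_dev {
    uint16_t class_device;
    bool is_passthrough;
    bool has_virtio_bus;
};

/* Only emulated, non-passthrough, non-virtio NICs get unplugged. */
static bool should_unplug_nic(const struct fake_pci_dev *d)
{
    return d->class_device == PCI_CLASS_NETWORK_ETHERNET
        && !d->is_passthrough
        && !d->has_virtio_bus;
}
```

The qdev_get_child_bus() test works generically because a virtio-pci device always exposes a child bus of TYPE_VIRTIO_BUS, so no per-device ID list is needed to recognize virtio devices.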
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices
2023-10-05 18:16 ` [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices Vikram Garhwal
@ 2023-10-09 23:51 ` Stefano Stabellini
2023-10-10 20:24 ` Vikram Garhwal
0 siblings, 1 reply; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-09 23:51 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, open list:X86 Xen CPUs
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> Virtio devices should never be unplugged at boot time, as they are
> similar to pci passthrough devices.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
> hw/i386/xen/xen_platform.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> index 17457ff3de..3560eaf8c8 100644
> --- a/hw/i386/xen/xen_platform.c
> +++ b/hw/i386/xen/xen_platform.c
> @@ -28,6 +28,7 @@
> #include "hw/ide/pci.h"
> #include "hw/pci/pci.h"
> #include "migration/vmstate.h"
> +#include "hw/virtio/virtio-bus.h"
> #include "net/net.h"
> #include "trace.h"
> #include "sysemu/xen.h"
> @@ -132,7 +133,8 @@ static void unplug_nic(PCIBus *b, PCIDevice *d, void *o)
> /* We have to ignore passthrough devices */
> if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> PCI_CLASS_NETWORK_ETHERNET
> - && !pci_device_is_passthrough(d)) {
> + && !pci_device_is_passthrough(d)
> + && !qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
Please update the in-code comment above to say "ignore passthrough
devices and virtio devices"
> object_unparent(OBJECT(d));
> }
> }
> @@ -208,6 +210,10 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *opaque)
> /* We have to ignore passthrough devices */
> if (pci_device_is_passthrough(d))
> return;
> + /* Ignore virtio devices */
> + if (qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
> + return;
> + }
>
> switch (pci_get_word(d->config + PCI_CLASS_DEVICE)) {
> case PCI_CLASS_STORAGE_IDE:
> --
> 2.17.1
>
* Re: [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices
2023-10-09 23:51 ` Stefano Stabellini
@ 2023-10-10 20:24 ` Vikram Garhwal
2023-10-19 9:17 ` David Woodhouse
0 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-10 20:24 UTC (permalink / raw)
To: Stefano Stabellini
Cc: qemu-devel, Juergen Gross, Anthony Perard, Paul Durrant,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, open list:X86 Xen CPUs
Hi Stefano,
On Mon, Oct 09, 2023 at 04:51:53PM -0700, Stefano Stabellini wrote:
> On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > Virtio devices should never be unplugged at boot time, as they are
> > similar to pci passthrough devices.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > ---
> > hw/i386/xen/xen_platform.c | 8 +++++++-
> > 1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> > index 17457ff3de..3560eaf8c8 100644
> > --- a/hw/i386/xen/xen_platform.c
> > +++ b/hw/i386/xen/xen_platform.c
> > @@ -28,6 +28,7 @@
> > #include "hw/ide/pci.h"
> > #include "hw/pci/pci.h"
> > #include "migration/vmstate.h"
> > +#include "hw/virtio/virtio-bus.h"
> > #include "net/net.h"
> > #include "trace.h"
> > #include "sysemu/xen.h"
> > @@ -132,7 +133,8 @@ static void unplug_nic(PCIBus *b, PCIDevice *d, void *o)
> > /* We have to ignore passthrough devices */
> > if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > PCI_CLASS_NETWORK_ETHERNET
> > - && !pci_device_is_passthrough(d)) {
> > + && !pci_device_is_passthrough(d)
> > + && !qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
>
> Please update the in-code comment above to say "ignore passthrough
> devices and virtio devices"
Sounds good. Will update the in-code comment in v2.
>
>
> > object_unparent(OBJECT(d));
> > }
> > }
> > @@ -208,6 +210,10 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *opaque)
> > /* We have to ignore passthrough devices */
> > if (pci_device_is_passthrough(d))
> > return;
> > + /* Ignore virtio devices */
> > + if (qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
> > + return;
> > + }
> >
> > switch (pci_get_word(d->config + PCI_CLASS_DEVICE)) {
> > case PCI_CLASS_STORAGE_IDE:
> > --
> > 2.17.1
> >
* Re: [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices
2023-10-10 20:24 ` Vikram Garhwal
@ 2023-10-19 9:17 ` David Woodhouse
0 siblings, 0 replies; 23+ messages in thread
From: David Woodhouse @ 2023-10-19 9:17 UTC (permalink / raw)
To: Vikram Garhwal, Stefano Stabellini
Cc: qemu-devel, Juergen Gross, Anthony Perard, Paul Durrant,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, open list:X86 Xen CPUs
On Tue, 2023-10-10 at 13:24 -0700, Vikram Garhwal wrote:
> Hi Stefano,
> On Mon, Oct 09, 2023 at 04:51:53PM -0700, Stefano Stabellini wrote:
> > On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > > From: Juergen Gross <jgross@suse.com>
> > >
> > > Virtio devices should never be unplugged at boot time, as they are
> > > similar to pci passthrough devices.
> > >
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > > ---
> > > hw/i386/xen/xen_platform.c | 8 +++++++-
> > > 1 file changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> > > index 17457ff3de..3560eaf8c8 100644
> > > --- a/hw/i386/xen/xen_platform.c
> > > +++ b/hw/i386/xen/xen_platform.c
> > > @@ -28,6 +28,7 @@
> > > #include "hw/ide/pci.h"
> > > #include "hw/pci/pci.h"
> > > #include "migration/vmstate.h"
> > > +#include "hw/virtio/virtio-bus.h"
> > > #include "net/net.h"
> > > #include "trace.h"
> > > #include "sysemu/xen.h"
> > > @@ -132,7 +133,8 @@ static void unplug_nic(PCIBus *b, PCIDevice *d, void *o)
> > > /* We have to ignore passthrough devices */
> > > if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > > PCI_CLASS_NETWORK_ETHERNET
> > > - && !pci_device_is_passthrough(d)) {
> > > + && !pci_device_is_passthrough(d)
> > > + && !qdev_get_child_bus(&d->qdev, TYPE_VIRTIO_BUS)) {
> >
> > Please update the in-code comment above to say "ignore passthrough
> > devices and virtio devices"
>
> Sounds good. Will update in the code comment in v2.
Please could you also remove the note in docs/system/i386/xen.rst which
mentions having to dissuade the guest kernel from unplugging VirtIO
devices by adding 'xen_unplug_emul=never' to its command line?
* [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
2023-10-05 18:16 ` [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:02 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
` (4 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Vikram Garhwal, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Add a memory region which can be used to automatically map granted
memory. It starts at 0x8000000000000000ULL so that it can be
distinguished from normal RAM.
For this reason the xen.ram memory region is expanded, which has no
further impact, as it is used only as a container for the real RAM
regions and now the grant region.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
hw/i386/xen/xen-hvm.c | 3 ++
hw/xen/xen-hvm-common.c | 4 +--
hw/xen/xen-mapcache.c | 27 ++++++++++++++
include/exec/ram_addr.h | 1 +
include/hw/xen/xen-hvm-common.h | 2 ++
include/hw/xen/xen_pvdev.h | 3 ++
include/sysemu/xen-mapcache.h | 3 ++
softmmu/physmem.c | 62 +++++++++++++++++++++------------
8 files changed, 80 insertions(+), 25 deletions(-)
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index f42621e674..67a55558a6 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
x86ms->above_4g_mem_size);
memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
}
+
+ /* Add grant mappings as a pseudo RAM region. */
+ ram_grants = *xen_init_grant_ram();
}
static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 565dc39c8f..b7255977a5 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -9,7 +9,7 @@
#include "hw/boards.h"
#include "hw/xen/arch_hvm.h"
-MemoryRegion ram_memory;
+MemoryRegion ram_memory, ram_grants;
void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
Error **errp)
@@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
return;
}
- if (mr == &ram_memory) {
+ if (mr == &ram_memory || mr == &ram_grants) {
return;
}
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index f7d974677d..8115c44c00 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -14,7 +14,9 @@
#include <sys/resource.h>
+#include "hw/xen/xen-hvm-common.h"
#include "hw/xen/xen_native.h"
+#include "hw/xen/xen_pvdev.h"
#include "qemu/bitmap.h"
#include "sysemu/runstate.h"
@@ -597,3 +599,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
mapcache_unlock();
return p;
}
+
+MemoryRegion *xen_init_grant_ram(void)
+{
+ RAMBlock *block;
+
+ memory_region_init(&ram_grants, NULL, "xen.grants",
+ XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
+ block = g_malloc0(sizeof(*block));
+ block->mr = &ram_grants;
+ block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
+ block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
+ block->fd = -1;
+ block->page_size = XC_PAGE_SIZE;
+ block->host = (void *)XEN_GRANT_ADDR_OFF;
+ block->offset = XEN_GRANT_ADDR_OFF;
+ block->flags = RAM_PREALLOC;
+ ram_grants.ram_block = block;
+ ram_grants.ram = true;
+ ram_grants.terminates = true;
+ ram_block_add_list(block);
+ memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
+ &ram_grants);
+
+ return &ram_grants;
+}
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 90676093f5..c0b5f9a7d0 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -139,6 +139,7 @@ void qemu_ram_free(RAMBlock *block);
int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp);
void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length);
+void ram_block_add_list(RAMBlock *new_block);
/* Clear whole block of mem */
static inline void qemu_ram_block_writeback(RAMBlock *block)
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
index 4e9904f1a6..0d300ba898 100644
--- a/include/hw/xen/xen-hvm-common.h
+++ b/include/hw/xen/xen-hvm-common.h
@@ -17,6 +17,8 @@
#include <xen/hvm/ioreq.h>
extern MemoryRegion ram_memory;
+
+extern MemoryRegion ram_grants;
extern MemoryListener xen_io_listener;
extern DeviceListener xen_device_listener;
diff --git a/include/hw/xen/xen_pvdev.h b/include/hw/xen/xen_pvdev.h
index ddad4b9f36..0f1b5edfa9 100644
--- a/include/hw/xen/xen_pvdev.h
+++ b/include/hw/xen/xen_pvdev.h
@@ -80,4 +80,7 @@ int xen_pv_send_notify(struct XenLegacyDevice *xendev);
void xen_pv_printf(struct XenLegacyDevice *xendev, int msg_level,
const char *fmt, ...) G_GNUC_PRINTF(3, 4);
+#define XEN_GRANT_ADDR_OFF 0x8000000000000000ULL
+#define XEN_MAX_VIRTIO_GRANTS 65536
+
#endif /* QEMU_HW_XEN_PVDEV_H */
diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
index c8e7c2f6cf..f4bedb1c11 100644
--- a/include/sysemu/xen-mapcache.h
+++ b/include/sysemu/xen-mapcache.h
@@ -10,6 +10,7 @@
#define XEN_MAPCACHE_H
#include "exec/cpu-common.h"
+#include "exec/ram_addr.h"
typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr phys_offset,
ram_addr_t size);
@@ -25,6 +26,8 @@ void xen_invalidate_map_cache(void);
uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
hwaddr new_phys_addr,
hwaddr size);
+MemoryRegion *xen_init_grant_ram(void);
+
#else
static inline void xen_map_cache_init(phys_offset_to_gaddr_t f,
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 309653c722..e182a2fa07 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -1803,12 +1803,47 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
}
}
+static void ram_block_add_list_locked(RAMBlock *new_block)
+ {
+ RAMBlock *block;
+ RAMBlock *last_block = NULL;
+
+ /*
+ * Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
+ * QLIST (which has an RCU-friendly variant) does not have insertion at
+ * tail, so save the last element in last_block.
+ */
+ RAMBLOCK_FOREACH(block) {
+ last_block = block;
+ if (block->max_length < new_block->max_length) {
+ break;
+ }
+ }
+ if (block) {
+ QLIST_INSERT_BEFORE_RCU(block, new_block, next);
+ } else if (last_block) {
+ QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
+ } else { /* list is empty */
+ QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
+ }
+ ram_list.mru_block = NULL;
+
+ /* Write list before version */
+ smp_wmb();
+ ram_list.version++;
+}
+
+void ram_block_add_list(RAMBlock *new_block)
+{
+ qemu_mutex_lock_ramlist();
+ ram_block_add_list_locked(new_block);
+ qemu_mutex_unlock_ramlist();
+}
+
static void ram_block_add(RAMBlock *new_block, Error **errp)
{
const bool noreserve = qemu_ram_is_noreserve(new_block);
const bool shared = qemu_ram_is_shared(new_block);
- RAMBlock *block;
- RAMBlock *last_block = NULL;
ram_addr_t old_ram_size, new_ram_size;
Error *err = NULL;
@@ -1846,28 +1881,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
if (new_ram_size > old_ram_size) {
dirty_memory_extend(old_ram_size, new_ram_size);
}
- /* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
- * QLIST (which has an RCU-friendly variant) does not have insertion at
- * tail, so save the last element in last_block.
- */
- RAMBLOCK_FOREACH(block) {
- last_block = block;
- if (block->max_length < new_block->max_length) {
- break;
- }
- }
- if (block) {
- QLIST_INSERT_BEFORE_RCU(block, new_block, next);
- } else if (last_block) {
- QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
- } else { /* list is empty */
- QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
- }
- ram_list.mru_block = NULL;
- /* Write list before version */
- smp_wmb();
- ram_list.version++;
+ ram_block_add_list_locked(new_block);
+
qemu_mutex_unlock_ramlist();
cpu_physical_memory_set_dirty_range(new_block->offset,
--
2.17.1
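The address-space layout this patch sets up can be summarized with a short sketch. The constants are copied from the hunk in xen_pvdev.h above; the helper names is_grant_addr() and grant_ref_of() are illustrative, not QEMU functions, and the ref-from-offset encoding shown is an assumption about how later patches in the series use the region.

```c
/*
 * Sketch of the grant pseudo-RAM layout: the region starts at
 * XEN_GRANT_ADDR_OFF (top bit set), above any real guest RAM address,
 * so grant addresses are recognizable by value alone.  Constants taken
 * from the patch; helper names are ours.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define XC_PAGE_SIZE          4096ULL
#define XEN_GRANT_ADDR_OFF    0x8000000000000000ULL
#define XEN_MAX_VIRTIO_GRANTS 65536ULL

/* True iff gpa falls inside the xen.grants region. */
static bool is_grant_addr(uint64_t gpa)
{
    return gpa >= XEN_GRANT_ADDR_OFF &&
           gpa < XEN_GRANT_ADDR_OFF + XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
}

/* One grant reference per page of offset within the region (assumed). */
static uint64_t grant_ref_of(uint64_t gpa)
{
    return (gpa - XEN_GRANT_ADDR_OFF) / XC_PAGE_SIZE;
}
```

Because normal RAM (xen.ram, ram_hi) sits far below the top bit, no overlap check against real RAM is needed when the region is added as a subregion of system memory.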
* Re: [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings
2023-10-05 18:16 ` [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
@ 2023-10-10 0:02 ` Stefano Stabellini
2023-10-10 0:29 ` Stefano Stabellini
2023-10-10 21:25 ` Vikram Garhwal
0 siblings, 2 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:02 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> Add a memory region which can be used to automatically map granted
> memory. It is starting at 0x8000000000000000ULL in order to be able to
> distinguish it from normal RAM.
>
> For this reason the xen.ram memory region is expanded, which has no
> further impact as it is used just as a container of the real RAM
> regions and now the grant region.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
This patch doesn't apply to staging anymore
> ---
> hw/i386/xen/xen-hvm.c | 3 ++
> hw/xen/xen-hvm-common.c | 4 +--
> hw/xen/xen-mapcache.c | 27 ++++++++++++++
> include/exec/ram_addr.h | 1 +
> include/hw/xen/xen-hvm-common.h | 2 ++
> include/hw/xen/xen_pvdev.h | 3 ++
> include/sysemu/xen-mapcache.h | 3 ++
> softmmu/physmem.c | 62 +++++++++++++++++++++------------
> 8 files changed, 80 insertions(+), 25 deletions(-)
>
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index f42621e674..67a55558a6 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
> x86ms->above_4g_mem_size);
> memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
> }
> +
> + /* Add grant mappings as a pseudo RAM region. */
> + ram_grants = *xen_init_grant_ram();
> }
>
> static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
> diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> index 565dc39c8f..b7255977a5 100644
> --- a/hw/xen/xen-hvm-common.c
> +++ b/hw/xen/xen-hvm-common.c
> @@ -9,7 +9,7 @@
> #include "hw/boards.h"
> #include "hw/xen/arch_hvm.h"
>
> -MemoryRegion ram_memory;
> +MemoryRegion ram_memory, ram_grants;
>
> void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> Error **errp)
> @@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> return;
> }
>
> - if (mr == &ram_memory) {
> + if (mr == &ram_memory || mr == &ram_grants) {
> return;
> }
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index f7d974677d..8115c44c00 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -14,7 +14,9 @@
>
> #include <sys/resource.h>
>
> +#include "hw/xen/xen-hvm-common.h"
> #include "hw/xen/xen_native.h"
> +#include "hw/xen/xen_pvdev.h"
> #include "qemu/bitmap.h"
>
> #include "sysemu/runstate.h"
> @@ -597,3 +599,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> mapcache_unlock();
> return p;
> }
> +
> +MemoryRegion *xen_init_grant_ram(void)
> +{
> + RAMBlock *block;
> +
> + memory_region_init(&ram_grants, NULL, "xen.grants",
> + XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> + block = g_malloc0(sizeof(*block));
> + block->mr = &ram_grants;
> + block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> + block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> + block->fd = -1;
> + block->page_size = XC_PAGE_SIZE;
> + block->host = (void *)XEN_GRANT_ADDR_OFF;
> + block->offset = XEN_GRANT_ADDR_OFF;
> + block->flags = RAM_PREALLOC;
> + ram_grants.ram_block = block;
> + ram_grants.ram = true;
> + ram_grants.terminates = true;
> + ram_block_add_list(block);
> + memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> + &ram_grants);
> +
> + return &ram_grants;
It doesn't look like xen_init_grant_ram has anything to do with the
mapcache. It should be in another file. Maybe ./hw/xen/xen-hvm-common.c
or ./hw/i386/xen/xen-hvm.c (but this is x86 specific and we need grants
on ARM too)
> +}
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 90676093f5..c0b5f9a7d0 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -139,6 +139,7 @@ void qemu_ram_free(RAMBlock *block);
> int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp);
>
> void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length);
> +void ram_block_add_list(RAMBlock *new_block);
>
> /* Clear whole block of mem */
> static inline void qemu_ram_block_writeback(RAMBlock *block)
> diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
> index 4e9904f1a6..0d300ba898 100644
> --- a/include/hw/xen/xen-hvm-common.h
> +++ b/include/hw/xen/xen-hvm-common.h
> @@ -17,6 +17,8 @@
> #include <xen/hvm/ioreq.h>
>
> extern MemoryRegion ram_memory;
> +
> +extern MemoryRegion ram_grants;
> extern MemoryListener xen_io_listener;
> extern DeviceListener xen_device_listener;
>
> diff --git a/include/hw/xen/xen_pvdev.h b/include/hw/xen/xen_pvdev.h
> index ddad4b9f36..0f1b5edfa9 100644
> --- a/include/hw/xen/xen_pvdev.h
> +++ b/include/hw/xen/xen_pvdev.h
> @@ -80,4 +80,7 @@ int xen_pv_send_notify(struct XenLegacyDevice *xendev);
> void xen_pv_printf(struct XenLegacyDevice *xendev, int msg_level,
> const char *fmt, ...) G_GNUC_PRINTF(3, 4);
>
> +#define XEN_GRANT_ADDR_OFF 0x8000000000000000ULL
> +#define XEN_MAX_VIRTIO_GRANTS 65536
> +
> #endif /* QEMU_HW_XEN_PVDEV_H */
> diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
> index c8e7c2f6cf..f4bedb1c11 100644
> --- a/include/sysemu/xen-mapcache.h
> +++ b/include/sysemu/xen-mapcache.h
> @@ -10,6 +10,7 @@
> #define XEN_MAPCACHE_H
>
> #include "exec/cpu-common.h"
> +#include "exec/ram_addr.h"
>
> typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr phys_offset,
> ram_addr_t size);
> @@ -25,6 +26,8 @@ void xen_invalidate_map_cache(void);
> uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> hwaddr new_phys_addr,
> hwaddr size);
> +MemoryRegion *xen_init_grant_ram(void);
> +
> #else
>
> static inline void xen_map_cache_init(phys_offset_to_gaddr_t f,
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 309653c722..e182a2fa07 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
You might want to split this change out of this patch to make it easier
to get the physmem.c maintainers' attention
> @@ -1803,12 +1803,47 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
> }
> }
>
> +static void ram_block_add_list_locked(RAMBlock *new_block)
> + {
> + RAMBlock *block;
> + RAMBlock *last_block = NULL;
> +
> + /*
> + * Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
> + * QLIST (which has an RCU-friendly variant) does not have insertion at
> + * tail, so save the last element in last_block.
> + */
> + RAMBLOCK_FOREACH(block) {
> + last_block = block;
> + if (block->max_length < new_block->max_length) {
> + break;
> + }
> + }
> + if (block) {
> + QLIST_INSERT_BEFORE_RCU(block, new_block, next);
> + } else if (last_block) {
> + QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
> + } else { /* list is empty */
> + QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
> + }
> + ram_list.mru_block = NULL;
> +
> + /* Write list before version */
> + smp_wmb();
> + ram_list.version++;
> +}
> +
> +void ram_block_add_list(RAMBlock *new_block)
> +{
> + qemu_mutex_lock_ramlist();
> + ram_block_add_list_locked(new_block);
> + qemu_mutex_unlock_ramlist();
> +}
> +
> static void ram_block_add(RAMBlock *new_block, Error **errp)
> {
> const bool noreserve = qemu_ram_is_noreserve(new_block);
> const bool shared = qemu_ram_is_shared(new_block);
> - RAMBlock *block;
> - RAMBlock *last_block = NULL;
> ram_addr_t old_ram_size, new_ram_size;
> Error *err = NULL;
>
> @@ -1846,28 +1881,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
> if (new_ram_size > old_ram_size) {
> dirty_memory_extend(old_ram_size, new_ram_size);
> }
> - /* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
> - * QLIST (which has an RCU-friendly variant) does not have insertion at
> - * tail, so save the last element in last_block.
> - */
> - RAMBLOCK_FOREACH(block) {
> - last_block = block;
> - if (block->max_length < new_block->max_length) {
> - break;
> - }
> - }
> - if (block) {
> - QLIST_INSERT_BEFORE_RCU(block, new_block, next);
> - } else if (last_block) {
> - QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
> - } else { /* list is empty */
> - QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
> - }
> - ram_list.mru_block = NULL;
>
> - /* Write list before version */
> - smp_wmb();
> - ram_list.version++;
> + ram_block_add_list_locked(new_block);
> +
> qemu_mutex_unlock_ramlist();
>
> cpu_physical_memory_set_dirty_range(new_block->offset,
> --
> 2.17.1
>
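The ordering invariant that the factored-out ram_block_add_list_locked() preserves — RAM blocks sorted from biggest to smallest max_length — can be exercised in isolation. This minimal model uses a plain singly linked list in place of QEMU's RCU-friendly QLIST, and the names are ours, not QEMU's.

```c
/*
 * Minimal model of ram_block_add_list_locked()'s ordering rule:
 * insert keeping the list sorted by max_length, descending.  Equal
 * lengths go after existing entries, matching the "<" break in the
 * QEMU loop.  Plain list instead of QLIST/RCU, for illustration only.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct blk {
    uint64_t max_length;
    struct blk *next;
};

static void blk_insert(struct blk **head, struct blk *nb)
{
    struct blk **p = head;

    /* Walk past every block at least as large as the new one. */
    while (*p && (*p)->max_length >= nb->max_length) {
        p = &(*p)->next;
    }
    nb->next = *p;
    *p = nb;
}
```

QEMU needs the extra last_block bookkeeping only because QLIST has no tail insertion; with a pointer-to-pointer walk, as above, the three insertion cases (before a smaller block, after the last block, empty list) collapse into one.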
* Re: [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings
2023-10-10 0:02 ` Stefano Stabellini
@ 2023-10-10 0:29 ` Stefano Stabellini
2023-10-10 21:25 ` Vikram Garhwal
1 sibling, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:29 UTC (permalink / raw)
To: Stefano Stabellini
Cc: Vikram Garhwal, qemu-devel, Juergen Gross, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Mon, 9 Oct 2023, Stefano Stabellini wrote:
> On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > Add a memory region which can be used to automatically map granted
> > memory. It is starting at 0x8000000000000000ULL in order to be able to
> > distinguish it from normal RAM.
> >
> > For this reason the xen.ram memory region is expanded, which has no
> > further impact as it is used just as a container of the real RAM
> > regions and now the grant region.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> This patch doesn't apply to staging anymore
>
>
> > ---
> > hw/i386/xen/xen-hvm.c | 3 ++
> > hw/xen/xen-hvm-common.c | 4 +--
> > hw/xen/xen-mapcache.c | 27 ++++++++++++++
> > include/exec/ram_addr.h | 1 +
> > include/hw/xen/xen-hvm-common.h | 2 ++
> > include/hw/xen/xen_pvdev.h | 3 ++
> > include/sysemu/xen-mapcache.h | 3 ++
> > softmmu/physmem.c | 62 +++++++++++++++++++++------------
> > 8 files changed, 80 insertions(+), 25 deletions(-)
> >
> > diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> > index f42621e674..67a55558a6 100644
> > --- a/hw/i386/xen/xen-hvm.c
> > +++ b/hw/i386/xen/xen-hvm.c
> > @@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
> > x86ms->above_4g_mem_size);
> > memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
> > }
> > +
> > + /* Add grant mappings as a pseudo RAM region. */
> > + ram_grants = *xen_init_grant_ram();
> > }
> >
> > static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
> > diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> > index 565dc39c8f..b7255977a5 100644
> > --- a/hw/xen/xen-hvm-common.c
> > +++ b/hw/xen/xen-hvm-common.c
> > @@ -9,7 +9,7 @@
> > #include "hw/boards.h"
> > #include "hw/xen/arch_hvm.h"
> >
> > -MemoryRegion ram_memory;
> > +MemoryRegion ram_memory, ram_grants;
> >
> > void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > Error **errp)
> > @@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > return;
> > }
> >
> > - if (mr == &ram_memory) {
> > + if (mr == &ram_memory || mr == &ram_grants) {
> > return;
> > }
> >
> > diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> > index f7d974677d..8115c44c00 100644
> > --- a/hw/xen/xen-mapcache.c
> > +++ b/hw/xen/xen-mapcache.c
> > @@ -14,7 +14,9 @@
> >
> > #include <sys/resource.h>
> >
> > +#include "hw/xen/xen-hvm-common.h"
> > #include "hw/xen/xen_native.h"
> > +#include "hw/xen/xen_pvdev.h"
> > #include "qemu/bitmap.h"
> >
> > #include "sysemu/runstate.h"
> > @@ -597,3 +599,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> > mapcache_unlock();
> > return p;
> > }
> > +
> > +MemoryRegion *xen_init_grant_ram(void)
> > +{
> > + RAMBlock *block;
> > +
> > + memory_region_init(&ram_grants, NULL, "xen.grants",
> > + XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> > + block = g_malloc0(sizeof(*block));
> > + block->mr = &ram_grants;
> > + block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > + block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > + block->fd = -1;
> > + block->page_size = XC_PAGE_SIZE;
> > + block->host = (void *)XEN_GRANT_ADDR_OFF;
> > + block->offset = XEN_GRANT_ADDR_OFF;
> > + block->flags = RAM_PREALLOC;
> > + ram_grants.ram_block = block;
> > + ram_grants.ram = true;
> > + ram_grants.terminates = true;
> > + ram_block_add_list(block);
> > + memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> > + &ram_grants);
> > +
> > + return &ram_grants;
>
> It doesn't look like xen_init_grant_ram has anything to do with the
> mapcache. It should be in another file. Maybe ./hw/xen/xen-hvm-common.c
> or ./hw/i386/xen/xen-hvm.c (but this is x86 specific and we need grants
> on ARM too)
Now having seen all the other patches, it might be OK to keep this here.
I am OK with this patch once it is rebased on the latest staging. I
would still advise splitting the physmem.c changes into a separate patch.
* Re: [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings
2023-10-10 0:02 ` Stefano Stabellini
2023-10-10 0:29 ` Stefano Stabellini
@ 2023-10-10 21:25 ` Vikram Garhwal
2023-10-10 22:30 ` Stefano Stabellini
1 sibling, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-10 21:25 UTC (permalink / raw)
To: Stefano Stabellini
Cc: qemu-devel, Juergen Gross, Anthony Perard, Paul Durrant,
Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
Hi Stefano,
On Mon, Oct 09, 2023 at 05:02:14PM -0700, Stefano Stabellini wrote:
> On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > Add a memory region which can be used to automatically map granted
> > memory. It starts at 0x8000000000000000ULL so that it can be
> > distinguished from normal RAM.
> >
> > For this reason the xen.ram memory region is expanded, which has no
> > further impact as it is used just as a container of the real RAM
> > regions and now the grant region.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> This patch doesn't apply to staging anymore
Will rebase it. I had rebased it against the master branch.
>
>
> > ---
> > hw/i386/xen/xen-hvm.c | 3 ++
> > hw/xen/xen-hvm-common.c | 4 +--
> > hw/xen/xen-mapcache.c | 27 ++++++++++++++
> > include/exec/ram_addr.h | 1 +
> > include/hw/xen/xen-hvm-common.h | 2 ++
> > include/hw/xen/xen_pvdev.h | 3 ++
> > include/sysemu/xen-mapcache.h | 3 ++
> > softmmu/physmem.c | 62 +++++++++++++++++++++------------
> > 8 files changed, 80 insertions(+), 25 deletions(-)
> >
> > diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> > index f42621e674..67a55558a6 100644
> > --- a/hw/i386/xen/xen-hvm.c
> > +++ b/hw/i386/xen/xen-hvm.c
> > @@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
> > x86ms->above_4g_mem_size);
> > memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
> > }
> > +
> > + /* Add grant mappings as a pseudo RAM region. */
> > + ram_grants = *xen_init_grant_ram();
> > }
> >
> > static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
> > diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> > index 565dc39c8f..b7255977a5 100644
> > --- a/hw/xen/xen-hvm-common.c
> > +++ b/hw/xen/xen-hvm-common.c
> > @@ -9,7 +9,7 @@
> > #include "hw/boards.h"
> > #include "hw/xen/arch_hvm.h"
> >
> > -MemoryRegion ram_memory;
> > +MemoryRegion ram_memory, ram_grants;
> >
> > void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > Error **errp)
> > @@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > return;
> > }
> >
> > - if (mr == &ram_memory) {
> > + if (mr == &ram_memory || mr == &ram_grants) {
> > return;
> > }
> >
> > diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> > index f7d974677d..8115c44c00 100644
> > --- a/hw/xen/xen-mapcache.c
> > +++ b/hw/xen/xen-mapcache.c
> > @@ -14,7 +14,9 @@
> >
> > #include <sys/resource.h>
> >
> > +#include "hw/xen/xen-hvm-common.h"
> > #include "hw/xen/xen_native.h"
> > +#include "hw/xen/xen_pvdev.h"
> > #include "qemu/bitmap.h"
> >
> > #include "sysemu/runstate.h"
> > @@ -597,3 +599,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> > mapcache_unlock();
> > return p;
> > }
> > +
> > +MemoryRegion *xen_init_grant_ram(void)
> > +{
> > + RAMBlock *block;
> > +
> > + memory_region_init(&ram_grants, NULL, "xen.grants",
> > + XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> > + block = g_malloc0(sizeof(*block));
> > + block->mr = &ram_grants;
> > + block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > + block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > + block->fd = -1;
> > + block->page_size = XC_PAGE_SIZE;
> > + block->host = (void *)XEN_GRANT_ADDR_OFF;
> > + block->offset = XEN_GRANT_ADDR_OFF;
> > + block->flags = RAM_PREALLOC;
> > + ram_grants.ram_block = block;
> > + ram_grants.ram = true;
> > + ram_grants.terminates = true;
> > + ram_block_add_list(block);
> > + memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> > + &ram_grants);
> > +
> > + return &ram_grants;
>
> It doesn't look like xen_init_grant_ram has anything to do with the
> mapcache. It should be in another file. Maybe ./hw/xen/xen-hvm-common.c
> or ./hw/i386/xen/xen-hvm.c (but this is x86 specific and we need grants
> on ARM too)
Do you mean to move all grant-related functions? Moving this one alone will not
be sufficient, as there are a lot of new grant-related functions added in later patches.
I am okay with moving all of them to xen-hvm-common.c.
The following code movement would happen in this case:
1. Move all grant-related static functions to xen-hvm-common.c:
xen_ram_addr_from_grant_cache(), xen_ram_addr_from_mapcache(),
xen_map_grant_dyn(), xen_unmap_grant_dyn() and xen_init_grant_ram().
2. Remove static from xen_ram_addr_from_mapcache_try().
Do these changes look good?
>
>
> > +}
> > diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> > index 90676093f5..c0b5f9a7d0 100644
> > --- a/include/exec/ram_addr.h
> > +++ b/include/exec/ram_addr.h
> > @@ -139,6 +139,7 @@ void qemu_ram_free(RAMBlock *block);
> > int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp);
> >
> > void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length);
> > +void ram_block_add_list(RAMBlock *new_block);
> >
> > /* Clear whole block of mem */
> > static inline void qemu_ram_block_writeback(RAMBlock *block)
> > diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
> > index 4e9904f1a6..0d300ba898 100644
> > --- a/include/hw/xen/xen-hvm-common.h
> > +++ b/include/hw/xen/xen-hvm-common.h
> > @@ -17,6 +17,8 @@
> > #include <xen/hvm/ioreq.h>
> >
> > extern MemoryRegion ram_memory;
> > +
> > +extern MemoryRegion ram_grants;
> > extern MemoryListener xen_io_listener;
> > extern DeviceListener xen_device_listener;
> >
> > diff --git a/include/hw/xen/xen_pvdev.h b/include/hw/xen/xen_pvdev.h
> > index ddad4b9f36..0f1b5edfa9 100644
> > --- a/include/hw/xen/xen_pvdev.h
> > +++ b/include/hw/xen/xen_pvdev.h
> > @@ -80,4 +80,7 @@ int xen_pv_send_notify(struct XenLegacyDevice *xendev);
> > void xen_pv_printf(struct XenLegacyDevice *xendev, int msg_level,
> > const char *fmt, ...) G_GNUC_PRINTF(3, 4);
> >
> > +#define XEN_GRANT_ADDR_OFF 0x8000000000000000ULL
> > +#define XEN_MAX_VIRTIO_GRANTS 65536
> > +
> > #endif /* QEMU_HW_XEN_PVDEV_H */
> > diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
> > index c8e7c2f6cf..f4bedb1c11 100644
> > --- a/include/sysemu/xen-mapcache.h
> > +++ b/include/sysemu/xen-mapcache.h
> > @@ -10,6 +10,7 @@
> > #define XEN_MAPCACHE_H
> >
> > #include "exec/cpu-common.h"
> > +#include "exec/ram_addr.h"
> >
> > typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr phys_offset,
> > ram_addr_t size);
> > @@ -25,6 +26,8 @@ void xen_invalidate_map_cache(void);
> > uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> > hwaddr new_phys_addr,
> > hwaddr size);
> > +MemoryRegion *xen_init_grant_ram(void);
> > +
> > #else
> >
> > static inline void xen_map_cache_init(phys_offset_to_gaddr_t f,
> > diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> > index 309653c722..e182a2fa07 100644
> > --- a/softmmu/physmem.c
> > +++ b/softmmu/physmem.c
>
>
> You might want to split this change out of this patch to make it easier
> to get the physmem.c maintainers' attention
Understood, will create a new patch for this change.
>
>
> > @@ -1803,12 +1803,47 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
> > }
> > }
> >
> > +static void ram_block_add_list_locked(RAMBlock *new_block)
> > + {
> > + RAMBlock *block;
> > + RAMBlock *last_block = NULL;
> > +
> > + /*
> > + * Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
> > + * QLIST (which has an RCU-friendly variant) does not have insertion at
> > + * tail, so save the last element in last_block.
> > + */
> > + RAMBLOCK_FOREACH(block) {
> > + last_block = block;
> > + if (block->max_length < new_block->max_length) {
> > + break;
> > + }
> > + }
> > + if (block) {
> > + QLIST_INSERT_BEFORE_RCU(block, new_block, next);
> > + } else if (last_block) {
> > + QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
> > + } else { /* list is empty */
> > + QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
> > + }
> > + ram_list.mru_block = NULL;
> > +
> > + /* Write list before version */
> > + smp_wmb();
> > + ram_list.version++;
> > +}
> > +
> > +void ram_block_add_list(RAMBlock *new_block)
> > +{
> > + qemu_mutex_lock_ramlist();
> > + ram_block_add_list_locked(new_block);
> > + qemu_mutex_unlock_ramlist();
> > +}
> > +
> > static void ram_block_add(RAMBlock *new_block, Error **errp)
> > {
> > const bool noreserve = qemu_ram_is_noreserve(new_block);
> > const bool shared = qemu_ram_is_shared(new_block);
> > - RAMBlock *block;
> > - RAMBlock *last_block = NULL;
> > ram_addr_t old_ram_size, new_ram_size;
> > Error *err = NULL;
> >
> > @@ -1846,28 +1881,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
> > if (new_ram_size > old_ram_size) {
> > dirty_memory_extend(old_ram_size, new_ram_size);
> > }
> > - /* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
> > - * QLIST (which has an RCU-friendly variant) does not have insertion at
> > - * tail, so save the last element in last_block.
> > - */
> > - RAMBLOCK_FOREACH(block) {
> > - last_block = block;
> > - if (block->max_length < new_block->max_length) {
> > - break;
> > - }
> > - }
> > - if (block) {
> > - QLIST_INSERT_BEFORE_RCU(block, new_block, next);
> > - } else if (last_block) {
> > - QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
> > - } else { /* list is empty */
> > - QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
> > - }
> > - ram_list.mru_block = NULL;
> >
> > - /* Write list before version */
> > - smp_wmb();
> > - ram_list.version++;
> > + ram_block_add_list_locked(new_block);
> > +
> > qemu_mutex_unlock_ramlist();
> >
> > cpu_physical_memory_set_dirty_range(new_block->offset,
> > --
> > 2.17.1
> >
^ permalink raw reply [flat|nested] 23+ messages in thread
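The list insertion that patch 2/7 factors out into ram_block_add_list_locked() keeps RAM blocks sorted from biggest to smallest max_length. A minimal sketch of the same ordering on a plain singly linked list (illustrative types, not QEMU's RAMBlock or QLIST):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for RAMBlock: a list kept sorted from biggest to
 * smallest max_length, as ram_block_add_list_locked() does. */
struct block {
    unsigned long max_length;
    struct block *next;
};

static struct block *blocks; /* list head, starts out empty */

static struct block *mk(unsigned long len)
{
    struct block *b = calloc(1, sizeof(*b));
    b->max_length = len;
    return b;
}

static void block_add_sorted(struct block *nb)
{
    struct block **pp = &blocks;

    /* Walk past every block at least as large as the new one, so equal-sized
     * blocks keep their insertion order (matching the '<' test in QEMU). */
    while (*pp && (*pp)->max_length >= nb->max_length) {
        pp = &(*pp)->next;
    }
    nb->next = *pp;
    *pp = nb;
}
```

QEMU's version needs the last_block bookkeeping because the RCU-friendly QLIST has no tail insertion; the pointer-to-pointer walk above sidesteps that, at the cost of not being RCU-safe.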
* Re: [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings
2023-10-10 21:25 ` Vikram Garhwal
@ 2023-10-10 22:30 ` Stefano Stabellini
0 siblings, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 22:30 UTC (permalink / raw)
To: Vikram Garhwal
Cc: Stefano Stabellini, qemu-devel, Juergen Gross, Anthony Perard,
Paul Durrant, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
Richard Henderson, Eduardo Habkost, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Tue, 10 Oct 2023, Vikram Garhwal wrote:
> On Mon, Oct 09, 2023 at 05:02:14PM -0700, Stefano Stabellini wrote:
> > On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > > From: Juergen Gross <jgross@suse.com>
> > >
> > > Add a memory region which can be used to automatically map granted
> > > memory. It starts at 0x8000000000000000ULL so that it can be
> > > distinguished from normal RAM.
> > >
> > > For this reason the xen.ram memory region is expanded, which has no
> > > further impact as it is used just as a container of the real RAM
> > > regions and now the grant region.
> > >
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> >
> > This patch doesn't apply to staging anymore
> Will rebase it. I had rebased it against the master branch.
> >
> >
> > > ---
> > > hw/i386/xen/xen-hvm.c | 3 ++
> > > hw/xen/xen-hvm-common.c | 4 +--
> > > hw/xen/xen-mapcache.c | 27 ++++++++++++++
> > > include/exec/ram_addr.h | 1 +
> > > include/hw/xen/xen-hvm-common.h | 2 ++
> > > include/hw/xen/xen_pvdev.h | 3 ++
> > > include/sysemu/xen-mapcache.h | 3 ++
> > > softmmu/physmem.c | 62 +++++++++++++++++++++------------
> > > 8 files changed, 80 insertions(+), 25 deletions(-)
> > >
> > > diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> > > index f42621e674..67a55558a6 100644
> > > --- a/hw/i386/xen/xen-hvm.c
> > > +++ b/hw/i386/xen/xen-hvm.c
> > > @@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
> > > x86ms->above_4g_mem_size);
> > > memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
> > > }
> > > +
> > > + /* Add grant mappings as a pseudo RAM region. */
> > > + ram_grants = *xen_init_grant_ram();
> > > }
> > >
> > > static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
> > > diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> > > index 565dc39c8f..b7255977a5 100644
> > > --- a/hw/xen/xen-hvm-common.c
> > > +++ b/hw/xen/xen-hvm-common.c
> > > @@ -9,7 +9,7 @@
> > > #include "hw/boards.h"
> > > #include "hw/xen/arch_hvm.h"
> > >
> > > -MemoryRegion ram_memory;
> > > +MemoryRegion ram_memory, ram_grants;
> > >
> > > void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > > Error **errp)
> > > @@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> > > return;
> > > }
> > >
> > > - if (mr == &ram_memory) {
> > > + if (mr == &ram_memory || mr == &ram_grants) {
> > > return;
> > > }
> > >
> > > diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> > > index f7d974677d..8115c44c00 100644
> > > --- a/hw/xen/xen-mapcache.c
> > > +++ b/hw/xen/xen-mapcache.c
> > > @@ -14,7 +14,9 @@
> > >
> > > #include <sys/resource.h>
> > >
> > > +#include "hw/xen/xen-hvm-common.h"
> > > #include "hw/xen/xen_native.h"
> > > +#include "hw/xen/xen_pvdev.h"
> > > #include "qemu/bitmap.h"
> > >
> > > #include "sysemu/runstate.h"
> > > @@ -597,3 +599,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> > > mapcache_unlock();
> > > return p;
> > > }
> > > +
> > > +MemoryRegion *xen_init_grant_ram(void)
> > > +{
> > > + RAMBlock *block;
> > > +
> > > + memory_region_init(&ram_grants, NULL, "xen.grants",
> > > + XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> > > + block = g_malloc0(sizeof(*block));
> > > + block->mr = &ram_grants;
> > > + block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > > + block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
> > > + block->fd = -1;
> > > + block->page_size = XC_PAGE_SIZE;
> > > + block->host = (void *)XEN_GRANT_ADDR_OFF;
> > > + block->offset = XEN_GRANT_ADDR_OFF;
> > > + block->flags = RAM_PREALLOC;
> > > + ram_grants.ram_block = block;
> > > + ram_grants.ram = true;
> > > + ram_grants.terminates = true;
> > > + ram_block_add_list(block);
> > > + memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> > > + &ram_grants);
> > > +
> > > + return &ram_grants;
> >
> > It doesn't look like xen_init_grant_ram has anything to do with the
> > mapcache. It should be in another file. Maybe ./hw/xen/xen-hvm-common.c
> > or ./hw/i386/xen/xen-hvm.c (but this is x86 specific and we need grants
> > on ARM too)
> Do you mean to move all grant-related functions? Moving this one alone will not
> be sufficient, as there are a lot of new grant-related functions added in later patches.
>
> I am okay with moving all of them to xen-hvm-common.c.
>
> The following code movement would happen in this case:
> 1. Move all grant-related static functions to xen-hvm-common.c:
> xen_ram_addr_from_grant_cache(), xen_ram_addr_from_mapcache(),
> xen_map_grant_dyn(), xen_unmap_grant_dyn() and xen_init_grant_ram().
> 2. Remove static from xen_ram_addr_from_mapcache_try().
>
> Do these changes look good?
After reading all the patches, I think it is also OK to leave the code
here.
^ permalink raw reply [flat|nested] 23+ messages in thread
* [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
2023-10-05 18:16 ` [QEMU][PATCH v1 1/7] xen: when unplugging emulated devices skip virtio devices Vikram Garhwal
2023-10-05 18:16 ` [QEMU][PATCH v1 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:10 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
` (3 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Vikram Garhwal, Paolo Bonzini,
Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé
From: Juergen Gross <jgross@suse.com>
qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
modify qemu_ram_ptr_length() a little bit and use it for
qemu_map_ram_ptr(), too.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
softmmu/physmem.c | 58 +++++++++++++++++++----------------------------
1 file changed, 23 insertions(+), 35 deletions(-)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index e182a2fa07..6e5e379dd0 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2163,38 +2163,8 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
}
#endif /* !_WIN32 */
-/* Return a host pointer to ram allocated with qemu_ram_alloc.
- * This should not be used for general purpose DMA. Use address_space_map
- * or address_space_rw instead. For local memory (e.g. video ram) that the
- * device owns, use memory_region_get_ram_ptr.
- *
- * Called within RCU critical section.
- */
-void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
-{
- RAMBlock *block = ram_block;
-
- if (block == NULL) {
- block = qemu_get_ram_block(addr);
- addr -= block->offset;
- }
-
- if (xen_enabled() && block->host == NULL) {
- /* We need to check if the requested address is in the RAM
- * because we don't want to map the entire memory in QEMU.
- * In that case just map until the end of the page.
- */
- if (block->offset == 0) {
- return xen_map_cache(addr, 0, 0, false);
- }
-
- block->host = xen_map_cache(block->offset, block->max_length, 1, false);
- }
- return ramblock_ptr(block, addr);
-}
-
-/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
- * but takes a size argument.
+/*
+ * Return a host pointer to guest's ram.
*
* Called within RCU critical section.
*/
@@ -2202,7 +2172,9 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
hwaddr *size, bool lock)
{
RAMBlock *block = ram_block;
- if (*size == 0) {
+ hwaddr len = 0;
+
+ if (size && *size == 0) {
return NULL;
}
@@ -2210,7 +2182,10 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
block = qemu_get_ram_block(addr);
addr -= block->offset;
}
- *size = MIN(*size, block->max_length - addr);
+ if (size) {
+ *size = MIN(*size, block->max_length - addr);
+ len = *size;
+ }
if (xen_enabled() && block->host == NULL) {
/* We need to check if the requested address is in the RAM
@@ -2218,7 +2193,7 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
* In that case just map the requested area.
*/
if (block->offset == 0) {
- return xen_map_cache(addr, *size, lock, lock);
+ return xen_map_cache(addr, len, lock, lock);
}
block->host = xen_map_cache(block->offset, block->max_length, 1, lock);
@@ -2227,6 +2202,19 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
return ramblock_ptr(block, addr);
}
+/*
+ * Return a host pointer to ram allocated with qemu_ram_alloc.
+ * This should not be used for general purpose DMA. Use address_space_map
+ * or address_space_rw instead. For local memory (e.g. video ram) that the
+ * device owns, use memory_region_get_ram_ptr.
+ *
+ * Called within RCU critical section.
+ */
+void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
+{
+ return qemu_ram_ptr_length(ram_block, addr, NULL, false);
+}
+
/* Return the offset of a hostpointer within a ramblock */
ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host)
{
--
2.17.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
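The refactoring in patch 3/7 makes one function serve both the sized caller and the unsized caller by treating a NULL size pointer as "no size wanted". A toy model of that pattern (the names and the fixed-size backing array are invented for illustration, not QEMU code):

```c
#include <assert.h>
#include <stddef.h>

static char backing[64]; /* stand-in for a RAM block's host mapping */

/* One function covers both cases: with a size it clamps the request to
 * what is available (MIN(*size, max_length - addr) in the patch); with
 * size == NULL it behaves like the old unsized qemu_map_ram_ptr(). */
static char *ptr_length(unsigned long addr, unsigned long *size)
{
    if (size && *size == 0) {
        return NULL;              /* explicit zero-length request */
    }
    if (size) {
        unsigned long avail = sizeof(backing) - addr;
        if (*size > avail) {
            *size = avail;        /* clamp to the block's remaining length */
        }
    }
    return &backing[addr];
}

/* The old entry point becomes a thin wrapper. */
static char *map_ptr(unsigned long addr)
{
    return ptr_length(addr, NULL);
}
```

This is why the patch also introduces the local `len`: with size possibly NULL, the Xen mapping path needs a plain value (0 for the unsized case) rather than dereferencing the pointer.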
* Re: [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2023-10-05 18:16 ` [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
@ 2023-10-10 0:10 ` Stefano Stabellini
2023-10-10 21:26 ` Vikram Garhwal
0 siblings, 1 reply; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:10 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
> modify qemu_ram_ptr_length() a little bit and use it for
> qemu_map_ram_ptr(), too.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
This patch also doesn't apply due to code movement.
Other than that, the patch looks good to me
> ---
> softmmu/physmem.c | 58 +++++++++++++++++++----------------------------
> 1 file changed, 23 insertions(+), 35 deletions(-)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index e182a2fa07..6e5e379dd0 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2163,38 +2163,8 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
> }
> #endif /* !_WIN32 */
>
> -/* Return a host pointer to ram allocated with qemu_ram_alloc.
> - * This should not be used for general purpose DMA. Use address_space_map
> - * or address_space_rw instead. For local memory (e.g. video ram) that the
> - * device owns, use memory_region_get_ram_ptr.
> - *
> - * Called within RCU critical section.
> - */
> -void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
> -{
> - RAMBlock *block = ram_block;
> -
> - if (block == NULL) {
> - block = qemu_get_ram_block(addr);
> - addr -= block->offset;
> - }
> -
> - if (xen_enabled() && block->host == NULL) {
> - /* We need to check if the requested address is in the RAM
> - * because we don't want to map the entire memory in QEMU.
> - * In that case just map until the end of the page.
> - */
> - if (block->offset == 0) {
> - return xen_map_cache(addr, 0, 0, false);
> - }
> -
> - block->host = xen_map_cache(block->offset, block->max_length, 1, false);
> - }
> - return ramblock_ptr(block, addr);
> -}
> -
> -/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
> - * but takes a size argument.
> +/*
> + * Return a host pointer to guest's ram.
> *
> * Called within RCU critical section.
> */
> @@ -2202,7 +2172,9 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> hwaddr *size, bool lock)
> {
> RAMBlock *block = ram_block;
> - if (*size == 0) {
> + hwaddr len = 0;
> +
> + if (size && *size == 0) {
> return NULL;
> }
>
> @@ -2210,7 +2182,10 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> block = qemu_get_ram_block(addr);
> addr -= block->offset;
> }
> - *size = MIN(*size, block->max_length - addr);
> + if (size) {
> + *size = MIN(*size, block->max_length - addr);
> + len = *size;
> + }
>
> if (xen_enabled() && block->host == NULL) {
> /* We need to check if the requested address is in the RAM
> @@ -2218,7 +2193,7 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> * In that case just map the requested area.
> */
> if (block->offset == 0) {
> - return xen_map_cache(addr, *size, lock, lock);
> + return xen_map_cache(addr, len, lock, lock);
> }
>
> block->host = xen_map_cache(block->offset, block->max_length, 1, lock);
> @@ -2227,6 +2202,19 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> return ramblock_ptr(block, addr);
> }
>
> +/*
> + * Return a host pointer to ram allocated with qemu_ram_alloc.
> + * This should not be used for general purpose DMA. Use address_space_map
> + * or address_space_rw instead. For local memory (e.g. video ram) that the
> + * device owns, use memory_region_get_ram_ptr.
> + *
> + * Called within RCU critical section.
> + */
> +void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
> +{
> + return qemu_ram_ptr_length(ram_block, addr, NULL, false);
> +}
> +
> /* Return the offset of a hostpointer within a ramblock */
> ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host)
> {
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2023-10-10 0:10 ` Stefano Stabellini
@ 2023-10-10 21:26 ` Vikram Garhwal
0 siblings, 0 replies; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-10 21:26 UTC (permalink / raw)
To: Stefano Stabellini
Cc: qemu-devel, Juergen Gross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Mon, Oct 09, 2023 at 05:10:43PM -0700, Stefano Stabellini wrote:
> On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
> > modify qemu_ram_ptr_length() a little bit and use it for
> > qemu_map_ram_ptr(), too.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> This patch also doesn't apply due to code movement.
Will rebase it.
>
> Other than that, the patch looks good to me
>
>
> > ---
> > softmmu/physmem.c | 58 +++++++++++++++++++----------------------------
> > 1 file changed, 23 insertions(+), 35 deletions(-)
> >
> > diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> > index e182a2fa07..6e5e379dd0 100644
> > --- a/softmmu/physmem.c
> > +++ b/softmmu/physmem.c
> > @@ -2163,38 +2163,8 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
> > }
> > #endif /* !_WIN32 */
> >
> > -/* Return a host pointer to ram allocated with qemu_ram_alloc.
> > - * This should not be used for general purpose DMA. Use address_space_map
> > - * or address_space_rw instead. For local memory (e.g. video ram) that the
> > - * device owns, use memory_region_get_ram_ptr.
> > - *
> > - * Called within RCU critical section.
> > - */
> > -void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
> > -{
> > - RAMBlock *block = ram_block;
> > -
> > - if (block == NULL) {
> > - block = qemu_get_ram_block(addr);
> > - addr -= block->offset;
> > - }
> > -
> > - if (xen_enabled() && block->host == NULL) {
> > - /* We need to check if the requested address is in the RAM
> > - * because we don't want to map the entire memory in QEMU.
> > - * In that case just map until the end of the page.
> > - */
> > - if (block->offset == 0) {
> > - return xen_map_cache(addr, 0, 0, false);
> > - }
> > -
> > - block->host = xen_map_cache(block->offset, block->max_length, 1, false);
> > - }
> > - return ramblock_ptr(block, addr);
> > -}
> > -
> > -/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
> > - * but takes a size argument.
> > +/*
> > + * Return a host pointer to guest's ram.
> > *
> > * Called within RCU critical section.
> > */
> > @@ -2202,7 +2172,9 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> > hwaddr *size, bool lock)
> > {
> > RAMBlock *block = ram_block;
> > - if (*size == 0) {
> > + hwaddr len = 0;
> > +
> > + if (size && *size == 0) {
> > return NULL;
> > }
> >
> > @@ -2210,7 +2182,10 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> > block = qemu_get_ram_block(addr);
> > addr -= block->offset;
> > }
> > - *size = MIN(*size, block->max_length - addr);
> > + if (size) {
> > + *size = MIN(*size, block->max_length - addr);
> > + len = *size;
> > + }
> >
> > if (xen_enabled() && block->host == NULL) {
> > /* We need to check if the requested address is in the RAM
> > @@ -2218,7 +2193,7 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> > * In that case just map the requested area.
> > */
> > if (block->offset == 0) {
> > - return xen_map_cache(addr, *size, lock, lock);
> > + return xen_map_cache(addr, len, lock, lock);
> > }
> >
> > block->host = xen_map_cache(block->offset, block->max_length, 1, lock);
> > @@ -2227,6 +2202,19 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
> > return ramblock_ptr(block, addr);
> > }
> >
> > +/*
> > + * Return a host pointer to ram allocated with qemu_ram_alloc.
> > + * This should not be used for general purpose DMA. Use address_space_map
> > + * or address_space_rw instead. For local memory (e.g. video ram) that the
> > + * device owns, use memory_region_get_ram_ptr.
> > + *
> > + * Called within RCU critical section.
> > + */
> > +void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
> > +{
> > + return qemu_ram_ptr_length(ram_block, addr, NULL, false);
> > +}
> > +
> > /* Return the offset of a hostpointer within a ramblock */
> > ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host)
> > {
> > --
> > 2.17.1
> >
^ permalink raw reply [flat|nested] 23+ messages in thread
* [QEMU][PATCH v1 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
` (2 preceding siblings ...)
2023-10-05 18:16 ` [QEMU][PATCH v1 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:13 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
` (2 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Anthony Perard, Paul Durrant,
open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Today xen_ram_addr_from_mapcache() will either abort() or return 0 in
case it can't find a matching entry for a pointer value. Both cases
are bad, so change that to return an invalid address instead.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
hw/xen/xen-mapcache.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index 8115c44c00..8a61c7dde6 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -404,13 +404,8 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
}
}
if (!found) {
- fprintf(stderr, "%s, could not find %p\n", __func__, ptr);
- QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
- DPRINTF(" "HWADDR_FMT_plx" -> %p is present\n", reventry->paddr_index,
- reventry->vaddr_req);
- }
- abort();
- return 0;
+ mapcache_unlock();
+ return RAM_ADDR_INVALID;
}
entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
@@ -418,8 +413,7 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
entry = entry->next;
}
if (!entry) {
- DPRINTF("Trying to find address %p that is not in the mapcache!\n", ptr);
- raddr = 0;
+ raddr = RAM_ADDR_INVALID;
} else {
raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
((unsigned long) ptr - (unsigned long) entry->vaddr_base);
--
2.17.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
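The failure-handling change in patch 4/7 swaps an abort() for a sentinel the caller can test. A self-contained sketch of the same lookup contract (the table is a stand-in for the mapcache; RAM_ADDR_INVALID here mirrors QEMU's all-ones definition):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t ram_addr_t;
#define RAM_ADDR_INVALID (~(ram_addr_t)0)

/* Toy translation table standing in for the mapcache: map a host pointer
 * back to a guest ram address. On a miss, report RAM_ADDR_INVALID instead
 * of aborting, so the caller can try another translation (e.g. grants). */
struct entry {
    void *vaddr;
    ram_addr_t raddr;
};

static ram_addr_t addr_from_cache(const struct entry *tbl, int n, void *ptr)
{
    for (int i = 0; i < n; i++) {
        if (tbl[i].vaddr == ptr) {
            return tbl[i].raddr;
        }
    }
    return RAM_ADDR_INVALID;      /* miss: signal, don't abort() */
}
```

Note the patch also adds the mapcache_unlock() on the early-return path; any sentinel-returning rewrite of an abort() path has to release whatever locks the aborting path never needed to.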
* Re: [QEMU][PATCH v1 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
2023-10-05 18:16 ` [QEMU][PATCH v1 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
@ 2023-10-10 0:13 ` Stefano Stabellini
0 siblings, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:13 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Anthony Perard,
Paul Durrant, open list:X86 Xen CPUs
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> Today xen_ram_addr_from_mapcache() will either abort() or return 0 in
> case it can't find a matching entry for a pointer value. Both cases
> are bad, so change that to return an invalid address instead.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> hw/xen/xen-mapcache.c | 12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index 8115c44c00..8a61c7dde6 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -404,13 +404,8 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> }
> }
> if (!found) {
> - fprintf(stderr, "%s, could not find %p\n", __func__, ptr);
> - QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
> - DPRINTF(" "HWADDR_FMT_plx" -> %p is present\n", reventry->paddr_index,
> - reventry->vaddr_req);
> - }
> - abort();
> - return 0;
> + mapcache_unlock();
> + return RAM_ADDR_INVALID;
> }
>
> entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
> @@ -418,8 +413,7 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> entry = entry->next;
> }
> if (!entry) {
> - DPRINTF("Trying to find address %p that is not in the mapcache!\n", ptr);
> - raddr = 0;
> + raddr = RAM_ADDR_INVALID;
> } else {
> raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
> ((unsigned long) ptr - (unsigned long) entry->vaddr_base);
> --
> 2.17.1
>
* [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
` (3 preceding siblings ...)
2023-10-05 18:16 ` [QEMU][PATCH v1 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:17 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
2023-10-05 18:16 ` [QEMU][PATCH v1 7/7] hw: arm: Add grant mapping Vikram Garhwal
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Vikram Garhwal, Paolo Bonzini,
Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé
From: Juergen Gross <jgross@suse.com>
In order to support mapping and unmapping guest memory dynamically to
and from qemu during address_space_[un]map() operations, add the map()
and unmap() callbacks to MemoryRegionOps.
Those will be used e.g. for Xen grant mappings when performing guest
I/Os.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
include/exec/memory.h | 21 ++++++++++++++++++
softmmu/physmem.c | 50 +++++++++++++++++++++++++++++++++----------
2 files changed, 60 insertions(+), 11 deletions(-)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index c99842d2fc..f3c62d2883 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -280,6 +280,27 @@ struct MemoryRegionOps {
unsigned size,
MemTxAttrs attrs);
+ /*
+ * Dynamically create mapping. @addr is the guest address to map; @plen
+ * is the pointer to the usable length of the buffer.
+ * @mr contents can be changed in case a new memory region is created for
+ * the mapping.
+ * Returns the buffer address for accessing the data.
+ */
+ void *(*map)(MemoryRegion **mr,
+ hwaddr addr,
+ hwaddr *plen,
+ bool is_write,
+ MemTxAttrs attrs);
+
+ /* Unmap an area obtained via map() before. */
+ void (*unmap)(MemoryRegion *mr,
+ void *buffer,
+ ram_addr_t addr,
+ hwaddr len,
+ bool is_write,
+ hwaddr access_len);
+
enum device_endian endianness;
/* Guest-visible constraints: */
struct {
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 6e5e379dd0..5f425bea1c 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -3135,6 +3135,7 @@ void *address_space_map(AddressSpace *as,
hwaddr len = *plen;
hwaddr l, xlat;
MemoryRegion *mr;
+ void *ptr = NULL;
FlatView *fv;
if (len == 0) {
@@ -3168,12 +3169,20 @@ void *address_space_map(AddressSpace *as,
return bounce.buffer;
}
-
memory_region_ref(mr);
+
+ if (mr->ops && mr->ops->map) {
+ ptr = mr->ops->map(&mr, addr, plen, is_write, attrs);
+ }
+
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
l, is_write, attrs);
fuzz_dma_read_cb(addr, *plen, mr);
- return qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
+ if (ptr == NULL) {
+ ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
+ }
+
+ return ptr;
}
/* Unmaps a memory region previously mapped by address_space_map().
@@ -3189,11 +3198,16 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
mr = memory_region_from_host(buffer, &addr1);
assert(mr != NULL);
- if (is_write) {
- invalidate_and_set_dirty(mr, addr1, access_len);
- }
- if (xen_enabled()) {
- xen_invalidate_map_cache_entry(buffer);
+
+ if (mr->ops && mr->ops->unmap) {
+ mr->ops->unmap(mr, buffer, addr1, len, is_write, access_len);
+ } else {
+ if (is_write) {
+ invalidate_and_set_dirty(mr, addr1, access_len);
+ }
+ if (xen_enabled()) {
+ xen_invalidate_map_cache_entry(buffer);
+ }
}
memory_region_unref(mr);
return;
@@ -3266,10 +3280,18 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
* doing this if we found actual RAM, which behaves the same
* regardless of attributes; so UNSPECIFIED is fine.
*/
+ if (mr->ops && mr->ops->map) {
+ cache->ptr = mr->ops->map(&mr, addr, &l, is_write,
+ MEMTXATTRS_UNSPECIFIED);
+ }
+
l = flatview_extend_translation(cache->fv, addr, len, mr,
cache->xlat, l, is_write,
MEMTXATTRS_UNSPECIFIED);
- cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
+ if (!cache->ptr) {
+ cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l,
+ true);
+ }
} else {
cache->ptr = NULL;
}
@@ -3291,14 +3313,20 @@ void address_space_cache_invalidate(MemoryRegionCache *cache,
void address_space_cache_destroy(MemoryRegionCache *cache)
{
- if (!cache->mrs.mr) {
+ MemoryRegion *mr = cache->mrs.mr;
+
+ if (!mr) {
return;
}
- if (xen_enabled()) {
+ if (mr->ops && mr->ops->unmap) {
+ mr->ops->unmap(mr, cache->ptr, cache->xlat, cache->len,
+ cache->is_write, cache->len);
+ } else if (xen_enabled()) {
xen_invalidate_map_cache_entry(cache->ptr);
}
- memory_region_unref(cache->mrs.mr);
+
+ memory_region_unref(mr);
flatview_unref(cache->fv);
cache->mrs.mr = NULL;
cache->fv = NULL;
--
2.17.1
* Re: [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks
2023-10-05 18:16 ` [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
@ 2023-10-10 0:17 ` Stefano Stabellini
2023-10-11 7:01 ` Juergen Gross
0 siblings, 1 reply; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:17 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> In order to support mapping and unmapping guest memory dynamically to
> and from qemu during address_space_[un]map() operations add the map()
> and unmap() callbacks to MemoryRegionOps.
>
> Those will be used e.g. for Xen grant mappings when performing guest
> I/Os.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Can't we just use the existing Xen hooks in qemu_ram_ptr_length and
xen_invalidate_map_cache_entry? Do we really need new ones?
> ---
> include/exec/memory.h | 21 ++++++++++++++++++
> softmmu/physmem.c | 50 +++++++++++++++++++++++++++++++++----------
> 2 files changed, 60 insertions(+), 11 deletions(-)
>
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index c99842d2fc..f3c62d2883 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -280,6 +280,27 @@ struct MemoryRegionOps {
> unsigned size,
> MemTxAttrs attrs);
>
> + /*
> + * Dynamically create mapping. @addr is the guest address to map; @plen
> + * is the pointer to the usable length of the buffer.
> + * @mr contents can be changed in case a new memory region is created for
> + * the mapping.
> + * Returns the buffer address for accessing the data.
> + */
> + void *(*map)(MemoryRegion **mr,
> + hwaddr addr,
> + hwaddr *plen,
> + bool is_write,
> + MemTxAttrs attrs);
> +
> + /* Unmap an area obtained via map() before. */
> + void (*unmap)(MemoryRegion *mr,
> + void *buffer,
> + ram_addr_t addr,
> + hwaddr len,
> + bool is_write,
> + hwaddr access_len);
> +
> enum device_endian endianness;
> /* Guest-visible constraints: */
> struct {
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 6e5e379dd0..5f425bea1c 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -3135,6 +3135,7 @@ void *address_space_map(AddressSpace *as,
> hwaddr len = *plen;
> hwaddr l, xlat;
> MemoryRegion *mr;
> + void *ptr = NULL;
> FlatView *fv;
>
> if (len == 0) {
> @@ -3168,12 +3169,20 @@ void *address_space_map(AddressSpace *as,
> return bounce.buffer;
> }
>
> -
> memory_region_ref(mr);
> +
> + if (mr->ops && mr->ops->map) {
> + ptr = mr->ops->map(&mr, addr, plen, is_write, attrs);
> + }
> +
> *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
> l, is_write, attrs);
> fuzz_dma_read_cb(addr, *plen, mr);
> - return qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
> + if (ptr == NULL) {
> + ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
> + }
> +
> + return ptr;
> }
>
> /* Unmaps a memory region previously mapped by address_space_map().
> @@ -3189,11 +3198,16 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
>
> mr = memory_region_from_host(buffer, &addr1);
> assert(mr != NULL);
> - if (is_write) {
> - invalidate_and_set_dirty(mr, addr1, access_len);
> - }
> - if (xen_enabled()) {
> - xen_invalidate_map_cache_entry(buffer);
> +
> + if (mr->ops && mr->ops->unmap) {
> + mr->ops->unmap(mr, buffer, addr1, len, is_write, access_len);
> + } else {
> + if (is_write) {
> + invalidate_and_set_dirty(mr, addr1, access_len);
> + }
> + if (xen_enabled()) {
> + xen_invalidate_map_cache_entry(buffer);
> + }
> }
> memory_region_unref(mr);
> return;
> @@ -3266,10 +3280,18 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
> * doing this if we found actual RAM, which behaves the same
> * regardless of attributes; so UNSPECIFIED is fine.
> */
> + if (mr->ops && mr->ops->map) {
> + cache->ptr = mr->ops->map(&mr, addr, &l, is_write,
> + MEMTXATTRS_UNSPECIFIED);
> + }
> +
> l = flatview_extend_translation(cache->fv, addr, len, mr,
> cache->xlat, l, is_write,
> MEMTXATTRS_UNSPECIFIED);
> - cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
> + if (!cache->ptr) {
> + cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l,
> + true);
> + }
> } else {
> cache->ptr = NULL;
> }
> @@ -3291,14 +3313,20 @@ void address_space_cache_invalidate(MemoryRegionCache *cache,
>
> void address_space_cache_destroy(MemoryRegionCache *cache)
> {
> - if (!cache->mrs.mr) {
> + MemoryRegion *mr = cache->mrs.mr;
> +
> + if (!mr) {
> return;
> }
>
> - if (xen_enabled()) {
> + if (mr->ops && mr->ops->unmap) {
> + mr->ops->unmap(mr, cache->ptr, cache->xlat, cache->len,
> + cache->is_write, cache->len);
> + } else if (xen_enabled()) {
> xen_invalidate_map_cache_entry(cache->ptr);
> }
> - memory_region_unref(cache->mrs.mr);
> +
> + memory_region_unref(mr);
> flatview_unref(cache->fv);
> cache->mrs.mr = NULL;
> cache->fv = NULL;
> --
> 2.17.1
>
* Re: [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks
2023-10-10 0:17 ` Stefano Stabellini
@ 2023-10-11 7:01 ` Juergen Gross
2023-10-26 1:41 ` Stefano Stabellini
0 siblings, 1 reply; 23+ messages in thread
From: Juergen Gross @ 2023-10-11 7:01 UTC (permalink / raw)
To: Stefano Stabellini, Vikram Garhwal
Cc: qemu-devel, Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé
On 10.10.23 02:17, Stefano Stabellini wrote:
> On Thu, 5 Oct 2023, Vikram Garhwal wrote:
>> From: Juergen Gross <jgross@suse.com>
>>
>> In order to support mapping and unmapping guest memory dynamically to
>> and from qemu during address_space_[un]map() operations add the map()
>> and unmap() callbacks to MemoryRegionOps.
>>
>> Those will be used e.g. for Xen grant mappings when performing guest
>> I/Os.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> Can't we just use the existing Xen hooks in qemu_ram_ptr_length and
> xen_invalidate_map_cache_entry? Do we really need new ones?
I tried your idea first and it didn't work out.
The existing hooks are invoked not only when explicitly [un]mapping memory
regions, but in some other cases, too. Have a look at the
qemu_ram_ptr_length() call in flatview_write_continue().
Juergen
* Re: [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks
2023-10-11 7:01 ` Juergen Gross
@ 2023-10-26 1:41 ` Stefano Stabellini
0 siblings, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-26 1:41 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, Vikram Garhwal, qemu-devel, Paolo Bonzini,
Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé
On Wed, 11 Oct 2023, Juergen Gross wrote:
> On 10.10.23 02:17, Stefano Stabellini wrote:
> > On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> > > From: Juergen Gross <jgross@suse.com>
> > >
> > > In order to support mapping and unmapping guest memory dynamically to
> > > and from qemu during address_space_[un]map() operations add the map()
> > > and unmap() callbacks to MemoryRegionOps.
> > >
> > > Those will be used e.g. for Xen grant mappings when performing guest
> > > I/Os.
> > >
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> >
> > Can't we just use the existing Xen hooks in qemu_ram_ptr_length and
> > xen_invalidate_map_cache_entry? Do we really need new ones?
>
> I tried your idea first and it didn't work out.
>
> The existing hooks are invoked not only when explicitly [un]mapping memory
> regions, but in some other cases, too. Have a look for qemu_ram_ptr_length()
> call in flatview_write_continue().
Hi Juergen, thanks for the explanation and sorry for my late reply. I
missed your email when it came out.
If that is the only problem, it seems to me that it could be solved. The
call to qemu_ram_ptr_length() in flatview_write_continue() is unlocked. It
should also be distinguishable by address (the grants have the top bit
set?).
So from qemu_ram_ptr_length() we could call xen_map_grant_dyn only when
locked. And in xen_map_grant_dyn we could also check that
XEN_GRANT_ADDR_OFF is set before continuing.
Do you see any other issues with it?
* [QEMU][PATCH v1 6/7] xen: add map and unmap callbacks for grant region
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
` (4 preceding siblings ...)
2023-10-05 18:16 ` [QEMU][PATCH v1 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:27 ` Stefano Stabellini
2023-10-05 18:16 ` [QEMU][PATCH v1 7/7] hw: arm: Add grant mapping Vikram Garhwal
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Juergen Gross, Vikram Garhwal, Anthony Perard,
Paul Durrant, Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Add the callbacks for mapping/unmapping guest memory via grants to the
special grant memory region.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
hw/xen/xen-mapcache.c | 167 +++++++++++++++++++++++++++++++++++++++++-
softmmu/physmem.c | 11 ++-
2 files changed, 173 insertions(+), 5 deletions(-)
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index 8a61c7dde6..52844a6a9d 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -9,6 +9,8 @@
*/
#include "qemu/osdep.h"
+#include "qemu/queue.h"
+#include "qemu/thread.h"
#include "qemu/units.h"
#include "qemu/error-report.h"
@@ -23,6 +25,8 @@
#include "sysemu/xen-mapcache.h"
#include "trace.h"
+#include <xenevtchn.h>
+#include <xengnttab.h>
//#define MAPCACHE_DEBUG
@@ -385,7 +389,7 @@ uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
return p;
}
-ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+static ram_addr_t xen_ram_addr_from_mapcache_try(void *ptr)
{
MapCacheEntry *entry = NULL;
MapCacheRev *reventry;
@@ -594,10 +598,170 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
return p;
}
+struct XENMappedGrantRegion {
+ void *addr;
+ unsigned int pages;
+ unsigned int refs;
+ unsigned int prot;
+ uint32_t idx;
+ QLIST_ENTRY(XENMappedGrantRegion) list;
+};
+
+static xengnttab_handle *xen_region_gnttabdev;
+static QLIST_HEAD(GrantRegionList, XENMappedGrantRegion) xen_grant_mappings =
+ QLIST_HEAD_INITIALIZER(xen_grant_mappings);
+static QemuMutex xen_map_mutex;
+
+static void *xen_map_grant_dyn(MemoryRegion **mr, hwaddr addr, hwaddr *plen,
+ bool is_write, MemTxAttrs attrs)
+{
+ unsigned int page_off = addr & (XC_PAGE_SIZE - 1);
+ unsigned int i;
+ unsigned int nrefs = (page_off + *plen + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
+ uint32_t ref = (addr - XEN_GRANT_ADDR_OFF) >> XC_PAGE_SHIFT;
+ uint32_t *refs = NULL;
+ unsigned int prot = PROT_READ;
+ struct XENMappedGrantRegion *mgr = NULL;
+
+ if (is_write) {
+ prot |= PROT_WRITE;
+ }
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->idx == ref &&
+ mgr->pages == nrefs &&
+ (mgr->prot & prot) == prot) {
+ break;
+ }
+ }
+ if (!mgr) {
+ mgr = g_new(struct XENMappedGrantRegion, 1);
+
+ if (nrefs == 1) {
+ refs = &ref;
+ } else {
+ refs = g_new(uint32_t, nrefs);
+ for (i = 0; i < nrefs; i++) {
+ refs[i] = ref + i;
+ }
+ }
+ mgr->addr = xengnttab_map_domain_grant_refs(xen_region_gnttabdev, nrefs,
+ xen_domid, refs, prot);
+ if (mgr->addr) {
+ mgr->pages = nrefs;
+ mgr->refs = 1;
+ mgr->prot = prot;
+ mgr->idx = ref;
+
+ QLIST_INSERT_HEAD(&xen_grant_mappings, mgr, list);
+ } else {
+ g_free(mgr);
+ mgr = NULL;
+ }
+ } else {
+ mgr->refs++;
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+
+ if (nrefs > 1) {
+ g_free(refs);
+ }
+
+ return mgr ? mgr->addr + page_off : NULL;
+}
+
+static void xen_unmap_grant_dyn(MemoryRegion *mr, void *buffer, ram_addr_t addr,
+ hwaddr len, bool is_write, hwaddr access_len)
+{
+ unsigned int page_off = (unsigned long)buffer & (XC_PAGE_SIZE - 1);
+ unsigned int nrefs = (page_off + len + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
+ unsigned int prot = PROT_READ;
+ struct XENMappedGrantRegion *mgr = NULL;
+
+ if (is_write) {
+ prot |= PROT_WRITE;
+ }
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->addr == buffer - page_off &&
+ mgr->pages == nrefs &&
+ (mgr->prot & prot) == prot) {
+ break;
+ }
+ }
+ if (mgr) {
+ mgr->refs--;
+ if (!mgr->refs) {
+ xengnttab_unmap(xen_region_gnttabdev, mgr->addr, nrefs);
+
+ QLIST_REMOVE(mgr, list);
+ g_free(mgr);
+ }
+ } else {
+ error_report("xen_unmap_grant_dyn() trying to unmap unknown buffer");
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+}
+
+static ram_addr_t xen_ram_addr_from_grant_cache(void *ptr)
+{
+ unsigned int page_off = (unsigned long)ptr & (XC_PAGE_SIZE - 1);
+ struct XENMappedGrantRegion *mgr = NULL;
+ ram_addr_t raddr = RAM_ADDR_INVALID;
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->addr == ptr - page_off) {
+ break;
+ }
+ }
+
+ if (mgr) {
+ raddr = (mgr->idx << XC_PAGE_SHIFT) + page_off + XEN_GRANT_ADDR_OFF;
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+
+ return raddr;
+}
+
+ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+{
+ ram_addr_t raddr;
+
+ raddr = xen_ram_addr_from_mapcache_try(ptr);
+ if (raddr == RAM_ADDR_INVALID) {
+ raddr = xen_ram_addr_from_grant_cache(ptr);
+ }
+
+ return raddr;
+}
+
+static const struct MemoryRegionOps xen_grant_mr_ops = {
+ .map = xen_map_grant_dyn,
+ .unmap = xen_unmap_grant_dyn,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+};
+
MemoryRegion *xen_init_grant_ram(void)
{
RAMBlock *block;
+ qemu_mutex_init(&xen_map_mutex);
+
+ xen_region_gnttabdev = xengnttab_open(NULL, 0);
+ if (xen_region_gnttabdev == NULL) {
+ fprintf(stderr, "can't open gnttab device\n");
+ return NULL;
+ }
+
memory_region_init(&ram_grants, NULL, "xen.grants",
XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
block = g_malloc0(sizeof(*block));
@@ -612,6 +776,7 @@ MemoryRegion *xen_init_grant_ram(void)
ram_grants.ram_block = block;
ram_grants.ram = true;
ram_grants.terminates = true;
+ ram_grants.ops = &xen_grant_mr_ops;
ram_block_add_list(block);
memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
&ram_grants);
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 5f425bea1c..e5346386db 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2250,13 +2250,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
if (xen_enabled()) {
ram_addr_t ram_addr;
+
RCU_READ_LOCK_GUARD();
ram_addr = xen_ram_addr_from_mapcache(ptr);
- block = qemu_get_ram_block(ram_addr);
- if (block) {
- *offset = ram_addr - block->offset;
+ if (ram_addr != RAM_ADDR_INVALID) {
+ block = qemu_get_ram_block(ram_addr);
+ if (block) {
+ *offset = ram_addr - block->offset;
+ }
+ return block;
}
- return block;
}
RCU_READ_LOCK_GUARD();
--
2.17.1
* Re: [QEMU][PATCH v1 6/7] xen: add map and unmap callbacks for grant region
2023-10-05 18:16 ` [QEMU][PATCH v1 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
@ 2023-10-10 0:27 ` Stefano Stabellini
0 siblings, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:27 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Juergen Gross, Anthony Perard,
Paul Durrant, Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> Add the callbacks for mapping/unmapping guest memory via grants to the
> special grant memory region.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
This looks good. We need to add a check to make sure we don't exceed
XEN_MAX_VIRTIO_GRANTS.
> ---
> hw/xen/xen-mapcache.c | 167 +++++++++++++++++++++++++++++++++++++++++-
> softmmu/physmem.c | 11 ++-
> 2 files changed, 173 insertions(+), 5 deletions(-)
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index 8a61c7dde6..52844a6a9d 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -9,6 +9,8 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/queue.h"
> +#include "qemu/thread.h"
> #include "qemu/units.h"
> #include "qemu/error-report.h"
>
> @@ -23,6 +25,8 @@
> #include "sysemu/xen-mapcache.h"
> #include "trace.h"
>
> +#include <xenevtchn.h>
> +#include <xengnttab.h>
>
> //#define MAPCACHE_DEBUG
>
> @@ -385,7 +389,7 @@ uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
> return p;
> }
>
> -ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> +static ram_addr_t xen_ram_addr_from_mapcache_try(void *ptr)
> {
> MapCacheEntry *entry = NULL;
> MapCacheRev *reventry;
> @@ -594,10 +598,170 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> return p;
> }
>
> +struct XENMappedGrantRegion {
> + void *addr;
> + unsigned int pages;
> + unsigned int refs;
> + unsigned int prot;
> + uint32_t idx;
> + QLIST_ENTRY(XENMappedGrantRegion) list;
> +};
> +
> +static xengnttab_handle *xen_region_gnttabdev;
> +static QLIST_HEAD(GrantRegionList, XENMappedGrantRegion) xen_grant_mappings =
> + QLIST_HEAD_INITIALIZER(xen_grant_mappings);
> +static QemuMutex xen_map_mutex;
> +
> +static void *xen_map_grant_dyn(MemoryRegion **mr, hwaddr addr, hwaddr *plen,
> + bool is_write, MemTxAttrs attrs)
> +{
> + unsigned int page_off = addr & (XC_PAGE_SIZE - 1);
> + unsigned int i;
> + unsigned int nrefs = (page_off + *plen + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
> + uint32_t ref = (addr - XEN_GRANT_ADDR_OFF) >> XC_PAGE_SHIFT;
> + uint32_t *refs = NULL;
> + unsigned int prot = PROT_READ;
> + struct XENMappedGrantRegion *mgr = NULL;
> +
> + if (is_write) {
> + prot |= PROT_WRITE;
> + }
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->idx == ref &&
> + mgr->pages == nrefs &&
> + (mgr->prot & prot) == prot) {
> + break;
> + }
> + }
> + if (!mgr) {
> + mgr = g_new(struct XENMappedGrantRegion, 1);
> +
> + if (nrefs == 1) {
> + refs = &ref;
> + } else {
> + refs = g_new(uint32_t, nrefs);
> + for (i = 0; i < nrefs; i++) {
> + refs[i] = ref + i;
> + }
> + }
> + mgr->addr = xengnttab_map_domain_grant_refs(xen_region_gnttabdev, nrefs,
> + xen_domid, refs, prot);
> + if (mgr->addr) {
> + mgr->pages = nrefs;
> + mgr->refs = 1;
> + mgr->prot = prot;
> + mgr->idx = ref;
> +
> + QLIST_INSERT_HEAD(&xen_grant_mappings, mgr, list);
> + } else {
> + g_free(mgr);
> + mgr = NULL;
> + }
> + } else {
> + mgr->refs++;
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +
> + if (nrefs > 1) {
> + g_free(refs);
> + }
> +
> + return mgr ? mgr->addr + page_off : NULL;
> +}
> +
> +static void xen_unmap_grant_dyn(MemoryRegion *mr, void *buffer, ram_addr_t addr,
> + hwaddr len, bool is_write, hwaddr access_len)
> +{
> + unsigned int page_off = (unsigned long)buffer & (XC_PAGE_SIZE - 1);
> + unsigned int nrefs = (page_off + len + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
> + unsigned int prot = PROT_READ;
> + struct XENMappedGrantRegion *mgr = NULL;
> +
> + if (is_write) {
> + prot |= PROT_WRITE;
> + }
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->addr == buffer - page_off &&
> + mgr->pages == nrefs &&
> + (mgr->prot & prot) == prot) {
> + break;
> + }
> + }
> + if (mgr) {
> + mgr->refs--;
> + if (!mgr->refs) {
> + xengnttab_unmap(xen_region_gnttabdev, mgr->addr, nrefs);
> +
> + QLIST_REMOVE(mgr, list);
> + g_free(mgr);
> + }
> + } else {
> + error_report("xen_unmap_grant_dyn() trying to unmap unknown buffer");
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +}
> +
> +static ram_addr_t xen_ram_addr_from_grant_cache(void *ptr)
> +{
> + unsigned int page_off = (unsigned long)ptr & (XC_PAGE_SIZE - 1);
> + struct XENMappedGrantRegion *mgr = NULL;
> + ram_addr_t raddr = RAM_ADDR_INVALID;
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->addr == ptr - page_off) {
> + break;
> + }
> + }
> +
> + if (mgr) {
> + raddr = (mgr->idx << XC_PAGE_SHIFT) + page_off + XEN_GRANT_ADDR_OFF;
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +
> + return raddr;
> +}
> +
> +ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> +{
> + ram_addr_t raddr;
> +
> + raddr = xen_ram_addr_from_mapcache_try(ptr);
> + if (raddr == RAM_ADDR_INVALID) {
> + raddr = xen_ram_addr_from_grant_cache(ptr);
> + }
> +
> + return raddr;
> +}
> +
> +static const struct MemoryRegionOps xen_grant_mr_ops = {
> + .map = xen_map_grant_dyn,
> + .unmap = xen_unmap_grant_dyn,
> + .endianness = DEVICE_LITTLE_ENDIAN,
> +};
> +
> MemoryRegion *xen_init_grant_ram(void)
> {
> RAMBlock *block;
>
> + qemu_mutex_init(&xen_map_mutex);
> +
> + xen_region_gnttabdev = xengnttab_open(NULL, 0);
> + if (xen_region_gnttabdev == NULL) {
> + fprintf(stderr, "can't open gnttab device\n");
> + return NULL;
> + }
> +
> memory_region_init(&ram_grants, NULL, "xen.grants",
> XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> block = g_malloc0(sizeof(*block));
> @@ -612,6 +776,7 @@ MemoryRegion *xen_init_grant_ram(void)
> ram_grants.ram_block = block;
> ram_grants.ram = true;
> ram_grants.terminates = true;
> + ram_grants.ops = &xen_grant_mr_ops;
> ram_block_add_list(block);
> memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> &ram_grants);
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 5f425bea1c..e5346386db 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2250,13 +2250,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
>
> if (xen_enabled()) {
> ram_addr_t ram_addr;
> +
> RCU_READ_LOCK_GUARD();
> ram_addr = xen_ram_addr_from_mapcache(ptr);
> - block = qemu_get_ram_block(ram_addr);
> - if (block) {
> - *offset = ram_addr - block->offset;
> + if (ram_addr != RAM_ADDR_INVALID) {
> + block = qemu_get_ram_block(ram_addr);
> + if (block) {
> + *offset = ram_addr - block->offset;
> + }
> + return block;
> }
> - return block;
> }
>
> RCU_READ_LOCK_GUARD();
> --
> 2.17.1
>
* [QEMU][PATCH v1 7/7] hw: arm: Add grant mapping.
2023-10-05 18:16 [QEMU][PATCH v1 0/7] Xen: support grant mappings Vikram Garhwal
` (5 preceding siblings ...)
2023-10-05 18:16 ` [QEMU][PATCH v1 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
@ 2023-10-05 18:16 ` Vikram Garhwal
2023-10-10 0:28 ` Stefano Stabellini
6 siblings, 1 reply; 23+ messages in thread
From: Vikram Garhwal @ 2023-10-05 18:16 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, Vikram Garhwal, Peter Maydell,
open list:ARM TCG CPUs
Enable grant ram mapping support for the Xenpvh machine on ARM.
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
hw/arm/xen_arm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index f83b983ec5..553c289720 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
DPRINTF("Initialized region xen.ram.hi: base 0x%llx size 0x%lx\n",
GUEST_RAM1_BASE, ram_size[1]);
}
+
+ DPRINTF("init grant ram mapping for XEN\n");
+ ram_grants = *xen_init_grant_ram();
}
void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
--
2.17.1
* Re: [QEMU][PATCH v1 7/7] hw: arm: Add grant mapping.
2023-10-05 18:16 ` [QEMU][PATCH v1 7/7] hw: arm: Add grant mapping Vikram Garhwal
@ 2023-10-10 0:28 ` Stefano Stabellini
0 siblings, 0 replies; 23+ messages in thread
From: Stefano Stabellini @ 2023-10-10 0:28 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, Peter Maydell, open list:ARM TCG CPUs
On Thu, 5 Oct 2023, Vikram Garhwal wrote:
> Enable grant ram mapping support for Xenpvh machine on ARM.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> hw/arm/xen_arm.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index f83b983ec5..553c289720 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
> DPRINTF("Initialized region xen.ram.hi: base 0x%llx size 0x%lx\n",
> GUEST_RAM1_BASE, ram_size[1]);
> }
> +
> + DPRINTF("init grant ram mapping for XEN\n");
> + ram_grants = *xen_init_grant_ram();
> }
>
> void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> --
> 2.17.1
>