* [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add()
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
@ 2024-02-27 22:34 ` Vikram Garhwal
2024-03-01 11:33 ` Alex Bennée
2024-02-27 22:34 ` [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
` (6 subsequent siblings)
7 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:34 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
Extract the RAM block list update into a new function, ram_block_add_list(). This
is done to support grant mappings, which add a memory region for granted memory
and update the ram_block list.
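For illustration, the new helper is intended for callers that build a RAMBlock by
hand and only need it linked into the global list (patch 2/7 is the real user). A
minimal sketch, with placeholder values for mr and size:

    /* Illustrative only: link a hand-built RAMBlock into ram_list. */
    RAMBlock *block = g_malloc0(sizeof(*block));

    block->mr = mr;                  /* an already initialized MemoryRegion */
    block->used_length = size;
    block->max_length = size;
    block->fd = -1;                  /* not backed by a file descriptor */
    block->page_size = qemu_real_host_page_size();

    ram_block_add_list(block);       /* takes the ramlist mutex internally */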
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
include/exec/ram_addr.h | 1 +
system/physmem.c | 62 ++++++++++++++++++++++++++---------------
2 files changed, 40 insertions(+), 23 deletions(-)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 90676093f5..c0b5f9a7d0 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -139,6 +139,7 @@ void qemu_ram_free(RAMBlock *block);
int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp);
void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length);
+void ram_block_add_list(RAMBlock *new_block);
/* Clear whole block of mem */
static inline void qemu_ram_block_writeback(RAMBlock *block)
diff --git a/system/physmem.c b/system/physmem.c
index e3ebc19eef..84f3022099 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -1803,12 +1803,47 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
}
}
+static void ram_block_add_list_locked(RAMBlock *new_block)
+{
+ RAMBlock *block;
+ RAMBlock *last_block = NULL;
+
+ /*
+ * Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
+ * QLIST (which has an RCU-friendly variant) does not have insertion at
+ * tail, so save the last element in last_block.
+ */
+ RAMBLOCK_FOREACH(block) {
+ last_block = block;
+ if (block->max_length < new_block->max_length) {
+ break;
+ }
+ }
+ if (block) {
+ QLIST_INSERT_BEFORE_RCU(block, new_block, next);
+ } else if (last_block) {
+ QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
+ } else { /* list is empty */
+ QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
+ }
+ ram_list.mru_block = NULL;
+
+ /* Write list before version */
+ smp_wmb();
+ ram_list.version++;
+}
+
+void ram_block_add_list(RAMBlock *new_block)
+{
+ qemu_mutex_lock_ramlist();
+ ram_block_add_list_locked(new_block);
+ qemu_mutex_unlock_ramlist();
+}
+
static void ram_block_add(RAMBlock *new_block, Error **errp)
{
const bool noreserve = qemu_ram_is_noreserve(new_block);
const bool shared = qemu_ram_is_shared(new_block);
- RAMBlock *block;
- RAMBlock *last_block = NULL;
ram_addr_t old_ram_size, new_ram_size;
Error *err = NULL;
@@ -1846,28 +1881,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
if (new_ram_size > old_ram_size) {
dirty_memory_extend(old_ram_size, new_ram_size);
}
- /* Keep the list sorted from biggest to smallest block. Unlike QTAILQ,
- * QLIST (which has an RCU-friendly variant) does not have insertion at
- * tail, so save the last element in last_block.
- */
- RAMBLOCK_FOREACH(block) {
- last_block = block;
- if (block->max_length < new_block->max_length) {
- break;
- }
- }
- if (block) {
- QLIST_INSERT_BEFORE_RCU(block, new_block, next);
- } else if (last_block) {
- QLIST_INSERT_AFTER_RCU(last_block, new_block, next);
- } else { /* list is empty */
- QLIST_INSERT_HEAD_RCU(&ram_list.blocks, new_block, next);
- }
- ram_list.mru_block = NULL;
- /* Write list before version */
- smp_wmb();
- ram_list.version++;
+ ram_block_add_list_locked(new_block);
+
qemu_mutex_unlock_ramlist();
cpu_physical_memory_set_dirty_range(new_block->offset,
--
2.17.1
* Re: [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add()
2024-02-27 22:34 ` [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add() Vikram Garhwal
@ 2024-03-01 11:33 ` Alex Bennée
2024-04-10 11:10 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Alex Bennée @ 2024-03-01 11:33 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> Extract the RAM block list update into a new function, ram_block_add_list(). This
> is done to support grant mappings, which add a memory region for granted memory
> and update the ram_block list.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add()
2024-03-01 11:33 ` Alex Bennée
@ 2024-04-10 11:10 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:10 UTC (permalink / raw)
To: Alex Bennée
Cc: Vikram Garhwal, qemu-devel, sstabellini, jgross, Paolo Bonzini,
Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé
On Fri, Mar 1, 2024 at 12:35 PM Alex Bennée <alex.bennee@linaro.org> wrote:
> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
> > Extract the RAM block list update into a new function, ram_block_add_list().
> > This is done to support grant mappings, which add a memory region for
> > granted memory and update the ram_block list.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
* [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
2024-02-27 22:34 ` [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add() Vikram Garhwal
@ 2024-02-27 22:34 ` Vikram Garhwal
2024-03-01 14:05 ` Alex Bennée
2024-02-27 22:34 ` [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
` (5 subsequent siblings)
7 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:34 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Anthony Perard, Paul Durrant,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum, open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Add a memory region which can be used to automatically map granted
memory. It starts at 0x8000000000000000ULL so that it can be
distinguished from normal RAM.
For this reason the xen.ram memory region is expanded; this has no
further impact, as it is used purely as a container for the real RAM
regions and now also the grant region.
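For illustration, placing the region this high lets a backend tell grant
addresses apart from normal RAM with simple arithmetic. A hypothetical sketch
(the real checks appear later in this series, in patch 6/7):

    /* Illustrative only: addresses with the top bit set are grants. */
    static bool is_grant_addr(hwaddr addr)
    {
        return (addr & XEN_GRANT_ADDR_OFF) != 0;
    }

    /* The grant reference is recovered from the offset into the region. */
    static uint32_t grant_ref_of(hwaddr addr)
    {
        return (addr - XEN_GRANT_ADDR_OFF) / XC_PAGE_SIZE;
    }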
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
hw/i386/xen/xen-hvm.c | 3 +++
hw/xen/xen-hvm-common.c | 4 ++--
hw/xen/xen-mapcache.c | 27 +++++++++++++++++++++++++++
include/hw/xen/xen-hvm-common.h | 2 ++
include/hw/xen/xen_pvdev.h | 3 +++
include/sysemu/xen-mapcache.h | 3 +++
6 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index f42621e674..67a55558a6 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -172,6 +172,9 @@ static void xen_ram_init(PCMachineState *pcms,
x86ms->above_4g_mem_size);
memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
}
+
+ /* Add grant mappings as a pseudo RAM region. */
+ ram_grants = *xen_init_grant_ram();
}
static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index baa1adb9f2..6e53d3bf81 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -9,7 +9,7 @@
#include "hw/boards.h"
#include "hw/xen/arch_hvm.h"
-MemoryRegion ram_memory;
+MemoryRegion ram_memory, ram_grants;
void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
Error **errp)
@@ -26,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
return;
}
- if (mr == &ram_memory) {
+ if (mr == &ram_memory || mr == &ram_grants) {
return;
}
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index 4f956d048e..dfc412d138 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -14,7 +14,9 @@
#include <sys/resource.h>
+#include "hw/xen/xen-hvm-common.h"
#include "hw/xen/xen_native.h"
+#include "hw/xen/xen_pvdev.h"
#include "qemu/bitmap.h"
#include "sysemu/runstate.h"
@@ -590,3 +592,28 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
mapcache_unlock();
return p;
}
+
+MemoryRegion *xen_init_grant_ram(void)
+{
+ RAMBlock *block;
+
+ memory_region_init(&ram_grants, NULL, "xen.grants",
+ XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
+ block = g_malloc0(sizeof(*block));
+ block->mr = &ram_grants;
+ block->used_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
+ block->max_length = XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE;
+ block->fd = -1;
+ block->page_size = XC_PAGE_SIZE;
+ block->host = (void *)XEN_GRANT_ADDR_OFF;
+ block->offset = XEN_GRANT_ADDR_OFF;
+ block->flags = RAM_PREALLOC;
+ ram_grants.ram_block = block;
+ ram_grants.ram = true;
+ ram_grants.terminates = true;
+ ram_block_add_list(block);
+ memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
+ &ram_grants);
+
+ return &ram_grants;
+}
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
index 4b1d728f35..8deeff6bcf 100644
--- a/include/hw/xen/xen-hvm-common.h
+++ b/include/hw/xen/xen-hvm-common.h
@@ -16,6 +16,8 @@
#include <xen/hvm/ioreq.h>
extern MemoryRegion ram_memory;
+
+extern MemoryRegion ram_grants;
extern MemoryListener xen_io_listener;
extern DeviceListener xen_device_listener;
diff --git a/include/hw/xen/xen_pvdev.h b/include/hw/xen/xen_pvdev.h
index ddad4b9f36..0f1b5edfa9 100644
--- a/include/hw/xen/xen_pvdev.h
+++ b/include/hw/xen/xen_pvdev.h
@@ -80,4 +80,7 @@ int xen_pv_send_notify(struct XenLegacyDevice *xendev);
void xen_pv_printf(struct XenLegacyDevice *xendev, int msg_level,
const char *fmt, ...) G_GNUC_PRINTF(3, 4);
+#define XEN_GRANT_ADDR_OFF 0x8000000000000000ULL
+#define XEN_MAX_VIRTIO_GRANTS 65536
+
#endif /* QEMU_HW_XEN_PVDEV_H */
diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
index c8e7c2f6cf..f4bedb1c11 100644
--- a/include/sysemu/xen-mapcache.h
+++ b/include/sysemu/xen-mapcache.h
@@ -10,6 +10,7 @@
#define XEN_MAPCACHE_H
#include "exec/cpu-common.h"
+#include "exec/ram_addr.h"
typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr phys_offset,
ram_addr_t size);
@@ -25,6 +26,8 @@ void xen_invalidate_map_cache(void);
uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
hwaddr new_phys_addr,
hwaddr size);
+MemoryRegion *xen_init_grant_ram(void);
+
#else
static inline void xen_map_cache_init(phys_offset_to_gaddr_t f,
--
2.17.1
* Re: [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings
2024-02-27 22:34 ` [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
@ 2024-03-01 14:05 ` Alex Bennée
2024-04-10 11:12 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Alex Bennée @ 2024-03-01 14:05 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Anthony Perard, Paul Durrant,
Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum, open list:X86 Xen CPUs
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> From: Juergen Gross <jgross@suse.com>
>
> Add a memory region which can be used to automatically map granted
> memory. It starts at 0x8000000000000000ULL so that it can be
> distinguished from normal RAM.
Is the Xen memory map for HVM guests documented anywhere? I couldn't
find anything googling or on the Xen wiki. I'm guessing this is going to
be shared across all 64-bit HVM arches in Xen?
Anyway:
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings
2024-03-01 14:05 ` Alex Bennée
@ 2024-04-10 11:12 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:12 UTC (permalink / raw)
To: Alex Bennée
Cc: Vikram Garhwal, qemu-devel, sstabellini, jgross, Anthony Perard,
Paul Durrant, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum, open list:X86 Xen CPUs
On Fri, Mar 1, 2024 at 3:06 PM Alex Bennée <alex.bennee@linaro.org> wrote:
> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
> > From: Juergen Gross <jgross@suse.com>
> >
> > Add a memory region which can be used to automatically map granted
> > memory. It starts at 0x8000000000000000ULL so that it can be
> > distinguished from normal RAM.
>
> Is the Xen memory map for HVM guests documented anywhere? I couldn't
> find anything googling or on the Xen wiki. I'm guessing this is going to
> be shared across all 64-bit HVM arches in Xen?
>
> Anyway:
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
* [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
2024-02-27 22:34 ` [QEMU][PATCH v3 1/7] softmmu: physmem: Split ram_block_add() Vikram Garhwal
2024-02-27 22:34 ` [QEMU][PATCH v3 2/7] xen: add pseudo RAM region for grant mappings Vikram Garhwal
@ 2024-02-27 22:34 ` Vikram Garhwal
2024-03-01 17:04 ` Alex Bennée
2024-04-10 11:15 ` Edgar E. Iglesias
2024-02-27 22:34 ` [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
` (4 subsequent siblings)
7 siblings, 2 replies; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:34 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
From: Juergen Gross <jgross@suse.com>
qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
modify qemu_ram_ptr_length() a little bit and use it for
qemu_map_ram_ptr(), too.
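For illustration, after this change both entry points funnel through the one
helper, and a NULL size now means "no length wanted". A minimal sketch of the
two call styles inside physmem.c (block and addr are placeholders):

    /* With a length: *len is clamped to what is usable from the pointer. */
    hwaddr len = 4096;
    void *p = qemu_ram_ptr_length(block, addr, &len, true);

    /* Without a length: the new NULL-size case used by qemu_map_ram_ptr(). */
    void *q = qemu_ram_ptr_length(block, addr, NULL, false);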
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
system/physmem.c | 56 ++++++++++++++++++++----------------------------
1 file changed, 23 insertions(+), 33 deletions(-)
diff --git a/system/physmem.c b/system/physmem.c
index 84f3022099..949dcb20ba 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2163,43 +2163,17 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
}
#endif /* !_WIN32 */
-/* Return a host pointer to ram allocated with qemu_ram_alloc.
- * This should not be used for general purpose DMA. Use address_space_map
- * or address_space_rw instead. For local memory (e.g. video ram) that the
- * device owns, use memory_region_get_ram_ptr.
- *
- * Called within RCU critical section.
- */
-void *qemu_map_ram_ptr(RAMBlock *block, ram_addr_t addr)
-{
- if (block == NULL) {
- block = qemu_get_ram_block(addr);
- addr -= block->offset;
- }
-
- if (xen_enabled() && block->host == NULL) {
- /* We need to check if the requested address is in the RAM
- * because we don't want to map the entire memory in QEMU.
- * In that case just map until the end of the page.
- */
- if (block->offset == 0) {
- return xen_map_cache(addr, 0, 0, false);
- }
-
- block->host = xen_map_cache(block->offset, block->max_length, 1, false);
- }
- return ramblock_ptr(block, addr);
-}
-
-/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
- * but takes a size argument.
+/*
+ * Return a host pointer to guest's ram.
*
* Called within RCU critical section.
*/
static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
hwaddr *size, bool lock)
{
- if (*size == 0) {
+ hwaddr len = 0;
+
+ if (size && *size == 0) {
return NULL;
}
@@ -2207,7 +2181,10 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
block = qemu_get_ram_block(addr);
addr -= block->offset;
}
- *size = MIN(*size, block->max_length - addr);
+ if (size) {
+ *size = MIN(*size, block->max_length - addr);
+ len = *size;
+ }
if (xen_enabled() && block->host == NULL) {
/* We need to check if the requested address is in the RAM
@@ -2215,7 +2192,7 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
* In that case just map the requested area.
*/
if (block->offset == 0) {
- return xen_map_cache(addr, *size, lock, lock);
+ return xen_map_cache(addr, len, lock, lock);
}
block->host = xen_map_cache(block->offset, block->max_length, 1, lock);
@@ -2224,6 +2201,19 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
return ramblock_ptr(block, addr);
}
+/*
+ * Return a host pointer to ram allocated with qemu_ram_alloc.
+ * This should not be used for general purpose DMA. Use address_space_map
+ * or address_space_rw instead. For local memory (e.g. video ram) that the
+ * device owns, use memory_region_get_ram_ptr.
+ *
+ * Called within RCU critical section.
+ */
+void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
+{
+ return qemu_ram_ptr_length(ram_block, addr, NULL, false);
+}
+
/* Return the offset of a hostpointer within a ramblock */
ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host)
{
--
2.17.1
* Re: [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2024-02-27 22:34 ` [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
@ 2024-03-01 17:04 ` Alex Bennée
2024-03-06 20:58 ` Vikram Garhwal
2024-04-10 11:15 ` Edgar E. Iglesias
1 sibling, 1 reply; 33+ messages in thread
From: Alex Bennée @ 2024-03-01 17:04 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> From: Juergen Gross <jgross@suse.com>
>
> qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
> modify qemu_ram_ptr_length() a little bit and use it for
> qemu_map_ram_ptr(), too.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> system/physmem.c | 56 ++++++++++++++++++++----------------------------
> 1 file changed, 23 insertions(+), 33 deletions(-)
>
<snip>
> -
> -/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
> - * but takes a size argument.
> +/*
> + * Return a host pointer to guest's ram.
> *
> * Called within RCU critical section.
> */
If you end up re-spinning it would be nice to kdoc this function and at
least call out size as a return by ref and optional.
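Something along these lines would do (just a sketch, wording is a suggestion):

    /**
     * qemu_ram_ptr_length: Return a host pointer to guest's ram.
     *
     * @block: block to use, or NULL to look it up from @addr
     * @addr: offset within @block (a ram_addr_t if @block is NULL)
     * @size: in: requested length; out: usable length from the returned
     *        pointer; may be NULL if the caller does not want a length back
     * @lock: whether a locked Xen mapcache mapping is wanted
     *
     * Called within RCU critical section.
     */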
> static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> hwaddr *size, bool lock)
> {
> - if (*size == 0) {
> + hwaddr len = 0;
> +
> + if (size && *size == 0) {
> return NULL;
> }
>
> @@ -2207,7 +2181,10 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> block = qemu_get_ram_block(addr);
> addr -= block->offset;
> }
> - *size = MIN(*size, block->max_length - addr);
> + if (size) {
> + *size = MIN(*size, block->max_length - addr);
> + len = *size;
> + }
>
> if (xen_enabled() && block->host == NULL) {
> /* We need to check if the requested address is in the RAM
> @@ -2215,7 +2192,7 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> * In that case just map the requested area.
> */
> if (block->offset == 0) {
> - return xen_map_cache(addr, *size, lock, lock);
> + return xen_map_cache(addr, len, lock, lock);
I did wonder if len == 0 will confuse things but it seems xen_map_cache
will default to XC_PAGE_SIZE in that case.
Anyway:
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2024-03-01 17:04 ` Alex Bennée
@ 2024-03-06 20:58 ` Vikram Garhwal
0 siblings, 0 replies; 33+ messages in thread
From: Vikram Garhwal @ 2024-03-06 20:58 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, sstabellini, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Fri, Mar 01, 2024 at 05:04:54PM +0000, Alex Bennée wrote:
> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
> > From: Juergen Gross <jgross@suse.com>
> >
> > qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
> > modify qemu_ram_ptr_length() a little bit and use it for
> > qemu_map_ram_ptr(), too.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > system/physmem.c | 56 ++++++++++++++++++++----------------------------
> > 1 file changed, 23 insertions(+), 33 deletions(-)
> >
> <snip>
> > -
> > -/* Return a host pointer to guest's ram. Similar to qemu_map_ram_ptr
> > - * but takes a size argument.
> > +/*
> > + * Return a host pointer to guest's ram.
> > *
> > * Called within RCU critical section.
> > */
>
> If you end up re-spinning it would be nice to kdoc this function and at
> least call out size as a return by ref and optional.
Will do if re-spinning is needed.
>
> > static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> > hwaddr *size, bool lock)
> > {
> > - if (*size == 0) {
> > + hwaddr len = 0;
> > +
> > + if (size && *size == 0) {
> > return NULL;
> > }
> >
> > @@ -2207,7 +2181,10 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> > block = qemu_get_ram_block(addr);
> > addr -= block->offset;
> > }
> > - *size = MIN(*size, block->max_length - addr);
> > + if (size) {
> > + *size = MIN(*size, block->max_length - addr);
> > + len = *size;
> > + }
> >
> > if (xen_enabled() && block->host == NULL) {
> > /* We need to check if the requested address is in the RAM
> > @@ -2215,7 +2192,7 @@ static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
> > * In that case just map the requested area.
> > */
> > if (block->offset == 0) {
> > - return xen_map_cache(addr, *size, lock, lock);
> > + return xen_map_cache(addr, len, lock, lock);
>
> I did wonder if len == 0 will confuse things but it seems xen_map_cache
> will default to XC_PAGE_SIZE in that case.
>
> Anyway:
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
> --
> Alex Bennée
> Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
2024-02-27 22:34 ` [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
2024-03-01 17:04 ` Alex Bennée
@ 2024-04-10 11:15 ` Edgar E. Iglesias
1 sibling, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:15 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
wrote:
> From: Juergen Gross <jgross@suse.com>
>
> qemu_map_ram_ptr() and qemu_ram_ptr_length() share quite some code, so
> modify qemu_ram_ptr_length() a little bit and use it for
> qemu_map_ram_ptr(), too.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
<snip>
* [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
` (2 preceding siblings ...)
2024-02-27 22:34 ` [QEMU][PATCH v3 3/7] softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length() Vikram Garhwal
@ 2024-02-27 22:34 ` Vikram Garhwal
2024-03-01 17:08 ` Alex Bennée
2024-02-27 22:34 ` [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
` (3 subsequent siblings)
7 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:34 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Anthony Perard, Paul Durrant,
open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Today xen_ram_addr_from_mapcache() will either abort() or return 0 if
it can't find a matching entry for a pointer value. Both cases are
bad, so change that to return an invalid address instead.
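Callers are then expected to check for the sentinel instead of relying on an
abort(), e.g. (illustrative):

    ram_addr_t raddr = xen_ram_addr_from_mapcache(ptr);

    if (raddr == RAM_ADDR_INVALID) {
        /* ptr is not a mapcache pointer; fall back or fail gracefully */
    }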
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
hw/xen/xen-mapcache.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index dfc412d138..179b7e95b2 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -396,13 +396,8 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
}
}
if (!found) {
- trace_xen_ram_addr_from_mapcache_not_found(ptr);
- QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
- trace_xen_ram_addr_from_mapcache_found(reventry->paddr_index,
- reventry->vaddr_req);
- }
- abort();
- return 0;
+ mapcache_unlock();
+ return RAM_ADDR_INVALID;
}
entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
@@ -411,7 +406,7 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
}
if (!entry) {
trace_xen_ram_addr_from_mapcache_not_in_cache(ptr);
- raddr = 0;
+ raddr = RAM_ADDR_INVALID;
} else {
raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
((unsigned long) ptr - (unsigned long) entry->vaddr_base);
--
2.17.1
* Re: [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
2024-02-27 22:34 ` [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
@ 2024-03-01 17:08 ` Alex Bennée
2024-04-10 11:14 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Alex Bennée @ 2024-03-01 17:08 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Anthony Perard, Paul Durrant,
open list:X86 Xen CPUs
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> From: Juergen Gross <jgross@suse.com>
>
> Today xen_ram_addr_from_mapcache() will either abort() or return 0 if
> it can't find a matching entry for a pointer value. Both cases are
> bad, so change that to return an invalid address instead.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> hw/xen/xen-mapcache.c | 11 +++--------
> 1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index dfc412d138..179b7e95b2 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -396,13 +396,8 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> }
> }
> if (!found) {
> - trace_xen_ram_addr_from_mapcache_not_found(ptr);
> - QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
> - trace_xen_ram_addr_from_mapcache_found(reventry->paddr_index,
> - reventry->vaddr_req);
> - }
If these tracepoints aren't useful they need removing from trace-events.
However I suspect it would be better to keep them in as they are fairly
cheap.
Otherwise:
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
2024-03-01 17:08 ` Alex Bennée
@ 2024-04-10 11:14 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:14 UTC (permalink / raw)
To: Alex Bennée
Cc: Vikram Garhwal, qemu-devel, sstabellini, jgross, Anthony Perard,
Paul Durrant, open list:X86 Xen CPUs
On Fri, Mar 1, 2024 at 6:08 PM Alex Bennée <alex.bennee@linaro.org> wrote:
> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
> > From: Juergen Gross <jgross@suse.com>
> >
> > Today xen_ram_addr_from_mapcache() will either abort() or return 0 if
> > it can't find a matching entry for a pointer value. Both cases are
> > bad, so change that to return an invalid address instead.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > hw/xen/xen-mapcache.c | 11 +++--------
> > 1 file changed, 3 insertions(+), 8 deletions(-)
> >
> > diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> > index dfc412d138..179b7e95b2 100644
> > --- a/hw/xen/xen-mapcache.c
> > +++ b/hw/xen/xen-mapcache.c
> > @@ -396,13 +396,8 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> > }
> > }
> > if (!found) {
> > - trace_xen_ram_addr_from_mapcache_not_found(ptr);
> > - QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
> > - trace_xen_ram_addr_from_mapcache_found(reventry->paddr_index,
> > - reventry->vaddr_req);
> > - }
>
> If these tracepoints aren't useful they need removing from trace-events.
> However I suspect it would be better to keep them in as they are fairly
> cheap.
>
> Otherwise:
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
* [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
` (3 preceding siblings ...)
2024-02-27 22:34 ` [QEMU][PATCH v3 4/7] xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry Vikram Garhwal
@ 2024-02-27 22:34 ` Vikram Garhwal
2024-02-29 23:10 ` Stefano Stabellini
2024-04-10 16:44 ` Edgar E. Iglesias
2024-02-27 22:35 ` [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
` (2 subsequent siblings)
7 siblings, 2 replies; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:34 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
From: Juergen Gross <jgross@suse.com>
In order to support mapping and unmapping guest memory dynamically to
and from qemu during address_space_[un]map() operations add the map()
and unmap() callbacks to MemoryRegionOps.
Those will be used e.g. for Xen grant mappings when performing guest
I/Os.
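For illustration, a region that wants dynamic mappings would provide the two
new callbacks in its MemoryRegionOps; a minimal sketch using the signatures
introduced below (grant_map/grant_unmap and the ops name are hypothetical
here; patch 6/7 adds the real Xen grant implementations):

    static void *grant_map(MemoryRegion **mr, hwaddr addr, hwaddr *plen,
                           bool is_write, MemTxAttrs attrs);
    static void grant_unmap(MemoryRegion *mr, void *buffer, ram_addr_t addr,
                            hwaddr len, bool is_write, hwaddr access_len);

    static const MemoryRegionOps xen_grant_mr_ops = {
        .map = grant_map,
        .unmap = grant_unmap,
        .endianness = DEVICE_LITTLE_ENDIAN,
    };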
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
include/exec/memory.h | 21 ++++++++++++++++++
system/physmem.c | 50 +++++++++++++++++++++++++++++++++----------
2 files changed, 60 insertions(+), 11 deletions(-)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 8626a355b3..9f7dfe59c7 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -282,6 +282,27 @@ struct MemoryRegionOps {
unsigned size,
MemTxAttrs attrs);
+ /*
+ * Dynamically create mapping. @addr is the guest address to map; @plen
+ * is the pointer to the usable length of the buffer.
+ * @mr contents can be changed in case a new memory region is created for
+ * the mapping.
+ * Returns the buffer address for accessing the data.
+ */
+ void *(*map)(MemoryRegion **mr,
+ hwaddr addr,
+ hwaddr *plen,
+ bool is_write,
+ MemTxAttrs attrs);
+
+ /* Unmap an area obtained via map() before. */
+ void (*unmap)(MemoryRegion *mr,
+ void *buffer,
+ ram_addr_t addr,
+ hwaddr len,
+ bool is_write,
+ hwaddr access_len);
+
enum device_endian endianness;
/* Guest-visible constraints: */
struct {
diff --git a/system/physmem.c b/system/physmem.c
index 949dcb20ba..d989e9fc1f 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3141,6 +3141,7 @@ void *address_space_map(AddressSpace *as,
hwaddr len = *plen;
hwaddr l, xlat;
MemoryRegion *mr;
+ void *ptr = NULL;
FlatView *fv;
if (len == 0) {
@@ -3174,12 +3175,20 @@ void *address_space_map(AddressSpace *as,
return bounce.buffer;
}
-
memory_region_ref(mr);
+
+ if (mr->ops && mr->ops->map) {
+ ptr = mr->ops->map(&mr, addr, plen, is_write, attrs);
+ }
+
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
l, is_write, attrs);
fuzz_dma_read_cb(addr, *plen, mr);
- return qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
+ if (ptr == NULL) {
+ ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
+ }
+
+ return ptr;
}
/* Unmaps a memory region previously mapped by address_space_map().
@@ -3195,11 +3204,16 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
mr = memory_region_from_host(buffer, &addr1);
assert(mr != NULL);
- if (is_write) {
- invalidate_and_set_dirty(mr, addr1, access_len);
- }
- if (xen_enabled()) {
- xen_invalidate_map_cache_entry(buffer);
+
+ if (mr->ops && mr->ops->unmap) {
+ mr->ops->unmap(mr, buffer, addr1, len, is_write, access_len);
+ } else {
+ if (is_write) {
+ invalidate_and_set_dirty(mr, addr1, access_len);
+ }
+ if (xen_enabled()) {
+ xen_invalidate_map_cache_entry(buffer);
+ }
}
memory_region_unref(mr);
return;
@@ -3272,10 +3286,18 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
* doing this if we found actual RAM, which behaves the same
* regardless of attributes; so UNSPECIFIED is fine.
*/
+ if (mr->ops && mr->ops->map) {
+ cache->ptr = mr->ops->map(&mr, addr, &l, is_write,
+ MEMTXATTRS_UNSPECIFIED);
+ }
+
l = flatview_extend_translation(cache->fv, addr, len, mr,
cache->xlat, l, is_write,
MEMTXATTRS_UNSPECIFIED);
- cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
+ if (!cache->ptr) {
+ cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l,
+ true);
+ }
} else {
cache->ptr = NULL;
}
@@ -3297,14 +3319,20 @@ void address_space_cache_invalidate(MemoryRegionCache *cache,
void address_space_cache_destroy(MemoryRegionCache *cache)
{
- if (!cache->mrs.mr) {
+ MemoryRegion *mr = cache->mrs.mr;
+
+ if (!mr) {
return;
}
- if (xen_enabled()) {
+ if (mr->ops && mr->ops->unmap) {
+ mr->ops->unmap(mr, cache->ptr, cache->xlat, cache->len,
+ cache->is_write, cache->len);
+ } else if (xen_enabled()) {
xen_invalidate_map_cache_entry(cache->ptr);
}
- memory_region_unref(cache->mrs.mr);
+
+ memory_region_unref(mr);
flatview_unref(cache->fv);
cache->mrs.mr = NULL;
cache->fv = NULL;
--
2.17.1
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-02-27 22:34 ` [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
@ 2024-02-29 23:10 ` Stefano Stabellini
2024-04-10 11:16 ` Edgar E. Iglesias
2024-04-10 16:44 ` Edgar E. Iglesias
1 sibling, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2024-02-29 23:10 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Tue, 27 Feb 2024, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> In order to support mapping and unmapping guest memory dynamically to
> and from qemu during address_space_[un]map() operations add the map()
> and unmap() callbacks to MemoryRegionOps.
>
> Those will be used e.g. for Xen grant mappings when performing guest
> I/Os.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
<snip>
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-02-29 23:10 ` Stefano Stabellini
@ 2024-04-10 11:16 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:16 UTC (permalink / raw)
To: Stefano Stabellini
Cc: Vikram Garhwal, qemu-devel, jgross, Paolo Bonzini, Peter Xu,
David Hildenbrand, Philippe Mathieu-Daudé
On Fri, Mar 1, 2024 at 12:11 AM Stefano Stabellini <sstabellini@kernel.org>
wrote:
> On Tue, 27 Feb 2024, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > In order to support mapping and unmapping guest memory dynamically to
> > and from qemu during address_space_[un]map() operations add the map()
> > and unmap() callbacks to MemoryRegionOps.
> >
> > Those will be used e.g. for Xen grant mappings when performing guest
> > I/Os.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
<snip>
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-02-27 22:34 ` [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
2024-02-29 23:10 ` Stefano Stabellini
@ 2024-04-10 16:44 ` Edgar E. Iglesias
2024-04-10 18:56 ` Peter Xu
1 sibling, 1 reply; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 16:44 UTC (permalink / raw)
To: Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé
Cc: qemu-devel, sstabellini, vikram.garhwal, jgross
On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
wrote:
> From: Juergen Gross <jgross@suse.com>
>
> In order to support mapping and unmapping guest memory dynamically to
> and from qemu during address_space_[un]map() operations add the map()
> and unmap() callbacks to MemoryRegionOps.
>
> Those will be used e.g. for Xen grant mappings when performing guest
> I/Os.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
Cheers,
Edgar
<snip>
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-04-10 16:44 ` Edgar E. Iglesias
@ 2024-04-10 18:56 ` Peter Xu
2024-04-16 11:32 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Peter Xu @ 2024-04-10 18:56 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: Paolo Bonzini, David Hildenbrand, Philippe Mathieu-Daudé,
qemu-devel, sstabellini, vikram.garhwal, jgross
On Wed, Apr 10, 2024 at 06:44:38PM +0200, Edgar E. Iglesias wrote:
> On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
> wrote:
>
> > From: Juergen Gross <jgross@suse.com>
> >
> > In order to support mapping and unmapping guest memory dynamically to
> > and from qemu during address_space_[un]map() operations add the map()
> > and unmap() callbacks to MemoryRegionOps.
> >
> > Those will be used e.g. for Xen grant mappings when performing guest
> > I/Os.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> >
>
>
> Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
This introduces a 3rd memory type afaict, neither direct nor !direct.
What happens if someone does address_space_write() to it? I didn't see it
covered here..
OTOH, the cover letter didn't mention too much either on the big picture:
https://lore.kernel.org/all/20240227223501.28475-1-vikram.garhwal@amd.com/
I want to get a quick grasp on whether it's justified and worthwhile to
introduce this complexity into the qemu memory core.
Could I request a better cover letter when repost? It'll be great to
mention things like:
- what is grant mapping, why it needs to be used, when it can be used (is
it only relevant to vIOMMU=on)? Some more information on the high
level design using this type of MR would be great.
- why is a 3rd memory type required? Do we have other alternatives?
So it's all based on my very limited understanding of reading this:
https://xenbits.xenproject.org/docs/4.3-testing/misc/grant-tables.txt
If it's about cross-vm sharing memory, does it mean that in reality
there are RAMs allocated, but it's only about permission management?
In that case, is there an option to implement it with direct access
mode (however with some extra dynamic permissions applied on top using
some mechanism)?
- perhaps some old links would be great too so people can read about the
context when it's not obvious, without a need to copy-paste.
Thanks,
--
Peter Xu
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-04-10 18:56 ` Peter Xu
@ 2024-04-16 11:32 ` Edgar E. Iglesias
2024-04-16 13:28 ` Jürgen Groß
0 siblings, 1 reply; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-16 11:32 UTC (permalink / raw)
To: Peter Xu
Cc: Paolo Bonzini, David Hildenbrand, Philippe Mathieu-Daudé,
qemu-devel, sstabellini, vikram.garhwal, jgross
On Wed, Apr 10, 2024 at 8:56 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Apr 10, 2024 at 06:44:38PM +0200, Edgar E. Iglesias wrote:
> > On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
> > wrote:
> >
> > > From: Juergen Gross <jgross@suse.com>
> > >
> > > In order to support mapping and unmapping guest memory dynamically to
> > > and from qemu during address_space_[un]map() operations add the map()
> > > and unmap() callbacks to MemoryRegionOps.
> > >
> > > Those will be used e.g. for Xen grant mappings when performing guest
> > > I/Os.
> > >
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > >
> >
> >
> > Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
>
Thanks for your comments Peter,
> This introduces a 3rd memory type afaict, neither direct nor !direct.
>
> What happens if someone does address_space_write() to it? I didn't see it
> covered here..
You're right, that won't work, the memory needs to be mapped before it
can be used.
At minimum there should be some graceful failure, right now this will crash.
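One conceivable guard - only a sketch, not something the series implements -
is to keep regions with map()/unmap() off the direct-RAM fast path, so a
plain access can fail instead of dereferencing a missing host mapping:

    /* memory_access_is_direct() is the existing helper in
     * include/exec/memory.h; this wrapper is hypothetical. */
    static bool memory_access_is_direct_safe(MemoryRegion *mr, bool is_write)
    {
        if (mr->ops && mr->ops->map) {
            return false;   /* force the non-direct (failable) path */
        }
        return memory_access_is_direct(mr, is_write);
    }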
>
> OTOH, the cover letter didn't mention too much either on the big picture:
>
> https://lore.kernel.org/all/20240227223501.28475-1-vikram.garhwal@amd.com/
>
> I want to have a quick grasp on whether it's justified and worthwhile to
> introduce this complexity to qemu memory core.
>
> Could I request a better cover letter when reposting? It'll be great to
> mention things like:
I'll do that, but also answer inline in the meantime since we should
perhaps change the approach.
>
> - what is grant mapping, why it needs to be used, when it can be used (is
> it only relevant to vIOMMU=on)? Some more information on the high
> level design using this type of MR would be great.
https://github.com/xen-project/xen/blob/master/docs/misc/grant-tables.txt
Xen VMs that use QEMU's VirtIO have a QEMU instance running in a separate VM.
There are basically two mechanisms for QEMU's Virtio backends to access
the guest's RAM.
1. Foreign mappings. This gives the VM running QEMU access to the
entire RAM of the guest VM.
2. Grant mappings. This allows the guest to dynamically grant and
remove access to pages as needed.
So the VM running QEMU cannot map guest RAM unless it's been
instructed to do so by the guest.
#2 is desirable because if QEMU gets compromised it has a smaller
attack surface onto the guest.
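To make #2 concrete, here is a minimal sketch of how a backend maps and
unmaps a single granted page with libxengnttab (the same library the series
uses); error handling elided, and domid and ref come from the frontend:

    #include <sys/mman.h>    /* PROT_READ / PROT_WRITE */
    #include <xengnttab.h>

    static void *map_one_grant(xengnttab_handle *gnt,
                               uint32_t domid, uint32_t ref)
    {
        /* Returns a mapping of the granted page, or NULL on failure. */
        return xengnttab_map_grant_ref(gnt, domid, ref,
                                       PROT_READ | PROT_WRITE);
    }

    static void unmap_one_grant(xengnttab_handle *gnt, void *addr)
    {
        xengnttab_unmap(gnt, addr, 1 /* one page */);
    }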
>
> - why a 3rd memory type is required? Do we have other alternatives?
Yes, there are alternatives.
1. It was suggested by Stefano to try to handle this in existing qemu/hw/xen/*.
This would be less intrusive but perhaps also less explicit.
Concerns about touching the Memory API have been raised before, so
perhaps we should try this.
I'm a little unsure how we would deal with unmapping when the guest
removes our grants and we're using models that don't map but use
address_space_read/write().
2. Another approach could be to change the Xen grant-iommu in the
Linux kernel to work with a grant vIOMMU in QEMU.
Linux could explicitly ask QEMU's grant vIOMMU to map/unmap granted regions.
This would have the benefit that we wouldn't need to allocate
address-bit 63 for grants.
A drawback is that it may be slower since we're going to need to
bounce between guest/qemu a bit more.
> So it's all based on my very limited understanding of reading this:
> https://xenbits.xenproject.org/docs/4.3-testing/misc/grant-tables.txt
>
> If it's about cross-vm sharing memory, does it mean that in reality
> there are RAMs allocated, but it's only about permission management?
> In that case, is there an option to implement it with direct access
> mode (however with some extra dynamic permissions applied on top using
> some mechanism)?
Yes, I think the grant vIOMMU approach would fall into this category.
>
> - perhaps some old links would be great too so people can read about the
> context when it's not obvious, without a need to copy-paste.
https://wiki.xenproject.org/wiki/Virtio_On_Xen
https://lwn.net/Articles/896938/
Cheers,
Edgar
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-04-16 11:32 ` Edgar E. Iglesias
@ 2024-04-16 13:28 ` Jürgen Groß
2024-04-16 15:55 ` Peter Xu
0 siblings, 1 reply; 33+ messages in thread
From: Jürgen Groß @ 2024-04-16 13:28 UTC (permalink / raw)
To: Edgar E. Iglesias, Peter Xu
Cc: Paolo Bonzini, David Hildenbrand, Philippe Mathieu-Daudé,
qemu-devel, sstabellini, vikram.garhwal
On 16.04.24 13:32, Edgar E. Iglesias wrote:
> On Wed, Apr 10, 2024 at 8:56 PM Peter Xu <peterx@redhat.com> wrote:
>>
>> On Wed, Apr 10, 2024 at 06:44:38PM +0200, Edgar E. Iglesias wrote:
>>> On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
>>> wrote:
>>>
>>>> From: Juergen Gross <jgross@suse.com>
>>>>
>>>> In order to support mapping and unmapping guest memory dynamically to
>>>> and from qemu during address_space_[un]map() operations add the map()
>>>> and unmap() callbacks to MemoryRegionOps.
>>>>
>>>> Those will be used e.g. for Xen grant mappings when performing guest
>>>> I/Os.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>>>>
>>>
>>>
>>> Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
>>
>
> Thanks for your comments Peter,
>
>
>> This introduces a 3rd memory type afaict, neither direct nor !direct.
>>
>> What happens if someone does address_space_write() to it? I didn't see it
>> covered here..
>
> You're right, that won't work, the memory needs to be mapped before it
> can be used.
> At minimum there should be some graceful failure, right now this will crash.
>
>>
>> OTOH, the cover letter didn't mention too much either on the big picture:
>>
>> https://lore.kernel.org/all/20240227223501.28475-1-vikram.garhwal@amd.com/
>>
>> I want to have a quick grasp on whether it's justified and worthwhile to
>> introduce this complexity to qemu memory core.
>>
>> Could I request a better cover letter when reposting? It'll be great to
>> mention things like:
>
> I'll do that, but also answer inline in the meantime since we should
> perhaps change the approach.
>
>>
>> - what is grant mapping, why it needs to be used, when it can be used (is
>> it only relevant to vIOMMU=on)? Some more information on the high
>> level design using this type of MR would be great.
>
> https://github.com/xen-project/xen/blob/master/docs/misc/grant-tables.txt
>
> Xen VMs that use QEMU's VirtIO have a QEMU instance running in a separate VM.
>
> There are basically two mechanisms for QEMU's Virtio backends to access
> the guest's RAM.
> 1. Foreign mappings. This gives the VM running QEMU access to the
> entire RAM of the guest VM.
Additionally it requires qemu to run in dom0, while in general Xen allows
running backends in less privileged "driver domains", which are usually not
allowed to perform foreign mappings.
> 2. Grant mappings. This allows the guest to dynamically grant and
> remove access to pages as needed.
> So the VM running QEMU cannot map guest RAM unless it's been
> instructed to do so by the guest.
>
> #2 is desirable because if QEMU gets compromised it has a smaller
> attack surface onto the guest.
And it allows running the virtio backend in a less privileged VM.
>
>>
>> - why a 3rd memory type is required? Do we have other alternatives?
>
> Yes, there are alternatives.
>
> 1. It was suggested by Stefano to try to handle this in existing qemu/hw/xen/*.
> This would be less intrusive but perhaps also less explicit.
> Concerns about touching the Memory API have been raised before, so
> perhaps we should try this.
> I'm a little unsure how we would deal with unmapping when the guest
> removes our grants and we're using models that don't map but use
> address_space_read/write().
Those would either need to use grant-copy hypercalls, or they'd need to map,
read/write, unmap.
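For reference, the map/read/unmap variant could look roughly like the sketch
below, built on the existing address_space_map()/address_space_unmap() API
(a sketch only; a real caller would loop on truncated mappings):

    static MemTxResult read_via_map(AddressSpace *as, hwaddr addr,
                                    void *buf, hwaddr len)
    {
        hwaddr l = len;
        void *p = address_space_map(as, addr, &l, false,
                                    MEMTXATTRS_UNSPECIFIED);

        if (!p) {
            return MEMTX_ERROR;
        }
        if (l < len) {
            /* Truncated mapping; bail out in this sketch. */
            address_space_unmap(as, p, l, false, 0);
            return MEMTX_ERROR;
        }
        memcpy(buf, p, len);
        address_space_unmap(as, p, len, false, 0);
        return MEMTX_OK;
    }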
>
> 2. Another approach could be to change the Xen grant-iommu in the
> Linux kernel to work with a grant vIOMMU in QEMU.
> Linux could explicitly ask QEMU's grant vIOMMU to map/unmap granted regions.
> This would have the benefit that we wouldn't need to allocate
> address-bit 63 for grants.
> A drawback is that it may be slower since we're going to need to
> bounce between guest/qemu a bit more.
It would be a _lot_ slower, unless virtio-iommu and grants are both modified
to match. I have looked into that, but the needed effort is rather large. At
the last Xen summit I have suggested to introduce a new grant format which
would work more like a normal page table structure. Using the same format
for virtio-iommu would allow to avoid the additional bounces between qemu and
the guest (and in fact that was one of the motivations to suggest the new
grant format).
Juergen
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-04-16 13:28 ` Jürgen Groß
@ 2024-04-16 15:55 ` Peter Xu
2024-04-17 10:34 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Peter Xu @ 2024-04-16 15:55 UTC (permalink / raw)
To: Jürgen Groß
Cc: Edgar E. Iglesias, Paolo Bonzini, David Hildenbrand,
Philippe Mathieu-Daudé, qemu-devel, sstabellini,
vikram.garhwal
On Tue, Apr 16, 2024 at 03:28:41PM +0200, Jürgen Groß wrote:
> On 16.04.24 13:32, Edgar E. Iglesias wrote:
> > On Wed, Apr 10, 2024 at 8:56 PM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Wed, Apr 10, 2024 at 06:44:38PM +0200, Edgar E. Iglesias wrote:
> > > > On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
> > > > wrote:
> > > >
> > > > > From: Juergen Gross <jgross@suse.com>
> > > > >
> > > > > In order to support mapping and unmapping guest memory dynamically to
> > > > > and from qemu during address_space_[un]map() operations add the map()
> > > > > and unmap() callbacks to MemoryRegionOps.
> > > > >
> > > > > Those will be used e.g. for Xen grant mappings when performing guest
> > > > > I/Os.
> > > > >
> > > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > > > >
> > > >
> > > >
> > > > Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
> > >
> >
> > Thanks for your comments Peter,
> >
> >
> > > This introduces a 3rd memory type afaict, neither direct nor !direct.
> > >
> > > What happens if someone does address_space_write() to it? I didn't see it
> > > covered here..
> >
> > You're right, that won't work, the memory needs to be mapped before it
> > can be used.
> > At minimum there should be some graceful failure, right now this will crash.
> >
> > >
> > > OTOH, the cover letter didn't mention too much either on the big picture:
> > >
> > > https://lore.kernel.org/all/20240227223501.28475-1-vikram.garhwal@amd.com/
> > >
> > > I want to have a quick grasp on whether it's justified and worthwhile to
> > > introduce this complexity to qemu memory core.
> > >
> > > Could I request a better cover letter when reposting? It'll be great to
> > > mention things like:
> >
> > I'll do that, but also answer inline in the meantime since we should
> > perhaps change the approach.
> >
> > >
> > > - what is grant mapping, why it needs to be used, when it can be used (is
> > > it only relevant to vIOMMU=on)? Some more information on the high
> > > level design using this type of MR would be great.
> >
> > https://github.com/xen-project/xen/blob/master/docs/misc/grant-tables.txt
> >
> > Xen VMs that use QEMU's VirtIO have a QEMU instance running in a separate VM.
> >
> > There are basically two mechanisms for QEMU's Virtio backends to access
> > the guest's RAM.
> > 1. Foreign mappings. This gives the VM running QEMU access to the
> > entire RAM of the guest VM.
>
> Additionally it requires qemu to run in dom0, while in general Xen allows
> running backends in less privileged "driver domains", which are usually not
> allowed to perform foreign mappings.
>
> > 2. Grant mappings. This allows the guest to dynamically grant and
> > remove access to pages as needed.
> > So the VM running QEMU cannot map guest RAM unless it's been
> > instructed to do so by the guest.
> >
> > #2 is desirable because if QEMU gets compromised it has a smaller
> > attack surface onto the guest.
>
> And it allows running the virtio backend in a less privileged VM.
>
> >
> > >
> > > - why a 3rd memory type is required? Do we have other alternatives?
> >
> > Yes, there are alternatives.
> >
> > 1. It was suggested by Stefano to try to handle this in existing qemu/hw/xen/*.
> > This would be less intrusive but perhaps also less explicit.
> > Concerns about touching the Memory API have been raised before, so
> > perhaps we should try this.
> > I'm a little unsure how we would deal with unmapping when the guest
> > removes our grants and we're using models that don't map but use
> > address_space_read/write().
>
> Those would either need to use grant-copy hypercalls, or they'd need to map,
> read/write, unmap.
>
> >
> > 2. Another approach could be to change the Xen grant-iommu in the
> > Linux kernel to work with a grant vIOMMU in QEMU.
> > Linux could explicitly ask QEMU's grant vIOMMU to map/unmap granted regions.
> > This would have the benefit that we wouldn't need to allocate
> > address-bit 63 for grants.
> > A drawback is that it may be slower since we're going to need to
> > bounce between guest/qemu a bit more.
>
> It would be a _lot_ slower, unless virtio-iommu and grants are both modified
> to match. I have looked into that, but the needed effort is rather large. At
> the last Xen summit I have suggested to introduce a new grant format which
> would work more like a normal page table structure. Using the same format
> for virtio-iommu would allow avoiding the additional bounces between qemu and
> the guest (and in fact that was one of the motivations to suggest the new
> grant format).
I have a better picture now, thanks both.
It really looks like a vIOMMU already to me, perhaps with a special refID
mapping playing a similar role to IOVAs in the rest of the IOMMU world.
I can't yet tell what's the best way for Xen - as of now QEMU's memory API
does provide such translations via IOMMUMemoryRegionClass.translate(), but
only through that hook. So far it works for all vIOMMU emulations in QEMU,
and I'd hope we don't need to hack another memory type for this if possible,
especially for performance's sake; more on below.
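To make that concrete, a grant vIOMMU would implement this existing hook
rather than a new memory type - a sketch only, with
xen_grant_ref_to_ram_addr() as a hypothetical lookup into the tracked grant
mappings:

    static IOMMUTLBEntry xen_grant_translate(IOMMUMemoryRegion *iommu,
                                             hwaddr addr,
                                             IOMMUAccessFlags flag,
                                             int iommu_idx)
    {
        hwaddr page = addr & ~(hwaddr)(XC_PAGE_SIZE - 1);
        IOMMUTLBEntry entry = {
            .target_as = &address_space_memory,
            .iova = page,
            .addr_mask = XC_PAGE_SIZE - 1,
            .perm = flag,
        };

        /* Hypothetical: resolve the grant ref backing this page. */
        entry.translated_addr = xen_grant_ref_to_ram_addr(page);
        return entry;
    }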
QEMU also suffers from similar issues with other kinds of DMA protections,
at least that's what I'm aware of with either VT-d, SMMU, etc.. where
dynamic DMA mappings will slow the IOs down to a degree that it may not be
usable in real production. We kept it like that and so far AFAIK we don't
yet have a solution for that simply because of the nature of how DMA
buffers are mapped and managed within a guest OS, whether Linux or not.
For Linux as a guest we basically suggest enabling iommu=pt so that kernel
drivers are trusted, and kernel-driven devices can have full access to
guest RAM by using the vIOMMU's passthrough mode. Perhaps similar to
foreign mappings for Xen, but maybe still different, as Xen's topology is
definitely special as a hypervisor here.
While for userspace drivers within the guest OS it'll always go through
vfio-pci now, which will enforce effective DMA mappings, not the passthrough
mode. Then it's suggested to map as little as possible, e.g. DPDK only
maps at the start of the user driver so it's mostly not affected by the
slowness of frequently changing DMA mappings.
I'm not sure whether the above ideas would even be applicable for Xen, but I
just wanted to share the status quo regarding how we manage protected DMAs
without Xen, just in case there's anything useful to help route the path
forward.
Thanks,
--
Peter Xu
* Re: [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks
2024-04-16 15:55 ` Peter Xu
@ 2024-04-17 10:34 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-17 10:34 UTC (permalink / raw)
To: Peter Xu
Cc: Jürgen Groß, Paolo Bonzini, David Hildenbrand,
Philippe Mathieu-Daudé, qemu-devel, sstabellini
On Tue, Apr 16, 2024 at 5:55 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Apr 16, 2024 at 03:28:41PM +0200, Jürgen Groß wrote:
> > On 16.04.24 13:32, Edgar E. Iglesias wrote:
> > > On Wed, Apr 10, 2024 at 8:56 PM Peter Xu <peterx@redhat.com> wrote:
> > > >
> > > > On Wed, Apr 10, 2024 at 06:44:38PM +0200, Edgar E. Iglesias wrote:
> > > > > On Tue, Feb 27, 2024 at 11:37 PM Vikram Garhwal <vikram.garhwal@amd.com>
> > > > > wrote:
> > > > >
> > > > > > From: Juergen Gross <jgross@suse.com>
> > > > > >
> > > > > > In order to support mapping and unmapping guest memory dynamically to
> > > > > > and from qemu during address_space_[un]map() operations add the map()
> > > > > > and unmap() callbacks to MemoryRegionOps.
> > > > > >
> > > > > > Those will be used e.g. for Xen grant mappings when performing guest
> > > > > > I/Os.
> > > > > >
> > > > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > > > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > > > > >
> > > > >
> > > > >
> > > > > Paolo, Peter, David, Philippe, do you guys have any concerns with this patch?
> > > >
> > >
> > > Thanks for your comments Peter,
> > >
> > >
> > > > This introduces a 3rd memory type afaict, neither direct nor !direct.
> > > >
> > > > What happens if someone does address_space_write() to it? I didn't see it
> > > > covered here..
> > >
> > > You're right, that won't work, the memory needs to be mapped before it
> > > can be used.
> > > At minimum there should be some graceful failure, right now this will crash.
> > >
> > > >
> > > > OTOH, the cover letter didn't mention too much either on the big picture:
> > > >
> > > > https://lore.kernel.org/all/20240227223501.28475-1-vikram.garhwal@amd.com/
> > > >
> > > > I want to have a quick grasp on whether it's justified and worthwhile to
> > > > introduce this complexity to qemu memory core.
> > > >
> > > > Could I request a better cover letter when reposting? It'll be great to
> > > > mention things like:
> > >
> > > I'll do that, but also answer inline in the meantime since we should
> > > perhaps change the approach.
> > >
> > > >
> > > > - what is grant mapping, why it needs to be used, when it can be used (is
> > > > it only relevant to vIOMMU=on)? Some more information on the high
> > > > level design using this type of MR would be great.
> > >
> > > https://github.com/xen-project/xen/blob/master/docs/misc/grant-tables.txt
> > >
> > > Xen VMs that use QEMU's VirtIO have a QEMU instance running in a separate VM.
> > >
> > > There are basically two mechanisms for QEMU's Virtio backends to access
> > > the guest's RAM.
> > > 1. Foreign mappings. This gives the VM running QEMU access to the
> > > entire RAM of the guest VM.
> >
> > Additionally it requires qemu to run in dom0, while in general Xen allows
> > running backends in less privileged "driver domains", which are usually not
> > allowed to perform foreign mappings.
> >
> > > 2. Grant mappings. This allows the guest to dynamically grant and
> > > remove access to pages as needed.
> > > So the VM running QEMU cannot map guest RAM unless it's been
> > > instructed to do so by the guest.
> > >
> > > #2 is desirable because if QEMU gets compromised it has a smaller
> > > attack surface onto the guest.
> >
> > And it allows running the virtio backend in a less privileged VM.
> >
> > >
> > > >
> > > > - why a 3rd memory type is required? Do we have other alternatives?
> > >
> > > Yes, there are alternatives.
> > >
> > > 1. It was suggested by Stefano to try to handle this in existing qemu/hw/xen/*.
> > > This would be less intrusive but perhaps also less explicit.
> > > Concerns about touching the Memory API have been raised before, so
> > > perhaps we should try this.
> > > I'm a little unsure how we would deal with unmapping when the guest
> > > removes our grants and we're using models that don't map but use
> > > address_space_read/write().
> >
> > Those would either need to use grant-copy hypercalls, or they'd need to map,
> > read/write, unmap.
> >
> > >
> > > 2. Another approach could be to change the Xen grant-iommu in the
> > > Linux kernel to work with a grant vIOMMU in QEMU.
> > > Linux could explicitly ask QEMU's grant vIOMMU to map/unmap granted regions.
> > > This would have the benefit that we wouldn't need to allocate
> > > address-bit 63 for grants.
> > > A drawback is that it may be slower since we're going to need to
> > > bounce between guest/qemu a bit more.
> >
> > It would be a _lot_ slower, unless virtio-iommu and grants are both modified
> > to match. I have looked into that, but the needed effort is rather large. At
> > the last Xen summit I have suggested to introduce a new grant format which
> > would work more like a normal page table structure. Using the same format
> > for virtio-iommu would allow avoiding the additional bounces between qemu and
> > the guest (and in fact that was one of the motivations to suggest the new
> > grant format).
>
> I have a better picture now, thanks both.
>
> It really looks like a vIOMMU already to me, perhaps with a special refID
> mapping playing a similar role to IOVAs in the rest of the IOMMU world.
>
> I can't yet tell what's the best way for Xen - as of now QEMU's memory API
> does provide such translations via IOMMUMemoryRegionClass.translate(), but
> only through that hook. So far it works for all vIOMMU emulations in QEMU,
> and I'd hope we don't need to hack another memory type for this if possible,
> especially for performance's sake; more on below.
>
> QEMU also suffers from similar issues with other kinds of DMA protections,
> at least that's what I'm aware of with either VT-d, SMMU, etc.. where
> dynamic DMA mappings will slow the IOs down to a degree that it may not be
> usable in real production. We kept it like that and so far AFAIK we don't
> yet have a solution for that simply because of the nature of how DMA
> buffers are mapped and managed within a guest OS, whether Linux or not.
>
> For Linux as a guest we basically suggest enabling iommu=pt so that kernel
> drivers are trusted, and kernel-driven devices can have full access to
> guest RAM by using the vIOMMU's passthrough mode. Perhaps similar to
> foreign mappings for Xen, but maybe still different, as Xen's topology is
> definitely special as a hypervisor here.
>
> While for userspace drivers within the guest OS it'll always go through
> vfio-pci now, which will enforce effective DMA mappings, not the passthrough
> mode. Then it's suggested to map as little as possible, e.g. DPDK only
> maps at the start of the user driver so it's mostly not affected by the
> slowness of frequently changing DMA mappings.
>
> I'm not sure whether the above ideas would even be applicable for Xen, but I
> just wanted to share the status quo regarding how we manage protected DMAs
> without Xen, just in case there's anything useful to help route the path
> forward.
>
> Thanks,
>
> --
> Peter Xu
>
Thanks for the suggestions, Peter, and for your comments, Juergen.
We'll have to evaluate the different approaches a little more and see
where we go from here.
Best regards,
Edgar
* [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
` (4 preceding siblings ...)
2024-02-27 22:34 ` [QEMU][PATCH v3 5/7] memory: add MemoryRegion map and unmap callbacks Vikram Garhwal
@ 2024-02-27 22:35 ` Vikram Garhwal
2024-02-29 23:10 ` Stefano Stabellini
2024-02-27 22:35 ` [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping Vikram Garhwal
2024-02-28 13:27 ` [QEMU][PATCH v3 0/7] Xen: support grant mappings Manos Pitsidianakis
7 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:35 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Anthony Perard, Paul Durrant,
Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
From: Juergen Gross <jgross@suse.com>
Add the callbacks for mapping/unmapping guest memory via grants to the
special grant memory region.
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
hw/xen/xen-mapcache.c | 176 +++++++++++++++++++++++++++++++++++++++++-
system/physmem.c | 11 ++-
2 files changed, 182 insertions(+), 5 deletions(-)
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index 179b7e95b2..2e4c9b4947 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -9,6 +9,8 @@
*/
#include "qemu/osdep.h"
+#include "qemu/queue.h"
+#include "qemu/thread.h"
#include "qemu/units.h"
#include "qemu/error-report.h"
@@ -23,6 +25,8 @@
#include "sysemu/xen-mapcache.h"
#include "trace.h"
+#include <xenevtchn.h>
+#include <xengnttab.h>
#if HOST_LONG_BITS == 32
# define MCACHE_BUCKET_SHIFT 16
@@ -377,7 +381,7 @@ uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
return p;
}
-ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+static ram_addr_t xen_ram_addr_from_mapcache_try(void *ptr)
{
MapCacheEntry *entry = NULL;
MapCacheRev *reventry;
@@ -588,10 +592,179 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
return p;
}
+struct XENMappedGrantRegion {
+ void *addr;
+ unsigned int pages;
+ unsigned int refs;
+ unsigned int prot;
+ uint32_t idx;
+ QLIST_ENTRY(XENMappedGrantRegion) list;
+};
+
+static xengnttab_handle *xen_region_gnttabdev;
+static QLIST_HEAD(GrantRegionList, XENMappedGrantRegion) xen_grant_mappings =
+ QLIST_HEAD_INITIALIZER(xen_grant_mappings);
+static QemuMutex xen_map_mutex;
+
+static void *xen_map_grant_dyn(MemoryRegion **mr, hwaddr addr, hwaddr *plen,
+ bool is_write, MemTxAttrs attrs)
+{
+ unsigned int page_off = addr & (XC_PAGE_SIZE - 1);
+ unsigned int i;
+ unsigned int total_grants = 0;
+ unsigned int nrefs = (page_off + *plen + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
+ uint32_t ref = (addr - XEN_GRANT_ADDR_OFF) >> XC_PAGE_SHIFT;
+ uint32_t *refs = NULL;
+ unsigned int prot = PROT_READ;
+ struct XENMappedGrantRegion *mgr = NULL;
+
+ if (is_write) {
+ prot |= PROT_WRITE;
+ }
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->idx == ref &&
+ mgr->pages == nrefs &&
+ (mgr->prot & prot) == prot) {
+ break;
+ }
+
+ total_grants += mgr->pages;
+ }
+
+ if (!mgr) {
+ if (nrefs + total_grants >= XEN_MAX_VIRTIO_GRANTS) {
+ qemu_mutex_unlock(&xen_map_mutex);
+ return NULL;
+ }
+
+ mgr = g_new(struct XENMappedGrantRegion, 1);
+
+ if (nrefs == 1) {
+ refs = &ref;
+ } else {
+ refs = g_new(uint32_t, nrefs);
+ for (i = 0; i < nrefs; i++) {
+ refs[i] = ref + i;
+ }
+ }
+ mgr->addr = xengnttab_map_domain_grant_refs(xen_region_gnttabdev, nrefs,
+ xen_domid, refs, prot);
+ if (mgr->addr) {
+ mgr->pages = nrefs;
+ mgr->refs = 1;
+ mgr->prot = prot;
+ mgr->idx = ref;
+
+ QLIST_INSERT_HEAD(&xen_grant_mappings, mgr, list);
+ } else {
+ g_free(mgr);
+ mgr = NULL;
+ }
+ } else {
+ mgr->refs++;
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+
+ if (nrefs > 1) {
+ g_free(refs);
+ }
+
+ return mgr ? mgr->addr + page_off : NULL;
+}
+
+static void xen_unmap_grant_dyn(MemoryRegion *mr, void *buffer, ram_addr_t addr,
+ hwaddr len, bool is_write, hwaddr access_len)
+{
+ unsigned int page_off = (unsigned long)buffer & (XC_PAGE_SIZE - 1);
+ unsigned int nrefs = (page_off + len + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
+ unsigned int prot = PROT_READ;
+ struct XENMappedGrantRegion *mgr = NULL;
+
+ if (is_write) {
+ prot |= PROT_WRITE;
+ }
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->addr == buffer - page_off &&
+ mgr->pages == nrefs &&
+ (mgr->prot & prot) == prot) {
+ break;
+ }
+ }
+ if (mgr) {
+ mgr->refs--;
+ if (!mgr->refs) {
+ xengnttab_unmap(xen_region_gnttabdev, mgr->addr, nrefs);
+
+ QLIST_REMOVE(mgr, list);
+ g_free(mgr);
+ }
+ } else {
+ error_report("xen_unmap_grant_dyn() trying to unmap unknown buffer");
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+}
+
+static ram_addr_t xen_ram_addr_from_grant_cache(void *ptr)
+{
+ unsigned int page_off = (unsigned long)ptr & (XC_PAGE_SIZE - 1);
+ struct XENMappedGrantRegion *mgr = NULL;
+ ram_addr_t raddr = RAM_ADDR_INVALID;
+
+ qemu_mutex_lock(&xen_map_mutex);
+
+ QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
+ if (mgr->addr == ptr - page_off) {
+ break;
+ }
+ }
+
+ if (mgr) {
+ raddr = (mgr->idx << XC_PAGE_SHIFT) + page_off + XEN_GRANT_ADDR_OFF;
+ }
+
+ qemu_mutex_unlock(&xen_map_mutex);
+
+ return raddr;
+}
+
+ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+{
+ ram_addr_t raddr;
+
+ raddr = xen_ram_addr_from_mapcache_try(ptr);
+ if (raddr == RAM_ADDR_INVALID) {
+ raddr = xen_ram_addr_from_grant_cache(ptr);
+ }
+
+ return raddr;
+}
+
+static const struct MemoryRegionOps xen_grant_mr_ops = {
+ .map = xen_map_grant_dyn,
+ .unmap = xen_unmap_grant_dyn,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+};
+
MemoryRegion *xen_init_grant_ram(void)
{
RAMBlock *block;
+ qemu_mutex_init(&xen_map_mutex);
+
+ xen_region_gnttabdev = xengnttab_open(NULL, 0);
+ if (xen_region_gnttabdev == NULL) {
+ fprintf(stderr, "can't open gnttab device\n");
+ return NULL;
+ }
+
memory_region_init(&ram_grants, NULL, "xen.grants",
XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
block = g_malloc0(sizeof(*block));
@@ -606,6 +779,7 @@ MemoryRegion *xen_init_grant_ram(void)
ram_grants.ram_block = block;
ram_grants.ram = true;
ram_grants.terminates = true;
+ ram_grants.ops = &xen_grant_mr_ops;
ram_block_add_list(block);
memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
&ram_grants);
diff --git a/system/physmem.c b/system/physmem.c
index d989e9fc1f..e6fc075d8f 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2232,13 +2232,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
if (xen_enabled()) {
ram_addr_t ram_addr;
+
RCU_READ_LOCK_GUARD();
ram_addr = xen_ram_addr_from_mapcache(ptr);
- block = qemu_get_ram_block(ram_addr);
- if (block) {
- *offset = ram_addr - block->offset;
+ if (ram_addr != RAM_ADDR_INVALID) {
+ block = qemu_get_ram_block(ram_addr);
+ if (block) {
+ *offset = ram_addr - block->offset;
+ }
+ return block;
}
- return block;
}
RCU_READ_LOCK_GUARD();
--
2.17.1
* Re: [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region
2024-02-27 22:35 ` [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
@ 2024-02-29 23:10 ` Stefano Stabellini
2024-04-10 11:11 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2024-02-29 23:10 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Anthony Perard, Paul Durrant,
Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Tue, 27 Feb 2024, Vikram Garhwal wrote:
> From: Juergen Gross <jgross@suse.com>
>
> Add the callbacks for mapping/unmapping guest memory via grants to the
> special grant memory region.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> hw/xen/xen-mapcache.c | 176 +++++++++++++++++++++++++++++++++++++++++-
> system/physmem.c | 11 ++-
> 2 files changed, 182 insertions(+), 5 deletions(-)
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index 179b7e95b2..2e4c9b4947 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -9,6 +9,8 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/queue.h"
> +#include "qemu/thread.h"
> #include "qemu/units.h"
> #include "qemu/error-report.h"
>
> @@ -23,6 +25,8 @@
> #include "sysemu/xen-mapcache.h"
> #include "trace.h"
>
> +#include <xenevtchn.h>
> +#include <xengnttab.h>
>
> #if HOST_LONG_BITS == 32
> # define MCACHE_BUCKET_SHIFT 16
> @@ -377,7 +381,7 @@ uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
> return p;
> }
>
> -ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> +static ram_addr_t xen_ram_addr_from_mapcache_try(void *ptr)
> {
> MapCacheEntry *entry = NULL;
> MapCacheRev *reventry;
> @@ -588,10 +592,179 @@ uint8_t *xen_replace_cache_entry(hwaddr old_phys_addr,
> return p;
> }
>
> +struct XENMappedGrantRegion {
> + void *addr;
> + unsigned int pages;
> + unsigned int refs;
> + unsigned int prot;
> + uint32_t idx;
> + QLIST_ENTRY(XENMappedGrantRegion) list;
> +};
> +
> +static xengnttab_handle *xen_region_gnttabdev;
> +static QLIST_HEAD(GrantRegionList, XENMappedGrantRegion) xen_grant_mappings =
> + QLIST_HEAD_INITIALIZER(xen_grant_mappings);
> +static QemuMutex xen_map_mutex;
> +
> +static void *xen_map_grant_dyn(MemoryRegion **mr, hwaddr addr, hwaddr *plen,
> + bool is_write, MemTxAttrs attrs)
> +{
> + unsigned int page_off = addr & (XC_PAGE_SIZE - 1);
> + unsigned int i;
> + unsigned int total_grants = 0;
> + unsigned int nrefs = (page_off + *plen + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
> + uint32_t ref = (addr - XEN_GRANT_ADDR_OFF) >> XC_PAGE_SHIFT;
> + uint32_t *refs = NULL;
> + unsigned int prot = PROT_READ;
> + struct XENMappedGrantRegion *mgr = NULL;
> +
> + if (is_write) {
> + prot |= PROT_WRITE;
> + }
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->idx == ref &&
> + mgr->pages == nrefs &&
> + (mgr->prot & prot) == prot) {
> + break;
> + }
> +
> + total_grants += mgr->pages;
> + }
> +
> + if (!mgr) {
> + if (nrefs + total_grants >= XEN_MAX_VIRTIO_GRANTS) {
> + qemu_mutex_unlock(&xen_map_mutex);
> + return NULL;
> + }
> +
> + mgr = g_new(struct XENMappedGrantRegion, 1);
> +
> + if (nrefs == 1) {
> + refs = &ref;
> + } else {
> + refs = g_new(uint32_t, nrefs);
> + for (i = 0; i < nrefs; i++) {
> + refs[i] = ref + i;
> + }
> + }
> + mgr->addr = xengnttab_map_domain_grant_refs(xen_region_gnttabdev, nrefs,
> + xen_domid, refs, prot);
> + if (mgr->addr) {
> + mgr->pages = nrefs;
> + mgr->refs = 1;
> + mgr->prot = prot;
> + mgr->idx = ref;
> +
> + QLIST_INSERT_HEAD(&xen_grant_mappings, mgr, list);
> + } else {
> + g_free(mgr);
> + mgr = NULL;
> + }
> + } else {
> + mgr->refs++;
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +
> + if (nrefs > 1) {
> + g_free(refs);
> + }
> +
> + return mgr ? mgr->addr + page_off : NULL;
> +}
> +
> +static void xen_unmap_grant_dyn(MemoryRegion *mr, void *buffer, ram_addr_t addr,
> + hwaddr len, bool is_write, hwaddr access_len)
> +{
> + unsigned int page_off = (unsigned long)buffer & (XC_PAGE_SIZE - 1);
> + unsigned int nrefs = (page_off + len + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
> + unsigned int prot = PROT_READ;
> + struct XENMappedGrantRegion *mgr = NULL;
> +
> + if (is_write) {
> + prot |= PROT_WRITE;
> + }
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->addr == buffer - page_off &&
> + mgr->pages == nrefs &&
> + (mgr->prot & prot) == prot) {
> + break;
> + }
> + }
> + if (mgr) {
> + mgr->refs--;
> + if (!mgr->refs) {
> + xengnttab_unmap(xen_region_gnttabdev, mgr->addr, nrefs);
> +
> + QLIST_REMOVE(mgr, list);
> + g_free(mgr);
> + }
> + } else {
> + error_report("xen_unmap_grant_dyn() trying to unmap unknown buffer");
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +}
> +
> +static ram_addr_t xen_ram_addr_from_grant_cache(void *ptr)
> +{
> + unsigned int page_off = (unsigned long)ptr & (XC_PAGE_SIZE - 1);
> + struct XENMappedGrantRegion *mgr = NULL;
> + ram_addr_t raddr = RAM_ADDR_INVALID;
> +
> + qemu_mutex_lock(&xen_map_mutex);
> +
> + QLIST_FOREACH(mgr, &xen_grant_mappings, list) {
> + if (mgr->addr == ptr - page_off) {
> + break;
> + }
> + }
> +
> + if (mgr) {
> + raddr = (mgr->idx << XC_PAGE_SHIFT) + page_off + XEN_GRANT_ADDR_OFF;
> + }
> +
> + qemu_mutex_unlock(&xen_map_mutex);
> +
> + return raddr;
> +}
> +
> +ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
> +{
> + ram_addr_t raddr;
> +
> + raddr = xen_ram_addr_from_mapcache_try(ptr);
> + if (raddr == RAM_ADDR_INVALID) {
> + raddr = xen_ram_addr_from_grant_cache(ptr);
> + }
> +
> + return raddr;
> +}
> +
> +static const struct MemoryRegionOps xen_grant_mr_ops = {
> + .map = xen_map_grant_dyn,
> + .unmap = xen_unmap_grant_dyn,
> + .endianness = DEVICE_LITTLE_ENDIAN,
> +};
> +
> MemoryRegion *xen_init_grant_ram(void)
> {
> RAMBlock *block;
>
> + qemu_mutex_init(&xen_map_mutex);
> +
> + xen_region_gnttabdev = xengnttab_open(NULL, 0);
> + if (xen_region_gnttabdev == NULL) {
> + fprintf(stderr, "can't open gnttab device\n");
> + return NULL;
> + }
> +
> memory_region_init(&ram_grants, NULL, "xen.grants",
> XEN_MAX_VIRTIO_GRANTS * XC_PAGE_SIZE);
> block = g_malloc0(sizeof(*block));
> @@ -606,6 +779,7 @@ MemoryRegion *xen_init_grant_ram(void)
> ram_grants.ram_block = block;
> ram_grants.ram = true;
> ram_grants.terminates = true;
> + ram_grants.ops = &xen_grant_mr_ops;
> ram_block_add_list(block);
> memory_region_add_subregion(get_system_memory(), XEN_GRANT_ADDR_OFF,
> &ram_grants);
> diff --git a/system/physmem.c b/system/physmem.c
> index d989e9fc1f..e6fc075d8f 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -2232,13 +2232,16 @@ RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
>
> if (xen_enabled()) {
> ram_addr_t ram_addr;
> +
> RCU_READ_LOCK_GUARD();
> ram_addr = xen_ram_addr_from_mapcache(ptr);
> - block = qemu_get_ram_block(ram_addr);
> - if (block) {
> - *offset = ram_addr - block->offset;
> + if (ram_addr != RAM_ADDR_INVALID) {
> + block = qemu_get_ram_block(ram_addr);
> + if (block) {
> + *offset = ram_addr - block->offset;
> + }
> + return block;
> }
> - return block;
> }
>
> RCU_READ_LOCK_GUARD();
> --
> 2.17.1
>
* Re: [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region
2024-02-29 23:10 ` Stefano Stabellini
@ 2024-04-10 11:11 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:11 UTC (permalink / raw)
To: Stefano Stabellini
Cc: Vikram Garhwal, qemu-devel, jgross, Anthony Perard, Paul Durrant,
Paolo Bonzini, Peter Xu, David Hildenbrand,
Philippe Mathieu-Daudé, open list:X86 Xen CPUs
On Fri, Mar 1, 2024 at 12:34 AM Stefano Stabellini <sstabellini@kernel.org>
wrote:
> On Tue, 27 Feb 2024, Vikram Garhwal wrote:
> > From: Juergen Gross <jgross@suse.com>
> >
> > Add the callbacks for mapping/unmapping guest memory via grants to the
> > special grant memory region.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
* [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping.
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
` (5 preceding siblings ...)
2024-02-27 22:35 ` [QEMU][PATCH v3 6/7] xen: add map and unmap callbacks for grant region Vikram Garhwal
@ 2024-02-27 22:35 ` Vikram Garhwal
2024-03-01 17:10 ` Alex Bennée
2024-02-28 13:27 ` [QEMU][PATCH v3 0/7] Xen: support grant mappings Manos Pitsidianakis
7 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-27 22:35 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, vikram.garhwal, jgross, Peter Maydell,
open list:ARM TCG CPUs
Enable grant ram mapping support for Xenpvh machine on ARM.
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
hw/arm/xen_arm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index 32776d94df..b5993ef2a6 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
GUEST_RAM1_BASE, ram_size[1]);
memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
}
+
+ DPRINTF("init grant ram mapping for XEN\n");
+ ram_grants = *xen_init_grant_ram();
}
void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
--
2.17.1
* Re: [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping.
2024-02-27 22:35 ` [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping Vikram Garhwal
@ 2024-03-01 17:10 ` Alex Bennée
2024-03-06 20:56 ` Vikram Garhwal
0 siblings, 1 reply; 33+ messages in thread
From: Alex Bennée @ 2024-03-01 17:10 UTC (permalink / raw)
To: Vikram Garhwal
Cc: qemu-devel, sstabellini, jgross, Peter Maydell,
open list:ARM TCG CPUs
Vikram Garhwal <vikram.garhwal@amd.com> writes:
> Enable grant ram mapping support for Xenpvh machine on ARM.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> hw/arm/xen_arm.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index 32776d94df..b5993ef2a6 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
> GUEST_RAM1_BASE, ram_size[1]);
> memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
> }
> +
> + DPRINTF("init grant ram mapping for XEN\n");
I don't think we need the DPRINTF here (the others were recently
converted to trace-points, although I suspect a memory_region tracepoint
would be a better place to capture this).
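Something along these lines, say - event name hypothetical, and assuming
XEN_GRANT_ADDR_OFF is visible in xen_arm.c - added to hw/arm/trace-events:

    xen_init_grant_ram(uint64_t addr) "grant ram at 0x%" PRIx64

with the DPRINTF above replaced by
trace_xen_init_grant_ram(XEN_GRANT_ADDR_OFF).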
> + ram_grants = *xen_init_grant_ram();
> }
>
> void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping.
2024-03-01 17:10 ` Alex Bennée
@ 2024-03-06 20:56 ` Vikram Garhwal
2024-04-10 11:09 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-03-06 20:56 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, sstabellini, jgross, Peter Maydell,
open list:ARM TCG CPUs
Hi Alex,
On Fri, Mar 01, 2024 at 05:10:28PM +0000, Alex Bennée wrote:
> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
> > Enable grant ram mapping support for Xenpvh machine on ARM.
> >
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > hw/arm/xen_arm.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > index 32776d94df..b5993ef2a6 100644
> > --- a/hw/arm/xen_arm.c
> > +++ b/hw/arm/xen_arm.c
> > @@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
> > GUEST_RAM1_BASE, ram_size[1]);
> > memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
> > }
> > +
> > + DPRINTF("init grant ram mapping for XEN\n");
>
> I don't think we need the DPRINTF here (the others were recently
> converted to trace-points, although I suspect a memory_region tracepoint
> would be a better place to capture this).
Maybe drop the print? As it's not providing much information anyway.
>
> > + ram_grants = *xen_init_grant_ram();
> > }
> >
> > void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
>
> --
> Alex Bennée
> Virtualisation Tech Lead @ Linaro
* Re: [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping.
2024-03-06 20:56 ` Vikram Garhwal
@ 2024-04-10 11:09 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 11:09 UTC (permalink / raw)
To: Vikram Garhwal
Cc: Alex Bennée, qemu-devel, sstabellini, jgross, Peter Maydell,
open list:ARM TCG CPUs
On Wed, Mar 6, 2024 at 9:57 PM Vikram Garhwal <vikram.garhwal@amd.com>
wrote:
> Hi Alex,
> On Fri, Mar 01, 2024 at 05:10:28PM +0000, Alex Bennée wrote:
> > Vikram Garhwal <vikram.garhwal@amd.com> writes:
> >
> > > Enable grant ram mapping support for Xenpvh machine on ARM.
> > >
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > > ---
> > > hw/arm/xen_arm.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > index 32776d94df..b5993ef2a6 100644
> > > --- a/hw/arm/xen_arm.c
> > > +++ b/hw/arm/xen_arm.c
> > > @@ -125,6 +125,9 @@ static void xen_init_ram(MachineState *machine)
> > > GUEST_RAM1_BASE, ram_size[1]);
> > > memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
> > > }
> > > +
> > > + DPRINTF("init grant ram mapping for XEN\n");
> >
> > I don't think we need the DPRINTF here (the others were recently
> > converted to trace-points, although I suspect a memory_region tracepoint
> > would be a better place to capture this).
> Maybe drop the print? As it's not providing much information anyway.
>
With the DPRINTF dropped:
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
* Re: [QEMU][PATCH v3 0/7] Xen: support grant mappings.
2024-02-27 22:34 [QEMU][PATCH v3 0/7] Xen: support grant mappings Vikram Garhwal
` (6 preceding siblings ...)
2024-02-27 22:35 ` [QEMU][PATCH v3 7/7] hw: arm: Add grant mapping Vikram Garhwal
@ 2024-02-28 13:27 ` Manos Pitsidianakis
2024-02-28 18:59 ` Vikram Garhwal
7 siblings, 1 reply; 33+ messages in thread
From: Manos Pitsidianakis @ 2024-02-28 13:27 UTC (permalink / raw)
To: qemu-devel, Vikram Garhwal; +Cc: sstabellini, vikram.garhwal, jgross
Hello Vikram,
Series doesn't apply on master. Can you rebase and also provide a
base-commit with --base=<COMMIT_SHA> when you use git-format-patch? This
will help git rebase if there are any conflicts found locally.
Thanks,
On Wed, 28 Feb 2024 00:34, Vikram Garhwal <vikram.garhwal@amd.com> wrote:
>Hi,
>This patch series adds support for grant mappings as a pseudo RAM region for Xen.
>
>Enabling grant mapping patches (first 6) were written by Juergen in 2021.
>
>QEMU's Virtio devices provide emulated backends for Virtio frontend devices
>in Xen.
>Please set the "iommu_platform=on" option when invoking QEMU, as this will set
>the VIRTIO_F_ACCESS_PLATFORM feature, which the virtio frontend in Xen uses
>to know whether the backend supports grants or not.
>
>Changelog:
>v2->v3:
> Drop patch 1/7. This was done because device unplug is an x86-only case.
> Add missing qemu_mutex_unlock() before return.
>v1->v2:
> Split patch 2/7 to keep physmem.c changes in a separate patch.
> In patch "xen: add map and unmap callbacks for grant" add a check for total
> allowed grants < XEN_MAX_VIRTIO_GRANTS.
> Fix formatting issues and rebase onto latest master.
>
>Regards,
>Vikram
>
>Juergen Gross (5):
> xen: add pseudo RAM region for grant mappings
> softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
> xen: let xen_ram_addr_from_mapcache() return -1 in case of not found
> entry
> memory: add MemoryRegion map and unmap callbacks
> xen: add map and unmap callbacks for grant region
>
>Vikram Garhwal (2):
> softmmu: physmem: Split ram_block_add()
> hw: arm: Add grant mapping.
>
> hw/arm/xen_arm.c | 3 +
> hw/i386/xen/xen-hvm.c | 3 +
> hw/xen/xen-hvm-common.c | 4 +-
> hw/xen/xen-mapcache.c | 214 ++++++++++++++++++++++++++++++--
> include/exec/memory.h | 21 ++++
> include/exec/ram_addr.h | 1 +
> include/hw/xen/xen-hvm-common.h | 2 +
> include/hw/xen/xen_pvdev.h | 3 +
> include/sysemu/xen-mapcache.h | 3 +
> system/physmem.c | 179 +++++++++++++++-----------
> 10 files changed, 351 insertions(+), 82 deletions(-)
>
>--
>2.17.1
>
>
* Re: [QEMU][PATCH v3 0/7] Xen: support grant mappings.
2024-02-28 13:27 ` [QEMU][PATCH v3 0/7] Xen: support grant mappings Manos Pitsidianakis
@ 2024-02-28 18:59 ` Vikram Garhwal
2024-04-10 12:43 ` Edgar E. Iglesias
0 siblings, 1 reply; 33+ messages in thread
From: Vikram Garhwal @ 2024-02-28 18:59 UTC (permalink / raw)
To: Manos Pitsidianakis; +Cc: qemu-devel, sstabellini, jgross
Hi Manos,
On Wed, Feb 28, 2024 at 03:27:12PM +0200, Manos Pitsidianakis wrote:
> Hello Vikram,
>
> Series doesn't apply on master. Can you rebase and also provide a
> base-commit with --base=<COMMIT_SHA> when you use git-format-patch? This
> will help git rebase if there are any conflicts found locally.
I rebased it with the latest master and it works fine. The series is based on the following
commit: bfe8020c814a30479a4241aaa78b63960655962b.
For v4, I will send a version with base-commit id.
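For example, something like:

    git format-patch -v4 --cover-letter \
        --base=bfe8020c814a30479a4241aaa78b63960655962b -7

should record the base commit automatically.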
Can you please share what the base-commit id is on your side?
Thanks!
>
> Thanks,
>
> On Wed, 28 Feb 2024 00:34, Vikram Garhwal <vikram.garhwal@amd.com> wrote:
> > Hi,
> > This patch series adds support for grant mappings as a pseudo RAM region for Xen.
> >
> > Enabling grant mapping patches (first 6) were written by Juergen in 2021.
> >
> > QEMU's Virtio devices provide emulated backends for Virtio frontend devices
> > in Xen.
> > Please set the "iommu_platform=on" option when invoking QEMU, as this will set
> > the VIRTIO_F_ACCESS_PLATFORM feature, which the virtio frontend in Xen uses
> > to know whether the backend supports grants or not.
> >
> > Changelog:
> > v2->v3:
> > Drop patch 1/7. This was done because device unplug is an x86-only case.
> > Add missing qemu_mutex_unlock() before return.
> > v1->v2:
> > Split patch 2/7 to keep physmem.c changes in a separate patch.
> > In patch "xen: add map and unmap callbacks for grant" add a check for total
> > allowed grants < XEN_MAX_VIRTIO_GRANTS.
> > Fix formatting issues and rebase onto latest master.
> >
> > Regards,
> > Vikram
> >
> > Juergen Gross (5):
> > xen: add pseudo RAM region for grant mappings
> > softmmu: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
> > xen: let xen_ram_addr_from_mapcache() return -1 in case of not found
> > entry
> > memory: add MemoryRegion map and unmap callbacks
> > xen: add map and unmap callbacks for grant region
> >
> > Vikram Garhwal (2):
> > softmmu: physmem: Split ram_block_add()
> > hw: arm: Add grant mapping.
> >
> > hw/arm/xen_arm.c | 3 +
> > hw/i386/xen/xen-hvm.c | 3 +
> > hw/xen/xen-hvm-common.c | 4 +-
> > hw/xen/xen-mapcache.c | 214 ++++++++++++++++++++++++++++++--
> > include/exec/memory.h | 21 ++++
> > include/exec/ram_addr.h | 1 +
> > include/hw/xen/xen-hvm-common.h | 2 +
> > include/hw/xen/xen_pvdev.h | 3 +
> > include/sysemu/xen-mapcache.h | 3 +
> > system/physmem.c | 179 +++++++++++++++-----------
> > 10 files changed, 351 insertions(+), 82 deletions(-)
> >
> > --
> > 2.17.1
> >
> >
* Re: [QEMU][PATCH v3 0/7] Xen: support grant mappings.
2024-02-28 18:59 ` Vikram Garhwal
@ 2024-04-10 12:43 ` Edgar E. Iglesias
0 siblings, 0 replies; 33+ messages in thread
From: Edgar E. Iglesias @ 2024-04-10 12:43 UTC (permalink / raw)
To: Vikram Garhwal; +Cc: Manos Pitsidianakis, qemu-devel, sstabellini, jgross
On Wed, Feb 28, 2024 at 8:00 PM Vikram Garhwal <vikram.garhwal@amd.com>
wrote:
> Hi Manos,
> On Wed, Feb 28, 2024 at 03:27:12PM +0200, Manos Pitsidianakis wrote:
> > Hello Vikram,
> >
> > Series doesn't apply on master. Can you rebase and also provide a
> > base-commit with --base=<COMMIT_SHA> when you use git-format-patch? This
> > will help git rebase if there are any conflicts found locally.
> I rebased it with the latest master and it works fine. The series is based on
> the following
> commit: bfe8020c814a30479a4241aaa78b63960655962b.
>
> For v4, I will send a version with base-commit id.
>
> Can you please share what the base-commit id is on your side?
>
> Thanks!
> >
> > Thanks,
>
Hi all,
I'll send a v4 on behalf of Vikram.
Stefano, I saw your comments here:
https://marc.info/?l=qemu-devel&m=169999555103080
I've managed to lose the email so I can't reply, but you indicated that
you're OK with the current patch but perhaps would have preferred another
approach.
I like what Juergen did (so I RB tagged it) but if you feel strongly about
finding another approach we can have a look.
Best regards,
Edgar