* [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine
@ 2024-08-12 13:05 Edgar E. Iglesias
  2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
                   ` (9 more replies)
  0 siblings, 10 replies; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

This series breaks out parts of the ARM PVH support into a reusable
QOM module. There's a bit of refactoring and some bug fixes along
the way.

Finally we add a new x86 xen-pvh machine using the new xen-pvh-common
module.
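
To give an idea of the resulting split, a machine now embeds a
TYPE_XEN_PVH_COMMON child, feeds it the per-machine configuration
through QOM properties and realizes it. The sketch below is only an
illustration of that pattern: the struct and function names are made
up, while the property names and calls are the ones used in patches
5 and 9.

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "hw/boards.h"
    #include "hw/xen/xen-pvh-common.h"

    /* Hypothetical machine state; the machine type's instance_size
     * is assumed to cover this struct.  */
    typedef struct ExamplePVHMachineState {
        MachineState parent;
        XenPVHCommonState pvh;
    } ExamplePVHMachineState;

    static void example_pvh_machine_init(MachineState *ms)
    {
        ExamplePVHMachineState *s = (ExamplePVHMachineState *)ms;

        object_initialize_child(OBJECT(ms), "pvh", &s->pvh,
                                TYPE_XEN_PVH_COMMON);

        /* vCPU count and RAM size are forwarded from the MachineState.  */
        object_property_set_int(OBJECT(&s->pvh), "max-cpus",
                                ms->smp.max_cpus, &error_abort);
        object_property_set_int(OBJECT(&s->pvh), "ram-size", ms->ram_size,
                                &error_abort);

        /* The memory map is described via "<region>-base"/"<region>-size"
         * properties: ram-low, ram-high, virtio-mmio, tpm, pci-ecam, ...  */
        object_property_set_int(OBJECT(&s->pvh), "ram-low-base", 0x0,
                                &error_abort);
        object_property_set_int(OBJECT(&s->pvh), "ram-low-size",
                                0x80000000ULL, &error_abort);

        sysbus_realize(SYS_BUS_DEVICE(&s->pvh), &error_abort);
    }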

The corresponding Xen changes for PVH x86 are work in progress
(by Xenia Ragiadakou). You can find a current version here:
https://github.com/edgarigl/xen/tree/edgar/virtio-pvh-upstream

I've briefly described the steps to run Xen PVH x86 guests here
(including example guest kernel config, xl.cfg and xl command-lines):
https://github.com/edgarigl/docs/blob/master/xen/pvh/xenpvh-x86.md

Cheers,
Edgar

Edgar E. Iglesias (10):
  MAINTAINERS: Add docs/system/arm/xenpvh.rst
  hw/arm: xenpvh: Update file header to use SPDX
  hw/arm: xenpvh: Tweak machine description
  hw/arm: xenpvh: Add support for SMP guests
  hw/arm: xenpvh: Break out a common PVH module
  hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c
  hw/arm: xenpvh: Reverse virtio-mmio creation order
  hw/xen: pvh-common: Add support for creating PCIe/GPEX
  hw/i386/xen: Add a Xen PVH x86 machine
  docs/system/i386: xenpvh: Add a basic description

 MAINTAINERS                     |   2 +
 docs/system/i386/xenpvh.rst     |  49 ++++++
 docs/system/target-i386.rst     |   1 +
 hw/arm/meson.build              |   2 +-
 hw/arm/trace-events             |   5 -
 hw/arm/xen-pvh.c                | 165 ++++++++++++++++++++
 hw/arm/xen_arm.c                | 267 --------------------------------
 hw/i386/xen/meson.build         |   1 +
 hw/i386/xen/xen-pvh.c           | 196 +++++++++++++++++++++++
 hw/xen/meson.build              |   1 +
 hw/xen/trace-events             |   4 +
 hw/xen/xen-pvh-common.c         | 262 +++++++++++++++++++++++++++++++
 include/hw/xen/xen-pvh-common.h |  53 +++++++
 13 files changed, 735 insertions(+), 273 deletions(-)
 create mode 100644 docs/system/i386/xenpvh.rst
 create mode 100644 hw/arm/xen-pvh.c
 delete mode 100644 hw/arm/xen_arm.c
 create mode 100644 hw/i386/xen/xen-pvh.c
 create mode 100644 hw/xen/xen-pvh-common.c
 create mode 100644 include/hw/xen/xen-pvh-common.h

-- 
2.43.0




* [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
@ 2024-08-12 13:05 ` Edgar E. Iglesias
  2024-08-13  1:45   ` Stefano Stabellini
  2024-08-12 13:05 ` [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX Edgar E. Iglesias
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 10af212632..a24c2e14d9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -559,6 +559,7 @@ F: include/hw/xen/
 F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 F: stubs/xen-hw-stub.c
+F: docs/system/arm/xenpvh.rst
 
 Guest CPU Cores (NVMM)
 ----------------------
-- 
2.43.0




* [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
  2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
@ 2024-08-12 13:05 ` Edgar E. Iglesias
  2024-08-13  1:45   ` Stefano Stabellini
  2024-08-12 13:05 ` [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description Edgar E. Iglesias
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Update file header to use SPDX and remove stray empty
comment line.

No functional changes.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/xen_arm.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index 6fad829ede..766a194fa1 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -1,24 +1,7 @@
 /*
  * QEMU ARM Xen PVH Machine
  *
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to deal
- * in the Software without restriction, including without limitation the rights
- * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- * copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
- * THE SOFTWARE.
+ * SPDX-License-Identifier: MIT
  */
 
 #include "qemu/osdep.h"
-- 
2.43.0




* [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
  2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
  2024-08-12 13:05 ` [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX Edgar E. Iglesias
@ 2024-08-12 13:05 ` Edgar E. Iglesias
  2024-08-13  1:45   ` Stefano Stabellini
  2024-08-12 13:05 ` [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests Edgar E. Iglesias
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Tweak machine description to better express that this is
a Xen PVH machine for ARM.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/xen_arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index 766a194fa1..5f75cc3779 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -216,7 +216,7 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
 {
 
     MachineClass *mc = MACHINE_CLASS(oc);
-    mc->desc = "Xen Para-virtualized PC";
+    mc->desc = "Xen PVH ARM machine";
     mc->init = xen_arm_init;
     mc->max_cpus = 1;
     mc->default_machine_opts = "accel=xen";
-- 
2.43.0




* [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (2 preceding siblings ...)
  2024-08-12 13:05 ` [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description Edgar E. Iglesias
@ 2024-08-12 13:05 ` Edgar E. Iglesias
  2024-08-13  1:47   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module Edgar E. Iglesias
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
servers to handle hotplug.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/xen_arm.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index 5f75cc3779..ef8315969c 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
 
     xen_init_ram(machine);
 
-    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
+    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
 
     xen_create_virtio_mmio_devices(xam);
 
@@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
     MachineClass *mc = MACHINE_CLASS(oc);
     mc->desc = "Xen PVH ARM machine";
     mc->init = xen_arm_init;
-    mc->max_cpus = 1;
+    /* MAX number of vcpus supported by Xen.  */
+    mc->max_cpus = GUEST_MAX_VCPUS;
     mc->default_machine_opts = "accel=xen";
     /* Set explicitly here to make sure that real ram_size is passed */
     mc->default_ram_size = 0;
-- 
2.43.0




* [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (3 preceding siblings ...)
  2024-08-12 13:05 ` [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:47   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c Edgar E. Iglesias
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Edgar E. Iglesias, qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Break out a common Xen PVH module in preparation for
adding an x86 Xen PVH machine.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/trace-events             |   5 -
 hw/arm/xen_arm.c                | 154 ++++++--------------------
 hw/xen/meson.build              |   1 +
 hw/xen/trace-events             |   4 +
 hw/xen/xen-pvh-common.c         | 185 ++++++++++++++++++++++++++++++++
 include/hw/xen/xen-pvh-common.h |  45 ++++++++
 6 files changed, 269 insertions(+), 125 deletions(-)
 create mode 100644 hw/xen/xen-pvh-common.c
 create mode 100644 include/hw/xen/xen-pvh-common.h

diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index be6c8f720b..c64ad344bd 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -68,10 +68,5 @@ z2_aer915_send_too_long(int8_t msg) "message too long (%i bytes)"
 z2_aer915_send(uint8_t reg, uint8_t value) "reg %d value 0x%02x"
 z2_aer915_event(int8_t event, int8_t len) "i2c event =0x%x len=%d bytes"
 
-# xen_arm.c
-xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
-xen_init_ram(uint64_t machine_ram_size) "Initialized xen ram with size 0x%"PRIx64
-xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
-
 # bcm2838.c
 bcm2838_gic_set_irq(int irq, int level) "gic irq:%d lvl:%d"
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index ef8315969c..b8a5c09bdf 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -12,40 +12,25 @@
 #include "hw/irq.h"
 #include "hw/sysbus.h"
 #include "sysemu/block-backend.h"
-#include "sysemu/tpm_backend.h"
 #include "sysemu/sysemu.h"
-#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/xen-pvh-common.h"
 #include "sysemu/tpm.h"
 #include "hw/xen/arch_hvm.h"
-#include "trace.h"
 
 #define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
 OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
 
-static const MemoryListener xen_memory_listener = {
-    .region_add = xen_region_add,
-    .region_del = xen_region_del,
-    .log_start = NULL,
-    .log_stop = NULL,
-    .log_sync = NULL,
-    .log_global_start = NULL,
-    .log_global_stop = NULL,
-    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
-};
-
 struct XenArmState {
     /*< private >*/
     MachineState parent;
 
-    XenIOState *state;
+    XenPVHCommonState pvh;
 
     struct {
         uint64_t tpm_base_addr;
     } cfg;
 };
 
-static MemoryRegion ram_lo, ram_hi;
-
 /*
  * VIRTIO_MMIO_DEV_SIZE is imported from tools/libs/light/libxl_arm.c under Xen
  * repository.
@@ -57,64 +42,6 @@ static MemoryRegion ram_lo, ram_hi;
 #define NR_VIRTIO_MMIO_DEVICES   \
    (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
 
-static void xen_set_irq(void *opaque, int irq, int level)
-{
-    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
-        error_report("xendevicemodel_set_irq_level failed");
-    }
-}
-
-static void xen_create_virtio_mmio_devices(XenArmState *xam)
-{
-    int i;
-
-    for (i = 0; i < NR_VIRTIO_MMIO_DEVICES; i++) {
-        hwaddr base = GUEST_VIRTIO_MMIO_BASE + i * VIRTIO_MMIO_DEV_SIZE;
-        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
-                                         GUEST_VIRTIO_MMIO_SPI_FIRST + i);
-
-        sysbus_create_simple("virtio-mmio", base, irq);
-
-        trace_xen_create_virtio_mmio_devices(i,
-                                             GUEST_VIRTIO_MMIO_SPI_FIRST + i,
-                                             base);
-    }
-}
-
-static void xen_init_ram(MachineState *machine)
-{
-    MemoryRegion *sysmem = get_system_memory();
-    ram_addr_t block_len, ram_size[GUEST_RAM_BANKS];
-
-    trace_xen_init_ram(machine->ram_size);
-    if (machine->ram_size <= GUEST_RAM0_SIZE) {
-        ram_size[0] = machine->ram_size;
-        ram_size[1] = 0;
-        block_len = GUEST_RAM0_BASE + ram_size[0];
-    } else {
-        ram_size[0] = GUEST_RAM0_SIZE;
-        ram_size[1] = machine->ram_size - GUEST_RAM0_SIZE;
-        block_len = GUEST_RAM1_BASE + ram_size[1];
-    }
-
-    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
-                           &error_fatal);
-
-    memory_region_init_alias(&ram_lo, NULL, "xen.ram.lo", &xen_memory,
-                             GUEST_RAM0_BASE, ram_size[0]);
-    memory_region_add_subregion(sysmem, GUEST_RAM0_BASE, &ram_lo);
-    if (ram_size[1] > 0) {
-        memory_region_init_alias(&ram_hi, NULL, "xen.ram.hi", &xen_memory,
-                                 GUEST_RAM1_BASE, ram_size[1]);
-        memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
-    }
-
-    /* Setup support for grants.  */
-    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
-                           &error_fatal);
-    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
-}
-
 void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
 {
     hw_error("Invalid ioreq type 0x%x\n", req->type);
@@ -135,55 +62,42 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
 
-#ifdef CONFIG_TPM
-static void xen_enable_tpm(XenArmState *xam)
-{
-    Error *errp = NULL;
-    DeviceState *dev;
-    SysBusDevice *busdev;
-
-    TPMBackend *be = qemu_find_tpm_be("tpm0");
-    if (be == NULL) {
-        error_report("Couldn't find tmp0 backend");
-        return;
-    }
-    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
-    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
-    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
-    busdev = SYS_BUS_DEVICE(dev);
-    sysbus_realize_and_unref(busdev, &error_fatal);
-    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
-
-    trace_xen_enable_tpm(xam->cfg.tpm_base_addr);
-}
-#endif
-
-static void xen_arm_init(MachineState *machine)
+static void xen_arm_init(MachineState *ms)
 {
-    XenArmState *xam = XEN_ARM(machine);
-
-    xam->state =  g_new0(XenIOState, 1);
+    XenArmState *xam = XEN_ARM(ms);
+    const struct {
+        const char *name;
+        MemMapEntry map;
+    } map[] = {
+        { "ram-low", { GUEST_RAM0_BASE, GUEST_RAM0_SIZE } },
+        { "ram-high", { GUEST_RAM1_BASE, GUEST_RAM1_SIZE } },
+        { "virtio-mmio", { GUEST_VIRTIO_MMIO_BASE, VIRTIO_MMIO_DEV_SIZE } },
+        { "tpm", { xam->cfg.tpm_base_addr, 0x1000 } },
+    };
+    int i;
 
-    if (machine->ram_size == 0) {
-        warn_report("%s non-zero ram size not specified. QEMU machine started"
-                    " without IOREQ (no emulated devices including virtio)",
-                    MACHINE_CLASS(object_get_class(OBJECT(machine)))->desc);
-        return;
+    object_initialize_child(OBJECT(ms), "pvh", &xam->pvh, TYPE_XEN_PVH_COMMON);
+
+    object_property_set_int(OBJECT(&xam->pvh), "max-cpus", ms->smp.max_cpus,
+                            &error_abort);
+    object_property_set_int(OBJECT(&xam->pvh), "ram-size", ms->ram_size,
+                            &error_abort);
+    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-num",
+                            NR_VIRTIO_MMIO_DEVICES, &error_abort);
+    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-irq-base",
+                            GUEST_VIRTIO_MMIO_SPI_FIRST, &error_abort);
+
+    for (i = 0; i < ARRAY_SIZE(map); i++) {
+        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
+        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
+
+        object_property_set_int(OBJECT(&xam->pvh), base_name, map[i].map.base,
+                                &error_abort);
+        object_property_set_int(OBJECT(&xam->pvh), size_name, map[i].map.size,
+                                &error_abort);
     }
 
-    xen_init_ram(machine);
-
-    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
-
-    xen_create_virtio_mmio_devices(xam);
-
-#ifdef CONFIG_TPM
-    if (xam->cfg.tpm_base_addr) {
-        xen_enable_tpm(xam);
-    } else {
-        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
-    }
-#endif
+    sysbus_realize(SYS_BUS_DEVICE(&xam->pvh), &error_abort);
 }
 
 #ifdef CONFIG_TPM
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index d887fa9ba4..4a486e3673 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -15,6 +15,7 @@ xen_specific_ss = ss.source_set()
 xen_specific_ss.add(files(
   'xen-mapcache.c',
   'xen-hvm-common.c',
+  'xen-pvh-common.c',
 ))
 if have_xen_pci_passthrough
   xen_specific_ss.add(files(
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index d1b27f6c11..a07fe41c6d 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -64,6 +64,10 @@ destroy_hvm_domain_cannot_acquire_handle(void) "Cannot acquire xenctrl handle"
 destroy_hvm_domain_failed_action(const char *action, int sts, char *errno_s) "xc_domain_shutdown failed to issue %s, sts %d, %s"
 destroy_hvm_domain_action(int xen_domid, const char *action) "Issued domain %d %s"
 
+# xen-pvh-common.c
+xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
+xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
+
 # xen-mapcache.c
 xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
 xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
new file mode 100644
index 0000000000..0d368398d0
--- /dev/null
+++ b/hw/xen/xen-pvh-common.c
@@ -0,0 +1,185 @@
+/*
+ * Common Xen PVH code.
+ *
+ * Copyright (c) 2024 Advanced Micro Devices, Inc.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "hw/boards.h"
+#include "hw/irq.h"
+#include "hw/sysbus.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/tpm.h"
+#include "sysemu/tpm_backend.h"
+#include "hw/xen/xen-pvh-common.h"
+#include "trace.h"
+
+static const MemoryListener xen_memory_listener = {
+    .region_add = xen_region_add,
+    .region_del = xen_region_del,
+    .log_start = NULL,
+    .log_stop = NULL,
+    .log_sync = NULL,
+    .log_global_start = NULL,
+    .log_global_stop = NULL,
+    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
+};
+
+static void xen_pvh_init_ram(XenPVHCommonState *s,
+                             MemoryRegion *sysmem)
+{
+    ram_addr_t block_len, ram_size[2];
+
+    if (s->cfg.ram_size <= s->cfg.ram_low.size) {
+        ram_size[0] = s->cfg.ram_size;
+        ram_size[1] = 0;
+        block_len = s->cfg.ram_low.base + ram_size[0];
+    } else {
+        ram_size[0] = s->cfg.ram_low.size;
+        ram_size[1] = s->cfg.ram_size - s->cfg.ram_low.size;
+        block_len = s->cfg.ram_high.base + ram_size[1];
+    }
+
+    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
+                           &error_fatal);
+
+    memory_region_init_alias(&s->ram.low, NULL, "xen.ram.lo", &xen_memory,
+                             s->cfg.ram_low.base, ram_size[0]);
+    memory_region_add_subregion(sysmem, s->cfg.ram_low.base, &s->ram.low);
+    if (ram_size[1] > 0) {
+        memory_region_init_alias(&s->ram.high, NULL, "xen.ram.hi", &xen_memory,
+                                 s->cfg.ram_high.base, ram_size[1]);
+        memory_region_add_subregion(sysmem, s->cfg.ram_high.base, &s->ram.high);
+    }
+
+    /* Setup support for grants.  */
+    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
+                           &error_fatal);
+    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
+}
+
+static void xen_set_irq(void *opaque, int irq, int level)
+{
+    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
+        error_report("xendevicemodel_set_irq_level failed");
+    }
+}
+
+static void xen_create_virtio_mmio_devices(XenPVHCommonState *s)
+{
+    int i;
+
+    for (i = 0; i < s->cfg.virtio_mmio_num; i++) {
+        hwaddr base = s->cfg.virtio_mmio.base + i * s->cfg.virtio_mmio.size;
+        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
+                                         s->cfg.virtio_mmio_irq_base + i);
+
+        sysbus_create_simple("virtio-mmio", base, irq);
+
+        trace_xen_create_virtio_mmio_devices(i,
+                                             s->cfg.virtio_mmio_irq_base + i,
+                                             base);
+    }
+}
+
+#ifdef CONFIG_TPM
+static void xen_enable_tpm(XenPVHCommonState *s)
+{
+    Error *errp = NULL;
+    DeviceState *dev;
+    SysBusDevice *busdev;
+
+    TPMBackend *be = qemu_find_tpm_be("tpm0");
+    if (be == NULL) {
+        error_report("Couldn't find tmp0 backend");
+        return;
+    }
+    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
+    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
+    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, s->cfg.tpm.base);
+
+    trace_xen_enable_tpm(s->cfg.tpm.base);
+}
+#endif
+
+static void xen_pvh_realize(DeviceState *dev, Error **errp)
+{
+    XenPVHCommonState *s = XEN_PVH_COMMON(dev);
+    MemoryRegion *sysmem = get_system_memory();
+
+    if (s->cfg.ram_size == 0) {
+        /* FIXME: Prefix with object path and consider bailing out.  */
+        warn_report("non-zero ram size not specified. QEMU machine started"
+                    " without IOREQ (no emulated devices including virtio)");
+        return;
+    }
+
+    if (s->cfg.max_cpus == 0) {
+        /* FIXME: Prefix with object path and bail out.  */
+        warn_report("max-cpus not specified. QEMU machine started");
+        return;
+    }
+
+    xen_pvh_init_ram(s, sysmem);
+    xen_register_ioreq(&s->ioreq, s->cfg.max_cpus, &xen_memory_listener);
+
+    if (s->cfg.virtio_mmio_num) {
+        xen_create_virtio_mmio_devices(s);
+    }
+
+#ifdef CONFIG_TPM
+    if (s->cfg.tpm.base) {
+        xen_enable_tpm(s);
+    } else {
+        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
+    }
+#endif
+}
+
+#define DEFINE_PROP_MEMMAP(n, f) \
+    DEFINE_PROP_UINT64(n "-base", XenPVHCommonState, cfg.f.base, 0), \
+    DEFINE_PROP_UINT64(n "-size", XenPVHCommonState, cfg.f.size, 0)
+
+static Property xen_pvh_properties[] = {
+    DEFINE_PROP_UINT32("max-cpus", XenPVHCommonState, cfg.max_cpus, 0),
+    DEFINE_PROP_UINT64("ram-size", XenPVHCommonState, cfg.ram_size, 0),
+    DEFINE_PROP_MEMMAP("ram-low", ram_low),
+    DEFINE_PROP_MEMMAP("ram-high", ram_high),
+    DEFINE_PROP_MEMMAP("virtio-mmio", virtio_mmio),
+    DEFINE_PROP_MEMMAP("tpm", tpm),
+    DEFINE_PROP_UINT32("virtio-mmio-num", XenPVHCommonState,
+                       cfg.virtio_mmio_num, 0),
+    DEFINE_PROP_UINT32("virtio-mmio-irq-base", XenPVHCommonState,
+                       cfg.virtio_mmio_irq_base, 0),
+    DEFINE_PROP_END_OF_LIST()
+};
+
+static void xen_pvh_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = xen_pvh_realize;
+    device_class_set_props(dc, xen_pvh_properties);
+    /* No VMSD since we haven't got any top-level SoC state to save.  */
+}
+
+static const TypeInfo xen_pvh_info = {
+    .name = TYPE_XEN_PVH_COMMON,
+    .parent = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(XenPVHCommonState),
+    .class_init = xen_pvh_class_init,
+};
+
+static void xen_pvh_register_types(void)
+{
+    type_register_static(&xen_pvh_info);
+}
+
+type_init(xen_pvh_register_types);
diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
new file mode 100644
index 0000000000..e958b441fd
--- /dev/null
+++ b/include/hw/xen/xen-pvh-common.h
@@ -0,0 +1,45 @@
+/*
+ * QEMU Xen PVH machine - common code.
+ *
+ * Copyright (c) 2024 Advanced Micro Devices, Inc.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef XEN_PVH_COMMON_H__
+#define XEN_PVH_COMMON_H__
+
+#include <assert.h>
+#include "hw/sysbus.h"
+#include "hw/hw.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/pci-host/gpex.h"
+
+#define TYPE_XEN_PVH_COMMON "xen-pvh-common"
+OBJECT_DECLARE_SIMPLE_TYPE(XenPVHCommonState, XEN_PVH_COMMON)
+
+typedef struct XenPVHCommonState {
+    /*< private >*/
+    SysBusDevice parent_obj;
+
+    XenIOState ioreq;
+
+    struct {
+        MemoryRegion low;
+        MemoryRegion high;
+    } ram;
+
+    struct {
+        uint64_t ram_size;
+        uint32_t max_cpus;
+        uint32_t virtio_mmio_num;
+        uint32_t virtio_mmio_irq_base;
+        struct {
+            uint64_t base;
+            uint64_t size;
+        } ram_low, ram_high,
+          virtio_mmio,
+          tpm;
+    } cfg;
+} XenPVHCommonState;
+#endif
-- 
2.43.0




* [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (4 preceding siblings ...)
  2024-08-12 13:06 ` [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:48   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order Edgar E. Iglesias
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Rename xen_arm.c -> xen-pvh.c to better express that this
is a PVH machine and to align with x86 HVM and future PVH
machine filenames:
hw/i386/xen/xen-hvm.c
hw/i386/xen/xen-pvh.c (in preparation)

No functional changes.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/meson.build              | 2 +-
 hw/arm/{xen_arm.c => xen-pvh.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename hw/arm/{xen_arm.c => xen-pvh.c} (100%)

diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index 0c07ab522f..769fe9ec1a 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -59,7 +59,7 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
 arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
 arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
 arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
-arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
+arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen-pvh.c'))
 
 system_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
 system_ss.add(when: 'CONFIG_CHEETAH', if_true: files('palm.c'))
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen-pvh.c
similarity index 100%
rename from hw/arm/xen_arm.c
rename to hw/arm/xen-pvh.c
-- 
2.43.0




* [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (5 preceding siblings ...)
  2024-08-12 13:06 ` [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:48   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX Edgar E. Iglesias
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Edgar E. Iglesias

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

We've been creating the virtio-mmio devices in forward order, but
since the qbus lists prepend (rather than append) entries, the
virtio busses end up with decreasing base address order.

Xen enables virtio-mmio nodes in forward order, so there's been a
mismatch. So far, we've been working around this with an
out-of-tree patch to Xen.

This reverses the creation order, making sure the virtio busses end
up ordered with increasing base addresses and avoiding the need to
patch Xen.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/xen/xen-pvh-common.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
index 0d368398d0..69a2dbdb6d 100644
--- a/hw/xen/xen-pvh-common.c
+++ b/hw/xen/xen-pvh-common.c
@@ -73,7 +73,18 @@ static void xen_create_virtio_mmio_devices(XenPVHCommonState *s)
 {
     int i;
 
-    for (i = 0; i < s->cfg.virtio_mmio_num; i++) {
+    /*
+     * We create the transports in reverse order. Since qbus_realize()
+     * prepends (not appends) new child buses, the decrementing loop below will
+     * create a list of virtio-mmio buses with increasing base addresses.
+     *
+     * When a -device option is processed from the command line,
+     * qbus_find_recursive() picks the next free virtio-mmio bus in forwards
+     * order.
+     *
+     * This is what the Xen tools expect.
+     */
+    for (i = s->cfg.virtio_mmio_num - 1; i >= 0; i--) {
         hwaddr base = s->cfg.virtio_mmio.base + i * s->cfg.virtio_mmio.size;
         qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
                                          s->cfg.virtio_mmio_irq_base + i);
-- 
2.43.0




* [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (6 preceding siblings ...)
  2024-08-12 13:06 ` [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:48   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine Edgar E. Iglesias
  2024-08-12 13:06 ` [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description Edgar E. Iglesias
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Edgar E. Iglesias

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Add support for optionally creating a PCIe/GPEX controller.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/xen/xen-pvh-common.c         | 66 +++++++++++++++++++++++++++++++++
 include/hw/xen/xen-pvh-common.h | 10 ++++-
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
index 69a2dbdb6d..b1432e4bd9 100644
--- a/hw/xen/xen-pvh-common.c
+++ b/hw/xen/xen-pvh-common.c
@@ -120,6 +120,59 @@ static void xen_enable_tpm(XenPVHCommonState *s)
 }
 #endif
 
+static void xen_set_pci_intx_irq(void *opaque, int irq, int level)
+{
+    if (xen_set_pci_intx_level(xen_domid, 0, 0, 0, irq, level)) {
+        error_report("xendevicemodel_set_pci_intx_level failed");
+    }
+}
+
+static inline void xenpvh_gpex_init(XenPVHCommonState *s,
+                                    MemoryRegion *sysmem,
+                                    hwaddr ecam_base, hwaddr ecam_size,
+                                    hwaddr mmio_base, hwaddr mmio_size,
+                                    hwaddr mmio_high_base,
+                                    hwaddr mmio_high_size,
+                                    int intx_irq_base)
+{
+    MemoryRegion *ecam_reg;
+    MemoryRegion *mmio_reg;
+    DeviceState *dev;
+    int i;
+
+    object_initialize_child(OBJECT(s), "gpex", &s->pci.gpex,
+                            TYPE_GPEX_HOST);
+    dev = DEVICE(&s->pci.gpex);
+    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
+
+    ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
+    memory_region_add_subregion(sysmem, ecam_base, ecam_reg);
+
+    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
+
+    if (mmio_size) {
+        memory_region_init_alias(&s->pci.mmio_alias, OBJECT(dev), "pcie-mmio",
+                                 mmio_reg, mmio_base, mmio_size);
+        memory_region_add_subregion(sysmem, mmio_base, &s->pci.mmio_alias);
+    }
+
+    if (mmio_high_size) {
+        memory_region_init_alias(&s->pci.mmio_high_alias, OBJECT(dev),
+                "pcie-mmio-high",
+                mmio_reg, mmio_high_base, mmio_high_size);
+        memory_region_add_subregion(sysmem, mmio_high_base,
+                &s->pci.mmio_high_alias);
+    }
+
+    for (i = 0; i < GPEX_NUM_IRQS; i++) {
+        qemu_irq irq = qemu_allocate_irq(xen_set_pci_intx_irq, s, i);
+
+        sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
+        gpex_set_irq_num(GPEX_HOST(dev), i, intx_irq_base + i);
+        xen_set_pci_link_route(i, intx_irq_base + i);
+    }
+}
+
 static void xen_pvh_realize(DeviceState *dev, Error **errp)
 {
     XenPVHCommonState *s = XEN_PVH_COMMON(dev);
@@ -152,6 +205,14 @@ static void xen_pvh_realize(DeviceState *dev, Error **errp)
         warn_report("tpm-base-addr is not provided. TPM will not be enabled");
     }
 #endif
+
+    if (s->cfg.ecam.size) {
+        xenpvh_gpex_init(s, sysmem,
+                         s->cfg.ecam.base, s->cfg.ecam.size,
+                         s->cfg.mmio.base, s->cfg.mmio.size,
+                         s->cfg.mmio_high.base, s->cfg.mmio_high.size,
+                         s->cfg.pci_intx_irq_base);
+    }
 }
 
 #define DEFINE_PROP_MEMMAP(n, f) \
@@ -165,10 +226,15 @@ static Property xen_pvh_properties[] = {
     DEFINE_PROP_MEMMAP("ram-high", ram_high),
     DEFINE_PROP_MEMMAP("virtio-mmio", virtio_mmio),
     DEFINE_PROP_MEMMAP("tpm", tpm),
+    DEFINE_PROP_MEMMAP("pci-ecam", ecam),
+    DEFINE_PROP_MEMMAP("pci-mmio", mmio),
+    DEFINE_PROP_MEMMAP("pci-mmio-high", mmio_high),
     DEFINE_PROP_UINT32("virtio-mmio-num", XenPVHCommonState,
                        cfg.virtio_mmio_num, 0),
     DEFINE_PROP_UINT32("virtio-mmio-irq-base", XenPVHCommonState,
                        cfg.virtio_mmio_irq_base, 0),
+    DEFINE_PROP_UINT32("pci-intx-irq-base", XenPVHCommonState,
+                       cfg.pci_intx_irq_base, 0),
     DEFINE_PROP_END_OF_LIST()
 };
 
diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
index e958b441fd..faacfca70e 100644
--- a/include/hw/xen/xen-pvh-common.h
+++ b/include/hw/xen/xen-pvh-common.h
@@ -29,17 +29,25 @@ typedef struct XenPVHCommonState {
         MemoryRegion high;
     } ram;
 
+    struct {
+        GPEXHost gpex;
+        MemoryRegion mmio_alias;
+        MemoryRegion mmio_high_alias;
+    } pci;
+
     struct {
         uint64_t ram_size;
         uint32_t max_cpus;
         uint32_t virtio_mmio_num;
         uint32_t virtio_mmio_irq_base;
+        uint32_t pci_intx_irq_base;
         struct {
             uint64_t base;
             uint64_t size;
         } ram_low, ram_high,
           virtio_mmio,
-          tpm;
+          tpm,
+          ecam, mmio, mmio_high;
     } cfg;
 } XenPVHCommonState;
 #endif
-- 
2.43.0




* [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (7 preceding siblings ...)
  2024-08-12 13:06 ` [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:48   ` Stefano Stabellini
  2024-08-12 13:06 ` [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description Edgar E. Iglesias
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Edgar E. Iglesias, Michael S. Tsirkin, Marcel Apfelbaum,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

This adds a Xen PVH x86 machine based on the PVH Common
module used by the ARM PVH machine.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/i386/xen/meson.build |   1 +
 hw/i386/xen/xen-pvh.c   | 196 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 197 insertions(+)
 create mode 100644 hw/i386/xen/xen-pvh.c

diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index 3f0df8bc07..c73c62b8e3 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -4,6 +4,7 @@ i386_ss.add(when: 'CONFIG_XEN', if_true: files(
 ))
 i386_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
   'xen-hvm.c',
+  'xen-pvh.c',
 ))
 
 i386_ss.add(when: 'CONFIG_XEN_BUS', if_true: files(
diff --git a/hw/i386/xen/xen-pvh.c b/hw/i386/xen/xen-pvh.c
new file mode 100644
index 0000000000..9c3d3fc58d
--- /dev/null
+++ b/hw/i386/xen/xen-pvh.c
@@ -0,0 +1,196 @@
+/*
+ * QEMU Xen PVH x86 Machine
+ *
+ * Copyright (c) 2024 Advanced Micro Devices, Inc.
+ * Written by Edgar E. Iglesias <edgar.iglesias@amd.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qapi/visitor.h"
+#include "qemu/error-report.h"
+#include "hw/boards.h"
+#include "sysemu/sysemu.h"
+#include "hw/xen/arch_hvm.h"
+#include "hw/xen/xen.h"
+#include "hw/xen/xen-pvh-common.h"
+
+#define TYPE_XEN_PVH_X86  MACHINE_TYPE_NAME("xenpvh")
+OBJECT_DECLARE_SIMPLE_TYPE(XenPVHx86State, XEN_PVH_X86)
+
+#define PVH_MAX_CPUS 128
+
+struct XenPVHx86State {
+    /*< private >*/
+    MachineState parent;
+
+    DeviceState *cpu[PVH_MAX_CPUS];
+    XenPVHCommonState pvh;
+
+    /*
+     * We provide these properties to allow Xen to move things to other
+     * addresses for example when users need to accommodate the memory-map
+     * for 1:1 mapped devices/memory.
+     */
+    struct {
+        MemMapEntry ram_low, ram_high;
+        MemMapEntry pci_ecam, pci_mmio, pci_mmio_high;
+    } cfg;
+};
+
+static void xenpvh_cpu_new(MachineState *ms,
+                           XenPVHx86State *xp,
+                           int cpu_idx,
+                           int64_t apic_id)
+{
+    Object *cpu = object_new(ms->cpu_type);
+
+    object_property_add_child(OBJECT(ms), "cpu[*]", cpu);
+    object_property_set_uint(cpu, "apic-id", apic_id, &error_fatal);
+    qdev_realize(DEVICE(cpu), NULL, &error_fatal);
+    object_unref(cpu);
+
+    xp->cpu[cpu_idx] = DEVICE(cpu);
+}
+
+static void xenpvh_init(MachineState *ms)
+{
+    XenPVHx86State *xp = XEN_PVH_X86(ms);
+    const struct {
+        const char *name;
+        MemMapEntry *map;
+    } map[] = {
+        { "ram-low", &xp->cfg.ram_low },
+        { "ram-high", &xp->cfg.ram_high },
+        { "pci-ecam", &xp->cfg.pci_ecam },
+        { "pci-mmio", &xp->cfg.pci_mmio },
+        { "pci-mmio-high", &xp->cfg.pci_mmio_high },
+    };
+    int i;
+
+    object_initialize_child(OBJECT(ms), "pvh", &xp->pvh, TYPE_XEN_PVH_COMMON);
+    object_property_set_int(OBJECT(&xp->pvh), "max-cpus", ms->smp.max_cpus,
+                            &error_abort);
+    object_property_set_int(OBJECT(&xp->pvh), "ram-size", ms->ram_size,
+                            &error_abort);
+
+    for (i = 0; i < ARRAY_SIZE(map); i++) {
+        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
+        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
+
+        object_property_set_int(OBJECT(&xp->pvh), base_name, map[i].map->base,
+                                 &error_abort);
+        object_property_set_int(OBJECT(&xp->pvh), size_name, map[i].map->size,
+                                 &error_abort);
+    }
+
+    /* GSI's 16 - 20 are used for legacy PCIe INTX IRQs.  */
+    object_property_set_int(OBJECT(&xp->pvh), "pci-intx-irq-base", 16,
+                            &error_abort);
+
+    sysbus_realize(SYS_BUS_DEVICE(&xp->pvh), &error_abort);
+
+    /* Create dummy cores. This will indirectly create the APIC MSI window.  */
+    for (i = 0; i < ms->smp.cpus; i++) {
+        xenpvh_cpu_new(ms, xp, i, i);
+    }
+}
+
+#define XENPVH_PROP_MEMMAP_SETTER(n, f)                                    \
+static void xenpvh_set_ ## n ## _ ## f(Object *obj, Visitor *v,            \
+                                       const char *name, void *opaque,     \
+                                       Error **errp)                       \
+{                                                                          \
+    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
+    uint64_t value;                                                        \
+                                                                           \
+    if (!visit_type_size(v, name, &value, errp)) {                         \
+        return;                                                            \
+    }                                                                      \
+    xp->cfg.n.f = value;                                                   \
+}
+
+#define XENPVH_PROP_MEMMAP_GETTER(n, f)                                    \
+static void xenpvh_get_ ## n ## _ ## f(Object *obj, Visitor *v,            \
+                                       const char *name, void *opaque,     \
+                                       Error **errp)                       \
+{                                                                          \
+    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
+    uint64_t value = xp->cfg.n.f;                                          \
+                                                                           \
+    visit_type_uint64(v, name, &value, errp);                              \
+}
+
+#define XENPVH_PROP_MEMMAP(n)              \
+    XENPVH_PROP_MEMMAP_SETTER(n, base)     \
+    XENPVH_PROP_MEMMAP_SETTER(n, size)     \
+    XENPVH_PROP_MEMMAP_GETTER(n, base)     \
+    XENPVH_PROP_MEMMAP_GETTER(n, size)
+
+
+XENPVH_PROP_MEMMAP(ram_low)
+XENPVH_PROP_MEMMAP(ram_high)
+XENPVH_PROP_MEMMAP(pci_ecam)
+XENPVH_PROP_MEMMAP(pci_mmio)
+XENPVH_PROP_MEMMAP(pci_mmio_high)
+
+static void xenpvh_instance_init(Object *obj)
+{
+    XenPVHx86State *xp = XEN_PVH_X86(obj);
+
+    /* Default memmap.  */
+    xp->cfg.ram_low.base = 0x0;
+    xp->cfg.ram_low.size = 0x80000000U;
+    xp->cfg.ram_high.base = 0xC000000000ULL;
+    xp->cfg.ram_high.size = 0x4000000000ULL;
+}
+
+static void xenpvh_machine_class_init(ObjectClass *oc, void *data)
+{
+    MachineClass *mc = MACHINE_CLASS(oc);
+
+    mc->desc = "Xen PVH x86 machine";
+    mc->init = xenpvh_init;
+    mc->max_cpus = PVH_MAX_CPUS;
+    mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
+    mc->default_machine_opts = "accel=xen";
+    /* Set explicitly here to make sure that real ram_size is passed */
+    mc->default_ram_size = 0;
+
+#define OC_MEMMAP_PROP(c, prop_name, name)                               \
+do {                                                                     \
+    object_class_property_add(c, prop_name "-base", "uint64_t",          \
+                              xenpvh_get_ ## name ## _base,              \
+                              xenpvh_set_ ## name ## _base, NULL, NULL); \
+    object_class_property_set_description(oc, prop_name "-base",         \
+                              prop_name " base address");                \
+    object_class_property_add(c, prop_name "-size", "uint64_t",          \
+                              xenpvh_get_ ## name ## _size,              \
+                              xenpvh_set_ ## name ## _size, NULL, NULL); \
+    object_class_property_set_description(oc, prop_name "-size",         \
+                              prop_name " size of memory region");       \
+} while (0)
+
+    OC_MEMMAP_PROP(oc, "ram-low", ram_low);
+    OC_MEMMAP_PROP(oc, "ram-high", ram_high);
+    OC_MEMMAP_PROP(oc, "pci-ecam", pci_ecam);
+    OC_MEMMAP_PROP(oc, "pci-mmio", pci_mmio);
+    OC_MEMMAP_PROP(oc, "pci-mmio-high", pci_mmio_high);
+}
+
+static const TypeInfo xenpvh_machine_type = {
+    .name = TYPE_XEN_PVH_X86,
+    .parent = TYPE_MACHINE,
+    .class_init = xenpvh_machine_class_init,
+    .instance_init = xenpvh_instance_init,
+    .instance_size = sizeof(XenPVHx86State),
+};
+
+static void xenpvh_machine_register_types(void)
+{
+    type_register_static(&xenpvh_machine_type);
+}
+
+type_init(xenpvh_machine_register_types)
-- 
2.43.0




* [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description
  2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
                   ` (8 preceding siblings ...)
  2024-08-12 13:06 ` [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine Edgar E. Iglesias
@ 2024-08-12 13:06 ` Edgar E. Iglesias
  2024-08-13  1:49   ` Stefano Stabellini
  9 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-12 13:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Edgar E. Iglesias, Paolo Bonzini

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 MAINTAINERS                 |  1 +
 docs/system/i386/xenpvh.rst | 49 +++++++++++++++++++++++++++++++++++++
 docs/system/target-i386.rst |  1 +
 3 files changed, 51 insertions(+)
 create mode 100644 docs/system/i386/xenpvh.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index a24c2e14d9..da4c9d4d46 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -560,6 +560,7 @@ F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 F: stubs/xen-hw-stub.c
 F: docs/system/arm/xenpvh.rst
+F: docs/system/i386/xenpvh.rst
 
 Guest CPU Cores (NVMM)
 ----------------------
diff --git a/docs/system/i386/xenpvh.rst b/docs/system/i386/xenpvh.rst
new file mode 100644
index 0000000000..354250f073
--- /dev/null
+++ b/docs/system/i386/xenpvh.rst
@@ -0,0 +1,49 @@
+Xen PVH machine (``xenpvh``)
+=========================================
+
+Xen supports a spectrum of types of guests that vary in how they depend
+on HW virtualization features, emulation models and paravirtualization.
+PVH is a mode that uses HW virtualization features (like HVM) but tries
+to avoid emulation models and instead use passthrough or
+paravirtualized devices.
+
+QEMU can be used to provide PV virtio devices on an emulated PCIe controller.
+That is the purpose of this minimal machine.
+
+Supported devices
+-----------------
+
+The x86 Xen PVH QEMU machine provides the following devices:
+
+- RAM
+- GPEX host bridge
+- virtio-pci devices
+
+The idea is to only connect virtio-pci devices but in theory any compatible
+PCI device model will work depending on Xen and guest support.
+
+Running
+-------
+
+The Xen tools will typically construct a command-line and launch QEMU
+for you when needed. But here's an example of what it can look like in
+case you need to construct one manually:
+
+.. code-block:: console
+
+    qemu-system-i386 -xen-domid 3 -no-shutdown        \
+      -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-3,server=on,wait=off \
+      -mon chardev=libxl-cmd,mode=control             \
+      -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-3,server=on,wait=off \
+      -mon chardev=libxenstat-cmd,mode=control        \
+      -nodefaults                                     \
+      -no-user-config                                 \
+      -xen-attach -name g0                            \
+      -vnc none                                       \
+      -display none                                   \
+      -device virtio-net-pci,id=nic0,netdev=net0,mac=00:16:3e:5c:81:78 \
+      -netdev type=tap,id=net0,ifname=vif3.0-emu,br=xenbr0,script=no,downscript=no \
+      -smp 4,maxcpus=4                                \
+      -nographic                                      \
+      -machine xenpvh,ram-low-base=0,ram-low-size=2147483648,ram-high-base=4294967296,ram-high-size=2147483648,pci-ecam-base=824633720832,pci-ecam-size=268435456,pci-mmio-base=4026531840,pci-mmio-size=33554432,pci-mmio-high-base=824902156288,pci-mmio-high-size=68719476736 \
+      -m 4096
diff --git a/docs/system/target-i386.rst b/docs/system/target-i386.rst
index 1b8a1f248a..23e84e3ba7 100644
--- a/docs/system/target-i386.rst
+++ b/docs/system/target-i386.rst
@@ -26,6 +26,7 @@ Architectural features
    i386/cpu
    i386/hyperv
    i386/xen
+   i386/xenpvh
    i386/kvm-pv
    i386/sgx
    i386/amd-memory-encryption
-- 
2.43.0




* Re: [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst
  2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
@ 2024-08-13  1:45   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:45 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  MAINTAINERS | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 10af212632..a24c2e14d9 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -559,6 +559,7 @@ F: include/hw/xen/
>  F: include/sysemu/xen.h
>  F: include/sysemu/xen-mapcache.h
>  F: stubs/xen-hw-stub.c
> +F: docs/system/arm/xenpvh.rst
>  
>  Guest CPU Cores (NVMM)
>  ----------------------
> -- 
> 2.43.0
> 



* Re: [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX
  2024-08-12 13:05 ` [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX Edgar E. Iglesias
@ 2024-08-13  1:45   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:45 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Update file header to use SPDX and remove stray empty
> comment line.
> 
> No functional changes.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/arm/xen_arm.c | 19 +------------------
>  1 file changed, 1 insertion(+), 18 deletions(-)
> 
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index 6fad829ede..766a194fa1 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -1,24 +1,7 @@
>  /*
>   * QEMU ARM Xen PVH Machine
>   *
> - *
> - * Permission is hereby granted, free of charge, to any person obtaining a copy
> - * of this software and associated documentation files (the "Software"), to deal
> - * in the Software without restriction, including without limitation the rights
> - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> - * copies of the Software, and to permit persons to whom the Software is
> - * furnished to do so, subject to the following conditions:
> - *
> - * The above copyright notice and this permission notice shall be included in
> - * all copies or substantial portions of the Software.
> - *
> - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> - * THE SOFTWARE.
> + * SPDX-License-Identifier: MIT
>   */
>  
>  #include "qemu/osdep.h"
> -- 
> 2.43.0
> 



* Re: [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description
  2024-08-12 13:05 ` [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description Edgar E. Iglesias
@ 2024-08-13  1:45   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:45 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Tweak machine description to better express that this is
> a Xen PVH machine for ARM.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/arm/xen_arm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index 766a194fa1..5f75cc3779 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -216,7 +216,7 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>  {
>  
>      MachineClass *mc = MACHINE_CLASS(oc);
> -    mc->desc = "Xen Para-virtualized PC";
> +    mc->desc = "Xen PVH ARM machine";
>      mc->init = xen_arm_init;
>      mc->max_cpus = 1;
>      mc->default_machine_opts = "accel=xen";
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-12 13:05 ` [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests Edgar E. Iglesias
@ 2024-08-13  1:47   ` Stefano Stabellini
  2024-08-13 17:02     ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:47 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> servers to handle hotplug.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> ---
>  hw/arm/xen_arm.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index 5f75cc3779..ef8315969c 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
>  
>      xen_init_ram(machine);
>  
> -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
>  
>      xen_create_virtio_mmio_devices(xam);
>  
> @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>      MachineClass *mc = MACHINE_CLASS(oc);
>      mc->desc = "Xen PVH ARM machine";
>      mc->init = xen_arm_init;
> -    mc->max_cpus = 1;
> +    /* MAX number of vcpus supported by Xen.  */
> +    mc->max_cpus = GUEST_MAX_VCPUS;

Will this cause allocations of data structures with 128 elements?
Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
later on with the precise vCPU value which should be provided to QEMU
via the -smp command line option
(tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module
  2024-08-12 13:06 ` [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module Edgar E. Iglesias
@ 2024-08-13  1:47   ` Stefano Stabellini
  2024-08-14 12:03     ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:47 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Break out a common Xen PVH module in preparation for
> adding a x86 Xen PVH Machine.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> ---
>  hw/arm/trace-events             |   5 -
>  hw/arm/xen_arm.c                | 154 ++++++--------------------
>  hw/xen/meson.build              |   1 +
>  hw/xen/trace-events             |   4 +
>  hw/xen/xen-pvh-common.c         | 185 ++++++++++++++++++++++++++++++++
>  include/hw/xen/xen-pvh-common.h |  45 ++++++++
>  6 files changed, 269 insertions(+), 125 deletions(-)
>  create mode 100644 hw/xen/xen-pvh-common.c
>  create mode 100644 include/hw/xen/xen-pvh-common.h
> 
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index be6c8f720b..c64ad344bd 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -68,10 +68,5 @@ z2_aer915_send_too_long(int8_t msg) "message too long (%i bytes)"
>  z2_aer915_send(uint8_t reg, uint8_t value) "reg %d value 0x%02x"
>  z2_aer915_event(int8_t event, int8_t len) "i2c event =0x%x len=%d bytes"
>  
> -# xen_arm.c
> -xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
> -xen_init_ram(uint64_t machine_ram_size) "Initialized xen ram with size 0x%"PRIx64
> -xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
> -
>  # bcm2838.c
>  bcm2838_gic_set_irq(int irq, int level) "gic irq:%d lvl:%d"
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> index ef8315969c..b8a5c09bdf 100644
> --- a/hw/arm/xen_arm.c
> +++ b/hw/arm/xen_arm.c
> @@ -12,40 +12,25 @@
>  #include "hw/irq.h"
>  #include "hw/sysbus.h"
>  #include "sysemu/block-backend.h"
> -#include "sysemu/tpm_backend.h"
>  #include "sysemu/sysemu.h"
> -#include "hw/xen/xen-hvm-common.h"
> +#include "hw/xen/xen-pvh-common.h"
>  #include "sysemu/tpm.h"
>  #include "hw/xen/arch_hvm.h"
> -#include "trace.h"
>  
>  #define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
>  OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
>  
> -static const MemoryListener xen_memory_listener = {
> -    .region_add = xen_region_add,
> -    .region_del = xen_region_del,
> -    .log_start = NULL,
> -    .log_stop = NULL,
> -    .log_sync = NULL,
> -    .log_global_start = NULL,
> -    .log_global_stop = NULL,
> -    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
> -};
> -
>  struct XenArmState {
>      /*< private >*/
>      MachineState parent;
>  
> -    XenIOState *state;
> +    XenPVHCommonState pvh;
>  
>      struct {
>          uint64_t tpm_base_addr;
>      } cfg;
>  };
>  
> -static MemoryRegion ram_lo, ram_hi;
> -
>  /*
>   * VIRTIO_MMIO_DEV_SIZE is imported from tools/libs/light/libxl_arm.c under Xen
>   * repository.
> @@ -57,64 +42,6 @@ static MemoryRegion ram_lo, ram_hi;
>  #define NR_VIRTIO_MMIO_DEVICES   \
>     (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
>  
> -static void xen_set_irq(void *opaque, int irq, int level)
> -{
> -    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> -        error_report("xendevicemodel_set_irq_level failed");
> -    }
> -}
> -
> -static void xen_create_virtio_mmio_devices(XenArmState *xam)
> -{
> -    int i;
> -
> -    for (i = 0; i < NR_VIRTIO_MMIO_DEVICES; i++) {
> -        hwaddr base = GUEST_VIRTIO_MMIO_BASE + i * VIRTIO_MMIO_DEV_SIZE;
> -        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> -                                         GUEST_VIRTIO_MMIO_SPI_FIRST + i);
> -
> -        sysbus_create_simple("virtio-mmio", base, irq);
> -
> -        trace_xen_create_virtio_mmio_devices(i,
> -                                             GUEST_VIRTIO_MMIO_SPI_FIRST + i,
> -                                             base);
> -    }
> -}

There are 4 trivial functions in this file:

void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
                         bool add)
void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
void qmp_xen_set_global_dirty_log(bool enable, Error **errp)

Do you think it would make sense to move these too to xen-pvh-common.c ?


> -
> -static void xen_init_ram(MachineState *machine)
> -{
> -    MemoryRegion *sysmem = get_system_memory();
> -    ram_addr_t block_len, ram_size[GUEST_RAM_BANKS];
> -
> -    trace_xen_init_ram(machine->ram_size);
> -    if (machine->ram_size <= GUEST_RAM0_SIZE) {
> -        ram_size[0] = machine->ram_size;
> -        ram_size[1] = 0;
> -        block_len = GUEST_RAM0_BASE + ram_size[0];
> -    } else {
> -        ram_size[0] = GUEST_RAM0_SIZE;
> -        ram_size[1] = machine->ram_size - GUEST_RAM0_SIZE;
> -        block_len = GUEST_RAM1_BASE + ram_size[1];
> -    }
> -
> -    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
> -                           &error_fatal);
> -
> -    memory_region_init_alias(&ram_lo, NULL, "xen.ram.lo", &xen_memory,
> -                             GUEST_RAM0_BASE, ram_size[0]);
> -    memory_region_add_subregion(sysmem, GUEST_RAM0_BASE, &ram_lo);
> -    if (ram_size[1] > 0) {
> -        memory_region_init_alias(&ram_hi, NULL, "xen.ram.hi", &xen_memory,
> -                                 GUEST_RAM1_BASE, ram_size[1]);
> -        memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
> -    }
> -
> -    /* Setup support for grants.  */
> -    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
> -                           &error_fatal);
> -    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
> -}
> -
>  void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
>  {
>      hw_error("Invalid ioreq type 0x%x\n", req->type);
> @@ -135,55 +62,42 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
>  {
>  }
>  
> -#ifdef CONFIG_TPM
> -static void xen_enable_tpm(XenArmState *xam)
> -{
> -    Error *errp = NULL;
> -    DeviceState *dev;
> -    SysBusDevice *busdev;
> -
> -    TPMBackend *be = qemu_find_tpm_be("tpm0");
> -    if (be == NULL) {
> -        error_report("Couldn't find tmp0 backend");
> -        return;
> -    }
> -    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> -    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> -    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> -    busdev = SYS_BUS_DEVICE(dev);
> -    sysbus_realize_and_unref(busdev, &error_fatal);
> -    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
> -
> -    trace_xen_enable_tpm(xam->cfg.tpm_base_addr);
> -}
> -#endif
> -
> -static void xen_arm_init(MachineState *machine)
> +static void xen_arm_init(MachineState *ms)
>  {
> -    XenArmState *xam = XEN_ARM(machine);
> -
> -    xam->state =  g_new0(XenIOState, 1);
> +    XenArmState *xam = XEN_ARM(ms);
> +    const struct {
> +        const char *name;
> +        MemMapEntry map;
> +    } map[] = {
> +        { "ram-low", { GUEST_RAM0_BASE, GUEST_RAM0_SIZE } },
> +        { "ram-high", { GUEST_RAM1_BASE, GUEST_RAM1_SIZE } },
> +        { "virtio-mmio", { GUEST_VIRTIO_MMIO_BASE, VIRTIO_MMIO_DEV_SIZE } },
> +        { "tpm", { xam->cfg.tpm_base_addr, 0x1000 } },
> +    };
> +    int i;
>  
> -    if (machine->ram_size == 0) {
> -        warn_report("%s non-zero ram size not specified. QEMU machine started"
> -                    " without IOREQ (no emulated devices including virtio)",
> -                    MACHINE_CLASS(object_get_class(OBJECT(machine)))->desc);
> -        return;
> +    object_initialize_child(OBJECT(ms), "pvh", &xam->pvh, TYPE_XEN_PVH_COMMON);
> +
> +    object_property_set_int(OBJECT(&xam->pvh), "max-cpus", ms->smp.max_cpus,
> +                            &error_abort);

Is ms->smp.max_cpus updated according to the QEMU command line option?
If so, that would solve the problem of the static initialization of
max_cpus to GUEST_MAX_VCPUS in xen_arm_machine_class_init.


> +    object_property_set_int(OBJECT(&xam->pvh), "ram-size", ms->ram_size,
> +                            &error_abort);
> +    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-num",
> +                            NR_VIRTIO_MMIO_DEVICES, &error_abort);
> +    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-irq-base",
> +                            GUEST_VIRTIO_MMIO_SPI_FIRST, &error_abort);
> +
> +    for (i = 0; i < ARRAY_SIZE(map); i++) {
> +        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
> +        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
> +
> +        object_property_set_int(OBJECT(&xam->pvh), base_name, map[i].map.base,
> +                                &error_abort);
> +        object_property_set_int(OBJECT(&xam->pvh), size_name, map[i].map.size,
> +                                &error_abort);
>      }
>  
> -    xen_init_ram(machine);
> -
> -    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> -
> -    xen_create_virtio_mmio_devices(xam);
> -
> -#ifdef CONFIG_TPM
> -    if (xam->cfg.tpm_base_addr) {
> -        xen_enable_tpm(xam);

Do you think it makes sense also to move xen_arm_get_tpm_base_addr and
xen_arm_set_tpm_base_addr to xen-pvh-common.c ?



> -    } else {
> -        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
> -    }
> -#endif
> +    sysbus_realize(SYS_BUS_DEVICE(&xam->pvh), &error_abort);
>  }
>  
>  #ifdef CONFIG_TPM
> diff --git a/hw/xen/meson.build b/hw/xen/meson.build
> index d887fa9ba4..4a486e3673 100644
> --- a/hw/xen/meson.build
> +++ b/hw/xen/meson.build
> @@ -15,6 +15,7 @@ xen_specific_ss = ss.source_set()
>  xen_specific_ss.add(files(
>    'xen-mapcache.c',
>    'xen-hvm-common.c',
> +  'xen-pvh-common.c',
>  ))
>  if have_xen_pci_passthrough
>    xen_specific_ss.add(files(
> diff --git a/hw/xen/trace-events b/hw/xen/trace-events
> index d1b27f6c11..a07fe41c6d 100644
> --- a/hw/xen/trace-events
> +++ b/hw/xen/trace-events
> @@ -64,6 +64,10 @@ destroy_hvm_domain_cannot_acquire_handle(void) "Cannot acquire xenctrl handle"
>  destroy_hvm_domain_failed_action(const char *action, int sts, char *errno_s) "xc_domain_shutdown failed to issue %s, sts %d, %s"
>  destroy_hvm_domain_action(int xen_domid, const char *action) "Issued domain %d %s"
>  
> +# xen-pvh-common.c
> +xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
> +xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
> +
>  # xen-mapcache.c
>  xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
>  xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
> diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> new file mode 100644
> index 0000000000..0d368398d0
> --- /dev/null
> +++ b/hw/xen/xen-pvh-common.c
> @@ -0,0 +1,185 @@
> +/*
> + * Common Xen PVH code.
> + *
> + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/error-report.h"
> +#include "qapi/error.h"
> +#include "hw/boards.h"
> +#include "hw/irq.h"
> +#include "hw/sysbus.h"
> +#include "sysemu/sysemu.h"
> +#include "sysemu/tpm.h"
> +#include "sysemu/tpm_backend.h"
> +#include "hw/xen/xen-pvh-common.h"
> +#include "trace.h"
> +
> +static const MemoryListener xen_memory_listener = {
> +    .region_add = xen_region_add,
> +    .region_del = xen_region_del,
> +    .log_start = NULL,
> +    .log_stop = NULL,
> +    .log_sync = NULL,
> +    .log_global_start = NULL,
> +    .log_global_stop = NULL,
> +    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
> +};
> +
> +static void xen_pvh_init_ram(XenPVHCommonState *s,
> +                             MemoryRegion *sysmem)
> +{
> +    ram_addr_t block_len, ram_size[2];
> +
> +    if (s->cfg.ram_size <= s->cfg.ram_low.size) {
> +        ram_size[0] = s->cfg.ram_size;
> +        ram_size[1] = 0;
> +        block_len = s->cfg.ram_low.base + ram_size[0];
> +    } else {
> +        ram_size[0] = s->cfg.ram_low.size;
> +        ram_size[1] = s->cfg.ram_size - s->cfg.ram_low.size;
> +        block_len = s->cfg.ram_high.base + ram_size[1];
> +    }
> +
> +    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
> +                           &error_fatal);
> +
> +    memory_region_init_alias(&s->ram.low, NULL, "xen.ram.lo", &xen_memory,
> +                             s->cfg.ram_low.base, ram_size[0]);
> +    memory_region_add_subregion(sysmem, s->cfg.ram_low.base, &s->ram.low);
> +    if (ram_size[1] > 0) {
> +        memory_region_init_alias(&s->ram.high, NULL, "xen.ram.hi", &xen_memory,
> +                                 s->cfg.ram_high.base, ram_size[1]);
> +        memory_region_add_subregion(sysmem, s->cfg.ram_high.base, &s->ram.high);
> +    }
> +
> +    /* Setup support for grants.  */
> +    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
> +                           &error_fatal);
> +    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
> +}
> +
> +static void xen_set_irq(void *opaque, int irq, int level)
> +{
> +    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> +        error_report("xendevicemodel_set_irq_level failed");
> +    }
> +}
> +
> +static void xen_create_virtio_mmio_devices(XenPVHCommonState *s)
> +{
> +    int i;
> +
> +    for (i = 0; i < s->cfg.virtio_mmio_num; i++) {
> +        hwaddr base = s->cfg.virtio_mmio.base + i * s->cfg.virtio_mmio.size;
> +        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> +                                         s->cfg.virtio_mmio_irq_base + i);
> +
> +        sysbus_create_simple("virtio-mmio", base, irq);
> +
> +        trace_xen_create_virtio_mmio_devices(i,
> +                                             s->cfg.virtio_mmio_irq_base + i,
> +                                             base);
> +    }
> +}
> +
> +#ifdef CONFIG_TPM
> +static void xen_enable_tpm(XenPVHCommonState *s)
> +{
> +    Error *errp = NULL;
> +    DeviceState *dev;
> +    SysBusDevice *busdev;
> +
> +    TPMBackend *be = qemu_find_tpm_be("tpm0");
> +    if (be == NULL) {
> +        error_report("Couldn't find tmp0 backend");
> +        return;
> +    }
> +    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> +    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> +    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> +    busdev = SYS_BUS_DEVICE(dev);
> +    sysbus_realize_and_unref(busdev, &error_fatal);
> +    sysbus_mmio_map(busdev, 0, s->cfg.tpm.base);
> +
> +    trace_xen_enable_tpm(s->cfg.tpm.base);
> +}
> +#endif
> +
> +static void xen_pvh_realize(DeviceState *dev, Error **errp)
> +{
> +    XenPVHCommonState *s = XEN_PVH_COMMON(dev);
> +    MemoryRegion *sysmem = get_system_memory();
> +
> +    if (s->cfg.ram_size == 0) {
> +        /* FIXME: Prefix with object path and consider bailing out.  */

I am not sure about these two FIXMEs.


> +        warn_report("non-zero ram size not specified. QEMU machine started"
> +                    " without IOREQ (no emulated devices including virtio)");

Also, the warning message has a double negative?


> +        return;
> +    }
> +
> +    if (s->cfg.max_cpus == 0) {
> +        /* FIXME: Prefix with object path and bail out.  */
> +        warn_report("max-cpus not specified. QEMU machine started");
> +        return;
> +    }
> +
> +    xen_pvh_init_ram(s, sysmem);
> +    xen_register_ioreq(&s->ioreq, s->cfg.max_cpus, &xen_memory_listener);
> +
> +    if (s->cfg.virtio_mmio_num) {
> +        xen_create_virtio_mmio_devices(s);
> +    }
> +
> +#ifdef CONFIG_TPM
> +    if (s->cfg.tpm.base) {
> +        xen_enable_tpm(s);
> +    } else {
> +        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
> +    }
> +#endif
> +}
> +
> +#define DEFINE_PROP_MEMMAP(n, f) \
> +    DEFINE_PROP_UINT64(n "-base", XenPVHCommonState, cfg.f.base, 0), \
> +    DEFINE_PROP_UINT64(n "-size", XenPVHCommonState, cfg.f.size, 0)
> +
> +static Property xen_pvh_properties[] = {
> +    DEFINE_PROP_UINT32("max-cpus", XenPVHCommonState, cfg.max_cpus, 0),
> +    DEFINE_PROP_UINT64("ram-size", XenPVHCommonState, cfg.ram_size, 0),
> +    DEFINE_PROP_MEMMAP("ram-low", ram_low),
> +    DEFINE_PROP_MEMMAP("ram-high", ram_high),
> +    DEFINE_PROP_MEMMAP("virtio-mmio", virtio_mmio),
> +    DEFINE_PROP_MEMMAP("tpm", tpm),
> +    DEFINE_PROP_UINT32("virtio-mmio-num", XenPVHCommonState,
> +                       cfg.virtio_mmio_num, 0),
> +    DEFINE_PROP_UINT32("virtio-mmio-irq-base", XenPVHCommonState,
> +                       cfg.virtio_mmio_irq_base, 0),
> +    DEFINE_PROP_END_OF_LIST()
> +};
> +
> +static void xen_pvh_class_init(ObjectClass *klass, void *data)
> +{
> +    DeviceClass *dc = DEVICE_CLASS(klass);
> +
> +    dc->realize = xen_pvh_realize;
> +    device_class_set_props(dc, xen_pvh_properties);
> +    /* No VMSD since we haven't got any top-level SoC state to save.  */
> +}
> +
> +static const TypeInfo xen_pvh_info = {
> +    .name = TYPE_XEN_PVH_COMMON,
> +    .parent = TYPE_SYS_BUS_DEVICE,
> +    .instance_size = sizeof(XenPVHCommonState),
> +    .class_init = xen_pvh_class_init,
> +};
> +
> +static void xen_pvh_register_types(void)
> +{
> +    type_register_static(&xen_pvh_info);
> +}
> +
> +type_init(xen_pvh_register_types);
> diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
> new file mode 100644
> index 0000000000..e958b441fd
> --- /dev/null
> +++ b/include/hw/xen/xen-pvh-common.h
> @@ -0,0 +1,45 @@
> +/*
> + * QEMU Xen PVH machine - common code.
> + *
> + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef XEN_PVH_COMMON_H__
> +#define XEN_PVH_COMMON_H__
> +
> +#include <assert.h>
> +#include "hw/sysbus.h"
> +#include "hw/hw.h"
> +#include "hw/xen/xen-hvm-common.h"
> +#include "hw/pci-host/gpex.h"
> +
> +#define TYPE_XEN_PVH_COMMON "xen-pvh-common"
> +OBJECT_DECLARE_SIMPLE_TYPE(XenPVHCommonState, XEN_PVH_COMMON)
> +
> +typedef struct XenPVHCommonState {
> +    /*< private >*/
> +    SysBusDevice parent_obj;
> +
> +    XenIOState ioreq;
> +
> +    struct {
> +        MemoryRegion low;
> +        MemoryRegion high;
> +    } ram;
> +
> +    struct {
> +        uint64_t ram_size;
> +        uint32_t max_cpus;
> +        uint32_t virtio_mmio_num;
> +        uint32_t virtio_mmio_irq_base;
> +        struct {
> +            uint64_t base;
> +            uint64_t size;
> +        } ram_low, ram_high,
> +          virtio_mmio,
> +          tpm;
> +    } cfg;
> +} XenPVHCommonState;
> +#endif
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c
  2024-08-12 13:06 ` [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c Edgar E. Iglesias
@ 2024-08-13  1:48   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:48 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Rename xen_arm.c -> xen-pvh.c to better express that this
> is a PVH machine and to align with x86 HVM and future PVH
> machine filenames:
> hw/i386/xen/xen-hvm.c
> hw/i386/xen/xen-pvh.c (in preparation)
> 
> No functional changes.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/arm/meson.build              | 2 +-
>  hw/arm/{xen_arm.c => xen-pvh.c} | 0
>  2 files changed, 1 insertion(+), 1 deletion(-)
>  rename hw/arm/{xen_arm.c => xen-pvh.c} (100%)
> 
> diff --git a/hw/arm/meson.build b/hw/arm/meson.build
> index 0c07ab522f..769fe9ec1a 100644
> --- a/hw/arm/meson.build
> +++ b/hw/arm/meson.build
> @@ -59,7 +59,7 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
>  arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
>  arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
>  arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
> -arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
> +arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen-pvh.c'))
>  
>  system_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
>  system_ss.add(when: 'CONFIG_CHEETAH', if_true: files('palm.c'))
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen-pvh.c
> similarity index 100%
> rename from hw/arm/xen_arm.c
> rename to hw/arm/xen-pvh.c
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order
  2024-08-12 13:06 ` [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order Edgar E. Iglesias
@ 2024-08-13  1:48   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:48 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> We've been creating the virtio-mmio devices in forwards order
> but since the qbus lists prepend (rather than append) entries,
> the virtio busses end up with decreasing base address order.
> 
> Xen enables virtio-mmio nodes in forwards order so there's been
> a mismatch. So far, we've been working around this with an
> out-of-tree patch to Xen.
> 
> This reverses the order making sure the virtio busses end up
> ordered with increasing base addresses avoiding the need to
> patch Xen.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/xen/xen-pvh-common.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> index 0d368398d0..69a2dbdb6d 100644
> --- a/hw/xen/xen-pvh-common.c
> +++ b/hw/xen/xen-pvh-common.c
> @@ -73,7 +73,18 @@ static void xen_create_virtio_mmio_devices(XenPVHCommonState *s)
>  {
>      int i;
>  
> -    for (i = 0; i < s->cfg.virtio_mmio_num; i++) {
> +    /*
> +     * We create the transports in reverse order. Since qbus_realize()
> +     * prepends (not appends) new child buses, the decrementing loop below will
> +     * create a list of virtio-mmio buses with increasing base addresses.
> +     *
> +     * When a -device option is processed from the command line,
> +     * qbus_find_recursive() picks the next free virtio-mmio bus in forwards
> +     * order.
> +     *
> +     * This is what the Xen tools expect.
> +     */
> +    for (i = s->cfg.virtio_mmio_num - 1; i >= 0; i--) {
>          hwaddr base = s->cfg.virtio_mmio.base + i * s->cfg.virtio_mmio.size;
>          qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
>                                           s->cfg.virtio_mmio_irq_base + i);
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX
  2024-08-12 13:06 ` [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX Edgar E. Iglesias
@ 2024-08-13  1:48   ` Stefano Stabellini
  2024-08-14 15:26     ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:48 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Add support for optionally creating a PCIe/GPEX controller.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> ---
>  hw/xen/xen-pvh-common.c         | 66 +++++++++++++++++++++++++++++++++
>  include/hw/xen/xen-pvh-common.h | 10 ++++-
>  2 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> index 69a2dbdb6d..b1432e4bd9 100644
> --- a/hw/xen/xen-pvh-common.c
> +++ b/hw/xen/xen-pvh-common.c
> @@ -120,6 +120,59 @@ static void xen_enable_tpm(XenPVHCommonState *s)
>  }
>  #endif
>  
> +static void xen_set_pci_intx_irq(void *opaque, int irq, int level)
> +{
> +    if (xen_set_pci_intx_level(xen_domid, 0, 0, 0, irq, level)) {

Looking at the implementation of XEN_DMOP_set_pci_intx_level in
xen/arch/x86/hvm/dm.c, it looks like the device parameter of
xen_set_pci_intx_level is required?
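
For reference, my recollection of the wrapper's signature (please
double-check against include/hw/xen/xen_native.h, this is from memory):

    /* Sketch only: the 0, 0, 0 arguments in the call above would then be
     * segment, bus and device, with "irq" passed as the intx pin.  */
    int xen_set_pci_intx_level(domid_t domid, uint16_t segment,
                               uint8_t bus, uint8_t device,
                               uint8_t intx, unsigned int level);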


> +        error_report("xendevicemodel_set_pci_intx_level failed");
> +    }
> +}
> +
> +static inline void xenpvh_gpex_init(XenPVHCommonState *s,
> +                                    MemoryRegion *sysmem,
> +                                    hwaddr ecam_base, hwaddr ecam_size,
> +                                    hwaddr mmio_base, hwaddr mmio_size,
> +                                    hwaddr mmio_high_base,
> +                                    hwaddr mmio_high_size,
> +                                    int intx_irq_base)
> +{
> +    MemoryRegion *ecam_reg;
> +    MemoryRegion *mmio_reg;
> +    DeviceState *dev;
> +    int i;
> +
> +    object_initialize_child(OBJECT(s), "gpex", &s->pci.gpex,
> +                            TYPE_GPEX_HOST);
> +    dev = DEVICE(&s->pci.gpex);
> +    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> +
> +    ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> +    memory_region_add_subregion(sysmem, ecam_base, ecam_reg);

I notice we don't use ecam_size anywhere? Is that because the size is
standard?


> +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> +
> +    if (mmio_size) {
> +        memory_region_init_alias(&s->pci.mmio_alias, OBJECT(dev), "pcie-mmio",
> +                                 mmio_reg, mmio_base, mmio_size);
> +        memory_region_add_subregion(sysmem, mmio_base, &s->pci.mmio_alias);
> +    }
> +
> +    if (mmio_high_size) {
> +        memory_region_init_alias(&s->pci.mmio_high_alias, OBJECT(dev),
> +                "pcie-mmio-high",
> +                mmio_reg, mmio_high_base, mmio_high_size);
> +        memory_region_add_subregion(sysmem, mmio_high_base,
> +                &s->pci.mmio_high_alias);
> +    }
> +
> +    for (i = 0; i < GPEX_NUM_IRQS; i++) {
> +        qemu_irq irq = qemu_allocate_irq(xen_set_pci_intx_irq, s, i);
> +
> +        sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> +        gpex_set_irq_num(GPEX_HOST(dev), i, intx_irq_base + i);
> +        xen_set_pci_link_route(i, intx_irq_base + i);

xen_set_pci_link_route is not currently implemented on ARM?

Looking at hw/i386/pc_piix.c:piix_intx_routing_notifier_xen it seems
that the routing is much more complex over there. But looking at other
machines that use GPEX such as hw/arm/virt.c it looks like the routing
is straightforward the same way as in this patch.

I thought that PCI interrupt pin swizzling was required, but maybe not ?

It is totally fine if we do something different, simpler, than
hw/i386/pc_piix.c:piix_intx_routing_notifier_xen. I just want to make
sure that things remain consistent between ARM and x86, and also between
Xen and QEMU view of virtual PCI interrupt routing.
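
To spell out what I think happens (a sketch from memory, so please
double-check): GPEX installs the standard swizzle as the root bus'
map_irq hook, so by the time xen_set_pci_intx_irq() fires, "irq" should
already be the swizzled INTA-INTD line, and only the flat
intx_irq_base + i mapping into Xen GSIs is left to this patch:

    /* Conventional PCI INTx swizzle, equivalent to what I believe
     * pci_swizzle_map_irq_fn() in hw/pci/pci.c does (sketch, not a quote).  */
    static int intx_swizzle(PCIDevice *pci_dev, int pin)
    {
        return (PCI_SLOT(pci_dev->devfn) + pin) % PCI_NUM_PINS;
    }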



> +    }
> +}
> +
>  static void xen_pvh_realize(DeviceState *dev, Error **errp)
>  {
>      XenPVHCommonState *s = XEN_PVH_COMMON(dev);
> @@ -152,6 +205,14 @@ static void xen_pvh_realize(DeviceState *dev, Error **errp)
>          warn_report("tpm-base-addr is not provided. TPM will not be enabled");
>      }
>  #endif
> +
> +    if (s->cfg.ecam.size) {
> +        xenpvh_gpex_init(s, sysmem,
> +                         s->cfg.ecam.base, s->cfg.ecam.size,
> +                         s->cfg.mmio.base, s->cfg.mmio.size,
> +                         s->cfg.mmio_high.base, s->cfg.mmio_high.size,
> +                         s->cfg.pci_intx_irq_base);
> +    }
>  }
>  
>  #define DEFINE_PROP_MEMMAP(n, f) \
> @@ -165,10 +226,15 @@ static Property xen_pvh_properties[] = {
>      DEFINE_PROP_MEMMAP("ram-high", ram_high),
>      DEFINE_PROP_MEMMAP("virtio-mmio", virtio_mmio),
>      DEFINE_PROP_MEMMAP("tpm", tpm),
> +    DEFINE_PROP_MEMMAP("pci-ecam", ecam),
> +    DEFINE_PROP_MEMMAP("pci-mmio", mmio),
> +    DEFINE_PROP_MEMMAP("pci-mmio-high", mmio_high),
>      DEFINE_PROP_UINT32("virtio-mmio-num", XenPVHCommonState,
>                         cfg.virtio_mmio_num, 0),
>      DEFINE_PROP_UINT32("virtio-mmio-irq-base", XenPVHCommonState,
>                         cfg.virtio_mmio_irq_base, 0),
> +    DEFINE_PROP_UINT32("pci-intx-irq-base", XenPVHCommonState,
> +                       cfg.pci_intx_irq_base, 0),
>      DEFINE_PROP_END_OF_LIST()
>  };
>  
> diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
> index e958b441fd..faacfca70e 100644
> --- a/include/hw/xen/xen-pvh-common.h
> +++ b/include/hw/xen/xen-pvh-common.h
> @@ -29,17 +29,25 @@ typedef struct XenPVHCommonState {
>          MemoryRegion high;
>      } ram;
>  
> +    struct {
> +        GPEXHost gpex;
> +        MemoryRegion mmio_alias;
> +        MemoryRegion mmio_high_alias;
> +    } pci;
> +
>      struct {
>          uint64_t ram_size;
>          uint32_t max_cpus;
>          uint32_t virtio_mmio_num;
>          uint32_t virtio_mmio_irq_base;
> +        uint32_t pci_intx_irq_base;
>          struct {
>              uint64_t base;
>              uint64_t size;
>          } ram_low, ram_high,
>            virtio_mmio,
> -          tpm;
> +          tpm,
> +          ecam, mmio, mmio_high;
>      } cfg;
>  } XenPVHCommonState;
>  #endif
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine
  2024-08-12 13:06 ` [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine Edgar E. Iglesias
@ 2024-08-13  1:48   ` Stefano Stabellini
  2024-08-14 15:50     ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:48 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> This adds a Xen PVH x86 machine based on the PVH Common
> module used by the ARM PVH machine.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> ---
>  hw/i386/xen/meson.build |   1 +
>  hw/i386/xen/xen-pvh.c   | 196 ++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 197 insertions(+)
>  create mode 100644 hw/i386/xen/xen-pvh.c
> 
> diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
> index 3f0df8bc07..c73c62b8e3 100644
> --- a/hw/i386/xen/meson.build
> +++ b/hw/i386/xen/meson.build
> @@ -4,6 +4,7 @@ i386_ss.add(when: 'CONFIG_XEN', if_true: files(
>  ))
>  i386_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
>    'xen-hvm.c',
> +  'xen-pvh.c',
>  ))
>  
>  i386_ss.add(when: 'CONFIG_XEN_BUS', if_true: files(
> diff --git a/hw/i386/xen/xen-pvh.c b/hw/i386/xen/xen-pvh.c
> new file mode 100644
> index 0000000000..9c3d3fc58d
> --- /dev/null
> +++ b/hw/i386/xen/xen-pvh.c
> @@ -0,0 +1,196 @@
> +/*
> + * QEMU Xen PVH x86 Machine
> + *
> + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> + * Written by Edgar E. Iglesias <edgar.iglesias@amd.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "qapi/visitor.h"
> +#include "qemu/error-report.h"
> +#include "hw/boards.h"
> +#include "sysemu/sysemu.h"
> +#include "hw/xen/arch_hvm.h"
> +#include "hw/xen/xen.h"
> +#include "hw/xen/xen-pvh-common.h"
> +
> +#define TYPE_XEN_PVH_X86  MACHINE_TYPE_NAME("xenpvh")
> +OBJECT_DECLARE_SIMPLE_TYPE(XenPVHx86State, XEN_PVH_X86)
> +
> +#define PVH_MAX_CPUS 128
> +
> +struct XenPVHx86State {
> +    /*< private >*/
> +    MachineState parent;
> +
> +    DeviceState *cpu[PVH_MAX_CPUS];
> +    XenPVHCommonState pvh;
> +
> +    /*
> +     * We provide these properties to allow Xen to move things to other
> +     * addresses for example when users need to accomodate the memory-map
> +     * for 1:1 mapped devices/memory.
> +     */
> +    struct {
> +        MemMapEntry ram_low, ram_high;
> +        MemMapEntry pci_ecam, pci_mmio, pci_mmio_high;

Can we use the same properties already present under XenPVHCommonState?



> +    } cfg;
> +};
> +
> +static void xenpvh_cpu_new(MachineState *ms,
> +                           XenPVHx86State *xp,
> +                           int cpu_idx,
> +                           int64_t apic_id)
> +{
> +    Object *cpu = object_new(ms->cpu_type);
> +
> +    object_property_add_child(OBJECT(ms), "cpu[*]", cpu);
> +    object_property_set_uint(cpu, "apic-id", apic_id, &error_fatal);
> +    qdev_realize(DEVICE(cpu), NULL, &error_fatal);
> +    object_unref(cpu);
> +
> +    xp->cpu[cpu_idx] = DEVICE(cpu);


Why do we need to create CPU devices in QEMU given that we are only
doing device emulation? I guess it is because ....



> +}
> +
> +static void xenpvh_init(MachineState *ms)
> +{
> +    XenPVHx86State *xp = XEN_PVH_X86(ms);
> +    const struct {
> +        const char *name;
> +        MemMapEntry *map;
> +    } map[] = {
> +        { "ram-low", &xp->cfg.ram_low },
> +        { "ram-high", &xp->cfg.ram_high },
> +        { "pci-ecam", &xp->cfg.pci_ecam },
> +        { "pci-mmio", &xp->cfg.pci_mmio },
> +        { "pci-mmio-high", &xp->cfg.pci_mmio_high },
> +    };
> +    int i;
> +
> +    object_initialize_child(OBJECT(ms), "pvh", &xp->pvh, TYPE_XEN_PVH_COMMON);
> +    object_property_set_int(OBJECT(&xp->pvh), "max-cpus", ms->smp.max_cpus,
> +                            &error_abort);
> +    object_property_set_int(OBJECT(&xp->pvh), "ram-size", ms->ram_size,
> +                            &error_abort);
> +
> +    for (i = 0; i < ARRAY_SIZE(map); i++) {
> +        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
> +        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
> +
> +        object_property_set_int(OBJECT(&xp->pvh), base_name, map[i].map->base,
> +                                 &error_abort);
> +        object_property_set_int(OBJECT(&xp->pvh), size_name, map[i].map->size,
> +                                 &error_abort);
> +    }
> +
> +    /* GSI's 16 - 20 are used for legacy PCIe INTX IRQs.  */
> +    object_property_set_int(OBJECT(&xp->pvh), "pci-intx-irq-base", 16,
> +                            &error_abort);
> +
> +    sysbus_realize(SYS_BUS_DEVICE(&xp->pvh), &error_abort);
> +
> +    /* Create dummy cores. This will indirectly create the APIC MSI window.  */

... of the APIC MSI window ?



> +    for (i = 0; i < ms->smp.cpus; i++) {
> +        xenpvh_cpu_new(ms, xp, i, i);
> +    }
> +}
> +
> +#define XENPVH_PROP_MEMMAP_SETTER(n, f)                                    \
> +static void xenpvh_set_ ## n ## _ ## f(Object *obj, Visitor *v,            \
> +                                       const char *name, void *opaque,     \
> +                                       Error **errp)                       \
> +{                                                                          \
> +    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
> +    uint64_t value;                                                        \
> +                                                                           \
> +    if (!visit_type_size(v, name, &value, errp)) {                         \
> +        return;                                                            \
> +    }                                                                      \
> +    xp->cfg.n.f = value;                                                   \
> +}
> +
> +#define XENPVH_PROP_MEMMAP_GETTER(n, f)                                    \
> +static void xenpvh_get_ ## n ## _ ## f(Object *obj, Visitor *v,            \
> +                                       const char *name, void *opaque,     \
> +                                       Error **errp)                       \
> +{                                                                          \
> +    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
> +    uint64_t value = xp->cfg.n.f;                                          \
> +                                                                           \
> +    visit_type_uint64(v, name, &value, errp);                              \
> +}
> +
> +#define XENPVH_PROP_MEMMAP(n)              \
> +    XENPVH_PROP_MEMMAP_SETTER(n, base)     \
> +    XENPVH_PROP_MEMMAP_SETTER(n, size)     \
> +    XENPVH_PROP_MEMMAP_GETTER(n, base)     \
> +    XENPVH_PROP_MEMMAP_GETTER(n, size)
> +
> +
> +XENPVH_PROP_MEMMAP(ram_low)
> +XENPVH_PROP_MEMMAP(ram_high)
> +XENPVH_PROP_MEMMAP(pci_ecam)
> +XENPVH_PROP_MEMMAP(pci_mmio)
> +XENPVH_PROP_MEMMAP(pci_mmio_high)

I would make all of these common with ARM. It wouldn't hurt to make them
configurable on ARM too; in fact, we might need that for 1:1 mapped guests.



> +static void xenpvh_instance_init(Object *obj)
> +{
> +    XenPVHx86State *xp = XEN_PVH_X86(obj);
> +
> +    /* Default memmap.  */
> +    xp->cfg.ram_low.base = 0x0;
> +    xp->cfg.ram_low.size = 0x80000000U;
> +    xp->cfg.ram_high.base = 0xC000000000ULL;
> +    xp->cfg.ram_high.size = 0x4000000000ULL;
> +}
> +
> +static void xenpvh_machine_class_init(ObjectClass *oc, void *data)
> +{
> +    MachineClass *mc = MACHINE_CLASS(oc);
> +
> +    mc->desc = "Xen PVH x86 machine";
> +    mc->init = xenpvh_init;
> +    mc->max_cpus = PVH_MAX_CPUS;
> +    mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
> +    mc->default_machine_opts = "accel=xen";
> +    /* Set explicitly here to make sure that real ram_size is passed */
> +    mc->default_ram_size = 0;
> +
> +#define OC_MEMMAP_PROP(c, prop_name, name)                               \
> +do {                                                                     \
> +    object_class_property_add(c, prop_name "-base", "uint64_t",          \
> +                              xenpvh_get_ ## name ## _base,              \
> +                              xenpvh_set_ ## name ## _base, NULL, NULL); \
> +    object_class_property_set_description(oc, prop_name "-base",         \
> +                              prop_name " base address");                \
> +    object_class_property_add(c, prop_name "-size", "uint64_t",          \
> +                              xenpvh_get_ ## name ## _size,              \
> +                              xenpvh_set_ ## name ## _size, NULL, NULL); \
> +    object_class_property_set_description(oc, prop_name "-size",         \
> +                              prop_name " size of memory region");       \
> +} while (0)
> +
> +    OC_MEMMAP_PROP(oc, "ram-low", ram_low);
> +    OC_MEMMAP_PROP(oc, "ram-high", ram_high);
> +    OC_MEMMAP_PROP(oc, "pci-ecam", pci_ecam);
> +    OC_MEMMAP_PROP(oc, "pci-mmio", pci_mmio);
> +    OC_MEMMAP_PROP(oc, "pci-mmio-high", pci_mmio_high);
> +}
> +
> +static const TypeInfo xenpvh_machine_type = {
> +    .name = TYPE_XEN_PVH_X86,
> +    .parent = TYPE_MACHINE,
> +    .class_init = xenpvh_machine_class_init,
> +    .instance_init = xenpvh_instance_init,
> +    .instance_size = sizeof(XenPVHx86State),
> +};
> +
> +static void xenpvh_machine_register_types(void)
> +{
> +    type_register_static(&xenpvh_machine_type);
> +}
> +
> +type_init(xenpvh_machine_register_types)
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description
  2024-08-12 13:06 ` [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description Edgar E. Iglesias
@ 2024-08-13  1:49   ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13  1:49 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, Paolo Bonzini

On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>


Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  MAINTAINERS                 |  1 +
>  docs/system/i386/xenpvh.rst | 49 +++++++++++++++++++++++++++++++++++++
>  docs/system/target-i386.rst |  1 +
>  3 files changed, 51 insertions(+)
>  create mode 100644 docs/system/i386/xenpvh.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a24c2e14d9..da4c9d4d46 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -560,6 +560,7 @@ F: include/sysemu/xen.h
>  F: include/sysemu/xen-mapcache.h
>  F: stubs/xen-hw-stub.c
>  F: docs/system/arm/xenpvh.rst
> +F: docs/system/i386/xenpvh.rst
>  
>  Guest CPU Cores (NVMM)
>  ----------------------
> diff --git a/docs/system/i386/xenpvh.rst b/docs/system/i386/xenpvh.rst
> new file mode 100644
> index 0000000000..354250f073
> --- /dev/null
> +++ b/docs/system/i386/xenpvh.rst
> @@ -0,0 +1,49 @@
> +Xen PVH machine (``xenpvh``)
> +=========================================
> +
> +Xen supports a spectrum of types of guests that vary in how they depend
> +on HW virtualization features, emulation models and paravirtualization.
> +PVH is a mode that uses HW virtualization features (like HVM) but tries
> +to avoid emulation models and instead use passthrough or
> +paravirtualized devices.
> +
> +QEMU can be used to provide PV virtio devices on an emulated PCIe controller.
> +That is the purpose of this minimal machine.
> +
> +Supported devices
> +-----------------
> +
> +The x86 Xen PVH QEMU machine provide the following devices:
> +
> +- RAM
> +- GPEX host bridge
> +- virtio-pci devices
> +
> +The idea is to only connect virtio-pci devices but in theory any compatible
> +PCI device model will work depending on Xen and guest support.
> +
> +Running
> +-------
> +
> +The Xen tools will typically construct a command-line and launch QEMU
> +for you when needed. But here's an example of what it can look like in
> +case you need to construct one manually:
> +
> +.. code-block:: console
> +
> +    qemu-system-i386 -xen-domid 3 -no-shutdown        \
> +      -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-3,server=on,wait=off \
> +      -mon chardev=libxl-cmd,mode=control             \
> +      -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-3,server=on,wait=off \
> +      -mon chardev=libxenstat-cmd,mode=control        \
> +      -nodefaults                                     \
> +      -no-user-config                                 \
> +      -xen-attach -name g0                            \
> +      -vnc none                                       \
> +      -display none                                   \
> +      -device virtio-net-pci,id=nic0,netdev=net0,mac=00:16:3e:5c:81:78 \
> +      -netdev type=tap,id=net0,ifname=vif3.0-emu,br=xenbr0,script=no,downscript=no \
> +      -smp 4,maxcpus=4                                \
> +      -nographic                                      \
> +      -machine xenpvh,ram-low-base=0,ram-low-size=2147483648,ram-high-base=4294967296,ram-high-size=2147483648,pci-ecam-base=824633720832,pci-ecam-size=268435456,pci-mmio-base=4026531840,pci-mmio-size=33554432,pci-mmio-high-base=824902156288,pci-mmio-high-size=68719476736 \
> +      -m 4096
> diff --git a/docs/system/target-i386.rst b/docs/system/target-i386.rst
> index 1b8a1f248a..23e84e3ba7 100644
> --- a/docs/system/target-i386.rst
> +++ b/docs/system/target-i386.rst
> @@ -26,6 +26,7 @@ Architectural features
>     i386/cpu
>     i386/hyperv
>     i386/xen
> +   i386/xenpvh
>     i386/kvm-pv
>     i386/sgx
>     i386/amd-memory-encryption
> -- 
> 2.43.0
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-13  1:47   ` Stefano Stabellini
@ 2024-08-13 17:02     ` Edgar E. Iglesias
  2024-08-13 17:20       ` Andrew Cooper
  2024-08-13 22:52       ` Stefano Stabellini
  0 siblings, 2 replies; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-13 17:02 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > 
> > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> > servers to handle hotplug.
> > 
> > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > ---
> >  hw/arm/xen_arm.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > index 5f75cc3779..ef8315969c 100644
> > --- a/hw/arm/xen_arm.c
> > +++ b/hw/arm/xen_arm.c
> > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
> >  
> >      xen_init_ram(machine);
> >  
> > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> >  
> >      xen_create_virtio_mmio_devices(xam);
> >  
> > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> >      MachineClass *mc = MACHINE_CLASS(oc);
> >      mc->desc = "Xen PVH ARM machine";
> >      mc->init = xen_arm_init;
> > -    mc->max_cpus = 1;
> > +    /* MAX number of vcpus supported by Xen.  */
> > +    mc->max_cpus = GUEST_MAX_VCPUS;
> 
> Will this cause allocations of data structures with 128 elements?
> Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called

Yes, in theory there's probably some overhead with this, but as you correctly
noted below, a PVH-aware xl will set the max_cpus option to a lower value.

With a non-PVH-aware xl, I was a little worried about the overhead,
but I couldn't see any visible slow-down on ARM, either in boot or in network
performance (I didn't run very sophisticated benchmarks).


> later on with the precise vCPU value which should be provided to QEMU
> via the -smp command line option
> (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?

Yes, a PVH-aware xl will, for example, pass -smp 2,maxcpus=4 based on
values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
will set maxvcpus to the same value as vcpus.
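
As a purely illustrative example, an xl.cfg with:

    vcpus    = 2
    maxvcpus = 4

should end up as something like "-smp 2,maxcpus=4" on the QEMU command
line, so QEMU sizes its ioreq setup for 4 vcpus while the guest boots
with 2 and can hotplug the rest later.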

Best regards,
Edgar


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-13 17:02     ` Edgar E. Iglesias
@ 2024-08-13 17:20       ` Andrew Cooper
  2024-08-13 22:52       ` Stefano Stabellini
  1 sibling, 0 replies; 35+ messages in thread
From: Andrew Cooper @ 2024-08-13 17:20 UTC (permalink / raw)
  To: Edgar E. Iglesias, Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

On 13/08/2024 6:02 pm, Edgar E. Iglesias wrote:
> On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
>> On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
>>> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>>>
>>> Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
>>> servers to handle hotplug.
>>>
>>> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
>>> ---
>>>  hw/arm/xen_arm.c | 5 +++--
>>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
>>> index 5f75cc3779..ef8315969c 100644
>>> --- a/hw/arm/xen_arm.c
>>> +++ b/hw/arm/xen_arm.c
>>> @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
>>>  
>>>      xen_init_ram(machine);
>>>  
>>> -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
>>> +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
>>>  
>>>      xen_create_virtio_mmio_devices(xam);
>>>  
>>> @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>>>      MachineClass *mc = MACHINE_CLASS(oc);
>>>      mc->desc = "Xen PVH ARM machine";
>>>      mc->init = xen_arm_init;
>>> -    mc->max_cpus = 1;
>>> +    /* MAX number of vcpus supported by Xen.  */
>>> +    mc->max_cpus = GUEST_MAX_VCPUS;

The only suitable number here is the one you get back from XEN_DMOP_nr_vcpus.

GUEST_MAX_VCPUS is, and has always been, bogus as a compile-time constant
in the Xen public headers (yes, ARM inherited it from x86, but it should
never have been copied to start with).  Please do not introduce any
further uses of it.

~Andrew


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-13 17:02     ` Edgar E. Iglesias
  2024-08-13 17:20       ` Andrew Cooper
@ 2024-08-13 22:52       ` Stefano Stabellini
  2024-08-14 11:49         ` Edgar E. Iglesias
  1 sibling, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-13 22:52 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm

On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > 
> > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> > > servers to handle hotplug.
> > > 
> > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > ---
> > >  hw/arm/xen_arm.c | 5 +++--
> > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > index 5f75cc3779..ef8315969c 100644
> > > --- a/hw/arm/xen_arm.c
> > > +++ b/hw/arm/xen_arm.c
> > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
> > >  
> > >      xen_init_ram(machine);
> > >  
> > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> > >  
> > >      xen_create_virtio_mmio_devices(xam);
> > >  
> > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> > >      MachineClass *mc = MACHINE_CLASS(oc);
> > >      mc->desc = "Xen PVH ARM machine";
> > >      mc->init = xen_arm_init;
> > > -    mc->max_cpus = 1;
> > > +    /* MAX number of vcpus supported by Xen.  */
> > > +    mc->max_cpus = GUEST_MAX_VCPUS;
> > 
> > Will this cause allocations of data structures with 128 elements?
> > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
> 
> Yes, in theory there's probably overhead with this but as you correctly
> noted below, a PVH aware xl will set the max_cpus option to a lower value.
> 
> With a non-pvh aware xl, I was a little worried about the overhead
> but I couldn't see any visible slow-down on ARM neither in boot or in network
> performance (I didn't run very sophisticated benchmarks).
 
What do you mean by "non-pvh aware xl"? All useful versions of xl
support pvh?

> > later on with the precise vCPU value which should be provided to QEMU
> > via the -smp command line option
> > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> 
> Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
> values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
> will set maxvcpus to the same value as vcpus.

OK, good. In that case, if this is just an initial value meant to be
overwritten, I think it is best to keep it as 1.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-13 22:52       ` Stefano Stabellini
@ 2024-08-14 11:49         ` Edgar E. Iglesias
  2024-08-15  0:30           ` Stefano Stabellini
  0 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-14 11:49 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm, andrew.cooper3

On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
> On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > > 
> > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> > > > servers to handle hotplug.
> > > > 
> > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > > ---
> > > >  hw/arm/xen_arm.c | 5 +++--
> > > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > > index 5f75cc3779..ef8315969c 100644
> > > > --- a/hw/arm/xen_arm.c
> > > > +++ b/hw/arm/xen_arm.c
> > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
> > > >  
> > > >      xen_init_ram(machine);
> > > >  
> > > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> > > >  
> > > >      xen_create_virtio_mmio_devices(xam);
> > > >  
> > > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> > > >      MachineClass *mc = MACHINE_CLASS(oc);
> > > >      mc->desc = "Xen PVH ARM machine";
> > > >      mc->init = xen_arm_init;
> > > > -    mc->max_cpus = 1;
> > > > +    /* MAX number of vcpus supported by Xen.  */
> > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
> > > 
> > > Will this cause allocations of data structures with 128 elements?
> > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
> > 
> > Yes, in theory there's probably overhead with this but as you correctly
> > noted below, a PVH aware xl will set the max_cpus option to a lower value.
> > 
> > With a non-pvh aware xl, I was a little worried about the overhead
> > but I couldn't see any visible slow-down on ARM neither in boot or in network
> > performance (I didn't run very sophisticated benchmarks).
>  
> What do you mean by "non-pvh aware xl"? All useful versions of xl
> support pvh?


I mean an xl without our PVH patches merged.
xl in upstream doesn't know much about PVH yet.
Even for ARM, we're still carrying significant patches in our tree.


> > > later on with the precise vCPU value which should be provided to QEMU
> > > via the -smp command line option
> > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> > 
> > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
> > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
> > will set maxvcpus to the same value as vcpus.
> 
> OK good. In that case if this is just an initial value meant to be
> overwritten, I think it is best to keep it as 1.

Sorry, but that won't work. I think the confusion here may be that
it's easy to mix up mc->max_cpus and machine->smp.max_cpus; these are
not the same and they serve different purposes.

I'll try to clarify the 3 values in play.

machine->smp.cpus:
Number of guest vcpus active at boot.
Passed to QEMU via the -smp command-line option.
We don't use this value in QEMU's ARM PVH machines.

machine->smp.max_cpus:
Max number of vcpus that the guest can use (equal to or larger than machine->smp.cpus).
Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
This is what we use for xen_register_ioreq().

mc->max_cpus:
Absolute MAX in QEMU used to cap the -smp command-line options.
If xl tries to set -smp (machine->smp.max_cpus) larger than this, QEMU will bail out.
Used to set up xen_register_ioreq() ONLY if -smp maxcpus was NOT set (i.e. by a non-PVH-aware xl).
Cannot be 1 because that would limit QEMU to MAX 1 vcpu.

I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns, but I'll
have to check if we can even issue that hypercall this early in QEMU, since
mc->max_cpus is set up before we even parse the machine options. At that
point we may not yet know which domid we're attaching to.
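
To make the distinction concrete, here is a rough sketch of how the three
values end up being used (illustrative only, not the actual patch; the
example_* names are made up):

#include "qemu/osdep.h"
#include "hw/boards.h"
#include "hw/xen/xen-hvm-common.h"

static const MemoryListener example_listener = {
    .region_add = xen_region_add,
    .region_del = xen_region_del,
    .priority   = MEMORY_LISTENER_PRIORITY_ACCEL,
};

static void example_machine_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);

    /*
     * Hard ceiling for "-smp X,maxcpus=Y"; QEMU bails out if Y exceeds it.
     * It only ends up sizing the ioreq servers when maxcpus is not given
     * on the command line.
     */
    mc->max_cpus = 128; /* e.g. GUEST_MAX_VCPUS */
}

static void example_machine_init(MachineState *ms)
{
    XenIOState *state = g_new0(XenIOState, 1);

    /*
     * ms->smp.cpus:     vcpus online at boot, from "-smp X".
     * ms->smp.max_cpus: hotplug ceiling, from "-smp X,maxcpus=Y";
     *                   defaults to ms->smp.cpus when maxcpus is omitted.
     * Size the ioreq servers for the ceiling so hotplugged vcpus work.
     */
    xen_register_ioreq(state, ms->smp.max_cpus, &example_listener);
}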

Cheers,
Edgar


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module
  2024-08-13  1:47   ` Stefano Stabellini
@ 2024-08-14 12:03     ` Edgar E. Iglesias
  0 siblings, 0 replies; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-14 12:03 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm

On Mon, Aug 12, 2024 at 06:47:51PM -0700, Stefano Stabellini wrote:
> On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > 
> > Break out a common Xen PVH module in preparation for
> > adding a x86 Xen PVH Machine.
> > 
> > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > ---
> >  hw/arm/trace-events             |   5 -
> >  hw/arm/xen_arm.c                | 154 ++++++--------------------
> >  hw/xen/meson.build              |   1 +
> >  hw/xen/trace-events             |   4 +
> >  hw/xen/xen-pvh-common.c         | 185 ++++++++++++++++++++++++++++++++
> >  include/hw/xen/xen-pvh-common.h |  45 ++++++++
> >  6 files changed, 269 insertions(+), 125 deletions(-)
> >  create mode 100644 hw/xen/xen-pvh-common.c
> >  create mode 100644 include/hw/xen/xen-pvh-common.h
> > 
> > diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> > index be6c8f720b..c64ad344bd 100644
> > --- a/hw/arm/trace-events
> > +++ b/hw/arm/trace-events
> > @@ -68,10 +68,5 @@ z2_aer915_send_too_long(int8_t msg) "message too long (%i bytes)"
> >  z2_aer915_send(uint8_t reg, uint8_t value) "reg %d value 0x%02x"
> >  z2_aer915_event(int8_t event, int8_t len) "i2c event =0x%x len=%d bytes"
> >  
> > -# xen_arm.c
> > -xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
> > -xen_init_ram(uint64_t machine_ram_size) "Initialized xen ram with size 0x%"PRIx64
> > -xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
> > -
> >  # bcm2838.c
> >  bcm2838_gic_set_irq(int irq, int level) "gic irq:%d lvl:%d"
> > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > index ef8315969c..b8a5c09bdf 100644
> > --- a/hw/arm/xen_arm.c
> > +++ b/hw/arm/xen_arm.c
> > @@ -12,40 +12,25 @@
> >  #include "hw/irq.h"
> >  #include "hw/sysbus.h"
> >  #include "sysemu/block-backend.h"
> > -#include "sysemu/tpm_backend.h"
> >  #include "sysemu/sysemu.h"
> > -#include "hw/xen/xen-hvm-common.h"
> > +#include "hw/xen/xen-pvh-common.h"
> >  #include "sysemu/tpm.h"
> >  #include "hw/xen/arch_hvm.h"
> > -#include "trace.h"
> >  
> >  #define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
> >  OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> >  
> > -static const MemoryListener xen_memory_listener = {
> > -    .region_add = xen_region_add,
> > -    .region_del = xen_region_del,
> > -    .log_start = NULL,
> > -    .log_stop = NULL,
> > -    .log_sync = NULL,
> > -    .log_global_start = NULL,
> > -    .log_global_stop = NULL,
> > -    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
> > -};
> > -
> >  struct XenArmState {
> >      /*< private >*/
> >      MachineState parent;
> >  
> > -    XenIOState *state;
> > +    XenPVHCommonState pvh;
> >  
> >      struct {
> >          uint64_t tpm_base_addr;
> >      } cfg;
> >  };
> >  
> > -static MemoryRegion ram_lo, ram_hi;
> > -
> >  /*
> >   * VIRTIO_MMIO_DEV_SIZE is imported from tools/libs/light/libxl_arm.c under Xen
> >   * repository.
> > @@ -57,64 +42,6 @@ static MemoryRegion ram_lo, ram_hi;
> >  #define NR_VIRTIO_MMIO_DEVICES   \
> >     (GUEST_VIRTIO_MMIO_SPI_LAST - GUEST_VIRTIO_MMIO_SPI_FIRST)
> >  
> > -static void xen_set_irq(void *opaque, int irq, int level)
> > -{
> > -    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> > -        error_report("xendevicemodel_set_irq_level failed");
> > -    }
> > -}
> > -
> > -static void xen_create_virtio_mmio_devices(XenArmState *xam)
> > -{
> > -    int i;
> > -
> > -    for (i = 0; i < NR_VIRTIO_MMIO_DEVICES; i++) {
> > -        hwaddr base = GUEST_VIRTIO_MMIO_BASE + i * VIRTIO_MMIO_DEV_SIZE;
> > -        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> > -                                         GUEST_VIRTIO_MMIO_SPI_FIRST + i);
> > -
> > -        sysbus_create_simple("virtio-mmio", base, irq);
> > -
> > -        trace_xen_create_virtio_mmio_devices(i,
> > -                                             GUEST_VIRTIO_MMIO_SPI_FIRST + i,
> > -                                             base);
> > -    }
> > -}
> 
> There are 4 trivial functions in this file:
> 
> void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
>                          bool add)
> void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
> 
> Do you think it would make sense to move these too to xen-pvh-common.c ?


Yes, it would be nice to at least provide some sort of default
implementation for these that archs like x86 can override. I'll look
into it for v2.
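
One possible shape for that (purely a sketch of the idea, not the v2 patch;
the hook names and class layout below are made up) would be to give the
common device class overridable hooks with defaults in xen-pvh-common.c:

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "hw/hw.h"
#include "hw/sysbus.h"
#include "hw/xen/xen-hvm-common.h"

typedef struct XenPVHCommonClass {
    SysBusDeviceClass parent_class;

    /* Per-arch hooks; NULL means "use the common default". */
    void (*handle_ioreq)(XenIOState *state, ioreq_t *req);
    void (*set_memory)(XenIOState *state, MemoryRegionSection *section,
                       bool add);
} XenPVHCommonClass;

/* Common default, matching what the ARM machine does today. */
static void xen_pvh_default_handle_ioreq(XenIOState *state, ioreq_t *req)
{
    hw_error("Invalid ioreq type 0x%x\n", req->type);
}

/* Called from the common class_init so an arch only overrides what it needs. */
static void xen_pvh_install_default_hooks(XenPVHCommonClass *xpc)
{
    if (!xpc->handle_ioreq) {
        xpc->handle_ioreq = xen_pvh_default_handle_ioreq;
    }
}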


> 
> 
> > -
> > -static void xen_init_ram(MachineState *machine)
> > -{
> > -    MemoryRegion *sysmem = get_system_memory();
> > -    ram_addr_t block_len, ram_size[GUEST_RAM_BANKS];
> > -
> > -    trace_xen_init_ram(machine->ram_size);
> > -    if (machine->ram_size <= GUEST_RAM0_SIZE) {
> > -        ram_size[0] = machine->ram_size;
> > -        ram_size[1] = 0;
> > -        block_len = GUEST_RAM0_BASE + ram_size[0];
> > -    } else {
> > -        ram_size[0] = GUEST_RAM0_SIZE;
> > -        ram_size[1] = machine->ram_size - GUEST_RAM0_SIZE;
> > -        block_len = GUEST_RAM1_BASE + ram_size[1];
> > -    }
> > -
> > -    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
> > -                           &error_fatal);
> > -
> > -    memory_region_init_alias(&ram_lo, NULL, "xen.ram.lo", &xen_memory,
> > -                             GUEST_RAM0_BASE, ram_size[0]);
> > -    memory_region_add_subregion(sysmem, GUEST_RAM0_BASE, &ram_lo);
> > -    if (ram_size[1] > 0) {
> > -        memory_region_init_alias(&ram_hi, NULL, "xen.ram.hi", &xen_memory,
> > -                                 GUEST_RAM1_BASE, ram_size[1]);
> > -        memory_region_add_subregion(sysmem, GUEST_RAM1_BASE, &ram_hi);
> > -    }
> > -
> > -    /* Setup support for grants.  */
> > -    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
> > -                           &error_fatal);
> > -    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
> > -}
> > -
> >  void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> >  {
> >      hw_error("Invalid ioreq type 0x%x\n", req->type);
> > @@ -135,55 +62,42 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
> >  {
> >  }
> >  
> > -#ifdef CONFIG_TPM
> > -static void xen_enable_tpm(XenArmState *xam)
> > -{
> > -    Error *errp = NULL;
> > -    DeviceState *dev;
> > -    SysBusDevice *busdev;
> > -
> > -    TPMBackend *be = qemu_find_tpm_be("tpm0");
> > -    if (be == NULL) {
> > -        error_report("Couldn't find tmp0 backend");
> > -        return;
> > -    }
> > -    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> > -    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> > -    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> > -    busdev = SYS_BUS_DEVICE(dev);
> > -    sysbus_realize_and_unref(busdev, &error_fatal);
> > -    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
> > -
> > -    trace_xen_enable_tpm(xam->cfg.tpm_base_addr);
> > -}
> > -#endif
> > -
> > -static void xen_arm_init(MachineState *machine)
> > +static void xen_arm_init(MachineState *ms)
> >  {
> > -    XenArmState *xam = XEN_ARM(machine);
> > -
> > -    xam->state =  g_new0(XenIOState, 1);
> > +    XenArmState *xam = XEN_ARM(ms);
> > +    const struct {
> > +        const char *name;
> > +        MemMapEntry map;
> > +    } map[] = {
> > +        { "ram-low", { GUEST_RAM0_BASE, GUEST_RAM0_SIZE } },
> > +        { "ram-high", { GUEST_RAM1_BASE, GUEST_RAM1_SIZE } },
> > +        { "virtio-mmio", { GUEST_VIRTIO_MMIO_BASE, VIRTIO_MMIO_DEV_SIZE } },
> > +        { "tpm", { xam->cfg.tpm_base_addr, 0x1000 } },
> > +    };
> > +    int i;
> >  
> > -    if (machine->ram_size == 0) {
> > -        warn_report("%s non-zero ram size not specified. QEMU machine started"
> > -                    " without IOREQ (no emulated devices including virtio)",
> > -                    MACHINE_CLASS(object_get_class(OBJECT(machine)))->desc);
> > -        return;
> > +    object_initialize_child(OBJECT(ms), "pvh", &xam->pvh, TYPE_XEN_PVH_COMMON);
> > +
> > +    object_property_set_int(OBJECT(&xam->pvh), "max-cpus", ms->smp.max_cpus,
> > +                            &error_abort);
> 
> Is ms->smp.max_cpus updated according to the QEMU command line option?
> If so, that would solve the problem of the static initialization of
> max_cpus to GUEST_MAX_VCPUS in xen_arm_machine_class_init.

Yes, it kind of solves it but I'll leave that discussion to the other
thread.

> 
> 
> > +    object_property_set_int(OBJECT(&xam->pvh), "ram-size", ms->ram_size,
> > +                            &error_abort);
> > +    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-num",
> > +                            NR_VIRTIO_MMIO_DEVICES, &error_abort);
> > +    object_property_set_int(OBJECT(&xam->pvh), "virtio-mmio-irq-base",
> > +                            GUEST_VIRTIO_MMIO_SPI_FIRST, &error_abort);
> > +
> > +    for (i = 0; i < ARRAY_SIZE(map); i++) {
> > +        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
> > +        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
> > +
> > +        object_property_set_int(OBJECT(&xam->pvh), base_name, map[i].map.base,
> > +                                &error_abort);
> > +        object_property_set_int(OBJECT(&xam->pvh), size_name, map[i].map.size,
> > +                                &error_abort);
> >      }
> >  
> > -    xen_init_ram(machine);
> > -
> > -    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> > -
> > -    xen_create_virtio_mmio_devices(xam);
> > -
> > -#ifdef CONFIG_TPM
> > -    if (xam->cfg.tpm_base_addr) {
> > -        xen_enable_tpm(xam);
> 
> Do you think it makes sense also to move xen_arm_get_tpm_base_addr and
> xen_arm_set_tpm_base_addr to xen-pvh-common.c ?
>

Yes, perhaps we should create an abstract PVH machine. We could probably
reuse more code between x86/ARM and future PVH support. I can have a
look at it for v2.

> 
> 
> > -    } else {
> > -        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
> > -    }
> > -#endif
> > +    sysbus_realize(SYS_BUS_DEVICE(&xam->pvh), &error_abort);
> >  }
> >  
> >  #ifdef CONFIG_TPM
> > diff --git a/hw/xen/meson.build b/hw/xen/meson.build
> > index d887fa9ba4..4a486e3673 100644
> > --- a/hw/xen/meson.build
> > +++ b/hw/xen/meson.build
> > @@ -15,6 +15,7 @@ xen_specific_ss = ss.source_set()
> >  xen_specific_ss.add(files(
> >    'xen-mapcache.c',
> >    'xen-hvm-common.c',
> > +  'xen-pvh-common.c',
> >  ))
> >  if have_xen_pci_passthrough
> >    xen_specific_ss.add(files(
> > diff --git a/hw/xen/trace-events b/hw/xen/trace-events
> > index d1b27f6c11..a07fe41c6d 100644
> > --- a/hw/xen/trace-events
> > +++ b/hw/xen/trace-events
> > @@ -64,6 +64,10 @@ destroy_hvm_domain_cannot_acquire_handle(void) "Cannot acquire xenctrl handle"
> >  destroy_hvm_domain_failed_action(const char *action, int sts, char *errno_s) "xc_domain_shutdown failed to issue %s, sts %d, %s"
> >  destroy_hvm_domain_action(int xen_domid, const char *action) "Issued domain %d %s"
> >  
> > +# xen-pvh-common.c
> > +xen_create_virtio_mmio_devices(int i, int irq, uint64_t base) "Created virtio-mmio device %d: irq %d base 0x%"PRIx64
> > +xen_enable_tpm(uint64_t addr) "Connected tpmdev at address 0x%"PRIx64
> > +
> >  # xen-mapcache.c
> >  xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
> >  xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
> > diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> > new file mode 100644
> > index 0000000000..0d368398d0
> > --- /dev/null
> > +++ b/hw/xen/xen-pvh-common.c
> > @@ -0,0 +1,185 @@
> > +/*
> > + * Common Xen PVH code.
> > + *
> > + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qemu/error-report.h"
> > +#include "qapi/error.h"
> > +#include "hw/boards.h"
> > +#include "hw/irq.h"
> > +#include "hw/sysbus.h"
> > +#include "sysemu/sysemu.h"
> > +#include "sysemu/tpm.h"
> > +#include "sysemu/tpm_backend.h"
> > +#include "hw/xen/xen-pvh-common.h"
> > +#include "trace.h"
> > +
> > +static const MemoryListener xen_memory_listener = {
> > +    .region_add = xen_region_add,
> > +    .region_del = xen_region_del,
> > +    .log_start = NULL,
> > +    .log_stop = NULL,
> > +    .log_sync = NULL,
> > +    .log_global_start = NULL,
> > +    .log_global_stop = NULL,
> > +    .priority = MEMORY_LISTENER_PRIORITY_ACCEL,
> > +};
> > +
> > +static void xen_pvh_init_ram(XenPVHCommonState *s,
> > +                             MemoryRegion *sysmem)
> > +{
> > +    ram_addr_t block_len, ram_size[2];
> > +
> > +    if (s->cfg.ram_size <= s->cfg.ram_low.size) {
> > +        ram_size[0] = s->cfg.ram_size;
> > +        ram_size[1] = 0;
> > +        block_len = s->cfg.ram_low.base + ram_size[0];
> > +    } else {
> > +        ram_size[0] = s->cfg.ram_low.size;
> > +        ram_size[1] = s->cfg.ram_size - s->cfg.ram_low.size;
> > +        block_len = s->cfg.ram_high.base + ram_size[1];
> > +    }
> > +
> > +    memory_region_init_ram(&xen_memory, NULL, "xen.ram", block_len,
> > +                           &error_fatal);
> > +
> > +    memory_region_init_alias(&s->ram.low, NULL, "xen.ram.lo", &xen_memory,
> > +                             s->cfg.ram_low.base, ram_size[0]);
> > +    memory_region_add_subregion(sysmem, s->cfg.ram_low.base, &s->ram.low);
> > +    if (ram_size[1] > 0) {
> > +        memory_region_init_alias(&s->ram.high, NULL, "xen.ram.hi", &xen_memory,
> > +                                 s->cfg.ram_high.base, ram_size[1]);
> > +        memory_region_add_subregion(sysmem, s->cfg.ram_high.base, &s->ram.high);
> > +    }
> > +
> > +    /* Setup support for grants.  */
> > +    memory_region_init_ram(&xen_grants, NULL, "xen.grants", block_len,
> > +                           &error_fatal);
> > +    memory_region_add_subregion(sysmem, XEN_GRANT_ADDR_OFF, &xen_grants);
> > +}
> > +
> > +static void xen_set_irq(void *opaque, int irq, int level)
> > +{
> > +    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
> > +        error_report("xendevicemodel_set_irq_level failed");
> > +    }
> > +}
> > +
> > +static void xen_create_virtio_mmio_devices(XenPVHCommonState *s)
> > +{
> > +    int i;
> > +
> > +    for (i = 0; i < s->cfg.virtio_mmio_num; i++) {
> > +        hwaddr base = s->cfg.virtio_mmio.base + i * s->cfg.virtio_mmio.size;
> > +        qemu_irq irq = qemu_allocate_irq(xen_set_irq, NULL,
> > +                                         s->cfg.virtio_mmio_irq_base + i);
> > +
> > +        sysbus_create_simple("virtio-mmio", base, irq);
> > +
> > +        trace_xen_create_virtio_mmio_devices(i,
> > +                                             s->cfg.virtio_mmio_irq_base + i,
> > +                                             base);
> > +    }
> > +}
> > +
> > +#ifdef CONFIG_TPM
> > +static void xen_enable_tpm(XenPVHCommonState *s)
> > +{
> > +    Error *errp = NULL;
> > +    DeviceState *dev;
> > +    SysBusDevice *busdev;
> > +
> > +    TPMBackend *be = qemu_find_tpm_be("tpm0");
> > +    if (be == NULL) {
> > +        error_report("Couldn't find tmp0 backend");
> > +        return;
> > +    }
> > +    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> > +    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> > +    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> > +    busdev = SYS_BUS_DEVICE(dev);
> > +    sysbus_realize_and_unref(busdev, &error_fatal);
> > +    sysbus_mmio_map(busdev, 0, s->cfg.tpm.base);
> > +
> > +    trace_xen_enable_tpm(s->cfg.tpm.base);
> > +}
> > +#endif
> > +
> > +static void xen_pvh_realize(DeviceState *dev, Error **errp)
> > +{
> > +    XenPVHCommonState *s = XEN_PVH_COMMON(dev);
> > +    MemoryRegion *sysmem = get_system_memory();
> > +
> > +    if (s->cfg.ram_size == 0) {
> > +        /* FIXME: Prefix with object path and consider bailing out.  */
> 
> I am not sure about these two FIXME
> 
> 
> > +        warn_report("non-zero ram size not specified. QEMU machine started"
> > +                    " without IOREQ (no emulated devices including virtio)");
> 
> Also the warn message has a double negative?


Yeah, I forgot to fix these up before posting. Will fix for v2.


> 
> 
> > +        return;
> > +    }
> > +
> > +    if (s->cfg.max_cpus == 0) {
> > +        /* FIXME: Prefix with object path and bail out.  */
> > +        warn_report("max-cpus not specified. QEMU machine started");
> > +        return;
> > +    }
> > +
> > +    xen_pvh_init_ram(s, sysmem);
> > +    xen_register_ioreq(&s->ioreq, s->cfg.max_cpus, &xen_memory_listener);
> > +
> > +    if (s->cfg.virtio_mmio_num) {
> > +        xen_create_virtio_mmio_devices(s);
> > +    }
> > +
> > +#ifdef CONFIG_TPM
> > +    if (s->cfg.tpm.base) {
> > +        xen_enable_tpm(s);
> > +    } else {
> > +        warn_report("tpm-base-addr is not provided. TPM will not be enabled");
> > +    }
> > +#endif
> > +}
> > +
> > +#define DEFINE_PROP_MEMMAP(n, f) \
> > +    DEFINE_PROP_UINT64(n "-base", XenPVHCommonState, cfg.f.base, 0), \
> > +    DEFINE_PROP_UINT64(n "-size", XenPVHCommonState, cfg.f.size, 0)
> > +
> > +static Property xen_pvh_properties[] = {
> > +    DEFINE_PROP_UINT32("max-cpus", XenPVHCommonState, cfg.max_cpus, 0),
> > +    DEFINE_PROP_UINT64("ram-size", XenPVHCommonState, cfg.ram_size, 0),
> > +    DEFINE_PROP_MEMMAP("ram-low", ram_low),
> > +    DEFINE_PROP_MEMMAP("ram-high", ram_high),
> > +    DEFINE_PROP_MEMMAP("virtio-mmio", virtio_mmio),
> > +    DEFINE_PROP_MEMMAP("tpm", tpm),
> > +    DEFINE_PROP_UINT32("virtio-mmio-num", XenPVHCommonState,
> > +                       cfg.virtio_mmio_num, 0),
> > +    DEFINE_PROP_UINT32("virtio-mmio-irq-base", XenPVHCommonState,
> > +                       cfg.virtio_mmio_irq_base, 0),
> > +    DEFINE_PROP_END_OF_LIST()
> > +};
> > +
> > +static void xen_pvh_class_init(ObjectClass *klass, void *data)
> > +{
> > +    DeviceClass *dc = DEVICE_CLASS(klass);
> > +
> > +    dc->realize = xen_pvh_realize;
> > +    device_class_set_props(dc, xen_pvh_properties);
> > +    /* No VMSD since we haven't got any top-level SoC state to save.  */
> > +}
> > +
> > +static const TypeInfo xen_pvh_info = {
> > +    .name = TYPE_XEN_PVH_COMMON,
> > +    .parent = TYPE_SYS_BUS_DEVICE,
> > +    .instance_size = sizeof(XenPVHCommonState),
> > +    .class_init = xen_pvh_class_init,
> > +};
> > +
> > +static void xen_pvh_register_types(void)
> > +{
> > +    type_register_static(&xen_pvh_info);
> > +}
> > +
> > +type_init(xen_pvh_register_types);
> > diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
> > new file mode 100644
> > index 0000000000..e958b441fd
> > --- /dev/null
> > +++ b/include/hw/xen/xen-pvh-common.h
> > @@ -0,0 +1,45 @@
> > +/*
> > + * QEMU Xen PVH machine - common code.
> > + *
> > + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + */
> > +
> > +#ifndef XEN_PVH_COMMON_H__
> > +#define XEN_PVH_COMMON_H__
> > +
> > +#include <assert.h>
> > +#include "hw/sysbus.h"
> > +#include "hw/hw.h"
> > +#include "hw/xen/xen-hvm-common.h"
> > +#include "hw/pci-host/gpex.h"
> > +
> > +#define TYPE_XEN_PVH_COMMON "xen-pvh-common"
> > +OBJECT_DECLARE_SIMPLE_TYPE(XenPVHCommonState, XEN_PVH_COMMON)
> > +
> > +typedef struct XenPVHCommonState {
> > +    /*< private >*/
> > +    SysBusDevice parent_obj;
> > +
> > +    XenIOState ioreq;
> > +
> > +    struct {
> > +        MemoryRegion low;
> > +        MemoryRegion high;
> > +    } ram;
> > +
> > +    struct {
> > +        uint64_t ram_size;
> > +        uint32_t max_cpus;
> > +        uint32_t virtio_mmio_num;
> > +        uint32_t virtio_mmio_irq_base;
> > +        struct {
> > +            uint64_t base;
> > +            uint64_t size;
> > +        } ram_low, ram_high,
> > +          virtio_mmio,
> > +          tpm;
> > +    } cfg;
> > +} XenPVHCommonState;
> > +#endif
> > -- 
> > 2.43.0
> > 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX
  2024-08-13  1:48   ` Stefano Stabellini
@ 2024-08-14 15:26     ` Edgar E. Iglesias
  2024-08-15  0:29       ` Stefano Stabellini
  0 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-14 15:26 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel

On Mon, Aug 12, 2024 at 06:48:37PM -0700, Stefano Stabellini wrote:
> On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > 
> > Add support for optionally creating a PCIe/GPEX controller.
> > 
> > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > ---
> >  hw/xen/xen-pvh-common.c         | 66 +++++++++++++++++++++++++++++++++
> >  include/hw/xen/xen-pvh-common.h | 10 ++++-
> >  2 files changed, 75 insertions(+), 1 deletion(-)
> > 
> > diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> > index 69a2dbdb6d..b1432e4bd9 100644
> > --- a/hw/xen/xen-pvh-common.c
> > +++ b/hw/xen/xen-pvh-common.c
> > @@ -120,6 +120,59 @@ static void xen_enable_tpm(XenPVHCommonState *s)
> >  }
> >  #endif
> >  
> > +static void xen_set_pci_intx_irq(void *opaque, int irq, int level)
> > +{
> > +    if (xen_set_pci_intx_level(xen_domid, 0, 0, 0, irq, level)) {
> 
> Looking at the implementation of XEN_DMOP_set_pci_intx_level in
> xen/arch/x86/hvm/dm.c, it looks like the device parameter of
> xen_set_pci_intx_level is required?

Yes, by setting device = 0, we're bypassing the irq swizzling in Xen.
I'll try to clarify below.


> 
> 
> > +        error_report("xendevicemodel_set_pci_intx_level failed");
> > +    }
> > +}
> > +
> > +static inline void xenpvh_gpex_init(XenPVHCommonState *s,
> > +                                    MemoryRegion *sysmem,
> > +                                    hwaddr ecam_base, hwaddr ecam_size,
> > +                                    hwaddr mmio_base, hwaddr mmio_size,
> > +                                    hwaddr mmio_high_base,
> > +                                    hwaddr mmio_high_size,
> > +                                    int intx_irq_base)
> > +{
> > +    MemoryRegion *ecam_reg;
> > +    MemoryRegion *mmio_reg;
> > +    DeviceState *dev;
> > +    int i;
> > +
> > +    object_initialize_child(OBJECT(s), "gpex", &s->pci.gpex,
> > +                            TYPE_GPEX_HOST);
> > +    dev = DEVICE(&s->pci.gpex);
> > +    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> > +
> > +    ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> > +    memory_region_add_subregion(sysmem, ecam_base, ecam_reg);
> 
> I notice we don't use ecam_size anywhere? Is that because the size is
> standard?

Yes. We could remove the size property; having it slightly simplifies the
prop-setting code (it keeps these memmap prop pairs alike), but it's not a big deal.
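
For reference (my summary of the ECAM layout, not something taken from the
patch): the ECAM window size follows directly from how many buses it covers,
since each function gets 4 KiB of config space and a bus holds at most
32 devices with 8 functions each:

#include <stdint.h>

/* 32 devices * 8 functions * 4 KiB = 1 MiB of ECAM space per bus. */
#define ECAM_SIZE_PER_BUS (32u * 8u * 4096u)

static inline uint64_t ecam_size_for_buses(unsigned int nr_buses)
{
    return (uint64_t)nr_buses * ECAM_SIZE_PER_BUS;
}

So once the number of buses is fixed the size is implied, and the property
is indeed redundant for correctness.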

> 
> 
> > +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> > +
> > +    if (mmio_size) {
> > +        memory_region_init_alias(&s->pci.mmio_alias, OBJECT(dev), "pcie-mmio",
> > +                                 mmio_reg, mmio_base, mmio_size);
> > +        memory_region_add_subregion(sysmem, mmio_base, &s->pci.mmio_alias);
> > +    }
> > +
> > +    if (mmio_high_size) {
> > +        memory_region_init_alias(&s->pci.mmio_high_alias, OBJECT(dev),
> > +                "pcie-mmio-high",
> > +                mmio_reg, mmio_high_base, mmio_high_size);
> > +        memory_region_add_subregion(sysmem, mmio_high_base,
> > +                &s->pci.mmio_high_alias);
> > +    }
> > +
> > +    for (i = 0; i < GPEX_NUM_IRQS; i++) {
> > +        qemu_irq irq = qemu_allocate_irq(xen_set_pci_intx_irq, s, i);
> > +
> > +        sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> > +        gpex_set_irq_num(GPEX_HOST(dev), i, intx_irq_base + i);
> > +        xen_set_pci_link_route(i, intx_irq_base + i);
> 
> xen_set_pci_link_route is not currently implemented on ARM?
> 
> Looking at hw/i386/pc_piix.c:piix_intx_routing_notifier_xen it seems
> that the routing is much more complex over there. But looking at other
> machines that use GPEX such as hw/arm/virt.c it looks like the routing
> is straightforward the same way as in this patch.
> 
> I thought that PCI interrupt pin swizzling was required, but maybe not ?
> 
> It is totally fine if we do something different, simpler, than
> hw/i386/pc_piix.c:piix_intx_routing_notifier_xen. I just want to make
> sure that things remain consistent between ARM and x86, and also between
> Xen and QEMU view of virtual PCI interrupt routing.
>

Good questions. The following is the way I understand things, but I may
of course be wrong.

Yes, we're doing things differently from hw/i386/pc_piix.c, mainly
because we're using the GPEX PCIe host bridge with its internal
standard swizzling down to 4 INTX interrupts, similar to microvm and
the ARM virt machine.

The swizzling for the GPEX is done inside the GPEX model and it's
described by xl in the ACPI tables for PVH guests. We don't want
Xen to do any additional swizzling in xen_set_pci_intx_level(), hence
device=0.
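
For reference, the fixed mapping GPEX applies internally is the standard
PCI INTx swizzle; a minimal sketch (not QEMU code, helper names made up):

/*
 * A device in slot "slot" asserting pin "pin" (0 = INTA .. 3 = INTD) lands
 * on one of the four host INTx lines, which this series then routes 1:1 to
 * GSIs intx_irq_base..intx_irq_base+3 via xen_set_pci_link_route().
 */
static inline int pci_intx_swizzle(int slot, int pin)
{
    return (slot + pin) % 4;
}

static inline int pvh_intx_to_gsi(int intx_irq_base, int slot, int pin)
{
    return intx_irq_base + pci_intx_swizzle(slot, pin);
}

Since that swizzle has already happened in QEMU, passing device=0 keeps Xen
from applying it a second time.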

I haven't plumbed the GPEX connections for ARM yet, but I think we could
simply call xendevicemodel_set_irq_level() and not use the pci_intx
calls that aren't implemented (we wouldn't need them).

For x86/PVH, I wonder whether we should be using xen_set_pci_intx_level() /
xen_set_pci_link_route() or some other API, since we're basically
bypassing things.
In one of the first implementations we used set_isa_irq_level(), but
that call only reaches IRQs < 16, so it messed things up.

Does any one have any better ideas or suggestions?

Cheers,
Edgar


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine
  2024-08-13  1:48   ` Stefano Stabellini
@ 2024-08-14 15:50     ` Edgar E. Iglesias
  2024-08-15  0:19       ` Stefano Stabellini
  0 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-14 15:50 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Mon, Aug 12, 2024 at 06:48:52PM -0700, Stefano Stabellini wrote:
> On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > 
> > This adds a Xen PVH x86 machine based on the PVH Common
> > module used by the ARM PVH machine.
> > 
> > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > ---
> >  hw/i386/xen/meson.build |   1 +
> >  hw/i386/xen/xen-pvh.c   | 196 ++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 197 insertions(+)
> >  create mode 100644 hw/i386/xen/xen-pvh.c
> > 
> > diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
> > index 3f0df8bc07..c73c62b8e3 100644
> > --- a/hw/i386/xen/meson.build
> > +++ b/hw/i386/xen/meson.build
> > @@ -4,6 +4,7 @@ i386_ss.add(when: 'CONFIG_XEN', if_true: files(
> >  ))
> >  i386_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
> >    'xen-hvm.c',
> > +  'xen-pvh.c',
> >  ))
> >  
> >  i386_ss.add(when: 'CONFIG_XEN_BUS', if_true: files(
> > diff --git a/hw/i386/xen/xen-pvh.c b/hw/i386/xen/xen-pvh.c
> > new file mode 100644
> > index 0000000000..9c3d3fc58d
> > --- /dev/null
> > +++ b/hw/i386/xen/xen-pvh.c
> > @@ -0,0 +1,196 @@
> > +/*
> > + * QEMU Xen PVH x86 Machine
> > + *
> > + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> > + * Written by Edgar E. Iglesias <edgar.iglesias@amd.com>
> > + *
> > + * SPDX-License-Identifier: GPL-2.0-or-later
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qapi/error.h"
> > +#include "qapi/visitor.h"
> > +#include "qemu/error-report.h"
> > +#include "hw/boards.h"
> > +#include "sysemu/sysemu.h"
> > +#include "hw/xen/arch_hvm.h"
> > +#include "hw/xen/xen.h"
> > +#include "hw/xen/xen-pvh-common.h"
> > +
> > +#define TYPE_XEN_PVH_X86  MACHINE_TYPE_NAME("xenpvh")
> > +OBJECT_DECLARE_SIMPLE_TYPE(XenPVHx86State, XEN_PVH_X86)
> > +
> > +#define PVH_MAX_CPUS 128
> > +
> > +struct XenPVHx86State {
> > +    /*< private >*/
> > +    MachineState parent;
> > +
> > +    DeviceState *cpu[PVH_MAX_CPUS];
> > +    XenPVHCommonState pvh;
> > +
> > +    /*
> > +     * We provide these properties to allow Xen to move things to other
> > +     * addresses for example when users need to accomodate the memory-map
> > +     * for 1:1 mapped devices/memory.
> > +     */
> > +    struct {
> > +        MemMapEntry ram_low, ram_high;
> > +        MemMapEntry pci_ecam, pci_mmio, pci_mmio_high;
> 
> Can we use the same properties already present under XenPVHCommonState?
> 
> 
> 
> > +    } cfg;
> > +};
> > +
> > +static void xenpvh_cpu_new(MachineState *ms,
> > +                           XenPVHx86State *xp,
> > +                           int cpu_idx,
> > +                           int64_t apic_id)
> > +{
> > +    Object *cpu = object_new(ms->cpu_type);
> > +
> > +    object_property_add_child(OBJECT(ms), "cpu[*]", cpu);
> > +    object_property_set_uint(cpu, "apic-id", apic_id, &error_fatal);
> > +    qdev_realize(DEVICE(cpu), NULL, &error_fatal);
> > +    object_unref(cpu);
> > +
> > +    xp->cpu[cpu_idx] = DEVICE(cpu);
> 
> 
> Why do we need to create CPU devices in QEMU given that we are only
> doing device emulation? I guess it is because ....
> 
> 
> 
> > +}
> > +
> > +static void xenpvh_init(MachineState *ms)
> > +{
> > +    XenPVHx86State *xp = XEN_PVH_X86(ms);
> > +    const struct {
> > +        const char *name;
> > +        MemMapEntry *map;
> > +    } map[] = {
> > +        { "ram-low", &xp->cfg.ram_low },
> > +        { "ram-high", &xp->cfg.ram_high },
> > +        { "pci-ecam", &xp->cfg.pci_ecam },
> > +        { "pci-mmio", &xp->cfg.pci_mmio },
> > +        { "pci-mmio-high", &xp->cfg.pci_mmio_high },
> > +    };
> > +    int i;
> > +
> > +    object_initialize_child(OBJECT(ms), "pvh", &xp->pvh, TYPE_XEN_PVH_COMMON);
> > +    object_property_set_int(OBJECT(&xp->pvh), "max-cpus", ms->smp.max_cpus,
> > +                            &error_abort);
> > +    object_property_set_int(OBJECT(&xp->pvh), "ram-size", ms->ram_size,
> > +                            &error_abort);
> > +
> > +    for (i = 0; i < ARRAY_SIZE(map); i++) {
> > +        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
> > +        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
> > +
> > +        object_property_set_int(OBJECT(&xp->pvh), base_name, map[i].map->base,
> > +                                 &error_abort);
> > +        object_property_set_int(OBJECT(&xp->pvh), size_name, map[i].map->size,
> > +                                 &error_abort);
> > +    }
> > +
> > +    /* GSI's 16 - 20 are used for legacy PCIe INTX IRQs.  */
> > +    object_property_set_int(OBJECT(&xp->pvh), "pci-intx-irq-base", 16,
> > +                            &error_abort);
> > +
> > +    sysbus_realize(SYS_BUS_DEVICE(&xp->pvh), &error_abort);
> > +
> > +    /* Create dummy cores. This will indirectly create the APIC MSI window.  */
> 
> ... of the APIC MSI window ?


Yes, exactly. I did have a first version without the CPUs that only had
the MSI window, but there was a bit of disentanglement needed and some
other issue that I forgot. Note that with TCG disabled, this doesn't
do any CPU emulation, so it doesn't affect our small PVH config
very much. I could look into the MSI windows again, though.


> 
> 
> 
> > +    for (i = 0; i < ms->smp.cpus; i++) {
> > +        xenpvh_cpu_new(ms, xp, i, i);
> > +    }
> > +}
> > +
> > +#define XENPVH_PROP_MEMMAP_SETTER(n, f)                                    \
> > +static void xenpvh_set_ ## n ## _ ## f(Object *obj, Visitor *v,            \
> > +                                       const char *name, void *opaque,     \
> > +                                       Error **errp)                       \
> > +{                                                                          \
> > +    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
> > +    uint64_t value;                                                        \
> > +                                                                           \
> > +    if (!visit_type_size(v, name, &value, errp)) {                         \
> > +        return;                                                            \
> > +    }                                                                      \
> > +    xp->cfg.n.f = value;                                                   \
> > +}
> > +
> > +#define XENPVH_PROP_MEMMAP_GETTER(n, f)                                    \
> > +static void xenpvh_get_ ## n ## _ ## f(Object *obj, Visitor *v,            \
> > +                                       const char *name, void *opaque,     \
> > +                                       Error **errp)                       \
> > +{                                                                          \
> > +    XenPVHx86State *xp = XEN_PVH_X86(obj);                                 \
> > +    uint64_t value = xp->cfg.n.f;                                          \
> > +                                                                           \
> > +    visit_type_uint64(v, name, &value, errp);                              \
> > +}
> > +
> > +#define XENPVH_PROP_MEMMAP(n)              \
> > +    XENPVH_PROP_MEMMAP_SETTER(n, base)     \
> > +    XENPVH_PROP_MEMMAP_SETTER(n, size)     \
> > +    XENPVH_PROP_MEMMAP_GETTER(n, base)     \
> > +    XENPVH_PROP_MEMMAP_GETTER(n, size)
> > +
> > +
> > +XENPVH_PROP_MEMMAP(ram_low)
> > +XENPVH_PROP_MEMMAP(ram_high)
> > +XENPVH_PROP_MEMMAP(pci_ecam)
> > +XENPVH_PROP_MEMMAP(pci_mmio)
> > +XENPVH_PROP_MEMMAP(pci_mmio_high)
> 
> I would make all of these common with ARM. It wouldn't hurt to make them
> configurable on ARM too, in fact we might need it for 1:1 mapped guests
> 


Yes, that's the plan for future patches.


> 
> 
> > +static void xenpvh_instance_init(Object *obj)
> > +{
> > +    XenPVHx86State *xp = XEN_PVH_X86(obj);
> > +
> > +    /* Default memmap.  */
> > +    xp->cfg.ram_low.base = 0x0;
> > +    xp->cfg.ram_low.size = 0x80000000U;
> > +    xp->cfg.ram_high.base = 0xC000000000ULL;
> > +    xp->cfg.ram_high.size = 0x4000000000ULL;
> > +}
> > +
> > +static void xenpvh_machine_class_init(ObjectClass *oc, void *data)
> > +{
> > +    MachineClass *mc = MACHINE_CLASS(oc);
> > +
> > +    mc->desc = "Xen PVH x86 machine";
> > +    mc->init = xenpvh_init;
> > +    mc->max_cpus = PVH_MAX_CPUS;
> > +    mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
> > +    mc->default_machine_opts = "accel=xen";
> > +    /* Set explicitly here to make sure that real ram_size is passed */
> > +    mc->default_ram_size = 0;
> > +
> > +#define OC_MEMMAP_PROP(c, prop_name, name)                               \
> > +do {                                                                     \
> > +    object_class_property_add(c, prop_name "-base", "uint64_t",          \
> > +                              xenpvh_get_ ## name ## _base,              \
> > +                              xenpvh_set_ ## name ## _base, NULL, NULL); \
> > +    object_class_property_set_description(oc, prop_name "-base",         \
> > +                              prop_name " base address");                \
> > +    object_class_property_add(c, prop_name "-size", "uint64_t",          \
> > +                              xenpvh_get_ ## name ## _size,              \
> > +                              xenpvh_set_ ## name ## _size, NULL, NULL); \
> > +    object_class_property_set_description(oc, prop_name "-size",         \
> > +                              prop_name " size of memory region");       \
> > +} while (0)
> > +
> > +    OC_MEMMAP_PROP(oc, "ram-low", ram_low);
> > +    OC_MEMMAP_PROP(oc, "ram-high", ram_high);
> > +    OC_MEMMAP_PROP(oc, "pci-ecam", pci_ecam);
> > +    OC_MEMMAP_PROP(oc, "pci-mmio", pci_mmio);
> > +    OC_MEMMAP_PROP(oc, "pci-mmio-high", pci_mmio_high);
> > +}
> > +
> > +static const TypeInfo xenpvh_machine_type = {
> > +    .name = TYPE_XEN_PVH_X86,
> > +    .parent = TYPE_MACHINE,
> > +    .class_init = xenpvh_machine_class_init,
> > +    .instance_init = xenpvh_instance_init,
> > +    .instance_size = sizeof(XenPVHx86State),
> > +};
> > +
> > +static void xenpvh_machine_register_types(void)
> > +{
> > +    type_register_static(&xenpvh_machine_type);
> > +}
> > +
> > +type_init(xenpvh_machine_register_types)
> > -- 
> > 2.43.0
> > 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine
  2024-08-14 15:50     ` Edgar E. Iglesias
@ 2024-08-15  0:19       ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-15  0:19 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, Michael S. Tsirkin, Marcel Apfelbaum, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> On Mon, Aug 12, 2024 at 06:48:52PM -0700, Stefano Stabellini wrote:
> > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > 
> > > This adds a Xen PVH x86 machine based on the PVH Common
> > > module used by the ARM PVH machine.
> > > 
> > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > ---
> > >  hw/i386/xen/meson.build |   1 +
> > >  hw/i386/xen/xen-pvh.c   | 196 ++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 197 insertions(+)
> > >  create mode 100644 hw/i386/xen/xen-pvh.c
> > > 
> > > diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
> > > index 3f0df8bc07..c73c62b8e3 100644
> > > --- a/hw/i386/xen/meson.build
> > > +++ b/hw/i386/xen/meson.build
> > > @@ -4,6 +4,7 @@ i386_ss.add(when: 'CONFIG_XEN', if_true: files(
> > >  ))
> > >  i386_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
> > >    'xen-hvm.c',
> > > +  'xen-pvh.c',
> > >  ))
> > >  
> > >  i386_ss.add(when: 'CONFIG_XEN_BUS', if_true: files(
> > > diff --git a/hw/i386/xen/xen-pvh.c b/hw/i386/xen/xen-pvh.c
> > > new file mode 100644
> > > index 0000000000..9c3d3fc58d
> > > --- /dev/null
> > > +++ b/hw/i386/xen/xen-pvh.c
> > > @@ -0,0 +1,196 @@
> > > +/*
> > > + * QEMU Xen PVH x86 Machine
> > > + *
> > > + * Copyright (c) 2024 Advanced Micro Devices, Inc.
> > > + * Written by Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > + *
> > > + * SPDX-License-Identifier: GPL-2.0-or-later
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qapi/error.h"
> > > +#include "qapi/visitor.h"
> > > +#include "qemu/error-report.h"
> > > +#include "hw/boards.h"
> > > +#include "sysemu/sysemu.h"
> > > +#include "hw/xen/arch_hvm.h"
> > > +#include "hw/xen/xen.h"
> > > +#include "hw/xen/xen-pvh-common.h"
> > > +
> > > +#define TYPE_XEN_PVH_X86  MACHINE_TYPE_NAME("xenpvh")
> > > +OBJECT_DECLARE_SIMPLE_TYPE(XenPVHx86State, XEN_PVH_X86)
> > > +
> > > +#define PVH_MAX_CPUS 128
> > > +
> > > +struct XenPVHx86State {
> > > +    /*< private >*/
> > > +    MachineState parent;
> > > +
> > > +    DeviceState *cpu[PVH_MAX_CPUS];
> > > +    XenPVHCommonState pvh;
> > > +
> > > +    /*
> > > +     * We provide these properties to allow Xen to move things to other
> > > +     * addresses for example when users need to accomodate the memory-map
> > > +     * for 1:1 mapped devices/memory.
> > > +     */
> > > +    struct {
> > > +        MemMapEntry ram_low, ram_high;
> > > +        MemMapEntry pci_ecam, pci_mmio, pci_mmio_high;
> > 
> > Can we use the same properties already present under XenPVHCommonState?
> > 
> > 
> > 
> > > +    } cfg;
> > > +};
> > > +
> > > +static void xenpvh_cpu_new(MachineState *ms,
> > > +                           XenPVHx86State *xp,
> > > +                           int cpu_idx,
> > > +                           int64_t apic_id)
> > > +{
> > > +    Object *cpu = object_new(ms->cpu_type);
> > > +
> > > +    object_property_add_child(OBJECT(ms), "cpu[*]", cpu);
> > > +    object_property_set_uint(cpu, "apic-id", apic_id, &error_fatal);
> > > +    qdev_realize(DEVICE(cpu), NULL, &error_fatal);
> > > +    object_unref(cpu);
> > > +
> > > +    xp->cpu[cpu_idx] = DEVICE(cpu);
> > 
> > 
> > Why do we need to create CPU devices in QEMU given that we are only
> > doing device emulation? I guess it is because ....
> > 
> > 
> > 
> > > +}
> > > +
> > > +static void xenpvh_init(MachineState *ms)
> > > +{
> > > +    XenPVHx86State *xp = XEN_PVH_X86(ms);
> > > +    const struct {
> > > +        const char *name;
> > > +        MemMapEntry *map;
> > > +    } map[] = {
> > > +        { "ram-low", &xp->cfg.ram_low },
> > > +        { "ram-high", &xp->cfg.ram_high },
> > > +        { "pci-ecam", &xp->cfg.pci_ecam },
> > > +        { "pci-mmio", &xp->cfg.pci_mmio },
> > > +        { "pci-mmio-high", &xp->cfg.pci_mmio_high },
> > > +    };
> > > +    int i;
> > > +
> > > +    object_initialize_child(OBJECT(ms), "pvh", &xp->pvh, TYPE_XEN_PVH_COMMON);
> > > +    object_property_set_int(OBJECT(&xp->pvh), "max-cpus", ms->smp.max_cpus,
> > > +                            &error_abort);
> > > +    object_property_set_int(OBJECT(&xp->pvh), "ram-size", ms->ram_size,
> > > +                            &error_abort);
> > > +
> > > +    for (i = 0; i < ARRAY_SIZE(map); i++) {
> > > +        g_autofree char *base_name = g_strdup_printf("%s-base", map[i].name);
> > > +        g_autofree char *size_name = g_strdup_printf("%s-size", map[i].name);
> > > +
> > > +        object_property_set_int(OBJECT(&xp->pvh), base_name, map[i].map->base,
> > > +                                 &error_abort);
> > > +        object_property_set_int(OBJECT(&xp->pvh), size_name, map[i].map->size,
> > > +                                 &error_abort);
> > > +    }
> > > +
> > > +    /* GSI's 16 - 20 are used for legacy PCIe INTX IRQs.  */
> > > +    object_property_set_int(OBJECT(&xp->pvh), "pci-intx-irq-base", 16,
> > > +                            &error_abort);
> > > +
> > > +    sysbus_realize(SYS_BUS_DEVICE(&xp->pvh), &error_abort);
> > > +
> > > +    /* Create dummy cores. This will indirectly create the APIC MSI window.  */
> > 
> > ... of the APIC MSI window ?
> 
> 
> Yes, exactly. I did have a first version without the CPUs that only had
> the MSI window but there was a bit of disentanglement needed and some
> other issue that I forgot. Note that with TCG disabled, this doesn't
> have the any CPU emulation so it doesn't affect our small PVH config
> very much. I could look into the MSI windows again though.

No, I think this is OK, especially if the number of CPUs is accurate.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX
  2024-08-14 15:26     ` Edgar E. Iglesias
@ 2024-08-15  0:29       ` Stefano Stabellini
  0 siblings, 0 replies; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-15  0:29 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel

On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> On Mon, Aug 12, 2024 at 06:48:37PM -0700, Stefano Stabellini wrote:
> > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > 
> > > Add support for optionally creating a PCIe/GPEX controller.
> > > 
> > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > ---
> > >  hw/xen/xen-pvh-common.c         | 66 +++++++++++++++++++++++++++++++++
> > >  include/hw/xen/xen-pvh-common.h | 10 ++++-
> > >  2 files changed, 75 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
> > > index 69a2dbdb6d..b1432e4bd9 100644
> > > --- a/hw/xen/xen-pvh-common.c
> > > +++ b/hw/xen/xen-pvh-common.c
> > > @@ -120,6 +120,59 @@ static void xen_enable_tpm(XenPVHCommonState *s)
> > >  }
> > >  #endif
> > >  
> > > +static void xen_set_pci_intx_irq(void *opaque, int irq, int level)
> > > +{
> > > +    if (xen_set_pci_intx_level(xen_domid, 0, 0, 0, irq, level)) {
> > 
> > Looking at the implementation of XEN_DMOP_set_pci_intx_level in
> > xen/arch/x86/hvm/dm.c, it looks like the device parameter of
> > xen_set_pci_intx_level is required?
> 
> Yes, by setting device = 0, we're bypassing the irq swizzling in Xen.
> I'll try to clarify below.
> 
> 
> > 
> > 
> > > +        error_report("xendevicemodel_set_pci_intx_level failed");
> > > +    }
> > > +}
> > > +
> > > +static inline void xenpvh_gpex_init(XenPVHCommonState *s,
> > > +                                    MemoryRegion *sysmem,
> > > +                                    hwaddr ecam_base, hwaddr ecam_size,
> > > +                                    hwaddr mmio_base, hwaddr mmio_size,
> > > +                                    hwaddr mmio_high_base,
> > > +                                    hwaddr mmio_high_size,
> > > +                                    int intx_irq_base)
> > > +{
> > > +    MemoryRegion *ecam_reg;
> > > +    MemoryRegion *mmio_reg;
> > > +    DeviceState *dev;
> > > +    int i;
> > > +
> > > +    object_initialize_child(OBJECT(s), "gpex", &s->pci.gpex,
> > > +                            TYPE_GPEX_HOST);
> > > +    dev = DEVICE(&s->pci.gpex);
> > > +    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
> > > +
> > > +    ecam_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
> > > +    memory_region_add_subregion(sysmem, ecam_base, ecam_reg);
> > 
> > I notice we don't use ecam_size anywhere? Is that because the size is
> > standard?
> 
> Yes. we could remove the size property, having it slightly simplifies the
> prop setting code (keeping these memmap prop-pairs alike) but it's not a big deal.

Not a big deal either way, up to you


> > > +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> > > +
> > > +    if (mmio_size) {
> > > +        memory_region_init_alias(&s->pci.mmio_alias, OBJECT(dev), "pcie-mmio",
> > > +                                 mmio_reg, mmio_base, mmio_size);
> > > +        memory_region_add_subregion(sysmem, mmio_base, &s->pci.mmio_alias);
> > > +    }
> > > +
> > > +    if (mmio_high_size) {
> > > +        memory_region_init_alias(&s->pci.mmio_high_alias, OBJECT(dev),
> > > +                "pcie-mmio-high",
> > > +                mmio_reg, mmio_high_base, mmio_high_size);
> > > +        memory_region_add_subregion(sysmem, mmio_high_base,
> > > +                &s->pci.mmio_high_alias);
> > > +    }
> > > +
> > > +    for (i = 0; i < GPEX_NUM_IRQS; i++) {
> > > +        qemu_irq irq = qemu_allocate_irq(xen_set_pci_intx_irq, s, i);
> > > +
> > > +        sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
> > > +        gpex_set_irq_num(GPEX_HOST(dev), i, intx_irq_base + i);
> > > +        xen_set_pci_link_route(i, intx_irq_base + i);
> > 
> > xen_set_pci_link_route is not currently implemented on ARM?
> > 
> > Looking at hw/i386/pc_piix.c:piix_intx_routing_notifier_xen it seems
> > that the routing is much more complex over there. But looking at other
> > machines that use GPEX such as hw/arm/virt.c it looks like the routing
> > is straightforward the same way as in this patch.
> > 
> > I thought that PCI interrupt pin swizzling was required, but maybe not ?
> > 
> > It is totally fine if we do something different, simpler, than
> > hw/i386/pc_piix.c:piix_intx_routing_notifier_xen. I just want to make
> > sure that things remain consistent between ARM and x86, and also between
> > Xen and QEMU view of virtual PCI interrupt routing.
> >
> 
> Good questions. The following is the way I understand things but I may
> ofcourse be wrong.
> 
> Yes, we're doing things differently than hw/i386/pc_piix.c mainly
> because we're using the GPEX PCIe host bridge with it's internal
> standard swizzling down to 4 INTX interrupts. Similar to microvm and
> the ARM virt machine.
> 
> The swizzling for the GPEX is done inside the GPEX model and it's
> described by xl in the ACPI tables for PVH guests. We don't want
> Xen to do any additional swizzling in xen_set_pci_intx_level(), hence
> device=0.

OK


> I haven't plumbed the GPEX connectinos for ARM yet but I think we could
> simply call xendevicemodel_set_irq_level() and not use the pci_intx
> calls that aren't implement (we wouldn't need them).
> 
> For x86/pvh, I wonder if we should be using xen_set_pci_intx_level() /
> xen_set_pci_link_route() or some other API? since we're basically
> bypassing things?
> In one of the first implementations we used set_isa_irq_level() but
> that call only reaches into irqs < 16 so it messed things up.
> 
> Does any one have any better ideas or suggestions?

I think QEMU is free to call or not call any API at setup time. Given
that the PVH interrupt controller is emulated by Xen, the important
thing is that when QEMU raises an interrupt or an MSI with
xen_set_isa_irq_level, xen_inject_msi and xen_set_pci_intx_level, Xen
injects it into the guest as expected and the guest receives it
appropriately.

To oversimplify things, I was worried that QEMU tries to inject INTA but
the guest receives INTD instead. Or QEMU tries to raise a level
interrupt and Xen injects an edge interrupt instead.

Also I think we should try to do things the same way between the PVH
machine on ARM and X86. But we can (should?) do things differently from
hw/i386/pc_piix.c.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-14 11:49         ` Edgar E. Iglesias
@ 2024-08-15  0:30           ` Stefano Stabellini
  2024-08-16 10:39             ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-15  0:30 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm, andrew.cooper3

On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
> > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > > > 
> > > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> > > > > servers to handle hotplug.
> > > > > 
> > > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > > > ---
> > > > >  hw/arm/xen_arm.c | 5 +++--
> > > > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > > > index 5f75cc3779..ef8315969c 100644
> > > > > --- a/hw/arm/xen_arm.c
> > > > > +++ b/hw/arm/xen_arm.c
> > > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
> > > > >  
> > > > >      xen_init_ram(machine);
> > > > >  
> > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> > > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> > > > >  
> > > > >      xen_create_virtio_mmio_devices(xam);
> > > > >  
> > > > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> > > > >      MachineClass *mc = MACHINE_CLASS(oc);
> > > > >      mc->desc = "Xen PVH ARM machine";
> > > > >      mc->init = xen_arm_init;
> > > > > -    mc->max_cpus = 1;
> > > > > +    /* MAX number of vcpus supported by Xen.  */
> > > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
> > > > 
> > > > Will this cause allocations of data structures with 128 elements?
> > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> > > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
> > > 
> > > Yes, in theory there's probably overhead with this but as you correctly
> > > noted below, a PVH aware xl will set the max_cpus option to a lower value.
> > > 
> > > With a non-pvh aware xl, I was a little worried about the overhead
> > > but I couldn't see any visible slow-down on ARM neither in boot or in network
> > > performance (I didn't run very sophisticated benchmarks).
> >  
> > What do you mean by "non-pvh aware xl"? All useful versions of xl
> > support pvh?
> 
> 
> I mean an xl without our PVH patches merged.
> xl in upstream doesn't know much about PVH yet.
> Even for ARM, we're still carrying significant patches in our tree.
 
Oh I see. In that case, I don't think we need to support "non-pvh aware xl".

 
> > > > later on with the precise vCPU value which should be provided to QEMU
> > > > via the -smp command line option
> > > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> > > 
> > > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
> > > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
> > > will set maxvcpus to the same value as vcpus.
> > 
> > OK good. In that case if this is just an initial value meant to be
> > overwritten, I think it is best to keep it as 1.
> 
> Sorry but that won't work. I think the confusion here may be that
> it's easy to mix up mc->max_cpus and machine->smp.max_cpus, these are
> not the same. They have different purposes.
> 
> I'll try to clarify the 3 values in play.
> 
> machine-smp.cpus:
> Number of guest vcpus active at boot.
> Passed to QEMU via the -smp command-line option.
> We don't use this value in QEMU's ARM PVH machines.
> 
> machine->smp.max_cpus:
> Max number of vcpus that the guest can use (equal or larger than machine-smp.cpus).
> Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
> Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
> This is what we use for xen_register_ioreq().
> 
> mc->max_cpus:
> Absolute MAX in QEMU used to cap the -smp command-line options.
> If xl tries to set -smp (machine->smp.max_cpus) larger than this, QEMU will bail out.
> Used to setup xen_register_ioreq() ONLY if -smp maxcpus was NOT set (i.e by a non PVH aware xl).
> Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
> 
> I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns but I'll
> have to check if we can even issue that hypercall this early in QEMU since
> mc->max_cpus is setup before we even parse the machine options. We may
> not yet know what domid we're attaching to yet.
 
If mc->max_cpus is the absolute max and it will not be used if -smp is
passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS
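
For reference, a condensed sketch of how the two max values end up being
used, based on the hunks quoted above (surrounding setup such as xam and
the RAM/virtio-mmio initialization is omitted, so this is illustrative
rather than buildable):

    static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        /*
         * Upper bound on what "-smp X,maxcpus=Y" may request; QEMU bails
         * out if xl asks for more.  Raised from 1 to GUEST_MAX_VCPUS by
         * this patch.
         */
        mc->max_cpus = GUEST_MAX_VCPUS;
    }

    static void xen_arm_init(MachineState *machine)
    {
        /*
         * machine->smp.max_cpus is whatever xl passed via
         * "-smp X,maxcpus=Y" (xl.cfg's maxvcpus); this value, not
         * mc->max_cpus, sizes the ioreq servers.
         */
        xen_register_ioreq(xam->state, machine->smp.max_cpus,
                           &xen_memory_listener);
    }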


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-15  0:30           ` Stefano Stabellini
@ 2024-08-16 10:39             ` Edgar E. Iglesias
  2024-08-16 16:53               ` Stefano Stabellini
  0 siblings, 1 reply; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-16 10:39 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, jason.andryuk, edgar.iglesias, xen-devel,
	qemu-arm, andrew.cooper3

On Thu, Aug 15, 2024 at 2:30 AM Stefano Stabellini <sstabellini@kernel.org>
wrote:

> On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> > On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
> > > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> > > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> > > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> > > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> > > > > >
> > > > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> > > > > > servers to handle hotplug.
> > > > > >
> > > > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> > > > > > ---
> > > > > >  hw/arm/xen_arm.c | 5 +++--
> > > > > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > > > > index 5f75cc3779..ef8315969c 100644
> > > > > > --- a/hw/arm/xen_arm.c
> > > > > > +++ b/hw/arm/xen_arm.c
> > > > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState
> *machine)
> > > > > >
> > > > > >      xen_init_ram(machine);
> > > > > >
> > > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus,
> &xen_memory_listener);
> > > > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus,
> &xen_memory_listener);
> > > > > >
> > > > > >      xen_create_virtio_mmio_devices(xam);
> > > > > >
> > > > > > @@ -218,7 +218,8 @@ static void
> xen_arm_machine_class_init(ObjectClass *oc, void *data)
> > > > > >      MachineClass *mc = MACHINE_CLASS(oc);
> > > > > >      mc->desc = "Xen PVH ARM machine";
> > > > > >      mc->init = xen_arm_init;
> > > > > > -    mc->max_cpus = 1;
> > > > > > +    /* MAX number of vcpus supported by Xen.  */
> > > > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
> > > > >
> > > > > Will this cause allocations of data structures with 128 elements?
> > > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> > > > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is
> called
> > > >
> > > > Yes, in theory there's probably overhead with this but as you
> correctly
> > > > noted below, a PVH aware xl will set the max_cpus option to a lower
> value.
> > > >
> > > > With a non-pvh aware xl, I was a little worried about the overhead
> > > > but I couldn't see any visible slow-down on ARM neither in boot or
> in network
> > > > performance (I didn't run very sophisticated benchmarks).
> > >
> > > What do you mean by "non-pvh aware xl"? All useful versions of xl
> > > support pvh?
> >
> >
> > I mean an xl without our PVH patches merged.
> > xl in upstream doesn't know much about PVH yet.
> > Even for ARM, we're still carrying significant patches in our tree.
>
> Oh I see. In that case, I don't think we need to support "non-pvh aware
> xl".
>
>
> > > > > later on with the precise vCPU value which should be provided to
> QEMU
> > > > > via the -smp command line option
> > > > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> > > >
> > > > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
> > > > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg,
> xl
> > > > will set maxvcpus to the same value as vcpus.
> > >
> > > OK good. In that case if this is just an initial value meant to be
> > > overwritten, I think it is best to keep it as 1.
> >
> > Sorry but that won't work. I think the confusion here may be that
> > it's easy to mix up mc->max_cpus and machine->smp.max_cpus, these are
> > not the same. They have different purposes.
> >
> > I'll try to clarify the 3 values in play.
> >
> > machine-smp.cpus:
> > Number of guest vcpus active at boot.
> > Passed to QEMU via the -smp command-line option.
> > We don't use this value in QEMU's ARM PVH machines.
> >
> > machine->smp.max_cpus:
> > Max number of vcpus that the guest can use (equal or larger than
> machine-smp.cpus).
> > Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
> > Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
> > This is what we use for xen_register_ioreq().
> >
> > mc->max_cpus:
> > Absolute MAX in QEMU used to cap the -smp command-line options.
> > If xl tries to set -smp (machine->smp.max_cpus) larger than this, QEMU
> will bail out.
> > Used to setup xen_register_ioreq() ONLY if -smp maxcpus was NOT set (i.e
> by a non PVH aware xl).
> > Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
> >
> > I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns but
> I'll
> > have to check if we can even issue that hypercall this early in QEMU
> since
> > mc->max_cpus is setup before we even parse the machine options. We may
> > not yet know what domid we're attaching to yet.
>
> If mc->max_cpus is the absolute max and it will not be used if -smp is
> passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS
>


Looking at this a little more: if users (xl) don't pass an -smp option, we
actually default to smp.max_cpus=1.
So another option is to simply remove the upper limit in QEMU (e.g. we can
set mc->max_cpus to something very large like UINT32_MAX).
That would avoid early hypercalls, avoid using GUEST_MAX_VCPUS, and always
let xl dictate the max_cpus value via the -smp command-line option.
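
A minimal sketch of that alternative (not what the series ended up doing):

    /* No meaningful QEMU-side cap; xl's "-smp X,maxcpus=Y" decides everything. */
    mc->max_cpus = UINT32_MAX;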

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-16 10:39             ` Edgar E. Iglesias
@ 2024-08-16 16:53               ` Stefano Stabellini
  2024-08-16 22:58                 ` Jason Andryuk
  0 siblings, 1 reply; 35+ messages in thread
From: Stefano Stabellini @ 2024-08-16 16:53 UTC (permalink / raw)
  To: Edgar E. Iglesias
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, jason.andryuk, edgar.iglesias,
	xen-devel, qemu-arm, andrew.cooper3

On Fri, 16 Aug 2024, Edgar E. Iglesias wrote:
> On Thu, Aug 15, 2024 at 2:30 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
>       > On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
>       > > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
>       > > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
>       > > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
>       > > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>       > > > > >
>       > > > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
>       > > > > > servers to handle hotplug.
>       > > > > >
>       > > > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
>       > > > > > ---
>       > > > > >  hw/arm/xen_arm.c | 5 +++--
>       > > > > >  1 file changed, 3 insertions(+), 2 deletions(-)
>       > > > > >
>       > > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
>       > > > > > index 5f75cc3779..ef8315969c 100644
>       > > > > > --- a/hw/arm/xen_arm.c
>       > > > > > +++ b/hw/arm/xen_arm.c
>       > > > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
>       > > > > > 
>       > > > > >      xen_init_ram(machine);
>       > > > > > 
>       > > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
>       > > > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
>       > > > > > 
>       > > > > >      xen_create_virtio_mmio_devices(xam);
>       > > > > > 
>       > > > > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>       > > > > >      MachineClass *mc = MACHINE_CLASS(oc);
>       > > > > >      mc->desc = "Xen PVH ARM machine";
>       > > > > >      mc->init = xen_arm_init;
>       > > > > > -    mc->max_cpus = 1;
>       > > > > > +    /* MAX number of vcpus supported by Xen.  */
>       > > > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
>       > > > >
>       > > > > Will this cause allocations of data structures with 128 elements?
>       > > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
>       > > > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
>       > > >
>       > > > Yes, in theory there's probably overhead with this but as you correctly
>       > > > noted below, a PVH aware xl will set the max_cpus option to a lower value.
>       > > >
>       > > > With a non-pvh aware xl, I was a little worried about the overhead
>       > > > but I couldn't see any visible slow-down on ARM neither in boot or in network
>       > > > performance (I didn't run very sophisticated benchmarks).
>       > > 
>       > > What do you mean by "non-pvh aware xl"? All useful versions of xl
>       > > support pvh?
>       >
>       >
>       > I mean an xl without our PVH patches merged.
>       > xl in upstream doesn't know much about PVH yet.
>       > Even for ARM, we're still carrying significant patches in our tree.
> 
>       Oh I see. In that case, I don't think we need to support "non-pvh aware xl".
> 
> 
>       > > > > later on with the precise vCPU value which should be provided to QEMU
>       > > > > via the -smp command line option
>       > > > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
>       > > >
>       > > > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
>       > > > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
>       > > > will set maxvcpus to the same value as vcpus.
>       > >
>       > > OK good. In that case if this is just an initial value meant to be
>       > > overwritten, I think it is best to keep it as 1.
>       >
>       > Sorry but that won't work. I think the confusion here may be that
>       > it's easy to mix up mc->max_cpus and machine->smp.max_cpus, these are
>       > not the same. They have different purposes.
>       >
>       > I'll try to clarify the 3 values in play.
>       >
>       > machine-smp.cpus:
>       > Number of guest vcpus active at boot.
>       > Passed to QEMU via the -smp command-line option.
>       > We don't use this value in QEMU's ARM PVH machines.
>       >
>       > machine->smp.max_cpus:
>       > Max number of vcpus that the guest can use (equal or larger than machine-smp.cpus).
>       > Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
>       > Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
>       > This is what we use for xen_register_ioreq().
>       >
>       > mc->max_cpus:
>       > Absolute MAX in QEMU used to cap the -smp command-line options.
>       > If xl tries to set -smp (machine->smp.max_cpus) larger than this, QEMU will bail out.
>       > Used to setup xen_register_ioreq() ONLY if -smp maxcpus was NOT set (i.e by a non PVH aware xl).
>       > Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
>       >
>       > I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns but I'll
>       > have to check if we can even issue that hypercall this early in QEMU since
>       > mc->max_cpus is setup before we even parse the machine options. We may
>       > not yet know what domid we're attaching to yet.
> 
>       If mc->max_cpus is the absolute max and it will not be used if -smp is
>       passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS
> 
> Looking at this a little more. If users (xl) don't pass an -smp option we actually default to smp.max_cpus=1.
> So, another option is to simply remove the upper limit in QEMU (e.g we can set mc->max_cpus to something very large like UINT32_MAX).
> That would avoid early hypercalls, avoid using GUEST_MAX_VCPUS and always let xl dictate the max_cpus value using the -smp cmdline option. 

As the expectation is that a smp.max_cpus option will always be passed to
QEMU, I would avoid an extra early hypercall.

For the initial value, I would use something static and large, but not
unreasonably large like UINT32_MAX, to be more resilient in (erroneous)
cases where smp.max_cpus is not passed.

So I would initialize it to GUEST_MAX_VCPUS, or, if we don't want to use
GUEST_MAX_VCPUS, something equivalent in the 64-256 range.

Alternatively, we can have a runtime check and exit with a warning if
smp.max_cpus is not set.
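
Roughly, such a check in xen_arm_init() could look like this (a
hypothetical sketch, keeping in mind that smp.max_cpus defaults to 1 when
no -smp option is given at all):

    if (machine->smp.max_cpus <= 1) {
        /*
         * Hypothetical guard: refuse to start if the toolstack did not
         * pass "-smp X,maxcpus=Y".  As noted in the follow-up below, this
         * would also catch genuinely single-vcpu guests, for which xl
         * passes no -smp option.
         */
        error_report("xenpvh: no -smp maxcpus=N provided; "
                     "cannot size ioreq servers");
        exit(1);
    }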

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-16 16:53               ` Stefano Stabellini
@ 2024-08-16 22:58                 ` Jason Andryuk
  2024-08-20 14:13                   ` Edgar E. Iglesias
  0 siblings, 1 reply; 35+ messages in thread
From: Jason Andryuk @ 2024-08-16 22:58 UTC (permalink / raw)
  To: Stefano Stabellini, Edgar E. Iglesias
  Cc: qemu-devel, anthony, paul, peter.maydell, alex.bennee,
	xenia.ragiadakou, edgar.iglesias, xen-devel, qemu-arm,
	andrew.cooper3

On 2024-08-16 12:53, Stefano Stabellini wrote:
> On Fri, 16 Aug 2024, Edgar E. Iglesias wrote:
>> On Thu, Aug 15, 2024 at 2:30 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>>        On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
>>        > On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
>>        > > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
>>        > > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
>>        > > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
>>        > > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>>        > > > > >
>>        > > > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
>>        > > > > > servers to handle hotplug.
>>        > > > > >
>>        > > > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
>>        > > > > > ---
>>        > > > > >  hw/arm/xen_arm.c | 5 +++--
>>        > > > > >  1 file changed, 3 insertions(+), 2 deletions(-)
>>        > > > > >
>>        > > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
>>        > > > > > index 5f75cc3779..ef8315969c 100644
>>        > > > > > --- a/hw/arm/xen_arm.c
>>        > > > > > +++ b/hw/arm/xen_arm.c
>>        > > > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
>>        > > > > >
>>        > > > > >      xen_init_ram(machine);
>>        > > > > >
>>        > > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
>>        > > > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
>>        > > > > >
>>        > > > > >      xen_create_virtio_mmio_devices(xam);
>>        > > > > >
>>        > > > > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>>        > > > > >      MachineClass *mc = MACHINE_CLASS(oc);
>>        > > > > >      mc->desc = "Xen PVH ARM machine";
>>        > > > > >      mc->init = xen_arm_init;
>>        > > > > > -    mc->max_cpus = 1;
>>        > > > > > +    /* MAX number of vcpus supported by Xen.  */
>>        > > > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
>>        > > > >
>>        > > > > Will this cause allocations of data structures with 128 elements?
>>        > > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
>>        > > > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
>>        > > >
>>        > > > Yes, in theory there's probably overhead with this but as you correctly
>>        > > > noted below, a PVH aware xl will set the max_cpus option to a lower value.
>>        > > >
>>        > > > With a non-pvh aware xl, I was a little worried about the overhead
>>        > > > but I couldn't see any visible slow-down on ARM neither in boot or in network
>>        > > > performance (I didn't run very sophisticated benchmarks).
>>        > >
>>        > > What do you mean by "non-pvh aware xl"? All useful versions of xl
>>        > > support pvh?
>>        >
>>        >
>>        > I mean an xl without our PVH patches merged.
>>        > xl in upstream doesn't know much about PVH yet.
>>        > Even for ARM, we're still carrying significant patches in our tree.
>>
>>        Oh I see. In that case, I don't think we need to support "non-pvh aware xl".
>>
>>
>>        > > > > later on with the precise vCPU value which should be provided to QEMU
>>        > > > > via the -smp command line option
>>        > > > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
>>        > > >
>>        > > > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4 based on
>>        > > > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg, xl
>>        > > > will set maxvcpus to the same value as vcpus.
>>        > >
>>        > > OK good. In that case if this is just an initial value meant to be
>>        > > overwritten, I think it is best to keep it as 1.
>>        >
>>        > Sorry but that won't work. I think the confusion here may be that
>>        > it's easy to mix up mc->max_cpus and machine->smp.max_cpus, these are
>>        > not the same. They have different purposes.
>>        >
>>        > I'll try to clarify the 3 values in play.
>>        >
>>        > machine-smp.cpus:
>>        > Number of guest vcpus active at boot.
>>        > Passed to QEMU via the -smp command-line option.
>>        > We don't use this value in QEMU's ARM PVH machines.
>>        >
>>        > machine->smp.max_cpus:
>>        > Max number of vcpus that the guest can use (equal or larger than machine-smp.cpus).
>>        > Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
>>        > Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
>>        > This is what we use for xen_register_ioreq().
>>        >
>>        > mc->max_cpus:
>>        > Absolute MAX in QEMU used to cap the -smp command-line options.
>>        > If xl tries to set -smp (machine->smp.max_cpus) larger than this, QEMU will bail out.
>>        > Used to setup xen_register_ioreq() ONLY if -smp maxcpus was NOT set (i.e by a non PVH aware xl).
>>        > Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
>>        >
>>        > I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns but I'll
>>        > have to check if we can even issue that hypercall this early in QEMU since
>>        > mc->max_cpus is setup before we even parse the machine options. We may
>>        > not yet know what domid we're attaching to yet.
>>
>>        If mc->max_cpus is the absolute max and it will not be used if -smp is
>>        passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS
>>
>> Looking at this a little more. If users (xl) don't pass an -smp option we actually default to smp.max_cpus=1.
>> So, another option is to simply remove the upper limit in QEMU (e.g we can set mc->max_cpus to something very large like UINT32_MAX).
>> That would avoid early hypercalls, avoid using GUEST_MAX_VCPUS and always let xl dictate the max_cpus value using the -smp cmdline option.
> 
> As the expectation is that there will be always a smp.max_cpus option
> passed to QEMU, I would avoid an extra early hypercall.
> 
> For the initial value, I would use something static and large, but not
> unreasonably large as UINT32_MAX to be more resilient in (erroneous)
> cases where smp.max_cpus is not passed.
> 
> So I would initialize it to GUEST_MAX_VCPUS, or if we don't want to use
> GUEST_MAX_VCPUS, something equivalent in the 64-256 range.
> 
> Alternative we can have a runtime check and exit with a warning if
> smp.max_cpus is not set.

FYI, xl only passes a -smp option when the domU has more than 1 vcpu,
so the absence of -smp implies a genuinely single-vcpu guest.
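
To make the mapping concrete, with illustrative numbers taken from the
examples earlier in the thread:

    # xl.cfg (illustrative)
    vcpus    = 2
    maxvcpus = 4
    # -> a PVH-aware xl passes "-smp 2,maxcpus=4" to QEMU.
    # With vcpus = 1 and no maxvcpus, xl passes no -smp option at all and
    # QEMU falls back to cpus=1, maxcpus=1.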

Regards,
Jason


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
  2024-08-16 22:58                 ` Jason Andryuk
@ 2024-08-20 14:13                   ` Edgar E. Iglesias
  0 siblings, 0 replies; 35+ messages in thread
From: Edgar E. Iglesias @ 2024-08-20 14:13 UTC (permalink / raw)
  To: Jason Andryuk
  Cc: Stefano Stabellini, qemu-devel, anthony, paul, peter.maydell,
	alex.bennee, xenia.ragiadakou, edgar.iglesias, xen-devel,
	qemu-arm, andrew.cooper3

On Sat, Aug 17, 2024 at 2:45 AM Jason Andryuk <jason.andryuk@amd.com> wrote:

> On 2024-08-16 12:53, Stefano Stabellini wrote:
> > On Fri, 16 Aug 2024, Edgar E. Iglesias wrote:
> >> On Thu, Aug 15, 2024 at 2:30 AM Stefano Stabellini <
> sstabellini@kernel.org> wrote:
> >>        On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> >>        > On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini
> wrote:
> >>        > > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> >>        > > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano
> Stabellini wrote:
> >>        > > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> >>        > > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> >>        > > > > >
> >>        > > > > > Add SMP support for Xen PVH ARM guests. Create
> max_cpus ioreq
> >>        > > > > > servers to handle hotplug.
> >>        > > > > >
> >>        > > > > > Signed-off-by: Edgar E. Iglesias <
> edgar.iglesias@amd.com>
> >>        > > > > > ---
> >>        > > > > >  hw/arm/xen_arm.c | 5 +++--
> >>        > > > > >  1 file changed, 3 insertions(+), 2 deletions(-)
> >>        > > > > >
> >>        > > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> >>        > > > > > index 5f75cc3779..ef8315969c 100644
> >>        > > > > > --- a/hw/arm/xen_arm.c
> >>        > > > > > +++ b/hw/arm/xen_arm.c
> >>        > > > > > @@ -173,7 +173,7 @@ static void
> xen_arm_init(MachineState *machine)
> >>        > > > > >
> >>        > > > > >      xen_init_ram(machine);
> >>        > > > > >
> >>        > > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus,
> &xen_memory_listener);
> >>        > > > > > +    xen_register_ioreq(xam->state,
> machine->smp.max_cpus, &xen_memory_listener);
> >>        > > > > >
> >>        > > > > >      xen_create_virtio_mmio_devices(xam);
> >>        > > > > >
> >>        > > > > > @@ -218,7 +218,8 @@ static void
> xen_arm_machine_class_init(ObjectClass *oc, void *data)
> >>        > > > > >      MachineClass *mc = MACHINE_CLASS(oc);
> >>        > > > > >      mc->desc = "Xen PVH ARM machine";
> >>        > > > > >      mc->init = xen_arm_init;
> >>        > > > > > -    mc->max_cpus = 1;
> >>        > > > > > +    /* MAX number of vcpus supported by Xen.  */
> >>        > > > > > +    mc->max_cpus = GUEST_MAX_VCPUS;
> >>        > > > >
> >>        > > > > Will this cause allocations of data structures with 128
> elements?
> >>        > > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register
> it seems
> >>        > > > > possible? Or
> hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
> >>        > > >
> >>        > > > Yes, in theory there's probably overhead with this but as
> you correctly
> >>        > > > noted below, a PVH aware xl will set the max_cpus option
> to a lower value.
> >>        > > >
> >>        > > > With a non-pvh aware xl, I was a little worried about the
> overhead
> >>        > > > but I couldn't see any visible slow-down on ARM neither in
> boot or in network
> >>        > > > performance (I didn't run very sophisticated benchmarks).
> >>        > >
> >>        > > What do you mean by "non-pvh aware xl"? All useful versions
> of xl
> >>        > > support pvh?
> >>        >
> >>        >
> >>        > I mean an xl without our PVH patches merged.
> >>        > xl in upstream doesn't know much about PVH yet.
> >>        > Even for ARM, we're still carrying significant patches in our
> tree.
> >>
> >>        Oh I see. In that case, I don't think we need to support
> "non-pvh aware xl".
> >>
> >>
> >>        > > > > later on with the precise vCPU value which should be
> provided to QEMU
> >>        > > > > via the -smp command line option
> >>        > > > >
> (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> >>        > > >
> >>        > > > Yes, a pvh aware xl will for example pass -smp 2,maxcpus=4
> based on
> >>        > > > values from the xl.cfg. If the user doesn't set maxvcpus
> in xl.cfg, xl
> >>        > > > will set maxvcpus to the same value as vcpus.
> >>        > >
> >>        > > OK good. In that case if this is just an initial value meant
> to be
> >>        > > overwritten, I think it is best to keep it as 1.
> >>        >
> >>        > Sorry but that won't work. I think the confusion here may be
> that
> >>        > it's easy to mix up mc->max_cpus and machine->smp.max_cpus,
> these are
> >>        > not the same. They have different purposes.
> >>        >
> >>        > I'll try to clarify the 3 values in play.
> >>        >
> >>        > machine-smp.cpus:
> >>        > Number of guest vcpus active at boot.
> >>        > Passed to QEMU via the -smp command-line option.
> >>        > We don't use this value in QEMU's ARM PVH machines.
> >>        >
> >>        > machine->smp.max_cpus:
> >>        > Max number of vcpus that the guest can use (equal or larger
> than machine-smp.cpus).
> >>        > Will be set by xl via the "-smp X,maxcpus=Y" command-line
> option to QEMU.
> >>        > Taken from maxvcpus from xl.cfg, same as XEN_DMOP_nr_vcpus.
> >>        > This is what we use for xen_register_ioreq().
> >>        >
> >>        > mc->max_cpus:
> >>        > Absolute MAX in QEMU used to cap the -smp command-line options.
> >>        > If xl tries to set -smp (machine->smp.max_cpus) larger than
> this, QEMU will bail out.
> >>        > Used to setup xen_register_ioreq() ONLY if -smp maxcpus was
> NOT set (i.e by a non PVH aware xl).
> >>        > Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
> >>        >
> >>        > I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus
> returns but I'll
> >>        > have to check if we can even issue that hypercall this early
> in QEMU since
> >>        > mc->max_cpus is setup before we even parse the machine
> options. We may
> >>        > not yet know what domid we're attaching to yet.
> >>
> >>        If mc->max_cpus is the absolute max and it will not be used if
> -smp is
> >>        passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS
> >>
> >> Looking at this a little more. If users (xl) don't pass an -smp option
> we actually default to smp.max_cpus=1.
> >> So, another option is to simply remove the upper limit in QEMU (e.g we
> can set mc->max_cpus to something very large like UINT32_MAX).
> >> That would avoid early hypercalls, avoid using GUEST_MAX_VCPUS and
> always let xl dictate the max_cpus value using the -smp cmdline option.
> >
> > As the expectation is that there will be always a smp.max_cpus option
> > passed to QEMU, I would avoid an extra early hypercall.
> >
> > For the initial value, I would use something static and large, but not
> > unreasonably large as UINT32_MAX to be more resilient in (erroneous)
> > cases where smp.max_cpus is not passed.
> >
> > So I would initialize it to GUEST_MAX_VCPUS, or if we don't want to use
> > GUEST_MAX_VCPUS, something equivalent in the 64-256 range.
>

Thanks Stefano,

I'm going to send a v2 following this suggestion of using GUEST_MAX_VCPUS.
I'll also add comments clarifying that this is a MAX value for the
command-line option and not what gets passed to register_ioreq.
We can continue the discussion from there to see if we want to change
things; I don't have a strong opinion here, so I'm happy to go either way.
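
Something along these lines, as a sketch of the intent for v2 rather than
the actual patch:

    /*
     * mc->max_cpus only caps what the -smp command-line option may ask
     * for; the value actually handed to xen_register_ioreq() is
     * machine->smp.max_cpus, which xl sets via "-smp X,maxcpus=Y".
     */
    mc->max_cpus = GUEST_MAX_VCPUS;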



> >
> > Alternative we can have a runtime check and exit with a warning if
> > smp.max_cpus is not set.
>
> FYI, xl only passes a -smp option when the domU has more than 1 vcpu.
> Though that implies only a single vcpu.
>
>
Thanks Jason, yes, in that case the default of cpus=1, maxcpus=1 gets set.
I was initially under the wrong assumption that without -smp options, the
max would get set. This is what I was trying to clarify in my previous email:
>> Looking at this a little more. If users (xl) don't pass an -smp option
>> we actually default to smp.max_cpus=1.

Best regards,
Edgar

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2024-08-20 14:14 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
2024-08-13  1:45   ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX Edgar E. Iglesias
2024-08-13  1:45   ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description Edgar E. Iglesias
2024-08-13  1:45   ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests Edgar E. Iglesias
2024-08-13  1:47   ` Stefano Stabellini
2024-08-13 17:02     ` Edgar E. Iglesias
2024-08-13 17:20       ` Andrew Cooper
2024-08-13 22:52       ` Stefano Stabellini
2024-08-14 11:49         ` Edgar E. Iglesias
2024-08-15  0:30           ` Stefano Stabellini
2024-08-16 10:39             ` Edgar E. Iglesias
2024-08-16 16:53               ` Stefano Stabellini
2024-08-16 22:58                 ` Jason Andryuk
2024-08-20 14:13                   ` Edgar E. Iglesias
2024-08-12 13:06 ` [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module Edgar E. Iglesias
2024-08-13  1:47   ` Stefano Stabellini
2024-08-14 12:03     ` Edgar E. Iglesias
2024-08-12 13:06 ` [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c Edgar E. Iglesias
2024-08-13  1:48   ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order Edgar E. Iglesias
2024-08-13  1:48   ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX Edgar E. Iglesias
2024-08-13  1:48   ` Stefano Stabellini
2024-08-14 15:26     ` Edgar E. Iglesias
2024-08-15  0:29       ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine Edgar E. Iglesias
2024-08-13  1:48   ` Stefano Stabellini
2024-08-14 15:50     ` Edgar E. Iglesias
2024-08-15  0:19       ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description Edgar E. Iglesias
2024-08-13  1:49   ` Stefano Stabellini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).