* [PATCH v2 0/4] hw/arm: xenpvh: Enable PCI for ARM PVH
@ 2024-09-23 14:55 Edgar E. Iglesias
2024-09-23 14:55 ` [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq Edgar E. Iglesias
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Edgar E. Iglesias @ 2024-09-23 14:55 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
edgar.iglesias, xen-devel
From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
Enable PCI on the ARM PVH machine. First, we add a way to control the use
of buffered IOREQs, since those are not supported on Xen/ARM. We then add
a pci-intx-irq-base property for PCI INTX routing, and finally enable the
PCI support.
I've published some instructions on how to try this, including the
work-in-progress Xen side of the PVH PCI support:
https://github.com/edgarigl/docs/blob/master/xen/pvh/virtio-pci-dom0less.md
Cheers,
Edgar
ChangeLog:
v1 -> v2:
* Change handle_bufioreq from int to uint8_t.
* Fall back to the legacy API if buffered ioreqs are enabled, and also if
the new API is not supported. Clarify with comments.
Edgar E. Iglesias (4):
hw/xen: Expose handle_bufioreq in xen_register_ioreq
hw/xen: xenpvh: Disable buffered IOREQs for ARM
hw/xen: xenpvh: Add pci-intx-irq-base property
hw/arm: xenpvh: Enable PCI for ARM PVH
hw/arm/xen-pvh.c | 17 ++++++
hw/i386/xen/xen-hvm.c | 4 +-
hw/i386/xen/xen-pvh.c | 3 +
hw/xen/xen-hvm-common.c | 101 ++++++++++++++++++++------------
hw/xen/xen-pvh-common.c | 40 ++++++++++++-
include/hw/xen/xen-hvm-common.h | 3 +
include/hw/xen/xen-pvh-common.h | 3 +
include/hw/xen/xen_native.h | 3 +-
8 files changed, 133 insertions(+), 41 deletions(-)
--
2.43.0
* [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq
  2024-09-23 14:55 [PATCH v2 0/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
@ 2024-09-23 14:55 ` Edgar E. Iglesias
  2024-09-24 22:50   ` Stefano Stabellini
  2024-09-23 14:55 ` [PATCH v2 2/4] hw/xen: xenpvh: Disable buffered IOREQs for ARM Edgar E. Iglesias
  ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Edgar E. Iglesias @ 2024-09-23 14:55 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, Edgar E. Iglesias, Paolo Bonzini,
    Richard Henderson, Eduardo Habkost, Michael S. Tsirkin,
    Marcel Apfelbaum

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Expose handle_bufioreq in xen_register_ioreq().
This is to allow machines to enable or disable buffered ioreqs.

No functional change since all callers still set it to
HVM_IOREQSRV_BUFIOREQ_ATOMIC.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/i386/xen/xen-hvm.c           |   4 +-
 hw/xen/xen-hvm-common.c         | 101 ++++++++++++++++++++------------
 hw/xen/xen-pvh-common.c         |   4 +-
 include/hw/xen/xen-hvm-common.h |   3 +
 include/hw/xen/xen_native.h     |   3 +-
 5 files changed, 74 insertions(+), 41 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 4f6446600c..d3df488c48 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -614,7 +614,9 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state = g_new0(XenIOState, 1);
 
-    xen_register_ioreq(state, max_cpus, &xen_memory_listener);
+    xen_register_ioreq(state, max_cpus,
+                       HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+                       &xen_memory_listener);
 
     xen_is_stubdomain = xen_check_stubdomain(state->xenstore);
 
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 3a9d6f981b..3ce994fc3a 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -667,6 +667,8 @@ static int xen_map_ioreq_server(XenIOState *state)
     xen_pfn_t ioreq_pfn;
     xen_pfn_t bufioreq_pfn;
     evtchn_port_t bufioreq_evtchn;
+    unsigned long num_frames = 1;
+    unsigned long frame = 1;
     int rc;
 
     /*
@@ -675,59 +677,79 @@ static int xen_map_ioreq_server(XenIOState *state)
      */
     QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
     QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
+
+    if (state->has_bufioreq) {
+        frame = 0;
+        num_frames = 2;
+    }
     state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
                                         XENMEM_resource_ioreq_server,
-                                        state->ioservid, 0, 2,
+                                        state->ioservid,
+                                        frame, num_frames,
                                         &addr,
                                         PROT_READ | PROT_WRITE, 0);
     if (state->fres != NULL) {
         trace_xen_map_resource_ioreq(state->ioservid, addr);
-        state->buffered_io_page = addr;
-        state->shared_page = addr + XC_PAGE_SIZE;
+        state->shared_page = addr;
+        if (state->has_bufioreq) {
+            state->buffered_io_page = addr;
+            state->shared_page = addr + XC_PAGE_SIZE;
+        }
     } else if (errno != EOPNOTSUPP) {
         error_report("failed to map ioreq server resources: error %d handle=%p",
                      errno, xen_xc);
         return -1;
     }
 
-    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
-                                   (state->shared_page == NULL) ?
-                                   &ioreq_pfn : NULL,
-                                   (state->buffered_io_page == NULL) ?
-                                   &bufioreq_pfn : NULL,
-                                   &bufioreq_evtchn);
-    if (rc < 0) {
-        error_report("failed to get ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        return rc;
-    }
-
-    if (state->shared_page == NULL) {
+    /*
+     * If we fail to map the shared page with xenforeignmemory_map_resource()
+     * or if we're using buffered ioreqs, we need xen_get_ioreq_server_info()
+     * to provide the addresses to map the shared page and/or to get the
+     * event-channel port for buffered ioreqs.
+     */
+    if (state->shared_page == NULL || state->has_bufioreq) {
         trace_xen_map_ioreq_server_shared_page(ioreq_pfn);
+        rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+                                       (state->shared_page == NULL) ?
+                                       &ioreq_pfn : NULL,
+                                       (state->has_bufioreq &&
+                                        state->buffered_io_page == NULL) ?
+                                       &bufioreq_pfn : NULL,
+                                       &bufioreq_evtchn);
+        if (rc < 0) {
+            error_report("failed to get ioreq server info: error %d handle=%p",
+                         errno, xen_xc);
+            return rc;
+        }
 
-        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                  PROT_READ | PROT_WRITE,
-                                                  1, &ioreq_pfn, NULL);
+        if (state->shared_page == NULL) {
+            trace_xen_map_ioreq_server_shared_page(ioreq_pfn);
+
+            state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                      PROT_READ | PROT_WRITE,
+                                                      1, &ioreq_pfn, NULL);
+        }
         if (state->shared_page == NULL) {
             error_report("map shared IO page returned error %d handle=%p",
                          errno, xen_xc);
         }
-    }
 
-    if (state->buffered_io_page == NULL) {
-        trace_xen_map_ioreq_server_buffered_io_page(bufioreq_pfn);
+        if (state->has_bufioreq && state->buffered_io_page == NULL) {
+            trace_xen_map_ioreq_server_buffered_io_page(bufioreq_pfn);
 
-        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                       PROT_READ | PROT_WRITE,
-                                                       1, &bufioreq_pfn,
-                                                       NULL);
-        if (state->buffered_io_page == NULL) {
-            error_report("map buffered IO page returned error %d", errno);
-            return -1;
+            state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                           PROT_READ | PROT_WRITE,
+                                                           1, &bufioreq_pfn,
+                                                           NULL);
+            if (state->buffered_io_page == NULL) {
+                error_report("map buffered IO page returned error %d", errno);
+                return -1;
+            }
         }
     }
 
-    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
+    if (state->shared_page == NULL ||
+        (state->has_bufioreq && state->buffered_io_page == NULL)) {
        return -1;
    }
 
@@ -830,14 +852,15 @@ static void xen_do_ioreq_register(XenIOState *state,
         state->ioreq_local_port[i] = rc;
     }
 
-    rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                          state->bufioreq_remote_port);
-    if (rc == -1) {
-        error_report("buffered evtchn bind error %d", errno);
-        goto err;
+    if (state->has_bufioreq) {
+        rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                              state->bufioreq_remote_port);
+        if (rc == -1) {
+            error_report("buffered evtchn bind error %d", errno);
+            goto err;
+        }
+        state->bufioreq_local_port = rc;
     }
-    state->bufioreq_local_port = rc;
-
     /* Init RAM management */
 #ifdef XEN_COMPAT_PHYSMAP
     xen_map_cache_init(xen_phys_offset_to_gaddr, state);
@@ -865,6 +888,7 @@ err:
 }
 
 void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        uint8_t handle_bufioreq,
                         const MemoryListener *xen_memory_listener)
 {
     int rc;
@@ -883,7 +907,8 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
         goto err;
     }
 
-    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+    state->has_bufioreq = handle_bufioreq != HVM_IOREQSRV_BUFIOREQ_OFF;
+    rc = xen_create_ioreq_server(xen_domid, handle_bufioreq, &state->ioservid);
     if (!rc) {
         xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
     } else {
diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
index 28d7168446..08641fdcec 100644
--- a/hw/xen/xen-pvh-common.c
+++ b/hw/xen/xen-pvh-common.c
@@ -194,7 +194,9 @@ static void xen_pvh_init(MachineState *ms)
     }
 
     xen_pvh_init_ram(s, sysmem);
-    xen_register_ioreq(&s->ioreq, ms->smp.max_cpus, &xen_memory_listener);
+    xen_register_ioreq(&s->ioreq, ms->smp.max_cpus,
+                       HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+                       &xen_memory_listener);
 
     if (s->cfg.virtio_mmio_num) {
         xen_create_virtio_mmio_devices(s);
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
index 3d796235dc..0f586c4384 100644
--- a/include/hw/xen/xen-hvm-common.h
+++ b/include/hw/xen/xen-hvm-common.h
@@ -81,6 +81,8 @@ typedef struct XenIOState {
     QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
 
+    bool has_bufioreq;
+
     Notifier exit;
 } XenIOState;
 
@@ -95,6 +97,7 @@ void xen_device_unrealize(DeviceListener *listener, DeviceState *dev);
 
 void xen_hvm_change_state_handler(void *opaque, bool running, RunState rstate);
 void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        uint8_t handle_bufioreq,
                         const MemoryListener *xen_memory_listener);
 
 void cpu_ioreq_pio(ioreq_t *req);
diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
index 1a5ad693a4..5caf91a616 100644
--- a/include/hw/xen/xen_native.h
+++ b/include/hw/xen/xen_native.h
@@ -464,10 +464,11 @@ static inline void xen_unmap_pcidev(domid_t dom,
 }
 
 static inline int xen_create_ioreq_server(domid_t dom,
+                                          int handle_bufioreq,
                                           ioservid_t *ioservid)
 {
     int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
-                                                HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+                                                handle_bufioreq,
                                                 ioservid);
 
     if (rc == 0) {
-- 
2.43.0
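The frame-selection logic the patch adds to xen_map_ioreq_server() can be modeled in isolation. The sketch below is a hypothetical, self-contained restatement of that decision (the names map_plan and plan_ioreq_mapping are illustrative, not QEMU API): without buffered ioreqs only the synchronous ioreq frame (index 1 in the resource-mapping ABI) is mapped, while with them both frames are mapped starting at frame 0.

```c
#include <stdbool.h>

/* Frame indices fixed by the Xen resource-mapping ABI, matching the
 * QEMU_BUILD_BUG_ON checks in the patch: frame 0 is the buffered-ioreq
 * page, frame 1 the synchronous ioreq page. */
enum { FRAME_BUFIOREQ = 0, FRAME_IOREQ = 1 };

struct map_plan {
    unsigned long frame;      /* first frame to map */
    unsigned long num_frames; /* how many frames to map */
};

/* Model of the patch's logic: map one frame (the synchronous page) when
 * buffered ioreqs are off, both frames when they are on. */
struct map_plan plan_ioreq_mapping(bool has_bufioreq)
{
    struct map_plan p = { .frame = FRAME_IOREQ, .num_frames = 1 };

    if (has_bufioreq) {
        p.frame = FRAME_BUFIOREQ;
        p.num_frames = 2;
    }
    return p;
}
```

This also explains why the patch swaps the order of the shared_page/buffered_io_page assignments: with a single-frame mapping, addr points directly at the synchronous page rather than at the buffered page preceding it.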
* Re: [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq
  2024-09-23 14:55 ` [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq Edgar E. Iglesias
@ 2024-09-24 22:50   ` Stefano Stabellini
  0 siblings, 0 replies; 7+ messages in thread
From: Stefano Stabellini @ 2024-09-24 22:50 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, Paolo Bonzini, Richard Henderson,
    Eduardo Habkost, Michael S. Tsirkin, Marcel Apfelbaum

On Mon, 23 Sep 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>
> Expose handle_bufioreq in xen_register_ioreq().
> This is to allow machines to enable or disable buffered ioreqs.
>
> No functional change since all callers still set it to
> HVM_IOREQSRV_BUFIOREQ_ATOMIC.
>
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

[remainder of quoted patch trimmed]
* [PATCH v2 2/4] hw/xen: xenpvh: Disable buffered IOREQs for ARM
  2024-09-23 14:55 [PATCH v2 0/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
  2024-09-23 14:55 ` [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq Edgar E. Iglesias
@ 2024-09-23 14:55 ` Edgar E. Iglesias
  2024-09-24 22:52   ` Stefano Stabellini
  2024-09-23 14:55 ` [PATCH v2 3/4] hw/xen: xenpvh: Add pci-intx-irq-base property Edgar E. Iglesias
  2024-09-23 14:55 ` [PATCH v2 4/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
  3 siblings, 1 reply; 7+ messages in thread
From: Edgar E. Iglesias @ 2024-09-23 14:55 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, Edgar E. Iglesias, Paolo Bonzini,
    Richard Henderson, Eduardo Habkost, Michael S. Tsirkin,
    Marcel Apfelbaum, qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Add a way to enable/disable buffered IOREQs for PVH machines
and disable them for ARM. ARM does not support buffered
IOREQs nor the legacy way to map IOREQ info pages.

See the following for more details:
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=2fbd7e609e1803ac5e5c26e22aa8e4b5a6cddbb1
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/arch/arm/ioreq.c;h=2e829d2e7f3760401b96fa7c930e2015fb1cf463;hb=HEAD#l138

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/xen-pvh.c                | 3 +++
 hw/i386/xen/xen-pvh.c           | 3 +++
 hw/xen/xen-pvh-common.c         | 2 +-
 include/hw/xen/xen-pvh-common.h | 3 +++
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/arm/xen-pvh.c b/hw/arm/xen-pvh.c
index 04cb9855af..28af3910ea 100644
--- a/hw/arm/xen-pvh.c
+++ b/hw/arm/xen-pvh.c
@@ -66,6 +66,9 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
      */
     mc->max_cpus = GUEST_MAX_VCPUS;
 
+    /* Xen/ARM does not use buffered IOREQs. */
+    xpc->handle_bufioreq = HVM_IOREQSRV_BUFIOREQ_OFF;
+
     /* List of supported features known to work on PVH ARM. */
     xpc->has_tpm = true;
     xpc->has_virtio_mmio = true;
diff --git a/hw/i386/xen/xen-pvh.c b/hw/i386/xen/xen-pvh.c
index 45645667e9..f1f02d3311 100644
--- a/hw/i386/xen/xen-pvh.c
+++ b/hw/i386/xen/xen-pvh.c
@@ -89,6 +89,9 @@ static void xen_pvh_machine_class_init(ObjectClass *oc, void *data)
     /* We have an implementation specific init to create CPU objects. */
     xpc->init = xen_pvh_init;
 
+    /* Enable buffered IOREQs. */
+    xpc->handle_bufioreq = HVM_IOREQSRV_BUFIOREQ_ATOMIC;
+
     /*
      * PCI INTX routing.
      *
diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
index 08641fdcec..76a9b2b945 100644
--- a/hw/xen/xen-pvh-common.c
+++ b/hw/xen/xen-pvh-common.c
@@ -195,7 +195,7 @@ static void xen_pvh_init(MachineState *ms)
 
     xen_pvh_init_ram(s, sysmem);
     xen_register_ioreq(&s->ioreq, ms->smp.max_cpus,
-                       HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+                       xpc->handle_bufioreq,
                        &xen_memory_listener);
 
     if (s->cfg.virtio_mmio_num) {
diff --git a/include/hw/xen/xen-pvh-common.h b/include/hw/xen/xen-pvh-common.h
index bc09eea936..5cdd23c2f4 100644
--- a/include/hw/xen/xen-pvh-common.h
+++ b/include/hw/xen/xen-pvh-common.h
@@ -43,6 +43,9 @@ struct XenPVHMachineClass {
      */
     int (*set_pci_link_route)(uint8_t line, uint8_t irq);
 
+    /* Allow implementations to optionally enable buffered ioreqs. */
+    uint8_t handle_bufioreq;
+
     /*
      * Each implementation can optionally enable features that it
      * supports and are known to work.
-- 
2.43.0
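With this patch, each machine class picks a handle_bufioreq value and xen_register_ioreq() derives has_bufioreq from it. A minimal model of that derivation is sketched below; the BUFIOREQ_* values here are local stand-ins assumed to mirror Xen's HVM_IOREQSRV_BUFIOREQ_* constants (the real definitions live in Xen's public headers), so treat the exact numbers as an assumption.

```c
#include <stdbool.h>

/* Local stand-ins for Xen's HVM_IOREQSRV_BUFIOREQ_* constants
 * (assumed values; the authoritative ones are in Xen's dm_op.h). */
enum {
    BUFIOREQ_OFF    = 0, /* no buffered ioreqs (ARM PVH) */
    BUFIOREQ_LEGACY = 1, /* legacy non-atomic protocol */
    BUFIOREQ_ATOMIC = 2, /* atomic protocol (x86 PVH/HVM) */
};

/* Mirrors the series: anything other than OFF enables buffered ioreqs. */
bool derive_has_bufioreq(int handle_bufioreq)
{
    return handle_bufioreq != BUFIOREQ_OFF;
}
```

So the ARM machine class (BUFIOREQ_OFF) skips both the buffered-page mapping and the buffered event-channel bind, while the x86 classes keep the previous behavior unchanged.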
* Re: [PATCH v2 2/4] hw/xen: xenpvh: Disable buffered IOREQs for ARM
  2024-09-23 14:55 ` [PATCH v2 2/4] hw/xen: xenpvh: Disable buffered IOREQs for ARM Edgar E. Iglesias
@ 2024-09-24 22:52   ` Stefano Stabellini
  0 siblings, 0 replies; 7+ messages in thread
From: Stefano Stabellini @ 2024-09-24 22:52 UTC (permalink / raw)
To: Edgar E. Iglesias
Cc: qemu-devel, sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, Paolo Bonzini, Richard Henderson,
    Eduardo Habkost, Michael S. Tsirkin, Marcel Apfelbaum, qemu-arm

On Mon, 23 Sep 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>
> Add a way to enable/disable buffered IOREQs for PVH machines
> and disable them for ARM. ARM does not support buffered
> IOREQs nor the legacy way to map IOREQ info pages.
>
> See the following for more details:
> https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=2fbd7e609e1803ac5e5c26e22aa8e4b5a6cddbb1
> https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/arch/arm/ioreq.c;h=2e829d2e7f3760401b96fa7c930e2015fb1cf463;hb=HEAD#l138
>
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

[remainder of quoted patch trimmed]
* [PATCH v2 3/4] hw/xen: xenpvh: Add pci-intx-irq-base property
  2024-09-23 14:55 [PATCH v2 0/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
  2024-09-23 14:55 ` [PATCH v2 1/4] hw/xen: Expose handle_bufioreq in xen_register_ioreq Edgar E. Iglesias
  2024-09-23 14:55 ` [PATCH v2 2/4] hw/xen: xenpvh: Disable buffered IOREQs for ARM Edgar E. Iglesias
@ 2024-09-23 14:55 ` Edgar E. Iglesias
  2024-09-23 14:55 ` [PATCH v2 4/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
  3 siblings, 0 replies; 7+ messages in thread
From: Edgar E. Iglesias @ 2024-09-23 14:55 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, Edgar E. Iglesias

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/xen/xen-pvh-common.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/hw/xen/xen-pvh-common.c b/hw/xen/xen-pvh-common.c
index 76a9b2b945..218ac851cf 100644
--- a/hw/xen/xen-pvh-common.c
+++ b/hw/xen/xen-pvh-common.c
@@ -218,6 +218,11 @@ static void xen_pvh_init(MachineState *ms)
             error_report("pci-ecam-size only supports values 0 or 0x10000000");
             exit(EXIT_FAILURE);
         }
+        if (!s->cfg.pci_intx_irq_base) {
+            error_report("PCI enabled but pci-intx-irq-base not set");
+            exit(EXIT_FAILURE);
+        }
+
         xenpvh_gpex_init(s, xpc, sysmem);
     }
 
@@ -273,6 +278,30 @@ XEN_PVH_PROP_MEMMAP(pci_ecam)
 XEN_PVH_PROP_MEMMAP(pci_mmio)
 XEN_PVH_PROP_MEMMAP(pci_mmio_high)
 
+static void xen_pvh_set_pci_intx_irq_base(Object *obj, Visitor *v,
+                                          const char *name, void *opaque,
+                                          Error **errp)
+{
+    XenPVHMachineState *xp = XEN_PVH_MACHINE(obj);
+    uint32_t value;
+
+    if (!visit_type_uint32(v, name, &value, errp)) {
+        return;
+    }
+
+    xp->cfg.pci_intx_irq_base = value;
+}
+
+static void xen_pvh_get_pci_intx_irq_base(Object *obj, Visitor *v,
+                                          const char *name, void *opaque,
+                                          Error **errp)
+{
+    XenPVHMachineState *xp = XEN_PVH_MACHINE(obj);
+    uint32_t value = xp->cfg.pci_intx_irq_base;
+
+    visit_type_uint32(v, name, &value, errp);
+}
+
 void xen_pvh_class_setup_common_props(XenPVHMachineClass *xpc)
 {
     ObjectClass *oc = OBJECT_CLASS(xpc);
@@ -318,6 +347,13 @@ do { \
     OC_MEMMAP_PROP(oc, "pci-ecam", pci_ecam);
     OC_MEMMAP_PROP(oc, "pci-mmio", pci_mmio);
     OC_MEMMAP_PROP(oc, "pci-mmio-high", pci_mmio_high);
+
+    object_class_property_add(oc, "pci-intx-irq-base", "uint32_t",
+                              xen_pvh_get_pci_intx_irq_base,
+                              xen_pvh_set_pci_intx_irq_base,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "pci-intx-irq-base",
+                                          "Set PCI INTX interrupt base line.");
 }
 
 #ifdef CONFIG_TPM
-- 
2.43.0
* [PATCH v2 4/4] hw/arm: xenpvh: Enable PCI for ARM PVH
  2024-09-23 14:55 [PATCH v2 0/4] hw/arm: xenpvh: Enable PCI for ARM PVH Edgar E. Iglesias
  ` (2 preceding siblings ...)
  2024-09-23 14:55 ` [PATCH v2 3/4] hw/xen: xenpvh: Add pci-intx-irq-base property Edgar E. Iglesias
@ 2024-09-23 14:55 ` Edgar E. Iglesias
  3 siblings, 0 replies; 7+ messages in thread
From: Edgar E. Iglesias @ 2024-09-23 14:55 UTC (permalink / raw)
To: qemu-devel
Cc: sstabellini, anthony, paul, peter.maydell, alex.bennee,
    edgar.iglesias, xen-devel, qemu-arm

From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>

Enable PCI support for the ARM Xen PVH machine.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
---
 hw/arm/xen-pvh.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/hw/arm/xen-pvh.c b/hw/arm/xen-pvh.c
index 28af3910ea..33f0dd5982 100644
--- a/hw/arm/xen-pvh.c
+++ b/hw/arm/xen-pvh.c
@@ -39,6 +39,16 @@ static void xen_arm_instance_init(Object *obj)
                                      VIRTIO_MMIO_DEV_SIZE };
 }
 
+static void xen_pvh_set_pci_intx_irq(void *opaque, int intx_irq, int level)
+{
+    XenPVHMachineState *s = XEN_PVH_MACHINE(opaque);
+    int irq = s->cfg.pci_intx_irq_base + intx_irq;
+
+    if (xendevicemodel_set_irq_level(xen_dmod, xen_domid, irq, level)) {
+        error_report("xendevicemodel_set_pci_intx_level failed");
+    }
+}
+
 static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
 {
     XenPVHMachineClass *xpc = XEN_PVH_MACHINE_CLASS(oc);
@@ -69,7 +79,11 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
     /* Xen/ARM does not use buffered IOREQs. */
     xpc->handle_bufioreq = HVM_IOREQSRV_BUFIOREQ_OFF;
 
+    /* PCI INTX delivery. */
+    xpc->set_pci_intx_irq = xen_pvh_set_pci_intx_irq;
+
     /* List of supported features known to work on PVH ARM. */
+    xpc->has_pci = true;
     xpc->has_tpm = true;
     xpc->has_virtio_mmio = true;
 
-- 
2.43.0
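The INTX routing in patch 4 is a straight offset: the guest IRQ line handed to Xen is the machine's pci-intx-irq-base property plus the INTX pin number coming out of the GPEX host bridge. The helper below is a hypothetical standalone restatement of that computation (the function name is illustrative; in QEMU the result feeds xendevicemodel_set_irq_level()).

```c
#include <stdint.h>

/* Model of xen_pvh_set_pci_intx_irq(): translate a PCI INTX pin
 * (0..3 for INTA..INTD) into the guest IRQ line, given the machine's
 * pci-intx-irq-base property. */
int pci_intx_to_guest_irq(uint32_t pci_intx_irq_base, int intx_irq)
{
    return (int)pci_intx_irq_base + intx_irq;
}
```

This is also why patch 3 makes pci-intx-irq-base mandatory when PCI is enabled: with no base, there is no way to place the four INTX lines in the guest's IRQ space.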