From: Stefano Stabellini <sstabellini@kernel.org>
To: peter.maydell@linaro.org
Cc: sstabellini@kernel.org, qemu-devel@nongnu.org,
vikram.garhwal@amd.com, richard.henderson@linaro.org,
Stefano Stabellini <stefano.stabellini@amd.com>,
Paul Durrant <paul@xen.org>
Subject: [PULL v4 06/10] hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration failure
Date: Fri, 9 Jun 2023 10:07:47 -0700
Message-ID: <20230609170751.4059054-6-sstabellini@kernel.org>
In-Reply-To: <alpine.DEB.2.22.394.2306091007210.3803068@ubuntu-linux-20-04-desktop>
From: Stefano Stabellini <stefano.stabellini@amd.com>
On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If the IOREQ server creation fails,
continue with the PV backend initialization.

Also, move the IOREQ registration and mapping code into a new function,
xen_do_ioreq_register().
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
hw/xen/xen-hvm-common.c | 57 +++++++++++++++++++++++++++--------------
1 file changed, 38 insertions(+), 19 deletions(-)
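
For readers skimming the diff, here is a minimal, self-contained sketch of
the resulting control flow that can be compiled and run outside of QEMU.
All names in it (create_ioreq_server, do_ioreq_register, pv_backends_init,
register_ioreq) are hypothetical stand-ins rather than the real QEMU
symbols, and the real xen_create_ioreq_server() returns 0 on success,
which is collapsed into a bool here:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Hypothetical stand-ins that only model success/failure, so the
 * control flow can be exercised in isolation. */
static bool create_ioreq_server(void) { return false; /* pretend it fails */ }
static void do_ioreq_register(void)   { puts("ioreq server registered"); }
static void pv_backends_init(void)    { puts("PV backends initialised"); }

/* Mirrors the shape of the patched xen_register_ioreq(): a failed
 * IOREQ server creation is only a warning, and the PV backend setup
 * (xen_bus_init()/xen_be_init() in the real code) still runs. */
static void register_ioreq(void)
{
    if (create_ioreq_server()) {
        do_ioreq_register();
    } else {
        fprintf(stderr, "warning: failed to create ioreq server\n");
    }
    pv_backends_init();
}

int main(void)
{
    register_ioreq();
    return EXIT_SUCCESS;
}

The point this models is that a missing IOREQ server merely degrades the
machine to PV-only operation instead of aborting startup; the other setup
failures in the patch (event channel open, xenstore open) remain fatal.
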
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index a31b067404..cb82f4b83d 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -764,27 +764,12 @@ void xen_shutdown_fatal_error(const char *fmt, ...)
qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
}
-void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
- MemoryListener xen_memory_listener)
+static void xen_do_ioreq_register(XenIOState *state,
+ unsigned int max_cpus,
+ MemoryListener xen_memory_listener)
{
int i, rc;
- setup_xen_backend_ops();
-
- state->xce_handle = qemu_xen_evtchn_open();
- if (state->xce_handle == NULL) {
- perror("xen: event channel open");
- goto err;
- }
-
- state->xenstore = xs_daemon_open();
- if (state->xenstore == NULL) {
- perror("xen: xenstore open");
- goto err;
- }
-
- xen_create_ioreq_server(xen_domid, &state->ioservid);
-
state->exit.notify = xen_exit_notifier;
qemu_add_exit_notifier(&state->exit);
@@ -849,12 +834,46 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
QLIST_INIT(&state->dev_list);
device_listener_register(&state->device_listener);
+ return;
+
+err:
+ error_report("xen hardware virtual machine initialisation failed");
+ exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+ MemoryListener xen_memory_listener)
+{
+ int rc;
+
+ setup_xen_backend_ops();
+
+ state->xce_handle = qemu_xen_evtchn_open();
+ if (state->xce_handle == NULL) {
+ perror("xen: event channel open");
+ goto err;
+ }
+
+ state->xenstore = xs_daemon_open();
+ if (state->xenstore == NULL) {
+ perror("xen: xenstore open");
+ goto err;
+ }
+
+ rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+ if (!rc) {
+ xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
+ } else {
+ warn_report("xen: failed to create ioreq server");
+ }
+
xen_bus_init();
xen_be_init();
return;
+
err:
- error_report("xen hardware virtual machine initialisation failed");
+ error_report("xen hardware virtual machine backend registration failed");
exit(1);
}
--
2.25.1