From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: "Kevin Wolf" <kwolf@redhat.com>,
"Hanna Reitz" <hreitz@redhat.com>,
"Stefano Stabellini" <sstabellini@kernel.org>,
"Anthony Perard" <anthony.perard@citrix.com>,
"Paul Durrant" <paul@xen.org>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Eduardo Habkost" <eduardo@habkost.net>,
"David Woodhouse" <dwmw2@infradead.org>,
"Marcelo Tosatti" <mtosatti@redhat.com>,
qemu-block@nongnu.org, xen-devel@lists.xenproject.org,
kvm@vger.kernel.org
Subject: [PATCH 02/12] hw/xen: select kernel mode for per-vCPU event channel upcall vector
Date: Mon, 16 Oct 2023 16:18:59 +0100
Message-ID: <20231016151909.22133-3-dwmw2@infradead.org>
In-Reply-To: <20231016151909.22133-1-dwmw2@infradead.org>
From: David Woodhouse <dwmw@amazon.co.uk>

A guest which has configured the per-vCPU upcall vector may set the
HVM_PARAM_CALLBACK_IRQ param to pretty much anything other than zero.
For example, Linux v6.0+ after commit b1c3497e604 ("x86/xen: Add support
for HVMOP_set_evtchn_upcall_vector") will just do this after setting the
vector:

        /* Trick toolstack to think we are enlightened. */
        if (!cpu)
                rc = xen_set_callback_via(1);

That explicitly sets the delivery to GSI#1, but it is supposed to be
overridden by the per-vCPU vector setting. This mostly works in QEMU
*except* for the logic which enables the in-kernel handling of event
channels, which falsely concludes that the kernel cannot accelerate
GSI delivery in this case.

Add a kvm_xen_has_vcpu_callback_vector() helper to report whether vCPU#0
has the vector set, and use it in xen_evtchn_set_callback_param() to
enable the kernel acceleration features even when the param *appears*
to target a GSI.

Preserve the Xen behaviour that when HVM_PARAM_CALLBACK_IRQ is set to
*zero*, event channel delivery is disabled completely. (That is what the
bizarre guest behaviour above is working around in the first place.)

Fixes: 91cce756179 ("hw/xen: Add xen_evtchn device for event channel emulation")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
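(A rough sketch of the guest-side sequence being worked around here, in
pseudocode rather than the exact Linux code: each vCPU registers an
upcall vector, and then vCPU#0 alone pokes the "legacy" callback param
with a nonzero dummy value.)

    /* On every vCPU: ask Xen to deliver event channel upcalls via a
     * local APIC vector on that vCPU. */
    for_each_online_cpu(cpu)
            HVMOP_set_evtchn_upcall_vector(cpu, HYPERVISOR_CALLBACK_VECTOR);

    /* Only on vCPU#0: set HVM_PARAM_CALLBACK_IRQ to a nonzero value
     * (GSI#1) purely so the toolstack considers the guest enlightened;
     * the per-vCPU vectors are supposed to take precedence over it. */
    HVMOP_set_param(HVM_PARAM_CALLBACK_IRQ, 1);
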
 hw/i386/kvm/xen_evtchn.c  | 6 ++++++
 include/sysemu/kvm_xen.h  | 1 +
 target/i386/kvm/xen-emu.c | 7 +++++++
 3 files changed, 14 insertions(+)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index 4df973022c..d72dca6591 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -490,6 +490,12 @@ int xen_evtchn_set_callback_param(uint64_t param)
         break;
     }
 
+    /* If the guest has set a per-vCPU callback vector, prefer that. */
+    if (gsi && kvm_xen_has_vcpu_callback_vector()) {
+        in_kernel = kvm_xen_has_cap(EVTCHN_SEND);
+        gsi = 0;
+    }
+
     if (!ret) {
         /* If vector delivery was turned *off* then tell the kernel */
         if ((s->callback_param >> CALLBACK_VIA_TYPE_SHIFT) ==
diff --git a/include/sysemu/kvm_xen.h b/include/sysemu/kvm_xen.h
index 595abfbe40..961c702c4e 100644
--- a/include/sysemu/kvm_xen.h
+++ b/include/sysemu/kvm_xen.h
@@ -22,6 +22,7 @@
 int kvm_xen_soft_reset(void);
 uint32_t kvm_xen_get_caps(void);
 void *kvm_xen_get_vcpu_info_hva(uint32_t vcpu_id);
+bool kvm_xen_has_vcpu_callback_vector(void);
 void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type);
 void kvm_xen_set_callback_asserted(void);
 int kvm_xen_set_vcpu_virq(uint32_t vcpu_id, uint16_t virq, uint16_t port);
diff --git a/target/i386/kvm/xen-emu.c b/target/i386/kvm/xen-emu.c
index b49a840438..477e93cd92 100644
--- a/target/i386/kvm/xen-emu.c
+++ b/target/i386/kvm/xen-emu.c
@@ -424,6 +424,13 @@ void kvm_xen_set_callback_asserted(void)
     }
 }
 
+bool kvm_xen_has_vcpu_callback_vector(void)
+{
+    CPUState *cs = qemu_get_cpu(0);
+
+    return cs && !!X86_CPU(cs)->env.xen_vcpu_callback_vector;
+}
+
 void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type)
 {
     CPUState *cs = qemu_get_cpu(vcpu_id);
--
2.40.1
Thread overview: 56+ messages
2023-10-16 15:18 [PATCH 0/12] Get Xen PV shim running in qemu David Woodhouse
2023-10-16 15:18 ` [PATCH 01/12] i386/xen: fix per-vCPU upcall vector for Xen emulation David Woodhouse
2023-10-24 12:16 ` Paul Durrant
2023-10-24 12:58 ` David Woodhouse
2023-10-16 15:18 ` David Woodhouse [this message]
2023-10-24 12:29 ` [PATCH 02/12] hw/xen: select kernel mode for per-vCPU event channel upcall vector Paul Durrant
2023-10-24 13:20 ` David Woodhouse
2023-10-16 15:19 ` [PATCH 03/12] include: update Xen public headers to Xen 4.17.2 release David Woodhouse
2023-10-24 12:30 ` Paul Durrant
2023-10-16 15:19 ` [PATCH 04/12] i386/xen: advertise XEN_HVM_CPUID_UPCALL_VECTOR in CPUID David Woodhouse
2023-10-24 12:32 ` Paul Durrant
2023-10-16 15:19 ` [PATCH 05/12] hw/xen: populate store frontend nodes with XenStore PFN/port David Woodhouse
2023-10-24 12:35 ` Paul Durrant
2023-10-24 12:53 ` David Woodhouse
2023-10-16 15:19 ` [PATCH 06/12] hw/xen: add get_frontend_path() method to XenDeviceClass David Woodhouse
2023-10-24 12:42 ` Paul Durrant
2023-10-24 12:56 ` David Woodhouse
2023-10-24 12:59 ` Paul Durrant
2023-10-24 13:29 ` David Woodhouse
2023-10-24 13:37 ` Paul Durrant
2023-10-25 8:30 ` David Woodhouse
2023-11-21 12:25 ` David Woodhouse
2023-10-16 15:19 ` [PATCH 07/12] hw/xen: update Xen console to XenDevice model David Woodhouse
2023-10-24 13:07 ` Paul Durrant
2023-10-16 15:19 ` [PATCH 08/12] hw/xen: do not repeatedly try to create a failing backend device David Woodhouse
2023-10-24 13:19 ` Paul Durrant
2023-10-16 15:19 ` [PATCH 09/12] hw/xen: prevent duplicate device registrations David Woodhouse
2023-10-24 14:10 ` Paul Durrant
2023-10-24 14:38 ` David Woodhouse
2023-10-16 15:19 ` [PATCH 10/12] hw/xen: automatically assign device index to console devices David Woodhouse
2023-10-16 15:19 ` [PATCH 11/12] hw/xen: automatically assign device index to block devices David Woodhouse
2023-10-17 10:21 ` Kevin Wolf
2023-10-17 18:02 ` David Woodhouse
2023-10-18 7:32 ` Igor Mammedov
2023-10-18 8:32 ` David Woodhouse
2023-10-23 9:30 ` Igor Mammedov
2023-10-23 9:42 ` David Woodhouse
2023-10-23 9:42 ` David Woodhouse
2023-10-23 13:45 ` Kevin Wolf
2023-10-18 8:52 ` Kevin Wolf
2023-10-18 10:52 ` David Woodhouse
2023-10-19 11:21 ` Kevin Wolf
2023-10-20 17:47 ` David Woodhouse
2023-10-18 23:13 ` David Woodhouse
2023-10-16 15:19 ` [PATCH 12/12] hw/xen: add support for Xen primary console in emulated mode David Woodhouse
2023-10-24 14:20 ` Paul Durrant
2023-10-24 15:37 ` David Woodhouse
2023-10-24 15:39 ` Paul Durrant
2023-10-24 15:49 ` David Woodhouse
2023-10-24 16:25 ` Paul Durrant
2023-10-24 16:34 ` David Woodhouse
2023-10-25 8:31 ` Paul Durrant
2023-10-25 9:00 ` David Woodhouse
2023-10-25 10:44 ` Paul Durrant
2023-10-24 15:24 ` [PATCH 0/12] Get Xen PV shim running in qemu Alex Bennée
2023-10-24 16:11 ` David Woodhouse