From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
wei.liu2@citrix.com, andrew.cooper3@citrix.com,
ian.jackson@eu.citrix.com, Julien Grall <julien.grall@arm.com>,
jbeulich@suse.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
roger.pau@citrix.com
Subject: [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event
Date: Mon, 21 Nov 2016 16:00:46 -0500 [thread overview]
Message-ID: <1479762047-29431-11-git-send-email-boris.ostrovsky@oracle.com> (raw)
In-Reply-To: <1479762047-29431-1-git-send-email-boris.ostrovsky@oracle.com>
.. and update GPE0 registers.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
Changes in v3:
* Add per-arch arch_update_avail_vcpus() (nop for ARM)
* Make send_guest_global_virq() non-static and update it to cope
  with an offlined VCPU0.
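
(For context, not part of this patch: on the guest side the SCI causes
the ACPI interpreter to scan the GPE0 block, run the _Exx method for any
bit that is both enabled and pending, and acknowledge the event by
writing 1 back to the status bit. A minimal C sketch of that sequence
follows, assuming the PVH GPE0 block layout introduced earlier in this
series; the port addresses, the bit position and the helpers
inb()/outb()/acpi_evaluate_e02() are illustrative placeholders, not Xen
or guest-kernel code.)

#include <stdint.h>

/* Placeholders for the guest OS's port I/O and AML evaluation hooks. */
extern uint8_t inb(uint16_t port);
extern void outb(uint8_t val, uint16_t port);
extern void acpi_evaluate_e02(void);       /* runs the guest's _E02 method */

#define GPE0_STS_PORT 0xb000               /* hypothetical GPE0 status byte */
#define GPE0_EN_PORT  (GPE0_STS_PORT + 2)  /* hypothetical GPE0 enable byte */
#define CPUHP_BIT     2                    /* stands in for XEN_GPE0_CPUHP_BIT */

/* Called from the guest's SCI handler. */
void handle_gpe0(void)
{
    uint8_t sts = inb(GPE0_STS_PORT);
    uint8_t en  = inb(GPE0_EN_PORT);

    if ( (sts & en) & (1u << CPUHP_BIT) )
    {
        /* _E02 rescans the online-VCPU map and notifies the OS. */
        acpi_evaluate_e02();

        /* GPE status bits are write-1-to-clear: acknowledge the event. */
        outb(1u << CPUHP_BIT, GPE0_STS_PORT);
    }
}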
 xen/arch/arm/domain.c      |  5 +++++
 xen/arch/x86/domain.c      | 16 ++++++++++++++++
 xen/common/domctl.c        |  1 +
 xen/common/event_channel.c |  7 +++++--
 xen/include/xen/domain.h   |  1 +
 xen/include/xen/event.h    |  8 ++++++++
 6 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 7e43691..19af326 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -549,6 +549,11 @@ void vcpu_destroy(struct vcpu *v)
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
 }
 
+int arch_update_avail_vcpus(struct domain *d)
+{
+    return 0;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1bd5eb6..c0c0d4f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -509,6 +509,22 @@ void vcpu_destroy(struct vcpu *v)
         xfree(v->arch.pv_vcpu.trap_ctxt);
 }
 
+int arch_update_avail_vcpus(struct domain *d)
+{
+    /*
+     * For PVH guests we need to send an SCI and set enable/status
+     * bits in GPE block.
+     */
+    if ( is_hvm_domain(d) && !has_acpi_ff(d) )
+    {
+        d->arch.hvm_domain.acpi_io.gpe[2] =
+            d->arch.hvm_domain.acpi_io.gpe[0] = 1 << XEN_GPE0_CPUHP_BIT;
+        send_guest_global_virq(d, VIRQ_SCI);
+    }
+
+    return 0;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 626f2cb..2ae6a91 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1179,6 +1179,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         xfree(d->avail_vcpus);
         d->avail_vcpus = avail_vcpus;
+        ret = arch_update_avail_vcpus(d);
         break;
     }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 638dc5e..1d77373 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -727,7 +727,7 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
     spin_unlock_irqrestore(&v->virq_lock, flags);
 }
 
-static void send_guest_global_virq(struct domain *d, uint32_t virq)
+void send_guest_global_virq(struct domain *d, uint32_t virq)
 {
     unsigned long flags;
     int port;
@@ -739,7 +739,10 @@ static void send_guest_global_virq(struct domain *d, uint32_t virq)
     if ( unlikely(d == NULL) || unlikely(d->vcpu == NULL) )
         return;
 
-    v = d->vcpu[0];
+    /* Send to first available VCPU */
+    for_each_vcpu(d, v)
+        if ( is_vcpu_online(v) )
+            break;
     if ( unlikely(v == NULL) )
         return;
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index bce0ea1..b386038 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -52,6 +52,7 @@ void vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_update_avail_vcpus(struct domain *d);
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config);
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 5008c80..74bd605 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -23,6 +23,14 @@
 void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq);
 
 /*
+ * send_guest_global_virq: Notify guest via a global VIRQ.
+ * @d:        domain to which virtual IRQ should be sent. First
+ *            online VCPU will be selected.
+ * @virq:     Virtual IRQ number (VIRQ_*)
+ */
+void send_guest_global_virq(struct domain *d, uint32_t virq);
+
+/*
  * send_global_virq: Notify the domain handling a global VIRQ.
  * @virq: Virtual IRQ number (VIRQ_*)
  */
--
2.7.4