From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: wei.liu2@citrix.com, andrew.cooper3@citrix.com,
ian.jackson@eu.citrix.com, jbeulich@suse.com,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
roger.pau@citrix.com
Subject: [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl
Date: Tue, 3 Jan 2017 09:04:09 -0500
Message-ID: <1483452256-2879-6-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1483452256-2879-1-git-send-email-boris.ostrovsky@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* Adjusted to the patch 4 changes.
* Added a spinlock for VCPU map access
* Return an error on guest trying to write VCPU map
xen/arch/x86/hvm/acpi.c | 57 +++++++++++++++++++++++++++++++++++-----
xen/include/asm-x86/hvm/domain.h | 1 +
2 files changed, 52 insertions(+), 6 deletions(-)
diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
index f0a84f9..9f0578e 100644
--- a/xen/arch/x86/hvm/acpi.c
+++ b/xen/arch/x86/hvm/acpi.c
@@ -7,17 +7,22 @@
#include <xen/lib.h>
#include <xen/sched.h>
+#include <asm/guest_access.h>
+
#include <public/arch-x86/xen.h>
-static int acpi_cpumap_access_common(struct domain *d, bool is_write,
- unsigned int port,
+static int acpi_cpumap_access_common(struct domain *d, bool is_guest_access,
+ bool is_write, unsigned int port,
unsigned int bytes, uint32_t *val)
{
unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
+ int rc = X86EMUL_OKAY;
BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
> ACPI_GPE0_BLK_ADDRESS_V1);
+ spin_lock(&d->arch.hvm_domain.acpi_lock);
+
if ( !is_write )
{
uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
@@ -32,23 +37,61 @@ static int acpi_cpumap_access_common(struct domain *d, bool is_write,
memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
}
+ else if ( !is_guest_access )
+ memcpy((uint8_t *)d->avail_vcpus + first_byte, val,
+ min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
else
/* Guests do not write CPU map */
- return X86EMUL_UNHANDLEABLE;
+ rc = X86EMUL_UNHANDLEABLE;
- return X86EMUL_OKAY;
+ spin_unlock(&d->arch.hvm_domain.acpi_lock);
+
+ return rc;
}
int hvm_acpi_domctl_access(struct domain *d,
const struct xen_domctl_acpi_access *access)
{
- return -ENOSYS;
+ unsigned int bytes, i;
+ uint32_t val = 0;
+ uint8_t *ptr = (uint8_t *)&val;
+ int rc;
+ bool is_write = (access->rw == XEN_DOMCTL_ACPI_WRITE) ? true : false;
+
+ if ( has_acpi_dm_ff(d) )
+ return -EOPNOTSUPP;
+
+ if ( access->space_id != XEN_ACPI_SYSTEM_IO )
+ return -EINVAL;
+
+ if ( !((access->address >= XEN_ACPI_CPU_MAP) &&
+ (access->address < XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN)) )
+ return -ENODEV;
+
+ for ( i = 0; i < access->width; i += sizeof(val) )
+ {
+ bytes = (access->width - i > sizeof(val)) ?
+ sizeof(val) : access->width - i;
+
+ if ( is_write && copy_from_guest_offset(ptr, access->val, i, bytes) )
+ return -EFAULT;
+
+ rc = acpi_cpumap_access_common(d, false, is_write,
+ access->address, bytes, &val);
+ if ( rc )
+ return rc;
+
+ if ( !is_write && copy_to_guest_offset(access->val, i, ptr, bytes) )
+ return -EFAULT;
+ }
+
+ return 0;
}
static int acpi_cpumap_guest_access(int dir, unsigned int port,
unsigned int bytes, uint32_t *val)
{
- return acpi_cpumap_access_common(current->domain,
+ return acpi_cpumap_access_common(current->domain, true,
(dir == IOREQ_WRITE) ? true : false,
port, bytes, val);
}
@@ -148,6 +191,8 @@ void hvm_acpi_init(struct domain *d)
sizeof(d->arch.hvm_domain.acpi.pm1a_sts) +
sizeof(d->arch.hvm_domain.acpi.pm1a_en),
acpi_guest_access);
+
+ spin_lock_init(&d->arch.hvm_domain.acpi_lock);
}
/*
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 07815b6..438ea12 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -111,6 +111,7 @@ struct hvm_domain {
*/
#define hvm_hw_acpi hvm_hw_pmtimer
struct hvm_hw_acpi acpi;
+ spinlock_t acpi_lock;
/* VCPU which is current target for 8259 interrupts. */
struct vcpu *i8259_target;
--
2.7.4
Thread overview: 24+ messages
2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
2017-01-03 18:21 ` Daniel De Graaf
2017-01-03 20:51 ` Konrad Rzeszutek Wilk
2017-01-03 14:04 ` [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only Boris Ostrovsky
2017-01-03 16:55 ` Jan Beulich
2017-01-03 14:04 ` [PATCH v6 03/12] pvh/acpi: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 04/12] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
2017-01-03 14:04 ` Boris Ostrovsky [this message]
2017-07-31 14:14 ` [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl Ross Lagerwall
2017-07-31 14:59 ` Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 06/12] events/x86: Define SCI virtual interrupt Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 07/12] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
2017-01-04 10:36 ` Wei Liu
2017-01-03 14:04 ` [PATCH v6 09/12] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 10/12] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 11/12] pvh/acpi: Save ACPI registers for PVH guests Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
2017-01-03 16:58 ` Jan Beulich
2017-01-03 19:33 ` Boris Ostrovsky
2017-01-04 9:26 ` Jan Beulich
2017-01-03 18:19 ` Stefano Stabellini
2017-01-03 20:31 ` Boris Ostrovsky