From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
wei.liu2@citrix.com, ian.jackson@eu.citrix.com,
roger.pau@citrix.com
Subject: Re: [PATCH v5 06/13] x86/domctl: Handle ACPI access from domctl
Date: Tue, 20 Dec 2016 09:45:06 -0500
Message-ID: <dc5ba321-0de5-9f78-ab71-bc6b12e98db3@oracle.com>
In-Reply-To: <58593F33020000780012AF7C@prv-mh.provo.novell.com>
On 12/20/2016 08:24 AM, Jan Beulich wrote:
>
>> -static int acpi_access_common(struct domain *d,
>> +static int acpi_access_common(struct domain *d, bool is_guest_access,
> Why? I thought the domctl is needed only for updating the CPU
> map? Or maybe it would help if the patch had a non-empty
> description.
The domctl updates both the map and the status. I.e., in the toolstack it
looks like:
    /* Update VCPU map. */
    rc = xc_acpi_iowrite(CTX->xch, domid, XEN_ACPI_CPU_MAP,
                         cpumap->size, cpumap->map);
    if (!rc) {
        /* Send an SCI. */
        uint16_t val = 1 << XEN_ACPI_GPE0_CPUHP_BIT;
        rc = xc_acpi_iowrite(CTX->xch, domid, ACPI_GPE0_BLK_ADDRESS_V1,
                             sizeof(val), &val);
    }
I'll note in the commit message that both are accessed.
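
For illustration, here is a minimal, self-contained sketch of why the
common handler needs to know who is writing. All names and the exact
register semantics here are assumed for the example, not taken from the
patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed register state for the sketch. */
    struct acpi_state {
        uint16_t gpe0_sts;      /* GPE0 status register */
        uint16_t gpe0_en;       /* GPE0 enable register */
    };

    /*
     * A guest write to GPE0 status follows the ACPI write-one-to-clear
     * convention, while a toolstack write (via the domctl) is assumed
     * to set status bits so that an event such as the CPU-hotplug SCI
     * can be signaled to the guest.
     */
    static void gpe0_sts_write(struct acpi_state *s, bool is_guest_access,
                               uint16_t val)
    {
        if (is_guest_access)
            s->gpe0_sts &= ~val;    /* write-1-to-clear, per ACPI */
        else
            s->gpe0_sts |= val;     /* toolstack latches the event */
    }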
OTOH, maybe an update to the map should itself trigger the SCI, rather
than requiring the toolstack to do so?
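
If we went that way, the toolstack side above would collapse to a single
call; a sketch, reusing the call from the snippet above:

    /*
     * Sketch: with the hypervisor raising the SCI on a map update,
     * the explicit GPE0 write above would go away.
     */
    rc = xc_acpi_iowrite(CTX->xch, domid, XEN_ACPI_CPU_MAP,
                         cpumap->size, cpumap->map);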
-boris
Thread overview: 44+ messages
2016-12-16 23:18 [PATCH v5 00/13] PVH VCPU hotplug support Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 01/13] x86/pmtimer: Move ACPI registers from PMTState to hvm_domain Boris Ostrovsky
2016-12-19 14:12 ` Jan Beulich
2016-12-16 23:18 ` [PATCH v5 02/13] acpi/x86: Define ACPI IO registers for PVH guests Boris Ostrovsky
2016-12-20 18:07 ` Julien Grall
2016-12-16 23:18 ` [PATCH v5 03/13] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
2016-12-19 14:17 ` Jan Beulich
2016-12-19 14:48 ` Boris Ostrovsky
2016-12-19 14:53 ` Jan Beulich
2016-12-16 23:18 ` [PATCH v5 04/13] pvh/acpi: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
2016-12-20 11:24 ` Jan Beulich
2016-12-20 14:03 ` Boris Ostrovsky
2016-12-20 14:10 ` Jan Beulich
2016-12-20 14:16 ` Boris Ostrovsky
2016-12-20 14:45 ` Jan Beulich
2016-12-20 14:55 ` Andrew Cooper
2016-12-20 15:31 ` Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 05/13] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
2016-12-20 11:50 ` Jan Beulich
2016-12-20 14:35 ` Boris Ostrovsky
2016-12-20 14:47 ` Jan Beulich
2016-12-20 15:29 ` Boris Ostrovsky
2016-12-20 15:41 ` Jan Beulich
2016-12-20 16:46 ` Andrew Cooper
2016-12-20 16:51 ` Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 06/13] x86/domctl: Handle ACPI access from domctl Boris Ostrovsky
2016-12-20 13:24 ` Jan Beulich
2016-12-20 14:45 ` Boris Ostrovsky [this message]
2016-12-20 14:52 ` Jan Beulich
2016-12-16 23:18 ` [PATCH v5 07/13] events/x86: Define SCI virtual interrupt Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 08/13] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
2016-12-20 13:37 ` Jan Beulich
2016-12-20 14:54 ` Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 09/13] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
2017-01-04 10:34 ` Wei Liu
2017-01-04 13:53 ` Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 10/13] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug Boris Ostrovsky
2017-01-04 10:35 ` Wei Liu
2016-12-16 23:18 ` [PATCH v5 11/13] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
2016-12-16 23:18 ` [PATCH v5 12/13] pvh/acpi: Save ACPI registers for PVH guests Boris Ostrovsky
2016-12-20 13:57 ` Jan Beulich
2016-12-20 15:09 ` Boris Ostrovsky
2016-12-20 15:40 ` Jan Beulich
2016-12-16 23:18 ` [PATCH v5 13/13] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky