xen-devel.lists.xenproject.org archive mirror
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Cc: wei.liu2@citrix.com, ian.jackson@eu.citrix.com,
	Julien Grall <julien.grall@arm.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [PATCH v2 02/11] acpi: Define ACPI IO registers for PVH guests
Date: Mon, 14 Nov 2016 10:01:40 -0500	[thread overview]
Message-ID: <493a272a-1c22-c951-e9fb-82bb5dfc9364@oracle.com> (raw)
In-Reply-To: <3d336927-229b-368c-b835-c6850886634a@oracle.com>

On 11/09/2016 04:01 PM, Boris Ostrovsky wrote:
> On 11/09/2016 02:58 PM, Andrew Cooper wrote:
>> On 09/11/16 15:14, Boris Ostrovsky wrote:
>>> On 11/09/2016 09:59 AM, Andrew Cooper wrote:
>>>>> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
>>>>> index 2e5809b..e3fa704 100644
>>>>> --- a/xen/include/public/hvm/ioreq.h
>>>>> +++ b/xen/include/public/hvm/ioreq.h
>>>>> @@ -24,6 +24,8 @@
>>>>>  #ifndef _IOREQ_H_
>>>>>  #define _IOREQ_H_
>>>>>  
>>>>> +#include "hvm_info_table.h" /* HVM_MAX_VCPUS */
>>>>> +
>>>>>  #define IOREQ_READ      1
>>>>>  #define IOREQ_WRITE     0
>>>>>  
>>>>> @@ -124,6 +126,17 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>>  
>>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>>> +
>>>>> +/* Location of online VCPU bitmap. */
>>>>> +#define ACPI_CPU_MAP                 0xaf00
>>>>> +#define ACPI_CPU_MAP_LEN             ((HVM_MAX_VCPUS / 8) + \
>>>>> +                                      ((HVM_MAX_VCPUS & 7) ? 1 : 0))
>>>>> +#if ACPI_CPU_MAP + ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>>> +#error "ACPI_CPU_MAP is too big"
>>>>> +#endif
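
As an aside on the length macro quoted above: it is just the per-VCPU bitmap rounded up to whole bytes. A standalone sketch, with HVM_MAX_VCPUS stubbed locally (the real value comes from hvm_info_table.h) and a generic form of the same rounding for illustration:

```c
/* Stub: the real definition lives in hvm_info_table.h. */
#define HVM_MAX_VCPUS 128

/* Generic form of the rounding: bits -> whole bytes, rounding up. */
#define BITMAP_BYTES(nbits) (((nbits) / 8) + (((nbits) & 7) ? 1 : 0))

/* Location and length of the online VCPU bitmap, as in the patch. */
#define ACPI_CPU_MAP                 0xaf00
#define ACPI_CPU_MAP_LEN             ((HVM_MAX_VCPUS / 8) + \
                                      ((HVM_MAX_VCPUS & 7) ? 1 : 0))
```

With 128 VCPUs the bitmap is exactly 16 bytes; a non-multiple-of-8 count picks up the extra byte from the `& 7` term.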
>>>> Why is this in ioreq.h?  It has nothing to do with ioreq's.
>>>>
>>>> The current ACPI bits in here are to do with the qemu ACPI interface,
>>>> not the Xen ACPI interface.
>>>>
>>>> Also, please can we avoid hard-coding the location of the map in the
>>>> hypervisor ABI.  These constants make it impossible to ever extend the
>>>> number of HVM vcpus at a future date.
>>> The first three logically belong here because the corresponding blocks'
>>> addresses are defined right above.
>> They have no relationship to the ones above, other than their name.
> They describe the same object --- for example
> ACPI_PM1A_CNT_BLK_ADDRESS_V1 and (new) ACPI_PM1A_CNT_BLK_LEN describe
> pm1a control.
>
> As far as definitions being there for qemu interface only ---
> ACPI_PM1A_CNT_BLK_ADDRESS_V1, for example, is used only by hvmloader and
> libacpi.
>
>
>>> ACPI_CPU_MAP has to be seen by both the toolstack (libacpi) and the
>>> hypervisor (and qemu as well, although it is defined as
>>> PIIX4_CPU_HOTPLUG_IO_BASE).
>>>
>>> Where do you think it should go then?
>> This highlights a recurring problem in Xen which desperately needs
>> fixing, but still isn't high enough on my TODO list to tackle yet.
>>
>> There is no central registration of claims on domain resources.  This is
>> the root cause of memory accounting problems for HVM guests.
>>
>>
>> The way I planned to fix this was to have Xen maintain a registry of
>> domains physical resources which ends up looking very much like
>> /proc/io{mem,ports}.  There will be a hypercall interface for querying
>> this information, and for a toolstack and device model to modify it.
>>
>> The key point is that Xen needs to be the authoritative source of
>> information pertaining to layout, rather than the current fiasco we have
>> of the toolstack, qemu and hvmloader all thinking they know and control
>> what's going on.  This fixes several current unknowns which have caused
>> real problems, such as whether a domain was told about certain RMRRs
>> when it booted, or how many PXEROMs qemu tried to fit into the physmap.
>>
>> This information (eventually, when I get Xen-level migration v2 sorted)
>> needs to move to the head of the migration stream.
>>
>> The way I would envisage this working is that on domain create, Xen
>> makes a blank map indicating that all space is free.  By selecting
>> X86_EMUL_ACPI_*, Xen takes out an allocation when it wires up the ioport
>> handler.
>>
>> Later, when constructing the ACPI tables, the toolstack reads the
>> current ioport allocations and can see exactly which ports are reserved
>> for what.
>>
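
A minimal sketch of how such a registry might look, purely for illustration: a /proc/ioports-style list of claimed port ranges with overlap rejection. All names here are invented; nothing like this existed in Xen at the time.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical per-domain I/O port registry, /proc/ioports-style.
 * All identifiers are invented for this sketch. */
struct io_claim {
    uint16_t start, end;   /* inclusive port range */
    const char *owner;     /* e.g. "ACPI CPU map", "qemu" */
};

struct io_registry {
    struct io_claim claims[16];
    unsigned int nr;
};

/* Record a claim, refusing any overlap with an existing one.
 * Returns 0 on success, -1 if the range is full or already taken. */
static int io_registry_claim(struct io_registry *r, uint16_t start,
                             uint16_t end, const char *owner)
{
    unsigned int i;

    if (r->nr == sizeof(r->claims) / sizeof(r->claims[0]))
        return -1;
    for (i = 0; i < r->nr; i++)
        if (start <= r->claims[i].end && end >= r->claims[i].start)
            return -1;  /* overlaps an existing claim */
    r->claims[r->nr].start = start;
    r->claims[r->nr].end = end;
    r->claims[r->nr].owner = owner;
    r->nr++;
    return 0;
}

/* Xen would take out an allocation when wiring up the ioport handler;
 * a later overlapping claim by the device model then fails visibly
 * instead of silently colliding.  Returns 0 if both checks hold. */
static int demo(void)
{
    struct io_registry reg;

    memset(&reg, 0, sizeof(reg));
    if (io_registry_claim(&reg, 0xaf00, 0xaf0f, "ACPI CPU map") != 0)
        return 1;
    if (io_registry_claim(&reg, 0xaf08, 0xaf10, "qemu") == 0)
        return 2;  /* overlap should have been rejected */
    return 0;
}
```

The point of the sketch is only the authoritative-bookkeeping idea: whoever asks second for an already-claimed range gets an error rather than a silent collision.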
>>
>> Now, I understand that lumbering you with this work as a prerequisite
>> would be unfair.
>>
>> Therefore, I will accept an alternative of hiding all these definitions
>> behind __XEN_TOOLS__ so the longterm fix can be introduced in a
>> compatible manner in the future.
>
> __XEN_TOOLS__ or (__XEN__ || __XEN_TOOLS__) ? Because both the toolstack
> and the hypervisor want to see them.
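
For concreteness, the interim fix being discussed would look roughly like the guard style already used in Xen's public headers; here `__XEN_TOOLS__` is force-defined only so the fragment can be exercised standalone:

```c
/* Pretend we are the toolstack so the guarded block is visible;
 * in real builds this macro is set by the build system. */
#ifndef __XEN_TOOLS__
#define __XEN_TOOLS__ 1
#endif

/* Expose the new constants only to the hypervisor (__XEN__) and
 * the toolstack (__XEN_TOOLS__), not to guests. */
#if defined(__XEN__) || defined(__XEN_TOOLS__)
#define ACPI_PM1A_EVT_BLK_LEN        0x04
#define ACPI_PM1A_CNT_BLK_LEN        0x02
#define ACPI_PM_TMR_BLK_LEN          0x04
#define ACPI_CPU_MAP                 0xaf00
#endif /* __XEN__ || __XEN_TOOLS__ */
```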
>
>
>> That said, I am still certain that they shouldn't live in ioreq.h, as
>> they have nothing to do with Qemu.
> None of the existing files looks (to me) any more appropriate than
> this one. include/public/arch-x86/xen.h, perhaps?

Andrew, ping on these two questions.

-boris


