From: Joao Martins <joao.m.martins@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support
Date: Thu, 3 Mar 2016 09:52:43 +0000	[thread overview]
Message-ID: <56D8096B.7090105@oracle.com> (raw)
In-Reply-To: <56D74708.8010100@citrix.com>



On 03/02/2016 08:03 PM, Andrew Cooper wrote:
> On 02/03/16 19:18, Joao Martins wrote:
>>
>> On 02/25/2016 05:21 PM, Andrew Cooper wrote:
>>> On 22/02/16 21:02, Joao Martins wrote:
>>>> Hey!
>>>>
>>>> This series is a follow-up to the thread about the performance
>>>> of hard-pinned HVM guests. Here we propose allowing libxl to
>>>> change what the CPU topology looks like to the HVM guest, which
>>>> can favor certain workloads, as shown by Elena in this thread [0].
>>>> It shows around a 22-23% gain on I/O-bound workloads when the guest
>>>> vCPUs are hard-pinned to pCPUs with a matching core+thread.
>>>>
>>>> This series is divided as follows:
>>>> * Patch 1     : Sets the initial APIC ID to the vcpuid, as opposed
>>>>                 to vcpuid * 2 for each core (see the sketch below);
>>>> * Patch 2     : Whitespace cleanup;
>>>> * Patch 3     : Adds new leaves to describe Intel/AMD cache
>>>>                 topology, though it's only internal to libxl;
>>>> * Patch 4     : Internal call to set per-package CPUID values;
>>>> * Patch 5 - 8 : Interfaces for xl and libxl for setting the topology.
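
A minimal sketch of the patch 1 change (illustrative C, not the actual
hvmloader/Xen code), assuming nothing beyond what the series description
above states:

    #include <stdint.h>

    /*
     * With vcpu_id * 2, every odd APIC ID is a hole, so the guest
     * always appears to have one thread per core; a dense vcpu_id
     * mapping leaves the layout free to also describe SMT siblings.
     */
    static uint32_t initial_apicid_current(uint32_t vcpu_id)
    {
        return vcpu_id * 2; /* one unused APIC ID per core */
    }

    static uint32_t initial_apicid_proposed(uint32_t vcpu_id)
    {
        return vcpu_id;     /* topology described via CPUID leaves */
    }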
>>>>
>>>> I couldn't quite figure out which user interface was better, so I
>>>> included both: our "smt" option, and a full description of the
>>>> topology, i.e. "sockets", "cores" and "threads" options, same as the
>>>> "-smp" option in QEMU. Note that the latter could also be used by
>>>> libvirt, since the topology is described in its XML configs.
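
For the "sockets"/"cores"/"threads" variant, the implied invariant is the
same one QEMU's -smp enforces. A hedged sketch (the field names are
illustrative, not the RFC's actual libxl API):

    #include <stdbool.h>
    #include <stdint.h>

    /* The topology product must account for every vCPU exactly once. */
    static bool topology_matches_vcpus(uint32_t sockets, uint32_t cores,
                                       uint32_t threads, uint32_t max_vcpus)
    {
        return sockets * cores * threads == max_vcpus;
    }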
>>>>
>>>> It's also an RFC as AMD support isn't implemented yet.
>>>>
>>>> Any comments are appreciated!
>>> Hey.  Sorry I am late getting to this - I am currently swamped.  Some
>>> general observations.
>> Hey Andrew, Thanks for the pointers!
>>
>>> The cpuid policy code in Xen was never re-thought after multi-vcpu
>>> guests were introduced, which means it has no understanding of
>>> per-package, per-core and per-thread values.
>>>
>>> As part of my further cpuid work, I will need to fix this.  I was
>>> planning to fix it by requiring full CPU topology information to be
>>> passed as part of the domaincreate or max_vcpus hypercall (not yet
>>> decided which).  This would include cores-per-package, threads-per-core etc,
>>> and allow Xen to correctly fill in the per-core cpuid values in leaves
>>> 4, 0xB and 80000008.
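
For reference, a sketch of how the per-level shifts in leaf 0xB could be
derived from such topology information (illustrative only, following the
Intel SDM's sub-leaf encoding; this is not Xen's implementation):

    #include <stdint.h>

    static uint32_t order(uint32_t n)       /* ceil(log2(n)) */
    {
        uint32_t shift = 0;
        while ((1u << shift) < n)
            shift++;
        return shift;
    }

    /* Sub-leaf 0 (SMT level): EAX[4:0] = bits occupied by the thread ID. */
    static uint32_t leaf_b_smt_shift(uint32_t threads_per_core)
    {
        return order(threads_per_core);
    }

    /* Sub-leaf 1 (core level): EAX[4:0] = thread bits + core bits. */
    static uint32_t leaf_b_core_shift(uint32_t threads_per_core,
                                      uint32_t cores_per_package)
    {
        return order(threads_per_core) + order(cores_per_package);
    }
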
>> FWIW, CPU topology on domaincreate sounds nice. Or would the max_vcpus
>> hypercall serve other purposes too (CPU hotplug, migration)?
> 
> With cpu hotplug, a guest is still limited to max_vcpus, and this
> hypercall is the second action during domain creation.
OK

> 
> With migration, an empty domain must already exist for the contents of
> the stream to be loaded into.  At a minimum, this means createdomain
> and max_vcpus, usually with a max_mem to avoid it growing arbitrarily
> large.
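
Sketching that sequence with hypothetical wrappers (the underlying
operations are XEN_DOMCTL_createdomain, XEN_DOMCTL_max_vcpus and
XEN_DOMCTL_max_mem; the exact libxc signatures are deliberately not
reproduced here):

    #include <stdint.h>

    /* Hypothetical wrappers around the domctls named above. */
    int create_domain(uint32_t *domid);
    int set_max_vcpus(uint32_t domid, uint32_t max_vcpus);
    int set_max_mem(uint32_t domid, uint64_t max_mem_kb);

    static int receive_side_prepare(uint32_t max_vcpus, uint64_t max_mem_kb)
    {
        uint32_t domid;

        if (create_domain(&domid))           /* 1: empty shell */
            return -1;
        if (set_max_vcpus(domid, max_vcpus)) /* 2: could carry topology */
            return -1;
        if (set_max_mem(domid, max_mem_kb))  /* bound allocations */
            return -1;
        /* Only now is the migration stream applied to the domain. */
        return 0;
    }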
> 
> One (mis)feature I want to fix is that, currently, the cpuid policy is
> regenerated by the toolstack on the destination of the migration, after
> the cpu state has been reloaded in Xen.  This causes a chicken-and-egg
> problem when checking the validity of guest state, such as %cr4,
> against the guest cpuid policy.
> 
> I wish to fix this by putting the domain's cpuid policy at the head of
> the migration stream, which allows the receiving side to first verify
> that the policy is compatible with the host, and then verify all
> further migration state against the policy.
> 
> Even with this, there will be a chicken-and-egg situation when it comes
> to specifying topology.  The best that we can do is let the toolstack
> recreate it from scratch (from what is hopefully the same domain
> configuration at a higher level), then verify consistency when the
> policy is loaded.
/nods Thanks for educating on this.
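
A sketch of that receive-side ordering, with hypothetical record types
and helpers (nothing below is the actual migration-stream API):

    #include <stdbool.h>

    struct cpuid_policy;                     /* opaque here */
    struct record { int type; void *body; };

    bool policy_compatible_with_host(const struct cpuid_policy *p);
    bool record_valid_against_policy(const struct record *r,
                                     const struct cpuid_policy *p);

    static bool restore_stream(const struct cpuid_policy *policy,
                               const struct record *recs, unsigned int nr)
    {
        unsigned int i;

        /* Reject before loading any state into Xen. */
        if (!policy_compatible_with_host(policy))
            return false;

        /* Then every later record (e.g. %cr4 state) is checked. */
        for (i = 0; i < nr; i++)
            if (!record_valid_against_policy(&recs[i], policy))
                return false;

        return true;
    }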

> 
>>
>>> In particular, I am concerned about giving the toolstack the ability to
>>> blindly control the APIC IDs.  Their layout is very closely linked to
>>> topology, and in particular to the HTT flag.
>>>
>>> Overall, I want to avoid any possibility of generating APIC layouts
>>> (including the emulated IOAPIC with HVM guests) which don't conform to
>>> the appropriate AMD/Intel manuals.
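
To illustrate the conformance concern: a manual-compliant APIC ID packs
each topology level into a power-of-two-aligned bit field, which
arbitrary toolstack-chosen IDs can easily violate (illustrative sketch,
not Xen code):

    #include <stdint.h>

    static uint32_t order(uint32_t n)       /* ceil(log2(n)) */
    {
        uint32_t shift = 0;
        while ((1u << shift) < n)
            shift++;
        return shift;
    }

    static uint32_t conformant_apic_id(uint32_t pkg, uint32_t core,
                                       uint32_t thread,
                                       uint32_t cores_per_pkg,
                                       uint32_t threads_per_core)
    {
        uint32_t t_bits = order(threads_per_core);
        uint32_t c_bits = order(cores_per_pkg);

        /* Thread in the low bits, then core, then package. */
        return (pkg << (c_bits + t_bits)) | (core << t_bits) | thread;
    }
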
>> I see, so overall having Xen control the topology would be a better
>> approach than "mangling" the APIC IDs in the cpuid policy as I am
>> proposing. One good thing about Xen handling the topology bits would
>> be for Intel CPUs with CPUID faulting support, where PV guests could
>> also see the topology info. And given that word 10 of hw_caps won't be
>> exposed (as per your CPUID series), handling the PV case in the cpuid
>> policy wouldn't be as clean.
> 
> Which word do you mean here?  Even before my series, Xen only had 9
> words in hw_cap.
Hm, I used the wrong nomenclature here: what I meant was the 10th feature
word of boot_cpu_data.x86_capability (since sysctl/libxl are capped to 8
words only), which in the header files is word 9 in your series
(previously moved from word 3). It's the one meant for "Other features,
Linux-defined mapping", where X86_FEATURE_CPUID_FAULTING is defined.

Joao


Thread overview: 29+ messages
2016-02-22 21:02 [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support Joao Martins
2016-02-22 21:02 ` [PATCH RFC 1/8] x86/hvm: set initial apicid to vcpu_id Joao Martins
2016-02-25 17:03   ` Jan Beulich
2016-03-02 18:49     ` Joao Martins
2016-02-22 21:02 ` [PATCH RFC 2/8] libxl: remove whitespace on libxl_types.idl Joao Martins
2016-02-25 16:28   ` Wei Liu
2016-03-02 19:14     ` Joao Martins
2016-02-22 21:02 ` [PATCH RFC 3/8] libxl: cpuid: add cache core count support Joao Martins
2016-02-22 21:02 ` [PATCH RFC 4/8] libxl: cpuid: add guest topology support Joao Martins
2016-02-25 16:29   ` Wei Liu
2016-03-02 19:14     ` Joao Martins
2016-02-22 21:02 ` [PATCH RFC 5/8] libxl: introduce smt field Joao Martins
2016-02-25 16:29   ` Wei Liu
2016-02-22 21:02 ` [PATCH RFC 6/8] xl: introduce smt option Joao Martins
2016-02-22 21:02 ` [PATCH RFC 7/8] libxl: introduce topology fields Joao Martins
2016-02-25 16:29   ` Wei Liu
2016-03-02 19:16     ` Joao Martins
2016-02-22 21:02 ` [PATCH RFC 8/8] xl: introduce topology options Joao Martins
2016-02-25 17:21 ` [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support Andrew Cooper
2016-02-26 15:03   ` Dario Faggioli
2016-02-26 15:27     ` Konrad Rzeszutek Wilk
2016-02-26 15:42       ` Dario Faggioli
2016-02-26 15:48         ` Andrew Cooper
2016-03-02 19:18   ` Joao Martins
2016-03-02 20:03     ` Andrew Cooper
2016-03-03  9:52       ` Joao Martins [this message]
2016-03-03 10:24         ` Andrew Cooper
2016-03-03 12:23           ` Joao Martins
2016-03-03 12:48             ` Andrew Cooper
