From: David Vrabel <david.vrabel@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>,
konrad@kernel.org, xen-devel@lists.xenproject.org,
boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
keir@xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [XEN PATCH 1/2] hvm: Support more than 32 VCPUS when migrating.
Date: Wed, 9 Apr 2014 16:38:37 +0100
Message-ID: <5345697D.8000405@citrix.com>
In-Reply-To: <20140409153444.GA6604@phenom.dumpdata.com>

On 09/04/14 16:34, Konrad Rzeszutek Wilk wrote:
> On Wed, Apr 09, 2014 at 09:37:01AM +0200, Roger Pau Monné wrote:
>> On 08/04/14 20:53, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Apr 08, 2014 at 08:18:48PM +0200, Roger Pau Monné wrote:
>>>> On 08/04/14 19:25, konrad@kernel.org wrote:
>>>>> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>>>
>>>>> When we migrate an HVM guest, by default our shared_info can
>>>>> only hold up to 32 CPUs. As such the hypercall
>>>>> VCPUOP_register_vcpu_info was introduced, which allows us to
>>>>> set up per-page areas for VCPUs. This means we can boot a PVHVM
>>>>> guest with more than 32 VCPUs. During migration the per-cpu
>>>>> structure is allocated afresh by the hypervisor (vcpu_info_mfn
>>>>> is set to INVALID_MFN) so that the newly migrated guest
>>>>> can make the VCPUOP_register_vcpu_info hypercall again.
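>>>>>
>>>>> (For reference, the registration on the Linux side looks roughly
>>>>> like the following, condensed from xen_vcpu_setup(); locals and
>>>>> the shared_info fallback path are elided:)
>>>>>
>>>>>     struct vcpu_register_vcpu_info info;
>>>>>     int err;
>>>>>
>>>>>     /* Machine frame and page offset of this CPU's vcpu_info. */
>>>>>     info.mfn = arbitrary_virt_to_mfn(&per_cpu(xen_vcpu_info, cpu));
>>>>>     info.offset = offset_in_page(&per_cpu(xen_vcpu_info, cpu));
>>>>>
>>>>>     /* Point the hypervisor at the new per-cpu area; on failure
>>>>>      * Linux falls back to the 32-entry shared_info slot. */
>>>>>     err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);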
>>>>>
>>>>> Unfortunately we end up triggering this condition in the hypervisor:
>>>>> /* Run this command on yourself or on other offline VCPUS. */
>>>>> if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
>>>>>
>>>>> which means we are unable to set up the per-cpu VCPU structures
>>>>> for running vCPUs. The Linux PV code paths make this work by
>>>>> iterating over every vCPU and doing the following (see the
>>>>> sketch below):
>>>>>
>>>>> 1) check whether the target vCPU is up (VCPUOP_is_up hypercall)
>>>>> 2) if it is, use VCPUOP_down to pause it
>>>>> 3) use VCPUOP_register_vcpu_info to register the new area
>>>>> 4) if we paused it in step 2, use VCPUOP_up to bring it back up
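>>>>>
>>>>> (Condensed from xen_vcpu_restore() in arch/x86/xen/enlighten.c;
>>>>> error handling kept as BUG(), as in the original:)
>>>>>
>>>>>     for_each_possible_cpu(cpu) {
>>>>>         bool other = (cpu != smp_processor_id());
>>>>>         bool was_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL);
>>>>>
>>>>>         /* 2) pause a remote vCPU that is currently running */
>>>>>         if (other && was_up &&
>>>>>             HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL))
>>>>>             BUG();
>>>>>
>>>>>         /* 3) register the new per-cpu vcpu_info area */
>>>>>         xen_vcpu_setup(cpu);
>>>>>
>>>>>         /* 4) resume it if we paused it above */
>>>>>         if (other && was_up &&
>>>>>             HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
>>>>>             BUG();
>>>>>     }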
>>>>>
>>>>> But since VCPUOP_down, VCPUOP_is_up, and VCPUOP_up are
>>>>> not allowed for HVM guests, we can't do this. This patch
>>>>> enables those operations for HVM guests.
>>>>
>>>> Hmmm, this looks like a very convoluted approach to something that could
>>>> be solved more easily IMHO. What we do on FreeBSD is put all vCPUs into
>>>> suspension, which means that all vCPUs except vCPU#0 will be in the
>>>> cpususpend_handler, see:
>>>>
>>>> http://svnweb.freebsd.org/base/head/sys/amd64/amd64/mp_machdep.c?revision=263878&view=markup#l1460
>>>
>>> How do you 'suspend' them? If I remember correctly, there is a disadvantage
>>> to doing this, as you have to bring all the CPUs "offline". In Linux that
>>> means using stop_machine(), which is a pretty big hammer and increases the
>>> latency of migration.
>>
>> In order to suspend them an IPI_SUSPEND is sent to all vCPUs except vCPU#0:
>>
>> http://fxr.watson.org/fxr/source/kern/subr_smp.c#L289
>>
>> That makes all APs call cpususpend_handler, so we know they are all
>> stuck in a while loop with interrupts disabled:
>>
>> http://fxr.watson.org/fxr/source/amd64/amd64/mp_machdep.c#L1459
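>>
>> (The send side, suspend_cpus() -> generic_stop_cpus(), boils down to
>> roughly this; the cpuset bookkeeping is condensed:)
>>
>>     ipi_selected(map, IPI_SUSPEND);          /* interrupt the APs */
>>     while (CPU_CMP(&suspended_cpus, &map) != 0)
>>         ia32_pause();                        /* wait until all are parked */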
>>
>> Then on resume the APs are taken out of the while loop, and the first
>> thing they do before returning from the IPI handler is register the
>> new per-cpu vcpu_info area. But I'm not sure this is something that
>> can be accomplished easily on Linux.
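>>
>> (And the handler itself, stripped to its essentials; names follow
>> amd64/mp_machdep.c, but the state saving and resume path are
>> condensed:)
>>
>>     static void
>>     cpususpend_handler(void)
>>     {
>>         int cpu = PCPU_GET(cpuid);
>>
>>         savectx(&susppcbs[cpu]->sp_pcb);     /* save CPU state */
>>         CPU_SET_ATOMIC(cpu, &suspended_cpus);
>>
>>         /* park with interrupts disabled until told to resume */
>>         while (!CPU_ISSET(cpu, &started_cpus))
>>             ia32_pause();
>>
>>         /* on resume: re-register vcpu_info, clear the suspended
>>          * bit, and return from the IPI */
>>     }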
>
> That is a bit like what 'stop_machine' does: it puts all of the
> CPUs into whatever function you want. But I am not sure about the
> latency impact - what if the migration takes longer and all of the
> CPUs sit there spinning? Another variant of that is 'smp_call_function'.
I tested stop_machine() on all CPUs during suspend once and it was
awful: 100s of ms of additional downtime.
Perhaps a hand-rolled IPI-and-park-in-handler would be quicker than the
full stop_machine(); something like the sketch below.
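
(Untested sketch of what I mean; "resume_done" and the re-registration
hook are placeholders, not existing code:)

    static atomic_t resume_done;

    /* IPI handler: park here with interrupts disabled. */
    static void park_cpu(void *unused)
    {
        while (!atomic_read(&resume_done))
            cpu_relax();

        /* Re-register this CPU's vcpu_info before returning
         * from the IPI. */
    }

    /* Before suspend: park all other CPUs, don't wait for them. */
    smp_call_function(park_cpu, NULL, 0);

    /* After migration completes, on the boot CPU: release them. */
    atomic_set(&resume_done, 1);
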
David