public inbox for linux-kernel@vger.kernel.org
From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>, "mingo@elte.hu" <mingo@elte.hu>,
	"tglx@linutronix.de" <tglx@linutronix.de>
Subject: Re: [Xen-devel] Re: [PATCH 1/5] xen: events: use irq_alloc_desc(_at) instead of open-coding an IRQ allocator.
Date: Tue, 26 Oct 2010 13:20:38 -0700	[thread overview]
Message-ID: <4CC73816.50108@goop.org> (raw)
In-Reply-To: <alpine.DEB.2.00.1010261847150.1407@kaball-desktop>

 On 10/26/2010 12:49 PM, Stefano Stabellini wrote:
> On Tue, 26 Oct 2010, Ian Campbell wrote:
>> On Mon, 2010-10-25 at 19:02 +0100, Ian Campbell wrote:
>>>
>>>> What do you see when you pass in a PCI device and say give the guest
>>>> 32 CPUs??
>>>
>>> I can try tomorrow and see; based on what you say above, without
>>> implementing what I described, I suspect the answer will be "carnage".
>> Actually, it looks like multi-vcpu is broken; I only see 1 regardless of
>> how many I configured. It's not clear if this is breakage in Linus'
>> tree, something I pulled in from one of Jeremy's, yours, or Stefano's
>> trees, or some local pebcak. I'll investigate...
>  
> I found the bug, it was introduced by:
>
> "xen: use vcpu_ops to setup cpu masks"
>
> I have added the fix at the end of my branch and I am also appending the
> fix here.

Acked.

    J

> ---
>
>
> xen: initialize cpu masks for pv guests in xen_smp_init
>
> PV guests don't have ACPI and need the cpu masks to be set
> correctly as early as possible, so we call xen_fill_possible_map from
> xen_smp_init.
> The initial domain, on the other hand, does have ACPI, so in that case we
> skip xen_fill_possible_map and rely on ACPI instead. However, Xen might
> still limit the number of cpus usable by the domain, so we filter those
> masks during smp initialization using the VCPUOP_is_up hypercall.
> It is important that the filtering is done before
> xen_setup_vcpu_info_placement.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 1386767..834dfeb 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -28,6 +28,7 @@
>  #include <asm/xen/interface.h>
>  #include <asm/xen/hypercall.h>
>  
> +#include <xen/xen.h>
>  #include <xen/page.h>
>  #include <xen/events.h>
>  
> @@ -156,6 +157,25 @@ static void __init xen_fill_possible_map(void)
>  {
>  	int i, rc;
>  
> +	if (xen_initial_domain())
> +		return;
> +
> +	for (i = 0; i < nr_cpu_ids; i++) {
> +		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
> +		if (rc >= 0) {
> +			num_processors++;
> +			set_cpu_possible(i, true);
> +		}
> +	}
> +}
> +
> +static void __init xen_filter_cpu_maps(void)
> +{
> +	int i, rc;
> +
> +	if (!xen_initial_domain())
> +		return;
> +
>  	num_processors = 0;
>  	disabled_cpus = 0;
>  	for (i = 0; i < nr_cpu_ids; i++) {
> @@ -179,6 +199,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
>  	   old memory can be recycled */
>  	make_lowmem_page_readwrite(xen_initial_gdt);
>  
> +	xen_filter_cpu_maps();
>  	xen_setup_vcpu_info_placement();
>  }
>  
> @@ -195,8 +216,6 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
>  	if (xen_smp_intr_init(0))
>  		BUG();
>  
> -	xen_fill_possible_map();
> -
>  	if (!alloc_cpumask_var(&xen_cpu_initialized_map, GFP_KERNEL))
>  		panic("could not allocate xen_cpu_initialized_map\n");
>  
> @@ -487,5 +506,6 @@ static const struct smp_ops xen_smp_ops __initdata = {
>  void __init xen_smp_init(void)
>  {
>  	smp_ops = xen_smp_ops;
> +	xen_fill_possible_map();
>  	xen_init_spinlocks();
>  }
>



Thread overview: 29+ messages
     [not found] <tip-77dff1c755c3218691e95e7e38ee14323b35dbdb@git.kernel.org>
2010-10-16  0:15 ` [tip:irq/core] x86: xen: Sanitise sparse_irq handling Jeremy Fitzhardinge
2010-10-16  0:17   ` [Xen-devel] " Jeremy Fitzhardinge
2010-10-16  2:01     ` H. Peter Anvin
2010-10-25 16:22       ` [PATCH 00/05] xen: events: cleanups after irq core improvements (Was: Re: [Xen-devel] Re: [tip:irq/core] x86: xen: Sanitise sparse_irq handling) Ian Campbell
2010-10-25 16:23         ` [PATCH 1/5] xen: events: use irq_alloc_desc(_at) instead of open-coding an IRQ allocator Ian Campbell
2010-10-25 17:35           ` Konrad Rzeszutek Wilk
2010-10-25 18:02             ` Ian Campbell
2010-10-26  8:15               ` [Xen-devel] " Ian Campbell
2010-10-26 19:49                 ` Stefano Stabellini
2010-10-26 20:20                   ` Jeremy Fitzhardinge [this message]
2010-10-25 23:03             ` Jeremy Fitzhardinge
2010-10-25 23:05               ` H. Peter Anvin
2010-10-25 23:21                 ` Jeremy Fitzhardinge
2010-10-26 14:17               ` [Xen-devel] " Konrad Rzeszutek Wilk
2010-10-26 16:44                 ` Jeremy Fitzhardinge
2010-10-26 17:08                   ` Konrad Rzeszutek Wilk
2010-10-28 12:43                     ` Stefano Stabellini
2010-10-28 16:22                       ` Jeremy Fitzhardinge
2010-10-25 16:23         ` [PATCH 2/5] xen: events: turn irq_info constructors into initialiser functions Ian Campbell
2010-10-25 16:23         ` [PATCH 3/5] xen: events: push setup of irq<->{evtchn,pirq} maps into irq_info init functions Ian Campbell
2010-10-26 14:31           ` Konrad Rzeszutek Wilk
2010-10-25 16:23         ` [PATCH 4/5] xen: events: dynamically allocate irq info structures Ian Campbell
2010-10-26 14:30           ` Konrad Rzeszutek Wilk
2010-10-26 16:37             ` Jeremy Fitzhardinge
2010-10-25 16:23         ` [PATCH 5/5] xen: events: use per-cpu variable for cpu_evtchn_mask Ian Campbell
2010-10-26 14:36           ` Konrad Rzeszutek Wilk
2010-10-26 14:50             ` Ian Campbell
2010-10-25 23:03         ` [PATCH 00/05] xen: events: cleanups after irq core improvements (Was: Re: [Xen-devel] Re: [tip:irq/core] x86: xen: Sanitise sparse_irq handling) Jeremy Fitzhardinge
2010-10-26  7:25           ` Ian Campbell
