xen-devel.lists.xenproject.org archive mirror
From: Keir Fraser <keir.fraser@eu.citrix.com>
To: "Jiang, Yunhong" <yunhong.jiang@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [PATCH] Allocate vmcs pages when system booting
Date: Fri, 15 Jan 2010 10:28:30 +0000	[thread overview]
Message-ID: <C775F3CE.66CF%keir.fraser@eu.citrix.com> (raw)
In-Reply-To: <C8EDE645B81E5141A8C6B2F73FD926511497CCC0B7@shzsmsx501.ccr.corp.intel.com>

Another example where a hotplug notifier would be handy. Can you add a VMX
hook in the path bringing the CPU up (e.g., cpu_up, or do_boot_cpu), kind-of
equivalent to what would happen if we had CPU_UP_PREPARE notifiers?

From there you can allocate memory in a safe (irqs enabled, on a cpu that is
already fully online) context.

This is the right direction for when notifiers get added -- I think I will
sort that out after 4.0 is done.

Btw, I am away rest of today and not back on email until Monday. I can
discuss further then. But we'll make sure this is sorted one way or another
before 4.0.0-rc2.

 -- Keir

On 15/01/2010 09:06, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

> Hi, Keir, recently a bug was reported in VMCS page allocation for a newly
> added CPU. When we try to allocate the VMCS page, if we need to flush the
> TLB for the allocated page, we hit the ASSERT in flush_area_mask in the
> following code.
> 
> So my questions are:
> a) Why do we need to make sure IRQs are enabled in flush_area_mask? I assume
> it is because this function may take a long time waiting for all CPUs to
> flush their TLBs; am I right?
> b) Can we instead return to the original patch, i.e. pre-allocate the VMCS
> pages for all possible CPUs?
> 
> Thanks
> Yunhong Jiang
> 
> void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int
> flags)
> {
>     ASSERT(local_irq_is_enabled());
> 
>     if ( cpu_isset(smp_processor_id(), *mask) )
>         flush_area_local(va, flags);
> 
> The trace for the error:
> ----------------------
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016d40b>] flush_area_mask+0x2c/0x140
> (XEN)    [<ffff82c480114da7>] alloc_heap_pages+0x44f/0x493
> (XEN)    [<ffff82c48011642b>] alloc_domheap_pages+0x115/0x186
> (XEN)    [<ffff82c4801164ec>] alloc_xenheap_pages+0x50/0xd8
> (XEN)    [<ffff82c4801b3b4f>] vmx_alloc_vmcs+0x18/0x65
> (XEN)    [<ffff82c4801b45e9>] vmx_cpu_up+0x4d3/0x72a
> (XEN)    [<ffff82c4801b83fc>] start_vmx+0x86/0x144
> (XEN)    [<ffff82c480191703>] init_intel+0x2a2/0x2ff
> (XEN)    [<ffff82c480191094>] identify_cpu+0xc3/0x23d
> (XEN)    [<ffff82c48016d7b4>] smp_store_cpu_info+0x3b/0xc1
> (XEN)    [<ffff82c48016d93b>] smp_callin+0x101/0x227
> (XEN)    [<ffff82c48016ec85>] start_secondary+0xb3/0x43f
> (XEN)    
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 56:
> (XEN) Assertion 'local_irq_is_enabled()' failed at smp.c:223
> 
> 
> 
>> -----Original Message-----
>> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>> Sent: Thursday, November 12, 2009 7:23 PM
>> To: Jiang, Yunhong; xen-devel@lists.xensource.com
>> Subject: Re: [PATCH] Allocate vmcs pages when system booting
>> 
>> On 12/11/2009 10:52, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>> 
>>> Currently the VMCS page is allocated when the CPU is brought up and
>>> identified, and then spin_debug_enable() is called.
>>> 
>>> This does not work for CPU hot-add: when a hot-added CPU is brought up
>>> and tries to allocate the VMCS page, it hits check_lock, because IRQs
>>> are still disabled.
>>> 
>>> This patch allocates the VMCS pages for all possible CPUs at system
>>> boot, so that we no longer allocate a VMCS when a secondary CPU comes up.
>>> 
>>> Signed-off-by: Jiang, Yunhong <yunhong.jiang@intel.com>
>> 
>> A general point: using cpu_possible_map is not a good idea any longer, since
>> it is going to be all-ones as soon as your physical cpu hotplug patches go
>> in (I don't intend to make that a build-time option). Hence we could
>> potentially massively over-allocate pages with this approach.
>> 
>> The good news is that, when bringing a CPU online, we don't need to worry
>> about acquiring IRQ-unsafe locks with IRQS disabled until the CPU is added
>> to the cpu_online_map. This is because the race we are trying to avoid with
>> the spin-debug checks involves rendezvous of CPUs via IPIs, and a CPU not in
>> cpu_online_map will never get waited on by others.
>> 
>> So, your better fix would be to spin_debug_disable() at the top of
>> start_secondary(), and spin_debug_enable() just before
>> cpu_set(...online_map).
>> 
>> Can you try this alternative fix please?
>> 
>> -- Keir
>> 
> 

Thread overview: 16+ messages
2009-11-12 10:52 [PATCH] Allocate vmcs pages when system booting Jiang, Yunhong
2009-11-12 11:22 ` Keir Fraser
2009-11-12 14:58   ` Jiang, Yunhong
2009-11-12 15:04     ` Keir Fraser
2009-11-12 15:15       ` Jiang, Yunhong
2010-01-15  9:06   ` Jiang, Yunhong
2010-01-15  9:30     ` Jan Beulich
2010-01-15 10:31       ` Keir Fraser
2010-01-17  2:02         ` Jiang, Yunhong
2010-01-17 21:35           ` Keir Fraser
2010-01-18  8:11             ` Keir Fraser
2010-01-18  8:22               ` Jiang, Yunhong
2010-03-18 10:45         ` Jan Beulich
2010-03-18 10:51           ` Keir Fraser
2010-03-19  2:17           ` Jiang, Yunhong
2010-01-15 10:28     ` Keir Fraser [this message]
