From: "Jan Beulich" <JBeulich@novell.com>
To: Tim Deegan <Tim.Deegan@citrix.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [RFC][PATCH] walking the page lists needs the page_alloc lock
Date: Thu, 12 Aug 2010 16:09:24 +0100
Message-ID: <4C642AC4020000780000F8D8@vpn.id2.novell.com>
In-Reply-To: <20100723134913.GQ13291@whitby.uk.xensource.com>
>>> On 23.07.10 at 15:49, Tim Deegan <Tim.Deegan@citrix.com> wrote:
> There are a few places in Xen where we walk a domain's page lists
> without holding the page_alloc lock. They race with updates to the page
> lists, which are normally rare but can be quite common under PoD when
> the domain is close to its memory limit and the PoD reclaimer is busy.
> This patch protects those places by taking the page_alloc lock.
>
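To make the race concrete, a minimal sketch follows. The list and lock
primitives (page_list_for_each(), page_list_del(), d->page_alloc_lock)
are Xen's real ones; the two functions and call sites are illustrative
only, not code from the tree:

/* CPU A walks the domain's page list with no lock held: */
static void unlocked_walk(struct domain *d)
{
    struct page_info *page;

    page_list_for_each ( page, &d->page_list )   /* unsafe */
        printk(" DomPage %p\n", _p(page_to_mfn(page)));
}

/* ... while CPU B, e.g. the PoD reclaimer, updates the same list
 * correctly under the lock, invalidating A's iterator: */
static void locked_update(struct domain *d, struct page_info *pg)
{
    spin_lock(&d->page_alloc_lock);
    page_list_del(pg, &d->page_list);
    spin_unlock(&d->page_alloc_lock);
}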
> I think this is OK for the two debug-key printouts - they don't run from
> irq context and look deadlock-free. The tboot change seems safe too
While the comment says the patch would leave the debug-key printouts
alone, ...
> unless tboot shutdown functions are called from irq context or with the
> page_alloc lock held. The p2m one is the scariest but there are already
> code paths in PoD that take the page_alloc lock with the p2m lock held
> so it's no worse than existing code.
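The ordering constraint relied on here, sketched minimally: p2m lock
first, page_alloc lock second, with every path agreeing on that order.
The primitive names below follow the Xen tree of that era (exact
signatures may differ); the function itself is illustrative:

/* PoD already nests this way, so the p2m change adds no new ordering: */
static void pod_style_path(struct p2m_domain *p2m, struct domain *d)
{
    p2m_lock(p2m);                     /* outer */
    spin_lock(&d->page_alloc_lock);    /* inner, as in existing PoD code */
    /* ... walk/adjust the domain's page lists ... */
    spin_unlock(&d->page_alloc_lock);
    p2m_unlock(p2m);
}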
>
> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
>
> diff -r e8dbc1262f52 xen/arch/x86/domain.c
> --- a/xen/arch/x86/domain.c Wed Jul 21 09:02:10 2010 +0100
> +++ b/xen/arch/x86/domain.c Fri Jul 23 14:33:22 2010 +0100
> @@ -139,12 +139,14 @@ void dump_pageframe_info(struct domain *
... the actual patch still touches one such function. It would seem to
me that this part ought to be reverted.
> }
> else
> {
> + spin_lock(&d->page_alloc_lock);
> page_list_for_each ( page, &d->page_list )
> {
> printk(" DomPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> _p(page_to_mfn(page)),
> page->count_info, page->u.inuse.type_info);
> }
> + spin_unlock(&d->page_alloc_lock);
> }
>
> if ( is_hvm_domain(d) )
> @@ -152,12 +154,14 @@ void dump_pageframe_info(struct domain *
> p2m_pod_dump_data(d);
> }
>
> + spin_lock(&d->page_alloc_lock);
> page_list_for_each ( page, &d->xenpage_list )
> {
> printk(" XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> _p(page_to_mfn(page)),
> page->count_info, page->u.inuse.type_info);
> }
> + spin_unlock(&d->page_alloc_lock);
> }
>
> struct domain *alloc_domain_struct(void)
Sorry for not noticing this earlier.
Jan