public inbox for linux-kernel@vger.kernel.org
From: "Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com>
To: Jerome Glisse <glisse@freedesktop.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com>,
	suresh.b.siddha@intel.com
Subject: Re: PAT wc & vmap mapping count issue ?
Date: Thu, 30 Jul 2009 12:17:57 -0700	[thread overview]
Message-ID: <20090730191757.GA16239@linux-os.sc.intel.com> (raw)
In-Reply-To: <1248973593.2462.35.camel@localhost>

On Thu, Jul 30, 2009 at 10:06:33AM -0700, Jerome Glisse wrote:
> On Thu, 2009-07-30 at 13:11 +0200, Jerome Glisse wrote:
> > Hello,
> > 
> > I think I am facing a PAT issue: the code at the bottom of this mail
> > leads to a mapping count problem such as the one shown below. Is my
> > test code buggy? If so, what is wrong with it? Otherwise, how could I
> > track this down? (Tested with the latest Linus tree.) Note that the
> > mapping count is sometimes negative and sometimes positive but
> > without a proper mapping.
> > 
> > (With AMD Athlon(tm) Dual Core Processor 4450e)
> > 
> > Note that the bad page may take a while to show up; 256 pages is a
> > bit too little, so either increasing that or running a memory-hungry
> > task helps trigger the bug faster.
> > 
> > Cheers,
> > Jerome
> > 
> > Jul 30 11:12:36 localhost kernel: BUG: Bad page state in process bash pfn:6daed
> > Jul 30 11:12:36 localhost kernel: page:ffffea0001b6bb40 flags:4000000000000000 count:1 mapcount:1 mapping:(null) index:6d8
> > Jul 30 11:12:36 localhost kernel: Pid: 1876, comm: bash Not tainted 2.6.31-rc2 #30
> > Jul 30 11:12:36 localhost kernel: Call Trace:
> > Jul 30 11:12:36 localhost kernel: [<ffffffff81098570>] bad_page+0xf8/0x10d
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810997aa>] get_page_from_freelist+0x357/0x475
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810a72e3>] ? cond_resched+0x9/0xb
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810a9958>] ? copy_page_range+0x4cc/0x558
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810999e0>] __alloc_pages_nodemask+0x118/0x562
> > Jul 30 11:12:36 localhost kernel: [<ffffffff812a92c3>] ? _spin_unlock_irq+0xe/0x11
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810a9dda>] alloc_pages_node.clone.0+0x14/0x16
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810aa0b1>] do_wp_page+0x2d5/0x57d
> > Jul 30 11:12:36 localhost kernel: [<ffffffff810aac00>] handle_mm_fault+0x586/0x5e0
> > Jul 30 11:12:36 localhost kernel: [<ffffffff812ab635>] do_page_fault+0x20a/0x21f
> > Jul 30 11:12:36 localhost kernel: [<ffffffff812a968f>] page_fault+0x1f/0x30
> > Jul 30 11:12:36 localhost kernel: Disabling lock debugging due to kernel taint
> > 
> > #define NPAGEST 256
> > void test_wc(void)
> > {
> >         struct page *pages[NPAGEST];
> >         int i;
> >         void *virt;
> > 
> >         for (i = 0; i < NPAGEST; i++) {
> >                 pages[i] = NULL;
> >         }
> >         for (i = 0; i < NPAGEST; i++) {
> >                 pages[i] = alloc_page(__GFP_DMA32 | GFP_USER);
> >                 if (pages[i] == NULL) {
> >                         printk(KERN_ERR "Failed allocating page %d\n", i);
> >                         goto out_free;
> >                 }
> >                 if (!PageHighMem(pages[i]))
> >                         if (set_memory_wc((unsigned long)page_address(pages[i]), 1)) {
> >                                 printk(KERN_ERR "Failed setting page %d wc\n", i);
> >                                 goto out_free;
> >                         }
> >         }
> >         virt = vmap(pages, NPAGEST, 0, pgprot_writecombine(PAGE_KERNEL));
> >         if (virt == NULL) {
> >                 printk(KERN_ERR "Failed vmapping\n");
> >                 goto out_free;
> >         }
> >         vunmap(virt);
> > out_free:
> >         for (i = 0; i < NPAGEST; i++) {
> >                 if (pages[i]) {
> >                         if (!PageHighMem(pages[i]))
> >                                 set_memory_wb((unsigned long)page_address(pages[i]), 1);
> >                         __free_page(pages[i]);
> >                 }
> >         }
> > }
> 
> vmap doesn't seem to be involved in the corruption; simply
> setting some pages with set_memory_wc() is enough to trigger it.
> 
> 

This seems to be a regression from changeset
3869c4aa18835c8c61b44bd0f3ace36e9d3b5bd0

The test patch below should fix the problem. Can you please try it and let us know?
We can then send a cleaner patch, with a changelog etc., to the upstream and stable kernels.

Thanks,
Venki

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---
 arch/x86/mm/pageattr.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1b734d7..895d90e 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -997,12 +997,15 @@ EXPORT_SYMBOL(set_memory_array_uc);
 int _set_memory_wc(unsigned long addr, int numpages)
 {
 	int ret;
+	unsigned long addr_copy = addr;
+
 	ret = change_page_attr_set(&addr, numpages,
 				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
-
 	if (!ret) {
-		ret = change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_WC), 0);
+		ret = change_page_attr_set_clr(&addr_copy, numpages,
+					       __pgprot(_PAGE_CACHE_WC),
+					       __pgprot(_PAGE_CACHE_MASK),
+					       0, 0, NULL);
 	}
 	return ret;
 }
-- 
1.6.0.6




Thread overview: 6+ messages
2009-07-30 11:11 PAT wc & vmap mapping count issue ? Jerome Glisse
2009-07-30 17:06 ` Jerome Glisse
2009-07-30 18:01   ` Pallipadi, Venkatesh
2009-07-30 18:48     ` Jerome Glisse
2009-07-30 19:17   ` Pallipadi, Venkatesh [this message]
2009-07-30 20:04     ` Jerome Glisse
