From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Sachin Sant <sachinp@in.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>, linuxppc-dev@ozlabs.org
Subject: Re: [OOPS] hugetlbfs tests with 2.6.30-rc8-git1
Date: Sat, 06 Jun 2009 06:17:42 +1000
Message-ID: <1244233062.31984.6.camel@pasglop>
In-Reply-To: <4A290195.3080807@in.ibm.com>
On Fri, 2009-06-05 at 16:59 +0530, Sachin Sant wrote:
> While executing hugetlbfs tests against 2.6.30-rc8-git1 on a
> Power6 box, I observed the following oops message.
> NIP [c000000000038240] .hpte_need_flush+0x1bc/0x2d8
> LR [c0000000000380f0] .hpte_need_flush+0x6c/0x2d8
Weird. I don't really see what happened there.
> Call Trace:
> [c0000000fa8ff710] [c000000000038264] .hpte_need_flush+0x1e0/0x2d8 (unreliable)
> [c0000000fa8ff7d0] [c000000000039fa4] .huge_ptep_get_and_clear+0x40/0x5c
> [c0000000fa8ff850] [c00000000012d46c] .__unmap_hugepage_range+0x178/0x2b8
> [c0000000fa8ff940] [c00000000012d600] .unmap_hugepage_range+0x54/0x88
> [c0000000fa8ff9e0] [c0000000001173a0] .unmap_vmas+0x178/0x8f4
> [c0000000fa8ffb30] [c00000000011cab8] .unmap_region+0xfc/0x1e4
> [c0000000fa8ffc00] [c00000000011e248] .do_munmap+0x2f4/0x38c
> [c0000000fa8ffcc0] [c0000000002f6d74] .SyS_shmdt+0xc0/0x188
> [c0000000fa8ffd70] [c00000000000c430] .sys_ipc+0x274/0x2fc
> [c0000000fa8ffe30] [c000000000008534] syscall_exit+0x0/0x40
> Instruction dump:
> 78090220 2fbd0000 409e0010 7929e0e4 7be00120 4800000c 792945c6 7be00600
> 7d3f0378 7c1cb82e 3d360001 2f800000 <eb898000> 409e0028 7fe3fb78 7f24cb78
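(As a quick first look, the dump words can be fed straight to objdump --
just a sketch, assuming a binutils objdump built with powerpc support;
"insn.bin" is only a scratch file name:

  # the word in <> is the faulting instruction; write it out big-endian
  # (bash/GNU printf understands \x escapes)
  printf '\xeb\x89\x80\x00' > insn.bin
  # disassemble the raw word as 64-bit PowerPC code
  objdump -D -b binary -m powerpc:common64 -EB insn.bin

though a single word without the surrounding code doesn't tell us which
access it belongs to.)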
The call trace looks rather ordinary. In fact, the DAR address doesn't
even look that bad; it depends on how much RAM you have in this
partition, I suppose.
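A quick way to check, for example:

  grep MemTotal /proc/meminfo

on that partition would tell us how much RAM is actually there.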
> I first noticed this with 2.6.30-rc7-git3 on a Power6 machine,
> but could not recreate it on the same machine. Now the problem
> has resurfaced with 2.6.30-rc8 (and with git1 as well) on
> another Power6 box.
>
> I had seen similar failures (although the backtrace was different,
> the crash point was the same) with older kernels, and Mel submitted
> a patch to fix that issue. Here is the link to that patch:
>
> http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-May/071395.html
>
> I have attached the .config.
No, Mel's patch is for a different problem, which has already been
fixed upstream. This is more concerning... I'm not sure what's up,
but could you send me a disassembly of the hpte_need_flush() function
from your kernel binary, so I can see exactly which access caused the
fault?
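Something along these lines should do it (a sketch, assuming the
matching vmlinux with symbols is still around; on ppc64 the text
symbol carries a leading dot, as in the trace):

  # with gdb
  gdb -batch -ex 'disassemble hpte_need_flush' vmlinux

  # or with objdump, extracting just that one function
  objdump -d vmlinux | sed -n '/<\.hpte_need_flush>:/,/^$/p'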
Cheers,
Ben.