linuxppc-dev.lists.ozlabs.org archive mirror
From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>,
	riel@redhat.com, mgorman@suse.de
Cc: Peter Zijlstra <peterz@infradead.org>,
	paulus@samba.org, linuxppc-dev@lists.ozlabs.org,
	linux-mm@kvack.org
Subject: Panic on ppc64 with numa_balancing and !sparsemem_vmemmap
Date: Wed, 19 Feb 2014 23:32:00 +0530	[thread overview]
Message-ID: <20140219180200.GA29257@linux.vnet.ibm.com> (raw)


On a powerpc machine with CONFIG_NUMA_BALANCING=y and CONFIG_SPARSEMEM_VMEMMAP
not enabled, the kernel panics.

This is true of kernel versions from 3.13 up to the latest commit 960dfc4, which is
3.14-rc3+; i.e. the three recent fixups from Aneesh don't seem to help this case.

Sometimes it fails during boot itself; otherwise a kernel compile is enough
to trigger it. I am seeing this on a POWER7 box.

Kernel 3.14.0-rc3-mainline_v313-00168-g960dfc4 on an ppc64

transam2s-lp1 login: qla2xxx [0003:01:00.1]-8038:2: Cable is unplugged...
Unable to handle kernel paging request for data at address 0x00000457
Faulting instruction address: 0xc0000000000d6004
cpu 0x38: Vector: 300 (Data Access) at [c00000171561f700]
    pc: c0000000000d6004: .task_numa_fault+0x604/0xa30
    lr: c0000000000d62fc: .task_numa_fault+0x8fc/0xa30
    sp: c00000171561f980
   msr: 8000000000009032
   dar: 457
 dsisr: 40000000
  current = 0xc0000017155d9b00
  paca    = 0xc00000000ec1e000   softe: 0        irq_happened: 0x00
    pid   = 16898, comm = gzip
enter ? for help
[c00000171561fa70] c0000000001b0fb0 .do_numa_page+0x1b0/0x2a0
[c00000171561fb20] c0000000001b2788 .handle_mm_fault+0x538/0xca0
[c00000171561fc00] c00000000082f498 .do_page_fault+0x378/0x880
[c00000171561fe30] c000000000009568 handle_page_fault+0x10/0x30
--- Exception: 301 (Data Access) at 00000000100031d8
SP (3fffd45ea2d0) is in userspace
38:mon>


(gdb) list *(task_numa_fault+0x604)
0xc0000000000d6004 is in task_numa_fault (/home/srikar/work/linux.git/include/linux/mm.h:753).
748             return cpupid_to_cpu(cpupid) == (-1 & LAST__CPU_MASK);
749     }
750
751     static inline bool __cpupid_match_pid(pid_t task_pid, int cpupid)
752     {
753             return (task_pid & LAST__PID_MASK) == cpupid_to_pid(cpupid);
754     }
755
756     #define cpupid_match_pid(task, cpupid) __cpupid_match_pid(task->pid, cpupid)
757     #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
(gdb) 


However, this doesn't seem to happen if CONFIG_SPARSEMEM_VMEMMAP=y is set in the config.


-- 
Thanks and Regards
Srikar Dronamraju


Thread overview: 4+ messages
2014-02-19 18:02 Srikar Dronamraju [this message]
2014-03-03 17:26 ` Panic on ppc64 with numa_balancing and !sparsemem_vmemmap Mel Gorman
2014-03-03 19:15   ` Aneesh Kumar K.V
2014-03-03 20:04     ` Mel Gorman
