linux-mm.kvack.org archive mirror
From: Ingo Molnar <mingo@elte.hu>
To: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Jens Osterkamp <Jens.Osterkamp@gmx.de>,
	Christoph Lameter <clameter@sgi.com>,
	linux-mm@kvack.org,
	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: [BUG] in 2.6.25-rc3 with 64k page size and SLUB_DEBUG_ON
Date: Thu, 6 Mar 2008 22:45:16 +0100	[thread overview]
Message-ID: <20080306214516.GB886@elte.hu> (raw)
In-Reply-To: <84144f020803061327r2310b23ew6211bc09f9ba25a3@mail.gmail.com>

* Pekka Enberg <penberg@cs.helsinki.fi> wrote:

> >  Mount-cache hash table entries: 4096
> >  BUG: scheduling while atomic: kthreadd/2/0x00056ef8
> >  Call Trace:
> >  [c00000003c187b68] [c00000000000f140] .show_stack+0x70/0x1bc (unreliable)
> >  [c00000003c187c18] [c000000000052d0c] .__schedule_bug+0x64/0x80
> >  [c00000003c187ca8] [c00000000036fa84] .schedule+0xc4/0x6b0
> >  [c00000003c187d98] [c0000000003702d0] .schedule_timeout+0x3c/0xe8
> >  [c00000003c187e68] [c00000000036f82c] .wait_for_common+0x150/0x22c
> >  [c00000003c187f28] [c000000000074868] .kthreadd+0x12c/0x1f0
> >  [c00000003c187fd8] [c000000000024864] .kernel_thread+0x4c/0x68
> >  ------------[ cut here ]------------
> >  kernel BUG at /home/auto/jens/kernels/linux-2.6.25-rc3/kernel/sched.c:4532!
> >  cpu 0x0: Vector: 700 (Program Check) at [c00000003c187bc8]
> >     pc: c000000000051f8c: .sched_setscheduler+0x5c/0x48c
> >     lr: c0000000000748b0: .kthreadd+0x174/0x1f0
> >     sp: c00000003c187e48
> >    msr: 9000000000029032
> >   current = 0xc00000007e0808a0
> >   paca    = 0xc0000000004cf880
> >     pid   = 2, comm = kthreadd
> >  kernel BUG at /home/auto/jens/kernels/linux-2.6.25-rc3/kernel/sched.c:4532!
> >  enter ? for help
> >  [c00000003c187f28] c0000000000748b0 .kthreadd+0x174/0x1f0
> >  [c00000003c187fd8] c000000000024864 .kernel_thread+0x4c/0x68
> >  0:mon>
> >
> >  In the code this corresponds to:
> >
> >  int sched_setscheduler(struct task_struct *p, int policy,
> >                        struct sched_param *param)
> >  {
> >         int retval, oldprio, oldpolicy = -1, on_rq, running;
> >         unsigned long flags;
> >         const struct sched_class *prev_class = p->sched_class;
> >         struct rq *rq;
> >
> >         /* may grab non-irq protected spin_locks */
> >         BUG_ON(in_interrupt());
> >  recheck:
> >         /* double check policy once rq lock held */
> >         if (policy < 0)
> >                 policy = oldpolicy = p->policy;
> >         else if (policy != SCHED_FIFO && policy != SCHED_RR &&
> >                         policy != SCHED_NORMAL && policy != SCHED_BATCH &&
> >                         policy != SCHED_IDLE)
> >                 return -EINVAL;
> >
> >  With slub_debug=- on the kernel command line, the problem is gone.
> >  With 4k page size the problem also does not occur.
> >
> >  Any ideas on why this occurs and how to debug this further?
> 
> There's no SLUB in the stack traces. Ingo, any suggestions on how to debug this?

hm, no idea - is this powerpc? It seems to have hit an atomicity check 
due to:

> >  BUG: scheduling while atomic: kthreadd/2/0x00056ef8

preempt-count 0x00056ef8 is totally out of whack - that looks like 
serious memory corruption.
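
For reference: preempt_count is a bitfield. In the 2.6.25-era layout 
from include/linux/hardirq.h, bits 0-7 are the preemption-disable 
depth, bits 8-15 the softirq nesting depth, and the bits above that 
the hardirq nesting depth (the exact HARDIRQ_BITS are arch-dependent; 
12 are assumed below). A minimal userspace sketch, under that 
assumption, that decodes the reported value:

	/* decode.c - illustrative decode of a 2.6.25-era preempt_count.
	 * Masks mirror include/linux/hardirq.h; 12 hardirq bits assumed.
	 */
	#include <stdio.h>

	#define PREEMPT_SHIFT	0
	#define SOFTIRQ_SHIFT	8	/* PREEMPT_SHIFT + PREEMPT_BITS */
	#define HARDIRQ_SHIFT	16	/* SOFTIRQ_SHIFT + SOFTIRQ_BITS */

	#define PREEMPT_MASK	(0xffUL  << PREEMPT_SHIFT)
	#define SOFTIRQ_MASK	(0xffUL  << SOFTIRQ_SHIFT)
	#define HARDIRQ_MASK	(0xfffUL << HARDIRQ_SHIFT)

	int main(void)
	{
		unsigned long pc = 0x00056ef8;	/* from the BUG message */

		printf("preempt depth: %lu\n",
		       (pc & PREEMPT_MASK) >> PREEMPT_SHIFT);
		printf("softirq depth: %lu\n",
		       (pc & SOFTIRQ_MASK) >> SOFTIRQ_SHIFT);
		printf("hardirq depth: %lu\n",
		       (pc & HARDIRQ_MASK) >> HARDIRQ_SHIFT);
		return 0;
	}

That decodes to a preemption-disable depth of 248, a softirq nesting 
depth of 110 and a hardirq nesting depth of 5 - none of which is 
remotely plausible for kthreadd this early in boot. It would also 
explain the second BUG: in_interrupt() only tests the hardirq/softirq 
bits of preempt_count, so a corrupt count makes the 
BUG_ON(in_interrupt()) in sched_setscheduler() fire as well.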

	Ingo


Thread overview: 27+ messages
2008-03-06 13:47 [BUG] in 2.6.25-rc3 with 64k page size and SLUB_DEBUG_ON Jens Osterkamp
2008-03-06 19:55 ` Christoph Lameter
2008-03-06 21:07   ` Jens Osterkamp
2008-03-06 21:50     ` Christoph Lameter
2008-03-06 21:52       ` Jens Osterkamp
2008-03-06 21:56         ` Christoph Lameter
2008-03-06 22:00           ` Pekka Enberg
2008-03-06 22:04             ` Pekka Enberg
2008-03-06 22:07             ` Jens Osterkamp
2008-03-06 22:20               ` Christoph Lameter
2008-03-06 22:24                 ` Pekka Enberg
2008-03-07 12:20                   ` Jens Osterkamp
2008-03-07 12:40                     ` Pekka J Enberg
2008-03-07 12:44                       ` Pekka J Enberg
2008-03-07 22:18                         ` Jens Osterkamp
2008-03-07 22:30                       ` Jens Osterkamp
2008-03-07 22:59                         ` Christoph Lameter
2008-03-12 15:19                           ` Jens Osterkamp
2008-03-12 23:34                             ` Christoph Lameter
2008-03-18 16:44                               ` Jens Osterkamp
2008-03-18 17:45                                 ` Christoph Lameter
2008-03-18 17:51                                   ` Pekka Enberg
2008-03-06 22:25                 ` Jens Osterkamp
2008-03-07 22:09                 ` Jens Osterkamp
2008-03-06 22:21           ` Jens Osterkamp
2008-03-06 21:27 ` Pekka Enberg
2008-03-06 21:45   ` Ingo Molnar [this message]
