* Re: poll-related "scheduling while atomic", 2.5.44-mm6
From: Paolo Ciarrocchi @ 2002-10-29 22:38 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm
>> So my guess is somewhere between -mm5 and -mm6 we
>> screwed up the atomicity count.
>Mine too. I'll check it out, thanks.
Same here.
>Do you have preemption enabled?
yes
Paolo
--
Powered by Outblaze
* Re: poll-related "scheduling while atomic", 2.5.44-mm6
From: Andrew Morton @ 2002-10-30 6:27 UTC (permalink / raw)
To: Paolo Ciarrocchi, Matt Reppert; +Cc: linux-kernel
Paolo Ciarrocchi wrote:
>
> >> So my guess is somewhere between -mm5 and -mm6 we
> >> screwed up the atomicity count.
> >Mine too. I'll check it out, thanks.
>
> Same here.
>
This'll fix it up. Whoever invented cut-n-paste has a lot to
answer for.
--- 25/mm/swap.c~preempt-count-fix	Tue Oct 29 22:19:54 2002
+++ 25-akpm/mm/swap.c	Tue Oct 29 22:20:16 2002
@@ -90,11 +90,12 @@ void lru_cache_add_active(struct page *p
 void lru_add_drain(void)
 {
-	struct pagevec *pvec = &per_cpu(lru_add_pvecs, get_cpu());
+	int cpu = get_cpu();
+	struct pagevec *pvec = &per_cpu(lru_add_pvecs, cpu);
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
-	pvec = &per_cpu(lru_add_active_pvecs, get_cpu());
+	pvec = &per_cpu(lru_add_active_pvecs, cpu);
 	if (pagevec_count(pvec))
 		__pagevec_lru_add_active(pvec);
 	put_cpu();
 }
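For readers unfamiliar with the per-CPU helpers: get_cpu() disables preemption
(incrementing the task's preempt count) and put_cpu() re-enables it, so the two
must balance exactly. The original lru_add_drain() called get_cpu() twice but
put_cpu() only once, so the count stayed one too high after the function
returned. Below is a compilable userspace sketch of that imbalance; the
fake_get_cpu()/fake_put_cpu() helpers and the preempt_count variable are
illustrative stand-ins, not the kernel's implementation.

#include <stdio.h>

/* Illustrative stand-in for the per-task preempt count that the real
 * get_cpu()/put_cpu() pair manipulates inside the kernel. */
static int preempt_count;

static int fake_get_cpu(void)  { preempt_count++; return 0; }
static void fake_put_cpu(void) { preempt_count--; }

/* The buggy shape: get_cpu() twice, put_cpu() once. */
static void buggy_drain(void)
{
	int cpu = fake_get_cpu();	/* first disable */
	cpu = fake_get_cpu();		/* second disable, never undone */
	fake_put_cpu();			/* only one re-enable */
	(void)cpu;
}

/* The fixed shape from the patch: one get_cpu(), reuse cpu, one put_cpu(). */
static void fixed_drain(void)
{
	int cpu = fake_get_cpu();
	/* ... drain both pagevecs for this cpu ... */
	fake_put_cpu();
	(void)cpu;
}

int main(void)
{
	buggy_drain();
	printf("after buggy_drain: preempt_count = %d\n", preempt_count);	/* 1: leaked */
	preempt_count = 0;
	fixed_drain();
	printf("after fixed_drain: preempt_count = %d\n", preempt_count);	/* 0: balanced */
	return 0;
}

With the count leaked, any later call in the same task that may block (the
GFP_KERNEL kmalloc() and schedule_timeout() in Matt's traces) triggers the
atomicity warnings, even though those call sites are themselves correct.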
I had a crash while testing SMP+preempt btw. Nasty one - took a
pagefault from userspace but do_page_fault() decided that the
fault was in-kernel or something. It fell all the way through
to die() and, well, died. I saw the same happen some months ago.
* poll-related "scheduling while atomic", 2.5.44-mm6
From: Matt Reppert @ 2002-10-29 21:37 UTC (permalink / raw)
To: linux-kernel
Debug: sleeping function called from illegal context at mm/slab.c:1304
Call Trace:
[<c0113f98>] __might_sleep+0x54/0x5c
[<c012e342>] kmem_flagcheck+0x1e/0x50
[<c012ec4b>] kmalloc+0x4b/0x114
[<c014c2cd>] sys_poll+0x91/0x284
[<c0106eb3>] syscall_call+0x7/0xb
This one comes from calling kmalloc with GFP_KERNEL in sys_poll.
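GFP_KERNEL allocations are allowed to block waiting for memory, so the kmalloc()
in sys_poll is legitimate; the check only fires because the task's preempt count
was already nonzero when it got there (the unbalanced get_cpu() that Andrew's
patch above fixes). A compilable toy model of that condition follows; the TOY_*
flags, toy_flagcheck() and preempt_count are illustrative names, not the real
mm/slab.c code, though GFP_KERNEL really does carry a "may wait" bit that
GFP_ATOMIC lacks.

#include <stdio.h>

/* Toy model of the debug check: an allocation that is allowed to wait
 * is only legal when the caller's preempt count is zero. */
#define TOY_GFP_WAIT	0x10		/* stands in for __GFP_WAIT */
#define TOY_GFP_KERNEL	TOY_GFP_WAIT	/* may block */
#define TOY_GFP_ATOMIC	0x00		/* never blocks */

static int preempt_count = 1;	/* as left behind by the lru_add_drain() leak */

static void toy_flagcheck(int flags)
{
	if ((flags & TOY_GFP_WAIT) && preempt_count != 0)
		fprintf(stderr, "Debug: sleeping function called from illegal context\n");
}

int main(void)
{
	toy_flagcheck(TOY_GFP_KERNEL);	/* warns: may sleep while count is raised */
	toy_flagcheck(TOY_GFP_ATOMIC);	/* silent: atomic allocations never sleep */
	preempt_count = 0;
	toy_flagcheck(TOY_GFP_KERNEL);	/* silent: count balanced again */
	return 0;
}

The "scheduling while atomic" report below is the same leak seen from do_poll():
schedule_timeout() blocks, so it trips the equivalent check.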
bad: scheduling while atomic!
Call Trace:
[<c0112ba1>] do_schedule+0x3d/0x2c8
[<c011d14e>] add_timer+0x36/0x124
[<c011ddb0>] schedule_timeout+0x84/0xa4
[<c011dd20>] process_timeout+0x0/0xc
[<c014c216>] do_poll+0xc2/0xe8
[<c014c3ca>] sys_poll+0x18e/0x284
[<c0106eb3>] syscall_call+0x7/0xb
Another little tidbit. I was in X11 while this was happening, and I
happened to stop a process (nautilus) just before I looked in my logs
about this ... and caught a "Notice: process nautilus exited with
preempt_count 2". So my guess is somewhere between -mm5 and -mm6 we
screwed up the atomicity count. (Funny I didn't see that for more
processes, though.)
Matt