public inbox for linux-kernel@vger.kernel.org
From: khromy@lnuxlab.ath.cx (khromy)
To: Andrew Morton <akpm@digeo.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: 2.5.53-mm3: xmms: page allocation failure. order:5, mode:0x20
Date: Sun, 29 Dec 2002 19:51:07 -0500	[thread overview]
Message-ID: <20021230005107.GA25318@lnuxlab.ath.cx> (raw)
In-Reply-To: <3E0F8EF6.3C264886@digeo.com>

On Sun, Dec 29, 2002 at 04:10:30PM -0800, Andrew Morton wrote:
> khromy wrote:
> > 
> > On Sun, Dec 29, 2002 at 03:54:52PM -0800, Andrew Morton wrote:
> > > khromy wrote:
> > > >
> > > > On Sun, Dec 29, 2002 at 12:42:20PM -0800, Andrew Morton wrote:
> > > > > khromy wrote:
> > > > > >
> > > > > > Running 2.5.53-mm3, I found the following in dmesg.  I don't remember
> > > > > > getting anything like this with previous kernels.
> > > > > >
> > > > > > xmms: page allocation failure. order:5, mode:0x20
> > > > >
> > > > > gack.  Someone is requesting 128k of memory with GFP_ATOMIC.  It fell
> > > > > afoul of the reduced memory reserves.  It deserved to.
> > > > >
> > > > > Could you please add this patch, and make sure that you have set
> > > > > CONFIG_KALLSYMS=y?  This will find the culprit.
> > > >
> > > > XFree86: page allocation failure. order:0, mode:0xd0
> > > > Call Trace:
> > > >  [<c012a3dd>] __alloc_pages+0x255/0x264
> > > >  [<c012a414>] __get_free_pages+0x28/0x60
> > > >  [<c012c7e6>] cache_grow+0xb6/0x20c
> > > >  [<c012c9cf>] __cache_alloc_refill+0x93/0x220
> > > >  [<c012cb96>] cache_alloc_refill+0x3a/0x58
> > > >  [<c012cf1d>] kmem_cache_alloc+0x45/0xc8
> > > >  [<c017e36c>] journal_alloc_journal_head+0x10/0x68
> > > >  [<c017e458>] journal_add_journal_head+0x80/0x120
> > >
> > > oops, sorry.  They're all expected.  I'd like to know where
> > > the order-5 failure during xmms usage came from.  Were you
> > > using a CDROM at the time?
> > 
> > Nope, playing mp3s from a hard drive.  I can reproduce it by doing some
> > IO or compiling something and then switching back and forth between
> > workspaces really fast at the same time.
> 
> Ah.  Well could you please add the second patch?  That'll find the source.

It didn't apply on top of the one that you sent earlier, so I compiled
with only the second one.

I got this while applying the second one:
patching file mm/page_alloc.c
Hunk #1 succeeded at 572 (offset 25 lines).

And here is dmesg:

xmms: page allocation failure. order:5, mode:0x20
Call Trace:
 [<c012a3e7>] __alloc_pages+0x25f/0x26c
 [<c012a41c>] __get_free_pages+0x28/0x60
 [<c010e36e>] dma_alloc_coherent+0x3e/0x74
 [<c021c8ba>] prog_dmabuf+0x7e/0x2b4
 [<c021c31d>] set_dac2_rate+0xb5/0xe0
 [<c021f01d>] es1371_ioctl+0x10d5/0x140c
 [<c012d228>] kmem_cache_free+0x174/0x1b8
 [<c014ccf9>] sys_ioctl+0x1fd/0x254
 [<c01089af>] syscall_call+0x7/0xb

[identical failure and call trace repeated eight more times]

-- 
L1:	khromy		;khromy(at)lnuxlab.ath.cx

Thread overview: 11+ messages
2002-12-29 20:26 2.5.53-mm3: xmms: page allocation failure. order:5, mode:0x20 khromy
2002-12-29 20:34 ` khromy
2002-12-29 20:42 ` Andrew Morton
2002-12-29 23:32   ` khromy
2002-12-29 23:54     ` Andrew Morton
     [not found]       ` <20021230002604.GA25134@lnuxlab.ath.cx>
     [not found]         ` <3E0F8EF6.3C264886@digeo.com>
2002-12-30  0:51           ` khromy [this message]
2002-12-30  0:49             ` Andrew Morton
2002-12-30  1:32   ` Alan Cox
2002-12-30  1:06     ` Andrew Morton
2002-12-30  1:20     ` khromy
2002-12-30  1:49       ` Alan Cox
