public inbox for linux-kernel@vger.kernel.org
* Crashes and lockups in XFS filesystem (2.6.8-rc4).
@ 2004-08-18 16:16 David Martinez Moreno
  2004-08-19  8:44 ` Nathan Scott
  0 siblings, 1 reply; 5+ messages in thread
From: David Martinez Moreno @ 2004-08-18 16:16 UTC (permalink / raw)
  To: linux-kernel, nathans; +Cc: ender

	Hello, I am getting persistent lockups that are, IMHO, XFS-related. I 
created a fresh XFS filesystem on a SCSI disk, with xfsprogs version 2.6.18.

	I mounted /dev/sda1 under /mnt. After that, I copied lots of files 
from /dev/md0, then ran a find blabla -exec rm \{\} \; over /mnt, and then, 
voilà! the lockup:

ulises:/mnt/debian# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda5             5,6G  1,8G  3,5G  34% /
tmpfs                 252M     0  252M   0% /dev/shm
/dev/hda1              19M   11M  6,9M  62% /boot
/dev/hda6             9,2G  1,8G  7,0G  21% /var
/dev/hda8             3,7G  2,3G  1,2G  67% /home
/dev/md0              224G  182G   42G  82% /mirror
/dev/sda1              69G   56G   13G  82% /mnt
ulises:/mnt/debian# find . \( -name *m68k.deb \) -exec rm \{\} \; &
[1] 13215
ulises:/mnt/debian# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda5             5,6G  1,8G  3,5G  34% /
tmpfs                 252M     0  252M   0% /dev/shm
/dev/hda1              19M   11M  6,9M  62% /boot
/dev/hda6             9,2G  1,8G  7,0G  21% /var
/dev/hda8             3,7G  2,3G  1,2G  67% /home
Segmentation fault  <<<<<< when trying to display free space in /mnt
ulises:/mnt/debian# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda5              5767428   1839536   3634920  34% /
tmpfs                   257484         0    257484   0% /dev/shm
/dev/hda1                18998     11044      6973  62% /boot
/dev/hda6              9612100   1874984   7248844  21% /var
/dev/hda8              3799944   2398188   1208728  67% /home
Segmentation fault
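(An editor's aside on the find invocation above, unrelated to the crash itself: the glob *m68k.deb is unquoted, so if a matching file existed in the current directory the shell would have expanded it before find ran. A quoted form, sketched here against a throwaway directory with hypothetical file names:)

```shell
# Editor's sketch (hypothetical paths): the same delete pattern with the
# glob quoted, so the shell passes '*m68k.deb' through to find instead of
# expanding it against the current directory first.
demo=/tmp/find_quote_demo
mkdir -p "$demo"
touch "$demo/foo_m68k.deb" "$demo/keep.txt"
find "$demo" -name '*m68k.deb' -exec rm {} \;
ls "$demo"    # only keep.txt remains
```

The escaped form `\{\}` used in the thread and the quoted `'{}'`/bare `{}` are equivalent; only the pattern itself needs protecting from the shell.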
ulises:~# dmesg
[...]
XFS mounting filesystem sda1
Starting XFS recovery on filesystem: sda1 (dev: sda1)
Ending XFS recovery on filesystem: sda1 (dev: sda1)
Unable to handle kernel paging request at virtual address 020000b4
 printing eip:
c01fcd41
*pde = 00000000
Oops: 0000 [#1]
CPU:    0
EIP:    0060:[<c01fcd41>]    Not tainted
EFLAGS: 00010206   (2.6.8-rc4)
EIP is at xfs_log_force+0x28/0x6c
eax: 02000000   ebx: 00000002   ecx: 00000000   edx: 00000000
esi: df234c40   edi: dacae2f4   ebp: 00000000   esp: dec44f64
ds: 007b   es: 007b   ss: 0068
Process xfssyncd (pid: 174, threadinfo=dec44000 task=dee7a8c0)
Stack: dffabc80 00000000 00000031 c0207bba 00000031 df234c40 c020f59d dacae2f4
       00000000 00000000 00000002 004bcd64 dec44fb0 00000000 00000002 00000000
       dec44000 df234c40 00000000 00000000 c020ec0f dacae2f4 00000031 00000000
Call Trace:
 [<c0207bba>] xfs_getsb+0x2f/0x45
 [<c020f59d>] xfs_syncsub+0x4e/0x303
 [<c020ec0f>] xfs_sync+0x29/0x2d
 [<c02211cc>] vfs_sync+0x34/0x38
 [<c022076f>] xfssyncd+0x7e/0xce
 [<c02206f1>] xfssyncd+0x0/0xce
 [<c0101fdd>] kernel_thread_helper+0x5/0xb
Code: f6 80 b4 00 00 00 08 75 34 89 ce 09 d6 75 18 89 5c 24 04 89
 <1>Unable to handle kernel NULL pointer dereference at virtual address 
00000000
 printing eip:
c020ebb4
*pde = 00000000
Oops: 0000 [#2]
CPU:    0
EIP:    0060:[<c020ebb4>]    Not tainted
EFLAGS: 00010202   (2.6.8-rc4)
EIP is at xfs_statvfs+0xb6/0xe8
eax: 00000000   ebx: dacae314   ecx: dacae2f4   edx: 00000000
esi: dacae2f4   edi: 00000000   ebp: c0cf1ebc   esp: c0cf1e68
ds: 007b   es: 007b   ss: 0068
Process df (pid: 30308, threadinfo=c0cf1000 task=c64726c0)
Stack: dacae2f4 00000000 c0cf1f74 c0cf1efc c0cf1000 c0221194 dfced400 c0cf1ebc
       00000000 c0220b0a dfced400 c0cf1ebc 00000000 c013e3af c154ce00 c0cf1ebc
       c0cf1f14 c013e430 c154ce00 c0cf1ebc d24a1001 58465342 00000000 bfeb0fdc
Call Trace:
 [<c0221194>] vfs_statvfs+0x34/0x38
 [<c0220b0a>] linvfs_statfs+0x28/0x2e
 [<c013e3af>] vfs_statfs+0x4b/0x66
 [<c013e430>] vfs_statfs64+0x23/0xb2
 [<c013e5ea>] sys_statfs64+0x81/0xbf
 [<c024fac4>] tty_write+0x179/0x1bc
 [<c0254921>] write_chan+0x0/0x219
 [<c024f94b>] tty_write+0x0/0x1bc
 [<c01403e9>] vfs_write+0xc9/0x119
 [<c014050a>] sys_write+0x51/0x80
 [<c0103aa1>] sysenter_past_esp+0x52/0x71
Code: 8b 00 c7 45 24 ff 00 00 00 89 c2 0f b6 c8 25 00 ff 0f 00 c1
 <1>Unable to handle kernel NULL pointer dereference at virtual address 
00000000
 printing eip:
c020ebb4
*pde = 00000000
Oops: 0000 [#3]
CPU:    0
EIP:    0060:[<c020ebb4>]    Not tainted
EFLAGS: 00010202   (2.6.8-rc4)
EIP is at xfs_statvfs+0xb6/0xe8
eax: 00000000   ebx: dacae314   ecx: dacae2f4   edx: 00000000
esi: dacae2f4   edi: 00000000   ebp: c987cebc   esp: c987ce68
ds: 007b   es: 007b   ss: 0068
Process df (pid: 30728, threadinfo=c987c000 task=c156abd0)
Stack: dacae2f4 00000000 c987cf74 c987cefc c987c000 c0221194 dfced400 c987cebc
       00000000 c0220b0a dfced400 c987cebc 00000000 c013e3af c154ce00 c987cebc
       c987cf14 c013e430 c154ce00 c987cebc d24a1001 58465342 00000000 bfeb0fdc
Call Trace:
 [<c0221194>] vfs_statvfs+0x34/0x38
 [<c0220b0a>] linvfs_statfs+0x28/0x2e
 [<c013e3af>] vfs_statfs+0x4b/0x66
 [<c013e430>] vfs_statfs64+0x23/0xb2
 [<c013e5ea>] sys_statfs64+0x81/0xbf
 [<c024fac4>] tty_write+0x179/0x1bc
 [<c0111811>] recalc_task_prio+0x93/0x188
 [<c0105878>] math_state_restore+0x28/0x42
 [<c0103aa1>] sysenter_past_esp+0x52/0x71
Code: 8b 00 c7 45 24 ff 00 00 00 89 c2 0f b6 c8 25 00 ff 0f 00 c1
 <1>Unable to handle kernel NULL pointer dereference at virtual address 
00000000
 printing eip:
c020ebb4
*pde = 00000000
Oops: 0000 [#4]
CPU:    0
EIP:    0060:[<c020ebb4>]    Not tainted
EFLAGS: 00010202   (2.6.8-rc4)
EIP is at xfs_statvfs+0xb6/0xe8
eax: 00000000   ebx: dacae314   ecx: dacae2f4   edx: 00000000
esi: dacae2f4   edi: 00000000   ebp: d27c1ebc   esp: d27c1e68
ds: 007b   es: 007b   ss: 0068
Process df (pid: 7037, threadinfo=d27c1000 task=c64726c0)
Stack: dacae2f4 00000000 d27c1f74 d27c1efc d27c1000 c0221194 dfced400 d27c1ebc
       00000000 c0220b0a dfced400 d27c1ebc 00000000 c013e3af c154ce00 d27c1ebc
       d27c1f14 c013e430 c154ce00 d27c1ebc d75b8001 58465342 00000000 bfeb0fdc
Call Trace:
 [<c0221194>] vfs_statvfs+0x34/0x38
 [<c0220b0a>] linvfs_statfs+0x28/0x2e
 [<c013e3af>] vfs_statfs+0x4b/0x66
 [<c013e430>] vfs_statfs64+0x23/0xb2
 [<c013e5ea>] sys_statfs64+0x81/0xbf
 [<c024fac4>] tty_write+0x179/0x1bc
 [<c0254921>] write_chan+0x0/0x219
 [<c024f94b>] tty_write+0x0/0x1bc
 [<c01403e9>] vfs_write+0xc9/0x119
 [<c014050a>] sys_write+0x51/0x80
 [<c0103aa1>] sysenter_past_esp+0x52/0x71
Code: 8b 00 c7 45 24 ff 00 00 00 89 c2 0f b6 c8 25 00 ff 0f 00 c1
[...]

	XFS keeps segfaulting and dying on my machine. It is very strange...

General data:
x86 P-IV 2.5 GHz
512 MB RAM
The filesystem was on a Maxtor 70 GB SCSI disk, connected to an Adaptec 
AIC79xx PCI-X SCSI card. The other filesystem holding the files, also XFS, 
is a RAID0 over two identical IDE disks.

	If you need further information, such as the .config, please do not 
hesitate to ask.

	Thanks in advance,


		Ender.
-- 
 Why is a cow? Mu. (Ommmmmmmmmm)


* Re: Crashes and lockups in XFS filesystem (2.6.8-rc4).
  2004-08-19  8:44 ` Nathan Scott
@ 2004-08-19  8:39   ` David Martínez Moreno
  2004-08-19 17:35   ` Random crashes (was Re: Crashes and lockups in XFS filesystem (2.6.8-rc4).) David Martinez Moreno
       [not found]   ` <200408192127.58996.ender@debian.org>
  2 siblings, 0 replies; 5+ messages in thread
From: David Martínez Moreno @ 2004-08-19  8:39 UTC (permalink / raw)
  To: Nathan Scott; +Cc: linux-kernel

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Thursday, 19 August 2004, at 10:44, Nathan Scott wrote:
> On Wed, Aug 18, 2004 at 06:16:57PM +0200, David Martinez Moreno wrote:
> > 	Hello, I am getting persistent lockups that could be IMHO XFS-related. I
> > created a fresh XFS filesystem in a SCSI disk, with xfsprogs version
> > 2.6.18.
> >
> > 	Mounted /dev/sda1 under /mnt, after that, I have been copying lots of
> > files from /dev/md0, then run a find blabla -exec rm \{\} \; over /mnt
> > and then voilà! the lockup:
>
> Did /mnt run out of space while doing that?  Or nearly?  There's
> a known issue with that area of the XFS code, in conjunction with
> 4K stacks at the moment - was that enabled in your .config?
>
> Looks like something stamped on parts of the xfs_mount structure
> for the filesystem mounted at /mnt, a stack overrun would explain
> that and your subsequent oopsen.

	Hello, Nathan, many thanks for the reply.

	The machine locked up again last night, so I cannot log in, and I cannot 
remember whether I enabled 4K stacks in the config. It is possible, but I am 
not sure.

	What I am sure of is that /mnt had plenty of free disk space (8-10 GB or 
so). I had two or three concurrent accesses, IIRC, some of them writing from 
the RAID to the SCSI disk (both using XFS), and the last one finding and 
removing unwanted packages.

	Does this scenario give you any hints? I will be back where the server is 
in 5 hours, and then I will send the .config to the list.

	Again, many thanks,


		Ender.
- -- 
Prepare ship for ludicrous speed! Fasten all seatbelts, seal all
 entrances and exits, close all shops in the mall, cancel the three
 ring circus, secure all animals in the zoo!
		-- Colonel Sandurz (Spaceballs).
- --
Servicios de red - Network services
RedIRIS - Spanish Academic Network for Research and Development
Red.es - Madrid (Spain)
Tlf (+34) 91.212.76.25
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFBJGc/Ws/EhA1iABsRAqYVAKDPWUIi6m5pBH3nwbgHttV9ko436ACfYDbp
F+SsCIE95BvW9m7YMbg/yc8=
=MHxU
-----END PGP SIGNATURE-----


* Re: Crashes and lockups in XFS filesystem (2.6.8-rc4).
  2004-08-18 16:16 Crashes and lockups in XFS filesystem (2.6.8-rc4) David Martinez Moreno
@ 2004-08-19  8:44 ` Nathan Scott
  2004-08-19  8:39   ` David Martínez Moreno
                     ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Nathan Scott @ 2004-08-19  8:44 UTC (permalink / raw)
  To: David Martinez Moreno; +Cc: linux-kernel

On Wed, Aug 18, 2004 at 06:16:57PM +0200, David Martinez Moreno wrote:
> 	Hello, I am getting persistent lockups that could be IMHO XFS-related. I 
> created a fresh XFS filesystem in a SCSI disk, with xfsprogs version 2.6.18.
> 
> 	Mounted /dev/sda1 under /mnt, after that, I have been copying lots of files 
> from /dev/md0, then run a find blabla -exec rm \{\} \; over /mnt and then 
> voilà! the lockup:

Did /mnt run out of space while doing that?  Or nearly?  There's
a known issue with that area of the XFS code, in conjunction with
4K stacks at the moment - was that enabled in your .config?

Looks like something stamped on parts of the xfs_mount structure
for the filesystem mounted at /mnt, a stack overrun would explain
that and your subsequent oopsen.
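(Editor's aside: a quick way to answer the 4K-stacks question is to grep the kernel configuration. The sketch below runs against a stand-in .config file so it is self-contained; on a live machine one would grep the real .config, or /proc/config.gz if CONFIG_IKCONFIG_PROC is enabled, as shown later in the thread.)

```shell
# Sketch: check a kernel config for the i386 4K-stacks option.
# On a running kernel with CONFIG_IKCONFIG_PROC=y, the equivalent is:
#     zcat /proc/config.gz | grep 4KSTACKS
# Here we grep a stand-in file so the example is self-contained:
cfg=/tmp/demo_dot_config
printf 'CONFIG_4KSTACKS=y\n' > "$cfg"   # stand-in for a real .config
grep '^CONFIG_4KSTACKS' "$cfg"
```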

cheers.

-- 
Nathan


* Random crashes (was Re: Crashes and lockups in XFS filesystem (2.6.8-rc4).)
  2004-08-19  8:44 ` Nathan Scott
  2004-08-19  8:39   ` David Martínez Moreno
@ 2004-08-19 17:35   ` David Martinez Moreno
       [not found]   ` <200408192127.58996.ender@debian.org>
  2 siblings, 0 replies; 5+ messages in thread
From: David Martinez Moreno @ 2004-08-19 17:35 UTC (permalink / raw)
  To: linux-kernel, Nathan Scott; +Cc: ender

[-- Attachment #1: Type: text/plain, Size: 5205 bytes --]

On Thursday, 19 August 2004, at 10:44, Nathan Scott wrote:
> On Wed, Aug 18, 2004 at 06:16:57PM +0200, David Martinez Moreno wrote:
> > 	Hello, I am getting persistent lockups that could be IMHO XFS-related. I
> > created a fresh XFS filesystem in a SCSI disk, with xfsprogs version
> > 2.6.18.
> >
> > 	Mounted /dev/sda1 under /mnt, after that, I have been copying lots of
> > files from /dev/md0, then run a find blabla -exec rm \{\} \; over /mnt
> > and then voilà! the lockup:
>
> Did /mnt run out of space while doing that?  Or nearly?  There's
> a known issue with that area of the XFS code, in conjunction with
> 4K stacks at the moment - was that enabled in your .config?
>
> Looks like something stamped on parts of the xfs_mount structure
> for the filesystem mounted at /mnt, a stack overrun would explain
> that and your subsequent oopsen.

	Hello, Nathan. I am here at the University now, and yes, the kernel was 
compiled with 4K stacks. I rebuilt it with 8K stacks and rebooted:

ulises:~# zcat /proc/config.gz |grep STACKS
# CONFIG_4KSTACKS is not set

	I ran a disk test and waited.

	After an hour, the system crashed. I had forgotten to plug in a console, so 
I rebooted and waited again. And here it is:

------------[ cut here ]------------
kernel BUG at mm/vmscan.c:565!
invalid operand: 0000 [#1]
CPU:    0
EIP:    0060:[<c01307f1>]    Not tainted
EFLAGS: 00010046   (2.6.8.1)
EIP is at shrink_cache+0x7c/0x2c7
eax: 00000000   ebx: c0415d24   ecx: c12485f8   edx: c12485f8
esi: 0000001d   edi: c0415d48   ebp: 0000001c   esp: c15bfeb0
ds: 007b   es: 007b   ss: 0068
Process kswapd0 (pid: 10, threadinfo=c15be000 task=c15720b0)
Stack: c15ab080 00000002 c15be000 00000020 c10444f8 c10f8b18 00000000 00000001
       00000286 c15c0130 c15bfee4 c012ddd8 00000000 00000286 00000000 00000000
       00000000 00000000 c15bfef8 c15bfef8 00000000 00000000 00000000 dff28180
Call Trace:
 [<c012ddd8>] drain_cpu_caches+0x40/0x42
 [<c0130085>] shrink_slab+0x7b/0x186
 [<c0130f19>] shrink_zone+0x9e/0xb8
 [<c01312d6>] balance_pgdat+0x1c9/0x22d
 [<c0131401>] kswapd+0xc7/0xd7
 [<c0112dac>] autoremove_wake_function+0x0/0x57
 [<c0112dac>] autoremove_wake_function+0x0/0x57
 [<c013133a>] kswapd+0x0/0xd7
 [<c0101fdd>] kernel_thread_helper+0x5/0xb
Code: 0f 0b 35 02 02 81 3c c0 8b 51 04 8b 01 89 50 04 89 02 c7 41
 <1>Unable to handle kernel NULL pointer dereference at virtual address 
00000004
 printing eip:
c012e3c7
*pde = 00000000
Oops: 0000 [#2]
CPU:    0
EIP:    0060:[<c012e3c7>]    Not tainted
EFLAGS: 00010006   (2.6.8.1)
EIP is at free_block+0x43/0xcb
eax: 00800000   ebx: 00000000   ecx: 00000000   edx: c1000000
esi: c0415d48   edi: 00000000   ebp: c0415d54   esp: c15bfb20
ds: 007b   es: 007b   ss: 0068
Process kswapd0 (pid: 10, threadinfo=c15be000 task=c15720b0)
Stack: c022b4c2 dffe0090 c0415d64 2002002c d242f800 00000286 c1401480 c012e499
       c0415d48 c1161948 2002002c c1161948 c1161938 d242f800 00000286 0000062a
       c012e7a6 c0415d48 c1161938 c1a4ce80 dcdab000 00000004 c0338237 d242f800
Call Trace:
 [<c022b4c2>] memmove+0x4d/0x4f
 [<c012e499>] cache_flusharray+0x4a/0xb6
 [<c012e7a6>] kfree+0x5e/0x62
 [<c0338237>] kfree_skbmem+0x13/0x2c
 [<c03382b8>] __kfree_skb+0x68/0xdd
 [<c03591aa>] tcp_clean_rtx_queue+0x12f/0x3a0
 [<c0359a53>] tcp_ack+0xb4/0x560
 [<c035bc4d>] __tcp_data_snd_check+0xd3/0xe2
 [<c035c454>] tcp_rcv_established+0x419/0x84a
 [<c036470d>] tcp_v4_do_rcv+0x117/0x11c
 [<c0364d51>] tcp_v4_rcv+0x63f/0x884
 [<c025b3a5>] scrup+0xe3/0xf7
 [<c034badf>] ip_local_deliver+0xa3/0x1a2
 [<c034bec4>] ip_rcv+0x2e6/0x3fe
 [<c033d335>] netif_receive_skb+0x14b/0x17e
 [<c033d3dd>] process_backlog+0x75/0xf6
 [<c033d4c8>] net_rx_action+0x6a/0xe2
 [<c0117d52>] __do_softirq+0x7e/0x80
 [<c0117d7a>] do_softirq+0x26/0x28
 [<c0105e79>] do_IRQ+0xc4/0xdf
 [<c0104e82>] do_invalid_op+0x0/0xcb
 [<c0104460>] common_interrupt+0x18/0x20
 [<c0104e82>] do_invalid_op+0x0/0xcb
 [<c0104b61>] die+0x7a/0xcd
 [<c0104f4b>] do_invalid_op+0xc9/0xcb
 [<c01307f1>] shrink_cache+0x7c/0x2c7
 [<c02217da>] vn_purge+0x108/0x116
 [<c010451d>] error_code+0x2d/0x38
 [<c01307f1>] shrink_cache+0x7c/0x2c7
 [<c012ddd8>] drain_cpu_caches+0x40/0x42
 [<c0130085>] shrink_slab+0x7b/0x186
 [<c0130f19>] shrink_zone+0x9e/0xb8
 [<c01312d6>] balance_pgdat+0x1c9/0x22d
 [<c0131401>] kswapd+0xc7/0xd7
 [<c0112dac>] autoremove_wake_function+0x0/0x57
 [<c0112dac>] autoremove_wake_function+0x0/0x57
 [<c013133a>] kswapd+0x0/0xd7
 [<c0101fdd>] kernel_thread_helper+0x5/0xb
Code: 8b 53 04 8b 03 89 50 04 89 02 c7 43 04 00 02 20 00 2b 4b 0c
 <0>Kernel panic: Fatal exception in interrupt
In interrupt handler - not syncing

	Last time I seem to remember some ext3-related function in the trace, so 
this looks more like general data/stack corruption than an XFS-specific bug. 
What do you think?

	I attach the current dmesg and config, in case they help.

	Is there anything else I could try? Patches, other trees, special 
configurations? This machine seems impossible to stabilize.

	The machine is using the SiI libata driver, in case that could be 
related...

	Many thanks in advance,


		Ender.
-- 
 Why is a cow? Mu. (Ommmmmmmmmm)

[-- Attachment #2: dmesg.gz --]
[-- Type: application/x-gzip, Size: 5228 bytes --]

[-- Attachment #3: config.gz --]
[-- Type: application/x-gzip, Size: 6233 bytes --]


* Re: Crashes and lockups in XFS filesystem (2.6.8-rc4 and 2.6.8.1-mm2).
       [not found]   ` <200408192127.58996.ender@debian.org>
@ 2004-08-20  8:47     ` Nathan Scott
  0 siblings, 0 replies; 5+ messages in thread
From: Nathan Scott @ 2004-08-20  8:47 UTC (permalink / raw)
  To: David Martinez Moreno; +Cc: linux-kernel

On Thu, Aug 19, 2004 at 09:27:58PM +0200, David Martinez Moreno wrote:
> On Thursday, 19 August 2004, at 10:44, Nathan Scott wrote:
> > Did /mnt run out of space while doing that?  Or nearly?  There's
> > a known issue with that area of the XFS code, in conjunction with
> > 4K stacks at the moment - was that enabled in your .config?
> >
> > Looks like something stamped on parts of the xfs_mount structure
> > for the filesystem mounted at /mnt, a stack overrun would explain
> > that and your subsequent oopsen.
> 
> 	Bad news!
> 
> 	Aaarggh! It seemed that 2.6.8.1-mm2 was going to hold, and it did for an 
> entire hour or so, but it failed as well. The XFS filesystem on the RAID 0 
> was being accessed by 100 users while I was copying files from it to the 
> SCSI disk sda1, also under XFS.

Hmmm, that looks awfully similar to the problem other folks are
chasing (see thread "Possible dcache BUG").  Is it reproducible
for you by any chance?

Marcelo posted some debugging patches in that thread; it would be a 
good idea to try those out, although with your crash it's difficult to 
see which slab it is that's corrupted (IIRC, Marcelo's patch was 
targeting the buffer_head slab).
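(Editor's aside, not a suggestion from the thread: 2.6-era kernels also carry a generic slab-debugging option under "Kernel hacking" that poisons freed objects with 0x6b bytes, so a use-after-free tends to oops on a recognizable 0x6b6b6b6b pointer instead of silently corrupting a random slab. The .config fragment:)

```
CONFIG_DEBUG_SLAB=y
```

It adds noticeable overhead, so it is something to enable only while chasing a corruption like this one.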

> 	The crash:
> 
> ------------[ cut here ]------------
> kernel BUG at include/linux/list.h:164!
> invalid operand: 0000 [#1]
> CPU:    0
> EIP:    0060:[<c01332ef>]    Not tainted VLI
> EFLAGS: 00010006   (2.6.8.1-mm2)
> EIP is at shrink_cache+0x310/0x31d
> eax: ffffffff   ebx: c0426264   ecx: c1081658   edx: c10444f8
> esi: 0000000f   edi: c0426288   ebp: 0000000e   esp: c15afeb0
> ds: 007b   es: 007b   ss: 0068
> Process kswapd0 (pid: 10, threadinfo=c15ae000 task=c15620b0)
> Stack: c15afeb0 c15afeb0 c15ae000 00000020 c10444d8 c10d3d98 00000000 00000001
>        c13bcd20 c1251860 c13bb6a0 c13bb840 c13bb680 c129cce0 c129d980 c13fbc20
>        c13b8b20 c13c2940 c13c2960 c135f300 c13ba160 c13ba1c0 c13c9120 c13bcd00
> Call Trace:
>  [<c01328c8>] shrink_slab+0x85/0x187
>  [<c0133817>] shrink_zone+0x9e/0xb8
>  [<c0133bd4>] balance_pgdat+0x1c9/0x22d
>  [<c0133cff>] kswapd+0xc7/0xd7
>  [<c0113a2d>] autoremove_wake_function+0x0/0x57
>  [<c0113a2d>] autoremove_wake_function+0x0/0x57
>  [<c0133c38>] kswapd+0x0/0xd7
>  [<c0101fdd>] kernel_thread_helper+0x5/0xb
> Code: eb e4 8b 44 24 10 83 c5 01 89 50 04 89 02 8d 44 24 10 89 42 04 89 54 24 
> 10 e9 cd fd ff ff 0f 0b a5 00 d0 0d 3d c0 e9

-- 
Nathan

