public inbox for linux-xfs@vger.kernel.org
* xfs crash
@ 2010-05-18  4:45 Jabir M
  2010-05-18  6:02 ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Jabir M @ 2010-05-18  4:45 UTC (permalink / raw)
  To: xfs


Hi all,

       I am using the XFS filesystem on a MIPS-based processor running linux-2.6.24. When I ran a script that copies a 1GB mp4 file in up to 30 concurrent background instances, XFS crashed with the following log. Please suggest a solution.

CPU 1 Unable to handle kernel paging request at virtual address 00000006, epc == 803375e8, ra == 803375d4
Oops[#1]:
Cpu 1
$ 0   : 00000000 00000000 00000000 00000025
$ 4   : 8f1c383c 00000038 00000038 0000003b
$ 8   : 00000002 00000400 0000000a 00000400
$12   : 00000000 8054c5c0 0000123c 00000000
$16   : 8f1c383c 8d2c936c 00000038 0000003b
$20   : 87738000 00000000 00000000 00000000
$24   : 8f39b1d0 8036ca64
$28   : 8f0ea000 8f0eb428 8fb18900 803375d4
Hi    : 00000000
Lo    : 00001104
epc   : 803375e8 xfs_trans_log_buf+0xa0/0xe0     Not tainted
ra    : 803375d4 xfs_trans_log_buf+0x8c/0xe0
Status: 1100c303    KERNEL EXL IE
Cause : 00800008
BadVA : 00000006
PrId  : 00019548 (MIPS 34K)
Modules linked in: block2mtd
Process cp (pid: 1407, threadinfo=8f0ea000, task=8f89b648)
Stack : 00000000 03a70c78 8f0ca480 8d2c936c 8f017400 8d2c936c 8f336200 00000001
        8f08ee98 802e2858 00000008 00014005 8f3362c0 8f3362c0 8f0eb468 8f0eb46c
        00000038 0000003b 00000000 87738010 00000001 802e7a18 00000001 00000000
        8f08ee08 00000001 00000024 00000000 8064eec0 8f08ee30 8f017400 80300ea4
        00004004 80337af0 00000200 00001104 003a5fc3 03a70c78 00000008 00000000
        ...
Call Trace:
[<803375e8>] xfs_trans_log_buf+0xa0/0xe0
[<802e2858>] xfs_alloc_log_agf+0x58/0x6c
[<802e7a18>] xfs_alloc_delrec+0x4b8/0xba4
[<802e814c>] xfs_alloc_delete+0x48/0xec
[<802e1e84>] xfs_alloc_fixup_trees+0x8c/0x46c
[<802e4004>] xfs_alloc_ag_vextent_near+0x42c/0xae8
[<802e47fc>] xfs_alloc_ag_vextent+0x13c/0x1a4
[<802e51d4>] xfs_alloc_vextent+0x3d4/0x490
[<802f6930>] xfs_bmap_btalloc+0x66c/0xabc
[<802fac58>] xfs_bmapi+0xd48/0x13f8
[<803215d8>] xfs_iomap_write_allocate+0x178/0x59c
[<80320408>] xfs_iomap+0x40c/0x468
[<8034570c>] xfs_map_blocks+0x48/0xac
[<80346928>] xfs_page_state_convert+0x658/0x910
[<80346f28>] xfs_vm_writepage+0x84/0x20c
[<8016987c>] __writepage+0x1c/0x88
[<8016a6a8>] write_cache_pages+0x300/0x418
[<8016a828>] do_writepages+0x44/0x78
[<801abf80>] __writeback_single_inode+0xa0/0x488
[<801ac934>] sync_sb_inodes+0x318/0x470
[<801acddc>] writeback_inodes+0x9c/0x160
[<8016b3bc>] balance_dirty_pages_ratelimited_nr+0x278/0x3c8
[<80163e64>] generic_file_buffered_write+0x1fc/0x758
[<8034f8fc>] xfs_write+0x680/0x934
[<80184608>] do_sync_write+0xe0/0x168
[<80185298>] sys_write+0x58/0xc0
[<8010bed0>] stack_done+0x20/0x3c


Code: 02402821  34630001  ae230060 <90470006> 2403fff7  02603021  34e70001  00e33824  a0470006



Thanks
Jabir

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* xfs crash
@ 2007-11-05 21:51 Cedric - Equinoxe Media
       [not found] ` <20071106082632.GU995458@sgi.com>
  0 siblings, 1 reply; 10+ messages in thread
From: Cedric - Equinoxe Media @ 2007-11-05 21:51 UTC (permalink / raw)
  To: xfs

Hi,

I got a crash with XFS serving NFS:

The hardware is a Dell PowerEdge 2950 with RAID5 SAS.
Linux fng2 2.6.22-3-amd64 #1 SMP Wed Oct 31 13:43:07 UTC 2007 x86_64 GNU/Linux

Here is the dmesg output:

NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
NFSD: starting 90-second grace period
XFS internal error XFS_WANT_CORRUPTED_GOTO at line 1563 of file
fs/xfs/xfs_alloc.c.  Caller 0xffffffff88113b35

Call Trace:
 [<ffffffff88111fcb>] :xfs:xfs_free_ag_extent+0x1a6/0x6b5
 [<ffffffff88113b35>] :xfs:xfs_free_extent+0xa9/0xc9
 [<ffffffff8811cf77>] :xfs:xfs_bmap_finish+0xee/0x167
 [<ffffffff8813c843>] :xfs:xfs_itruncate_finish+0x19b/0x2e0
 [<ffffffff881543c1>] :xfs:xfs_setattr+0x841/0xe57
 [<ffffffff8023a29e>] __mod_timer+0xc3/0xd3
 [<ffffffff80229db3>] task_rq_lock+0x3d/0x6f
 [<ffffffff80229976>] __activate_task+0x26/0x38
 [<ffffffff8815eae8>] :xfs:xfs_vn_setattr+0x121/0x144
 [<ffffffff80296791>] notify_change+0x156/0x2f1
 [<ffffffff88309aba>] :nfsd:nfsd_setattr+0x334/0x4b1
 [<ffffffff883102e2>] :nfsd:nfsd3_proc_setattr+0xa2/0xae
 [<ffffffff8830524d>] :nfsd:nfsd_dispatch+0xdd/0x19e
 [<ffffffff88283180>] :sunrpc:svc_process+0x3df/0x6ef
 [<ffffffff803f4cc2>] __down_read+0x12/0x9a
 [<ffffffff88305815>] :nfsd:nfsd+0x191/0x2ac
 [<ffffffff8020aba8>] child_rip+0xa/0x12
 [<ffffffff88305684>] :nfsd:nfsd+0x0/0x2ac
 [<ffffffff8020ab9e>] child_rip+0x0/0x12

xfs_force_shutdown(sda4,0x8) called from line 4258 of file
fs/xfs/xfs_bmap.c.  Return address = 0xffffffff8811cfb4
Filesystem "sda4": Corruption of in-memory data detected.  Shutting down
filesystem: sda4
Please umount the filesystem, and rectify the problem(s)
nfsd: non-standard errno: -117

----------------
Here I stopped nfs, ran umount -f /dev/sda4, mounted /dev/sda4 again, and
then restarted nfs.
----------------

nfsd: last server has exited
nfsd: unexporting all filesystems
xfs_force_shutdown(sda4,0x1) called from line 423 of file
fs/xfs/xfs_rw.c.  Return address = 0xffffffff88158289
xfs_force_shutdown(sda4,0x1) called from line 423 of file
fs/xfs/xfs_rw.c.  Return address = 0xffffffff88158289
XFS mounting filesystem sda4
Starting XFS recovery on filesystem: sda4 (logdev: internal)
XFS internal error XFS_WANT_CORRUPTED_GOTO at line 1563 of file
fs/xfs/xfs_alloc.c.  Caller 0xffffffff88113b35

Call Trace:
 [<ffffffff88111fcb>] :xfs:xfs_free_ag_extent+0x1a6/0x6b5
 [<ffffffff88113b35>] :xfs:xfs_free_extent+0xa9/0xc9
 [<ffffffff88145edf>] :xfs:xlog_recover_process_efi+0xf7/0x12a
 [<ffffffff88147288>] :xfs:xlog_recover_process_efis+0x4f/0x81
 [<ffffffff881472d3>] :xfs:xlog_recover_finish+0x19/0x9a
 [<ffffffff8814bd22>] :xfs:xfs_mountfs+0x83d/0x91b
 [<ffffffff802f4b29>] _atomic_dec_and_lock+0x39/0x58
 [<ffffffff88151b34>] :xfs:xfs_mount+0x317/0x39d
 [<ffffffff88161889>] :xfs:xfs_fs_fill_super+0x0/0x1a7
 [<ffffffff88161907>] :xfs:xfs_fs_fill_super+0x7e/0x1a7
 [<ffffffff803f4c21>] __down_write_nested+0x12/0x9a
 [<ffffffff80297003>] get_filesystem+0x12/0x35
 [<ffffffff80284ff5>] sget+0x39d/0x3af
 [<ffffffff80284a0c>] set_bdev_super+0x0/0xf
 [<ffffffff80284a1b>] test_bdev_super+0x0/0xd
 [<ffffffff80285a35>] get_sb_bdev+0x105/0x152
 [<ffffffff8028542a>] vfs_kern_mount+0x93/0x11a
 [<ffffffff80285500>] do_kern_mount+0x43/0xdd
 [<ffffffff80298eda>] do_mount+0x691/0x708
 [<ffffffff80297a3c>] mntput_no_expire+0x1c/0x94
 [<ffffffff8028cba5>] link_path_walk+0xce/0xe0
 [<ffffffff80266af7>] activate_page+0xad/0xd4
 [<ffffffff8025ff17>] find_get_page+0x21/0x50
 [<ffffffff80262028>] filemap_nopage+0x180/0x2ab
 [<ffffffff8026c709>] __handle_mm_fault+0x3e6/0x9d9
 [<ffffffff80269c17>] zone_statistics+0x3f/0x60
 [<ffffffff802f796f>] __up_read+0x13/0x8a
 [<ffffffff802646fe>] __alloc_pages+0x5a/0x2bc
 [<ffffffff80298fdb>] sys_mount+0x8a/0xd7
 [<ffffffff80209d8e>] system_call+0x7e/0x83

Ending XFS recovery on filesystem: sda4 (logdev: internal)
NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery
directory
NFSD: starting 90-second grace period


I have no other messages in dmesg; the server has the latest RAID
firmware from Dell.

Cédric.


end of thread, other threads:[~2010-05-19 11:24 UTC | newest]

Thread overview: 10+ messages
2010-05-18  4:45 xfs crash Jabir M
2010-05-18  6:02 ` Dave Chinner
2010-05-19 11:26   ` Jabir M
  -- strict thread matches above, loose matches on Subject: below --
2007-11-05 21:51 Cedric - Equinoxe Media
     [not found] ` <20071106082632.GU995458@sgi.com>
2007-11-06  9:21   ` Cedric - Equinoxe Media
2007-11-06 16:07     ` Cedric - Equinoxe Media
2007-11-06 16:44       ` Justin Piszcz
2007-11-06 17:08         ` Cedric - Equinoxe Media
2007-11-06 20:55       ` David Chinner
2007-11-07 10:58         ` Cedric - Equinoxe Media
