* file system corruption
@ 2008-08-27 11:41 Ensar Gul
  0 siblings, 0 replies; 11+ messages in thread
From: Ensar Gul @ 2008-08-27 11:41 UTC (permalink / raw)
  To: linux-mtd

We use a 128 MiB ST NAND flash (NAND01GW3828N6) with JFFS2 as the file
system for Debian Linux 2.6.11. Linux runs on an ARM9 processor. The system
normally works without any problem: it boots with U-Boot, mounts the
filesystem, and runs our application program. However, sometimes the file
system becomes corrupt and the board no longer boots. In our application we
cannot afford even a single failure. The logs are below. Could anyone tell
me what is going wrong?

Thanks.

Regards

Ensar
--------------------

U-Boot 1.1.6 (Aug  7 2007 - 15:15:56) Mindspeed 0.04.0

DRAM:  128 MB
Comcerto Flash Subsystem Initialization
Flash:  0 kB
NAND:  board_nand_init nand->IO_ADDR_R =11400000
!!! oobsize: 0x40 - oobblock: 0x800
128 MiB
*** Warning - bad CRC, using default environment

In:    serial
Out:   serial
Err:   serial
Reserve MSP memory
Comcerto-515 > run bootaa

NAND read: device 0 offset 0x200000, size 0x200000

Reading data from 0x3ff800 -- 100% complete.
  2097152 bytes read: OK
## Downloading image at 01000000 ...
code_offset=0x80
code_base=800000
data_offset=872c
code_size=86ac
data_base=8086ac
data_size=138110
zeroinit_base=9407bc
prog_entry=100

AIFHEADER:
BL_DecompressCode=e1a00000
BL_SelfRelocCode=e1a00000
BL_DbgInitZeroInit=eb00000c
EntryPointOffset=ff800100
ProgramExitInstr=ef000011
ImageReadOnlySize=86ac
ImageReadWriteSize=138110
ImageDebugSize=2060
ImageZeroInitSize=11324
ImageDebugType=1e681
ImageBase=800000
WorkSpace=0
AddressMode=20
DataBase=0
FirstFatOffset=14289c
Reserved2=0
DebugInitInstr=e1a00000
ZeroInitCode[0]=e04ec00f

NAND read: device 0 offset 0x0, size 0x200000

Reading data from 0x1ff800 -- 100% complete.
  2097152 bytes read: OK
Copying ARM1 startup code from 07e003c0, start address 02000000

Starting kernel ...
 
Linux version 2.6.11.7-1.08.9tsavo (root@mete) (gcc version 3.3.2 
20030820 (prerelease)) #75 Thu Jun 5 12:28:07 EEST 2008
CPU: ARM920Tid(wb) [41129200] revision 0 (ARMv4T)
CPU0: D VIVT write-back cache
CPU0: I cache: 16384 bytes, associativity 64, 32 byte lines, 8 sets
CPU0: D cache: 16384 bytes, associativity 64, 32 byte lines, 8 sets
Machine: ARM-M825xx2 Comcerto
Memory policy: ECC disabled, Data cache writeback
Built 1 zonelists
Kernel command line: console=ttyS0,115200 mem=110M root=/dev/mtdblock2 
rw rootfstype=jffs2
PID hash table entries: 512 (order: 9, 8192 bytes)
HZ: 200
Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
Memory: 110MB = 110MB total
Memory: 109184KB available (1719K code, 594K data, 84K init)
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
CPU: Testing write buffer coherency: ok
NET: Registered protocol family 16
Comcerto GPIO: init
Comcerto GPIO: GPIOs 0 6 configured as IRQ source(s)
Comcerto PCI: init
Comcerto PCI: Comcerto device is not configured in PCI host
         Verify that HBMODE_n(low) pin is high and HBBURSTEN_n(high) pin 
is low
         On Mindspeed MH02-D370-xxx board, remove jumper JPCISLV1
SPI core: loaded version 0.2
NetWinder Floating Point Emulator V0.97 (double precision)
JFFS2 version 2.2. (NAND) (C) 2001-2003 Red Hat, Inc.
yaffs Jun  5 2008 12:25:46 Installing.
Serial: 8250/16550 driver $Revision: 1.1.1.3 $ 2 ports, IRQ sharing disabled
Serial: Comcerto 16550 serial driver $Revision: 1.4 $
ttyS0 at MMIO 0x10090000 (irq = 41) is a 16550A
io scheduler noop registered
loop: loaded (max 8 devices)
nbd: registered device at major 43
smi_memory_phy = a70000
CSPtoMSPCommunicationControlQueue(PA): 0x9000210
CSPtoMSPCommunicationControlQueue(VA): 0xf9000210
MSPtoCSPCommunicationControlQueue(PA): 0x9000220
MSPtoCSPCommunicationControlQueue(VA): 0xf9000220
SharedMemoryRxControlStructure(PA): 0x9000230
SharedMemoryRxControlStructure(VA): 0xf9000230
SharedMemoryTxControlStructure(PA): 0x9000248
SharedMemoryTxControlStructure(VA): 0xf9000248
Phy->storage: 0xd77968
Virt->storage: 0xf0d77968
NAND device: Manufacturer ID: 0x20, Chip ID: 0xf1 (ST Micro NAND 128MiB 
3,3V 8-bit)
Scanning device for bad blocks
Bad eraseblock 468 at 0x03a80000
Creating 5 MTD partitions on "NAND 128MiB 3,3V 8-bit":
0x00000000-0x00200000 : "MSP boot partition"
0x00200000-0x00400000 : "Linux boot partition"
0x00400000-0x02c00000 : "Comcerto Filesystem partition 1"
0x02c00000-0x05400000 : "Comcerto Filesystem partition 2"
0x05400000-0x08000000 : "Free data section"
Comcerto flash: request_mem_region(0x5000000, 4194304) failed
SPI core: add adapter comcerto-spi
NET: Registered protocol family 2
IP: routing cache hash table of 512 buckets, 4Kbytes
TCP established hash table entries: 4096 (order: 3, 32768 bytes)
TCP bind hash table entries: 4096 (order: 2, 16384 bytes)
TCP: Hash tables configured (established 4096 bind 4096)
NET: Registered protocol family 1
NET: Registered protocol family 17
mtd->read(0x1fbf8 bytes from 0x320408) returned ECC error
jffs2_scan_eraseblock(): Node at 0x00335a14 {0x1985, 0xe002, 0x0000044b) 
has invalid CRC 0xc79d4317 (calculated 0xc79d4397)
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a18: 
0x044b instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a1c: 
0x4317 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a20: 
0x03b3 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a24: 
0x0028 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a28: 
0x81ed instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a30: 
0x4a74 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a34: 
0xa241 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a38: 
0xa241 instead
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00335a3c: 
0xa241 instead
Further such events for this erase block will not be printed
Empty flash at 0x01a4c5a8 ends at 0x01a4c800
Empty flash at 0x01a71548 ends at 0x01a71800
Empty flash at 0x01c974cc ends at 0x01c97800
Empty flash at 0x01f7f1ec ends at 0x01f7f800
Empty flash at 0x0202dd78 ends at 0x0202e000
Empty flash at 0x021bb850 ends at 0x021bc000
Empty flash at 0x023921c8 ends at 0x02392800
Empty flash at 0x024793e0 ends at 0x02479800
Empty flash at 0x027b9e54 ends at 0x027ba000
VFS: Mounted root (jffs2 filesystem).
Freeing init memory: 84K
mtd->read(0x44 bytes from 0x335e60) returned ECC error
jffs2_get_inode_nodes(): Data CRC failed on node at 0x00335e60: Read 
0x6d989fd1, calculated 0xd1a48571
mtd->read(0x3f2 bytes from 0x335620) returned ECC error
jffs2_get_inode_nodes(): Data CRC failed on node at 0x003355dc: Read 
0xc9f2a6c2, calculated 0x3a9c8604
jffs2_get_inode_nodes(): Data CRC failed on node at 0x01a4b564: Read 
0x46bd3053, calculated 0xa1cecbbe
jffs2_get_inode_nodes(): Data CRC failed on node at 0x01f7efac: Read 
0xceff7023, calculated 0x7d174f5a
jffs2_get_inode_nodes(): Data CRC failed on node at 0x01c96488: Read 
0xc9f49189, calculated 0x7df38c4b
jffs2_get_inode_nodes(): Data CRC failed on node at 0x02391184: Read 
0xc869a0b9, calculated 0x777c9efb
jffs2_get_inode_nodes(): Data CRC failed on node at 0x026bb540: Read 
0x0ceb5936, calculated 0xdf626212
jffs2_get_inode_nodes(): Data CRC failed on node at 0x027b93d4: Read 
0x491a7085, calculated 0x6a8ab7a9
jffs2_get_inode_nodes(): Data CRC failed on node at 0x021bb694: Read 
0xf30656e5, calculated 0xb75cfdac
jffs2_get_inode_nodes(): Data CRC failed on node at 0x01a70504: Read 
0xadaa9d29, calculated 0xc94c9f9a
jffs2_get_inode_nodes(): Data CRC failed on node at 0x0247839c: Read 
0xdc965a8b, calculated 0x2606b314
jffs2_get_inode_nodes(): Data CRC failed on node at 0x0202cd34: Read 
0xfa9a425d, calculated 0x44ffb299
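[For readers decoding the cascade above: jffs2_scan_eraseblock() walks each erase block looking for the 0x1985 magic bitmask and verifying the header CRC; after hitting a damaged node it resynchronizes by searching word by word for the next magic, which is what produces the run of "Magic bitmask 0x1985 not found" lines. A rough Python sketch of that scan over a raw flash dump, assuming little-endian node headers and JFFS2's CRC-32 variant (seed 0, no final inversion); the helper names are mine, not JFFS2's:]

```python
import struct
import zlib

JFFS2_MAGIC = 0x1985  # the magic bitmask reported in the log above

def jffs2_crc32(data: bytes) -> int:
    """CRC-32 as JFFS2 computes it: seed 0, no final inversion.
    Equivalent to zlib's crc32 seeded with 0xFFFFFFFF, then inverted."""
    return (zlib.crc32(data, 0xFFFFFFFF) ^ 0xFFFFFFFF) & 0xFFFFFFFF

def scan_for_nodes(image: bytes):
    """Walk a flash dump on 4-byte boundaries and return
    (offset, nodetype, totlen, hdr_crc_ok) for each candidate node header:
    magic (u16), nodetype (u16), totlen (u32), then hdr_crc (u32 computed
    over the first 8 header bytes)."""
    hits = []
    for off in range(0, len(image) - 11, 4):
        magic, nodetype, totlen, hdr_crc = struct.unpack_from('<HHII',
                                                              image, off)
        if magic != JFFS2_MAGIC:
            continue  # the kernel logs "Magic bitmask ... not found" here
        crc_ok = jffs2_crc32(image[off:off + 8]) == hdr_crc
        hits.append((off, nodetype, totlen, crc_ok))
    return hits
```

[A node that fails this header check, or whose payload CRC fails later in jffs2_get_inode_nodes(), is treated as damaged, exactly the two failure modes visible in the log.]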

* File system corruption
@ 2012-10-11 17:52 Wayne Walker
  2012-10-11 18:03 ` Wayne Walker
  2012-10-11 21:07 ` Dave Chinner
  0 siblings, 2 replies; 11+ messages in thread
From: Wayne Walker @ 2012-10-11 17:52 UTC (permalink / raw)
  To: xfs

In short, I am able to: mkfs...; mount...; cp 1gbfile...; sync; cp 
1gbfile...; sync  # and now the xfs is corrupt

I see multiple bugs:

1. Very simple, non-corner-case actions create a corrupted file system.
2. Corrupt data is knowingly written to the file system.
3. The file system stays online and writable.
4. Future write operations to the file system return success.

Details:

[wwalker@speedy ~] [] $ cat xfs_bug_report
bash-4.1# uname -a
Linux localhost.localdomain 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 
19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

bash-4.1# xfs_repair -V
xfs_repair version 3.1.1


bash-4.1# cat /proc/meminfo
MemTotal:       98933876 kB
MemFree:        10626620 kB
Buffers:           88828 kB
Cached:          1693684 kB
SwapCached:            0 kB
Active:          2094048 kB
Inactive:         278972 kB
Active(anon):    1713716 kB
Inactive(anon):    95388 kB
Active(file):     380332 kB
Inactive(file):   183584 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      20479992 kB
SwapFree:       20479992 kB
Dirty:               208 kB
Writeback:             0 kB
AnonPages:        590704 kB
Mapped:            57760 kB
Shmem:           1218600 kB
Slab:            1761776 kB
SReclaimable:     142184 kB
SUnreclaim:      1619592 kB
KernelStack:        4632 kB
PageTables:        13496 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    28003888 kB
Committed_AS:    3281984 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      489664 kB
VmallocChunk:   34307745544 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:   40960
HugePages_Free:    40794
HugePages_Rsvd:      173
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        5632 kB
DirectMap2M:     2082816 kB
DirectMap1G:    98566144 kB


bash-4.1# # 2 CPUs, E5620 8 core procs


bash-4.1# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz
stepping        : 2
cpu MHz         : 1600.000
cache size      : 12288 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall 
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good 
xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx 
smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt aes lahf_lm 
ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 4800.20
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

<14 core reports deleted>

processor       : 15
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz
stepping        : 2
cpu MHz         : 1600.000
cache size      : 12288 KB
physical id     : 1
siblings        : 8
core id         : 10
cpu cores       : 4
apicid          : 53
initial apicid  : 53
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall 
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good 
xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx 
smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt aes lahf_lm 
ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 4799.88
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:


bash-4.1# cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
udev /dev devtmpfs 
rw,relatime,size=49459988k,nr_inodes=12364997,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sdb1 / ext3 
rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sdb5 /core xfs rw,relatime,attr2,noquota 0 0
/dev/sdb6 /data xfs rw,relatime,attr2,noquota 0 0
/dev/sdd1 /database xfs rw,relatime,attr2,noquota 0 0
/dev/sdb2 /secondary ext3 
rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda1 /vpd xfs rw,noatime,attr2,sunit=2048,swidth=8192,noquota 0 0
/dev/sda8 /cfg_backup xfs 
rw,noatime,attr2,sunit=2048,swidth=8192,noquota 0 0
/dev/sdg1 /db_backup xfs 
rw,relatime,attr2,sunit=2048,swidth=8192,noquota 0 0
/dev/sdf1 /dtfs_data/data2 xfs 
rw,noatime,attr2,nobarrier,logdev=/dev/sda6,sunit=2048,swidth=8192,noquota 
0 0
/dev/sdh1 /dtfs_data/data3 xfs 
rw,noatime,attr2,nobarrier,logdev=/dev/sda7,sunit=2048,swidth=8192,noquota 
0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/etc/auto.misc /misc autofs 
rw,relatime,fd=7,pgrp=2972,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs 
rw,relatime,fd=13,pgrp=2972,timeout=300,minproto=5,maxproto=5,indirect 0 0


bash-4.1# cat /proc/partitions
major minor  #blocks  name

    8        0   39082680 sda
    8        1      18432 sda1
    8        2     391168 sda2
    8        3     390144 sda3
    8        4          1 sda4
    8        5     389120 sda5
    8        6     390144 sda6
    8        7     389120 sda7
    8        8   37108736 sda8
    8       16   78150744 sdb
    8       17   10240000 sdb1
    8       18   10240000 sdb2
    8       19   20480000 sdb3
    8       20          1 sdb4
    8       21    5120000 sdb5
    8       22   32067584 sdb6
    8       48 2254857216 sdd
    8       49 2147482624 sdd1
    8       32 4731979776 sdc
    8       33 4731977728 sdc1
    8       64 4732048384 sde
    8       65 4732046336 sde1
    8       96  712964096 sdg
    8       97  712962048 sdg1
    8      112 5502995456 sdh
    8      113 5502993408 sdh1
    8       80 5502925824 sdf
    8       81 5502923776 sdf1


bash-4.1# lspci | grep -i RAID
84:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 
[Thunderbolt] (rev 01)
85:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 
[Thunderbolt] (rev 01)

There is an SSD (INTEL SSDSA2CT040G3; sda) used as an external log.

Each controller has 8 SEAGATE ST33000650SS 3 TB SATA drives.

The file system with the problem (sde1) is a RAID 6 made up of 6 drives 
and split into 3 pieces (sdc, sdd, sde) of roughly 4.7 TB, 2.2 TB, and 4.7 TB.
sdc and sdd are mounted but are idle (sdc) or probably idle (sdd holds 
postgres data but no transactions are occurring) during the steps that 
produce the corrupt fs.

There are no LVMs in use.

Both BBUs are fully charged and good.

All VDs are set to: WriteBack, ReadAhead, Direct, No Write Cache if bad BBU.

There is no significant IO or CPU load on the machine at all during the 
tests.

The exact commands to create the failure:

/sbin/mkfs.xfs -f -l logdev=/dev/sda5 -b size=4096 -d su=1024k,sw=4 
/dev/sde1
cat /etc/fstab
mount -t xfs -o defaults,noatime,logdev=/dev/sda5 /dev/sde1 /dtfs_data/data1
cp random_data.1G /dtfs_data/data1
# returns 0
sync
# file system reported no failure yet
cp random_data.1G /dtfs_data/data1
# returns 0
sync
# file system reports stack trace, bad agf, and page discard

bash-4.1# xfs_info /dtfs_data/data1
meta-data=/dev/sde1              isize=256    agcount=5, 
agsize=268435200 blks
          =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1183011584, imaxpct=5
          =                       sunit=256    swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =external               bsize=4096   blocks=97280, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
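[The stripe geometry handed to mkfs.xfs (-b size=4096 -d su=1024k,sw=4) converts to exactly the sunit/swidth in blocks that xfs_info reports above, so the geometry itself survived mkfs intact. The conversion, spelled out:]

```python
block_size = 4096        # mkfs.xfs -b size=4096
su_bytes = 1024 * 1024   # -d su=1024k: stripe unit in bytes
sw = 4                   # -d sw=4: stripe width in stripe units

# xfs_info reports both values in filesystem blocks
sunit_blocks = su_bytes // block_size   # 1 MiB / 4 KiB
swidth_blocks = sunit_blocks * sw
print(sunit_blocks, swidth_blocks)      # 256 1024, matching xfs_info
```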

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* File system corruption
@ 2009-07-16 18:08 John Quigley
  2009-07-16 19:20 ` Eric Sandeen
  0 siblings, 1 reply; 11+ messages in thread
From: John Quigley @ 2009-07-16 18:08 UTC (permalink / raw)
  To: XFS Development

Hey Folks:

I'm periodically encountering an issue with XFS that you might be interested in.  The environment in which it manifests is a CentOS Linux machine (custom 2.6.28.7 kernel), which is serving the XFS mount point in question with the standard Linux nfsd.  The XFS file system lives on an LVM device in a striping configuration (2-wide stripe), with two iSCSI volumes acting as the constituent physical volumes.  This configuration is somewhat baroque, I know.

I'm experiencing periodic file system corruption, which manifests in the XFS file system going offline and refusing subsequent mounts.  The only way to recover from this has been to perform an xfs_repair -L, which has resulted in data loss on each occasion, as expected.

Now, here's what I witness in the system logs:

<snip>
kernel: XFS: bad magic number
kernel: XFS: SB validate failed

kernel: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
kernel: Filesystem "dm-0": XFS internal error xfs_ialloc_read_agi at line 1408 of file fs/xfs/xfs_ialloc.c.  Caller 0xffffffff8118711a
kernel: Pid: 3842, comm: nfsd Not tainted 2.6.28.7.cs.8 #3 
kernel: Call Trace:
kernel:  [<ffffffff8118711a>] xfs_ialloc_ag_select+0x22a/0x320
kernel:  [<ffffffff81186481>] xfs_ialloc_read_agi+0xe1/0x140
kernel:  [<ffffffff8118711a>] xfs_ialloc_ag_select+0x22a/0x320
kernel:  [<ffffffff811f5bfd>] swiotlb_map_single_attrs+0x1d/0xf0
kernel:  [<ffffffff8118711a>] xfs_ialloc_ag_select+0x22a/0x320
kernel:  [<ffffffff81187bfc>] xfs_dialloc+0x31c/0xa90
kernel:  [<ffffffff81076be5>] __alloc_pages_internal+0xf5/0x4f0
kernel:  [<ffffffff8109ac46>] cache_alloc_refill+0x96/0x5a0
kernel:  [<ffffffff8119012f>] xfs_ialloc+0x7f/0x6f0
kernel:  [<ffffffff811ad0c6>] kmem_zone_alloc+0x86/0xc0
kernel:  [<ffffffff811a66d8>] xfs_dir_ialloc+0xa8/0x360
kernel:  [<ffffffff811a4008>] xfs_trans_reserve+0xa8/0x220
kernel:  [<ffffffff813a29e7>] __down_write_nested+0x17/0xa0
kernel:  [<ffffffff811a952f>] xfs_create+0x2ef/0x4e0
kernel:  [<ffffffff811b523c>] xfs_vn_mknod+0x14c/0x1a0
kernel:  [<ffffffff810a864c>] vfs_create+0xec/0x160
kernel:  [<ffffffffa00c53c3>] nfsd_create_v3+0x3b3/0x500 [nfsd]
kernel:  [<ffffffffa00cc178>] nfsd3_proc_create+0x118/0x1b0 [nfsd]
kernel:  [<ffffffffa00be22a>] nfsd_dispatch+0xba/0x270 [nfsd]
kernel:  [<ffffffffa0061fde>] svc_process+0x49e/0x800 [sunrpc]
kernel:  [<ffffffff8102efc0>] default_wake_function+0x0/0x10
kernel:  [<ffffffff813a2a97>] __down_read+0x17/0xa6
kernel:  [<ffffffffa00be9a9>] nfsd+0x199/0x2c0 [nfsd]
kernel:  [<ffffffffa00be810>] nfsd+0x0/0x2c0 [nfsd]
kernel:  [<ffffffff8104a4b7>] kthread+0x47/0x90
kernel:  [<ffffffff810322a7>] schedule_tail+0x27/0x70
kernel:  [<ffffffff8100d0d9>] child_rip+0xa/0x11
kernel:  [<ffffffff8104a470>] kthread+0x0/0x90
kernel:  [<ffffffff8100d0cf>] child_rip+0x0/0x11

</snip>

The stack trace from "XFS internal error xfs_ialloc_read_agi" repeats numerous times, at which point the following is seen:

<snip>

kernel: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
kernel: Filesystem "dm-0": XFS internal error xfs_alloc_read_agf at line 2194 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffff8115cf09
kernel: Pid: 3756, comm: nfsd Not tainted 2.6.28.7.cs.8 #3
kernel: Call Trace:
kernel:  [<ffffffff8115cf09>] xfs_alloc_fix_freelist+0x3e9/0x480
kernel:  [<ffffffff8115abe3>] xfs_alloc_read_agf+0xd3/0x1e0
kernel:  [<ffffffff8115cf09>] xfs_alloc_fix_freelist+0x3e9/0x480
kernel:  [<ffffffff8100d0cf>] child_rip+0x0/0x11
kernel:  [<ffffffff8115cf09>] xfs_alloc_fix_freelist+0x3e9/0x480
kernel:  [<ffffffff811e8033>] vsnprintf+0x743/0x890
kernel:  [<ffffffff81268a8a>] wait_for_xmitr+0x5a/0xc0
kernel:  [<ffffffff8100d0cf>] child_rip+0x0/0x11
kernel:  [<ffffffff813a2a97>] __down_read+0x17/0xa6
kernel:  [<ffffffff8115d215>] xfs_alloc_vextent+0x1b5/0x4e0
kernel:  [<ffffffff8116c0e8>] xfs_bmap_btalloc+0x608/0xb00
kernel:  [<ffffffff8116f60a>] xfs_bmapi+0xa4a/0x12a0
kernel:  [<ffffffff8118e93c>] xfs_imap_to_bp+0xac/0x130
kernel:  [<ffffffff8117a37a>] xfs_dir2_grow_inode+0x15a/0x410
kernel:  [<ffffffff8117b26f>] xfs_dir2_sf_to_block+0x9f/0x5c0
kernel:  [<ffffffff811ad0c6>] kmem_zone_alloc+0x86/0xc0
kernel:  [<ffffffff811ad132>] kmem_zone_zalloc+0x32/0x50
kernel:  [<ffffffff811918ce>] xfs_inode_item_init+0x1e/0x80
kernel:  [<ffffffff81183880>] xfs_dir2_sf_addname+0x430/0x5d0
kernel:  [<ffffffff811903c8>] xfs_ialloc+0x318/0x6f0
kernel:  [<ffffffff8117b0a2>] xfs_dir_createname+0x182/0x1e0
kernel:  [<ffffffff811a95df>] xfs_create+0x39f/0x4e0
kernel:  [<ffffffff811b523c>] xfs_vn_mknod+0x14c/0x1a0
kernel:  [<ffffffff810a864c>] vfs_create+0xec/0x160
kernel:  [<ffffffffa00c53c3>] nfsd_create_v3+0x3b3/0x500 [nfsd]
kernel:  [<ffffffffa00cc178>] nfsd3_proc_create+0x118/0x1b0 [nfsd]
kernel:  [<ffffffffa00be22a>] nfsd_dispatch+0xba/0x270 [nfsd]
kernel:  [<ffffffffa0061fde>] svc_process+0x49e/0x800 [sunrpc]
kernel:  [<ffffffff813a2a97>] __down_read+0x17/0xa6
kernel:  [<ffffffffa00be9a9>] nfsd+0x199/0x2c0 [nfsd]
kernel:  [<ffffffffa00be810>] nfsd+0x0/0x2c0 [nfsd]
kernel:  [<ffffffff8104a4b7>] kthread+0x47/0x90
kernel:  [<ffffffff810322a7>] schedule_tail+0x27/0x70
kernel:  [<ffffffff8100d0d9>] child_rip+0xa/0x11
kernel:  [<ffffffff8104a470>] kthread+0x0/0x90
kernel:  [<ffffffff8100d0cf>] child_rip+0x0/0x11

kernel: Filesystem "dm-0": XFS internal error xfs_trans_cancel at line 1164 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff811a9411
kernel: Pid: 3756, comm: nfsd Not tainted 2.6.28.7.cs.8 #3
kernel: Call Trace:
kernel:  [<ffffffff811a9411>] xfs_create+0x1d1/0x4e0
kernel:  [<ffffffff811a3475>] xfs_trans_cancel+0xe5/0x110
kernel:  [<ffffffff811a9411>] xfs_create+0x1d1/0x4e0
kernel:  [<ffffffff811b523c>] xfs_vn_mknod+0x14c/0x1a0
kernel:  [<ffffffff810a864c>] vfs_create+0xec/0x160
kernel:  [<ffffffffa00c53c3>] nfsd_create_v3+0x3b3/0x500 [nfsd]
kernel:  [<ffffffffa00cc178>] nfsd3_proc_create+0x118/0x1b0 [nfsd]
kernel:  [<ffffffffa00be22a>] nfsd_dispatch+0xba/0x270 [nfsd]
kernel:  [<ffffffffa0061fde>] svc_process+0x49e/0x800 [sunrpc]
kernel:  [<ffffffff813a2a97>] __down_read+0x17/0xa6
kernel:  [<ffffffffa00be9a9>] nfsd+0x199/0x2c0 [nfsd]
kernel:  [<ffffffffa00be810>] nfsd+0x0/0x2c0 [nfsd]
kernel:  [<ffffffff8104a4b7>] kthread+0x47/0x90
kernel:  [<ffffffff810322a7>] schedule_tail+0x27/0x70
kernel:  [<ffffffff8100d0d9>] child_rip+0xa/0x11
kernel:  [<ffffffff8104a470>] kthread+0x0/0x90
kernel:  [<ffffffff8100d0cf>] child_rip+0x0/0x11
kernel: xfs_force_shutdown(dm-0,0x8) called from line 1165 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff811a348e
kernel: Filesystem "dm-0": Corruption of in-memory data detected.  Shutting down filesystem: dm-0
kernel: Please umount the filesystem, and rectify the problem(s)
kernel: nfsd: non-standard errno: -117

kernel: Filesystem "dm-0": xfs_log_force: error 5 returned.

</snip>

I'm somewhat at a loss with this one - it's been experienced on a customer's installation, so I don't have ready access to the machine.  All internal tests attempting to reproduce it with identical hardware/software configurations have been unfruitful.  I'm concerned about the custom kernel, and may attempt to downgrade to the stock CentOS 5.3 kernel (2.6.18, if I remember correctly).

Any insight would be hugely appreciated, and of course tell me how I can help further.  Thanks so much.

John Quigley
jquigley.com


* file system corruption
@ 2004-07-12  5:39 Achuth Kamath
  2004-07-12  6:56 ` David Woodhouse
  0 siblings, 1 reply; 11+ messages in thread
From: Achuth Kamath @ 2004-07-12  5:39 UTC (permalink / raw)
  To: linux-mtd

Hi list,
   I had a jffs2 file system on the flash of one device. After some
changes and a restart, it looks like the file system is corrupted. It
says something like this:
 jffs2_scan_inode_mode(): Data CRC failed on node at
0x...

Is there a solution for this? Does anybody know when this problem can
arise?

Thanks and Regards,
Achuth  



end of thread, other threads:[~2012-10-24 22:50 UTC | newest]

Thread overview: 11+ messages
2008-08-27 11:41 file system corruption Ensar Gul
  -- strict thread matches above, loose matches on Subject: below --
2012-10-11 17:52 File " Wayne Walker
2012-10-11 18:03 ` Wayne Walker
2012-10-11 21:07 ` Dave Chinner
     [not found]   ` <50789076.7040402@crossroads.com>
2012-10-13  0:14     ` Dave Chinner
2012-10-24 21:19       ` Wayne Walker
2012-10-24 22:51         ` Dave Chinner
2009-07-16 18:08 John Quigley
2009-07-16 19:20 ` Eric Sandeen
2004-07-12  5:39 file " Achuth Kamath
2004-07-12  6:56 ` David Woodhouse
