* xfs_db 2.9.8: coredump
@ 2008-12-26 14:36 Justin Piszcz
2008-12-26 17:10 ` Eric Sandeen
2008-12-26 17:12 ` Eric Sandeen
0 siblings, 2 replies; 9+ messages in thread
From: Justin Piszcz @ 2008-12-26 14:36 UTC (permalink / raw)
To: xfs; +Cc: Alan Piszcz, esandeen
# xfs_db -V
xfs_db version 2.9.8
p34:~# xfs_db -c frag -f /dev/sda1
Segmentation fault (core dumped)
p34:~# xfs_db -c frag -r /dev/sda1
Segmentation fault (core dumped)
(It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
Now it can no longer check it?
===============================================================================
Program terminated with signal 11, Segmentation fault.
[New process 11346]
(gdb) bt
#0 0x00007f6a67c8c3e0 in ?? () from /lib/libc.so.6
#1 0x00007f6a67c8b43a in memmove () from /lib/libc.so.6
#2 0x0000000000403f2e in ?? ()
#3 0x0000000000412ff1 in ?? ()
#4 0x00000000004129cb in ?? ()
#5 0x0000000000413392 in ?? ()
#6 0x0000000000413833 in ?? ()
#7 0x0000000000412af3 in ?? ()
#8 0x0000000000413933 in ?? ()
#9 0x0000000000412af3 in ?? ()
#10 0x0000000000412d81 in ?? ()
#11 0x0000000000414af6 in ?? ()
#12 0x00007f6a67c2d1a6 in __libc_start_main () from /lib/libc.so.6
#13 0x0000000000402839 in ?? ()
#14 0x00007fff709e46a8 in ?? ()
#15 0x000000000000001c in ?? ()
#16 0x0000000000000005 in ?? ()
#17 0x00007fff709e5eb3 in ?? ()
#18 0x00007fff709e5eba in ?? ()
#19 0x00007fff709e5ebd in ?? ()
#20 0x00007fff709e5ec2 in ?? ()
#21 0x00007fff709e5ec5 in ?? ()
#22 0x0000000000000000 in ?? ()
(gdb) q
p34:~#
===============================================================================
[251994.093918] xfs_db[11095]: segfault at c40ff8 ip 00007f1a990a53e0 sp 00007fffa1dff608 error 4 in libc-2.7.so[7f1a99028000+14a000]
[252342.441918] xfs_db[11335]: segfault at eb2ff8 ip 00007fd8263cc3e0 sp 00007fff2f1258f8 error 4 in libc-2.7.so[7fd82634f000+14a000]
[252349.502317] xfs_db[11337]: segfault at fa5ff8 ip 00007f77340dc3e0 sp 00007fff3ce34608 error 4 in libc-2.7.so[7f773405f000+14a000]
[252381.346221] xfs_db[11346]: segfault at 1c71ff8 ip 00007f6a67c8c3e0 sp 00007fff709e41b8 error 4 in libc-2.7.so[7f6a67c0f000+14a000]
[252388.659308] xfs_db[11348]: segfault at 18caff8 ip 00007f2e0f0f93e0 sp 00007fff17e53628 error 4 in libc-2.7.so[7f2e0f07c000+14a000]
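Notably, every faulting address in the dmesg output above ends in 0xff8, i.e. 8 bytes (one 64-bit word) below a 4 KiB page boundary, which is consistent with a read running off the end of a mapped region. A quick arithmetic check on the logged addresses:

```shell
# Distance from each faulting address (from the dmesg lines above)
# to the next 4 KiB page boundary; all five land 8 bytes short.
for addr in 0xc40ff8 0xeb2ff8 0xfa5ff8 0x1c71ff8 0x18caff8; do
    echo "$addr: $(( 4096 - addr % 4096 )) bytes below a page boundary"
done
```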
===============================================================================
The end of the strace for xfs_db -c frag -r /dev/sda1:
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\2I"..., 16384) = 16384
lseek(4, 377470976, SEEK_SET) = 377470976
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\2I"..., 16384) = 16384
lseek(4, 4711120683008, SEEK_SET) = 4711120683008
read(4, "BMAP\0\0\0+\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
lseek(4, 377520128, SEEK_SET) = 377520128
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\nI"..., 16384) = 16384
lseek(4, 4711209119744, SEEK_SET) = 4711209119744
read(4, "BMAP\0\0\0=\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
lseek(4, 1480579043328, SEEK_SET) = 1480579043328
read(4, "BMAP\0\0\0/\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
lseek(4, 377552896, SEEK_SET) = 377552896
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\7I"..., 16384) = 16384
lseek(4, 377610240, SEEK_SET) = 377610240
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\3I"..., 16384) = 16384
lseek(4, 377675776, SEEK_SET) = 377675776
read(4, "IN\201\244\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\5H"..., 16384) = 16384
lseek(4, 377724928, SEEK_SET) = 377724928
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\6I"..., 16384) = 16384
lseek(4, 377757696, SEEK_SET) = 377757696
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\10I"..., 16384) = 16384
lseek(4, 377798656, SEEK_SET) = 377798656
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\7I"..., 16384) = 16384
lseek(4, 377823232, SEEK_SET) = 377823232
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\10I"..., 16384) = 16384
lseek(4, 377856000, SEEK_SET) = 377856000
read(4, "IN\201\244\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\6H"..., 16384) = 16384
lseek(4, 1487440093184, SEEK_SET) = 1487440093184
read(4, "BMAP\0\0\0\31\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
lseek(4, 377896960, SEEK_SET) = 377896960
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\7I"..., 16384) = 16384
lseek(4, 377921536, SEEK_SET) = 377921536
read(4, "IN\201\244\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\5I"..., 16384) = 16384
lseek(4, 377946112, SEEK_SET) = 377946112
read(4, "IN\0\0\2\2\0\0\0\0\3\350\0\0\0d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\2I"..., 16384) = 16384
lseek(4, 1488025448448, SEEK_SET) = 1488025448448
read(4, "\354j8\243\207$\351\334e\232\5o<8\366y\255\350\215\340R\234`\5\332\247\22\326\7D%c\214"..., 4096) = 4096
brk(0x2516000) = 0x2516000
brk(0x250a000) = 0x250a000
mmap(NULL, 188416, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe1d1554000
brk(0x24fd000) = 0x24fd000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV (core dumped) +++
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs_db 2.9.8: coredump
2008-12-26 14:36 xfs_db 2.9.8: coredump Justin Piszcz
@ 2008-12-26 17:10 ` Eric Sandeen
2008-12-26 17:12 ` Eric Sandeen
1 sibling, 0 replies; 9+ messages in thread
From: Eric Sandeen @ 2008-12-26 17:10 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Alan Piszcz, esandeen, xfs
Justin Piszcz wrote:
> # xfs_db -V
> xfs_db version 2.9.8
>
> p34:~# xfs_db -c frag -f /dev/sda1
> Segmentation fault (core dumped)
> p34:~# xfs_db -c frag -r /dev/sda1
> Segmentation fault (core dumped)
>
> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>
> Now it can no longer check it?
>
> ===============================================================================
>
> Program terminated with signal 11, Segmentation fault.
> [New process 11346]
> (gdb) bt
> #0 0x00007f6a67c8c3e0 in ?? () from /lib/libc.so.6
> #1 0x00007f6a67c8b43a in memmove () from /lib/libc.so.6
> #2 0x0000000000403f2e in ?? ()
> #3 0x0000000000412ff1 in ?? ()
> #4 0x00000000004129cb in ?? ()
> #5 0x0000000000413392 in ?? ()
Please run against a debug binary so we can get useful info...
If this was on Fedora, just install the matching xfsprogs-debuginfo package.
-Eric
* Re: xfs_db 2.9.8: coredump
2008-12-26 14:36 xfs_db 2.9.8: coredump Justin Piszcz
2008-12-26 17:10 ` Eric Sandeen
@ 2008-12-26 17:12 ` Eric Sandeen
2008-12-26 17:45 ` Justin Piszcz
1 sibling, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2008-12-26 17:12 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Alan Piszcz, esandeen, xfs
Justin Piszcz wrote:
> # xfs_db -V
> xfs_db version 2.9.8
>
> p34:~# xfs_db -c frag -f /dev/sda1
> Segmentation fault (core dumped)
> p34:~# xfs_db -c frag -r /dev/sda1
> Segmentation fault (core dumped)
>
> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>
> Now it can no longer check it?
>
Or, get an xfs_metadump of the fs and see if db segfaults on that too.
I'd be curious to know if xfs_repair -n flags any problems, as well.
-Eric
* Re: xfs_db 2.9.8: coredump
2008-12-26 17:12 ` Eric Sandeen
@ 2008-12-26 17:45 ` Justin Piszcz
2008-12-26 17:52 ` Justin Piszcz
2008-12-26 19:18 ` Eric Sandeen
0 siblings, 2 replies; 9+ messages in thread
From: Justin Piszcz @ 2008-12-26 17:45 UTC (permalink / raw)
To: Eric Sandeen; +Cc: Alan Piszcz, esandeen, xfs
On Fri, 26 Dec 2008, Eric Sandeen wrote:
> Justin Piszcz wrote:
>> # xfs_db -V
>> xfs_db version 2.9.8
>>
>> p34:~# xfs_db -c frag -f /dev/sda1
>> Segmentation fault (core dumped)
>> p34:~# xfs_db -c frag -r /dev/sda1
>> Segmentation fault (core dumped)
>>
>> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>>
>> Now it can no longer check it?
>>
>
> Or, get an xfs_metadump of the fs and see if db segfaults on that too.
>
> I'd be curious to know if xfs_repair -n flags any problems, as well.
>
> -Eric
>
# xfs_metadump -g /dev/sda1 /root/sda1.metadump
Copied 300096 of 426176 inodes (0 of 12 AGs)
Copying log
#
p34:~# xfs_db -c frag -f sda1.metadump
xfs_db: unexpected XFS SB magic number 0x5846534d
xfs_db: read failed: Invalid argument
xfs_db: data size check failed
cache_node_purge: refcount was 1, not zero (node=0x1ce4b80)
xfs_db: cannot read root inode (22)
cache_node_purge: refcount was 1, not zero (node=0x1ce4cd0)
xfs_db: cannot read realtime bitmap inode (22)
Segmentation fault (core dumped)
p34:~#
p34:~# xfs_repair -n /dev/sda1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 2
- agno = 1
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
p34:~#
Again, unmount / remount, no issues.
p34:~# umount /r1
p34:~# mount -a ; dmesg | tail -n 2
[264025.739788] XFS mounting filesystem sda1
[264025.836900] Ending clean XFS mount for filesystem: sda1
p34:~#
But xfs_db dies on the metadump file as well.
* Re: xfs_db 2.9.8: coredump
2008-12-26 17:45 ` Justin Piszcz
@ 2008-12-26 17:52 ` Justin Piszcz
2008-12-28 22:39 ` Dave Chinner
2008-12-26 19:18 ` Eric Sandeen
1 sibling, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2008-12-26 17:52 UTC (permalink / raw)
To: Eric Sandeen; +Cc: Alan Piszcz, esandeen, xfs
On Fri, 26 Dec 2008, Justin Piszcz wrote:
>
>
> On Fri, 26 Dec 2008, Eric Sandeen wrote:
>
>> Justin Piszcz wrote:
>>> # xfs_db -V
>>> xfs_db version 2.9.8
>>>
>>> p34:~# xfs_db -c frag -f /dev/sda1
>>> Segmentation fault (core dumped)
>>> p34:~# xfs_db -c frag -r /dev/sda1
>>> Segmentation fault (core dumped)
>>>
>>> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>>>
>>> Now it can no longer check it?
>>>
>>
>> Or, get an xfs_metadump of the fs and see if db segfaults on that too.
>>
>> I'd be curious to know if xfs_repair -n flags any problems, as well.
>>
>> -Eric
>>
p34:~# umount /r1
p34:~# sync
p34:~# xfs_db -c frag -r /dev/sda1
actual 365758, ideal 358711, fragmentation factor 1.93%
p34:~# mount -a
p34:~# xfs_db -c frag -r /dev/sda1
actual 365758, ideal 358711, fragmentation factor 1.93%
p34:~# mount -a ; dmesg | tail -n 2
p34:~# xfs_fsr /dev/sda1
/r1 start inode=0
p34:~# xfs_db -c frag -r /dev/sda1
actual 365751, ideal 358711, fragmentation factor 1.92%
p34:~#
Now it is working? I do not understand.
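For reference, the factor printed here is consistent with xfs_db computing (actual - ideal) * 100 / actual over extent counts. A small sketch reproducing the numbers above (the formula is inferred from the output, not taken from the xfs_db source):

```shell
# Recompute the fragmentation factor from the actual and ideal
# extent counts that xfs_db reports (assumed formula, see above).
frag_factor() {
    awk -v actual="$1" -v ideal="$2" \
        'BEGIN { printf "%.2f%%\n", (actual - ideal) * 100 / actual }'
}
frag_factor 365758 358711   # prints 1.93%, matching the first run above
frag_factor 365751 358711   # prints 1.92%, matching the post-xfs_fsr run
```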
* Re: xfs_db 2.9.8: coredump
2008-12-26 17:52 ` Justin Piszcz
@ 2008-12-28 22:39 ` Dave Chinner
2008-12-28 22:44 ` Justin Piszcz
2008-12-29 4:48 ` Eric Sandeen
0 siblings, 2 replies; 9+ messages in thread
From: Dave Chinner @ 2008-12-28 22:39 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Eric Sandeen, Alan Piszcz, esandeen, xfs
On Fri, Dec 26, 2008 at 12:52:10PM -0500, Justin Piszcz wrote:
> On Fri, 26 Dec 2008, Justin Piszcz wrote:
> > On Fri, 26 Dec 2008, Eric Sandeen wrote:
> >> Justin Piszcz wrote:
> >>> # xfs_db -V
> >>> xfs_db version 2.9.8
> >>>
> >>> p34:~# xfs_db -c frag -f /dev/sda1
> >>> Segmentation fault (core dumped)
> >>> p34:~# xfs_db -c frag -r /dev/sda1
> >>> Segmentation fault (core dumped)
> >>>
> >>> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
> >>>
> >>> Now it can no longer check it?
xfs_db works on the block device under the filesystem, not the
filesystem. Also, the block device on Linux caches blocks, so
after running xfs_fsr the filesystem layout has changed but the
underlying block device now has a stale cache. Hence xfs_db is
probably being pointed off into la-la land by the stale block
device cache.
# echo 3 > /proc/sys/vm/drop_caches
is your friend whenever you want to use xfs_db on a mounted
filesystem.
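As a concrete sequence (a sketch only: drop_caches requires root, and the device name is taken from the session above):

```shell
# Flush stale block-device buffers before pointing xfs_db at a
# mounted filesystem that xfs_fsr has just rewritten. Run as root.
sync                                  # push dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
xfs_db -c frag -r /dev/sda1           # now reads current on-disk metadata
```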
> p34:~# umount /r1
> p34:~# sync
And now the block device is coherent....
> p34:~# xfs_db -c frag -r /dev/sda1
> actual 365758, ideal 358711, fragmentation factor 1.93%
> p34:~# mount -a
> p34:~# xfs_db -c frag -r /dev/sda1
> actual 365758, ideal 358711, fragmentation factor 1.93%
> p34:~# mount -a ; dmesg | tail -n 2
> p34:~# xfs_fsr /dev/sda1
> /r1 start inode=0
> p34:~# xfs_db -c frag -r /dev/sda1
> actual 365751, ideal 358711, fragmentation factor 1.92%
> p34:~#
And this resulted in very little change so the block device
cache wasn't completely whacked....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs_db 2.9.8: coredump
2008-12-28 22:39 ` Dave Chinner
@ 2008-12-28 22:44 ` Justin Piszcz
2008-12-29 4:48 ` Eric Sandeen
1 sibling, 0 replies; 9+ messages in thread
From: Justin Piszcz @ 2008-12-28 22:44 UTC (permalink / raw)
To: Dave Chinner; +Cc: Eric Sandeen, Alan Piszcz, esandeen, xfs
On Mon, 29 Dec 2008, Dave Chinner wrote:
> On Fri, Dec 26, 2008 at 12:52:10PM -0500, Justin Piszcz wrote:
>> On Fri, 26 Dec 2008, Justin Piszcz wrote:
>>> On Fri, 26 Dec 2008, Eric Sandeen wrote:
>>>> Justin Piszcz wrote:
>>>>> # xfs_db -V
>>>>> xfs_db version 2.9.8
>>>>>
>>>>> p34:~# xfs_db -c frag -f /dev/sda1
>>>>> Segmentation fault (core dumped)
>>>>> p34:~# xfs_db -c frag -r /dev/sda1
>>>>> Segmentation fault (core dumped)
>>>>>
>>>>> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>>>>>
>>>>> Now it can no longer check it?
>
> xfs_db works on the block device under the filesystem, not the
> filesystem. Also, the block device on Linux caches blocks, so
> after running xfs_fsr the filesystem layout has changed but the
> underlying block device now has a stale cache. Hence xfs_db is
> probably being pointed off into la-la land by the stale block
> device cache.
>
> # echo 3 > /proc/sys/vm/drop_caches
Thanks Dave, I will have to add this to my scripts whenever an xfs_fsr is
performed to avoid the problem in the future.
>
> Is your friend whenever you want to use xfs_db on a mounted
> filesystem.
>
>> p34:~# umount /r1
>> p34:~# sync
>
> And now the block device is coherent....
>
>> p34:~# xfs_db -c frag -r /dev/sda1
>> actual 365758, ideal 358711, fragmentation factor 1.93%
>
>> p34:~# mount -a
>> p34:~# xfs_db -c frag -r /dev/sda1
>> actual 365758, ideal 358711, fragmentation factor 1.93%
>> p34:~# mount -a ; dmesg | tail -n 2
>> p34:~# xfs_fsr /dev/sda1
>> /r1 start inode=0
>> p34:~# xfs_db -c frag -r /dev/sda1
>> actual 365751, ideal 358711, fragmentation factor 1.92%
>> p34:~#
>
> And this resulted in very little change so the block device
> cache wasn't completely whacked....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
* Re: xfs_db 2.9.8: coredump
2008-12-28 22:39 ` Dave Chinner
2008-12-28 22:44 ` Justin Piszcz
@ 2008-12-29 4:48 ` Eric Sandeen
1 sibling, 0 replies; 9+ messages in thread
From: Eric Sandeen @ 2008-12-29 4:48 UTC (permalink / raw)
To: Justin Piszcz, Alan Piszcz, xfs
Dave Chinner wrote:
> On Fri, Dec 26, 2008 at 12:52:10PM -0500, Justin Piszcz wrote:
>> On Fri, 26 Dec 2008, Justin Piszcz wrote:
>>> On Fri, 26 Dec 2008, Eric Sandeen wrote:
>>>> Justin Piszcz wrote:
>>>>> # xfs_db -V
>>>>> xfs_db version 2.9.8
>>>>>
>>>>> p34:~# xfs_db -c frag -f /dev/sda1
>>>>> Segmentation fault (core dumped)
>>>>> p34:~# xfs_db -c frag -r /dev/sda1
>>>>> Segmentation fault (core dumped)
>>>>>
>>>>> (It was working BEFORE I ran xfs_fsr on it, it was at 16% fragmentation).
>>>>>
>>>>> Now it can no longer check it?
>
> xfs_db works on the block device under the filesystem, not the
> filesystem. Also, the block device on Linux caches blocks, so
> after running xfs_fsr the filesystem layout has changed but the
> underlying block device now has a stale cache. Hence xfs_db is
> probably being pointed off into la-la land by the stale block
> device cache.
Ah, missed the part about doing it online. Probably shouldn't segfault
though, I suppose.
-Eric
* Re: xfs_db 2.9.8: coredump
2008-12-26 17:45 ` Justin Piszcz
2008-12-26 17:52 ` Justin Piszcz
@ 2008-12-26 19:18 ` Eric Sandeen
1 sibling, 0 replies; 9+ messages in thread
From: Eric Sandeen @ 2008-12-26 19:18 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Eric Sandeen, Alan Piszcz, esandeen, xfs
Justin Piszcz wrote:
>
> On Fri, 26 Dec 2008, Eric Sandeen wrote:
>> Or, get an xfs_metadump of the fs and see if db segfaults on that too.
>>
>> I'd be curious to know if xfs_repair -n flags any problems, as well.
>>
>> -Eric
>>
>
> # xfs_metadump -g /dev/sda1 /root/sda1.metadump
> Copied 300096 of 426176 inodes (0 of 12 AGs)
> Copying log
> #
>
> p34:~# xfs_db -c frag -f sda1.metadump
> xfs_db: unexpected XFS SB magic number 0x5846534d
> xfs_db: read failed: Invalid argument
> xfs_db: data size check failed
> cache_node_purge: refcount was 1, not zero (node=0x1ce4b80)
> xfs_db: cannot read root inode (22)
> cache_node_purge: refcount was 1, not zero (node=0x1ce4cd0)
> xfs_db: cannot read realtime bitmap inode (22)
> Segmentation fault (core dumped)
Well, you have to xfs_mdrestore that to a sparse file before pointing db
at it for it to be valid (still, it should not dump).
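That sequence, as a sketch (requires root; the image path is illustrative, and the restored image is sparse, so it needs far less space than the filesystem it describes):

```shell
# Capture only the metadata of the filesystem (-g shows progress),
# then restore it into a sparse image that xfs_db can read as if
# it were the original device.
xfs_metadump -g /dev/sda1 /root/sda1.metadump
xfs_mdrestore /root/sda1.metadump /root/sda1.img
xfs_db -c frag -f /root/sda1.img
```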
-Eric