* XFS recovery resumes...
[not found] <29874428.3384.1376259762936.JavaMail.root@benjamin.baylink.com>
@ 2013-08-11 22:36 ` Jay Ashworth
2013-08-18 21:38 ` Jay Ashworth
2013-08-22 9:16 ` XFS recovery resumes Stefan Ring
0 siblings, 2 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-11 22:36 UTC (permalink / raw)
To: xfs
I'm trying to dedupe the two large XFS filesystems on which I have DVR
recordings, so that I can walk around amongst the available HDDs and create
new filesystems under everything.
Every time I rm a file, the filesystem blows up, and the driver shuts it
down.
Some background:
At the moment, I have 2 devices, /dev/sdd1 mounted on /appl/media4, and
/dev/sda1 mounted on /appl/media5, and a large script, created by hand-
hacking the output of a perl dupe finder script.
The large script was mangled so that it would remove anything that was a
dupe from media4, unless the file was an unlabeled lost+found on media5,
and had a name on media4. In that case, I removed the file on media5, and
then moved it from media4 to media5.
After the hand-hacking on the script, I sorted it to do all the rm's first,
and then all the mv's, to make sure free space went up before it went down.
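The reordering step described above could look something like the following sketch (the filenames are invented for illustration; the real script was hand-hacked perl dupe-finder output):

```shell
# Given a script of interleaved rm/mv commands, emit every rm before
# any mv, so free space rises before the moves consume it.
cat > dedupe.sh <<'EOF'
mv /appl/media4/show-a.mpg /appl/media5/show-a.mpg
rm /appl/media4/show-b.mpg
mv /appl/media4/show-c.mpg /appl/media5/show-c.mpg
rm /appl/media4/show-d.mpg
EOF
{ grep '^rm ' dedupe.sh; grep '^mv ' dedupe.sh; } > dedupe-sorted.sh
head -n 1 dedupe-sorted.sh   # -> rm /appl/media4/show-b.mpg
```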
And, of course, when I ran the script, it caused the XFS driver to cough and
die, leading to error 5s and gnashing of teeth.
I unmounted media5, remounted it (which worked), and unmounted it again to
run xfs_repair -n. That found one inode that was pointing somewhere bogus
(and I apologize that I can't copy that in; I was running under screen, and
it doesn't cooperate with scrollback well). I ran an xfs_repair without -n,
and it found and fixed the one error without complaint.
I mounted and unmounted it successfully (nothing notable in dmesg), and reran
xfs_repair -n, which, this time, ran without any problems reported.
So I remounted the filesystem, and again tried to run the script.
And again, it tripped something, and the filesystem unmounted, and here's the
dmesg output from the first and second trips:
First time:
[169324.654803] XFS (sdd1): Ending clean mount
[1278872.471310] ccbc0000: 41 42 54 42 00 00 00 04 df ff ff ff ff ff ff ff ABTB............
[1278872.471324] XFS (sda1): Internal error xfs_btree_check_sblock at line 119 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_btree.c. Caller 0xe3caf3a5
[1278872.471328]
[1278872.471334] Pid: 16696, comm: rm Not tainted 3.4.47-2.38-default #1
[1278872.471338] Call Trace:
[1278872.471368] [<c0205349>] try_stack_unwind+0x199/0x1b0
[1278872.471382] [<c02041c7>] dump_trace+0x47/0xf0
[1278872.471391] [<c02053ab>] show_trace_log_lvl+0x4b/0x60
[1278872.471398] [<c02053d8>] show_trace+0x18/0x20
[1278872.471409] [<c06825ba>] dump_stack+0x6d/0x72
[1278872.471534] [<e3c826ed>] xfs_corruption_error+0x5d/0x90 [xfs]
[1278872.471650] [<e3cae9f4>] xfs_btree_check_sblock+0x74/0x100 [xfs]
[1278872.471834] [<e3caf3a5>] xfs_btree_read_buf_block.constprop.24+0x95/0xb0 [xfs]
[1278872.472007] [<e3caf423>] xfs_btree_lookup_get_block+0x63/0xc0 [xfs]
[1278872.472207] [<e3cb251a>] xfs_btree_lookup+0x9a/0x460 [xfs]
[1278872.472379] [<e3c9576a>] xfs_alloc_fixup_trees+0x27a/0x370 [xfs]
[1278872.472510] [<e3c97b63>] xfs_alloc_ag_vextent_size+0x523/0x670 [xfs]
[1278872.472647] [<e3c9874f>] xfs_alloc_ag_vextent+0x9f/0x100 [xfs]
[1278872.472781] [<e3c9899a>] xfs_alloc_fix_freelist+0x1ea/0x450 [xfs]
[1278872.472915] [<e3c98cd5>] xfs_free_extent+0xd5/0x160 [xfs]
[1278872.473052] [<e3ca9f4e>] xfs_bmap_finish+0x15e/0x1b0 [xfs]
[1278872.473214] [<e3cc47e9>] xfs_itruncate_extents+0x159/0x2f0 [xfs]
[1278872.473422] [<e3c92ff5>] xfs_inactive+0x335/0x4a0 [xfs]
[1278872.473516] [<c0337e84>] evict+0x84/0x150
[1278872.473530] [<c032ea22>] do_unlinkat+0x102/0x160
[1278872.473546] [<c069331c>] sysenter_do_call+0x12/0x28
[1278872.473578] [<b779b430>] 0xb779b42f
[1278872.473583] XFS (sda1): Corruption detected. Unmount and run xfs_repair
[1278872.473599] XFS (sda1): xfs_do_force_shutdown(0x8) called from line 3732 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_bmap.c. Return address = 0xe3ca9f8c
[1278872.584543] XFS (sda1): Corruption of in-memory data detected. Shutting down filesystem
[1278872.584555] XFS (sda1): Please umount the filesystem and rectify the problem(s)
[1278881.888038] XFS (sda1): xfs_log_force: error 5 returned.
[1278911.968046] XFS (sda1): xfs_log_force: error 5 returned.
[1278942.048037] XFS (sda1): xfs_log_force: error 5 returned.
[1278972.128049] XFS (sda1): xfs_log_force: error 5 returned.
[1279002.208042] XFS (sda1): xfs_log_force: error 5 returned.
[1279028.046331] XFS (sda1): xfs_log_force: error 5 returned.
[1279028.046349] XFS (sda1): xfs_do_force_shutdown(0x1) called from line 1031 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_buf.c. Return address = 0xe3c813c0
[1279028.060676] XFS (sda1): xfs_log_force: error 5 returned.
[1279028.067532] XFS (sda1): xfs_log_force: error 5 returned.
Here's me mounting and umounting, with the xfs_repair runs in the middle:
[1279032.147391] XFS (sda1): Mounting Filesystem
[1279032.305924] XFS (sda1): Starting recovery (logdev: internal)
[1279035.263630] XFS (sda1): Ending recovery (logdev: internal)
[1279238.566041] XFS (sda1): Mounting Filesystem
[1279238.713051] XFS (sda1): Ending clean mount
[1279286.829764] XFS (sda1): Mounting Filesystem
[1279286.982409] XFS (sda1): Ending clean mount
[1279368.607644] XFS (sda1): Mounting Filesystem
[1279368.755048] XFS (sda1): Ending clean mount
Second time:
[1279388.664986] c1516000: 41 42 54 43 00 00 00 04 df ff ff ff ff ff ff ff ABTC............
[1279388.665000] XFS (sda1): Internal error xfs_btree_check_sblock at line 119 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_btree.c. Caller 0xe3caf3a5
[1279388.665004]
[1279388.665010] Pid: 18452, comm: rm Not tainted 3.4.47-2.38-default #1
[1279388.665015] Call Trace:
[1279388.665045] [<c0205349>] try_stack_unwind+0x199/0x1b0
[1279388.665058] [<c02041c7>] dump_trace+0x47/0xf0
[1279388.665067] [<c02053ab>] show_trace_log_lvl+0x4b/0x60
[1279388.665075] [<c02053d8>] show_trace+0x18/0x20
[1279388.665086] [<c06825ba>] dump_stack+0x6d/0x72
[1279388.665211] [<e3c826ed>] xfs_corruption_error+0x5d/0x90 [xfs]
[1279388.665327] [<e3cae9f4>] xfs_btree_check_sblock+0x74/0x100 [xfs]
[1279388.665511] [<e3caf3a5>] xfs_btree_read_buf_block.constprop.24+0x95/0xb0 [xfs]
[1279388.665684] [<e3caf423>] xfs_btree_lookup_get_block+0x63/0xc0 [xfs]
[1279388.665856] [<e3cb251a>] xfs_btree_lookup+0x9a/0x460 [xfs]
[1279388.666029] [<e3c97691>] xfs_alloc_ag_vextent_size+0x51/0x670 [xfs]
[1279388.666163] [<e3c9874f>] xfs_alloc_ag_vextent+0x9f/0x100 [xfs]
[1279388.666298] [<e3c9899a>] xfs_alloc_fix_freelist+0x1ea/0x450 [xfs]
[1279388.666433] [<e3c98cd5>] xfs_free_extent+0xd5/0x160 [xfs]
[1279388.666571] [<e3ca9f4e>] xfs_bmap_finish+0x15e/0x1b0 [xfs]
[1279388.666734] [<e3cc47e9>] xfs_itruncate_extents+0x159/0x2f0 [xfs]
[1279388.666944] [<e3c92ff5>] xfs_inactive+0x335/0x4a0 [xfs]
[1279388.667039] [<c0337e84>] evict+0x84/0x150
[1279388.667053] [<c032ea22>] do_unlinkat+0x102/0x160
[1279388.667069] [<c069331c>] sysenter_do_call+0x12/0x28
[1279388.667100] [<b772f430>] 0xb772f42f
[1279388.667105] XFS (sda1): Corruption detected. Unmount and run xfs_repair
[1279388.667120] XFS (sda1): xfs_do_force_shutdown(0x8) called from line 3732 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_bmap.c. Return address = 0xe3ca9f8c
[1279388.690497] XFS (sda1): Corruption of in-memory data detected. Shutting down filesystem
[1279388.690506] XFS (sda1): Please umount the filesystem and rectify the problem(s)
[1279398.816060] XFS (sda1): xfs_log_force: error 5 returned.
[1279428.832065] XFS (sda1): xfs_log_force: error 5 returned.
[ ... ]
It's not entirely clear to me whether this problem is corruption in
specific inodes, or something in the filesystem's header structures.
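For what it's worth, the ASCII shown at the start of the two hex dumps ("ABTB" and "ABTC") is the magic number of the corrupt buffer XFS printed; those are the magics of the per-AG free-space btrees (by-block and by-count), which points at free-space metadata rather than at any particular inode. A quick decode of the dumped bytes:

```shell
# Decode the first four bytes of each dmesg hex dump
# (octal escapes for 0x41 0x42 0x54 0x42 and 0x41 0x42 0x54 0x43).
printf '\101\102\124\102\n'   # -> ABTB (free-space btree, by block)
printf '\101\102\124\103\n'   # -> ABTC (free-space btree, by count)
```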
Kernel:
Linux duckling 3.4.47-2.38-default #1 SMP Fri May 31 20:17:40 UTC 2013 (3961086) i686 athlon i386 GNU/Linux
progs:
xfsprogs-3.1.6-9.1.2.i586
Worst case, if I can't get these to behave, I'll just beg, borrow or steal
a spare 3T and copy everything to it, and then redo the FSs on these 2
drives, but it would be a bit easier if I could get them to settle down a
bit...
Anyone have any suggestions as to which mole I should whack next?
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 23+ messages in thread
* XFS recovery resumes...
2013-08-11 22:36 ` XFS recovery resumes Jay Ashworth
@ 2013-08-18 21:38 ` Jay Ashworth
2013-08-18 21:51 ` Joe Landman
2013-08-18 22:06 ` Stan Hoeppner
2013-08-22 9:16 ` XFS recovery resumes Stefan Ring
1 sibling, 2 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-18 21:38 UTC (permalink / raw)
To: xfs
[ ... ]
Built xfsprogs 3.1.11 from GIT, and ran it, and on /appl/media4, /dev/sda1:
============
duckling:/appl/downloads/xfsprogs # xfs_repair /dev/sda1
Phase 1 - find and verify superblock...
Not enough RAM available for repair to enable prefetching.
This will be _slow_.
You need at least 497MB RAM to run with prefetching enabled.
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
ir_freecount/free mismatch, inode chunk 2/128, freecount 62 nfree 61
ir_freecount/free mismatch, inode chunk 3/128, freecount 36 nfree 35
xfs_allocbt_read_verify: XFS_CORRUPTION_ERROR
xfs_allocbt_read_verify: XFS_CORRUPTION_ERROR
xfs_allocbt_read_verify: XFS_CORRUPTION_ERROR
xfs_allocbt_read_verify: XFS_CORRUPTION_ERROR
xfs_allocbt_read_verify: XFS_CORRUPTION_ERROR
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
imap claims a free inode 1073742013 is in use, correcting imap and clearing inode
cleared inode 1073742013
- agno = 3
imap claims a free inode 1610612893 is in use, correcting imap and clearing inode
cleared inode 1610612893
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
__read_verify: XFS_CORRUPTION_ERROR
can't read leaf block 8388608 for directory inode 128
rebuilding directory inode 128
name create failed in ino 128 (117), filesystem may be out of space
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
============
It's not clear to me whether that actually fixed anything or not, but
I think I'm going to put off a second run, or a run on the other FS
which threw more CORRUPTION errors in a later stage, until I have a
better idea what's going on...
Cheers,
-- jra
--
* Re: XFS recovery resumes...
2013-08-18 21:38 ` Jay Ashworth
@ 2013-08-18 21:51 ` Joe Landman
2013-08-18 22:11 ` Jay Ashworth
2013-08-18 22:06 ` Stan Hoeppner
1 sibling, 1 reply; 23+ messages in thread
From: Joe Landman @ 2013-08-18 21:51 UTC (permalink / raw)
To: xfs
On 08/18/2013 05:38 PM, Jay Ashworth wrote:
> I'm trying to dedupe the two large XFS filesystems on which I have DVR
> recordings, so that I can walk around amongst the available HDDs and create
> new filesystems under everything.
[...]
> duckling:/appl/downloads/xfsprogs # xfs_repair /dev/sda1
> Phase 1 - find and verify superblock...
> Not enough RAM available for repair to enable prefetching.
> This will be _slow_.
> You need at least 497MB RAM to run with prefetching enabled.
^^^^^
This is 1/2 GB ram, and you didn't specify the memory options of the
xfs_repair ... so I'm going to guess at this point that you ran out of
ram. Paging while running xfs_repair is no fun.
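For reference, xfs_repair has knobs for small-memory boxes: `-P` disables prefetching and `-m` caps memory use in megabytes (both per the xfs_repair(8) manpage of roughly that vintage; verify against the installed xfsprogs). A hedged sketch that picks flags from the RAM actually free:

```shell
# Choose xfs_repair flags for a memory-constrained box (sketch only;
# the 497 MB threshold is the figure repair itself reported above).
repair_flags() {
    avail_mb=$1
    if [ "$avail_mb" -lt 497 ]; then
        # no prefetch, and cap repair at ~3/4 of available RAM
        echo "-P -m $((avail_mb * 3 / 4))"
    fi
}
repair_flags 384   # e.g. ~384 MB free on the 512 MB box -> -P -m 288
```

Something like `xfs_repair $(repair_flags 384) /dev/sda1` would then run without prefetch and with bounded memory instead of paging.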
How much ram do you have in this box? Next question is, is this an ECC
memory box?
Not sure if you are hitting a bug as much as running into something else
like a hardware limit (RAM) or a memory stick issue.
Do you have EDAC (or mcelog) on? Any errors from this?
Joe
* Re: XFS recovery resumes...
2013-08-18 21:38 ` Jay Ashworth
2013-08-18 21:51 ` Joe Landman
@ 2013-08-18 22:06 ` Stan Hoeppner
2013-08-19 3:55 ` Jay Ashworth
1 sibling, 1 reply; 23+ messages in thread
From: Stan Hoeppner @ 2013-08-18 22:06 UTC (permalink / raw)
To: Jay Ashworth; +Cc: xfs
On 8/18/2013 4:38 PM, Jay Ashworth wrote:
> I'm trying to dedupe the two large XFS filesystems on which I have DVR
> recordings, so that I can walk around amongst the available HDDs and create
> new filesystems under everything.
>
> Every time I rm a file, the filesystem blows up, and the driver shuts it
> down.
>
> Some background:
>
> At the moment, I have 2 devices, /dev/sdd1 mounted on /appl/media4, and
> /dev/sda1 mounted on /appl/media5, and a large script, created by hand-
> hacking the output of a perl dupe finder script.
>
> The large script was mangled so that it would remove anything that was a
> dupe from media4, unless the file was an unlabeled lost+found on media5,
> and had a name on media4. In that case, I removed the file on media5, and
> then moved it from media4 to media5.
>
> After the hand-hacking on the script, I sorted it to do all the rm's first,
> and then all the mv's, to make sure free space when up before it went down.
>
> And, of course, when I ran the script, it caused the XFS driver to cough and
> die, leading to error 5s and gnashing of teeth.
If this script is the catalyst of your XFS problems, it seems logical
that you would include said script in your trouble report, yet you did
not. It's a bit foolish to assume you can't break a Linux subsystem
with a poorly written program and/or in combination with a platform that
isn't up to the task being asked of it. As Joe mentioned having too
little RAM could be part of this problem.
--
Stan
* Re: XFS recovery resumes...
2013-08-18 21:51 ` Joe Landman
@ 2013-08-18 22:11 ` Jay Ashworth
2013-08-18 22:57 ` Joe Landman
0 siblings, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-18 22:11 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Joe Landman" <joe.landman@gmail.com>
> > You need at least 497MB RAM to run with prefetching enabled.
>
> ^^^^^
>
> This is 1/2 GB ram, and you didn't specify the memory options of the
> xfs_repair ... so I'm going to guess at this point that you ran out of
> ram. Paging while running xfs_repair is no fun.
>
> How much ram do you have in this box? Next question is, is this an ECC
> memory box?
512M. It's a *very* old KT6V based board, and when we tried to expand
it several years back, it went bat-guano with any more than half a gig.
> Not sure if you are hitting a bug as much as running into something
> else like a hardware limit (RAM) or a memory stick issue.
Well, the upstream cause was a 7 year old Antec power supply that
finally died, about a month ago, slowly.
> Do you have EDAC (or mcelog) on? Any errors from this?
I don't have mcelog on, and no, the memory isn't registered, but a
4-pass run of Memtest+ came up clean, so I'm speculating that the
*continuing* problem isn't hardware; I'm pretty sure it was just the
failing 12V rail on the dying PS. I just have to clean up after it
enough to get *one* of these 2 drives cleaned off, then I can make a
new FS, and play musical files.
Or, I may just go grab a 3TB external after all. :-)
Cheers,
-- jra
--
* Re: XFS recovery resumes...
2013-08-18 22:11 ` Jay Ashworth
@ 2013-08-18 22:57 ` Joe Landman
2013-08-18 23:21 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Joe Landman @ 2013-08-18 22:57 UTC (permalink / raw)
To: xfs
On 08/18/2013 06:11 PM, Jay Ashworth wrote:
> ----- Original Message -----
>> From: "Joe Landman" <joe.landman@gmail.com>
>
>>> You need at least 497MB RAM to run with prefetching enabled.
>>
>> ^^^^^
>>
>> This is 1/2 GB ram, and you didn't specify the memory options of the
>> xfs_repair ... so I'm going to guess at this point that you ran out of
>> ram. Paging while running xfs_repair is no fun.
>>
>> How much ram do you have in this box? Next question is, is this an ECC
>> memory box?
>
> 512M. It's a *very* old KT6V based board, and when we tried to expand
> it several years back, it went bat-guano with any more than half a gig.
Ahhh .... ok. Got it.
>
>> Not sure if you are hitting a bug as much as running into something
>> else like a hardware limit (RAM) or a memory stick issue.
>
> Well, the upstream cause was a 7 year old Antec power supply that
> finally died, about a month ago, slowly.
Ok. I've had power supplies take down memory in the past. You might be
hitting a bad memory cell courtesy of the PS.
>
>> Do you have EDAC (or mcelog) on? Any errors from this?
>
> I don't have mcelog on, and no, the memory isn't registered, but a
> 4-pass run of Memtest+ came up clean, so I'm speculating that the
Not registered (which is just buffered), but ECC. ECC does a parity
computation on some number of bits, and provides you a rough "good/bad"
binary state of a particular area of memory. If the parity bits stored
don't match what is computed on read, then odds are that something is
wrong. It's not foolproof, but it's a good mechanism to catch potential
errors.
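A toy sketch of that parity idea (real ECC uses wider SECDED codes, but the detection principle is the same):

```shell
# Even parity over the bits of a small word: recompute on "read" and
# compare with the stored parity to detect a single flipped bit.
parity() {
    n=$1 p=0
    while [ "$n" -gt 0 ]; do
        p=$((p ^ (n & 1)))
        n=$((n >> 1))
    done
    echo "$p"
}
stored=180                       # 10110100 -> even number of 1s, parity 0
stored_parity=$(parity "$stored")
corrupted=$((stored ^ 8))        # one bit flipped in "memory"
[ "$(parity "$corrupted")" != "$stored_parity" ] && echo "bit flip detected"
```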
We've had cases where Memtest(*) reported everything fine, yet I was
able to generate ECC errors in a few minutes by running a memory
intensive app. Memtest does do some hardware exercise, but it's not
usually hitting memory the way apps do. That difference can be
significant. This is in part why the day job stopped using memtest for
testing a number of years ago. We now run heavy duty electronic
structure codes, and pi/e/... computations for burn in.
> *continuing* problem isn't hardware; I'm pretty sure it was just the
> failing 12V rail on the dying PS. I just have to clean up after it
> enough to get *one* of these 2 drives cleaned off, then I can make a
> new FS, and play musical files.
Ahhh ...
I was running a Plex server on an old machine for a while. I had to
shift over to a beefier box with ECC ram and more CPUs. Right now my
Plex server has 8 cpus, 24 GB RAM, and about 1TB of disk (old). Once
you start doing recoding on the fly (multi-resolution output), you need
the ram and processor power.
>
> Or, I may just go grab a 3TB external after all. :-)
If you do that, and you still hit the error, chances are you might need
to swap out your MB and CPU/RAM to something newer (not to mention the
PS). I'd recommend ECC based systems if at all possible. Xfs can and
will get very unhappy if bits are flipped on its data structures while
you are making changes to the file system.
--
Joe
>
> Cheers,
> -- jra
>
* Re: XFS recovery resumes...
2013-08-18 22:57 ` Joe Landman
@ 2013-08-18 23:21 ` Jay Ashworth
0 siblings, 0 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-18 23:21 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Joe Landman" <joe.landman@gmail.com>
> Ok. I've had power supplies take down memory in the past. You might be
> hitting a bad memory cell courtesy of the PS.
Possibly, though see below.
> >> Do you have EDAC (or mcelog) on? Any errors from this?
> >
> > I don't have mcelog on, and no, the memory isn't registered, but a
> > 4-pass run of Memtest+ came up clean, so I'm speculating that the
>
> Not registered (which is just buffered), but ECC. ECC does a parity
> computation on some number of bits, and provides you a rough "good/bad"
> binary state of a particular area of memory. If the parity bits stored
> don't match what is computed on read, then odds are that something is
> wrong. Its not foolproof, but its a good mechanism to catch potential
> errors.
Sure. In my experience, all ECC is registered/buffered, and no non-ECC
is, so I use it as shorthand. No possible chance this northbridge would
do ECC, no. :-)
> We've had cases where Memtest(*) reported everything fine, yet I was
> able to generate ECC errors in a few minutes by running a memory
> intensive app. Memtest does do some hardware exercise, but its not
> usually hitting memory the way apps do. That difference can be
> significant. This is in part why the day job stopped using memtest for
> testing a number of years ago. We now run heavy duty electronic
> structure codes, and pi/e/... computations for burn in.
Fair point. I did also run the non-+ version of Memtest, which I
understand uses a different algorithm, and a couple other things
I found on the UBCD, so I'm *relatively* confident I don't have a
running RAM problem, though as you say, not 100%.
> > *continuing* problem isn't hardware; I'm pretty sure it was just the
> > failing 12V rail on the dying PS. I just have to clean up after it
> > enough to get *one* of these 2 drives cleaned off, then I can make a
> > new FS, and play musical files.
>
> Ahhh ...
>
> I was running a Plex server on an old machine for a while. I had to
> shift over to a beefier box with ECC ram and more CPUs. Right now my
> Plex server has 8 cpus, 24 GB RAM, and about 1TB of disk (old). Once
> you start doing recoding on the fly (multi-resolution output), you
> need the ram and processor power.
>
> >
> > Or, I may just go grab a 3TB external after all. :-)
>
> If you do that, and you still hit the error, chances are you might
> need to swap out your MB and CPU/RAM to something newer (not to mention the
> PS). I'd recommend ECC based systems if at all possible. Xfs can and
> will get very unhappy if bits are flipped on its data structures while
> you are making changes to the file system.
As it happens, Dave helped me clean up a mess 4 or 5 years ago, where
a *wire opened up* on the PATA cable, and all my data structures had
a missing bit. Ghod was that a mess.
We did end up getting the drive. So assuming I can reliably read the
big drive (I have a 3T, a 2T, and a 1T all with different problems),
I'm going to move all the files from it to the new 3T I just bought,
and then play musical files down the chain one at a time.
Thank ghod the new season hasn't started yet. ;-)
Thanks for the help, Joe.
Oh, and the script that Stan was so worried about? It's all
rm and mv commands. 5859 of them.
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: XFS recovery resumes...
2013-08-18 22:06 ` Stan Hoeppner
@ 2013-08-19 3:55 ` Jay Ashworth
2013-08-19 6:47 ` Stan Hoeppner
0 siblings, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-19 3:55 UTC (permalink / raw)
To: xfs
Still the same outage from 2 weeks ago, Stan; my script had nothing to do with breaking the FSs. Was a zorched power supply, almost certainly.
And in fact, after 32 years adminning *nix boxes for a living, yes, I do expect that if any userland program can /corrupt/ FS internals without twiddling with /dev/sdX, either the FS is broken or the hardware is.
In this case I'm quite certain it /was/ the hardware, and 85-90% confident it's fixed now.
Cheers,
-jra
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Stan Hoeppner <stan@hardwarefreak.com> wrote:
On 8/18/2013 4:38 PM, Jay Ashworth wrote:
> I'm trying to dedupe the two large XFS filesystems on which I have DVR
> recordings, so that I can walk around amongst the available HDDs and create
> new filesystems under everything.
>
> Every time I rm a file, the filesystem blows up, and the driver shuts it
> down.
>
> Some background:
>
> At the moment, I have 2 devices, /dev/sdd1 mounted on /appl/media4, and
> /dev/sda1 mounted on /appl/media5, and a large script, created by hand-
> hacking the output of a perl dupe finder script.
>
> The large script was mangled so that it would remove anything that was a
> dupe from media4, unless the file was an unlabeled lost+found on media5,
> and had a name on media4. In that case, I removed the file on media5, and
> then moved it from media4 to media5.
>
> After the hand-hacking on the script, I sorted it to do all the rm's first,
> and then all the mv's, to make sure free space went up before it went down.
>
> And, of course, when I ran the script, it caused the XFS driver to cough and
> die, leading to error 5s and gnashing of teeth.
If this script is the catalyst of your XFS problems, it seems logical
that you would include said script in your trouble report, yet you did
not. It's a bit foolish to assume you can't break a Linux subsystem
with a poorly written program and/or in combination with a platform that
isn't up to the task being asked of it. As Joe mentioned, having too
little RAM could be part of this problem.
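For reference, the ordering described in the quoted report (all rm's sorted ahead of all mv's, so free space rises before the moves consume it) can be sketched like this. The actual script was never posted, so the file and function names here are purely hypothetical:

```shell
# Hypothetical sketch of the described cleanup ordering (POSIX shell).
# Given a generated script of "rm" and "mv" lines, emit every removal
# before any move, so free space goes up before it goes down.
order_cleanup() {
    grep '^rm ' "$1"    # removals first: free space increases
    grep '^mv ' "$1"    # then the moves that consume it
}
# usage: order_cleanup cleanup.sh > cleanup-ordered.sh
```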
--
Stan
* Re: XFS recovery resumes...
2013-08-19 3:55 ` Jay Ashworth
@ 2013-08-19 6:47 ` Stan Hoeppner
2013-08-24 23:43 ` Jay Ashworth
2013-08-24 23:48 ` Default mkfs parms for my DVR drive Jay Ashworth
0 siblings, 2 replies; 23+ messages in thread
From: Stan Hoeppner @ 2013-08-19 6:47 UTC (permalink / raw)
To: Jay Ashworth; +Cc: xfs
On 8/18/2013 10:55 PM, Jay Ashworth wrote:
> Still the same outage from 2 weeks ago, Stan; my script had nothing to do with breaking the FSs. Was a zorched power supply, almost certainly.
Sorry I missed this in your first post Jay.
> [1278872.584543] XFS (sda1): Corruption of in-memory data detected.
> Shutting down filesystem
Joe appears to have hit the nail on the head WRT this being a hardware
problem. This error confirms it. It would appear that when the Antec
PSU went South it damaged a motherboard device, possibly a VRM, probably
a cap or two, or more. Maybe damaged a DRAM cell or few that work fine
with memtest86+ but not with the access pattern generated by your XFS
workload.
I'd first try manually clocking the DIMMs down a bit, from 400 to 333,
or 333 to 266, whichever is called for. IIRC that VIA Northbrige has
decoupled CPU and DRAM buses so you should be able to clock the DRAM
down without affecting CPU frequency. If the problem persists, swap the
DIMMs if you have some on hand or can get them really cheap like $10 for
a pair. If that doesn't fix it, this may be a viable inexpensive solution:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813186215
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145252
$109 to replace your central electronics complex. This is the least
expensive quality set of parts with good feature set I could come up
with at Newegg, to take the sting out of dropping cash on a forced
upgrade. $15 more for the Foxconn AM3 board w/HDMI if you have a newer
TV or AV receiver. If it all ships from the Memphis warehouse it should
reach St. Petersburg in a few days, a couple more if items ship from the
LA or Newark facilities. I very rarely get anything from Newark, mostly
from Memphis, then LA.
--
Stan
* Re: XFS recovery resumes...
2013-08-11 22:36 ` XFS recovery resumes Jay Ashworth
2013-08-18 21:38 ` Jay Ashworth
@ 2013-08-22 9:16 ` Stefan Ring
2013-08-27 23:59 ` Dave Chinner
1 sibling, 1 reply; 23+ messages in thread
From: Stefan Ring @ 2013-08-22 9:16 UTC (permalink / raw)
To: Jay Ashworth; +Cc: Linux fs XFS
On Mon, Aug 12, 2013 at 12:36 AM, Jay Ashworth <jra@baylink.com> wrote:
> (and I apologize that I can't copy that in; I was running under screen, and
> it doesn't cooperate with scrollback well).
Running inside screen is perfect for conserving scrollback, although
actually getting it out is a bit tedious:
- Enter copy mode: Ctrl-a Ctrl-[ (moving around: Pg-Up/Pg-Down via Ctrl-B/Ctrl-F)
- Mark a selection: SPC + moving up or down
- Yank: Y
- Write to file: Ctrl-a :writebuf /tmp/scrlog
I also tend to have this in my ~/.screenrc:
defscrollback 20000
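For what it's worth, the copy-mode dance can be skipped entirely (this assumes GNU screen; the session name "dvr" is made up for the example):

```shell
# Sketch: configure a deep scrollback before starting screen, and later
# dump window contents plus history non-interactively with "hardcopy -h".
grep -q '^defscrollback' "$HOME/.screenrc" 2>/dev/null ||
    echo 'defscrollback 20000' >> "$HOME/.screenrc"
# from any shell, against a running session named "dvr":
#   screen -S dvr -X hardcopy -h /tmp/scrlog
```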
* Re: XFS recovery resumes...
2013-08-19 6:47 ` Stan Hoeppner
@ 2013-08-24 23:43 ` Jay Ashworth
2013-08-25 3:44 ` Stan Hoeppner
2013-08-24 23:48 ` Default mkfs parms for my DVR drive Jay Ashworth
1 sibling, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-24 23:43 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Stan Hoeppner" <stan@hardwarefreak.com>
> Joe appears to have hit the nail on the head WRT this being a hardware
> problem. This error confirms it. It would appear that when the Antec
> PSU went South it damaged a motherboard device, possibly a VRM, probably
> a cap or two, or more. Maybe damaged a DRAM cell or few that work fine
> with memtest86+ but not with the access pattern generated by your XFS
> workload.
Well, it appears you may be right.
I'd got all the data off that 3T with no read failures, and then remade
the filesystem.
I had to use -f because it saw the old one, but I don't know if that's
pertinent here or not.
Anyroad, I made the new filesystem, with whatever mkfs.xfs's defaults are
for a 3T filesystem in 3.1.11, and then started rsyncing the 2TB drive onto
it, so I could fix that one.
Got 88GB in, and did the same thing:
===========================================
Aug 22 13:34:13 duckling kernel: [67215.008867] XFS (sda1): Corruption detected. Unmount and run xfs_repair
Aug 22 13:34:13 duckling kernel: [67215.008899] XFS (sda1): Internal error xfs_trans_cancel at line 1467 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_trans.c. Caller 0xe3d9349d
Aug 22 13:34:13 duckling kernel: [67215.008903]
Aug 22 13:34:13 duckling kernel: [67215.008910] Pid: 4122, comm: rsync Not tainted 3.4.47-2.38-default #1
Aug 22 13:34:13 duckling kernel: [67215.008914] Call Trace:
Aug 22 13:34:13 duckling kernel: [67215.008946] [<c0205349>] try_stack_unwind+0x199/0x1b0
Aug 22 13:34:13 duckling kernel: [67215.008959] [<c02041c7>] dump_trace+0x47/0xf0
Aug 22 13:34:13 duckling kernel: [67215.008968] [<c02053ab>] show_trace_log_lvl+0x4b/0x60
Aug 22 13:34:13 duckling kernel: [67215.008975] [<c02053d8>] show_trace+0x18/0x20
Aug 22 13:34:13 duckling kernel: [67215.008986] [<c06825ba>] dump_stack+0x6d/0x72
Aug 22 13:34:13 duckling kernel: [67215.009137] [<e3dd2d47>] xfs_trans_cancel+0xe7/0x110 [xfs]
Aug 22 13:34:13 duckling kernel: [67215.009426] [<e3d9349d>] xfs_create+0x22d/0x570 [xfs]
Aug 22 13:34:13 duckling kernel: [67215.009551] [<e3d8aafa>] xfs_vn_mknod+0x8a/0x170 [xfs]
Aug 22 13:34:13 duckling kernel: [67215.009624] [<c032ce03>] vfs_create+0xa3/0x130
Aug 22 13:34:13 duckling kernel: [67215.009634] [<c032f215>] do_last+0x6b5/0x7e0
Aug 22 13:34:13 duckling kernel: [67215.009644] [<c032f42a>] path_openat+0xaa/0x360
Aug 22 13:34:13 duckling kernel: [67215.009652] [<c032f7ce>] do_filp_open+0x2e/0x80
Aug 22 13:34:13 duckling kernel: [67215.009664] [<c032133e>] do_sys_open+0xee/0x1d0
Aug 22 13:34:13 duckling kernel: [67215.009673] [<c0321450>] sys_open+0x30/0x40
Aug 22 13:34:13 duckling kernel: [67215.009687] [<c069331c>] sysenter_do_call+0x12/0x28
Aug 22 13:34:13 duckling kernel: [67215.009719] [<b76bb430>] 0xb76bb42f
Aug 22 13:34:13 duckling kernel: [67215.009726] XFS (sda1): xfs_do_force_shutdown(0x8) called from line 1468 of file /home/abuild/rpmbuild/BUILD/kernel-default-3.4.47/linux-3.4/fs/xfs/xfs_trans.c. Return address = 0xe3dd2d5f
Aug 22 13:34:13 duckling kernel: [67215.034952] XFS (sda1): Corruption of in-memory data detected. Shutting down filesystem
Aug 22 13:34:13 duckling kernel: [67215.034966] XFS (sda1): Please umount the filesystem and rectify the problem(s)
===========================================
Followed by the obligatory:
Aug 22 13:35:37 duckling kernel: [67299.040080] XFS (sda1): xfs_log_force: error 5 returned.
a lot.
> I'd first try manually clocking the DIMMs down a bit, from 400 to 333,
> or 333 to 266, whichever is called for. IIRC that VIA Northbrige has
> decoupled CPU and DRAM buses so you should be able to clock the DRAM
> down without affecting CPU frequency. If the problem persists, swap the
> DIMMs if you have some on hand or can get them really cheap like $10
> for a pair.
I'll try swapping it; this mobo has always gotten whacky if we went over 512M,
which is why we haven't.
I don't know if I can manually reclock the RAM, though I might be able to turn the
waitstates up.
> If that doesn't fix it, this may be a viable inexpensive
> solution:
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813186215
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820145252
>
> $109 to replace your central electronics complex. This is the least
> expensive quality set of parts with good feature set I could come up
> with at Newegg, to take the sting out of dropping cash on a forced
> upgrade. $15 more for the Foxconn AM3 board w/HDMI if you have a newer
> TV or AV receiver.
Well, I can live without HDMI, but my present MS-7021 mobo has 5 PCI
slots, and I'm using all of them: 2 PVR-150s, a PVR-500, and a SiI
4-port raid (which will talk to 2 and 3TB drives; the motherboard SATA
won't even see them).
I forget what's in 5, but I think it was the only VGA card I had with
S-Video out.
So, while that's a damn nice price point, it will require me to buy
a bunch of Ethernet tuners as well. <sigh>
I'll try the RAM. It's really odd, though, that the badblocks workload
and both memtests couldn't find a problem, if it is the memory plane...
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Default mkfs parms for my DVR drive
2013-08-19 6:47 ` Stan Hoeppner
2013-08-24 23:43 ` Jay Ashworth
@ 2013-08-24 23:48 ` Jay Ashworth
2013-08-25 0:00 ` Joe Landman
1 sibling, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-24 23:48 UTC (permalink / raw)
To: xfs
This is a Seagate ST3000DM001, all one volume, for my sister's DVR on
which I've been doing this volume recovery work. The default setup that
mkfs.xfs returns with no parms supplied is this:
meta-data=/dev/sda1 isize=256 agcount=4, agsize=183141568 blks
= sectsz=4096 attr=2, projid32bit=0
data = bsize=4096 blocks=732566272, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=357698, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
and that takes about 3 minutes to mkfs a 3TB drive.
Anyone have some thoughts they wish to cast upon the waters about either part
of that?
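As a quick sanity check on those numbers (my own arithmetic, not from the original post), the reported geometry is internally consistent:

```shell
# Cross-check of the mkfs.xfs output above, using bash arithmetic:
# agcount * agsize should equal the data block count, and
# blocks * bsize is the usable size in bytes.
agcount=4; agsize=183141568; blocks=732566272; bsize=4096
echo $(( agcount * agsize ))                     # 732566272, matches "blocks="
echo $(( blocks * bsize / 1024 / 1024 / 1024 ))  # 2794 GiB, i.e. the "3 TB" drive
```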
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: Default mkfs parms for my DVR drive
2013-08-24 23:48 ` Default mkfs parms for my DVR drive Jay Ashworth
@ 2013-08-25 0:00 ` Joe Landman
2013-08-25 0:41 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Joe Landman @ 2013-08-25 0:00 UTC (permalink / raw)
To: xfs
On 08/24/2013 07:48 PM, Jay Ashworth wrote:
> This is a Seagate ST3000DM001, all one volume, for my sister's DVR on
> which I've been doing this volume recovery work. The default setup that
> mkfs.xfs returns with no parms supplied is this:
>
> meta-data=/dev/sda1 isize=256 agcount=4, agsize=183141568 blks
> = sectsz=4096 attr=2, projid32bit=0
> data = bsize=4096 blocks=732566272, imaxpct=5
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal log bsize=4096 blocks=357698, version=2
> = sectsz=4096 sunit=1 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> and that takes about 3 minutes to mkfs a 3TB drive.
>
> Anyone have some thoughts they wish to cast upon the waters about either part
> of that?
Dave, Eric, and the rest of the xfs team will tell you "use the defaults
Luke". For 99 and 44/100ths percent of users, this is the right choice.
I am guessing that some of the delay may be the speed of the interface
to the disk ... but even then 3 minutes sounds long, unless something
else is hitting the disk at the same time.
Which kernel version btw? A quick 'uname -a' is a good thing.
Your hardware could also be somewhat slow ... Could you do an
lshw -class disk -class storage
* Re: Default mkfs parms for my DVR drive
2013-08-25 0:00 ` Joe Landman
@ 2013-08-25 0:41 ` Jay Ashworth
2013-08-25 3:41 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-25 0:41 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Joe Landman" <joe.landman@gmail.com>
> Dave, Eric, and the rest of the xfs team will tell you "use the defaults
> Luke". For 99 and 44/100ths percent of users, this is the right choice.
Well, if I can't, they're the ones who screwed up. :-)
> I am guessing that some of the delay may be the speed of the interface
> to the disk ... but even then 3 minutes sounds long, unless something
> else is hitting the disk at the same time.
>
> Which kernel version btw? A quick 'uname -a' is a good thing.
Ok, ok; this isn't the same thread anymore. :-)
Linux duckling 3.4.47-2.38-default #1 SMP Fri May 31 20:17:40 UTC 2013 (3961086) i686 athlon i386 GNU/Linux
> Your hardware could also be somewhat slow ... Could you do an
>
> lshw -class disk -class storage
It probably is.
It's an old Athlon, MSI MS-7021, KT6V chipset; 512M of DDR... maybe that's
DDR 2; it won't run right with more.
I don't seem to have lshw.
Or, oddly, hwconfig. It's got 3 Seagate ST3000DM001s, 2 on a SiI 7114 PCI
with a pair of Fujitsu Deskstars, 2T and 1T; 40G Samsung boot on the mobo SATA;
the third 3000 is in a USB 2 enclosure.
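Lacking lshw, roughly the same inventory can be pulled straight from sysfs (a sketch assuming a Linux /sys; the output format and function name are my own):

```shell
# Rough lshw substitute: list each block device's model and capacity.
# /sys/block/<dev>/size is always in 512-byte sectors, regardless of
# the drive's physical sector size.
list_disks() {
    for d in /sys/block/*; do
        [ -r "$d/size" ] || continue
        model=$(cat "$d/device/model" 2>/dev/null || echo '?')
        sectors=$(cat "$d/size")
        echo "$(basename "$d"): $model $(( sectors * 512 / 1000000000 )) GB"
    done
}
list_disks
```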
And this latest rsync has gotten 78G and then paused; 80G and then paused....
84G and then paused...
5 min LA 1.9, and the rsync is the top process.
I have smartd running; no errors yet.
Still watching...
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: Default mkfs parms for my DVR drive
2013-08-25 0:41 ` Jay Ashworth
@ 2013-08-25 3:41 ` Jay Ashworth
0 siblings, 0 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-25 3:41 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Jay Ashworth" <jra@baylink.com>
> And this latest rsync has gotten 78G and then paused; 80G and then
> paused....
>
> 84G and then paused...
>
> 5 min LA 1.9, and the rsync is the top process.
>
> I have smartd running; no errors yet.
>
> Still watching...
Well, it's now up to 377GB, and it hasn't crashed yet. Since the original
FSs and the new one were mkfs'd by different versions of the mkfs program,
I suppose it's possible that might have contributed to the crash in some
perverted way that running badblocks -w in the middle would definitely
prevent.
We'll see if it survives the night.
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: XFS recovery resumes...
2013-08-24 23:43 ` Jay Ashworth
@ 2013-08-25 3:44 ` Stan Hoeppner
2013-08-25 15:29 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Stan Hoeppner @ 2013-08-25 3:44 UTC (permalink / raw)
To: xfs
On 8/24/2013 6:43 PM, Jay Ashworth wrote:
> ----- Original Message -----
>> From: "Stan Hoeppner" <stan@hardwarefreak.com>
>
>> Joe appears to have hit the nail on the head WRT this being a hardware
>> problem. This error confirms it. It would appear that when the Antec
>> PSU went South it damaged a motherboard device, possibly a VRM, probably
>> a cap or two, or more. Maybe damaged a DRAM cell or few that work fine
>> with memtest86+ but not with the access pattern generated by your XFS
>> workload.
>
> Well, it appears you may be right.
...
> Aug 22 13:34:13 duckling kernel: [67215.034952] XFS (sda1): Corruption of in-memory data detected. Shutting down filesystem
I don't see any other possibility than a hardware problem. And given
the age of that hardware, it's cheaper in dollars and time to start over
with new gear.
> I'll try swapping it; this mobo has always gotten whacky if we went over 512M,
> which is why we haven't.
The manual says up to 2GB DDR2. Board has two DIMM sockets, which means
1GB DIMMs supported. If anything over 512MB (2x256MB DIMMs) causes
problems then the board had a flaw, or needed a BIOS update, etc. And
now it's physically damaged.
> I don't know if I can manually reclock the RAM, though I might be able to turn the
> waitstates up.
That probably won't help but you can try it. The manual shows the BIOS
does not support independent clocking of the DRAM.
>> If that doesn't fix it, this may be a viable inexpensive
>> solution:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16813186215
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820145252
>>
>> $109 to replace your central electronics complex. This is the least
>> expensive quality set of parts with good feature set I could come up
>> with at Newegg, to take the sting out of dropping cash on a forced
>> upgrade. $15 more for the Foxconn AM3 board w/HDMI if you have a newer
>> TV or AV receiver.
>
> Well, I can live without HDMI, but my present MS-7021 mobo has 5 PCI
> slots, and I'm using all of them: 2 PVR-150s, a PVR-500, and a SiI
> 4-port raid (which will talk to 2 and 3TB drives; the motherboard SATA
> won't even see them).
You'll be extremely hard pressed to find a current board with more than
3 PCI unless you buy used. Hmmm...let's see....here we go:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813135329
http://www.newegg.com/Product/Product.aspx?Item=N82E16819113283
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148194
-- $155
For less than $50 more you not only get all the slots/ports you need,
but also a much faster dual core CPU and GPU, plus HDMI. And you'll no
longer have disks on the slow PCI bus. Looks like a winner.
> I forget what's in 5, but I think it was the only VGA card I had with
> S-Video out.
If you absolutely need Svideo/composite output then you'll need to use
an external converter or switch box, something like this:
http://www.newegg.com/Product/Product.aspx?Item=9SIA0U00JZ2490
> So, while that's a damn nice price point, it will require me to buy
> a bunch of Ethernet tuners as well. <sigh>
Not now. ;)
> I'll try the RAM. It's really odd, though, that the badblocks workload
> and both memtests couldn't find a problem, if it is the memory plane...
This isn't odd at all and actually quite common. The problem likely is
not in the DRAM modules or individual transistors in the DRAM chips.
The problem is likely unstable signalling to/from the DIMM sockets, or
unstable power to the CPU or Northbridge, caused by old and now damaged
power delivery circuits on the mainboard.
Download and run burnp6 for 5-10 minutes. That'll tell you if the CPU
is getting sufficient power. Make sure the CPU fan is in working order
first. It's called BURNp6 for a reason. The Athlons didn't have
thermal shutdown capability, and this will literally destroy the CPU
with heat build up if the fans aren't working properly. If cooling is
good, and the system hard locks or exhibits other strange behavior, then
you know it's time to replace the board. But I think you know that
already. This will simply be the exclamation point.
--
Stan
* Re: XFS recovery resumes...
2013-08-25 3:44 ` Stan Hoeppner
@ 2013-08-25 15:29 ` Jay Ashworth
2013-08-25 17:45 ` Stan Hoeppner
0 siblings, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-25 15:29 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Stan Hoeppner" <stan@hardwarefreak.com>
> I don't see any other possibility than a hardware problem. And given
> the age of that hardware, it's cheaper in dollars and time to start
> over with new gear.
Only if you have it, Stan. Only if you have it...
> > I'll try swapping it; this mobo has always gotten whacky if we went
> > over 512M, which is why we haven't.
>
> The manual says up to 2GB DDR2. Board has two DIMM sockets, which means
> 1GB DIMMs supported. If anything over 512MB (2x256MB DIMMs) causes
> problems then the board had a flaw, or needed a BIOS update, etc. And
> now it's physically damaged.
The BIOS was up to date when we installed it new.
> You'll be extremely hard pressed to find a current board with more
> than 3 PCI unless you buy used. Hmmm...let's see....here we go:
I know. :-}
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813135329
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819113283
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820148194
>
> -- $155
>
> For less than $50 more you not only get all the slots/ports you need,
> but also a much faster dual core CPU and GPU, plus HDMI. And you'll no
> longer have disks on the slow PCI bus. Looks like a winner.
It does.
> > I forget what's in 5, but I think it was the only VGA card I had
> > with
> > S-Video out.
>
> If you absolutely need Svideo/composite output then you'll need to use
> an external converter or switch box, something like this:
>
> http://www.newegg.com/Product/Product.aspx?Item=9SIA0U00JZ2490
I don't know if the set has HDMI in or not; it's an older Philips; 37"
I think. Probably.
> > I'll try the RAM. It's really odd, though, that the badblocks workload
> > and both memtests couldn't find a problem, if it is the memory plane...
>
> This isn't odd at all and actually quite common. The problem likely is
> not in the DRAM modules or individual transistors in the DRAM chips.
> The problem is likely unstable signalling to/from the DIMM sockets, or
> unstable power to the CPU or Northbridge, caused by old and now
> damaged power delivery circuits on the mainboard.
>
> Download and run burnp6 for 5-10 minutes. That'll tell you if the CPU
> is getting sufficient power. Make sure the CPU fan is in working order
> first. It's called BURNp6 for a reason. The Athlons didn't have
> thermal shutdown capability, and this will literally destroy the CPU
> with heat build up if the fans aren't working properly. If cooling is
> good, and the system hard locks or exhibits other strange behavior,
> then you know it's time to replace the board. But I think you know that
> already. This will simply be the exclamation point.
Well, oddly, it's up to about 1.4TB moved now overnight, and not a whisper
of an error in any channel. It does need to be replaced, but the question
is can I make it limp along reliably until she gets another job...
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: XFS recovery resumes...
2013-08-25 15:29 ` Jay Ashworth
@ 2013-08-25 17:45 ` Stan Hoeppner
2013-08-25 20:27 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Stan Hoeppner @ 2013-08-25 17:45 UTC (permalink / raw)
To: xfs
On 8/25/2013 10:29 AM, Jay Ashworth wrote:
> ----- Original Message -----
>> From: "Stan Hoeppner" <stan@hardwarefreak.com>
>
>> I don't see any other possibility than a hardware problem. And given
>> the age of that hardware, it's cheaper in dollars and time to start
>> over with new gear.
>
> Only if you have it, Stan. Only if you have it...
True, that.
>>> I'll try swapping it; this mobo has always gotten whacky if we went
>>> over 512M, which is why we haven't.
>>
>> The manual says up to 2GB DDR2. Board has two DIMM sockets, which means
>> 1GB DIMMs supported. If anything over 512MB (2x256MB DIMMs) causes
>> problems then the board had a flaw, or needed a BIOS update, etc. And
>> now it's physically damaged.
>
> The BIOS was up to date when we installed it new.
>
>> You'll be extremely hard pressed to find a current board with more
>> than 3 PCI unless you buy used. Hmmm...let's see....here we go:
>
> I know. :-}
>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16813135329
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16819113283
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820148194
>>
>> -- $155
>>
>> For less than $50 more you not only get all the slots/ports you need,
>> but also a much faster dual core CPU and GPU, plus HDMI. And you'll no
>> longer have disks on the slow PCI bus. Looks like a winner.
>
> It does.
>
>>> I forget what's in 5, but I think it was the only VGA card I had
>>> with
>>> S-Video out.
>>
>> If you absolutely need Svideo/composite output then you'll need to use
>> an external converter or switch box, something like this:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=9SIA0U00JZ2490
>
> I don't know if the set has HDMI in or not; it's an older Philips; 37"
> I think. Probably.
The board has VGA, DVI, and HDMI, so you should be covered six ways to
Sunday with flat panel displays. If this Philips is an older "fork lift
required" 37" CRT then it probably only has composite, Svideo, maybe
component.
>>> I'll try the RAM. It's really odd, though, that the badblocks workload
>>> and both memtests couldn't find a problem, if it is the memory plane...
>>
>> This isn't odd at all and actually quite common. The problem likely is
>> not in the DRAM modules or individual transistors in the DRAM chips.
>> The problem is likely unstable signalling to/from the DIMM sockets, or
>> unstable power to the CPU or Northbridge, caused by old and now
>> damaged power delivery circuits on the mainboard.
>>
>> Download and run burnp6 for 5-10 minutes. That'll tell you if the CPU
>> is getting sufficient power. Make sure the CPU fan is in working order
>> first. It's called BURNp6 for a reason. The Athlons didn't have
>> thermal shutdown capability, and this will literally destroy the CPU
>> with heat build up if the fans aren't working properly. If cooling is
>> good, and the system hard locks or exhibits other strange behavior,
>> then you know it's time to replace the board. But I think you know that
>> already. This will simply be the exclamation point.
>
> Well, oddly, it's up to about 1.4TB moved now overnight, and not a whisper
> of an error in any channel. It does need to be replaced, but the question
> is can I make it limp along reliably until she gets another job...
Just keep fingers/toes crossed. That mobo is nearly 10 years old and never
handled RAM correctly. You suffered a PSU failure which apparently
damaged something to some degree. But you now know there are relatively
inexpensive upgrade options available with the features you need, and
you can begin planning, while not in "emergency mode" with sis hounding
you every day to fix it. ;)
--
Stan
* Re: XFS recovery resumes...
2013-08-25 17:45 ` Stan Hoeppner
@ 2013-08-25 20:27 ` Jay Ashworth
2013-08-26 5:45 ` Stan Hoeppner
0 siblings, 1 reply; 23+ messages in thread
From: Jay Ashworth @ 2013-08-25 20:27 UTC (permalink / raw)
To: xfs
----- Original Message -----
> The board has VGA, DVI, and HDMI, so you should be covered six ways to
> Sunday with flat panel displays. If this Philips is an older "fork lift
> required" 37" CRT then it probably only has composite, Svideo, maybe
> component.
I *think* it has HDMI in, I just didn't have any HDMI capable VGA
cards at the time, so I moved on to something else in my head.
> > Well, oddly, it's up to about 1.4TB moved now overnight, and not a
> > whisper
> > of an error in any channel. It does need to be replaced, but the
> > question
> > is can I make it limp along reliably until she gets another job...
>
> Just keep fingers/toes crossed. That mobo is nearly 10 years old,
> never handled RAM correctly. You suffered a PSU failure which apparently
> damaged something to some degree. But you now know there are relatively
> inexpensive upgrade options available with the features you need, and
> you can begin planning, while not in "emergency mode" with sis
> hounding you every day to fix it. ;)
This is the second Major Catastrophe in about 8 years, so we've gotten
settled a bit that she takes second position if she can't pay my rate. :-)
But yes, an upgrade was planned; I just wanted to upgrade the damn tuners
first...
Thanks for the homework with NewEgg; I don't mind buying stuff from
them as long as it isn't HDDs. They can't pack worth a crap; it's
Received Wisdom on the MythTV mailing list that you *never* buy
drives from them, if you want them to last more than a year.
My endgame is to replace the entire backend with an HP DL180g6, which
has 12 SAS/SATA tray slots on the front, and proper cooling. But that,
too, is down the road a bit.
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: XFS recovery resumes...
2013-08-25 20:27 ` Jay Ashworth
@ 2013-08-26 5:45 ` Stan Hoeppner
2013-08-26 15:42 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Stan Hoeppner @ 2013-08-26 5:45 UTC (permalink / raw)
To: xfs
On 8/25/2013 3:27 PM, Jay Ashworth wrote:
...
> But yes, an upgrade was planned; I just wanted to upgrade the damn tuners
> first...
I'm not really into the DIY DVR scene, but I'd think with over the air,
cable, and sat all being digital now that you should be able to get a
single board to do the job, multi-channel simultaneous recording and all.
> Thanks for the homework with NewEgg;
No problem. I have a rep to maintain after all. Maybe you didn't
notice the right hand side of my email address. :)
> I don't mind buying stuff from
> them as long as it isn't HDDs. They can't pack worth a crap; it's
> Received Wisdom on the MythTV mailing list that you *never* buy
> drives from them, if you want them to last more than a year.
Hadn't heard that before. I've never had a problem with any of the
spinning drives I purchased from them. I had a Corsair SSD die after ~4
months, no fault of Newegg. Of the few grand I've spent with them since
2003, on parts for many new systems, repairs and upgrades, the only
other problem I've had was a $20 four channel fan controller w/one dead
channel outta the box.
> My endgame is to replace the entire backend with an HP DL180g6, which
> has 12 SAS/SATA tray slots on the front, and proper cooling. But that,
> too, is down the road a bit.
Are you still talking about your sister's DVR here? Build another PC
and spend some of the $$ you'd save on a 65" Panasonic Plasma. Save the
rest for a rainy day. The DL180G6 (discontinued BTW) with a handful of
drives will cost more than the PC and plasma TV combined.
--
Stan
* Re: XFS recovery resumes...
2013-08-26 5:45 ` Stan Hoeppner
@ 2013-08-26 15:42 ` Jay Ashworth
0 siblings, 0 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-26 15:42 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Stan Hoeppner" <stan@hardwarefreak.com>
> I'm not really into the DIY DVR scene, but I'd think with over the air,
> cable, and sat all being digital now that you should be able to get a
> single board to do the job, multi-channel simultaneous recording and
> all.
These days, that's the Ceton InfiniTV or the Silicon Dust HDHomeRun,
both of which are Ethernet-attach, and take zero slots.
And, alas, the other implication of unemployment and poverty is "analog
only cable".
> > Thanks for the homework with NewEgg;
>
> No problem. I have a rep to maintain after all. Maybe you didn't
> notice the right hand side of my email address. :)
Heh.
> > I don't mind buying stuff from
> > them as long as it isn't HDDs. They can't pack worth a crap; it's
> > Received Wisdom on the MythTV mailing list that you *never* buy
> > drives from them, if you want them to last more than a year.
>
> Hadn't heard that before. I've never had a problem with any of the
> spinning drives I purchased from them. I had a Corsair SSD die after ~4
> months, no fault of Newegg. Of the few grand I've spent with them since
> 2003, on parts for many new systems, repairs and upgrades, the only
> other problem I've had was a $20 four channel fan controller w/one
> dead channel outta the box.
Things may have changed, but one of these ST3000s we're talking about came
by UPS from TigerDirect... wrapped in one wrap of heavy kraft paper, and nothing
else. Happily, their local retail store swapped it for me, for one that
came in proper packaging, without trouble.
> > My endgame is to replace the entire backend with an HP DL180g6,
> > which
> > has 12 SAS/SATA tray slots on the front, and proper cooling. But
> > that,
> > too, is down the road a bit.
>
> Are you still talking about your sister's DVR here? Build another PC
> and spend some of the $$ you'd save on a 65" Panasonic Plasma. Save the
> rest for a rainy day. The DL180G6 (discontinued BTW) with a handful of
> drives will cost more than the PC and plasma TV combined.
Because it's discontinued, the secondary market price is about $200, and
the 12 tray slots on the front will take the SATA drives I already have,
which is why I picked it.
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
* Re: XFS recovery resumes...
2013-08-22 9:16 ` XFS recovery resumes Stefan Ring
@ 2013-08-27 23:59 ` Dave Chinner
2013-08-28 0:19 ` Jay Ashworth
0 siblings, 1 reply; 23+ messages in thread
From: Dave Chinner @ 2013-08-27 23:59 UTC (permalink / raw)
To: Stefan Ring; +Cc: Jay Ashworth, Linux fs XFS
On Thu, Aug 22, 2013 at 11:16:18AM +0200, Stefan Ring wrote:
> On Mon, Aug 12, 2013 at 12:36 AM, Jay Ashworth <jra@baylink.com> wrote:
> > (and I apologize that I can't copy that in; I was running under screen, and
> > it doesn't cooperate with scrollback well).
>
> Running inside screen is perfect for conserving scrollback, although
> actually getting it out is a bit tedious:
>
> - Enter copy mode: Ctrl-a Ctrl-[ (moving around: Pg-Up/Pg-Down via
> Ctrl-B/Ctrl-F)
> - Mark a selection: SPC + moving up or down
> - Yank: Y
> - Write to file: Ctrl-a :writebuf /tmp/scrlog
>
> I also tend to have this in my ~/.screenrc:
>
> defscrollback 20000
Add this to your ~/.screenrc:
termcapinfo xterm|xterms|xs|rxvt ti@:te@
And screen will write to the terminal's scrollback buffer rather
than its own internal buffer.
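Putting Stefan's and Dave's suggestions side by side, a minimal ~/.screenrc
sketch (both directives are standard GNU screen commands; pick whichever
behavior you prefer):

```
# ~/.screenrc -- scrollback options discussed in this thread

# Option 1 (Stefan): enlarge screen's internal scrollback buffer;
# read it via copy mode (Ctrl-a [) and dump it with
#   Ctrl-a :writebuf /tmp/scrlog
defscrollback 20000

# Option 2 (Dave): blank out the ti/te (alternate-screen) capabilities
# so output scrolls in the terminal emulator's own scrollback
# instead of screen's internal buffer.
termcapinfo xterm|xterms|xs|rxvt ti@:te@
```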
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS recovery resumes...
2013-08-27 23:59 ` Dave Chinner
@ 2013-08-28 0:19 ` Jay Ashworth
0 siblings, 0 replies; 23+ messages in thread
From: Jay Ashworth @ 2013-08-28 0:19 UTC (permalink / raw)
To: xfs
----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> Add this to your ~/.screenrc:
>
> termcapinfo xterm|xterms|xs|rxvt ti@:te@
>
> And screen will write to the terminal's scrollback buffer rather
> than its own internal buffer.
Well, sure, but only from whatever screen is active, and they'll all
get mixed together. There's no real good answer to it...
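One workaround worth noting (a sketch, not something suggested in the
thread): GNU screen can log each window's output to a separate file, which
sidesteps the mixing problem at the cost of logging everything:

```
# ~/.screenrc -- per-window logging sketch
# %n expands to the window number, so each window gets its own file.
logfile /tmp/screenlog.%n
deflog on          # start logging automatically in every new window
# Ctrl-a H toggles logging for the current window by hand.
```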
-- j
--
Jay R. Ashworth Baylink jra@baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
Thread overview: 23+ messages
[not found] <29874428.3384.1376259762936.JavaMail.root@benjamin.baylink.com>
2013-08-11 22:36 ` XFS recovery resumes Jay Ashworth
2013-08-18 21:38 ` Jay Ashworth
2013-08-18 21:51 ` Joe Landman
2013-08-18 22:11 ` Jay Ashworth
2013-08-18 22:57 ` Joe Landman
2013-08-18 23:21 ` Jay Ashworth
2013-08-18 22:06 ` Stan Hoeppner
2013-08-19 3:55 ` Jay Ashworth
2013-08-19 6:47 ` Stan Hoeppner
2013-08-24 23:43 ` Jay Ashworth
2013-08-25 3:44 ` Stan Hoeppner
2013-08-25 15:29 ` Jay Ashworth
2013-08-25 17:45 ` Stan Hoeppner
2013-08-25 20:27 ` Jay Ashworth
2013-08-26 5:45 ` Stan Hoeppner
2013-08-26 15:42 ` Jay Ashworth
2013-08-24 23:48 ` Default mkfs parms for my DVR drive Jay Ashworth
2013-08-25 0:00 ` Joe Landman
2013-08-25 0:41 ` Jay Ashworth
2013-08-25 3:41 ` Jay Ashworth
2013-08-22 9:16 ` XFS recovery resumes Stefan Ring
2013-08-27 23:59 ` Dave Chinner
2013-08-28 0:19 ` Jay Ashworth