From: Adam Talbot <talbotx@comcast.net>
To: Adam Talbot <talbotx@comcast.net>
Cc: David Greaves <david@dgreaves.com>, Neil Brown <neilb@suse.de>,
Francois Barre <francois.barre@gmail.com>,
linux-raid@vger.kernel.org
Subject: Re: Large single raid and XFS or two small ones and EXT3?
Date: Sun, 25 Jun 2006 07:51:37 -0700
Message-ID: <449EA2F9.3050503@comcast.net>
In-Reply-To: <449DC23E.4000604@comcast.net>
ACK!
At one point someone stated that they were having problems with XFS
crashing under high NFS loads... Did it look something like this?
-Adam
Starting XFS recovery on filesystem: md0 (logdev: internal)
Filesystem "md0": XFS internal error xlog_valid_rec_header(1) at line
3478 of file fs/xfs/xfs_log_recover.c. Caller 0xffffffff802114fc
Call Trace: <ffffffff80211437>{xlog_valid_rec_header+231}
<ffffffff802114fc>{xlog_do_recovery_pass+172}
<ffffffff8020f0c8>{xlog_find_tail+2344}
<ffffffff802217e1>{kmem_alloc+97}
<ffffffff80211bb0>{xlog_recover+192}
<ffffffff8020c564>{xfs_log_mount+1380}
<ffffffff80213968>{xfs_mountfs+2712}
<ffffffff8016aa3a>{set_blocksize+138}
<ffffffff80224d1d>{xfs_setsize_buftarg_flags+61}
<ffffffff802192b4>{xfs_mount+2724}
<ffffffff8022ae00>{linvfs_fill_super+0}
<ffffffff8022aeb8>{linvfs_fill_super+184}
<ffffffff8024a62e>{strlcpy+78}
<ffffffff80169db2>{sget+722} <ffffffff8016a460>{set_bdev_super+0}
<ffffffff8022ae00>{linvfs_fill_super+0}
<ffffffff8022ae00>{linvfs_fill_super+0}
<ffffffff8016a5bc>{get_sb_bdev+268}
<ffffffff8016a84b>{do_kern_mount+107}
<ffffffff8017eed3>{do_mount+1603}
<ffffffff8011a2f9>{do_page_fault+1033}
<ffffffff80145f66>{find_get_pages+22}
<ffffffff8014d57a>{invalidate_mapping_pages+202}
<ffffffff80149f99>{__alloc_pages+89}
<ffffffff8014a234>{__get_free_pages+52}
<ffffffff8017f257>{sys_mount+151} <ffffffff8010a996>{system_call+126}
XFS: log mount/recovery failed: error 990
XFS: log mount failed
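
For anyone hitting the same mount failure, the usual first steps look
roughly like this (a sketch, assuming the array really is /dev/md0 as in
the trace above and the disks underneath are healthy):

  # Read-only look at the on-disk log; errors here confirm log corruption
  xfs_logprint -t /dev/md0

  # Let XFS try to replay the log itself first
  mount -t xfs /dev/md0 /data

  # Last resort: zero the log and repair.  WARNING: -L throws away any
  # uncommitted transactions, so the most recent writes can be lost.
  xfs_repair -L /dev/md0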
Adam Talbot wrote:
> Trying to test for tuning with different chunk sizes. I just finished the
> 16K chunk and am about 20% done with the 32K test. Here are the numbers
> on the 16K chunk; I will send 32, 96, 128, 192 and 256 as I get them. But
> keep in mind each one of these tests takes about 4~6 hours, so it is a
> slow process... I have settled on XFS as the file system type; it seems
> to be able to beat anything else out there.
> -Adam
>
> XFS
> Config=NAS+NFS
> RAID6 16K chunk
> nas tmp # time tar cf - . | (cd /data ; tar xf - )
> real 252m40.143s
> user 1m4.720s
> sys 25m6.270s
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/md/0             1.1T  371G  748G  34% /data
> 4.207 hours @ 90,167M/hour or 1502M/min or 25.05M/sec
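
A rough sketch of the same timing run and the throughput arithmetic, for
anyone who wants to repeat it (the source path is hypothetical; /data and
the numbers match the quoted output):

  cd /tmp/testdata                     # hypothetical source tree
  start=$(date +%s)
  tar cf - . | (cd /data && tar xf -)
  elapsed=$(( $(date +%s) - start ))

  # MiB now used on the target, divided by wall-clock seconds;
  # 371G in ~4.2 hours works out to the ~25M/sec quoted above
  used=$(df -m /data | awk 'NR==2 {print $3}')
  echo "$used $elapsed" | awk '{printf "%.2f MiB/s (%.0f MiB/min)\n", $1/$2, $1*60/$2}'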
>
> David Greaves wrote:
>
>> Adam Talbot wrote:
>>
>>
>>> OK, this topic I really need to get in on.
>>> I have spent the last few weeks benchmarking my new 1.2TB, 6-disk,
>>> RAID6 array.
>>>
>>>
>> Very interesting. Thanks.
>>
>> Did you get around to any 'tuning'?
>> Things like RAID chunk size, external logs for XFS, blockdev readahead
>> on the underlying devices and the raid device?
>>
>> David
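
For reference, the knobs David is asking about look roughly like this (a
sketch only; device names, sizes and values are examples, not Adam's
actual setup):

  # Chunk size is fixed at array creation time (16 KiB here, 6-disk RAID6)
  mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=16 /dev/sd[a-f]1

  # Readahead is set in 512-byte sectors, per member and on the array
  blockdev --setra 4096 /dev/sda /dev/md0
  blockdev --getra /dev/md0            # verify

  # External XFS log on a separate spindle, then mount pointing at it
  mkfs.xfs -l logdev=/dev/sdg1,size=64m /dev/md0
  mount -t xfs -o logdev=/dev/sdg1 /dev/md0 /data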