From: Russell Cattelan <cattelan@thebarn.com>
To: Al Boldi <a1426z@gawab.com>
Cc: linux-raid@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: Large single raid and XFS or two small ones and EXT3?
Date: Fri, 23 Jun 2006 11:21:34 -0500 [thread overview]
Message-ID: <449C150E.3040107@thebarn.com> (raw)
In-Reply-To: <200606231701.44803.a1426z@gawab.com>
Al Boldi wrote:
>Chris Allen wrote:
>
>
>>Francois Barre wrote:
>>
>>
>>>2006/6/23, PFC <lists@peufeu.com>:
>>>
>>>
>>>> - XFS is faster and fragments less, but make sure you have a
>>>>good UPS
>>>>
>>>>
>>>Why a good UPS ? XFS has a good strong journal, I never had an issue
>>>with it yet... And believe me, I did have some dirty things happening
>>>here...
>>>
>>>
>>>
>>>> - ReiserFS 3.6 is mature and fast, too, you might consider it
>>>> - ext3 is slow if you have many files in one directory, but
>>>>has more
>>>>mature tools (resize, recovery etc)
>>>>
>>>>
>>>XFS tools are kind of mature also. Online grow, dump, ...
>>>
>>>
>>>
>>>> I'd go with XFS or Reiser.
>>>>
>>>>
>>>I'd go with XFS. But I may be kind of fanatic...
>>>
>>>
>>Strange that whatever the filesystem you get equal numbers of people
>>saying that they have never lost a single byte to those who have had
>>horrible corruption and would never touch it again. We stopped using XFS
>>about a year ago because we were getting kernel stack space panics under
>>heavy load over NFS. It looks like the time has come to give it another
>>try.
>>
>>
>
>If you are keen on data integrity then don't touch any fs w/o data=ordered.
>
>ext3 is still king wrt data=ordered, albeit slow.
>
>Now XFS is fast, but doesn't support data=ordered. It seems that their
>solution to the problem is to pass the burden onto hw by using barriers.
>Maybe XFS can get away with this. Maybe.
>
>Thanks!
>
>--
>
>
When you refer to data=ordered, are you talking about ext3 user data
journaling?
While user data journaling seems like a good idea, it is unclear what
benefits it really provides.
By writing all user data twice, the write performance of the file system
is effectively halved.
Granted, the log is one area of the disk, so some performance advantage
shows up due to less head seeking for those writes.
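For reference, ext3's data handling is selected at mount time; a minimal
sketch of the three modes (device and mount point here are hypothetical):

```shell
# data=journal: all user data goes through the journal before reaching
# its final location -- the double-write mode discussed above.
mount -t ext3 -o data=journal /dev/sda1 /mnt

# data=ordered (the ext3 default): only metadata is journaled, but data
# blocks are forced to disk before the metadata referencing them commits.
mount -t ext3 -o data=ordered /dev/sda1 /mnt

# data=writeback: metadata journaling only, with no ordering guarantee
# between data and metadata writes.
mount -t ext3 -o data=writeback /dev/sda1 /mnt
```

Note that only data=journal actually writes user data twice; data=ordered
imposes a write ordering rather than journaling the data itself.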
As far as metadata journaling goes, it is a fundamental requirement that
the journal is synced to disk up to a given point in order to release the
pinned metadata, thus allowing the metadata itself to be synced to disk.
The way most file systems guarantee consistency is to either sync all
outstanding metadata changes to disk or to sync a record of what in-core
changes have been made.
In the XFS case, since it logs metadata deltas to the log, it can record
more change operations in a smaller number of disk blocks; ext3, on the
other hand, writes the entire metadata block to the log.
As far as barriers go, I assume you are referring to the IDE write
barriers?
The need for barrier support in the file system is a result of cheap IDE
disks providing large write caches but not having enough reserve power to
guarantee that the cache will be synced to disk in the event of a power
failure.
Originally, when XFS was written, the disks/raids used by SGI systems
were pretty much exclusively enterprise-level devices that guaranteed the
write caches would be flushed in the event of a power failure.
Note that ext3, XFS, and reiserfs all use write barriers now for IDE
disks.
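For what it's worth, barrier use can also be controlled per filesystem at
mount time; a sketch of the relevant options (device path is hypothetical,
and exact defaults vary by kernel version):

```shell
# ext3: barrier=1 enables write barriers (disabled by default in many
# 2.6-era kernels).
mount -t ext3 -o barrier=1 /dev/hda1 /mnt

# reiserfs: barrier=flush issues a cache flush at each transaction commit.
mount -t reiserfs -o barrier=flush /dev/hda1 /mnt

# xfs: barriers are on by default; nobarrier disables them, e.g. for
# devices with a battery-backed write cache.
mount -t xfs -o nobarrier /dev/hda1 /mnt
```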