linux-raid.vger.kernel.org archive mirror
From: Tom Vier <tmv@comcast.net>
To: Russell Cattelan <cattelan@thebarn.com>
Cc: Al Boldi <a1426z@gawab.com>,
	linux-raid@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: Large single raid and XFS or two small ones and EXT3?
Date: Fri, 23 Jun 2006 14:19:25 -0400
Message-ID: <20060623181925.GB3894@zero> (raw)
In-Reply-To: <449C150E.3040107@thebarn.com>

On Fri, Jun 23, 2006 at 11:21:34AM -0500, Russell Cattelan wrote:
> When you refer to data=ordered are you taking about ext3 user data 
> journaling?

iirc, data=ordered just writes new data out before updating block pointers,
the file's length in its inode, and the block usage bitmap. That way you
don't get junk or zeroed data at the tail of the file. However, i think that
to prevent data leaks (from deleted files), data=writeback requires a write
to the journal indicating which blocks are being added, so that on recovery
they can be zeroed if the transaction wasn't completed.
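The ordering idea can be sketched in a few lines. This is a toy model with hypothetical names, not ext3's actual code: data blocks reach their final location before the metadata (here, just the file length) is committed, so an interrupted transaction leaves the old, consistent tail instead of junk.

```python
# Toy model of data=ordered semantics (illustration only, not ext3 internals).
# Data blocks are written to disk *before* the metadata commit; a crash
# between the two steps leaves the old length, so readers never see junk.

class ToyFS:
    def __init__(self):
        self.blocks = bytearray()   # on-disk data area
        self.length = 0             # committed inode length (metadata)

    def append_ordered(self, data: bytes, crash_before_commit: bool = False):
        # Step 1: write the data blocks to their final location first.
        self.blocks[len(self.blocks):] = data
        if crash_before_commit:
            return                  # simulated power loss
        # Step 2: only now commit the new length via the journal.
        self.length = len(self.blocks)

    def read(self) -> bytes:
        # Readers only trust the committed length.
        return bytes(self.blocks[:self.length])

fs = ToyFS()
fs.append_ordered(b"hello")
fs.append_ordered(b" junk?", crash_before_commit=True)
print(fs.read())  # -> b'hello': the old consistent content survives the "crash"
```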

> While user data journaling seems like a good idea, it is unclear what
> benefits it really provides.

Data gets committed sooner (until pressure or timeouts force the data to be
written to its final spot - then you lose throughput and there's a net delay).
I think for bursts of small file creation, data=journaled is a win. I don't
know how lazy ext3 is about writing the data to its final position. It
probably does it when the commit timeout hits 0 or the journal is full.
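That checkpointing behavior can be sketched as follows. Again a toy model with hypothetical names, not ext3's code: small writes commit cheaply as sequential journal appends, and the seek-heavy writeback to final locations is deferred until the journal fills (or, in a real fs, a timer expires).

```python
# Toy sketch of data=journal checkpointing (hypothetical, not ext3's code):
# writes first land in a sequential journal (fast commit), then get copied
# to their final blocks later, when the journal fills up.

class ToyJournal:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = []           # (block_number, data) not yet checkpointed
        self.disk = {}              # final on-disk location of each block

    def write(self, block_no: int, data: bytes):
        # Commit is just a sequential append -- cheap for bursts of small files.
        self.entries.append((block_no, data))
        if len(self.entries) >= self.capacity:
            self.checkpoint()       # journal full: forced writeback

    def checkpoint(self):
        # Pay the seek cost later, writing each block to its final spot.
        for block_no, data in self.entries:
            self.disk[block_no] = data
        self.entries.clear()

j = ToyJournal(capacity=3)
j.write(10, b"a")
j.write(20, b"b")
print(len(j.disk))     # -> 0: data lives only in the journal so far
j.write(30, b"c")      # third write fills the journal, triggering checkpoint
print(sorted(j.disk))  # -> [10, 20, 30]
```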

> As far as barriers go I assume you are referring to the ide write barriers?
> 
> The need for barrier support in the file system is a result of cheap ide
> disks providing large write caches but not having enough reserve power to
> guarantee that the cache will be sync'ed to disk in the event of a power
> failure.

It's needed on any drive (including scsi) that has writeback cache enabled.
Most scsi drives (in my experience) come from the factory with the cache set
to write-through, in case the fs/os doesn't use ordered tags, cache flushes,
or force-unit-access writes.
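From userspace, the related durability request is fsync(): when the whole stack honors it, the kernel translates it into a cache flush (or a force-unit-access write) so the data actually reaches the platters, not just the drive's writeback cache. A minimal sketch:

```python
import os
import tempfile

# Minimal sketch: fsync() asks the kernel to push the written data to
# stable storage. With writeback caching, the block layer must issue a
# cache flush (or FUA write) for this guarantee to actually hold.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"important data\n")
    os.fsync(fd)  # request that the drive's writeback cache be flushed
finally:
    os.close(fd)

with open(path, "rb") as f:
    print(f.read())  # -> b'important data\n'
os.unlink(path)
```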

> Note ext3, xfs, and reiser all use write barriers now for ide disks.

What i've found very disappointing is that my raid1 doesn't support them!

Jun 22 10:53:49 zero kernel: Filesystem "md1": Disabling barriers, not
supported by the underlying device

I'm not sure if it's the sata drives that don't support write barriers, or if
it's just the md1 layer. I need to investigate that. I think reiserfs also
complained that trying to enable write barriers failed on that md1 (i've
been playing with various fs'es on it).

-- 
Tom Vier <tmv@comcast.net>
DSA Key ID 0x15741ECE


Thread overview: 47+ messages
2006-06-22 19:11 Large single raid and XFS or two small ones and EXT3? Chris Allen
2006-06-22 19:16 ` Gordon Henderson
2006-06-22 19:23   ` H. Peter Anvin
2006-06-22 19:58     ` Chris Allen
2006-06-22 20:00   ` Chris Allen
2006-06-23  8:59 ` PFC
2006-06-23  9:26   ` Francois Barre
2006-06-23 12:50     ` Chris Allen
2006-06-23 13:14       ` Gordon Henderson
2006-06-23 13:30       ` Francois Barre
2006-06-23 14:46         ` Martin Schröder
2006-06-23 14:59           ` Francois Barre
2006-06-23 15:13           ` Bill Davidsen
2006-06-23 15:34             ` Francois Barre
2006-06-23 19:49               ` Nix
2006-06-24  5:19               ` Neil Brown
2006-06-24  7:59                 ` Adam Talbot
2006-06-24  9:34                   ` David Greaves
2006-06-24 22:52                     ` Adam Talbot
2006-06-25 13:06                       ` Joshua Baker-LePain
2006-06-28  3:45                         ` I need a PCI V2.1 4 port SATA card Guy
2006-06-28  4:29                           ` Brad Campbell
2006-06-28 10:20                             ` Justin Piszcz
2006-06-28 11:55                             ` Christian Pernegger
2006-06-28 11:59                               ` Gordon Henderson
2006-06-29 18:45                                 ` Bill Davidsen
2006-06-28 19:38                               ` Justin Piszcz
2006-06-28 12:12                             ` Petr Vyskocil
2006-06-25 14:51                       ` Large single raid and XFS or two small ones and EXT3? Adam Talbot
2006-06-25 20:35                         ` Chris Allen
2006-06-25 23:57                   ` Bill Davidsen
2006-06-26  0:42                     ` Adam Talbot
2006-06-26 14:03                       ` Bill Davidsen
2006-06-24 12:40                 ` Justin Piszcz
2006-06-26  0:06                   ` Bill Davidsen
2006-06-26  8:06                     ` Justin Piszcz
2006-06-23 15:17           ` Chris Allen
2006-06-23 14:01       ` Al Boldi
2006-06-23 16:06         ` Andreas Dilger
2006-06-23 16:41           ` Christian Pedaschus
2006-06-23 16:46             ` Christian Pedaschus
2006-06-23 19:53             ` Nix
2006-06-23 16:21         ` Russell Cattelan
2006-06-23 18:19           ` Tom Vier [this message]
2006-06-27 12:05       ` Large single raid... - XFS over NFS woes Dexter Filmore
2006-06-23 19:48   ` Large single raid and XFS or two small ones and EXT3? Nix
2006-06-25 19:13     ` David Rees
