public inbox for linux-xfs@vger.kernel.org
* Wiki entry for disk write caches and RAID controllers
@ 2009-02-05  9:19 Michael Monnerie
From: Michael Monnerie @ 2009-02-05  9:19 UTC (permalink / raw)
  To: xfs

I updated the sections about "disk write cache" and "RAID controllers", 
please review and comment.

http://xfs.org/index.php/XFS_FAQ

I merged my original paragraph about "disk write cache" with the "What 
is the problem with the write cache on journaled filesystems?" text.

On Thursday, 05 February 2009, Dave Chinner wrote:
> What I missed was the "barriers turned on" - I was referring
> (context not quoted) to the fact that RAID5 is not unique in its
> ability to trash the filesystem on powerfail. You are right,
> barriers on a single disk should prevent filesystem corruption
> and will prevent loss of synchronously written data. Only
> asynchronously written data will get lost (just like all the
> stuff sitting in RAM).

So can we say that a single disk can leave its write cache on as long as 
barriers are enabled?

For the Xyratex case: if all write caches are off (or set to write 
through), does it still matter whether you use [no]barrier? I guess not, 
but I just want to be sure.
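For reference, here is a rough sketch of the commands involved; the 
device and mount point names are placeholders, not taken from the 
thread:

```shell
# Query the drive's write cache state (hdparm -W reports on/off).
hdparm -W /dev/sda

# Disable the drive's write cache (write-through behaviour); with the
# cache off, barrier vs. nobarrier should only affect performance,
# not safety.
hdparm -W0 /dev/sda

# Mount XFS with barriers (the default at the time) or explicitly
# without them.
mount -o barrier /dev/sda1 /mnt/test
mount -o nobarrier /dev/sda1 /mnt/test
```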

Regards, zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
