public inbox for linux-xfs@vger.kernel.org
From: Ming Zhang <mingz@ele.uri.edu>
To: Peter Grandi <pg_xfs@xfs.for.sabi.co.UK>
Cc: Linux XFS <linux-xfs@oss.sgi.com>
Subject: Re: stable xfs
Date: Tue, 18 Jul 2006 21:20:44 -0400	[thread overview]
Message-ID: <1153272044.2669.282.camel@localhost.localdomain> (raw)
In-Reply-To: <17597.27469.834961.186850@base.ty.sabi.co.UK>

On Wed, 2006-07-19 at 00:14 +0100, Peter Grandi wrote:
> >>> On Tue, 18 Jul 2006 18:36:06 -0400, Ming Zhang
> >>> <mingz@ele.uri.edu> said:
> 
> mingz> [ .. ] example on what is an improper use?
> 
> Well, this mailing list is full of them :-). However it is
> easier to say what is an optimal use:
> 
>   * A 64 bit system.
>   * With a large, parallel storage system.

when you say large parallel storage system, you mean independent spindles
right? but most people will have all their disks configured in one RAID5/6
array, and then it is not parallel any more.
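for what it's worth, here is a rough sketch of the difference, assuming a
hypothetical 4-disk MD setup (device names and the 64KiB chunk size are
illustrative, not from this thread):

```shell
# RAID10 keeps pairs of independent spindles, so reads/writes can
# proceed in parallel across the array:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1

# mkfs.xfs normally detects MD stripe geometry by itself, but it can
# also be given explicitly; sunit/swidth are in 512-byte sectors,
# e.g. a 64KiB chunk with 2 data-bearing stripes:
mkfs.xfs -d sunit=128,swidth=256 /dev/md0
```

with RAID5/6 every full-stripe write touches all the spindles at once,
which is why it behaves much less like independent parallel disks.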


>   * The block IO system handles all storage errors.

so the current MD/LVM/SATA/SCSI layers are not good enough?

>   * With backups of the contents of the storage system.
> 
> In other words, an Altix in an enterprise computing room... :-)

just kidding, are you in SGI sales? ;)

> 
> Something like 64 bit systems running a UNIX-like OS, one system
> production and one for backup, each with some TiB of RAID10
> storage, both with UPSes giving a significant amount of uptime,
> and extensive hot swapping abilities. If you got that, XFS can
> give really good performance quite safely.
> 
> My impression is that the design of XFS was based on a focus on
> performance, at the file system level, via on-disk layout,
> massive ''transactions'', and parallel IO requests, assuming
> that the block IO subsystem handles every storage error issue
> both transparently and gracefully.
> 
> It is _possible_, and may even be appropriate after carefully
> thinking it through, to use XFS in a 32 bit system without UPS,
> and with no storage system redundancy, and with device errors
> not handled by the block IO system, and with little parallelism
> in the storage subsystem; e.g. a SOHO desktop or server.

i think that with write barrier support, a system without a UPS should be
OK. and even if you have a UPS, a kernel oops in some other part of the
kernel can still take the FS down.
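a quick way to sanity-check this, sketched below; the device name is
illustrative, and the exact kernel message wording may differ by version:

```shell
# XFS enables write barriers by default; asking for them explicitly
# does no harm on kernels of this era:
mount -t xfs -o barrier /dev/sda1 /mnt/data

# if the underlying device (or MD/LVM stack) cannot honour barriers,
# XFS logs a warning at mount time, something along the lines of
# 'Disabling barriers, not supported by the underlying device':
dmesg | grep -i barrier
```

if that warning appears, the on-disk write cache should be disabled (or
backed by a battery) before trusting the setup without a UPS.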

 
> 
> But then I have seen people building RAIDs stuffing in a couple
> dozen drives from the same shipping box, so improper use of XFS
> is definitely a second order issue at that kind of level :-).
> 
> 


Thread overview: 33+ messages
2006-07-17 15:30 stable xfs Ming Zhang
2006-07-17 16:20 ` Peter Grandi
2006-07-18 22:36   ` Ming Zhang
2006-07-18 23:14     ` Peter Grandi
2006-07-19  1:20       ` Ming Zhang [this message]
2006-07-19  5:56         ` Chris Wedgwood
2006-07-19 10:53           ` Peter Grandi
2006-07-19 14:45             ` Ming Zhang
2006-07-22 17:13               ` Peter Grandi
2006-07-20  6:12             ` Chris Wedgwood
2006-07-22 17:31               ` Peter Grandi
2006-07-19 14:10           ` Ming Zhang
2006-07-19 10:24         ` Peter Grandi
2006-07-19 13:11           ` Ming Zhang
2006-07-20  6:15             ` Chris Wedgwood
2006-07-20 14:08               ` Ming Zhang
2006-07-20 16:17                 ` Chris Wedgwood
2006-07-20 16:38                   ` Ming Zhang
2006-07-20 19:04                     ` Chris Wedgwood
2006-07-21  0:19                       ` Ming Zhang
2006-07-21  3:26                         ` Chris Wedgwood
2006-07-21 13:10                           ` Ming Zhang
2006-07-21 16:07                             ` Chris Wedgwood
2006-07-21 17:00                               ` Ming Zhang
2006-07-21 18:07                                 ` Chris Wedgwood
2006-07-24  1:14                                   ` Ming Zhang
2006-07-22 18:09                     ` Peter Grandi
2006-07-22 17:47                 ` Peter Grandi
2006-07-22 15:37             ` Peter Grandi
2006-07-18 23:54 ` Nathan Scott
2006-07-19  1:15   ` Ming Zhang
2006-07-19  7:40   ` Martin Steigerwald
2006-07-19 14:11     ` Ming Zhang
