public inbox for linux-btrfs@vger.kernel.org
From: Paul P Komkoff Jr <i@stingr.net>
To: Chris Mason <chris.mason@oracle.com>
Cc: Eric Anopolsky <erpo41@gmail.com>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: parity data
Date: Tue, 9 Sep 2008 18:07:49 +0400	[thread overview]
Message-ID: <20080909140749.GA4219@stingr.net> (raw)
In-Reply-To: <1220957012.29741.38.camel@think.oraclecorp.com>

Replying to Chris Mason:

I'd like to step into this thread because it's relevant to my
interests.

> > Let's say I have 4 100GB drives (2 fast ones and 2 slow ones). I've
> > restricted a performance critical directory to the two fastest drives,
> > currently totaling 100GB of performance critical data. The rest of the
> > data on the system is striped.
> > 
> > How much free space do I have on the filesystem? 100GB (the amount of
> > data I can store in the performance critical directory)? 200GB (the
> > amount of data I can store outside the performance critical directory if
> > the striping is guaranteed)? 300GB (the amount of data I can store
> > outside the performance critical directory if the striping is best
> > effort)?
> > 
> 
> People already create these configurations, they just do it with
> multiple filesystems.  And, when they want to resize the performance
> critical section, it is a difficult (and often slow) operation.
> 
> More flexibility in managing storage is the end goal for btrfs, and
> we're just barely getting to the point where we can start addressing
> these difficult issues.

Some time ago I drew up a list of features an ideal filesystem would
have, and btrfs (counting everything proposed but not yet implemented)
comes very close. For example, the ability to freely manage the media
pool, in other words to add and remove hard disks of arbitrary size,
matters a great deal now, when it's not uncommon to have a box with 24
hard drives, any of which can fail at any time, and it's economically
unfeasible to keep a spare pool of N drives of exactly the same size.
The individual-drive-size constraint, which is central to the
traditional layered raid-then-lvm-then-fs approach, is absent in this
ideal case, and that lets us manage our storage more effectively.
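To make the capacity argument concrete, here is a minimal sketch (in
Python, with made-up drive sizes; this is an illustration of the idea,
not btrfs's actual allocator) of how usable mirrored capacity can be
computed over drives of arbitrary size, the way a pool-aware
filesystem could pair chunks across whatever devices are present:

```python
def mirrored_capacity(drives):
    """Usable capacity when every chunk must be stored on two
    different devices (2-way mirroring over a heterogeneous pool).

    The pool can pair chunks across any two devices, so the limit is
    either half the total space, or everything that can be paired
    against the single largest drive, whichever is smaller.
    """
    total = sum(drives)
    largest = max(drives)
    return min(total // 2, total - largest)

# A mixed pool: replacing a failed 500 GB drive with a 750 GB one
# does not strand the extra space, unlike a fixed-geometry RAID1
# that insists on identically sized members.
print(mirrored_capacity([500, 500, 750]))   # 875
print(mirrored_capacity([500] * 4))         # 1000
```

Note that in the heterogeneous case the spare drive does not need to
match any existing member; the pool simply absorbs whatever raw space
it can still pair up.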

Another point is per-object locality/redundancy policy. It's a killer
feature, because in the traditional world you have to manage (resize
and move) all those partitions yourself. That is not very flexible:
given 24 drives, you might have to create one raid10, one raid6, and
one raid0 on top of them, juggling the underlying partition sizes
along the way.
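As a toy illustration of what per-object policy could mean (again a
sketch of the idea, with hypothetical policy names, not how btrfs
implements chunk allocation), a pool-level allocator could charge each
file the raw-space cost of its own redundancy profile instead of
carving the disks into fixed raid10/raid6/raid0 regions up front:

```python
# Hypothetical per-object redundancy policies: each allocation carries
# its own profile, and the allocator sizes the raw-space cost
# accordingly, so no space is pre-partitioned per RAID level.
POLICIES = {
    "mirror": 2.0,   # raid1-like: every byte stored twice
    "stripe": 1.0,   # raid0-like: no redundancy
}

class Pool:
    def __init__(self, raw_bytes):
        self.free_raw = raw_bytes

    def allocate(self, size, policy):
        """Reserve raw space for `size` logical bytes under `policy`;
        returns the raw cost actually charged to the pool."""
        cost = int(size * POLICIES[policy])
        if cost > self.free_raw:
            raise OSError("pool out of space")
        self.free_raw -= cost
        return cost

pool = Pool(raw_bytes=1000)
print(pool.allocate(100, "mirror"))  # charges 200 raw bytes
print(pool.allocate(100, "stripe"))  # charges 100 raw bytes
print(pool.free_raw)                 # 700 raw bytes remain
```

The point is that "how much free space do I have" becomes a function
of which policies future writes use, which is exactly the ambiguity
raised earlier in the thread.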

It is essential to have a filesystem that does this for you, and with
more efficiency than you can extract from the 30-year-old way of
setting up "block devices".

-- 
Paul P 'Stingray' Komkoff Jr // http://stingr.net/key <- my pgp key
 This message represents the official view of the voices in my head

  reply	other threads:[~2008-09-09 14:07 UTC|newest]

Thread overview: 7+ messages
2008-09-07  5:43 parity data Eric Anopolsky
2008-09-08 14:47 ` Chris Mason
2008-09-09  0:46   ` Eric Anopolsky
2008-09-09 10:43     ` Chris Mason
2008-09-09 14:07       ` Paul P Komkoff Jr [this message]
2008-09-10  1:32       ` Eric Anopolsky
2008-09-10 12:59         ` Chris Mason
