public inbox for linux-btrfs@vger.kernel.org
From: Chris Mason <chris.mason@oracle.com>
To: Chris Samuel <chris@csamuel.org>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Date: Fri, 13 Feb 2009 09:26:11 -0500	[thread overview]
Message-ID: <1234535171.17533.9.camel@think.oraclecorp.com> (raw)
In-Reply-To: <200902132331.12928.chris@csamuel.org>

On Fri, 2009-02-13 at 23:31 +1100, Chris Samuel wrote:
> Hi folks,
> 
> For people who might be interested, here is how btrfs performs
> with two partitions on a single SSD drive in a RAID-1 mirror.
> 
> This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM
> and a Samsung SSD (128GB Thin uSATA SSD).
> 

Thanks for posting these; it's especially good to see that the metadata ops
are still fast on this SSD.

> Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> sys26            2G           28299  17 18633  12           85702  29  3094  18
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  7513  99 +++++ +++  5140  98  3964  67 +++++ +++  5652  99
> sys26,2G,,,28299,17,18633,12,,,85702,29,3093.9,18,16,7513,99,+++++,+++,5140,98,3964,67,+++++,+++,5652,99
> 

So, btrfs is doing ~28MB/s of writes while writing the data twice, and XFS
is doing 62MB/s writing it once.  That's not too bad really.
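A quick back-of-the-envelope check makes the comparison clearer. The btrfs figure is the block-write number from the bonnie++ run above; the XFS figure is the single-copy number quoted in this mail (not shown in the table):

```python
# Back-of-the-envelope: user-visible vs device-level write throughput.
# btrfs raid1 on two partitions of one SSD writes every data block twice,
# so the device sees roughly double the bonnie++ number.

btrfs_user_kps = 28299      # bonnie++ sequential block write, KB/s
copies = 2                  # raid1 => two copies of each data block
xfs_user_kps = 62 * 1024    # XFS single-copy figure quoted above, KB/s

btrfs_device_kps = btrfs_user_kps * copies
print(f"btrfs at the device: {btrfs_device_kps / 1024:.1f} MB/s")
print(f"XFS at the device:   {xfs_user_kps / 1024:.1f} MB/s")
```

So at the device level the SSD is sustaining roughly the same write rate either way; the raid1 duplication, not the filesystem, accounts for most of the gap.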

But one important thing about these SSDs is that they stripe internally
across a bunch of flash chips, with the FTL (flash translation layer)
managing all the writes.

So, if you make two partitions on a single device, a raid1 data write
from btrfs is very likely to result in two large IOs, which the FTL may
well place directly adjacent to each other on the SSD.

Duplicating the data does make it more likely you'll recover something
if the device goes bad, but two devices are still safer than one.

I'm not saying the test isn't valid, I just want to make sure people
reading the list don't run off and partition their ssds in hopes of
getting raid ;)

-chris



Thread overview: 4+ messages
2009-02-13 12:31 Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c) Chris Samuel
2009-02-13 13:27 ` Sander
2009-02-13 14:26 ` Chris Mason [this message]
2009-03-24 10:41   ` Chris Samuel
