From: Casper Bang <casper.bang@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Experiences: Why BTRFS had to yield for ZFS
Date: Wed, 19 Sep 2012 07:28:05 +0000 (UTC)
Message-ID: <loom.20120919T083653-817@post.gmane.org>
In-Reply-To: <5058068C.4040704@oracle.com>
Anand Jain <Anand.Jain@oracle.com> writes:
> archive-log-apply script - if you could, can you share the
> script itself? Or provide more details about the script.
> (It will help to understand the workload in question.)
Our setup involves a whole bunch of scripts, but the apply script looks like this
(orion is the production environment, pandium is the shadow):
http://pastebin.com/k4T7deap
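In outline it does the following (a simplified sketch, not the script verbatim;
how the logs get shipped from orion is illustrative here, and locking, logging
and error handling are omitted):

#!/bin/sh
# Simplified outline of the apply script; see the pastebin link above
# for the real thing.

# Ship any new archivelogs from production (orion) over to the
# shadow (pandium). The transfer method shown is illustrative.
rsync -a orion:/backup/oracle/flash_recovery_area/archivelog/ \
      /backup/oracle/flash_recovery_area/FROM_PROD/archivelog/

# Let RMAN catalog whatever arrived and roll the shadow forward.
rman @rman_recover_database.rcs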
The rman_recover_database.rcs command file passed to rman contains:
connect target /
run {
  # Mark catalog entries for archivelogs no longer on disk as expired.
  crosscheck archivelog all;
  # Remove those expired entries from the repository.
  delete noprompt expired archivelog all;
  # Register the newly shipped logs from production.
  catalog start with '/backup/oracle/flash_recovery_area/FROM_PROD/archivelog'
    noprompt;
  # Roll the database forward by applying them.
  recover database;
}
We receive a 1GB archivelog roughly every 20 minutes, depending on the workload
of the production environment. The apply rate starts out fine, with btrfs >
ext4 > zfs, but ends up as ZFS > ext4 > btrfs. The following numbers are from
our consumer spinning-platter disk test, but they are equally representative of
the SSD numbers we got.
(By realtime-to-SCN ratio I mean the amount of production (SCN) time applied
per unit of wall-clock time, so anything below 1.0 means falling behind.)

Ext4 starts out with a realtime-to-SCN ratio of about 3.4 and ends down around
a factor of 2.2.

ZFS starts out with a ratio of about 7.5 and ends down around a factor of 4.4.

Btrfs starts out with a ratio of about 2.2 and ends down around a factor of
0.8. This of course means we will never be able to catch up with production,
as btrfs can't apply the logs as fast as they're created.
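To put that in concrete terms: at a factor of 0.8, a log covering 20 minutes of
production time takes 20 / 0.8 = 25 minutes to apply, so the shadow slips a
further 5 minutes behind with every log that arrives.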
It was even worse with btrfs on our 10xSSD server, where 20 minutes of realtime
work would end up taking some 5 hours to apply (a factor of 0.06), obviously
useless to us.
I should point out that during this process we also had to move some large
backup sets around, and several times we saw btrfs eating massive I/O yet
never finishing a simple mv command.
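For what it's worth, the stall was easy to see with generic tooling; something
like the following (illustrative paths and device name, not our exact commands):

# Terminal 1: a move that never completes
mv /backup/oracle/backupsets /mnt/btrfs/backupsets

# Terminal 2: the disk sits at ~100% utilization while mv makes
# no visible progress
iostat -x sdb 1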
I'm inclined to believe we've found some weak corner case, perhaps in
combination with SSDs - but it led us to compare with ext4 and ZFS, and to
dismiss btrfs in favor of ZFS for this workload, as ZFS solves our problem.