From: "Swâmi Petaramesh" <swami@petaramesh.org>
To: Hugo Mills <hugo@carfax.org.uk>, linux-btrfs@vger.kernel.org
Subject: Re: Massive BTRFS performance degradation
Date: Sun, 09 Mar 2014 13:10:05 +0100 [thread overview]
Message-ID: <9889995.2gi4G0FAds@tethys> (raw)
In-Reply-To: <20140309113350.GH6318@carfax.org.uk>
On Sunday 9 March 2014 at 11:33:50, Hugo Mills wrote:
>
> ssd should be activated automatically on any non-rotational device.
> ssd_spread is generally slower on modern SSDs than the ssd option.
> discard is, except on the very latest hardware, a synchronous command
> (it's a limitation of the SATA standard), and therefore results in
> very very poor performance.
Thanks for the info Hugo :-)
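For anyone following along, the commonly suggested alternative (my own reading of the advice above, not something Hugo spelled out) is to mount without the synchronous discard option and trim free space in batches instead, e.g.:

```shell
# /etc/fstab sketch - device and mount point are placeholders:
# keep "ssd", drop "discard" to avoid synchronous TRIM on every delete
/dev/sda2  /  btrfs  defaults,ssd,noatime  0  0

# then trim periodically instead, e.g. weekly from root's crontab:
# 0 3 * * 0  /sbin/fstrim -v /
```

This gets the space-reclaim benefit of TRIM without paying its latency on every deletion.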
> There's one known and serious bug in 3.11 before 3.11.6 which
> affects balances. Please make sure that you're running 3.11.6 or
> later. There may be other bugs in there that have been fixed in later
> kernel versions as well, but that's the "headline" one.
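For completeness, checking the running kernel and (on a fixed one) kicking off a balance would look roughly like this; the mount point is a placeholder:

```shell
# confirm the running kernel is 3.11.6 or later before balancing
uname -r

# a gentle balance: only rewrite data chunks that are at most 50% full
btrfs balance start -dusage=50 /mnt
```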
The latest Ubuntu / Mint releases now ship 3.11.0-18. Anyway, I don't think my
"old lady neighbour" will ever hear about balance, or care, or ever try to run
it on her laptop. She would first have to figure out what a terminal and a
command line are ;-)
> We don't get many bug reports of kernel oopses in send. This may be
> that we don't have many people trying to use it (it is, after all,
> fairly deep and poorly explained magic at the moment). It may be that
> you have some corruption that's gone undetected otherwise,
Well, that's a rather "young" BTRFS setup (less than a month old) that passes
scrub with no errors detected; a plain "btrfs send" works, but then an
incremental one fails...
> send code isn't handling it well. Or it may be an actual bug in send.
I would tend to believe so ;-)
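For reference, the sequence that fails here looks roughly like this (paths are placeholders):

```shell
# full send of a read-only snapshot: this works
btrfs subvolume snapshot -r /home /home/.snap1
btrfs send /home/.snap1 | btrfs receive /backup

# incremental send relative to the previous snapshot: this is the step
# that triggers the oops
btrfs subvolume snapshot -r /home /home/.snap2
btrfs send -p /home/.snap1 /home/.snap2 | btrfs receive /backup
```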
> At least you've reported it. (It might also be worth putting a copy of
> the report on bugzilla.kernel.org, because then it doesn't get
> forgotten in the email noise here).
> > - btrfs-defrag.sgh hangs because of some glitch with "filefrag".
>
> Is that a btrfs problem, or a filefrag problem?
It looks like a filefrag problem: filefrag stalls forever trying to determine
the fragmentation status of some files...
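For the record, one way to spot which files make filefrag hang is to guard each call with a timeout (a workaround I'd suggest, not something the script does):

```shell
# print extent counts, but give up on any file that stalls filefrag
find /home -xdev -type f | while read -r f; do
    timeout 10 filefrag "$f" || echo "STALLED: $f"
done
```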
> btrfs-defrag.sh isn't something I've heard of before, so I'd say it's
> unlikely to be maintained by any of the main btrfs developers (and hence is
> much more likely to be unmaintained or just plain broken in general).
It's a useful script that can be found at
https://gitorious.org/btrfs-defrag
...and it's maintained by Dmitry, who's a nice, responsive and helpful guy.
> > - bedup crashes badly and looks completely unmaintained as far as I can
> > tell and nobody seems to care.
>
> That's because nobody here is connected to bedup in any way. It was
> a third-party piece of software written by someone (I don't even
> recall who) who hasn't, as far as I know, engaged with the main btrfs
> developers at all.
bedup is mentioned on the BTRFS wiki
https://btrfs.wiki.kernel.org/index.php/Deduplication
...as being the only current way to perform BTRFS deduplication. I found it on
the wiki and believed/hoped it was something more "official and maintained"
than what you describe - alas...
Actually, deduplication WAS the reason why I recently moved to BTRFS again:
deduplication in ZFS works, but it is *SO* memory-hungry and such a
performance killer unless you have *lots* of RAM...
So I wanted to give BTRFS offline deduplication with bedup a try.
> > Soooo weeelllll... Looks like readiness for prime time is still
> > ahead of us...
>
> I think that's fair to say. However, it is noticeably improving
> over time. The timescales are just quite long.
If the timescales grow really too long, people will just end up stuck with the
idea that BTRFS is not ready for production and won't be any time in the
foreseeable future...
Kind regards.
--
Swâmi Petaramesh <swami@petaramesh.org> http://petaramesh.org PGP 9076E32E