linux-btrfs.vger.kernel.org archive mirror
From: Clemens Eisserer <linuxhippy@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Is there any way to determine fragmentation for compressed btrfs volumes?
Date: Thu, 10 Apr 2014 17:12:47 +0200	[thread overview]
Message-ID: <CAFvQSYReY9QNkC2gCfFzn+XO-DBua-9CQzXq+91=JMOBEOQdrw@mail.gmail.com> (raw)
In-Reply-To: <CAFvQSYSZe5JMe7D3OXbwPW=aJrz5yV_ndNUHDsXg9-iK3KE8Tg@mail.gmail.com>

Hi again,

So it seems performance issues caused by FS degradation are not
getting a lot of attention at the current stage of btrfs development.
Hopefully production use on Facebook's servers will expose one issue
or another, and with the developers being employed there, chances are
good that btrfs will handle workloads like mine more gracefully once
these issues are fixed.

Thanks & best regards, Clemens

2014-04-08 21:41 GMT+02:00 Clemens Eisserer <linuxhippy@gmail.com>:
> Hi Duncan,
>
>> You mention trying scrub and defragging the entire volume, but you don't
>> mention balance.  Balance by default rewrites all chunks (tho you can add
>> filters to rewrite only say data chunks, not metadata, if you like), so
>> that's what I'd say to try, as it should defrag in the process.
>>
>> Tho we've seen a few reports from people saying a full balance actually
>> made for instance boot times longer instead of shorter, too.  I haven't
>> seen and don't have an explanation for that. <shrug>
>
> Ah sorry, I confused scrub with balance.
> I didn't do a scrub; I actually tried to balance the device, as I
> was told basically all data has to go through the allocator again,
> which most likely will improve the on-disk layout. Unfortunately, in
> my case it made the situation a lot worse: when running this Linux
> system in VirtualBox (where disk access has a lot more overhead), it
> is now super slow even with the SSD.
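For reference, the balance Duncan describes can be run either in full or restricted with filters; a minimal sketch (the mount point /mnt and the usage threshold are illustrative):

```shell
# Full balance: every chunk is rewritten through the allocator.
btrfs balance start /mnt
# Data chunks only, leaving metadata untouched (the filter Duncan mentions):
btrfs balance start -d /mnt
# Only data chunks less than 50% full, to limit the amount of rewriting:
btrfs balance start -dusage=50 /mnt
# Monitor progress from another shell:
btrfs balance status /mnt
```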
>
>
>> Meanwhile, have you tried a trim/discard (fstrim command), and/or do you
>> run with the discard (or is it trim) mount option?
> I have tried with discard on and off; however, this isn't an SSD issue.
>
> It is slow even for a read-only workload, and Windows on the same SSD
> works as expected, with CrystalDiskMark showing the drive meets its
> specs (~350 MB/s sequential write, ~500 MB/s sequential read, good 4k
> random values).
> So I doubt this is an SSD issue.
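For completeness, the trim Duncan mentions can be done either as a one-shot command or as a mount option; a minimal sketch (the mount point and fstab line are illustrative):

```shell
# One-shot TRIM of free space on a mounted filesystem:
fstrim -v /          # -v reports the number of bytes trimmed
# Continuous variant: mount with the 'discard' option, e.g. in /etc/fstab:
#   UUID=<fs-uuid>  /  btrfs  defaults,compress=lzo,discard  0  0
```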
>
>
>> One more thing to consider.  The btrfs of a year and a half ago was a
>> rather less mature btrfs than we have today.  I recently booted to backup
>> here, and did a brand new mkfs.btrfs on my working filesystems to take
>> advantage of several of the newer features
>
> I had hoped I could avoid that.
> Having an FS in such a degraded state, with re-creating it as the
> only option, isn't that compelling ;)
>
> Thanks & regards, Clemens
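As for the question in the subject line, the usual tool is filefrag, with a caveat relevant here: on a compressed btrfs volume each compressed extent covers at most 128 KiB of uncompressed data, so extent counts come out inflated. A sketch (the path is illustrative):

```shell
# Count extents per file; the path is illustrative.
filefrag /var/log/syslog
# prints a line of the form: <path>: N extents found
# NOTE: on compressed btrfs this over-reports fragmentation, since
# each ~128 KiB compressed chunk shows up as its own extent.
```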

Thread overview: 4+ messages
2014-04-08 11:46 Is there any way to determine fragmentation for compressed btrfs volumes? Clemens Eisserer
2014-04-08 13:01 ` Duncan
2014-04-08 19:41   ` Clemens Eisserer
2014-04-10 15:12     ` Clemens Eisserer [this message]
