From: Martin Steigerwald <Martin@lichtvoll.de>
To: linux-btrfs@vger.kernel.org
Cc: Liu Bo <bo.li.liu@oracle.com>, ching <lsching17@gmail.com>
Subject: Re: enquiry about autodefrag option
Date: Wed, 19 Sep 2012 19:39:42 +0200	[thread overview]
Message-ID: <201209191939.43182.Martin@lichtvoll.de> (raw)
In-Reply-To: <5059D2D3.3050303@oracle.com>

On Wednesday, 19 September 2012, Liu Bo wrote:
> On 09/19/2012 07:28 PM, ching wrote:
[…]
> > On 09/17/2012 07:15 PM, ching wrote:
> >> I am testing btrfs for long-term storage and backup, and I would
> >> like to know more about the "autodefrag" option:
> >> 
> >> 1. Will the "autodefrag" option benefit SSDs?
> >>
> >>    My understanding is:
> >>
> >>       autodefrag -> number of extents decreases -> metadata
> >>       decreases -> a "healthier" filesystem in the long run
> >>
> >>    (P.S. I am aware that autodefrag will introduce extra write I/O)
> 
> Yes, your understanding is right; random write workloads will benefit
> from it.

What about the extra write I/O? And do the greatly reduced seek times of
SSDs not make defragmentation much less important there?

Up to now I have kept away from defragmenting on SSDs.

I wonder about a good way to decide whether autodefrag makes things better
or worse for a specific workload. What are the criteria on rotating media,
and what are they on SSDs?
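
One rough way to compare that I can think of (an untested sketch, and the
file path is just an example): count the extents of frequently rewritten
files with filefrag, once with and once without autodefrag, and watch
whether the counts diverge over time:

  # enable autodefrag on the running filesystem (it is a mount option)
  mount -o remount,autodefrag /

  # count the extents of a frequently rewritten file
  filefrag /var/lib/dpkg/status

  # repeat after a few days of normal use, with and without autodefrag,
  # and compare the extent counts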


[… informational part about a btrfs filesystem on an SSD that is at least
about 8 months old, with almost daily upgrades …]

I only have / running on an SSD, but it has been there for quite some
time. And it does not seem to have gotten much slower; however, that is
only a subjective feeling of performance.

Except for fstrim times: fstrim takes way more time than in the
beginning[1]. So there seems to be free space fragmentation, which makes
sense for a root filesystem on a Debian Sid machine with lots of upgrade
activity and way over 50% usage.

merkaba:~> df -hT /
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/dm-0      btrfs  19G   13G  3.6G  79% /

merkaba:~> btrfs fi sh       
failed to read /dev/sr0
Label: 'debian'  uuid: […]
        Total devices 1 FS bytes used 12.25GB
        devid    1 size 18.62GB used 18.62GB path /dev/dm-0

merkaba:~> btrfs fi df /
Data: total=15.10GB, used=11.58GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.75GB, used=688.96MB
Metadata: total=8.00MB, used=0.00

I think I will get rid of that duplicate metadata once I redo the
filesystem with 8 or 16 KiB metadata blocks.
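
If I remember the mkfs options correctly, that would be something like
this (untested, and it destroys everything on the device, of course):

  # 16 KiB leaves and nodes, and only a single metadata copy
  mkfs.btrfs -l 16384 -n 16384 -m single /dev/dm-0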

I thought about rebalancing it, but last time boot time doubled after a
complete rebalance. The effect of rebalancing just the metadata from two
copies down to one might be different, though.
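
With balance filters (in the kernel since 3.3, if I remember correctly)
that in-place conversion might look like this, also untested:

  # convert metadata from DUP to a single copy, leaving the data alone
  btrfs balance start -mconvert=single /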

This is an Intel SSD 320 (300 GB) on kernel 3.6-rc5.


[1] It took only a second or two in the beginning, I think. Then it grew
over time.

merkaba:~> time fstrim -v /
/: 5877809152 bytes were trimmed
fstrim -v /  0.00s user 5.74s system 14% cpu 39.920 total
merkaba:~> time fstrim -v /
/: 5875712000 bytes were trimmed
fstrim -v /  0.00s user 5.55s system 14% cpu 39.095 total
merkaba:~> time fstrim -v /
/: 5875712000 bytes were trimmed
fstrim -v /  0.00s user 5.62s system 14% cpu 38.538 total

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
