linux-btrfs.vger.kernel.org archive mirror
From: Austin S Hemmelgarn <ahferroin7@gmail.com>
To: "G. Richard Bellamy" <rbellamy@pteradigm.com>,
	Chris Murphy <lists@colorremedies.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Large files, nodatacow and fragmentation
Date: Tue, 02 Sep 2014 15:17:50 -0400	[thread overview]
Message-ID: <540617DE.5070605@gmail.com> (raw)
In-Reply-To: <CADw2B2N4T_XWOWt++8wGhXk5WJmKgQGqhbt=m2JbQCO_ekfhxA@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 3295 bytes --]

On 2014-09-02 14:31, G. Richard Bellamy wrote:
> I thought I'd follow-up and give everyone an update, in case anyone
> had further interest.
> 
> I've rebuilt the RAID10 volume in question with a Samsung 840 Pro for
> bcache front device.
> 
> It's 5x600GB SAS 15k RPM drives in RAID10, with the 512GB SSD as the
> bcache cache device.
> 
> 2014-09-02 11:23:16
> root@eanna i /var/lib/libvirt/images # lsblk
> NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda         8:0    0 558.9G  0 disk
> └─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
> sdb         8:16   0 558.9G  0 disk
> └─bcache2 254:2    0 558.9G  0 disk
> sdc         8:32   0 558.9G  0 disk
> └─bcache1 254:1    0 558.9G  0 disk
> sdd         8:48   0 558.9G  0 disk
> └─bcache0 254:0    0 558.9G  0 disk
> sde         8:64   0 558.9G  0 disk
> └─bcache4 254:4    0 558.9G  0 disk
> sdf         8:80   0   1.8T  0 disk
> └─sdf1      8:81   0   1.8T  0 part
> sdg         8:96   0   477G  0 disk /var/lib/btrfs/system
> sdh         8:112  0   477G  0 disk
> sdi         8:128  0   477G  0 disk
> ├─bcache0 254:0    0 558.9G  0 disk
> ├─bcache1 254:1    0 558.9G  0 disk
> ├─bcache2 254:2    0 558.9G  0 disk
> ├─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
> └─bcache4 254:4    0 558.9G  0 disk
> sr0        11:0    1  1024M  0 rom
> 
> I further split the system and data drives of the Win7 VM guest. It's
> very interesting to see the huge level of fragmentation, even with the
> help of ordered writes offered by bcache - in other words, while
> bcache seems to be offering stability and better behavior to the
> guest, the underlying filesystem is still seeing a level of
> fragmentation that has me scratching my head.
> 
> That being said, I don't know what normal fragmentation would be for
> a Win7 VM guest system drive, so it could be that I'm just operating
> in my zone of ignorance again.
> 
> 2014-09-01 14:41:19
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 7 extents found
> atlas-system.qcow2: 154 extents found
> 2014-09-01 18:12:27
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 28171 extents found
> 2014-09-02 08:22:00
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 35281 extents found
> 2014-09-02 08:44:43
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 37203 extents found
> 2014-09-02 10:14:32
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 40903 extents found
> 
This may sound odd, but are you exposing the disk to the Win7 guest as a
non-rotational device? Win7 and later tend to have different write
behavior when they think they are on an SSD (or anything else where
seek latency is effectively zero).  Most VMMs (at least, most that I've
seen) use fallocate() to punch holes for ranges that get TRIMed in the
guest, so if Windows is sending TRIM commands, that may also be part of
the issue.  Also, you might try reducing the amount of logging in the
guest.
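
The punch-hole effect is easy to demonstrate with the fallocate(1)
utility from util-linux; a minimal sketch, assuming a filesystem that
supports FALLOC_FL_PUNCH_HOLE (all paths and sizes here are examples,
not taken from the setup above):

```shell
#!/bin/sh
# Sketch of how punch-hole writes (what a VMM issues when the guest
# sends TRIM) multiply a file's extent count.  Paths/sizes are examples.
set -eu

img=./trim-demo.img

# Write a 16 MiB file in one go; it usually maps to few extents.
dd if=/dev/zero of="$img" bs=1M count=16 status=none
sync
filefrag "$img"

# Punch a 4 KiB hole at the start of every 1 MiB block, roughly what
# sparse TRIM passthrough does to a raw/qcow2 image.
off=0
while [ "$off" -lt $((16 * 1048576)) ]; do
    fallocate --punch-hole --offset "$off" --length 4096 "$img"
    off=$((off + 1048576))
done
sync
filefrag "$img"   # each hole can split an extent, inflating the count

rm -f "$img"
```

Note that the usual btrfs mitigation for VM images, chattr +C
(nodatacow), only takes effect on empty files, or on new files created
inside a +C directory.  On the hypervisor side, newer QEMU versions let
you control whether the guest sees the disk as rotational via the
rotation_rate device property (e.g. -device scsi-hd,...,rotation_rate=1
reports a solid-state disk); whether your VMM exposes that knob depends
on its version.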


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 2455 bytes --]


Thread overview: 20+ messages
2014-08-11 18:36 Large files, nodatacow and fragmentation G. Richard Bellamy
2014-08-11 19:14 ` Roman Mamedov
2014-08-11 21:37   ` G. Richard Bellamy
2014-08-11 23:31   ` Chris Murphy
2014-08-14  3:57     ` G. Richard Bellamy
2014-08-14  4:23       ` Chris Murphy
2014-08-14 14:30         ` G. Richard Bellamy
2014-08-14 15:05           ` Austin S Hemmelgarn
2014-08-14 18:15             ` G. Richard Bellamy
2014-08-14 18:40           ` Chris Murphy
2014-08-14 23:16             ` G. Richard Bellamy
2014-08-15  1:05               ` Chris Murphy
2014-09-02 18:31                 ` G. Richard Bellamy
2014-09-02 19:17                   ` Chris Murphy
2014-09-02 19:17                   ` Austin S Hemmelgarn [this message]
2014-09-02 23:30                     ` G. Richard Bellamy
2014-09-03  6:01                       ` Chris Murphy
2014-09-03  6:26                         ` Chris Murphy
2014-09-03 15:45                           ` G. Richard Bellamy
2014-09-03 18:53                             ` Clemens Eisserer
