linux-btrfs.vger.kernel.org archive mirror
From: "G. Richard Bellamy" <rbellamy@pteradigm.com>
To: Austin S Hemmelgarn <ahferroin7@gmail.com>
Cc: Chris Murphy <lists@colorremedies.com>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Large files, nodatacow and fragmentation
Date: Tue, 2 Sep 2014 16:30:05 -0700	[thread overview]
Message-ID: <CADw2B2M+WSbCyphH3Y=5w8JyMHYe7sL_MdvScy9NLPS3PbRHiQ@mail.gmail.com> (raw)
In-Reply-To: <540617DE.5070605@gmail.com>

Thanks @chris & @austin. You both bring up interesting questions and points.

@chris: atlas-data.qcow2 isn't running any software or logging at this
time; I isolated my D:\ drive onto that file via Clonezilla and
virt-resize.
Microsoft DiskPart version 6.1.7601
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: ATLAS

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          350 GB      0 B
  Disk 1    Online          300 GB      0 B

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     E                       CD-ROM          0 B  No Media
  Volume 1     F   CDROM        CDFS   DVD-ROM       70 MB  Healthy
  Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
  Volume 3     C   System       NTFS   Partition    349 GB  Healthy    Boot
  Volume 4     D   Data         NTFS   Partition    299 GB  Healthy

Volume 2 & 3 == atlas-system.qcow2
Volume 4 == atlas-data.qcow2

...and the current fragmentation:
2014-09-02 16:27:45
root@eanna i /var/lib/libvirt/images # filefrag atlas-*
atlas-data.qcow2: 564 extents found
atlas-system.qcow2: 47412 extents found

@austin, the Windows 7 guest sees both disks as spinning rust.
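
For anyone following along, the usual way to sidestep COW-induced fragmentation for a VM image on btrfs is to mark the file NOCOW before any data lands in it (hypothetical destination filename; chattr +C only affects blocks written after the flag is set, so it must go on an empty file):

```shell
# Create an empty file and disable COW on it (btrfs only).
touch atlas-system.nocow.qcow2
chattr +C atlas-system.nocow.qcow2
lsattr atlas-system.nocow.qcow2    # should list the 'C' attribute
# Copy the existing image into the NOCOW file without replacing its inode.
dd if=atlas-system.qcow2 of=atlas-system.nocow.qcow2 bs=1M conv=notrunc
```

This is a sketch, not something I've re-tested on this exact volume; note that nodatacow also disables checksumming and compression for that file.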

On Tue, Sep 2, 2014 at 12:17 PM, Austin S Hemmelgarn
<ahferroin7@gmail.com> wrote:
> On 2014-09-02 14:31, G. Richard Bellamy wrote:
>> I thought I'd follow-up and give everyone an update, in case anyone
>> had further interest.
>>
>> I've rebuilt the RAID10 volume in question, with a Samsung 840 Pro as
>> the bcache front device.
>>
>> It's 5x600GB SAS 15k RPM drives in RAID10, with the 512GB SSD as bcache.
>>
>> 2014-09-02 11:23:16
>> root@eanna i /var/lib/libvirt/images # lsblk
>> NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>> sda         8:0    0 558.9G  0 disk
>> └─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
>> sdb         8:16   0 558.9G  0 disk
>> └─bcache2 254:2    0 558.9G  0 disk
>> sdc         8:32   0 558.9G  0 disk
>> └─bcache1 254:1    0 558.9G  0 disk
>> sdd         8:48   0 558.9G  0 disk
>> └─bcache0 254:0    0 558.9G  0 disk
>> sde         8:64   0 558.9G  0 disk
>> └─bcache4 254:4    0 558.9G  0 disk
>> sdf         8:80   0   1.8T  0 disk
>> └─sdf1      8:81   0   1.8T  0 part
>> sdg         8:96   0   477G  0 disk /var/lib/btrfs/system
>> sdh         8:112  0   477G  0 disk
>> sdi         8:128  0   477G  0 disk
>> ├─bcache0 254:0    0 558.9G  0 disk
>> ├─bcache1 254:1    0 558.9G  0 disk
>> ├─bcache2 254:2    0 558.9G  0 disk
>> ├─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
>> └─bcache4 254:4    0 558.9G  0 disk
>> sr0        11:0    1  1024M  0 rom
>>
>> I further split the system and data drives of the Win7 guest VM. It's
>> very interesting to see the huge level of fragmentation I'm seeing,
>> even with the help of ordered writes offered by bcache - in other
>> words, while bcache seems to be offering the guest stability and
>> better behavior, the underlying filesystem is still seeing a level of
>> fragmentation that has me scratching my head.
>>
>> That being said, I don't know what would be normal fragmentation of a
>> VM Win7 guest system drive, so could be I'm just operating in my zone
>> of ignorance again.
>>
>> 2014-09-01 14:41:19
>> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
>> atlas-data.qcow2: 7 extents found
>> atlas-system.qcow2: 154 extents found
>> 2014-09-01 18:12:27
>> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
>> atlas-data.qcow2: 564 extents found
>> atlas-system.qcow2: 28171 extents found
>> 2014-09-02 08:22:00
>> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
>> atlas-data.qcow2: 564 extents found
>> atlas-system.qcow2: 35281 extents found
>> 2014-09-02 08:44:43
>> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
>> atlas-data.qcow2: 564 extents found
>> atlas-system.qcow2: 37203 extents found
>> 2014-09-02 10:14:32
>> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
>> atlas-data.qcow2: 564 extents found
>> atlas-system.qcow2: 40903 extents found
>>
> This may sound odd, but are you exposing the disk to the Win7 guest as a
> non-rotational device? Win7 and higher tend to have different write
> behavior when they think they are on an SSD (or something else where
> seek latency is effectively zero).  Most VMMs (at least, most that I've
> seen) will use fallocate to punch holes for ranges that get TRIM'ed in
> the guest, so if Windows is sending TRIM commands, that may also be part
> of the issue.  Also, you might try reducing the amount of logging in the
> guest.
>
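
The hole-punching Austin describes is easy to reproduce by hand with fallocate(1) from util-linux, outside any VMM (throwaway path; needs a filesystem with hole-punch support such as btrfs, ext4, or XFS):

```shell
# Write a fully-allocated 16 MiB test file.
dd if=/dev/zero of=/tmp/trim-test.img bs=1M count=16
du -h /tmp/trim-test.img     # ~16M allocated
# Punch a 4 MiB hole at offset 4 MiB, roughly what a VMM does
# when the guest TRIMs that range of the virtual disk.
fallocate --punch-hole --offset 4M --length 4M /tmp/trim-test.img
du -h /tmp/trim-test.img     # allocated size drops; apparent size unchanged
filefrag /tmp/trim-test.img  # the extent count now reflects the gap
```

Each TRIM'ed range splits the file's extent map the same way, which would be consistent with the extent counts climbing the way mine did.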

Thread overview: 20+ messages
2014-08-11 18:36 Large files, nodatacow and fragmentation G. Richard Bellamy
2014-08-11 19:14 ` Roman Mamedov
2014-08-11 21:37   ` G. Richard Bellamy
2014-08-11 23:31   ` Chris Murphy
2014-08-14  3:57     ` G. Richard Bellamy
2014-08-14  4:23       ` Chris Murphy
2014-08-14 14:30         ` G. Richard Bellamy
2014-08-14 15:05           ` Austin S Hemmelgarn
2014-08-14 18:15             ` G. Richard Bellamy
2014-08-14 18:40           ` Chris Murphy
2014-08-14 23:16             ` G. Richard Bellamy
2014-08-15  1:05               ` Chris Murphy
2014-09-02 18:31                 ` G. Richard Bellamy
2014-09-02 19:17                   ` Chris Murphy
2014-09-02 19:17                   ` Austin S Hemmelgarn
2014-09-02 23:30                     ` G. Richard Bellamy [this message]
2014-09-03  6:01                       ` Chris Murphy
2014-09-03  6:26                         ` Chris Murphy
2014-09-03 15:45                           ` G. Richard Bellamy
2014-09-03 18:53                             ` Clemens Eisserer
