public inbox for linux-kernel@vger.kernel.org
From: Peter Maloney <peter.maloney@brockmann-consult.de>
To: Alexander Block <ablock84@googlemail.com>
Cc: linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: atime and filesystems with snapshots (especially Btrfs)
Date: Fri, 25 May 2012 22:27:53 +0200	[thread overview]
Message-ID: <4FBFEB49.7050901@brockmann-consult.de> (raw)
In-Reply-To: <CAB9VWqAV+TohoMP0pcP6D6SEWB4o40XbHQssO1_Mbz3d_kUHaA@mail.gmail.com>

On 05/25/2012 09:10 PM, Alexander Block wrote:
> Just to show some numbers, I made a simple test on a fresh btrfs fs. I
> copied my host's /usr folder (4 GB) to that fs and checked metadata
> usage with "btrfs fi df /mnt", which was around 300 MB. Then I created
> 10 snapshots and checked metadata usage again, which didn't change
> much. Then I ran "grep foobar /mnt -R" to update the atime of every
> file. After this finished, metadata usage was 2.59 GB. So I lost 2.2
> GB just because I searched for something. If someone already has
> nearly no space left, they probably won't be able to move some data to
> another disk, as they may get ENOSPC while copying it.
>
> Here is the output of the final "btrfs fi df":
>
> # btrfs fi df /mnt
> Data: total=6.01GB, used=4.19GB
> System, DUP: total=8.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=3.25GB, used=2.59GB
> Metadata: total=8.00MB, used=0.00
>
> I don't know much about other filesystems that support snapshots, but
> I have the feeling that most of them would have the same problem. Any
> filesystem used in combination with LVM snapshots may also be affected
> (I'm not very familiar with LVM). Filesystem image formats, like qcow,
> vmdk, vbox and so on, may also have problems with atime.
Did you run the recursive grep after each snapshot (which I would expect
to produce 11 times as many metadata blocks, at most ~3.3 GB), or just
once after all 10 snapshots (which I think would only double the
metadata blocks, at most ~600 MB)?
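The 11x-vs-2x arithmetic above can be sketched with a toy model (my own simplification for illustration, not btrfs internals): treat each subvolume as a set of metadata block IDs, a snapshot as sharing the source's current set, and an atime-updating scan as CoWing every metadata block in the writable subvolume into fresh copies.

```python
# Toy model of btrfs metadata CoW under snapshots (a sketch, not btrfs itself).
# Each subvolume references a set of metadata "blocks"; a snapshot shares the
# source's blocks; a recursive grep rewrites every inode's atime, which here
# CoWs all of the source's metadata blocks into fresh copies.
import itertools

_next_id = itertools.count()

def fresh_blocks(n):
    """Allocate n brand-new (never-shared) metadata block IDs."""
    return {next(_next_id) for _ in range(n)}

def total_distinct(subvols):
    """Distinct blocks across all subvolumes = total metadata usage."""
    return len(set().union(*subvols.values()))

def simulate(num_snaps, scan_between, blocks=300):  # 300 blocks ~ 300 MB
    subvols = {"source": fresh_blocks(blocks)}
    for i in range(num_snaps):
        subvols[f"snap{i}"] = set(subvols["source"])  # snapshot: share blocks
        if scan_between:
            subvols["source"] = fresh_blocks(blocks)  # scan CoWs everything
    if not scan_between:
        subvols["source"] = fresh_blocks(blocks)      # single scan at the end
    return total_distinct(subvols)

print(simulate(10, scan_between=False))  # -> 600  (2x,  ~600 MB)
print(simulate(10, scan_between=True))   # -> 3300 (11x, ~3.3 GB)
```

A single scan after all 10 snapshots CoWs the shared blocks once (one shared copy plus one new copy, 2x), while scanning between snapshots leaves each snapshot holding its own frozen generation (10 snapshots plus the live subvolume, 11x).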



Thread overview: 12+ messages
2012-05-25 15:35 atime and filesystems with snapshots (especially Btrfs) Alexander Block
2012-05-25 15:42 ` Josef Bacik
2012-05-25 15:59   ` Alexander Block
2012-05-25 16:28     ` Andreas Dilger
2012-05-25 16:38       ` Alexander Block
     [not found]     ` <CAOjFWZ6qgAkVF-Ep5FYf7ty+AiJQjistY=Fr7ALNrWS=-RT_5w@mail.gmail.com>
2012-05-25 16:48       ` Alexander Block
2012-05-25 19:10 ` Alexander Block
2012-05-25 20:27   ` Peter Maloney [this message]
2012-05-25 20:42     ` Alexander Block
2012-05-25 20:48       ` Alexander Block
2012-05-29  8:14 ` Boaz Harrosh
2012-05-29 14:03   ` Alexander Block

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4FBFEB49.7050901@brockmann-consult.de \
    --to=peter.maloney@brockmann-consult.de \
    --cc=ablock84@googlemail.com \
    --cc=linux-btrfs@vger.kernel.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox.