Linux Btrfs filesystem development
From: Arne Jansen <sensille@gmx.net>
To: chb@muc.de
Cc: Chris Mason <chris.mason@oracle.com>,
	Josef Bacik <josef@redhat.com>, Sage Weil <sage@newdream.net>,
	ceph-devel@vger.kernel.org, linux-btrfs@vger.kernel.org
Subject: Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
Date: Mon, 24 Oct 2011 23:37:08 +0200	[thread overview]
Message-ID: <4EA5DA84.9040600@gmx.net> (raw)
In-Reply-To: <CAO47_-_nkU4=ixMKhT5-PhLDhCkprNt3nN7+DfmWgbLH8qrQiA@mail.gmail.com>

On 24.10.2011 23:34, Christian Brunner wrote:
> 2011/10/24 Chris Mason<chris.mason@oracle.com>:
>> On Mon, Oct 24, 2011 at 03:51:47PM -0400, Josef Bacik wrote:
>>> On Mon, Oct 24, 2011 at 10:06:49AM -0700, Sage Weil wrote:
>>>> [adding linux-btrfs to cc]
>>>>
>>>> Josef, Chris, any ideas on the below issues?
>>>>
>>>> On Mon, 24 Oct 2011, Christian Brunner wrote:
>>>>> Thanks for explaining this. I don't have any objections against btrfs
>>>>> as an OSD filesystem. Even the fact that there is no btrfs fsck doesn't
>>>>> scare me, since I can use the Ceph replication to recover a lost
>>>>> btrfs filesystem. The only problem I have is that btrfs is not stable
>>>>> on our side, and I wonder what you are doing to make it work. (Maybe
>>>>> it's related to the load pattern of using Ceph as a backend store for
>>>>> qemu.)
>>>>>
>>>>> Here is a list of the btrfs problems I'm having:
>>>>>
>>>>> - When I run Ceph with the default configuration (btrfs snaps enabled)
>>>>> I can see a rapid increase in disk I/O after a few hours of uptime.
>>>>> Btrfs-cleaner is spending more and more time in
>>>>> btrfs_clean_old_snapshots().
>>>>
>>>> In theory, there shouldn't be any significant difference between taking a
>>>> snapshot and removing it a few commits later, and the prior root refs that
>>>> btrfs holds on to internally until the new commit is complete.  That's
>>>> clearly not quite the case, though.
>>>>
>>>> In any case, we're going to try to reproduce this issue in our
>>>> environment.
>>>>
>>>
>>> I've noticed this problem too, clean_old_snapshots is taking quite a while in
>>> cases where it really shouldn't.  I will see if I can come up with a reproducer
>>> that doesn't require setting up ceph ;).
>>
>> This sounds familiar though, I thought we had fixed a similar
>> regression.  Either way, Arne's readahead code should really help.
>>
>> Which kernel version were you running?
>>
>> [ ack on the rest of Josef's comments ]
>
> This was with a 3.0 kernel, including all btrfs patches from Josef's
> git repo plus the "use the global reserve when truncating the free
> space cache inode" patch.
>
> I'll try the readahead code.
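
The default configuration mentioned above enables btrfs snapshots in the
OSD filestore; as a workaround while this is investigated, that path can
be switched off in ceph.conf. This is a hedged sketch, not something
discussed in the thread; the option name assumes a Ceph release of this
era that supports it:

```ini
; Workaround sketch, not a fix: turn off the btrfs snapshot path in the
; OSD filestore so btrfs-cleaner no longer has OSD snapshots to reap.
; Assumption: this Ceph release understands "filestore btrfs snap".
[osd]
	filestore btrfs snap = false
```

With snapshots disabled the OSD should fall back to plain write-ahead
journaling for consistency instead of relying on snapshot rollback.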

The current readahead code is only used for scrub. I plan to extend it
to snapshot deletion as a next step, but at the moment I'm afraid it
won't help.

-Arne
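
For the ceph-free reproducer Josef mentions above, the snapshot churn
can be sketched without Ceph at all: create and then delete a batch of
snapshots on a scratch btrfs filesystem while watching btrfs-cleaner in
top(1). This is only a sketch under stated assumptions: MNT is a
hypothetical scratch btrfs mount, and the script defaults to a dry run
that prints the btrfs commands rather than executing them (it needs root
and a throwaway filesystem to run for real):

```shell
#!/bin/sh
# Sketch of a ceph-free snapshot-churn reproducer for the reported
# btrfs-cleaner load. MNT and N are placeholders; set DRYRUN= (empty)
# and point MNT at a scratch btrfs mount to actually run the commands.
MNT=${MNT:-/srv/osd}
N=${N:-3}
DRYRUN=${DRYRUN:-1}   # default: only print the commands

run() {
    # In dry-run mode, print the command; otherwise execute it.
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}

churn() {
    i=1
    while [ "$i" -le "$N" ]; do
        # Each snapshot pins the subvolume's current tree roots.
        run btrfs subvolume snapshot "$MNT" "$MNT/snap.$i"
        i=$((i + 1))
    done
    i=1
    while [ "$i" -le "$N" ]; do
        # Deletion queues work for btrfs-cleaner, i.e. the
        # btrfs_clean_old_snapshots() path discussed above.
        run btrfs subvolume delete "$MNT/snap.$i"
        i=$((i + 1))
    done
}

cmds=$(churn)
echo "$cmds"
```

On a real run, the interesting observation is how long the cleaner
thread stays busy after the deletes return.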

>
> Thanks,
> Christian


