From: Marc Cousin <cousinmarc@gmail.com>
To: dsterba@suse.cz, linux-btrfs@vger.kernel.org
Subject: Re: snapshot destruction making IO extremely slow
Date: Mon, 30 Mar 2015 17:09:52 +0200 [thread overview]
Message-ID: <55196740.1000705@gmail.com> (raw)
In-Reply-To: <20150330142532.GI32051@suse.cz>
On 30/03/2015 16:25, David Sterba wrote:
> On Wed, Mar 25, 2015 at 11:55:36AM +0100, Marc Cousin wrote:
>> On 25/03/2015 02:19, David Sterba wrote:
>>> Snapper might add to that if you have
>>>
>>> EMPTY_PRE_POST_CLEANUP="yes"
>>>
>>> as it reads the pre/post snapshots and deletes them if the diff is
>>> empty. This adds some IO stress.
>>
>> I couldn't find a clear explanation in the documentation. Does it mean
>> that when there is absolutely no difference between two snapshots, one
>> of them is deleted?
>
> Only the pre/post snapshots, i.e. no timeline or other types (e.g.
> manually created ones).
>
>> And that snapper does a diff between them to
>> determine that?
>
> AFAIK yes.
>
>> If so, yes, I can remove it, I don't care about that :)
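
For the record, I assume turning it off is just a matter of editing the
per-filesystem snapper config (the config name "root" below is a guess,
I'd use whatever "snapper list-configs" reports):

  # set EMPTY_PRE_POST_CLEANUP to "no" in the relevant config
  sed -i 's/^EMPTY_PRE_POST_CLEANUP=.*/EMPTY_PRE_POST_CLEANUP="no"/' \
      /etc/snapper/configs/root
  # confirm the change
  grep EMPTY_PRE_POST_CLEANUP /etc/snapper/configs/root
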
>>
>>>
>>>> The btrfs cleaner is 100% active:
>>>>
>>>> 1501 root 20 0 0 0 0 R 100,0 0,0 9:10.40 [btrfs-cleaner]
>>>
>>> That points to the snapshot cleaning, but the cleaner thread does more
>>> than that. It may also process delayed file deletions and the work
>>> scheduled when 'autodefrag' is on.
>>
>> autodefrag is activated. These are mechanical drives, so I'd rather keep
>> it on, shouldn't I?
>
> You should (I do have autodefrag on), unless your applications are
> latency sensitive and you can measure the difference. Autodefrag tends
> to read/write the surrounding blocks for random writes, so it may imply
> some seek penalty if the affected block is far from the others.
>
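
Noted. If I ever want to measure the difference, I suppose I can check
whether it is active and toggle it with a remount (the mount point below
is just an example, and I'd double-check that 'noautodefrag' is accepted
on a remount with this kernel):

  # check the current mount options of the btrfs filesystems
  grep btrfs /proc/mounts
  # temporarily disable autodefrag on the affected filesystem
  mount -o remount,noautodefrag /mnt/data
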
>>>> What is "funny" is that the filesystem seems to be working again when
>>>> there is some IO activity and btrfs-cleaner gets to a lower cpu usage
>>>> (around 70%).
>>>
>>> Possibly a behaviour caused by scheduling (both CPU and IO): the other
>>> process gets a slice and slows down the cleaner that hogs the system.
>>
>> I have almost no IO on these disks during the problem (I had included an
>> iostat in the first email). Only one CPU core is at 100% load. That's why
>> I felt it looked more like a locking or serialization issue.
>
> So it would be good to sample the active threads and see where it's
> spending the time. It could be somewhere in the rb-tree representing
> extents, but that's a guess.
>
I just need to be told how to do that :)
Something like a perf top?
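
In the meantime, here is roughly what I would try, assuming perf is
available for this kernel (the pgrep/sleep details below are just my
guess at a reasonable way to do it):

  # live view of where the kernel spends its time, with call graphs
  perf top -g
  # or record only the cleaner thread for 30 seconds, then inspect
  perf record -g -p $(pgrep btrfs-cleaner) -- sleep 30
  perf report
  # cruder alternative: sample its kernel stack a few times (needs root)
  for i in $(seq 5); do cat /proc/$(pgrep btrfs-cleaner)/stack; sleep 1; done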