linux-btrfs.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Arne Jansen <sensille@gmx.net>
To: Andi Kleen <andi@firstfloor.org>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v3 3/6] btrfs: add scrub code and prototypes
Date: Thu, 17 Mar 2011 00:10:57 +0100	[thread overview]
Message-ID: <4D814381.9070105@gmx.net> (raw)
In-Reply-To: <m2mxkulnx3.fsf@firstfloor.org>

On 16.03.2011 23:07, Andi Kleen wrote:
> Arne Jansen<sensille@gmx.net>  writes:
>> +	 */
>> +	mutex_lock(&fs_info->scrub_lock);
>> +	atomic_inc(&fs_info->scrubs_running);
>> +	mutex_unlock(&fs_info->scrub_lock);
> It seems odd to protect an atomic_inc with a mutex.
> Is that done for some side effect? Otherwise you either
> don't need atomic or don't need the lock.
>
The reason it is atomic is because it is checked inside a wait_event,
where I can't hold a lock. The mutex is there to protect the check
in btrfs_scrub_pause and btrfs_scrub_cancel. But, now that I think
of it, there is still a race condition left. I'll rethink the locking there
and see if I can eliminate some of the mutex_locks.
> That seems to be all over the source file.
>
>> +int btrfs_scrub_pause(struct btrfs_root *root)
>> +{
>> +	struct btrfs_fs_info *fs_info = root->fs_info;
>> +	mutex_lock(&fs_info->scrub_lock);
>
> As I understand it you take that mutex on every transaction
> commit, which is a fast path for normal IO.
>
A transaction commit only happens every 30 seconds. At this point,
all outstanding data gets flushed and the super blocks written. I only
pause the scrub in a very late phase during commit. At this point,
the commit is already single threaded.

Apart from that you can be sure that scrub will have an impact on the
performance, as it keeps the disks at 100% busy.
To mitigate this, all scrub activity happens inside ioctls. The idea is that
this way the user can control the impact of the scrub using ionice.

--Arne

> For me that looks like a scalability problem with enough
> cores. Did you do any performance testing of this on a system
> with a reasonable number of cores?
>
> btrfs already has enough scalability problems, please don't
> add new ones.
>
> -Andi
>


  reply	other threads:[~2011-03-16 23:10 UTC|newest]

Thread overview: 14+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2011-03-12 14:50 [PATCH v3 0/6] btrfs: scrub Arne Jansen
2011-03-12 14:50 ` [PATCH v3 1/6] btrfs: add parameter to btrfs_lookup_csum_range Arne Jansen
2011-03-12 14:50 ` [PATCH v3 2/6] btrfs: make struct map_lookup public Arne Jansen
2011-03-12 14:50 ` [PATCH v3 3/6] btrfs: add scrub code and prototypes Arne Jansen
2011-03-13 23:50   ` Ilya Dryomov
2011-03-14  9:57     ` Arne Jansen
2011-03-16 14:35   ` Ilya Dryomov
2011-03-16 14:54     ` Ilya Dryomov
2011-03-16 22:07   ` Andi Kleen
2011-03-16 23:10     ` Arne Jansen [this message]
2011-03-17 19:02       ` Arne Jansen
2011-03-12 14:50 ` [PATCH v3 4/6] btrfs: sync scrub with commit & device removal Arne Jansen
2011-03-12 14:50 ` [PATCH v3 5/6] btrfs: add state information for scrub Arne Jansen
2011-03-12 14:50 ` [PATCH v3 6/6] btrfs: new ioctls " Arne Jansen

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4D814381.9070105@gmx.net \
    --to=sensille@gmx.net \
    --cc=andi@firstfloor.org \
    --cc=linux-btrfs@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).