From: Jan Kara <jack@suse.cz>
To: Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@hp.com>
Cc: Jan Kara <jack@suse.cz>, T Makphaibulchoke <tmac@hp.com>,
	tytso@mit.edu, adilger.kernel@dilger.ca,
	linux-ext4@vger.kernel.org, linux-kernel@vger.kernel.org,
	aswin@hp.com
Subject: Re: [PATCH v2] fs/ext4: increase parallelism in updating ext4 orphan list
Date: Mon, 14 Apr 2014 19:40:24 +0200
Message-ID: <20140414174024.GC13860@quack.suse.cz>
In-Reply-To: <534C135A.9010803@hp.com>

On Mon 14-04-14 10:56:58, Thavatchai Makphaibulchoke wrote:
> On 04/02/2014 11:41 AM, Jan Kara wrote:
> >   Thanks for the patches and measurements! So I agree we contend a lot on
> > orphan list changes in ext4. But what you do seems unnecessarily
> > complicated and somewhat hides the real substance of the patch. If I
> > understand your patch correctly, all it does is perform the preliminary
> > work (ext4_reserve_inode_write(), ext4_journal_get_write_access()) without
> > the global orphan mutex (under the hashed mutex instead).
> > 
> > However orphan operations on a single inode are already serialized by
> > i_mutex, so there's no need to introduce any new hashed lock. Just add a
> > mutex_is_locked(&inode->i_mutex) assertion to ext4_orphan_add() and
> > ext4_orphan_del() - you might need to lock i_mutex around the code in
> > fs/ext4/migrate.c and in ext4_tmpfile() but that should be fine.
> > 
> 
> I've done some data collection.  It looks like there are more callers of
> ext4_orphan_add() and ext4_orphan_del() that do not hold i_mutex than
> expected.
> 
> This is what I found on one of my 8-core machines.
> 
>                  --------------------------------------------
>                  |   ext4_orphan_add  |   ext4_orphan_del   |
>                  --------------------------------------------
>                  |   Total  | without |   Total  | without  |
>                  |          | holding |          | holding  |
>                  |          | i_mutex |          | i_mutex  |
> -------------------------------------------------------------
> | First comes up |          |         |          |          | 
> | to multi-user  |     2446 |     363 |     2081 |     1659 |
> -------------------------------------------------------------
> | After alltests | 23812582 |  173599 | 23853863 |  2467999 |
> -------------------------------------------------------------
> | After disk     | 30860981 |  818255 | 30261069 |  8881905 |
> -------------------------------------------------------------
> 
> Though some orphan calls already held i_mutex, using i_mutex to
> serialize orphan operations on a single inode seems to negate all of
> the performance improvement from the original patch.  There seems to be
> no performance difference from the current implementation.
  Thanks for trying that out! Can you please send me the patch you have been
testing? It doesn't quite make sense to me why using i_mutex should be worse
than using hashed locks...
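
  Just so we are talking about the same thing, below is roughly the shape of
ext4_orphan_add() I had in mind. This is only an untested sketch - the body
that updates the on-disk orphan chain and the in-memory list is elided, the
error handling around iloc is omitted, the WARN_ON_ONCE is just to illustrate
the assertion, and I'm assuming s_orphan_lock stays a plain mutex as it is
now:

	int ext4_orphan_add(handle_t *handle, struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;
		struct ext4_sb_info *sbi = EXT4_SB(sb);
		struct ext4_iloc iloc;
		int err;

		if (!sbi->s_journal)
			return 0;

		/* Orphan ops on one inode are serialized by i_mutex. */
		WARN_ON_ONCE(!mutex_is_locked(&inode->i_mutex));

		/* Do the preliminary journal work without the global
		 * orphan mutex. */
		err = ext4_journal_get_write_access(handle, sbi->s_sbh);
		if (err)
			return err;
		err = ext4_reserve_inode_write(handle, inode, &iloc);
		if (err)
			return err;

		mutex_lock(&sbi->s_orphan_lock);
		/* ... update the on-disk orphan chain and the in-memory
		 *     s_orphan list here, as the current code does ... */
		mutex_unlock(&sbi->s_orphan_lock);

		return ext4_mark_iloc_dirty(handle, inode, &iloc);
	}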

> > Also I'm somewhat failing to see what the spinlock s_orphan_lock brings us.
> > I'd guess that the mutex could still protect also the in-memory list and we
> > have to grab it in all the relevant cases anyway (in some rare cases we
> > could avoid taking the mutex and spinlock would be enough but these
> > shouldn't be performance relevant). Please correct me if I'm wrong here, I
> > didn't look at the code for that long.
> > 
> 
> The same data also shows 0 calls from the error paths.  Re-running the
> benchmark with the spinlock replaced by the same on-disk orphan mutex does
> not seem to change performance compared to the original patch.
  OK, at least that makes sense.
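
  In other words, something like the shape below should be enough - a rough
sketch only, and the helper name is made up just for illustration:

	/* Sketch: one mutex protects both the on-disk orphan chain and
	 * the in-memory s_orphan list, so no extra spinlock is needed. */
	static void ext4_orphan_list_add(struct inode *inode)
	{
		struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);

		mutex_lock(&sbi->s_orphan_lock);
		/* ... on-disk orphan chain update done here as well ... */
		list_add(&EXT4_I(inode)->i_orphan, &sbi->s_orphan);
		mutex_unlock(&sbi->s_orphan_lock);
	}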

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

Thread overview: 13+ messages
2013-10-02 15:38 [PATCH 0/2] fs/ext4: increase parallelism in updating ext4 orphan list T Makphaibulchoke
2013-10-04  0:28 ` Andreas Dilger
2013-10-03 23:20   ` Thavatchai Makphaibulchoke
2014-04-02 16:29 ` [PATCH v2] " T Makphaibulchoke
2014-04-02 17:41   ` Jan Kara
2014-04-02 19:48     ` Thavatchai Makphaibulchoke
2014-04-14 16:56     ` Thavatchai Makphaibulchoke
2014-04-14 17:40       ` Jan Kara [this message]
2014-04-15 16:27         ` Thavatchai Makphaibulchoke
2014-04-15 17:25           ` Jan Kara
2014-04-15 20:22             ` Thavatchai Makphaibulchoke
2014-04-24 17:31 ` [PATCH v3] " T Makphaibulchoke
2014-04-30 10:10   ` Jan Kara
