public inbox for linux-btrfs@vger.kernel.org
From: Stephen Hemminger <shemminger@vyatta.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: Andi Kleen <andi@firstfloor.org>,
	Chris Mason <chris.mason@oracle.com>,
	linux-btrfs@vger.kernel.org
Subject: Re: btrfs_tree_lock & trylock
Date: Mon, 8 Sep 2008 08:50:54 -0700	[thread overview]
Message-ID: <20080908085054.21acbe77@extreme> (raw)
In-Reply-To: <20080908154714.GM26079@one.firstfloor.org>

On Mon, 8 Sep 2008 17:47:14 +0200
Andi Kleen <andi@firstfloor.org> wrote:

> On Mon, Sep 08, 2008 at 08:07:51AM -0700, Stephen Hemminger wrote:
> > On Mon, 8 Sep 2008 16:20:52 +0200
> > Andi Kleen <andi@firstfloor.org> wrote:
> > 
> > > On Mon, Sep 08, 2008 at 10:02:30AM -0400, Chris Mason wrote:
> > > > On Mon, 2008-09-08 at 15:54 +0200, Andi Kleen wrote:
> > > > > > The idea is to try to spin for a bit to avoid scheduling away, which is
> > > > > > especially important for the high levels.  Most holders of the mutex
> > > > > > let it go very quickly.
> > > > > 
> > > > > Ok but that surely should be implemented in the general mutex code then
> > > > > or at least in a standard adaptive mutex wrapper? 
> > > > 
> > > > That depends, am I the only one crazy enough to think it's a good idea?
> > > 
> > > Adaptive mutexes are a classic technique; a lot of other OSes have them.
> > 
> > The problem is that they are a nuisance. It is impossible to choose
> > the right trade-off between spin and no-spin, and they optimize for
> > a case that doesn't occur often enough to justify it.
> 
> At least the numbers from Gregory et al. showed dramatic improvements.
> Granted, that was an extreme case in that the rt kernel does everything
> with mutexes, but it was still a very clear win on a wide range
> of workloads.
> 
> -Andi

My guess is that the improvement happens mostly from the first couple of tries,
not from repeated spinning. And since it is a mutex, you could even do:

   if (mutex_trylock(&eb->mutex))
           return 0;
   cpu_relax();
   if (mutex_trylock(&eb->mutex))
           return 0;
   yield();
   mutex_lock(&eb->mutex);
   return 0;
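For readers outside the kernel, the same trylock-a-couple-of-times-then-block pattern can be sketched in user space with POSIX threads. This is an illustrative stand-in, not the btrfs code: pthread_mutex_trylock plays the role of mutex_trylock, sched_yield stands in for cpu_relax()/yield(), and the function name adaptive_lock is invented for the example.

```c
/* Sketch of the adaptive-lock idea from the message above, in user
 * space.  Assumes POSIX threads; adaptive_lock is a hypothetical name. */
#include <pthread.h>
#include <sched.h>

static int adaptive_lock(pthread_mutex_t *m)
{
	/* First attempt: most holders release quickly, so this
	 * usually succeeds without blocking. */
	if (pthread_mutex_trylock(m) == 0)
		return 0;

	/* Briefly give up the CPU, then retry once, mirroring the
	 * cpu_relax() step in the kernel sketch. */
	sched_yield();
	if (pthread_mutex_trylock(m) == 0)
		return 0;

	/* Still contended: stop spinning and block like an
	 * ordinary mutex. */
	return pthread_mutex_lock(m);
}
```

The point of the structure is the same as in the mail: the win comes from the first one or two tries, so there is no tuning knob for "how long to spin" to get wrong.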



Thread overview: 18+ messages
2008-09-08 11:10 btrfs_tree_lock & trylock Andi Kleen
2008-09-08 13:47 ` Chris Mason
2008-09-08 13:54   ` Andi Kleen
2008-09-08 14:02     ` Chris Mason
2008-09-08 14:20       ` Andi Kleen
2008-09-08 15:07         ` Stephen Hemminger
2008-09-08 15:28           ` Chris Mason
2008-09-08 23:26             ` Steve Long
2008-09-08 15:47           ` Andi Kleen
2008-09-08 15:50             ` Stephen Hemminger [this message]
2008-09-08 15:55               ` Chris Mason
2008-09-08 16:13                 ` jim owens
2008-09-08 16:20                   ` Chris Mason
2008-09-08 16:49                     ` Stephen Hemminger
2008-09-08 17:17                       ` Christoph Hellwig
2008-09-08 17:32                         ` Ric Wheeler
2008-09-08 23:28                           ` Steve Long
2008-09-08 17:16           ` adaptive mutexes, was " Christoph Hellwig
