From: Peter Zijlstra <peterz@infradead.org>
To: Manfred Spraul <manfred@colorfullife.com>
Cc: Boqun Feng <boqun.feng@gmail.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Waiman.Long@hpe.com, mingo@kernel.org,
torvalds@linux-foundation.org, ggherdovich@suse.com,
mgorman@techsingularity.net, linux-kernel@vger.kernel.org,
Paul McKenney <paulmck@linux.vnet.ibm.com>,
Will Deacon <will.deacon@arm.com>
Subject: Re: sem_lock() vs qspinlocks
Date: Sun, 22 May 2016 11:38:28 +0200 [thread overview]
Message-ID: <20160522093828.GM3193@twins.programming.kicks-ass.net> (raw)
In-Reply-To: <48cb5e2c-f346-d702-30af-2a6666886df4@colorfullife.com>
On Sun, May 22, 2016 at 10:43:08AM +0200, Manfred Spraul wrote:
> How would we handle mixed spin_lock()/mutex_lock() code?
> For the IPC code, I would like to replace the outer lock with a mutex.
> The code only uses spinlocks, because at the time it was written, the mutex
> code didn't contain a busy wait.
> With a mutex, the code would become simpler (all the
> lock/unlock/kmalloc/relock parts could be removed).
>
> The result would be something like:
>
>     CPU0:                          CPU1:
>
>     mutex_lock(A)                  spin_lock(B)
>     spin_unlock_wait(B)            if (!mutex_is_locked(A))
>     do_something()                     do_something()
>
Should work similarly, but we'll have to audit mutex for these same
issues. I'll put it on todo.
Thread overview: 41+ messages
2016-05-20 5:39 sem_lock() vs qspinlocks Davidlohr Bueso
2016-05-20 7:49 ` Peter Zijlstra
2016-05-20 15:00 ` Davidlohr Bueso
2016-05-20 15:05 ` Peter Zijlstra
2016-05-20 15:25 ` Davidlohr Bueso
2016-05-20 15:28 ` Peter Zijlstra
2016-05-20 20:47 ` Waiman Long
2016-05-20 20:52 ` Peter Zijlstra
2016-05-21 0:59 ` Davidlohr Bueso
2016-05-21 4:01 ` Waiman Long
2016-05-21 7:40 ` Peter Zijlstra
2016-05-20 7:53 ` Peter Zijlstra
2016-05-20 8:13 ` Peter Zijlstra
2016-05-20 8:18 ` Peter Zijlstra
2016-05-20 9:07 ` Giovanni Gherdovich
2016-05-20 9:34 ` Peter Zijlstra
2016-05-20 8:30 ` Peter Zijlstra
2016-05-20 9:00 ` Peter Zijlstra
2016-05-20 10:09 ` Ingo Molnar
2016-05-20 10:45 ` Mel Gorman
2016-05-20 11:58 ` Peter Zijlstra
2016-05-20 14:05 ` Boqun Feng
2016-05-20 15:21 ` Peter Zijlstra
2016-05-20 16:04 ` Peter Zijlstra
2016-05-20 17:00 ` Linus Torvalds
2016-05-20 21:06 ` Peter Zijlstra
2016-05-20 21:44 ` Linus Torvalds
2016-05-21 0:48 ` Davidlohr Bueso
2016-05-21 2:30 ` Linus Torvalds
2016-05-21 7:37 ` Peter Zijlstra
2016-05-21 13:49 ` Manfred Spraul
2016-05-24 10:57 ` Peter Zijlstra
2016-05-21 17:14 ` Davidlohr Bueso
2016-05-23 12:25 ` Peter Zijlstra
2016-05-23 17:52 ` Linus Torvalds
2016-05-25 6:37 ` Boqun Feng
2016-05-22 8:43 ` Manfred Spraul
2016-05-22 9:38 ` Peter Zijlstra [this message]
2016-05-20 16:20 ` Davidlohr Bueso
2016-05-20 20:44 ` Waiman Long
2016-05-20 20:53 ` Peter Zijlstra