From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [RFC] Userspace RCU: (ab)using futexes to save cpu cycles and energy
Date: Thu, 1 Oct 2009 07:40:38 -0700
Message-ID: <20091001144037.GB6205@linux.vnet.ibm.com>
In-Reply-To: <20090923174820.GA12827@Krystal>
On Wed, Sep 23, 2009 at 01:48:20PM -0400, Mathieu Desnoyers wrote:
> Hi,
>
> When implementing the call_rcu() "worker thread" in userspace, I ran
> into the problem that it had to be woken up periodically to check
> whether there are any callbacks to execute. However, I can easily
> imagine that this kind of periodic polling does not fit well with the
> notion of "green computing".
>
> Therefore, I've looked at ways to have the call_rcu() callers wake up
> this worker thread when callbacks are enqueued. However, I don't want
> to take any lock, and the fast path (when no wakeup is required) should
> not cause any cache-line exchange.
>
> Here are the primitives I've created. I'd like to have feedback on my
> futex use, just to make sure I did not make any incorrect assumptions.
>
> This could also eventually be used in the QSBR userspace RCU quiescent
> state, and in the mb/signal userspace RCU flavors when exiting an RCU
> read-side critical section, to ensure synchronize_rcu() does not
> busy-wait for too long.
>
> /*
>  * Wake up any waiting defer thread. Called from many concurrent
>  * threads.
>  */
> static void wake_up_defer(void)
> {
> 	if (unlikely(atomic_read(&defer_thread_futex) == -1)) {
> 		atomic_set(&defer_thread_futex, 0);
> 		/* Wake one waiter; a count of 0 would wake nobody. */
> 		futex(&defer_thread_futex, FUTEX_WAKE, 1,
> 		      NULL, NULL, 0);
> 	}
> }
>
> /*
>  * Defer thread waiting. Single thread.
>  */
> static void wait_defer(void)
> {
> 	atomic_dec(&defer_thread_futex);
> 	/* Sleep only if nobody has raced in and reset the futex to 0. */
> 	if (atomic_read(&defer_thread_futex) == -1)
> 		futex(&defer_thread_futex, FUTEX_WAIT, -1,
> 		      NULL, NULL, 0);
> }
The standard approach would be to use pthread_cond_wait() and
pthread_cond_broadcast(). Unfortunately, this would require holding a
pthread_mutex across both operations, which would not necessarily be
so good for wake-up-side scalability.
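A minimal sketch of that condvar-based pairing, assuming a hypothetical
defer_work_pending flag set by the enqueuers:

	#include <pthread.h>
	#include <stdbool.h>

	static pthread_mutex_t defer_mutex = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t defer_cond = PTHREAD_COND_INITIALIZER;
	static bool defer_work_pending;	/* hypothetical: set when work queued */

	/* Wake side: the mutex must be held across the state change. */
	static void wake_up_defer(void)
	{
		pthread_mutex_lock(&defer_mutex);
		defer_work_pending = true;
		pthread_cond_broadcast(&defer_cond);
		pthread_mutex_unlock(&defer_mutex);
	}

	/* Wait side: re-check the predicate to tolerate spurious wakeups. */
	static void wait_defer(void)
	{
		pthread_mutex_lock(&defer_mutex);
		while (!defer_work_pending)
			pthread_cond_wait(&defer_cond, &defer_mutex);
		defer_work_pending = false;
		pthread_mutex_unlock(&defer_mutex);
	}

Every wake_up_defer() call now acquires the mutex, which is exactly the
wake-up-side cost the futex scheme is trying to avoid.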
That said, without this sort of heavy-locking approach, wakeup races
are quite difficult to avoid. For example, the waiter must re-check its
callback queue after the atomic_dec() and before sleeping, lest a
callback enqueued just before the decrement go unprocessed until the
next enqueue.
Thanx, Paul
Thread overview: 17+ messages
2009-09-23 17:48 [RFC] Userspace RCU: (ab)using futexes to save cpu cycles and energy Mathieu Desnoyers
2009-09-23 18:04 ` Chris Friesen
2009-09-23 19:03 ` Mathieu Desnoyers
2009-09-23 22:32 ` Mathieu Desnoyers
2009-09-23 23:12 ` Chris Friesen
2009-09-23 23:28 ` Mathieu Desnoyers
2009-09-26 7:05 ` Mathieu Desnoyers
2009-09-28 7:11 ` Michael Schnell
2009-09-28 10:58 ` Michael Schnell
2009-09-28 11:01 ` Michael Schnell
2009-10-01 14:40 ` Paul E. McKenney [this message]
2009-10-04 14:37 ` Mathieu Desnoyers
2009-10-04 20:36 ` Paul E. McKenney
2009-10-04 21:12 ` Mathieu Desnoyers
[not found] ` <4AC99D55.8000102@lumino.de>
[not found] ` <20091005125533.GA1857@Krystal>
2009-10-05 13:22 ` Mathieu Desnoyers
2009-10-05 22:21 ` Mathieu Desnoyers
2009-10-07 7:22 ` Michael Schnell