From: Oleg Nesterov <oleg@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>,
Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Ingo Molnar <mingo@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Thomas Gleixner <tglx@linutronix.de>,
Steven Rostedt <rostedt@goodmis.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/5] rcusync: introduce rcu_sync_struct->exclusive mode
Date: Fri, 4 Oct 2013 21:56:23 +0200
Message-ID: <20131004195623.GA19436@redhat.com>
In-Reply-To: <20131004192944.GU15690@laptop.programming.kicks-ass.net>
On 10/04, Peter Zijlstra wrote:
>
> On Fri, Oct 04, 2013 at 08:46:40PM +0200, Oleg Nesterov wrote:
>
> > Note: it would be more clean to do __complete_locked() under
> > ->rss_lock in rcu_sync_exit() in the "else" branch, but we don't
> > have this trivial helper.
>
> Something equivalent in available functions would be:
>
> rss->gp_comp.done++;
> __wake_up_locked_key(&rss->gp_comp.wait, TASK_NORMAL, NULL);
Or __wake_up_locked(&rss->gp_comp.wait, TASK_NORMAL, 1).
Sure, this is what I had in mind. I just thought that you would also dislike
the idea of using/adding the new helper ;) (and I think it would be better
to add the new helper even if we are not going to export it).
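IOW, the trivial helper I have in mind would be something like this
(only a sketch, it does not exist in the tree):

	/* complete() with ->wait.lock (aka ->rss_lock) already held */
	static void __complete_locked(struct completion *x)
	{
		x->done++;
		__wake_up_locked(&x->wait, TASK_NORMAL, 1);
	}

so that rcu_sync_exit() could simply call __complete_locked(&rss->gp_comp)
under ->rss_lock.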
> > struct rcu_sync_struct {
> > int gp_state;
> > int gp_count;
> > - wait_queue_head_t gp_wait;
> > + struct completion gp_comp;
> >
> > int cb_state;
> > struct rcu_head cb_head;
> >
> > + bool exclusive;
> > struct rcu_sync_ops *ops;
> > };
>
> I suppose we have a hole before or after cb_state to fit exclusive in;
> now it looks like we're going to create another hole before the *ops
> pointer.
Yes, it probably makes sense to rearrange the members. And, for example,
gp_state and cb_state can be "char" and packed together.
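Something like this, say (the layout is purely illustrative):

	struct rcu_sync_struct {
		char			gp_state;
		char			cb_state;
		bool			exclusive;
		int			gp_count;

		struct completion	gp_comp;

		struct rcu_head		cb_head;
		struct rcu_sync_ops	*ops;
	};

so that gp_state/cb_state/exclusive can share a single word rather than
creating new holes.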
> > @@ -4,7 +4,7 @@
> > enum { GP_IDLE = 0, GP_PENDING, GP_PASSED };
> > enum { CB_IDLE = 0, CB_PENDING, CB_REPLAY };
> >
> > -#define rss_lock gp_wait.lock
> > +#define rss_lock gp_comp.wait.lock
>
> Should we, for convenience, also do:
>
> #define rss_wait gp_comp.wait
Yes, I considered this too. OK, will do.
> > void rcu_sync_enter(struct rcu_sync_struct *rss)
> > @@ -56,9 +58,13 @@ void rcu_sync_enter(struct rcu_sync_struct *rss)
> > if (need_sync) {
> > rss->ops->sync();
> > rss->gp_state = GP_PASSED;
> > - wake_up_all(&rss->gp_wait);
> > + if (!rss->exclusive)
> > + wake_up_all(&rss->gp_comp.wait);
> > } else if (need_wait) {
> > - wait_event(rss->gp_wait, rss->gp_state == GP_PASSED);
> > + if (!rss->exclusive)
> > + wait_event(rss->gp_comp.wait, rss->gp_state == GP_PASSED);
> > + else
> > + wait_for_completion(&rss->gp_comp);
>
> I'm still not entirely sure why we need the completion; we already have
> the gp_count variable and a waitqueue;
and we also need the resource counter (like completion->done).
> together those should be able to
> implement the condition/semaphore variable, no?
>
> wait_for_completion:
>
> spin_lock_irq(&rss->rss_lock);
> if (rss->gp_count > 0) {
> __wait_event_locked(rss->gp_wait, (rss->gp_count > 0),
How? I do not even understand what you meant ;) both conditions
are "gp_count > 0".
We simply cannot define the CONDITION for wait_event() here without
additional accounting.
Hmm, perhaps you meant that this should be done before rcu_sync_enter()
increments ->gp_count. Perhaps that could work, but the code would be more
complex, and this way rcu_sync_exit() would always schedule the callback?
And again, we do want to increment ->gp_count as soon as possible, to disable
this callback if it is already pending.
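
To spell out what I mean by the additional accounting (a simplified
sketch, not the exact code from the patch): in the exclusive case it is
->gp_comp.done which records that one exit() already happened, so a
waiter that takes ->rss_lock only afterwards still makes progress:

	spin_lock_irq(&rss->rss_lock);
	if (--rss->gp_count) {
		/* pass the GP to exactly one exclusive waiter */
		rss->gp_comp.done++;
		__wake_up_locked(&rss->gp_comp.wait, TASK_NORMAL, 1);
	}
	/* else: the last exit, schedule the callback as before */
	spin_unlock_irq(&rss->rss_lock);

gp_count alone cannot express this; it only says whether writers exist,
not whether one of them has already finished.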
Oleg.