public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Greg KH <gregkh@linuxfoundation.org>
Cc: torvalds@linux-foundation.org, keescook@chromium.org,
	pbonzini@redhat.com, linux-kernel@vger.kernel.org,
	ojeda@kernel.org, ndesaulniers@google.com, mingo@redhat.com,
	will@kernel.org, longman@redhat.com, boqun.feng@gmail.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org,
	bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
	vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org,
	quic_neeraju@quicinc.com, joel@joelfernandes.org,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, qiang1.zhang@intel.com,
	rcu@vger.kernel.org, tj@kernel.org, tglx@linutronix.de
Subject: Re: [RFC][PATCH 2/2] sched: Use fancy new guards
Date: Fri, 26 May 2023 18:41:30 +0200	[thread overview]
Message-ID: <20230526164130.GA4053578@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <2023052626-blunderer-delegator-4b82@gregkh>

On Fri, May 26, 2023 at 05:25:58PM +0100, Greg KH wrote:
> On Fri, May 26, 2023 at 05:05:51PM +0200, Peter Zijlstra wrote:
> > Convert kernel/sched/core.c to use the fancy new guards to simplify
> > the error paths.
> 
> That's slightly crazy...
> 
> I like the idea, but is this really correct:
> 
> 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  kernel/sched/core.c  | 1223 +++++++++++++++++++++++----------------------------
> >  kernel/sched/sched.h |   39 +
> >  2 files changed, 595 insertions(+), 667 deletions(-)
> > 
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1097,24 +1097,21 @@ int get_nohz_timer_target(void)
> >  
> >  	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
> >  
> > -	rcu_read_lock();
> > -	for_each_domain(cpu, sd) {
> > -		for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
> > -			if (cpu == i)
> > -				continue;
> > +	void_scope(rcu) {
> > +		for_each_domain(cpu, sd) {
> > +			for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
> > +				if (cpu == i)
> > +					continue;
> >  
> > -			if (!idle_cpu(i)) {
> > -				cpu = i;
> > -				goto unlock;
> > +				if (!idle_cpu(i))
> > +					return i;
> 
> You can call return from within a "scope" and it will clean up properly?

Yep, that's the main feature here.

> I tried to read the cpp "mess" but couldn't figure out how to validate
> this at all, have a set of tests for this somewhere?

I have it in userspace with printf, but yeah, I'll go make a selftest
somewhere.

One advantage of using the scheduler locks as testbed is that if you get
it wrong it burns *real* fast -- been there done that etc.

> Anyway, the naming is whack, but I don't have a proposed better name,
> except you might want to put "scope_" as the prefix not the suffix, but
> then that might look odd too, so who knows.

Yeah, naming is certainly crazy, but I figured I should get it all
working before spending too much time on that.

I can certainly do 's/lock_scope/scope_lock/g' on it all.

> But again, the idea is good, it might save us lots of "you forgot to
> clean this up on the error path" mess that we are getting constant churn
> for these days...

That's the goal...


Thread overview: 17+ messages
2023-05-26 15:05 [RFC][PATCH 0/2] Lock and Pointer guards Peter Zijlstra
2023-05-26 15:05 ` [RFC][PATCH 1/2] locking: Introduce __cleanup__ based guards Peter Zijlstra
2023-05-26 17:05   ` Kees Cook
2023-05-26 18:39     ` Miguel Ojeda
2023-05-26 18:22   ` Linus Torvalds
2023-05-26 19:10     ` Peter Zijlstra
2023-05-26 18:49   ` Waiman Long
2023-05-26 18:58     ` Mathieu Desnoyers
2023-05-26 19:04       ` Waiman Long
2023-05-26 15:05 ` [RFC][PATCH 2/2] sched: Use fancy new guards Peter Zijlstra
2023-05-26 16:25   ` Greg KH
2023-05-26 16:27     ` Mathieu Desnoyers
2023-05-26 16:43       ` Peter Zijlstra
2023-05-26 17:08       ` Kees Cook
2023-05-26 17:41         ` Miguel Ojeda
2023-05-26 16:41     ` Peter Zijlstra [this message]
2023-05-29 11:29 ` [RFC][PATCH 0/2] Lock and Pointer guards Paolo Bonzini
