linux-kernel.vger.kernel.org archive mirror
From: Ingo Molnar <mingo@kernel.org>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mike Galbraith <bitbucket@online.de>, Peter Anvin <hpa@zytor.com>,
	Andi Kleen <ak@linux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC] per-cpu preempt_count
Date: Mon, 12 Aug 2013 19:58:30 +0200	[thread overview]
Message-ID: <20130812175830.GB18691@gmail.com> (raw)
In-Reply-To: <CA+55aFznj9WqGa_Qa0B4=5iTio2br54uJG7xkm6otN2sG2P=8w@mail.gmail.com>


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Mon, Aug 12, 2013 at 4:51 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > The below boots to wanting to mount a root filesystem with
> > CONFIG_PREEMPT=y using kvm -smp 4.
> 
> But doesn't work in general? Or you just never tested?

(I think Peter never tested it on real hw - this is an RFC patch to show 
the concept.)

> > Adding TIF_NEED_RESCHED into the preempt count would allow a single 
> > test in preempt_check_resched() instead of still needing the TI. 
> > Removing PREEMPT_ACTIVE from preempt count should allow us to get rid 
> > of ti::preempt_count altogether.
> >
> > The only problem with TIF_NEED_RESCHED is that it's cross-CPU, which 
> > would make the entire thing atomic, which would suck donkey balls - so 
> > maybe we need two separate per-cpu variables?
> 
> Agreed. Making it atomic would suck, and cancel all advantages of the 
> better code generation to access it. Good point.

We could still get the benefit of having NEED_RESCHED in preempt_count() by 
realizing that we only rarely actually set or clear need_resched, while its 
highest-frequency user by far, the preempt_enable() check, only ever reads it.

So we could have it atomic, but do just an atomic_read() in the 
preempt_enable() hotpath, which wouldn't suck donkey balls, right?
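
Something along these lines, as a rough sketch - the names are made up, 
I'm using an atomic_or() style helper for the rare writer side, and the 
nesting inc/dec of the count itself is elided, because that is exactly 
the part we still have to sort out:

#include <linux/types.h>
#include <linux/atomic.h>
#include <linux/percpu.h>

/* Hypothetical layout: preempt count plus one need_resched bit. */
#define SKETCH_NEED_RESCHED	(1 << 30)

DEFINE_PER_CPU(atomic_t, sketch_preempt_count);

/* Rare, possibly cross-CPU: flag that a reschedule is needed. */
static inline void sketch_set_need_resched(int cpu)
{
	atomic_or(SKETCH_NEED_RESCHED, &per_cpu(sketch_preempt_count, cpu));
}

/*
 * Hot path, at the tail of preempt_enable(): atomic_read() is a plain
 * load on x86, so the common case is one MOV plus one test.
 */
static inline bool sketch_should_resched(void)
{
	return atomic_read(this_cpu_ptr(&sketch_preempt_count)) ==
	       SKETCH_NEED_RESCHED;
}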

That would allow a really sweet preempt_enable() fastpath, on x86 at 
least.
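
Something like the below, as a sketch - assuming we can store the 
need_resched bit *inverted*, so that "count dropped to zero and a 
resched is pending" collapses to "the whole per-cpu word is zero":

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Inverted convention: bit set means "no reschedule pending". */
#define SKETCH_NEED_RESCHED_INV	(1U << 31)

DEFINE_PER_CPU(unsigned int, sketch_preempt_count_x86) = SKETCH_NEED_RESCHED_INV;

static inline void sketch_preempt_enable(void)
{
	/*
	 * On x86 this can boil down to roughly:
	 *
	 *	decl %gs:sketch_preempt_count_x86
	 *	jnz  1f
	 *	call preempt_schedule
	 * 1:
	 *
	 * i.e. one decrement of a per-cpu word plus one rarely taken
	 * branch, and no thread_info access at all.
	 */
	if (unlikely(__this_cpu_dec_return(sketch_preempt_count_x86) == 0))
		preempt_schedule();
}

The open question is still who gets to poke that word from another CPU, 
as per Peter's point above.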

Thanks,

	Ingo

Thread overview: 17+ messages
2013-08-12 11:51 [RFC] per-cpu preempt_count Peter Zijlstra
2013-08-12 17:35 ` Linus Torvalds
2013-08-12 17:51   ` H. Peter Anvin
2013-08-12 18:53     ` Linus Torvalds
2013-08-13  8:39       ` Peter Zijlstra
2013-08-12 17:58   ` Ingo Molnar [this message]
2013-08-12 19:00     ` Linus Torvalds
2013-08-12 20:44       ` H. Peter Anvin
2013-08-13 10:30       ` Ingo Molnar
2013-08-13 12:26         ` Peter Zijlstra
2013-08-13 15:39           ` Linus Torvalds
2013-08-13 15:56             ` Ingo Molnar
2013-08-13 16:26               ` Peter Zijlstra
2013-08-13 16:28               ` H. Peter Anvin
2013-08-13 16:29             ` Peter Zijlstra
2013-08-13 16:38               ` Linus Torvalds
2013-08-18 17:57   ` Paul E. McKenney
