From: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
To: Nick Piggin <npiggin@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
akpm@linux-foundation.org, Ingo Molnar <mingo@elte.hu>,
linux-kernel@vger.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Steven Rostedt <rostedt@goodmis.org>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Nicholas Miell <nmiell@comcast.net>,
laijs@cn.fujitsu.com, dipankar@in.ibm.com, josh@joshtriplett.org,
dvhltc@us.ibm.com, niv@us.ibm.com, tglx@linutronix.de,
Valdis.Kletnieks@vt.edu, dhowells@redhat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task switch at runqueue lock/unlock
Date: Mon, 1 Feb 2010 09:47:59 -0500
Message-ID: <20100201144759.GD10894@Krystal>
In-Reply-To: <20100201104901.GH12759@laptop>
* Nick Piggin (npiggin@suse.de) wrote:
> On Mon, Feb 01, 2010 at 11:36:01AM +0100, Peter Zijlstra wrote:
> > On Mon, 2010-02-01 at 21:11 +1100, Nick Piggin wrote:
> > > All, but one at a time, no? How much of a DoS really is taking these
> > > locks for a handful of cycles each, per syscall?
> >
> > I was more worried about the cacheline thrashing than lock hold times
> > there.
>
> Well, same issue really. Look at all the unprivileged files in /proc,
> for example, that can look through all the per-cpu cachelines. It just
> takes a single read syscall to touch a lot of them, too.
>
>
> > > I mean, we have LOTS of syscalls that take locks, and for a lot longer
> > > (look at dcache_lock).
> >
> > Yeah, and dcache is a massive pain, isn't it ;-)
>
> My point is, I don't think it is something we can realistically
> care much about and it is nowhere near a new or unique problem
> being added by this one patch.
>
> It is really an RoS, a reduction of service, rather than a DoS. And
> any time we allow an unprivileged user on our system, we have RoS
> potential :)
>
>
> > > I think we basically just have to say that locking primitives should be
> > > somewhat fair and not be held for too long; then it should more or less
> > > work.
> >
> > Sure, it'll more or less work, but he's basically making rq->lock a
> > global lock instead of a per-cpu lock.
> >
> > > If the locks are getting contended, then the threads calling
> > > sys_membarrier are going to be spinning longer too, using more CPU time,
> > > and will get scheduled away...
> >
> > Sure, and increased spinning reduces the total throughput.
> >
> > > If there is some particular problem on -rt because of the rq locks,
> > > then I guess you could consider whether to add more overhead to your
> > > ctxsw path to reduce the problem, or simply not support sys_membarrier
> > > for unprivileged users in the first place.
> >
> > Right, for -rt we might need to do that, but it's just that rq->lock is a
> > very hot lock, and adding basically unlimited thrashing to it didn't seem
> > like a good idea.
> >
> > Also, I'm thinking that making it a privileged syscall basically renders
> > it useless for Mathieu.
>
> Well, I just mean that it's something for -rt to work out. Apps can
> still work if the call is completely unsupported.
OK, so we seem to be settling on the spinlock-based sys_membarrier()
this time around. It is much less intrusive to the scheduler fast path,
but adds more overhead to each sys_membarrier() call. That trade-off
makes sense to me, since we expect the scheduler to run _much_ more
often than sys_membarrier(). Once I get confirmation from both of you
that this is the route to follow, I'll go back to the spinlock-based
scheme for v9.
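
For reference, here is a rough sketch of what the spinlock-based scheme
boils down to (illustrative only, not the actual v8/v9 code: the
function name is made up, the mm check and the irq/locking details are
simplified, and the real thing would sit in kernel/sched.c next to the
runqueue definitions):

static void membarrier_spinlock_sketch(void)	/* hypothetical name */
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		/* Only CPUs running a thread of the current process matter. */
		if (cpu_curr(cpu)->mm != current->mm)
			continue;

		/*
		 * Taking and releasing the runqueue lock orders memory
		 * accesses on that CPU for us (using the explicit-barrier
		 * lock variants from patch 1/3 where plain acquire/release
		 * semantics are not enough), so the scheduler fast path
		 * itself is left untouched.
		 */
		raw_spin_lock_irq(&rq->lock);
		raw_spin_unlock_irq(&rq->lock);
	}
}

The cost is one lock/unlock of every runqueue whose CPU is running one
of our threads, per sys_membarrier() call, which is exactly the
contention Peter is worried about above.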
Thanks,
Mathieu
>
>
> > Anyway, it might be that I'm just paranoid... but archs with a large core
> > count and lazy TLB flushing seem particularly vulnerable.
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68