From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Peter Zijlstra <peterz@infradead.org>,
David Miller <davem@davemloft.net>,
jarkao2@gmail.com, Larry.Finger@lwfinger.net, kaber@trash.net,
torvalds@linux-foundation.org, akpm@linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-wireless@vger.kernel.org, mingo@redhat.com
Subject: Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
Date: Fri, 1 Aug 2008 14:10:40 -0700
Message-ID: <20080801211040.GV14851@linux.vnet.ibm.com>
In-Reply-To: <200807242038.36693.nickpiggin@yahoo.com.au>
On Thu, Jul 24, 2008 at 08:38:35PM +1000, Nick Piggin wrote:
> On Thursday 24 July 2008 20:08, Peter Zijlstra wrote:
> > On Thu, 2008-07-24 at 02:32 -0700, David Miller wrote:
> > > From: Peter Zijlstra <peterz@infradead.org>
> > > Date: Thu, 24 Jul 2008 11:27:05 +0200
> > >
> > > > Well, not only lockdep, taking a very large number of locks is
> > > > expensive as well.
> > >
> > > Right now it would be on the order of 16 or 32 for
> > > real hardware.
> > >
> > > Much less than the scheduler currently takes on some
> > > of my systems, so currently you are the pot calling the
> > > kettle black. :-)
> >
> > One nit, and then I'll let this issue rest :-)
> >
> > The scheduler has a long lock dependency chain (nr_cpu_ids rq locks),
> > but it never takes all of them at the same time. Any one code path will
> > at most hold two rq locks.
>
> Aside from lockdep, is there a particular problem with taking 64k locks
> at once? (in a very slow path, of course) I don't think it causes a
> problem with preempt_count; does it cause issues with the -rt kernel?
>
> Hey, something kind of cool (and OT) I've just thought of that we can
> do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> and then wait for them both (all), so the cost is N*lock + longest spin,
> rather than N*lock + N*avg spin.
>
> That would mean even at the worst case of a huge amount of contention
> on all 64K locks, it should only take a couple of ms to take all of
> them (assuming max spin time isn't ridiculous).
>
> Probably not the kind of feature we want to expose widely, but for
> really special things like the scheduler, it might be a neat hack to
> save a few cycles ;) Traditional implementations would just have
> #define spin_lock_async spin_lock
> #define spin_lock_async_wait do {} while (0)
>
> Sorry it's offtopic, but if I didn't post it, I'd forget to. Might be
> a fun quick hack for someone.
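Nick's two-phase idea can be sketched in user-space C against a plain ticket lock. This is a hypothetical illustration, not the kernel's arch-specific spinlock: the struct name `ticket_lock` and the functions `spin_lock_async`, `spin_lock_async_wait`, `spin_unlock_ticket`, and `demo_two_locks` are all invented for this sketch. Phase 1 grabs a ticket on each lock without spinning; phase 2 waits for all tickets, so the spins overlap and the worst case is the longest single spin rather than the sum of the spins:

```c
#include <stdatomic.h>

/* Hypothetical minimal ticket lock (user-space sketch, not the kernel's
 * implementation): "next" dispenses tickets, "owner" is the ticket
 * currently being served. */
struct ticket_lock {
	atomic_uint next;
	atomic_uint owner;
};

/* Phase 1: take a ticket, but do not spin yet. */
static unsigned int spin_lock_async(struct ticket_lock *lock)
{
	return atomic_fetch_add(&lock->next, 1);
}

/* Phase 2: spin until our ticket comes up. */
static void spin_lock_async_wait(struct ticket_lock *lock, unsigned int ticket)
{
	while (atomic_load(&lock->owner) != ticket)
		;	/* a real version would use cpu_relax() here */
}

static void spin_unlock_ticket(struct ticket_lock *lock)
{
	atomic_fetch_add(&lock->owner, 1);
}

/* Take tickets for two locks up front, then wait for both, so the total
 * cost is two ticket grabs plus the single longest spin.  Returns 1 if
 * both locks were acquired and released cleanly. */
static int demo_two_locks(void)
{
	struct ticket_lock a = {0}, b = {0};
	unsigned int ta = spin_lock_async(&a);
	unsigned int tb = spin_lock_async(&b);

	spin_lock_async_wait(&a, ta);	/* the two waits overlap... */
	spin_lock_async_wait(&b, tb);	/* ...instead of serializing */

	/* both locks held here */
	spin_unlock_ticket(&b);
	spin_unlock_ticket(&a);
	return atomic_load(&a.owner) == 1 && atomic_load(&b.owner) == 1;
}
```

As Nick notes, a non-ticket implementation degrades gracefully: defining spin_lock_async as spin_lock and spin_lock_async_wait as a no-op gives back the ordinary serialized acquisition with no API change.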
FWIW, I did something similar in a previous life for the write-side of
a brlock-like locking mechanism. This was especially helpful if the
read-side critical sections were long.
Thanx, Paul