public inbox for linux-kernel@vger.kernel.org
From: David Miller <davem@davemloft.net>
To: mingo@elte.hu
Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de
Subject: Re: Soft lockup regression from today's sched.git merge.
Date: Tue, 22 Apr 2008 22:42:12 -0700 (PDT)	[thread overview]
Message-ID: <20080422.224212.32058127.davem@davemloft.net> (raw)
In-Reply-To: <20080422091456.GC9939@elte.hu>

From: Ingo Molnar <mingo@elte.hu>
Date: Tue, 22 Apr 2008 11:14:56 +0200

> thanks for reporting it. I haven't seen this false positive happen in a 
> long time - but then again, PC CPUs are a lot less idle than a 128-CPU 
> Niagara2 :-/ I'm wondering what the best method would be to provoke a 
> CPU to stay idle that long - to make sure this bug is fixed.

I looked more closely at this.

There is no way the patch in question can work properly.

The algorithm is, essentially, "if time - prev_cpu_time is large
enough, call __sync_cpu_clock()", which is fine, except that nothing
ever sets prev_cpu_time.

The code is fatally flawed: once __sync_cpu_clock() calls start
happening, they happen on every cpu_clock() call.

So, as my bisect showed from the get-go, these cpu_clock() changes
have major problems.  It was quite a mind-boggling stretch to stick a
touch_softlockup_watchdog() call somewhere to try to fix this when the
guilty change in question didn't touch that area at all.
:-(

Furthermore, this is an extremely expensive way to ensure monotonic
per-rq timestamps.  A global spinlock taken every 100000 ns on every
cpu?!?!  :-/

At least remove any implication of "high speed" from the comments
above cpu_clock() if we're going to need something like this.  I have
128 cpus, so that's 128 grabs of that spinlock every quantum.  The
next system I'm getting will have 256 cpus.  The expense of your
solution increases linearly with the number of cpus, which doesn't
scale.

Anyways, I'll work on the group sched lockup bug next.  As if I have
nothing better to do during the merge window than fix sched tree
regressions :-(


Thread overview: 17+ messages
2008-04-22  8:59 Soft lockup regression from today's sched.git merge David Miller
2008-04-22  9:14 ` Ingo Molnar
2008-04-22 10:05   ` David Miller
2008-04-22 12:45     ` Peter Zijlstra
2008-05-06 22:41       ` Rafael J. Wysocki
2008-05-06 23:05         ` David Miller
2008-05-07  6:43           ` Ingo Molnar
2008-05-07 18:56             ` Rafael J. Wysocki
2008-04-23  8:50     ` [patch] softlockup: fix false positives on nohz if CPU is 100% idle for more than 60 seconds Ingo Molnar
2008-04-23 10:55       ` David Miller
2008-04-23 12:29         ` David Miller
2008-04-23 13:36           ` Ingo Molnar
2008-04-23 23:23             ` David Miller
2008-04-23  5:42   ` David Miller [this message]
2008-04-23  7:32     ` Soft lockup regression from today's sched.git merge Dhaval Giani
2008-04-23  7:51     ` Ingo Molnar
2008-04-23  9:40     ` Ingo Molnar
