From: john stultz <johnstul@us.ibm.com>
To: drepper@redhat.com
Cc: Thomas Gleixner <tglx@linutronix.de>, mingo <mingo@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Dinakar Guniguntala <dino@in.ibm.com>,
Ankita Garg <ankita@in.ibm.com>, Darren Hart <dvhltc@us.ibm.com>,
Sripathi Kodi <sripathi@in.ibm.com>,
lkml <linux-kernel@vger.kernel.org>
Subject: Re: [BUG -rt] Priority inversion deadlock caused by condvars
Date: Fri, 12 Sep 2008 15:04:54 -0700
Message-ID: <1221257094.6695.56.camel@localhost.localdomain>
In-Reply-To: <1221256895.6695.55.camel@localhost.localdomain>
Oops, originally sent to the wrong Ulrich.
Sorry
-john
On Fri, 2008-09-12 at 15:01 -0700, john stultz wrote:
> So we've been seeing application hangs with a heavily threaded (~8k
> threads) realtime java test. After a fair amount of debugging we found
> that most of the SCHED_FIFO threads are blocked in futex_wait(). This
> raised some alarm, since futex_wait isn't priority-inheritance aware.
>
> After seeing what was going on, Dino came up with a possible deadlock
> case in the pthread_cond_wait() code.
>
> The problem, as I understand it (assuming there is only one cpu), is
> this: a low priority thread about to call pthread_cond_wait() takes the
> associated PI mutex and calls the function. The glibc implementation
> acquires the condvar's internal non-PI lock, releases the PI mutex, and
> then tries to block in futex_wait().
>
> However, if a medium priority cpu hog and a high priority thread start
> up while the low priority thread holds the mutex, the low priority
> thread will be boosted only until it releases that mutex, which is not
> long enough for it to also release the condvar's internal lock (since
> the internal lock is not priority inherited).
>
> The high priority thread will then acquire the mutex and try to acquire
> the condvar's internal lock (which is still held). But since the medium
> priority cpu hog is runnable, it blocks the low priority thread from
> running, and thus from ever releasing the internal lock.
>
> And then we're deadlocked.
>
> Thomas mentioned this is a known problem, but I wanted to send this
> example out so maybe others might become aware.
>
> The attached test illustrates this hang as described above when bound to
> a single cpu. I believe it's correct, but these sorts of tests often
> have their own bugs that create false positives, so please forgive me
> and let me know if you see any problems. :)
>
> Many thanks to Dino, Ankita and Sripathi for helping to sort out this
> issue.
>
> To run:
> ./pthread_cond_hang => will PASS (on SMP)
> taskset -c 0 ./pthread_cond_hang => will HANG
>
>
> thanks
> -john
Thread overview: 3+ messages
2008-09-12 22:01 [BUG -rt] Priority inversion deadlock caused by condvars john stultz
2008-09-12 22:04 ` john stultz [this message]
2008-09-15 9:21 ` Ankita Garg