From: Ingo Molnar <mingo@elte.hu>
To: Esben Nielsen <simlo@phys.au.dk>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org
Subject: Re: PI patch against 2.6.16-rt9
Date: Mon, 27 Mar 2006 02:21:05 +0200
Message-ID: <20060327002105.GA29649@elte.hu>
In-Reply-To: <Pine.LNX.4.44L0.0603270055090.2708-100000@lifa01.phys.au.dk>
* Esben Nielsen <simlo@phys.au.dk> wrote:
> > How do you guarantee that some other CPU doesn't send us on some
> > goose-chase?
>
> How should another CPU suddenly be able to insert stuff into a lock
> chain? Only the tasks themselves can do that, and they are blocked on
> some lock - at least they were when we tested them in a previous
> iteration. Of course, they may have been signalled or timed out since,
> so they are already unblocked by the time the deadlock is reported.
> But that is not an error, since the locks actually were in a deadlock
> situation at some point.
We are observing a non-time-coherent snapshot of the locking graph.
There is no guarantee that, due to timeouts or signals, the chain we
observe isn't artificially long - whereas a time-coherent snapshot is
always correct. Take dentry locks as an example: their locking is
ordered by the dentry's (kernel-pointer) address. We could in theory
have a 'chain' of successive locking dependencies spanning 10,000
dentries, which are nicely ordered and form a 10,000-entry 'chain' when
looked at in a non-time-coherent way. That is, your code could detect a
deadlock where there is none. The more CPUs there are, the greater the
likelihood that other CPUs 'lure us' into a long chain.
In other words: without taking all the locks, we have no mathematical
proof that we have detected a deadlock!
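To make the failure mode concrete, here is a minimal sketch of such a
chain walk - the struct and field names are made up for illustration,
this is not the actual rtmutex code. Each link is read at a different
instant, so the 'cycle' it finds may never have existed as a whole:

struct lock;

struct task {
	struct lock *blocked_on;	/* lock this task is waiting for */
};

struct lock {
	struct task *owner;		/* current owner, NULL if free */
};

/* returns 1 if 'waiter' appears to be in a cycle reachable from 'l' */
static int chain_walk(struct lock *l, struct task *waiter)
{
	int depth = 0;

	while (l && depth++ < 10000) {
		struct task *owner = l->owner;	/* read at time t0 */

		if (!owner)
			return 0;
		if (owner == waiter)
			return 1;	/* "deadlock" - assembled from
					 * links observed at different
					 * times, possibly never
					 * coexistent */
		/*
		 * By the time we read blocked_on (t1 > t0), the owner
		 * may have timed out, been signalled, or released 'l'
		 * and blocked elsewhere - the link we just followed
		 * may already be gone.
		 */
		l = owner->blocked_on;
	}
	return 0;
}

With all the relevant ->waiter_locks held, every link would be observed
at the same instant and any cycle found would be a real one.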
Also, how does taking only two locks at a time improve latencies? We
still have to hold the ->waiter_lock of this lock during the act, don't
we? Or can we do boosting with totally unlocked (and interrupts-enabled)
rescheduling points? If so, then the same situation could happen on UP
too, if there is a lot of rescheduling along the boosting chain.
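For reference, my understanding of the two-lock scheme - again with
made-up names, so treat it as a sketch of the idea rather than of
Esben's actual patch: boosting walks the chain hand-over-hand, holding
at most two ->waiter_locks at any moment:

struct task;

struct lock {
	spinlock_t waiter_lock;		/* protects owner + waiters */
	struct task *owner;
};

struct task {
	int prio;			/* lower value = higher prio */
	struct lock *blocked_on;
};

/* boost every owner along the chain starting at 'l' to 'prio' */
static void boost_chain(struct lock *l, int prio)
{
	spin_lock(&l->waiter_lock);
	while (l) {
		struct task *owner = l->owner;
		struct lock *next;

		if (!owner || owner->prio <= prio)
			break;			/* nothing (more) to boost */
		owner->prio = prio;

		next = owner->blocked_on;
		if (next)
			spin_lock(&next->waiter_lock);	/* second lock */
		spin_unlock(&l->waiter_lock);		/* drop the first */
		/*
		 * Window: with l->waiter_lock dropped, the part of the
		 * chain behind us may mutate - the walk is not atomic.
		 */
		l = next;
	}
	if (l)
		spin_unlock(&l->waiter_lock);
}

Whether the window between steps can bite on UP as well is exactly the
question above.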
Nevertheless, it _might_ work in practice, and it's certainly elegant
and thus tempting. Could you try to port your patch to -rt10? [you can
skip most of the conflicting rt7->rt10 deltas in rtmutex.c, I think.]
Ingo
Thread overview: 21+ messages
2006-03-26 23:42 PI patch against 2.6.16-rt9 Esben Nielsen
2006-03-26 23:47 ` Ingo Molnar
2006-03-27 0:07 ` Esben Nielsen
2006-03-27 0:11 ` Esben Nielsen
2006-03-27 0:21 ` Ingo Molnar [this message]
2006-03-27 15:00 ` Esben Nielsen
2006-03-27 23:05 ` Esben Nielsen
2006-03-28 21:02 ` Ingo Molnar
2006-03-28 20:55 ` Ingo Molnar
2006-03-28 21:17 ` Esben Nielsen
2006-03-28 21:24 ` Ingo Molnar
2006-03-28 22:51 ` Esben Nielsen
2006-03-29 7:14 ` Ingo Molnar
2006-03-29 7:59 ` Esben Nielsen
2006-03-29 12:35 ` Ingo Molnar
2006-03-28 21:36 ` Thomas Gleixner
2006-03-28 22:23 ` Esben Nielsen
2006-03-28 22:42 ` Thomas Gleixner
2006-03-28 23:34 ` Esben Nielsen
2006-03-28 23:59 ` Thomas Gleixner
2006-03-29 12:29 ` Ingo Molnar