From: "Jan Beulich" <JBeulich@novell.com>
To: George Dunlap <dunlapg@umich.edu>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
Keir Fraser <keir.fraser@eu.citrix.com>
Subject: Re: Linux spin lock enhancement on xen
Date: Tue, 24 Aug 2010 14:25:57 +0100
Message-ID: <4C73E4850200007800011E31@vpn.id2.novell.com>
In-Reply-To: <AANLkTinsXiOS5Sx6bBGnDGDMou_TO4wQGjq5gEqS4MX7@mail.gmail.com>
>>> On 24.08.10 at 11:09, George Dunlap <dunlapg@umich.edu> wrote:
> On Tue, Aug 24, 2010 at 9:48 AM, Jan Beulich <JBeulich@novell.com> wrote:
>>>> I thought the
>>>> solution he had was interesting: when yielding due to a spinlock,
>>>> rather than going to the back of the queue, just go behind one person.
>>>> I think an implementation of "yield_to" that might make sense in the
>>>> credit scheduler is:
>>>> * Put the yielding vcpu behind one vcpu
>>
>> Which clearly has the potential of burning more cycles without
>> allowing the vCPU to actually make progress.
>
> I think you may misunderstand; the yielding vcpu goes behind at least
> one vcpu on the runqueue, even if the next vcpu is lower priority. If
> there's another vcpu on the runqueue, the other vcpu always runs.
No, I understood it that way. What I was referring to is (as an
example) the case where two vCPU-s on the same pCPU's run queue
both yield: they will each move behind the other in the run queue in
close succession, but neither will really make progress, and neither
will really increase the likelihood that the respective lock holder
gets a chance to run.
> I posted some scheduler patches implementing this yield a week or two
> ago, and included some numbers. The numbers were with Windows Server
> 2008, which has queued spinlocks (equivalent of ticketed spinlocks).
> The throughput remained high even when highly over-committed. So a
> simple yield does have a significant effect. In the unlikely event
> that it is scheduled again, it will simply yield again when it sees
> that it's still waiting for the spinlock.
Immediately, or after a few (hundred) spin cycles?
> In fact, undirected-yield is one of yield-to's competitors: I don't
> think we should accept a "yield-to" patch unless it has significant
> performance gains over undirected-yield.
This position I agree with.
>> At the risk of fairness wrt other domains, or even within the
>> domain. As said above, I think it would be better to temporarily
>> merge the priorities and location in the run queue of the yielding
>> and yielded-to vCPU-s, to have the yielded-to one get the
>> better of both (with a way to revert to the original settings
>> under the control of the guest, or enforced when the borrowed
>> time quantum expires).
>
> I think doing tricks with priorities is too complicated. Complicated
> mechanisms are very difficult to predict and prone to nasty,
> hard-to-debug corner cases. I don't think it's worth exploring this
> kind of solution until it's clear that a simple solution cannot get
> reasonable performance. And I would oppose accepting any
> priority-inheritance solution into the tree unless there were
> repeatable measurements that showed that it had significant
> performance gain over a simpler solution.
And I agree with this one, too. Apart from suspecting fairness issues
with your yield_to proposal (as I wrote), my point is simply that we
won't know whether a "complicated" solution outperforms a "simple"
one unless we try it.
Jan