From: "Jan Beulich" <JBeulich@novell.com>
To: Keir Fraser <keir.fraser@eu.citrix.com>,
	George Dunlap <dunlapg@umich.edu>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: Linux spin lock enhancement on xen
Date: Tue, 24 Aug 2010 09:48:27 +0100
Message-ID: <4C73A37B0200007800011D97@vpn.id2.novell.com>
In-Reply-To: <C8993F5E.1EEDE%keir.fraser@eu.citrix.com>

>>> On 24.08.10 at 10:20, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 24/08/2010 09:08, "George Dunlap" <dunlapg@umich.edu> wrote:
> It seems to me that Jeremy's spinlock implementation provides all the info a
> scheduler would require: vcpus trying to acquire a lock are blocked, the
> lock holder wakes just the next vcpu in turn when it releases the lock. The
> scheduler at that point may have a decision to make as to whether to run the
> lock releaser, or the new lock holder, or both, but how can the guest help
> with that when it's a system-wide scheduling decision? Obviously the guest
> would presumably like all its runnable vcpus to run all of the time!

Blocking on an unavailable lock is somewhat different imo: if the
blocked vCPU didn't exhaust its time slice, I think it is entirely
valid for it to expect not to penalize the whole VM, but rather to
donate (part of) its remaining time slice to the lock holder. That
keeps other domains unaffected, while allowing the subject domain to
make better use of its resources.
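
Purely to illustrate the idea, here is a minimal toy model of such a
donation (all types and names below are hypothetical, not actual Xen
scheduler interfaces):

  /* Toy model of time-slice donation; hypothetical types, not Xen's. */
  #include <stdint.h>

  struct vcpu_slice {
      uint64_t remaining_ns;    /* time left in the current slice */
  };

  /*
   * Donate up to max_ns of the blocked vCPU's remaining slice to the
   * lock holder: the domain as a whole is not penalized, while other
   * domains' CPU shares stay untouched.
   */
  static void donate_slice(struct vcpu_slice *blocker,
                           struct vcpu_slice *holder,
                           uint64_t max_ns)
  {
      uint64_t grant = blocker->remaining_ns < max_ns
                       ? blocker->remaining_ns : max_ns;

      blocker->remaining_ns -= grant;
      holder->remaining_ns  += grant;
  }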

>>  I thought the
>> solution he had was interesting: when yielding due to a spinlock,
>> rather than going to the back of the queue, just go behind one person.
>>  I think an implementation of "yield_to" that might make sense in the
>> credit scheduler is:
>> * Put the yielding vcpu behind one cpu

Which clearly has the potential to burn more cycles without allowing
the vCPU to actually make progress.

>> * If the yield-to vcpu is not running, pull it to the front within its
>> priority.  (I.e., if it's UNDER, put it at the front so it runs next;
>> if it's OVER, make it the first OVER cpu.)

At the risk of unfairness to other domains, or even within the
domain itself. As said above, I think it would be better to temporarily
merge the priorities and run-queue positions of the yielding and
yielded-to vCPUs, so that the yielded-to one gets the better of both
(with a way to revert to the original settings under the control of
the guest, or enforced when the borrowed time quantum expires).
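
As a sketch of what I mean (again with hypothetical names, not the
credit scheduler's actual data structures): the yielded-to vCPU
temporarily takes the better of both settings, with the originals
saved for the later revert:

  /* Toy model of temporarily merging scheduling parameters. */
  enum prio { PRIO_OVER = 0, PRIO_UNDER = 1 };  /* higher is better */

  struct sched_state {
      enum prio prio;
      unsigned int rq_pos;      /* run-queue position; lower runs sooner */
  };

  struct boost {
      struct sched_state saved; /* yielded-to vCPU's original settings */
      int active;
  };

  static void boost_lock_holder(const struct sched_state *yielder,
                                struct sched_state *holder,
                                struct boost *b)
  {
      b->saved  = *holder;
      b->active = 1;

      /* Give the holder the better of both vCPUs' settings. */
      if (yielder->prio > holder->prio)
          holder->prio = yielder->prio;
      if (yielder->rq_pos < holder->rq_pos)
          holder->rq_pos = yielder->rq_pos;
  }

  static void unboost_lock_holder(struct sched_state *holder,
                                  struct boost *b)
  {
      if (b->active) {
          *holder = b->saved;
          b->active = 0;
      }
  }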

The one more difficult case I see in this model is what needs to
happen when the yielding vCPU has event delivery enabled and receives
an event, making it runnable again: in this situation, the swapping of
priority and/or run-queue placement might need to be forcibly reversed
immediately, not so much for fairness reasons as to keep event
servicing latency reasonable. After all, in such a case the vCPU
wouldn't be able to immediately use the lock it was waiting for even
once acquired; it would run the event handling code first anyway, and
hence the reason for boosting the lock holder has gone away.
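
Continuing the toy model above, the forced revert on event delivery
would then simply be (hypothetical hook name):

  /*
   * Hypothetical hook, called when the yielding vCPU receives an event
   * and becomes runnable again: return the borrowed settings at once,
   * since the vCPU will run its event handler first and no longer
   * needs the lock holder boosted.
   */
  static void on_event_makes_yielder_runnable(struct sched_state *holder,
                                              struct boost *b)
  {
      unboost_lock_holder(holder, b);
  }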

Jan

Thread overview: 23+ messages
2010-08-17  1:33 Linux spin lock enhancement on xen Mukesh Rathor
2010-08-17  7:33 ` Keir Fraser
2010-08-17  7:53 ` Jan Beulich
2010-08-18  1:58   ` Mukesh Rathor
2010-08-17 14:34 ` Ky Srinivasan
2010-08-18  1:58   ` Mukesh Rathor
2010-08-17 17:43 ` Jeremy Fitzhardinge
2010-08-18  1:58   ` Mukesh Rathor
2010-08-18 16:37     ` Jeremy Fitzhardinge
2010-08-18 17:09       ` Keir Fraser
2010-08-19  2:52         ` Mukesh Rathor
2010-08-24  8:08         ` George Dunlap
2010-08-24  8:20           ` Keir Fraser
2010-08-24  8:43             ` George Dunlap
2010-08-24  8:48             ` Jan Beulich [this message]
2010-08-24  9:09               ` George Dunlap
2010-08-24 13:25                 ` Jan Beulich
2010-08-24 16:11                   ` George Dunlap
2010-08-26 14:08                     ` Tim Deegan
2010-08-25  1:03           ` Dong, Eddie
2010-08-26  2:13           ` Mukesh Rathor
2010-08-19  2:52       ` Mukesh Rathor
2010-08-23 21:33         ` Jeremy Fitzhardinge
