From: Avi Kivity <avi@redhat.com>
To: Joerg Roedel <joro@8bytes.org>
Cc: "Zhai, Edwin" <edwin.zhai@intel.com>, Ingo Molnar <mingo@elte.hu>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH] [RESEND] KVM:VMX: Add support for Pause-Loop Exiting
Date: Sun, 27 Sep 2009 16:18:00 +0200
Message-ID: <4ABF7418.2000404@redhat.com>
In-Reply-To: <20090927140752.GD29634@8bytes.org>

On 09/27/2009 04:07 PM, Joerg Roedel wrote:
> On Sun, Sep 27, 2009 at 03:47:55PM +0200, Avi Kivity wrote:
>
>> On 09/27/2009 03:46 PM, Joerg Roedel wrote:
>>
>>>
>>>> We can't find exactly which vcpu, but we can:
>>>>
>>>> - rule out threads that are not vcpus for this guest
>>>> - rule out threads that are already running
>>>>
>>>> A major problem with sleep() is that it effectively reduces the vm
>>>> priority relative to guests that don't have spinlock contention. By
>>>> selecting a random nonrunnable vcpu belonging to this guest, we at least
>>>> preserve the guest's timeslice.
>>>>
>>>>
>>> Ok, that makes sense. But before trying that we should probably try
>>> calling just yield() instead of schedule()? I remember someone from our
>>> team here at AMD did this for Xen a while ago and already had pretty
>>> good results with that. Xen has a completely different scheduler, but
>>> maybe it's worth trying?
>>>
>>>
>> yield() is a no-op in CFS.
>>
> Hmm, true. At least when kernel.sched_compat_yield == 0, which it is on my
> distro.
> If the scheduler gave us something like a real_yield() function which
> assumes kernel.sched_compat_yield = 1, that might help. At least it's
> better than sleeping for some random amount of time.
>
>
Depends. If it's a global yield(), yes. If it's a local yield() that
doesn't rebalance the runqueues, we might be left with the spinning task
re-running.
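
(As an aside on the knob Joerg mentions: on kernels that still carry it,
kernel.sched_compat_yield can be flipped at runtime through procfs. A
minimal user-space sketch, assuming the file exists on the running
kernel; illustration only, not something we'd ship:)

#include <stdio.h>

/*
 * Equivalent to "sysctl -w kernel.sched_compat_yield=1"; restores the
 * old, aggressive sched_yield() behaviour on kernels that still expose
 * the compat knob.
 */
int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/sched_compat_yield", "w");

        if (!f) {
                perror("sched_compat_yield");
                return 1;
        }
        fputs("1\n", f);
        fclose(f);
        return 0;
}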

Also, if yield means "give up the remainder of our timeslice", then we
potentially end up sleeping for a much longer, random amount of time. If
we yield to another vcpu in the same guest we might not care, but if we
yield to some other guest we're seriously penalizing ourselves.
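
To make that concrete, here's a toy user-space sketch of the selection
policy from earlier in the thread: ignore threads that aren't vcpus of
this guest, skip vcpus that are already running, and pick one of the
remaining vcpus at random. The struct and function names are invented
for illustration; this is not the KVM code, just the idea.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of one guest's vcpus; names are made up for illustration. */
struct toy_vcpu {
        int id;
        bool running;           /* currently on a physical cpu */
};

/*
 * Pick a random vcpu of the same guest that is not currently running,
 * i.e. one that might be the preempted lock holder.  Returns NULL if
 * every vcpu is already running (nothing useful to hand the slice to).
 */
static struct toy_vcpu *pick_yield_target(struct toy_vcpu *vcpus, int n)
{
        struct toy_vcpu *candidates[n];
        int found = 0;

        for (int i = 0; i < n; i++)
                if (!vcpus[i].running)
                        candidates[found++] = &vcpus[i];

        if (!found)
                return NULL;

        return candidates[rand() % found];
}

int main(void)
{
        struct toy_vcpu vcpus[] = {
                { .id = 0, .running = true  },
                { .id = 1, .running = false },
                { .id = 2, .running = true  },
                { .id = 3, .running = false },
        };
        struct toy_vcpu *t = pick_yield_target(vcpus, 4);

        if (t)
                printf("hand the remaining timeslice to vcpu %d\n", t->id);
        return 0;
}

The point of doing this rather than sleep() is that the remaining
timeslice stays within the guest instead of being handed to whichever
other guest happens to be runnable.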
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.