From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Rik van Riel <riel@redhat.com>, S390 <linux-s390@vger.kernel.org>,
Carsten Otte <cotte@de.ibm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
KVM <kvm@vger.kernel.org>, chegu vinod <chegu_vinod@hp.com>,
"Andrew M. Theurer" <habanero@linux.vnet.ibm.com>,
LKML <linux-kernel@vger.kernel.org>, X86 <x86@kernel.org>,
Gleb Natapov <gleb@redhat.com>,
linux390@de.ibm.com,
Srivatsa Vaddagiri <srivatsa.vaddagiri@gmail.com>,
Joerg Roedel <joerg.roedel@amd.com>
Subject: Re: [PATCH RFC 1/2] kvm vcpu: Note down pause loop exit
Date: Wed, 11 Jul 2012 17:26:15 +0530
Message-ID: <4FFD69DF.1070107@linux.vnet.ibm.com>
In-Reply-To: <4FFD60EE.6020604@redhat.com>
On 07/11/2012 04:48 PM, Avi Kivity wrote:
> On 07/11/2012 01:52 PM, Raghavendra K T wrote:
>> On 07/11/2012 02:23 PM, Avi Kivity wrote:
>>> On 07/09/2012 09:20 AM, Raghavendra K T wrote:
>>>> Signed-off-by: Raghavendra K T<raghavendra.kt@linux.vnet.ibm.com>
>>>>
>>>> Noting down the pause-loop-exited vcpu helps in filtering the right
>>>> candidate to yield to. Yielding to the same vcpu may result in more
>>>> wastage of CPU.
>>>>
>>>>
>>>> struct kvm_lpage_info {
>>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>>> index f75af40..a492f5d 100644
>>>> --- a/arch/x86/kvm/svm.c
>>>> +++ b/arch/x86/kvm/svm.c
>>>> @@ -3264,6 +3264,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
>>>>
>>>> static int pause_interception(struct vcpu_svm *svm)
>>>> {
>>>> + svm->vcpu.arch.plo.pause_loop_exited = true;
>>>> kvm_vcpu_on_spin(&(svm->vcpu));
>>>> return 1;
>>>> }
>>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>>> index 32eb588..600fb3c 100644
>>>> --- a/arch/x86/kvm/vmx.c
>>>> +++ b/arch/x86/kvm/vmx.c
>>>> @@ -4945,6 +4945,7 @@ out:
>>>> static int handle_pause(struct kvm_vcpu *vcpu)
>>>> {
>>>> skip_emulated_instruction(vcpu);
>>>> + vcpu->arch.plo.pause_loop_exited = true;
>>>> kvm_vcpu_on_spin(vcpu);
>>>>
>>>
>>> This code is duplicated. Should we move it to kvm_vcpu_on_spin?
>>>
>>> That means the .plo structure needs to be in common code, but that's not
>>> too bad perhaps.
>>>
>>
>> Since PLE is very much tied to x86, and the proposed changes are very
>> much specific to the PLE handler, I thought it was better to keep this
>> arch-specific.
>>
>> So do you think it is better to move it inside kvm_vcpu_on_spin() and
>> make the PLE structure part of common code?
>
> See the discussion with Christian. PLE is tied to x86, but cpu_relax()
> and facilities to trap it are not.
Yep.
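Just to make sure we mean the same thing, here is a rough sketch of what
moving this into common code could look like; the field name and its
placement below are only my assumptions, not necessarily what a next
version of the patch would use:

	/* include/linux/kvm_host.h -- hypothetical common field */
	struct kvm_vcpu {
		/* ... existing fields ... */
		bool pause_loop_exited;	/* set when a guest spin is
					 * trapped (PLE on x86,
					 * diag 0x44 on s390, ...) */
	};

	/* virt/kvm/kvm_main.c */
	void kvm_vcpu_on_spin(struct kvm_vcpu *me)
	{
		/* set in one place instead of duplicating it in
		 * svm.c and vmx.c */
		me->pause_loop_exited = true;
		/* ... existing directed-yield loop ... */
	}

The arch pause interceptions would then go back to just calling
kvm_vcpu_on_spin().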
>
>>>
>>> This adds some tiny overhead to vcpu entry. You could remove it by
>>> using the vcpu->requests mechanism to clear the flag, since
>>> vcpu->requests is already checked on every entry.
>>
>> So IIUC, let's have a request bit for indicating PLE:
>>
>> pause_interception() /handle_pause()
>> {
>> make_request(PLE_REQUEST)
>> vcpu_on_spin()
>>
>> }
>>
>> check_eligibility()
>> {
>> !test_request(PLE_REQUEST) || (test_request(PLE_REQUEST) && dy_eligible())
>> .
>> .
>> }
>>
>> vcpu_run()
>> {
>>
>> check_request(PLE_REQUEST)
>> .
>> .
>> }
>>
>> Is this the expected flow you had in mind?
>
> Yes, something like that.
ok..
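For the record, a minimal sketch of how I read that, using a
hypothetical KVM_REQ_PLE bit (the request name, the helper below and
the exact clearing point are assumptions on my side):

	/* svm.c / vmx.c pause interceptions */
	kvm_make_request(KVM_REQ_PLE, vcpu);	/* note the PLE exit */
	kvm_vcpu_on_spin(vcpu);

	/* candidate filtering; the quoted condition above reduces to: */
	static bool dy_candidate(struct kvm_vcpu *v)
	{
		return !test_bit(KVM_REQ_PLE, &v->requests) ||
		       dy_eligible(v);
	}

	/* vcpu_enter_guest(): vcpu->requests is inspected here anyway,
	 * so clearing the bit costs nothing extra on the entry path */
	kvm_check_request(KVM_REQ_PLE, vcpu);	/* test-and-clear */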
>
>>
>> [ But my only concern was about not resetting the flag for cases where
>> we do not do guest_enter(). Will test how that goes. ]
>
> Hm, suppose we're the next-in-line for a ticket lock and exit due to
> PLE. The lock holder completes and unlocks, which really assigns the
> lock to us. So now we are the lock owner, yet we are marked as don't
> yield-to-us in the PLE code.
Yes.. off-topic, but that is solved by the kicked flag in PV spinlocks.
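Roughly the sequence you describe, and why the PV ticketlock series
should not hit it (simplified; the "kicked" flag is from that series
and its final form may differ):

	1. vcpu A is next in line for a ticket lock and spins; it
	   PLE-exits and gets marked as having pause-loop exited.
	2. vcpu B (the holder) unlocks: head++ hands the lock to A,
	   so A is now the owner while still carrying the PLE mark.
	3. With PV ticketlocks, A halts in the lock slowpath instead
	   of spinning, and B's unlock path notices the waiter, sets
	   A's kicked flag and kicks it, so A is woken explicitly
	   rather than depending on directed yield picking it.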