From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
Alan Meadows <alan.meadows@gmail.com>,
Ingo Molnar <mingo@elte.hu>,
Linus Torvalds <torvalds@linux-foundation.org>,
Peter Zijlstra <peterz@infradead.org>,
the arch/x86 maintainers <x86@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Marcelo Tosatti <mtosatti@redhat.com>, KVM <kvm@vger.kernel.org>,
Andi Kleen <andi@firstfloor.org>,
Xen Devel <xen-devel@lists.xensource.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Virtualization <virtualization@lists.linux-foundation.org>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>,
Stephan Diestelhorst <stephan.diestelhorst@amd.com>,
Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [PATCH RFC V6 0/11] Paravirtualized ticketlocks
Date: Thu, 05 Apr 2012 16:10:52 +0530 [thread overview]
Message-ID: <4F7D76B4.1090600@linux.vnet.ibm.com> (raw)
In-Reply-To: <4F7D5F5B.2020209@redhat.com>
On 04/05/2012 02:31 PM, Avi Kivity wrote:
> On 04/02/2012 12:51 PM, Raghavendra K T wrote:
>> On 04/01/2012 07:23 PM, Avi Kivity wrote:
>>> On 04/01/2012 04:48 PM, Raghavendra K T wrote:
>>>>>> I have a patch something like below in mind to try:
>>>>>>
>>>>>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>>>>>> index d3b98b1..5127668 100644
>>>>>> --- a/virt/kvm/kvm_main.c
>>>>>> +++ b/virt/kvm/kvm_main.c
>>>>>> @@ -1608,15 +1608,18 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>>>>>       * else and called schedule in __vcpu_run.  Hopefully that
>>>>>>       * VCPU is holding the lock that we need and will release it.
>>>>>>       * We approximate round-robin by starting at the last boosted VCPU.
>>>>>> +     * Priority is given to vcpus that are unhalted.
>>>>>>       */
>>>>>> -    for (pass = 0; pass < 2 && !yielded; pass++) {
>>>>>> +    for (pass = 0; pass < 3 && !yielded; pass++) {
>>>>>>           kvm_for_each_vcpu(i, vcpu, kvm) {
>>>>>>               struct task_struct *task = NULL;
>>>>>>               struct pid *pid;
>>>>>> -            if (!pass && i < last_boosted_vcpu) {
>>>>>> +            if (!pass && !vcpu->pv_unhalted)
>>>>>> +                continue;
>>>>>> +            else if (pass == 1 && i < last_boosted_vcpu) {
>>>>>>                   i = last_boosted_vcpu;
>>>>>>                   continue;
>>>>>> -            } else if (pass && i > last_boosted_vcpu)
>>>>>> +            } else if (pass == 2 && i > last_boosted_vcpu)
>>>>>>                   break;
>>>>>>               if (vcpu == me)
>>>>>>                   continue;
>>>>>>
>>>>>
>>>>> Actually I think this is unneeded.  The loop tries to find vcpus
>>>>> that are runnable but not running (vcpu_active(vcpu->wq)), and
>>>>> halted vcpus don't match this condition.
>>>>>
>>
>> Oh! I think I misinterpreted your statement. Hmm, I get it now: you
>> suggested removing the if (vcpu == me) condition.
>
> No, the entire patch is unneeded. My original comment was:
>
>> from the PLE handler, don't wake up a vcpu that is sleeping because it
> is waiting for a kick
>
> But the PLE handler never wakes up sleeping vcpus anyway.
I agree with you. It is already doing that. But my approach here is a
little different.

When we try to do yield_to, we have two classes of vcpus (one is a
subset of the other), viz.:
1) runnable and kicked  <  (subset of)  2) just runnable

What we are trying to do here is target 1) first, so that we get good
lock progress.

Here is the sequence I was talking about:

vcpu1 releases lock -> finds that vcpu2 is the next candidate ->
kick hypercall -> kvm_pv_kick_cpu_op -> set kicked flag ->
kvm_vcpu_kick(vcpu2)

At this point vcpu2 is waiting to get scheduled, but the yield_to call
above can wake up *anybody*.

I agree this is not serious (rather, it is overhead) when there are
few vcpus, but with a larger number of vcpus per VM it may not scale
well. My attempt was to fix that.

Let me know if I am completely missing something..
>
> There's still a conflict with PLE in that it may trigger during the spin
> phase and send a random yield_to() somewhere. Maybe it's sufficient to
> tune the PLE timeout to be longer than the spinlock timeout.
>
Ok... But we should also be cautious that we may end up halting more
often, even when we are about to get the spinlock.
Need more study on this.