public inbox for kvm@vger.kernel.org
* Re: [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
       [not found]     ` <577C8FE3.3010208@linux.vnet.ibm.com>
@ 2016-07-06  6:46       ` Wanpeng Li
  2016-07-06  7:58         ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Wanpeng Li @ 2016-07-06  6:46 UTC (permalink / raw)
  To: xinhui
  Cc: linux-s390, Davidlohr Bueso, kvm, Peter Zijlstra, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, benh, schwidefsky,
	Paolo Bonzini, Paul McKenney, linuxppc-dev

Cc Paolo, kvm ml
2016-07-06 12:58 GMT+08:00 xinhui <xinhui.pan@linux.vnet.ibm.com>:
> Hi, wanpeng
>
> On 2016年07月05日 17:57, Wanpeng Li wrote:
>>
>> Hi Xinhui,
>> 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>:
>>>
>>> This is to fix some lock holder preemption issues. Some other lock
>>> implementations do a spin loop before acquiring the lock itself.
>>> Currently the kernel has an interface bool vcpu_is_preempted(int cpu).
>>> It takes a cpu number as its parameter and returns true if that cpu is
>>> preempted. The kernel can then break out of the spin loop based on the
>>> return value of vcpu_is_preempted.
>>>
>>> As the kernel already uses this interface, let's support it.
>>>
>>> Only pSeries needs to support it. The fact is that powerNV is built
>>> into the same kernel image as pSeries, so we need to return false when
>>> running as powerNV. Another fact is that lppaca->yield_count stays
>>> zero on powerNV, so we can just skip the machine type check.
>>
>>
>> Can lock holder vCPU preemption be detected by pSeries hardware or by
>> a paravirt method?
>>
> There is one shared struct between the kernel and PowerVM/KVM, and we read
> the yield_count of this struct to detect whether a vcpu is running or not.
> So it's easy for ppc to implement such an interface. Note that yield_count
> is set by PowerVM/KVM, and only pSeries can run a guest for now. :)
>
> I also reviewed the x86-related code; it looks like we need to add one
> hypercall to get such vcpu preemption info?

There is no mechanism to record the lock holder in x86 kvm; maybe we
don't need to depend on the PLE handler algorithm to guess it if we can
know the lock holder vCPU directly.

Regards,
Wanpeng Li
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
  2016-07-06  6:46       ` [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check Wanpeng Li
@ 2016-07-06  7:58         ` Peter Zijlstra
  2016-07-06  8:32           ` Wanpeng Li
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-06  7:58 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, xinhui, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, mpe, schwidefsky,
	Paolo Bonzini, Paul McKenney, linuxppc-dev

On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
> > So it's easy for ppc to implement such an interface. Note that
> > yield_count is set by PowerVM/KVM, and only pSeries can run a guest
> > for now. :)
> >
> > I also reviewed the x86-related code; it looks like we need to add one
> > hypercall to get such vcpu preemption info?
> 
> There is no mechanism to record the lock holder in x86 kvm; maybe we
> don't need to depend on the PLE handler algorithm to guess it if we can
> know the lock holder vCPU directly.

x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
or not. I'm just not sure if that is visible to the guest or how to make
it so.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
  2016-07-06  7:58         ` Peter Zijlstra
@ 2016-07-06  8:32           ` Wanpeng Li
  2016-07-06 10:18             ` xinhui
  0 siblings, 1 reply; 15+ messages in thread
From: Wanpeng Li @ 2016-07-06  8:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, xinhui, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, mpe, schwidefsky,
	Paolo Bonzini, Paul McKenney, linuxppc-dev

2016-07-06 15:58 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
> On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
>> > So it's easy for ppc to implement such an interface. Note that
>> > yield_count is set by PowerVM/KVM, and only pSeries can run a guest
>> > for now. :)
>> >
>> > I also reviewed the x86-related code; it looks like we need to add one
>> > hypercall to get such vcpu preemption info?
>>
>> There is no mechanism to record the lock holder in x86 kvm; maybe we
>> don't need to depend on the PLE handler algorithm to guess it if we can
>> know the lock holder vCPU directly.
>
> x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
> or not. I'm just not sure if that is visible to the guest or how to make
> it so.

Yeah, I missed it. I can volunteer to implement it once there is an
agreed idea; ping Paolo. :)

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
  2016-07-06  8:32           ` Wanpeng Li
@ 2016-07-06 10:18             ` xinhui
  0 siblings, 0 replies; 15+ messages in thread
From: xinhui @ 2016-07-06 10:18 UTC (permalink / raw)
  To: Wanpeng Li, Peter Zijlstra
  Cc: linux-s390, Davidlohr Bueso, kvm, mpe, boqun.feng, will.deacon,
	linux-kernel@vger.kernel.org, Waiman Long, virtualization,
	Ingo Molnar, Paul Mackerras, benh, schwidefsky, Paolo Bonzini,
	Paul McKenney, linuxppc-dev


On 2016年07月06日 16:32, Wanpeng Li wrote:
> 2016-07-06 15:58 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
>> On Wed, Jul 06, 2016 at 02:46:34PM +0800, Wanpeng Li wrote:
>>>> So it's easy for ppc to implement such an interface. Note that
>>>> yield_count is set by PowerVM/KVM, and only pSeries can run a guest
>>>> for now. :)
>>>>
>>>> I also reviewed the x86-related code; it looks like we need to add one
>>>> hypercall to get such vcpu preemption info?
>>>
>>> There is no mechanism to record the lock holder in x86 kvm; maybe we
>>> don't need to depend on the PLE handler algorithm to guess it if we can
>>> know the lock holder vCPU directly.
>>
>> x86/kvm has vcpu->preempted to indicate if a vcpu is currently preempted
>> or not. I'm just not sure if that is visible to the guest or how to make
>> it so.
>
> Yeah, I missed it. I can volunteer to implement it once there is an
> agreed idea; ping Paolo. :)
>
glad to know that. :)


> Regards,
> Wanpeng Li
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
       [not found]   ` <14a24854-9787-e4a1-c9a8-76eba4e97301@redhat.com>
@ 2016-07-06 12:08     ` Wanpeng Li
  2016-07-06 12:28       ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Wanpeng Li @ 2016-07-06 12:08 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, Peter Zijlstra,
	Pan Xinhui, boqun.feng, will.deacon, linux-kernel@vger.kernel.org,
	Waiman Long, virtualization, Ingo Molnar, Paul Mackerras, mpe,
	schwidefsky, Paul McKenney, linuxppc-dev

2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
>
> On 06/07/2016 08:52, Peter Zijlstra wrote:
>> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote:
>>> changes from v1:
>>>      a simpler definition of the default vcpu_is_preempted
>>>      skip the machine type check on ppc, and add a config option; remove the dedicated macro.
>>>      add one patch to drop the overloads of rwsem_spin_on_owner and mutex_spin_on_owner.
>>>      add more comments
>>>      thanks to boqun's and Peter's suggestions.
>>>
>>> This patch set aims to fix lock holder preemption issues.
>>>
>>> test-case:
>>> perf record -a perf bench sched messaging -g 400 -p && perf report
>>>
>>> 18.09%  sched-messaging  [kernel.vmlinux]  [k] osq_lock
>>> 12.28%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
>>>  5.27%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
>>>  3.89%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task
>>>  3.64%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
>>>  3.41%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner.is
>>>  2.49%  sched-messaging  [kernel.vmlinux]  [k] system_call
>>>
>>> We introduce the interface bool vcpu_is_preempted(int cpu) and use it in some
>>> spin loops of osq_lock, rwsem_spin_on_owner and mutex_spin_on_owner.
>>> These spin_on_owner variants also caused rcu stalls before we applied this patch set.
>>>
>>
>> Paolo, could you help out with an (x86) KVM interface for this?
>
> If it's just for spin loops, you can check if the version field in the
> steal time structure has changed.

Steal time is not updated until just before the next vmentry, except on
a wrmsr to MSR_KVM_STEAL_TIME. So it can't indicate that the vCPU is
preempted right now, right?

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-06 12:08     ` [PATCH v2 0/4] implement " Wanpeng Li
@ 2016-07-06 12:28       ` Paolo Bonzini
  2016-07-06 13:03         ` Wanpeng Li
  2016-07-07  8:48         ` Wanpeng Li
  0 siblings, 2 replies; 15+ messages in thread
From: Paolo Bonzini @ 2016-07-06 12:28 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, Peter Zijlstra,
	Pan Xinhui, boqun.feng, will.deacon, linux-kernel@vger.kernel.org,
	Waiman Long, virtualization, Ingo Molnar, Paul Mackerras, mpe,
	schwidefsky, Paul McKenney, linuxppc-dev



On 06/07/2016 14:08, Wanpeng Li wrote:
> 2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>>
>>
>> On 06/07/2016 08:52, Peter Zijlstra wrote:
>>> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote:
>>>> changes from v1:
>>>>      a simpler definition of the default vcpu_is_preempted
>>>>      skip the machine type check on ppc, and add a config option; remove the dedicated macro.
>>>>      add one patch to drop the overloads of rwsem_spin_on_owner and mutex_spin_on_owner.
>>>>      add more comments
>>>>      thanks to boqun's and Peter's suggestions.
>>>>
>>>> This patch set aims to fix lock holder preemption issues.
>>>>
>>>> test-case:
>>>> perf record -a perf bench sched messaging -g 400 -p && perf report
>>>>
>>>> 18.09%  sched-messaging  [kernel.vmlinux]  [k] osq_lock
>>>> 12.28%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
>>>>  5.27%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
>>>>  3.89%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task
>>>>  3.64%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
>>>>  3.41%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner.is
>>>>  2.49%  sched-messaging  [kernel.vmlinux]  [k] system_call
>>>>
>>>> We introduce the interface bool vcpu_is_preempted(int cpu) and use it in some
>>>> spin loops of osq_lock, rwsem_spin_on_owner and mutex_spin_on_owner.
>>>> These spin_on_owner variants also caused rcu stalls before we applied this patch set.
>>>
>>> Paolo, could you help out with an (x86) KVM interface for this?
>>
>> If it's just for spin loops, you can check if the version field in the
>> steal time structure has changed.
> 
> Steal time is not updated until just before the next vmentry, except on
> a wrmsr to MSR_KVM_STEAL_TIME. So it can't indicate that the vCPU is
> preempted right now, right?

Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
VCPU has been scheduled out since the last time the guest reset the bit.
 The guest can use an xchg to test-and-clear it.  The bit can be
accessed at any time, independent of the version field.

Paolo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-06 12:28       ` Paolo Bonzini
@ 2016-07-06 13:03         ` Wanpeng Li
  2016-07-07  8:48         ` Wanpeng Li
  1 sibling, 0 replies; 15+ messages in thread
From: Wanpeng Li @ 2016-07-06 13:03 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, Peter Zijlstra,
	Pan Xinhui, boqun.feng, will.deacon, linux-kernel@vger.kernel.org,
	Waiman Long, virtualization, Ingo Molnar, Paul Mackerras, mpe,
	schwidefsky, Paul McKenney, linuxppc-dev

2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
>
> On 06/07/2016 14:08, Wanpeng Li wrote:
>> 2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>>>
>>>
>>> On 06/07/2016 08:52, Peter Zijlstra wrote:
>>>> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote:
>>>>> changes from v1:
>>>>>      a simpler definition of the default vcpu_is_preempted
>>>>>      skip the machine type check on ppc, and add a config option; remove the dedicated macro.
>>>>>      add one patch to drop the overloads of rwsem_spin_on_owner and mutex_spin_on_owner.
>>>>>      add more comments
>>>>>      thanks to boqun's and Peter's suggestions.
>>>>>
>>>>> This patch set aims to fix lock holder preemption issues.
>>>>>
>>>>> test-case:
>>>>> perf record -a perf bench sched messaging -g 400 -p && perf report
>>>>>
>>>>> 18.09%  sched-messaging  [kernel.vmlinux]  [k] osq_lock
>>>>> 12.28%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
>>>>>  5.27%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
>>>>>  3.89%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task
>>>>>  3.64%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
>>>>>  3.41%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner.is
>>>>>  2.49%  sched-messaging  [kernel.vmlinux]  [k] system_call
>>>>>
>>>>> We introduce the interface bool vcpu_is_preempted(int cpu) and use it in some
>>>>> spin loops of osq_lock, rwsem_spin_on_owner and mutex_spin_on_owner.
>>>>> These spin_on_owner variants also caused rcu stalls before we applied this patch set.
>>>>
>>>> Paolo, could you help out with an (x86) KVM interface for this?
>>>
>>> If it's just for spin loops, you can check if the version field in the
>>> steal time structure has changed.
>>
>> Steal time is not updated until just before the next vmentry, except on
>> a wrmsr to MSR_KVM_STEAL_TIME. So it can't indicate that the vCPU is
>> preempted right now, right?
>
> Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
> indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
> VCPU has been scheduled out since the last time the guest reset the bit.
>  The guest can use an xchg to test-and-clear it.  The bit can be
> accessed at any time, independent of the version field.

I will try to implement it tomorrow, thanks for your proposal. :)

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-06 12:28       ` Paolo Bonzini
  2016-07-06 13:03         ` Wanpeng Li
@ 2016-07-07  8:48         ` Wanpeng Li
  2016-07-07  9:42           ` Peter Zijlstra
  1 sibling, 1 reply; 15+ messages in thread
From: Wanpeng Li @ 2016-07-07  8:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, Peter Zijlstra,
	Pan Xinhui, boqun.feng, will.deacon, linux-kernel@vger.kernel.org,
	Waiman Long, virtualization, Ingo Molnar, Paul Mackerras, mpe,
	schwidefsky, Paul McKenney, linuxppc-dev

2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
>
> On 06/07/2016 14:08, Wanpeng Li wrote:
>> 2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>>>
>>>
>>> On 06/07/2016 08:52, Peter Zijlstra wrote:
>>>> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote:
>>>>> changes from v1:
>>>>>      a simpler definition of the default vcpu_is_preempted
>>>>>      skip the machine type check on ppc, and add a config option; remove the dedicated macro.
>>>>>      add one patch to drop the overloads of rwsem_spin_on_owner and mutex_spin_on_owner.
>>>>>      add more comments
>>>>>      thanks to boqun's and Peter's suggestions.
>>>>>
>>>>> This patch set aims to fix lock holder preemption issues.
>>>>>
>>>>> test-case:
>>>>> perf record -a perf bench sched messaging -g 400 -p && perf report
>>>>>
>>>>> 18.09%  sched-messaging  [kernel.vmlinux]  [k] osq_lock
>>>>> 12.28%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
>>>>>  5.27%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
>>>>>  3.89%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task
>>>>>  3.64%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
>>>>>  3.41%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner.is
>>>>>  2.49%  sched-messaging  [kernel.vmlinux]  [k] system_call
>>>>>
>>>>> We introduce the interface bool vcpu_is_preempted(int cpu) and use it in some
>>>>> spin loops of osq_lock, rwsem_spin_on_owner and mutex_spin_on_owner.
>>>>> These spin_on_owner variants also caused rcu stalls before we applied this patch set.
>>>>
>>>> Paolo, could you help out with an (x86) KVM interface for this?
>>>
>>> If it's just for spin loops, you can check if the version field in the
>>> steal time structure has changed.
>>
>> Steal time is not updated until just before the next vmentry, except on
>> a wrmsr to MSR_KVM_STEAL_TIME. So it can't indicate that the vCPU is
>> preempted right now, right?
>
> Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
> indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
> VCPU has been scheduled out since the last time the guest reset the bit.
>  The guest can use an xchg to test-and-clear it.  The bit can be
> accessed at any time, independent of the version field.

If one vCPU is preempted and the guest checks it several times before
the vCPU is scheduled in again, then the first time we get "vCPU is
preempted"; however, since the field is then cleared, the second time we
get "vCPU is running".

Do you mean we should call record_steal_time() in both kvm_sched_in()
and kvm_sched_out() to record this field? Btw, should we keep both
vcpu->preempted and kvm_steal_time's "vCPU preempted" field present
simultaneously?

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07  8:48         ` Wanpeng Li
@ 2016-07-07  9:42           ` Peter Zijlstra
  2016-07-07 10:12             ` Wanpeng Li
                               ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-07  9:42 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

On Thu, Jul 07, 2016 at 04:48:05PM +0800, Wanpeng Li wrote:
> 2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> > Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
> > indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
> > VCPU has been scheduled out since the last time the guest reset the bit.
> >  The guest can use an xchg to test-and-clear it.  The bit can be
> > accessed at any time, independent of the version field.
> 
> If one vCPU is preempted and the guest checks it several times before
> the vCPU is scheduled in again, then the first time we get "vCPU is
> preempted"; however, since the field is then cleared, the second time we
> get "vCPU is running".
>
> Do you mean we should call record_steal_time() in both kvm_sched_in()
> and kvm_sched_out() to record this field? Btw, should we keep both
> vcpu->preempted and kvm_steal_time's "vCPU preempted" field present
> simultaneously?

I suspect you want something like so; except this has holes in.

We clear KVM_ST_PAD_PREEMPT before disabling preemption and we set it
after enabling it; this means that if we get preempted in between, the
vcpu is reported as running even though it very much is not.

Fixing that requires much larger surgery.

---
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b2766723c951..117270df43b6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1997,8 +1997,29 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.pv_time_enabled = false;
 }
 
+static void update_steal_time_preempt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_steal_time *st;
+
+	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+		return;
+
+	if (unlikely(kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
+		return;
+
+	st = &vcpu->arch.st.steal;
+
+	st->pad[KVM_ST_PAD_PREEMPT] = 1; /* we've stopped running */
+
+	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+		st, sizeof(struct kvm_steal_time));
+}
+
 static void record_steal_time(struct kvm_vcpu *vcpu)
 {
+	struct kvm_steal_time *st;
+
 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
 		return;
 
@@ -2006,29 +2027,34 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
 		return;
 
-	if (vcpu->arch.st.steal.version & 1)
-		vcpu->arch.st.steal.version += 1;  /* first time write, random junk */
+	st = &vcpu->arch.st.steal;
+
+	if (st->version & 1) {
+		st->flags = KVM_ST_FLAG_PREEMPT;
+		st->version += 1;  /* first time write, random junk */
+	}
 
-	vcpu->arch.st.steal.version += 1;
+	st->version += 1;
 
 	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
-		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+		st, sizeof(struct kvm_steal_time));
 
 	smp_wmb();
 
-	vcpu->arch.st.steal.steal += current->sched_info.run_delay -
+	st->steal += current->sched_info.run_delay -
 		vcpu->arch.st.last_steal;
 	vcpu->arch.st.last_steal = current->sched_info.run_delay;
+	st->pad[KVM_ST_PAD_PREEMPT] = 0; /* we're about to start running */
 
 	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
-		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+		st, sizeof(struct kvm_steal_time));
 
 	smp_wmb();
 
-	vcpu->arch.st.steal.version += 1;
+	st->version += 1;
 
 	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
-		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+		st, sizeof(struct kvm_steal_time));
 }
 
 int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
@@ -6693,6 +6719,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	preempt_enable();
 
+	update_steal_time_preempt(vcpu);
+
 	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
 
 	/*

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07  9:42           ` Peter Zijlstra
@ 2016-07-07 10:12             ` Wanpeng Li
  2016-07-07 10:27               ` Wanpeng Li
  2016-07-07 11:08               ` Peter Zijlstra
  2016-07-07 11:09             ` Peter Zijlstra
  2016-07-07 11:21             ` Peter Zijlstra
  2 siblings, 2 replies; 15+ messages in thread
From: Wanpeng Li @ 2016-07-07 10:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

2016-07-07 17:42 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
> On Thu, Jul 07, 2016 at 04:48:05PM +0800, Wanpeng Li wrote:
>> 2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>> > Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
>> > indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
>> > VCPU has been scheduled out since the last time the guest reset the bit.
>> >  The guest can use an xchg to test-and-clear it.  The bit can be
>> > accessed at any time, independent of the version field.
>>
>> If one vCPU is preempted and the guest checks it several times before
>> the vCPU is scheduled in again, then the first time we get "vCPU is
>> preempted"; however, since the field is then cleared, the second time we
>> get "vCPU is running".
>>
>> Do you mean we should call record_steal_time() in both kvm_sched_in()
>> and kvm_sched_out() to record this field? Btw, should we keep both
>> vcpu->preempted and kvm_steal_time's "vCPU preempted" field present
>> simultaneously?
>
> I suspect you want something like so; except this has holes in.
>
> We clear KVM_ST_PAD_PREEMPT before disabling preemption and we set it
> after enabling it; this means that if we get preempted in between, the
> vcpu is reported as running even though it very much is not.

Paolo also pointed this out to me offline yesterday: "Please change
pad[12] to "__u32 preempted; __u32 pad[11];" too, and remember to
update Documentation/virtual/kvm/msr.txt!". Btw, doing this in a
preemption notifier means that the vCPU was really preempted on the
host; relying on vmexit has different semantics, I think.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07 10:12             ` Wanpeng Li
@ 2016-07-07 10:27               ` Wanpeng Li
  2016-07-07 11:15                 ` Peter Zijlstra
  2016-07-07 11:08               ` Peter Zijlstra
  1 sibling, 1 reply; 15+ messages in thread
From: Wanpeng Li @ 2016-07-07 10:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

2016-07-07 18:12 GMT+08:00 Wanpeng Li <kernellwp@gmail.com>:
> 2016-07-07 17:42 GMT+08:00 Peter Zijlstra <peterz@infradead.org>:
>> On Thu, Jul 07, 2016 at 04:48:05PM +0800, Wanpeng Li wrote:
>>> 2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>>> > Hmm, you're right.  We can use bit 0 of struct kvm_steal_time's flags to
>>> > indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
>>> > VCPU has been scheduled out since the last time the guest reset the bit.
>>> >  The guest can use an xchg to test-and-clear it.  The bit can be
>>> > accessed at any time, independent of the version field.
>>>
>>> If one vCPU is preempted and the guest checks it several times before
>>> the vCPU is scheduled in again, then the first time we get "vCPU is
>>> preempted"; however, since the field is then cleared, the second time we
>>> get "vCPU is running".
>>>
>>> Do you mean we should call record_steal_time() in both kvm_sched_in()
>>> and kvm_sched_out() to record this field? Btw, should we keep both
>>> vcpu->preempted and kvm_steal_time's "vCPU preempted" field present
>>> simultaneously?
>>
>> I suspect you want something like so; except this has holes in.
>>
>> We clear KVM_ST_PAD_PREEMPT before disabling preemption and we set it
>> after enabling it; this means that if we get preempted in between, the
>> vcpu is reported as running even though it very much is not.
>
> Paolo also pointed this out to me offline yesterday: "Please change
> pad[12] to "__u32 preempted; __u32 pad[11];" too, and remember to
> update Documentation/virtual/kvm/msr.txt!". Btw, doing this in a
> preemption notifier means that the vCPU was really preempted on the
> host; relying on vmexit has different semantics, I think.

In addition, I see that xen's vcpu_runstate_info::state is updated
during schedule, so I think I can do something similar through the kvm
preemption notifier. IIUC, the xen hypervisor has a
VCPUOP_get_runstate_info hypercall implementation, so the desired
interface can be implemented if a hypercall callsite is added in domU.
I can add a hypercall to kvm similarly.

Paolo, thoughts?

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07 10:12             ` Wanpeng Li
  2016-07-07 10:27               ` Wanpeng Li
@ 2016-07-07 11:08               ` Peter Zijlstra
  1 sibling, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-07 11:08 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

On Thu, Jul 07, 2016 at 06:12:51PM +0800, Wanpeng Li wrote:

> Btw, doing this in a preemption notifier means that the vCPU was
> really preempted on the host; relying on vmexit has different
> semantics, I think.

Not sure; suppose the vcpu is about to reenter the guest, e.g. we're in
vcpu_enter_guest() but before the preempt_disable(), and the thread gets
preempted. Are we then not preempted? The vcpu might still very much be
in the running state but had to service a vmexit due to an MSR or IO
port access or whatnot.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07  9:42           ` Peter Zijlstra
  2016-07-07 10:12             ` Wanpeng Li
@ 2016-07-07 11:09             ` Peter Zijlstra
  2016-07-07 11:21             ` Peter Zijlstra
  2 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-07 11:09 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

On Thu, Jul 07, 2016 at 11:42:15AM +0200, Peter Zijlstra wrote:
> I suspect you want something like so; except this has holes in it.
>
> We clear KVM_ST_PAD_PREEMPT before disabling preemption and we set it
> after enabling it; this means that if we get preempted in between, the
> vcpu is reported as running even though it very much is not.
>
> Fixing that requires much larger surgery.

Note that this same hole is already a 'problem' for steal time
accounting. The thread can accrue further delays (iow steal time) after
we've called record_steal_time(). These delays will go unaccounted.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07 10:27               ` Wanpeng Li
@ 2016-07-07 11:15                 ` Peter Zijlstra
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-07 11:15 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

On Thu, Jul 07, 2016 at 06:27:26PM +0800, Wanpeng Li wrote:
> In addition, I see that xen's vcpu_runstate_info::state is updated
> during schedule, so I think I can do something similar through the kvm
> preemption notifier. IIUC, the xen hypervisor has a
> VCPUOP_get_runstate_info hypercall implementation, so the desired
> interface can be implemented if a hypercall callsite is added in domU.
> I can add a hypercall to kvm similarly.

So I suspect Xen has the page it is writing to pinned in memory, so
that a write to it is guaranteed not to fault.

Otherwise I cannot see this working.

That is part of the larger surgery required for KVM steal time to get
'fixed'. Currently that steal time stuff uses kvm_write_guest_cached()
which appears to be able to fault in guest pages.

Or I'm not reading this stuff right; which is entirely possible.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] implement vcpu preempted check
  2016-07-07  9:42           ` Peter Zijlstra
  2016-07-07 10:12             ` Wanpeng Li
  2016-07-07 11:09             ` Peter Zijlstra
@ 2016-07-07 11:21             ` Peter Zijlstra
  2 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2016-07-07 11:21 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-s390, Davidlohr Bueso, benh, kvm, mpe, boqun.feng,
	will.deacon, linux-kernel@vger.kernel.org, Waiman Long,
	virtualization, Ingo Molnar, Paul Mackerras, Pan Xinhui,
	schwidefsky, Paolo Bonzini, Paul McKenney, linuxppc-dev

On Thu, Jul 07, 2016 at 11:42:15AM +0200, Peter Zijlstra wrote:
> +static void update_steal_time_preempt(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_steal_time *st;
> +
> +	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
> +		return;
> +
> +	if (unlikely(kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
> +		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
> +		return;
> +
> +	st = &vcpu->arch.st.steal;
> +
> +	st->pad[KVM_ST_PAD_PREEMPT] = 1; /* we've stopped running */

So maybe have this be:

	... = kvm_vcpu_running();

That avoids marking the vcpu preempted when we do hlt and such.

> +	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
> +		st, sizeof(struct kvm_steal_time));
> +}

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2016-07-07 11:21 UTC | newest]

Thread overview: 15+ messages
     [not found] <1467124991-13164-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>
     [not found] ` <1467124991-13164-3-git-send-email-xinhui.pan@linux.vnet.ibm.com>
     [not found]   ` <CANRm+CxQMSrg69p2Xey829Mz4Au9YCai+4JD17i9urdbj0VgkQ@mail.gmail.com>
     [not found]     ` <577C8FE3.3010208@linux.vnet.ibm.com>
2016-07-06  6:46       ` [PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check Wanpeng Li
2016-07-06  7:58         ` Peter Zijlstra
2016-07-06  8:32           ` Wanpeng Li
2016-07-06 10:18             ` xinhui
     [not found] ` <20160706065255.GH30909@twins.programming.kicks-ass.net>
     [not found]   ` <14a24854-9787-e4a1-c9a8-76eba4e97301@redhat.com>
2016-07-06 12:08     ` [PATCH v2 0/4] implement " Wanpeng Li
2016-07-06 12:28       ` Paolo Bonzini
2016-07-06 13:03         ` Wanpeng Li
2016-07-07  8:48         ` Wanpeng Li
2016-07-07  9:42           ` Peter Zijlstra
2016-07-07 10:12             ` Wanpeng Li
2016-07-07 10:27               ` Wanpeng Li
2016-07-07 11:15                 ` Peter Zijlstra
2016-07-07 11:08               ` Peter Zijlstra
2016-07-07 11:09             ` Peter Zijlstra
2016-07-07 11:21             ` Peter Zijlstra
