public inbox for kvm@vger.kernel.org
* [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
@ 2013-04-28  7:24 Jan Kiszka
  2013-04-28 10:36 ` Gleb Natapov
  2013-04-28 14:30 ` Ren, Yongjie
  0 siblings, 2 replies; 11+ messages in thread
From: Jan Kiszka @ 2013-04-28  7:24 UTC (permalink / raw)
  To: Gleb Natapov, Marcelo Tosatti; +Cc: kvm, Nakajima, Jun, Ren, Yongjie

From: Jan Kiszka <jan.kiszka@siemens.com>

While a nested run is pending, vmx_queue_exception is only called to
requeue exceptions that were previously picked up via
vmx_cancel_injection. Therefore, we must not check for PF interception
by L1; doing so could cause a bogus nested vmexit.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---

This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
without problems. Yongjie, please check if it resolves your issue(s) as
well.

 arch/x86/kvm/vmx.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index d663a59..45eb949 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1917,7 +1917,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
 	if (nr == PF_VECTOR && is_guest_mode(vcpu) &&
-		nested_pf_handled(vcpu))
+	    !vmx->nested.nested_run_pending && nested_pf_handled(vcpu))
 		return;
 
 	if (has_error_code) {
-- 
1.7.3.4
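For readers who don't have vmx.c in front of them, the decision this one-line fix changes can be modeled as a small standalone function. The struct, field, and enum names below are illustrative stand-ins for the real KVM state, and "nested vmexit" abstracts what nested_pf_handled() arranges; this is a sketch of the control flow, not the actual implementation:

```c
#include <stdbool.h>

/* Hypothetical, simplified model of the relevant state; the real code
 * in arch/x86/kvm/vmx.c carries far more context. */
struct nested_state {
	bool nested_run_pending;	/* L1->L2 entry requested, not yet run */
	bool l1_intercepts_pf;		/* L1's exception bitmap traps #PF */
};

enum queue_result { INJECTED_TO_L2, NESTED_VMEXIT_TO_L1 };

#define PF_VECTOR 14

/* Sketch of the fixed vmx_queue_exception() decision: while a nested
 * run is pending, any exception being queued is only a requeue of one
 * that vmx_cancel_injection() picked up, so the L1 intercept check must
 * be skipped -- otherwise we would synthesize a bogus nested vmexit. */
static enum queue_result queue_exception(const struct nested_state *n,
					 unsigned nr, bool guest_mode)
{
	if (nr == PF_VECTOR && guest_mode &&
	    !n->nested_run_pending && n->l1_intercepts_pf)
		return NESTED_VMEXIT_TO_L1;
	return INJECTED_TO_L2;
}
```

With the pre-patch logic (no `!nested_run_pending` term), the first case below would wrongly take the nested-vmexit path while a nested run was pending.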

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
  2013-04-28  7:24 [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run Jan Kiszka
@ 2013-04-28 10:36 ` Gleb Natapov
  2013-04-28 14:30 ` Ren, Yongjie
  1 sibling, 0 replies; 11+ messages in thread
From: Gleb Natapov @ 2013-04-28 10:36 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Marcelo Tosatti, kvm, Nakajima, Jun, Ren, Yongjie

On Sun, Apr 28, 2013 at 09:24:41AM +0200, Jan Kiszka wrote:
> From: Jan Kiszka <jan.kiszka@siemens.com>
> 
> While a nested run is pending, vmx_queue_exception is only called to
> requeue exceptions that were previously picked up via
> vmx_cancel_injection. Therefore, we must not check for PF interception
> by L1; doing so could cause a bogus nested vmexit.
> 
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Applied, thanks. We should get rid of the nested_run_pending state by
re-executing the instruction if emulation cannot be completed.

> ---
> 
> This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
> without problems. Yongjie, please check if it resolves your issue(s) as
> well.
> 
>  arch/x86/kvm/vmx.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d663a59..45eb949 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -1917,7 +1917,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
>  	u32 intr_info = nr | INTR_INFO_VALID_MASK;
>  
>  	if (nr == PF_VECTOR && is_guest_mode(vcpu) &&
> -		nested_pf_handled(vcpu))
> +	    !vmx->nested.nested_run_pending && nested_pf_handled(vcpu))
>  		return;
>  
>  	if (has_error_code) {
> -- 
> 1.7.3.4

--
			Gleb.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
  2013-04-28  7:24 [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run Jan Kiszka
  2013-04-28 10:36 ` Gleb Natapov
@ 2013-04-28 14:30 ` Ren, Yongjie
  2013-04-28 14:33   ` Gleb Natapov
  1 sibling, 1 reply; 11+ messages in thread
From: Ren, Yongjie @ 2013-04-28 14:30 UTC (permalink / raw)
  To: Jan Kiszka, Gleb Natapov, Marcelo Tosatti; +Cc: kvm, Nakajima, Jun

> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org]
> On Behalf Of Jan Kiszka
> Sent: Sunday, April 28, 2013 3:25 PM
> To: Gleb Natapov; Marcelo Tosatti
> Cc: kvm; Nakajima, Jun; Ren, Yongjie
> Subject: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> during nested run
> 
> From: Jan Kiszka <jan.kiszka@siemens.com>
> 
> While a nested run is pending, vmx_queue_exception is only called to
> requeue exceptions that were previously picked up via
> vmx_cancel_injection. Therefore, we must not check for PF interception
> by L1; doing so could cause a bogus nested vmexit.
> 
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> ---
> 
> This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
> without problems. Yongjie, please check if it resolves your issue(s) as
> well.
> 
The two patches fix my issue. With both of them applied, I can run
more tests against the next branch.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
  2013-04-28 14:30 ` Ren, Yongjie
@ 2013-04-28 14:33   ` Gleb Natapov
  2013-04-28 16:20     ` Ren, Yongjie
  0 siblings, 1 reply; 11+ messages in thread
From: Gleb Natapov @ 2013-04-28 14:33 UTC (permalink / raw)
  To: Ren, Yongjie; +Cc: Jan Kiszka, Marcelo Tosatti, kvm, Nakajima, Jun

On Sun, Apr 28, 2013 at 02:30:38PM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org]
> > On Behalf Of Jan Kiszka
> > Sent: Sunday, April 28, 2013 3:25 PM
> > To: Gleb Natapov; Marcelo Tosatti
> > Cc: kvm; Nakajima, Jun; Ren, Yongjie
> > Subject: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> > during nested run
> > 
> > From: Jan Kiszka <jan.kiszka@siemens.com>
> > 
> > While a nested run is pending, vmx_queue_exception is only called to
> > requeue exceptions that were previously picked up via
> > vmx_cancel_injection. Therefore, we must not check for PF interception
> > by L1; doing so could cause a bogus nested vmexit.
> > 
> > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> > ---
> > 
> > This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
> > without problems. Yongjie, please check if it resolves your issue(s) as
> > well.
> > 
> The two patches fix my issue. With both of them applied, I can run
> more tests against the next branch.
They are both applied now.

--
			Gleb.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
  2013-04-28 14:33   ` Gleb Natapov
@ 2013-04-28 16:20     ` Ren, Yongjie
  2013-04-28 16:26       ` Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Ren, Yongjie @ 2013-04-28 16:20 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: Jan Kiszka, Marcelo Tosatti, kvm, Nakajima, Jun

> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org]
> On Behalf Of Gleb Natapov
> Sent: Sunday, April 28, 2013 10:34 PM
> To: Ren, Yongjie
> Cc: Jan Kiszka; Marcelo Tosatti; kvm; Nakajima, Jun
> Subject: Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> during nested run
> 
> On Sun, Apr 28, 2013 at 02:30:38PM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: kvm-owner@vger.kernel.org
> [mailto:kvm-owner@vger.kernel.org]
> > > On Behalf Of Jan Kiszka
> > > Sent: Sunday, April 28, 2013 3:25 PM
> > > To: Gleb Natapov; Marcelo Tosatti
> > > Cc: kvm; Nakajima, Jun; Ren, Yongjie
> > > Subject: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> > > during nested run
> > >
> > > From: Jan Kiszka <jan.kiszka@siemens.com>
> > >
> > > While a nested run is pending, vmx_queue_exception is only called to
> > > requeue exceptions that were previously picked up via
> > > vmx_cancel_injection. Therefore, we must not check for PF interception
> > > by L1; doing so could cause a bogus nested vmexit.
> > >
> > > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> > > ---
> > >
> > > This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
> > > without problems. Yongjie, please check if it resolves your issue(s) as
> > > well.
> > >
> > The two patches fix my issue. With both of them applied, I can run
> > more tests against the next branch.
> They are both applied now.
> 
There's a bug in Jan's patch "Rework request for immediate exit".
When I said the two patches fix my issue, I meant his original two patches:
"Check KVM_REQ_IMMEDIATE_EXIT after enable_irq_window" works for me, but
the "Rework request for immediate exit" patch is buggy.
In L1, I get the following error (and also some NMIs in L2).
(BTW, I'll be on holiday this week and may not be able to track this issue.)
[  167.248015] sending NMI to all CPUs:
[  167.252260] NMI backtrace for cpu 1
[  167.253007] CPU 1
[  167.253007] Pid: 0, comm: swapper/1 Tainted: GF            3.8.5 #1 Bochs Bochs
[  167.253007] RIP: 0010:[<ffffffff81045606>]  [<ffffffff81045606>] native_safe_halt+0x6/0x10
[  167.253007] RSP: 0018:ffff880290d51ed8  EFLAGS: 00000246
[  167.253007] RAX: 0000000000000000 RBX: ffff880290d50010 RCX: 0140000000000000
[  167.253007] RDX: 0000000000000000 RSI: 0140000000000000 RDI: 0000000000000086
[  167.253007] RBP: ffff880290d51ed8 R08: 0000000000000000 R09: 0000000000000000
[  167.253007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
[  167.253007] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[  167.253007] FS:  0000000000000000(0000) GS:ffff88029fc40000(0000) knlGS:0000000000000000
[  167.253007] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  167.253007] CR2: ffffffffff600400 CR3: 000000028f12d000 CR4: 00000000000427e0
[  167.253007] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  167.253007] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  167.253007] Process swapper/1 (pid: 0, threadinfo ffff880290d50000, task ffff880290d49740)
[  167.253007] Stack:
[  167.253007]  ffff880290d51ef8 ffffffff8101d5cf ffff880290d50010 ffffffff81ce0680
[  167.253007]  ffff880290d51f28 ffffffff8101ce99 ffff880290d51f18 1de4884102b62f69
[  167.253007]  0000000000000000 0000000000000000 ffff880290d51f48 ffffffff81643595
[  167.253007] Call Trace:
[  167.253007]  [<ffffffff8101d5cf>] default_idle+0x4f/0x1a0
[  167.253007]  [<ffffffff8101ce99>] cpu_idle+0xd9/0x120
[  167.253007]  [<ffffffff81643595>] start_secondary+0x24c/0x24e
[  167.253007] Code: 00 00 00 00 00 55 48 89 e5 fa c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <c9> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 c9 c3 66 0f 1f 84
[  167.248015] NMI backtrace for cpu 3
[  167.248015] CPU 3
[  167.248015] Pid: 0, comm: swapper/3 Tainted: GF            3.8.5 #1 Bochs Bochs
[  167.248015] RIP: 0010:[<ffffffff810454ca>]  [<ffffffff810454ca>] native_write_msr_safe+0xa/0x10
.......

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run
  2013-04-28 16:20     ` Ren, Yongjie
@ 2013-04-28 16:26       ` Jan Kiszka
  2013-04-28 16:40         ` [PATCH] KVM: x86: Account for failing enable_irq_window for NMI window request Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2013-04-28 16:26 UTC (permalink / raw)
  To: Ren, Yongjie; +Cc: Gleb Natapov, Marcelo Tosatti, kvm, Nakajima, Jun

On 2013-04-28 18:20, Ren, Yongjie wrote:
>> -----Original Message-----
>> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org]
>> On Behalf Of Gleb Natapov
>> Sent: Sunday, April 28, 2013 10:34 PM
>> To: Ren, Yongjie
>> Cc: Jan Kiszka; Marcelo Tosatti; kvm; Nakajima, Jun
>> Subject: Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing
>> during nested run
>>
>> On Sun, Apr 28, 2013 at 02:30:38PM +0000, Ren, Yongjie wrote:
>>>> -----Original Message-----
>>>> From: kvm-owner@vger.kernel.org
>> [mailto:kvm-owner@vger.kernel.org]
>>>> On Behalf Of Jan Kiszka
>>>> Sent: Sunday, April 28, 2013 3:25 PM
>>>> To: Gleb Natapov; Marcelo Tosatti
>>>> Cc: kvm; Nakajima, Jun; Ren, Yongjie
>>>> Subject: [PATCH] KVM: nVMX: Skip PF interception check when queuing
>>>> during nested run
>>>>
>>>> From: Jan Kiszka <jan.kiszka@siemens.com>
>>>>
>>>> While a nested run is pending, vmx_queue_exception is only called to
>>>> requeue exceptions that were previously picked up via
>>>> vmx_cancel_injection. Therefore, we must not check for PF interception
>>>> by L1; doing so could cause a bogus nested vmexit.
>>>>
>>>> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>>>> ---
>>>>
>>>> This and the KVM_REQ_IMMEDIATE_EXIT fix allow me to boot an L2 Linux
>>>> without problems. Yongjie, please check if it resolves your issue(s) as
>>>> well.
>>>>
>>> The two patches fix my issue. With both of them applied, I can run
>>> more tests against the next branch.
>> They are both applied now.
>>
> There's a bug in Jan's patch "Rework request for immediate exit".
> When I said the two patches fix my issue, I meant his original two patches:
> "Check KVM_REQ_IMMEDIATE_EXIT after enable_irq_window" works for me, but
> the "Rework request for immediate exit" patch is buggy.
> In L1, I get the following error (and also some NMIs in L2).
> (BTW, I'll be on holiday this week and may not be able to track this issue.)
> [  167.252260] NMI backtrace for cpu 1
> [  167.253007] CPU 1
> [  167.253007] Pid: 0, comm: swapper/1 Tainted: GF            3.8.5 #1 Bochs Bochs
> [  167.253007] RIP: 0010:[<ffffffff81045606>]  [<ffffffff81045606>] native_safe_halt+0x6/0x10
> [  167.253007] RSP: 0018:ffff880290d51ed8  EFLAGS: 00000246
> [  167.253007] RAX: 0000000000000000 RBX: ffff880290d50010 RCX: 0140000000000000
> [  167.253007] RDX: 0000000000000000 RSI: 0140000000000000 RDI: 0000000000000086
> [  167.253007] RBP: ffff880290d51ed8 R08: 0000000000000000 R09: 0000000000000000
> [  167.253007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
> [  167.253007] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
> [  167.253007] FS:  0000000000000000(0000) GS:ffff88029fc40000(0000) knlGS:0000000000000000
> [  167.253007] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  167.253007] CR2: ffffffffff600400 CR3: 000000028f12d000 CR4: 00000000000427e0
> [  167.253007] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [  167.253007] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [  167.253007] Process swapper/1 (pid: 0, threadinfo ffff880290d50000, task ffff880290d49740)
> [  167.253007] Stack:
> [  167.253007]  ffff880290d51ef8 ffffffff8101d5cf ffff880290d50010 ffffffff81ce0680
> [  167.253007]  ffff880290d51f28 ffffffff8101ce99 ffff880290d51f18 1de4884102b62f69
> [  167.253007]  0000000000000000 0000000000000000 ffff880290d51f48 ffffffff81643595
> [  167.253007] Call Trace:
> [  167.253007]  [<ffffffff8101d5cf>] default_idle+0x4f/0x1a0
> [  167.253007]  [<ffffffff8101ce99>] cpu_idle+0xd9/0x120
> [  167.253007]  [<ffffffff81643595>] start_secondary+0x24c/0x24e
> [  167.253007] Code: 00 00 00 00 00 55 48 89 e5 fa c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <c9> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 c9 c3 66 0f 1f 84
> [  167.248015] NMI backtrace for cpu 3
> [  167.248015] CPU 3
> [  167.248015] Pid: 0, comm: swapper/3 Tainted: GF            3.8.5 #1 Bochs Bochs
> [  167.248015] RIP: 0010:[<ffffffff810454ca>]  [<ffffffff810454ca>] native_write_msr_safe+0xa/0x10
> .......
> 

Argh, of course: We use enable_irq_window also for the NMI window in
certain scenarios. So enable_nmi_window must be changed accordingly.
Will send a patch.
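The failure mode described above can be reduced to a minimal sketch. The helpers below are illustrative models, not the real vmx.c functions: without virtual-NMI support the NMI-window path reuses the IRQ-window helper, and a void-returning wrapper silently drops the -EBUSY that should turn into an immediate-exit request:

```c
#include <errno.h>
#include <stdbool.h>

/* Illustrative model: opening the IRQ window either succeeds now, or
 * asks the caller (vcpu_enter_guest in the real code) for an immediate
 * exit by returning -EBUSY. */
static int enable_irq_window(bool window_can_open)
{
	return window_can_open ? 0 : -EBUSY;
}

/* Pre-fix shape: void-style return, so the -EBUSY from the IRQ-window
 * fallback (used when the CPU lacks virtual-NMI support) is lost and
 * the caller never learns it must request an immediate exit. */
static bool nmi_req_immediate_exit_buggy(bool has_vnmi, bool window_can_open)
{
	if (!has_vnmi)
		enable_irq_window(window_can_open);	/* result dropped */
	return false;
}

/* Fixed shape: propagate the return code to the caller. */
static bool nmi_req_immediate_exit_fixed(bool has_vnmi, bool window_can_open)
{
	int r = 0;

	if (!has_vnmi)
		r = enable_irq_window(window_can_open);
	return r != 0;
}
```

In the buggy variant the vnmi-less, window-blocked case yields no immediate-exit request, which matches the lost-NMI symptom reported above; the fixed variant requests it.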

Thanks,
Jan



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH] KVM: x86: Account for failing enable_irq_window for NMI window request
  2013-04-28 16:26       ` Jan Kiszka
@ 2013-04-28 16:40         ` Jan Kiszka
  2013-04-29 14:37           ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2013-04-28 16:40 UTC (permalink / raw)
  To: Gleb Natapov, Marcelo Tosatti; +Cc: Ren, Yongjie, kvm, Nakajima, Jun

From: Jan Kiszka <jan.kiszka@siemens.com>

With VMX, enable_irq_window can now return -EBUSY, in which case an
immediate exit shall be requested before entering the guest. Account for
this also in enable_nmi_window, which uses enable_irq_window, e.g. in
the absence of vnmi support.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 arch/x86/include/asm/kvm_host.h |    2 +-
 arch/x86/kvm/svm.c              |    5 +++--
 arch/x86/kvm/vmx.c              |   16 +++++++---------
 arch/x86/kvm/x86.c              |    3 ++-
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ec14b72..3741c65 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -695,7 +695,7 @@ struct kvm_x86_ops {
 	int (*nmi_allowed)(struct kvm_vcpu *vcpu);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
-	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
+	int (*enable_nmi_window)(struct kvm_vcpu *vcpu);
 	int (*enable_irq_window)(struct kvm_vcpu *vcpu);
 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
 	int (*vm_has_apicv)(struct kvm *kvm);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7f896cb..3421d5a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3649,13 +3649,13 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+static int enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if ((svm->vcpu.arch.hflags & (HF_NMI_MASK | HF_IRET_MASK))
 	    == HF_NMI_MASK)
-		return; /* IRET will cause a vm exit */
+		return 0; /* IRET will cause a vm exit */
 
 	/*
 	 * Something prevents NMI from been injected. Single step over possible
@@ -3664,6 +3664,7 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
 	svm->nmi_singlestep = true;
 	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
 	update_db_bp_intercept(vcpu);
+	return 0;
 }
 
 static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 55a1aa0..2f7af9c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4417,22 +4417,20 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+static int enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	u32 cpu_based_vm_exec_control;
 
-	if (!cpu_has_virtual_nmis()) {
-		enable_irq_window(vcpu);
-		return;
-	}
+	if (!cpu_has_virtual_nmis())
+		return enable_irq_window(vcpu);
+
+	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI)
+		return enable_irq_window(vcpu);
 
-	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
-		enable_irq_window(vcpu);
-		return;
-	}
 	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
 	cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_NMI_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
+	return 0;
 }
 
 static void vmx_inject_irq(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8747fef..6974ca8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5754,7 +5754,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 		/* enable NMI/IRQ window open exits if needed */
 		if (vcpu->arch.nmi_pending)
-			kvm_x86_ops->enable_nmi_window(vcpu);
+			req_immediate_exit =
+				kvm_x86_ops->enable_nmi_window(vcpu);
 		else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
 			req_immediate_exit =
 				kvm_x86_ops->enable_irq_window(vcpu) != 0;
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] KVM: x86: Account for failing enable_irq_window for NMI window request
  2013-04-28 16:40         ` [PATCH] KVM: x86: Account for failing enable_irq_window for NMI window request Jan Kiszka
@ 2013-04-29 14:37           ` Paolo Bonzini
  2013-04-29 14:46             ` [PATCH v2] " Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2013-04-29 14:37 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Gleb Natapov, Marcelo Tosatti, Ren, Yongjie, kvm, Nakajima, Jun

On 28/04/2013 18:40, Jan Kiszka wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8747fef..6974ca8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5754,7 +5754,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  
>  		/* enable NMI/IRQ window open exits if needed */
>  		if (vcpu->arch.nmi_pending)
> -			kvm_x86_ops->enable_nmi_window(vcpu);
> +			req_immediate_exit =
> +				kvm_x86_ops->enable_nmi_window(vcpu);

!= 0 for consistency with below?

Paolo

>  		else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
>  			req_immediate_exit =
>  				kvm_x86_ops->enable_irq_window(vcpu) != 0;
> --
> 1.7.3.4


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2] KVM: x86: Account for failing enable_irq_window for NMI window request
  2013-04-29 14:37           ` Paolo Bonzini
@ 2013-04-29 14:46             ` Jan Kiszka
  2013-04-29 15:38               ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2013-04-29 14:46 UTC (permalink / raw)
  To: Gleb Natapov, Marcelo Tosatti
  Cc: Paolo Bonzini, Ren, Yongjie, kvm, Nakajima, Jun

With VMX, enable_irq_window can now return -EBUSY, in which case an
immediate exit shall be requested before entering the guest. Account for
this also in enable_nmi_window, which uses enable_irq_window, e.g. in
the absence of vnmi support.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---

Changes in v2:
 - check return code of enable_nmi_window against 0 instead of using it
   directly

 arch/x86/include/asm/kvm_host.h |    2 +-
 arch/x86/kvm/svm.c              |    5 +++--
 arch/x86/kvm/vmx.c              |   16 +++++++---------
 arch/x86/kvm/x86.c              |    3 ++-
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ec14b72..3741c65 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -695,7 +695,7 @@ struct kvm_x86_ops {
 	int (*nmi_allowed)(struct kvm_vcpu *vcpu);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
-	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
+	int (*enable_nmi_window)(struct kvm_vcpu *vcpu);
 	int (*enable_irq_window)(struct kvm_vcpu *vcpu);
 	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
 	int (*vm_has_apicv)(struct kvm *kvm);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7f896cb..3421d5a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3649,13 +3649,13 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+static int enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if ((svm->vcpu.arch.hflags & (HF_NMI_MASK | HF_IRET_MASK))
 	    == HF_NMI_MASK)
-		return; /* IRET will cause a vm exit */
+		return 0; /* IRET will cause a vm exit */
 
 	/*
 	 * Something prevents NMI from been injected. Single step over possible
@@ -3664,6 +3664,7 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
 	svm->nmi_singlestep = true;
 	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
 	update_db_bp_intercept(vcpu);
+	return 0;
 }
 
 static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 55a1aa0..2f7af9c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4417,22 +4417,20 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+static int enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	u32 cpu_based_vm_exec_control;
 
-	if (!cpu_has_virtual_nmis()) {
-		enable_irq_window(vcpu);
-		return;
-	}
+	if (!cpu_has_virtual_nmis())
+		return enable_irq_window(vcpu);
+
+	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI)
+		return enable_irq_window(vcpu);
 
-	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
-		enable_irq_window(vcpu);
-		return;
-	}
 	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
 	cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_NMI_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
+	return 0;
 }
 
 static void vmx_inject_irq(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8747fef..24724b42 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5754,7 +5754,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 		/* enable NMI/IRQ window open exits if needed */
 		if (vcpu->arch.nmi_pending)
-			kvm_x86_ops->enable_nmi_window(vcpu);
+			req_immediate_exit =
+				kvm_x86_ops->enable_nmi_window(vcpu) != 0;
 		else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
 			req_immediate_exit =
 				kvm_x86_ops->enable_irq_window(vcpu) != 0;

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] KVM: x86: Account for failing enable_irq_window for NMI window request
  2013-04-29 14:46             ` [PATCH v2] " Jan Kiszka
@ 2013-04-29 15:38               ` Paolo Bonzini
  2013-05-03  1:17                 ` Marcelo Tosatti
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2013-04-29 15:38 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Gleb Natapov, Marcelo Tosatti, Ren, Yongjie, kvm, Nakajima, Jun

On 29/04/2013 16:46, Jan Kiszka wrote:
> With VMX, enable_irq_window can now return -EBUSY, in which case an
> immediate exit shall be requested before entering the guest. Account for
> this also in enable_nmi_window, which uses enable_irq_window, e.g. in
> the absence of vnmi support.
> 
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

> ---
> 
> Changes in v2:
>  - check return code of enable_nmi_window against 0 instead of using it
>    directly
> 
>  arch/x86/include/asm/kvm_host.h |    2 +-
>  arch/x86/kvm/svm.c              |    5 +++--
>  arch/x86/kvm/vmx.c              |   16 +++++++---------
>  arch/x86/kvm/x86.c              |    3 ++-
>  4 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index ec14b72..3741c65 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -695,7 +695,7 @@ struct kvm_x86_ops {
>  	int (*nmi_allowed)(struct kvm_vcpu *vcpu);
>  	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
>  	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
> -	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
> +	int (*enable_nmi_window)(struct kvm_vcpu *vcpu);
>  	int (*enable_irq_window)(struct kvm_vcpu *vcpu);
>  	void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
>  	int (*vm_has_apicv)(struct kvm *kvm);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 7f896cb..3421d5a 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3649,13 +3649,13 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> -static void enable_nmi_window(struct kvm_vcpu *vcpu)
> +static int enable_nmi_window(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  
>  	if ((svm->vcpu.arch.hflags & (HF_NMI_MASK | HF_IRET_MASK))
>  	    == HF_NMI_MASK)
> -		return; /* IRET will cause a vm exit */
> +		return 0; /* IRET will cause a vm exit */
>  
>  	/*
>  	 * Something prevents NMI from been injected. Single step over possible
> @@ -3664,6 +3664,7 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
>  	svm->nmi_singlestep = true;
>  	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
>  	update_db_bp_intercept(vcpu);
> +	return 0;
>  }
>  
>  static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 55a1aa0..2f7af9c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4417,22 +4417,20 @@ static int enable_irq_window(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> -static void enable_nmi_window(struct kvm_vcpu *vcpu)
> +static int enable_nmi_window(struct kvm_vcpu *vcpu)
>  {
>  	u32 cpu_based_vm_exec_control;
>  
> -	if (!cpu_has_virtual_nmis()) {
> -		enable_irq_window(vcpu);
> -		return;
> -	}
> +	if (!cpu_has_virtual_nmis())
> +		return enable_irq_window(vcpu);
> +
> +	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI)
> +		return enable_irq_window(vcpu);
>  
> -	if (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
> -		enable_irq_window(vcpu);
> -		return;
> -	}
>  	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
>  	cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_NMI_PENDING;
>  	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
> +	return 0;
>  }
>  
>  static void vmx_inject_irq(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8747fef..24724b42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5754,7 +5754,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  
>  		/* enable NMI/IRQ window open exits if needed */
>  		if (vcpu->arch.nmi_pending)
> -			kvm_x86_ops->enable_nmi_window(vcpu);
> +			req_immediate_exit =
> +				kvm_x86_ops->enable_nmi_window(vcpu) != 0;
>  		else if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
>  			req_immediate_exit =
>  				kvm_x86_ops->enable_irq_window(vcpu) != 0;
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2] KVM: x86: Account for failing enable_irq_window for NMI window request
  2013-04-29 15:38               ` Paolo Bonzini
@ 2013-05-03  1:17                 ` Marcelo Tosatti
  0 siblings, 0 replies; 11+ messages in thread
From: Marcelo Tosatti @ 2013-05-03  1:17 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Jan Kiszka, Gleb Natapov, Ren, Yongjie, kvm, Nakajima, Jun

On Mon, Apr 29, 2013 at 05:38:27PM +0200, Paolo Bonzini wrote:
> On 29/04/2013 16:46, Jan Kiszka wrote:
> > With VMX, enable_irq_window can now return -EBUSY, in which case an
> > immediate exit shall be requested before entering the guest. Account for
> > this also in enable_nmi_window, which uses enable_irq_window, e.g. in
> > the absence of vnmi support.
> > 
> > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Applied, thanks.


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2013-05-03  1:21 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2013-04-28  7:24 [PATCH] KVM: nVMX: Skip PF interception check when queuing during nested run Jan Kiszka
2013-04-28 10:36 ` Gleb Natapov
2013-04-28 14:30 ` Ren, Yongjie
2013-04-28 14:33   ` Gleb Natapov
2013-04-28 16:20     ` Ren, Yongjie
2013-04-28 16:26       ` Jan Kiszka
2013-04-28 16:40         ` [PATCH] KVM: x86: Account for failing enable_irq_window for NMI window request Jan Kiszka
2013-04-29 14:37           ` Paolo Bonzini
2013-04-29 14:46             ` [PATCH v2] " Jan Kiszka
2013-04-29 15:38               ` Paolo Bonzini
2013-05-03  1:17                 ` Marcelo Tosatti
