* [PATCH v2 0/3] Nonatomic interrupt injection
@ 2010-07-20 13:17 Avi Kivity
2010-07-20 13:17 ` [PATCH v2 1/3] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Avi Kivity @ 2010-07-20 13:17 UTC (permalink / raw)
To: kvm, Marcelo Tosatti
This patchset changes interrupt injection to be done from normal process
context instead of interrupts-disabled context. This is useful for real
mode interrupt injection on Intel without the current hack of injecting
the event as a software interrupt of a vm86 task, for reducing latencies,
and, later, for allowing nested virtualization code to use
kvm_read_guest()/kvm_write_guest() instead of kmap() to access the guest
vmcb/vmcs.
Seems to survive a hack that cancels every 16th entry, after injection has
already taken place.
v2: svm support (easier than expected)
fix silly vmx warning
Avi Kivity (3):
KVM: VMX: Split up vmx_complete_interrupts()
KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and
entry
KVM: Non-atomic interrupt injection
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm.c | 12 +++++++
arch/x86/kvm/vmx.c | 65 ++++++++++++++++++++++++++++++---------
arch/x86/kvm/x86.c | 27 ++++++++--------
4 files changed, 77 insertions(+), 28 deletions(-)
* [PATCH v2 1/3] KVM: VMX: Split up vmx_complete_interrupts()
2010-07-20 13:17 [PATCH v2 0/3] Nonatomic interrupt injection Avi Kivity
@ 2010-07-20 13:17 ` Avi Kivity
2010-07-20 13:17 ` [PATCH v2 2/3] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity
2010-07-20 13:17 ` [PATCH v2 3/3] KVM: Non-atomic interrupt injection Avi Kivity
2 siblings, 0 replies; 8+ messages in thread
From: Avi Kivity @ 2010-07-20 13:17 UTC (permalink / raw)
To: kvm, Marcelo Tosatti
vmx_complete_interrupts() does too much, split it up:
- vmx_vcpu_run() gets the "cache important vmcs fields" part
- a new vmx_complete_atomic_exit() gets the parts that must be done atomically
- a new vmx_recover_nmi_blocking() does what its name says
- vmx_complete_interrupts() retains the event injection recovery code
This reduces the amount of work done in atomic context.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 39 +++++++++++++++++++++++++++------------
1 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2fdcc98..1a35964 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -125,6 +125,7 @@ struct vcpu_vmx {
unsigned long host_rsp;
int launched;
u8 fail;
+ u32 exit_intr_info;
u32 idt_vectoring_info;
struct shared_msr_entry *guest_msrs;
int nmsrs;
@@ -3792,18 +3793,9 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
vmcs_write32(TPR_THRESHOLD, irr);
}
-static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
{
- u32 exit_intr_info;
- u32 idt_vectoring_info = vmx->idt_vectoring_info;
- bool unblock_nmi;
- u8 vector;
- int type;
- bool idtv_info_valid;
-
- exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
-
- vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+ u32 exit_intr_info = vmx->exit_intr_info;
/* Handle machine checks before interrupts are enabled */
if ((vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
@@ -3818,8 +3810,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
asm("int $2");
kvm_after_handle_nmi(&vmx->vcpu);
}
+}
- idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
+static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
+{
+ u32 exit_intr_info = vmx->exit_intr_info;
+ bool unblock_nmi;
+ u8 vector;
+ bool idtv_info_valid;
+
+ idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK;
if (cpu_has_virtual_nmis()) {
unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0;
@@ -3841,6 +3841,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
} else if (unlikely(vmx->soft_vnmi_blocked))
vmx->vnmi_blocked_time +=
ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
+}
+
+static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+{
+ u32 idt_vectoring_info = vmx->idt_vectoring_info;
+ u8 vector;
+ int type;
+ bool idtv_info_valid;
+
+ idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
vmx->vcpu.arch.nmi_injected = false;
kvm_clear_exception_queue(&vmx->vcpu);
@@ -4051,6 +4061,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
vmx->launched = 1;
+ vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+ vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+
+ vmx_complete_atomic_exit(vmx);
+ vmx_recover_nmi_blocking(vmx);
vmx_complete_interrupts(vmx);
}
--
1.7.1
* [PATCH v2 2/3] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry
2010-07-20 13:17 [PATCH v2 0/3] Nonatomic interrupt injection Avi Kivity
2010-07-20 13:17 ` [PATCH v2 1/3] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
@ 2010-07-20 13:17 ` Avi Kivity
2010-07-20 13:17 ` [PATCH v2 3/3] KVM: Non-atomic interrupt injection Avi Kivity
2 siblings, 0 replies; 8+ messages in thread
From: Avi Kivity @ 2010-07-20 13:17 UTC (permalink / raw)
To: kvm, Marcelo Tosatti
Currently vmx_complete_interrupts() decodes event information from the vmx
exit fields into the generic kvm event queues. Make it able to decode the
information from the entry fields as well by parameterizing it.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 19 ++++++++++++++-----
1 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1a35964..53b6fc0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3843,9 +3843,11 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
}
-static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
+ u32 idt_vectoring_info,
+ int instr_len_field,
+ int error_code_field)
{
- u32 idt_vectoring_info = vmx->idt_vectoring_info;
u8 vector;
int type;
bool idtv_info_valid;
@@ -3875,18 +3877,18 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
break;
case INTR_TYPE_SOFT_EXCEPTION:
vmx->vcpu.arch.event_exit_inst_len =
- vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+ vmcs_read32(instr_len_field);
/* fall through */
case INTR_TYPE_HARD_EXCEPTION:
if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK) {
- u32 err = vmcs_read32(IDT_VECTORING_ERROR_CODE);
+ u32 err = vmcs_read32(error_code_field);
kvm_queue_exception_e(&vmx->vcpu, vector, err);
} else
kvm_queue_exception(&vmx->vcpu, vector);
break;
case INTR_TYPE_SOFT_INTR:
vmx->vcpu.arch.event_exit_inst_len =
- vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+ vmcs_read32(instr_len_field);
/* fall through */
case INTR_TYPE_EXT_INTR:
kvm_queue_interrupt(&vmx->vcpu, vector,
@@ -3897,6 +3899,13 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
}
}
+static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+{
+ __vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
+ VM_EXIT_INSTRUCTION_LEN,
+ IDT_VECTORING_ERROR_CODE);
+}
+
/*
* Failure to inject an interrupt should give us the information
* in IDT_VECTORING_INFO_FIELD. However, if the failure occurs
--
1.7.1
* [PATCH v2 3/3] KVM: Non-atomic interrupt injection
2010-07-20 13:17 [PATCH v2 0/3] Nonatomic interrupt injection Avi Kivity
2010-07-20 13:17 ` [PATCH v2 1/3] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
2010-07-20 13:17 ` [PATCH v2 2/3] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity
@ 2010-07-20 13:17 ` Avi Kivity
2010-07-21 0:55 ` Marcelo Tosatti
2 siblings, 1 reply; 8+ messages in thread
From: Avi Kivity @ 2010-07-20 13:17 UTC (permalink / raw)
To: kvm, Marcelo Tosatti
Change the interrupt injection code to work from preemptible, interrupts
enabled context. This works by adding a ->cancel_injection() operation
that undoes an injection in case we were not able to actually enter the guest
(this condition could never happen with atomic injection).
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm.c | 12 ++++++++++++
arch/x86/kvm/vmx.c | 11 +++++++++++
arch/x86/kvm/x86.c | 27 ++++++++++++++-------------
4 files changed, 38 insertions(+), 13 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 502e53f..5dd797c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -505,6 +505,7 @@ struct kvm_x86_ops {
void (*queue_exception)(struct kvm_vcpu *vcpu, unsigned nr,
bool has_error_code, u32 error_code,
bool reinject);
+ void (*cancel_injection)(struct kvm_vcpu *vcpu);
int (*interrupt_allowed)(struct kvm_vcpu *vcpu);
int (*nmi_allowed)(struct kvm_vcpu *vcpu);
bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 56c9b6b..46d068e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3135,6 +3135,17 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
}
}
+static void svm_cancel_injection(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb_control_area *control = &svm->vmcb->control;
+
+ control->exit_int_info = control->event_inj;
+ control->exit_int_info_err = control->event_inj_err;
+ control->event_inj = 0;
+ svm_complete_interrupts(svm);
+}
+
#ifdef CONFIG_X86_64
#define R "r"
#else
@@ -3493,6 +3504,7 @@ static struct kvm_x86_ops svm_x86_ops = {
.set_irq = svm_set_irq,
.set_nmi = svm_inject_nmi,
.queue_exception = svm_queue_exception,
+ .cancel_injection = svm_cancel_injection,
.interrupt_allowed = svm_interrupt_allowed,
.nmi_allowed = svm_nmi_allowed,
.get_nmi_mask = svm_get_nmi_mask,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 53b6fc0..72381b7 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3906,6 +3906,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
IDT_VECTORING_ERROR_CODE);
}
+static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
+{
+ __vmx_complete_interrupts(to_vmx(vcpu),
+ vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
+ VM_ENTRY_INSTRUCTION_LEN,
+ VM_ENTRY_EXCEPTION_ERROR_CODE);
+
+ vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
+}
+
/*
* Failure to inject an interrupt should give us the information
* in IDT_VECTORING_INFO_FIELD. However, if the failure occurs
@@ -4360,6 +4370,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
.set_irq = vmx_inject_irq,
.set_nmi = vmx_inject_nmi,
.queue_exception = vmx_queue_exception,
+ .cancel_injection = vmx_cancel_injection,
.interrupt_allowed = vmx_interrupt_allowed,
.nmi_allowed = vmx_nmi_allowed,
.get_nmi_mask = vmx_get_nmi_mask,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 84bfb51..1040d3f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4709,6 +4709,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (unlikely(r))
goto out;
+ inject_pending_event(vcpu);
+
+ /* enable NMI/IRQ window open exits if needed */
+ if (vcpu->arch.nmi_pending)
+ kvm_x86_ops->enable_nmi_window(vcpu);
+ else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+ kvm_x86_ops->enable_irq_window(vcpu);
+
+ if (kvm_lapic_enabled(vcpu)) {
+ update_cr8_intercept(vcpu);
+ kvm_lapic_sync_to_vapic(vcpu);
+ }
+
preempt_disable();
kvm_x86_ops->prepare_guest_switch(vcpu);
@@ -4727,23 +4740,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
smp_wmb();
local_irq_enable();
preempt_enable();
+ kvm_x86_ops->cancel_injection(vcpu);
r = 1;
goto out;
}
- inject_pending_event(vcpu);
-
- /* enable NMI/IRQ window open exits if needed */
- if (vcpu->arch.nmi_pending)
- kvm_x86_ops->enable_nmi_window(vcpu);
- else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
- kvm_x86_ops->enable_irq_window(vcpu);
-
- if (kvm_lapic_enabled(vcpu)) {
- update_cr8_intercept(vcpu);
- kvm_lapic_sync_to_vapic(vcpu);
- }
-
srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
kvm_guest_enter();
--
1.7.1
* Re: [PATCH v2 3/3] KVM: Non-atomic interrupt injection
2010-07-20 13:17 ` [PATCH v2 3/3] KVM: Non-atomic interrupt injection Avi Kivity
@ 2010-07-21 0:55 ` Marcelo Tosatti
2010-07-21 5:37 ` Avi Kivity
0 siblings, 1 reply; 8+ messages in thread
From: Marcelo Tosatti @ 2010-07-21 0:55 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Tue, Jul 20, 2010 at 04:17:07PM +0300, Avi Kivity wrote:
> Change the interrupt injection code to work from preemptible, interrupts
> enabled context. This works by adding a ->cancel_injection() operation
> that undoes an injection in case we were not able to actually enter the guest
> (this condition could never happen with atomic injection).
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/svm.c | 12 ++++++++++++
> arch/x86/kvm/vmx.c | 11 +++++++++++
> arch/x86/kvm/x86.c | 27 ++++++++++++++-------------
> 4 files changed, 38 insertions(+), 13 deletions(-)
>
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4709,6 +4709,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> if (unlikely(r))
> goto out;
>
> + inject_pending_event(vcpu);
> +
> + /* enable NMI/IRQ window open exits if needed */
> + if (vcpu->arch.nmi_pending)
> + kvm_x86_ops->enable_nmi_window(vcpu);
> + else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> + kvm_x86_ops->enable_irq_window(vcpu);
> +
> + if (kvm_lapic_enabled(vcpu)) {
> + update_cr8_intercept(vcpu);
> + kvm_lapic_sync_to_vapic(vcpu);
> + }
> +
> preempt_disable();
>
> kvm_x86_ops->prepare_guest_switch(vcpu);
> @@ -4727,23 +4740,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> smp_wmb();
> local_irq_enable();
> preempt_enable();
> + kvm_x86_ops->cancel_injection(vcpu);
> r = 1;
> goto out;
> }
>
> - inject_pending_event(vcpu);
> -
> - /* enable NMI/IRQ window open exits if needed */
> - if (vcpu->arch.nmi_pending)
> - kvm_x86_ops->enable_nmi_window(vcpu);
> - else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> - kvm_x86_ops->enable_irq_window(vcpu);
> -
> - if (kvm_lapic_enabled(vcpu)) {
> - update_cr8_intercept(vcpu);
> - kvm_lapic_sync_to_vapic(vcpu);
> - }
> -
> srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>
> kvm_guest_enter();
This breaks
int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
{
struct kvm_lapic *apic = vcpu->arch.apic;
int highest_irr;
/* This may race with setting of irr in __apic_accept_irq() and
* value returned may be wrong, but kvm_vcpu_kick() in
* __apic_accept_irq
* will cause vmexit immediately and the value will be
* recalculated
* on the next vmentry.
*/
(also valid for nmi_pending and PIC). Can't simply move
atomic_set(guest_mode, 1) in preemptible section as that would make it
possible for kvm_vcpu_kick to IPI stale vcpu->cpu.
Also should undo vmx.rmode.* ?
* Re: [PATCH v2 3/3] KVM: Non-atomic interrupt injection
2010-07-21 0:55 ` Marcelo Tosatti
@ 2010-07-21 5:37 ` Avi Kivity
2010-07-21 16:27 ` Marcelo Tosatti
0 siblings, 1 reply; 8+ messages in thread
From: Avi Kivity @ 2010-07-21 5:37 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 07/21/2010 03:55 AM, Marcelo Tosatti wrote:
>
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -4709,6 +4709,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>> if (unlikely(r))
>> goto out;
>>
>> + inject_pending_event(vcpu);
>> +
>> + /* enable NMI/IRQ window open exits if needed */
>> + if (vcpu->arch.nmi_pending)
>> + kvm_x86_ops->enable_nmi_window(vcpu);
>> + else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
>> + kvm_x86_ops->enable_irq_window(vcpu);
>> +
>> + if (kvm_lapic_enabled(vcpu)) {
>> + update_cr8_intercept(vcpu);
>> + kvm_lapic_sync_to_vapic(vcpu);
>> + }
>> +
>> preempt_disable();
>>
>> kvm_x86_ops->prepare_guest_switch(vcpu);
>> @@ -4727,23 +4740,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>> smp_wmb();
>> local_irq_enable();
>> preempt_enable();
>> + kvm_x86_ops->cancel_injection(vcpu);
>> r = 1;
>> goto out;
>> }
>>
>> - inject_pending_event(vcpu);
>> -
>> - /* enable NMI/IRQ window open exits if needed */
>> - if (vcpu->arch.nmi_pending)
>> - kvm_x86_ops->enable_nmi_window(vcpu);
>> - else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
>> - kvm_x86_ops->enable_irq_window(vcpu);
>> -
>> - if (kvm_lapic_enabled(vcpu)) {
>> - update_cr8_intercept(vcpu);
>> - kvm_lapic_sync_to_vapic(vcpu);
>> - }
>> -
>> srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>>
>> kvm_guest_enter();
>>
> This breaks
>
> int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
> {
> struct kvm_lapic *apic = vcpu->arch.apic;
> int highest_irr;
>
> /* This may race with setting of irr in __apic_accept_irq() and
> * value returned may be wrong, but kvm_vcpu_kick() in
> * __apic_accept_irq
> * will cause vmexit immediately and the value will be
> * recalculated
> * on the next vmentry.
> */
>
> (also valid for nmi_pending and PIC). Can't simply move
> atomic_set(guest_mode, 1) in preemptible section as that would make it
> possible for kvm_vcpu_kick to IPI stale vcpu->cpu.
>
Right. Can fix by adding a kvm_make_request() to force the retry loop.
> Also should undo vmx.rmode.* ?
>
Elaborate?
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH v2 3/3] KVM: Non-atomic interrupt injection
2010-07-21 5:37 ` Avi Kivity
@ 2010-07-21 16:27 ` Marcelo Tosatti
2010-07-21 18:22 ` Avi Kivity
0 siblings, 1 reply; 8+ messages in thread
From: Marcelo Tosatti @ 2010-07-21 16:27 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Wed, Jul 21, 2010 at 08:37:26AM +0300, Avi Kivity wrote:
> On 07/21/2010 03:55 AM, Marcelo Tosatti wrote:
> >
> >>--- a/arch/x86/kvm/x86.c
> >>+++ b/arch/x86/kvm/x86.c
> >>@@ -4709,6 +4709,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >> if (unlikely(r))
> >> goto out;
> >>
> >>+ inject_pending_event(vcpu);
> >>+
> >>+ /* enable NMI/IRQ window open exits if needed */
> >>+ if (vcpu->arch.nmi_pending)
> >>+ kvm_x86_ops->enable_nmi_window(vcpu);
> >>+ else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> >>+ kvm_x86_ops->enable_irq_window(vcpu);
> >>+
> >>+ if (kvm_lapic_enabled(vcpu)) {
> >>+ update_cr8_intercept(vcpu);
> >>+ kvm_lapic_sync_to_vapic(vcpu);
> >>+ }
> >>+
> >> preempt_disable();
> >>
> >> kvm_x86_ops->prepare_guest_switch(vcpu);
> >>@@ -4727,23 +4740,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >> smp_wmb();
> >> local_irq_enable();
> >> preempt_enable();
> >>+ kvm_x86_ops->cancel_injection(vcpu);
> >> r = 1;
> >> goto out;
> >> }
> >>
> >>- inject_pending_event(vcpu);
> >>-
> >>- /* enable NMI/IRQ window open exits if needed */
> >>- if (vcpu->arch.nmi_pending)
> >>- kvm_x86_ops->enable_nmi_window(vcpu);
> >>- else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> >>- kvm_x86_ops->enable_irq_window(vcpu);
> >>-
> >>- if (kvm_lapic_enabled(vcpu)) {
> >>- update_cr8_intercept(vcpu);
> >>- kvm_lapic_sync_to_vapic(vcpu);
> >>- }
> >>-
> >> srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
> >>
> >> kvm_guest_enter();
> >This breaks
> >
> >int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
> >{
> > struct kvm_lapic *apic = vcpu->arch.apic;
> > int highest_irr;
> >
> > /* This may race with setting of irr in __apic_accept_irq() and
> > * value returned may be wrong, but kvm_vcpu_kick() in
> > * __apic_accept_irq
> > * will cause vmexit immediately and the value will be
> > * recalculated
> > * on the next vmentry.
> > */
> >
> >(also valid for nmi_pending and PIC). Can't simply move
> >atomic_set(guest_mode, 1) in preemptible section as that would make it
> >possible for kvm_vcpu_kick to IPI stale vcpu->cpu.
>
> Right. Can fix by adding a kvm_make_request() to force the retry loop.
>
> >Also should undo vmx.rmode.* ?
>
> Elaborate?
Undo vmx.rmode assignments on cancel_injection.
* Re: [PATCH v2 3/3] KVM: Non-atomic interrupt injection
2010-07-21 16:27 ` Marcelo Tosatti
@ 2010-07-21 18:22 ` Avi Kivity
0 siblings, 0 replies; 8+ messages in thread
From: Avi Kivity @ 2010-07-21 18:22 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 07/21/2010 07:27 PM, Marcelo Tosatti wrote:
>>
>>> Also should undo vmx.rmode.* ?
>>>
>> Elaborate?
>>
> Undo vmx.rmode assignments on cancel_injection.
>
Hm. Doesn't vmx_complete_interrupts() have to do that anyway if an
injection fails?
Ah:
vmx_vcpu_run()
{
...
vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
if (vmx->rmode.irq.pending)
fixup_rmode_irq(vmx);
...
vmx_complete_interrupts(vmx);
So I'll just move that bit into vmx_complete_interrupts. Good catch.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.