From: Gleb Natapov <gleb@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Yang Zhang <yang.z.zhang@intel.com>,
kvm@vger.kernel.org, haitao.shan@intel.com,
Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v8 2/3] x86, apicv: add virtual interrupt delivery support
Date: Mon, 7 Jan 2013 19:48:43 +0200
Message-ID: <20130107174843.GA4872@redhat.com>
In-Reply-To: <20130107135221.GA25775@amt.cnet>
On Mon, Jan 07, 2013 at 11:52:21AM -0200, Marcelo Tosatti wrote:
> On Mon, Jan 07, 2013 at 10:02:36AM +0800, Yang Zhang wrote:
> > From: Yang Zhang <yang.z.zhang@Intel.com>
> >
> > Virtual interrupt delivery spares KVM from having to inject vAPIC
> > interrupts manually; injection is taken care of entirely by the
> > hardware. This requires some special awareness in the existing
> > interrupt injection path:
> >
> > - For a pending interrupt, instead of injecting it directly, we may
> > need to update architecture-specific indicators before resuming to
> > the guest.
> >
> > - A pending interrupt that is masked by the ISR should also be
> > considered in the above update, since the hardware decides when to
> > inject it at the right time. The current has_interrupt and
> > get_interrupt only return a valid vector from the injection point
> > of view.
> >
> > Signed-off-by: Kevin Tian <kevin.tian@intel.com>
> > Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
> > ---
> >  arch/ia64/kvm/lapic.h           |    6 ++
> >  arch/x86/include/asm/kvm_host.h |    8 ++
> >  arch/x86/include/asm/vmx.h      |   11 +++
> >  arch/x86/kvm/irq.c              |   56 +++++++++++-
> >  arch/x86/kvm/lapic.c            |   87 +++++++++++-------
> >  arch/x86/kvm/lapic.h            |   29 +++++-
> >  arch/x86/kvm/svm.c              |   36 ++++++++
> >  arch/x86/kvm/vmx.c              |  190 ++++++++++++++++++++++++++++++++++++++-
> >  arch/x86/kvm/x86.c              |   11 ++-
> >  include/linux/kvm_host.h        |    2 +
> >  virt/kvm/ioapic.c               |   41 +++++++++
> >  virt/kvm/ioapic.h               |    1 +
> >  virt/kvm/irq_comm.c             |   20 ++++
> >  13 files changed, 451 insertions(+), 47 deletions(-)
> >
> > diff --git a/arch/ia64/kvm/lapic.h b/arch/ia64/kvm/lapic.h
> > index c5f92a9..cb59eb4 100644
> > --- a/arch/ia64/kvm/lapic.h
> > +++ b/arch/ia64/kvm/lapic.h
> > @@ -27,4 +27,10 @@ int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq);
> > #define kvm_apic_present(x) (true)
> > #define kvm_lapic_enabled(x) (true)
> >
> > +static inline void kvm_update_eoi_exitmap(struct kvm *kvm,
> > + struct kvm_lapic_irq *irq)
> > +{
> > +	/* IA64 has no APICv support, so do nothing here */
> > +}
> > +
> > #endif
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index c431b33..135603f 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -697,6 +697,13 @@ struct kvm_x86_ops {
> > void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
> > void (*enable_irq_window)(struct kvm_vcpu *vcpu);
> > void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
> > + int (*has_virtual_interrupt_delivery)(struct kvm_vcpu *vcpu);
> > + void (*update_apic_irq)(struct kvm_vcpu *vcpu, int max_irr);
> > + void (*update_eoi_exitmap)(struct kvm *kvm, struct kvm_lapic_irq *irq);
> > + void (*update_exitmap_start)(struct kvm_vcpu *vcpu);
> > + void (*update_exitmap_end)(struct kvm_vcpu *vcpu);
> > + void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu);
> > + void (*restore_rvi)(struct kvm_vcpu *vcpu);
> > int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> > int (*get_tdp_level)(void);
> > u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
> > @@ -991,6 +998,7 @@ int kvm_age_hva(struct kvm *kvm, unsigned long hva);
> > int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> > void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
> > int cpuid_maxphyaddr(struct kvm_vcpu *vcpu);
> > +int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
> > int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
> > int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
> > int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
> > diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
> > index 44c3f7e..d1ab331 100644
> > --- a/arch/x86/include/asm/vmx.h
> > +++ b/arch/x86/include/asm/vmx.h
> > @@ -62,6 +62,7 @@
> > #define EXIT_REASON_MCE_DURING_VMENTRY 41
> > #define EXIT_REASON_TPR_BELOW_THRESHOLD 43
> > #define EXIT_REASON_APIC_ACCESS 44
> > +#define EXIT_REASON_EOI_INDUCED 45
> > #define EXIT_REASON_EPT_VIOLATION 48
> > #define EXIT_REASON_EPT_MISCONFIG 49
> > #define EXIT_REASON_WBINVD 54
> > @@ -143,6 +144,7 @@
> > #define SECONDARY_EXEC_WBINVD_EXITING 0x00000040
> > #define SECONDARY_EXEC_UNRESTRICTED_GUEST 0x00000080
> > #define SECONDARY_EXEC_APIC_REGISTER_VIRT 0x00000100
> > +#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY 0x00000200
> > #define SECONDARY_EXEC_PAUSE_LOOP_EXITING 0x00000400
> > #define SECONDARY_EXEC_ENABLE_INVPCID 0x00001000
> >
> > @@ -180,6 +182,7 @@ enum vmcs_field {
> > GUEST_GS_SELECTOR = 0x0000080a,
> > GUEST_LDTR_SELECTOR = 0x0000080c,
> > GUEST_TR_SELECTOR = 0x0000080e,
> > + GUEST_INTR_STATUS = 0x00000810,
> > HOST_ES_SELECTOR = 0x00000c00,
> > HOST_CS_SELECTOR = 0x00000c02,
> > HOST_SS_SELECTOR = 0x00000c04,
> > @@ -207,6 +210,14 @@ enum vmcs_field {
> > APIC_ACCESS_ADDR_HIGH = 0x00002015,
> > EPT_POINTER = 0x0000201a,
> > EPT_POINTER_HIGH = 0x0000201b,
> > + EOI_EXIT_BITMAP0 = 0x0000201c,
> > + EOI_EXIT_BITMAP0_HIGH = 0x0000201d,
> > + EOI_EXIT_BITMAP1 = 0x0000201e,
> > + EOI_EXIT_BITMAP1_HIGH = 0x0000201f,
> > + EOI_EXIT_BITMAP2 = 0x00002020,
> > + EOI_EXIT_BITMAP2_HIGH = 0x00002021,
> > + EOI_EXIT_BITMAP3 = 0x00002022,
> > + EOI_EXIT_BITMAP3_HIGH = 0x00002023,
> > GUEST_PHYSICAL_ADDRESS = 0x00002400,
> > GUEST_PHYSICAL_ADDRESS_HIGH = 0x00002401,
> > VMCS_LINK_POINTER = 0x00002800,
> > diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
> > index b111aee..e113440 100644
> > --- a/arch/x86/kvm/irq.c
> > +++ b/arch/x86/kvm/irq.c
> > @@ -38,6 +38,38 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
> > EXPORT_SYMBOL(kvm_cpu_has_pending_timer);
> >
> > /*
> > + * Check if there is a pending interrupt from a
> > + * non-APIC source, without intack.
> > + */
> > +static int kvm_cpu_has_extint(struct kvm_vcpu *v)
> > +{
> > + if (kvm_apic_accept_pic_intr(v))
> > + return pic_irqchip(v->kvm)->output; /* PIC */
> > + else
> > + return 0;
> > +}
> > +
> > +/*
> > + * Check if there is an injectable interrupt:
> > + * when virtual interrupt delivery is enabled,
> > + * interrupts from the APIC are handled by hardware,
> > + * so we do not need to check for them here.
> > + */
> > +int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v)
> > +{
> > + if (!irqchip_in_kernel(v->kvm))
> > + return v->arch.interrupt.pending;
> > +
> > + if (kvm_cpu_has_extint(v))
> > + return 1;
> > +
> > + if (kvm_apic_vid_enabled(v))
> > + return 0;
> > +
> > + return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
> > +}
> > +
> > +/*
> > * check if there is pending interrupt without
> > * intack.
> > */
> > @@ -46,27 +78,41 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
> > if (!irqchip_in_kernel(v->kvm))
> > return v->arch.interrupt.pending;
> >
> > - if (kvm_apic_accept_pic_intr(v) && pic_irqchip(v->kvm)->output)
> > - return pic_irqchip(v->kvm)->output; /* PIC */
> > + if (kvm_cpu_has_extint(v))
> > + return 1;
> >
> > return kvm_apic_has_interrupt(v) != -1; /* LAPIC */
> > }
> > EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
> >
> > /*
> > + * Read the pending interrupt (from a non-APIC source)
> > + * vector, with intack.
> > + */
> > +static int kvm_cpu_get_extint(struct kvm_vcpu *v)
> > +{
> > + if (kvm_cpu_has_extint(v))
> > + return kvm_pic_read_irq(v->kvm); /* PIC */
> > + return -1;
> > +}
> > +
> > +/*
> > * Read pending interrupt vector and intack.
> > */
> > int kvm_cpu_get_interrupt(struct kvm_vcpu *v)
> > {
> > + int vector;
> > +
> > if (!irqchip_in_kernel(v->kvm))
> > return v->arch.interrupt.nr;
> >
> > - if (kvm_apic_accept_pic_intr(v) && pic_irqchip(v->kvm)->output)
> > - return kvm_pic_read_irq(v->kvm); /* PIC */
> > + vector = kvm_cpu_get_extint(v);
> > +
> > + if (kvm_apic_vid_enabled(v) || vector != -1)
> > + return vector; /* PIC */
> >
> > return kvm_get_apic_interrupt(v); /* APIC */
> > }
> > -EXPORT_SYMBOL_GPL(kvm_cpu_get_interrupt);
> >
> > void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
> > {
> > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > index 0664c13..e1baf37 100644
> > --- a/arch/x86/kvm/lapic.c
> > +++ b/arch/x86/kvm/lapic.c
> > @@ -133,6 +133,12 @@ static inline int apic_enabled(struct kvm_lapic *apic)
> > return kvm_apic_sw_enabled(apic) && kvm_apic_hw_enabled(apic);
> > }
> >
> > +bool kvm_apic_present(struct kvm_vcpu *vcpu)
> > +{
> > + return kvm_vcpu_has_lapic(vcpu) && kvm_apic_hw_enabled(vcpu->arch.apic);
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_apic_present);
> > +
> > #define LVT_MASK \
> > (APIC_LVT_MASKED | APIC_SEND_PENDING | APIC_VECTOR_MASK)
> >
> > @@ -150,23 +156,6 @@ static inline int kvm_apic_id(struct kvm_lapic *apic)
> > return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
> > }
> >
> > -static inline u16 apic_cluster_id(struct kvm_apic_map *map, u32 ldr)
> > -{
> > - u16 cid;
> > - ldr >>= 32 - map->ldr_bits;
> > - cid = (ldr >> map->cid_shift) & map->cid_mask;
> > -
> > - BUG_ON(cid >= ARRAY_SIZE(map->logical_map));
> > -
> > - return cid;
> > -}
> > -
> > -static inline u16 apic_logical_id(struct kvm_apic_map *map, u32 ldr)
> > -{
> > - ldr >>= (32 - map->ldr_bits);
> > - return ldr & map->lid_mask;
> > -}
> > -
> > static void recalculate_apic_map(struct kvm *kvm)
> > {
> > struct kvm_apic_map *new, *old = NULL;
> > @@ -236,12 +225,14 @@ static inline void kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
> > {
> > apic_set_reg(apic, APIC_ID, id << 24);
> > recalculate_apic_map(apic->vcpu->kvm);
> > + ioapic_update_eoi_exitmap(apic->vcpu->kvm);
> > }
> >
> > static inline void kvm_apic_set_ldr(struct kvm_lapic *apic, u32 id)
> > {
> > apic_set_reg(apic, APIC_LDR, id);
> > recalculate_apic_map(apic->vcpu->kvm);
> > + ioapic_update_eoi_exitmap(apic->vcpu->kvm);
> > }
> >
> > static inline int apic_lvt_enabled(struct kvm_lapic *apic, int lvt_type)
> > @@ -345,6 +336,9 @@ static inline int apic_find_highest_irr(struct kvm_lapic *apic)
> > {
> > int result;
> >
> > +	/* Note that irr_pending is just a hint. On platforms with
> > +	 * virtual interrupt delivery support, the vIRR is cleared
> > +	 * by hardware without irr_pending being updated. */
> > if (!apic->irr_pending)
> > return -1;
> >
> > @@ -458,9 +452,13 @@ static void pv_eoi_clr_pending(struct kvm_vcpu *vcpu)
> > __clear_bit(KVM_APIC_PV_EOI_PENDING, &vcpu->arch.apic_attention);
> > }
> >
> > -static inline int apic_find_highest_isr(struct kvm_lapic *apic)
> > +int kvm_apic_find_highest_isr(struct kvm_lapic *apic)
> > {
> > int result;
> > +
> > +	/* Note that isr_count is just a hint. On platforms with
> > +	 * virtual interrupt delivery support, the vISR is set
> > +	 * by hardware without isr_count being updated. */
> > if (!apic->isr_count)
> > return -1;
> > if (likely(apic->highest_isr_cache != -1))
> > @@ -471,6 +469,7 @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic)
> >
> > return result;
> > }
> > +EXPORT_SYMBOL_GPL(kvm_apic_find_highest_isr);
> >
> > static void apic_update_ppr(struct kvm_lapic *apic)
> > {
> > @@ -479,7 +478,7 @@ static void apic_update_ppr(struct kvm_lapic *apic)
> >
> > old_ppr = kvm_apic_get_reg(apic, APIC_PROCPRI);
> > tpr = kvm_apic_get_reg(apic, APIC_TASKPRI);
> > - isr = apic_find_highest_isr(apic);
> > + isr = kvm_apic_find_highest_isr(apic);
> > isrv = (isr != -1) ? isr : 0;
> >
> > if ((tpr & 0xf0) >= (isrv & 0xf0))
> > @@ -740,9 +739,22 @@ int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2)
> > return vcpu1->arch.apic_arb_prio - vcpu2->arch.apic_arb_prio;
> > }
> >
> > +static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector)
> > +{
> > + if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) &&
> > + kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) {
> > + int trigger_mode;
> > + if (apic_test_vector(vector, apic->regs + APIC_TMR))
> > + trigger_mode = IOAPIC_LEVEL_TRIG;
> > + else
> > + trigger_mode = IOAPIC_EDGE_TRIG;
> > + kvm_ioapic_update_eoi(apic->vcpu->kvm, vector, trigger_mode);
> > + }
> > +}
> > +
> > static int apic_set_eoi(struct kvm_lapic *apic)
> > {
> > - int vector = apic_find_highest_isr(apic);
> > + int vector = kvm_apic_find_highest_isr(apic);
> >
> > trace_kvm_eoi(apic, vector);
> >
> > @@ -756,19 +768,26 @@ static int apic_set_eoi(struct kvm_lapic *apic)
> > apic_clear_isr(vector, apic);
> > apic_update_ppr(apic);
> >
> > - if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) &&
> > - kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) {
> > - int trigger_mode;
> > - if (apic_test_vector(vector, apic->regs + APIC_TMR))
> > - trigger_mode = IOAPIC_LEVEL_TRIG;
> > - else
> > - trigger_mode = IOAPIC_EDGE_TRIG;
> > - kvm_ioapic_update_eoi(apic->vcpu->kvm, vector, trigger_mode);
> > - }
> > + kvm_ioapic_send_eoi(apic, vector);
> > kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
> > return vector;
> > }
> >
> > +/*
> > + * This interface assumes a trap-like exit, for which the desired
> > + * side effects, including the vISR and vPPR updates, have already
> > + * been performed by hardware.
> > + */
> > +void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
> > +{
> > + struct kvm_lapic *apic = vcpu->arch.apic;
> > +
> > + trace_kvm_eoi(apic, vector);
> > +
> > + kvm_ioapic_send_eoi(apic, vector);
> > + kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
> > +
> > static void apic_send_ipi(struct kvm_lapic *apic)
> > {
> > u32 icr_low = kvm_apic_get_reg(apic, APIC_ICR);
> > @@ -1071,6 +1090,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
> > if (!apic_x2apic_mode(apic)) {
> > apic_set_reg(apic, APIC_DFR, val | 0x0FFFFFFF);
> > recalculate_apic_map(apic->vcpu->kvm);
> > + ioapic_update_eoi_exitmap(apic->vcpu->kvm);
> > } else
> > ret = 1;
> > break;
> > @@ -1318,6 +1338,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
> > else
> > static_key_slow_inc(&apic_hw_disabled.key);
> > recalculate_apic_map(vcpu->kvm);
> > + ioapic_update_eoi_exitmap(apic->vcpu->kvm);
> > }
> >
> > if (!kvm_vcpu_is_bsp(apic->vcpu))
> > @@ -1375,7 +1396,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu)
> > apic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
> > }
> > apic->irr_pending = false;
> > - apic->isr_count = 0;
> > + apic->isr_count = kvm_apic_vid_enabled(vcpu);
> > apic->highest_isr_cache = -1;
> > update_divide_count(apic);
> > atomic_set(&apic->lapic_timer.pending, 0);
> > @@ -1590,8 +1611,10 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
> > update_divide_count(apic);
> > start_apic_timer(apic);
> > apic->irr_pending = true;
> > - apic->isr_count = count_vectors(apic->regs + APIC_ISR);
> > + apic->isr_count = kvm_apic_vid_enabled(vcpu) ?
> > + 1 : count_vectors(apic->regs + APIC_ISR);
> > apic->highest_isr_cache = -1;
> > + kvm_x86_ops->restore_rvi(vcpu);
> > kvm_make_request(KVM_REQ_EVENT, vcpu);
> > }
> >
> > @@ -1704,7 +1727,7 @@ void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu)
> > max_irr = apic_find_highest_irr(apic);
> > if (max_irr < 0)
> > max_irr = 0;
> > - max_isr = apic_find_highest_isr(apic);
> > + max_isr = kvm_apic_find_highest_isr(apic);
> > if (max_isr < 0)
> > max_isr = 0;
> > data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);
> > diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
> > index 9a8ee22..52c66a5 100644
> > --- a/arch/x86/kvm/lapic.h
> > +++ b/arch/x86/kvm/lapic.h
> > @@ -39,6 +39,7 @@ void kvm_free_lapic(struct kvm_vcpu *vcpu);
> > int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu);
> > int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu);
> > int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu);
> > +int kvm_apic_get_highest_irr(struct kvm_vcpu *vcpu);
> > void kvm_lapic_reset(struct kvm_vcpu *vcpu);
> > u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu);
> > void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8);
> > @@ -60,11 +61,13 @@ void kvm_set_apic_base(struct kvm_vcpu *vcpu, u64 data);
> > void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
> > struct kvm_lapic_state *s);
> > int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
> > +int kvm_apic_find_highest_isr(struct kvm_lapic *apic);
> >
> > u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
> > void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data);
> >
> > void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
> > +void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
> >
> > void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
> > void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
> > @@ -75,6 +78,7 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
> >
> > int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data);
> > int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
> > +bool kvm_apic_present(struct kvm_vcpu *vcpu);
> >
> > static inline bool kvm_hv_vapic_assist_page_enabled(struct kvm_vcpu *vcpu)
> > {
> > @@ -116,14 +120,31 @@ static inline int kvm_apic_sw_enabled(struct kvm_lapic *apic)
> > return APIC_SPIV_APIC_ENABLED;
> > }
> >
> > -static inline bool kvm_apic_present(struct kvm_vcpu *vcpu)
> > +static inline int kvm_lapic_enabled(struct kvm_vcpu *vcpu)
> > {
> > - return kvm_vcpu_has_lapic(vcpu) && kvm_apic_hw_enabled(vcpu->arch.apic);
> > + return kvm_apic_present(vcpu) && kvm_apic_sw_enabled(vcpu->arch.apic);
> > }
> >
> > -static inline int kvm_lapic_enabled(struct kvm_vcpu *vcpu)
> > +static inline bool kvm_apic_vid_enabled(struct kvm_vcpu *vcpu)
> > {
> > - return kvm_apic_present(vcpu) && kvm_apic_sw_enabled(vcpu->arch.apic);
> > + return kvm_x86_ops->has_virtual_interrupt_delivery(vcpu);
> > +}
> > +
> > +static inline u16 apic_cluster_id(struct kvm_apic_map *map, u32 ldr)
> > +{
> > + u16 cid;
> > + ldr >>= 32 - map->ldr_bits;
> > + cid = (ldr >> map->cid_shift) & map->cid_mask;
> > +
> > + BUG_ON(cid >= ARRAY_SIZE(map->logical_map));
> > +
> > + return cid;
> > +}
> > +
> > +static inline u16 apic_logical_id(struct kvm_apic_map *map, u32 ldr)
> > +{
> > + ldr >>= (32 - map->ldr_bits);
> > + return ldr & map->lid_mask;
> > }
> >
> > #endif
> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > index d29d3cd..a8a8a4e 100644
> > --- a/arch/x86/kvm/svm.c
> > +++ b/arch/x86/kvm/svm.c
> > @@ -3571,6 +3571,36 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
> > set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
> > }
> >
> > +static int svm_has_virtual_interrupt_delivery(struct kvm_vcpu *vcpu)
> > +{
> > + return 0;
> > +}
> > +
> > +static void svm_update_eoi_exitmap(struct kvm *kvm, struct kvm_lapic_irq *irq)
> > +{
> > + return;
> > +}
> > +
> > +static void svm_update_exitmap_start(struct kvm_vcpu *vcpu)
> > +{
> > + return;
> > +}
> > +
> > +static void svm_update_exitmap_end(struct kvm_vcpu *vcpu)
> > +{
> > + return;
> > +}
> > +
> > +static void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu)
> > +{
> > + return;
> > +}
> > +
> > +static void svm_restore_rvi(struct kvm_vcpu *vcpu)
> > +{
> > + return;
> > +}
> > +
> > static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
> > {
> > struct vcpu_svm *svm = to_svm(vcpu);
> > @@ -4290,6 +4320,12 @@ static struct kvm_x86_ops svm_x86_ops = {
> > .enable_nmi_window = enable_nmi_window,
> > .enable_irq_window = enable_irq_window,
> > .update_cr8_intercept = update_cr8_intercept,
> > + .has_virtual_interrupt_delivery = svm_has_virtual_interrupt_delivery,
> > + .update_eoi_exitmap = svm_update_eoi_exitmap,
> > + .update_exitmap_start = svm_update_exitmap_start,
> > + .update_exitmap_end = svm_update_exitmap_end,
> > + .load_eoi_exitmap = svm_load_eoi_exitmap,
> > + .restore_rvi = svm_restore_rvi,
> >
> > .set_tss_addr = svm_set_tss_addr,
> > .get_tdp_level = get_npt_level,
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index 730dc13..0c85c7e 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -433,6 +433,9 @@ struct vcpu_vmx {
> >
> > bool rdtscp_enabled;
> >
> > + unsigned long eoi_exit_bitmap[4];
> > + spinlock_t eoi_bitmap_lock;
> > +
> > /* Support for a guest hypervisor (nested VMX) */
> > struct nested_vmx nested;
> > };
> > @@ -771,6 +774,12 @@ static inline bool cpu_has_vmx_apic_register_virt(void)
> > SECONDARY_EXEC_APIC_REGISTER_VIRT;
> > }
> >
> > +static inline bool cpu_has_vmx_virtual_intr_delivery(void)
> > +{
> > + return vmcs_config.cpu_based_2nd_exec_ctrl &
> > + SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
> > +}
> > +
> > static inline bool cpu_has_vmx_flexpriority(void)
> > {
> > return cpu_has_vmx_tpr_shadow() &&
> > @@ -2549,7 +2558,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
> > SECONDARY_EXEC_PAUSE_LOOP_EXITING |
> > SECONDARY_EXEC_RDTSCP |
> > SECONDARY_EXEC_ENABLE_INVPCID |
> > - SECONDARY_EXEC_APIC_REGISTER_VIRT;
> > + SECONDARY_EXEC_APIC_REGISTER_VIRT |
> > + SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
> > if (adjust_vmx_controls(min2, opt2,
> > MSR_IA32_VMX_PROCBASED_CTLS2,
> > &_cpu_based_2nd_exec_control) < 0)
> > @@ -2563,7 +2573,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
> >
> > if (!(_cpu_based_exec_control & CPU_BASED_TPR_SHADOW))
> > _cpu_based_2nd_exec_control &= ~(
> > - SECONDARY_EXEC_APIC_REGISTER_VIRT);
> > + SECONDARY_EXEC_APIC_REGISTER_VIRT |
> > + SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
> >
> > if (_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_EPT) {
> > /* CR3 accesses and invlpg don't need to cause VM Exits when EPT
> > @@ -2762,9 +2773,15 @@ static __init int hardware_setup(void)
> > if (!cpu_has_vmx_ple())
> > ple_gap = 0;
> >
> > - if (!cpu_has_vmx_apic_register_virt())
> > + if (!cpu_has_vmx_apic_register_virt() ||
> > + !cpu_has_vmx_virtual_intr_delivery())
> > enable_apicv_reg_vid = 0;
> >
> > + if (enable_apicv_reg_vid)
> > + kvm_x86_ops->update_cr8_intercept = NULL;
> > + else
> > + kvm_x86_ops->update_apic_irq = NULL;
> > +
> > if (nested)
> > nested_vmx_setup_ctls_msrs();
> >
> > @@ -3845,7 +3862,8 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
> > if (!ple_gap)
> > exec_control &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
> > if (!enable_apicv_reg_vid)
> > - exec_control &= ~SECONDARY_EXEC_APIC_REGISTER_VIRT;
> > + exec_control &= ~(SECONDARY_EXEC_APIC_REGISTER_VIRT |
> > + SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
> > return exec_control;
> > }
> >
> > @@ -3890,6 +3908,16 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
> > vmx_secondary_exec_control(vmx));
> > }
> >
> > + if (enable_apicv_reg_vid) {
> > + vmcs_write64(EOI_EXIT_BITMAP0, 0);
> > + vmcs_write64(EOI_EXIT_BITMAP1, 0);
> > + vmcs_write64(EOI_EXIT_BITMAP2, 0);
> > + vmcs_write64(EOI_EXIT_BITMAP3, 0);
> > + spin_lock_init(&vmx->eoi_bitmap_lock);
> > +
> > + vmcs_write16(GUEST_INTR_STATUS, 0);
> > + }
> > +
> > if (ple_gap) {
> > vmcs_write32(PLE_GAP, ple_gap);
> > vmcs_write32(PLE_WINDOW, ple_window);
> > @@ -4805,6 +4833,16 @@ static int handle_apic_access(struct kvm_vcpu *vcpu)
> > return emulate_instruction(vcpu, 0) == EMULATE_DONE;
> > }
> >
> > +static int handle_apic_eoi_induced(struct kvm_vcpu *vcpu)
> > +{
> > + unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
> > + int vector = exit_qualification & 0xff;
> > +
> > +	/* An EOI-induced VM exit is trap-like, so there is no need to adjust the IP */
> > + kvm_apic_set_eoi_accelerated(vcpu, vector);
> > + return 1;
> > +}
> > +
> > static int handle_apic_write(struct kvm_vcpu *vcpu)
> > {
> > unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
> > @@ -5750,6 +5788,7 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
> > [EXIT_REASON_TPR_BELOW_THRESHOLD] = handle_tpr_below_threshold,
> > [EXIT_REASON_APIC_ACCESS] = handle_apic_access,
> > [EXIT_REASON_APIC_WRITE] = handle_apic_write,
> > + [EXIT_REASON_EOI_INDUCED] = handle_apic_eoi_induced,
> > [EXIT_REASON_WBINVD] = handle_wbinvd,
> > [EXIT_REASON_XSETBV] = handle_xsetbv,
> > [EXIT_REASON_TASK_SWITCH] = handle_task_switch,
> > @@ -6099,6 +6138,142 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
> > vmcs_write32(TPR_THRESHOLD, irr);
> > }
> >
> > +static int vmx_has_virtual_interrupt_delivery(struct kvm_vcpu *vcpu)
> > +{
> > + return enable_apicv_reg_vid;
> > +}
> > +
> > +static void vmx_restore_rvi(struct kvm_vcpu *vcpu)
> > +{
> > + int isr;
> > + u16 status;
> > + u8 old;
> > +
> > + if (!enable_apicv_reg_vid)
> > + return;
> > +
> > + isr = kvm_apic_find_highest_isr(vcpu->arch.apic);
> > + if (isr == -1)
> > + return;
> > +
> > + status = vmcs_read16(GUEST_INTR_STATUS);
> > + old = status >> 8;
> > + if (isr != old) {
> > + status &= 0xff;
> > + status |= isr << 8;
> > + vmcs_write16(GUEST_INTR_STATUS, status);
> > + }
> > +}
> > +
> > +static void vmx_update_rvi(int vector)
> > +{
> > + u16 status;
> > + u8 old;
> > +
> > + status = vmcs_read16(GUEST_INTR_STATUS);
> > + old = (u8)status & 0xff;
> > + if ((u8)vector != old) {
> > + status &= ~0xff;
> > + status |= (u8)vector;
> > + vmcs_write16(GUEST_INTR_STATUS, status);
> > + }
> > +}
> > +
> > +static void vmx_update_apic_irq(struct kvm_vcpu *vcpu, int max_irr)
> > +{
> > + if (max_irr == -1)
> > + return;
> > +
> > + vmx_update_rvi(max_irr);
> > +}
> > +
> > +static void set_eoi_exitmap_one(struct kvm_vcpu *vcpu,
> > + u32 vector)
> > +{
> > + struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +
> > + if (!enable_apicv_reg_vid)
> > + return;
> > +
> > + if (WARN_ONCE((vector > 255),
> > + "KVM VMX: vector (%d) out of range\n", vector))
> > + return;
> > +
> > + set_bit(vector, vmx->eoi_exit_bitmap);
> > +
> > + kvm_make_request(KVM_REQ_EOIBITMAP, vcpu);
> > +}
> > +
> > +void vmx_update_eoi_exitmap(struct kvm *kvm, struct kvm_lapic_irq *irq)
> > +{
> > + struct kvm_vcpu *vcpu;
> > + struct kvm_lapic **dst;
> > + struct kvm_apic_map *map;
> > + unsigned long bitmap = 1;
> > + int i;
> > +
> > + rcu_read_lock();
> > + map = rcu_dereference(kvm->arch.apic_map);
> > +
> > + if (unlikely(!map)) {
> > + kvm_for_each_vcpu(i, vcpu, kvm)
> > + set_eoi_exitmap_one(vcpu, irq->vector);
> > + goto out;
> > + }
>
> The suggestion was
>
>
> ioapic_write (or any other ioapic update)
> 	lock()
> 	perform update
> 	make_all_vcpus_request(KVM_REQ_UPDATE_EOI_BITMAP) (*)
> 	unlock()
>
> (*) Similarly to TLB flush.
>
> The advantage is that all work becomes vcpu local. The end result
> is much simpler code.
What complexity will it remove?
--
Gleb.
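
For reference, the suggested flow maps onto KVM's existing request
machinery roughly as follows. This is a sketch only: the request name
KVM_REQ_UPDATE_EOI_BITMAP and make_all_vcpus_request() are taken from
the suggestion quoted above, and recalculate_eoi_exitmap() is a
hypothetical per-vcpu helper, not code from this patch.

	/* On any ioapic update: take the ioapic lock, apply the change,
	 * then request a bitmap update on every vcpu, the same way
	 * remote TLB flushes are requested. */
	static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
	{
		spin_lock(&ioapic->lock);
		/* ... update the redirection table entry ... */
		make_all_vcpus_request(ioapic->kvm, KVM_REQ_UPDATE_EOI_BITMAP);
		spin_unlock(&ioapic->lock);
	}

	/* In the vcpu entry path, each vcpu then recomputes and loads
	 * its own EOI exit bitmap: */
	if (kvm_check_request(KVM_REQ_UPDATE_EOI_BITMAP, vcpu)) {
		recalculate_eoi_exitmap(vcpu);		/* scan the ioapic, vcpu-locally */
		kvm_x86_ops->load_eoi_exitmap(vcpu);	/* write the EOI_EXIT_BITMAP fields */
	}

Under this scheme each bitmap is written only by its owning vcpu, so
the per-vmx eoi_exit_bitmap would presumably no longer need the
eoi_bitmap_lock or the update_exitmap_start/end hooks that the patch
adds.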