From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: Re: [PATCH v2 1/5] VMX: fix interaction of APIC-V and Viridian emulation
Date: Mon, 24 Jun 2013 11:10:40 +0100
Message-ID: <51C81B20.7040000@eu.citrix.com>
References: <51C8092A02000078000DFDA0@nat28.tlf.novell.com>
 <51C80B7602000078000DFDAE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <51C80B7602000078000DFDAE@nat28.tlf.novell.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich
Cc: Keir Fraser, Eddie Dong, xen-devel, paul.durrant@citrix.com,
 Jun Nakajima, Yang Z Zhang
List-Id: xen-devel@lists.xenproject.org

On 24/06/13 08:03, Jan Beulich wrote:
> Viridian using a synthetic MSR for issuing EOI notifications bypasses
> the normal in-processor handling, which would clear
> GUEST_INTR_STATUS.SVI. Hence we need to do this in software in order
> for future interrupts to get delivered.
>
> Based on analysis by Yang Z Zhang.
>
> Signed-off-by: Jan Beulich

Hmm... so there are three paths which may end up calling this VMX EOI
code -- from viridian.c:wrmsr_viridian_regs(), from
vlapic.c:vlapic_reg_write(), and from vmx_handle_eoi_write().

Obviously the viridian code is what we want.  But which of the other
two paths will also end up taking it, and is it correct?  In other
words, for which of those will cpu_has_vmx_virtual_intr_delivery be
set?  (Rough sketch of the Viridian path below the quoted patch, for
reference.)

 -George

> ---
> v2: Split off cleanup parts to new patch 3.
>
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -386,6 +386,9 @@ void vlapic_EOI_set(struct vlapic *vlapi
>
>      vlapic_clear_vector(vector, &vlapic->regs->data[APIC_ISR]);
>
> +    if ( hvm_funcs.handle_eoi )
> +        hvm_funcs.handle_eoi(vector);
> +
>      if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>          vioapic_update_EOI(vlapic_domain(vlapic), vector);
>
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1502,6 +1502,15 @@ static void vmx_sync_pir_to_irr(struct v
>          vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>  }
>
> +static void vmx_handle_eoi(u8 vector)
> +{
> +    unsigned long status = __vmread(GUEST_INTR_STATUS);
> +
> +    /* We need to clear the SVI field. */
> +    status &= VMX_GUEST_INTR_STATUS_SUBFIELD_BITMASK;
> +    __vmwrite(GUEST_INTR_STATUS, status);
> +}
> +
>  static struct hvm_function_table __initdata vmx_function_table = {
>      .name                 = "VMX",
>      .cpu_up_prepare       = vmx_cpu_up_prepare,
> @@ -1554,6 +1563,7 @@ static struct hvm_function_table __initd
>      .process_isr          = vmx_process_isr,
>      .deliver_posted_intr  = vmx_deliver_posted_intr,
>      .sync_pir_to_irr      = vmx_sync_pir_to_irr,
> +    .handle_eoi           = vmx_handle_eoi,
>      .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
>  };
>
> @@ -1580,7 +1590,10 @@ const struct hvm_function_table * __init
>
>          setup_ept_dump();
>      }
> -
> +
> +    if ( !cpu_has_vmx_virtual_intr_delivery )
> +        vmx_function_table.handle_eoi = NULL;
> +
>      if ( cpu_has_vmx_posted_intr_processing )
>          alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
>      else
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -186,6 +186,7 @@ struct hvm_function_table {
>      void (*process_isr)(int isr, struct vcpu *v);
>      void (*deliver_posted_intr)(struct vcpu *v, u8 vector);
>      void (*sync_pir_to_irr)(struct vcpu *v);
> +    void (*handle_eoi)(u8 vector);
>
>      /*Walk nested p2m */
>      int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
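
For reference, the Viridian path I'm thinking of is the synthetic EOI
MSR write.  As far as I recall it looks roughly like this -- a sketch
from memory, so the exact names, the MSR constant and the signature
may be off:

    /* xen/arch/x86/hvm/viridian.c -- rough sketch, not the literal code */
    int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
    {
        struct vcpu *v = current;

        switch ( idx )
        {
        case VIRIDIAN_MSR_EOI:
            /*
             * An EOI issued via the synthetic MSR never goes through the
             * in-processor (APIC-V) EOI handling, so SVI in
             * GUEST_INTR_STATUS stays set unless vlapic_EOI_set() now
             * clears it via the new hvm_funcs.handle_eoi() hook.
             */
            vlapic_EOI_set(vcpu_vlapic(v));
            break;

        /* ... other synthetic MSRs elided ... */
        }

        return 1;
    }

So with the patch, anything that funnels into vlapic_EOI_set() -- the
Viridian MSR above, an APIC EOI register write via vlapic_reg_write(),
or vmx_handle_eoi_write() -- will take vmx_handle_eoi() whenever the
hook is installed, i.e. whenever cpu_has_vmx_virtual_intr_delivery is
true; hence the question about the two non-Viridian callers.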