From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tim Deegan
Subject: Re: [PATCH 1/1 V4] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
Date: Mon, 29 Jul 2013 11:43:34 +0100
Message-ID: <20130729104334.GA37169@ocelot.phlegethon.org>
References: <1374875167-2834-1-git-send-email-suravee.suthikulpanit@amd.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <1374875167-2834-1-git-send-email-suravee.suthikulpanit@amd.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: suravee.suthikulpanit@amd.com
Cc: chegger@amazon.de, JBeulich@suse.com, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

Hi,

At 16:46 -0500 on 26 Jul (1374857167), suravee.suthikulpanit@amd.com wrote:
> From: Suravee Suthikulpanit
>
> Fix assertion in __virt_to_maddr when starting nested SVM guest
> in debug mode. Investigation has shown that svm_vmsave/svm_vmload
> make use of __pa() with invalid address.
>
> Signed-off-by: Suravee Suthikulpanit

This looks much better, but I have a few comments still:

> +static struct page_info *
> +_get_vmcb_page(struct domain *d, uint64_t vmcbaddr)

Can you give this a name that makes it clearer that it's for nested
VMCBs and not part of the handling of 'real' VMCBs?  Also, please drop
the leading underscore.

> +{
> +    struct page_info *page;
> +    p2m_type_t p2mt;
> +
> +    page = get_page_from_gfn(d, vmcbaddr >> PAGE_SHIFT,
> +                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
> +
> +    if (!page)

Missing whitespace.
> +        return NULL;
> +
> +    if ( !p2m_is_ram(p2mt) || p2m_is_readonly(p2mt) )
> +    {
> +        put_page(page);
> +        return NULL;
> +    }
> +
> +    return page;
> +}
> +
>  static void
>  svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
>                       struct cpu_user_regs *regs,
> @@ -1802,6 +1823,7 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
>  {
>      int ret;
>      unsigned int inst_len;
> +    struct page_info *page;
>      struct nestedvcpu *nv = &vcpu_nestedhvm(v);
>
>      if ( (inst_len = __get_instruction_length(v, INSTR_VMLOAD)) == 0 )
> @@ -1819,7 +1841,19 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
>          goto inject;
>      }
>
> -    svm_vmload(nv->nv_vvmcx);
> +    /* Need to translate L1-GPA to MPA */
> +    page = _get_vmcb_page(v->domain, nv->nv_vvmcxaddr);
> +    if (!page)

Whitespace.

> +    {
> +        gdprintk(XENLOG_ERR,
> +                 "VMLOAD: mapping vmcb L1-GPA to MPA failed, injecting #UD\n");
> +        ret = TRAP_invalid_op;

The documentation for VMLOAD suggests TRAP_gp_fault for this case.

> +        goto inject;
> +    }
> +
> +    svm_vmload_pa(page_to_mfn(page) << PAGE_SHIFT);

Please use page_to_maddr() for this.

> +    put_page(page);
> +
>      /* State in L1 VMCB is stale now */
>      v->arch.hvm_svm.vmcb_in_sync = 0;
>
> @@ -1838,6 +1872,7 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
>  {
>      int ret;
>      unsigned int inst_len;
> +    struct page_info *page;
>      struct nestedvcpu *nv = &vcpu_nestedhvm(v);
>
>      if ( (inst_len = __get_instruction_length(v, INSTR_VMSAVE)) == 0 )
> @@ -1855,8 +1890,18 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
>          goto inject;
>      }
>
> -    svm_vmsave(nv->nv_vvmcx);
> +    /* Need to translate L1-GPA to MPA */
> +    page = _get_vmcb_page(v->domain, nv->nv_vvmcxaddr);
> +    if (!page)

Whitespace.

> +    {
> +        gdprintk(XENLOG_ERR,
> +                 "VMSAVE: mapping vmcb L1-GPA to MPA failed, injecting #UD\n");
> +        ret = TRAP_invalid_op;
> +        goto inject;
> +    }
>
> +    svm_vmsave_pa(page_to_mfn(page) << PAGE_SHIFT);

Again, #GP, and page_to_maddr().

Thanks,

Tim.