From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [PATCH 1/1 V5] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
Date: Thu, 8 Aug 2013 11:42:16 -0500
Message-ID: <5203CA68.50005@amd.com>
To: Tim Deegan <tim@xen.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, chegger@amazon.de, Jan Beulich

On 8/8/2013 4:38 AM, Tim Deegan wrote:
> At 14:17 +0100 on 07 Aug (1375885025), Jan Beulich wrote:
>>>>> On 05.08.13 at 10:31, <suravee.suthikulpanit@amd.com> wrote:
>>> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>>>
>>> Fix assertion in __virt_to_maddr when starting a nested SVM guest
>>> in debug mode. Investigation has shown that svm_vmsave/svm_vmload
>>> make use of __pa() with an invalid address.
>>>
>>> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>> Tim - have all your earlier comments been addressed in this version?
> Yes, I'm happy with this one.
>
> Reviewed-by: Tim Deegan <tim@xen.org>
>
>>> -    if (!nestedhvm_enabled(v->domain)) {
>>> +    if ( !nestedhvm_enabled(v->domain) || !hvm_svm_enabled(v) ) {
>> Suravee, why is this change needed (here and further down)?
>> Can we really get here when hvm_svm_enabled(v) returns false?
>> I don't recall this having been there in earlier versions.
> This came from discussion of what fault to inject -- we always intercept
> VM{RUN,LOAD,SAVE} so I think we can get here.  The AMD docs for those say:
>   "Checks exceptions (#GP) before the intercept."
> but nothing about checking guest_efer.SVME, so AFAICT we have to do that
> in Xen.
>
> Arguably this fix could be a separate patch.  Certainly the same
> check ought to go into svm_exit_do_vmrun().
>
> Tim.

Here, "nestedhvm_enabled(v->domain)" is implemented as

/* Nested HVM on/off per domain */
bool_t
nestedhvm_enabled(struct domain *d)
{
    return is_hvm_domain(d) &&
           d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM];
}

I'm not familiar with this, but I believe this is the option in the HVM
config file, "nestedhvm = 1".

Suravee