From: Brian Woods <brian.woods@amd.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
jbeulich@suse.com, suravee.suthikulpanit@amd.com
Subject: Re: [PATCH 3/3] x86/svm: add virtual VMLOAD/VMSAVE support
Date: Tue, 31 Oct 2017 17:29:37 -0500
Message-ID: <20171031222937.GA1646@amd.com>
In-Reply-To: <4722086f-af59-2711-5d08-9b941097eed1@citrix.com>
[-- Attachment #1: Type: text/plain, Size: 558 bytes --]
On Tue, Oct 31, 2017 at 10:15:08PM +0000, Andrew Cooper wrote:
>
> The style in this file is quite hit and miss, but we expect new code to
> conform to the standards. In this case, the correct style is:
>
> if ( cpu_has_svm_vloadsave )
> {
>
> This can be fixed on commit if there are no other comments.
>
> All 3 patches Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> ~Andrew
>
My mistake. Years of Linux kernel style has made it a habit. I'll
make sure to double-check next time. Attached is the git format-patch
for that commit.
--
Brian Woods
[-- Attachment #2: 0003-x86-svm-add-virtual-VMLOAD-VMSAVE-support.patch --]
[-- Type: text/x-diff, Size: 3260 bytes --]
From b0d7916a5a35096cb7309922176631f7e57efdf1 Mon Sep 17 00:00:00 2001
From: Brian Woods <brian.woods@amd.com>
Date: Tue, 31 Oct 2017 14:13:01 -0500
Subject: [PATCH 3/3] x86/svm: add virtual VMLOAD/VMSAVE support
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
On AMD family 17h server processors, there is a feature called virtual
VMLOAD/VMSAVE. It allows a nested hypervisor to perform a VMLOAD or
VMSAVE without the instruction being intercepted by the host
hypervisor. Virtual VMLOAD/VMSAVE requires the host hypervisor to be
in long mode and nested page tables to be enabled. For more
information, see:
AMD64 Architecture Programmer’s Manual Volume 2: System Programming
http://support.amd.com/TechDocs/24593.pdf
Section 15.33.1: VMSAVE and VMLOAD Virtualization
This patch series adds support for detecting and enabling the virtual
VMLOAD/VMSAVE feature when it is available.
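As a rough illustration (not part of this patch), the underlying CPUID
feature bit can be probed from userspace along these lines, assuming
V_VMSAVE_VMLOAD is bit 15 of CPUID Fn8000_000A EDX as documented in
the APM:

/*
 * Illustrative sketch only: probe the SVM feature leaf for virtual
 * VMLOAD/VMSAVE support (CPUID Fn8000_000A, EDX bit 15).
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* __get_cpuid() returns 0 if the requested leaf is unsupported. */
    if ( !__get_cpuid(0x8000000a, &eax, &ebx, &ecx, &edx) )
    {
        printf("CPUID Fn8000_000A not available\n");
        return 1;
    }

    printf("virtual VMLOAD/VMSAVE: %ssupported\n",
           (edx & (1u << 15)) ? "" : "not ");

    return 0;
}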
Signed-off-by: Brian Woods <brian.woods@amd.com>
---
xen/arch/x86/hvm/svm/svm.c | 1 +
xen/arch/x86/hvm/svm/svmdebug.c | 2 ++
xen/arch/x86/hvm/svm/vmcb.c | 8 ++++++++
3 files changed, 11 insertions(+)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index c8ffb17515..60b1288a31 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1669,6 +1669,7 @@ const struct hvm_function_table * __init start_svm(void)
P(cpu_has_svm_nrips, "Next-RIP Saved on #VMEXIT");
P(cpu_has_svm_cleanbits, "VMCB Clean Bits");
P(cpu_has_svm_decode, "DecodeAssists");
+ P(cpu_has_svm_vloadsave, "Virtual VMLOAD/VMSAVE");
P(cpu_has_pause_filter, "Pause-Intercept Filter");
P(cpu_has_tsc_ratio, "TSC Rate MSR");
#undef P
diff --git a/xen/arch/x86/hvm/svm/svmdebug.c b/xen/arch/x86/hvm/svm/svmdebug.c
index 89ef2db932..7145e2f5ca 100644
--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -55,6 +55,8 @@ void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb)
vmcb->exitinfo1, vmcb->exitinfo2);
printk("np_enable = %#"PRIx64" guest_asid = %#x\n",
vmcb_get_np_enable(vmcb), vmcb_get_guest_asid(vmcb));
+ printk("virtual vmload/vmsave = %d virt_ext = %#"PRIx64"\n",
+ vmcb->virt_ext.fields.vloadsave_enable, vmcb->virt_ext.bytes);
printk("cpl = %d efer = %#"PRIx64" star = %#"PRIx64" lstar = %#"PRIx64"\n",
vmcb_get_cpl(vmcb), vmcb_get_efer(vmcb), vmcb->star, vmcb->lstar);
printk("CR0 = 0x%016"PRIx64" CR2 = 0x%016"PRIx64"\n",
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 997e7597e0..eccc1e28bf 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -200,6 +200,14 @@ static int construct_vmcb(struct vcpu *v)
/* PAT is under complete control of SVM when using nested paging. */
svm_disable_intercept_for_msr(v, MSR_IA32_CR_PAT);
+
+ /* use virtual VMLOAD/VMSAVE if available */
+ if ( cpu_has_svm_vloadsave )
+ {
+ vmcb->virt_ext.fields.vloadsave_enable = 1;
+ vmcb->_general2_intercepts &= ~GENERAL2_INTERCEPT_VMLOAD;
+ vmcb->_general2_intercepts &= ~GENERAL2_INTERCEPT_VMSAVE;
+ }
}
else
{
--
2.11.0
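For context, the virt_ext control touched in the hunks above sits at
offset 0x90 of the VMCB: per the APM, bit 0 is the LBR virtualization
enable and bit 1 is the virtualized VMLOAD/VMSAVE enable. A minimal
sketch of such a layout, which may not match Xen's vmcb.h declaration
exactly:

#include <stdint.h>

/*
 * Sketch of a virt_ext-style control word (VMCB offset 0x90). Bit
 * positions follow the APM; this is illustrative only, not Xen's
 * exact declaration.
 */
union virt_ext_t {
    uint64_t bytes;
    struct {
        uint64_t lbr_enable:1;        /* bit 0: LBR virtualization */
        uint64_t vloadsave_enable:1;  /* bit 1: virtual VMLOAD/VMSAVE */
        uint64_t reserved:62;
    } fields;
};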