From: Brian Woods <brian.woods@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Brian Woods <brian.woods@amd.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86: use VMLOAD for PV context switch
Date: Fri, 17 Aug 2018 09:55:36 -0500
Message-ID: <20180817145536.GB13654@amd.com>
In-Reply-To: <5B767A5702000078001DF3AD@prv1-mh.provo.novell.com>
On Fri, Aug 17, 2018 at 01:33:43AM -0600, Jan Beulich wrote:
> >>> On 17.08.18 at 00:04, <brian.woods@amd.com> wrote:
> > On Tue, Jul 10, 2018 at 04:14:11AM -0600, Jan Beulich wrote:
> >> Having noticed that VMLOAD alone is about as fast as a single of the
> >> involved WRMSRs, I thought it might be a reasonable idea to also use it
> >> for PV. Measurements, however, have shown that an actual improvement can
> >> be achieved only with an early prefetch of the VMCB (thanks to Andrew
> >> for suggesting to try this), which I have to admit I can't really
> >> explain. This way on my Fam15 box context switch takes over 100 clocks
> >> less on average (the measured values are heavily varying in all cases,
> >> though).
> >>
> >> This is intentionally not using a new hvm_funcs hook: For one, this is
> >> all about PV, and something similar can hardly be done for VMX.
> >> Furthermore the indirect to direct call patching that is meant to be
> >> applied to most hvm_funcs hooks would be ugly to make work with
> >> functions having more than 6 parameters.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >
> > I have confirmed with a senior hardware engineer that using VMLOAD
> > with PV in this fashion is safe and recommended for performance.
> >
> > Acked-by: Brian Woods <brian.woods@amd.com>
>
> Thanks. There's another aspect in this same area that I'd like to
> improve, and hence seek clarification on up front: Currently SVM
> code uses two pages per CPU, one for host_vmcb and the other
> for hsa. Afaict the two uses are entirely disjoint: The host save
> area looks to be simply yet another VMCB, and the parts accessed
> during VMRUN / VM exit are fully separate from the ones used by
> VMLOAD / VMSAVE. Therefore I think both could be folded,
> reducing code size as well as memory (and perhaps cache) footprint.
>
> I think this separation was done because the PM mentions both
> data structures separately, but iirc there's nothing said anywhere
> that the two structures indeed need to be distinct.
>
> Jan
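
For anyone following along, the idea being acked is roughly the following
(a minimal sketch with made-up names and field layout, not the actual
patch): prefetch the per-CPU host VMCB early in the context-switch path so
it is cache-hot, fill in the fields VMLOAD consumes for the incoming PV
vCPU, and then issue a single VMLOAD instead of a series of WRMSRs and
segment register loads.

#include <xen/percpu.h>
#include <xen/sched.h>
#include <asm/hvm/svm/vmcb.h>

/* Sketch only: the per-CPU variables and struct/field names are illustrative. */
static DEFINE_PER_CPU(struct vmcb_struct *, host_vmcb_va);
static DEFINE_PER_CPU(paddr_t, host_vmcb_pa);

/* Called as early as possible in the context-switch path, so the VMCB is
 * already in the cache by the time VMLOAD executes. */
static inline void svm_load_segs_prefetch_sketch(void)
{
    __builtin_prefetch(this_cpu(host_vmcb_va), 1 /* write */);
}

static void svm_load_segs_sketch(const struct vcpu *n)
{
    struct vmcb_struct *vmcb = this_cpu(host_vmcb_va);

    /*
     * State VMLOAD loads: FS/GS/LDTR/TR, KERNEL_GS_BASE, STAR/LSTAR/CSTAR/
     * SFMASK and the SYSENTER MSRs.  Only a few fields shown here.
     */
    vmcb->fs.base    = n->arch.pv.fs_base;
    vmcb->gs.base    = n->arch.pv.gs_base_kernel;
    vmcb->kerngsbase = n->arch.pv.gs_base_user;

    /* VMLOAD takes the VMCB's physical address in rAX (encoding 0f 01 da). */
    asm volatile ( ".byte 0x0f, 0x01, 0xda" /* vmload */
                   :: "a" (this_cpu(host_vmcb_pa)) : "memory" );
}

As noted in the patch description, VMLOAD itself costs about as much as a
single WRMSR, so the measured gain only shows up when the prefetch happens
early enough in the switch path.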
On the folding question, from APM Vol 2:
15.30.4 VM_HSAVE_PA MSR (C001_0117h)
The 64-bit read/write VM_HSAVE_PA MSR holds the physical address of a
4KB block of memory where VMRUN saves host state, and from which
#VMEXIT reloads host state. The VMM software is expected to set up this
register before issuing the first VMRUN instruction. Software must not
attempt to read or write the host save-state area directly.
Writing this MSR causes a #GP if:
• any of the low 12 bits of the address written are nonzero, or
• the address written is greater than or equal to the maximum
supported physical address for this implementation.
It seems the HSA is needed to hold the host state across VMRUN/#VMEXIT,
and software must not access it directly.  I don't see how the two
structures can be folded together.  Am I missing something?
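
In code terms, the setup the quoted text describes would look roughly like
the sketch below (Xen-style names, for illustration only): a 4KB,
page-aligned block per CPU whose machine address is written to VM_HSAVE_PA
(C001_0117h) before the first VMRUN, and which software never reads or
writes afterwards.

#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/domain_page.h>
#include <asm/msr.h>

/* Sketch only: per-CPU bookkeeping and teardown elided. */
static int svm_setup_hsa_sketch(void)
{
    /* One naturally 4KB-aligned page. */
    struct page_info *pg = alloc_domheap_page(NULL, 0);

    if ( !pg )
        return -ENOMEM;

    clear_domain_page(page_to_mfn(pg));

    /*
     * Page-aligned and a real machine address, so neither of the #GP
     * conditions quoted above can trigger.
     */
    wrmsrl(MSR_K8_VM_HSAVE_PA, page_to_maddr(pg));

    return 0;
}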
--
Brian Woods