From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [RFC] Hypervisor, x86 emulation deprivileged
Date: Tue, 5 Jul 2016 18:01:28 +0100
Message-ID: <20160705170127.GB1729@perard.uk.xensource.com>
In-Reply-To: <577BCC0F02000078000FB38F@prv-mh.provo.novell.com>
On Tue, Jul 05, 2016 at 07:02:39AM -0600, Jan Beulich wrote:
> >>> On 05.07.16 at 13:22, <anthony.perard@citrix.com> wrote:
> > Hi,
> >
> > I've taken over the work from Ben to have a deprivileged mode in the
> > hypervisor, but I'm unsure about which direction to take.
> >
> > First, after understanding what had been done, and fixing a few things,
> > I ran some benchmarks to compare a simple "device" running in ring0 with
> > the same one running in ring3, and also in QEMU. This "device" calls
> > 'rdtsc' on 'outl' and returns the value on 'inl' (I do not actually use
> > the value). The measurement is done from a kernel module in the guest
> > (simply rdtsc;inl;rdtsc, multiple times). These are the results I found:
> >
> > ring3  ~3.5x slower than ring0
> > qemu   ~22x  slower than ring0
> >        (~6.5x slower than ring3)
> >
> > So that would be the worst-case scenario, where the emulator does
> > barely any work.
> >
> >
> > Different methods have been proposed for the depriv mode, in
> > <55A8D477.2060909@citrix.com>, one of which was to implement a per-vcpu
> > stack, which could be more elegant.
>
> Sadly my mail frontend doesn't let me search for message IDs (and a
> mail this old would have been purged by now anyway), so I think (also
> considering how much time has passed) it would be better if you
> actually summarized where things stopped back then.
https://lists.xen.org/archives/html/xen-devel/2015-07/msg03507.html
It was said back then that a per-vcpu stack would be too much work for
a short-term project.
> > So, would you suggest that I start working on a per-vcpu stack? Or
> > should I continue with the current direction?
>
> Was there any reason to assume that using per-vCPU stacks would
> meaningfully improve the above numbers?
Probably not. I guess the context switch alone takes most of the time,
and it does not matter where the stack is or whether there is a copy of
it.
> I'm not sure pursuing this
> idea is really useful if more than a marginal performance degradation
> results.
Maybe the instruction emulator is big enough that the impact of a
context switch would not matter as much? I don't know much about it, so
I cannot guess how much code it actually runs.
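
For reference, the guest-side measurement mentioned above was roughly
the following (a minimal sketch only, not the actual module: the port
number, iteration count and read_tsc() helper are placeholders made up
for illustration):

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/io.h>

#define TEST_PORT  0x510    /* placeholder port, not the one really used */
#define ITERATIONS 100000   /* placeholder iteration count */

/* Serialization around rdtsc is omitted to keep the sketch short. */
static inline unsigned long long read_tsc(void)
{
    unsigned int lo, hi;

    asm volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}

static int __init bench_init(void)
{
    unsigned long long start, end, total = 0;
    unsigned int i;

    for (i = 0; i < ITERATIONS; i++) {
        start = read_tsc();       /* rdtsc */
        (void)inl(TEST_PORT);     /* inl, trapping to the emulator */
        end = read_tsc();         /* rdtsc */
        total += end - start;
    }

    pr_info("average cycles per inl: %llu\n", total / ITERATIONS);
    return 0;
}

static void __exit bench_exit(void)
{
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");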
--
Anthony PERARD
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel