From: Sheng Yang <sheng@linux.intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Keir Fraser <keir.fraser@eu.citrix.com>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>,
"xen-devel" <xen-devel@lists.xensource.com>,
Eddie Dong <eddie.dong@intel.com>,
linux-kernel@vger.kernel.org,
Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support
Date: Thu, 17 Sep 2009 16:59:47 +0800 [thread overview]
Message-ID: <200909171659.48569.sheng@linux.intel.com> (raw)
In-Reply-To: <20090916133104.GB14725@phenom.dumpdata.com>
On Wednesday 16 September 2009 21:31:04 Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 16, 2009 at 04:42:21PM +0800, Sheng Yang wrote:
> > Hi, Keir & Jeremy
> >
> > This patchset enabled Xen Hybrid extension support.
> >
> > As we know, PV guests have a performance issue on x86_64: the guest
> > kernel and userspace reside in the same ring, so the TLB flushes
> > required when switching between guest userspace and guest kernel add
> > overhead, and syscalls become much more expensive as well. The Hybrid
> > Extension eliminates this overhead by putting the guest kernel back in
> > (non-root) ring 0, thereby achieving better performance than a PV guest.
>
> What was the overhead? Is there a step-by-step list of operations you did
> to figure out the performance numbers?
The overhead I mentioned is that in an x86_64 PV guest, every syscall first
traps to the hypervisor, which then forwards it to the guest kernel; finally
the guest kernel returns to guest userspace. Because the hypervisor is
involved, there is unavoidable overhead, and every such transition results in
a TLB flush. In a 32-bit PV guest, the guest uses int 0x82 to emulate
syscalls; since the gate can specify the privilege level, the hypervisor
doesn't need to be involved.
And sorry, I don't have a step-by-step list for the performance tuning. All
of the above is a known issue of x86_64 PV guests.
>
> I am asking this b/c at some point I would like to compare the pv-ops vs
> native and I am not entirely sure what is the best way to do this.
Sorry, I don't have much advice on this. If you mean tuning, what I would
propose is just running some microbenchmarks (lmbench is a favorite of mine),
collecting the (guest) hot functions with xenoprofile, and comparing the
results of native and pv-ops to figure out the gap...
--
regards
Yang, Sheng
Thread overview: 24+ messages
2009-09-16 8:42 [RFC][PATCH 0/10] Xen Hybrid extension support Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 01/10] xen/pvhvm: add support for hvm_op Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 02/10] xen/hybrid: Import cpuid.h from Xen Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 03/10] xen/hybrid: Xen Hybrid Extension initialization Sheng Yang
2009-09-16 20:24 ` [Xen-devel] " Jeremy Fitzhardinge
2009-09-17 6:22 ` Keir Fraser
2009-09-17 16:46 ` Jeremy Fitzhardinge
2009-09-16 8:42 ` [RFC][PATCH 04/10] xen/hybrid: Modify pv_init_ops and xen_info Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 05/10] xen/hybrid: Add PV halt support Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 06/10] xen/hybrid: Add shared_info page for xen Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 07/10] xen/hybrid: Add PV timer support Sheng Yang
2009-09-16 20:25 ` [Xen-devel] " Jeremy Fitzhardinge
2009-09-17 5:54 ` Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 08/10] x86: Don't ack_APIC_irq() if lapic is disabled in GENERIC_INTERRUPT_VECTOR handler Sheng Yang
2009-09-16 8:58 ` Cyrill Gorcunov
2009-09-16 9:03 ` Cyrill Gorcunov
2009-09-16 9:37 ` Cyrill Gorcunov
2009-09-17 3:54 ` Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 09/10] xen/hybrid: Make event channel work with QEmu emulated devices Sheng Yang
2009-09-16 20:35 ` [Xen-devel] " Jeremy Fitzhardinge
2009-09-17 5:58 ` Sheng Yang
2009-09-16 8:42 ` [RFC][PATCH 10/10] xen/hybrid: Enable grant table and xenbus Sheng Yang
2009-09-16 13:31 ` [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support Konrad Rzeszutek Wilk
2009-09-17 8:59 ` Sheng Yang [this message]