From: Mukesh Rathor
Subject: Re: HYBRID: PV in HVM container
Date: Thu, 30 Jun 2011 18:54:31 -0700
Message-ID: <20110630185431.3ea308c6@mantra.us.oracle.com>
In-Reply-To: <20110627122404.23d2d0ce@mantra.us.oracle.com>
To: Mukesh Rathor
Cc: Ian Campbell, "Xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

On Mon, 27 Jun 2011 12:24:04 -0700 Mukesh Rathor wrote:

> Hi guys,
>
> Cheers!! I got the PV in HVM container prototype working with a single
> VCPU (pinned to a CPU). Basically, I create a VMX container just like
> for an HVM guest (with some differences that I'll share soon when I
> clean up the code). The PV guest starts in protected mode at the usual
> entry point, startup_xen().
>
> 0. The guest kernel runs in ring 0, CS:0x10.

JFYI: as expected, since the kernel runs in ring 0 and syscalls are not
bounced through Xen, syscalls do very well. fork/exec are slow, probably
because VPIDs are turned off right now. I'm trying to figure VPIDs out,
and hopefully that will help. BTW, don't compare to anything else; both
kernels below are unoptimized debug kernels.

LMbench:

Processor, Processes - times in microseconds - smaller is better
----------------------------------------------------------------
Host    OS             Mhz  null null      open selct sig  sig  fork exec sh
                            call  I/O stat clos TCP   inst hndl proc proc proc
------- -------------- ---- ---- ---- ---- ---- ----- ---- ---- ---- ---- ----
STOCK   Linux 2.6.39+  2771 0.68 0.91 2.13 4.45 4.251 0.82 3.87 433. 1134 3145
HYBRID  Linux 2.6.39m  2745 0.13 0.22 0.88 2.04 3.287 0.28 1.11 526. 1393 3923

thanks,
Mukesh