From: Ingo Molnar
Subject: Re: [RFC] VMX CR3 cache
Date: Mon, 28 Jan 2008 18:17:34 +0100
Message-ID: <20080128171734.GA19705@elte.hu>
In-Reply-To: <20080128160444.GA3821@dmt>
To: Marcelo Tosatti
Cc: kvm-devel, Avi Kivity

* Marcelo Tosatti wrote:

> lat_ctx numbers (output is "nr-procs overhead-in-us"):
>
> cr3-cache:
> "size=0k ovr=1.30
> 2 6.63
> "size=0k ovr=1.31
> 4 7.43
> "size=0k ovr=1.32
> 8 11.02

when I did the testing I was able to get zero VM exits in the lat_ctx
hotpath, and hence the same performance as on native. The numbers you
got above suggest that this is not the case. (Your lat_ctx numbers on
native are probably around 1 usec, right?)

If that is the case, could you check via profile=kvm (or whatever
other method you use to profile KVM) where the VM exits come from?

	Ingo
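Ps. for reference, what lat_ctx measures boils down to a pipe
ping-pong between processes, with each round trip costing two context
switches. Below is a minimal sketch of that technique; this is not
lmbench's actual code, and the ITERS count and timing details are
made-up illustration values:

/*
 * Minimal lat_ctx-style context switch benchmark (a sketch, not
 * lmbench's implementation): two processes ping-pong one byte
 * through a pipe pair, so each round trip costs two context
 * switches. ITERS is an arbitrary illustration value.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define ITERS 100000

static double now_us(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
	int ping[2], pong[2];
	char c = 'x';
	int i;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	switch (fork()) {
	case -1:
		perror("fork");
		return 1;
	case 0: /* child: echo every byte straight back */
		for (i = 0; i < ITERS; i++) {
			if (read(ping[0], &c, 1) != 1)
				break;
			if (write(pong[1], &c, 1) != 1)
				break;
		}
		_exit(0);
	default: {
		double t0 = now_us(), t1;

		for (i = 0; i < ITERS; i++) {
			write(ping[1], &c, 1);
			read(pong[0], &c, 1);
		}
		t1 = now_us();
		wait(NULL);
		/* two context switches per round trip */
		printf("%.2f us per context switch\n",
		       (t1 - t0) / ITERS / 2);
	}
	}
	return 0;
}

with the CR3 cache fully eliminating the VM exits in this path, the
printed figure in the guest should approach the ~1 usec native number.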