From mboxrd@z Thu Jan  1 00:00:00 1970
From: Paul Mackerras
Subject: Re: [PATCH 13/23] KVM: PPC: Book3S HV: Accumulate timing information for real-mode code
Date: Mon, 23 Mar 2015 09:57:58 +1100
Message-ID: <20150322225758.GA22196@iris.ozlabs.ibm.com>
References: <1426844400-12017-1-git-send-email-paulus@samba.org> <1426844400-12017-14-git-send-email-paulus@samba.org> <550C0143.4040706@suse.de> <20150320112512.GA9425@iris.ozlabs.ibm.com> <550C060F.2080006@suse.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
To: Alexander Graf
Return-path: 
Content-Disposition: inline
In-Reply-To: <550C060F.2080006@suse.de>
Sender: kvm-ppc-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Fri, Mar 20, 2015 at 12:35:43PM +0100, Alexander Graf wrote:
>
>
> On 20.03.15 12:25, Paul Mackerras wrote:
> > On Fri, Mar 20, 2015 at 12:15:15PM +0100, Alexander Graf wrote:
> >> Have you measured the additional overhead this brings?
> >
> > I haven't - in fact I did this patch so I could measure the overhead
> > or improvement from other changes I did, but it doesn't measure its
> > own overhead, of course.  I guess I need a workload that does a
> > defined number of guest entries and exits and measures how fast it
> > runs with and without the patch (maybe something like H_SET_MODE in
> > a loop).  I'll figure something out and post the results.
>
> Yeah, just measure the number of exits you can handle for a simple
> hcall. If there is measurable overhead, it's probably a good idea to
> move the statistics gathering into #ifdef paths for DEBUGFS or maybe
> even a separate EXIT_TIMING config option as we have it for booke.

For a 1-vcpu guest on POWER8, it adds 29ns to the time for an hcall
that is handled in real mode (H_SET_DABR), which is 25%.  It adds 43ns
to the time for an hcall that is handled in the host kernel in virtual
mode (H_PROD), which is 1.2%.

I'll add a config option.

Paul.
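[Editorial note, not part of the original mail: the absolute and relative overheads quoted above imply per-hcall baseline times that can be back-computed. A minimal sketch, assuming "which is 25%" / "which is 1.2%" mean the added nanoseconds as a fraction of the unpatched hcall time; the helper name is hypothetical.]

```python
def baseline_ns(added_ns, fraction):
    """Baseline hcall time implied by an absolute overhead (ns)
    quoted as a fraction of the unpatched time."""
    return added_ns / fraction

# Figures from the mail: 29 ns / 25% (H_SET_DABR, real mode),
# 43 ns / 1.2% (H_PROD, host kernel in virtual mode).
real_mode_ns = baseline_ns(29, 0.25)
virt_mode_ns = baseline_ns(43, 0.012)

print(round(real_mode_ns))  # ~116 ns unpatched real-mode hcall
print(round(virt_mode_ns))  # ~3583 ns unpatched virtual-mode hcall
```

This makes the relative cost concrete: the timing code costs roughly the same number of nanoseconds on both paths, but a real-mode hcall is so short (~116 ns) that the same fixed cost is a far larger fraction of it than of the ~3.6 us virtual-mode path, which is why gating it behind a config option is attractive.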