From: Paul Mackerras
Subject: Re: [PATCH 13/23] KVM: PPC: Book3S HV: Accumulate timing information for real-mode code
Date: Fri, 20 Mar 2015 22:25:12 +1100
Message-ID: <20150320112512.GA9425@iris.ozlabs.ibm.com>
References: <1426844400-12017-1-git-send-email-paulus@samba.org> <1426844400-12017-14-git-send-email-paulus@samba.org> <550C0143.4040706@suse.de>
To: Alexander Graf
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
In-Reply-To: <550C0143.4040706@suse.de>

On Fri, Mar 20, 2015 at 12:15:15PM +0100, Alexander Graf wrote:
> 
> 
> On 20.03.15 10:39, Paul Mackerras wrote:
> > This reads the timebase at various points in the real-mode guest
> > entry/exit code and uses that to accumulate total, minimum and
> > maximum time spent in those parts of the code.  Currently these
> > times are accumulated per vcpu in 5 parts of the code:
> > 
> > * rm_entry - time taken from the start of kvmppc_hv_entry() until
> >   just before entering the guest.
> > * rm_intr - time from when we take a hypervisor interrupt in the
> >   guest until we either re-enter the guest or decide to exit to
> >   the host.  This includes time spent handling hcalls in real mode.
> > * rm_exit - time from when we decide to exit the guest until the
> >   return from kvmppc_hv_entry().
> > * guest - time spent in the guest
> > * cede - time spent napping in real mode due to an H_CEDE hcall
> >   while other threads in the same vcore are active.
> > 
> > These times are exposed in debugfs in a directory per vcpu that
> > contains a file called "timings".  This file contains one line for
> > each of the 5 timings above, with the name followed by a colon and
> > 4 numbers, which are the count (number of times the code has been
> > executed), the total time, the minimum time, and the maximum time,
> > all in nanoseconds.
> > 
> > Signed-off-by: Paul Mackerras
> 
> Have you measured the additional overhead this brings?

I haven't - in fact I did this patch so I could measure the overhead
or improvement from other changes I did, but it doesn't measure its
own overhead, of course.  I guess I need a workload that does a
defined number of guest entries and exits, and to measure how fast it
runs with and without the patch (maybe something like H_SET_MODE in a
loop).  I'll figure something out and post the results.

Paul.
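
P.S. For anyone reading along, here is a rough illustrative sketch of
the kind of per-vcpu count/total/min/max accumulator the commit
message describes, updated from a timebase delta.  This is not the
code from the patch; the struct and function names below are made up
for illustration only.

#include <stdint.h>

/* One accumulator per timed span (rm_entry, rm_intr, rm_exit, ...) */
struct rm_time_acc {
	uint64_t count;		/* number of times the span ran */
	uint64_t total;		/* sum of all measured durations */
	uint64_t min;		/* shortest duration seen */
	uint64_t max;		/* longest duration seen */
};

/* delta is the difference of two timebase reads around the span */
static void rm_time_accumulate(struct rm_time_acc *acc, uint64_t delta)
{
	acc->total += delta;
	if (acc->count == 0 || delta < acc->min)
		acc->min = delta;
	if (delta > acc->max)
		acc->max = delta;
	acc->count++;
}

Each of the five accumulators then shows up as one line in the
per-vcpu debugfs "timings" file, of the form
"rm_entry: <count> <total> <min> <max>", with the times reported in
nanoseconds per the commit message.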