Subject: Re: [Qemu-devel] [kvm-unit-tests PATCHv7 2/3] arm: pmu: Check cycle count increases
Date: Wed, 02 Nov 2016 22:51:04 -0600
From: cov@codeaurora.org
To: Wei Huang
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
 shannon.zhao@linaro.org, alistair.francis@xilinx.com,
 croberts@codeaurora.org, alindsay@codeaurora.org, drjones@redhat.com
In-Reply-To: <1478125337-11770-3-git-send-email-wei@redhat.com>
References: <1478125337-11770-1-git-send-email-wei@redhat.com>
 <1478125337-11770-3-git-send-email-wei@redhat.com>

Hi Wei,

Thanks for your work on this.

On 2016-11-02 16:22, Wei Huang wrote:
> Ensure that reads of the PMCCNTR_EL0 are monotonically increasing,
> even for the smallest delta of two subsequent reads.
>
> Signed-off-by: Christopher Covington
> Signed-off-by: Wei Huang
> ---
>  arm/pmu.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 100 insertions(+)
>
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 42d0ee1..65b7df1 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -14,6 +14,9 @@
>   */
>  #include "libcflat.h"
>
> +#define NR_SAMPLES 10
> +#define ARMV8_PMU_CYCLE_IDX 31
> +
>  #if defined(__arm__)
>  static inline uint32_t get_pmcr(void)
>  {
> @@ -22,6 +25,43 @@ static inline uint32_t get_pmcr(void)
> 	asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r" (ret));
> 	return ret;
>  }
> +
> +static inline void set_pmcr(uint32_t pmcr)
> +{
> +	asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (pmcr));
> +}
> +
> +static inline void set_pmccfiltr(uint32_t filter)
> +{
> +	uint32_t cycle_idx = ARMV8_PMU_CYCLE_IDX;
> +
> +	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (cycle_idx));
> +	asm volatile("mcr p15, 0, %0, c9, c13, 1" : : "r" (filter));
> +}

Down the road I'd like to add tests for the regular events. What if
you added separate PMSELR and PMXEVTYPER accessors now and used them
(with PMSELR.SEL = 31) for setting PMCCFILTR? Then we wouldn't need a
specialized set_pmccfiltr function for the cycle counter in addition
to PMSELR and PMXEVTYPER accessors for the regular events.
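Roughly what I have in mind, as an untested sketch (the set_pmselr and
set_pmxevtyper names are just suggestions; the encodings are the same
ones your set_pmccfiltr already uses):

static inline void set_pmselr(uint32_t sel)
{
	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (sel));
}

static inline void set_pmxevtyper(uint32_t val)
{
	asm volatile("mcr p15, 0, %0, c9, c13, 1" : : "r" (val));
}

/* With PMSELR.SEL = 31, PMXEVTYPER writes go to PMCCFILTR, so the
 * cycle counter filter becomes a thin wrapper (if it's kept at all):
 */
static inline void set_pmccfiltr(uint32_t filter)
{
	set_pmselr(ARMV8_PMU_CYCLE_IDX);
	set_pmxevtyper(filter);
}

(and correspondingly msr pmselr_el0 / pmxevtyper_el0 on AArch64).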
> + */
> +static inline unsigned long get_pmccntr(void)
> +{
> +	unsigned long cycles;
> +
> +	asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (cycles));
> +	return cycles;
> +}
> +
> +static inline void enable_counter(uint32_t idx)
> +{
> +	asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r" (1 << idx));
> +}

My personal preference, which I think would make this function look
and act like the other system register accessor functions, would be to
name it "set_pmcntenset" and do a plain write of the input parameter
without a shift, letting the shift be done in the C code. (As we scale
up, the system register accessor functions should probably be
generated by macros from a concise table; see the sketch in the
postscript below.)

> +static inline void disable_counter(uint32_t idx)
> +{
> +	asm volatile("mrc p15, 0, %0, c9, c12, 1" : : "r" (1 << idx));
> +}

This function doesn't seem to be used yet. Consider whether it might
make sense to defer introducing it until there is a user.

>  #elif defined(__aarch64__)
>  static inline uint32_t get_pmcr(void)
>  {
> @@ -30,6 +70,34 @@ static inline uint32_t get_pmcr(void)
> 	asm volatile("mrs %0, pmcr_el0" : "=r" (ret));
> 	return ret;
>  }
> +
> +static inline void set_pmcr(uint32_t pmcr)
> +{
> +	asm volatile("msr pmcr_el0, %0" : : "r" (pmcr));
> +}
> +
> +static inline void set_pmccfiltr(uint32_t filter)
> +{
> +	asm volatile("msr pmccfiltr_el0, %0" : : "r" (filter));
> +}

As above, consider whether using PMSELR and PMXEVTYPER might be a more
reusable pair of accessors.

> +static inline unsigned long get_pmccntr(void)
> +{
> +	unsigned long cycles;
> +
> +	asm volatile("mrs %0, pmccntr_el0" : "=r" (cycles));
> +	return cycles;
> +}
> +
> +static inline void enable_counter(uint32_t idx)
> +{
> +	asm volatile("msr pmcntenset_el0, %0" : : "r" (1 << idx));
> +}

Same thought as above about uniformity and generatability.

> +static inline void disable_counter(uint32_t idx)
> +{
> +	asm volatile("msr pmcntensclr_el0, %0" : : "r" (1 << idx));
> +}

As above, this function doesn't seem to be used yet.

Thanks,
Cov
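P.S. A rough, untested sketch of the macro-generated accessor idea
mentioned above (DEFINE_SYSREG is an invented name, and I've used
unsigned long operands since mrs/msr operate on 64-bit registers):

#define DEFINE_SYSREG(name, reg)				\
static inline unsigned long get_##name(void)			\
{								\
	unsigned long ret;					\
								\
	asm volatile("mrs %0, " #reg : "=r" (ret));		\
	return ret;						\
}								\
static inline void set_##name(unsigned long val)		\
{								\
	asm volatile("msr " #reg ", %0" : : "r" (val));		\
}

/* One table line per register: */
DEFINE_SYSREG(pmcr, pmcr_el0)
DEFINE_SYSREG(pmccntr, pmccntr_el0)
DEFINE_SYSREG(pmcntenset, pmcntenset_el0)

A similar macro taking the coprocessor encoding could expand to the
mrc/mcr pairs on AArch32.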