linux-perf-users.vger.kernel.org archive mirror
From: William Cohen <wcohen@redhat.com>
To: Vince Weaver <vincent.weaver@maine.edu>
Cc: linux-perf-users@vger.kernel.org,
	PAPI Developers <perfapi-devel@cs.utk.edu>,
	Michael Petlan <mpetlan@redhat.com>
Subject: Re: [Perfapi-devel] Lack of aarch64 checks for perf events schedulability
Date: Fri, 13 May 2016 14:28:57 -0400	[thread overview]
Message-ID: <8a3259e1-e78f-cb50-8bc7-963f924cda70@redhat.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1605131345560.19648@macbook-air>

On 05/13/2016 01:58 PM, Vince Weaver wrote:
> On Fri, 13 May 2016, William Cohen wrote:
> 
>> When running the PAPI testsuite on RHEL for aarch64 Michael Petlan
>> found that the test overflow_allcounters was failing.  I investigated
>> and it looks like the RHEL for aarch64 Linux kernel perf support
>> suffers from a problem similar to MIPS kernels where perf_event_open
>> doesn't properly check that events can be scheduled together; then a
>> later read of the counters will fail.  This has been observed on the
>> RHEL for aarch64 4.5.0 based kernel. I have not tried this on the
>> latest kernel, so I don't know if this is still a problem with newer
>> kernels.
> 
> Let me see, yes I can reproduce this on my arm64 dragonboard running Linux 
> 3.16.

Hi Vince,

Thanks for trying it out on the dragonboard.  I have a DragonBoard 410c at home as well and will give that a try.

> 
> Your fix does fix things in that the test runs but it fails for other 
> reasons at the validation stage.

I saw at times that the test failed validation because the number of samples wasn't exactly the count expected.

> 
> Only 6 out of 7 counters are used, but it's picking a weird set of 
> counters to use for the test.  I forget what overflow_allcounters actually
> does.
> 
> I doubt it's worth the trouble of trying to get this fixed at the Linux 
> level but it would be interesting to see why the 7th counter can't be 
> scheduled.
> 
> Vince
> 

It is still worth checking whether that silent-failure behavior of perf_event_open was expected.
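
Here is a rough sketch of the kind of check I mean (not the PAPI test itself; the event choice and NEVENTS are just placeholder guesses, adjust them for the machine being tested).  It opens more events in one group than the PMU has counters and then looks at whether the kernel rejects the extra event up front, or whether everything "succeeds" and the events simply never run:

/* Rough sketch, not the PAPI test: open more events in one group than
 * the PMU has counters and see whether the kernel rejects the extra
 * event up front or only fails later.  NEVENTS and the event choice
 * are placeholder assumptions. */
#include <linux/perf_event.h>
#include <asm/unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

#define NEVENTS 8    /* assume this is more than the PMU provides */

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    int fd[NEVENTS], nopened = 0, leader = -1, i;
    struct perf_event_attr attr;
    volatile long work = 0;

    for (i = 0; i < NEVENTS; i++) {
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                           PERF_FORMAT_TOTAL_TIME_RUNNING;
        attr.disabled = (i == 0);    /* leader starts disabled */
        fd[i] = perf_event_open(&attr, 0, -1, leader, 0);
        if (fd[i] < 0) {
            /* a kernel that validates the group fails here */
            printf("open of event %d failed: %m\n", i);
            break;
        }
        if (i == 0)
            leader = fd[0];
        nopened++;
    }

    ioctl(leader, PERF_EVENT_IOC_ENABLE, 0);
    for (i = 0; i < 1000000; i++)    /* do a little work */
        work++;
    ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

    for (i = 0; i < nopened; i++) {
        uint64_t buf[3];    /* value, time_enabled, time_running */

        if (read(fd[i], buf, sizeof(buf)) != sizeof(buf))
            printf("read of event %d failed\n", i);
        else if (buf[2] == 0)
            printf("event %d opened but never scheduled\n", i);
    }
    return 0;
}

On a kernel that does validate the group at open time, the same loop also ends up showing how many counters are actually usable, which gets at the counter-count question below.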

According to the Cortex-A53 manual there are six general-purpose counters and one cycle counter.  Depending on how the event selection is done, it might not be able to use that cycle counter.  The kernel might also be taking a counter for a watchdog timer.  The X-Gene machine I am using states in /var/log/messages:

hw perfevents: enabled with armv8_pmuv3 PMU driver, 5 counters available

How many counters does the dragonboard kernel claim the machine has?

-Will

Thread overview: 5+ messages
2016-05-13 15:33 Lack of aarch64 checks for perf events schedulability William Cohen
2016-05-13 17:58 ` [Perfapi-devel] " Vince Weaver
2016-05-13 18:28   ` William Cohen [this message]
2016-05-13 19:02     ` Vince Weaver
2016-05-14  0:22       ` William Cohen
