public inbox for linux-kernel@vger.kernel.org
From: John Garry <john.garry@huawei.com>
To: Jiri Olsa <jolsa@redhat.com>
Cc: <peterz@infradead.org>, <mingo@redhat.com>, <acme@kernel.org>,
	<mark.rutland@arm.com>, <alexander.shishkin@linux.intel.com>,
	<namhyung@kernel.org>, <will@kernel.org>, <ak@linux.intel.com>,
	<linuxarm@huawei.com>, <linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>, <suzuki.poulose@arm.com>,
	<james.clark@arm.com>, <zhangshaokun@hisilicon.com>,
	<robin.murphy@arm.com>, "liuqi (BA)" <liuqi115@huawei.com>
Subject: Re: [PATCH RFC 5/7] perf pmu: Support matching by sysid
Date: Wed, 12 Feb 2020 10:08:44 +0000	[thread overview]
Message-ID: <f72c7f52-a285-e052-8656-de2940a6fc7f@huawei.com> (raw)
In-Reply-To: <2a51ce93-fa68-8088-f31f-2fd692253335@huawei.com>

On 11/02/2020 15:07, John Garry wrote:
> On 11/02/2020 13:47, Jiri Olsa wrote:
> 
> Hi Jirka,
> 
>>>>> +
>>>>> +    return buf;
>>>>> +}
>>>>> +
>>>
>>> I have another series to add kernel support for a system identifier
>>> sysfs entry, which I sent after this series:
>>>
>>> https://lore.kernel.org/linux-acpi/1580210059-199540-1-git-send-email-john.garry@huawei.com/ 
>>>
>>>
>>> It is different from what I am relying on here - it uses a kernel soc
>>> driver to expose the firmware ACPI PPTT identifier. Progress is somewhat
>>> blocked at the moment, however, and I may have to use a different method:
>>>
>>> https://lore.kernel.org/linux-acpi/20200128123415.GB36168@bogus/
>>
>> I'll try to check ;-)
> 
> The summary is that there is an ACPI firmware field which we could 
> expose to userspace via sysfs - this would provide the system id. 
> However, there is a proposal to deprecate it in the ACPI standard and, 
> as such, the preference is that we don't add kernel support for it at 
> this stage.
> 
> So I am evaluating the alternative in the meantime, which again is a 
> firmware method that should allow us to expose a system id to userspace 
> via sysfs. Unfortunately, this is arm-specific. However, other archs can 
> still provide their own method, maybe a soc driver:
> 
> Documentation/ABI/testing/sysfs-devices-soc#n15
> 
>>
>>>
>>>>> +static char *perf_pmu__getsysid(void)
>>>>> +{
>>>>> +    char *sysid;
>>>>> +    static bool printed;
>>>>> +
>>>>> +    sysid = getenv("PERF_SYSID");
>>>>> +    if (sysid)
>>>>> +        sysid = strdup(sysid);
>>>>> +
>>>>> +    if (!sysid)
>>>>> +        sysid = get_sysid_str();
>>>>> +    if (!sysid)
>>>>> +        return NULL;
>>>>> +
>>>>> +    if (!printed) {
>>>>> +        pr_debug("Using SYSID %s\n", sysid);
>>>>> +        printed = true;
>>>>> +    }
>>>>> +    return sysid;
>>>>> +}
>>>>
>>>> this part is getting complicated and AFAIK we have no tests for it
>>>>
>>>> if you could think of any tests that'd be great.. Perhaps we could
>>>> load 'our' json test files and check appropriate events/aliases
>>>> via the pmu object.. or via the parse_events interface.. those test
>>>> aliases would have to be part of perf, but we have tests compiled in anyway
>>>
>>> Sorry, I don't fully follow.
>>>
>>> Are you suggesting that we could load the specific JSON tables for a
>>> system from the host filesystem?
>>
>> I wish to see some test for all this.. I can only think about having
>> 'test' json files compiled with perf and 'perf test' that looks them
>> up and checks that all is in the proper place
> 
> OK, let me consider this part for perf test support.
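As a rough starting point, a compiled-in check along those lines might pair a few 'test' sysids with the aliases their JSON tables should provide and verify the lookup. Everything below (the table contents and the lookup stand-in) is invented for illustration - a real test would go through the pmu/parse_events code instead:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical test fixture: for each test system identifier, the
 * event alias its compiled-in JSON table should resolve to. */
struct sysid_alias_case {
	const char *sysid;
	const char *alias;
};

static const struct sysid_alias_case cases[] = {
	{ "test-sysid-a", "sys_pmu_test_event_a" },
	{ "test-sysid-b", "sys_pmu_test_event_b" },
};

/* Stand-in for resolving an alias for a given sysid; the real version
 * would consult the pmu_events tables selected by the sysid match. */
static const char *resolve_alias(const char *sysid)
{
	size_t i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		if (!strcmp(cases[i].sysid, sysid))
			return cases[i].alias;
	}
	return NULL;
}

/* The test body then iterates the cases and fails if any expected
 * alias is missing or resolves to the wrong event. */
static int test__sysid_aliases(void)
{
	size_t i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		const char *alias = resolve_alias(cases[i].sysid);

		if (!alias || strcmp(alias, cases[i].alias))
			return -1;	/* TEST_FAIL */
	}
	return 0;			/* TEST_OK */
}
```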

I will note that perf test has many issues on my arm64 board:

[sudo] password for john:
  1: vmlinux symtab matches kallsyms                       : Skip
  2: Detect openat syscall event                           : FAILED!
  3: Detect openat syscall event on all cpus               : FAILED!
  4: Read samples using the mmap interface                 : FAILED!
  5: Test data source output                               : Ok
  6: Parse event definition strings                        : FAILED!
  7: Simple expression parser                              : Ok
  8: PERF_RECORD_* events & perf_sample fields             : Ok
  9: Parse perf pmu format                                 : Ok
10: DSO data read                                         : Ok
11: DSO data cache                                        : Ok
12: DSO data reopen                                       : Ok
13: Roundtrip evsel->name                                 : Ok
14: Parse sched tracepoints fields                        : FAILED!
15: syscalls:sys_enter_openat event fields                : FAILED!
16: Setup struct perf_event_attr                          : Skip
17: Match and link multiple hists                         : Ok
18: 'import perf' in python                               : Ok
21: Breakpoint accounting                                 : Ok
22: Watchpoint                                            :
22.1: Read Only Watchpoint                                : Ok
22.2: Write Only Watchpoint                               : Ok
22.3: Read / Write Watchpoint                             : Ok
22.4: Modify Watchpoint                                   : Ok
23: Number of exit events of a simple workload            : Ok
24: Software clock events period values                   : Ok
25: Object code reading                                   : Ok
26: Sample parsing                                        : Ok
27: Use a dummy software event to keep tracking           : Ok
28: Parse with no sample_id_all bit set                   : Ok
29: Filter hist entries                                   : Ok
30: Lookup mmap thread                                    : Ok
31: Share thread maps                                     : Ok
32: Sort output of hist entries                           : Ok
33: Cumulate child hist entries                           : Ok
34: Track with sched_switch                               : Ok
35: Filter fds with revents mask in a fdarray             : Ok
36: Add fd to a fdarray, making it autogrow               : Ok
37: kmod_path__parse                                      : Ok
38: Thread map                                            : Ok
39: LLVM search and compile                               :
39.1: Basic BPF llvm compile                              : Skip
39.2: kbuild searching                                    : Skip
39.3: Compile source for BPF prologue generation          : Skip
39.4: Compile source for BPF relocation                   : Skip
40: Session topology                                      : FAILED!
41: BPF filter                                            :
41.1: Basic BPF filtering                                 : Skip
41.2: BPF pinning                                         : Skip
41.3: BPF prologue generation                             : Skip
41.4: BPF relocation checker                              : Skip
42: Synthesize thread map                                 : Ok
43: Remove thread map                                     : Ok
44: Synthesize cpu map                                    : Ok
45: Synthesize stat config                                : Ok
46: Synthesize stat                                       : Ok
47: Synthesize stat round                                 : Ok
48: Synthesize attr update                                : Ok
49: Event times                                           : Ok
50: Read backward ring buffer                             : FAILED!
51: Print cpu map                                         : Ok
52: Merge cpu map                                         : Ok
53: Probe SDT events                                      : Ok
54: is_printable_array                                    : Ok
55: Print bitmap                                          : Ok
56: perf hooks                                            : Ok
58: unit_number__scnprintf                                : Ok
59: mem2node                                              : Ok
60: time utils                                            : Ok
61: Test jit_write_elf                                    : Ok
62: maps__merge_in                                        : Ok
63: DWARF unwind                                          : Ok
64: Check open filename arg using perf trace + vfs_getname: FAILED!
65: Add vfs_getname probe to get syscall args filenames   : FAILED!
66: Use vfs_getname probe to get syscall args filenames   : FAILED!
67: Zstd perf.data compression/decompression              : Ok
68: probe libc's inet_pton & backtrace it with ping       : Skip
john@ubuntu:~/linux$

I know that the perf tool definitely has issues with system topology on 
arm64, which I need to check on.

Maybe I can conscript some help internally to check the rest...

Thanks,
john

  reply	other threads:[~2020-02-12 10:08 UTC|newest]

Thread overview: 40+ messages
2020-01-24 14:34 [PATCH RFC 0/7] perf pmu-events: Support event aliasing for system PMUs John Garry
2020-01-24 14:34 ` [PATCH RFC 1/7] perf jevents: Add support for an extra directory level John Garry
2020-02-10 12:07   ` Jiri Olsa
2020-02-10 15:47     ` John Garry
2020-01-24 14:35 ` [PATCH RFC 2/7] perf vendor events arm64: Relocate hip08 core events John Garry
2020-01-24 14:35 ` [PATCH RFC 3/7] perf jevents: Add support for a system events PMU John Garry
2020-02-10 12:07   ` Jiri Olsa
2020-02-10 12:07   ` Jiri Olsa
2020-02-10 15:55     ` John Garry
2020-02-11 14:46       ` Jiri Olsa
2020-01-24 14:35 ` [PATCH RFC 4/7] perf pmu: Rename uncore symbols to include system PMUs John Garry
2020-02-10 12:07   ` Jiri Olsa
2020-02-10 15:44     ` John Garry
2020-02-11 14:43       ` Jiri Olsa
2020-02-11 15:36         ` John Garry
2020-02-12 12:08           ` Jiri Olsa
2020-01-24 14:35 ` [PATCH RFC 5/7] perf pmu: Support matching by sysid John Garry
2020-02-10 12:07   ` Jiri Olsa
2020-02-10 16:22     ` John Garry
2020-02-11 13:47       ` Jiri Olsa
2020-02-11 15:07         ` John Garry
2020-02-12 10:08           ` John Garry [this message]
2020-02-12 12:16             ` Jiri Olsa
2020-02-12 12:24               ` John Garry
2020-01-24 14:35 ` [PATCH RFC 6/7] perf vendor events arm64: Relocate uncore events for hip08 John Garry
2020-01-24 14:35 ` [PATCH RFC 7/7] perf vendor events arm64: Add hip08 SMMUv3 PMCG IMP DEF events John Garry
2020-02-11 15:24 ` [PATCH RFC 0/7] perf pmu-events: Support event aliasing for system PMUs James Clark
2020-02-11 15:41   ` John Garry
2020-02-18 12:57 ` Will Deacon
2020-02-18 13:24   ` John Garry
2020-02-18 13:39     ` Will Deacon
2020-02-18 16:19       ` John Garry
2020-02-18 17:08         ` Mark Rutland
2020-02-18 17:58           ` John Garry
2020-02-18 18:13             ` Mark Rutland
2020-02-19  1:55               ` Joakim Zhang
2020-02-19  8:44                 ` John Garry
2020-02-19 12:40                   ` Joakim Zhang
2020-02-19 14:28                     ` John Garry
2020-02-19  8:50               ` John Garry
