public inbox for linux-kernel@vger.kernel.org
From: "Jin, Yao" <yao.jin@linux.intel.com>
To: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>,
	jolsa@kernel.org, peterz@infradead.org, mingo@redhat.com,
	alexander.shishkin@linux.intel.com, Linux-kernel@vger.kernel.org,
	ak@linux.intel.com, kan.liang@intel.com, yao.jin@intel.com,
	ying.huang@intel.com
Subject: Re: [PATCH v6] perf stat: Fix wrong skipping for per-die aggregation
Date: Mon, 18 Jan 2021 12:16:34 +0800	[thread overview]
Message-ID: <007d5b53-360c-1540-a7b1-b846cab1decd@linux.intel.com> (raw)
In-Reply-To: <20210115203119.GN457607@kernel.org>

Hi Arnaldo, Jiri,

On 1/16/2021 4:31 AM, Arnaldo Carvalho de Melo wrote:
> Em Fri, Jan 15, 2021 at 05:28:14PM -0300, Arnaldo Carvalho de Melo escreveu:
>> Em Thu, Jan 14, 2021 at 08:00:32PM +0100, Jiri Olsa escreveu:
>>> On Thu, Jan 14, 2021 at 09:27:55AM +0800, Jin Yao wrote:
>>>
>>> SNIP
>>>
>>>>       2.003776312 S1-D0           1             855616 Bytes llc_misses.mem_read
>>>>       2.003776312 S1-D1           1             949376 Bytes llc_misses.mem_read
>>>>       3.006512788 S0-D0           1            1338880 Bytes llc_misses.mem_read
>>>>       3.006512788 S0-D1           1             920064 Bytes llc_misses.mem_read
>>>>       3.006512788 S1-D0           1             877184 Bytes llc_misses.mem_read
>>>>       3.006512788 S1-D1           1            1020736 Bytes llc_misses.mem_read
>>>>       4.008895291 S0-D0           1             926592 Bytes llc_misses.mem_read
>>>>       4.008895291 S0-D1           1             906368 Bytes llc_misses.mem_read
>>>>       4.008895291 S1-D0           1             892224 Bytes llc_misses.mem_read
>>>>       4.008895291 S1-D1           1             987712 Bytes llc_misses.mem_read
>>>>       5.001590993 S0-D0           1             962624 Bytes llc_misses.mem_read
>>>>       5.001590993 S0-D1           1             912512 Bytes llc_misses.mem_read
>>>>       5.001590993 S1-D0           1             891200 Bytes llc_misses.mem_read
>>>>       5.001590993 S1-D1           1             978432 Bytes llc_misses.mem_read
>>>>
>>>> On a no-die system, die_id is 0, so the key is effectively hashmap(socket, 0)
>>>> and the original behavior is unchanged.
>>>>
>>>> Reported-by: Huang Ying <ying.huang@intel.com>
>>>> Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
>>>> ---
>>>> v6:
>>>>   Fix the perf test python failure by adding hashmap.c to python-ext-sources.
>>>>
>>>>   root@kbl-ppc:~# ./perf test python
>>>>   19: 'import perf' in python                                         : Ok
>>>
>>> Acked-by: Jiri Olsa <jolsa@redhat.com>
>>
>> Jin, this is breaking the build in some 32-bit system, can you please
>> take a look to validate these warnings?
> 
> One such system:
> 
>    28    13.75 debian:experimental-x-mipsel  : FAIL mipsel-linux-gnu-gcc (Debian 10.2.1-3) 10.2.1 20201224
> 
>   
>>    CC       /tmp/build/perf/util/srcline.o
>> util/stat.c: In function 'pkg_id_hash':
>> util/stat.c:285:9: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
>>    return (int64_t)key & 0xffffffff;
>>           ^
>> util/stat.c: In function 'pkg_id_equal':
>> util/stat.c:291:9: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
>>    return (int64_t)key1 == (int64_t)key2;
>>           ^
>> util/stat.c:291:26: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
>>    return (int64_t)key1 == (int64_t)key2;
>>                            ^
>> util/stat.c: In function 'check_per_pkg':
>> util/stat.c:342:26: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
>>    if (hashmap__find(mask, (void *)key, NULL))
>>                            ^
>> util/stat.c:345:28: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
>>     ret = hashmap__add(mask, (void *)key, (void *)1);
>>                              ^
>>    CC       /tmp/build/perf/tests/expand-cgroup.o
> 

Thanks for reporting this build issue on 32 bit system.

In v7, I use size_t instead of uint64_t and change the hash key to 'die_id << 16 | socket_id'. I 
assume 16 bits is enough for the socket id, is that true? If it's not, we would have to use a more 
complicated approach, such as allocating a structure that holds die_id and socket_id and adding 
that to the hashmap.

But I'm not sure that's necessary, because I can't imagine a system with a socket id > 65535.

Thanks
Jin Yao



Thread overview: 5+ messages
2021-01-14  1:27 [PATCH v6] perf stat: Fix wrong skipping for per-die aggregation Jin Yao
2021-01-14 19:00 ` Jiri Olsa
2021-01-15 20:28   ` Arnaldo Carvalho de Melo
2021-01-15 20:31     ` Arnaldo Carvalho de Melo
2021-01-18  4:16       ` Jin, Yao [this message]
