public inbox for linux-perf-users@vger.kernel.org
From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Ricky Ringler <ricky.ringler@proton.me>
Cc: peterz@infradead.org, namhyung@kernel.org, mingo@redhat.com,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] perf utilities: Replace static cacheline size with sysconf cacheline size
Date: Fri, 6 Feb 2026 18:50:58 -0300	[thread overview]
Message-ID: <aYZiQk6Uftzlb_JV@x1> (raw)
In-Reply-To: <20260129004223.26799-1-ricky.ringler@proton.me>

On Thu, Jan 29, 2026 at 12:42:27AM +0000, Ricky Ringler wrote:
> Testing:
> - Built perf
> - Executed perf mem record and report
> 
> Tested-by: Ricky Ringler <ricky.ringler@proton.me>
> 
> Signed-off-by: Ricky Ringler <ricky.ringler@proton.me>
> ---
>  tools/perf/util/sort.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> index f3a565b0e230..aa79eb6476dd 100644
> --- a/tools/perf/util/sort.c
> +++ b/tools/perf/util/sort.c
> @@ -2474,8 +2474,7 @@ struct sort_entry sort_type_offset = {
>  
>  /* --sort typecln */
>  
> -/* TODO: use actual value in the system */
> -#define TYPE_CACHELINE_SIZE  64
> +#define DEFAULT_CACHELINE_SIZE 64

I'm applying this as it addresses the TODO, and for cases where both
record and report/c2c are performed on the same machine it is an
improvement. But we need to actually get this from the perf.data header,
because we can collect on one machine with one cacheline size and then do
the report/c2c on another, with a different cacheline size.

When doing 'perf report --header-only -I' to see cache info we get
things like:

# CPU cache info:
#  L1 Data                 48K [0,16]
#  L1 Instruction          32K [0,16]
#  L1 Data                 48K [1,17]
#  L1 Instruction          32K [1,17]
#  L1 Data                 48K [2,18]
#  L1 Instruction          32K [2,18]
#  L1 Data                 48K [3,19]
#  L1 Instruction          32K [3,19]
#  L1 Data                 48K [4,20]
#  L1 Instruction          32K [4,20]
#  L1 Data                 48K [5,21]
#  L1 Instruction          32K [5,21]
#  L1 Data                 48K [6,22]
#  L1 Instruction          32K [6,22]
#  L1 Data                 48K [7,23]
#  L1 Instruction          32K [7,23]
#  L1 Data                 48K [8,24]
#  L1 Instruction          32K [8,24]
#  L1 Data                 48K [9,25]
#  L1 Instruction          32K [9,25]
#  L1 Data                 48K [10,26]
#  L1 Instruction          32K [10,26]
#  L1 Data                 48K [11,27]
#  L1 Instruction          32K [11,27]
#  L1 Data                 48K [12,28]
#  L1 Instruction          32K [12,28]
#  L1 Data                 48K [13,29]
#  L1 Instruction          32K [13,29]
#  L1 Data                 48K [14,30]
#  L1 Instruction          32K [14,30]
#  L1 Data                 48K [15,31]
#  L1 Instruction          32K [15,31]
#  L2 Unified            1024K [0,16]
#  L2 Unified            1024K [1,17]
#  L2 Unified            1024K [2,18]
#  L2 Unified            1024K [3,19]
:

But not the cacheline size :-\

Please consider adding this header info :-)

Applied.

- Arnaldo
  
>  static int64_t
>  sort__typecln_sort(struct hist_entry *left, struct hist_entry *right)
> @@ -2484,6 +2483,10 @@ sort__typecln_sort(struct hist_entry *left, struct hist_entry *right)
>  	struct annotated_data_type *right_type = right->mem_type;
>  	int64_t left_cln, right_cln;
>  	int64_t ret;
> +	int cln_size = cacheline_size();
> +
> +	if (cln_size == 0)
> +		cln_size = DEFAULT_CACHELINE_SIZE;
>  
>  	if (!left_type) {
>  		sort__type_init(left);
> @@ -2499,8 +2502,8 @@ sort__typecln_sort(struct hist_entry *left, struct hist_entry *right)
>  	if (ret)
>  		return ret;
>  
> -	left_cln = left->mem_type_off / TYPE_CACHELINE_SIZE;
> -	right_cln = right->mem_type_off / TYPE_CACHELINE_SIZE;
> +	left_cln = left->mem_type_off / cln_size;
> +	right_cln = right->mem_type_off / cln_size;
>  	return left_cln - right_cln;
>  }
>  
> @@ -2508,9 +2511,13 @@ static int hist_entry__typecln_snprintf(struct hist_entry *he, char *bf,
>  				     size_t size, unsigned int width __maybe_unused)
>  {
>  	struct annotated_data_type *he_type = he->mem_type;
> +	int cln_size = cacheline_size();
> +
> +	if (cln_size == 0)
> +		cln_size = DEFAULT_CACHELINE_SIZE;
>  
>  	return repsep_snprintf(bf, size, "%s: cache-line %d", he_type->self.type_name,
> -			       he->mem_type_off / TYPE_CACHELINE_SIZE);
> +			       he->mem_type_off / cln_size);
>  }
>  
>  struct sort_entry sort_type_cacheline = {
> -- 
> 2.52.0
> 

Thread overview: 2+ messages
2026-01-29  0:42 [PATCH] perf utilities: Replace static cacheline size with sysconf cacheline size Ricky Ringler
2026-02-06 21:50 ` Arnaldo Carvalho de Melo [this message]
