From: Namhyung Kim <namhyung@kernel.org>
To: Ian Rogers <irogers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Yuan Can <yuancan@huawei.com>,
	Kan Liang <kan.liang@linux.intel.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Andres Freund <andres@anarazel.de>,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [PATCH v1 2/2] perf annotation: Switch lock from a mutex to a sharded_mutex
Date: Wed, 14 Jun 2023 17:34:29 -0700	[thread overview]
Message-ID: <CAM9d7ci5zL8NWMrJVq4FQ242LNx0cQoY3Z32B+yuO2HFu6R1gA@mail.gmail.com> (raw)
In-Reply-To: <20230611072751.637227-2-irogers@google.com>

Hi Ian,

On Sun, Jun 11, 2023 at 12:28 AM Ian Rogers <irogers@google.com> wrote:
>
> Remove the "struct mutex lock" variable from annotation that is
> allocated per symbol. This removes in the region of 40 bytes per
> symbol allocation. Use a sharded mutex where the number of shards is
> set to the number of CPUs. Assuming good hashing of the annotation
> (done based on the pointer), this means in order to contend there
> needs to be more threads than CPUs, which is not currently true in any
> perf command. Were contention an issue it is straightforward to
> increase the number of shards in the mutex.
>
> On my Debian/glibc based machine, this reduces the size of struct
> annotation from 136 bytes to 96 bytes, or nearly 30%.

That's quite a good improvement given the number of symbols
we can have in a report session!
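
As context, a minimal sketch of the sharded-mutex idea, using plain
pthread mutexes for illustration; the actual sharded_mutex comes from
patch 1/2 and builds on perf's struct mutex, so treat the names and
bodies below as assumptions rather than the real code:

  #include <pthread.h>
  #include <stdlib.h>

  struct sharded_mutex {
          size_t nr_shards;               /* e.g. number of CPUs */
          pthread_mutex_t locks[];        /* one mutex per shard */
  };

  static struct sharded_mutex *sharded_mutex__new(size_t nr_shards)
  {
          struct sharded_mutex *sm;

          sm = malloc(sizeof(*sm) + nr_shards * sizeof(pthread_mutex_t));
          if (!sm)
                  return NULL;
          sm->nr_shards = nr_shards;
          for (size_t i = 0; i < nr_shards; i++)
                  pthread_mutex_init(&sm->locks[i], NULL);
          return sm;
  }

  /* Map an arbitrary hash value onto one of the shards. */
  static pthread_mutex_t *
  sharded_mutex__get_mutex(struct sharded_mutex *sm, size_t hash)
  {
          return &sm->locks[hash % sm->nr_shards];
  }

Two threads only contend when their annotations hash to the same
shard, so with one shard per CPU and a well-spread hash, contention
should be rare.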

>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---

[SNIP]
> @@ -1291,17 +1292,64 @@ int disasm_line__scnprintf(struct disasm_line *dl, char *bf, size_t size, bool r
>         return ins__scnprintf(&dl->ins, bf, size, &dl->ops, max_ins_name);
>  }
>
> -void annotation__init(struct annotation *notes)
> +void annotation__exit(struct annotation *notes)
>  {
> -       mutex_init(&notes->lock);
> +       annotated_source__delete(notes->src);
>  }
>
> -void annotation__exit(struct annotation *notes)
> +static struct sharded_mutex *sharded_mutex;
> +
> +static void annotation__init_sharded_mutex(void)
>  {
> -       annotated_source__delete(notes->src);
> -       mutex_destroy(&notes->lock);
> +       /* As many mutexes as there are CPUs. */
> +       sharded_mutex = sharded_mutex__new(cpu__max_present_cpu().cpu);
> +}
> +
> +static size_t annotation__hash(const struct annotation *notes)
> +{
> +       return ((size_t)notes) >> 4;

But I'm afraid it might create more contention depending on the
malloc implementation.  If it always returns 128-byte (or 256-byte)
aligned memory for this struct, then everything could collide in
slot 0 when the number of CPUs is 8 or less, right?
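
To make the alignment concern concrete, here is a quick standalone
check, assuming the shard is picked by a plain modulo of the hash by
the shard count (patch 1/2 may reduce the hash differently, in which
case the picture changes):

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          size_t nr_shards = 8;   /* e.g. an 8-CPU machine */

          /* Pretend malloc hands back 128-byte aligned annotations. */
          for (size_t addr = 0x1000; addr < 0x1000 + 8 * 128; addr += 128) {
                  size_t hash = addr >> 4;        /* annotation__hash() */
                  printf("addr=%#zx -> shard %zu\n", addr, hash % nr_shards);
          }
          return 0;
  }

Every address prints "shard 0": with 128-byte alignment the low seven
bits are zero, so after the >> 4 the low three bits are still zero
and hash % 8 is always 0.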

Thanks,
Namhyung


>  }
>
> +static struct mutex *annotation__get_mutex(const struct annotation *notes)
> +{
> +       static pthread_once_t once = PTHREAD_ONCE_INIT;
> +
> +       pthread_once(&once, annotation__init_sharded_mutex);
> +       if (!sharded_mutex)
> +               return NULL;
> +
> +       return sharded_mutex__get_mutex(sharded_mutex, annotation__hash(notes));
> +}
> +
> +void annotation__lock(struct annotation *notes)
> +       NO_THREAD_SAFETY_ANALYSIS
> +{
> +       struct mutex *mutex = annotation__get_mutex(notes);
> +
> +       if (mutex)
> +               mutex_lock(mutex);
> +}
> +
> +void annotation__unlock(struct annotation *notes)
> +       NO_THREAD_SAFETY_ANALYSIS
> +{
> +       struct mutex *mutex = annotation__get_mutex(notes);
> +
> +       if (mutex)
> +               mutex_unlock(mutex);
> +}
> +
> +bool annotation__trylock(struct annotation *notes)
> +{
> +       struct mutex *mutex = annotation__get_mutex(notes);
> +
> +       if (!mutex)
> +               return false;
> +
> +       return mutex_trylock(mutex);
> +}
> +
> +
>  static void annotation_line__add(struct annotation_line *al, struct list_head *head)
>  {
>         list_add_tail(&al->node, head);
