From: Marco Elver <elver@google.com>
To: Dmitry Vyukov <dvyukov@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Ingo Molnar <mingo@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@redhat.com>, Namhyung Kim <namhyung@kernel.org>,
	linux-perf-users@vger.kernel.org, x86@kernel.org,
	linux-sh@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/8] perf/hw_breakpoint: Optimize list of per-task breakpoints
Date: Thu, 9 Jun 2022 20:37:04 +0200	[thread overview]
Message-ID: <CANpmjNNC7ry59OXsJrPMf56Xi63chexaDfnP4t8_4MG7S5ZgCg@mail.gmail.com> (raw)
In-Reply-To: <CACT4Y+Zq-1nczM2JH7Sr4mZo84gsCRd83RAwwnHwmap-wCOLTQ@mail.gmail.com>

On Thu, 9 Jun 2022 at 18:53, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Thu, 9 Jun 2022 at 16:56, Marco Elver <elver@google.com> wrote:
> > > > On a machine with 256 CPUs, running the recently added perf breakpoint
> > > > benchmark results in:
> > > >
> > > >  | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
> > > >  | # Running 'breakpoint/thread' benchmark:
> > > >  | # Created/joined 30 threads with 4 breakpoints and 64 parallelism
> > > >  |      Total time: 236.418 [sec]
> > > >  |
> > > >  |   123134.794271 usecs/op
> > > >  |  7880626.833333 usecs/op/cpu
> > > >
> > > > The benchmark tests inherited breakpoint perf events across many
> > > > threads.
> > > >
> > > > Looking at a perf profile, we can see that the majority of the time is
> > > > spent in various hw_breakpoint.c functions, which execute within
> > > > 'nr_bp_mutex' critical sections, which in turn results in contention
> > > > on that mutex as well:
> > > >
> > > >     37.27%  [kernel]       [k] osq_lock
> > > >     34.92%  [kernel]       [k] mutex_spin_on_owner
> > > >     12.15%  [kernel]       [k] toggle_bp_slot
> > > >     11.90%  [kernel]       [k] __reserve_bp_slot
> > > >
> > > > The culprit here is task_bp_pinned(), which has a runtime complexity of
> > > > O(#tasks) due to storing all task breakpoints in the same list and
> > > > iterating through that list looking for a matching task. Clearly, this
> > > > does not scale to thousands of tasks.
> > > >
> > > > While one option would be to make task_struct a breakpoint list node,
> > > > this would only further bloat task_struct for infrequently used data.
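
To make the quoted scaling problem concrete, the pre-patch pattern is
roughly the following linear walk over one global list holding every
task's breakpoints (a simplified sketch; list and helper names are
approximations, not copied verbatim from the tree):

  /* Walks the breakpoints of *all* tasks to count those of one task. */
  static int task_bp_pinned(int cpu, struct perf_event *bp,
                            enum bp_type_idx type)
  {
          struct perf_event *iter;
          int count = 0;

          /* bp_task_head is shared by every task in the system. */
          list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
                  if (iter->hw.target == bp->hw.target &&
                      find_slot_idx(iter->attr.bp_type) == type &&
                      (iter->cpu < 0 || cpu == iter->cpu))
                          count += hw_breakpoint_weight(iter);
          }
          return count;
  }

With thousands of tasks, every breakpoint operation pays this O(#tasks)
cost while holding the slot-accounting locks.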
> > >
> > > task_struct already has:
> > >
> > > #ifdef CONFIG_PERF_EVENTS
> > >   struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
> > >   struct mutex perf_event_mutex;
> > >   struct list_head perf_event_list;
> > > #endif
> > >
> > > Wonder if it's possible to use perf_event_mutex instead of the task_sharded_mtx?
> > > And possibly perf_event_list instead of task_bps_ht? It will contain
> > > other perf_event types, so we will need to test type as well, but on
> > > the positive side, we don't need any management of the separate
> > > container.
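
Concretely, that suggestion amounts to something like the sketch below,
reusing the existing perf_event_mutex/perf_event_list and filtering on
the event type; this is purely illustrative and sidesteps the locking
concerns raised next:

  /* Hypothetical: count a task's breakpoints via its perf_event_list. */
  static int task_bp_count(struct task_struct *tsk)
  {
          struct perf_event *event;
          int count = 0;

          mutex_lock(&tsk->perf_event_mutex);
          list_for_each_entry(event, &tsk->perf_event_list, owner_entry) {
                  if (event->attr.type == PERF_TYPE_BREAKPOINT)
                          count++;
          }
          mutex_unlock(&tsk->perf_event_mutex);

          return count;
  }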
> >
> > Hmm, yes, I looked at that, but then decided against messing with the
> > perf/core internals. The main issue I have with using perf_event_mutex
> > is that we might interfere with perf/core's locking rules, as well as
> > with other concurrent perf event additions. Using perf_event_list is
> > very likely a no-go, because it would require reworking perf/core as
> > well.
> >
> > I can already hear Peter shouting, but maybe I'm wrong. :-)
>
> Let's wait for Peter to shout then :)
> A significant part of this change is having per-task data w/o actually
> storing that data in task_struct.
>
> The current perf-related data in task_struct is already multiple words,
> and it also goes unused in lots of production cases.
> Maybe we could have something like:
>
>   struct perf_task_data* lazily_allocated_perf_data;
>
> that's lazily allocated on first use, instead of the current
> perf_event_ctxp/perf_event_mutex/perf_event_list.
> This way we could both reduce task_struct size when perf is not used and
> have more perf-related data (incl. breakpoints) when it is used.
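
Spelled out, the proposal would look roughly like this (layout is
hypothetical beyond the pointer name suggested above):

  /* Allocated on a task's first use of perf, freed on task exit. */
  struct perf_task_data {
          struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
          struct mutex              perf_event_mutex;
          struct list_head          perf_event_list;
          /* ... room for hw_breakpoint bookkeeping ... */
  };

  /* task_struct then carries only a single pointer: */
  #ifdef CONFIG_PERF_EVENTS
          struct perf_task_data *lazily_allocated_perf_data; /* NULL until first use */
  #endif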

I don't mind either option; keeping task_struct bloat in mind, we have:

  1. rhashtable option, no changes to task_struct (see the sketch after
this list).

  2. add the breakpoint mutex + list to task_struct.

  3. add something like hw_breakpoint_task_data* and allocate lazily.

  4. (your proposal) move all of perf data into a new struct (+add
hw_breakpoint things in there) that is lazily allocated.
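
For reference, option 1 is roughly the sketch below. The table name
follows the task_bps_ht mentioned earlier; the entry struct and lookup
helper are illustrative only, not necessarily how the patch does it:

  #include <linux/rhashtable.h>

  /* One entry per task that currently owns breakpoints. */
  struct task_bps_entry {
          struct rhash_head   node;
          struct task_struct  *task;  /* hash key */
          struct list_head    bps;    /* this task's breakpoints */
  };

  static const struct rhashtable_params task_bps_ht_params = {
          .key_offset          = offsetof(struct task_bps_entry, task),
          .key_len             = sizeof_field(struct task_bps_entry, task),
          .head_offset         = offsetof(struct task_bps_entry, node),
          .automatic_shrinking = true,
  };

  static struct rhashtable task_bps_ht;  /* rhashtable_init() at init time */

  /* Expected O(1) lookup replaces the O(#tasks) list walk. */
  static struct task_bps_entry *task_bps_lookup(struct task_struct *tsk)
  {
          return rhashtable_lookup_fast(&task_bps_ht, &tsk, task_bps_ht_params);
  }

The lookup cost no longer depends on how many tasks have breakpoints, at
the price of managing the container's lifetime (insert on a task's first
breakpoint, remove when its last breakpoint goes away).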

I don't think perf is that infrequently used, and I can't estimate the
performance impact, so personally I don't like #4 too much. My preferred
compromise would be #3, but at the same time I'd rather not bloat
task_struct with even 8 extra, infrequently used bytes. Am I too
paranoid?

Preferences?


Thread overview: 34+ messages
2022-06-09 11:30 [PATCH 0/8] perf/hw_breakpoint: Optimize for thousands of tasks Marco Elver
2022-06-09 11:30 ` [PATCH 1/8] perf/hw_breakpoint: Optimize list of per-task breakpoints Marco Elver
2022-06-09 12:30   ` Dmitry Vyukov
2022-06-09 12:53     ` Marco Elver
2022-06-09 13:05       ` Dmitry Vyukov
2022-06-09 14:29   ` Dmitry Vyukov
2022-06-09 14:55     ` Marco Elver
2022-06-09 16:53       ` Dmitry Vyukov
2022-06-09 18:37         ` Marco Elver [this message]
2022-06-10  9:04           ` Dmitry Vyukov
2022-06-10  9:36             ` Marco Elver
2022-06-09 11:30 ` [PATCH 2/8] perf/hw_breakpoint: Mark data __ro_after_init Marco Elver
2022-06-09 11:45   ` Dmitry Vyukov
2022-06-09 11:30 ` [PATCH 3/8] perf/hw_breakpoint: Optimize constant number of breakpoint slots Marco Elver
2022-06-09 11:55   ` Dmitry Vyukov
2022-06-09 11:30 ` [PATCH 4/8] perf/hw_breakpoint: Make hw_breakpoint_weight() inlinable Marco Elver
2022-06-09 12:03   ` Dmitry Vyukov
2022-06-09 12:08     ` Marco Elver
2022-06-09 12:23       ` Dmitry Vyukov
2022-06-09 13:25     ` Peter Zijlstra
2022-06-09 11:30 ` [PATCH 5/8] perf/hw_breakpoint: Remove useless code related to flexible breakpoints Marco Elver
2022-06-09 12:04   ` Dmitry Vyukov
2022-06-09 13:41     ` Dmitry Vyukov
2022-06-09 14:00       ` Marco Elver
2022-06-09 11:30 ` [PATCH 6/8] perf/hw_breakpoint: Reduce contention with large number of tasks Marco Elver
2022-06-09 13:03   ` Dmitry Vyukov
2022-06-09 13:29     ` Marco Elver
2022-06-09 11:30 ` [PATCH 7/8] perf/hw_breakpoint: Optimize task_bp_pinned() if CPU-independent Marco Elver
2022-06-09 15:00   ` Dmitry Vyukov
2022-06-10  8:25     ` Marco Elver
2022-06-10  9:13       ` Dmitry Vyukov
2022-06-09 11:30 ` [PATCH 8/8] perf/hw_breakpoint: Clean up headers Marco Elver
2022-06-09 12:11   ` Dmitry Vyukov
2022-06-09 12:28 ` [PATCH 0/8] perf/hw_breakpoint: Optimize for thousands of tasks Dmitry Vyukov
