From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: paulmck@kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	 linux-trace-kernel@vger.kernel.org, rostedt@goodmis.org,
	mhiramat@kernel.org,  oleg@redhat.com, mingo@redhat.com,
	bpf@vger.kernel.org, jolsa@kernel.org,  clm@meta.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 00/12] uprobes: add batched register/unregister APIs and per-CPU RW semaphore
Date: Tue, 2 Jul 2024 21:54:43 -0700
Message-ID: <CAEf4Bzbz2bXFFB_s=bD+8CFAvMNuRSXxJPQBkRxWjY303v4Caw@mail.gmail.com>
In-Reply-To: <fd1d8b71-2a42-4649-b7ba-1b2e88028a20@paulmck-laptop>

On Tue, Jul 2, 2024 at 4:56 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Tue, Jul 02, 2024 at 09:18:57PM +0200, Peter Zijlstra wrote:
> > On Tue, Jul 02, 2024 at 10:54:51AM -0700, Andrii Nakryiko wrote:
> >
> > > > @@ -593,6 +595,12 @@ static struct uprobe *get_uprobe(struct uprobe *uprobe)
> > > >         return uprobe;
> > > >  }
> > > >
> > > > +static void uprobe_free_rcu(struct rcu_head *rcu)
> > > > +{
> > > > +       struct uprobe *uprobe = container_of(rcu, struct uprobe, rcu);
> > > > +       kfree(uprobe);
> > > > +}
> > > > +
> > > >  static void put_uprobe(struct uprobe *uprobe)
> > > >  {
> > > >         if (refcount_dec_and_test(&uprobe->ref)) {
> > > > @@ -604,7 +612,8 @@ static void put_uprobe(struct uprobe *uprobe)
> > >
> > > right above this we have roughly this:
> > >
> > > percpu_down_write(&uprobes_treelock);
> > >
> > > /* refcount check */
> > > rb_erase(&uprobe->rb_node, &uprobes_tree);
> > >
> > > percpu_up_write(&uprobes_treelock);
> > >
> > >
> > > This writer lock is necessary for modifying the RB tree. And I
> > > was under the impression that I shouldn't be doing
> > > percpu_(down|up)_write() inside a normal
> > > rcu_read_lock()/rcu_read_unlock() region (percpu_down_write() has
> > > might_sleep() in it). But maybe I'm wrong; hopefully Paul can help
> > > clarify.
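
(For concreteness, the pattern I'm worried about boils down to roughly
this sketch:)

	rcu_read_lock();
	/*
	 * percpu_down_write() contains might_sleep(), which is illegal
	 * inside a normal (non-sleepable) RCU read-side critical
	 * section:
	 */
	percpu_down_write(&uprobes_treelock);
	rb_erase(&uprobe->rb_node, &uprobes_tree);
	percpu_up_write(&uprobes_treelock);
	rcu_read_unlock();
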
> >
> > preemptible RCU or SRCU would work.
>
> I agree that SRCU would work from a functional viewpoint.  Not so for
> preemptible RCU: while it permits preemption (and, on -rt, blocking for
> spinlocks), it does not permit full-up blocking, and for good reason.
>
> > > But actually what's wrong with RCU Tasks Trace flavor?
> >
> > Paul, isn't this the RCU flavour you created to deal with
> > !rcu_is_watching()? The flavour that never should have been created in
> > favour of just cleaning up the mess instead of making more.
>
> My guess is that you are instead thinking of RCU Tasks Rude, which can
> be eliminated once all architectures get their entry/exit/deep-idle
> functions either inlined or marked noinstr.
>
> > > I will
> > > ultimately use it anyway to avoid uprobe taking unnecessary refcount
> > > and to protect uprobe->consumers iteration and uc->handler() calls,
> > > which could be sleepable, so would need rcu_read_lock_trace().
> >
> > I don't think you need trace-rcu for that. SRCU would do nicely I think.
>
> From a functional viewpoint, agreed.
>
> However, in the past, the memory-barrier and array-indexing overhead
> of SRCU has made it a no-go for lightweight probes into fastpath code.
> And these cases were what motivated RCU Tasks Trace (as opposed to RCU
> Tasks Rude).

Yep, and this is a similar case here. I've actually implemented
SRCU-based protection and benchmarked it (all other things being
equal). I see a 5% slowdown for the fastest uprobe kind (entry uprobe
on a nop) in the single-threaded case: we go down from 3.15 million
triggerings/s to slightly below 3 million/s. With more threads the
difference grows, though the numbers vary from run to run, so I don't
want to put out exact figures. But I do see that the SRCU-based
implementation tops out at about 3.5-3.6 mln/s of total aggregated
throughput, while this implementation reaches 4-4.1 mln/s. Again, some
of that could be variability, but I ran multiple rounds and that's the
consistent trend.
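
For reference, the two reader-side patterns being compared boil down
to roughly this (a sketch only; uprobes_srcu is a hypothetical
srcu_struct, and refcounting/error handling is elided):

	/* SRCU variant: pays an smp_mb() plus per-CPU array indexing */
	int idx = srcu_read_lock(&uprobes_srcu);
	uprobe = __find_uprobe(inode, offset);
	srcu_read_unlock(&uprobes_srcu, idx);

	/* RCU Tasks Trace variant: cheaper read-side entry/exit */
	rcu_read_lock_trace();
	uprobe = __find_uprobe(inode, offset);
	rcu_read_unlock_trace();

The read-side entry/exit cost is the only difference between the two
variants in the benchmark.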

>
> The other rule for RCU Tasks Trace is that although readers are permitted
> to block, this blocking can be for no longer than a major page fault.
> If you need longer-term blocking, then you should instead use SRCU.
>

And this is the case here. Right now rcu_read_lock_trace() is
protecting uprobes_treelock, which is only taken for the duration of
an RB-tree lookup/insert/delete. In my subsequent changes to eliminate
register_rwsem we might be executing uprobe consumers under this RCU
lock, but those too should only be sleeping for page faults.

The hot path (reader side), on the other hand, is executed millions of
times per second and should add as little overhead as possible (which
is why I'm seeing the SRCU-based implementation being slower, as I
mentioned above).
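
Roughly, what I have in mind for that future consumer step (very much
a sketch; the cons_node list field is hypothetical, consumers are
currently chained via uc->next):

	rcu_read_lock_trace();
	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node) {
		/*
		 * The handler may fault in user pages, i.e., block
		 * briefly, which RCU Tasks Trace readers may do.
		 */
		uc->handler(uc, regs);
	}
	rcu_read_unlock_trace();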

>                                                         Thanx, Paul
>
> > > >                 mutex_lock(&delayed_uprobe_lock);
> > > >                 delayed_uprobe_remove(uprobe, NULL);
> > > >                 mutex_unlock(&delayed_uprobe_lock);
> > > > -               kfree(uprobe);
> > > > +
> > > > +               call_rcu(&uprobe->rcu, uprobe_free_rcu);
> > > >         }
> > > >  }
> > > >
> > > > @@ -668,12 +677,25 @@ static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
> > > >  static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
> > > >  {
> > > >         struct uprobe *uprobe;
> > > > +       unsigned seq;
> > > >
> > > > -       read_lock(&uprobes_treelock);
> > > > -       uprobe = __find_uprobe(inode, offset);
> > > > -       read_unlock(&uprobes_treelock);
> > > > +       guard(rcu)();
> > > >
> > > > -       return uprobe;
> > > > +       do {
> > > > +               seq = read_seqcount_begin(&uprobes_seqcount);
> > > > +               uprobe = __find_uprobe(inode, offset);
> > > > +               if (uprobe) {
> > > > +                       /*
> > > > +                        * Lockless RB-tree lookups are prone to false-negatives.
> > > > +                        * If they find something, it's good. If they do not find
> > > > +                        * anything, the result needs to be validated.
> > > > +                        */
> > > > +                       return uprobe;
> > > > +               }
> > > > +       } while (read_seqcount_retry(&uprobes_seqcount, seq));
> > > > +
> > > > +       /* Really didn't find anything. */
> > > > +       return NULL;
> > > >  }
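
(For the record, the writer side that pairs with this retry loop would
look roughly like so; a sketch, not part of the posted diff:)

	/* tree modifications still serialize on some write-side lock */
	write_seqcount_begin(&uprobes_seqcount);
	rb_erase(&uprobe->rb_node, &uprobes_tree);	/* or an insertion */
	write_seqcount_end(&uprobes_seqcount);

A concurrent lockless lookup that comes up empty while this is in
flight sees the seqcount change and retries, which is what makes the
false-negative-prone lockless RB-tree walk safe to use.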
> > >
> > > Honest question here, as I don't understand the tradeoffs well enough.
> > > Is there a lot of benefit to switching to seqcount lock vs using
> > > percpu RW semaphore (previously recommended by Ingo). The latter is a
> > > nice drop-in replacement and seems to be very fast and scale well.
> >
> > As you noted, that percpu-rwsem write side is quite insane. And you're
> > creating this batch complexity to mitigate that.
> >
> > The patches you propose are quite complex, this alternative not so much.
> >
> > > Right now we are bottlenecked on uprobe->register_rwsem (not
> > > uprobes_treelock anymore), which is currently limiting the scalability
> > > of uprobes and I'm going to work on that next once I'm done with this
> > > series.
> >
> > Right, but it looks fairly simple to replace that rwsem with a mutex and
> > srcu.
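
If I understand the suggestion, that would look roughly like this (a
sketch; uprobes_srcu, register_mutex, and the cons_node list are all
hypothetical names):

	/* unregister path: a mutex replaces uprobe->register_rwsem */
	mutex_lock(&uprobe->register_mutex);
	list_del_rcu(&uc->cons_node);
	mutex_unlock(&uprobe->register_mutex);
	/* wait for in-flight handler invocations before freeing uc */
	synchronize_srcu(&uprobes_srcu);

	/* handler invocation path (reader side): */
	idx = srcu_read_lock(&uprobes_srcu);
	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
				 srcu_read_lock_held(&uprobes_srcu))
		uc->handler(uc, regs);
	srcu_read_unlock(&uprobes_srcu, idx);

Whether SRCU's read-side cost is acceptable on this path is exactly
the open question raised by the benchmark numbers above.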
