public inbox for linux-kernel@vger.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Alexei Starovoitov <ast@plumgrid.com>
Cc: Daniel Wagner <daniel.wagner@bmw-carit.de>,
	LKML <linux-kernel@vger.kernel.org>,
	rostedt@goodmis.org
Subject: Re: call_rcu from trace_preempt
Date: Wed, 17 Jun 2015 17:20:43 -0700	[thread overview]
Message-ID: <20150618002043.GV3913@linux.vnet.ibm.com> (raw)
In-Reply-To: <558209B8.80405@plumgrid.com>

On Wed, Jun 17, 2015 at 04:58:48PM -0700, Alexei Starovoitov wrote:
> On 6/17/15 2:36 PM, Paul E. McKenney wrote:
> >Well, you do need to have something in each element to allow them to be
> >tracked.  You could indeed use llist_add() to maintain the per-CPU list,
> >and then use llist_del_all() bulk-remove all the elements from the per-CPU
> >list.  You can then pass each element in turn to kfree_rcu().  And yes,
> >I am suggesting that you open-code this, as it is going to be easier to
> >handle your special case than to provide a fully general solution.  For
> >one thing, the general solution would require a full rcu_head to track
> >offset and next.  In contrast, you can special-case the offset.  And
> >ignore the overload special cases.
> 
> yes. all makes sense.
> 
> > Locklessly enqueue onto a per-CPU list, but yes.  The freeing is up to
> 
> yes. per-cpu llist indeed.
> 
> > you -- you get called just before exit from __call_rcu(), and get to
> > figure out what to do.
> >
> > My guess would be if not in interrupt and not recursively invoked,
> > atomically remove all the elements from the list, then pass each to
> > kfree_rcu(), and finally let things take their course from there.
> > The llist APIs look like they would work.
> 
> The above, and the 'just before the exit from __call_rcu()' part of the
> suggestion, I still don't understand.
> To avoid reentry into call_rcu I could either create one or more new
> kthreads or a workqueue and do manual wakeups, but that's very
> specialized and I don't want to waste them permanently, so I'm thinking
> to llist_add into per-cpu llists and do llist_del_all in
> rcu_process_callbacks() to take them from these llists and call
> kfree_rcu() on them.

Another option is to drain the lists the next time you do an allocation.
That would avoid hooking both __call_rcu() and rcu_process_callbacks().

							Thanx, Paul

> The llist_add part will also do:
> if (!rcu_is_watching()) invoke_rcu_core();
> to raise the softirq when necessary.
> So in the end it will look like a two-phase kfree_rcu.
> I'll try to code it up and see if it explodes :)



Thread overview: 41+ messages
2015-06-15 22:24 call_rcu from trace_preempt Alexei Starovoitov
2015-06-15 23:07 ` Paul E. McKenney
2015-06-16  1:09   ` Alexei Starovoitov
2015-06-16  2:14     ` Paul E. McKenney
2015-06-16  5:45       ` Alexei Starovoitov
2015-06-16  6:06         ` Daniel Wagner
2015-06-16  6:25           ` Alexei Starovoitov
2015-06-16  6:34             ` Daniel Wagner
2015-06-16  6:46               ` Alexei Starovoitov
2015-06-16  6:54                 ` Daniel Wagner
2015-06-16 12:27         ` Paul E. McKenney
2015-06-16 12:38           ` Daniel Wagner
2015-06-16 14:16             ` Paul E. McKenney
2015-06-16 15:43               ` Steven Rostedt
2015-06-16 16:07                 ` Paul E. McKenney
2015-06-16 17:13                   ` Daniel Wagner
2015-06-16 15:41             ` Steven Rostedt
2015-06-16 15:52               ` Steven Rostedt
2015-06-16 17:11               ` Daniel Wagner
2015-06-16 17:20             ` Alexei Starovoitov
2015-06-16 17:37               ` Steven Rostedt
2015-06-17  0:33                 ` Alexei Starovoitov
2015-06-17  0:47                   ` Steven Rostedt
2015-06-17  1:04                     ` Alexei Starovoitov
2015-06-17  1:19                       ` Steven Rostedt
2015-06-17  8:11               ` Daniel Wagner
2015-06-17  9:05                 ` Daniel Wagner
2015-06-17 18:39                   ` Alexei Starovoitov
2015-06-17 20:37                     ` Paul E. McKenney
2015-06-17 20:53                       ` Alexei Starovoitov
2015-06-17 21:36                         ` Paul E. McKenney
2015-06-17 23:58                           ` Alexei Starovoitov
2015-06-18  0:20                             ` Paul E. McKenney [this message]
2015-06-16 15:37           ` Steven Rostedt
2015-06-16 16:05             ` Paul E. McKenney
2015-06-16 17:14               ` Alexei Starovoitov
2015-06-16 17:39                 ` Paul E. McKenney
2015-06-16 18:57                   ` Steven Rostedt
2015-06-16 19:20                     ` Paul E. McKenney
2015-06-16 19:29                       ` Steven Rostedt
2015-06-16 19:34                         ` Paul E. McKenney
