From: Uladzislau Rezki <urezki@gmail.com>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Vlastimil Babka <vbabka@suse.cz>, RCU <rcu@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Neeraj upadhyay <Neeraj.Upadhyay@amd.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sony.com>
Subject: Re: [PATCH] rcu/kvfree: Add kvfree_rcu_barrier() API
Date: Mon, 5 Aug 2024 20:58:11 +0200
Message-ID: <ZrEgwyQFnmToTNvl@pc636>
In-Reply-To: <ZrD537_itA2sWQoA@boqun-archlinux>

On Mon, Aug 05, 2024 at 09:12:15AM -0700, Boqun Feng wrote:
> On Thu, Aug 01, 2024 at 01:10:39PM +0200, Uladzislau Rezki (Sony) wrote:
> > Add a kvfree_rcu_barrier() function. It waits until all
> > in-flight pointers are freed over the RCU machinery. It does
> > not wait for any GP completion and is within its rights to
> > return immediately if there are no outstanding pointers.
> > 
> > This function is useful when there is a need to guarantee
> > that memory is fully freed before destroying memory caches,
> > for example when unloading a kernel module.
> > 
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> >  include/linux/rcutiny.h |   5 ++
> >  include/linux/rcutree.h |   1 +
> >  kernel/rcu/tree.c       | 103 ++++++++++++++++++++++++++++++++++++----
> >  3 files changed, 101 insertions(+), 8 deletions(-)
> > 
> > diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> > index d9ac7b136aea..522123050ff8 100644
> > --- a/include/linux/rcutiny.h
> > +++ b/include/linux/rcutiny.h
> > @@ -111,6 +111,11 @@ static inline void __kvfree_call_rcu(struct rcu_head *head, void *ptr)
> >  	kvfree(ptr);
> >  }
> >  
> > +static inline void kvfree_rcu_barrier(void)
> > +{
> > +	rcu_barrier();
> > +}
> > +
> >  #ifdef CONFIG_KASAN_GENERIC
> >  void kvfree_call_rcu(struct rcu_head *head, void *ptr);
> >  #else
> > diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> > index 254244202ea9..58e7db80f3a8 100644
> > --- a/include/linux/rcutree.h
> > +++ b/include/linux/rcutree.h
> > @@ -35,6 +35,7 @@ static inline void rcu_virt_note_context_switch(void)
> >  
> >  void synchronize_rcu_expedited(void);
> >  void kvfree_call_rcu(struct rcu_head *head, void *ptr);
> > +void kvfree_rcu_barrier(void);
> >  
> >  void rcu_barrier(void);
> >  void rcu_momentary_dyntick_idle(void);
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 28c7031711a3..1423013f9fe6 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3550,18 +3550,15 @@ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
> >  }
> >  
> >  /*
> > - * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
> > + * Return: %true if a work is queued, %false otherwise.
> >   */
> > -static void kfree_rcu_monitor(struct work_struct *work)
> > +static bool
> > +kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
> >  {
> > -	struct kfree_rcu_cpu *krcp = container_of(work,
> > -		struct kfree_rcu_cpu, monitor_work.work);
> >  	unsigned long flags;
> > +	bool queued = false;
> >  	int i, j;
> >  
> > -	// Drain ready for reclaim.
> > -	kvfree_rcu_drain_ready(krcp);
> > -
> >  	raw_spin_lock_irqsave(&krcp->lock, flags);
> >  
> >  	// Attempt to start a new batch.
> > @@ -3600,11 +3597,27 @@ static void kfree_rcu_monitor(struct work_struct *work)
> >  			// be that the work is in the pending state when
> >  			// channels have been detached following by each
> >  			// other.
> > -			queue_rcu_work(system_wq, &krwp->rcu_work);
> > +			queued = queue_rcu_work(system_wq, &krwp->rcu_work);
> >  		}
> >  	}
> >  
> >  	raw_spin_unlock_irqrestore(&krcp->lock, flags);
> > +	return queued;
> > +}
> > +
> > +/*
> > + * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
> > + */
> > +static void kfree_rcu_monitor(struct work_struct *work)
> > +{
> > +	struct kfree_rcu_cpu *krcp = container_of(work,
> > +		struct kfree_rcu_cpu, monitor_work.work);
> > +
> > +	// Drain ready for reclaim.
> > +	kvfree_rcu_drain_ready(krcp);
> > +
> > +	// Queue a batch for the rest.
> > +	kvfree_rcu_queue_batch(krcp);
> >  
> >  	// If there is nothing to detach, it means that our job is
> >  	// successfully done here. In case of having at least one
> > @@ -3825,6 +3838,80 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
> >  }
> >  EXPORT_SYMBOL_GPL(kvfree_call_rcu);
> >  
> > +/**
> > + * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
> > + *
> > + * Note that a single-argument kvfree_rcu() call has a slow path that
> > + * triggers synchronize_rcu() followed by freeing the pointer. This is
> > + * done before the function returns. Therefore, for any single-argument
> > + * call that will result in a kfree() to a cache that is to be destroyed
> > + * during module exit, it is the developer's responsibility to ensure that
> > + * all such calls have returned before the call to kmem_cache_destroy().
> > + */
> > +void kvfree_rcu_barrier(void)
> > +{
> > +	struct kfree_rcu_cpu_work *krwp;
> > +	struct kfree_rcu_cpu *krcp;
> > +	bool queued;
> > +	int i, cpu;
> > +
> > +	/*
> > +	 * First we detach objects and queue them over an RCU batch for
> > +	 * all CPUs. Then the queued works are flushed for each CPU.
> > +	 *
> > +	 * Please note: if there are outstanding batches for a particular
> > +	 * CPU, those have to be finished first, followed by queuing a new one.
> > +	 */
> > +	for_each_possible_cpu(cpu) {
> > +		krcp = per_cpu_ptr(&krc, cpu);
> > +
> > +		/*
> > +		 * Check if this CPU has any objects which have been queued for a
> > +		 * new GP completion. If not (nothing to detach), we are done with
> > +		 * it. If any batch is pending/running for this "krcp", the per-CPU
> > +		 * flush_rcu_work() below waits for its completion (see the last step).
> > +		 */
> > +		if (!need_offload_krc(krcp))
> 
> Still trying to figure out the locking inside kfree_rcu(), but don't you
> need to hold krcp->lock to perform these checks?
> 
Here we just need to answer the question "need" or "no need" in order to
bail out _early_ for this CPU. We are interested in the _already_ in-flight
objects, i.e. the ones queued before entry to the barrier function.

The reason we have the check is that it costs nothing. In fact we could
eliminate it and directly queue a batch, but that would require locking
and more CPU cycles.

That is why the check is there :)
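
To illustrate the intended usage, here is a minimal sketch. The "foo"
module, its object layout and all of its names are hypothetical, i.e.
not part of the patch:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

/* A hypothetical object, allocated with kmalloc(). */
struct foo {
	struct rcu_head rcu;
	int data;
};

static void foo_free(struct foo *f)
{
	/* Two-argument form: queues the object and returns immediately. */
	kvfree_rcu(f, rcu);
}

static void __exit foo_exit(void)
{
	/*
	 * Flush all in-flight kvfree_rcu() pointers. After this returns,
	 * no object queued above still waits to be freed, so the module
	 * text and any caches its objects lived in can safely go away.
	 */
	kvfree_rcu_barrier();
}
module_exit(foo_exit);

Per the kernel-doc above, any single-argument kvfree_rcu(ptr) calls must
themselves have returned before this point, since their slow path frees
the pointer synchronously after synchronize_rcu().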

--
Uladzislau Rezki
