From: Oleg Nesterov <oleg@redhat.com>
To: Tejun Heo <tj@kernel.org>
Cc: torvalds@linux-foundation.org, jannh@google.com,
	paulmck@linux.vnet.ibm.com, bcrl@kvack.org,
	viro@zeniv.linux.org.uk, kent.overstreet@gmail.com,
	security@kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com
Subject: Re: [PATCH 8/8] fs/aio: Use rcu_work instead of explicit rcu and work item
Date: Wed, 21 Mar 2018 16:58:13 +0100
Message-ID: <20180321155812.GA9382@redhat.com>
In-Reply-To: <20180314194515.1661824-8-tj@kernel.org>

Hi Tejun,

sorry for the late reply.

On 03/14, Tejun Heo wrote:
>
> Workqueue now has rcu_work.  Use it instead of open-coding rcu -> work
> item bouncing.

Yes, but a bit of open-coding may be more efficient...
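
For context, the rcu_work path in your patch boils down to something like
this (quoting from memory, so the exact call sites may differ):

	INIT_RCU_WORK(&ctx->free_rwork, free_ioctx);
	queue_rcu_work(system_wq, &ctx->free_rwork);

i.e. every kioctx carries both an rcu_head and a work_struct just to be
freed once.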

> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -115,8 +115,7 @@ struct kioctx {
>  	struct page		**ring_pages;
>  	long			nr_pages;
>  
> -	struct rcu_head		free_rcu;
> -	struct work_struct	free_work;	/* see free_ioctx() */
> +	struct rcu_work		free_rwork;	/* see free_ioctx() */

IIUC, you can't easily share rcu_work's, so every kioctx needs its own
->free_rwork, which looks sub-optimal.
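
To put rough numbers on it (64-bit, no debug options, so treat these as
estimates): struct rcu_work embeds a work_struct (32 bytes), an rcu_head
(16 bytes) and a workqueue pointer (8 bytes), so ~56 bytes per kioctx,
while the union in the patch below only costs the 16 bytes of the
rcu_head, since the rcu_head and the llist_node are never live at the
same time. And IIUC a pending rcu_work can't be re-queued, so a shared
one would need its own list of objects on top anyway, which is what the
llist open-codes.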

What do you think about the (untested) patch below?

Oleg.


--- a/fs/aio.c
+++ b/fs/aio.c
@@ -115,8 +115,10 @@ struct kioctx {
 	struct page		**ring_pages;
 	long			nr_pages;
 
-	struct rcu_head		free_rcu;
-	struct work_struct	free_work;	/* see free_ioctx() */
+	union {
+		struct rcu_head		free_rcu;
+		struct llist_node	free_llist;
+	};
 
 	/*
 	 * signals when all in-flight requests are done
@@ -589,31 +591,38 @@ static int kiocb_cancel(struct aio_kiocb *kiocb)
 	return cancel(&kiocb->common);
 }
 
+static struct llist_head free_ioctx_llist;
+
 /*
- * free_ioctx() should be RCU delayed to synchronize against the RCU
- * protected lookup_ioctx() and also needs process context to call
- * aio_free_ring(), so the double bouncing through kioctx->free_rcu and
- * ->free_work.
+ * Freeing a kioctx should be RCU delayed to synchronize against the RCU
+ * protected lookup_ioctx() and also needs process context to call
+ * aio_free_ring(), so the bouncing through kioctx->free_rcu and the
+ * global free_ioctx_llist + free_ioctx_work.
  */
-static void free_ioctx(struct work_struct *work)
+static void free_ioctx_workfn(struct work_struct *work)
 {
-	struct kioctx *ctx = container_of(work, struct kioctx, free_work);
+	struct llist_node *llist = llist_del_all(&free_ioctx_llist);
+	struct kioctx *ctx, *tmp;
 
-	pr_debug("freeing %p\n", ctx);
+	llist_for_each_entry_safe(ctx, tmp, llist, free_llist) {
+		pr_debug("freeing %p\n", ctx);
 
-	aio_free_ring(ctx);
-	free_percpu(ctx->cpu);
-	percpu_ref_exit(&ctx->reqs);
-	percpu_ref_exit(&ctx->users);
-	kmem_cache_free(kioctx_cachep, ctx);
+		aio_free_ring(ctx);
+		free_percpu(ctx->cpu);
+		percpu_ref_exit(&ctx->reqs);
+		percpu_ref_exit(&ctx->users);
+		kmem_cache_free(kioctx_cachep, ctx);
+	}
 }
 
+static DECLARE_WORK(free_ioctx_work, free_ioctx_workfn);
+
 static void free_ioctx_rcufn(struct rcu_head *head)
 {
 	struct kioctx *ctx = container_of(head, struct kioctx, free_rcu);
 
-	INIT_WORK(&ctx->free_work, free_ioctx);
-	schedule_work(&ctx->free_work);
+	if (llist_add(&ctx->free_llist, &free_ioctx_llist))
+		schedule_work(&free_ioctx_work);
 }
 
 static void free_ioctx_reqs(struct percpu_ref *ref)
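
Note the trick in free_ioctx_rcufn() above: llist_add() returns true
only when the node was added to a previously empty list, so exactly one
schedule_work() is issued per batch of dying kioctx's;
free_ioctx_workfn() then atomically detaches the whole batch with
llist_del_all(). Both sides are lock-free, and the single static
free_ioctx_work replaces the per-kioctx work_struct.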
