From: Tejun Heo <tj@kernel.org>
To: Waiman Long <longman@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
cgroups@vger.kernel.org, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, Ming Lei <ming.lei@redhat.com>
Subject: Re: [PATCH v5 3/3] blk-cgroup: Optimize blkcg_rstat_flush()
Date: Thu, 2 Jun 2022 07:46:45 -1000
Message-ID: <Ypj3hcodkAU1MUR7@slm.duckdns.org>
In-Reply-To: <42da456d-8f6a-3af0-4cd3-d33a07e3b81e@redhat.com>
On Thu, Jun 02, 2022 at 01:26:10PM -0400, Waiman Long wrote:
>
> On 6/2/22 12:58, Tejun Heo wrote:
> > Hello,
> >
> > On Thu, Jun 02, 2022 at 09:35:43AM -0400, Waiman Long wrote:
> > > @@ -2011,9 +2076,16 @@ void blk_cgroup_bio_start(struct bio *bio)
> > > }
> > > bis->cur.ios[rwd]++;
> > > + if (!READ_ONCE(bis->lnode.next)) {
> > > + struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
> > > +
> > > + llist_add(&bis->lnode, lhead);
> > > + percpu_ref_get(&bis->blkg->refcnt);
> > Hmm... what guarantees that no more than one thread races here? llist
> > assumes that there's a single writer for a given llist_node, and the
> > refcount would be off too, right?
>
> The llist_add() function is atomic. It calls into llist_add_batch() in
> lib/llist.c, which uses cmpxchg() to make the change. There is also a
> non-atomic version, __llist_add(), which might be problematic in this case.
> Note, though, that irqs are disabled in the u64_stats_update* critical
> section, so there shouldn't be a racing thread running on the same cpu, and
> other cpus only modify their own lhead. Perhaps the non-atomic version
> could be used here as well.
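(For reference, a simplified sketch of the cmpxchg() retry loop that
llist_add() resolves to via llist_add_batch() in lib/llist.c; this is the
shape of the code being described, not a verbatim copy:)

	static inline bool llist_add_sketch(struct llist_node *new,
					    struct llist_head *head)
	{
		struct llist_node *first;

		/*
		 * Retry until we atomically swing head->first from the value
		 * we sampled to the new node, so concurrent adders from other
		 * contexts remain safe.
		 */
		do {
			new->next = first = READ_ONCE(head->first);
		} while (cmpxchg(&head->first, first, new) != first);

		return !first;	/* true if the list was previously empty */
	}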
Ah, right, this is per-cpu, so there can be no second writer trying to add
the same node at the same time. Can you add a comment explaining the overall
design / behavior? Other than that, please feel free to add
Acked-by: Tejun Heo <tj@kernel.org>
Thanks.
--
tejun
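For context, a minimal sketch of the flush-side counterpart this design
implies. The names come from the quoted hunk (blkcg->lhead, bis->lnode,
bis->blkg); how the "already queued" sentinel gets cleared is an assumption
here, not something shown in the patch:

	static void blkcg_flush_cpu_sketch(struct blkcg *blkcg, int cpu)
	{
		struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
		struct llist_node *lnode = llist_del_all(lhead);	/* atomically take the whole list */
		struct blkg_iostat_set *bis, *next;

		if (!lnode)
			return;	/* no blkg did I/O on this cpu since the last flush */

		llist_for_each_entry_safe(bis, next, lnode, lnode) {
			/* ... propagate bis->cur deltas to bis->blkg and its parent ... */
			WRITE_ONCE(bis->lnode.next, NULL);	/* let the hot path re-queue it */
			percpu_ref_put(&bis->blkg->refcnt);	/* pairs with the get at add time */
		}
	}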
Thread overview: 20+ messages
2022-06-01 21:18 [PATCH v3 0/2] blk-cgroup: Optimize blkcg_rstat_flush() Waiman Long
2022-06-01 21:18 ` [PATCH v3 1/2] blk-cgroup: Correctly free percpu iostat_cpu in blkg on error exit Waiman Long
2022-06-01 21:18 ` [PATCH v3 2/2] blk-cgroup: Optimize blkcg_rstat_flush() Waiman Long
2022-06-01 21:26 ` Tejun Heo
2022-06-01 21:30 ` Waiman Long
2022-06-02 6:32 ` kernel test robot
2022-06-02 1:54 ` [PATCH v4 " Waiman Long
2022-06-02 13:35 ` [PATCH v5 0/3] " Waiman Long
2022-06-02 13:35 ` [PATCH v5 1/3] blk-cgroup: Correctly free percpu iostat_cpu in blkg on error exit Waiman Long
2022-06-02 18:54 ` [PATCH v5 4/4] blk-cgroup: Document the design of new lockless iostat_cpu list Waiman Long
2022-06-02 19:05 ` Tejun Heo
2022-06-02 19:12 ` Waiman Long
2022-06-02 13:35 ` [PATCH v5 2/3] blk-cgroup: Return -ENOMEM directly in blkcg_css_alloc() error path Waiman Long
2022-06-02 16:16 ` Tejun Heo
2022-06-02 17:17 ` Waiman Long
2022-06-02 13:35 ` [PATCH v5 3/3] blk-cgroup: Optimize blkcg_rstat_flush() Waiman Long
2022-06-02 16:58 ` Tejun Heo
2022-06-02 17:26 ` Waiman Long
2022-06-02 17:46 ` Tejun Heo [this message]
2022-06-02 18:18 ` Waiman Long