From: peterz@infradead.org
To: Oleg Nesterov <oleg@redhat.com>
Cc: Hou Tao <houtao1@huawei.com>, Ingo Molnar <mingo@redhat.com>,
Will Deacon <will@kernel.org>, Dennis Zhou <dennis@kernel.org>,
Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Jan Kara <jack@suse.cz>
Subject: Re: [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for read_count
Date: Tue, 15 Sep 2020 18:03:44 +0200 [thread overview]
Message-ID: <20200915160344.GH35926@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20200915155150.GD2674@hirez.programming.kicks-ass.net>
On Tue, Sep 15, 2020 at 05:51:50PM +0200, peterz@infradead.org wrote:
> Anyway, I'll rewrite the Changelog and stuff it in locking/urgent.
How's this?
---
Subject: locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count
From: Hou Tao <houtao1@huawei.com>
Date: Tue, 15 Sep 2020 22:07:50 +0800
The __this_cpu*() accessors are (in general) IRQ-unsafe which, given
that percpu-rwsem is a blocking primitive, should be just fine.
However, file_end_write() is used from IRQ context and will cause
load-store issues on architectures where the per-cpu accessors are not
natively IRQ-safe.
Fix it by using the IRQ-safe this_cpu_*() accessors for operations on
read_count. This will generate more expensive code on a number of
platforms, which might cause a performance regression for some of the
other percpu-rwsem users.
If any such is reported, we can consider alternative solutions.
Fixes: 70fe2f48152e ("aio: fix freeze protection of aio writes")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200915140750.137881-1-houtao1@huawei.com
---
include/linux/percpu-rwsem.h | 8 ++++----
kernel/locking/percpu-rwsem.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -60,7 +60,7 @@ static inline void percpu_down_read(stru
* anything we did within this RCU-sched read-size critical section.
*/
if (likely(rcu_sync_is_idle(&sem->rss)))
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
else
__percpu_down_read(sem, false); /* Unconditional memory barrier */
/*
@@ -79,7 +79,7 @@ static inline bool percpu_down_read_tryl
* Same as in percpu_down_read().
*/
if (likely(rcu_sync_is_idle(&sem->rss)))
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
else
ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
preempt_enable();
@@ -103,7 +103,7 @@ static inline void percpu_up_read(struct
* Same as in percpu_down_read().
*/
if (likely(rcu_sync_is_idle(&sem->rss))) {
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
} else {
/*
* slowpath; reader will only ever wake a single blocked
@@ -115,7 +115,7 @@ static inline void percpu_up_read(struct
* aggregate zero, as that is the only time it matters) they
* will also see our critical section.
*/
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
rcuwait_wake_up(&sem->writer);
}
preempt_enable();
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(percpu_free_rwsem);
static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
{
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
/*
* Due to having preemption disabled the decrement happens on
@@ -71,7 +71,7 @@ static bool __percpu_down_read_trylock(s
if (likely(!atomic_read_acquire(&sem->block)))
return true;
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
/* Prod writer to re-evaluate readers_active_check() */
rcuwait_wake_up(&sem->writer);
2020-09-15 14:07 [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for read_count Hou Tao
2020-09-15 15:06 ` peterz
2020-09-15 15:31 ` Oleg Nesterov
2020-09-15 15:51 ` peterz
2020-09-15 16:03 ` peterz [this message]
2020-09-15 16:11 ` Will Deacon
2020-09-15 18:11 ` peterz
2020-09-16 8:20 ` Will Deacon
2020-09-15 16:47 ` Oleg Nesterov
2020-09-16 12:32 ` Hou Tao
2020-09-16 12:51 ` peterz
2020-09-17 8:48 ` Will Deacon
2020-09-24 11:55 ` Hou Tao
2020-09-29 17:49 ` Will Deacon
2020-09-29 18:07 ` Ard Biesheuvel
2020-09-17 10:51 ` Boaz Harrosh
2020-09-17 12:01 ` Oleg Nesterov
2020-09-17 12:48 ` Matthew Wilcox
2020-09-17 13:22 ` peterz
2020-09-17 13:34 ` Oleg Nesterov
2020-09-17 13:46 ` Boaz Harrosh
2020-09-17 14:46 ` Christoph Hellwig
2020-09-18 9:07 ` Jan Kara
2020-09-18 10:01 ` peterz
2020-09-18 10:04 ` peterz
2020-09-18 10:07 ` peterz
2020-09-18 10:12 ` peterz
2020-09-18 10:48 ` Oleg Nesterov
2020-09-18 11:03 ` peterz
2020-09-18 13:09 ` Oleg Nesterov
2020-09-18 13:26 ` Jan Kara
2020-09-20 23:49 ` Dave Chinner
2020-09-18 8:36 ` [tip: locking/urgent] locking/percpu-rwsem: Use this_cpu_{inc,dec}() " tip-bot2 for Hou Tao