From: pengdonglin <dolinux.peng@gmail.com>
To: tj@kernel.org, tony.luck@intel.com, jani.nikula@linux.intel.com,
ap420073@gmail.com, jv@jvosburgh.net, freude@linux.ibm.com,
bcrl@kvack.org, trondmy@kernel.org, longman@redhat.com,
kees@kernel.org
Cc: bigeasy@linutronix.de, hdanton@sina.com, paulmck@kernel.org,
linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev,
linux-nfs@vger.kernel.org, linux-aio@kvack.org,
linux-fsdevel@vger.kernel.org,
linux-security-module@vger.kernel.org, netdev@vger.kernel.org,
intel-gfx@lists.freedesktop.org, linux-wireless@vger.kernel.org,
linux-acpi@vger.kernel.org, linux-s390@vger.kernel.org,
cgroups@vger.kernel.org, pengdonglin <dolinux.peng@gmail.com>,
pengdonglin <pengdonglin@xiaomi.com>
Subject: [PATCH v3 06/14] ipc: Remove redundant rcu_read_lock/unlock() in spin_lock
Date: Tue, 16 Sep 2025 12:47:27 +0800 [thread overview]
Message-ID: <20250916044735.2316171-7-dolinux.peng@gmail.com> (raw)
In-Reply-To: <20250916044735.2316171-1-dolinux.peng@gmail.com>
From: pengdonglin <pengdonglin@xiaomi.com>
Since commit a8bb74acd8efe ("rcu: Consolidate RCU-sched update-side function definitions")
there is no difference between rcu_read_lock(), rcu_read_lock_bh() and
rcu_read_lock_sched() in terms of the RCU read-side critical section and the
associated grace period. That means that spin_lock(), which implies
rcu_read_lock_sched(), also implies rcu_read_lock().
There is no need to explicitly start an RCU read section if one has already
been started implicitly by spin_lock().
Simplify the code by removing the redundant rcu_read_lock()/rcu_read_unlock()
invocations.
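The pattern this series removes can be sketched as follows. This is an
illustrative kernel-style C fragment, not part of the patch; the object name
`obj` and its `lock` field are hypothetical:

```c
/* Before: an explicit RCU read section wrapped around a spin lock.
 * With the consolidated RCU flavors, spin_lock() itself already marks
 * an RCU read-side critical section, so the outer pair is redundant.
 */
rcu_read_lock();
spin_lock(&obj->lock);
/* ... access RCU-protected state under the lock ... */
spin_unlock(&obj->lock);
rcu_read_unlock();

/* After: the lock alone is sufficient to hold off the grace period. */
spin_lock(&obj->lock);
/* ... access RCU-protected state under the lock ... */
spin_unlock(&obj->lock);
```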
Signed-off-by: pengdonglin <pengdonglin@xiaomi.com>
Signed-off-by: pengdonglin <dolinux.peng@gmail.com>
---
ipc/msg.c | 1 -
ipc/sem.c | 1 -
ipc/shm.c | 1 -
ipc/util.c | 2 --
4 files changed, 5 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index ee6af4fe52bf..1e579b57023f 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -179,7 +179,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
}
ipc_unlock_object(&msq->q_perm);
- rcu_read_unlock();
return msq->q_perm.id;
}
diff --git a/ipc/sem.c b/ipc/sem.c
index a39cdc7bf88f..38ad57b2b558 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -579,7 +579,6 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params)
ns->used_sems += nsems;
sem_unlock(sma, -1);
- rcu_read_unlock();
return sma->sem_perm.id;
}
diff --git a/ipc/shm.c b/ipc/shm.c
index a9310b6dbbc3..61fae1b6a18e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -795,7 +795,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
error = shp->shm_perm.id;
ipc_unlock_object(&shp->shm_perm);
- rcu_read_unlock();
return error;
no_id:
diff --git a/ipc/util.c b/ipc/util.c
index cae60f11d9c2..1be691b5dcad 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -293,7 +293,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int limit)
idr_preload(GFP_KERNEL);
spin_lock_init(&new->lock);
- rcu_read_lock();
spin_lock(&new->lock);
current_euid_egid(&euid, &egid);
@@ -316,7 +315,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int limit)
if (idx < 0) {
new->deleted = true;
spin_unlock(&new->lock);
- rcu_read_unlock();
return idx;
}
--
2.34.1
Thread overview: 21+ messages
2025-09-16 4:47 [PATCH v3 00/14] Remove redundant rcu_read_lock/unlock() in spin_lock pengdonglin
2025-09-16 4:47 ` [PATCH v3 01/14] ACPI: APEI: " pengdonglin
2025-09-27 3:22 ` Hanjun Guo
2025-09-28 10:33 ` Rafael J. Wysocki
2025-09-16 4:47 ` [PATCH v3 02/14] drm/i915/gt: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 03/14] fs: aio: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 04/14] nfs: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 05/14] s390/pkey: " pengdonglin
2025-09-16 10:51 ` Harald Freudenberger
2025-09-16 4:47 ` pengdonglin [this message]
2025-09-16 4:47 ` [PATCH v3 07/14] yama: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 08/14] cgroup: " pengdonglin
2025-09-16 18:37 ` Tejun Heo
2025-09-16 4:47 ` [PATCH v3 09/14] cgroup/cpuset: " pengdonglin
2025-09-16 18:37 ` Tejun Heo
2025-09-16 4:47 ` [PATCH v3 10/14] wifi: mac80211: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 11/14] net: ncsi: " pengdonglin
2025-09-16 9:41 ` Paul Fertser
2025-09-16 4:47 ` [PATCH v3 12/14] net: amt: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 13/14] net: bonding: " pengdonglin
2025-09-16 4:47 ` [PATCH v3 14/14] wifi: ath9k: " pengdonglin