* [PATCH] mm, list_lru: refactor the locking code
From: Kairui Song @ 2025-05-26 18:06 UTC
To: linux-mm
Cc: Andrew Morton, Johannes Weiner, Roman Gushchin, Chengming Zhou,
Qi Zheng, Muchun Song, linux-kernel, Kairui Song,
kernel test robot, Julia Lawall
From: Kairui Song <kasong@tencent.com>
Coccinelle is confused by the try-lock, then release-RCU-and-return logic
here, so separate the try-lock part out into a standalone helper. The code
is easier to follow, too.

No functional change. Fixes:
cocci warnings: (new ones prefixed by >>)
>> mm/list_lru.c:82:3-9: preceding lock on line 77
>> mm/list_lru.c:82:3-9: preceding lock on line 77
mm/list_lru.c:82:3-9: preceding lock on line 75
mm/list_lru.c:82:3-9: preceding lock on line 75
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@inria.fr>
Closes: https://lore.kernel.org/r/202505252043.pbT1tBHJ-lkp@intel.com/
Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/list_lru.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 490473af3122..ec48b5dadf51 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -60,30 +60,34 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
return &lru->node[nid].lru;
}
+static inline bool lock_list_lru(struct list_lru_one *l, bool irq)
+{
+ if (irq)
+ spin_lock_irq(&l->lock);
+ else
+ spin_lock(&l->lock);
+ if (unlikely(READ_ONCE(l->nr_items) == LONG_MIN)) {
+ if (irq)
+ spin_unlock_irq(&l->lock);
+ else
+ spin_unlock(&l->lock);
+ return false;
+ }
+ return true;
+}
+
static inline struct list_lru_one *
lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
bool irq, bool skip_empty)
{
struct list_lru_one *l;
- long nr_items;
rcu_read_lock();
again:
l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
- if (likely(l)) {
- if (irq)
- spin_lock_irq(&l->lock);
- else
- spin_lock(&l->lock);
- nr_items = READ_ONCE(l->nr_items);
- if (likely(nr_items != LONG_MIN)) {
- rcu_read_unlock();
- return l;
- }
- if (irq)
- spin_unlock_irq(&l->lock);
- else
- spin_unlock(&l->lock);
+ if (likely(l) && lock_list_lru(l, irq)) {
+ rcu_read_unlock();
+ return l;
}
/*
* Caller may simply bail out if raced with reparenting or
--
2.49.0
* Re: [PATCH] mm, list_lru: refactor the locking code
From: Qi Zheng @ 2025-05-27 2:16 UTC
To: Kairui Song
Cc: Andrew Morton, Johannes Weiner, Roman Gushchin, Chengming Zhou,
Muchun Song, linux-kernel, kernel test robot, Julia Lawall,
linux-mm
On 5/27/25 2:06 AM, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> Cocci is confused by the try lock then release RCU and return logic
> here. So separate the try lock part out into a standalone helper. The
> code is easier to follow too.
>
> No feature change, fixes:
>
> cocci warnings: (new ones prefixed by >>)
>>> mm/list_lru.c:82:3-9: preceding lock on line 77
>>> mm/list_lru.c:82:3-9: preceding lock on line 77
> mm/list_lru.c:82:3-9: preceding lock on line 75
> mm/list_lru.c:82:3-9: preceding lock on line 75
>
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Julia Lawall <julia.lawall@inria.fr>
> Closes: https://lore.kernel.org/r/202505252043.pbT1tBHJ-lkp@intel.com/
> Signed-off-by: Kairui Song <kasong@tencent.com>
> [...]
And the code readability has also been improved.
Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>
Thanks!
* Re: [PATCH] mm, list_lru: refactor the locking code
From: Muchun Song @ 2025-05-27 2:36 UTC
To: Kairui Song
Cc: linux-mm, Andrew Morton, Johannes Weiner, Roman Gushchin,
Chengming Zhou, Qi Zheng, linux-kernel, kernel test robot,
Julia Lawall
> On May 27, 2025, at 02:06, Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> [...]
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Julia Lawall <julia.lawall@inria.fr>
> Closes: https://lore.kernel.org/r/202505252043.pbT1tBHJ-lkp@intel.com/
> Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH] mm, list_lru: refactor the locking code
From: SeongJae Park @ 2025-05-27 19:25 UTC
To: Kairui Song
Cc: SeongJae Park, linux-mm, Andrew Morton, Johannes Weiner,
Roman Gushchin, Chengming Zhou, Qi Zheng, Muchun Song,
linux-kernel, Kairui Song, kernel test robot, Julia Lawall
Hi Kairui,
On Tue, 27 May 2025 02:06:38 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> From: Kairui Song <kasong@tencent.com>
>
> Cocci is confused by the try lock then release RCU and return logic
> here. So separate the try lock part out into a standalone helper. The
> code is easier to follow too.
>
> No feature change, fixes:
>
> cocci warnings: (new ones prefixed by >>)
> >> mm/list_lru.c:82:3-9: preceding lock on line 77
> >> mm/list_lru.c:82:3-9: preceding lock on line 77
> mm/list_lru.c:82:3-9: preceding lock on line 75
> mm/list_lru.c:82:3-9: preceding lock on line 75
>
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Julia Lawall <julia.lawall@inria.fr>
> Closes: https://lore.kernel.org/r/202505252043.pbT1tBHJ-lkp@intel.com/
> Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
> [...]
>
> +static inline bool lock_list_lru(struct list_lru_one *l, bool irq)
> +{
> + if (irq)
> + spin_lock_irq(&l->lock);
> + else
> + spin_lock(&l->lock);
> + if (unlikely(READ_ONCE(l->nr_items) == LONG_MIN)) {
> + if (irq)
> + spin_unlock_irq(&l->lock);
> + else
> + spin_unlock(&l->lock);
> + return false;
> + }
I'd prefer 'if (likely(...)) return true;' to reduce indentation and make
the likely case clear up front. But that's my personal preference and
shouldn't block this.
> + return true;
> +}
> +
> static inline struct list_lru_one *
> lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
> bool irq, bool skip_empty)
> {
> struct list_lru_one *l;
> - long nr_items;
>
> rcu_read_lock();
> again:
> l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> - if (likely(l)) {
> - if (irq)
> - spin_lock_irq(&l->lock);
> - else
> - spin_lock(&l->lock);
> - nr_items = READ_ONCE(l->nr_items);
> - if (likely(nr_items != LONG_MIN)) {
> - rcu_read_unlock();
> - return l;
> - }
> - if (irq)
> - spin_unlock_irq(&l->lock);
> - else
> - spin_unlock(&l->lock);
> + if (likely(l) && lock_list_lru(l, irq)) {
> + rcu_read_unlock();
> + return l;
> }
Much easier to read, indeed :)
> /*
> * Caller may simply bail out if raced with reparenting or
> --
> 2.49.0
Thanks,
SJ
[...]