From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id A8BB91C3F31
	for ; Mon, 30 Mar 2026 19:03:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 491E8C4CEF7;
	Mon, 30 Mar 2026 19:03:52 +0000 (UTC)
Date: Mon, 30 Mar 2026 12:03:51 -0700
To: mm-commits@vger.kernel.org, hannes@cmpxchg.org, akpm@linux-foundation.org
From: Andrew Morton
Subject: [to-be-updated] mm-list_lru-introduce-caller-locking-for-additions-and-deletions.patch removed from -mm tree
Message-Id: <20260330190352.491E8C4CEF7@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:

The quilt patch titled
     Subject: mm: list_lru: introduce caller locking for additions and deletions
has been removed from the -mm tree.  Its filename was
     mm-list_lru-introduce-caller-locking-for-additions-and-deletions.patch

This patch was dropped because an updated version will be issued

------------------------------------------------------
From: Johannes Weiner
Subject: mm: list_lru: introduce caller locking for additions and deletions
Date: Wed, 18 Mar 2026 15:53:23 -0400

Locking is currently internal to the list_lru API.  However, a caller
might want to keep auxiliary state synchronized with the LRU state.  For
example, the THP shrinker uses the lock of its custom LRU to keep
PG_partially_mapped and vmstats consistent.

To allow the THP shrinker to switch to list_lru, provide normal and
irqsafe locking primitives as well as caller-locked variants of the
addition and deletion functions.
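For illustration, a caller that needs to flip auxiliary state atomically
with LRU membership could combine the new primitives roughly like this
(a sketch only, not part of the patch; my_add(), my_lru and the
folio-flag update are hypothetical stand-ins for the THP shrinker's
actual usage):

	/* Sketch: update private state under the same lock as the LRU. */
	static bool my_add(struct list_lru *my_lru, struct folio *folio,
			   int nid, struct mem_cgroup *memcg)
	{
		struct list_lru_one *l;
		bool added;

		l = list_lru_lock(my_lru, nid, memcg);
		added = __list_lru_add(my_lru, l, &folio->lru, nid, memcg);
		if (added)
			folio_set_partially_mapped(folio); /* hypothetical aux state */
		list_lru_unlock(l);
		return added;
	}

Because the flag is set while l->lock is still held, no observer of the
LRU can see the item without the flag, which is the consistency the
changelog describes.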
Link: https://lkml.kernel.org/r/20260318200352.1039011-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Reviewed-by: David Hildenbrand (Arm)
Acked-by: Shakeel Butt
Reviewed-by: Lorenzo Stoakes (Oracle)
Cc: Axel Rasmussen
Cc: Baolin Wang
Cc: Barry Song
Cc: Dave Chinner
Cc: Dev Jain
Cc: Lance Yang
Cc: Liam Howlett
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Nico Pache
Cc: Roman Gushchin
Cc: Ryan Roberts
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Yuanchu Xie
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

 include/linux/list_lru.h |   34 +++++++++++
 mm/list_lru.c            |  107 ++++++++++++++++++++++++++-----------
 2 files changed, 110 insertions(+), 31 deletions(-)

--- a/include/linux/list_lru.h~mm-list_lru-introduce-caller-locking-for-additions-and-deletions
+++ a/include/linux/list_lru.h
@@ -84,6 +84,40 @@ int memcg_list_lru_alloc(struct mem_cgro
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
 
 /**
+ * list_lru_lock: lock the sublist for the given node and memcg
+ * @lru: the lru pointer
+ * @nid: the node id of the sublist to lock.
+ * @memcg: the cgroup of the sublist to lock.
+ *
+ * Returns the locked list_lru_one sublist. The caller must call
+ * list_lru_unlock() when done.
+ *
+ * You must ensure that the memcg is not freed during this call (e.g., with
+ * rcu or by taking a css refcnt).
+ *
+ * Return: the locked list_lru_one, or NULL on failure
+ */
+struct list_lru_one *list_lru_lock(struct list_lru *lru, int nid,
+				   struct mem_cgroup *memcg);
+
+/**
+ * list_lru_unlock: unlock a sublist locked by list_lru_lock()
+ * @l: the list_lru_one to unlock
+ */
+void list_lru_unlock(struct list_lru_one *l);
+
+struct list_lru_one *list_lru_lock_irqsave(struct list_lru *lru, int nid,
+					   struct mem_cgroup *memcg,
+					   unsigned long *irq_flags);
+void list_lru_unlock_irqrestore(struct list_lru_one *l,
+				unsigned long *irq_flags);
+
+/* Caller-locked variants, see list_lru_add() etc for documentation */
+bool __list_lru_add(struct list_lru *lru, struct list_lru_one *l,
+		    struct list_head *item, int nid, struct mem_cgroup *memcg);
+bool __list_lru_del(struct list_lru *lru, struct list_lru_one *l,
+		    struct list_head *item, int nid);
+
+/**
  * list_lru_add: add an element to the lru list's tail
  * @lru: the lru pointer
  * @item: the item to be added.
--- a/mm/list_lru.c~mm-list_lru-introduce-caller-locking-for-additions-and-deletions
+++ a/mm/list_lru.c
@@ -15,17 +15,23 @@
 #include "slab.h"
 #include "internal.h"
 
-static inline void lock_list_lru(struct list_lru_one *l, bool irq)
+static inline void lock_list_lru(struct list_lru_one *l, bool irq,
+				 unsigned long *irq_flags)
 {
-	if (irq)
+	if (irq_flags)
+		spin_lock_irqsave(&l->lock, *irq_flags);
+	else if (irq)
 		spin_lock_irq(&l->lock);
 	else
 		spin_lock(&l->lock);
 }
 
-static inline void unlock_list_lru(struct list_lru_one *l, bool irq_off)
+static inline void unlock_list_lru(struct list_lru_one *l, bool irq_off,
+				   unsigned long *irq_flags)
 {
-	if (irq_off)
+	if (irq_flags)
+		spin_unlock_irqrestore(&l->lock, *irq_flags);
+	else if (irq_off)
 		spin_unlock_irq(&l->lock);
 	else
 		spin_unlock(&l->lock);
@@ -78,7 +84,7 @@ list_lru_from_memcg_idx(struct list_lru
 
 static inline struct list_lru_one *
 lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
-		       bool irq, bool skip_empty)
+		       bool irq, unsigned long *irq_flags, bool skip_empty)
 {
 	struct list_lru_one *l;
 
@@ -86,12 +92,12 @@ lock_list_lru_of_memcg(struct list_lru *
 again:
 	l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 	if (likely(l)) {
-		lock_list_lru(l, irq);
+		lock_list_lru(l, irq, irq_flags);
 		if (likely(READ_ONCE(l->nr_items) != LONG_MIN)) {
 			rcu_read_unlock();
 			return l;
 		}
-		unlock_list_lru(l, irq);
+		unlock_list_lru(l, irq, irq_flags);
 	}
 	/*
 	 * Caller may simply bail out if raced with reparenting or
@@ -132,37 +138,81 @@ list_lru_from_memcg_idx(struct list_lru
 
 static inline struct list_lru_one *
 lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
-		       bool irq, bool skip_empty)
+		       bool irq, unsigned long *irq_flags, bool skip_empty)
 {
 	struct list_lru_one *l = &lru->node[nid].lru;
 
-	lock_list_lru(l, irq);
+	lock_list_lru(l, irq, irq_flags);
 	return l;
 }
 #endif /* CONFIG_MEMCG */
 
-/* The caller must ensure the memcg lifetime. */
-bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
-		  struct mem_cgroup *memcg)
+struct list_lru_one *list_lru_lock(struct list_lru *lru, int nid,
+				   struct mem_cgroup *memcg)
 {
-	struct list_lru_node *nlru = &lru->node[nid];
-	struct list_lru_one *l;
+	return lock_list_lru_of_memcg(lru, nid, memcg, /*irq=*/false,
+				      /*irq_flags=*/NULL, /*skip_empty=*/false);
+}
+
+void list_lru_unlock(struct list_lru_one *l)
+{
+	unlock_list_lru(l, /*irq_off=*/false, /*irq_flags=*/NULL);
+}
+
+struct list_lru_one *list_lru_lock_irqsave(struct list_lru *lru, int nid,
+					   struct mem_cgroup *memcg,
+					   unsigned long *flags)
+{
+	return lock_list_lru_of_memcg(lru, nid, memcg, /*irq=*/true,
+				      /*irq_flags=*/flags, /*skip_empty=*/false);
+}
+
+void list_lru_unlock_irqrestore(struct list_lru_one *l, unsigned long *flags)
+{
+	unlock_list_lru(l, /*irq_off=*/true, /*irq_flags=*/flags);
+}
 
-	l = lock_list_lru_of_memcg(lru, nid, memcg, false, false);
+bool __list_lru_add(struct list_lru *lru, struct list_lru_one *l,
+		    struct list_head *item, int nid,
+		    struct mem_cgroup *memcg)
+{
 	if (list_empty(item)) {
 		list_add_tail(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
 			set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
-		unlock_list_lru(l, false);
-		atomic_long_inc(&nlru->nr_items);
+		atomic_long_inc(&lru->node[nid].nr_items);
+		return true;
+	}
+	return false;
+}
+
+bool __list_lru_del(struct list_lru *lru, struct list_lru_one *l,
+		    struct list_head *item, int nid)
+{
+	if (!list_empty(item)) {
+		list_del_init(item);
+		l->nr_items--;
+		atomic_long_dec(&lru->node[nid].nr_items);
 		return true;
 	}
-	unlock_list_lru(l, false);
 	return false;
 }
 
+/* The caller must ensure the memcg lifetime. */
+bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+		  struct mem_cgroup *memcg)
+{
+	struct list_lru_one *l;
+	bool ret;
+
+	l = list_lru_lock(lru, nid, memcg);
+	ret = __list_lru_add(lru, l, item, nid, memcg);
+	list_lru_unlock(l);
+	return ret;
+}
+
 bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
 {
 	bool ret;
@@ -184,19 +234,13 @@ EXPORT_SYMBOL_GPL(list_lru_add_obj);
 bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
 		  struct mem_cgroup *memcg)
 {
-	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_lru_one *l;
+	bool ret;
 
-	l = lock_list_lru_of_memcg(lru, nid, memcg, false, false);
-	if (!list_empty(item)) {
-		list_del_init(item);
-		l->nr_items--;
-		unlock_list_lru(l, false);
-		atomic_long_dec(&nlru->nr_items);
-		return true;
-	}
-	unlock_list_lru(l, false);
-	return false;
+	l = list_lru_lock(lru, nid, memcg);
+	ret = __list_lru_del(lru, l, item, nid);
+	list_lru_unlock(l);
+	return ret;
 }
 
 bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
@@ -269,7 +313,8 @@ __list_lru_walk_one(struct list_lru *lru
 	unsigned long isolated = 0;
 
 restart:
-	l = lock_list_lru_of_memcg(lru, nid, memcg, irq_off, true);
+	l = lock_list_lru_of_memcg(lru, nid, memcg, /*irq=*/irq_off,
+				   /*irq_flags=*/NULL, /*skip_empty=*/true);
 	if (!l)
		return isolated;
 
 	list_for_each_safe(item, n, &l->list) {
@@ -310,7 +355,7 @@ restart:
 			BUG();
 		}
 	}
-	unlock_list_lru(l, irq_off);
+	unlock_list_lru(l, irq_off, NULL);
 out:
 	return isolated;
 }
_

Patches currently in -mm which might be from hannes@cmpxchg.org are

mm-list_lru-introduce-folio_memcg_list_lru_alloc.patch
mm-switch-deferred-split-shrinker-to-list_lru.patch