Date: Mon, 6 Apr 2026 17:37:43 -0400
From: Johannes Weiner
To: "Lorenzo Stoakes (Oracle)"
Cc: Andrew Morton, David Hildenbrand, Shakeel Butt, Yosry Ahmed, Zi Yan,
 "Liam R. Howlett", Usama Arif, Kiryl Shutsemau, Dave Chinner,
 Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 7/7] mm: switch deferred split shrinker to list_lru
Message-ID:
References: <20260318200352.1039011-1-hannes@cmpxchg.org>
 <20260318200352.1039011-8-hannes@cmpxchg.org>
 <0cf8a859-b142-4e53-9113-94872dd68f40@lucifer.local>
In-Reply-To:

On Wed, Apr 01, 2026 at 06:33:04PM +0100, Lorenzo Stoakes (Oracle) wrote:
> On Mon, Mar 30, 2026 at 12:40:22PM -0400, Johannes Weiner wrote:
> > > > @@ -414,10 +414,9 @@ static inline int split_huge_page(struct page *page)
> > > >  {
> > > >  	return split_huge_page_to_list_to_order(page, NULL, 0);
> > > >  }
> > > > +
> > > > +extern struct list_lru deferred_split_lru;
> > >
> > > It might be nice for the sake of avoiding a global to instead expose this
> > > as a getter?
> > >
> > > Or actually better, since every caller outside of huge_memory.c that
> > > references this uses folio_memcg_list_lru_alloc(), do something like:
> > >
> > > int folio_memcg_alloc_deferred(struct folio *folio, gfp_t gfp);
> > >
> > > in mm/huge_memory.c:
> > >
> > > /**
> > >  * blah blah blah put on error blah
> > >  */
> > > int folio_memcg_alloc_deferred(struct folio *folio, gfp_t gfp)
> > > {
> > > 	int err;
> > >
> > > 	err = folio_memcg_list_lru_alloc(folio, &deferred_split_lru, gfP);
> > > 	if (err) {
> > > 		folio_put(folio);
> > > 		return err;
> > > 	}
> > >
> > > 	return 0;
> > > }
> > >
> > > And then the callers can just invoke this, and you can make
> > > deferred_split_lru static in mm/huge_memory.c?
> >
> > That sounds reasonable. Let me make this change.
>
> Thanks!

Done. This looks much nicer. Though I kept the folio_put() in the caller
because that's who owns the reference. It would be quite unexpected for
this one to consume a ref on error.
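
I.e. the helper stays a thin wrapper - roughly the following sketch,
assuming it simply hardcodes the GFP_KERNEL that the old callsites passed:

	int folio_memcg_alloc_deferred(struct folio *folio)
	{
		return folio_memcg_list_lru_alloc(folio, &deferred_split_lru,
						  GFP_KERNEL);
	}

and the callers keep dropping their own reference on failure, as before:

	if (order > 1 && folio_memcg_alloc_deferred(folio)) {
		folio_put(folio);	/* caller owns the ref */
		goto fallback;
	}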

> > > > @@ -939,6 +949,7 @@ static int __init thp_shrinker_init(void)
> > > >
> > > >  	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> > > >  	if (!huge_zero_folio_shrinker) {
> > > > +		list_lru_destroy(&deferred_split_lru);
> > > >  		shrinker_free(deferred_split_shrinker);
> > >
> > > Presumably no probably-impossible-in-reality race on somebody entering the
> > > shrinker and referencing the deferred_split_lru before the shrinker is freed?
> >
> > Ah right, I think for clarity it would indeed be better to destroy the
> > shrinker, then the queue. Let me re-order this one.
> >
> > But yes, in practice, none of the above fails. If we have trouble
> > doing a couple of small kmallocs during a subsys_initcall(), that
> > machine is unlikely to finish booting, let alone allocate enough
> > memory to enter the THP shrinker.
>
> Yeah I thought that might be the case, but seems more logical killing shrinker
> first, thanks!

Done.
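
I.e. the init error path now unwinds in the reverse order of setup - a
rough sketch of the reordered hunk, shrinker torn down before the lru:

	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
	if (!huge_zero_folio_shrinker) {
		shrinker_free(deferred_split_shrinker);
		list_lru_destroy(&deferred_split_lru);
	}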

> > > > @@ -3854,34 +3761,34 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
> > > >  	struct folio *end_folio = folio_next(folio);
> > > >  	struct folio *new_folio, *next;
> > > >  	int old_order = folio_order(folio);
> > > > +	struct list_lru_one *l;
> > >
> > > Nit, and maybe this is a convention, but hate single letter variable names,
> > > 'lru' or something might be nicer?
> >
> > Yeah I stuck with the list_lru internal naming, which uses `lru` for
> > the struct list_lru, and `l` for struct list_lru_one. I suppose that
> > was fine for the very domain-specific code and short functions in
> > there, but it's grating in large, general MM functions like these.
> >
> > Since `lru` is taken, any preferences? llo?
>
> ljs? ;)
>
> Could be list?

list is taken in some of these contexts already.

I may have overthought this. lru works fine in those callsites, and is in
line with what other sites are using (git grep list_lru_one).

> But, and I _know_ it's nitty sorry, but maybe worth expanding that comment to
> explain that e.g. 'we must take the folio lock prior to the list_lru lock to
> avoid racing with deferred_split_scan() in accessing the folio reference count'
> or similar?

Good idea! Done.

> > > > +	int nid = folio_nid(folio);
> > > >  	unsigned long flags;
> > > >  	bool unqueued = false;
> > > >
> > > >  	WARN_ON_ONCE(folio_ref_count(folio));
> > > >  	WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg_charged(folio));
> > > >
> > > > -	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
> > > > -	if (!list_empty(&folio->_deferred_list)) {
> > > > -		ds_queue->split_queue_len--;
> > > > +	rcu_read_lock();
> > > > +	l = list_lru_lock_irqsave(&deferred_split_lru, nid, folio_memcg(folio), &flags);
> > > > +	if (__list_lru_del(&deferred_split_lru, l, &folio->_deferred_list, nid)) {
> > >
> > > Maybe worth factoring __list_lru_del() into something that explicitly
> > > references &folio->_deferred_list rather than open coding in both places?
> >
> > Hm, I wouldn't want to encode this into list_lru API, but we could do
> > a huge_memory.c-local helper?
> >
> > folio_deferred_split_del(folio, l, nid)
>
> Well, I kind of hate how we're using the global deferred_split_lru all over the
> place, so a helper would be preferable but one that also could be used for
> khugepaged.c and memory.c also?

This function is used only in huge_memory.c. I managed to make the
deferred_split_lru static as well without making any changes to this ^
particular function/callsite.

Let me know, after looking at the delta diff below, if you'd still like
to see changes here.

> > > > @@ -4534,64 +4438,32 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> > > >  		}
> > > >  		folio_unlock(folio);
> > > > next:
> > > > -		if (did_split || !folio_test_partially_mapped(folio))
> > > > -			continue;
> > > >  		/*
> > > >  		 * Only add back to the queue if folio is partially mapped.
> > > >  		 * If thp_underused returns false, or if split_folio fails
> > > >  		 * in the case it was underused, then consider it used and
> > > >  		 * don't add it back to split_queue.
> > > >  		 */
> > > > -		fqueue = folio_split_queue_lock_irqsave(folio, &flags);
> > > > -		if (list_empty(&folio->_deferred_list)) {
> > > > -			list_add_tail(&folio->_deferred_list, &fqueue->split_queue);
> > > > -			fqueue->split_queue_len++;
> > > > +		if (!did_split && folio_test_partially_mapped(folio)) {
> > > > +			rcu_read_lock();
> > > > +			l = list_lru_lock_irqsave(&deferred_split_lru,
> > > > +						  folio_nid(folio),
> > > > +						  folio_memcg(folio),
> > > > +						  &flags);
> > > > +			__list_lru_add(&deferred_split_lru, l,
> > > > +				       &folio->_deferred_list,
> > > > +				       folio_nid(folio), folio_memcg(folio));
> > > > +			list_lru_unlock_irqrestore(l, &flags);
> > >
> > > Hmm this does make me think it'd be nice to have a list_lru_add() variant
> > > for irqsave/restore then, since it's a repeating pattern!
> >
> > Yeah, this site calls for it the most :( I tried to balance callsite
> > prettiness with the need to extend the list_lru api; it's just one
> > caller. And the possible mutations and variants with these locks is
> > seemingly endless once you open that can of worms...
>
> True...
>
> >
> > Case in point: this is process context and we could use
> > spin_lock_irq() here. I'm just using list_lru_lock_irqsave() because
> > that's the common variant used by the add and del paths already.
> >
> > If I went with a helper, I could do list_lru_add_irq().
> >
> > I think it would actually nicely mirror the list_lru_shrink_walk_irq()
> > a few lines up.
>
> Yeah, I mean I'm pretty sure this repeats quite a few times so is worthy of a
> helper.

It's only one callsite, actually. But I added the helper. It's churny on
the list_lru side, but that callsite does look much better.

Anyway, I hope I got everything. Can you take a look? Will obviously fold
this into the respective patches, but just double checking whether these
things are what you had in mind.

---

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 8d801ed378db..b473605b4d7d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -415,7 +415,8 @@ static inline int split_huge_page(struct page *page)
 	return split_huge_page_to_list_to_order(page, NULL, 0);
 }
 
-extern struct list_lru deferred_split_lru;
+int folio_memcg_alloc_deferred(struct folio *folio);
+
 void deferred_split_folio(struct folio *folio, bool partially_mapped);
 
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 4bd29b61c59a..733a262b91e5 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -83,6 +83,21 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp);
 
 #ifdef CONFIG_MEMCG
+/**
+ * folio_memcg_list_lru_alloc - allocate list_lru heads for shrinkable folio
+ * @folio: the newly allocated & charged folio
+ * @lru: the list_lru this might be queued on
+ * @gfp: gfp mask
+ *
+ * Allocate list_lru heads (per-memcg, per-node) needed to queue this
+ * particular folio down the line.
+ *
+ * This does memcg_list_lru_alloc(), but on the memcg that @folio is
+ * associated with. Handles folio_memcg() access rules in the fast
+ * path (list_lru heads allocated) and the allocation slowpath.
+ *
+ * Returns 0 on success, a negative error value otherwise.
+ */
 int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
 			       gfp_t gfp);
 #else
@@ -118,6 +133,10 @@ struct list_lru_one *list_lru_lock(struct list_lru *lru, int nid,
  */
 void list_lru_unlock(struct list_lru_one *l);
 
+struct list_lru_one *list_lru_lock_irq(struct list_lru *lru, int nid,
+				       struct mem_cgroup *memcg);
+void list_lru_unlock_irq(struct list_lru_one *l);
+
 struct list_lru_one *list_lru_lock_irqsave(struct list_lru *lru, int nid,
 		struct mem_cgroup *memcg, unsigned long *irq_flags);
 void list_lru_unlock_irqrestore(struct list_lru_one *l,
@@ -161,6 +180,9 @@ bool __list_lru_del(struct list_lru *lru, struct list_lru_one *l,
 bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
 		  struct mem_cgroup *memcg);
 
+bool list_lru_add_irq(struct list_lru *lru, struct list_head *item, int nid,
+		      struct mem_cgroup *memcg);
+
 /**
  * list_lru_add_obj: add an element to the lru list's tail
  * @lru: the lru pointer
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c8c6c4602cc7..a0cce6a56620 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -69,7 +69,7 @@ unsigned long transparent_hugepage_flags __read_mostly = (1<_refcount */
+	/*
+	 * If this folio can be on the deferred split queue, lock out
+	 * the shrinker before freezing the ref. If the shrinker sees
+	 * a 0-ref folio, it assumes it beat folio_put() to the list
+	 * lock and must clean up the LRU state - the same dequeue we
+	 * will do below as part of the split.
+	 */
 	dequeue_deferred = folio_test_anon(folio) && old_order > 1;
 	if (dequeue_deferred) {
 		rcu_read_lock();
-		l = list_lru_lock(&deferred_split_lru,
-				  folio_nid(folio), folio_memcg(folio));
+		lru = list_lru_lock(&deferred_split_lru,
+				    folio_nid(folio), folio_memcg(folio));
 	}
 
 	if (folio_ref_freeze(folio, folio_cache_ref_count(folio) + 1)) {
 		struct swap_cluster_info *ci = NULL;
 		struct lruvec *lruvec;
 
 		if (dequeue_deferred) {
-			__list_lru_del(&deferred_split_lru, l,
+			__list_lru_del(&deferred_split_lru, lru,
 				       &folio->_deferred_list, folio_nid(folio));
 			if (folio_test_partially_mapped(folio)) {
 				folio_clear_partially_mapped(folio);
 				mod_mthp_stat(old_order,
 					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 			}
-			list_lru_unlock(l);
+			list_lru_unlock(lru);
 			rcu_read_unlock();
 		}
 
@@ -3890,7 +3901,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
 			swap_cluster_unlock(ci);
 	} else {
 		if (dequeue_deferred) {
-			list_lru_unlock(l);
+			list_lru_unlock(lru);
 			rcu_read_unlock();
 		}
 		return -EAGAIN;
@@ -4268,7 +4279,7 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
  */
 bool __folio_unqueue_deferred_split(struct folio *folio)
 {
-	struct list_lru_one *l;
+	struct list_lru_one *lru;
 	int nid = folio_nid(folio);
 	unsigned long flags;
 	bool unqueued = false;
@@ -4277,8 +4288,8 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 	WARN_ON_ONCE(!mem_cgroup_disabled() && !folio_memcg_charged(folio));
 
 	rcu_read_lock();
-	l = list_lru_lock_irqsave(&deferred_split_lru, nid, folio_memcg(folio), &flags);
-	if (__list_lru_del(&deferred_split_lru, l, &folio->_deferred_list, nid)) {
+	lru = list_lru_lock_irqsave(&deferred_split_lru, nid, folio_memcg(folio), &flags);
+	if (__list_lru_del(&deferred_split_lru, lru, &folio->_deferred_list, nid)) {
 		if (folio_test_partially_mapped(folio)) {
 			folio_clear_partially_mapped(folio);
 			mod_mthp_stat(folio_order(folio),
@@ -4286,7 +4297,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 		}
 		unqueued = true;
 	}
-	list_lru_unlock_irqrestore(l, &flags);
+	list_lru_unlock_irqrestore(lru, &flags);
 	rcu_read_unlock();
 
 	return unqueued;	/* useful for debug warnings */
@@ -4295,7 +4306,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
 /* partially_mapped=false won't clear PG_partially_mapped folio flag */
 void deferred_split_folio(struct folio *folio, bool partially_mapped)
 {
-	struct list_lru_one *l;
+	struct list_lru_one *lru;
 	int nid;
 	struct mem_cgroup *memcg;
 	unsigned long flags;
@@ -4324,7 +4335,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
 	rcu_read_lock();
 	memcg = folio_memcg(folio);
-	l = list_lru_lock_irqsave(&deferred_split_lru, nid, memcg, &flags);
+	lru = list_lru_lock_irqsave(&deferred_split_lru, nid, memcg, &flags);
 	if (partially_mapped) {
 		if (!folio_test_partially_mapped(folio)) {
 			folio_set_partially_mapped(folio);
@@ -4337,8 +4348,8 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
 		/* partially mapped folios cannot become non-partially mapped */
 		VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
 	}
-	__list_lru_add(&deferred_split_lru, l, &folio->_deferred_list, nid, memcg);
-	list_lru_unlock_irqrestore(l, &flags);
+	__list_lru_add(&deferred_split_lru, lru, &folio->_deferred_list, nid, memcg);
+	list_lru_unlock_irqrestore(lru, &flags);
 	rcu_read_unlock();
 }
 
@@ -4411,8 +4422,6 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 	list_for_each_entry_safe(folio, next, &dispose, _deferred_list) {
 		bool did_split = false;
 		bool underused = false;
-		struct list_lru_one *l;
-		unsigned long flags;
 
 		list_del_init(&folio->_deferred_list);
 
@@ -4446,14 +4455,10 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 		 */
 		if (!did_split && folio_test_partially_mapped(folio)) {
 			rcu_read_lock();
-			l = list_lru_lock_irqsave(&deferred_split_lru,
-						  folio_nid(folio),
-						  folio_memcg(folio),
-						  &flags);
-			__list_lru_add(&deferred_split_lru, l,
-				       &folio->_deferred_list,
-				       folio_nid(folio), folio_memcg(folio));
-			list_lru_unlock_irqrestore(l, &flags);
+			list_lru_add_irq(&deferred_split_lru,
+					 &folio->_deferred_list,
+					 folio_nid(folio),
+					 folio_memcg(folio));
 			rcu_read_unlock();
 		}
 		folio_put(folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a81470f529e3..44a9b1350dbd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1121,7 +1121,7 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
-	if (folio_memcg_list_lru_alloc(folio, &deferred_split_lru, GFP_KERNEL))
+	if (folio_memcg_alloc_deferred(folio))
 		goto out_nolock;
 
 	mmap_read_lock(mm);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 1ccdd45b1d14..23bf7c243083 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -160,6 +160,18 @@ void list_lru_unlock(struct list_lru_one *l)
 	unlock_list_lru(l, /*irq_off=*/false, /*irq_flags=*/NULL);
 }
 
+struct list_lru_one *list_lru_lock_irq(struct list_lru *lru, int nid,
+				       struct mem_cgroup *memcg)
+{
+	return lock_list_lru_of_memcg(lru, nid, memcg, /*irq=*/true,
+				      /*irq_flags=*/NULL, /*skip_empty=*/false);
+}
+
+void list_lru_unlock_irq(struct list_lru_one *l)
+{
+	unlock_list_lru(l, /*irq_off=*/true, /*irq_flags=*/NULL);
+}
+
 struct list_lru_one *list_lru_lock_irqsave(struct list_lru *lru, int nid,
 		struct mem_cgroup *memcg, unsigned long *flags)
@@ -213,6 +225,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
 	return ret;
 }
 
+bool list_lru_add_irq(struct list_lru *lru, struct list_head *item,
+		      int nid, struct mem_cgroup *memcg)
+{
+	struct list_lru_one *l;
+	bool ret;
+
+	l = list_lru_lock_irq(lru, nid, memcg);
+	ret = __list_lru_add(lru, l, item, nid, memcg);
+	list_lru_unlock_irq(l);
+	return ret;
+}
+
 bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
 {
 	bool ret;
diff --git a/mm/memory.c b/mm/memory.c
index 24dd531125b4..23da4720576d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4658,8 +4658,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 				folio_put(folio);
 				goto next;
 			}
-			if (order > 1 &&
-			    folio_memcg_list_lru_alloc(folio, &deferred_split_lru, GFP_KERNEL)) {
+			if (order > 1 && folio_memcg_alloc_deferred(folio)) {
 				folio_put(folio);
 				goto fallback;
 			}
@@ -5183,8 +5182,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 			folio_put(folio);
 			goto next;
 		}
-		if (order > 1 &&
-		    folio_memcg_list_lru_alloc(folio, &deferred_split_lru, GFP_KERNEL)) {
+		if (order > 1 && folio_memcg_alloc_deferred(folio)) {
 			folio_put(folio);
 			goto fallback;
 		}