From mboxrd@z Thu Jan  1 00:00:00 1970
From: Matthew Brost
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Andrew Morton, Dave Chinner, Qi Zheng, Roman Gushchin, Muchun Song,
 David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Johannes Weiner,
 Shakeel Butt, Kairui Song, Barry Song, Axel Rasmussen, Yuanchu Xie,
 Wei Xu, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Thomas Hellström
Subject: [PATCH v3 1/6] mm: Wire up order in shrink_control
Date: Thu, 30 Apr 2026 11:23:30 -0700
Message-Id: <20260430182335.2132382-2-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260430182335.2132382-1-matthew.brost@intel.com>
References: <20260430182335.2132382-1-matthew.brost@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Pass the allocation order through shrink_control so shrinkers have
visibility into the order that triggered reclaim. This allows shrinkers
to implement better heuristics, such as detecting high-order allocation
pressure or fragmentation, and to avoid evicting working sets when
reclaim is invoked from kswapd.

Cc: Andrew Morton
Cc: Dave Chinner
Cc: Qi Zheng
Cc: Roman Gushchin
Cc: Muchun Song
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: "Liam R. Howlett"
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shakeel Butt
Cc: Kairui Song
Cc: Barry Song
Cc: Axel Rasmussen
Cc: Yuanchu Xie
Cc: Wei Xu
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Thomas Hellström
Signed-off-by: Matthew Brost
---
 include/linux/shrinker.h |  3 +++
 mm/internal.h            |  4 ++--
 mm/shrinker.c            | 11 +++++++----
 mm/vmscan.c              |  7 ++++---
 4 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 1a00be90d93a..7072f693b9be 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -37,6 +37,9 @@ struct shrink_control {
 	/* current node being shrunk (for NUMA aware shrinkers) */
 	int nid;
 
+	/* Allocation order we are currently trying to fulfil. */
+	s8 order;
+
 	/*
 	 * How many objects scan_objects should scan and try to reclaim.
 	 * This is reset before every call, so it is safe for callees
diff --git a/mm/internal.h b/mm/internal.h
index 5a2ddcf68e0b..ff8671dccf7b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1759,8 +1759,8 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 void __meminit __init_page_from_nid(unsigned long pfn, int nid);
 
 /* shrinker related functions */
-unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
-			  int priority);
+unsigned long shrink_slab(gfp_t gfp_mask, int nid, s8 order,
+			  struct mem_cgroup *memcg, int priority);
 
 int shmem_add_to_page_cache(struct folio *folio,
 			    struct address_space *mapping,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 76b3f750cf65..fb23a338fb22 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -466,7 +466,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 }
 
 #ifdef CONFIG_MEMCG
-static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
+static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, s8 order,
 				       struct mem_cgroup *memcg, int priority)
 {
 	struct shrinker_info *info;
@@ -528,6 +528,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		struct shrink_control sc = {
 			.gfp_mask = gfp_mask,
 			.nid = nid,
+			.order = order,
 			.memcg = memcg,
 		};
 		struct shrinker *shrinker;
@@ -598,6 +599,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  * shrink_slab - shrink slab caches
  * @gfp_mask: allocation context
  * @nid: node whose slab caches to target
+ * @order: order of allocation
  * @memcg: memory cgroup whose slab caches to target
  * @priority: the reclaim priority
  *
@@ -614,8 +616,8 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  *
 * Returns the number of reclaimed slab objects.
 */
-unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
-			  int priority)
+unsigned long shrink_slab(gfp_t gfp_mask, int nid, s8 order,
+			  struct mem_cgroup *memcg, int priority)
 {
 	unsigned long ret, freed = 0;
 	struct shrinker *shrinker;
@@ -628,7 +630,7 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
 	 * oom.
 	 */
 	if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
-		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
+		return shrink_slab_memcg(gfp_mask, nid, order, memcg, priority);
 
 	/*
 	 * lockless algorithm of global shrink.
@@ -656,6 +658,7 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
 		struct shrink_control sc = {
 			.gfp_mask = gfp_mask,
 			.nid = nid,
+			.order = order,
 			.memcg = memcg,
 		};
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd1b1aa12581..a54d14ecad25 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -412,7 +412,7 @@ static unsigned long drop_slab_node(int nid)
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+		freed += shrink_slab(GFP_KERNEL, nid, 0, memcg, 0);
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 
 	return freed;
@@ -5068,7 +5068,8 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 
 	success = try_to_shrink_lruvec(lruvec, sc);
 
-	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
+	shrink_slab(sc->gfp_mask, pgdat->node_id, sc->order, memcg,
+		    sc->priority);
 
 	if (!sc->proactive)
 		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
@@ -6170,7 +6171,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		shrink_lruvec(lruvec, sc);
 
-		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
+		shrink_slab(sc->gfp_mask, pgdat->node_id, sc->order, memcg,
 			    sc->priority);
 
 		/* Record the group's reclaim efficiency */
-- 
2.34.1