From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Joshua Hahn, Chris Mason, Andrew Morton, Johannes Weiner,
	Vlastimil Babka, Brendan Jackman, "Kirill A. Shutemov",
	Michal Hocko, SeongJae Park, Suren Baghdasaryan, Zi Yan,
	Sasha Levin
Subject: [PATCH 6.12.y 2/3] mm/page_alloc: batch page freeing in decay_pcp_high
Date: Wed, 21 Jan 2026 06:28:07 -0500
Message-ID: <20260121112808.1461983-2-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260121112808.1461983-1-sashal@kernel.org>
References: <2026012041-wilder-jalapeno-0398@gregkh> <20260121112808.1461983-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Joshua Hahn

[ Upstream commit fc4b909c368f3a7b08c895dd5926476b58e85312 ]

It is possible for pcp->count - pcp->high to exceed pcp->batch by a lot.
When this happens, we should perform batching to ensure that
free_pcppages_bulk isn't called with too many pages to free at once and
starve out other threads that need the pcp or zone lock.

Since we are still only freeing the difference between the initial
pcp->count and pcp->high values, there should be no change to how many
pages are freed.

Link: https://lkml.kernel.org/r/20251014145011.3427205-3-joshua.hahnjy@gmail.com
Signed-off-by: Joshua Hahn
Suggested-by: Chris Mason
Suggested-by: Andrew Morton
Co-developed-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Cc: Brendan Jackman
Cc: "Kirill A. Shutemov"
Cc: Michal Hocko
Cc: SeongJae Park
Cc: Suren Baghdasaryan
Cc: Zi Yan
Signed-off-by: Andrew Morton
Stable-dep-of: 038a102535eb ("mm/page_alloc: prevent pcp corruption with SMP=n")
Signed-off-by: Sasha Levin
---
 mm/page_alloc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e1669a562946..23ad33020f312 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2365,7 +2365,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  */
 bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
-	int high_min, to_drain, batch;
+	int high_min, to_drain, to_drain_batched, batch;
 	bool todo = false;
 
 	high_min = READ_ONCE(pcp->high_min);
@@ -2383,11 +2383,14 @@ bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 	}
 
 	to_drain = pcp->count - pcp->high;
-	if (to_drain > 0) {
+	while (to_drain > 0) {
+		to_drain_batched = min(to_drain, batch);
 		spin_lock(&pcp->lock);
-		free_pcppages_bulk(zone, to_drain, pcp, 0);
+		free_pcppages_bulk(zone, to_drain_batched, pcp, 0);
 		spin_unlock(&pcp->lock);
 		todo = true;
+
+		to_drain -= to_drain_batched;
 	}
 
 	return todo;
-- 
2.51.0