Date: Mon, 6 Apr 2026 17:58:51 -0400
From: Johannes Weiner
To: Frank van der Linden
Cc: linux-mm@kvack.org, Vlastimil Babka, Zi Yan, David Hildenbrand,
 Lorenzo Stoakes, "Liam R. Howlett", Rik van Riel,
 linux-kernel@vger.kernel.org
Subject: Re: [RFC 2/2] mm: page_alloc: per-cpu pageblock buddy allocator
References: <20260403194526.477775-1-hannes@cmpxchg.org>
 <20260403194526.477775-3-hannes@cmpxchg.org>

On Mon, Apr 06, 2026 at 10:31:02AM -0700, Frank van der Linden wrote:
> On Fri, Apr 3, 2026 at 12:45 PM Johannes Weiner wrote:
> >
> > On large machines, zone->lock is a scaling bottleneck for page
> > allocation. Two common patterns drive contention:
> >
> > 1. Affinity violations: pages are allocated on one CPU but freed on
> > another (jemalloc, exit, reclaim). The freeing CPU's PCP drains to
> > zone buddy, and the allocating CPU refills from zone buddy -- both
> > under zone->lock, defeating PCP batching entirely.
> >
> > 2. Concurrent exits: processes tearing down large address spaces
> > simultaneously overwhelm per-CPU PCP capacity, serializing on
> > zone->lock for overflow.
> >
> > Solution
> >
> > Extend the PCP to operate on whole pageblocks with ownership tracking.
> >
> > Each CPU claims pageblocks from the zone buddy and splits them
> > locally. Pages are tagged with their owning CPU, so frees route back
> > to the owner's PCP regardless of which CPU frees. This eliminates
> > affinity violations: the owner CPU's PCP absorbs both allocations and
> > frees for its blocks without touching zone->lock.
> >
> > It also shortens zone->lock hold time during drain and refill
> > cycles. Whole blocks are acquired under zone->lock and then split
> > outside of it. Affinity routing to the owning PCP on free enables
> > buddy merging outside the zone->lock as well; a bottom-up merge pass
> > runs under pcp->lock on drain, freeing larger chunks under zone->lock.
> >
> > PCP refill uses a four-phase approach:
> >
> > Phase 0: recover owned fragments previously drained to zone buddy.
> > Phase 1: claim whole pageblocks from zone buddy.
> > Phase 2: grab sub-pageblock chunks without migratetype stealing.
> > Phase 3: traditional __rmqueue() with migratetype fallback.
>
> Since the migrate type passed to rmqueue_bulk, where these changes
> are, is the PCP migratetype, this will prefer MIGRATE_MOVABLE more
> than before in the presence of MIGRATE_CMA pageblocks, right?
>
> Currently, the CMA fallback is done when > 50% of free zone memory is
> MIGRATE_CMA. For a PCP list, this isn't strictly true of course, since
> grabbing a page off the PCP list doesn't do this check, and MIGRATE_CMA
> doesn't have its own PCP list. But since rmqueue_bulk does do it, I'm
> guessing the fallback still mostly adheres to that 50%.
>
> With this change to rmqueue_bulk, it feels like it would prefer
> MIGRATE_MOVABLE more, since that is the mt passed to it (never
> MIGRATE_CMA), and the fallback is only done if the final phase is
> needed.
>
> Have you tested this with a zone that has a large amount of CMA in it
> and checked the percentages?

Good catch.
Yes, I think there are problems here wrt CMA:

Phase 0 does not recover CMA blocks when movable is requested. That
looks buggy. It should restore both block types.

Phase 1, grabbing whole new blocks, actually does use __rmqueue(), so
it gets the CMA fallback.

Phase 2 scans the freelists based only on the requested type. This
looks buggy as well. It should use the logic from the top of
__rmqueue() to decide whether to grab CMA chunks instead.

Phase 3 is the regular __rmqueue() path again, which honors the CMA
fallback.

It doesn't look hard to fix, but I'll be sure to test that.