Date: Mon, 23 Feb 2026 15:19:20 -0500
From: Johannes Weiner
To: Uladzislau Rezki
Cc: Andrew Morton, Joshua Hahn, Michal Hocko, Roman Gushchin,
	Shakeel Butt, Muchun Song, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: vmalloc: streamline vmalloc memory accounting
References: <20260220191035.3703800-1-hannes@cmpxchg.org>

On Mon, Feb 23, 2026 at 04:30:32PM +0100, Uladzislau Rezki wrote:
> On Fri, Feb 20, 2026 at 02:10:34PM -0500, Johannes Weiner wrote:
> > @@ -3655,6 +3649,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  				continue;
> >  			}
> >  
> > +			mod_node_page_state(page, NR_VMALLOC, 1 << large_order);
> > +
> >  			split_page(page, large_order);
> >  			for (i = 0; i < (1U << large_order); i++)
> >  				pages[nr_allocated + i] = page + i;
> > @@ -3675,6 +3671,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  	if (!order) {
> >  		while (nr_allocated < nr_pages) {
> >  			unsigned int nr, nr_pages_request;
> > +			int i;
> >  
> >  			/*
> >  			 * A maximum allowed request is hard-coded and is 100
> > @@ -3698,6 +3695,9 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  						nr_pages_request,
> >  						pages + nr_allocated);
> >  
> > +			for (i = nr_allocated; i < nr_allocated + nr; i++)
> > +				inc_node_page_state(pages[i], NR_VMALLOC);
> > +
> >  			nr_allocated += nr;
> >  
> >  			/*
> > @@ -3722,6 +3722,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  			if (unlikely(!page))
> >  				break;
> >  
> > +			mod_node_page_state(page, NR_VMALLOC, 1 << order);
> > +
> >  			/*
>
> Can we move *_node_page_stat() to the end of the vm_area_alloc_pages()?
>
> Or mod_node_page_state in first place should be invoked on high-order
> page before split(to avoid of looping over small pages afterword)?
>
> I mean it would be good to place to the one solid place. If it is possible
> of course.

Note that the top one in the fast path IS called before the split.
We're accounting in the same step size as the page allocator can give
us.

In the fallback paths (bulk allocator, and one-by-one loop), the issue
is that the individual pages could be coming from different nodes, so
they need to bump different counters.

One possible solution would be to remember the last node and
accumulate until it differs, then flush:

	fallback_loop()
	{
		page = alloc_pages();
		nid = page_to_nid(page);
		if (nid != last_nid) {
			if (node_count) {
				mod_node_page_state(...);
				node_count = 0;
			}
			last_nid = nid;
		}
		node_count++;
	}

	if (node_count)
		mod_node_page_state(...);

But it IS the slow path, and these are fairly cheap per-cpu counters.
Especially compared to the cost of calling into the allocator. So I'm
not sure it's worth it... What do you think?