Date: Mon, 23 Feb 2026 15:19:20 -0500
From: Johannes Weiner
To: Uladzislau Rezki
Cc: Andrew Morton, Joshua Hahn, Michal Hocko, Roman Gushchin,
	Shakeel Butt, Muchun Song, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: vmalloc: streamline vmalloc memory accounting
References: <20260220191035.3703800-1-hannes@cmpxchg.org>

On Mon, Feb 23, 2026 at 04:30:32PM +0100, Uladzislau Rezki wrote:
> On Fri, Feb 20, 2026 at 02:10:34PM -0500, Johannes Weiner wrote:
> > @@ -3655,6 +3649,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  			continue;
> >  		}
> >  
> > +		mod_node_page_state(page, NR_VMALLOC, 1 << large_order);
> > +
> >  		split_page(page, large_order);
> >  		for (i = 0; i < (1U << large_order); i++)
> >  			pages[nr_allocated + i] = page + i;
> > @@ -3675,6 +3671,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  	if (!order) {
> >  		while (nr_allocated < nr_pages) {
> >  			unsigned int nr, nr_pages_request;
> > +			int i;
> >  
> >  			/*
> >  			 * A maximum allowed request is hard-coded and is 100
> > @@ -3698,6 +3695,9 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  						  nr_pages_request,
> >  						  pages + nr_allocated);
> >  
> > +			for (i = nr_allocated; i < nr_allocated + nr; i++)
> > +				inc_node_page_state(pages[i], NR_VMALLOC);
> > +
> >  			nr_allocated += nr;
> >  
> >  			/*
> > @@ -3722,6 +3722,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  		if (unlikely(!page))
> >  			break;
> >  
> > +		mod_node_page_state(page, NR_VMALLOC, 1 << order);
> > +
> >  		/*
>
> Can we move *_node_page_stat() to the end of the vm_area_alloc_pages()?
>
> Or should mod_node_page_state() in the first place be invoked on the
> high-order page before the split (to avoid looping over the small pages
> afterward)?
>
> I mean, it would be good to put it in one solid place. If that is
> possible, of course.

Note that the top one in the fast path IS called before the split. We're
accounting in the same step size as the page allocator can give us.

In the fallback paths (bulk allocator, and one-by-one loop), the issue
is that the individual pages could be coming from different nodes, so
they need to bump different counters.

One possible solution would be to remember the last node and accumulate
until it differs, then flush:

	fallback_loop() {
		page = alloc_pages();
		nid = page_to_nid(page);
		if (nid != last_nid) {
			if (node_count) {
				mod_node_page_state(...);
				node_count = 0;
			}
			last_nid = nid;
		}
		node_count++;
	}

	if (node_count)
		mod_node_page_state(...);

But it IS the slow path, and these are fairly cheap per-cpu counters.
Especially compared to the cost of calling into the allocator. So I'm
not sure it's worth it... What do you think?