Date: Mon, 13 Apr 2026 17:28:35 +0200
From: Michal Hocko
To: Joshua Hahn
Cc: Johannes Weiner, Roman Gushchin, Shakeel Butt, Muchun Song,
 Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
 Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
 cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH 0/8 RFC] mm/memcontrol, page_counter: move stock
 from mem_cgroup to page_counter
References: <20260413142958.2037913-1-joshua.hahnjy@gmail.com>
In-Reply-To: <20260413142958.2037913-1-joshua.hahnjy@gmail.com>

On Mon 13-04-26 07:29:58, Joshua Hahn wrote:
> On Mon, 13 Apr 2026 09:23:38 +0200 Michal Hocko wrote:
>
> Hello Michal,
>
> Thank you for your review as always!
>
> > On Fri 10-04-26 14:06:54, Joshua Hahn wrote:
> > > Memcg currently keeps a "stock" of 64 pages per-cpu to cache
> > > pre-charged allocations, allowing small allocations and frees to
> > > avoid the expensive mem_cgroup hierarchy traversal on each
> > > charge. This design introduces a fastpath to charge/uncharge,
> > > but has several limitations:
> > >
> > > 1. Each CPU can track up to 7 (NR_MEMCG_STOCK) mem_cgroups.
> > >    When more than 7 mem_cgroups are actively charging on a
> > >    single CPU, a random victim is evicted, and its associated
> > >    stock is drained, which triggers unnecessary hierarchy walks.
> > >
> > >    Note that there previously used to be a 1-1 mapping between
> > >    CPU and memcg stock; it was bumped up to 7 in f735eebe55f8f
> > >    ("multi-memcg percpu charge cache") because it was observed
> > >    that the stock would frequently get flushed and refilled.
> >
> > All true, but it is quite important to note that all of this is
> > bounded by nr_online_cpus*NR_MEMCG_STOCK*MEMCG_CHARGE_BATCH. You
> > are proposing to increase this to
> > s@NR_MEMCG_STOCK@nr_leaf_cgroups@. In environments with many CPUs
> > and many directly charged cgroups this can be a considerable
> > hidden overcharge. Have you considered that and evaluated the
> > potential impact?
>
> This is a great point. I would like to note, though, that for
> systems running fewer than 7 leaf cgroups (I'm not sure what systems
> typically look like outside of Meta, so I cannot say whether this is
> likely or not!) this change would be an optimization, since we
> allocate only for the leaf cgroups we need ;-)
>
> But let's do the math for the worst-case scenario:
> Because we initialize the stock to 0 and only refill on a charge /
> uncharge, the worst-case scenario involves a workload that charges
> on all CPUs just once, so that it does not benefit from the caching
> at all. On a very large system, say 300 CPUs, with 4k pages, that's
> 300 * 64 * 4KB = 75 MB of overcharging per leaf cgroup.
>
> This is definitely a serious amount of overcharging. With that said,
> I would like to note that this seems like quite a rare scenario;
> what would cause a workload to jump across 300 CPUs?

A typical situation where I would expect this to be visible is a
large machine hosting a lot of smaller containers, which is not an
untypical setup. Without external pressure those caches could
accumulate a lot.
On the other hand, on a large machine the overall overcharging
shouldn't cause memory depletion even if we are talking about 1000s
of memcgs. The behavior will change though, and this is something you
should explain in your changelog. There will certainly be cons that
we need to weigh against the pros. There are many good points below
that you can use.

[...]

> > > 2. Stock management is tightly coupled to struct mem_cgroup,
> > >    which makes it difficult to add a new page_counter to struct
> > >    mem_cgroup and do its own stock management, since each
> > >    operation has to be duplicated.
> >
> > Could you expand on why this is a problem we need to address?
>
> Yes, of course. To give some context, I realized that stock was a
> bit uncomfortable to work with at memcg granularity when I tried to
> introduce a new page counter for toptier memory tracking (in order
> to enforce strict limits). I didn't explicitly note this in the
> cover letter because I thought that there was a lot of good
> motivation aside from the specific use case I had in mind, so I
> decided to leave it out. What do you think? :-)

Yes, if there are future plans that might benefit from this then this
is worth mentioning. Because based on point 1 alone I cannot really
tell whether going this way is better than tuning NR_MEMCG_STOCK. As
I've said, I like the resulting code better, but there are some
practical cons as well.

> I'm not a memcg v1 user, so I cannot tell from experience whether
> this is a pain point or not, but I also found it awkward that one
> stock gated the charges for two page_counters, memsw and memory,
> which made the slowpath incur double the hierarchy walks on a single
> stock failure, instead of keeping them separate so that it is less
> likely for both hierarchy walks to happen on a single charge
> attempt.

v1 is legacy and we decided long ago not to invest in new
optimizations/features there.
> > > 3. Each stock slot requires a css reference, as well as a
> > >    traversal overhead on every stock operation to check which
> > >    cpu-memcg we are trying to consume stock for.
> >
> > Why is this a problem?
>
> I don't think this is really that big of a problem, but it is
> something I wanted to note as a benefit of these changes. I remember
> being a bit confused by the memcg slot scanning & traversal when
> reading the stock code. Personally, I think being able to directly
> attribute stock to the page_counter it comes from, as well as not
> randomly evicting stock, could be helpful.

OK, so this boils down to code clarity.

> > Please also be more explicit about what kind of workloads are
> > going to benefit from this change. The existing caching scheme is
> > simple and ineffective, but is it worth improving (likely your
> > points 2 and 3 could clarify that)?
>
> I think that the biggest strength of this series is actually not
> performance gains but rather more interpretable semantics for stock
> management and transparent charging in try_charge_memcg.
>
> But to break it down, any system using fewer than 7 cgroups will get
> reduced memory overhead (from the percpu structs) and comparable
> performance. Any system using more than 7 leaf cgroups will benefit
> because stock is no longer randomly evicted and then refilled.
>
> From my limited benchmark tests, these effects didn't seem too
> visible from a wall time perspective. But I can trace how often we
> refill the stock in the next version, and I hope that it can show
> more tangible results.

Another point for the changelog.

-- 
Michal Hocko
SUSE Labs