Date: Mon, 30 Mar 2026 16:21:12 +0200
From: Michal Hocko
To: Joshua Hahn
Cc: Johannes Weiner, Andrew Morton, Roman Gushchin, Shakeel Butt,
 Muchun Song, David Hildenbrand, Lorenzo Stoakes, Vlastimil Babka,
 Dennis Zhou, Tejun Heo, Christoph Lameter, cgroups@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH] mm/percpu, memcontrol: Per-memcg-lruvec percpu accounting
Message-ID:
References: <20260330141010.3126996-1-joshua.hahnjy@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org
In-Reply-To: <20260330141010.3126996-1-joshua.hahnjy@gmail.com>

On Mon 30-03-26 07:10:10, Joshua Hahn wrote:
> On Mon, 30 Mar 2026 14:03:29 +0200 Michal Hocko wrote:
>
> > On Fri 27-03-26 12:19:35, Joshua Hahn wrote:
> > > Convert MEMCG_PERCPU_B from a memcg_stat_item to a
> > > memcg_node_stat_item to give visibility into per-node breakdowns
> > > for percpu allocations, and turn it into NR_PERCPU_B.
> >
> > Why do we need/want this?
>
> Hello Michal,
>
> Thank you for reviewing my patch! I hope you are doing well.
>
> You're right, I could have done a better job of motivating the patch.
> My intent with this patch is to give more visibility into where
> memory physically resides once you know which memcg it is in.

Please keep in mind that the WHY is very often much more important than
the HOW in a patch, so you should always start with the intention and
justification.
> Percpu memory could probably be seen as "trivial" when it comes to
> figuring out which node it is on, but I'm hoping to make similar
> transitions for the rest of enum memcg_stat_item as well (you can see
> my work for the zswap stats in [1]).
>
> When all of the memory is moved from being tracked per-memcg to
> per-lruvec, the final vision is to be able to attribute node placement
> within each memcg. That can help with diagnosing things like
> asymmetric node pressure within a memcg, which today can only be
> measured with partial accuracy.
>
> Getting per-node breakdowns of percpu memory orthogonal to memcgs also
> seems like a win to me. While unlikely, I think we can benefit from
> some visibility into whether percpu allocations are happening equally
> across all CPUs.
>
> What do you think? Thank you again, I hope you have a great day!

I think you should have started with this intended outcome rather than
slicing it into pieces. Why do we want to shift to per-node stats for
other/all counters? What is the cost compared to the existing
accounting (if any)? Please go into detail on how you plan to use the
data before we commit to a lot of code churn.

TBH I do not see any fundamental reason why this would be impossible,
but I am not really sure it is worth the work, and I also cannot
foresee the subtle issues we might stumble over when getting there. So
I would appreciate it if you could look into that more deeply and
provide us with an evaluation of how you want to achieve your end goal
and what we can expect on the way. It is, of course, impossible to see
all potential problems without starting to implement the thing, but a
high-level evaluation would be really helpful.

> Joshua
>
> [1] https://lore.kernel.org/all/20260311195153.4013476-1-joshua.hahnjy@gmail.com/
--
Michal Hocko
SUSE Labs