From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 Jan 2026 17:48:00 +0100
From: Michal Hocko
To: Mathieu Desnoyers
Cc: Andrew Morton, linux-kernel@vger.kernel.org, "Paul E. McKenney",
	Steven Rostedt, Masami Hiramatsu, Dennis Zhou, Tejun Heo,
	Christoph Lameter, Martin Liu, David Rientjes,
	christian.koenig@amd.com, Shakeel Butt, SeongJae Park,
	Johannes Weiner, Sweet Tea Dorminy, Lorenzo Stoakes,
	"Liam R. Howlett", Mike Rapoport, Suren Baghdasaryan,
	Vlastimil Babka, Christian Brauner, Wei Yang, David Hildenbrand,
	Miaohe Lin, Al Viro, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org, Yu Zhao, Roman Gushchin,
	Mateusz Guzik, Matthew Wilcox, Baolin Wang, Aboorva Devarajan
Subject: Re: [PATCH v16 2/3] mm: Improve RSS counter approximation accuracy for proc interfaces
References: <20260114145915.49926-1-mathieu.desnoyers@efficios.com>
	<20260114145915.49926-3-mathieu.desnoyers@efficios.com>
In-Reply-To: <20260114145915.49926-3-mathieu.desnoyers@efficios.com>

On Wed 14-01-26 09:59:14, Mathieu Desnoyers wrote:
> Use hierarchical per-cpu counters for RSS tracking to improve the
> accuracy of per-mm RSS sum approximation on large many-core systems [1].
> This improves the accuracy of the RSS values returned by proc
> interfaces.
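To make the quoted design concrete, here is a toy single-threaded
user-space model of a two-level hierarchical counter. This is only an
illustrative sketch: the names (toy_counter, toy_add, BATCH, NCPUS) are
made up for this example and are not the kernel's percpu_counter_tree
API, and the real implementation uses per-cpu ops and atomics rather
than plain longs. The point it demonstrates is the error bound: each
"CPU" accumulates locally and only propagates a carry upward once the
local value crosses a batch threshold, so a read of the upper level
alone is off by at most NCPUS * BATCH.

```c
#include <assert.h>

/* Illustrative constants, not the kernel's actual batch sizing. */
#define NCPUS 8
#define BATCH 32

struct toy_counter {
	long local[NCPUS];  /* per-CPU deltas, not yet propagated */
	long global;        /* shared upper-level counter */
};

static void toy_add(struct toy_counter *c, int cpu, long v)
{
	c->local[cpu] += v;
	if (c->local[cpu] >= BATCH || c->local[cpu] <= -BATCH) {
		/* Slow path: trickle the carry up to the upper level
		 * (an atomic_add_return() in the real implementation). */
		c->global += c->local[cpu];
		c->local[cpu] = 0;
	}
}

/* Fast approximate read: consult only the upper level.
 * After any toy_add(), each local counter is strictly below BATCH in
 * magnitude, so the approximation error is bounded by NCPUS * BATCH. */
static long toy_read_approx(const struct toy_counter *c)
{
	return c->global;
}

/* Precise read: iterate all CPUs -- exactly the per-sum cost the
 * hierarchical scheme avoids on hot paths. */
static long toy_read_precise(const struct toy_counter *c)
{
	long sum = c->global;
	for (int cpu = 0; cpu < NCPUS; cpu++)
		sum += c->local[cpu];
	return sum;
}
```

A real tree adds more levels following the CPU topology, which is what
turns the flat O(n) error growth into the O(N log N) bound discussed
below.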
>
> This is also a preparation step to introduce a 2-pass OOM killer task
> selection which leverages the approximation and accuracy ranges to
> quickly eliminate tasks which are outside of the range of the current
> selection, and thus reduce the latency introduced by execution of the
> OOM killer.
>
> Here is a (possibly incomplete) list of the prior approaches that were
> used or proposed, along with their downsides:
>
> 1) Per-thread rss tracking: large error on many-thread processes.
>
> 2) Per-CPU counters: up to 12% slower for short-lived processes and 9%
>    increased system time in make test workloads [1]. Moreover, the
>    inaccuracy increases as O(n^2) with the number of CPUs.
>
> 3) Per-NUMA-node counters: requires atomics on the fast path
>    (overhead); the error is high on systems with many NUMA nodes
>    (32 times the number of NUMA nodes).
>
> 4) Use a precise per-cpu counter sum for each counter value query:
>    requires iterating over all possible CPUs for each sum, which adds
>    overhead (and thus increases OOM killer latency) on large many-core
>    systems running many processes.
>
> The approach proposed here is to replace the per-cpu counters with
> hierarchical per-cpu counters, which bound the inaccuracy based on the
> system topology with O(N log N).
>
> * Testing results:
>
> Test hardware: 2 sockets AMD EPYC 9654 96-Core Processor (384 logical
> CPUs total)
>
> Methodology:
>
> Comparing the current upstream implementation with the hierarchical
> counters is done by keeping both implementations wired up in parallel,
> and running a single-process, single-threaded program which hops
> randomly across CPUs in the system, calling mmap(2) and munmap(2) on
> random CPUs, keeping track of an array of allocated mappings, randomly
> choosing entries to either map or unmap.
>
> get_mm_counter() is instrumented to compare the upstream counter
> approximation to the precise value, and print the delta when going over
> a given threshold.
> The delta of the hierarchical counter approximation to the precise
> value is also printed for comparison.
>
> After a few minutes running this test, the upstream implementation's
> counter approximation reaches a 1GB delta from the precise value,
> compared to an 80MB delta with the hierarchical counter. The
> hierarchical counter provides a guaranteed maximum approximation
> inaccuracy of 192MB on that hardware topology.
>
> * Fast path implementation comparison
>
> The new inline percpu_counter_tree_add() uses this_cpu_add_return()
> for the fast path (under a certain allocation size threshold). Above
> that, it calls a slow path which "trickles up" the carry to upper-level
> counters with atomic_add_return().
>
> In comparison, the upstream counters implementation calls
> percpu_counter_add_batch(), which uses this_cpu_try_cmpxchg() on the
> fast path, and takes a raw_spin_lock_irqsave above a certain threshold.
>
> The hierarchical implementation is therefore expected to have less
> contention on mid-sized allocations than the upstream counters,
> because the atomic counters tracking those bits are only shared across
> nearby CPUs. In comparison, the upstream counters immediately take a
> global spinlock when reaching the threshold.
>
> * Benchmarks
>
> Using will-it-scale page_fault1 benchmarks to compare the upstream
> counters to the hierarchical counters. This is done with hyperthreading
> disabled. The speedup is within the standard deviation of the upstream
> runs, so the overhead is not significant.
>
>                                        upstream  hierarchical  speedup
> page_fault1_processes -s 100 -t 1        614783        615558    +0.1%
> page_fault1_threads   -s 100 -t 1        612788        612447    -0.1%
> page_fault1_processes -s 100 -t 96     37994977      37932035    -0.2%
> page_fault1_threads   -s 100 -t 96      2484130       2504860    +0.8%
> page_fault1_processes -s 100 -t 192    71262917      71118830    -0.2%
> page_fault1_threads   -s 100 -t 192     2446437       2469296    +0.1%
>
> This change depends on the following patch:
> "mm: Fix OOM killer inaccuracy on large many-core systems" [2]

As mentioned in the previous patch, it would be great to explicitly
state the memory cost of the new tracking data structure.

Other than that, this seems like a generally useful improvement for
larger systems, and it is my understanding that it adds almost no
overhead on small-end systems, correct?

-- 
Michal Hocko
SUSE Labs