Message-ID: <46e9f5cf-34cb-466d-a53a-5778768af4d9@linux.dev>
Date: Thu, 14 May 2026 15:46:06 +0800
Subject: Re: [linus:master] [mm] 01b9da291c: stress-ng.switch.ops_per_sec 67.7% regression
To: Shakeel Butt, kernel test robot
Cc: oe-lkp@lists.linux.dev, lkp@intel.com, linux-kernel@vger.kernel.org, Andrew Morton, David Carlier, Allen Pais, Axel Rasmussen, Baoquan He, Chengming Zhou, Chen Ridong, David Hildenbrand, Hamza Mahfooz, Harry Yoo, Hugh Dickins, Imran Khan, Johannes Weiner, Kamalesh Babulal, Lance Yang, Liam Howlett, Lorenzo Stoakes, Michal Hocko, Michal Koutný, Mike Rapoport, Muchun Song, Nhat Pham, Roman Gushchin, Suren Baghdasaryan, Usama Arif, Vlastimil Babka, Wei Xu, Yosry Ahmed, Yuanchu Xie, Zi Yan, cgroups@vger.kernel.org, linux-mm@kvack.org
References: <202605121641.b6a60cb0-lkp@intel.com> <0e1b8994-944d-4dda-8966-3cd43661796d@linux.dev>
From: Qi Zheng

On 5/13/26 10:27 PM, Shakeel Butt wrote:
> On Wed, May 13, 2026 at 06:49:45AM -0700, Shakeel Butt wrote:
>> On Wed, May 13, 2026 at 10:10:34AM +0800, Qi Zheng wrote:
>>> On 5/13/26 12:03 AM, Shakeel Butt wrote:
>>>> On Tue, May 12, 2026 at 08:56:52PM +0800, kernel test robot wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> kernel test robot noticed a 67.7% regression of stress-ng.switch.ops_per_sec on:
>>>>>
>>>>> commit: 01b9da291c4969354807b52956f4aae1f41b4924 ("mm: memcontrol: convert objcg to be per-memcg per-node type")
>>>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>>
>>>> This is most probably due to shuffling of struct mem_cgroup and struct
>>>> mem_cgroup_per_node members.
>>>
>>> Another possibility is that after objcg was split into per-node
>>> instances, the slab accounting fast path is still designed assuming
>>> only one current objcg per CPU:
>>>
>>>     struct obj_stock_pcp {
>>>         struct obj_cgroup *cached_objcg;
>>>     };
>>>
>>> So it may cause the following thrashing:
>>>
>>>     CPU stock cached = memcg/node0 objcg
>>>     free object tagged = memcg/node1 objcg
>>>       => __refill_obj_stock --> objcg mismatch
>>>       => drain_obj_stock()
>>>       => cache switches to node1 objcg
>>>
>>>     next local allocation tagged = node0 objcg
>>>       => mismatch again
>>>       => drain_obj_stock()
>>
>> Actually I think this is the issue: we have ping-pong threads running on
>> different nodes which, though they are in the same cgroup, have a
>> current->objcg for their local node, and thus this ping-pong is
>> thrashing the per-cpu objcg stock.
>>
>> The easier fix would be to compare objcg->memcg instead of just objcg
>> during draining and caching. In addition, we could add support for
>> caching multiple objcgs in the per-cpu stock.
>
> Something like the following:
>
> From d756abe831a905d6fe32bad9a984fc619dafb7e0 Mon Sep 17 00:00:00 2001
> From: Shakeel Butt
> Date: Wed, 13 May 2026 07:24:55 -0700
> Subject: [PATCH] mm/memcontrol: skip obj_stock drain when refilled objcg
>  shares memcg
>
> Signed-off-by: Shakeel Butt
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d978e18b9b2d..01ed7a8e18ac 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3318,6 +3318,7 @@ static void __refill_obj_stock(struct obj_cgroup *objcg,
>  			       unsigned int nr_bytes,
>  			       bool allow_uncharge)
>  {
> +	struct obj_cgroup *cached;
>  	unsigned int nr_pages = 0;
>
>  	if (!stock) {
> @@ -3327,7 +3328,18 @@ static void __refill_obj_stock(struct obj_cgroup *objcg,
>  		goto out;
>  	}
>
> -	if (READ_ONCE(stock->cached_objcg) != objcg) { /* reset if necessary */
> +	cached = READ_ONCE(stock->cached_objcg);
> +	if (cached != objcg &&
> +	    (!cached || obj_cgroup_memcg(cached) != obj_cgroup_memcg(objcg))) {
>  		drain_obj_stock(stock);
>  		obj_cgroup_get(objcg);
>  		stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes)

This change looks like it should be able to fix the ping-pong issue, but
I still haven't reproduced the performance regression locally. I'll
continue testing it.

Hi kernel-test-robot, could you help check whether the patch above fixes
the issue on your end?

Thanks,
Qi