Date: Wed, 13 May 2026 06:49:45 -0700
From: Shakeel Butt
To: Qi Zheng
Cc: kernel test robot, oe-lkp@lists.linux.dev, lkp@intel.com,
    linux-kernel@vger.kernel.org, Andrew Morton, David Carlier, Allen Pais,
    Axel Rasmussen, Baoquan He, Chengming Zhou, Chen Ridong,
    David Hildenbrand, Hamza Mahfooz, Harry Yoo, Hugh Dickins, Imran Khan,
    Johannes Weiner, Kamalesh Babulal, Lance Yang, Liam Howlett,
    Lorenzo Stoakes, Michal Hocko, Michal Koutný, Mike Rapoport,
    Muchun Song, Nhat Pham, Roman Gushchin, Suren Baghdasaryan,
    Usama Arif, Vlastimil Babka, Wei Xu, Yosry Ahmed, Yuanchu Xie, Zi Yan,
    cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [linus:master] [mm] 01b9da291c: stress-ng.switch.ops_per_sec 67.7% regression
References: <202605121641.b6a60cb0-lkp@intel.com> <0e1b8994-944d-4dda-8966-3cd43661796d@linux.dev>
In-Reply-To: <0e1b8994-944d-4dda-8966-3cd43661796d@linux.dev>

On Wed, May 13, 2026 at 10:10:34AM +0800, Qi Zheng wrote:
>
> On 5/13/26 12:03 AM, Shakeel Butt wrote:
> > On Tue, May 12, 2026 at 08:56:52PM +0800, kernel test robot wrote:
> > >
> > > Hello,
> > >
> > > kernel test robot noticed a 67.7% regression of stress-ng.switch.ops_per_sec on:
> > >
> > > commit: 01b9da291c4969354807b52956f4aae1f41b4924 ("mm: memcontrol: convert objcg to be per-memcg per-node type")
> > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> > This is most probably due to shuffling of struct mem_cgroup and struct
> > mem_cgroup_per_node members.
>
> Another possibility is that after objcg was split into per-node, the
> slab accounting fast path is still designed assuming only one current
> objcg per CPU:
>
> struct obj_stock_pcp {
>         struct obj_cgroup *cached_objcg;
> };
>
> So it may cause the following thrashing:
>
> CPU stock cached = memcg/node0 objcg
> free object tagged = memcg/node1 objcg
> => __refill_obj_stock --> objcg mismatch
> => drain_obj_stock()
> => cache switches to node1 objcg
>
> next local allocation tagged = node0 objcg
> => mismatch again
> => drain_obj_stock()

Actually I think this is the issue: we have ping-pong threads running on
different nodes which are in the same cgroup, but each thread's
current->objcg points at its local node's objcg, so the ping-pong
thrashes the per-cpu objcg stock. The easier fix would be to compare
objcg->memcg instead of just objcg during draining and caching. In
addition, we can add support for caching multiple objcgs in the per-cpu
stock.
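To make the thrashing and the proposed memcg comparison concrete, below
is a minimal userspace model (illustrative only, not kernel code: the
struct and function names loosely mirror mm/memcontrol.c, and the drain
counter is invented for the demo):

#include <stdio.h>

struct mem_cgroup { int id; };

/* After the per-node conversion there is one objcg per memcg per node. */
struct obj_cgroup {
	struct mem_cgroup *memcg;
	int node;
};

struct obj_stock_pcp {
	struct obj_cgroup *cached_objcg;
	unsigned long drains;		/* how often we had to drain */
};

static void drain_obj_stock(struct obj_stock_pcp *stock)
{
	stock->drains++;
	stock->cached_objcg = NULL;
}

/* Current behavior: any objcg pointer mismatch forces a drain. */
static void refill_by_objcg(struct obj_stock_pcp *stock,
			    struct obj_cgroup *objcg)
{
	if (stock->cached_objcg != objcg)
		drain_obj_stock(stock);
	stock->cached_objcg = objcg;
}

/* Proposed behavior: drain only when the owning memcg differs. */
static void refill_by_memcg(struct obj_stock_pcp *stock,
			    struct obj_cgroup *objcg)
{
	if (stock->cached_objcg && stock->cached_objcg->memcg != objcg->memcg)
		drain_obj_stock(stock);
	stock->cached_objcg = objcg;
}

int main(void)
{
	struct mem_cgroup memcg = { .id = 1 };
	struct obj_cgroup node0 = { .memcg = &memcg, .node = 0 };
	struct obj_cgroup node1 = { .memcg = &memcg, .node = 1 };
	struct obj_stock_pcp a = { 0 }, b = { 0 };

	/* Ping-pong: operations alternately tagged with the two objcgs
	 * of the same memcg, as in the trace quoted above. */
	for (int i = 0; i < 1000; i++) {
		refill_by_objcg(&a, (i & 1) ? &node1 : &node0);
		refill_by_memcg(&b, (i & 1) ? &node1 : &node0);
	}
	printf("objcg compare: %lu drains\n", a.drains);	/* 1000 */
	printf("memcg compare: %lu drains\n", b.drains);	/* 0 */
	return 0;
}

Of course the real refill/drain paths also track cached bytes, so a
memcg-based comparison would still need to flush or re-attribute the
bytes cached under the old node's objcg; the above only demonstrates
why the pointer comparison ping-pongs while a memcg comparison would
not.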