Date: Tue, 24 Mar 2026 13:11:46 +0100
From: Peter Zijlstra
To: "Deng, Pan"
Cc: "mingo@kernel.org", "rostedt@goodmis.org", "linux-kernel@vger.kernel.org", "Li, Tianyou", "tim.c.chen@linux.intel.com", "Chen, Yu C"
Subject: Re: [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate cache line contention
Message-ID: <20260324121146.GC3738010@noisy.programming.kicks-ass.net>
References: <24c460fb48d86a5b990acbb42d0d29d91dfc427c.1753076363.git.pan.deng@intel.com> <20260320100903.GR3738786@noisy.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Tue, Mar 24, 2026 at 09:36:14AM +0000, Deng, Pan wrote:
> Regarding this patch, yes, using the cacheline-aligned attribute could
> increase memory usage.
> After internal discussion, we are considering an alternative that
> mitigates the memory waste: use kmalloc() to allocate the count in a
> separate allocation, rather than placing the count and cpumask together
> in this structure. The rationale is that the counter write and the
> cpumask read would then target separate memory, which could reduce the
> rate of false sharing; in addition, the slab allocator may place the
> objects in different cache lines, reducing contention. The drawback of
> dynamically allocated counters is that we have to manage their
> lifetime.
> Could you please advise whether sticking with the current
> cacheline-aligned attribute or switching to kmalloc() is preferred?

Well, you'd have to allocate a full cacheline anyway. If you allocate N
4-byte (counter) objects, there's a fair chance they end up in the same
cacheline (it's a slab, after all) and then you're back to having a ton
of false sharing.

Anyway, for your specific workload, why isn't partitioning a viable
solution? It would not need any kernel modifications and would get rid
of the contention entirely.