From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8f014fc5-093c-4614-b1eb-81695cb33d8c@intel.com>
Date: Tue, 13 Jan 2026 09:59:39 +0800
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Subject: Re: [PATCH] lib/group_cpus: make group CPU cluster aware
From: "Guo, Wangyang"
To: Radu Rendec, Andrew Morton
Cc: Thomas Gleixner, linux-kernel@vger.kernel.org, Tianyou Li, Tim Chen, Dan Liang
References: <20251024023038.872616-1-wangyang.guo@intel.com> <20251221111047.597248db9868d278c7786f6b@linux-foundation.org> <8ba50768-2f05-40a8-b8e8-4364f33ad269@intel.com>
In-Reply-To: <8ba50768-2f05-40a8-b8e8-4364f33ad269@intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 1/10/2026 10:24 AM, Guo, Wangyang wrote:
> On 1/10/2026 3:13 AM, Radu Rendec wrote:
>> Hi all,
>>
>> On Mon,
>> 2025-12-22 at 11:03 +0800, Guo, Wangyang wrote:
>>> On 12/22/2025 3:10 AM, Andrew Morton wrote:
>>>> On Fri, 24 Oct 2025 10:30:38 +0800 Wangyang Guo wrote:
>>>>
>>>>> As CPU core counts increase, the number of NVMe IRQs may be smaller
>>>>> than the total number of CPUs. This forces multiple CPUs to share
>>>>> the same IRQ. If the IRQ affinity and the CPU's cluster do not
>>>>> align, a performance penalty can be observed on some platforms.
>>>>
>>>> It would be helpful to quantify "performance penalty".  At least give
>>>> readers some approximate understanding of how serious this issue is,
>>>> please.
>>>>
>>> Thanks for your reminder, will update the changelog in the next
>>> version. We see a 15%+ performance difference in FIO
>>> libaio/randread/bs=8k.
>>>
>>>>> This patch improves IRQ affinity by grouping CPUs by cluster within
>>>>> each NUMA domain, ensuring better locality between CPUs and their
>>>>> assigned NVMe IRQs.
>>>>>
>>>>> Reviewed-by: Tianyou Li
>>>>> Reviewed-by: Tim Chen
>>>>> Tested-by: Dan Liang
>>>>> Signed-off-by: Wangyang Guo
>>>>
>>>> Patch hasn't attracted additional review so I'll queue this version
>>>> for some testing in mm.git's mm-nonmm-unstable branch.  I'll add a
>>>> note-to-self that a changelog addition is desirable.
>>>
>>> Thanks a lot for your time and support! Please let me know if you
>>> have any further comments or guidance. Any feedback would be
>>> appreciated.
>>
>> With this patch applied, I see a weird issue in a qemu x86_64 vm if I
>> start it with a higher number of max CPUs than active CPUs, for
>> example `-smp 4,maxcpus=8` on the qemu command line.
>>
>> What I see is the `while (1)` loop in alloc_cluster_groups() spinning
>> forever. Removing the `maxcpus=8` from the qemu command line fixes
>> the issue but so does reverting the patch :)
>
> Thanks for the report. I will investigate this problem.

The problem happens in this loop:

	/* Probe how many clusters in this node. */
	while (1) {
		cpu = cpumask_first(msk);
		if (cpu >= nr_cpu_ids)
			break;
		cluster_mask = topology_cluster_cpumask(cpu);
		/* Clean out CPUs on the same cluster. */
		cpumask_andnot(msk, msk, cluster_mask);
		ncluster++;
	}

In this case, topology_cluster_cpumask(cpu) returns an empty
cluster_mask, so the subsequent cpumask_andnot() removes nothing from
msk and the loop never terminates, entering an endless loop. It can be
fixed by checking the returned cluster_mask:

	cluster_mask = topology_cluster_cpumask(cpu);
+	if (!cpumask_weight(cluster_mask))
+		goto no_cluster;
	/* Clean out CPUs on the same cluster. */
	cpumask_andnot(msk, msk, cluster_mask);
	ncluster++;

BR
Wangyang