From mboxrd@z Thu Jan 1 00:00:00 1970
From: brice.goglin@gmail.com (Brice Goglin)
Date: Thu, 8 Mar 2018 21:41:17 +0100
Subject: [PATCH v7 13/13] arm64: topology: divorce MC scheduling domain from core_siblings
In-Reply-To: <20180307130623.GK4589@e105550-lin.cambridge.arm.com>
References: <20180228220619.6992-1-jeremy.linton@arm.com>
 <20180228220619.6992-14-jeremy.linton@arm.com>
 <20180301155216.GI4589@e105550-lin.cambridge.arm.com>
 <5d6bf4cf-2f6d-d123-f17f-d47d8e74c16c@arm.com>
 <20180306160721.GJ4589@e105550-lin.cambridge.arm.com>
 <8ac3567c-9fd8-4b0c-121c-287a027b5156@arm.com>
 <20180307130623.GK4589@e105550-lin.cambridge.arm.com>
Message-ID:
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

> Is there a good reason for diverging instead of adjusting the
> core_sibling mask? On x86 the core_siblings mask is defined by the last
> level cache span so they don't have this issue.

No. core_siblings is defined as the list of cores that have the same
physical_package_id (see the documentation of the sysfs topology files),
and the LLC can span fewer cores than that.

Example: a Xeon E5 v3 with cluster-on-die enabled has two L3 caches per
package, so core_siblings is twice as large as the L3 cpumap:
https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v3.v1.11.png

On AMD EPYC, you even have up to 8 LLCs per package.

Brice
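For anyone who wants to see the difference on a running system, both masks are exported through sysfs, so a quick comparison is possible. A small sketch (assuming the unified L3 shows up as cache index3, which is the usual but not guaranteed layout):

```shell
#!/bin/sh
# Compare the package-wide core_siblings mask with the LLC span for cpu0.
# On cluster-on-die / multi-LLC parts the two masks differ.
topo=/sys/devices/system/cpu/cpu0/topology
cache=/sys/devices/system/cpu/cpu0/cache

echo "core_siblings: $(cat $topo/core_siblings 2>/dev/null || echo n/a)"

# index3 is usually the unified L3; guard in case it is absent.
if [ -r $cache/index3/shared_cpu_map ]; then
    echo "L3 cpumap:     $(cat $cache/index3/shared_cpu_map)"
else
    echo "L3 cpumap:     n/a (no cache index3 on this system)"
fi
```

On the E5 v3 cluster-on-die example above, core_siblings covers the whole package while the L3 cpumap covers only half of it.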