From: Thierry Reding <thierry.reding@gmail.com>
To: Bo Yan <byan@nvidia.com>
Cc: jonathanh@nvidia.com, linux-tegra@vger.kernel.org,
mark.rutland@arm.com, robh+dt@kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] arm64: tegra: add topology data for Tegra194 cpu
Date: Thu, 31 Jan 2019 23:25:18 +0100 [thread overview]
Message-ID: <20190131222517.GB13156@mithrandir> (raw)
In-Reply-To: <1548959754-3941-1-git-send-email-byan@nvidia.com>
On Thu, Jan 31, 2019 at 10:35:54AM -0800, Bo Yan wrote:
> The xavier CPU architecture includes 8 CPU cores organized in
> 4 clusters. Add cpu-map data for topology initialization, add
> cache data for cache node creation in sysfs.
>
> Signed-off-by: Bo Yan <byan@nvidia.com>
> ---
> arch/arm64/boot/dts/nvidia/tegra194.dtsi | 148 +++++++++++++++++++++++++++++--
> 1 file changed, 140 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
> index 6dfa1ca..7c2a1fb 100644
> --- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
> +++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
> @@ -870,63 +870,195 @@
> #address-cells = <1>;
> #size-cells = <0>;
>
> - cpu@0 {
> + cpu-map {
> + cluster0 {
> + core0 {
> + cpu = <&cl0_0>;
> + };
> +
> + core1 {
> + cpu = <&cl0_1>;
> + };
> + };
> +
> + cluster1 {
> + core0 {
> + cpu = <&cl1_0>;
> + };
> +
> + core1 {
> + cpu = <&cl1_1>;
> + };
> + };
> +
> + cluster2 {
> + core0 {
> + cpu = <&cl2_0>;
> + };
> +
> + core1 {
> + cpu = <&cl2_1>;
> + };
> + };
> +
> + cluster3 {
> + core0 {
> + cpu = <&cl3_0>;
> + };
> +
> + core1 {
> + cpu = <&cl3_1>;
> + };
> + };
> + };
> +
> + cl0_0: cpu@0 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x10000>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
These don't seem to be well-defined. They are mentioned in a very weird
location (Documentation/devicetree/booting-without-of.txt), but there
seem to be examples and other device tree files that use them, so maybe
they are all valid. It might be worth mentioning these in other places
where people can more easily find them.
According to the above document, {i,d}-cache-line-size are deprecated in
favour of {i,d}-cache-block-size.
I also don't see any mention of {i,d}-cache_sets in the device tree
bindings, though riscv/cpus.txt mentions {i,d}-cache-sets (note the
hyphen instead of underscore) in the examples. arm/l2c2x0.txt and
arm/uniphier/cache-uniphier.txt describe cache-sets, though that's
slightly different.
It might make sense to document all of these in more standard places,
perhaps by adding them to arm/cpus.txt. For consistency with other
properties, I think they should be called {i,d}-cache-sets, like for
RISC-V.
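For illustration, a CPU node using the preferred spellings would look
something like this (values copied from your patch; this assumes the
properties do get documented in arm/cpus.txt as suggested, so take it
as a sketch rather than an established binding):

	cl0_0: cpu@0 {
		compatible = "nvidia,tegra194-carmel", "arm,armv8";
		device_type = "cpu";
		reg = <0x10000>;
		enable-method = "psci";
		i-cache-size = <131072>;
		i-cache-sets = <512>;
		/* -block-size preferred over the deprecated -line-size */
		i-cache-block-size = <64>;
		d-cache-size = <65536>;
		/* hyphen, not underscore, matching riscv/cpus.txt */
		d-cache-sets = <256>;
		d-cache-block-size = <64>;
	};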
> + l2-cache = <&l2_0>;
This seems to be called next-level-cache everywhere else, though it's
only formally described in arm/uniphier/cache-uniphier.txt. So it might
also make sense to add this to arm/cpus.txt.
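That is, something like this (untested) in each CPU node:

	cl0_0: cpu@0 {
		/* ... */
		next-level-cache = <&l2_0>;	/* instead of l2-cache */
	};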
> };
>
> - cpu@1 {
> + cl0_1: cpu@1 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x10001>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_0>;
> };
>
> - cpu@2 {
> + cl1_0: cpu@2 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x100>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_1>;
> };
>
> - cpu@3 {
> + cl1_1: cpu@3 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x101>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_1>;
> };
>
> - cpu@4 {
> + cl2_0: cpu@4 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x200>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_2>;
> };
>
> - cpu@5 {
> + cl2_1: cpu@5 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x201>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_2>;
> };
>
> - cpu@6 {
> + cl3_0: cpu@6 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x10300>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_3>;
> };
>
> - cpu@7 {
> + cl3_1: cpu@7 {
> compatible = "nvidia,tegra194-carmel", "arm,armv8";
> device_type = "cpu";
> reg = <0x10301>;
> enable-method = "psci";
> + i-cache-size = <131072>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <512>;
> + d-cache-size = <65536>;
> + d-cache-line-size = <64>;
> + d-cache_sets = <256>;
> + l2-cache = <&l2_3>;
> };
> };
>
> + l2_0: l2-cache0 {
> + cache-size = <2097152>;
> + cache-line-size = <64>;
> + cache-sets = <2048>;
> + next-level-cache = <&l3>;
> + };
Does this need a compatible string? Also, are there controllers behind
these caches? I'm just wondering if these also need reg properties and
unit-addresses.
arm/l2c2x0.txt and arm/uniphier/cache-uniphier.txt describe an
additional property that you don't specify here: cache-level. This
sounds useful to have so that we don't have to guess the cache level
from the name, which may or may not work depending on what people name
the nodes.
Also, similar to the L1 cache, cache-block-size is preferred over
cache-line-size.
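Putting the above together, I'd expect the L2 nodes to end up looking
roughly like this (the "cache" compatible and cache-unified are what
other bindings use for unified cache nodes; whether they're needed here
is exactly the open question above, so treat this as a sketch):

	l2_0: l2-cache0 {
		compatible = "cache";
		cache-level = <2>;	/* no more guessing from the name */
		cache-unified;
		cache-size = <2097152>;
		/* -block-size preferred over the deprecated -line-size */
		cache-block-size = <64>;
		cache-sets = <2048>;
		next-level-cache = <&l3>;
	};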
> + l3: l3-cache {
> + cache-size = <4194304>;
> + cache-line-size = <64>;
> + cache-sets = <4096>;
> + };
The same comments apply as for the L2 caches.
Thierry
Thread overview: 9+ messages
2019-01-31 18:35 [PATCH] arm64: tegra: add topology data for Tegra194 cpu Bo Yan
2019-01-31 22:25 ` Thierry Reding [this message]
2019-01-31 23:29 ` Bo Yan
2019-02-11 23:34 ` Bo Yan
2019-02-11 23:47 ` [PATCH V2] " Bo Yan
2019-02-13 8:12 ` Thierry Reding
2019-02-13 16:24 ` Bo Yan
2019-02-13 16:33 ` [PATCH V3] " Bo Yan
2019-02-22 0:52 ` Bo Yan