From: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
To: Alim Akhtar <alim.akhtar@samsung.com>,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, robh+dt@kernel.org
Cc: linux-samsung-soc@vger.kernel.org
Subject: Re: [PATCH 2/2] arm64: dts: exynos5433: Add cpu cache information
Date: Mon, 21 Jun 2021 10:51:53 +0200 [thread overview]
Message-ID: <0120db2f-e25e-a4ae-669b-a404dbfae05b@canonical.com> (raw)
In-Reply-To: <20210617113739.66911-2-alim.akhtar@samsung.com>
On 17/06/2021 13:37, Alim Akhtar wrote:
> This patch adds CPU cache information to the CPU device tree
> nodes so that it is available to userspace
> via sysfs.
> This SoC has 48/32 KB I/D caches for each A57 core
> with a 2 MB L2 cache,
> and 32/32 KB I/D caches for each A53 core with a
> 256 KB L2 cache.
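Since each DT entry encodes size, sets, and line size, the implied associativity can be cross-checked as ways = size / (sets * line_size). A quick sketch (the expected way counts in the comments are my reading of the published Cortex-A53/A57 cache organisation, not something stated in the patch):

```shell
# Derive the set associativity implied by the DT cache properties.
check() { printf '%s: %d-way\n' "$1" $(( $2 / ($3 * $4) )); }

check "A53 I-cache" 0x8000   256  64   # 32 KB  -> 2-way
check "A53 D-cache" 0x8000   128  64   # 32 KB  -> 4-way
check "A57 I-cache" 0xc000   256  64   # 48 KB  -> 3-way
check "A57 D-cache" 0x8000   256  64   # 32 KB  -> 2-way
check "A57 L2"      0x200000 2048 64   # 2 MB   -> 16-way
check "A53 L2"      0x40000  256  64   # 256 KB -> 16-way
```

All six work out to the way counts ARM documents for these cores, so the size/sets/line-size triples look internally consistent.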
>
> Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com>
> ---
> arch/arm64/boot/dts/exynos/exynos5433.dtsi | 70 ++++++++++++++++++++++
> 1 file changed, 70 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/exynos/exynos5433.dtsi b/arch/arm64/boot/dts/exynos/exynos5433.dtsi
> index 18a912eee360..8183a59e9046 100644
> --- a/arch/arm64/boot/dts/exynos/exynos5433.dtsi
> +++ b/arch/arm64/boot/dts/exynos/exynos5433.dtsi
> @@ -62,6 +62,13 @@
> clock-names = "apolloclk";
> operating-points-v2 = <&cluster_a53_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0x8000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <128>;
> + next-level-cache = <&apollo_l2>;
> };
>
> cpu1: cpu@101 {
> @@ -72,6 +79,13 @@
> clock-frequency = <1300000000>;
> operating-points-v2 = <&cluster_a53_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0x8000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <128>;
> + next-level-cache = <&apollo_l2>;
> };
>
> cpu2: cpu@102 {
> @@ -82,6 +96,13 @@
> clock-frequency = <1300000000>;
> operating-points-v2 = <&cluster_a53_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0x8000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <128>;
> + next-level-cache = <&apollo_l2>;
> };
>
> cpu3: cpu@103 {
> @@ -92,6 +113,13 @@
> clock-frequency = <1300000000>;
> operating-points-v2 = <&cluster_a53_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0x8000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <128>;
> + next-level-cache = <&apollo_l2>;
> };
>
> cpu4: cpu@0 {
> @@ -104,6 +132,13 @@
> clock-names = "atlasclk";
> operating-points-v2 = <&cluster_a57_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0xc000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <256>;
> + next-level-cache = <&atlas_l2>;
> };
>
> cpu5: cpu@1 {
> @@ -114,6 +149,13 @@
> clock-frequency = <1900000000>;
> operating-points-v2 = <&cluster_a57_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0xc000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <256>;
> + next-level-cache = <&atlas_l2>;
> };
>
> cpu6: cpu@2 {
> @@ -124,6 +166,13 @@
> clock-frequency = <1900000000>;
> operating-points-v2 = <&cluster_a57_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0xc000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <256>;
> + next-level-cache = <&atlas_l2>;
> };
>
> cpu7: cpu@3 {
> @@ -134,6 +183,27 @@
> clock-frequency = <1900000000>;
> operating-points-v2 = <&cluster_a57_opp_table>;
> #cooling-cells = <2>;
> + i-cache-size = <0xc000>;
> + i-cache-line-size = <64>;
> + i-cache-sets = <256>;
> + d-cache-size = <0x8000>;
> + d-cache-line-size = <64>;
> + d-cache-sets = <256>;
> + next-level-cache = <&atlas_l2>;
> + };
> +
> + atlas_l2: l2-cache0 {
A few other nodes (PMU, OPP tables) use a57/a53 names instead of the
codenames, so I would prefer to stay with them (e.g. cluster_a57_l2).
For Exynos7 it's fine, as it already uses Atlas in its labels.
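Concretely, that would look something like this (a sketch only; the labels cluster_a57_l2/cluster_a53_l2 follow the existing a57/a53 naming scheme, values unchanged from the patch):

```dts
cluster_a57_l2: l2-cache0 {
	compatible = "cache";
	cache-size = <0x200000>;
	cache-line-size = <64>;
	cache-sets = <2048>;
};

cluster_a53_l2: l2-cache1 {
	compatible = "cache";
	cache-size = <0x40000>;
	cache-line-size = <64>;
	cache-sets = <256>;
};
```

The next-level-cache phandles in the CPU nodes would then reference these labels instead.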
> + compatible = "cache";
> + cache-size = <0x200000>;
> + cache-line-size = <64>;
> + cache-sets = <2048>;
> + };
> +
> + apollo_l2: l2-cache1 {
> + compatible = "cache";
> + cache-size = <0x40000>;
> + cache-line-size = <64>;
> + cache-sets = <256>;
> };
> };
>
>
Best regards,
Krzysztof