From: Eric Lin <eric.lin@sifive.com>
To: Conor Dooley <conor@kernel.org>
Cc: robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
	palmer@dabbelt.com, paul.walmsley@sifive.com,
	aou@eecs.berkeley.edu, maz@kernel.org, chenhuacai@kernel.org,
	baolu.lu@linux.intel.com, will@kernel.org,
	kan.liang@linux.intel.com, nnac123@linux.ibm.com,
	pierre.gondois@arm.com, huangguangbin2@huawei.com,
	jgross@suse.com, chao.gao@intel.com, maobibo@loongson.cn,
	linux-riscv@lists.infradead.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, dslin1010@gmail.com,
	Nick Hu <nick.hu@sifive.com>, Zong Li <zong.li@sifive.com>
Subject: Re: [PATCH 1/3] soc: sifive: Add SiFive private L2 cache support
Date: Fri, 23 Jun 2023 17:49:22 +0800
Message-ID: <CAPqJEFr1FfuHYX=BP3BXAsZ0V4JFDqzu7_jjPVgcijOFQzddmQ@mail.gmail.com>
In-Reply-To: <20230616-errand-glutton-f64783da058c@spud>

Hi Conor,

On Sat, Jun 17, 2023 at 5:05 AM Conor Dooley <conor@kernel.org> wrote:
>
> Hey Eric,
>
> On Fri, Jun 16, 2023 at 02:32:08PM +0800, Eric Lin wrote:
> > This adds SiFive private L2 cache driver which will show
> > cache config information when booting and add cpu hotplug
> > callback functions.
> >
> > Signed-off-by: Eric Lin <eric.lin@sifive.com>
> > Signed-off-by: Nick Hu <nick.hu@sifive.com>
>
> Missing a Co-developed-by for Nick?

Yes, I'll add Co-developed-by for Nick in v2. Thanks.
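For reference, I plan to order the tags as submitting-patches.rst
describes, with the Co-developed-by immediately followed by Nick's
Signed-off-by and my Signed-off-by last as the submitter:

    Co-developed-by: Nick Hu <nick.hu@sifive.com>
    Signed-off-by: Nick Hu <nick.hu@sifive.com>
    Signed-off-by: Eric Lin <eric.lin@sifive.com>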

>
>
> > +static void pl2_config_read(void __iomem *pl2_base, int cpu)
> > +{
> > +     u32 regval, bank, way, set, cacheline;
> > +
> > +     regval = readl(pl2_base);
> > +     bank = regval & 0xff;
> > +     pr_info("in the CPU: %d\n", cpu);
> > +     pr_info("No. of Banks in the cache: %d\n", bank);
> > +     way = (regval & 0xff00) >> 8;
> > +     pr_info("No. of ways per bank: %d\n", way);
> > +     set = (regval & 0xff0000) >> 16;
> > +     pr_info("Total sets: %llu\n", (uint64_t)1 << set);
> > +     cacheline = (regval & 0xff000000) >> 24;
> > +     pr_info("Bytes per cache block: %llu\n", (uint64_t)1 << cacheline);
> > +     pr_info("Size: %d\n", way << (set + cacheline));
> > +}
>
> Isn't this basically all information that we get anyway in sysfs based
> on what gets put into the DT, except printed out once per CPU at
> boottime?
> If there's reason to keep it, please do as suggested by Ben and cut down
> the number of lines emitted. Look at the ccache one for comparison:
>         static void ccache_config_read(void)
>         {
>                 u32 cfg;
>
>                 cfg = readl(ccache_base + SIFIVE_CCACHE_CONFIG);
>                 pr_info("%llu banks, %llu ways, sets/bank=%llu, bytes/block=%llu\n",
>                         FIELD_GET(SIFIVE_CCACHE_CONFIG_BANK_MASK, cfg),
>                         FIELD_GET(SIFIVE_CCACHE_CONFIG_WAYS_MASK, cfg),
>                         BIT_ULL(FIELD_GET(SIFIVE_CCACHE_CONFIG_SETS_MASK, cfg)),
>                         BIT_ULL(FIELD_GET(SIFIVE_CCACHE_CONFIG_BLKS_MASK, cfg)));
>
>                 cfg = readl(ccache_base + SIFIVE_CCACHE_WAYENABLE);
>                 pr_info("Index of the largest way enabled: %u\n", cfg);
>         }
> It'd also be good to print the same things as the ccache, no?
>

Yes, I'll cut down the number of lines emitted in v2, following the
ccache driver. Thanks for the suggestion.
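Something like this, mirroring the ccache format. The
SIFIVE_PL2_CONFIG_*_MASK names below are placeholders until I confirm
the exact pL2 register layout (the bit ranges are taken from the
masks/shifts in the current code); FIELD_GET() comes from
<linux/bitfield.h> and GENMASK_ULL()/BIT_ULL() from <linux/bits.h>:

        #define SIFIVE_PL2_CONFIG_BANK_MASK     GENMASK_ULL(7, 0)
        #define SIFIVE_PL2_CONFIG_WAYS_MASK     GENMASK_ULL(15, 8)
        #define SIFIVE_PL2_CONFIG_SETS_MASK     GENMASK_ULL(23, 16)
        #define SIFIVE_PL2_CONFIG_BLKS_MASK     GENMASK_ULL(31, 24)

        static void pl2_config_read(void __iomem *pl2_base, int cpu)
        {
                u32 cfg;

                /* Decode the pL2 config register, one line per cache. */
                cfg = readl(pl2_base);
                pr_info("CPU%d pL2: %llu banks, %llu ways, sets/bank=%llu, bytes/block=%llu\n",
                        cpu,
                        FIELD_GET(SIFIVE_PL2_CONFIG_BANK_MASK, cfg),
                        FIELD_GET(SIFIVE_PL2_CONFIG_WAYS_MASK, cfg),
                        BIT_ULL(FIELD_GET(SIFIVE_PL2_CONFIG_SETS_MASK, cfg)),
                        BIT_ULL(FIELD_GET(SIFIVE_PL2_CONFIG_BLKS_MASK, cfg)));
        }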

> > +static int sifive_pl2_cache_dev_probe(struct platform_device *pdev)
> > +{
> > +     struct resource *res;
> > +     int cpu, ret = -EINVAL;
> > +     struct device_node *cpu_node, *pl2_node;
> > +     struct sifive_pl2_state *pl2_state = NULL;
> > +     void __iomem *pl2_base;
>
> Please pick a sensible ordering for variables. IDC if it is reverse xmas
> tree, or sorting by types, but this just seems quite random..
>

Yes, I'll sort the variables by type in v2.
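Something like this (one possible type-sorted ordering of the same
declarations):

        struct sifive_pl2_state *pl2_state = NULL;
        struct device_node *cpu_node, *pl2_node;
        struct resource *res;
        void __iomem *pl2_base;
        int cpu, ret = -EINVAL;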

> > +     /* Traverse all cpu nodes to find the one mapping to its pl2 node. */
> > +     for_each_cpu(cpu, cpu_possible_mask) {
> > +             cpu_node = of_cpu_device_node_get(cpu);
> > +             pl2_node = of_parse_phandle(cpu_node, "next-level-cache", 0);
> > +
> > +             /* Found it! */
> > +             if (dev_of_node(&pdev->dev) == pl2_node) {
> > +                     /* Use cpu to get its percpu data sifive_pl2_state. */
> > +                     pl2_state = per_cpu_ptr(&sifive_pl2_state, cpu);
> > +                     break;
> > +             }
> > +     }
> > +
> > +     if (!pl2_state) {
> > +             pr_err("Not found the corresponding cpu_node in dts.\n");
>
> I don't think this error message is going to be helpful in figuring out
> where the problem is on a machine with many of the caches. More
> information about *which* cache caused it would be good.
> Also it is not grammatically correct, it should read something like
> "Failed to find CPU node for cache@abc" or something along those lines.
>

OK, I'll rewrite the error message in v2 so it identifies the cache
node that failed and reads correctly. Thanks for the suggestion.
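Something along these lines, using %pOF to print the offending cache
node (exact wording still open):

        dev_err(&pdev->dev, "Failed to find CPU node for %pOF\n",
                dev_of_node(&pdev->dev));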

> > +             goto early_err;
>
> early_err just returns ret. Why not just return the error directly?
>

Yeah, it can just return ret. I'll fix it in v2.
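i.e. something like this for the ioremap path, with the early_err
label removed:

        pl2_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(pl2_base))
                return PTR_ERR(pl2_base);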

> > +     }
> > +
> > +     /* Set base address of select and counter registers. */
> > +     pl2_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
> > +     if (IS_ERR(pl2_base)) {
> > +             ret = PTR_ERR(pl2_base);
> > +             goto early_err;
> > +     }
> > +
> > +     /* Print pL2 configs. */
> > +     pl2_config_read(pl2_base, cpu);
> > +     pl2_state->pl2_base = pl2_base;
> > +
> > +     return 0;
> > +
> > +early_err:
> > +     return ret;
> > +}
>
> > +static struct platform_driver sifive_pl2_cache_driver = {
> > +     .driver = {
> > +                .name = "SiFive-pL2-cache",
> > +                .of_match_table = sifive_pl2_cache_of_ids,
> > +                },
> > +     .probe = sifive_pl2_cache_dev_probe,
> > +};
> > +
> > +static int __init sifive_pl2_cache_init(void)
> > +{
> > +     int ret;
> > +
> > +     ret = cpuhp_setup_state(CPUHP_AP_RISCV_SIFIVE_PL2_ONLINE,
> > +                             "soc/sifive/pl2:online",
> > +                                   sifive_pl2_online_cpu,
> > +                                   sifive_pl2_offline_cpu);
>
> Got some weird use of whitespace here & above, please remove the spaces.
>

Yes, I'll remove the whitespace in v2.
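i.e. with the continuation lines aligned consistently:

        ret = cpuhp_setup_state(CPUHP_AP_RISCV_SIFIVE_PL2_ONLINE,
                                "soc/sifive/pl2:online",
                                sifive_pl2_online_cpu,
                                sifive_pl2_offline_cpu);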
Thanks for the review.

Best Regards,
Eric Lin.

> Cheers,
> Conor.

Thread overview: 30+ messages
2023-06-16  6:32 [PATCH 0/3] Add SiFive Private L2 cache and PMU driver Eric Lin
2023-06-16  6:32 ` [PATCH 1/3] soc: sifive: Add SiFive private L2 cache support Eric Lin
2023-06-16  8:30   ` Ben Dooks
2023-06-23  8:21     ` Eric Lin
2023-06-16 19:02   ` Christophe JAILLET
2023-06-23  8:28     ` Eric Lin
2023-06-16 21:05   ` Conor Dooley
2023-06-23  9:49     ` Eric Lin [this message]
2023-06-16  6:32 ` [PATCH 2/3] soc: sifive: Add SiFive private L2 cache PMU driver Eric Lin
2023-06-16 10:12   ` Conor Dooley
2023-06-20  3:14     ` Eric Lin
2023-06-21 15:17       ` Conor Dooley
2023-06-23 13:24         ` Will Deacon
2023-06-23 16:03           ` Eric Lin
2023-07-11  8:41       ` Ben Dooks
2023-06-16 19:05   ` Christophe JAILLET
2023-06-16  6:32 ` [PATCH 3/3] dt-bindings: riscv: sifive: Add SiFive Private L2 cache controller Eric Lin
2023-06-16 10:10   ` Conor Dooley
2023-06-16 10:37     ` Ben Dooks
2023-06-26  3:06     ` Eric Lin
2023-06-16 10:45   ` Krzysztof Kozlowski
2023-06-26  3:26     ` Eric Lin
2023-06-26  6:19       ` Krzysztof Kozlowski
2023-06-28 16:31         ` Eric Lin
2023-07-01  8:22           ` Krzysztof Kozlowski
2023-07-12 11:09             ` Eric Lin
2023-07-12 12:30               ` Krzysztof Kozlowski
2023-07-12 12:48                 ` Conor Dooley
2023-07-20 10:16                   ` Eric Lin
2023-07-20  9:49                 ` Eric Lin
