linux-pm.vger.kernel.org archive mirror
From: Sumit Gupta <sumitg@nvidia.com>
To: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>,
	<treding@nvidia.com>, <dmitry.osipenko@collabora.com>,
	<viresh.kumar@linaro.org>, <rafael@kernel.org>,
	<jonathanh@nvidia.com>, <robh+dt@kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-tegra@vger.kernel.org>,
	<linux-pm@vger.kernel.org>, <devicetree@vger.kernel.org>
Cc: <sanjayc@nvidia.com>, <ksitaraman@nvidia.com>, <ishah@nvidia.com>,
	<bbasu@nvidia.com>, Sumit Gupta <sumitg@nvidia.com>
Subject: Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234
Date: Tue, 7 Mar 2023 00:58:19 +0530	[thread overview]
Message-ID: <841dd7b5-98b2-e1a5-2387-a48d7abf4f38@nvidia.com> (raw)
In-Reply-To: <b4777025-0220-b1e4-f6f3-00d75ec8f0be@linaro.org>



On 22/12/22 17:02, Krzysztof Kozlowski wrote:
> 
> 
> On 20/12/2022 17:02, Sumit Gupta wrote:
>> Adding Interconnect framework support to dynamically set the DRAM
>> bandwidth from different clients. Both the MC and EMC drivers are
>> added as ICC providers. The path for any request will be:
>>   MC-Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> MC clients will request for bandwidth to the MC driver which will
>> pass the tegra icc node having current request info to the EMC driver.
>> The EMC driver will send the BPMP Client ID, Client type and bandwidth
>> request info to the BPMP-FW where the final DRAM freq for achieving the
>> requested bandwidth is set based on the passed parameters.
>>
>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>> ---
>>   drivers/memory/tegra/mc.c           |  18 ++-
>>   drivers/memory/tegra/tegra186-emc.c | 166 ++++++++++++++++++++++++++++
>>   drivers/memory/tegra/tegra234.c     | 101 ++++++++++++++++-
>>   include/soc/tegra/mc.h              |   7 ++
>>   include/soc/tegra/tegra-icc.h       |  72 ++++++++++++
>>   5 files changed, 362 insertions(+), 2 deletions(-)
>>   create mode 100644 include/soc/tegra/tegra-icc.h
>>
>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>> index 592907546ee6..ff887fb03bce 100644
>> --- a/drivers/memory/tegra/mc.c
>> +++ b/drivers/memory/tegra/mc.c
>> @@ -17,6 +17,7 @@
>>   #include <linux/sort.h>
>>
>>   #include <soc/tegra/fuse.h>
>> +#include <soc/tegra/tegra-icc.h>
>>
>>   #include "mc.h"
>>
>> @@ -779,6 +780,7 @@ const char *const tegra_mc_error_names[8] = {
>>    */
>>   static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>   {
>> +     struct tegra_icc_node *tnode;
>>        struct icc_node *node;
>>        unsigned int i;
>>        int err;
>> @@ -792,7 +794,11 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>        mc->provider.data = &mc->provider;
>>        mc->provider.set = mc->soc->icc_ops->set;
>>        mc->provider.aggregate = mc->soc->icc_ops->aggregate;
>> -     mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>> +     mc->provider.get_bw = mc->soc->icc_ops->get_bw;
>> +     if (mc->soc->icc_ops->xlate)
>> +             mc->provider.xlate = mc->soc->icc_ops->xlate;
>> +     if (mc->soc->icc_ops->xlate_extended)
>> +             mc->provider.xlate_extended = mc->soc->icc_ops->xlate_extended;
>>
>>        err = icc_provider_add(&mc->provider);
>>        if (err)
>> @@ -814,6 +820,10 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>                goto remove_nodes;
>>
>>        for (i = 0; i < mc->soc->num_clients; i++) {
>> +             tnode = kzalloc(sizeof(*tnode), GFP_KERNEL);
>> +             if (!tnode)
>> +                     return -ENOMEM;
>> +
>>                /* create MC client node */
>>                node = icc_node_create(mc->soc->clients[i].id);
>>                if (IS_ERR(node)) {
>> @@ -828,6 +838,12 @@ static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>                err = icc_link_create(node, TEGRA_ICC_MC);
>>                if (err)
>>                        goto remove_nodes;
>> +
>> +             node->data = tnode;
> 
> Where is it freed?
> 
> 
> (...)
> 
I have removed 'struct tegra_icc_node' in v2. Instead, 'node->data'
now points to the MC client's entry in the static table
'tegra234_mc_clients' as below. So, the old allocation of
'struct tegra_icc_node' is no longer required.

  + node->data = (char *)&(mc->soc->clients[i]);

>>
>>   struct tegra_mc_ops {
>> @@ -238,6 +243,8 @@ struct tegra_mc {
>>        struct {
>>                struct dentry *root;
>>        } debugfs;
>> +
>> +     struct tegra_icc_node *curr_tnode;
>>   };
>>
>>   int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
>> diff --git a/include/soc/tegra/tegra-icc.h b/include/soc/tegra/tegra-icc.h
>> new file mode 100644
>> index 000000000000..3855d8571281
>> --- /dev/null
>> +++ b/include/soc/tegra/tegra-icc.h
> 
> Why not in linux?
> 
I have moved the file to 'include/linux/tegra-icc.h' in v2.

>> @@ -0,0 +1,72 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright (C) 2022-2023 NVIDIA CORPORATION.  All rights reserved.
>> + */
>> +
>> +#ifndef MEMORY_TEGRA_ICC_H
> 
> This does not match the path/name.
> 
I have changed the guard name to match the new path in v2:

   +#ifndef LINUX_TEGRA_ICC_H
   +#define LINUX_TEGRA_ICC_H

>> +#define MEMORY_TEGRA_ICC_H
>> +
>> +enum tegra_icc_client_type {
>> +     TEGRA_ICC_NONE,
>> +     TEGRA_ICC_NISO,
>> +     TEGRA_ICC_ISO_DISPLAY,
>> +     TEGRA_ICC_ISO_VI,
>> +     TEGRA_ICC_ISO_AUDIO,
>> +     TEGRA_ICC_ISO_VIFAL,
>> +};
>> +
>> +struct tegra_icc_node {
>> +     struct icc_node *node;
>> +     struct tegra_mc *mc;
>> +     u32 bpmp_id;
>> +     u32 type;
>> +};
>> +
>> +/* ICC ID's for MC client's used in BPMP */
>> +#define TEGRA_ICC_BPMP_DEBUG         1
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER0  2
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER1  3
>> +#define TEGRA_ICC_BPMP_CPU_CLUSTER2  4
>> +#define TEGRA_ICC_BPMP_GPU           5
>> +#define TEGRA_ICC_BPMP_CACTMON               6
>> +#define TEGRA_ICC_BPMP_DISPLAY               7
>> +#define TEGRA_ICC_BPMP_VI            8
>> +#define TEGRA_ICC_BPMP_EQOS          9
>> +#define TEGRA_ICC_BPMP_PCIE_0                10
>> +#define TEGRA_ICC_BPMP_PCIE_1                11
>> +#define TEGRA_ICC_BPMP_PCIE_2                12
>> +#define TEGRA_ICC_BPMP_PCIE_3                13
>> +#define TEGRA_ICC_BPMP_PCIE_4                14
>> +#define TEGRA_ICC_BPMP_PCIE_5                15
>> +#define TEGRA_ICC_BPMP_PCIE_6                16
>> +#define TEGRA_ICC_BPMP_PCIE_7                17
>> +#define TEGRA_ICC_BPMP_PCIE_8                18
>> +#define TEGRA_ICC_BPMP_PCIE_9                19
>> +#define TEGRA_ICC_BPMP_PCIE_10               20
>> +#define TEGRA_ICC_BPMP_DLA_0         21
>> +#define TEGRA_ICC_BPMP_DLA_1         22
>> +#define TEGRA_ICC_BPMP_SDMMC_1               23
>> +#define TEGRA_ICC_BPMP_SDMMC_2               24
>> +#define TEGRA_ICC_BPMP_SDMMC_3               25
>> +#define TEGRA_ICC_BPMP_SDMMC_4               26
>> +#define TEGRA_ICC_BPMP_NVDEC         27
>> +#define TEGRA_ICC_BPMP_NVENC         28
>> +#define TEGRA_ICC_BPMP_NVJPG_0               29
>> +#define TEGRA_ICC_BPMP_NVJPG_1               30
>> +#define TEGRA_ICC_BPMP_OFAA          31
>> +#define TEGRA_ICC_BPMP_XUSB_HOST     32
>> +#define TEGRA_ICC_BPMP_XUSB_DEV              33
>> +#define TEGRA_ICC_BPMP_TSEC          34
>> +#define TEGRA_ICC_BPMP_VIC           35
>> +#define TEGRA_ICC_BPMP_APE           36
>> +#define TEGRA_ICC_BPMP_APEDMA                37
>> +#define TEGRA_ICC_BPMP_SE            38
>> +#define TEGRA_ICC_BPMP_ISP           39
>> +#define TEGRA_ICC_BPMP_HDA           40
>> +#define TEGRA_ICC_BPMP_VIFAL         41
>> +#define TEGRA_ICC_BPMP_VI2FAL                42
>> +#define TEGRA_ICC_BPMP_VI2           43
>> +#define TEGRA_ICC_BPMP_RCE           44
>> +#define TEGRA_ICC_BPMP_PVA           45
>> +
>> +#endif /* MEMORY_TEGRA_ICC_H */
> 
> Best regards,
> Krzysztof
> 

