From: Channa <ckadabi@codeaurora.org>
To: Rob Herring <robh@kernel.org>
Cc: Rishabh Bhatnagar <rishabhb@codeaurora.org>,
"moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE"
<linux-arm-kernel@lists.infradead.org>,
linux-arm-msm <linux-arm-msm@vger.kernel.org>,
devicetree@vger.kernel.org, linux-arm@lists.infradead.org,
linux-kernel@vger.kernel.org, Trilok Soni <tsoni@codeaurora.org>,
Kyle Yan <kyan@codeaurora.org>,
Stanimir Varbanov <stanimir.varbanov@linaro.org>,
Evan Green <evgreen@chromium.org>
Subject: Re: [PATCH v4 1/2] Documentation: Documentation for qcom, llcc
Date: Wed, 18 Apr 2018 11:11:58 -0700 [thread overview]
Message-ID: <589e84221ca7723c1739f713216abce5@codeaurora.org> (raw)
In-Reply-To: <CAL_JsqKihiUwPW-aKYtG2cSFnOZonygahUeO5kgLjL3GYO7w=Q@mail.gmail.com>
On 2018-04-18 07:52, Rob Herring wrote:
> On Tue, Apr 17, 2018 at 5:12 PM, <rishabhb@codeaurora.org> wrote:
>> On 2018-04-17 10:43, rishabhb@codeaurora.org wrote:
>>>
>>> On 2018-04-16 07:59, Rob Herring wrote:
>>>>
>>>> On Tue, Apr 10, 2018 at 01:08:12PM -0700, Rishabh Bhatnagar wrote:
>>>>>
>>>>> Documentation for last level cache controller device tree bindings,
>>>>> client bindings usage examples.
>>>>
>>>>
>>>> "Documentation: Documentation ..."? That wastes a lot of the subject
>>>> line... The preferred prefix is "dt-bindings: ..."
>>>>
>>>>>
>>>>> Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
>>>>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>>>>> ---
>>>>> .../devicetree/bindings/arm/msm/qcom,llcc.txt | 58 ++++++++++++++++++++++
>>>>> 1 file changed, 58 insertions(+)
>>>>> create mode 100644 Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>>>>>
>>>>> diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>>>>> new file mode 100644
>>>>> index 0000000..497cf0f
>>>>> --- /dev/null
>>>>> +++ b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>>>>> @@ -0,0 +1,58 @@
>>>>> +== Introduction==
>>>>> +
>>>>> +LLCC (Last Level Cache Controller) provides last level of cache memory in SOC,
>>>>> +that can be shared by multiple clients. Clients here are different cores in the
>>>>> +SOC, the idea is to minimize the local caches at the clients and migrate to
>>>>> +common pool of memory
>>>>> +
>>>>> +Properties:
>>>>> +- compatible:
>>>>> + Usage: required
>>>>> + Value type: <string>
>>>>> + Definition: must be "qcom,sdm845-llcc"
>>>>> +
>>>>> +- reg:
>>>>> + Usage: required
>>>>> + Value Type: <prop-encoded-array>
>>>>> + Definition: must be addresses and sizes of the LLCC registers
>>>>
>>>>
>>>> How many address ranges?
>>>>
>>> It consists of just one address range. I'll edit the definition to make
>>> it clearer.
>>>>>
>>>>> +
>>>>> +- #cache-cells:
>>>>
>>>>
>>>> This is all written as it is a common binding, but it is not one.
>>>>
>>>> You already have most of the configuration data for each client in the
>>>> driver, I think I'd just put the client connection there too. Is there
>>>> any variation of this for a given SoC?
>>>>
>>> #cache-cells and max-slices won't change for a given SoC. So you want me
>>> to hard-code them in the driver itself?
>>>
>> I can use the of_parse_phandle_with_fixed_args function and fix the number
>> of args as 1 instead of keeping #cache-cells here in DT. Does that look fine?
>
> No, I'm saying why even put cache-slices properties in DT to begin
> with? You could just define client id's within the kernel and clients
> can use those instead of getting the id from the DT.
The reason to add cache-slices here is to establish a connection between
a client and the system cache. For example, if we have multiple instances of
system cache blocks and a client wants to choose a system cache instance
based on the use case, then it's easier to establish this connection using
the device tree than by hard-coding it in the driver.
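As a rough sketch of what that connection could look like in DT (node names, unit addresses, register sizes, and the usecase ID below are illustrative placeholders, not taken from the actual binding):

```dts
llcc0: system-cache-controller@1100000 {
	compatible = "qcom,sdm845-llcc";
	reg = <0x1100000 0x200000>;
	#cache-cells = <1>;
	max-slices = <32>;
};

video-codec@aa00000 {
	/* the phandle selects which LLCC instance to use; the extra
	 * cell is the usecase ID the driver resolves to a slice */
	cache-slices = <&llcc0 2>;
};
```

With a second LLCC instance in the tree, a client would only need to point its phandle at the other node; nothing in the driver would have to change.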
>
> I have a couple of hesitations with putting this into the DT. First, I
> think a cache is just one aspect of describing the interconnect
> between masters and memory (and there's been discussions on
> interconnect bindings too) and any binding needs to consider all of
> the aspects of the interconnect. Second, I'd expect this cache
> architecture will change SoC to SoC and the binding here is pretty
> closely tied to the current cache implementation (e.g. slices). If
> there were a bunch of SoCs with the same design and just different
> client IDs (like interrupt IDs), then I'd feel differently.
This is partially true: a bunch of SoCs would support this design, but
client IDs are not expected to change. So ideally client drivers could
hard-code these IDs.
However, I have other concerns about moving the client IDs into the driver.
The API as implemented today works as follows:
#1. The client calls into the system cache driver to get a cache slice
handle, with the usecase ID as input.
#2. The system cache driver gets the phandle of the system cache instance
from the client device to obtain the private data.
#3. Based on the usecase ID, it performs a lookup in the private data to get
the cache slice handle.
#4. It returns the cache slice handle to the client.
If we don't have the connection between client & system cache, then the
private data needs to be declared as a static global in the system cache
driver, which limits us to just one instance of the system cache block.
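The lookup in steps #2-#4 can be modeled with a small self-contained sketch (this is an illustration of the argument, not the actual kernel API; the struct and function names are made up). The point is that the usecase-ID table lives in per-instance private data reached through the client's phandle, so several LLCC instances can coexist; a single static table could not distinguish them.

```c
#include <assert.h>
#include <stddef.h>

/* One table entry: maps a client usecase ID to a slice handle. */
struct slice_entry {
	int usecase_id;
	int slice_handle;
};

/* Per-instance private data, as it would be reached via the phandle
 * in the client's cache-slices property (step #2). */
struct llcc_priv {
	const struct slice_entry *table;
	size_t nr;
};

/* Steps #3-#4: resolve the usecase ID against the instance the
 * client pointed at, returning its slice handle or -1 on failure. */
static int llcc_slice_lookup(const struct llcc_priv *priv, int uid)
{
	for (size_t i = 0; i < priv->nr; i++)
		if (priv->table[i].usecase_id == uid)
			return priv->table[i].slice_handle;
	return -1;	/* no such usecase on this instance */
}
```

Because each `struct llcc_priv` is independent, two instances can map the same usecase ID to different slices, which is exactly what a static global table would prevent.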
>
> Rob
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
Thread overview: 16+ messages
2018-04-10 20:08 [PATCH v4 0/2] SDM845 System Cache Driver Rishabh Bhatnagar
2018-04-10 20:08 ` [PATCH v4 1/2] Documentation: Documentation for qcom, llcc Rishabh Bhatnagar
2018-04-12 22:07 ` Evan Green
2018-04-16 14:59 ` Rob Herring
2018-04-17 17:43 ` rishabhb
2018-04-17 22:12 ` rishabhb
2018-04-18 14:52 ` Rob Herring
2018-04-18 18:11 ` Channa [this message]
2018-04-20 18:51 ` Channa
2018-04-10 20:08 ` [PATCH v4 2/2] drivers: soc: Add LLCC driver Rishabh Bhatnagar
2018-04-10 20:31 ` Jordan Crouse
2018-04-12 22:02 ` Evan Green
2018-04-13 23:08 ` rishabhb
2018-04-16 17:14 ` Evan Green
2018-04-16 20:50 ` rishabhb
2018-04-16 17:20 ` saiprakash.ranjan