From: Sibi Sankar <quic_sibis@quicinc.com>
To: Rob Herring <robh@kernel.org>
Cc: <andersson@kernel.org>, <krzysztof.kozlowski+dt@linaro.org>,
<sudeep.holla@arm.com>, <cristian.marussi@arm.com>,
<agross@kernel.org>, <linux-arm-msm@vger.kernel.org>,
<devicetree@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<konrad.dybcio@somainline.org>, <quic_avajid@quicinc.com>
Subject: Re: [RFC 1/2] dt-bindings: firmware: arm,scmi: Add support for memlat vendor protocol
Date: Tue, 8 Nov 2022 16:18:15 +0530 [thread overview]
Message-ID: <dd4821da-331f-4529-8162-90bfe95aa8f8@quicinc.com> (raw)
In-Reply-To: <20221104180339.GA2079655-robh@kernel.org>
Hey Rob,
Thanks for taking the time to review the series.
On 11/4/22 23:33, Rob Herring wrote:
> On Thu, Nov 03, 2022 at 10:28:31AM +0530, Sibi Sankar wrote:
>> Add bindings support for the SCMI QTI memlat (memory latency) vendor
>> protocol. The memlat vendor protocol enables the frequency scaling of
>> various buses (L3/LLCC/DDR) based on the memory latency governor
>> running on the CPUSS Control Processor.
>
> I thought the interconnect binding was what provided details for bus
> scaling.
The bus scaling in this particular case is done by SCP FW and not
by any kernel client. The SCMI vendor protocol is only used to
pass on the bandwidth requirements during initialization, and SCP FW
then votes on them independently at runtime.
>
>>
>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>> ---
>> .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
>> 1 file changed, 164 insertions(+)
>>
>> diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> index 1c0388da6721..efc8a5a8bffe 100644
>> --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> @@ -189,6 +189,47 @@ properties:
>> reg:
>> const: 0x18
>>
>> + protocol@80:
>> + type: object
>> + properties:
>> + reg:
>> + const: 0x80
>> +
>> + qcom,bus-type:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + items:
>> + minItems: 1
>> + description:
>> + Identifier of the bus type to be scaled by the memlat protocol.
>> +
>> + cpu-map:
>
> cpu-map only goes under /cpus node.
Sure, I'll use a qcom-specific node instead.
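As a rough sketch, the cluster description could move into a vendor-specific container node along these lines (the `qcom,cpu-map` node name here is only a placeholder for illustration, not a settled binding):

```dts
scmi_memlat: protocol@80 {
	reg = <0x80>;
	qcom,bus-type = <0x2>;

	/* Placeholder name; since this node does not live under /cpus,
	 * it avoids colliding with the generic cpu-map binding. */
	qcom,cpu-map {
		cluster0 {
			operating-points-v2 = <&cpu0_opp_table>;
		};
	};
};
```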
>
>> + type: object
>> + description:
>> + The list of all cpu cluster configurations to be tracked by the memlat protocol
>> +
>> + patternProperties:
>> + '^cluster[0-9]':
>> + type: object
>> + description:
>> + Each cluster node describes the frequency domain associated with the
>> + CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
>> +
>> + properties:
>
> cpu-map nodes don't have properties.
ack
>
>> + operating-points-v2: true
>> +
>> + qcom,freq-domain:
>
> Please don't add new users of this. Use the performance-domains binding
> instead.
The plan was to re-use qcom,freq-domain to determine the frequency
domain of the CPUs, since those properties are already present in the
dts. I guess using the performance-domains binding would require a
corresponding change in the qcom-cpufreq-hw driver as well. Ack.
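For illustration, a cluster node switched over to the generic performance-domains binding might look like the sketch below (assuming qcom-cpufreq-hw gains a `#performance-domain-cells` property; the provider node's address and compatible are illustrative):

```dts
/* Provider: the CPUFREQ HW engine exposes its frequency domains. */
cpufreq_hw: cpufreq@18591000 {
	compatible = "qcom,cpufreq-epss";
	reg = <0x18591000 0x1000>;
	reg-names = "freq-domain0";
	#performance-domain-cells = <1>;
};

/* Consumer: the cluster node references domain 0 of the provider
 * instead of the vendor-specific qcom,freq-domain property. */
cluster0 {
	performance-domains = <&cpufreq_hw 0>;
	operating-points-v2 = <&cpu0_opp_table>;
};
```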
>
>> + $ref: /schemas/types.yaml#/definitions/phandle-array
>> + description:
>> + Reference to the frequency domain of the CPUFREQ HW engine
>> + items:
>> + - items:
>> + - description: phandle to CPUFREQ HW engine
>> + - description: frequency domain associated with the cluster
>> +
>> + required:
>> + - qcom,freq-domain
>> + - operating-points-v2
>> +
>> additionalProperties: false
>>
>> patternProperties:
>> @@ -429,4 +470,127 @@ examples:
>> };
>> };
>>
>> + - |
>> + #include <dt-bindings/interrupt-controller/arm-gic.h>
>> +
>> + firmware {
>> + scmi {
>> + compatible = "arm,scmi";
>> +
>> + #address-cells = <1>;
>> + #size-cells = <0>;
>> +
>> + mboxes = <&cpucp_mbox>;
>> + mbox-names = "tx";
>> + shmem = <&cpu_scp_lpri>;
>> +
>> + scmi_memlat: protocol@80 {
>> + reg = <0x80>;
>> + qcom,bus-type = <0x2>;
>> +
>> + cpu-map {
>> + cluster0 {
>> + qcom,freq-domain = <&cpufreq_hw 0>;
>> + operating-points-v2 = <&cpu0_opp_table>;
>> + };
>> +
>> + cluster1 {
>> + qcom,freq-domain = <&cpufreq_hw 1>;
>> + operating-points-v2 = <&cpu4_opp_table>;
>> + };
>> +
>> + cluster2 {
>> + qcom,freq-domain = <&cpufreq_hw 2>;
>> + operating-points-v2 = <&cpu7_opp_table>;
>> + };
>> + };
>> + };
>> + };
>> +
>> + cpu0_opp_table: opp-table-cpu0 {
>> + compatible = "operating-points-v2";
>> +
>> + cpu0_opp_300mhz: opp-300000000 {
>> + opp-hz = /bits/ 64 <300000000>;
>> + opp-peak-kBps = <9600000>;
>> + };
>> +
>> + cpu0_opp_1325mhz: opp-1324800000 {
>> + opp-hz = /bits/ 64 <1324800000>;
>> + opp-peak-kBps = <33792000>;
>> + };
>> +
>> + cpu0_opp_2016mhz: opp-2016000000 {
>> + opp-hz = /bits/ 64 <2016000000>;
>> + opp-peak-kBps = <48537600>;
>> + };
>> + };
>> +
>> + cpu4_opp_table: opp-table-cpu4 {
>> + compatible = "operating-points-v2";
>> +
>> + cpu4_opp_691mhz: opp-691200000 {
>> + opp-hz = /bits/ 64 <691200000>;
>> + opp-peak-kBps = <9600000>;
>> + };
>> +
>> + cpu4_opp_941mhz: opp-940800000 {
>> + opp-hz = /bits/ 64 <940800000>;
>> + opp-peak-kBps = <17817600>;
>> + };
>> +
>> + cpu4_opp_2611mhz: opp-2611200000 {
>> + opp-hz = /bits/ 64 <2611200000>;
>> + opp-peak-kBps = <48537600>;
>> + };
>> + };
>> +
>> + cpu7_opp_table: opp-table-cpu7 {
>> + compatible = "operating-points-v2";
>> +
>> + cpu7_opp_806mhz: opp-806400000 {
>> + opp-hz = /bits/ 64 <806400000>;
>> + opp-peak-kBps = <9600000>;
>> + };
>> +
>> + cpu7_opp_2381mhz: opp-2380800000 {
>> + opp-hz = /bits/ 64 <2380800000>;
>> + opp-peak-kBps = <44851200>;
>> + };
>> +
>> + cpu7_opp_2515mhz: opp-2515200000 {
>> + opp-hz = /bits/ 64 <2515200000>;
>> + opp-peak-kBps = <48537600>;
>> + };
>> + };
>> + };
>> +
>> +
>> + soc {
>> + #address-cells = <2>;
>> + #size-cells = <2>;
>> +
>> + cpucp_mbox: mailbox@17c00000 {
>> + compatible = "qcom,cpucp-mbox";
>> + reg = <0x0 0x17c00000 0x0 0x10>, <0x0 0x18590300 0x0 0x700>;
>> + interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
>> + #mbox-cells = <0>;
>> + };
>> +
>> + sram@18509400 {
>> + compatible = "mmio-sram";
>> + reg = <0x0 0x18509400 0x0 0x400>;
>> + no-memory-wc;
>> +
>> + #address-cells = <1>;
>> + #size-cells = <1>;
>> + ranges = <0x0 0x0 0x18509400 0x400>;
>> +
>> + cpu_scp_lpri: scp-sram-section@0 {
>> + compatible = "arm,scmi-shmem";
>> + reg = <0x0 0x80>;
>> + };
>> + };
>> + };
>> +
>> ...
>> --
>> 2.7.4
>>
>>
Thread overview (14+ messages):
2022-11-03 4:58 [RFC 0/2] Add support for SCMI QTI Memlat Vendor Protocol Sibi Sankar
2022-11-03 4:58 ` [RFC 1/2] dt-bindings: firmware: arm,scmi: Add support for memlat vendor protocol Sibi Sankar
2022-11-03 10:19 ` Sudeep Holla
2022-11-03 12:35 ` Rob Herring
2022-11-04 18:03 ` Rob Herring
2022-11-08 10:48 ` Sibi Sankar [this message]
2022-11-03 4:58 ` [RFC 2/2] firmware: arm_scmi: Add SCMI QTI Memlat " Sibi Sankar
2022-11-03 10:24 ` Sudeep Holla
2022-11-03 10:37 ` Sudeep Holla
2022-11-09 7:12 ` Sibi Sankar
2022-11-03 20:02 ` Matthias Kaehlcke
2022-11-08 11:06 ` Sibi Sankar
2022-11-03 9:41 ` [RFC 0/2] Add support for SCMI QTI Memlat Vendor Protocol Cristian Marussi
2022-11-08 11:01 ` Sibi Sankar