Linux ARM-MSM sub-architecture
From: Taniya Das <taniya.das@oss.qualcomm.com>
To: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>,
	Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
Cc: Bjorn Andersson <andersson@kernel.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Dmitry Baryshkov <lumag@kernel.org>,
	Taniya Das <quic_tdas@quicinc.com>,
	Ajit Pandey <quic_ajipan@quicinc.com>,
	Imran Shaik <quic_imrashai@quicinc.com>,
	Jagadeesh Kona <quic_jkona@quicinc.com>,
	linux-arm-msm@vger.kernel.org, linux-clk@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] clk: qcom: gcc: Update the SDCC clock to use shared_floor_ops
Date: Thu, 9 Oct 2025 09:58:00 +0530	[thread overview]
Message-ID: <4023c6ba-7ff0-46e0-bb09-a0cd864441ac@oss.qualcomm.com> (raw)
In-Reply-To: <5ba5fb11-96ed-41bd-ba21-f30476cdd570@oss.qualcomm.com>



On 10/8/2025 5:56 PM, Konrad Dybcio wrote:
> On 9/26/25 11:41 AM, Taniya Das wrote:
>>
>>
>> On 8/8/2025 5:48 PM, Dmitry Baryshkov wrote:
>>> On Fri, Aug 08, 2025 at 02:51:50PM +0530, Taniya Das wrote:
>>>>
>>>>
>>>> On 8/7/2025 10:32 PM, Konrad Dybcio wrote:
>>>>> On 8/6/25 11:39 AM, Taniya Das wrote:
>>>>>>
>>>>>>
>>>>>> On 8/6/2025 3:00 PM, Konrad Dybcio wrote:
>>>>>>> On 8/6/25 11:27 AM, Taniya Das wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 8/5/2025 10:52 AM, Dmitry Baryshkov wrote:
>>>>>>>>> On Mon, Aug 04, 2025 at 11:59:21PM +0530, Taniya Das wrote:
>>>>>>>>>> gcc_sdcc2_apps_clk_src: rcg didn't update its configuration" during
>>>>>>>>>> boot. This happens because the floor_ops try to update the rcg
>>>>>>>>>> configuration even when the clock is not enabled.
>>>>>>>>>
>>>>>>>>> This has been working for other platforms (I see Milos, SAR2130P,
>>>>>>>>> SM6375, SC8280XP, SM8550, SM8650 using shared ops, all other platforms
>>>>>>>>> seem to use non-shared ops). What's the difference? Should we switch all
>>>>>>>>> platforms? Is it related to the hypervisor?
>>>>>>>>>
>>>>>>>>
>>>>>>>> If a set_rate is called on a clock before the clock is enabled, the
>>>>>>>
>>>>>>> Is this something we should just fix up the drivers not to do?
>>>>>>>
>>>>>>
>>>>>> I do not think CCF has any such limitation that a clock must be
>>>>>> enabled before its rate can be set. We should handle it
>>>>>> gracefully, and that is what we do now that the caching capability
>>>>>> has been added to the code. This has long been present in our
>>>>>> downstream drivers.
>>>>>
>>>>> Should we do CFG caching on *all* RCGs to avoid having to scratch our
>>>>> heads over which ops to use with each clock individually?
>>>>>
>>>>
>>>> Yes, Konrad, that’s definitely the cleanest approach. If you're okay
>>>> with it, we can proceed with the current change first and then follow up
>>>> with a broader cleanup of the rcg2 ops. As part of that, we can also
>>>> transition the relevant SDCC clock targets to use floor_ops. This way,
>>>> we can avoid the rcg configuration failure logs in the boot sequence on
>>>> QCS615.
>>>
>>> the rcg2_shared_ops have one main use case - parking the clock on a
>>> safe source. If that is not required for the SDCC clock, then it is
>>> incorrect to land this patch.
>>
>> Along with the floor functionality, we also require parking the SDCC
>> clock on a safe source. I am reusing the shared_floor_ops introduced
>> recently for SAR2130P, explicitly for the SDCC clocks.
>>
>>>
>>> If you are saying that we should cache the CFG value for all clock
>>> controllers, then we should instead change the clk_rcg2_ops.
>>>
>>
>> That is not required for all clock controllers; for whichever clock
>> controller's clocks do require it, we use rcg2_shared_ops, which was
>> updated to park the CFG.
> 
> I think Dmitry just wanted you to confirm that what you're doing in this
> patch is guided by the necessity of safe parking, and not only by the
> wish to enable rcg caching.
> 
> Konrad

Yes, that’s correct. The primary motivation for this patch is to ensure
safe parking, and the RCG caching is a secondary benefit.
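For reference, the combined behaviour under discussion (deferring the CFG update until the RCG is enabled, and parking it on a safe source on disable) can be sketched as a standalone model. This is not the actual clk-rcg2.c code; the structure, field names, and CFG values are illustrative only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Standalone model of the shared/floor RCG behaviour: set_rate on a
 * disabled clock only caches the CFG word instead of writing hardware,
 * enable() applies the cached CFG, and disable() parks the RCG on a
 * safe (XO) source. All names here are illustrative, not the real
 * clk-rcg2.c structures. */

#define CFG_SAFE_SRC 0x0 /* park on XO */

struct rcg_model {
	bool enabled;
	uint32_t hw_cfg;     /* what "hardware" currently holds */
	uint32_t cached_cfg; /* last requested configuration */
};

static void rcg_set_rate(struct rcg_model *rcg, uint32_t cfg)
{
	rcg->cached_cfg = cfg;
	/* Only touch hardware while the RCG is clocked; otherwise the
	 * update handshake would time out with "rcg didn't update its
	 * configuration". */
	if (rcg->enabled)
		rcg->hw_cfg = cfg;
}

static void rcg_enable(struct rcg_model *rcg)
{
	rcg->enabled = true;
	rcg->hw_cfg = rcg->cached_cfg; /* apply any deferred configuration */
}

static void rcg_disable(struct rcg_model *rcg)
{
	rcg->hw_cfg = CFG_SAFE_SRC; /* park on the safe source */
	rcg->enabled = false;
}
```

With this ordering, a set_rate issued before the clock is enabled never writes the hardware, which is exactly the failure mode the QCS615 boot log was hitting with the non-caching ops.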

-- 
Thanks,
Taniya Das



Thread overview: 11+ messages
2025-08-04 18:29 [PATCH] clk: qcom: gcc: Update the SDCC clock to use shared_floor_ops Taniya Das
2025-08-05  5:22 ` Dmitry Baryshkov
2025-08-06  9:27   ` Taniya Das
2025-08-06  9:30     ` Konrad Dybcio
2025-08-06  9:39       ` Taniya Das
2025-08-07 17:02         ` Konrad Dybcio
2025-08-08  9:21           ` Taniya Das
2025-08-08 12:18             ` Dmitry Baryshkov
2025-09-26  9:41               ` Taniya Das
2025-10-08 12:26                 ` Konrad Dybcio
2025-10-09  4:28                   ` Taniya Das [this message]
