From: Stanimir Varbanov <stanimir.varbanov@linaro.org>
To: Bjorn Andersson <bjorn.andersson@linaro.org>,
Stephen Boyd <sboyd@codeaurora.org>
Cc: Stanimir Varbanov <stanimir.varbanov@linaro.org>,
"Dwivedi, Avaneesh Kumar (avani)" <akdwived@codeaurora.org>,
agross@codeaurora.org, linux-arm-msm@vger.kernel.org
Subject: Re: [RESEND PATCH v2] remoteproc: qcom: Add venus rproc support on msm8996 platform.
Date: Wed, 25 Jan 2017 14:37:58 +0200 [thread overview]
Message-ID: <4321016b-cfd3-fa44-e374-e61512659cac@linaro.org> (raw)
In-Reply-To: <20161217071157.GU3439@tuxbot>

Hi Bjorn,

On 12/17/2016 09:11 AM, Bjorn Andersson wrote:
> On Fri 16 Dec 16:56 PST 2016, Stephen Boyd wrote:
>
>> On 12/12, Bjorn Andersson wrote:
>>> On Wed 30 Nov 01:22 PST 2016, Stanimir Varbanov wrote:
>>> [..]
>>>
>>>> I think using power domain in remoteproc driver will break power
>>>> management of the v4l2 venus driver. Presently I use runtime pm to
>>>> switch ON/OFF GDSC and enable/disable Venus clocks.
>>>>
>>>> When the .probe of the venus remoteproc driver is called, the power
>>>> domain is switched ON and never switched OFF again (except when the
>>>> platform driver's .remove method is called). One possible solution
>>>> would be to add runtime PM to the venus remoteproc driver.
>>>>
>>>> IMO we should think more about that.
>>>>
>>>> Bjorn, Stephen what are your opinions?
>>>>
>>>
>>> Sorry for not getting back to you on this after our earlier discussion
>>> on the topic.
>>>
>>> Modelling the remoteproc and codec drivers as completely separate
>>> entities (with only the rproc phandle) gives us issues with the timing
>>> between the two. As I pulled down and started playing with [1] I noticed
>>> that I was not able to get the codec driver to probe unless I compiled
>>> it as a module which I insmoded after my /lib/firmware became available.
>>>
>>> In addition to this, there's no way for the remoteproc driver to inform
>>> the codec driver when it's actually shutting down and booting back up
>>> during a crash recovery (an async operation).
>>>
>>> And lastly, when I talked to Rob about the DT binding, the obvious
>>> question came up of why one piece of hardware has two disjoint nodes
>>> in the DT.
>>>
>>>
>>> So I believe we should represent the codec driver as a child of the
>>> remoteproc driver, utilizing the newly introduced "remoteproc
>>> subdevices" to probe and remove the codec driver based on the state of
>>> the remoteproc driver. This relationship will also allow us to tie
>>> certain resources (e.g. the clocks and power-domain) to the remoteproc
>>> driver and use pm_runtime in either driver to make sure the resources
>>> are enabled appropriately.
>>>
>>> I did backport some patches from my v4.10 remoteproc pull request into
>>> [2] and merged this into [1] and made a prototype of this. You can find
>>> it here [3], I did test that I can boot the remoteproc and run
>>> v4l2-compliance, shut it down, boot it back up and re-run
>>> v4l2-compliance, and it seems to work fine.
>>>
>>
>> From a DT perspective this seems backwards. We shouldn't be
>> putting something with a reg property underneath a "virtual" node
>> that doesn't have a reg property. It's already causing pain,
>> evident by these sorts of patches, so something seems wrong.
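For concreteness, the layout being objected to would look roughly like the sketch below; the node names, compatibles, and addresses are hypothetical, not a reviewed binding:

```dts
/* Illustrative sketch only: names, compatibles and addresses are
 * hypothetical, not a reviewed binding. */
venus-pil {			/* "virtual" parent, no reg property */
	compatible = "qcom,venus-pil";
	power-domains = <&mmcc VENUS_GDSC>;

	video-codec@cc0000 {	/* real hardware, with a reg property */
		compatible = "qcom,msm8996-venus";
		reg = <0xcc0000 0xff000>;
	};
};
```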
>>
>
> I agree.
>
>> Are we gaining anything by having a remoteproc driver and device
>> for the video hardware here?
>
> After discussing the matter with Avaneesh I realized that the downstream
> driver powers the venus core up and down based on the presence of
> clients. I expect that this means we have to do the same to be able to
> meet our power KPIs.
>
> With this in mind the remoteproc driver becomes a wrapper around an mdt
> loader and the scm pas interface - and brings with it a bunch of other
> things, expectations and challenges.
>
>> Why can't the video driver load/unload firmware with the mdt loader
>> code by itself when it wants to boot the processor?
>
> Providing a saner api around the mdt-loader and pas would allow venus
> and Adreno to reuse the code, without the remoteproc bloat.

Does it make sense to move the mdt parser into drivers/firmware?
--
regards,
Stan