From: Thierry Reding <thierry.reding@gmail.com>
To: Dmitry Osipenko <digetx@gmail.com>
Cc: Georgi Djakov <georgi.djakov@linaro.org>,
Rob Herring <robh+dt@kernel.org>,
Jon Hunter <jonathanh@nvidia.com>,
linux-tegra@vger.kernel.org, devicetree@vger.kernel.org
Subject: Re: [RFC 2/2] dt-bindings: firmware: tegra186-bpmp: Document interconnects property
Date: Mon, 27 Jan 2020 13:49:03 +0100
Message-ID: <20200127124903.GB2117209@ulmo>
In-Reply-To: <853bb7bd-8e04-38ac-d0d6-a958135a49be@gmail.com>
On Mon, Jan 27, 2020 at 12:56:24AM +0300, Dmitry Osipenko wrote:
[...]
> Thinking a bit more about how to define the ICC, I'm now leaning
> towards a variant like this:
>
> interconnects =
>         <&mc TEGRA186_MEMORY_CLIENT_BPMP &emc TEGRA_ICC_EMEM>,
>         <&mc TEGRA186_MEMORY_CLIENT_BPMPR>,
>         <&mc TEGRA186_MEMORY_CLIENT_BPMPW>,
>         <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
>         <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW>;
>
> interconnect-names = "dma-mem", "read", "write", "dma-read", "dma-write";
>
> Looks like there is a problem with having a full MC-EMEM path defined
> for each memory client: it's not very practical in terms of memory
> frequency scaling.
>
> Take the Display Controller, for example: it has a memory client for
> each display (overlay) plane. If the planes do not overlap on the
> displayed area, then the required total memory bandwidth equals the
> peak bandwidth among the visible planes. But if planes overlap, then
> the bandwidths of the overlapping planes accumulate, because
> overlapping planes issue read requests simultaneously for the
> overlapping areas.
>
> The Memory Controller doesn't have any knowledge about the Display
> Controller's specifics. Thus, in the end, it should be the
> responsibility of the Display Controller's driver to calculate the
> required bandwidth for the hardware unit, since only the driver has
> all the required knowledge about the planes' overlap state and
> whatnot.
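To make that rule concrete, here is a rough sketch of what such a
driver would end up computing (the plane structure and the precomputed
overlap flag are hypothetical placeholders; only the peak-vs-sum logic
matters):

#include <linux/types.h>

struct plane_bw {
	unsigned int kbps;	/* bandwidth required by this plane */
	bool overlaps;		/* overlaps another visible plane */
};

/*
 * Overlapping planes add up, non-overlapping planes only contribute
 * their peak; the total is whichever of the two is larger.
 */
static unsigned int total_display_bw(const struct plane_bw *planes,
				     unsigned int count)
{
	unsigned int peak = 0, sum = 0, i;

	for (i = 0; i < count; i++) {
		if (planes[i].overlaps)
			sum += planes[i].kbps;
		else if (planes[i].kbps > peak)
			peak = planes[i].kbps;
	}

	return sum > peak ? sum : peak;
}
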
I agree that the device-specific knowledge should live in the device-
specific drivers. However, what you're doing above is basically putting
OS-specific knowledge into the device tree.

The memory client interfaces are a real thing in hardware that can be
described using the corresponding IDs. But there is no such thing as
the "BPMP" memory client. Rather, it's composed of the other four.

So I think a better approach would be for the consumer driver to deal
with all of that. If only bandwidth is of interest, the consumer driver
can simply pick any one of the clients/paths for its bandwidth
requests; for per-interface settings such as latency allowance, the
consumer would choose the appropriate path.
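For the bandwidth-only case the consumer side could look roughly like
this (a minimal sketch assuming the generic interconnect consumer API;
the "dma-mem" name follows your proposal and the numbers are made up):

#include <linux/err.h>
#include <linux/interconnect.h>

static int foo_request_memory_bw(struct device *dev)
{
	struct icc_path *path;
	int err;

	/* any one of the device's paths will do for a bandwidth request */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* average and peak bandwidth in kBps, purely illustrative */
	err = icc_set_bw(path, kBps_to_icc(204800), kBps_to_icc(409600));

	icc_put(path);
	return err;
}
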
> Something similar applies to multimedia units such as the GPU or the
> Video Decoder: they have multiple memory clients, and (I'm pretty
> sure) nobody is going to calculate memory bandwidth requirements for
> every client; it's simply impractical.
>
> So, I'm suggesting that we should have a single "dma-mem" ICC path for
> every hardware unit.
>
> The rest of the ICC paths could be memory_client -> memory_controller
> paths, providing knobs for things like MC arbitration (latency)
> configuration for each memory client. I think this variant of the
> description is actually closer to the hardware, since a client's
> arbitration configuration ends up in the Memory Controller.

Not necessarily. The target of the access doesn't always have to be the
EMC. It could equally well be IRAM, in which case there are additional
controls that need to be programmed within the MC to allow the memory
client to access IRAM. If you don't have a phandle to IRAM in the
interconnect properties, there's no way to make this distinction.

Thierry