From: Georgi Djakov <djakov@kernel.org>
To: linux-pm@vger.kernel.org, linaro-open-discussions@op-lists.linaro.org
Cc: sudeep.holla@arm.com, cristian.marussi@arm.com,
souvik.chakravarty@arm.com,
Vincent Guittot <vincent.guittot@linaro.org>
Subject: SCMI protocol for interconnect scaling
Date: Thu, 21 Oct 2021 14:06:17 +0300 [thread overview]
Message-ID: <42432cc2-5cb2-ea74-0980-8575e3a343fd@kernel.org> (raw)
Hi all,
I have recently been getting questions about hooking the interconnect
framework up to SCMI, so I am starting a discussion on this topic to see
who might be interested in it.
The SCMI spec contains various protocols, such as the "Performance domain
management protocol", but none of the protocols in the current spec (3.0)
seems to fit well with the concept we use to scale interconnect bandwidth
in Linux. I see that people are working in this area and there is already
some support for clocks, resets, etc. I am wondering what the right
approach would be to also support interconnect bus scaling via SCMI.
The interconnect framework is part of the Linux kernel and its goal is to
manage the hardware and tune it to the optimal power-performance profile,
according to the aggregated bandwidth demand between the various endpoints
in the system (SoC). The aggregation is based on the requests coming from
consumer drivers.
As interconnect scaling does not map directly to any of the currently
available protocols in the SCMI spec, I am curious whether there is
work in progress on some other protocol that could support managing
resources based on path endpoints (instead of a single ID). The
interconnect framework doesn't populate every possible path; it exposes
endpoints to client drivers, and path lookup is dynamic, based on what
the clients request. Maybe the SCMI host could likewise expose all
possible endpoints and let the guest request a path from the host based
on those endpoints.
There are already suggestions to create vendor-specific SCMI protocols
for this, but I fear that we may end up with more than one protocol for
the same thing, which is why it might be best to discuss it in public
and come up with a common solution that works for everyone.
Thanks,
Georgi
Thread overview: 4+ messages
2021-10-21 11:06 Georgi Djakov [this message]
2021-10-25 14:55 ` SCMI protocol for interconnect scaling Souvik Chakravarty
2021-11-01 17:05 ` Georgi Djakov
2021-11-02 10:08 ` Sudeep Holla