From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Ivan Vecera <ivecera@redhat.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
Mika Westerberg <mika.westerberg@linux.intel.com>,
Rob Herring <robh@kernel.org>,
Krzysztof Kozlowski <krzk@kernel.org>,
linux-acpi@vger.kernel.org, Andrew Lunn <andrew@lunn.ch>
Subject: Re: [Question] Best practice for ACPI representation of DPLL/Ethernet dependencies (SyncE)
Date: Thu, 15 Jan 2026 15:18:58 +0200 [thread overview]
Message-ID: <aWjpQhGyHXXjsx2b@smile.fi.intel.com> (raw)
In-Reply-To: <16e32f1c-8419-44cf-9da8-4c0cae6165e7@redhat.com>
On Thu, Jan 15, 2026 at 08:34:05AM +0100, Ivan Vecera wrote:
> thank you for the honest feedback.
You're welcome!
> I suspect I might have described the
> topology poorly in my previous email, leading to a misunderstanding
> regarding the nature of the "pins".
Quite possible.
> On 1/14/26 9:45 PM, Andy Shevchenko wrote:
> > On Wed, Jan 14, 2026 at 08:19:05PM +0100, Ivan Vecera wrote:
> >
> > > I would like to ask for your opinion regarding an ACPI implementation
> > > detail for a patch-set I currently have on the netdev mailing list [1].
> > > ...
> > > Question:
> > > Is reusing the DT binding definitions within ACPI _DSD (to allow unified
> > > fwnode property parsing) the recommended approach for this type of
> > > device relationship?
> >
> > TL;DR: It seems to me you are pretty much doing an ugly hack and, yes, you
> > are violating the existing ACPI resource model. More details below.
> >
> > > Or should I define strictly ACPI-specific bindings/objects, considering
> > > that the DT bindings for this feature are also new and currently under
> > > review?
> > >
> > > I want to ensure I am not violating any ACPI abstraction layers by
> > > relying too heavily on the DT-style representation in _DSD.
> > >
> > > Thanks for your guidance.
> >
> > First of all, if I understood the HW topology right (it has an I²C mux
> > with a channel connected to a DPLL, which among other functions provides
> > some kind of GPIO/pin muxing facility; correct me if I'm wrong), then the
> > hack, irrelevant to ACPI, is the avoidance of providing a proper GPIO
> > controller driver / description, likely with pin control and pin muxing
> > flavours, which is missing (hence a driver under drivers/pinctrl/... should
> > exist and the hardware should be described in DT).
>
> This is not a GPIO or Pin Control scenario. The "pins" I am referring to are
> clock input/output pads dedicated to frequency synchronization (Synchronous
> Ethernet). They carry continuous clock signals (e.g., 10MHz, 25MHz, or
> recovered network clock), not logic levels controllable via a GPIO
> subsystem.
>
> The Hardware Setup:
>
> Control Plane: A user configures the DPLL device (e.g., via I2C/SPI
> managed by standard ACPI resources/drivers). This part is standard.
>
> Data/Clock/Signal Plane (The issue at hand): There are physical clock
> traces on the board connecting the Ethernet PHY directly to the DPLL.
>
> PHY Output(s) -> DPLL Input Pin(s) (Recovered Clock)
>
> DPLL Output Pin(s) -> PHY Input(s) (Clean Reference Clock)
>
> Since these are purely clock signals between two peripheral devices (not
> connected to the CPU's GPIO controller), standard ACPI _CRS resources
> like GpioIo or PinFunction do not seem applicable here. To my knowledge,
> ACPI does not have a native "Clock Resource" descriptor for inter-device
> clock dependencies.
>
> My intention with _DSD was to model this clock dependency graph, similar
> to how clocks and clock-names are handled in Device Tree (or how camera
> sensors often use _DSD to reference related components).
>
> Does your objection regarding the "ugly hack" still stand, or is
> modeling these clock dependencies via _DSD properties (referencing
> sub-nodes) an acceptable approach in the absence of a dedicated ACPI
> Clock Resource?
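For context, the DT-style representation being mirrored might look roughly like
this (node names, the compatible string, and the pin indices are illustrative
assumptions in the generic clocks/clock-names style, not the binding under
review):

```dts
/* Hypothetical sketch of the clock-signal topology described above:
 * a DPLL output pin feeds the PHY as a clean reference clock.
 * All names and indices here are invented for illustration. */
dpll: dpll@70 {
	compatible = "vendor,example-dpll";	/* placeholder compatible */
	reg = <0x70>;
	#clock-cells = <1>;			/* cell selects a DPLL output pin */
};

ethernet-phy@0 {
	reg = <0>;
	clocks = <&dpll 2>;			/* DPLL output pin 2 -> PHY ref input */
	clock-names = "synce-ref";
};
```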
With this clarification my "ugly hack" remark is no longer applicable, but the
approach still does not sound good to me.
I hope you have researched what has been done before [6].
(Please add the links back to our emails, as it helps to follow the
discussion.)
I.o.w., there was an attempt a few years ago to fill these gaps, one of which
you are mentioning here. Note that the ACPI specification has gained something
related (but I don't remember off the top of my head what exactly; please
refer to it directly [7]).
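To make the _DSD idea concrete, a hypothetical ASL sketch could look as
follows. The device names, _HIDs, property strings, and pin indices are all
invented for illustration; only the Device Properties UUID is real, and this is
not a defined binding:

```asl
// Hypothetical: Ethernet controller referencing a DPLL device via _DSD,
// mirroring the DT clocks/clock-names style with (reference, index) pairs.
Device (DPLL)
{
    Name (_HID, "XXXX0000")  // placeholder _HID
    Name (_CRS, ResourceTemplate () {
        I2cSerialBusV2 (0x70, ControllerInitiated, 400000,
                        AddressingMode7Bit, "\\_SB.I2C1",
                        0x00, ResourceConsumer)
    })
}

Device (NIC0)
{
    Name (_HID, "YYYY0000")  // placeholder _HID
    Name (_DSD, Package () {
        ToUUID ("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),  // Device Properties UUID
        Package () {
            // Reference to the DPLL node plus an integer argument
            // identifying the DPLL pin index (illustrative names).
            Package () { "dpll-rclk-out", Package () { \_SB.DPLL, 2 } },
            Package () { "dpll-ref-in",   Package () { \_SB.DPLL, 0 } },
        }
    })
}
```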
> I can provide a simple ASCII diagram of the board layout if that helps
> clarify the signal flow.
Yes, please.
> > Second, ACPI provides the _CRS resources specifically for pin configuration
> > and pin control (pin muxing as well). In case it's related, those resources
> > must be used. The caveat, however, is that the Linux kernel has not yet
> > implemented the glue layer between ACPICA and the pin control subsystem
> > (see [5] for more).
> >
> > It might be that I didn't get the picture correctly, but it smells bad to
> > me. In any case, I would like to help you, and I'm open to more details
> > about this case.
[1]: <please return them>
...
[6]: https://linaro.atlassian.net/wiki/spaces/CLIENTPC/overview
[7]: https://uefi.org/specs/ACPI/6.6/
--
With Best Regards,
Andy Shevchenko