public inbox for netdev@vger.kernel.org
* [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
@ 2026-01-21 14:42 Ivan Vecera
  2026-01-22  0:09 ` Andrew Lunn
  2026-01-22 18:04 ` Rafael J. Wysocki
  0 siblings, 2 replies; 9+ messages in thread
From: Ivan Vecera @ 2026-01-21 14:42 UTC (permalink / raw)
  To: Andy Shevchenko, Rafael J. Wysocki
  Cc: Linus Walleij, Andrew Lunn, Mika Westerberg, Rob Herring,
	Krzysztof Kozlowski, Arkadiusz Kubalewski, Jiri Pirko,
	Vadim Fedorenko, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Mark Brown, Jan Aleszczyk, Michal Schmidt, Petr Oros, linux-acpi,
	netdev@vger.kernel.org

Hi Andy, Rafael and others,

(based on the previous thread [1] - now involving more people from
  networking and DPLL)

Thank you for the insights on _CRS and ClockInput.

I think we have circled the issue enough to identify the core disconnect:
* While the physical signals on these wires are indeed clocks (10MHz,
   etc.), from the OS driver perspective, this is not a "Clock Resource"
   issue. The NIC driver does not need to gate, rate-set, or power-manage
   these clocks (which is what _CRS/ClockInput implies).
* Instead, the NIC driver simply needs a Topology Map. It needs to know:
   "My Port 0 (Consumer) is physically wired to DPLL Pin 3 (Provider)."

The NIC driver needs this Pin Index (3) specifically to report it via
rtnetlink. This allows the userspace daemon (e.g., synce4l or
linux-ptp) to see the relationship and decide to configure the DPLL via 
the DPLL Netlink API to lock onto that specific input.

A generic ClockInput resource in _CRS is anonymous and unordered. The OS
abstracts it into a handle, but it fails to convey the specific pin
index required for this userspace reporting.

Since ACPI lacks a native "Graph/Topology" object for inter-device
dependencies of this nature, and _CRS obscures the index information
required by userspace, I propose we treat _DSD properties as the
de-facto standard [2] for modeling SyncE topology in ACPI.

To avoid the confusion Andy mentioned regarding "Clock Bindings" in
ACPI, I suggest we explicitly define a schema using 'dpll-' prefixed
properties. This effectively decouples it from the Clock subsystem
expectations and treats it purely as a wiring map.
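
To make the shape of such a schema concrete, a hypothetical ASL fragment
could look like the one below. This is purely an illustrative sketch:
the property name ("dpll-rclk"), device paths, and pin index are
invented here and are not the actual proposal in [3].

```asl
// Hypothetical sketch only - names, paths and indices are invented.
Scope (\_SB.PCI0.NIC0)
{
    Name (_DSD, Package () {
        ToUUID ("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"), // Device Properties UUID
        Package () {
            // "This NIC's recovered clock is wired to DPLL pin 3"
            Package () { "dpll-rclk", Package () { \_SB.PCI0.DPLL, 3 } },
        }
    })
}
```

The reference-plus-integer-arguments package follows the general _DSD
data-node-reference style [2]; the 'dpll-' prefix is what decouples it
from clock consumer expectations.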

Proposed ACPI Representation with proposed documentation [3]

If the ACPI maintainers and Netdev maintainers agree that this
_DSD-based topology map is the acceptable "Pragmatic Standard" for this
feature, I will document this schema in the kernel documentation and
proceed with the implementation.

This solves the immediate need for an upcoming Intel SyncE-enabled
platform and provides a consistent blueprint for other vendors
implementing SyncE on ACPI.

Regards,
Ivan

[1] 
https://lore.kernel.org/linux-acpi/3bf214b9-8691-44f7-aa13-8169276a6c2b@redhat.com/
[2] 
https://docs.kernel.org/firmware-guide/acpi/dsd/data-node-references.html
[3] https://gist.github.com/ivecera/964c25f47f688f44ec70984742cf7fbd


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-21 14:42 [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE) Ivan Vecera
@ 2026-01-22  0:09 ` Andrew Lunn
  2026-01-22 11:50   ` Ivan Vecera
  2026-01-22 18:04 ` Rafael J. Wysocki
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Lunn @ 2026-01-22  0:09 UTC (permalink / raw)
  To: Ivan Vecera
  Cc: Andy Shevchenko, Rafael J. Wysocki, Linus Walleij,
	Mika Westerberg, Rob Herring, Krzysztof Kozlowski,
	Arkadiusz Kubalewski, Jiri Pirko, Vadim Fedorenko, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Mark Brown, Jan Aleszczyk,
	Michal Schmidt, Petr Oros, linux-acpi, netdev@vger.kernel.org

> * While the physical signals on these wires are indeed clocks (10MHz,
>   etc.), from the OS driver perspective, this is not a "Clock Resource"
>   issue. The NIC driver does not need to gate, rate-set, or power-manage
>   these clocks (which is what _CRS/ClockInput implies).

Is this a peculiarity of the zl3073x? No gating, no rate-set, no power
management?

I had a quick look at the Renesas 8V89307 

https://www.renesas.com/en/document/dst/8v89307-final-data-sheet?r=177681

Two of the three inputs have an optional inverter. CCF has
clk_set_phase(), which when passed 180 would be a good model for this.
The inputs then have dividers which can be configured. I would
probably model them using CCF clk-divider.c for that. There is then a
mux, which clk-mux.c could model. After the DPLL there are more muxes
to optionally route the output through an APLL. The output block then
has yet more muxes and dividers.

All that could be described using a number of CCF parts chained
together in a clock tree.

And what about the TI LMK05028

https://www.ti.com/product/LMK05028

It also has inverters and muxes, but no dividers.

Analog Devices ad9546 also has lots of internal components which could
be described using CCF

https://www.analog.com/media/en/technical-documentation/data-sheets/ad9546.pdf

So i do wonder if we are being short-sighted by using the clock
bindings but not Linux clocks.

	Andrew

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22  0:09 ` Andrew Lunn
@ 2026-01-22 11:50   ` Ivan Vecera
  2026-01-22 15:46     ` Andrew Lunn
  0 siblings, 1 reply; 9+ messages in thread
From: Ivan Vecera @ 2026-01-22 11:50 UTC (permalink / raw)
  To: Andrew Lunn, Andy Shevchenko
  Cc: Rafael J. Wysocki, Linus Walleij, Mika Westerberg, Rob Herring,
	Krzysztof Kozlowski, Arkadiusz Kubalewski, Jiri Pirko,
	Vadim Fedorenko, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Mark Brown, Jan Aleszczyk, Michal Schmidt, Petr Oros, linux-acpi,
	netdev@vger.kernel.org, Sakari Ailus

Hi Andrew,

(Adding Sakari Ailus to CC, who might have insights on modeling
  component topologies in ACPI).

On 1/22/26 1:09 AM, Andrew Lunn wrote:
>> * While the physical signals on these wires are indeed clocks (10MHz,
>>    etc.), from the OS driver perspective, this is not a "Clock Resource"
>>    issue. The NIC driver does not need to gate, rate-set, or power-manage
>>    these clocks (which is what _CRS/ClockInput implies).
> 
> Is this a peculiarity of the zl3073x? No gating, no rate-set, no power
> management?
> 
> I had a quick look at the Renesas 8V89307
> 
> https://www.renesas.com/en/document/dst/8v89307-final-data-sheet?r=177681
> 
> Two of the three inputs have an optional inverter. CCF has
> clk_set_phase(), which when passed 180 would be a good model for this.
> The inputs then have dividers which can be configured. I would
> probably model them using CCF clk-divider.c for that. There is then a
> mux, which clk-mux.c could model. After the DPLL there are more muxes
> to optionally route the output through an APLL. The output block then
> has yet more muxes and dividers.
> 
> All that could be described using a number of CCF parts chained
> together in a clock tree.
> 
> And what about the TI LMK05028
> 
> https://www.ti.com/product/LMK05028
> 
> It also has inverters and muxes, but no dividers.
> 
> Analog Devices ad9546 also has lots of internal components which could
> be described using CCF
> 
> https://www.analog.com/media/en/technical-documentation/data-sheets/ad9546.pdf

I agree with you that the hardware itself (ZL3073x, Renesas 8V89307,
etc.) is complex and has internal structures (dividers, muxes) that
technically fit the CCF model.

However, I believe the distinction lies in how the inter-device topology
is used versus how the device is managed internally.

The kernel now uses the dedicated DPLL Subsystem (drivers/dpll) for
SyncE and similar applications. This subsystem was created because CCF
captures "rate and parent" well, but does not capture SyncE-specific
aspects like lock status, holdover, priority lists, and phase-slope
limiting.

In our architecture, the complex configuration you mentioned (dividers,
muxes) is managed via the DPLL Netlink ABI. The control logic largely
resides in userspace daemons (e.g., synce4l), which send Netlink
commands to the DPLL driver to configure those internal muxes/dividers
based on network conditions.

The NIC driver's role here is passive; it effectively operates in a
"bypass" mode regarding these signals. The NIC does not need to call
clk_set_rate() or clk_prepare_enable() on these pins to function. It
simply needs to report the physical wiring linkage: "My input / output
is wired to DPLL pin with index X."

If we use standard Clock bindings (CCF), we imply a functional
dependency where the NIC acts as a controller/consumer that actively
manages the clock's state. In reality, the NIC is just a conduit mapping
a local port to a remote pin index.

We are effectively modeling a graph linkage (similar to ports / remote-
endpoint in media graphs) rather than a functional resource (like
clocks=<&clk0> or regulators=<&some_reg>).

We are utilizing _DSD properties to model this topology edge, which is
consistent with how other subsystems (like Media) utilize firmware node
graphs in ACPI to describe complex non-resource connections.

This provides the NIC driver with the "remote endpoint ID" (the pin
index) it needs to populate the Netlink ABI, without forcing the driver
to import the complexity of a full Clock Tree that it has no intention
of managing.

Does this distinction (modelling the "topology graph" rather than
a "clock resource") make sense as a rationale for using _DSD here?

Regards,
Ivan

> So i do wonder if we are being short-sighted by using the clock
> bindings but not Linux clocks.
> 
> 	Andrew
> 


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22 11:50   ` Ivan Vecera
@ 2026-01-22 15:46     ` Andrew Lunn
  2026-01-22 17:16       ` Ivan Vecera
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Lunn @ 2026-01-22 15:46 UTC (permalink / raw)
  To: Ivan Vecera
  Cc: Andy Shevchenko, Rafael J. Wysocki, Linus Walleij,
	Mika Westerberg, Rob Herring, Krzysztof Kozlowski,
	Arkadiusz Kubalewski, Jiri Pirko, Vadim Fedorenko, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Mark Brown, Jan Aleszczyk,
	Michal Schmidt, Petr Oros, linux-acpi, netdev@vger.kernel.org,
	Sakari Ailus

On Thu, Jan 22, 2026 at 12:50:50PM +0100, Ivan Vecera wrote:
> Hi Andrew,
> 
> (Adding Sakari Ailus to CC, who might have insights on modeling
>  component topologies in ACPI).
> 
> On 1/22/26 1:09 AM, Andrew Lunn wrote:
> > > * While the physical signals on these wires are indeed clocks (10MHz,
> > >    etc.), from the OS driver perspective, this is not a "Clock Resource"
> > >    issue. The NIC driver does not need to gate, rate-set, or power-manage
> > >    these clocks (which is what _CRS/ClockInput implies).
> > 
> > Is this a peculiarity of the zl3073x? No gating, no rate-set, no power
> > management?
> > 
> > I had a quick look at the Renesas 8V89307
> > 
> > https://www.renesas.com/en/document/dst/8v89307-final-data-sheet?r=177681
> > 
> > Two of the three inputs have an optional inverter. CCF has
> > clk_set_phase(), which when passed 180 would be a good model for this.
> > The inputs then have dividers which can be configured. I would
> > probably model them using CCF clk-divider.c for that. There is then a
> > mux, which clk-mux.c could model. After the DPLL there are more muxes
> > to optionally route the output through an APLL. The output block then
> > has yet more muxes and dividers.
> > 
> > All that could be described using a number of CCF parts chained
> > together in a clock tree.
> > 
> > And what about the TI LMK05028
> > 
> > https://www.ti.com/product/LMK05028
> > 
> > It also has inverters and muxes, but no dividers.
> > 
> > Analog Devices ad9546 also has lots of internal components which could
> > be described using CCF
> > 
> > https://www.analog.com/media/en/technical-documentation/data-sheets/ad9546.pdf
> 
> I agree with you that the hardware itself (ZL3073x, Renesas 8V89307,
> etc.) is complex and has internal structures (dividers, muxes) that
> technically fit the CCF model.
> 
> However, I believe the distinction lies in how the inter-device topology
> is used versus how the device is managed internally.
> 
> The kernel now uses the dedicated DPLL Subsystem (drivers/dpll) for
> SyncE and similar applications. This subsystem was created because CCF
> captures "rate and parent" well, but does not capture SyncE-specific
> aspects like lock status, holdover, priority lists, and phase-slope
> limiting.
> 
> In our architecture, the complex configuration you mentioned (dividers,
> muxes) is managed via the DPLL Netlink ABI. The control logic largely
> resides in userspace daemons (e.g., synce4l), which send Netlink
> commands to the DPLL driver to configure those internal muxes/dividers
> based on network conditions.

So you are effectively doing user space drivers? You have a library of
DPLL drivers, which gets linked to synce4l? The library can then poke
registers in the device to configure all the muxes, inverters,
dividers?

But doesn't that also require that synce4l/the library knows about
every single board? It needs to know if the board requires the input
clock to be inverted? The output clock needs to be inverted? It needs
to know about the PHY, is it producing a 50MHz clock, or 125MHz which
some devices provide, so it will need the divider to reduce it to 50MHz?
Doesn't the library also need to know the clock driving the DPLL
package? Some of these products allow you to apply dividers to that as
well, and that clock is a board property.

To me, it seems like there are a collection of board properties, and
to make this scale, those need to be in DT/ACPI, not a user space
library. 

> The NIC driver's role here is passive; it effectively operates in a
> "bypass" mode regarding these signals. The NIC does not need to call
> clk_set_rate() or clk_prepare_enable() on these pins to function. It
> simply needs to report the physical wiring linkage: "My input / output
> is wired to DPLL pin with index X."

I can understand this bit, although actually using
clk_prepare_enable() would allow for runtime power management.

But i'm thinking more about these board properties. If i model the
internals of the DPLL using CCF, CCF probably has all the needed
control interfaces. The board properties then just set these controls.
It then seems odd that i have a Linux standard description of the
internals of the DPLL using CCF, i use the CCF binding to describe the
external interconnects, but don't actually use CCF to implement these
external interconnects?

> If we use standard Clock bindings (CCF), we imply a functional
> dependency where the NIC acts as a controller/consumer that actively
> manages the clock's state. In reality, the NIC is just a conduit mapping
> a local port to a remote pin index.

If you look at MAC drivers, all they really do is
clk_prepare_enable(). Few do more than that. So i don't really see
this as being a burden.

	Andrew

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22 15:46     ` Andrew Lunn
@ 2026-01-22 17:16       ` Ivan Vecera
  2026-01-22 19:57         ` Andrew Lunn
  0 siblings, 1 reply; 9+ messages in thread
From: Ivan Vecera @ 2026-01-22 17:16 UTC (permalink / raw)
  To: Andrew Lunn, Andy Shevchenko
  Cc: Rafael J. Wysocki, Linus Walleij, Mika Westerberg, Rob Herring,
	Krzysztof Kozlowski, Arkadiusz Kubalewski, Jiri Pirko,
	Vadim Fedorenko, Jakub Kicinski, Paolo Abeni, David S. Miller,
	Mark Brown, Jan Aleszczyk, Michal Schmidt, Petr Oros, linux-acpi,
	netdev@vger.kernel.org, Sakari Ailus

Hi Andrew,

I think there is a significant misunderstanding regarding the
architecture of the DPLL subsystem.

On 1/22/26 4:46 PM, Andrew Lunn wrote:
> On Thu, Jan 22, 2026 at 12:50:50PM +0100, Ivan Vecera wrote:
>> Hi Andrew,
>>
>> (Adding Sakari Ailus to CC, who might have insights on modeling
>>   component topologies in ACPI).
>>
>> On 1/22/26 1:09 AM, Andrew Lunn wrote:
>>>> * While the physical signals on these wires are indeed clocks (10MHz,
>>>>     etc.), from the OS driver perspective, this is not a "Clock Resource"
>>>>     issue. The NIC driver does not need to gate, rate-set, or power-manage
>>>>     these clocks (which is what _CRS/ClockInput implies).
>>>
>>> Is this a peculiarity of the zl3073x? No gating, no rate-set, no power
>>> management?
>>>
>>> I had a quick look at the Renesas 8V89307
>>>
>>> https://www.renesas.com/en/document/dst/8v89307-final-data-sheet?r=177681
>>>
>>> Two of the three inputs have an optional inverter. CCF has
>>> clk_set_phase(), which when passed 180 would be a good model for this.
>>> The inputs then have dividers which can be configured. I would
>>> probably model them using CCF clk-divider.c for that. There is then a
>>> mux, which clk-mux.c could model. After the DPLL there are more muxes
>>> to optionally route the output through an APLL. The output block then
>>> has yet more muxes and dividers.
>>>
>>> All that could be described using a number of CCF parts chained
>>> together in a clock tree.
>>>
>>> And what about the TI LMK05028
>>>
>>> https://www.ti.com/product/LMK05028
>>>
>>> It also has inverters and muxes, but no dividers.
>>>
>>> Analog Devices ad9546 also has lots of internal components which could
>>> be described using CCF
>>>
>>> https://www.analog.com/media/en/technical-documentation/data-sheets/ad9546.pdf
>>
>> I agree with you that the hardware itself (ZL3073x, Renesas 8V89307,
>> etc.) is complex and has internal structures (dividers, muxes) that
>> technically fit the CCF model.
>>
>> However, I believe the distinction lies in how the inter-device topology
>> is used versus how the device is managed internally.
>>
>> The kernel now uses the dedicated DPLL Subsystem (drivers/dpll) for
>> SyncE and similar applications. This subsystem was created because CCF
>> captures "rate and parent" well, but does not capture SyncE-specific
>> aspects like lock status, holdover, priority lists, and phase-slope
>> limiting.
>>
>> In our architecture, the complex configuration you mentioned (dividers,
>> muxes) is managed via the DPLL Netlink ABI. The control logic largely
>> resides in userspace daemons (e.g., synce4l), which send Netlink
>> commands to the DPLL driver to configure those internal muxes/dividers
>> based on network conditions.
> 
> So you are effectively doing user space drivers? You have a library of
> DPLL drivers, which gets linked to synce4l? The library can then poke
> registers in the device to configure all the muxes, inverters,
> dividers?

No, absolutely not. The drivers for these devices reside entirely in the
kernel. They handle all the low-level register access, mux config, and
hardware abstraction.

The userspace (e.g. synce4l daemon) is purely a Policy Engine. It uses
a generic, hardware-agnostic Netlink API to send high-level commands
like "Lock to Pin 0" or "Set Priority 1". The in-kernel driver
translates these generic commands into the specific register writes
required for that chip (ZL3073x, etc.).

See DPLL docs:
https://docs.kernel.org/driver-api/dpll.html

> But doesn't that also require that synce4l/the library knows about
> every single board? It needs to know if the board requires the input
> clock to be inverted? The output clock needs to be inverted? It needs
> to know about the PHY, is it producing a 50MHz clock, or 125MHz which
> some devices provide, so it will need the divider to reduce it to 50MHz?
> Doesn't the library also need to know the clock driving the DPLL
> package? Some of these products allow you to apply dividers to that as
> well, and that clock is a board property.
>
> To me, it seems like there are a collection of board properties, and
> to make this scale, those need to be in DT/ACPI, not a user space
> library.
>>
>> The NIC driver's role here is passive; it effectively operates in a
>> "bypass" mode regarding these signals. The NIC does not need to call
>> clk_set_rate() or clk_prepare_enable() on these pins to function. It
>> simply needs to report the physical wiring linkage: "My input / output
>> is wired to DPLL pin with index X."
> 
> I can understand this bit, although actually using
> clk_prepare_enable() would allow for runtime power management.
> 
> But i'm thinking more about these board properties. If i model the
> internals of the DPLL using CCF, CCF probably has all the needed
> control interfaces. The board properties then just set these controls.
> It then seems odd that i have a Linux standard description of the
> internals of the DPLL using CCF, i use the CCF binding to describe the
> external interconnects, but don't actually use CCF to implement these
> external interconnects?

But I don't use CCF bindings in this design; this design is about
representing an opaque wire between two devices.

What you are describing is rather the possibility of implementing DPLL
support in a CCF-based clock driver.

>> If we use standard Clock bindings (CCF), we imply a functional
>> dependency where the NIC acts as a controller/consumer that actively
>> manages the clock's state. In reality, the NIC is just a conduit mapping
>> a local port to a remote pin index.
> 
> If you look at MAC drivers, all they really do is
> clk_prepare_enable(). Few do more than that. So i don't really see
> this as being a burden.

For SyncE, where the NIC produces the clock, this would:
* require implementing a clock provider in the NIC driver (clk_ops...)
* require exposing this clock source in ACPI

But why? Just to represent an opaque wire between two devices?

This design is not a classic consumer/producer relationship. The DPLL
driver doesn't care where its inputs and outputs are connected; it
doesn't need to know. And the NIC driver only cares where it is
connected, not from a resource perspective, but so that it can inform
userspace about this fact.

Ivan


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-21 14:42 [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE) Ivan Vecera
  2026-01-22  0:09 ` Andrew Lunn
@ 2026-01-22 18:04 ` Rafael J. Wysocki
  2026-01-23  9:42   ` Kubalewski, Arkadiusz
  1 sibling, 1 reply; 9+ messages in thread
From: Rafael J. Wysocki @ 2026-01-22 18:04 UTC (permalink / raw)
  To: Ivan Vecera
  Cc: Andy Shevchenko, Rafael J. Wysocki, Linus Walleij, Andrew Lunn,
	Mika Westerberg, Rob Herring, Krzysztof Kozlowski,
	Arkadiusz Kubalewski, Jiri Pirko, Vadim Fedorenko, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Mark Brown, Jan Aleszczyk,
	Michal Schmidt, Petr Oros, linux-acpi, netdev@vger.kernel.org

Hi,

On Wed, Jan 21, 2026 at 3:43 PM Ivan Vecera <ivecera@redhat.com> wrote:
>
> Hi Andy, Rafael and others,
>
> (based on the previous thread [1] - now involving more people from
>   networking and DPLL)
>
> Thank you for the insights on _CRS and ClockInput.
>
> I think we have circled the issue enough to identify the core disconnect:
> * While the physical signals on these wires are indeed clocks (10MHz,
>    etc.), from the OS driver perspective, this is not a "Clock Resource"
>    issue. The NIC driver does not need to gate, rate-set, or power-manage
>    these clocks (which is what _CRS/ClockInput implies).
> * Instead, the NIC driver simply needs a Topology Map. It needs to know:
>    "My Port 0 (Consumer) is physically wired to DPLL Pin 3 (Provider)."
>
> The NIC driver needs this Pin Index (3) specifically to report it via
> the RtNetlink. This allows the userspace daemon (e.g., synce4l or
> linux-ptp) to see the relationship and decide to configure the DPLL via
> the DPLL Netlink API to lock onto that specific input.
>
> A generic ClockInput resource in _CRS is anonymous and unordered. The OS
> abstracts it into a handle, but it fails to convey the specific pin
> index required for this userspace reporting.
>
> Since ACPI lacks a native "Graph/Topology" object for inter-device
> dependencies of this nature, and _CRS obscures the index information
> required by userspace, I propose we treat _DSD properties as the
> de-facto standard [2] for modeling SyncE topology in ACPI.

If you want to call something a "standard", especially if it involves
ACPI, it is generally not sufficient to talk to Linux kernel people
only about it.

ACPI is about agreements between multiple parties, including multiple
OS providers (Linux being just one of them) and multiple platform
vendors (OEMs).

To a minimum, you'd need commitment from at least one platform vendor
to ship the requisite _DSD data in their platform firmware.

> To avoid the confusion Andy mentioned regarding "Clock Bindings" in
> ACPI, I suggest we explicitly define a schema using 'dpll-' prefixed
> properties. This effectively decouples it from the Clock subsystem
> expectations and treats it purely as a wiring map.
>
> Proposed ACPI Representation with proposed documentation [3]
>
> If the ACPI maintainers and Netdev maintainers agree that this

So long as you don't try to update the general ACPI support code in
drivers/acpi/ or the related header files, the matter is beyond the
role of the "ACPI maintainers".

That code though is based on the ACPI specification and the related
support documentation, modulo what is actually shipping in platform
firmware on systems in the field, so if you want or plan to modify it,
that needs to be based on something beyond kernel documentation.

> _DSD-based topology map is the acceptable "Pragmatic Standard" for this
> feature, I will document this schema in the kernel documentation and
> proceed with the implementation.

Kernel documentation is generally insufficient for defining new
OS-firmware interfaces based on ACPI because there are parties
involved in ACPI development beyond the kernel that may be interested
in the given interface and they may be able to provide useful
feedback.

I, personally, cannot really say how useful the interface you are
proposing would be and what it would be useful for.  Even if I liked
it, there still would be a problem of getting at least one platform
vendor on board.

> This solves the immediate need for an upcoming Intel SyncE enabled
> platform and provides a consistent blueprint for other vendors
> implementing SyncE on ACPI.

And what if, say, MSFT comes up with their own version of an interface
addressing the same problem space in the meantime and convinces
platform vendors to ship support for their variant instead of yours?

> [1] https://lore.kernel.org/linux-acpi/3bf214b9-8691-44f7-aa13-8169276a6c2b@redhat.com/
> [2] https://docs.kernel.org/firmware-guide/acpi/dsd/data-node-references.html
> [3] https://gist.github.com/ivecera/964c25f47f688f44ec70984742cf7fbd

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22 17:16       ` Ivan Vecera
@ 2026-01-22 19:57         ` Andrew Lunn
  2026-01-23  9:18           ` Ivan Vecera
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Lunn @ 2026-01-22 19:57 UTC (permalink / raw)
  To: Ivan Vecera
  Cc: Andy Shevchenko, Rafael J. Wysocki, Linus Walleij,
	Mika Westerberg, Rob Herring, Krzysztof Kozlowski,
	Arkadiusz Kubalewski, Jiri Pirko, Vadim Fedorenko, Jakub Kicinski,
	Paolo Abeni, David S. Miller, Mark Brown, Jan Aleszczyk,
	Michal Schmidt, Petr Oros, linux-acpi, netdev@vger.kernel.org,
	Sakari Ailus

> No, absolutely not. The drivers for these devices reside entirely in the
> kernel. They handle all the low-level register access, mux config, and
> hardware abstraction.
> 
> The userspace (e.g. synce4l daemon) is purely a Policy Engine. It uses
> a  generic, hardware-agnostic Netlink API to send high-level commands
> like "Lock to Pin 0" or "Set Priority 1". The in-kernel driver
> translates these generic commands into the specific register writes
> required for that chip (ZL3073x, etc.).

Great. But i've not seen the needed board configuration to allow
this. Where do you describe if the ingress inverter is needed? The
egress inverter? The clock divider on ingress, the clock divider on
egress. The frequency of the clock feeding the package, etc.

> But I don't use CCF bindings in this design, this design is about
> representing an opaque wire between two devices.

Is it really opaque? Don't you need to know its frequency, so you can
set the ingress divider? You can either ask it what it is, or you can
have a board property. Don't you want to know its phase? So you can
enable/disable the ingress inverter?

> This design is not a classic consumer/producer. The DPLL driver doesn't
> care where its inputs and outputs are connected, it doesn't need to
> know.

I really doubt that. Given how configurable these devices are, there
must be a need to know what is around them, in order for all this
configuration to make sense.

	Andrew

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22 19:57         ` Andrew Lunn
@ 2026-01-23  9:18           ` Ivan Vecera
  0 siblings, 0 replies; 9+ messages in thread
From: Ivan Vecera @ 2026-01-23  9:18 UTC (permalink / raw)
  To: Andrew Lunn, Andy Shevchenko, Vadim Fedorenko,
	Arkadiusz Kubalewski, Nitka, Grzegorz
  Cc: Rafael J. Wysocki, Linus Walleij, Mika Westerberg, Rob Herring,
	Krzysztof Kozlowski, Jiri Pirko, Jakub Kicinski, Paolo Abeni,
	David S. Miller, Mark Brown, Jan Aleszczyk, Michal Schmidt,
	Petr Oros, linux-acpi, netdev@vger.kernel.org, Sakari Ailus



On 1/22/26 8:57 PM, Andrew Lunn wrote:
>> No, absolutely not. The drivers for these devices reside entirely in the
>> kernel. They handle all the low-level register access, mux config, and
>> hardware abstraction.
>>
>> The userspace (e.g. synce4l daemon) is purely a Policy Engine. It uses
>> a  generic, hardware-agnostic Netlink API to send high-level commands
>> like "Lock to Pin 0" or "Set Priority 1". The in-kernel driver
>> translates these generic commands into the specific register writes
>> required for that chip (ZL3073x, etc.).
> 
> Great. But i've not seen the needed board configuration to allow
> this. Where do you describe if the ingress inverter is needed? The
> egress inverter? The clock divider on ingress, the clock divider on
> egress. The frequency of the clock feeding the package, etc.

On this specific platform, the DPLL device is not a "blank slate" at
boot. It has internal flash that is pre-programmed by the OEM during
manufacturing. So ACPI does not need to store the exact state of each
piece of the HW you mentioned (dividers, inverters, etc.).

The flash contains the board-specific initialization:
* Base frequencies (XO inputs)
* Default divider values
* Signal conditioning (inverters, impedance) required for the specific
   board layout.

When the OS loads, the DPLL state is already in a valid, board-compliant
state. The zl3073x driver reads this current state from the hardware.

The one thing the DPLL's internal flash cannot know is "Which OS device
is connected to my pin X?". The DPLL knows that pin X is configured for
25MHz, but it doesn't know that pin X is physically wired to eth0
(PCIe bus 04:00.0), and it cannot provide such information.

At this point only ACPI knows this, because it knows the board
topology.

This is precisely why we need the ACPI schema I proposed. We do not need
ACPI to replicate the entire configuration that already exists in the
DPLL's flash. We only need ACPI to bridge that specific Topology Gap:

* ACPI: "DPLL pin 0 connected to this NIC as 'rclk'."
         "DPLL pin 1 connected to this NIC as 'synce_ref'."
* NIC Driver: "DPLL pin 0 is connected to my recovered clock"
               "DPLL pin 1 is connected to my SyncE ref"
* Userspace (synce4l): Uses this map to make runtime decisions
   (e.g., "network conditions changed, switch the DPLL to lock onto
   NIC2's recovered clock").
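
For illustration, such a wiring map could be expressed with _DSD device
properties along these lines. This is only a rough ASL sketch: the
"dpll-" property names and the package layout are my assumptions for
the sake of the example, not the final schema from the proposal.

```asl
// Rough ASL sketch only -- "dpll-" property names and package layout
// are illustrative assumptions, not the final proposed schema.
Device (NIC0)
{
    Name (_DSD, Package () {
        ToUUID ("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),  // Device Properties UUID
        Package () {
            // Reference to the DPLL provider device, plus the pin
            // indices and roles this NIC's port is wired to.
            Package () { "dpll-device", \_SB.DPLL },
            Package () { "dpll-pins", Package () { 0, 1 } },
            Package () { "dpll-pin-names", Package () { "rclk", "synce_ref" } },
        }
    })
}
```

The NIC driver would read these properties, resolve the DPLL device
reference, and report the pin indices and roles over RtNetlink, without
ever touching the clock signals themselves.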

>> But I don't use CCF bindings in this design; it is about
>> representing an opaque wire between two devices.
> 
> Is it really opaque? Don't you need to know its frequency, so you can
> set the ingress divider? You can either ask it what it is, or you can
> have a board property. Don't you want to know its phase? So you can
> enable/disable the ingress inverter?

Yes, the system needs to know this, but the NIC driver does not. The NIC
driver just says "I am wired to pins X and Y." The DPLL driver monitors
the DPLL device and its pin states by querying the hardware, reports
that state (device lock status, pin phase offsets, signal quality,
fractional frequency offset, etc.) to userspace (e.g. synce4l) and
accepts commands from it to change the device and pin configuration
(pin priorities, phase compensation, etc.).

Thanks,
Ivan


^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE)
  2026-01-22 18:04 ` Rafael J. Wysocki
@ 2026-01-23  9:42   ` Kubalewski, Arkadiusz
  0 siblings, 0 replies; 9+ messages in thread
From: Kubalewski, Arkadiusz @ 2026-01-23  9:42 UTC (permalink / raw)
  To: Rafael J. Wysocki, Vecera, Ivan
  Cc: Andy Shevchenko, Linus Walleij, Andrew Lunn, Mika Westerberg,
	Rob Herring, Krzysztof Kozlowski, Jiri Pirko, Vadim Fedorenko,
	Jakub Kicinski, Paolo Abeni, David S. Miller, Mark Brown,
	Jan Aleszczyk, Schmidt, Michal, Oros, Petr,
	linux-acpi@vger.kernel.org, netdev@vger.kernel.org

>From: Rafael J. Wysocki <rafael@kernel.org>
>Sent: Thursday, January 22, 2026 7:04 PM
>
>Hi,
>
>On Wed, Jan 21, 2026 at 3:43 PM Ivan Vecera <ivecera@redhat.com> wrote:
>>
>> Hi Andy, Rafael and others,
>>
>> (based on the previous thread [1] - now involving more people from
>>   networking and DPLL)
>>
>> Thank you for the insights on _CRS and ClockInput.
>>
>> I think we have circled the issue enough to identify the core
>> disconnect:
>> * While the physical signals on these wires are indeed clocks (10MHz,
>>    etc.), from the OS driver perspective, this is not a "Clock Resource"
>> issue. The NIC driver does not need to gate, rate-set, or
>> power-manage these clocks (which is what _CRS/ClockInput implies).
>> * Instead, the NIC driver simply needs a Topology Map. It needs to know:
>>    "My Port 0 (Consumer) is physically wired to DPLL Pin 3 (Provider)."
>>
>> The NIC driver needs this Pin Index (3) specifically to report it via
>> the RtNetlink. This allows the userspace daemon (e.g., synce4l or
>> linux-ptp) to see the relationship and decide to configure the DPLL
>> via the DPLL Netlink API to lock onto that specific input.
>>
>> A generic ClockInput resource in _CRS is anonymous and unordered. The
>> OS abstracts it into a handle, but it fails to convey the specific pin
>> index required for this userspace reporting.
>>
>> Since ACPI lacks a native "Graph/Topology" object for inter-device
>> dependencies of this nature, and _CRS obscures the index information
>> required by userspace, I propose we treat _DSD properties as the
>> de-facto standard [2] for modeling SyncE topology in ACPI.
>
>If you want to call something a "standard", especially if it involves
>ACPI, it is generally not sufficient to talk to Linux kernel people only
>about it.
>
>ACPI is about agreements between multiple parties, including multiple OS
>providers (Linux being just one of them) and multiple platform vendors
>(OEMs).
>
>To a minimum, you'd need commitment from at least one platform vendor to
>ship the requisite _DSD data in their platform firmware.
>
>> To avoid the confusion Andy mentioned regarding "Clock Bindings" in
>> ACPI, I suggest we explicitly define a schema using 'dpll-' prefixed
>> properties. This effectively decouples it from the Clock subsystem
>> expectations and treats it purely as a wiring map.
>>
>> Proposed ACPI Representation with proposed documentation [3]
>>
>> If the ACPI maintainers and Netdev maintainers agree that this
>
>So long as you don't try to update the general ACPI support code in
>drivers/acpi/ or the related header files, the matter is beyond the role
>of the "ACPI maintainers".
>
>That code though is based on the ACPI specification and the related
>support documentation, modulo what is actually shipping in platform
>firmware on systems in the field, so if you want or plan to modify it,
>that needs to be based on something beyond kernel documentation.
>
>> _DSD-based topology map is the acceptable "Pragmatic Standard" for
>> this feature, I will document this schema in the kernel documentation
>> and proceed with the implementation.
>
>Kernel documentation is generally insufficient for defining new OS-
>firmware interfaces based on ACPI because there are parties involved in
>ACPI development beyond the kernel that may be interested in the given
>interface and they may be able to provide useful feedback.
>
>I, personally, cannot really say how useful the interface you are
>proposing would be and what it would be useful for.  Even if I liked it,
>there still would be a problem of getting at least one platform vendor on
>board.
>

Hi Rafael,

Thank you for your review!

Yes, we are here. Do you need someone specific to step up and provide
such a commitment?
The patch series proposed by Ivan, which is part of this discussion,
also contains Intel patches.

>> This solves the immediate need for an upcoming Intel SyncE enabled
>> platform and provides a consistent blueprint for other vendors
>> implementing SyncE on ACPI.
>
>And what if, say, MSFT come up with their own version of an interface
>addressing the same problem space in the meantime and convince platform
>vendors to ship support for their variant instead of yours?
>

We can only hope that some expert/maintainer would catch it during the
review process. Otherwise we would end up with two similar, competing
interfaces for the same problem space.

Thank you!
Arkadiusz

>> [1] https://lore.kernel.org/linux-acpi/3bf214b9-8691-44f7-aa13-8169276a6c2b@redhat.com/
>> [2] https://docs.kernel.org/firmware-guide/acpi/dsd/data-node-references.html
>> [3] https://gist.github.com/ivecera/964c25f47f688f44ec70984742cf7fbd


end of thread, other threads:[~2026-01-23  9:42 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-21 14:42 [Proposal,Question - refresh] ACPI representation of DPLL/Ethernet dependencies (SyncE) Ivan Vecera
2026-01-22  0:09 ` Andrew Lunn
2026-01-22 11:50   ` Ivan Vecera
2026-01-22 15:46     ` Andrew Lunn
2026-01-22 17:16       ` Ivan Vecera
2026-01-22 19:57         ` Andrew Lunn
2026-01-23  9:18           ` Ivan Vecera
2026-01-22 18:04 ` Rafael J. Wysocki
2026-01-23  9:42   ` Kubalewski, Arkadiusz
