linux-pci.vger.kernel.org archive mirror
From: "Ilpo Järvinen" <ilpo.jarvinen@linux.intel.com>
To: Lukas Wunner <lukas@wunner.de>
Cc: linux-pci@vger.kernel.org, Bjorn Helgaas <helgaas@kernel.org>,
	 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	 Rob Herring <robh@kernel.org>,
	Krzysztof Wilczyński <kw@linux.com>,
	 Alexandru Gagniuc <mr.nuke.me@gmail.com>,
	 Krishna chaitanya chundru <quic_krichai@quicinc.com>,
	 Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	 "Rafael J . Wysocki" <rafael@kernel.org>,
	linux-pm@vger.kernel.org,  Bjorn Helgaas <bhelgaas@google.com>,
	LKML <linux-kernel@vger.kernel.org>,
	 Alex Deucher <alexdeucher@gmail.com>,
	 Daniel Lezcano <daniel.lezcano@linaro.org>,
	 Amit Kucheria <amitk@kernel.org>,
	Zhang Rui <rui.zhang@intel.com>
Subject: Re: [PATCH v3 06/10] PCI: Cache PCIe device's Supported Speed Vector
Date: Mon, 1 Jan 2024 20:31:39 +0200 (EET)
Message-ID: <94973372-91fc-27fc-b187-7427af9e4b7d@linux.intel.com>
In-Reply-To: <20231230151931.GA25718@wunner.de>


On Sat, 30 Dec 2023, Lukas Wunner wrote:

> On Fri, Sep 29, 2023 at 02:57:19PM +0300, Ilpo Järvinen wrote:
> > The Supported Link Speeds Vector in the Link Capabilities Register 2
> > corresponds to the bus below on Root Ports and Downstream Ports,
> > whereas it corresponds to the bus above on Upstream Ports and
> > Endpoints.
> 
> It would be good to add a pointer to the spec here.  I think the
> relevant section is PCIe r6.1 sec 7.5.3.18 which says:
> 
>  "Supported Link Speeds Vector - This field indicates the supported
>   Link speed(s) of the associated Port."
>                        ^^^^^^^^^^^^^^^
> 
> Obviously the associated port is upstream on a Switch Upstream Port
> or Endpoint, whereas it is downstream on a Switch Downstream Port
> or Root Port.
> 
> Come to think of it, what about edge cases such as RCiEPs?

On real HW I've seen, RCiEPs don't seem to have these speeds at all 
(PCIe r6.1, sec 7.5.3):

"The Link Capabilities, Link Status, and Link Control registers are 
required for all Root Ports, Switch Ports, Bridges, and Endpoints that are 
not RCiEPs. For Functions that do not implement the Link Capabilities, 
Link Status, and Link Control registers, these spaces must be hardwired to 
0. Link Capabilities 2, Link Status 2, and Link Control 2 registers are
required for all Root Ports, Switch Ports, Bridges, and Endpoints (except 
for RCiEPs) that implement capabilities requiring those registers. For 
Functions that do not implement the Link Capabilities 2, Link Status 2, 
and Link Control 2 registers, these spaces must be hardwired to 0b."

> > Only the former is currently cached in pcie_bus_speeds in
> > the struct pci_bus. The link speeds that are supported are the
> > intersection of these two.
> 
> I'm wondering if caching both is actually necessary.  Why not cache
> just the intersection?  Do we need either of the two somewhere?

The intersection is enough, at least for bwctrl. The only downside,
barely worth mentioning, is that the bus SLSV has to be re-read when
Function 0 sets the intersection.
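
As a sketch of what I mean (pcie_read_supported_speeds() is the made-up
helper from above; pcie_bus_speeds would then hold the intersection
instead of the bus-side vector alone):

/* Sketch only: when Function 0 is enumerated, store the intersection
 * of both ends of the link.  The upstream bridge's (bus side) vector
 * has to be re-read here because the cache no longer keeps it
 * separately.
 */
static void pcie_cache_supported_speeds(struct pci_dev *dev)
{
	struct pci_dev *bridge = dev->bus->self;

	if (PCI_FUNC(dev->devfn) != 0 || !bridge)
		return;

	dev->bus->pcie_bus_speeds = pcie_read_supported_speeds(bridge) &
				    pcie_read_supported_speeds(dev);
}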

I can think of somebody wanting to expose both supported speed lists
to userspace through sysfs (not done by this patch series), but they could
be read from the registers in that case, so that use case doesn't really
matter much, IMO.

> > Store the device's Supported Link Speeds Vector into the struct pci_bus
> > when Function 0 is enumerated (Multi-Function Devices must have the
> > same speeds for all Functions) so that the intersection of Supported
> > Link Speeds can easily be calculated.
> 
> Might want to add an explanation what you're going to need this for,
> I assume it's accessed frequently by the bandwidth throttling driver
> in a subsequent patch?

Yes. I tend to avoid forward references because some maintainers
complain about them (which leads to minimal commit messages where the
true motivation has to be hidden, because "future" cannot be used to
motivate a change even if it's often the truest motivation within a
patch series). But I'll add a fwd ref here to make it more obvious. :-)
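
For context, the consumer I have in mind would look roughly like this (a
sketch only; pcie_supported_speeds() is a made-up accessor for the cached
intersection, while the LNKCTL2 RMW accessor and the TLS field are the
real ones):

/* Sketch of a bwctrl-style user: reject speeds outside the cached
 * intersection, then program the Target Link Speed field of LNKCTL2
 * with the locked RMW accessor from patch 1 of this series.
 * TLS encoding 1 (2.5 GT/s) corresponds to vector bit 0.
 */
static int pcie_bwctrl_set_speed(struct pci_dev *port, u16 target_sls)
{
	if (!(pcie_supported_speeds(port) & BIT(target_sls - 1)))
		return -EINVAL;

	return pcie_capability_clear_and_set_word(port, PCI_EXP_LNKCTL2,
						  PCI_EXP_LNKCTL2_TLS,
						  target_sls);
}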

-- 
 i.

Thread overview: 29+ messages
2023-09-29 11:57 [PATCH v3 00/10] Add PCIe Bandwidth Controller Ilpo Järvinen
2023-09-29 11:57 ` [PATCH v3 01/10] PCI: Protect Link Control 2 Register with RMW locking Ilpo Järvinen
2023-12-30 10:33   ` Lukas Wunner
2023-09-29 11:57 ` [PATCH v3 02/10] drm/radeon: Use RMW accessors for changing LNKCTL2 Ilpo Järvinen
2023-09-29 11:57 ` [PATCH v3 03/10] drm/amdgpu: " Ilpo Järvinen
2023-09-29 11:57 ` [PATCH v3 04/10] RDMA/hfi1: " Ilpo Järvinen
2023-09-29 13:03   ` Dean Luick
2023-09-29 11:57 ` [PATCH v3 05/10] PCI: Store all PCIe Supported Link Speeds Ilpo Järvinen
2023-12-30 11:45   ` Lukas Wunner
2023-12-30 19:30     ` Lukas Wunner
2024-01-01 16:26       ` Ilpo Järvinen
2024-01-01 16:40         ` Lukas Wunner
2024-01-01 16:53           ` Ilpo Järvinen
2023-09-29 11:57 ` [PATCH v3 06/10] PCI: Cache PCIe device's Supported Speed Vector Ilpo Järvinen
2023-12-30 15:19   ` Lukas Wunner
2024-01-01 18:31     ` Ilpo Järvinen [this message]
2024-01-03 16:51       ` Lukas Wunner
2023-09-29 11:57 ` [PATCH v3 07/10] PCI/LINK: Re-add BW notification portdrv as PCIe BW controller Ilpo Järvinen
2023-12-30 15:58   ` Lukas Wunner
2024-01-01 17:37     ` Ilpo Järvinen
2024-01-01 18:11       ` Lukas Wunner
2023-09-29 11:57 ` [PATCH v3 08/10] PCI/bwctrl: Add "controller" part into PCIe bwctrl Ilpo Järvinen
2023-12-30 18:49   ` Lukas Wunner
2024-01-01 18:12     ` Ilpo Järvinen
2024-01-03 16:40       ` Lukas Wunner
2023-09-29 11:57 ` [PATCH v3 09/10] thermal: Add PCIe cooling driver Ilpo Järvinen
2023-12-30 19:08   ` Lukas Wunner
2024-01-01 16:39     ` Ilpo Järvinen
2023-09-29 11:57 ` [PATCH v3 10/10] selftests/pcie_bwctrl: Create selftests Ilpo Järvinen
