From: Vidya Sagar <vidyas@nvidia.com>
To: bhelgaas@google.com, linux-pci@vger.kernel.org
Cc: vsethi@nvidia.com, jbodner@nvidia.com, kthota@nvidia.com
Subject: Query about setting MaxPayloadSize for the best performance
Date: Thu, 22 Jun 2023 11:04:03 +0530
Message-ID: <8bde8aa8-d385-aadb-f60b-9a81e7bf165c@nvidia.com>
Hi,
This is about configuring the MPS (MaxPayloadSize) across a PCIe
hierarchy during enumeration. I would like to highlight how the MPS
that gets configured throughout the hierarchy depends on the MPS value
already present in the root port's DevCtl register.
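For reference, the MPS in effect for a device lives in bits 7:5 of its
Device Control register (PCI_EXP_DEVCTL_PAYLOAD); an encoded value v
means 128 << v bytes. A minimal kernel-style sketch of decoding it,
essentially what pcie_get_mps() in drivers/pci/pci.c does:

#include <linux/pci.h>

/* Decode the MPS currently programmed in DevCtl: bits 7:5 encode
 * the payload size as 128 << v bytes. */
static int read_current_mps(struct pci_dev *dev)
{
        u16 devctl;

        pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &devctl);
        return 128 << ((devctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
}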
Initial root port's configuration (CASE-A):
Root port is capable of 128- and 256-byte MPS, but its MPS is set
to 128 bytes in its DevCtl register.
Observation:
CASE-A-1:
When a device that supports 256-byte MPS is connected directly to
this root port, its DevCtl register gets only 128-byte MPS (even
though both the root port and the endpoint support 256 bytes). This
results in sub-optimal performance.
CASE-A-2:
When a device that supports only 128-byte MPS is connected to the
root port through a PCIe switch (which supports up to 256-byte MPS),
the entire hierarchy is configured for 128-byte MPS.
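Both observations above match the default enumeration policy: a newly
scanned device is simply matched to whatever MPS its upstream bridge
is already running, so a root port left at 128 bytes pins everything
below it to 128 bytes. A rough sketch of that logic, modeled on
pci_configure_mps() in drivers/pci/probe.c, with read_current_mps()
as sketched earlier:

/* Default policy sketch: adopt the parent's current MPS when the
 * device is capable of it; otherwise leave the settings alone. */
static void default_mps_policy(struct pci_dev *dev)
{
        struct pci_dev *bridge = pci_upstream_bridge(dev);
        int p_mps;

        if (!bridge)
                return;

        p_mps = pcie_get_mps(bridge);           /* 128 in CASE-A */
        if (pcie_get_mps(dev) == p_mps)
                return;

        /* dev->pcie_mpss is the encoded capability from DevCap */
        if (128 << dev->pcie_mpss >= p_mps)
                pcie_set_mps(dev, p_mps);
}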
Initial root port's configuration (CASE-B):
Root port is capable of 128- and 256-byte MPS, and its MPS is set
to 256 bytes in its DevCtl register.
Observation:
CASE-B-1:
When a device that supports 256-byte MPS is connected directly to
this root port, 256-byte MPS is set in its DevCtl register. This
gives the expected performance.
CASE-B-2:
When a device that supports only 128-byte MPS is connected to the
root port through a PCIe switch (which supports up to 256-byte MPS),
the rest of the hierarchy gets configured for 256-byte MPS. But since
the endpoint behind the switch supports only 128 bytes, it can now
receive TLPs larger than its MPS, which it must treat as Malformed
TLPs, and the endpoint's functionality is broken.
One solution to address this issue is to leave the root port's DevCtl
at 128-byte MPS and append 'pci=pcie_bus_perf' to the kernel command
line. This changes both MPS and MRRS (Max Read Request Size) across
the hierarchy so that the system offers the best performance.
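For illustration, the net effect of 'pcie_bus_perf' can be sketched as
below. This glosses over the actual top-down walk done by
pcie_bus_configure_settings() in drivers/pci/probe.c, but captures the
key idea: MRRS is clamped to the device's MPS, so read completions
returned to the device never exceed what it can absorb:

/* Simplified "perf" policy sketch: run the device at its own maximum
 * MPS and clamp MRRS to that value, so a 128-byte-only endpoint keeps
 * working even when the rest of the hierarchy runs at 256 bytes. */
static void perf_policy(struct pci_dev *dev)
{
        int mps = 128 << dev->pcie_mpss;        /* device capability */

        pcie_set_mps(dev, mps);
        pcie_set_readrq(dev, mps);              /* MRRS follows MPS */
}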
I'm not fully aware of the history of the various 'pcie_bus_xxxx'
options, but since there appears to be no downside to making
'pcie_bus_perf' the default, I'm wondering why we can't just use
'pcie_bus_perf' as the default configuration instead of the existing
default, which has the issues mentioned in CASE-A-1 and CASE-B-2.
Thanks,
Vidya Sagar