Linux PCI subsystem development
* Query about setting MaxPayloadSize for the best performance
@ 2023-06-22  5:34 Vidya Sagar
  2023-06-22 22:52 ` Bjorn Helgaas
From: Vidya Sagar @ 2023-06-22  5:34 UTC (permalink / raw)
  To: bhelgaas, linux-pci; +Cc: vsethi, jbodner, kthota


Hi,
This is about configuring the MPS (MaxPayloadSize) in the PCIe hierarchy 
during enumeration. I would like to highlight how the MPS configured 
across a hierarchy depends on the MPS value already present in the root 
port's DevCtl register.
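For reference, the MPS in effect is the Max_Payload_Size field (bits 7:5) of the Device Control register, encoded per the PCIe spec as a power of two starting at 128 bytes. A minimal decode sketch (the DevCtl values used below are illustrative, not taken from any particular device):

```python
def decode_mps(devctl: int) -> int:
    """Decode the Max_Payload_Size field (bits 7:5) of a PCIe DevCtl value.

    Encoding per the PCIe spec: 0b000 -> 128 bytes, 0b001 -> 256 bytes,
    ... up to 0b101 -> 4096 bytes; 0b110 and 0b111 are reserved.
    """
    field = (devctl >> 5) & 0x7
    if field > 5:
        raise ValueError("reserved Max_Payload_Size encoding")
    return 128 << field

# Illustrative register values:
print(decode_mps(0x2810))  # bits 7:5 == 0b000 -> 128
print(decode_mps(0x2830))  # bits 7:5 == 0b001 -> 256
```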

Initial root port's configuration (CASE-A):
     Root port is capable of 128 & 256 MPS, but its MPS is set to "128" 
in its DevCtl register.

Observation:
     CASE-A-1:
         When a device with support for 256MPS is connected directly to 
this root port, only 128MPS is set in its DevCtl register (though both 
the root port and the endpoint support 256MPS). This results in 
sub-optimal performance.
     CASE-A-2:
         When a device with support for only 128MPS is connected to the 
root port through a PCIe switch (that has support for up to 256MPS), the 
entire hierarchy is configured for 128MPS.

Initial root port's configuration (CASE-B):
     Root port is capable of 128 & 256 MPS, but its MPS is set to "256" 
in its DevCtl register.

Observation:
     CASE-B-1:
         When a device with support for 256MPS is connected directly to 
this root port, 256MPS is set in its DevCtl register. This gives the 
expected performance.
     CASE-B-2:
         When a device with support for only 128MPS is connected to the 
root port through a PCIe switch (that has support for up to 256MPS), the 
rest of the hierarchy gets configured for 256MPS, but since the endpoint 
behind the switch supports only 128MPS, this endpoint's functionality is 
broken.
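In both failing cases the underlying constraint is the same: every device on a path must run with an MPS no larger than the smallest MPS capability anywhere on that path. A small sketch of that rule (the topologies and capability lists below are hypothetical, mirroring the cases above):

```python
def safe_mps(path_caps):
    """Largest MPS a path can safely use: the minimum MPS capability
    among all devices on that path (root port, switch ports, endpoint)."""
    return min(path_caps)

# CASE-A-2 / CASE-B-2 topology: RP (256-capable) -> switch (256) -> EP (128).
print(safe_mps([256, 256, 128]))  # 128: running the tree at 256 breaks the EP
# CASE-B-1 topology: RP (256-capable) -> EP (256).
print(safe_mps([256, 256]))       # 256: the optimal setting
```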

One solution to address this issue is to leave the root port's DevCtl at 
128MPS and append 'pci=pcie_bus_perf' to the kernel command line. This 
changes both MPS and MRRS (Max Read Request Size) across the hierarchy 
in such a way that the system offers the best performance.
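My reading of the documented 'pcie_bus_perf' behavior, sketched below (a rough per-device model, not the actual kernel code): each device's MPS is raised to the largest value allowed by its parent, and its MRRS is clamped to that MPS, so a small-MPS endpoint never requests completions larger than it can accept even when the rest of the tree runs at 256.

```python
def perf_tune(parent_mps, dev_mps_cap):
    """Rough model of per-device 'pcie_bus_perf' tuning: MPS becomes the
    largest value supported by both the device and its parent; MRRS is
    then set equal to that MPS."""
    mps = min(parent_mps, dev_mps_cap)
    mrrs = mps
    return mps, mrrs

# CASE-B-2 endpoint: parent (switch downstream port) at 256, EP capable of 128.
print(perf_tune(256, 128))  # (128, 128): the EP keeps working at full tree speed
# CASE-B-1 endpoint: parent at 256, EP capable of 256.
print(perf_tune(256, 256))  # (256, 256)
```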

I'm not fully aware of the history of the various 'pcie_bus_xxxx' 
options, but since there is no downside to making 'pcie_bus_perf' the 
default, I'm wondering why we can't just use 'pcie_bus_perf' as the 
default configuration instead of the existing default, which has the 
issues mentioned in CASE-A-1 and CASE-B-2.

Thanks,
Vidya Sagar
