From: Johannes Erdfelt <johannes@erdfelt.com>
To: Alex Elder <elder@riscstar.com>,
robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org,
bhelgaas@google.com, lpieralisi@kernel.org,
kwilczynski@kernel.org, mani@kernel.org, vkoul@kernel.org,
kishon@kernel.org, dlan@gentoo.org, guodong@riscstar.com,
pjw@kernel.org, palmer@dabbelt.com, aou@eecs.berkeley.edu,
alex@ghiti.fr, p.zabel@pengutronix.de,
christian.bruel@foss.st.com, shradha.t@samsung.com,
krishna.chundru@oss.qualcomm.com, qiang.yu@oss.qualcomm.com,
namcao@linutronix.de, thippeswamy.havalige@amd.com,
inochiama@gmail.com, devicetree@vger.kernel.org,
linux-pci@vger.kernel.org, linux-phy@lists.infradead.org,
spacemit@lists.linux.dev, linux-riscv@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/7] Introduce SpacemiT K1 PCIe phy and host controller
Date: Tue, 28 Oct 2025 11:42:50 -0700 [thread overview]
Message-ID: <20251028184250.GM15521@sventech.com> (raw)
In-Reply-To: <aQEElhSCRNqaPf8m@aurel32.net>
On Tue, Oct 28, 2025, Aurelien Jarno <aurelien@aurel32.net> wrote:
> Hi Alex,
>
> On 2025-10-17 11:21, Alex Elder wrote:
> > On 10/16/25 11:47 AM, Aurelien Jarno wrote:
> > > Hi Alex,
> > >
> > > On 2025-10-13 10:35, Alex Elder wrote:
> > > > This series introduces a PHY driver and a PCIe driver to support PCIe
> > > > on the SpacemiT K1 SoC. The PCIe implementation is derived from a
> > > > Synopsys DesignWare PCIe IP. The PHY driver supports one combination
> > > > PCIe/USB PHY as well as two PCIe-only PHYs. The combo PHY port uses
> > > > one PCIe lane, and the other two ports each have two lanes. All PCIe
> > > > ports operate at 5 GT/second.
> > > >
> > > > The PCIe PHYs must be configured using a value that can only be
> > > > determined using the combo PHY, operating in PCIe mode. To allow
> > > > that PHY to be used for USB, the calibration step is performed by
> > > > the PHY driver automatically at probe time. Once this step is done,
> > > > the PHY can be used for either PCIe or USB.
> > > >
> > > > Version 2 of this series incorporates suggestions made during the
> > > > review of version 1. Specific highlights are detailed below.
> > >
> > > With the issues mentioned in patch 4 fixed, this patchset works fine for
> > > me. That said, I had to disable ASPM by passing pcie_aspm=off on the
> > > command line, as it is now enabled by default since 6.18-rc1 [1]. At
> > > this stage, I am not sure whether it is an issue with my NVMe drive
> > > or an issue with the controller.
> >
> > Can you describe what symptoms you had that required you to pass
> > "pcie_aspm=off" on the kernel command line?
> >
> > I see these lines in my boot log related to ASPM (and added by
> > the commit you link to), for both pcie1 and pcie2:
> >
> > pci 0000:01:00.0: ASPM: DT platform, enabling L0s-up L0s-dw L1
> > ASPM-L1.1 ASPM-L1.2 PCI-PM-L1.1 PCI-PM-L1.2
> > pci 0000:01:00.0: ASPM: DT platform, enabling ClockPM
> >
> > . . .
> >
> > nvme nvme0: pci function 0000:01:00.0
> > nvme 0000:01:00.0: enabling device (0000 -> 0002)
> > nvme nvme0: allocated 64 MiB host memory buffer (16 segments).
> > nvme nvme0: 8/0/0 default/read/poll queues
> > nvme0n1: p1
> >
> > My NVMe drive on pcie1 works correctly.
> > https://www.crucial.com/ssd/p3/CT1000P3SSD8
> >
> > root@bananapif3:~# df /a
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > /dev/nvme0n1p1 960302804 32063304 879385040 4% /a
> > root@bananapif3:~#
>
> Sorry for the delay; it took me some time to test more things and
> different SSDs. First of all, I still see the issue with your v3 on top
> of v6.18-rc3, which includes some fixes for ASPM support [1].
>
> I have tried 3 different SSDs; none of them work, but the symptoms
> are different, although all are related to ASPM (pcie_aspm=off works
> around the issue).
>
> With a Fox Spirit PM18 SSD (Silicon Motion, Inc. SM2263EN/SM2263XT
> controller), I do not get more than this:
> [ 5.196723] nvme nvme0: pci function 0000:01:00.0
> [ 5.198843] nvme 0000:01:00.0: enabling device (0000 -> 0002)
>
> With a WD Blue SN570 SSD, I get this:
> [ 5.199513] nvme nvme0: pci function 0000:01:00.0
> [ 5.201653] nvme 0000:01:00.0: enabling device (0000 -> 0002)
> [ 5.270334] nvme nvme0: allocated 32 MiB host memory buffer (8 segments).
> [ 5.277624] nvme nvme0: 8/0/0 default/read/poll queues
> [ 19.192350] nvme nvme0: using unchecked data buffer
> [ 48.108400] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
> [ 48.113885] nvme nvme0: Does your device have a faulty power saving mode enabled?
> [ 48.121346] nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
> [ 48.176878] nvme0n1: I/O Cmd(0x2) @ LBA 0, 8 blocks, I/O Error (sct 0x3 / sc 0x71)
> [ 48.181926] I/O error, dev nvme0n1, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
> [ 48.243670] nvme 0000:01:00.0: enabling device (0000 -> 0002)
> [ 48.246914] nvme nvme0: Disabling device after reset failure: -19
> [ 48.280495] Buffer I/O error on dev nvme0n1, logical block 0, async page read
>
>
> Finally with a PNY CS1030 SSD (Phison PS5015-E15 controller), I get this:
> [ 5.215631] nvme nvme0: pci function 0000:01:00.0
> [ 5.220435] nvme 0000:01:00.0: enabling device (0000 -> 0002)
> [ 5.329565] nvme nvme0: allocated 64 MiB host memory buffer (16 segments).
> [ 66.540485] nvme nvme0: I/O tag 28 (401c) QID 0 timeout, disable controller
> [ 66.585245] nvme 0000:01:00.0: probe with driver nvme failed with error -4
>
> Note that I also tested this last SSD on a VisionFive 2 board with exactly
> the same kernel (I just moved the SSD and booted), and it works fine with ASPM
> enabled (confirmed with lspci).
I have been testing this patchset recently as well, but on an Orange Pi
RV2 board instead (and an extra RV2-specific patch to enable power to
the M.2 slot).
I ran into the same symptoms you had ("QID 0 timeout" after about 60
seconds), though I'm using an Intel 600p. I can confirm that my NVMe
drive works fine with the "pcie_aspm=off" workaround as well.
Of note, I don't have this problem with the vendor 6.6.63 kernel.
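For what it's worth, a quick way to check whether ASPM is actually in
effect on the link is shown below; the 0000:01:00.0 address is taken
from the logs above (adjust for the setup at hand), and the policy file
is only present when CONFIG_PCIEASPM is enabled:

  # kernel-wide ASPM policy currently in effect
  cat /sys/module/pcie_aspm/parameters/policy

  # link capabilities and current ASPM state of the NVMe endpoint
  lspci -vv -s 0000:01:00.0 | grep -E 'LnkCap|LnkCtl'

With pcie_aspm=off on the command line, LnkCtl should report "ASPM
Disabled" on both ends of the link.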
> > I basically want to know if there's something I should do with this
> > driver to address this. (Mani, can you explain?)
>
> I am not sure how to debug that on my side. What I know is that it is
> linked to ASPM L1; L0s works fine. In other words, the SSDs work fine
> with this patch:
>
> diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> index 79b9651584737..1a134ec68b591 100644
> --- a/drivers/pci/pcie/aspm.c
> +++ b/drivers/pci/pcie/aspm.c
> @@ -801,8 +801,8 @@ static void pcie_aspm_override_default_link_state(struct pcie_link_state *link)
> if (of_have_populated_dt()) {
> if (link->aspm_support & PCIE_LINK_STATE_L0S)
> link->aspm_default |= PCIE_LINK_STATE_L0S;
> - if (link->aspm_support & PCIE_LINK_STATE_L1)
> - link->aspm_default |= PCIE_LINK_STATE_L1;
> +// if (link->aspm_support & PCIE_LINK_STATE_L1)
> +// link->aspm_default |= PCIE_LINK_STATE_L1;
> override = link->aspm_default & ~link->aspm_enabled;
> if (override)
> pci_info(pdev, "ASPM: default states%s%s\n",
>
> I can test more things if needed, but I don't know where to start.
I'm not a PCIe expert, but I'm more than happy to test as well.
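One way to narrow this down without rebuilding the kernel might be the
per-link ASPM controls that recent kernels expose in sysfs when
CONFIG_PCIEASPM is enabled, which allow L1 and the L1 substates to be
toggled individually at runtime. A rough sketch, assuming the NVMe
endpoint is 0000:01:00.0 as in the logs above and that the attributes
are present (they only show up for states the link supports):

  cd /sys/bus/pci/devices/0000:01:00.0/link
  ls                  # e.g. clkpm l0s_aspm l1_aspm l1_1_aspm l1_2_aspm
  echo 0 > l1_aspm    # disable ASPM L1 on this link
  echo 0 > l1_1_aspm  # or keep L1 and disable only the substates
  echo 0 > l1_2_aspm

If the problem only shows up while l1_aspm (or one of the substates) is
enabled, that would at least narrow down which state the controller or
the drives mishandle.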
JE