From: "Heiko Stübner" <heiko@sntech.de>
To: Vicente Bergas <vicencb@gmail.com>
Cc: "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>
Subject: Re: PCIe missing on RK3399
Date: Wed, 11 Dec 2024 14:36:38 +0100 [thread overview]
Message-ID: <4929144.F8r316W7xa@diego> (raw)
In-Reply-To: <CAAMcf8DL27-PYdEJ4eX1-PJnQDWB0osgHR0gbHAcr_oRniWOOg@mail.gmail.com>
Hi Vicente,
On Wednesday, 11 December 2024 at 13:55:01 CET, Vicente Bergas wrote:
> I've tested the Linux kernel 6.13-rc1 and rc2, and in both cases PCIe
> is not detected on the RK3399 platform (rk3399-gru-kevin), whereas
> kernel version 6.12.3 works fine.
>
> 6.13 configuration is based on the same one as 6.12 and there aren't
> any significant PCI-related differences.
>
> The messages from dmesg on 6.13 don't show any PCI-related errors.
>
> Does somebody know what is going on?
So I just booted a rk3399-puma-haikou with a PCIe NVMe adapter in the
PCIe slot, and I get:
[ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[ 0.000000] Linux version 6.13.0-rc2-00101-g260ae63734ff-dirty (hstuebner@phil) (aarch64-linux-gnu-gcc (Debian 14.2.0-6) 14.2.0, GNU ld (GNU Binutils for Debian) 2.43.1) #1134 SMP PREEMPT Tue Dec 10 21:06:34 CET 2024
...
[ 3.428114] rockchip-pcie f8000000.pcie: host bridge /pcie@f8000000 ranges:
[ 3.435978] rockchip-pcie f8000000.pcie: MEM 0x00fa000000..0x00fbdfffff -> 0x00fa000000
[ 3.445478] rockchip-pcie f8000000.pcie: IO 0x00fbe00000..0x00fbefffff -> 0x00fbe00000
[ 3.455298] rockchip-pcie f8000000.pcie: using DT '/pcie@f8000000' for 'ep' GPIO lookup
[ 3.455332] of_get_named_gpiod_flags: parsed 'ep-gpios' property of node '/pcie@f8000000[0]' - status (0)
[ 3.455359] gpio gpiochip4: Persistence not supported for GPIO 22
[ 3.499404] usb 3-1: new high-speed USB device number 2 using xhci-hcd
[ 3.664293] rockchip-pcie f8000000.pcie: PCI host bridge to bus 0000:00
[ 3.671770] pci_bus 0000:00: root bus resource [bus 00-1f]
[ 3.677936] pci_bus 0000:00: root bus resource [mem 0xfa000000-0xfbdfffff]
[ 3.685680] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff] (bus address [0xfbe00000-0xfbefffff])
[ 3.696474] pci 0000:00:00.0: [1d87:0100] type 01 class 0x060400 PCIe Root Port
[ 3.704852] hub 3-1:1.0: USB hub found
[ 3.709114] pci 0000:00:00.0: PCI bridge to [bus 00]
[ 3.714682] hub 3-1:1.0: 4 ports detected
[ 3.719334] pci 0000:00:00.0: bridge window [mem 0x00000000-0x000fffff]
[ 3.727028] pci 0000:00:00.0: supports D1
[ 3.731523] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[ 3.739745] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[ 3.748858] pci 0000:01:00.0: [144d:a804] type 00 class 0x010802 PCIe Endpoint
[ 3.757011] pci 0000:01:00.0: BAR 0 [mem 0x00000000-0x00003fff 64bit]
[ 3.764358] pci 0000:01:00.0: Max Payload Size set to 256 (was 128, max 256)
[ 3.772374] usb 4-1: new SuperSpeed USB device number 2 using xhci-hcd
[ 3.780132] pci 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x4 link at 0000:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
[ 3.803225] pci_bus 0000:01: busn_res: [bus 01-1f] end is updated to 01
[ 3.810659] pci 0000:00:00.0: bridge window [mem 0xfa000000-0xfa0fffff]: assigned
[ 3.819082] pci 0000:01:00.0: BAR 0 [mem 0xfa000000-0xfa003fff 64bit]: assigned
[ 3.827302] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 3.832873] pci 0000:00:00.0: bridge window [mem 0xfa000000-0xfa0fffff]
[ 3.844769] pci_bus 0000:00: resource 4 [mem 0xfa000000-0xfbdfffff]
[ 3.856277] pci_bus 0000:00: resource 5 [io 0x0000-0xfffff]
[ 3.862618] pci_bus 0000:01: resource 1 [mem 0xfa000000-0xfa0fffff]
[ 3.869750] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[ 3.876911] pcieport 0000:00:00.0: PME: Signaling with IRQ 85
[ 3.884054] nvme nvme0: pci function 0000:01:00.0
[ 3.889336] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[ 3.915282] nvme nvme0: 6/0/0 default/read/poll queues
[ 3.993297] nvme0n1: p1 p2
So there doesn't seem to be a general failure.
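As an aside, the 8.000 / 31.504 Gb/s figures in the bandwidth line of the
log above follow from the PCIe line-coding overhead (8b/10b at 2.5 GT/s,
128b/130b at 8 GT/s). A quick shell sanity check that reproduces them,
truncating the per-lane rate to whole Mb/s so the numbers match the log:

```shell
# Per-lane usable rate in Mb/s after line-coding overhead,
# truncated to an integer (reproduces the figures in the log above):
gen1_lane=$(( 2500 * 8 / 10 ))     # 2.5 GT/s, 8b/10b   -> 2000 Mb/s
gen3_lane=$(( 8000 * 128 / 130 ))  # 8.0 GT/s, 128b/130b -> 7876 Mb/s
echo "Gen1 x4: $(( gen1_lane * 4 )) Mb/s"  # 8000  = 8.000 Gb/s
echo "Gen3 x4: $(( gen3_lane * 4 )) Mb/s"  # 31504 = 31.504 Gb/s
```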
Does
# ls /sys/devices/platform/f8000000.pcie
list a "waiting_for_supplier" attribute or something similar?
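For reference, something like the following could be run on the board to
check whether the controller is stuck waiting on a dependency. The
attribute name `waiting_for_supplier` (created by device links for
consumers blocked on a supplier) and the debugfs path are my assumptions
about what is meant here, not something confirmed in this thread:

```shell
# Sketch: inspect the probe state of the RK3399 PCIe controller.
dev=/sys/devices/platform/f8000000.pcie
if [ -d "$dev" ]; then
    ls "$dev"
    # Device links expose this while a consumer waits on a supplier:
    cat "$dev/waiting_for_supplier" 2>/dev/null
else
    echo "no such device: $dev"
fi
# With debugfs mounted, devices stuck in deferred probe are listed here:
cat /sys/kernel/debug/devices_deferred 2>/dev/null || true
```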
_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip
Thread overview: 15+ messages
2024-12-11 12:55 PCIe missing on RK3399 Vicente Bergas
2024-12-11 13:36 ` Heiko Stübner [this message]
2024-12-11 15:10 ` Vicente Bergas
2024-12-11 15:35 ` Heiko Stübner
2024-12-11 17:31 ` Vicente Bergas
2024-12-12 12:12 ` Vicente Bergas
2024-12-12 13:06 ` Heiko Stübner
2024-12-12 16:50 ` Vicente Bergas
2024-12-28 0:51 ` Vicente Bergas
2024-12-28 9:35 ` Johan Jonker
2025-01-13 21:02 ` Vicente Bergas
2025-01-16 14:36 ` [PATCH] arm64: dts: rockchip: fix fixed-regulator renames on rk3399-gru devices Heiko Stuebner
2025-01-17 0:44 ` Vicente Bergas
2025-02-03 8:15 ` Heiko Stuebner
2025-02-07 0:17 ` PCIe missing on RK3399 Trevor Woerner