Linux PCI subsystem development
From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
To: Jiwei Sun <sjiwei@163.com>, <nirmal.patel@linux.intel.com>,
	<jonathan.derrick@linux.dev>
Cc: <lpieralisi@kernel.org>, <kw@linux.com>, <robh@kernel.org>,
	<bhelgaas@google.com>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <sunjw10@lenovo.com>,
	<ahuang12@lenovo.com>
Subject: Re: [PATCH] PCI: vmd: Create domain symlink before pci_bus_add_devices
Date: Mon, 3 Jun 2024 08:47:56 -0700	[thread overview]
Message-ID: <e10398dc-53b7-446f-b22f-f992ba1cc37e@intel.com>
In-Reply-To: <20240603140329.7222-1-sjiwei@163.com>

On 6/3/2024 7:03 AM, Jiwei Sun wrote:
> From: Jiwei Sun <sunjw10@lenovo.com>
> 
> During booting into the kernel, the following error message appears:
> 
>    (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: Unable to get real path for '/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device''
>    (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: /dev/nvme1n1 is not attached to Intel(R) RAID controller.'
>    (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: No OROM/EFI properties for /dev/nvme1n1'
>    (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: no RAID superblock on /dev/nvme1n1.'
>    (udev-worker)[2149]: nvme1n1: Process '/sbin/mdadm -I /dev/nvme1n1' failed with exit code 1.
> 
> This symptom prevents the OS from booting successfully.
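> 
> For context, the first error corresponds to resolving the "domain"
> symlink under the vmd driver's sysfs directory. A minimal userspace
> sketch of that check (a hypothetical illustration, not mdadm's actual
> code; the path is taken from the log above):
> 
> 	#include <stdio.h>
> 	#include <stdlib.h>
> 	#include <limits.h>
> 
> 	int main(void)
> 	{
> 		const char *link =
> 			"/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device";
> 		char buf[PATH_MAX];
> 
> 		/* realpath(3) fails with ENOENT as long as the vmd
> 		 * driver has not created the "domain" symlink yet. */
> 		if (!realpath(link, buf)) {
> 			perror("realpath");
> 			return 1;
> 		}
> 		printf("%s -> %s\n", link, buf);
> 		return 0;
> 	}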
> 

I'm just curious: has this always been an issue, or did it only start 
happening recently?

Paul

> After an NVMe disk is probed/added by the nvme driver, udevd executes
> some udev rules that invoke the mdadm command to detect whether an
> mdraid array is associated with this NVMe disk. mdadm determines
> whether an NVMe device is connected to a particular VMD domain by
> checking the domain symlink. Here is the root cause:
> 
> Thread A                   Thread B             Thread mdadm
> vmd_enable_domain
>    pci_bus_add_devices
>      __driver_probe_device
>       ...
>       work_on_cpu
>         schedule_work_on
>         : wakeup Thread B
>                             nvme_probe
>                             : wakeup scan_work
>                               to scan nvme disk
>                               and add nvme disk
>                               then wakeup udevd
>                                                  : udevd executes
>                                                    mdadm command
>         flush_work                               main
>         : wait for nvme_probe done                ...
>      __driver_probe_device                        find_driver_devices
>      : probe next nvme device                     : 1) Detect the domain
>      ...                                            symlink; 2) Find the
>      ...                                            domain symlink from
>      ...                                            vmd sysfs; 3) The
>      ...                                            domain symlink is not
>      ...                                            created yet, failed
>    sysfs_create_link
>    : create domain symlink
> 
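> The window exists because the PCI core can run the nvme probe on
> another CPU via work_on_cpu(), which is roughly this pattern
> (abridged, not the exact kernel code):
> 
> 	schedule_work_on(cpu, &wfc.work);	/* starts nvme_probe() */
> 	flush_work(&wfc.work);			/* waits for it to finish */
> 
> While Thread A is blocked in flush_work(), nvme's scan_work has
> already added the disk and woken udevd, so mdadm can run long before
> vmd_enable_domain() reaches sysfs_create_link().
> 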
> sysfs_create_link() is invoked at the end of vmd_enable_domain().
> This ordering introduces a race: mdadm may fail to resolve the vmd
> domain symlink path because the symlink has not been created yet.
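> 
> Before this patch, the tail of vmd_enable_domain() effectively did
> (abridged from the lines removed in the diff below):
> 
> 	pci_bus_add_devices(vmd->bus);	/* triggers nvme probe -> udev -> mdadm */
> 
> 	vmd_acpi_end();
> 
> 	/* Too late: mdadm may already have looked for the link. */
> 	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
> 			       "domain"), "Can't create symlink to domain\n");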
> 
> Fix the issue by creating the VMD domain symlink before invoking
> pci_bus_add_devices().
> 
> Signed-off-by: Jiwei Sun <sunjw10@lenovo.com>
> Suggested-by: Adrian Huang <ahuang12@lenovo.com>
> ---
>   drivers/pci/controller/vmd.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 87b7856f375a..3f208c5f9ec9 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -961,12 +961,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>   	list_for_each_entry(child, &vmd->bus->children, node)
>   		pcie_bus_configure_settings(child);
>   
> +	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
> +			       "domain"), "Can't create symlink to domain\n");
> +
>   	pci_bus_add_devices(vmd->bus);
>   
>   	vmd_acpi_end();
> -
> -	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
> -			       "domain"), "Can't create symlink to domain\n");
>   	return 0;
>   }
>   


Thread overview: 4+ messages
2024-06-03 14:03 [PATCH] PCI: vmd: Create domain symlink before pci_bus_add_devices Jiwei Sun
2024-06-03 15:47 ` Paul M Stillwell Jr [this message]
2024-06-04 10:00   ` Jiwei Sun
2024-06-03 20:55 ` Bjorn Helgaas
