public inbox for linux-block@vger.kernel.org
From: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
To: Yi Zhang <yi.zhang@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"mstowe@redhat.com" <mstowe@redhat.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>
Subject: Re: blktests failures with v5.19-rc1
Date: Tue, 14 Jun 2022 01:09:07 +0000	[thread overview]
Message-ID: <20220614010907.bvbrgbz7nnvpnw5w@shindev> (raw)
In-Reply-To: <CAHj4cs9G0WDrnSS6iVZJfgfOcRR0ysJhw+9yqcbqE=_8mkF0zw@mail.gmail.com>

(CC+: linux-pci)

On Jun 11, 2022 / 16:34, Yi Zhang wrote:
> On Fri, Jun 10, 2022 at 10:49 PM Keith Busch <kbusch@kernel.org> wrote:
> >
> > On Fri, Jun 10, 2022 at 12:25:17PM +0000, Shinichiro Kawasaki wrote:
> > > On Jun 10, 2022 / 09:32, Chaitanya Kulkarni wrote:
> > > > >> #6: nvme/032: Failed at the first run after system reboot.
> > > > >>                 Used QEMU NVME device as TEST_DEV.
> > > > >>
> > > >
> > > > ofcourse we need to fix this issue but can you also
> > > > try it with the real H/W ?
> > >
> > > Hi Chaitanya, thank you for looking into the failures. I have just run the test
> > > case nvme/032 with real NVME device and observed the exactly same symptom as
> > > QEMU NVME device.
> >
> > QEMU is perfectly fine for this test. There's no need to bring in "real"
> > hardware for this.
> >
> > And I am not even sure this is real. I don't know yet why this is showing up
> > only now, but this should fix it:
> 
> Hi Keith
> 
> Confirmed the WARNING issue was fixed with the change, here is the log:

Thanks. I also confirmed that Keith's change to add __ATTR_IGNORE_LOCKDEP to
dev_attr_dev_rescan avoids the WARN on v5.19-rc2.

I took a closer look into this issue and found that the deadlock WARN can be
recreated with the following two commands:

# echo 1 > /sys/bus/pci/devices/0000\:00\:09.0/rescan
# echo 1 > /sys/bus/pci/devices/0000\:00\:09.0/remove

It can also be recreated with PCI devices other than an NVME controller, such as
a SCSI or VGA controller, so this is not a storage sub-system issue.

I checked the function call stacks of the two commands above. As shown below, it
looks like an ABBA deadlock possibility is detected and warned about.

echo 1 > /sys/bus/pci/devices/*/rescan
  kernfs_fop_write_iter
    kernfs_get_active
      atomic_inc_unless_negative
      rwsem_acquire_read(&kn->dep_map, 0, 1, _RET_IP_) :lock L1 for "rescan" file
    dev_rescan_store
      pci_lock_rescan_remove
        mutex_lock(&pci_rescan_remove_lock)           :lock L2

echo 1 > /sys/bus/pci/devices/*/remove
  kernfs_fop_write_iter
    remove_store
      pci_stop_and_remove_bus_device_locked
        pci_lock_rescan_remove
          mutex_lock(&pci_rescan_remove_lock)         :lock L2
        pci_stop_and_remove_bus_device
          pci_remove_bus_device
            device_del
              device_remove_attrs
                sysfs_remove_attrs
                  sysfs_remove_groups
                    sysfs_remove_group
                      remove_files    .... iterates over the PCI device sysfs files, including the "rescan" file?
                        kernfs_remove_by_name_ns
                          __kernfs_remove
                            kernfs_drain
                              rwsem_acquire(&kn->dep_map, 0, 0, _RET_IP_): lock L1

It looks to me that the deadlock possibility is real, even though a race between
the rescan and remove operations is a rare use case.

I would like to hear comments on the guess above, so I have taken the liberty of
CCing the linux-pci list. Is this an issue worth fixing?

-- 
Shin'ichiro Kawasaki

  reply	other threads:[~2022-06-14  1:09 UTC|newest]

Thread overview: 23+ messages
2022-06-09 23:53 blktests failures with v5.19-rc1 Shinichiro Kawasaki
2022-06-10  8:07 ` Christoph Hellwig
2022-06-10  9:22 ` Chaitanya Kulkarni
2022-06-10  9:32   ` Chaitanya Kulkarni
2022-06-10 12:25     ` Shinichiro Kawasaki
2022-06-10 13:15       ` Yi Zhang
2022-06-10 14:47       ` Keith Busch
2022-06-11  8:34         ` Yi Zhang
2022-06-14  1:09           ` Shinichiro Kawasaki [this message]
2022-06-14  2:23             ` Keith Busch
2022-06-14  2:38               ` Chaitanya Kulkarni
2022-06-14  4:00                 ` Shinichiro Kawasaki
2022-06-15 19:47                   ` Bjorn Helgaas
2022-06-15 22:01                     ` Chaitanya Kulkarni
2022-06-15 23:13                       ` Yi Zhang
2022-06-16  4:42                         ` Shinichiro Kawasaki
2022-06-16 17:55                           ` Chaitanya Kulkarni
2022-06-15 23:16                     ` Keith Busch
2022-07-19  4:50                       ` Shinichiro Kawasaki
2022-07-19 22:31                         ` Bjorn Helgaas
2022-07-20  2:27                           ` Shinichiro Kawasaki
2022-12-19 11:27                             ` Shinichiro Kawasaki
2022-12-29 18:13                               ` Bjorn Helgaas
