linux-pci.vger.kernel.org archive mirror
* Why hold device_lock when calling callback in pci_walk_bus?
@ 2012-09-28  8:15 Huang Ying
  2012-09-28  8:29 ` Zhang, Yanmin
  0 siblings, 1 reply; 3+ messages in thread
From: Huang Ying @ 2012-09-28  8:15 UTC (permalink / raw)
  To: bhelgaas; +Cc: Greg Kroah-Hartman, yanmin.zhang, linux-pci, linux-kernel, rjw

Hi, All,

If my understanding is correct, device_lock is used to provide mutual
exclusion between device probe/remove/suspend/resume, etc.  Why, then,
is device_lock held when calling the callback in pci_walk_bus?

This behavior was introduced by the following commit:

commit d71374dafbba7ec3f67371d3b7e9f6310a588808
Author: Zhang Yanmin <yanmin.zhang@intel.com>
Date:   Fri Jun 2 12:35:43 2006 +0800

    [PATCH] PCI: fix race with pci_walk_bus and pci_destroy_dev
    
    pci_walk_bus has a race with pci_destroy_dev. When cb is called
    in pci_walk_bus, pci_destroy_dev might unlink the dev pointed to by
    next.  Later on, in the next loop iteration, pointer next becomes
    NULL and causes a kernel panic.
    
    The patch below, against 2.6.17-rc4, fixes it by changing pci_bus_lock
    (a spinlock) to pci_bus_sem (an rw_semaphore).
    
    Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

The corresponding email thread is: https://lkml.org/lkml/2006/5/26/38

But from the commit message and the email thread, I cannot work out why
we need to do that.
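
For reference, here is roughly what pci_walk_bus looks like in current
kernels (paraphrased from drivers/pci/bus.c from memory, so not an exact
quote); the device_lock/device_unlock pair around cb is the part I am
asking about:

void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
                  void *userdata)
{
        struct pci_dev *dev;
        struct pci_bus *bus;
        struct list_head *next;
        int retval;

        bus = top;
        down_read(&pci_bus_sem);
        next = top->devices.next;
        for (;;) {
                if (next == &bus->devices) {
                        /* end of this bus, go up or finish */
                        if (bus == top)
                                break;
                        next = bus->self->bus_list.next;
                        bus = bus->self->bus;
                        continue;
                }
                dev = list_entry(next, struct pci_dev, bus_list);
                if (dev->subordinate) {
                        /* this is a pci-pci bridge, do its devices next */
                        next = dev->subordinate->devices.next;
                        bus = dev->subordinate;
                } else
                        next = dev->bus_list.next;

                /* Run device routines with the device locked */
                device_lock(&dev->dev);
                retval = cb(dev, userdata);
                device_unlock(&dev->dev);
                if (retval)
                        break;
        }
        up_read(&pci_bus_sem);
}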

I am asking because I want to use pci_walk_bus in a function (in the PCI
runtime resume path) that may be called with device_lock already held.
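
A hypothetical sketch of the deadlock I am worried about (the function
and callback names here are made up for illustration):

#include <linux/pci.h>

static int walk_cb(struct pci_dev *dev, void *data)
{
        return 0;               /* per-device work would go here */
}

/*
 * Assume this resume routine can be entered with device_lock(&pdev->dev)
 * already held, e.g. via a synchronous pm_runtime_get_sync() from a
 * path that holds the device lock.
 */
static int my_runtime_resume(struct pci_dev *pdev)
{
        /*
         * pdev sits on pdev->bus, so pci_walk_bus() will reach it and
         * try device_lock(&pdev->dev) a second time.  device_lock() is
         * a plain mutex, not recursive, so this self-deadlocks.
         */
        pci_walk_bus(pdev->bus, walk_cb, NULL);
        return 0;
}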

Can anyone help me with this?

Best Regards,
Huang Ying


