From: Hanjun Guo <hanjun.guo@linaro.org>
To: "Rafael J. Wysocki" <rjw@rjwysocki.net>, linux-acpi@vger.kernel.org
Cc: Xie XiuQi <xiexiuqi@huawei.com>,
	guohanjun@huawei.com, linux-kernel@vger.kernel.org,
	Toshi Kani <toshi.kani@hp.com>
Subject: Re: [PATCH] ACPI / scan: Annotate physical_node_lock in acpi_scan_is_offline()
Date: Mon, 13 Apr 2015 16:27:16 +0800	[thread overview]
Message-ID: <552B7DE4.3070008@linaro.org> (raw)
In-Reply-To: <2181841.0nPiIAbDNm@vostro.rjw.lan>

On 2015/04/11 07:31, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
>
> acpi_scan_is_offline() may be called under the physical_node_lock
> of the given device object's parent, so prevent lockdep from
> complaining about that by annotating that instance with
> SINGLE_DEPTH_NESTING.

I think this is triggered by setting acpi_force_hot_remove to 1,
in acpi_scan_hot_remove():

         if (device->handler && device->handler->hotplug.demand_offline
             && !acpi_force_hot_remove) {
                 if (!acpi_scan_is_offline(device, true))
                         return -EBUSY;
         } else {
                 int error = acpi_scan_try_to_offline(device);
                 if (error)
                         return error;
         }

then the container device will be removed via acpi_scan_try_to_offline();
let's wait for XiuQi's test result.
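
If I'm reading the current source correctly, the nested acquisition
happens along roughly this path (a hedged sketch of the call chain,
function names as in drivers/acpi/scan.c and drivers/acpi/container.c):

  acpi_scan_try_to_offline(device)
    acpi_walk_namespace(..., acpi_bus_offline, ...)
      acpi_bus_offline()
        mutex_lock(&device->physical_node_lock)
        device_offline(pn->dev)
          acpi_container_offline()       /* the container ->offline() hook */
            acpi_scan_is_offline(child)
              mutex_lock(&child->physical_node_lock)  /* same lock class, nested */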

Thanks
Hanjun

>
> Reported-by: Xie XiuQi <xiexiuqi@huawei.com>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>   drivers/acpi/scan.c |    6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
>
> Index: linux-pm/drivers/acpi/scan.c
> ===================================================================
> --- linux-pm.orig/drivers/acpi/scan.c
> +++ linux-pm/drivers/acpi/scan.c
> @@ -298,7 +298,11 @@ bool acpi_scan_is_offline(struct acpi_de
>   	struct acpi_device_physical_node *pn;
>   	bool offline = true;
>
> -	mutex_lock(&adev->physical_node_lock);
> +	/*
> +	 * acpi_container_offline() calls this for all of the container's
> +	 * children under the container's physical_node_lock lock.
> +	 */
> +	mutex_lock_nested(&adev->physical_node_lock, SINGLE_DEPTH_NESTING);
>
>   	list_for_each_entry(pn, &adev->physical_node_list, node)
>   		if (device_supports_offline(pn->dev) && !pn->dev->offline) {
>
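
For reference, the annotated idiom is roughly the following (a minimal
sketch of same-class nested locking, with illustrative "parent"/"child"
names rather than the actual container code; mutex_lock_nested() and
SINGLE_DEPTH_NESTING are the stock lockdep annotations from
include/linux/mutex.h and include/linux/lockdep.h):

	mutex_lock(&parent->physical_node_lock);
	/*
	 * Same lock class as above, so a plain mutex_lock() here would
	 * look like recursive locking to lockdep; the subclass marks
	 * this acquisition as intentionally nested one level deep.
	 */
	mutex_lock_nested(&child->physical_node_lock, SINGLE_DEPTH_NESTING);

	/* ... check the child's physical nodes ... */

	mutex_unlock(&child->physical_node_lock);
	mutex_unlock(&parent->physical_node_lock);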

Thread overview: 9+ messages
2015-04-07  9:03 [PATCH] ACPI / HOTPLUG: fix device->physical_node_lock deadlock Xie XiuQi
2015-04-07 11:22 ` Rafael J. Wysocki
2015-04-07 11:50   ` Rafael J. Wysocki
2015-04-10 23:31     ` [PATCH] ACPI / scan: Annotate physical_node_lock in acpi_scan_is_offline() Rafael J. Wysocki
2015-04-13  1:28       ` Xie XiuQi
2015-04-13  8:27       ` Hanjun Guo [this message]
2015-04-13 13:48         ` Rafael J. Wysocki
2015-04-16 19:01       ` Toshi Kani
2015-04-17  7:19       ` Xie XiuQi
