* Patch "drm/dp/mst: close deadlock in connector destruction." has been added to the 4.1-stable tree
@ 2015-07-30 19:12 gregkh
0 siblings, 0 replies; only message in thread
From: gregkh @ 2015-07-30 19:12 UTC (permalink / raw)
To: airlied, daniel.vetter, gregkh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
drm/dp/mst: close deadlock in connector destruction.
to the 4.1-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
drm-dp-mst-close-deadlock-in-connector-destruction.patch
and it can be found in the queue-4.1 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 6b8eeca65b18ae77e175cc2b6571731f0ee413bf Mon Sep 17 00:00:00 2001
From: Dave Airlie <airlied@redhat.com>
Date: Mon, 15 Jun 2015 10:34:28 +1000
Subject: drm/dp/mst: close deadlock in connector destruction.
From: Dave Airlie <airlied@redhat.com>
commit 6b8eeca65b18ae77e175cc2b6571731f0ee413bf upstream.
I've only seen this once, and I failed to capture the
lockdep backtrace, but I did some investigation.
If we are calling into the MST layer from EDID probing,
we have the mode_config mutex held; if during that EDID
probing the MST hub goes away, then we can get a deadlock
where the connector destruction function in the driver
tries to retake the mode_config mutex.
This offloads connector destruction to a workqueue,
and avoids the subsequent lock ordering issue (see the
standalone sketch after the patch below).
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/gpu/drm/drm_dp_mst_topology.c | 40 ++++++++++++++++++++++++++++++++--
include/drm/drm_crtc.h | 2 +
include/drm/drm_dp_mst_helper.h | 4 +++
3 files changed, 44 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -867,8 +867,16 @@ static void drm_dp_destroy_port(struct k
port->vcpi.num_slots = 0;
kfree(port->cached_edid);
- if (port->connector)
- (*port->mgr->cbs->destroy_connector)(mgr, port->connector);
+
+ /* we can't destroy the connector here, as
+ we might be holding the mode_config.mutex
+ from an EDID retrieval */
+ if (port->connector) {
+ mutex_lock(&mgr->destroy_connector_lock);
+ list_add(&port->connector->destroy_list, &mgr->destroy_connector_list);
+ mutex_unlock(&mgr->destroy_connector_lock);
+ schedule_work(&mgr->destroy_connector_work);
+ }
drm_dp_port_teardown_pdt(port, port->pdt);
if (!port->input && port->vcpi.vcpi > 0)
@@ -2632,6 +2640,30 @@ static void drm_dp_tx_work(struct work_s
mutex_unlock(&mgr->qlock);
}
+static void drm_dp_destroy_connector_work(struct work_struct *work)
+{
+ struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work);
+ struct drm_connector *connector;
+
+ /*
+ * Not a regular list traverse as we have to drop the destroy
+ * connector lock before destroying the connector, to avoid AB->BA
+ * ordering between this lock and the config mutex.
+ */
+ for (;;) {
+ mutex_lock(&mgr->destroy_connector_lock);
+ connector = list_first_entry_or_null(&mgr->destroy_connector_list, struct drm_connector, destroy_list);
+ if (!connector) {
+ mutex_unlock(&mgr->destroy_connector_lock);
+ break;
+ }
+ list_del(&connector->destroy_list);
+ mutex_unlock(&mgr->destroy_connector_lock);
+
+ mgr->cbs->destroy_connector(mgr, connector);
+ }
+}
+
/**
* drm_dp_mst_topology_mgr_init - initialise a topology manager
* @mgr: manager struct to initialise
@@ -2651,10 +2683,13 @@ int drm_dp_mst_topology_mgr_init(struct
mutex_init(&mgr->lock);
mutex_init(&mgr->qlock);
mutex_init(&mgr->payload_lock);
+ mutex_init(&mgr->destroy_connector_lock);
INIT_LIST_HEAD(&mgr->tx_msg_upq);
INIT_LIST_HEAD(&mgr->tx_msg_downq);
+ INIT_LIST_HEAD(&mgr->destroy_connector_list);
INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);
INIT_WORK(&mgr->tx_work, drm_dp_tx_work);
+ INIT_WORK(&mgr->destroy_connector_work, drm_dp_destroy_connector_work);
init_waitqueue_head(&mgr->tx_waitq);
mgr->dev = dev;
mgr->aux = aux;
@@ -2679,6 +2714,7 @@ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_in
*/
void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
{
+ flush_work(&mgr->destroy_connector_work);
mutex_lock(&mgr->payload_lock);
kfree(mgr->payloads);
mgr->payloads = NULL;
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -731,6 +731,8 @@ struct drm_connector {
uint8_t num_h_tile, num_v_tile;
uint8_t tile_h_loc, tile_v_loc;
uint16_t tile_h_size, tile_v_size;
+
+ struct list_head destroy_list;
};
/**
--- a/include/drm/drm_dp_mst_helper.h
+++ b/include/drm/drm_dp_mst_helper.h
@@ -463,6 +463,10 @@ struct drm_dp_mst_topology_mgr {
struct work_struct work;
struct work_struct tx_work;
+
+ struct list_head destroy_connector_list;
+ struct mutex destroy_connector_lock;
+ struct work_struct destroy_connector_work;
};
int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, struct device *dev, struct drm_dp_aux *aux, int max_dpcd_transaction_bytes, int max_payloads, int conn_base_id);
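
Not part of the original message: below is a minimal, standalone userspace analogue
of the pattern the patch adopts, using pthreads and a hand-rolled singly linked list
instead of the kernel's workqueue and struct list_head. All names in it (big_lock,
list_lock, schedule_destroy, destroy_worker, struct victim) are invented for this
sketch and do not exist in the kernel sources. The point it illustrates: code that may
already hold the "big" lock only queues objects for destruction under a small list
lock, and a separate worker drains the list, dropping the list lock before running
each destructor, so the destructor is free to take the big lock without an AB-BA
ordering.

/*
 * Illustrative userspace sketch of deferred destruction to avoid lock
 * inversion; compile with: gcc -std=c99 -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct victim {
	int id;
	struct victim *next;
};

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of mode_config.mutex */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER; /* plays the role of destroy_connector_lock */
static struct victim *destroy_list;

/* May be called while big_lock is held: only queue, never destroy. */
static void schedule_destroy(struct victim *v)
{
	pthread_mutex_lock(&list_lock);
	v->next = destroy_list;
	destroy_list = v;
	pthread_mutex_unlock(&list_lock);
}

/* The destructor needs big_lock, like the driver's connector destroy hook. */
static void destroy_victim(struct victim *v)
{
	pthread_mutex_lock(&big_lock);
	printf("destroying victim %d\n", v->id);
	pthread_mutex_unlock(&big_lock);
	free(v);
}

/*
 * Worker: drain the list, dropping list_lock before each destruction so the
 * only ordering ever taken is list_lock -> (released) -> big_lock.
 */
static void *destroy_worker(void *unused)
{
	(void)unused;
	for (;;) {
		struct victim *v;

		pthread_mutex_lock(&list_lock);
		v = destroy_list;
		if (!v) {
			pthread_mutex_unlock(&list_lock);
			break;
		}
		destroy_list = v->next;
		pthread_mutex_unlock(&list_lock);

		destroy_victim(v);
	}
	return NULL;
}

int main(void)
{
	pthread_t worker;

	for (int i = 0; i < 3; i++) {
		struct victim *v = malloc(sizeof(*v));
		v->id = i;
		/* Pretend we hold big_lock here, as EDID probing would. */
		pthread_mutex_lock(&big_lock);
		schedule_destroy(v);	/* safe: never takes big_lock */
		pthread_mutex_unlock(&big_lock);
	}

	pthread_create(&worker, NULL, destroy_worker, NULL);
	pthread_join(worker, NULL);
	return 0;
}

The drain loop mirrors the for (;;) / list_first_entry_or_null() loop in
drm_dp_destroy_connector_work() above, and scheduling mirrors the list_add() plus
schedule_work() in drm_dp_destroy_port().
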
Patches currently in stable-queue which might be from airlied@redhat.com are
queue-4.1/drm-vgem-set-unique-to-vgem.patch
queue-4.1/drm-dp-mst-take-lock-around-looking-up-the-branch-device-on-hpd-irq.patch
queue-4.1/drm-dp-mst-make-sure-mst_primary-mstb-is-valid-in-work-function.patch
queue-4.1/drm-qxl-do-not-leak-memory-if-qxl_release_list_add-fails.patch
queue-4.1/drm-qxl-do-not-cause-spice-server-to-clean-our-objects.patch
queue-4.1/drm-dp-mst-close-deadlock-in-connector-destruction.patch