From: Leon Romanovsky <leon@kernel.org>
To: Doug Ledford <dledford@redhat.com>, Jason Gunthorpe <jgg@mellanox.com>
Cc: linux-rdma@vger.kernel.org
Subject: [PATCH rdma-next 12/15] RDMA/cm: Allow ib_send_cm_drep() to be done under lock
Date: Tue, 10 Mar 2020 11:25:42 +0200
Message-ID: <20200310092545.251365-13-leon@kernel.org>
In-Reply-To: <20200310092545.251365-1-leon@kernel.org>

From: Jason Gunthorpe <jgg@mellanox.com>

The first thing ib_send_cm_drep() does is obtain the lock, so use the
usual 'unlocked wrapper, locked actor' pattern here.

This avoids a sketchy lock/unlock sequence (which could allow state to
change) during cm_destroy_id().
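
For illustration, a minimal, generic sketch of that pattern follows (the
foo_* names and the FOO_READY state are hypothetical, not the cm.c code
itself): the exported entry point only takes the lock and delegates to a
*_locked actor that asserts the lock and performs the state check, so
internal callers that already hold the lock can invoke the actor directly.

	#include <linux/spinlock.h>
	#include <linux/lockdep.h>
	#include <linux/errno.h>

	struct foo_priv {
		spinlock_t lock;
		int state;
	};

	#define FOO_READY 1

	/* Locked actor: caller must already hold priv->lock. */
	static int foo_do_thing_locked(struct foo_priv *priv)
	{
		lockdep_assert_held(&priv->lock);

		if (priv->state != FOO_READY)
			return -EINVAL;

		/* ... do the real work and state transition under the lock ... */
		return 0;
	}

	/* Unlocked wrapper: the public entry point takes the lock itself. */
	int foo_do_thing(struct foo_priv *priv)
	{
		unsigned long flags;
		int ret;

		spin_lock_irqsave(&priv->lock, flags);
		ret = foo_do_thing_locked(priv);
		spin_unlock_irqrestore(&priv->lock, flags);
		return ret;
	}

With the patch below, cm_destroy_id() calls the locked actor directly while
still holding cm_id_priv->lock, eliminating the unlock/relock window.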

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c | 55 +++++++++++++++++++++---------------
 1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index dd99387d0839..2d3b661153a4 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -87,6 +87,8 @@ static void cm_add_one(struct ib_device *device);
 static void cm_remove_one(struct ib_device *device, void *client_data);
 static int cm_send_dreq_locked(struct cm_id_private *cm_id_priv,
 			       const void *private_data, u8 private_data_len);
+static int cm_send_drep_locked(struct cm_id_private *cm_id_priv,
+			       void *private_data, u8 private_data_len);
 
 static struct ib_client cm_client = {
 	.name   = "cm",
@@ -1104,8 +1106,8 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
 		spin_unlock_irq(&cm_id_priv->lock);
 		break;
 	case IB_CM_DREQ_RCVD:
+		cm_send_drep_locked(cm_id_priv, NULL, 0);
 		spin_unlock_irq(&cm_id_priv->lock);
-		ib_send_cm_drep(cm_id, NULL, 0);
 		break;
 	default:
 		spin_unlock_irq(&cm_id_priv->lock);
@@ -2712,51 +2714,60 @@ static void cm_format_drep(struct cm_drep_msg *drep_msg,
 			    private_data_len);
 }
 
-int ib_send_cm_drep(struct ib_cm_id *cm_id,
-		    const void *private_data,
-		    u8 private_data_len)
+static int cm_send_drep_locked(struct cm_id_private *cm_id_priv,
+			       void *private_data, u8 private_data_len)
 {
-	struct cm_id_private *cm_id_priv;
 	struct ib_mad_send_buf *msg;
-	unsigned long flags;
-	void *data;
 	int ret;
 
+	lockdep_assert_held(&cm_id_priv->lock);
+
 	if (private_data && private_data_len > IB_CM_DREP_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
-	data = cm_copy_private_data(private_data, private_data_len);
-	if (IS_ERR(data))
-		return PTR_ERR(data);
-
-	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
-	spin_lock_irqsave(&cm_id_priv->lock, flags);
-	if (cm_id->state != IB_CM_DREQ_RCVD) {
-		pr_debug("%s: local_id %d, cm_idcm_id->state(%d) != IB_CM_DREQ_RCVD\n",
-			 __func__, be32_to_cpu(cm_id->local_id), cm_id->state);
-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-		kfree(data);
+	if (cm_id_priv->id.state != IB_CM_DREQ_RCVD) {
+		pr_debug(
+			"%s: local_id %d, cm_idcm_id->state(%d) != IB_CM_DREQ_RCVD\n",
+			__func__, be32_to_cpu(cm_id_priv->id.local_id),
+			cm_id_priv->id.state);
+		kfree(private_data);
 		return -EINVAL;
 	}
 
-	cm_set_private_data(cm_id_priv, data, private_data_len);
+	cm_set_private_data(cm_id_priv, private_data, private_data_len);
 	cm_enter_timewait(cm_id_priv);
 
 	ret = cm_alloc_msg(cm_id_priv, &msg);
 	if (ret)
-		goto out;
+		return ret;
 
 	cm_format_drep((struct cm_drep_msg *) msg->mad, cm_id_priv,
 		       private_data, private_data_len);
 
 	ret = ib_post_send_mad(msg, NULL);
 	if (ret) {
-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
 		cm_free_msg(msg);
 		return ret;
 	}
+	return 0;
+}
 
-out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+int ib_send_cm_drep(struct ib_cm_id *cm_id, const void *private_data,
+		    u8 private_data_len)
+{
+	struct cm_id_private *cm_id_priv =
+		container_of(cm_id, struct cm_id_private, id);
+	unsigned long flags;
+	void *data;
+	int ret;
+
+	data = cm_copy_private_data(private_data, private_data_len);
+	if (IS_ERR(data))
+		return PTR_ERR(data);
+
+	spin_lock_irqsave(&cm_id_priv->lock, flags);
+	ret = cm_send_drep_locked(cm_id_priv, data, private_data_len);
+	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(ib_send_cm_drep);
-- 
2.24.1



Thread overview: 17+ messages
2020-03-10  9:25 [PATCH rdma-next 00/15] Fix locking around cm_id.state in the ib_cm Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 01/15] RDMA/cm: Fix ordering of xa_alloc_cyclic() in ib_create_cm_id() Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 02/15] RDMA/cm: Fix checking for allowed duplicate listens Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 03/15] RDMA/cm: Remove a race freeing timewait_info Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 04/15] RDMA/cm: Make the destroy_id flow more robust Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 05/15] RDMA/cm: Simplify establishing a listen cm_id Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 06/15] RDMA/cm: Read id.state under lock when doing pr_debug() Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 07/15] RDMA/cm: Make it clear that there is no concurrency in cm_sidr_req_handler() Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 08/15] RDMA/cm: Make it clearer how concurrency works in cm_req_handler() Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 09/15] RDMA/cm: Add missing locking around id.state in cm_dup_req_handler Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 10/15] RDMA/cm: Add some lockdep assertions for cm_id_priv->lock Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 11/15] RDMA/cm: Allow ib_send_cm_dreq() to be done under lock Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 12/15] RDMA/cm: Allow ib_send_cm_drep() to be done under lock Leon Romanovsky [this message]
2020-03-10  9:25 ` [PATCH rdma-next 13/15] RDMA/cm: Allow ib_send_cm_rej() " Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 14/15] RDMA/cm: Allow ib_send_cm_sidr_rep() " Leon Romanovsky
2020-03-10  9:25 ` [PATCH rdma-next 15/15] RDMA/cm: Make sure the cm_id is in the IB_CM_IDLE state in destroy Leon Romanovsky
2020-03-17 20:15 ` [PATCH rdma-next 00/15] Fix locking around cm_id.state in the ib_cm Jason Gunthorpe
