linux-cifs.vger.kernel.org archive mirror
* [PATCH V3 0/2] Fix mid_q_entry memory leaks in SMB client and further cleanup
@ 2025-08-11 14:07 Wang Zhaolong
  2025-08-11 14:07 ` [PATCH V3 1/2] smb: client: fix mid_q_entry memory leak with per-mid locking Wang Zhaolong
  2025-08-11 14:07 ` [PATCH V3 2/2] smb: client: Clean up mid_queue_lock usage and standardize mid_state access Wang Zhaolong
  0 siblings, 2 replies; 3+ messages in thread
From: Wang Zhaolong @ 2025-08-11 14:07 UTC (permalink / raw)
  To: sfrench, pshilov
  Cc: linux-cifs, samba-technical, linux-kernel, chengzhihao1, yi.zhang,
	yangerkun

This series contains the remaining patch from my previous mid-processing
fix series, plus an additional cleanup patch. The first three patches of
that earlier series have already been merged into mainline.

Patch 1/2: The final patch from that series, which improves the helper
function for mid cancellation conditions. The previous helper had
awkward semantics, requiring callers to negate its return value, and
the extra negation in wait loops was inefficient.
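The negation problem can be sketched in a few lines of standalone C (a userspace illustration only; the helper names below are made up, not the actual cifs.ko helpers):

```c
#include <assert.h>

/* Old style: a "still pending" predicate, so every wait loop has to
 * negate it: wait_event(wq, !mid_still_pending(state)). */
static int mid_still_pending(int state)
{
	return state == 1 /* e.g. MID_REQUEST_SUBMITTED */ ||
	       state == 2 /* e.g. MID_RESPONSE_RECEIVED */;
}

/* New style: the condition is phrased the way the waiter consumes it,
 * so it drops straight into wait_event(wq, mid_wait_done(state)). */
static int mid_wait_done(int state)
{
	return state != 1 && state != 2;
}

static int demo(void)
{
	/* The two forms agree for every state value ... */
	for (int state = 0; state < 8; state++)
		assert(mid_wait_done(state) == !mid_still_pending(state));
	/* ... but only the second reads naturally at the call site. */
	return 1;
}
```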

Patch 2/2: A cleanup patch that removes unnecessary mid_queue_lock usage
and standardizes mid_state access with READ_ONCE/WRITE_ONCE. This builds
on the fixes to make the locking model cleaner and more consistent.

These patches are independent and can be applied separately if needed.

V3:
 - Use mid_lock to protect the update of mid->callback rather than
   its execution. 

V2:
 - Inline the mid_execute_callback() in the smb2ops.c to eliminate
   the sparse warning.

Wang Zhaolong (2):
  smb: client: fix mid_q_entry memory leak with per-mid locking
  smb: client: Clean up mid_queue_lock usage and standardize mid_state
    access

 fs/smb/client/cifsglob.h      | 21 +++++++++++
 fs/smb/client/cifssmb.c       |  8 ++--
 fs/smb/client/cifstransport.c | 69 +++++++++++++++++++++++------------
 fs/smb/client/connect.c       | 25 ++++++-------
 fs/smb/client/smb1ops.c       |  5 ++-
 fs/smb/client/smb2ops.c       | 26 ++++++-------
 fs/smb/client/smb2pdu.c       | 10 +++--
 fs/smb/client/smb2transport.c |  3 +-
 fs/smb/client/transport.c     | 50 ++++++++++++++-----------
 9 files changed, 135 insertions(+), 82 deletions(-)

-- 
2.39.2



* [PATCH V3 1/2] smb: client: fix mid_q_entry memory leak with per-mid locking
  2025-08-11 14:07 [PATCH V3 0/2] Fix mid_q_entry memory leaks in SMB client and further cleanup Wang Zhaolong
@ 2025-08-11 14:07 ` Wang Zhaolong
  2025-08-11 14:07 ` [PATCH V3 2/2] smb: client: Clean up mid_queue_lock usage and standardize mid_state access Wang Zhaolong
  1 sibling, 0 replies; 3+ messages in thread
From: Wang Zhaolong @ 2025-08-11 14:07 UTC (permalink / raw)
  To: sfrench, pshilov
  Cc: linux-cifs, samba-technical, linux-kernel, chengzhihao1, yi.zhang,
	yangerkun

This is step 4/4 of a patch series to fix mid_q_entry memory leaks
caused by race conditions in callback execution.

In compound_send_recv(), when wait_for_response() is interrupted by
signals, the code attempts to cancel pending requests by changing
their callbacks to cifs_cancelled_callback. However, there's a race
condition between signal interruption and network response processing
that causes both mid_q_entry and server buffer leaks:

```
User foreground process                    cifsd
cifs_readdir
 open_cached_dir
  cifs_send_recv
   compound_send_recv
    smb2_setup_request
     smb2_mid_entry_alloc
      smb2_get_mid_entry
       smb2_mid_entry_alloc
        mempool_alloc // alloc mid
        kref_init(&temp->refcount); // refcount = 1
     mid[0]->callback = cifs_compound_callback;
     mid[1]->callback = cifs_compound_last_callback;
     smb_send_rqst
     rc = wait_for_response
      wait_event_state TASK_KILLABLE
                                  cifs_demultiplex_thread
                                    allocate_buffers
                                      server->bigbuf = cifs_buf_get()
                                    standard_receive3
                                      ->find_mid()
                                        smb2_find_mid
                                          __smb2_find_mid
                                           kref_get(&mid->refcount) // +1
                                      cifs_handle_standard
                                        handle_mid
                                         /* bigbuf will also leak */
                                         mid->resp_buf = server->bigbuf
                                         server->bigbuf = NULL;
                                         dequeue_mid
                                     /* in for loop */
                                    mids[0]->callback
                                      cifs_compound_callback
    /* Signal interrupts wait: rc = -ERESTARTSYS */
    /* if (... || midQ[i]->mid_state == MID_RESPONSE_RECEIVED) */
    midQ[0]->callback = cifs_cancelled_callback;
    cancelled_mid[i] = true;
                                       /* The change comes too late */
                                       mid->mid_state = MID_RESPONSE_READY
                                    release_mid  // -1
    /* cancelled_mid[i] == true means the mid won't be released
       in compound_send_recv cleanup */
    /* cifs_cancelled_callback won't be executed to release the mid */
```

The root cause is a race between callback assignment and callback
execution.

Fix this by introducing per-mid locking:

- Add spinlock_t mid_lock to struct mid_q_entry
- Add mid_execute_callback() for atomic callback execution
- Use mid_lock in cancellation paths to ensure atomicity

This ensures that either the original callback or the cancellation
callback executes atomically, preventing reference count leaks when
requests are interrupted by signals.
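The fetch-and-clear handoff that makes execution exactly-once can be modeled in plain userspace C (an illustrative sketch with made-up names, not the kernel code; a pthread mutex stands in for mid_lock):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-in for struct mid_q_entry: only the fields the
 * pattern needs. */
struct mid {
	pthread_mutex_t lock;		/* plays the role of mid_lock */
	void (*callback)(struct mid *);
	int calls;			/* counts callback executions */
};

static void count_call(struct mid *m) { m->calls++; }

/* Fetch-and-clear under the lock: whichever path gets here first runs
 * the callback; every later caller sees NULL and does nothing. The
 * callback itself runs outside the lock, so it may sleep. */
static void execute_callback(struct mid *m)
{
	void (*cb)(struct mid *);

	pthread_mutex_lock(&m->lock);
	cb = m->callback;
	m->callback = NULL;
	pthread_mutex_unlock(&m->lock);

	if (cb)
		cb(m);
}

/* Swap in a replacement only if the original has not run yet --
 * mirrors the cancellation path taking midQ->mid_lock. */
static int try_replace_callback(struct mid *m, void (*cb)(struct mid *))
{
	int replaced = 0;

	pthread_mutex_lock(&m->lock);
	if (m->callback) {
		m->callback = cb;
		replaced = 1;
	}
	pthread_mutex_unlock(&m->lock);
	return replaced;
}

static int demo(void)
{
	struct mid m = { PTHREAD_MUTEX_INITIALIZER, count_call, 0 };

	execute_callback(&m);	/* runs count_call once */
	execute_callback(&m);	/* callback already NULL: no-op */
	/* Cancellation after execution finds NULL and backs off. */
	assert(!try_replace_callback(&m, count_call));
	return m.calls;		/* exactly one execution */
}
```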

Link: https://bugzilla.kernel.org/show_bug.cgi?id=220404
Fixes: ee258d79159a ("CIFS: Move credit processing to mid callbacks for SMB3")
Signed-off-by: Wang Zhaolong <wangzhaolong@huaweicloud.com>
---
 fs/smb/client/cifsglob.h      | 21 +++++++++++++++++++++
 fs/smb/client/cifstransport.c | 19 +++++++++----------
 fs/smb/client/connect.c       |  8 ++++----
 fs/smb/client/smb2ops.c       |  4 ++--
 fs/smb/client/smb2transport.c |  1 +
 fs/smb/client/transport.c     |  7 +++----
 6 files changed, 40 insertions(+), 20 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index e6830ab3a546..1e64a4fb6af0 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1730,10 +1730,11 @@ struct mid_q_entry {
 	unsigned int resp_buf_size;
 	int mid_state;	/* wish this were enum but can not pass to wait_event */
 	int mid_rc;		/* rc for MID_RC */
 	__le16 command;		/* smb command code */
 	unsigned int optype;	/* operation type */
+	spinlock_t mid_lock;
 	bool wait_cancelled:1;  /* Cancelled while waiting for response */
 	bool deleted_from_q:1;  /* Whether Mid has been dequeued frem pending_mid_q */
 	bool large_buf:1;	/* if valid response, is pointer to large buf */
 	bool multiRsp:1;	/* multiple trans2 responses for one request  */
 	bool multiEnd:1;	/* both received */
@@ -2034,10 +2035,13 @@ require use of the stronger protocol */
  *								init_cached_dir
  * cifsFileInfo->fh_mutex	cifsFileInfo			cifs_new_fileinfo
  * cifsFileInfo->file_info_lock	cifsFileInfo->count		cifs_new_fileinfo
  *				->invalidHandle			initiate_cifs_search
  *				->oplock_break_cancelled
+ * mid_q_entry->mid_lock	mid_q_entry->callback           alloc_mid
+ *								smb2_mid_entry_alloc
+ *				(Any fields of mid_q_entry that will need protection)
  ****************************************************************************/
 
 #ifdef DECLARE_GLOBALS_HERE
 #define GLOBAL_EXTERN
 #else
@@ -2373,10 +2377,27 @@ static inline bool cifs_netbios_name(const char *name, size_t namelen)
 		}
 	}
 	return ret;
 }
 
+/*
+ * Execute mid callback atomically - ensures callback runs exactly once
+ * and prevents sleeping in atomic context.
+ */
+static inline void mid_execute_callback(struct mid_q_entry *mid)
+{
+	void (*callback)(struct mid_q_entry *mid);
+
+	spin_lock(&mid->mid_lock);
+	callback = mid->callback;
+	mid->callback = NULL;  /* Mark as executed */
+	spin_unlock(&mid->mid_lock);
+
+	if (callback)
+		callback(mid);
+}
+
 #define CIFS_REPARSE_SUPPORT(tcon) \
 	((tcon)->posix_extensions || \
 	 (le32_to_cpu((tcon)->fsAttrInfo.Attributes) & \
 	  FILE_SUPPORTS_REPARSE_POINTS))
 
diff --git a/fs/smb/client/cifstransport.c b/fs/smb/client/cifstransport.c
index 352dafb888dd..e98b95eff8c9 100644
--- a/fs/smb/client/cifstransport.c
+++ b/fs/smb/client/cifstransport.c
@@ -44,10 +44,11 @@ alloc_mid(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
 	}
 
 	temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS);
 	memset(temp, 0, sizeof(struct mid_q_entry));
 	kref_init(&temp->refcount);
+	spin_lock_init(&temp->mid_lock);
 	temp->mid = get_mid(smb_buffer);
 	temp->pid = current->pid;
 	temp->command = cpu_to_le16(smb_buffer->Command);
 	cifs_dbg(FYI, "For smb_command %d\n", smb_buffer->Command);
 	/* easier to use jiffies */
@@ -343,20 +344,19 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
 		goto out;
 
 	rc = wait_for_response(server, midQ);
 	if (rc != 0) {
 		send_cancel(server, &rqst, midQ);
-		spin_lock(&server->mid_queue_lock);
-		if (midQ->mid_state == MID_REQUEST_SUBMITTED ||
-		    midQ->mid_state == MID_RESPONSE_RECEIVED) {
+		spin_lock(&midQ->mid_lock);
+		if (midQ->callback) {
 			/* no longer considered to be "in-flight" */
 			midQ->callback = release_mid;
-			spin_unlock(&server->mid_queue_lock);
+			spin_unlock(&midQ->mid_lock);
 			add_credits(server, &credits, 0);
 			return rc;
 		}
-		spin_unlock(&server->mid_queue_lock);
+		spin_unlock(&midQ->mid_lock);
 	}
 
 	rc = cifs_sync_mid_result(midQ, server);
 	if (rc != 0) {
 		add_credits(server, &credits, 0);
@@ -525,19 +525,18 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 		}
 
 		rc = wait_for_response(server, midQ);
 		if (rc) {
 			send_cancel(server, &rqst, midQ);
-			spin_lock(&server->mid_queue_lock);
-			if (midQ->mid_state == MID_REQUEST_SUBMITTED ||
-			    midQ->mid_state == MID_RESPONSE_RECEIVED) {
+			spin_lock(&midQ->mid_lock);
+			if (midQ->callback) {
 				/* no longer considered to be "in-flight" */
 				midQ->callback = release_mid;
-				spin_unlock(&server->mid_queue_lock);
+				spin_unlock(&midQ->mid_lock);
 				return rc;
 			}
-			spin_unlock(&server->mid_queue_lock);
+			spin_unlock(&midQ->mid_lock);
 		}
 
 		/* We got the response - restart system call. */
 		rstart = 1;
 		spin_lock(&server->srv_lock);
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
index 587845a2452d..281ccbeea719 100644
--- a/fs/smb/client/connect.c
+++ b/fs/smb/client/connect.c
@@ -333,11 +333,11 @@ cifs_abort_connection(struct TCP_Server_Info *server)
 	cifs_server_unlock(server);
 
 	cifs_dbg(FYI, "%s: issuing mid callbacks\n", __func__);
 	list_for_each_entry_safe(mid, nmid, &retry_list, qhead) {
 		list_del_init(&mid->qhead);
-		mid->callback(mid);
+		mid_execute_callback(mid);
 		release_mid(mid);
 	}
 
 	if (cifs_rdma_enabled(server)) {
 		cifs_server_lock(server);
@@ -917,11 +917,11 @@ is_smb_response(struct TCP_Server_Info *server, unsigned char type)
 			 */
 			list_for_each_entry_safe(mid, nmid, &dispose_list, qhead) {
 				list_del_init(&mid->qhead);
 				mid->mid_rc = mid_rc;
 				mid->mid_state = MID_RC;
-				mid->callback(mid);
+				mid_execute_callback(mid);
 				release_mid(mid);
 			}
 
 			/*
 			 * If reconnect failed then wait two seconds. In most
@@ -1115,11 +1115,11 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
 		/* now walk dispose list and issue callbacks */
 		list_for_each_safe(tmp, tmp2, &dispose_list) {
 			mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
 			cifs_dbg(FYI, "Callback mid %llu\n", mid_entry->mid);
 			list_del_init(&mid_entry->qhead);
-			mid_entry->callback(mid_entry);
+			mid_execute_callback(mid_entry);
 			release_mid(mid_entry);
 		}
 		/* 1/8th of sec is more than enough time for them to exit */
 		msleep(125);
 	}
@@ -1392,11 +1392,11 @@ cifs_demultiplex_thread(void *p)
 								"Share deleted. Reconnect needed");
 					}
 				}
 
 				if (!mids[i]->multiRsp || mids[i]->multiEnd)
-					mids[i]->callback(mids[i]);
+					mid_execute_callback(mids[i]);
 
 				release_mid(mids[i]);
 			} else if (server->ops->is_oplock_break &&
 				   server->ops->is_oplock_break(bufs[i],
 								server)) {
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index ad8947434b71..f7a0f1c81b43 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4803,19 +4803,19 @@ static void smb2_decrypt_offload(struct work_struct *work)
 #endif
 			if (dw->server->ops->is_network_name_deleted)
 				dw->server->ops->is_network_name_deleted(dw->buf,
 									 dw->server);
 
-			mid->callback(mid);
+			mid_execute_callback(mid);
 		} else {
 			spin_lock(&dw->server->srv_lock);
 			if (dw->server->tcpStatus == CifsNeedReconnect) {
 				spin_lock(&dw->server->mid_queue_lock);
 				mid->mid_state = MID_RETRY_NEEDED;
 				spin_unlock(&dw->server->mid_queue_lock);
 				spin_unlock(&dw->server->srv_lock);
-				mid->callback(mid);
+				mid_execute_callback(mid);
 			} else {
 				spin_lock(&dw->server->mid_queue_lock);
 				mid->mid_state = MID_REQUEST_SUBMITTED;
 				mid->deleted_from_q = false;
 				list_add_tail(&mid->qhead,
diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c
index ff9ef7fcd010..bc0e92eb2b64 100644
--- a/fs/smb/client/smb2transport.c
+++ b/fs/smb/client/smb2transport.c
@@ -769,10 +769,11 @@ smb2_mid_entry_alloc(const struct smb2_hdr *shdr,
 	}
 
 	temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS);
 	memset(temp, 0, sizeof(struct mid_q_entry));
 	kref_init(&temp->refcount);
+	spin_lock_init(&temp->mid_lock);
 	temp->mid = le64_to_cpu(shdr->MessageId);
 	temp->credits = credits > 0 ? credits : 1;
 	temp->pid = current->pid;
 	temp->command = shdr->Command; /* Always LE */
 	temp->when_alloc = jiffies;
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 32d528b4dd83..a61ba7f3fb86 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -1003,19 +1003,18 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
 	if (rc != 0) {
 		for (; i < num_rqst; i++) {
 			cifs_server_dbg(FYI, "Cancelling wait for mid %llu cmd: %d\n",
 				 midQ[i]->mid, le16_to_cpu(midQ[i]->command));
 			send_cancel(server, &rqst[i], midQ[i]);
-			spin_lock(&server->mid_queue_lock);
+			spin_lock(&midQ[i]->mid_lock);
 			midQ[i]->wait_cancelled = true;
-			if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED ||
-			    midQ[i]->mid_state == MID_RESPONSE_RECEIVED) {
+			if (midQ[i]->callback) {
 				midQ[i]->callback = cifs_cancelled_callback;
 				cancelled_mid[i] = true;
 				credits[i].value = 0;
 			}
-			spin_unlock(&server->mid_queue_lock);
+			spin_unlock(&midQ[i]->mid_lock);
 		}
 	}
 
 	for (i = 0; i < num_rqst; i++) {
 		if (rc < 0)
-- 
2.39.2



* [PATCH V3 2/2] smb: client: Clean up mid_queue_lock usage and standardize mid_state access
  2025-08-11 14:07 [PATCH V3 0/2] Fix mid_q_entry memory leaks in SMB client and further cleanup Wang Zhaolong
  2025-08-11 14:07 ` [PATCH V3 1/2] smb: client: fix mid_q_entry memory leak with per-mid locking Wang Zhaolong
@ 2025-08-11 14:07 ` Wang Zhaolong
  1 sibling, 0 replies; 3+ messages in thread
From: Wang Zhaolong @ 2025-08-11 14:07 UTC (permalink / raw)
  To: sfrench, pshilov
  Cc: linux-cifs, samba-technical, linux-kernel, chengzhihao1, yi.zhang,
	yangerkun

This patch removes unnecessary mid_queue_lock usage in several places.
Many functions held mid_queue_lock only to protect mid_state updates on
mids that are no longer on the pending queue, such as in
smb2_decrypt_offload() and various callback functions.

This patch also standardizes mid_state access using READ_ONCE/WRITE_ONCE
throughout the codebase, ensuring consistent memory access patterns and
preventing the compiler from tearing or refetching these lockless
accesses.

The core insight is that mid_state synchronization doesn't require the
same lock as queue structure protection. Separating these concerns makes
the locking model clearer and reduces unnecessary lock contention.
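As a rough userspace analogue of the access pattern (the kernel's real READ_ONCE/WRITE_ONCE macros are more elaborate; the volatile cast below only captures the core idea, and the state values are illustrative):

```c
#include <assert.h>

/* Minimal userspace stand-ins for the kernel macros: a volatile access
 * forces the compiler to emit exactly one load or store, preventing
 * tearing and refetching of the shared variable. */
#define READ_ONCE(x)      (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))

enum { MID_REQUEST_SUBMITTED = 1, MID_RESPONSE_RECEIVED = 2 };

static int mid_state;

/* Reader snapshots the state once, then makes every decision from that
 * snapshot -- the pattern the patch applies in the callbacks, instead
 * of re-reading mid->mid_state for each comparison. */
static int state_is_pending(void)
{
	int state = READ_ONCE(mid_state);

	return state == MID_REQUEST_SUBMITTED ||
	       state == MID_RESPONSE_RECEIVED;
}

static int demo(void)
{
	int pending;

	WRITE_ONCE(mid_state, MID_REQUEST_SUBMITTED);
	pending = state_is_pending();
	WRITE_ONCE(mid_state, 0);
	return pending;
}
```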

Signed-off-by: Wang Zhaolong <wangzhaolong@huaweicloud.com>
---
 fs/smb/client/cifssmb.c       |  8 +++---
 fs/smb/client/cifstransport.c | 50 ++++++++++++++++++++++++++---------
 fs/smb/client/connect.c       | 17 ++++++------
 fs/smb/client/smb1ops.c       |  5 ++--
 fs/smb/client/smb2ops.c       | 22 +++++++--------
 fs/smb/client/smb2pdu.c       | 10 ++++---
 fs/smb/client/smb2transport.c |  2 +-
 fs/smb/client/transport.c     | 43 +++++++++++++++++-------------
 8 files changed, 95 insertions(+), 62 deletions(-)

diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index d20766f664c4..1da3436a52cb 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1309,16 +1309,17 @@ cifs_readv_callback(struct mid_q_entry *mid)
 		.value = 1,
 		.instance = 0,
 		.rreq_debug_id = rdata->rreq->debug_id,
 		.rreq_debug_index = rdata->subreq.debug_index,
 	};
+	int mid_state = READ_ONCE(mid->mid_state);
 
 	cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n",
-		 __func__, mid->mid, mid->mid_state, rdata->result,
+		 __func__, mid->mid, mid_state, rdata->result,
 		 rdata->subreq.len);
 
-	switch (mid->mid_state) {
+	switch (mid_state) {
 	case MID_RESPONSE_RECEIVED:
 		/* result already set, check signature */
 		if (server->sign) {
 			int rc = 0;
 
@@ -1694,12 +1695,13 @@ cifs_writev_callback(struct mid_q_entry *mid)
 		.rreq_debug_id = wdata->rreq->debug_id,
 		.rreq_debug_index = wdata->subreq.debug_index,
 	};
 	ssize_t result;
 	size_t written;
+	int mid_state = READ_ONCE(mid->mid_state);
 
-	switch (mid->mid_state) {
+	switch (mid_state) {
 	case MID_RESPONSE_RECEIVED:
 		result = cifs_check_receive(mid, tcon->ses->server, 0);
 		if (result != 0)
 			break;
 
diff --git a/fs/smb/client/cifstransport.c b/fs/smb/client/cifstransport.c
index e98b95eff8c9..c69aff70c7a4 100644
--- a/fs/smb/client/cifstransport.c
+++ b/fs/smb/client/cifstransport.c
@@ -64,11 +64,11 @@ alloc_mid(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
 	temp->creator = current;
 	temp->callback = cifs_wake_up_task;
 	temp->callback_data = current;
 
 	atomic_inc(&mid_count);
-	temp->mid_state = MID_REQUEST_ALLOCATED;
+	WRITE_ONCE(temp->mid_state, MID_REQUEST_ALLOCATED);
 	return temp;
 }
 
 int
 smb_send(struct TCP_Server_Info *server, struct smb_hdr *smb_buffer,
@@ -328,11 +328,11 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
 	if (rc) {
 		cifs_server_unlock(server);
 		goto out;
 	}
 
-	midQ->mid_state = MID_REQUEST_SUBMITTED;
+	WRITE_ONCE(midQ->mid_state, MID_REQUEST_SUBMITTED);
 
 	rc = smb_send(server, in_buf, len);
 	cifs_save_when_sent(midQ);
 
 	if (rc < 0)
@@ -362,11 +362,11 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
 		add_credits(server, &credits, 0);
 		return rc;
 	}
 
 	if (!midQ->resp_buf || !out_buf ||
-	    midQ->mid_state != MID_RESPONSE_READY) {
+	    READ_ONCE(midQ->mid_state) != MID_RESPONSE_READY) {
 		rc = -EIO;
 		cifs_server_dbg(VFS, "Bad MID state?\n");
 		goto out;
 	}
 
@@ -403,10 +403,40 @@ send_lock_cancel(const unsigned int xid, struct cifs_tcon *tcon,
 
 	return SendReceive(xid, ses, in_buf, out_buf,
 			&bytes_returned, 0);
 }
 
+static inline bool cifs_blocking_lock_should_exit(struct mid_q_entry *midQ,
+					struct TCP_Server_Info *server)
+{
+	int mid_state = READ_ONCE(midQ->mid_state);
+	int tcp_status = READ_ONCE(server->tcpStatus);
+
+	if (mid_state != MID_REQUEST_SUBMITTED &&
+		mid_state != MID_RESPONSE_RECEIVED)
+		return true;
+
+	if (tcp_status != CifsGood && tcp_status != CifsNew)
+		return true;
+
+	return false;
+}
+
+static inline bool cifs_blocking_lock_can_cancel(struct mid_q_entry *midQ,
+					struct TCP_Server_Info *server)
+{
+	int mid_state = READ_ONCE(midQ->mid_state);
+	int tcp_status = READ_ONCE(server->tcpStatus);
+	/* Can only cancel if still pending */
+	bool still_pending = (mid_state == MID_REQUEST_SUBMITTED ||
+			mid_state == MID_RESPONSE_RECEIVED);
+	/* Can only cancel if server connection is good */
+	bool server_good = (tcp_status == CifsGood || tcp_status == CifsNew);
+
+	return still_pending && server_good;
+}
+
 int
 SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 	    struct smb_hdr *in_buf, struct smb_hdr *out_buf,
 	    int *pbytes_returned)
 {
@@ -470,11 +500,11 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 		delete_mid(midQ);
 		cifs_server_unlock(server);
 		return rc;
 	}
 
-	midQ->mid_state = MID_REQUEST_SUBMITTED;
+	WRITE_ONCE(midQ->mid_state, MID_REQUEST_SUBMITTED);
 	rc = smb_send(server, in_buf, len);
 	cifs_save_when_sent(midQ);
 
 	if (rc < 0)
 		server->sequence_number -= 2;
@@ -486,22 +516,16 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 		return rc;
 	}
 
 	/* Wait for a reply - allow signals to interrupt. */
 	rc = wait_event_interruptible(server->response_q,
-		(!(midQ->mid_state == MID_REQUEST_SUBMITTED ||
-		   midQ->mid_state == MID_RESPONSE_RECEIVED)) ||
-		((server->tcpStatus != CifsGood) &&
-		 (server->tcpStatus != CifsNew)));
+		cifs_blocking_lock_should_exit(midQ, server));
 
 	/* Were we interrupted by a signal ? */
 	spin_lock(&server->srv_lock);
 	if ((rc == -ERESTARTSYS) &&
-		(midQ->mid_state == MID_REQUEST_SUBMITTED ||
-		 midQ->mid_state == MID_RESPONSE_RECEIVED) &&
-		((server->tcpStatus == CifsGood) ||
-		 (server->tcpStatus == CifsNew))) {
+	     cifs_blocking_lock_can_cancel(midQ, server)) {
 		spin_unlock(&server->srv_lock);
 
 		if (in_buf->Command == SMB_COM_TRANSACTION2) {
 			/* POSIX lock. We send a NT_CANCEL SMB to cause the
 			   blocking lock to return. */
@@ -546,11 +570,11 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 	rc = cifs_sync_mid_result(midQ, server);
 	if (rc != 0)
 		return rc;
 
 	/* rcvd frame is ok */
-	if (out_buf == NULL || midQ->mid_state != MID_RESPONSE_READY) {
+	if (out_buf == NULL || READ_ONCE(midQ->mid_state) != MID_RESPONSE_READY) {
 		rc = -EIO;
 		cifs_tcon_dbg(VFS, "Bad MID state?\n");
 		goto out;
 	}
 
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
index 281ccbeea719..4cbaf8fe3ccf 100644
--- a/fs/smb/client/connect.c
+++ b/fs/smb/client/connect.c
@@ -322,12 +322,12 @@ cifs_abort_connection(struct TCP_Server_Info *server)
 	INIT_LIST_HEAD(&retry_list);
 	cifs_dbg(FYI, "%s: moving mids to private list\n", __func__);
 	spin_lock(&server->mid_queue_lock);
 	list_for_each_entry_safe(mid, nmid, &server->pending_mid_q, qhead) {
 		kref_get(&mid->refcount);
-		if (mid->mid_state == MID_REQUEST_SUBMITTED)
-			mid->mid_state = MID_RETRY_NEEDED;
+		if (READ_ONCE(mid->mid_state) == MID_REQUEST_SUBMITTED)
+			WRITE_ONCE(mid->mid_state, MID_RETRY_NEEDED);
 		list_move(&mid->qhead, &retry_list);
 		mid->deleted_from_q = true;
 	}
 	spin_unlock(&server->mid_queue_lock);
 	cifs_server_unlock(server);
@@ -916,11 +916,11 @@ is_smb_response(struct TCP_Server_Info *server, unsigned char type)
 			 * return code should be read from mid_rc member.
 			 */
 			list_for_each_entry_safe(mid, nmid, &dispose_list, qhead) {
 				list_del_init(&mid->qhead);
 				mid->mid_rc = mid_rc;
-				mid->mid_state = MID_RC;
+				WRITE_ONCE(mid->mid_state, MID_RC);
 				mid_execute_callback(mid);
 				release_mid(mid);
 			}
 
 			/*
@@ -955,19 +955,18 @@ void
 dequeue_mid(struct mid_q_entry *mid, bool malformed)
 {
 #ifdef CONFIG_CIFS_STATS2
 	mid->when_received = jiffies;
 #endif
-	spin_lock(&mid->server->mid_queue_lock);
-	if (!malformed)
-		mid->mid_state = MID_RESPONSE_RECEIVED;
-	else
-		mid->mid_state = MID_RESPONSE_MALFORMED;
+	WRITE_ONCE(mid->mid_state, malformed ? MID_RESPONSE_MALFORMED :
+		   MID_RESPONSE_RECEIVED);
 	/*
 	 * Trying to handle/dequeue a mid after the send_recv()
 	 * function has finished processing it is a bug.
 	 */
+
+	spin_lock(&mid->server->mid_queue_lock);
 	if (mid->deleted_from_q == true) {
 		spin_unlock(&mid->server->mid_queue_lock);
 		pr_warn_once("trying to dequeue a deleted mid\n");
 	} else {
 		list_del_init(&mid->qhead);
@@ -1104,11 +1103,11 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
 		spin_lock(&server->mid_queue_lock);
 		list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
 			mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
 			cifs_dbg(FYI, "Clearing mid %llu\n", mid_entry->mid);
 			kref_get(&mid_entry->refcount);
-			mid_entry->mid_state = MID_SHUTDOWN;
+			WRITE_ONCE(mid_entry->mid_state, MID_SHUTDOWN);
 			list_move(&mid_entry->qhead, &dispose_list);
 			mid_entry->deleted_from_q = true;
 		}
 		spin_unlock(&server->mid_queue_lock);
 
diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
index 893a1ea8c000..5ef007a0bad0 100644
--- a/fs/smb/client/smb1ops.c
+++ b/fs/smb/client/smb1ops.c
@@ -96,11 +96,11 @@ cifs_find_mid(struct TCP_Server_Info *server, char *buffer)
 	struct mid_q_entry *mid;
 
 	spin_lock(&server->mid_queue_lock);
 	list_for_each_entry(mid, &server->pending_mid_q, qhead) {
 		if (compare_mid(mid->mid, buf) &&
-		    mid->mid_state == MID_REQUEST_SUBMITTED &&
+		    READ_ONCE(mid->mid_state) == MID_REQUEST_SUBMITTED &&
 		    le16_to_cpu(mid->command) == buf->Command) {
 			kref_get(&mid->refcount);
 			spin_unlock(&server->mid_queue_lock);
 			return mid;
 		}
@@ -199,11 +199,12 @@ cifs_get_next_mid(struct TCP_Server_Info *server)
 		num_mids = 0;
 		spin_lock(&server->mid_queue_lock);
 		list_for_each_entry(mid_entry, &server->pending_mid_q, qhead) {
 			++num_mids;
 			if (mid_entry->mid == cur_mid &&
-			    mid_entry->mid_state == MID_REQUEST_SUBMITTED) {
+			    READ_ONCE(mid_entry->mid_state) ==
+				      MID_REQUEST_SUBMITTED) {
 				/* This mid is in use, try a different one */
 				collision = true;
 				break;
 			}
 		}
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index f7a0f1c81b43..0938a33a7856 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -402,11 +402,11 @@ __smb2_find_mid(struct TCP_Server_Info *server, char *buf, bool dequeue)
 	}
 
 	spin_lock(&server->mid_queue_lock);
 	list_for_each_entry(mid, &server->pending_mid_q, qhead) {
 		if ((mid->mid == wire_mid) &&
-		    (mid->mid_state == MID_REQUEST_SUBMITTED) &&
+		    (READ_ONCE(mid->mid_state) == MID_REQUEST_SUBMITTED) &&
 		    (mid->command == shdr->Command)) {
 			kref_get(&mid->refcount);
 			if (dequeue) {
 				list_del_init(&mid->qhead);
 				mid->deleted_from_q = true;
@@ -4661,11 +4661,11 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 	if (rdata->result != 0) {
 		cifs_dbg(FYI, "%s: server returned error %d\n",
 			 __func__, rdata->result);
 		/* normal error on read response */
 		if (is_offloaded)
-			mid->mid_state = MID_RESPONSE_RECEIVED;
+			WRITE_ONCE(mid->mid_state, MID_RESPONSE_RECEIVED);
 		else
 			dequeue_mid(mid, false);
 		return 0;
 	}
 
@@ -4688,11 +4688,11 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 		/* data_offset is beyond the end of smallbuf */
 		cifs_dbg(FYI, "%s: data offset (%u) beyond end of smallbuf\n",
 			 __func__, data_offset);
 		rdata->result = -EIO;
 		if (is_offloaded)
-			mid->mid_state = MID_RESPONSE_MALFORMED;
+			WRITE_ONCE(mid->mid_state, MID_RESPONSE_MALFORMED);
 		else
 			dequeue_mid(mid, rdata->result);
 		return 0;
 	}
 
@@ -4707,32 +4707,32 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 			/* data offset is beyond the 1st page of response */
 			cifs_dbg(FYI, "%s: data offset (%u) beyond 1st page of response\n",
 				 __func__, data_offset);
 			rdata->result = -EIO;
 			if (is_offloaded)
-				mid->mid_state = MID_RESPONSE_MALFORMED;
+				WRITE_ONCE(mid->mid_state, MID_RESPONSE_MALFORMED);
 			else
 				dequeue_mid(mid, rdata->result);
 			return 0;
 		}
 
 		if (data_len > buffer_len - pad_len) {
 			/* data_len is corrupt -- discard frame */
 			rdata->result = -EIO;
 			if (is_offloaded)
-				mid->mid_state = MID_RESPONSE_MALFORMED;
+				WRITE_ONCE(mid->mid_state, MID_RESPONSE_MALFORMED);
 			else
 				dequeue_mid(mid, rdata->result);
 			return 0;
 		}
 
 		/* Copy the data to the output I/O iterator. */
 		rdata->result = cifs_copy_folioq_to_iter(buffer, buffer_len,
 							 cur_off, &rdata->subreq.io_iter);
 		if (rdata->result != 0) {
 			if (is_offloaded)
-				mid->mid_state = MID_RESPONSE_MALFORMED;
+				WRITE_ONCE(mid->mid_state, MID_RESPONSE_MALFORMED);
 			else
 				dequeue_mid(mid, rdata->result);
 			return 0;
 		}
 		rdata->got_bytes = buffer_len;
@@ -4747,18 +4747,18 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
 	} else {
 		/* read response payload cannot be in both buf and pages */
 		WARN_ONCE(1, "buf can not contain only a part of read data");
 		rdata->result = -EIO;
 		if (is_offloaded)
-			mid->mid_state = MID_RESPONSE_MALFORMED;
+			WRITE_ONCE(mid->mid_state, MID_RESPONSE_MALFORMED);
 		else
 			dequeue_mid(mid, rdata->result);
 		return 0;
 	}
 
 	if (is_offloaded)
-		mid->mid_state = MID_RESPONSE_RECEIVED;
+		WRITE_ONCE(mid->mid_state, MID_RESPONSE_RECEIVED);
 	else
 		dequeue_mid(mid, false);
 	return 0;
 }
 
@@ -4807,18 +4807,16 @@ static void smb2_decrypt_offload(struct work_struct *work)
 
 			mid_execute_callback(mid);
 		} else {
 			spin_lock(&dw->server->srv_lock);
 			if (dw->server->tcpStatus == CifsNeedReconnect) {
-				spin_lock(&dw->server->mid_queue_lock);
-				mid->mid_state = MID_RETRY_NEEDED;
-				spin_unlock(&dw->server->mid_queue_lock);
 				spin_unlock(&dw->server->srv_lock);
+				WRITE_ONCE(mid->mid_state, MID_RETRY_NEEDED);
 				mid_execute_callback(mid);
 			} else {
 				spin_lock(&dw->server->mid_queue_lock);
-				mid->mid_state = MID_REQUEST_SUBMITTED;
+				WRITE_ONCE(mid->mid_state, MID_REQUEST_SUBMITTED);
 				mid->deleted_from_q = false;
 				list_add_tail(&mid->qhead,
 					&dw->server->pending_mid_q);
 				spin_unlock(&dw->server->mid_queue_lock);
 				spin_unlock(&dw->server->srv_lock);
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index 2df93a75e3b8..a9aab1207ee4 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4092,13 +4092,14 @@ static void
 smb2_echo_callback(struct mid_q_entry *mid)
 {
 	struct TCP_Server_Info *server = mid->callback_data;
 	struct smb2_echo_rsp *rsp = (struct smb2_echo_rsp *)mid->resp_buf;
 	struct cifs_credits credits = { .value = 0, .instance = 0 };
+	int mid_state = READ_ONCE(mid->mid_state);
 
-	if (mid->mid_state == MID_RESPONSE_RECEIVED
-	    || mid->mid_state == MID_RESPONSE_MALFORMED) {
+	if (mid_state == MID_RESPONSE_RECEIVED
+	    || mid_state == MID_RESPONSE_MALFORMED) {
 		credits.value = le16_to_cpu(rsp->hdr.CreditRequest);
 		credits.instance = server->reconnect_instance;
 	}
 
 	release_mid(mid);
@@ -4531,24 +4532,25 @@ smb2_readv_callback(struct mid_q_entry *mid)
 		.rreq_debug_index = rdata->subreq.debug_index,
 	};
 	struct smb_rqst rqst = { .rq_iov = &rdata->iov[1], .rq_nvec = 1 };
 	unsigned int rreq_debug_id = rdata->rreq->debug_id;
 	unsigned int subreq_debug_index = rdata->subreq.debug_index;
+	int mid_state = READ_ONCE(mid->mid_state);
 
 	if (rdata->got_bytes) {
 		rqst.rq_iter	  = rdata->subreq.io_iter;
 	}
 
 	WARN_ONCE(rdata->server != mid->server,
 		  "rdata server %p != mid server %p",
 		  rdata->server, mid->server);
 
 	cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu/%zu\n",
-		 __func__, mid->mid, mid->mid_state, rdata->result,
+		 __func__, mid->mid, mid_state, rdata->result,
 		 rdata->got_bytes, rdata->subreq.len - rdata->subreq.transferred);
 
-	switch (mid->mid_state) {
+	switch (mid_state) {
 	case MID_RESPONSE_RECEIVED:
 		credits.value = le16_to_cpu(shdr->CreditRequest);
 		credits.instance = server->reconnect_instance;
 		/* result already set, check signature */
 		if (server->sign && !mid->decrypted) {
diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c
index bc0e92eb2b64..7c4108f1a28b 100644
--- a/fs/smb/client/smb2transport.c
+++ b/fs/smb/client/smb2transport.c
@@ -787,11 +787,11 @@ smb2_mid_entry_alloc(const struct smb2_hdr *shdr,
 	temp->creator = current;
 	temp->callback = cifs_wake_up_task;
 	temp->callback_data = current;
 
 	atomic_inc(&mid_count);
-	temp->mid_state = MID_REQUEST_ALLOCATED;
+	WRITE_ONCE(temp->mid_state, MID_REQUEST_ALLOCATED);
 	trace_smb3_cmd_enter(le32_to_cpu(shdr->Id.SyncId.TreeId),
 			     le64_to_cpu(shdr->SessionId),
 			     le16_to_cpu(shdr->Command), temp->mid);
 	return temp;
 }
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index a61ba7f3fb86..28b5b7011017 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -31,12 +31,12 @@
 #include "compress.h"
 
 void
 cifs_wake_up_task(struct mid_q_entry *mid)
 {
-	if (mid->mid_state == MID_RESPONSE_RECEIVED)
-		mid->mid_state = MID_RESPONSE_READY;
+	if (READ_ONCE(mid->mid_state) == MID_RESPONSE_RECEIVED)
+		WRITE_ONCE(mid->mid_state, MID_RESPONSE_READY);
 	wake_up_process(mid->callback_data);
 }
 
 void __release_mid(struct kref *refcount)
 {
@@ -47,18 +47,19 @@ void __release_mid(struct kref *refcount)
 	__u16 smb_cmd = le16_to_cpu(midEntry->command);
 	unsigned long now;
 	unsigned long roundtrip_time;
 #endif
 	struct TCP_Server_Info *server = midEntry->server;
+	int mid_state = READ_ONCE(midEntry->mid_state);
 
 	if (midEntry->resp_buf && (midEntry->wait_cancelled) &&
-	    (midEntry->mid_state == MID_RESPONSE_RECEIVED ||
-	     midEntry->mid_state == MID_RESPONSE_READY) &&
+	    (mid_state == MID_RESPONSE_RECEIVED ||
+	     mid_state == MID_RESPONSE_READY) &&
 	    server->ops->handle_cancelled_mid)
 		server->ops->handle_cancelled_mid(midEntry, server);
 
-	midEntry->mid_state = MID_FREE;
+	WRITE_ONCE(midEntry->mid_state, MID_FREE);
 	atomic_dec(&mid_count);
 	if (midEntry->large_buf)
 		cifs_buf_release(midEntry->resp_buf);
 	else
 		cifs_small_buf_release(midEntry->resp_buf);
@@ -631,17 +632,24 @@ cifs_wait_mtu_credits(struct TCP_Server_Info *server, size_t size,
 	credits->value = 0;
 	credits->instance = server->reconnect_instance;
 	return 0;
 }
 
+static inline bool cifs_mid_response_ready(struct mid_q_entry *mid)
+{
+	int mid_state = READ_ONCE(mid->mid_state);
+
+	return (mid_state != MID_REQUEST_SUBMITTED &&
+		mid_state != MID_RESPONSE_RECEIVED);
+}
+
 int wait_for_response(struct TCP_Server_Info *server, struct mid_q_entry *midQ)
 {
 	int error;
 
 	error = wait_event_state(server->response_q,
-				 midQ->mid_state != MID_REQUEST_SUBMITTED &&
-				 midQ->mid_state != MID_RESPONSE_RECEIVED,
+				 cifs_mid_response_ready(midQ),
 				 (TASK_KILLABLE|TASK_FREEZABLE_UNSAFE));
 	if (error < 0)
 		return -ERESTARTSYS;
 
 	return 0;
@@ -696,11 +704,11 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
 
 	mid->receive = receive;
 	mid->callback = callback;
 	mid->callback_data = cbdata;
 	mid->handle = handle;
-	mid->mid_state = MID_REQUEST_SUBMITTED;
+	WRITE_ONCE(mid->mid_state, MID_REQUEST_SUBMITTED);
 
 	/* put it on the pending_mid_q */
 	spin_lock(&server->mid_queue_lock);
 	list_add_tail(&mid->qhead, &server->pending_mid_q);
 	spin_unlock(&server->mid_queue_lock);
@@ -728,18 +736,17 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
 }
 
 int cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
 {
 	int rc = 0;
+	int state = READ_ONCE(mid->mid_state);
 
 	cifs_dbg(FYI, "%s: cmd=%d mid=%llu state=%d\n",
-		 __func__, le16_to_cpu(mid->command), mid->mid, mid->mid_state);
+		 __func__, le16_to_cpu(mid->command), mid->mid, state);
 
-	spin_lock(&server->mid_queue_lock);
-	switch (mid->mid_state) {
+	switch (state) {
 	case MID_RESPONSE_READY:
-		spin_unlock(&server->mid_queue_lock);
 		return rc;
 	case MID_RETRY_NEEDED:
 		rc = -EAGAIN;
 		break;
 	case MID_RESPONSE_MALFORMED:
@@ -750,21 +757,21 @@ int cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server
 		break;
 	case MID_RC:
 		rc = mid->mid_rc;
 		break;
 	default:
+		spin_lock(&server->mid_queue_lock);
 		if (mid->deleted_from_q == false) {
 			list_del_init(&mid->qhead);
 			mid->deleted_from_q = true;
 		}
 		spin_unlock(&server->mid_queue_lock);
 		cifs_server_dbg(VFS, "%s: invalid mid state mid=%llu state=%d\n",
-			 __func__, mid->mid, mid->mid_state);
+			 __func__, mid->mid, state);
 		rc = -EIO;
 		goto sync_mid_done;
 	}
-	spin_unlock(&server->mid_queue_lock);
 
 sync_mid_done:
 	release_mid(mid);
 	return rc;
 }
@@ -778,12 +785,12 @@ cifs_compound_callback(struct mid_q_entry *mid)
 		.instance = server->reconnect_instance,
 	};
 
 	add_credits(server, &credits, mid->optype);
 
-	if (mid->mid_state == MID_RESPONSE_RECEIVED)
-		mid->mid_state = MID_RESPONSE_READY;
+	if (READ_ONCE(mid->mid_state) == MID_RESPONSE_RECEIVED)
+		WRITE_ONCE(mid->mid_state, MID_RESPONSE_READY);
 }
 
 static void
 cifs_compound_last_callback(struct mid_q_entry *mid)
 {
@@ -936,11 +943,11 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
 			for (j = 0; j < num_rqst; j++)
 				add_credits(server, &credits[j], optype);
 			return PTR_ERR(midQ[i]);
 		}
 
-		midQ[i]->mid_state = MID_REQUEST_SUBMITTED;
+		WRITE_ONCE(midQ[i]->mid_state, MID_REQUEST_SUBMITTED);
 		midQ[i]->optype = optype;
 		/*
 		 * Invoke callback for every part of the compound chain
 		 * to calculate credits properly. Wake up this thread only when
 		 * the last element is received.
@@ -1026,11 +1033,11 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
 			cancelled_mid[i] = true;
 			goto out;
 		}
 
 		if (!midQ[i]->resp_buf ||
-		    midQ[i]->mid_state != MID_RESPONSE_READY) {
+		    READ_ONCE(midQ[i]->mid_state) != MID_RESPONSE_READY) {
 			rc = -EIO;
 			cifs_dbg(FYI, "Bad MID state?\n");
 			goto out;
 		}
 
-- 
2.39.2

