From: Dongsheng Yang <dongsheng.yang@easystack.cn>
To: Philipp Reisner <philipp.reisner@linbit.com>,
"zhengbing.huang" <zhengbing.huang@easystack.cn>
Cc: drbd-dev@lists.linbit.com
Subject: Re: [PATCH 05/11] drbd_transport_rdma: dont break in dtr_tx_cq_event_handler if (cm->state != DSM_CONNECTED)
Date: Mon, 1 Jul 2024 10:23:14 +0800 [thread overview]
Message-ID: <5de313a6-8b0a-36c4-4b76-307ee1ab3477@easystack.cn> (raw)
In-Reply-To: <CADGDV=XCh8QLqYZ0-zddu6nwdJJor9UGb960K-CmN5yLB58XzA@mail.gmail.com>
On Fri, Jun 28, 2024 at 8:07 PM, Philipp Reisner wrote:
> Hello Dongsheng,
>
> It appears that you are trying to fix a leak of cm structures. Is that correct?
Yes. In our network-failure testing, we found that the drbdadm down
command hangs in dtr_free() at
wait_event(rdma_transport->cm_count_wait, !atomic_read(&rdma_transport->cm_count));
We located the leaked cm in memory and saw that its tx_descs_posted was
not 0. After more digging we found the problem that [05/11] addresses.
Consider this case:
a) two tx_descs are posted, so tx_descs_posted is 2.
b) the first tx_desc completes; dtr_tx_cq_event_handler() runs and
enters dtr_handle_tx_cq_event().
c) a network failure occurs and dtr_tx_timeout_work_fn() clears CONNECTED.
d) dtr_handle_tx_cq_event() returns. By this time the second tx_desc has
already completed, so we expect rc = ib_req_notify_cq(cq, IB_CQ_NEXT_COMP
| IB_CQ_REPORT_MISSED_EVENTS); to return 1 and the next iteration of the
while loop to call dtr_handle_tx_cq_event() again.
e) But the handler sees cm->state != DSM_CONNECTED and breaks out of the
outer while loop, so the second tx_desc is never handled.
> Do you mean the reference on cm that is held because of the timer?
> Please describe what the problem is, and how you are improving the situation.
>
> In case this approach is the right solution, the patch should also change the
> dtr_handle_tx_cq_event() function to type void.
>
> best regards,
> Philipp
>
> On Mon, Jun 24, 2024 at 8:22 AM zhengbing.huang
> <zhengbing.huang@easystack.cn> wrote:
>>
>> From: Dongsheng Yang <dongsheng.yang@easystack.cn>
>>
>> We need to drain all tx in disconnect to put all kref for cm
>>
>> Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
>> ---
>> drbd/drbd_transport_rdma.c | 3 ---
>> 1 file changed, 3 deletions(-)
>>
>> diff --git a/drbd/drbd_transport_rdma.c b/drbd/drbd_transport_rdma.c
>> index b7ccb15d4..9a6d15b78 100644
>> --- a/drbd/drbd_transport_rdma.c
>> +++ b/drbd/drbd_transport_rdma.c
>> @@ -1956,9 +1956,6 @@ static void dtr_tx_cq_event_handler(struct ib_cq *cq, void *ctx)
>> err = dtr_handle_tx_cq_event(cq, cm);
>> } while (!err);
>>
>> - if (cm->state != DSM_CONNECTED)
>> - break;
>> -
>> rc = ib_req_notify_cq(cq, IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
>> if (unlikely(rc < 0)) {
>> struct drbd_transport *transport = cm->path->path.transport;
>> --
>> 2.27.0
>>
> .
>