From: Zhu Yanjun <yanjun.zhu@linux.dev>
To: Daisuke Matsuda <dskmtsd@gmail.com>,
linux-rdma@vger.kernel.org, leon@kernel.org, jgg@ziepe.ca,
zyjzyj2000@gmail.com
Cc: philipp.reisner@linbit.com
Subject: Re: [PATCH for-rc v1] RDMA/rxe: Avoid CQ polling hang triggered by CQ resize
Date: Fri, 22 Aug 2025 22:22:39 -0700
Message-ID: <41f3b537-3463-498e-a2bb-cb8be8176a1a@linux.dev>
In-Reply-To: <6851c585-b7ed-43a8-8edf-b08573a37afd@gmail.com>
On 2025/8/22 21:19, Daisuke Matsuda wrote:
> On 2025/08/21 12:12, Zhu Yanjun wrote:
>> On 2025/8/19 8:15, Daisuke Matsuda wrote:
>>> On 2025/08/18 13:44, Zhu Yanjun wrote:
>>>> On 2025/8/17 5:37, Daisuke Matsuda wrote:
>>>>> When running the test_resize_cq testcase from rdma-core, polling a
>>>>> completion queue from userspace may occasionally hang and eventually
>>>>> fail with a timeout:
>>>>> =====
>>>>> ERROR: test_resize_cq (tests.test_cq.CQTest.test_resize_cq)
>>>>> Test resize CQ, start with specific value and then increase and
>>>>> decrease
>>>>> ----------------------------------------------------------------------
>>>>>
>>>>> Traceback (most recent call last):
>>>>>   File "/root/deb/rdma-core/tests/test_cq.py", line 135, in test_resize_cq
>>>>>     u.poll_cq(self.client.cq)
>>>>>   File "/root/deb/rdma-core/tests/utils.py", line 687, in poll_cq
>>>>>     wcs = _poll_cq(cq, count, data)
>>>>>           ^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>>   File "/root/deb/rdma-core/tests/utils.py", line 669, in _poll_cq
>>>>>     raise PyverbsError(f'Got timeout on polling ({count} CQEs remaining)')
>>>>> pyverbs.pyverbs_error.PyverbsError: Got timeout on polling (1 CQEs remaining)
>>>>> =====
>>>>>
>>>>> The issue is caused when rxe_cq_post() fails to post a CQE because
>>>>> the queue is temporarily full, and the CQE is effectively lost. To
>>>>> mitigate this, add a bounded busy-wait with fallback rescheduling so
>>>>> that the CQE does not get lost.
>>>>>
>>>>> Signed-off-by: Daisuke Matsuda <dskmtsd@gmail.com>
>>>>> ---
>>>>> drivers/infiniband/sw/rxe/rxe_cq.c | 27 +++++++++++++++++++++++++--
>>>>> 1 file changed, 25 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
>>>>> index fffd144d509e..7b0fba63204e 100644
>>>>> --- a/drivers/infiniband/sw/rxe/rxe_cq.c
>>>>> +++ b/drivers/infiniband/sw/rxe/rxe_cq.c
>>>>> @@ -84,14 +84,36 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe,
>>>>>  /* caller holds reference to cq */
>>>>>  int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
>>>>>  {
>>>>> +	unsigned long flags;
>>>>> +	u32 spin_cnt = 3000;
>>>>>  	struct ib_event ev;
>>>>> -	int full;
>>>>>  	void *addr;
>>>>> -	unsigned long flags;
>>>>> +	int full;
>>>>>  
>>>>>  	spin_lock_irqsave(&cq->cq_lock, flags);
>>>>>  
>>>>>  	full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
>>>>> +	if (likely(!full))
>>>>> +		goto post_queue;
>>>>> +
>>>>> +	/* constant backoff until queue is ready */
>>>>> +	while (spin_cnt--) {
>>>>> +		full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
>>>>> +		if (!full)
>>>>> +			goto post_queue;
>>>>> +
>>>>> +		cpu_relax();
>>>>> +	}
>>>>
>>>> The loop runs 3000 times. Each iteration:
>>>>
>>>>   - checks queue_full()
>>>>   - executes cpu_relax()
>>>>
>>>> On modern CPUs, each iteration may take a few cycles, e.g., 4–10
>>>> cycles (depending on memory/cache).
>>>>
>>>> Suppose 1 cycle ≈ 0.3 ns on a 3 GHz CPU, so 10 cycles ≈ 3 ns:
>>>>
>>>>   3000 iterations × 10 cycles ≈ 30,000 cycles
>>>>   30,000 cycles × 0.3 ns = 9,000 ns = 9 microseconds
>>>>
>>>> So the critical section while spinning is on the order of ten
>>>> microseconds, not milliseconds.
>>>>
>>>> I was concerned that 3000 iterations might make the spinlock
>>>> critical section too long, but based on the analysis above, this
>>>> still appears to be a short-duration critical section.
>>>
>>> Thank you for the review.
>>>
>>> Assuming the two loads in queue_full() hit in the L1 cache, I estimate
>>> each iteration could take around 15–20 cycles. Based on your
>>> calculation, the maximum total time would be approximately 18
>>> microseconds.
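>>>
>>> If it helps, the worst case could also be measured directly instead of
>>> estimated. A rough, untested sketch of instrumentation around the spin
>>> loop in rxe_cq_post() (the trace message is arbitrary):
>>> ===
>>> /* Hypothetical instrumentation, not part of the patch: time how long
>>>  * we actually spin while holding cq_lock with interrupts disabled.
>>>  */
>>> u64 start = ktime_get_ns();
>>>
>>> while (spin_cnt--) {
>>> 	full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
>>> 	if (!full)
>>> 		break;
>>> 	cpu_relax();
>>> }
>>> trace_printk("rxe_cq_post() spun for %llu ns\n",
>>> 	     ktime_get_ns() - start);
>>> ===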
>>
>> ======================================================================
>> ERROR: test_rdmacm_async_write (tests.test_rdmacm.CMTestCase)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>> File "/..../rdma-core/tests/test_rdmacm.py", line 71, in
>> test_rdmacm_async_write
>> self.two_nodes_rdmacm_traffic(CMAsyncConnection,
>> File "/..../rdma-core/tests/base.py", line 447, in
>> two_nodes_rdmacm_traffic
>> raise Exception('Exception in active/passive side occurred')
>> Exception: Exception in active/passive side occurred
>>
>> After applying your commit, I ran the following run_tests.py loop
>> 10000 times. The above error sometimes appears; the frequency is
>> very low.
>>
>> "
>> for (( i = 0; i < 10000; i++ ))
>> do
>>     rdma-core/build/bin/run_tests.py --dev rxe0
>> done
>> "
>> It is weird.
>
> I tried running test_rdmacm_async_write alone 50000 times, but could
> not reproduce this one. There have been multiple latency-related issues
> in RXE, so it is not surprising that a new one is uncovered by changing
> a seemingly irrelevant part.
>
> How about applying the additional change below:
> ===
> diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
> index 7b0fba63204e..8f8d56051b8d 100644
> --- a/drivers/infiniband/sw/rxe/rxe_cq.c
> +++ b/drivers/infiniband/sw/rxe/rxe_cq.c
> @@ -102,7 +102,9 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
>  		if (!full)
>  			goto post_queue;
>  
> +		spin_unlock_irqrestore(&cq->cq_lock, flags);
>  		cpu_relax();
> +		spin_lock_irqsave(&cq->cq_lock, flags);
>  	}
>  
>  	/* try giving up cpu and retry */
> ===
> This makes cpu_relax() almost meaningless, but it ensures the lock is
> released in each iteration.
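>
> With both changes applied, the retry loop would look roughly like the
> sketch below (the post_queue label and the rescheduling fallback after
> the loop are from the original patch, which is not fully quoted here):
> ===
> 	while (spin_cnt--) {
> 		full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
> 		if (!full)
> 			goto post_queue;
>
> 		/* release the lock so other cq_lock users can progress */
> 		spin_unlock_irqrestore(&cq->cq_lock, flags);
> 		cpu_relax();
> 		spin_lock_irqsave(&cq->cq_lock, flags);
> 	}
> ===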
>
> It would be nice if you could report how often it fails and whether
> failing runs take longer than usual. That could be a helpful starting
> point for finding a solution.
With a clean KVM QEMU VM, after applying your commit, the same problem
occurs every time the above script is run.
Yanjun.Zhu
>
> Thanks,
> Daisuke
>
>>
>> Yanjun.Zhu
>>
>
>
--
Best Regards,
Yanjun.Zhu
Thread overview: 9+ messages
2025-08-17 12:37 [PATCH for-rc v1] RDMA/rxe: Avoid CQ polling hang triggered by CQ resize Daisuke Matsuda
2025-08-18 4:44 ` Zhu Yanjun
2025-08-19 15:15 ` Daisuke Matsuda
2025-08-21 3:12 ` Zhu Yanjun
2025-08-23 4:19 ` Daisuke Matsuda
2025-08-23 5:22 ` Zhu Yanjun [this message]
2025-08-25 18:10 ` Jason Gunthorpe
2025-08-27 11:14 ` Daisuke Matsuda
2025-08-27 12:04 ` Jason Gunthorpe