From: Vincent Ray <vray@kalrayinc.com>
To: linyunsheng <linyunsheng@huawei.com>
Cc: davem <davem@davemloft.net>, 方国炬 <guoju.fgj@alibaba-inc.com>,
kuba <kuba@kernel.org>, netdev <netdev@vger.kernel.org>,
"Samuel Jones" <sjones@kalrayinc.com>,
"vladimir oltean" <vladimir.oltean@nxp.com>,
"Guoju Fang" <gjfang@linux.alibaba.com>,
"Remy Gauguey" <rgauguey@kalrayinc.com>
Subject: packet stuck in qdisc : patch proposal
Date: Mon, 23 May 2022 15:54:12 +0200 (CEST)
Message-ID: <1684598287.15044793.1653314052575.JavaMail.zimbra@kalray.eu>
In-Reply-To: <2b827f3b-a9db-e1a7-0dc9-65446e07bc63@linux.alibaba.com>
[-- Attachment #1: Type: text/plain, Size: 3956 bytes --]
Hi Yunsheng, all,
I finally spotted the bug that caused (nvme-)tcp packets to remain stuck in the qdisc once in a while.
It's in qdisc_run_begin(), in include/net/sch_generic.h:

    smp_mb__before_atomic();
    // [comments]
    if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
        return false;

should be

    smp_mb();
    // [comments]
    if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
        return false;
I have written a more detailed explanation in the attached patch, including a race example, but in short that's because test_bit() is not an atomic operation.
Therefore it does not give you any ordering guarantee on any architecture.
And neither does the spin_trylock() called at the beginning of qdisc_run_begin() when it fails to grab the lock...
So test_bit() may be reordered with a preceding enqueue(), leading to a possible race in the dialog with pfifo_fast_dequeue().
We may then end up with a skbuff pushed "silently" to the qdisc (MISSED cleared, nobody aware that there is something in the backlog).
Then the cores pushing new skbuffs to the qdisc may all bypass it for an arbitrary amount of time, leaving the enqueued skbuff stuck in the backlog.
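To make the required ordering concrete, here is a minimal userspace model of the two re-check paths. This is a sketch only: it uses C11 atomics in place of the kernel primitives, and all names (backlog, missed, the two functions) are mine, invented for illustration.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Initial state for the race: CPU 1 has already set MISSED, and
     * the queue is still empty. */
    static atomic_int backlog;      /* 1 models "an skb sits in qdisc->q" */
    static atomic_int missed = 1;   /* models __QDISC_STATE_MISSED        */

    /* CPU 1's side: enqueue skb1, then re-check MISSED. */
    static bool cpu1_sees_missed(void)
    {
        atomic_store_explicit(&backlog, 1, memory_order_relaxed); /* enqueue(skb1) */
        atomic_thread_fence(memory_order_seq_cst);                /* the needed smp_mb() */
        /* test_bit() is just a plain load, with no ordering of its own */
        return atomic_load_explicit(&missed, memory_order_relaxed) != 0;
    }

    /* CPU 2's side: clear MISSED, then re-check the queue. */
    static bool cpu2_sees_backlog(void)
    {
        atomic_store_explicit(&missed, 0, memory_order_relaxed);  /* clear_bit(MISSED) */
        atomic_thread_fence(memory_order_seq_cst);                /* smp_mb__after_atomic() */
        return atomic_load_explicit(&backlog, memory_order_relaxed) != 0;
    }

This is the classic store-buffering pattern: with full fences on both sides, at least one of the two loads is guaranteed to see the other CPU's store. If CPU 1's fence is effectively a NOP, both re-checks can read stale values, i.e. CPU 1 exits on MISSED while CPU 2 finds the queue empty, which is exactly the "silent" skbuff described above.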
I believe the reason you could not reproduce the issue on ARM64 is that, on that architecture, smp_mb__before_atomic() translates to a real memory barrier.
It does not on x86 (it is turned into a NOP) because you're supposed to use this function just before an atomic operation, and atomic operations themselves provide full ordering effects on x86.
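For reference, here is roughly what the helper expands to on each side; this is a paraphrase from memory of the kernel headers, not verbatim source, so double-check against your tree:

    /* x86 (arch/x86/include/asm/barrier.h): atomic RMW instructions
     * are already serializing, so the helper emits nothing: */
    #define __smp_mb__before_atomic()   do { } while (0)

    /* arm64: no arch override, so the asm-generic fallback applies,
     * and that is a full barrier (a dmb ish on arm64): */
    #define __smp_mb__before_atomic()   __smp_mb()

Since test_bit() is a plain load rather than an atomic RMW, on x86 nothing at all ends up ordering the preceding enqueue against it.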
I think the code has been flawed for some time, but the introduction of a (true) bypass policy in 5.14 made it more visible, because without it the "victim" skbuff does not stay very long in the backlog: it is bound to be popped by the next core executing __qdisc_run().
In my setup, with our use case (16 (virtual) cpus in a VM shooting 4KB buffers with fio through a -i4 nvme-tcp connection to a target), I did not notice any performance degradation using smp_mb() in place of smp_mb__before_atomic(), but of course that does not mean it cannot happen in other configs.
I think Guoju's patch is also correct and necessary, so both patches, his and mine, should be applied "asap" to the kernel.
A difference between Guoju's race and "mine" is that, in his case, the MISSED bit will be set: though no one will take care of the skbuff immediately, the next cpu pushing to the qdisc (if ever ...) will notice and dequeue it (so Guoju's race probably happens in my use case too but is not noticeable).
Finally, given the necessity of these two new full barriers in the code, I wonder if the whole lockless (+ bypass) thing should be reconsidered.
At least, I think general performance tests should be run to check that lockless qdiscs still outperform locked qdiscs, in both bypassable and non-bypassable modes.
More generally, I found this piece of code quite tricky and error-prone, as evidenced by the numerous fixes it went through in the recent history.
I believe most of this complexity comes from the lockless qdisc handling in itself, but of course the addition of the bypass support does not really help ;-)
I'm a Linux kernel beginner, however, so I'll let more experienced programmers decide about that :-)
I've made sure that, with this patch, no stuck packets happened any more on both v5.15 and v5.18-rc2 (whereas without the patch, numerous occurrences of stuck packets were visible).
I'm quite confident it will apply to any affected version, that is, from 5.14 (or before) to mainline.
Can you please tell me:
1) if you agree with this?
2) how to proceed to push this patch (and Guoju's) for quick integration into the mainline?
NB: an alternative fix (which I've tested OK too) would be to simply remove the

    if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
        return false;

code path, but I have no clue if this would be better or worse than the present patch in terms of performance.
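For what it's worth, that alternative would read roughly as below. This is an untested sketch of mine against the v5.15-era qdisc_run_begin() (context lines paraphrased, comments elided), and it assumes the barrier goes away together with the test it was ordering:

     static inline bool qdisc_run_begin(struct Qdisc *qdisc)
     {
     	if (qdisc->flags & TCQ_F_NOLOCK) {
     		if (spin_trylock(&qdisc->seqlock))
     			return true;

    -		smp_mb__before_atomic();
    -
    -		/* [comments] */
    -		if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
    -			return false;
    -
     		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
     		...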
Thank you, best regards,
V
[-- Attachment #2: net_sched_fixed_barrier_to_prevent_skbuff_sticking_in_qdisc_backlog.patch --]
[-- Type: text/x-patch; name=net_sched_fixed_barrier_to_prevent_skbuff_sticking_in_qdisc_backlog.patch, Size: 3897 bytes --]
commit 917f7ff2b0f59d721d11f983af1f46c1cd74130a
Author: Vincent Ray <vray@kalray.eu>
Date: Mon May 23 15:24:12 2022 +0200
net: sched: fix barrier to prevent skbuff sticking in qdisc backlog
In qdisc_run_begin(), smp_mb__before_atomic() used before test_bit()
does not provide any ordering guarantee as test_bit() is not an atomic
operation. This, added to the fact that the spin_trylock() call at
the beginning of qdisc_run_begin() does not guarantee acquire
semantics if it does not grab the lock, makes it possible for the
following statement:

    if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))

to be executed before an enqueue operation called before
qdisc_run_begin().
As a result, the following race can happen:

       CPU 1                          CPU 2

  qdisc_run_begin()             qdisc_run_begin() /* true */
  set(MISSED)                   .
  /* returns false */           .
  .                             /* sees MISSED = 1 */
  .                             /* so qdisc not empty */
  .                             __qdisc_run()
  .                             .
  .                             pfifo_fast_dequeue()
 ----> /* may be done here */   .
 |      .                       clear(MISSED)
 |      .                       .
 |      .                       smp_mb__after_atomic();
 |      .                       .
 |      .                       /* recheck the queue */
 |      .                       /* nothing => exit */
 |      enqueue(skb1)           .
 |      .
 |      qdisc_run_begin()
 |      .
 |      spin_trylock() /* fail */
 |      .
 |      smp_mb__before_atomic() /* not enough */
 |      .
 ----   if (test_bit(MISSED))
            return false; /* exit */
In the above scenario, CPU 1 and CPU 2 both try to grab the
qdisc->seqlock at the same time. Only CPU 2 succeeds and enters the
bypass code path, where it emits its skb then calls __qdisc_run().
CPU 1 fails, sets MISSED and goes down the traditional enqueue() +
dequeue() code path. But when executing qdisc_run_begin() for the
second time, after enqueuing its skbuff, it sees the MISSED bit still
set (by itself) and consequently chooses to exit early without setting
it again or trying to grab the spinlock again.
Meanwhile CPU 2 has seen MISSED = 1, cleared it, checked the queue
and found it empty, so it returned.
At the end of the sequence, we end up with skb1 enqueued in the
backlog, both CPUs out of __dev_xmit_skb(), the MISSED bit not set,
and no __netif_schedule() call made. skb1 will now linger in the
qdisc until somebody later performs a full __qdisc_run(). Combined
with the bypass capability of the qdisc, and the ability of the TCP
layer to avoid resending packets which it knows are still in the
qdisc, this can lead to serious traffic "holes" in a TCP connection.
We fix this by turning smp_mb__before_atomic() into smp_mb() which
guarantees the correct ordering of enqueue() vs test_bit() and
consequently prevents the race.
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 9bab396c1f3b..0c6016e10a6f 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -191,7 +191,7 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
* STATE_MISSED checking is synchronized with clearing
* in pfifo_fast_dequeue().
*/
- smp_mb__before_atomic();
+ smp_mb();
/* If the MISSED flag is set, it means other thread has
* set the MISSED flag before second spin_trylock(), so