From: Qingfang Deng <dqfext@gmail.com>
To: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-ppp@vger.kernel.org
Subject: [RFC PATCH net-next] net: ppp: convert to IFF_NO_QUEUE
Date: Tue, 29 Oct 2024 18:36:56 +0800
Message-ID: <20241029103656.2151-1-dqfext@gmail.com>

When testing parallel TX performance on a single PPPoE interface over a
2.5GbE link with multiple hardware queues, throughput plateaued at
around 1.9Gbps even though CPU usage stayed low.

This happens because the PPP interface is registered with a single TX
queue and a tx_queue_len of 3. This default dates back to Linux 2.3.13
and was reasonable for slow serial ports, but on modern systems with
multiple processors and hardware queues such a short single queue
quickly becomes congested and limits throughput.

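For context (not part of this patch; simplified from the current
ppp_generic.c), the netdev is allocated with a single TX queue and
ppp_setup() keeps the legacy queue length:

	/* ppp_create_interface(): alloc_netdev() registers one TX queue */
	dev = alloc_netdev(sizeof(struct ppp), "", NET_NAME_ENUM, ppp_setup);

	/* ppp_setup(): default carried over from the serial-port era */
	dev->tx_queue_len = 3;
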
For PPPoE/PPTP, queueing should be left to the qdisc of the lower
interface, so IFF_NO_QUEUE is the right choice. For PPP over a serial
port, a qdisc with such a short TX queue is of no benefit either, so it
is more effective to handle TX queueing inside the driver and set
IFF_NO_QUEUE as well.

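For context (core code, not part of this patch; a simplified sketch of
the default qdisc selection in net/sched/sch_generic.c): with
IFF_NO_QUEUE set, the stack attaches the noqueue qdisc, so skbs are
handed straight to ndo_start_xmit() instead of being queued behind a
3-packet fifo:

	/* simplified from attach_one_default_qdisc() */
	const struct Qdisc_ops *ops = default_qdisc_ops;

	if (dev->priv_flags & IFF_NO_QUEUE)
		ops = &noqueue_qdisc_ops;	/* bypass qdisc queueing */

	qdisc = qdisc_create_dflt(dev_queue, ops, TC_H_ROOT, NULL);

With the qdisc out of the way, the driver-side ppp->file.xq queue
(capped at PPP_MAX_QLEN by this patch) takes over the buffering
previously provided by the 3-packet device queue.
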
With this change, PPPoE interfaces can now fully saturate a 2.5GbE link.
Signed-off-by: Qingfang Deng <dqfext@gmail.com>
---
drivers/net/ppp/ppp_generic.c | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index 4b2971e2bf48..5470e0fe1f9b 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -236,8 +236,8 @@ struct ppp_net {
/* Get the PPP protocol number from a skb */
#define PPP_PROTO(skb) get_unaligned_be16((skb)->data)
-/* We limit the length of ppp->file.rq to this (arbitrary) value */
-#define PPP_MAX_RQLEN 32
+/* We limit the length of ppp->file.rq/xq to this (arbitrary) value */
+#define PPP_MAX_QLEN 32
/*
* Maximum number of multilink fragments queued up.
@@ -920,8 +920,6 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
break;
} else {
ppp->npmode[i] = npi.mode;
- /* we may be able to transmit more packets now (??) */
- netif_wake_queue(ppp->dev);
}
err = 0;
break;
@@ -1639,6 +1637,7 @@ static void ppp_setup(struct net_device *dev)
dev->tx_queue_len = 3;
dev->type = ARPHRD_PPP;
dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+ dev->priv_flags |= IFF_NO_QUEUE;
dev->priv_destructor = ppp_dev_priv_destructor;
netif_keep_dst(dev);
}
@@ -1654,17 +1653,15 @@ static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
if (!ppp->closing) {
ppp_push(ppp);
- if (skb)
- skb_queue_tail(&ppp->file.xq, skb);
+ if (skb) {
+ if (ppp->file.xq.qlen > PPP_MAX_QLEN)
+ kfree_skb(skb);
+ else
+ skb_queue_tail(&ppp->file.xq, skb);
+ }
while (!ppp->xmit_pending &&
(skb = skb_dequeue(&ppp->file.xq)))
ppp_send_frame(ppp, skb);
- /* If there's no work left to do, tell the core net
- code that we can accept some more. */
- if (!ppp->xmit_pending && !skb_peek(&ppp->file.xq))
- netif_wake_queue(ppp->dev);
- else
- netif_stop_queue(ppp->dev);
} else {
kfree_skb(skb);
}
@@ -1850,7 +1847,7 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
* queue it up for pppd to receive.
*/
if (ppp->flags & SC_LOOP_TRAFFIC) {
- if (ppp->file.rq.qlen > PPP_MAX_RQLEN)
+ if (ppp->file.rq.qlen > PPP_MAX_QLEN)
goto drop;
skb_queue_tail(&ppp->file.rq, skb);
wake_up_interruptible(&ppp->file.rwait);
@@ -2319,7 +2316,7 @@ ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
/* put it on the channel queue */
skb_queue_tail(&pch->file.rq, skb);
/* drop old frames if queue too long */
- while (pch->file.rq.qlen > PPP_MAX_RQLEN &&
+ while (pch->file.rq.qlen > PPP_MAX_QLEN &&
(skb = skb_dequeue(&pch->file.rq)))
kfree_skb(skb);
wake_up_interruptible(&pch->file.rwait);
@@ -2472,7 +2469,7 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
/* control or unknown frame - pass it to pppd */
skb_queue_tail(&ppp->file.rq, skb);
/* limit queue length by dropping old frames */
- while (ppp->file.rq.qlen > PPP_MAX_RQLEN &&
+ while (ppp->file.rq.qlen > PPP_MAX_QLEN &&
(skb = skb_dequeue(&ppp->file.rq)))
kfree_skb(skb);
/* wake up any process polling or blocking on read */
--
2.34.1