public inbox for linux-kernel@vger.kernel.org
* [PATCH net-next] ppp: consolidate RX skb queueing
@ 2026-04-28  2:44 Qingfang Deng
  2026-04-28  6:43 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 2+ messages in thread
From: Qingfang Deng @ 2026-04-28  2:44 UTC (permalink / raw)
  To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Qingfang Deng, Guillaume Nault, Breno Leitao,
	Taegu Ha, Kees Cook, Sebastian Andrzej Siewior, linux-ppp, netdev,
	linux-kernel

In ppp_input() and ppp_receive_nonmp_frame(), received skbs are queued
for userspace delivery using the same open-coded pattern:

	skb_queue_tail(&pf->rq, skb);
	while (pf->rq.qlen > PPP_MAX_RQLEN &&
	       (skb = skb_dequeue(&pf->rq)))
		kfree_skb(skb);
	wake_up_interruptible(&pf->rwait);

This has a potential race: skb_queue_tail() releases the queue lock,
then qlen is read locklessly before skb_dequeue() re-acquires it.
Another CPU enqueueing concurrently could cause the length check to see
stale data. This race is benign, as it only causes extra skbs to be
freed in the worst case.

Introduce ppp_file_queue_rx_skb() to perform the enqueue, length check,
and trim atomically within a single pf->rq.lock critical section. As
both callers run in softirq context (with bottom halves already
disabled), plain spin_lock() can be used instead of the
_bh()/_irqsave() variants. Since only one skb is enqueued at a time,
the queue can exceed PPP_MAX_RQLEN by at most one frame, so replace the
while-loop with an if-statement. While at it, use skb_queue_len()
instead of open-coding the qlen access.
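
For illustration, the invariant described above (enqueue plus a single
conditional trim inside one critical section) can be modelled in plain
userspace C; the rxq/node types, rxq_enqueue() name, and MAX_RQLEN
value below are hypothetical stand-ins for the kernel structures, not
kernel API:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical userspace stand-in for PPP_MAX_RQLEN. */
#define MAX_RQLEN 4

struct node {
	struct node *next;
	int val;
};

struct rxq {
	pthread_mutex_t lock;	/* stands in for pf->rq.lock */
	struct node *head, *tail;
	unsigned int qlen;
};

/* Enqueue n and trim at most one stale entry, all under one critical
 * section, mirroring ppp_file_queue_rx_skb(): because only one entry
 * is added per call, the length exceeds MAX_RQLEN by at most one
 * before the trim, so a single "if" suffices.  The dropped node is
 * returned so the caller can free it outside the lock, as kfree_skb()
 * runs after spin_unlock() in the patch. */
static struct node *rxq_enqueue(struct rxq *q, struct node *n)
{
	struct node *old = NULL;

	pthread_mutex_lock(&q->lock);
	n->next = NULL;
	if (q->tail)
		q->tail->next = n;
	else
		q->head = n;
	q->tail = n;
	q->qlen++;
	if (q->qlen > MAX_RQLEN) {
		old = q->head;
		q->head = old->next;
		if (!q->head)
			q->tail = NULL;
		q->qlen--;
	}
	pthread_mutex_unlock(&q->lock);
	return old;
}
```

With one producer call per frame, the queue length observed outside
the critical section never exceeds MAX_RQLEN, which is exactly why the
while-loop can become an if-statement.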

Signed-off-by: Qingfang Deng <qingfang.deng@linux.dev>
---
 drivers/net/ppp/ppp_generic.c | 37 ++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index 57c68efa5ff8..6ab5011540a0 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -2307,6 +2307,27 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
 	return !!pchb;
 }
 
+/* Queue up and deliver a received skb to userspace.
+ * Must be called in softirq context.
+ */
+static void ppp_file_queue_rx_skb(struct ppp_file *pf, struct sk_buff *skb)
+{
+	spin_lock(&pf->rq.lock);
+	__skb_queue_tail(&pf->rq, skb);
+	/* limit queue length by dropping old frames */
+	if (unlikely(skb_queue_len(&pf->rq) > PPP_MAX_RQLEN)) {
+		struct sk_buff *old = __skb_peek(&pf->rq);
+
+		__skb_unlink(old, &pf->rq);
+		spin_unlock(&pf->rq.lock);
+		kfree_skb(old);
+	} else {
+		spin_unlock(&pf->rq.lock);
+	}
+	/* wake up any process polling or blocking on read */
+	wake_up_interruptible(&pf->rwait);
+}
+
 void
 ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
 {
@@ -2337,12 +2358,7 @@ ppp_input(struct ppp_channel *chan, struct sk_buff *skb)
 	proto = PPP_PROTO(skb);
 	if (!ppp || proto >= 0xc000 || proto == PPP_CCPFRAG) {
 		/* put it on the channel queue */
-		skb_queue_tail(&pch->file.rq, skb);
-		/* drop old frames if queue too long */
-		while (pch->file.rq.qlen > PPP_MAX_RQLEN &&
-		       (skb = skb_dequeue(&pch->file.rq)))
-			kfree_skb(skb);
-		wake_up_interruptible(&pch->file.rwait);
+		ppp_file_queue_rx_skb(&pch->file, skb);
 	} else {
 		ppp_do_recv(ppp, skb, pch);
 	}
@@ -2480,14 +2496,7 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
 	npi = proto_to_npindex(proto);
 	if (npi < 0) {
 		/* control or unknown frame - pass it to pppd */
-		skb_queue_tail(&ppp->file.rq, skb);
-		/* limit queue length by dropping old frames */
-		while (ppp->file.rq.qlen > PPP_MAX_RQLEN &&
-		       (skb = skb_dequeue(&ppp->file.rq)))
-			kfree_skb(skb);
-		/* wake up any process polling or blocking on read */
-		wake_up_interruptible(&ppp->file.rwait);
-
+		ppp_file_queue_rx_skb(&ppp->file, skb);
 	} else {
 		/* network protocol frame - give it to the kernel */
 
-- 
2.43.0



* Re: [PATCH net-next] ppp: consolidate RX skb queueing
  2026-04-28  2:44 [PATCH net-next] ppp: consolidate RX skb queueing Qingfang Deng
@ 2026-04-28  6:43 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 2+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-04-28  6:43 UTC (permalink / raw)
  To: Qingfang Deng
  Cc: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Guillaume Nault, Breno Leitao, Taegu Ha, Kees Cook,
	linux-ppp, netdev, linux-kernel

On 2026-04-28 10:44:23 [+0800], Qingfang Deng wrote:
> In ppp_input() and ppp_receive_nonmp_frame(), received skbs are queued
> for userspace delivery using the same open-coded pattern:
> 
> 	skb_queue_tail(&pf->rq, skb);
> 	while (pf->rq.qlen > PPP_MAX_RQLEN &&
> 	       (skb = skb_dequeue(&pf->rq)))
> 		kfree_skb(skb);
> 	wake_up_interruptible(&pf->rwait);
> 
> This has a potential race: skb_queue_tail() releases the queue lock,
> then qlen is read locklessly before skb_dequeue() re-acquires it.
> Another CPU enqueueing concurrently could cause the length check to see
> stale data. This race is benign, as it only causes extra skbs to be
> freed in the worst case.

That is not that bad. You could use skb_queue_len_lockless() to make it
more obvious. However, if thread A enqueues packets while below the
limit and wakes the reader, it could enqueue more, at which point it
will check the limit again. I don't see a problem except that the reader
may get more packets before the queue is trimmed. Again, not an issue.
The limit is only there to prevent a large backlog of packets if
userland does not read the queue for some reason.

Merging the two instances into one function would be nice but there is
no need to complicate things.

Sebastian

