From: Qingfang Deng
To: linux-ppp@vger.kernel.org, Andrew Lunn, "David S. Miller",
	Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Vadim Fedorenko
Subject: [PATCH net-next] ppp: don't store tx skb in the fastpath
Date: Fri, 27 Feb 2026 09:56:10 +0800
Message-ID: <20260227015610.24874-1-dqfext@gmail.com>
X-Mailer: git-send-email 2.43.0

Currently, ppp->xmit_pending is used in ppp_send_frame() to pass an skb
to ppp_push(), and holds the skb when a PPP channel cannot immediately
transmit it. This state is redundant because the transmit queue
(ppp->file.xq) can already handle the backlog. Furthermore, during
normal operation, an skb is queued in file.xq only to be immediately
dequeued, causing unnecessary overhead.

Refactor the transmit path to avoid stashing the skb when possible:

- Remove ppp->xmit_pending.
- Rename ppp_send_frame() to ppp_prepare_tx_skb(), and don't call
  ppp_push() in it. It returns NULL if the skb is consumed
  (dropped/queued), or a new skb to be passed to ppp_push().
- Update ppp_push() to accept the skb. It returns 1 if the skb is
  consumed, or 0 if the channel is busy.
- Optimize __ppp_xmit_process():
  - Fastpath: if the queue is empty, attempt to send the skb directly
    via ppp_push(). If busy, queue it.
  - Slowpath: if the queue is not empty, process the backlog in
    file.xq.
- Split the dequeuing loop into a separate function, ppp_xmit_flush(),
  so ppp_channel_push() uses it directly instead of passing a NULL skb
  to __ppp_xmit_process().

This simplifies the state handling and reduces locking in the fastpath.
Signed-off-by: Qingfang Deng
---
Repost as non-RFC:
- https://lore.kernel.org/linux-ppp/20260210031313.29708-1-dqfext@gmail.com/

 drivers/net/ppp/ppp_generic.c | 107 +++++++++++++++++++---------------
 1 file changed, 61 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index e9b41777be80..559bdc223a5e 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -134,7 +134,6 @@ struct ppp {
 	int		debug;		/* debug flags 70 */
 	struct slcompress *vj;		/* state for VJ header compression */
 	enum NPmode	npmode[NUM_NP];	/* what to do with each net proto 78 */
-	struct sk_buff	*xmit_pending;	/* a packet ready to go out 88 */
 	struct compressor *xcomp;	/* transmit packet compressor 8c */
 	void		*xc_state;	/* its internal state 90 */
 	struct compressor *rcomp;	/* receive decompressor 94 */
@@ -264,8 +263,8 @@ struct ppp_net {
 static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf,
 			struct file *file, unsigned int cmd, unsigned long arg);
 static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb);
-static void ppp_send_frame(struct ppp *ppp, struct sk_buff *skb);
-static void ppp_push(struct ppp *ppp);
+static struct sk_buff *ppp_prepare_tx_skb(struct ppp *ppp, struct sk_buff *skb);
+static int ppp_push(struct ppp *ppp, struct sk_buff *skb);
 static void ppp_channel_push(struct channel *pch);
 static void ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb,
 			      struct channel *pch);
@@ -1651,26 +1650,45 @@ static void ppp_setup(struct net_device *dev)
  */
 
 /* Called to do any work queued up on the transmit side that can now be done */
+static void ppp_xmit_flush(struct ppp *ppp)
+{
+	struct sk_buff *skb;
+
+	while ((skb = skb_dequeue(&ppp->file.xq))) {
+		if (unlikely(!ppp_push(ppp, skb))) {
+			skb_queue_head(&ppp->file.xq, skb);
+			return;
+		}
+	}
+	/* If there's no work left to do, tell the core net code that we can
+	 * accept some more.
+	 */
+	netif_wake_queue(ppp->dev);
+}
+
 static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
 {
 	ppp_xmit_lock(ppp);
-	if (!ppp->closing) {
-		ppp_push(ppp);
-
-		if (skb)
+	if (unlikely(ppp->closing)) {
+		kfree_skb(skb);
+		goto out;
+	}
+	skb = ppp_prepare_tx_skb(ppp, skb);
+	if (unlikely(!skb))
+		goto out;
+	/* Fastpath: No backlog, just send the new skb. */
+	if (likely(skb_queue_empty(&ppp->file.xq))) {
+		if (unlikely(!ppp_push(ppp, skb))) {
 			skb_queue_tail(&ppp->file.xq, skb);
-		while (!ppp->xmit_pending &&
-		       (skb = skb_dequeue(&ppp->file.xq)))
-			ppp_send_frame(ppp, skb);
-		/* If there's no work left to do, tell the core net
-		   code that we can accept some more. */
-		if (!ppp->xmit_pending && !skb_peek(&ppp->file.xq))
-			netif_wake_queue(ppp->dev);
-		else
 			netif_stop_queue(ppp->dev);
-	} else {
-		kfree_skb(skb);
+		}
+		goto out;
 	}
+
+	/* Slowpath: Enqueue the new skb and process backlog */
+	skb_queue_tail(&ppp->file.xq, skb);
+	ppp_xmit_flush(ppp);
+out:
 	ppp_xmit_unlock(ppp);
 }
 
@@ -1757,12 +1775,11 @@ pad_compress_skb(struct ppp *ppp, struct sk_buff *skb)
 }
 
 /*
- * Compress and send a frame.
- * The caller should have locked the xmit path,
- * and xmit_pending should be 0.
+ * Compress and prepare to send a frame.
+ * The caller should have locked the xmit path.
  */
-static void
-ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+static struct sk_buff *
+ppp_prepare_tx_skb(struct ppp *ppp, struct sk_buff *skb)
 {
 	int proto = PPP_PROTO(skb);
 	struct sk_buff *new_skb;
@@ -1784,7 +1801,7 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
 					   "PPP: outbound frame "
 					   "not passed\n");
 			kfree_skb(skb);
-			return;
+			return NULL;
 		}
 		/* if this packet passes the active filter, record the time */
 		if (!(ppp->active_filter &&
@@ -1869,42 +1886,38 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
 			goto drop;
 		skb_queue_tail(&ppp->file.rq, skb);
 		wake_up_interruptible(&ppp->file.rwait);
-		return;
+		return NULL;
 	}
 
-	ppp->xmit_pending = skb;
-	ppp_push(ppp);
-	return;
+	return skb;
 
  drop:
 	kfree_skb(skb);
 	++ppp->dev->stats.tx_errors;
+	return NULL;
 }
 
 /*
- * Try to send the frame in xmit_pending.
+ * Try to send the frame.
  * The caller should have the xmit path locked.
+ * Returns 1 if the skb was consumed, 0 if not.
  */
-static void
-ppp_push(struct ppp *ppp)
+static int
+ppp_push(struct ppp *ppp, struct sk_buff *skb)
 {
 	struct list_head *list;
 	struct channel *pch;
-	struct sk_buff *skb = ppp->xmit_pending;
-
-	if (!skb)
-		return;
 
 	list = &ppp->channels;
 	if (list_empty(list)) {
 		/* nowhere to send the packet, just drop it */
-		ppp->xmit_pending = NULL;
 		kfree_skb(skb);
-		return;
+		return 1;
 	}
 
 	if ((ppp->flags & SC_MULTILINK) == 0) {
 		struct ppp_channel *chan;
+		int ret;
 
 		/* not doing multilink: send it down the first channel */
 		list = list->next;
 		pch = list_entry(list, struct channel, clist);
@@ -1916,27 +1929,26 @@ ppp_push(struct ppp *ppp)
 			 * skb but linearization failed
 			 */
 			kfree_skb(skb);
-			ppp->xmit_pending = NULL;
+			ret = 1;
 			goto out;
 		}
 
-		if (chan->ops->start_xmit(chan, skb))
-			ppp->xmit_pending = NULL;
+		ret = chan->ops->start_xmit(chan, skb);
 out:
 		spin_unlock(&pch->downl);
-		return;
+		return ret;
 	}
 
 #ifdef CONFIG_PPP_MULTILINK
 	/* Multilink: fragment the packet over as many links
 	 * as can take the packet at the moment.
	 */
 	if (!ppp_mp_explode(ppp, skb))
-		return;
+		return 0;
 #endif /* CONFIG_PPP_MULTILINK */
 
-	ppp->xmit_pending = NULL;
 	kfree_skb(skb);
+	return 1;
 }
 
 #ifdef CONFIG_PPP_MULTILINK
@@ -2005,7 +2017,7 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 	 * performance if we have a lot of channels.
 	 */
 	if (nfree == 0 || nfree < navail / 2)
-		return 0; /* can't take now, leave it in xmit_pending */
+		return 0; /* can't take now, leave it in transmit queue */
 
 	/* Do protocol field compression */
 	if (skb_linearize(skb))
@@ -2199,8 +2211,12 @@ static void __ppp_channel_push(struct channel *pch, struct ppp *ppp)
 	spin_unlock(&pch->downl);
 	/* see if there is anything from the attached unit to be sent */
 	if (skb_queue_empty(&pch->file.xq)) {
-		if (ppp)
-			__ppp_xmit_process(ppp, NULL);
+		if (ppp) {
+			ppp_xmit_lock(ppp);
+			if (!ppp->closing)
+				ppp_xmit_flush(ppp);
+			ppp_xmit_unlock(ppp);
+		}
 	}
 }
 
@@ -3460,7 +3476,6 @@ static void ppp_destroy_interface(struct ppp *ppp)
 	}
 #endif /* CONFIG_PPP_FILTER */
 
-	kfree_skb(ppp->xmit_pending);
 	free_percpu(ppp->xmit_recursion);
 	free_netdev(ppp->dev);
-- 
2.43.0