From: Guillaume Nault <gnault@redhat.com>
To: Qingfang Deng <dqfext@gmail.com>
Cc: Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
linux-ppp@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 1/3] ppp: convert rlock to rwlock to improve RX concurrency
Date: Thu, 26 Jun 2025 18:23:03 +0200 [thread overview]
Message-ID: <aF1z52+rpNyIKk0O@debian> (raw)
In-Reply-To: <20250625034021.3650359-2-dqfext@gmail.com>
On Wed, Jun 25, 2025 at 11:40:18AM +0800, Qingfang Deng wrote:
> The rlock spinlock in struct ppp protects the receive path, but it is
> frequently used in a read-mostly manner. Converting it to rwlock_t
> allows multiple CPUs to concurrently enter the receive path (e.g.,
> ppp_do_recv()), improving RX performance.
>
> Write locking is preserved for code paths that mutate state.
>
> Signed-off-by: Qingfang Deng <dqfext@gmail.com>
> ---
> drivers/net/ppp/ppp_generic.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
> index 4cf9d1822a83..f0f8a75e3aef 100644
> --- a/drivers/net/ppp/ppp_generic.c
> +++ b/drivers/net/ppp/ppp_generic.c
> @@ -118,7 +118,7 @@ struct ppp {
> struct file *owner; /* file that owns this unit 48 */
> struct list_head channels; /* list of attached channels 4c */
> int n_channels; /* how many channels are attached 54 */
> - spinlock_t rlock; /* lock for receive side 58 */
> + rwlock_t rlock; /* lock for receive side 58 */
> spinlock_t wlock; /* lock for transmit side 5c */
> int __percpu *xmit_recursion; /* xmit recursion detect */
> int mru; /* max receive unit 60 */
> @@ -371,12 +371,14 @@ static const int npindex_to_ethertype[NUM_NP] = {
> */
> #define ppp_xmit_lock(ppp) spin_lock_bh(&(ppp)->wlock)
> #define ppp_xmit_unlock(ppp) spin_unlock_bh(&(ppp)->wlock)
> -#define ppp_recv_lock(ppp) spin_lock_bh(&(ppp)->rlock)
> -#define ppp_recv_unlock(ppp) spin_unlock_bh(&(ppp)->rlock)
> +#define ppp_recv_lock(ppp) write_lock_bh(&(ppp)->rlock)
> +#define ppp_recv_unlock(ppp) write_unlock_bh(&(ppp)->rlock)
> #define ppp_lock(ppp) do { ppp_xmit_lock(ppp); \
> ppp_recv_lock(ppp); } while (0)
> #define ppp_unlock(ppp) do { ppp_recv_unlock(ppp); \
> ppp_xmit_unlock(ppp); } while (0)
> +#define ppp_recv_read_lock(ppp) read_lock_bh(&(ppp)->rlock)
> +#define ppp_recv_read_unlock(ppp) read_unlock_bh(&(ppp)->rlock)
>
> /*
> * /dev/ppp device routines.
> @@ -1246,7 +1248,7 @@ static int ppp_dev_configure(struct net *src_net, struct net_device *dev,
> for (indx = 0; indx < NUM_NP; ++indx)
> ppp->npmode[indx] = NPMODE_PASS;
> INIT_LIST_HEAD(&ppp->channels);
> - spin_lock_init(&ppp->rlock);
> + rwlock_init(&ppp->rlock);
> spin_lock_init(&ppp->wlock);
>
> ppp->xmit_recursion = alloc_percpu(int);
> @@ -2193,12 +2195,12 @@ struct ppp_mp_skb_parm {
> static inline void
> ppp_do_recv(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
> {
> - ppp_recv_lock(ppp);
> + ppp_recv_read_lock(ppp);
> if (!ppp->closing)
> ppp_receive_frame(ppp, skb, pch);
That doesn't look right. Several PPP Rx features are stateful
(multilink, compression, etc.) and their current implementations
don't take any precautions when updating the shared state.
For example, see how bsd_decompress() (in bsd_comp.c) updates db->*
fields all over the place. This db variable comes from ppp->rc_state,
which is passed as a parameter of the ppp->rcomp->decompress() call in
ppp_decompress_frame().
I think a lot of work would be needed before we could allow
ppp_do_recv() to run concurrently on the same struct ppp.
> else
> kfree_skb(skb);
> - ppp_recv_unlock(ppp);
> + ppp_recv_read_unlock(ppp);
> }
>
> /**
> --
> 2.43.0
>
Thread overview: 10+ messages
2025-06-25 3:40 [PATCH net-next 0/3] ppp: improve receive path performance Qingfang Deng
2025-06-25 3:40 ` [PATCH net-next 1/3] ppp: convert rlock to rwlock to improve RX concurrency Qingfang Deng
2025-06-25 8:11 ` Andrew Lunn
2025-06-26 16:23 ` Guillaume Nault [this message]
2025-06-27 3:58 ` Qingfang Deng
2025-06-27 6:44 ` Eric Dumazet
2025-06-27 9:23 ` Qingfang Deng
2025-06-25 3:40 ` [PATCH net-next 2/3] pppoe: bypass sk_receive_skb for PPPOX_BOUND sockets Qingfang Deng
2025-06-25 3:40 ` [PATCH net-next 3/3] ppp: synchronize netstats updates Qingfang Deng
2025-06-25 8:16 ` Andrew Lunn