* [Patch net] kcm: fix a race condition in kcm_recvmsg()
@ 2022-10-23 2:30 Cong Wang
2022-10-25 23:02 ` Jakub Kicinski
2022-11-01 20:52 ` Cong Wang
0 siblings, 2 replies; 7+ messages in thread
From: Cong Wang @ 2022-10-23 2:30 UTC (permalink / raw)
To: netdev; +Cc: Cong Wang, shaozhengchao, Paolo Abeni, Tom Herbert
From: Cong Wang <cong.wang@bytedance.com>
sk->sk_receive_queue is protected by the skb queue lock, but for KCM
sockets the RX path takes mux->rx_lock to protect more than just the
skb queue, so grabbing the skb queue lock is not necessary when
mux->rx_lock is already held. However, kcm_recvmsg() still grabs only
the skb queue lock, so a race condition exists.
Close this race by taking mux->rx_lock in kcm_recvmsg() too. This is
much simpler than enforcing the skb queue lock everywhere.
Fixes: ab7ac4eb9832 ("kcm: Kernel Connection Multiplexor module")
Tested-by: shaozhengchao <shaozhengchao@huawei.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Tom Herbert <tom@herbertland.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
---
net/kcm/kcmsock.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 27725464ec08..8b4e5d0ab2b6 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -1116,6 +1116,7 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
{
struct sock *sk = sock->sk;
struct kcm_sock *kcm = kcm_sk(sk);
+ struct kcm_mux *mux = kcm->mux;
int err = 0;
long timeo;
struct strp_msg *stm;
@@ -1156,8 +1157,10 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
msg_finished:
/* Finished with message */
msg->msg_flags |= MSG_EOR;
+ spin_lock_bh(&mux->rx_lock);
KCM_STATS_INCR(kcm->stats.rx_msgs);
skb_unlink(skb, &sk->sk_receive_queue);
+ spin_unlock_bh(&mux->rx_lock);
kfree_skb(skb);
}
}
--
2.34.1
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-23 2:30 [Patch net] kcm: fix a race condition in kcm_recvmsg() Cong Wang
@ 2022-10-25 23:02 ` Jakub Kicinski
2022-10-25 23:49 ` Eric Dumazet
2022-10-28 19:21 ` Cong Wang
2022-11-01 20:52 ` Cong Wang
1 sibling, 2 replies; 7+ messages in thread
From: Jakub Kicinski @ 2022-10-25 23:02 UTC (permalink / raw)
To: Cong Wang
Cc: netdev, Cong Wang, shaozhengchao, Paolo Abeni, Tom Herbert,
Eric Dumazet
On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> + spin_lock_bh(&mux->rx_lock);
> KCM_STATS_INCR(kcm->stats.rx_msgs);
> skb_unlink(skb, &sk->sk_receive_queue);
> + spin_unlock_bh(&mux->rx_lock);
Why not switch to __skb_unlink() at the same time?
Abundance of caution?
Adding Eric who was fixing KCM bugs recently.
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-25 23:02 ` Jakub Kicinski
@ 2022-10-25 23:49 ` Eric Dumazet
2022-10-28 19:24 ` Cong Wang
2022-10-28 19:21 ` Cong Wang
1 sibling, 1 reply; 7+ messages in thread
From: Eric Dumazet @ 2022-10-25 23:49 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Cong Wang, netdev, Cong Wang, shaozhengchao, Paolo Abeni,
Tom Herbert
On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > + spin_lock_bh(&mux->rx_lock);
> > KCM_STATS_INCR(kcm->stats.rx_msgs);
> > skb_unlink(skb, &sk->sk_receive_queue);
> > + spin_unlock_bh(&mux->rx_lock);
>
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?
>
> Adding Eric who was fixing KCM bugs recently.
I think kcm_queue_rcv_skb() might have a similar problem if/when
called from requeue_rx_msgs()
(The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)
I agree we should stick to one lock, and if this is not the standard
skb head lock, we should not use it at all
(ie use __skb_queue_tail() and friends)
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-25 23:02 ` Jakub Kicinski
2022-10-25 23:49 ` Eric Dumazet
@ 2022-10-28 19:21 ` Cong Wang
2022-10-28 23:27 ` Jakub Kicinski
1 sibling, 1 reply; 7+ messages in thread
From: Cong Wang @ 2022-10-28 19:21 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, Cong Wang, shaozhengchao, Paolo Abeni, Tom Herbert,
Eric Dumazet
On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > + spin_lock_bh(&mux->rx_lock);
> > KCM_STATS_INCR(kcm->stats.rx_msgs);
> > skb_unlink(skb, &sk->sk_receive_queue);
> > + spin_unlock_bh(&mux->rx_lock);
>
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?
What gain do we have? Since we have rx_lock, skb queue lock should never
be contended?
Thanks.
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-25 23:49 ` Eric Dumazet
@ 2022-10-28 19:24 ` Cong Wang
0 siblings, 0 replies; 7+ messages in thread
From: Cong Wang @ 2022-10-28 19:24 UTC (permalink / raw)
To: Eric Dumazet
Cc: Jakub Kicinski, netdev, Cong Wang, shaozhengchao, Paolo Abeni,
Tom Herbert
On Tue, Oct 25, 2022 at 04:49:48PM -0700, Eric Dumazet wrote:
> On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > > + spin_lock_bh(&mux->rx_lock);
> > > KCM_STATS_INCR(kcm->stats.rx_msgs);
> > > skb_unlink(skb, &sk->sk_receive_queue);
> > > + spin_unlock_bh(&mux->rx_lock);
> >
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?
> >
> > Adding Eric who was fixing KCM bugs recently.
>
> I think kcm_queue_rcv_skb() might have a similar problem if/when
> called from requeue_rx_msgs()
>
> (The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)
rx_lock is acquired by at least 2 of its callers, requeue_rx_msgs() and
kcm_rcv_ready(). kcm_rcv_strparser() seems to be missing it; I can fix
that in a separate patch, as no one has actually reported a bug there.
Thanks.
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-28 19:21 ` Cong Wang
@ 2022-10-28 23:27 ` Jakub Kicinski
0 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2022-10-28 23:27 UTC (permalink / raw)
To: Cong Wang
Cc: netdev, Cong Wang, shaozhengchao, Paolo Abeni, Tom Herbert,
Eric Dumazet
On Fri, 28 Oct 2022 12:21:11 -0700 Cong Wang wrote:
> On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > > + spin_lock_bh(&mux->rx_lock);
> > > KCM_STATS_INCR(kcm->stats.rx_msgs);
> > > skb_unlink(skb, &sk->sk_receive_queue);
> > > + spin_unlock_bh(&mux->rx_lock);
> >
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?
>
> What gain do we have? Since we have rx_lock, skb queue lock should never
> be contended?
I was thinking mostly about readability; the performance is secondary.
Other parts of the code use unlocked skb queue helpers, so it may be
confusing to a reader why this one isn't, and therefore what lock
protects the queue. But no strong feelings.
* Re: [Patch net] kcm: fix a race condition in kcm_recvmsg()
2022-10-23 2:30 [Patch net] kcm: fix a race condition in kcm_recvmsg() Cong Wang
2022-10-25 23:02 ` Jakub Kicinski
@ 2022-11-01 20:52 ` Cong Wang
1 sibling, 0 replies; 7+ messages in thread
From: Cong Wang @ 2022-11-01 20:52 UTC (permalink / raw)
To: netdev; +Cc: Cong Wang, shaozhengchao, Paolo Abeni, Tom Herbert
On Sat, Oct 22, 2022 at 07:30:44PM -0700, Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> sk->sk_receive_queue is protected by the skb queue lock, but for KCM
> sockets the RX path takes mux->rx_lock to protect more than just the
> skb queue, so grabbing the skb queue lock is not necessary when
> mux->rx_lock is already held. However, kcm_recvmsg() still grabs only
> the skb queue lock, so a race condition exists.
>
> Close this race by taking mux->rx_lock in kcm_recvmsg() too. This is
> much simpler than enforcing the skb queue lock everywhere.
>
After a second thought, this could actually introduce a performance
regression, as struct kcm_mux can be shared by multiple KCM sockets.
So I am afraid we have to use the skb queue lock. Fortunately, I found
an easier way (compared to Paolo's) to solve the skb peek race.
Zhengchao, could you please test the following patch?
Thanks!
---------------->
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index a5004228111d..890a2423f559 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -222,7 +222,7 @@ static void requeue_rx_msgs(struct kcm_mux *mux, struct sk_buff_head *head)
struct sk_buff *skb;
struct kcm_sock *kcm;
- while ((skb = __skb_dequeue(head))) {
+ while ((skb = skb_dequeue(head))) {
/* Reset destructor to avoid calling kcm_rcv_ready */
skb->destructor = sock_rfree;
skb_orphan(skb);
@@ -1085,53 +1085,17 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
return err;
}
-static struct sk_buff *kcm_wait_data(struct sock *sk, int flags,
- long timeo, int *err)
-{
- struct sk_buff *skb;
-
- while (!(skb = skb_peek(&sk->sk_receive_queue))) {
- if (sk->sk_err) {
- *err = sock_error(sk);
- return NULL;
- }
-
- if (sock_flag(sk, SOCK_DONE))
- return NULL;
-
- if ((flags & MSG_DONTWAIT) || !timeo) {
- *err = -EAGAIN;
- return NULL;
- }
-
- sk_wait_data(sk, &timeo, NULL);
-
- /* Handle signals */
- if (signal_pending(current)) {
- *err = sock_intr_errno(timeo);
- return NULL;
- }
- }
-
- return skb;
-}
-
static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
size_t len, int flags)
{
struct sock *sk = sock->sk;
struct kcm_sock *kcm = kcm_sk(sk);
int err = 0;
- long timeo;
struct strp_msg *stm;
int copied = 0;
struct sk_buff *skb;
- timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
- lock_sock(sk);
-
- skb = kcm_wait_data(sk, flags, timeo, &err);
+ skb = skb_recv_datagram(sk, flags, &err);
if (!skb)
goto out;
@@ -1162,14 +1126,11 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
/* Finished with message */
msg->msg_flags |= MSG_EOR;
KCM_STATS_INCR(kcm->stats.rx_msgs);
- skb_unlink(skb, &sk->sk_receive_queue);
- kfree_skb(skb);
}
}
out:
- release_sock(sk);
-
+ skb_free_datagram(sk, skb);
return copied ? : err;
}
@@ -1179,7 +1140,6 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
{
struct sock *sk = sock->sk;
struct kcm_sock *kcm = kcm_sk(sk);
- long timeo;
struct strp_msg *stm;
int err = 0;
ssize_t copied;
@@ -1187,11 +1147,7 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
/* Only support splice for SOCKSEQPACKET */
- timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
- lock_sock(sk);
-
- skb = kcm_wait_data(sk, flags, timeo, &err);
+ skb = skb_recv_datagram(sk, flags, &err);
if (!skb)
goto err_out;
@@ -1219,13 +1175,11 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
* finish reading the message.
*/
- release_sock(sk);
-
+ skb_free_datagram(sk, skb);
return copied;
err_out:
- release_sock(sk);
-
+ skb_free_datagram(sk, skb);
return err;
}