netdev.vger.kernel.org archive mirror
* [PATCH bpf-next 0/3] skmsg: Add the data length in skmsg to SIOCINQ ioctl and rx_queue
@ 2023-11-14 11:41 Pengcheng Yang
  2023-11-14 11:41 ` [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg Pengcheng Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-14 11:41 UTC (permalink / raw)
  To: John Fastabend, Jakub Sitnicki, Eric Dumazet, Jakub Kicinski, bpf,
	netdev
  Cc: Pengcheng Yang

When using skmsg redirect, the msg is queued in psock->ingress_msg,
so an application calling the SIOCINQ ioctl gets a readable length
of 0, and the ss tool cannot report the amount of data queued in
ingress_msg.
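As a concrete illustration of the application-side check, here is a
minimal, self-contained sketch. It uses an AF_UNIX socketpair purely so
it runs anywhere (FIONREAD and TCP's SIOCINQ are the same ioctl
request); the series itself concerns the TCP path, and the
readable_bytes()/demo() names below are invented for illustration:

```c
#include <assert.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return the number of readable bytes FIONREAD/SIOCINQ reports for fd. */
static int readable_bytes(int fd)
{
	int n = 0;

	if (ioctl(fd, FIONREAD, &n) < 0)
		return -1;
	return n;
}

/* Queue 5 bytes on one end of a socketpair, query the other end. */
static int demo(void)
{
	int sv[2], n;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return -1;
	if (write(sv[0], "hello", 5) != 5)
		return -1;
	n = readable_bytes(sv[1]);
	close(sv[0]);
	close(sv[1]);
	return n;
}
```

On a sockmap'd TCP socket with msg redirect, the analogous query
returned 0 even though data was queued in ingress_msg, which is the
behavior this series addresses.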

In this patch set, we add the data length in ingress_msg to the
SIOCINQ ioctl and to the rx_queue reported by tcp_diag.

Pengcheng Yang (3):
  skmsg: Calculate the data length in ingress_msg
  tcp: Add the data length in skmsg to SIOCINQ ioctl
  tcp_diag: Add the data length in skmsg to rx_queue

 include/linux/skmsg.h | 24 ++++++++++++++++++++++--
 net/core/skmsg.c      |  4 ++++
 net/ipv4/tcp.c        |  3 ++-
 net/ipv4/tcp_diag.c   |  2 ++
 4 files changed, 30 insertions(+), 3 deletions(-)

-- 
2.38.1



* [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg
  2023-11-14 11:41 [PATCH bpf-next 0/3] skmsg: Add the data length in skmsg to SIOCINQ ioctl and rx_queue Pengcheng Yang
@ 2023-11-14 11:41 ` Pengcheng Yang
  2023-11-14 13:15   ` Eric Dumazet
  2023-11-14 11:41 ` [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl Pengcheng Yang
  2023-11-14 11:42 ` [PATCH bpf-next 3/3] tcp_diag: Add the data length in skmsg to rx_queue Pengcheng Yang
  2 siblings, 1 reply; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-14 11:41 UTC (permalink / raw)
  To: John Fastabend, Jakub Sitnicki, Eric Dumazet, Jakub Kicinski, bpf,
	netdev
  Cc: Pengcheng Yang

Currently there is no way to get the data length queued in
ingress_msg; introduce sk_msg_queue_len() to report it.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
---
 include/linux/skmsg.h | 24 ++++++++++++++++++++++--
 net/core/skmsg.c      |  4 ++++
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index c1637515a8a4..3023a573859d 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -82,6 +82,7 @@ struct sk_psock {
 	u32				apply_bytes;
 	u32				cork_bytes;
 	u32				eval;
+	u32				msg_len;
 	bool				redir_ingress; /* undefined if sk_redir is null */
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
@@ -131,6 +132,11 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		   int len, int flags);
 bool sk_msg_is_readable(struct sock *sk);
 
+static inline void sk_msg_queue_consumed(struct sk_psock *psock, u32 len)
+{
+	psock->msg_len -= len;
+}
+
 static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
 {
 	WARN_ON(i == msg->sg.end && bytes);
@@ -311,9 +317,10 @@ static inline void sk_psock_queue_msg(struct sk_psock *psock,
 				      struct sk_msg *msg)
 {
 	spin_lock_bh(&psock->ingress_lock);
-	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
 		list_add_tail(&msg->list, &psock->ingress_msg);
-	else {
+		psock->msg_len += msg->sg.size;
+	} else {
 		sk_msg_free(psock->sk, msg);
 		kfree(msg);
 	}
@@ -368,6 +375,19 @@ static inline void kfree_sk_msg(struct sk_msg *msg)
 	kfree(msg);
 }
 
+static inline u32 sk_msg_queue_len(struct sock *sk)
+{
+	struct sk_psock *psock;
+	u32 len = 0;
+
+	rcu_read_lock();
+	psock = sk_psock(sk);
+	if (psock)
+		len = psock->msg_len;
+	rcu_read_unlock();
+	return len;
+}
+
 static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 {
 	struct sock *sk = psock->sk;
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 6c31eefbd777..b3de17e99b67 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -481,6 +481,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		msg_rx = sk_psock_peek_msg(psock);
 	}
 out:
+	if (likely(!peek) && copied > 0)
+		sk_msg_queue_consumed(psock, copied);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
@@ -771,9 +773,11 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
 
 	list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) {
 		list_del(&msg->list);
+		sk_msg_queue_consumed(psock, msg->sg.size);
 		sk_msg_free(psock->sk, msg);
 		kfree(msg);
 	}
+	WARN_ON_ONCE(psock->msg_len != 0);
 }
 
 static void __sk_psock_zap_ingress(struct sk_psock *psock)
-- 
2.38.1
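For readers following the arithmetic, the invariant this patch
maintains can be modeled in plain userspace C. This is a hedged sketch
with invented model_* names, not the kernel code: enqueue charges
sg.size, recvmsg uncharges what it copied, and purge must drain the
counter to zero (the WARN_ON_ONCE in the patch).

```c
#include <assert.h>

/* Stand-ins for the kernel structures, reduced to the fields the
 * accounting touches; these are illustrative, not the definitions
 * in include/linux/skmsg.h. */
struct model_msg {
	unsigned int size;	/* stand-in for msg->sg.size */
	int queued;
};

struct model_psock {
	unsigned int msg_len;	/* stand-in for psock->msg_len */
};

/* Mirrors sk_psock_queue_msg(): charge the message as it is queued. */
static void model_queue_msg(struct model_psock *p, struct model_msg *m)
{
	m->queued = 1;
	p->msg_len += m->size;
}

/* Mirrors sk_msg_queue_consumed(): uncharge bytes copied out. */
static void model_queue_consumed(struct model_psock *p, unsigned int len)
{
	p->msg_len -= len;
}

/* Mirrors the sk_msg_recvmsg() accounting: take up to @want bytes,
 * shrinking or dequeuing messages, then uncharge the copied total. */
static unsigned int model_recvmsg(struct model_psock *p,
				  struct model_msg *msgs, int n,
				  unsigned int want)
{
	unsigned int copied = 0;

	for (int i = 0; i < n && copied < want; i++) {
		unsigned int take;

		if (!msgs[i].queued)
			continue;
		take = msgs[i].size;
		if (take > want - copied)
			take = want - copied;
		msgs[i].size -= take;
		if (!msgs[i].size)
			msgs[i].queued = 0;
		copied += take;
	}
	if (copied)
		model_queue_consumed(p, copied);
	return copied;
}

/* Mirrors __sk_psock_purge_ingress_msg(): drain everything, then the
 * counter must be zero (the patch's WARN_ON_ONCE). */
static void model_purge(struct model_psock *p, struct model_msg *msgs, int n)
{
	for (int i = 0; i < n; i++) {
		if (!msgs[i].queued)
			continue;
		model_queue_consumed(p, msgs[i].size);
		msgs[i].queued = 0;
	}
	assert(p->msg_len == 0);
}
```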



* [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl
  2023-11-14 11:41 [PATCH bpf-next 0/3] skmsg: Add the data length in skmsg to SIOCINQ ioctl and rx_queue Pengcheng Yang
  2023-11-14 11:41 ` [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg Pengcheng Yang
@ 2023-11-14 11:41 ` Pengcheng Yang
  2023-11-15  7:20   ` John Fastabend
  2023-11-14 11:42 ` [PATCH bpf-next 3/3] tcp_diag: Add the data length in skmsg to rx_queue Pengcheng Yang
  2 siblings, 1 reply; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-14 11:41 UTC (permalink / raw)
  To: John Fastabend, Jakub Sitnicki, Eric Dumazet, Jakub Kicinski, bpf,
	netdev
  Cc: Pengcheng Yang

The SIOCINQ ioctl returns the number of unread bytes in the receive
queue but does not include the ingress_msg queue. With sk_msg
redirect, an application may get a value of 0 if it calls the
SIOCINQ ioctl before recv() to determine the readable size.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
---
 net/ipv4/tcp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 3d3a24f79573..04da0684c397 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -267,6 +267,7 @@
 #include <linux/errqueue.h>
 #include <linux/static_key.h>
 #include <linux/btf.h>
+#include <linux/skmsg.h>
 
 #include <net/icmp.h>
 #include <net/inet_common.h>
@@ -613,7 +614,7 @@ int tcp_ioctl(struct sock *sk, int cmd, int *karg)
 			return -EINVAL;
 
 		slow = lock_sock_fast(sk);
-		answ = tcp_inq(sk);
+		answ = tcp_inq(sk) + sk_msg_queue_len(sk);
 		unlock_sock_fast(sk, slow);
 		break;
 	case SIOCATMARK:
-- 
2.38.1



* [PATCH bpf-next 3/3] tcp_diag: Add the data length in skmsg to rx_queue
  2023-11-14 11:41 [PATCH bpf-next 0/3] skmsg: Add the data length in skmsg to SIOCINQ ioctl and rx_queue Pengcheng Yang
  2023-11-14 11:41 ` [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg Pengcheng Yang
  2023-11-14 11:41 ` [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl Pengcheng Yang
@ 2023-11-14 11:42 ` Pengcheng Yang
  2 siblings, 0 replies; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-14 11:42 UTC (permalink / raw)
  To: John Fastabend, Jakub Sitnicki, Eric Dumazet, Jakub Kicinski, bpf,
	netdev
  Cc: Pengcheng Yang

Add the data length in skmsg to the rx_queue reported by tcp_diag,
in order to help us track the data length in ingress_msg when using
sk_msg redirect.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
---
 net/ipv4/tcp_diag.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/ipv4/tcp_diag.c b/net/ipv4/tcp_diag.c
index 01b50fa79189..b22382820a4b 100644
--- a/net/ipv4/tcp_diag.c
+++ b/net/ipv4/tcp_diag.c
@@ -11,6 +11,7 @@
 #include <linux/inet_diag.h>
 
 #include <linux/tcp.h>
+#include <linux/skmsg.h>
 
 #include <net/netlink.h>
 #include <net/tcp.h>
@@ -28,6 +29,7 @@ static void tcp_diag_get_info(struct sock *sk, struct inet_diag_msg *r,
 
 		r->idiag_rqueue = max_t(int, READ_ONCE(tp->rcv_nxt) -
 					     READ_ONCE(tp->copied_seq), 0);
+		r->idiag_rqueue += sk_msg_queue_len(sk);
 		r->idiag_wqueue = READ_ONCE(tp->write_seq) - tp->snd_una;
 	}
 	if (info)
-- 
2.38.1



* Re: [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg
  2023-11-14 11:41 ` [PATCH bpf-next 1/3] skmsg: Calculate the data length in ingress_msg Pengcheng Yang
@ 2023-11-14 13:15   ` Eric Dumazet
  0 siblings, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2023-11-14 13:15 UTC (permalink / raw)
  To: Pengcheng Yang
  Cc: John Fastabend, Jakub Sitnicki, Jakub Kicinski, bpf, netdev

On Tue, Nov 14, 2023 at 12:42 PM Pengcheng Yang <yangpc@wangsu.com> wrote:
>
> Currently we cannot get the data length in ingress_msg,
> we introduce sk_msg_queue_len() to do this.
>
> Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> ---
>  include/linux/skmsg.h | 24 ++++++++++++++++++++++--
>  net/core/skmsg.c      |  4 ++++
>  2 files changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
> index c1637515a8a4..3023a573859d 100644
> --- a/include/linux/skmsg.h
> +++ b/include/linux/skmsg.h
> @@ -82,6 +82,7 @@ struct sk_psock {
>         u32                             apply_bytes;
>         u32                             cork_bytes;
>         u32                             eval;
> +       u32                             msg_len;
>         bool                            redir_ingress; /* undefined if sk_redir is null */
>         struct sk_msg                   *cork;
>         struct sk_psock_progs           progs;
> @@ -131,6 +132,11 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
>                    int len, int flags);
>  bool sk_msg_is_readable(struct sock *sk);
>
> +static inline void sk_msg_queue_consumed(struct sk_psock *psock, u32 len)
> +{
> +       psock->msg_len -= len;
> +}
> +
>  static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
>  {
>         WARN_ON(i == msg->sg.end && bytes);
> @@ -311,9 +317,10 @@ static inline void sk_psock_queue_msg(struct sk_psock *psock,
>                                       struct sk_msg *msg)
>  {
>         spin_lock_bh(&psock->ingress_lock);
> -       if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
> +       if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
>                 list_add_tail(&msg->list, &psock->ingress_msg);
> -       else {
> +               psock->msg_len += msg->sg.size;
> +       } else {
>                 sk_msg_free(psock->sk, msg);
>                 kfree(msg);
>         }
> @@ -368,6 +375,19 @@ static inline void kfree_sk_msg(struct sk_msg *msg)
>         kfree(msg);
>  }
>
> +static inline u32 sk_msg_queue_len(struct sock *sk)

const struct sock *sk;

> +{
> +       struct sk_psock *psock;
> +       u32 len = 0;
> +
> +       rcu_read_lock();
> +       psock = sk_psock(sk);
> +       if (psock)
> +               len = psock->msg_len;

This is racy against writers.

You must use READ_ONCE() here, and WRITE_ONCE() on write sides.

> +       rcu_read_unlock();
> +       return len;
> +}
> +
>  static inline void sk_psock_report_error(struct sk_psock *psock, int err)
>  {
>         struct sock *sk = psock->sk;
> diff --git a/net/core/skmsg.c b/net/core/skmsg.c
> index 6c31eefbd777..b3de17e99b67 100644
> --- a/net/core/skmsg.c
> +++ b/net/core/skmsg.c
> @@ -481,6 +481,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
>                 msg_rx = sk_psock_peek_msg(psock);
>         }
>  out:
> +       if (likely(!peek) && copied > 0)
> +               sk_msg_queue_consumed(psock, copied);
>         return copied;
>  }
>  EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
> @@ -771,9 +773,11 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
>
>         list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) {
>                 list_del(&msg->list);
> +               sk_msg_queue_consumed(psock, msg->sg.size);
>                 sk_msg_free(psock->sk, msg);
>                 kfree(msg);
>         }
> +       WARN_ON_ONCE(psock->msg_len != 0);
>  }
>
>  static void __sk_psock_zap_ingress(struct sk_psock *psock)
> --
> 2.38.1
>
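Eric's READ_ONCE()/WRITE_ONCE() point can be approximated in
userspace; a hedged sketch follows (the real macros in
include/asm-generic/rwonce.h are more involved, and the MODEL_* names
are invented):

```c
#include <assert.h>

/* Simplified userspace stand-ins for the kernel macros: a volatile
 * access forces the compiler to emit exactly one load or store for
 * the racy value instead of tearing, caching, or refetching it. */
#define MODEL_READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))
#define MODEL_WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

struct model_psock {
	unsigned int msg_len;
};

/* Writer side, normally serialized by ingress_lock. */
static void model_account(struct model_psock *p, unsigned int bytes)
{
	MODEL_WRITE_ONCE(p->msg_len, p->msg_len + bytes);
}

/* Lockless reader side, as in sk_msg_queue_len() under RCU. */
static unsigned int model_queue_len(const struct model_psock *p)
{
	return MODEL_READ_ONCE(p->msg_len);
}
```

The volatile access adds no ordering or atomicity; it only prevents
compiler-introduced tearing and refetching of the aligned u32, which is
what the lockless reader needs here.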


* RE: [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl
  2023-11-14 11:41 ` [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl Pengcheng Yang
@ 2023-11-15  7:20   ` John Fastabend
  2023-11-15 11:45     ` Pengcheng Yang
  0 siblings, 1 reply; 9+ messages in thread
From: John Fastabend @ 2023-11-15  7:20 UTC (permalink / raw)
  To: Pengcheng Yang, John Fastabend, Jakub Sitnicki, Eric Dumazet,
	Jakub Kicinski, bpf, netdev
  Cc: Pengcheng Yang

Pengcheng Yang wrote:
> SIOCINQ ioctl returns the number unread bytes of the receive
> queue but does not include the ingress_msg queue. With the
> sk_msg redirect, an application may get a value 0 if it calls
> SIOCINQ ioctl before recv() to determine the readable size.
> 
> Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> ---
>  net/ipv4/tcp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 3d3a24f79573..04da0684c397 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -267,6 +267,7 @@
>  #include <linux/errqueue.h>
>  #include <linux/static_key.h>
>  #include <linux/btf.h>
> +#include <linux/skmsg.h>
>  
>  #include <net/icmp.h>
>  #include <net/inet_common.h>
> @@ -613,7 +614,7 @@ int tcp_ioctl(struct sock *sk, int cmd, int *karg)
>  			return -EINVAL;
>  
>  		slow = lock_sock_fast(sk);
> -		answ = tcp_inq(sk);
> +		answ = tcp_inq(sk) + sk_msg_queue_len(sk);

This will break the SK_PASS case, I believe. There we do
not update copied_seq until data is actually copied into user
space; this also ensures tcp_epollin_ready and tcp_inq work
correctly. The fix is relatively recent.

 commit e5c6de5fa025882babf89cecbed80acf49b987fa
 Author: John Fastabend <john.fastabend@gmail.com>
 Date:   Mon May 22 19:56:12 2023 -0700

    bpf, sockmap: Incorrectly handling copied_seq

The previous patch increments msg_len in all cases, even the
SK_PASS case, so you will get double counting.

I was starting to poke around at how to fix the other cases, e.g.
when the stream parser is in use with redirects, but haven't got
to it yet. By the way, I think even with this patch epollin_ready
is likely still not correct. We observe this as either failing to
wake up, or waking up an application too early, when using the
stream parser.

The other thing to consider: skbs redirected into another socket
and then read off the list increment copied_seq even though they
shouldn't, since they came from another sock. The result would be
that tcp_inq is incorrect, perhaps even negative.

What does your test setup look like? A simple redirect between
two TCP sockets? With or without the stream parser? My guess is we
need to fix the underlying copied_seq issues related to the
redirect and stream parser cases. I believe the fix is to only
increment copied_seq for data that was put on the ingress_queue
from SK_PASS, then update the previous patch to only increment
msg_len for the redirect paths. This patch, plus a fix to
tcp_epollin_ready, would then resolve most of the issues. It's a
bit unfortunate to leak sk_msg_queue_len() into tcp_ioctl and
tcp_epollin, but I don't have a cleaner idea right now.

>  		unlock_sock_fast(sk, slow);
>  		break;
>  	case SIOCATMARK:
> -- 
> 2.38.1
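John's double-counting concern can be made concrete with a toy model.
This is a hedged sketch with invented names; model_inq() mimics what
patch 2 computes, i.e. tcp_inq() plus sk_msg_queue_len():

```c
#include <assert.h>

/* Toy connection state: the fields tcp_inq() derives from, plus the
 * msg_len counter the series adds. */
struct model_sk {
	unsigned int rcv_nxt;	 /* bytes received from the peer */
	unsigned int copied_seq; /* bytes copied to userspace */
	unsigned int msg_len;	 /* bytes queued in ingress_msg */
};

/* SK_PASS with a stream verdict: the bytes arrived on this socket,
 * so TCP advanced rcv_nxt, copied_seq is held back until the user
 * copy (commit e5c6de5fa025), and the patch as posted additionally
 * charges msg_len when queueing to ingress_msg. */
static void model_sk_pass(struct model_sk *sk, unsigned int bytes)
{
	sk->rcv_nxt += bytes;
	sk->msg_len += bytes;	/* the questionable increment */
}

/* Ingress redirect from another socket: nothing moves this socket's
 * rcv_nxt, only the ingress_msg accounting. */
static void model_redirect_in(struct model_sk *sk, unsigned int bytes)
{
	sk->msg_len += bytes;
}

/* SIOCINQ as patch 2 computes it: tcp_inq() + sk_msg_queue_len(). */
static unsigned int model_inq(const struct model_sk *sk)
{
	return (sk->rcv_nxt - sk->copied_seq) + sk->msg_len;
}
```

Under this model, the redirect-only case reports the right value, while
the SK_PASS case counts the same bytes in both terms, which matches the
objection above.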
> 




* Re: [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl
  2023-11-15  7:20   ` John Fastabend
@ 2023-11-15 11:45     ` Pengcheng Yang
  2023-11-17  1:32       ` John Fastabend
  0 siblings, 1 reply; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-15 11:45 UTC (permalink / raw)
  To: 'John Fastabend', 'Jakub Sitnicki',
	'Eric Dumazet', 'Jakub Kicinski', bpf, netdev

John Fastabend <john.fastabend@gmail.com> wrote:
> Pengcheng Yang wrote:
> > SIOCINQ ioctl returns the number unread bytes of the receive
> > queue but does not include the ingress_msg queue. With the
> > sk_msg redirect, an application may get a value 0 if it calls
> > SIOCINQ ioctl before recv() to determine the readable size.
> >
> > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> 
> This will break the SK_PASS case I believe. Here we do
> not update copied_seq until data is actually copied into user
> space. This also ensures tcp_epollin_ready works correctly and
> tcp_inq. The fix is relatively recent.
> 
>  commit e5c6de5fa025882babf89cecbed80acf49b987fa
>  Author: John Fastabend <john.fastabend@gmail.com>
>  Date:   Mon May 22 19:56:12 2023 -0700
> 
>     bpf, sockmap: Incorrectly handling copied_seq
> 
> The previous patch increments the msg_len for all cases even
> the SK_PASS case so you will get double counting.

You are right, I missed the SK_PASS case of skb stream verdict.

> 
> I was starting to poke around at how to fix the other cases e.g.
> stream parser is in use and redirects but haven't got to it  yet.
> By the way I think even with this patch epollin_ready is likely
> not correct still. We observe this as either failing to wake up
> or waking up an application to early when using stream parser.
> 
> The other thing to consider is redirected skb into another socket
> and then read off the list increment the copied_seq even though
> they shouldn't if they came from another sock?  The result would
> be tcp_inq would be incorrect even negative perhaps?
> 
> What does your test setup look like? Simple redirect between
> two TCP sockets? With or without stream parser? My guess is we
> need to fix underlying copied_seq issues related to the redirect
> and stream parser case. I believe the fix is, only increment
> copied_seq for data that was put on the ingress_queue from SK_PASS.
> Then update previous patch to only increment sk_msg_queue_len()
> for redirect paths. And this patch plus fix to tcp_epollin_ready
> would resolve most the issues. Its a bit unfortunate to leak the
> sk_msg_queue_len() into tcp_ioctl and tcp_epollin but I don't have
> a cleaner idea right now.
> 

What I tested was using msg_verdict to redirect between two sockets
without the stream parser. The problem I encountered is that the msg
has been queued in psock->ingress_msg and the application has been
woken up by epoll (because of sk_psock_data_ready), but
ioctl(FIONREAD) returns 0.

The key is that rcv_nxt is not updated on ingress redirect. Or do we
only need to update rcv_nxt on ingress redirect, such as in
bpf_tcp_ingress() and sk_psock_skb_ingress_enqueue()?



* Re: [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl
  2023-11-15 11:45     ` Pengcheng Yang
@ 2023-11-17  1:32       ` John Fastabend
  2023-11-17 10:59         ` Pengcheng Yang
  0 siblings, 1 reply; 9+ messages in thread
From: John Fastabend @ 2023-11-17  1:32 UTC (permalink / raw)
  To: Pengcheng Yang, 'John Fastabend',
	'Jakub Sitnicki', 'Eric Dumazet',
	'Jakub Kicinski', bpf, netdev

Pengcheng Yang wrote:
> John Fastabend <john.fastabend@gmail.com> wrote:
> > Pengcheng Yang wrote:
> > > SIOCINQ ioctl returns the number unread bytes of the receive
> > > queue but does not include the ingress_msg queue. With the
> > > sk_msg redirect, an application may get a value 0 if it calls
> > > SIOCINQ ioctl before recv() to determine the readable size.
> > >
> > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > 
> > This will break the SK_PASS case I believe. Here we do
> > not update copied_seq until data is actually copied into user
> > space. This also ensures tcp_epollin_ready works correctly and
> > tcp_inq. The fix is relatively recent.
> > 
> >  commit e5c6de5fa025882babf89cecbed80acf49b987fa
> >  Author: John Fastabend <john.fastabend@gmail.com>
> >  Date:   Mon May 22 19:56:12 2023 -0700
> > 
> >     bpf, sockmap: Incorrectly handling copied_seq
> > 
> > The previous patch increments the msg_len for all cases even
> > the SK_PASS case so you will get double counting.
> 
> You are right, I missed the SK_PASS case of skb stream verdict.
> 
> > 
> > I was starting to poke around at how to fix the other cases e.g.
> > stream parser is in use and redirects but haven't got to it  yet.
> > By the way I think even with this patch epollin_ready is likely
> > not correct still. We observe this as either failing to wake up
> > or waking up an application to early when using stream parser.
> > 
> > The other thing to consider is redirected skb into another socket
> > and then read off the list increment the copied_seq even though
> > they shouldn't if they came from another sock?  The result would
> > be tcp_inq would be incorrect even negative perhaps?
> > 
> > What does your test setup look like? Simple redirect between
> > two TCP sockets? With or without stream parser? My guess is we
> > need to fix underlying copied_seq issues related to the redirect
> > and stream parser case. I believe the fix is, only increment
> > copied_seq for data that was put on the ingress_queue from SK_PASS.
> > Then update previous patch to only increment sk_msg_queue_len()
> > for redirect paths. And this patch plus fix to tcp_epollin_ready
> > would resolve most the issues. Its a bit unfortunate to leak the
> > sk_msg_queue_len() into tcp_ioctl and tcp_epollin but I don't have
> > a cleaner idea right now.
> > 
> 
> What I tested was to use msg_verdict to redirect between two sockets
> without stream parser, and the problem I encountered is that msg has
> been queued in psock->ingress_msg, and the application has been woken up
> by epoll (because of sk_psock_data_ready), but the ioctl(FIONREAD) returns 0.

Yep makes sense.

> 
> The key is that the rcv_nxt is not updated on ingress redirect, or we only need
> to update rcv_nxt on ingress redirect, such as in bpf_tcp_ingress() and
> sk_psock_skb_ingress_enqueue() ?
> 

I think it's likely best not to touch rcv_nxt. 'rcv_nxt' is used in
the tcp stack to calculate lots of things. If you just bump it and
then ever receive an actual TCP pkt, you would get some really odd
behavior, because seq numbers and rcv_nxt would then be unrelated.

The approach you have is really the best bet IMO, but mask out the
msg_len increment where it's not needed. Then it should be OK.

Mixing ingress redirect and TCP send/recv pkts doesn't usually work
very well anyway, but I still think leaving rcv_nxt alone is best.


* Re: [PATCH bpf-next 2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl
  2023-11-17  1:32       ` John Fastabend
@ 2023-11-17 10:59         ` Pengcheng Yang
  0 siblings, 0 replies; 9+ messages in thread
From: Pengcheng Yang @ 2023-11-17 10:59 UTC (permalink / raw)
  To: 'John Fastabend', 'Jakub Sitnicki',
	'Eric Dumazet', 'Jakub Kicinski', bpf, netdev

John Fastabend <john.fastabend@gmail.com> wrote:
> Pengcheng Yang wrote:
> > John Fastabend <john.fastabend@gmail.com> wrote:
> > > Pengcheng Yang wrote:
> > > > SIOCINQ ioctl returns the number unread bytes of the receive
> > > > queue but does not include the ingress_msg queue. With the
> > > > sk_msg redirect, an application may get a value 0 if it calls
> > > > SIOCINQ ioctl before recv() to determine the readable size.
> > > >
> > > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > >
> > > This will break the SK_PASS case I believe. Here we do
> > > not update copied_seq until data is actually copied into user
> > > space. This also ensures tcp_epollin_ready works correctly and
> > > tcp_inq. The fix is relatively recent.
> > >
> > >  commit e5c6de5fa025882babf89cecbed80acf49b987fa
> > >  Author: John Fastabend <john.fastabend@gmail.com>
> > >  Date:   Mon May 22 19:56:12 2023 -0700
> > >
> > >     bpf, sockmap: Incorrectly handling copied_seq
> > >
> > > The previous patch increments the msg_len for all cases even
> > > the SK_PASS case so you will get double counting.
> >
> > You are right, I missed the SK_PASS case of skb stream verdict.
> >
> > >
> > > I was starting to poke around at how to fix the other cases e.g.
> > > stream parser is in use and redirects but haven't got to it  yet.
> > > By the way I think even with this patch epollin_ready is likely
> > > not correct still. We observe this as either failing to wake up
> > > or waking up an application to early when using stream parser.
> > >
> > > The other thing to consider is redirected skb into another socket
> > > and then read off the list increment the copied_seq even though
> > > they shouldn't if they came from another sock?  The result would
> > > be tcp_inq would be incorrect even negative perhaps?
> > >
> > > What does your test setup look like? Simple redirect between
> > > two TCP sockets? With or without stream parser? My guess is we
> > > need to fix underlying copied_seq issues related to the redirect
> > > and stream parser case. I believe the fix is, only increment
> > > copied_seq for data that was put on the ingress_queue from SK_PASS.
> > > Then update previous patch to only increment sk_msg_queue_len()
> > > for redirect paths. And this patch plus fix to tcp_epollin_ready
> > > would resolve most the issues. Its a bit unfortunate to leak the
> > > sk_msg_queue_len() into tcp_ioctl and tcp_epollin but I don't have
> > > a cleaner idea right now.
> > >
> >
> > What I tested was to use msg_verdict to redirect between two sockets
> > without stream parser, and the problem I encountered is that msg has
> > been queued in psock->ingress_msg, and the application has been woken up
> > by epoll (because of sk_psock_data_ready), but the ioctl(FIONREAD) returns 0.
> 
> Yep makes sense.
> 
> >
> > The key is that the rcv_nxt is not updated on ingress redirect, or we only need
> > to update rcv_nxt on ingress redirect, such as in bpf_tcp_ingress() and
> > sk_psock_skb_ingress_enqueue() ?
> >
> 
> I think its likely best not to touch rcv_nxt. 'rcv_nxt' is used in
> the tcp stack to calculate lots of things. If you just bump it and
> then ever received an actual TCP pkt you would get some really
> odd behavior because seq numbers and rcv_nxt would be unrelated then.
> 
> The approach you have is really the best bet IMO, but mask out
> the increment msg_len where its not needed. Then it should be OK.
> 

I think we can add a flag to msg to identify whether the msg comes
from the same sock's receive_queue. In this way, we can increase and
decrease msg_len based on this flag when the msg is queued to
ingress_msg and when it is read by the application.

And this can also fix the case you mentioned above:

	"The other thing to consider is redirected skb into another socket
	and then read off the list increment the copied_seq even though
	they shouldn't if they came from another sock?  The result would
	be tcp_inq would be incorrect even negative perhaps?"

During recv in tcp_bpf_recvmsg_parser(), we only need to increment
copied_seq when the msg comes from the same sock's receive_queue;
otherwise copied_seq may overrun rcv_nxt in this case.

> Mixing ingress redirect and TCP sending/recv pkts doesn't usually work
> very well anyway but I still think leaving rcv_nxt alone is best.
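The flag idea proposed above can be sketched on the same kind of toy
model; a hedged sketch with an invented from_same_sock field (the
eventual kernel change may look quite different):

```c
#include <assert.h>

struct model_sk {
	unsigned int rcv_nxt;
	unsigned int copied_seq;
	unsigned int msg_len;
};

struct model_msg {
	unsigned int size;
	int from_same_sock;	/* proposed flag: data already accounted
				 * via rcv_nxt/copied_seq on this sock */
};

/* Queue to ingress_msg: only charge msg_len for bytes that did NOT
 * arrive on this socket's own receive path. (Modeling shortcut: in
 * reality TCP advanced rcv_nxt before the verdict ran.) */
static void model_queue(struct model_sk *sk, const struct model_msg *m)
{
	if (m->from_same_sock)
		sk->rcv_nxt += m->size;
	else
		sk->msg_len += m->size;
}

/* Read one message: advance copied_seq only for same-sock data,
 * decrement msg_len only for redirected data. */
static void model_read(struct model_sk *sk, const struct model_msg *m)
{
	if (m->from_same_sock)
		sk->copied_seq += m->size;
	else
		sk->msg_len -= m->size;
}

/* SIOCINQ as in patch 2: tcp_inq() + the msg_len counter. */
static unsigned int model_inq(const struct model_sk *sk)
{
	return (sk->rcv_nxt - sk->copied_seq) + sk->msg_len;
}
```

With the flag, a mix of SK_PASS and redirected messages is counted
exactly once each, and copied_seq never overruns rcv_nxt.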


