netdev.vger.kernel.org archive mirror
* [PATCH net 1/2] tcp: fix quick-ack counting to count actual ACKs of new data
@ 2023-09-27 15:15 Neal Cardwell
  2023-09-27 15:15 ` [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition Neal Cardwell
  0 siblings, 1 reply; 7+ messages in thread
From: Neal Cardwell @ 2023-09-27 15:15 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Eric Dumazet
  Cc: netdev, Neal Cardwell, Yuchung Cheng

From: Neal Cardwell <ncardwell@google.com>

This commit fixes quick-ack counting so that a quick-ack is only
considered to have been provided if we are sending an ACK that newly
acknowledges data.

The code was erroneously using the number of data segments in outgoing
skbs when deciding how many quick-ack credits to remove. This logic
does not make sense, and could cause poor performance in
request-response workloads, like RPC traffic, where requests or
responses can be multi-segment skbs.

When a TCP connection decides to send N quick-acks, it does so to
accelerate the cwnd growth of the congestion control module
controlling the remote endpoint of the connection. That quick-ack
decision is purely about the incoming data and outgoing ACKs; it has
nothing to do with the outgoing data or its size.

And in particular, an ACK only serves the intended purpose of allowing
the remote congestion control to grow the congestion window quickly if
the ACK is ACKing or SACKing new data.

The fix is simple: only count packets as serving the goal of the
quickack mechanism if they are ACKing/SACKing new data. We can tell
whether this is the case by checking inet_csk_ack_scheduled(), since
we schedule an ACK exactly when we are ACKing/SACKing new data.

Fixes: fc6415bcb0f5 ("[TCP]: Fix quick-ack decrementing with TSO.")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Yuchung Cheng <ycheng@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
---
 include/net/tcp.h     | 6 ++++--
 net/ipv4/tcp_output.c | 7 +++----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 91688d0dadcd6..7b1a720691aec 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -348,12 +348,14 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
 struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
 				     bool force_schedule);
 
-static inline void tcp_dec_quickack_mode(struct sock *sk,
-					 const unsigned int pkts)
+static inline void tcp_dec_quickack_mode(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 
 	if (icsk->icsk_ack.quick) {
+		/* How many ACKs S/ACKing new data have we sent? */
+		const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
+
 		if (pkts >= icsk->icsk_ack.quick) {
 			icsk->icsk_ack.quick = 0;
 			/* Leaving quickack mode we deflate ATO. */
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index ccfc8bbf74558..aa0fc8c766e50 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -177,8 +177,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
 }
 
 /* Account for an ACK we sent. */
-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
-				      u32 rcv_nxt)
+static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -192,7 +191,7 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
 
 	if (unlikely(rcv_nxt != tp->rcv_nxt))
 		return;  /* Special ACK sent by DCTCP to reflect ECN */
-	tcp_dec_quickack_mode(sk, pkts);
+	tcp_dec_quickack_mode(sk);
 	inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
 }
 
@@ -1387,7 +1386,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
 			   sk, skb);
 
 	if (likely(tcb->tcp_flags & TCPHDR_ACK))
-		tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
+		tcp_event_ack_sent(sk, rcv_nxt);
 
 	if (skb->len != tcp_header_size) {
 		tcp_event_data_sent(tp, sk);
-- 
2.42.0.515.g380fc7ccd1-goog


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-09-27 15:15 [PATCH net 1/2] tcp: fix quick-ack counting to count actual ACKs of new data Neal Cardwell
@ 2023-09-27 15:15 ` Neal Cardwell
  2023-09-28  8:53   ` Xin Guo
  0 siblings, 1 reply; 7+ messages in thread
From: Neal Cardwell @ 2023-09-27 15:15 UTC (permalink / raw)
  To: David Miller, Jakub Kicinski, Eric Dumazet
  Cc: netdev, Neal Cardwell, Yuchung Cheng

From: Neal Cardwell <ncardwell@google.com>

This commit fixes poor delayed ACK behavior that can cause poor TCP
latency in a particular boundary condition: when an application makes
a TCP socket write that is an exact multiple of the MSS size.

The problem is that there is a painful boundary discontinuity in the
current delayed ACK behavior. With the current delayed ACK behavior,
we have:

(1) If an app reads > 1*MSS data, tcp_cleanup_rbuf() ACKs immediately
    because of:

     tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||

(2) If an app reads < 1*MSS data and either (a) the app is not ping-pong
    or (b) we received two packets < 1*MSS, then tcp_cleanup_rbuf() ACKs
    immediately because of:

     ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED2) ||
      ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) &&
       !inet_csk_in_pingpong_mode(sk))) &&

(3) *However*: if an app reads exactly 1*MSS of data,
    tcp_cleanup_rbuf() does not send an immediate ACK. This is true
    even if the app is not ping-pong and the 1*MSS of data had the PSH
    bit set, suggesting the sending application completed an
    application write.

Thus if the app is not ping-pong, we have this painful case where
>1*MSS gets an immediate ACK, and <1*MSS gets an immediate ACK, but a
write whose last skb is an exact multiple of 1*MSS can get a 40ms
delayed ACK. This means that any app that transfers data in one
direction and takes care to align write size or packet size with MSS
can suffer this problem. With receive zero copy making 4KB MSS values
more common, it is becoming more common to have application writes
naturally align with MSS, and more applications are likely to
encounter this delayed ACK problem.

The fix in this commit is to refine the delayed ACK heuristics with a
simple check: immediately ACK a received 1*MSS skb with PSH bit set if
the app reads all data. Why? If an skb has a len of exactly 1*MSS and
has the PSH bit set then it is likely the end of an application
write. So more data may not be arriving soon, and yet the data sender
may be waiting for an ACK if cwnd-bound or using TX zero copy. Thus we
set ICSK_ACK_PUSHED in this case so that tcp_cleanup_rbuf() will send
an ACK immediately if the app reads all of the data and is not
ping-pong. Note that this logic is also executed for the case where
len > MSS, but in that case this logic does not matter (and does not
hurt) because tcp_cleanup_rbuf() will always ACK immediately if the
app reads data and there is more than an MSS of unACKed data.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Yuchung Cheng <ycheng@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
---
 net/ipv4/tcp_input.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 06fe1cf645d5a..8afb0950a6979 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -253,6 +253,19 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
 		if (unlikely(len > icsk->icsk_ack.rcv_mss +
 				   MAX_TCP_OPTION_SPACE))
 			tcp_gro_dev_warn(sk, skb, len);
+		/* If the skb has a len of exactly 1*MSS and has the PSH bit
+		 * set then it is likely the end of an application write. So
+		 * more data may not be arriving soon, and yet the data sender
+		 * may be waiting for an ACK if cwnd-bound or using TX zero
+		 * copy. So we set ICSK_ACK_PUSHED here so that
+		 * tcp_cleanup_rbuf() will send an ACK immediately if the app
+		 * reads all of the data and is not ping-pong. If len > MSS
+		 * then this logic does not matter (and does not hurt) because
+		 * tcp_cleanup_rbuf() will always ACK immediately if the app
+		 * reads data and there is more than an MSS of unACKed data.
+		 */
+		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
+			icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
 	} else {
 		/* Otherwise, we make more careful check taking into account,
 		 * that SACKs block is variable.
-- 
2.42.0.515.g380fc7ccd1-goog



* Re: [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-09-27 15:15 ` [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition Neal Cardwell
@ 2023-09-28  8:53   ` Xin Guo
  2023-09-28 14:38     ` Neal Cardwell
  0 siblings, 1 reply; 7+ messages in thread
From: Xin Guo @ 2023-09-28  8:53 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: David Miller, Jakub Kicinski, Eric Dumazet, netdev, Neal Cardwell,
	Yuchung Cheng

Hi Neal,
I cannot understand "if an app reads > 1*MSS data", "if an app reads
< 1*MSS data", and "if an app reads exactly 1*MSS of data" in the
commit message.
In my view, they should read: "if an app reads and received data > 1*MSS",
"if an app reads and received data < 1*MSS", and "if an app reads and
received data exactly 1*MSS".

Regards
Guo Xin

Neal Cardwell <ncardwell.sw@gmail.com> 于2023年9月27日周三 23:15写道:
>
> From: Neal Cardwell <ncardwell@google.com>
>
> This commit fixes poor delayed ACK behavior that can cause poor TCP
> latency in a particular boundary condition: when an application makes
> a TCP socket write that is an exact multiple of the MSS size.
>
> The problem is that there is a painful boundary discontinuity in the
> current delayed ACK behavior. With the current delayed ACK behavior,
> we have:
>
> (1) If an app reads > 1*MSS data, tcp_cleanup_rbuf() ACKs immediately
>     because of:
>
>      tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||
>
> (2) If an app reads < 1*MSS data and either (a) app is not ping-pong or
>     (b) we received two packets <1*MSS, then tcp_cleanup_rbuf() ACKs
> immediately because of:
>
>      ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED2) ||
>       ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) &&
>        !inet_csk_in_pingpong_mode(sk))) &&
>
> (3) *However*: if an app reads exactly 1*MSS of data,
>     tcp_cleanup_rbuf() does not send an immediate ACK. This is true
>     even if the app is not ping-pong and the 1*MSS of data had the PSH
>     bit set, suggesting the sending application completed an
>     application write.
>
> Thus if the app is not ping-pong, we have this painful case where
> >1*MSS gets an immediate ACK, and <1*MSS gets an immediate ACK, but a
> write whose last skb is an exact multiple of 1*MSS can get a 40ms
> delayed ACK. This means that any app that transfers data in one
> direction and takes care to align write size or packet size with MSS
> can suffer this problem. With receive zero copy making 4KB MSS values
> more common, it is becoming more common to have application writes
> naturally align with MSS, and more applications are likely to
> encounter this delayed ACK problem.
>
> The fix in this commit is to refine the delayed ACK heuristics with a
> simple check: immediately ACK a received 1*MSS skb with PSH bit set if
> the app reads all data. Why? If an skb has a len of exactly 1*MSS and
> has the PSH bit set then it is likely the end of an application
> write. So more data may not be arriving soon, and yet the data sender
> may be waiting for an ACK if cwnd-bound or using TX zero copy. Thus we
> set ICSK_ACK_PUSHED in this case so that tcp_cleanup_rbuf() will send
> an ACK immediately if the app reads all of the data and is not
> ping-pong. Note that this logic is also executed for the case where
> len > MSS, but in that case this logic does not matter (and does not
> hurt) because tcp_cleanup_rbuf() will always ACK immediately if the
> app reads data and there is more than an MSS of unACKed data.
>
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Signed-off-by: Neal Cardwell <ncardwell@google.com>
> Reviewed-by: Yuchung Cheng <ycheng@google.com>
> Reviewed-by: Eric Dumazet <edumazet@google.com>
> ---
>  net/ipv4/tcp_input.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 06fe1cf645d5a..8afb0950a6979 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -253,6 +253,19 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
>                 if (unlikely(len > icsk->icsk_ack.rcv_mss +
>                                    MAX_TCP_OPTION_SPACE))
>                         tcp_gro_dev_warn(sk, skb, len);
> +               /* If the skb has a len of exactly 1*MSS and has the PSH bit
> +                * set then it is likely the end of an application write. So
> +                * more data may not be arriving soon, and yet the data sender
> +                * may be waiting for an ACK if cwnd-bound or using TX zero
> +                * copy. So we set ICSK_ACK_PUSHED here so that
> +                * tcp_cleanup_rbuf() will send an ACK immediately if the app
> +                * reads all of the data and is not ping-pong. If len > MSS
> +                * then this logic does not matter (and does not hurt) because
> +                * tcp_cleanup_rbuf() will always ACK immediately if the app
> +                * reads data and there is more than an MSS of unACKed data.
> +                */
> +               if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
> +                       icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
>         } else {
>                 /* Otherwise, we make more careful check taking into account,
>                  * that SACKs block is variable.
> --
> 2.42.0.515.g380fc7ccd1-goog
>
>


* Re: [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-09-28  8:53   ` Xin Guo
@ 2023-09-28 14:38     ` Neal Cardwell
  2023-09-28 15:56       ` Xin Guo
  0 siblings, 1 reply; 7+ messages in thread
From: Neal Cardwell @ 2023-09-28 14:38 UTC (permalink / raw)
  To: Xin Guo
  Cc: Neal Cardwell, David Miller, Jakub Kicinski, Eric Dumazet, Netdev,
	Yuchung Cheng

On Thu, Sep 28, 2023, 4:53 AM Xin Guo <guoxin0309@gmail.com> wrote:
>
> Hi Neal:
> Cannot understand "if an app reads > 1*MSS data" , " If an app reads <
> 1*MSS data" and " if an app reads exactly 1*MSS of data" in the commit
> message.
> In my view, it should be like:"if an app reads and received data > 1*MSS",
> " If an app reads and received data < 1*MSS" and " if an app reads and
> received data exactly 1*MSS".

AFAICT your suggestion for tweaking the commit message - "if an app
reads and received" - would be redundant.  Our proposed phrase, "if an
app reads", is sufficient, because a read of a certain amount of data
automatically implies that the data has been received. That is, the
"and received" part is implied already. After all, how would an app
read data if it has not been received? :-)

best regards,
neal


* Re: [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-09-28 14:38     ` Neal Cardwell
@ 2023-09-28 15:56       ` Xin Guo
  2023-10-01 15:19         ` Neal Cardwell
  0 siblings, 1 reply; 7+ messages in thread
From: Xin Guo @ 2023-09-28 15:56 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: Neal Cardwell, David Miller, Jakub Kicinski, Eric Dumazet, Netdev,
	Yuchung Cheng

Neal,
Thanks for your explanation.
1) When I read the patch, I could not understand "if an app reads > 1*MSS
data", because in my view "the app reads" refers to the length of data
copied from sk_receive_queue to the user-space buffer, in a function
such as tcp_recvmsg_locked(), when an app reads data from a socket.
But in "tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||",
"tp->rcv_nxt - tp->rcv_wup" is the amount of data the kernel has
received for the sk since the last ACK, which is not always the length
of data copied to the user-space buffer.

2) When we receive two small packets (< 1*MSS) in the kernel for the
sk, the total length of the two packets may be > 1*MSS.

Regards
Guo Xin

Neal Cardwell <ncardwell@google.com> 于2023年9月28日周四 22:38写道:
>
> On Thu, Sep 28, 2023, 4:53 AM Xin Guo <guoxin0309@gmail.com> wrote:
> >
> > Hi Neal:
> > Cannot understand "if an app reads > 1*MSS data" , " If an app reads <
> > 1*MSS data" and " if an app reads exactly 1*MSS of data" in the commit
> > message.
> > In my view, it should be like:"if an app reads and received data > 1*MSS",
> > " If an app reads and received data < 1*MSS" and " if an app reads and
> > received data exactly 1*MSS".
>
> AFAICT your suggestion for tweaking the commit message - "if an app
> reads and received" - would be redundant.  Our proposed phrase, "if an
> app reads", is sufficient, because a read of a certain amount of data
> automatically implies that the data has been received. That is, the
> "and received" part is implied already. After all, how would an app
> read data if it has not been received? :-)
>
> best regards,
> neal


* Re: [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-09-28 15:56       ` Xin Guo
@ 2023-10-01 15:19         ` Neal Cardwell
  2023-10-02  5:33           ` Xin Guo
  0 siblings, 1 reply; 7+ messages in thread
From: Neal Cardwell @ 2023-10-01 15:19 UTC (permalink / raw)
  To: Xin Guo
  Cc: Neal Cardwell, David Miller, Jakub Kicinski, Eric Dumazet, Netdev,
	Yuchung Cheng

On Thu, Sep 28, 2023 at 11:56 AM Xin Guo <guoxin0309@gmail.com> wrote:
>
> Neal,
> thanks for your explanation,
> 1)when I read the patch, i cannot understood "if an app reads  >1*MSS data",
> because in my view that "the app reads" mean that the copied data
> length from sk_receive_queue to user-space buffer
> in function tcp_recvmsg_locked(as example) when an app reads data from a socket,
> but for "tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||"
> "tp->rcv_nxt - tp->rcv_wup" means that the received data length from
> last ack in the kernel for the sk,
> and not always the length of copied data to user-space buffer.
>
> 2) when we received two small packets(<1*MSS) in the kernel for the
> sk, the total length of the two packets may  > 1*MSS.

Thanks for clarifying. Those are good points; the commit message could
and should be more precise when describing the existing logic in
tcp_cleanup_rbuf(). I have posted a v2 series with a more precise
commit message:
  https://patchwork.kernel.org/project/netdevbpf/patch/20231001151239.1866845-2-ncardwell.sw@gmail.com/

best regards,
neal


* Re: [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition
  2023-10-01 15:19         ` Neal Cardwell
@ 2023-10-02  5:33           ` Xin Guo
  0 siblings, 0 replies; 7+ messages in thread
From: Xin Guo @ 2023-10-02  5:33 UTC (permalink / raw)
  To: Neal Cardwell
  Cc: Neal Cardwell, David Miller, Jakub Kicinski, Eric Dumazet, Netdev,
	Yuchung Cheng

Thanks,
the commit message in v2 is much clearer.

Regards
Guo Xin

Neal Cardwell <ncardwell@google.com> 于2023年10月1日周日 23:19写道:
>
> On Thu, Sep 28, 2023 at 11:56 AM Xin Guo <guoxin0309@gmail.com> wrote:
> >
> > Neal,
> > thanks for your explanation,
> > 1)when I read the patch, i cannot understood "if an app reads  >1*MSS data",
> > because in my view that "the app reads" mean that the copied data
> > length from sk_receive_queue to user-space buffer
> > in function tcp_recvmsg_locked(as example) when an app reads data from a socket,
> > but for "tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||"
> > "tp->rcv_nxt - tp->rcv_wup" means that the received data length from
> > last ack in the kernel for the sk,
> > and not always the length of copied data to user-space buffer.
> >
> > 2) when we received two small packets(<1*MSS) in the kernel for the
> > sk, the total length of the two packets may  > 1*MSS.
>
> Thanks for clarifying. Those are good points; the commit message could
> and should be more precise when describing the existing logic in
> tcp_cleanup_rbuf(). I have posted a v2 series with a more precise
> commit message:
>   https://patchwork.kernel.org/project/netdevbpf/patch/20231001151239.1866845-2-ncardwell.sw@gmail.com/
>
> best regards,
> neal


Thread overview: 7+ messages
2023-09-27 15:15 [PATCH net 1/2] tcp: fix quick-ack counting to count actual ACKs of new data Neal Cardwell
2023-09-27 15:15 ` [PATCH net 2/2] tcp: fix delayed ACKs for MSS boundary condition Neal Cardwell
2023-09-28  8:53   ` Xin Guo
2023-09-28 14:38     ` Neal Cardwell
2023-09-28 15:56       ` Xin Guo
2023-10-01 15:19         ` Neal Cardwell
2023-10-02  5:33           ` Xin Guo
