* [PATCH net-next] tcp: reduce calls to tcp_schedule_loss_probe()
@ 2026-02-23 11:35 Eric Dumazet
From: Eric Dumazet @ 2026-02-23 11:35 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, Neal Cardwell, Kuniyuki Iwashima, netdev,
eric.dumazet, Eric Dumazet
For RPC workloads, we alternate tcp_schedule_loss_probe() calls from
output path and from input path, with tp->packets_out value
oscillating between !zero and zero, leading to poor branch prediction.
Move tp->packets_out check from tcp_schedule_loss_probe() to
tcp_set_xmit_timer().
We avoid one call to tcp_schedule_loss_probe() from tcp_ack()
path for typical RPC workloads, while improving branch prediction.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_output.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index e7b41abb82aad33d8cab4fcfa989cc4771149b41..6c3f1d0314446966d0ec4e8efb0b3d83463990d9 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3552,7 +3552,7 @@ void tcp_rearm_rto(struct sock *sk)
/* Try to schedule a loss probe; if that doesn't work, then schedule an RTO. */
static void tcp_set_xmit_timer(struct sock *sk)
{
- if (!tcp_schedule_loss_probe(sk, true))
+ if (!tcp_sk(sk)->packets_out || !tcp_schedule_loss_probe(sk, true))
tcp_rearm_rto(sk);
}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 326b58ff1118d02fc396753d56f210f9d3007c7f..ada38dd9cef477e16ff77544bbdd057d695fa978 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3116,7 +3116,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
* not in loss recovery, that are either limited by cwnd or application.
*/
if ((early_retrans != 3 && early_retrans != 4) ||
- !tp->packets_out || !tcp_is_sack(tp) ||
+ !tcp_is_sack(tp) ||
(icsk->icsk_ca_state != TCP_CA_Open &&
icsk->icsk_ca_state != TCP_CA_CWR))
return false;
--
2.53.0.345.g96ddfc5eaa-goog
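The refactor above can be illustrated with a small userspace sketch (hypothetical struct and function names, not the kernel code): hoisting the oscillating packets_out test to the single call site means the callee is skipped entirely when there is nothing in flight, and its remaining branches see a stable input.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the connection state relevant to the patch. */
struct conn {
	unsigned int packets_out;	/* oscillates between 0 and non-zero for RPC */
	bool sack_enabled;
};

/* Before: the callee re-checks packets_out on every invocation,
 * so this branch flips constantly under RPC-style traffic. */
static bool schedule_probe_old(const struct conn *c)
{
	if (!c->packets_out || !c->sack_enabled)
		return false;
	return true;	/* probe timer armed */
}

/* After: the callee assumes the caller already filtered packets_out. */
static bool schedule_probe_new(const struct conn *c)
{
	if (!c->sack_enabled)
		return false;
	return true;	/* probe timer armed */
}

/* The single call site now carries the oscillating check, and when
 * packets_out is zero the callee is not called at all. */
static bool set_xmit_timer(const struct conn *c)
{
	if (!c->packets_out)
		return false;	/* caller falls back to rearming the RTO */
	return schedule_probe_new(c);
}
```

Both paths return the same result for every input; only the placement of the check (and hence the call frequency and branch behavior) changes.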
* Re: [PATCH net-next] tcp: reduce calls to tcp_schedule_loss_probe()
From: Neal Cardwell @ 2026-02-23 15:47 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
Kuniyuki Iwashima, netdev, eric.dumazet
On Mon, Feb 23, 2026 at 3:35 AM Eric Dumazet <edumazet@google.com> wrote:
>
> For RPC workloads, we alternate tcp_schedule_loss_probe() calls from
> output path and from input path, with tp->packets_out value
> oscillating between !zero and zero, leading to poor branch prediction.
>
> Move tp->packets_out check from tcp_schedule_loss_probe() to
> tcp_set_xmit_timer().
>
> We avoid one call to tcp_schedule_loss_probe() from tcp_ack()
> path for typical RPC workloads, while improving branch prediction.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Thanks, Eric!
neal
* Re: [PATCH net-next] tcp: reduce calls to tcp_schedule_loss_probe()
From: Kuniyuki Iwashima @ 2026-02-23 19:00 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
Neal Cardwell, netdev, eric.dumazet
On Mon, Feb 23, 2026 at 3:35 AM Eric Dumazet <edumazet@google.com> wrote:
>
> For RPC workloads, we alternate tcp_schedule_loss_probe() calls from
> output path and from input path, with tp->packets_out value
> oscillating between !zero and zero, leading to poor branch prediction.
>
> Move tp->packets_out check from tcp_schedule_loss_probe() to
> tcp_set_xmit_timer().
>
> We avoid one call to tcp_schedule_loss_probe() from tcp_ack()
> path for typical RPC workloads, while improving branch prediction.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
* Re: [PATCH net-next] tcp: reduce calls to tcp_schedule_loss_probe()
From: patchwork-bot+netdevbpf @ 2026-02-25 2:00 UTC (permalink / raw)
To: Eric Dumazet
Cc: davem, kuba, pabeni, horms, ncardwell, kuniyu, netdev,
eric.dumazet
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Mon, 23 Feb 2026 11:35:01 +0000 you wrote:
> For RPC workloads, we alternate tcp_schedule_loss_probe() calls from
> output path and from input path, with tp->packets_out value
> oscillating between !zero and zero, leading to poor branch prediction.
>
> Move tp->packets_out check from tcp_schedule_loss_probe() to
> tcp_set_xmit_timer().
>
> [...]
Here is the summary with links:
- [net-next] tcp: reduce calls to tcp_schedule_loss_probe()
https://git.kernel.org/netdev/net-next/c/fca59a2dd0b8
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html