public inbox for linux-kernel@vger.kernel.org
From: David Howells <dhowells@redhat.com>
To: netdev@vger.kernel.org
Cc: David Howells <dhowells@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Marc Dionne <marc.dionne@auristor.com>,
	linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 06/13] rxrpc: Generate extra pings for RTT during heavy-receive call
Date: Tue, 31 Jan 2023 17:12:20 +0000	[thread overview]
Message-ID: <20230131171227.3912130-7-dhowells@redhat.com> (raw)
In-Reply-To: <20230131171227.3912130-1-dhowells@redhat.com>

On a call that transmits a single data packet but receives a massive
number of data packets, we only ping for one RTT sample, so we never get
a good reading of the RTT.

Fix this by converting occasional IDLE ACKs into PING ACKs to elicit a
response.
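
The decision being added can be sketched outside the kernel as a small
pure helper (hypothetical names and units; the real logic lives in
rxrpc_input_call_event() and uses ktime_t, not milliseconds):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the ACK choices the patch selects between. */
enum ack_choice {
	ACK_PING_FOR_RTT,	/* too few RTT samples collected yet */
	ACK_PING_FOR_OLD_RTT,	/* last RTT probe is more than 1s old */
	ACK_IDLE,		/* enough fresh samples: plain IDLE ACK */
};

/*
 * Once more than two received packets are unacked, prefer a PING ACK
 * (which elicits a response and hence an RTT sample) while fewer than
 * three samples exist or the last probe has gone stale; otherwise fall
 * back to the IDLE ACK the old code always sent.
 */
static enum ack_choice choose_ack(unsigned int rtt_count,
				  int64_t rtt_last_req_ms, int64_t now_ms)
{
	if (rtt_count < 3)
		return ACK_PING_FOR_RTT;
	if (rtt_last_req_ms + 1000 < now_ms)
		return ACK_PING_FOR_OLD_RTT;
	return ACK_IDLE;
}
```

With three samples banked and a probe sent within the last second, the
helper degrades to the pre-patch behaviour of a plain IDLE ACK.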

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h |  3 ++-
 net/rxrpc/call_event.c       | 15 ++++++++++++---
 net/rxrpc/output.c           |  7 +++++--
 3 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index cdcadb1345dc..450b8f345814 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -360,11 +360,12 @@
 	EM(rxrpc_propose_ack_client_tx_end,	"ClTxEnd") \
 	EM(rxrpc_propose_ack_input_data,	"DataIn ") \
 	EM(rxrpc_propose_ack_input_data_hole,	"DataInH") \
-	EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \
 	EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \
 	EM(rxrpc_propose_ack_ping_for_lost_ack,	"LostAck") \
 	EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \
+	EM(rxrpc_propose_ack_ping_for_old_rtt,	"OldRtt ") \
 	EM(rxrpc_propose_ack_ping_for_params,	"Params ") \
+	EM(rxrpc_propose_ack_ping_for_rtt,	"Rtt    ") \
 	EM(rxrpc_propose_ack_processing_op,	"ProcOp ") \
 	EM(rxrpc_propose_ack_respond_to_ack,	"Rsp2Ack") \
 	EM(rxrpc_propose_ack_respond_to_ping,	"Rsp2Png") \
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 1abdef15debc..cf9799be4286 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -498,9 +498,18 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb)
 		rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0,
 			       rxrpc_propose_ack_rx_idle);
 
-	if (atomic_read(&call->ackr_nr_unacked) > 2)
-		rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0,
-			       rxrpc_propose_ack_input_data);
+	if (atomic_read(&call->ackr_nr_unacked) > 2) {
+		if (call->peer->rtt_count < 3)
+			rxrpc_send_ACK(call, RXRPC_ACK_PING, 0,
+				       rxrpc_propose_ack_ping_for_rtt);
+		else if (ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000),
+				      ktime_get_real()))
+			rxrpc_send_ACK(call, RXRPC_ACK_PING, 0,
+				       rxrpc_propose_ack_ping_for_old_rtt);
+		else
+			rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0,
+				       rxrpc_propose_ack_input_data);
+	}
 
 	/* Make sure the timer is restarted */
 	if (!__rxrpc_call_is_complete(call)) {
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index a9746be29634..98b5d0db7761 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -253,12 +253,15 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
 	iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len);
 	ret = do_udp_sendmsg(conn->local->socket, &msg, len);
 	call->peer->last_tx_at = ktime_get_seconds();
-	if (ret < 0)
+	if (ret < 0) {
 		trace_rxrpc_tx_fail(call->debug_id, serial, ret,
 				    rxrpc_tx_point_call_ack);
-	else
+	} else {
 		trace_rxrpc_tx_packet(call->debug_id, &txb->wire,
 				      rxrpc_tx_point_call_ack);
+		if (txb->wire.flags & RXRPC_REQUEST_ACK)
+			call->peer->rtt_last_req = ktime_get_real();
+	}
 	rxrpc_tx_backoff(call, ret);
 
 	if (!__rxrpc_call_is_complete(call)) {
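
The output.c hunk above records the probe time only on a successful
transmission of a packet that requested an acknowledgement, since only
those packets can yield an RTT sample. A standalone sketch of that gate
(hypothetical names; 0x02 is assumed for the REQUEST_ACK wire flag):

```c
#include <assert.h>
#include <stdint.h>

#define REQUEST_ACK_FLAG 0x02	/* assumed stand-in for RXRPC_REQUEST_ACK */

/*
 * Return the updated rtt_last_req timestamp: advance it to "now" only
 * when sendmsg succeeded and the wire header asked the peer to ACK;
 * on a send failure or a plain (non-requesting) ACK, leave it alone.
 */
static int64_t maybe_record_probe(int tx_ret, uint8_t wire_flags,
				  int64_t rtt_last_req_ms, int64_t now_ms)
{
	if (tx_ret >= 0 && (wire_flags & REQUEST_ACK_FLAG))
		return now_ms;
	return rtt_last_req_ms;
}
```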


Thread overview: 16+ messages
2023-01-31 17:12 [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 5 David Howells
2023-01-31 17:12 ` [PATCH net-next 01/13] rxrpc: Fix trace string David Howells
2023-01-31 17:12 ` [PATCH net-next 02/13] rxrpc: Remove whitespace before ')' in trace header David Howells
2023-01-31 17:12 ` [PATCH net-next 03/13] rxrpc: Shrink the tabulation in the rxrpc trace header a bit David Howells
2023-01-31 17:12 ` [PATCH net-next 04/13] rxrpc: Convert call->recvmsg_lock to a spinlock David Howells
2023-01-31 17:12 ` [PATCH net-next 05/13] rxrpc: Allow a delay to be injected into packet reception David Howells
2023-01-31 17:12 ` David Howells [this message]
2023-01-31 17:12 ` [PATCH net-next 07/13] rxrpc: De-atomic call->ackr_window and call->ackr_nr_unacked David Howells
2023-01-31 17:12 ` [PATCH net-next 08/13] rxrpc: Simplify ACK handling David Howells
2023-01-31 17:12 ` [PATCH net-next 09/13] rxrpc: Don't lock call->tx_lock to access call->tx_buffer David Howells
2023-01-31 17:12 ` [PATCH net-next 10/13] rxrpc: Remove local->defrag_sem David Howells
2023-01-31 17:12 ` [PATCH net-next 11/13] rxrpc: Show consumed and freed packets as non-dropped in dropwatch David Howells
2023-02-02 10:42   ` Paolo Abeni
2023-01-31 17:12 ` [PATCH net-next 12/13] rxrpc: Change rx_packet tracepoint to display securityIndex not type twice David Howells
2023-01-31 17:12 ` [PATCH net-next 13/13] rxrpc: Kill service bundle David Howells
2023-02-02 12:10 ` [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 5 patchwork-bot+netdevbpf
