From: Eric Dumazet <eric.dumazet@gmail.com>
To: Yafang Shao <laoar.shao@gmail.com>,
Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
netdev <netdev@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next] tcp: add SNMP counter for the number of packets pruned from ofo queue
Date: Wed, 25 Jul 2018 21:06:10 -0700 [thread overview]
Message-ID: <70428ad9-5d59-faed-4384-36190939a19d@gmail.com> (raw)
In-Reply-To: <CALOAHbAHgEF_=w_hbd_03NOH3h9m-4W=UJ0HCYGK0BtEyq=5Tw@mail.gmail.com>
On 07/25/2018 06:42 PM, Yafang Shao wrote:
>
> Hi Eric,
>
> LINUX_MIB_TCPOFOQUEUE, LINUX_MIB_TCPOFODROP and LINUX_MIB_TCPOFOMERGE
> all count SKBs, but LINUX_MIB_OFOPRUNED alone counts events, which
> could lead to misunderstanding.
> So I think introducing a counter for the number of SKBs pruned would be
> better; that would help us track the whole behavior of the ofo queue.
> That is why I submitted this patch.
Sure, and I said your patch had issues.

You mixed 'packets' and 'skbs', but apparently you did not get my point.

I would rather not add another SNMP counter; instead I would refine the
current one to track something more meaningful.

The notion of an 'skb' is internal to the kernel and cannot easily be mapped
to a 'number of network segments', which is probably more useful to the user.

I will send this instead:
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index d51fa358b2b196d0f9c258b24354813f2128a675..141a062abd0660c8f6d049de1dc7c7ecf7a7230d 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5001,18 +5001,19 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct rb_node *node, *prev;
+	unsigned int segs = 0;
 	int goal;
 
 	if (RB_EMPTY_ROOT(&tp->out_of_order_queue))
 		return false;
 
-	NET_INC_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED);
 	goal = sk->sk_rcvbuf >> 3;
 	node = &tp->ooo_last_skb->rbnode;
 	do {
 		prev = rb_prev(node);
 		rb_erase(node, &tp->out_of_order_queue);
 		goal -= rb_to_skb(node)->truesize;
+		segs += max_t(u16, 1, skb_shinfo(rb_to_skb(node))->gso_segs);
 		tcp_drop(sk, rb_to_skb(node));
 		if (!prev || goal <= 0) {
 			sk_mem_reclaim(sk);
@@ -5023,6 +5024,7 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
 		}
 		node = prev;
 	} while (node);
+	NET_ADD_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED, segs);
 	tp->ooo_last_skb = rb_to_skb(prev);
 
 	/* Reset SACK state.  A conforming SACK implementation will
Thread overview: 10+ messages
2018-07-25 13:06 [PATCH net-next] tcp: add SNMP counter for the number of packets pruned from ofo queue Yafang Shao
2018-07-25 13:33 ` Eric Dumazet
2018-07-25 13:40 ` Yafang Shao
2018-07-25 13:55 ` Eric Dumazet
2018-07-25 13:59 ` Yafang Shao
2018-07-26 1:42 ` Yafang Shao
2018-07-26 4:06 ` Eric Dumazet [this message]
2018-07-26 4:31 ` Yafang Shao
2018-07-26 4:36 ` Eric Dumazet
2018-07-26 4:42 ` Yafang Shao