* [PATCH net-next] inetpeer: speed up inetpeer_invalidate_tree()
@ 2017-09-25 16:14 Eric Dumazet
From: Eric Dumazet @ 2017-09-25 16:14 UTC
To: David Miller; +Cc: netdev
From: Eric Dumazet <edumazet@google.com>
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (in which case the cost is the same).
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
net/ipv4/inetpeer.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index e7eb590c86ce2b33654c17c61619de74ff07bfd1..6e5626cc366c150b13fd44a8e36169c2fb54476d 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -284,14 +284,17 @@ EXPORT_SYMBOL(inet_peer_xrlim_allow);
 
 void inetpeer_invalidate_tree(struct inet_peer_base *base)
 {
-	struct inet_peer *p, *n;
+	struct rb_node *p = rb_first(&base->rb_root);
 
-	rbtree_postorder_for_each_entry_safe(p, n, &base->rb_root, rb_node) {
-		inet_putpeer(p);
+	while (p) {
+		struct inet_peer *peer = rb_entry(p, struct inet_peer, rb_node);
+
+		p = rb_next(p);
+		rb_erase(&peer->rb_node, &base->rb_root);
+		inet_putpeer(peer);
 		cond_resched();
 	}
 
-	base->rb_root = RB_ROOT;
 	base->total = 0;
 }
 EXPORT_SYMBOL(inetpeer_invalidate_tree);
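The pattern being adopted, in isolation: walk the tree in order, fetching
the successor with rb_next() before erasing the node the iterator pointed
to, so the iterator never follows links out of a node that has already
been unlinked. Below is a minimal standalone sketch of both traversals
against the kernel rbtree API; the my_entry type and the destroy_*()
helpers are hypothetical names for illustration, not code from this patch.

	#include <linux/rbtree.h>
	#include <linux/slab.h>

	struct my_entry {		/* hypothetical example type */
		struct rb_node node;
		int key;
	};

	/* Old pattern: postorder walk. Safe to free each entry as it
	 * is visited, but each step chases parent/child pointers that
	 * are likely cold once the tree no longer fits in CPU caches. */
	static void destroy_postorder(struct rb_root *root)
	{
		struct my_entry *e, *n;

		rbtree_postorder_for_each_entry_safe(e, n, root, node)
			kfree(e);
		*root = RB_ROOT;	/* entries were freed, never erased */
	}

	/* New pattern: in-order walk with rb_next(). The successor is
	 * fetched before rb_erase(), so the iterator never touches a
	 * node that has already been unlinked and freed. */
	static void destroy_inorder(struct rb_root *root)
	{
		struct rb_node *p = rb_first(root);

		while (p) {
			struct my_entry *e = rb_entry(p, struct my_entry, node);

			p = rb_next(p);
			rb_erase(&e->node, root);
			kfree(e);
		}
	}

Note that the postorder variant must reset the root by hand, because the
entries are freed without ever being erased from the tree; that is exactly
the base->rb_root = RB_ROOT line this patch is able to drop.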
* Re: [PATCH net-next] inetpeer: speed up inetpeer_invalidate_tree()
From: David Miller @ 2017-09-28 16:41 UTC
To: eric.dumazet; +Cc: netdev
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Mon, 25 Sep 2017 09:14:14 -0700
> From: Eric Dumazet <edumazet@google.com>
>
> As measured in my prior patch ("sch_netem: faster rb tree removal"),
> rbtree_postorder_for_each_entry_safe() is nice looking but much slower
> than using rb_next() directly, except when the tree is small enough
> to fit in CPU caches (in which case the cost is the same).
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Applied, thanks Eric.