From: Jiri Pirko <jiri@resnulli.us>
To: netdev@vger.kernel.org
Cc: vladbu@mellanox.com, pablo@netfilter.org,
xiyou.wangcong@gmail.com, jhs@mojatatu.com, mlxsw@mellanox.com,
alexanderk@mellanox.com
Subject: tc tp creation performance degradation since kernel 5.1
Date: Wed, 12 Jun 2019 14:03:41 +0200 [thread overview]
Message-ID: <20190612120341.GA2207@nanopsycho> (raw)
Hi.
I came across a serious performance degradation when adding many tps. I'm
using the following script:
------------------------------------------------------------------------
#!/bin/bash
dev=testdummy
ip link add name $dev type dummy
ip link set dev $dev up
tc qdisc add dev $dev ingress
tmp_file_name=$(date +"/tmp/tc_batch.%s.%N.tmp")
pref_id=1
while [ $pref_id -lt 20000 ]
do
echo "filter add dev $dev ingress proto ip pref $pref_id matchall action drop" >> $tmp_file_name
((pref_id++))
done
start=$(date +"%s")
tc -b $tmp_file_name
stop=$(date +"%s")
echo "Insertion duration: $(($stop - $start)) sec"
rm -f $tmp_file_name
ip link del dev $dev
------------------------------------------------------------------------
On my testing VM, the result on the 5.1 kernel is:
Insertion duration: 3 sec
On net-next it is:
Insertion duration: 54 sec
I did simple profiling using perf. Output on the 5.1 kernel:
77.85% tc [kernel.kallsyms] [k] tcf_chain_tp_find
3.30% tc [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
1.33% tc_pref_scale.s [kernel.kallsyms] [k] do_syscall_64
0.60% tc_pref_scale.s libc-2.28.so [.] malloc
0.55% tc [kernel.kallsyms] [k] mutex_spin_on_owner
0.51% tc libc-2.28.so [.] __memset_sse2_unaligned_erms
0.40% tc_pref_scale.s libc-2.28.so [.] __gconv_transform_utf8_internal
0.38% tc_pref_scale.s libc-2.28.so [.] _int_free
0.37% tc_pref_scale.s libc-2.28.so [.] __GI___strlen_sse2
0.37% tc [kernel.kallsyms] [k] idr_get_free
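tcf_chain_tp_find dominating here is consistent with it being a linear walk
over the chain's prio-ordered tp list: with monotonically increasing prefs,
insertion number i has to scan past the i-1 existing tps, so the whole batch
costs roughly N^2/2 list steps. A toy model of that (pure arithmetic, not the
kernel code) for the 20000-filter script above:

```shell
#!/bin/bash
# Toy cost model (assumption: tcf_chain_tp_find walks the prio-ordered
# tp list from the head on every insertion). Insertion i scans i-1
# entries before reaching its slot at the tail.
n=20000
total=0
for ((i = 1; i <= n; i++)); do
    ((total += i - 1))   # list-walk steps for insertion number i
done
echo "insertions: $n, total list-walk steps: $total"
```

That is ~2*10^8 walk steps for the batch, which would explain why this one
function eats most of the insertion time even on the fast kernel.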
Output on net-next:
39.26% tc [kernel.vmlinux] [k] lock_is_held_type
33.99% tc [kernel.vmlinux] [k] tcf_chain_tp_find
12.77% tc [kernel.vmlinux] [k] __asan_load4_noabort
1.90% tc [kernel.vmlinux] [k] __asan_load8_noabort
1.08% tc [kernel.vmlinux] [k] lock_acquire
0.94% tc [kernel.vmlinux] [k] debug_lockdep_rcu_enabled
0.61% tc [kernel.vmlinux] [k] debug_lockdep_rcu_enabled.part.5
0.51% tc [kernel.vmlinux] [k] unwind_next_frame
0.50% tc [kernel.vmlinux] [k] _raw_spin_unlock_irqrestore
0.47% tc_pref_scale.s [kernel.vmlinux] [k] lock_acquire
0.47% tc [kernel.vmlinux] [k] lock_release
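One caveat before comparing the absolute numbers: __asan_load*_noabort is
KASAN instrumentation and lock_is_held_type/debug_lockdep_rcu_enabled are
lockdep, so part of the gap may come from the net-next kernel being a debug
build rather than from the TC changes alone. A quick (hypothetical) way to
check which debug options a kernel was built with, assuming its config is
installed under the usual /boot path:

```shell
#!/bin/bash
# Hypothetical helper (not from the original mail): list the debug
# options that would explain the __asan_* and lockdep symbols in the
# net-next profile.
check_debug_opts() {
    grep -E '^CONFIG_(KASAN|PROVE_LOCKING|LOCKDEP)=' "$1"
}

config="/boot/config-$(uname -r)"   # path is an assumption
if [ -r "$config" ]; then
    check_debug_opts "$config"
fi
```

If the two kernels differ here, rebuilding net-next without KASAN/lockdep
would give a fairer comparison of the tcf_chain_tp_find cost itself.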
I haven't investigated this any further yet. I suspect this might be
related to Vlad's changes in the area. Any ideas?
Thanks!
Jiri
Thread overview: 10+ messages
2019-06-12 12:03 Jiri Pirko [this message]
2019-06-12 12:30 ` tc tp creation performance degradation since kernel 5.1 Paolo Abeni
2019-06-13 4:50 ` Jiri Pirko
2019-06-12 12:34 ` Vlad Buslov
2019-06-13 5:49 ` Jiri Pirko
2019-06-13 8:11 ` Jiri Pirko
2019-06-13 10:09 ` Vlad Buslov
2019-06-13 11:11 ` Jiri Pirko
2019-06-13 11:26 ` Vlad Buslov
2019-06-13 14:18 ` Jiri Pirko