From: Jesper Dangaard Brouer <brouer@redhat.com>
To: William Tu <u9012063@gmail.com>
Cc: "Björn Töpel" <bjorn.topel@gmail.com>,
magnus.karlsson@intel.com,
"Alexander Duyck" <alexander.h.duyck@intel.com>,
"Alexander Duyck" <alexander.duyck@gmail.com>,
"John Fastabend" <john.fastabend@gmail.com>,
"Alexei Starovoitov" <ast@fb.com>,
willemdebruijn.kernel@gmail.com,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Linux Kernel Network Developers" <netdev@vger.kernel.org>,
"Björn Töpel" <bjorn.topel@intel.com>,
michael.lundkvist@ericsson.com, jesse.brandeburg@intel.com,
anjali.singhai@intel.com, jeffrey.b.shaw@intel.com,
ferruh.yigit@intel.com, qi.z.zhang@intel.com, brouer@redhat.com
Subject: Re: [RFC PATCH 00/24] Introducing AF_XDP support
Date: Tue, 27 Mar 2018 11:37:50 +0200 [thread overview]
Message-ID: <20180327113750.33cb4d5b@redhat.com> (raw)
In-Reply-To: <CALDO+SZcxks4xF-YZEJe3dL2sp9wR7kWYCnAnokhr-y3f9-AeQ@mail.gmail.com>
On Mon, 26 Mar 2018 14:58:02 -0700
William Tu <u9012063@gmail.com> wrote:
> > Again high count for NMI ?!?
> >
> > Maybe you just forgot to tell perf that you want it to decode the
> > bpf_prog correctly?
> >
> > https://prototype-kernel.readthedocs.io/en/latest/bpf/troubleshooting.html#perf-tool-symbols
> >
> > Enable via:
> > $ sysctl net/core/bpf_jit_kallsyms=1
> >
> > And use perf report (while BPF is STILL LOADED):
> >
> > $ perf report --kallsyms=/proc/kallsyms
> >
> > E.g. for emailing this you can use this command:
> >
> > $ perf report --sort cpu,comm,dso,symbol --kallsyms=/proc/kallsyms --no-children --stdio -g none | head -n 40
> >
>
> Thanks, I followed the steps. Here is the result of l2fwd:
> # Total Lost Samples: 119
> #
> # Samples: 2K of event 'cycles:ppp'
> # Event count (approx.): 25675705627
> #
> # Overhead CPU Command Shared Object Symbol
> # ........ ... ....... .................. ..................................
> #
> 10.48% 013 xdpsock xdpsock [.] main
> 9.77% 013 xdpsock [kernel.vmlinux] [k] clflush_cache_range
> 8.45% 013 xdpsock [kernel.vmlinux] [k] nmi
> 8.07% 013 xdpsock [kernel.vmlinux] [k] xsk_sendmsg
> 7.81% 013 xdpsock [kernel.vmlinux] [k] __domain_mapping
> 4.95% 013 xdpsock [kernel.vmlinux] [k] ixgbe_xmit_frame_ring
> 4.66% 013 xdpsock [kernel.vmlinux] [k] skb_store_bits
> 4.39% 013 xdpsock [kernel.vmlinux] [k] syscall_return_via_sysret
> 3.93% 013 xdpsock [kernel.vmlinux] [k] pfn_to_dma_pte
> 2.62% 013 xdpsock [kernel.vmlinux] [k] __intel_map_single
> 2.53% 013 xdpsock [kernel.vmlinux] [k] __alloc_skb
> 2.36% 013 xdpsock [kernel.vmlinux] [k] iommu_no_mapping
> 2.21% 013 xdpsock [kernel.vmlinux] [k] alloc_skb_with_frags
> 2.07% 013 xdpsock [kernel.vmlinux] [k] skb_set_owner_w
> 1.98% 013 xdpsock [kernel.vmlinux] [k] __kmalloc_node_track_caller
> 1.94% 013 xdpsock [kernel.vmlinux] [k] ksize
> 1.84% 013 xdpsock [kernel.vmlinux] [k] validate_xmit_skb_list
> 1.62% 013 xdpsock [kernel.vmlinux] [k] kmem_cache_alloc_node
> 1.48% 013 xdpsock [kernel.vmlinux] [k] __kmalloc_reserve.isra.37
> 1.21% 013 xdpsock xdpsock [.] xq_enq
> 1.08% 013 xdpsock [kernel.vmlinux] [k] intel_alloc_iova
>
You did use net/core/bpf_jit_kallsyms=1 and the correct perf commands for
decoding bpf_prog symbols, so the 'nmi' entry at perf top #3 is likely a
real NMI call... which looks wrong.
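As a side note (my suggestion, not needed for perf itself): real NMI
activity can be cross-checked against the kernel's per-CPU NMI counters
in /proc/interrupts (Linux-specific):

```shell
# Sum the per-CPU NMI counters from /proc/interrupts.
# Run it twice, a few seconds apart, while xdpsock is running: if the
# total grows quickly, the 'nmi' samples in perf are real NMIs.
awk '$1 == "NMI:" { for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) sum += $i }
     END { print "NMI total:", sum + 0 }' /proc/interrupts
```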
> And l2fwd under "perf stat" looks OK to me. There are few context
> switches, the CPU is fully utilized, and 1.17 insn per cycle seems ok.
>
> Performance counter stats for 'CPU(s) 6':
> 10000.787420 cpu-clock (msec) # 1.000 CPUs utilized
> 24 context-switches # 0.002 K/sec
> 0 cpu-migrations # 0.000 K/sec
> 0 page-faults # 0.000 K/sec
> 22,361,333,647 cycles # 2.236 GHz
> 13,458,442,838 stalled-cycles-frontend # 60.19% frontend cycles idle
> 26,251,003,067 instructions # 1.17 insn per cycle
> # 0.51 stalled cycles per insn
> 4,938,921,868 branches # 493.853 M/sec
> 7,591,739 branch-misses # 0.15% of all branches
> 10.000835769 seconds time elapsed
This perf stat output also indicates something is wrong.
The 1.17 insn per cycle is NOT okay; it is too low compared to what I
usually see (e.g. 2.36 insn per cycle).
It clearly shows 'stalled-cycles-frontend' at '60.19% frontend cycles
idle'. This means your CPU has a bottleneck fetching instructions,
as explained by Andi Kleen here [1]
[1] https://github.com/andikleen/pmu-tools/wiki/toplev-manual
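Just to show where those derived percentages come from (my addition; the
counter values are copied from your perf stat output above), the ratios
can be recomputed from the raw counters:

```shell
# Recompute the derived perf stat metrics from the raw counters:
#   insn per cycle          = instructions / cycles
#   frontend cycles idle    = stalled-cycles-frontend / cycles
#   stalled cycles per insn = stalled-cycles-frontend / instructions
awk 'BEGIN {
  cycles  = 22361333647
  insns   = 26251003067
  stalled = 13458442838
  printf "%.2f insn per cycle\n", insns / cycles
  printf "%.2f%% frontend cycles idle\n", 100 * stalled / cycles
  printf "%.2f stalled cycles per insn\n", stalled / insns
}'
```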
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
2018-01-31 13:53 [RFC PATCH 00/24] Introducing AF_XDP support Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 01/24] xsk: AF_XDP sockets buildable skeleton Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 02/24] xsk: add user memory registration sockopt Björn Töpel
2018-02-07 16:00 ` Willem de Bruijn
2018-02-07 21:39 ` Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 03/24] xsk: added XDP_{R,T}X_RING sockopt and supporting structures Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 04/24] xsk: add bind support and introduce Rx functionality Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 05/24] bpf: added bpf_xdpsk_redirect Björn Töpel
2018-02-05 13:42 ` Jesper Dangaard Brouer
2018-02-07 21:11 ` Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 06/24] net: wire up xsk support in the XDP_REDIRECT path Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 07/24] xsk: introduce Tx functionality Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 08/24] i40e: add support for XDP_REDIRECT Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 09/24] samples/bpf: added xdpsock program Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 10/24] netdevice: added XDP_{UN,}REGISTER_XSK command to ndo_bpf Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 11/24] netdevice: added ndo for transmitting a packet from an XDP socket Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 12/24] xsk: add iterator functions to xsk_ring Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 13/24] i40e: introduce external allocator support Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 14/24] i40e: implemented page recycling buff_pool Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 15/24] i40e: start using " Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 16/24] i40e: separated buff_pool interface from i40e implementaion Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 17/24] xsk: introduce xsk_buff_pool Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 18/24] xdp: added buff_pool support to struct xdp_buff Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 19/24] xsk: add support for zero copy Rx Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 20/24] xsk: add support for zero copy Tx Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 21/24] i40e: implement xsk sub-commands in ndo_bpf for zero copy Rx Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 22/24] i40e: introduced a clean_tx callback function Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 23/24] i40e: introduced Tx completion callbacks Björn Töpel
2018-01-31 13:53 ` [RFC PATCH 24/24] i40e: Tx support for zero copy allocator Björn Töpel
2018-02-01 16:42 ` [RFC PATCH 00/24] Introducing AF_XDP support Jesper Dangaard Brouer
2018-02-02 10:31 ` Jesper Dangaard Brouer
2018-02-05 15:05 ` Björn Töpel
2018-02-07 15:54 ` Willem de Bruijn
2018-02-07 21:28 ` Björn Töpel
2018-02-08 23:16 ` Willem de Bruijn
2018-02-07 17:59 ` Tom Herbert
2018-02-07 21:38 ` Björn Töpel
2018-03-26 16:06 ` William Tu
2018-03-26 16:38 ` Jesper Dangaard Brouer
2018-03-26 21:58 ` William Tu
2018-03-27 6:09 ` Björn Töpel
2018-03-27 9:37 ` Jesper Dangaard Brouer [this message]
2018-03-28 0:06 ` William Tu
2018-03-28 8:01 ` Jesper Dangaard Brouer
2018-03-28 15:05 ` William Tu
2018-03-26 22:54 ` Tushar Dave
2018-03-26 23:03 ` Alexander Duyck
2018-03-26 23:20 ` Tushar Dave
2018-03-28 0:49 ` William Tu
2018-03-27 6:30 ` Björn Töpel