From: Alexei Starovoitov <ast@plumgrid.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Daniel Borkmann <daniel@iogearbox.net>,
netdev@vger.kernel.org
Subject: Re: [PATCH net-next] pktgen: fix packet generation
Date: Tue, 12 May 2015 08:49:39 -0700
Message-ID: <55522113.8050708@plumgrid.com>
In-Reply-To: <20150512101952.29e2b4af@redhat.com>
On 5/12/15 1:19 AM, Jesper Dangaard Brouer wrote:
> On Mon, 11 May 2015 15:19:48 -0700
> Alexei Starovoitov <ast@plumgrid.com> wrote:
>
>> pkt_dev->last_ok was not set properly, so after the first burst,
>> pktgen, instead of allocating a new packet, will reuse the old one and
>> advance eth_type_trans further, which means the stack will be seeing
>> very short bogus packets.
>>
>> Fixes: 62f64aed622b ("pktgen: introduce xmit_mode '<start_xmit|netif_receive>'")
>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
>> ---
>> This bug slipped through due to all the code refactoring and can be
>> seen only after a clean reboot. If taps, RPS, or tx mode was used at
>> least once, the bug will be hidden.
>>
>> Note to users: if you don't see ip_rcv() in your perf profile, it
>> means you were hitting this bug.
>> As the commit log of 62f64aed622b says, the baseline perf profile
>> should look like:
>> 37.69% kpktgend_0 [kernel.vmlinux] [k] __netif_receive_skb_core
>> 25.81% kpktgend_0 [kernel.vmlinux] [k] kfree_skb
>> 7.22% kpktgend_0 [kernel.vmlinux] [k] ip_rcv
>> 5.68% kpktgend_0 [pktgen] [k] pktgen_thread_worker
>>
>> Jesper, that explains why you were seeing this line hot:
>>    atomic_long_inc(&skb->dev->rx_dropped);
>
> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
thanks!
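
To spell out the failure mode (a rough sketch of the relevant
pktgen_xmit() logic, not the literal diff):

    /* pktgen only builds a fresh packet when the previous send
     * was flagged ok: */
    if (!pkt_dev->skb || (pkt_dev->last_ok &&
                          ++pkt_dev->clone_count >= pkt_dev->clone_skb))
            fill_packet(odev, pkt_dev);

    /* xmit_mode netif_receive: eth_type_trans() pulls ETH_HLEN
     * bytes off the head of the skb */
    skb->protocol = eth_type_trans(skb, skb->dev);
    ...
    /* the missing piece: without this, last_ok stays 0, the same
     * skb is reused on the next iteration, eth_type_trans()
     * advances skb->data again, and the stack sees ever-shorter
     * bogus packets */
    pkt_dev->last_ok = 1;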
> Yes, just confirmed that this problem is gone. E.g. the multiqueue
> script now scales without hitting this "skb->dev->rx_dropped".
great.
> Good this got fixed, as my plan is to use this to profile the memory
> allocator's fast path for SKB alloc/free.
>
> Setting "burst = 0" (and flag NO_TIMESTAMP):
> Device: eth4@0
> 3938513pps 1890Mb/sec (1890486240bps) errors: 10000000
>
> Thus, performance dropped from 22.1Mpps to 3.9Mpps, i.e. roughly 209
> nanosec more per packet (1/3.94Mpps ~ 254 ns versus 1/22.1Mpps ~ 45 ns).
> Some 20% of that is the cost of pktgen itself; still, I'm surprised
> the hit is this big, as this should be hitting the most optimal
> cache-hot case of SKB alloc/free.
I tried something similar and I've seen ip_send_check(iph), called by
pktgen's fill_packet_ipv4(), be quite hot, so burst=1 will be measuring
quite a bit more than just skb alloc/free. I think your skb allocator
micro-benchmark was more accurate.
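
For reference, ip_send_check() recomputes the full IP header checksum
for every generated packet, which is why it shows up; its body in
net/ipv4/ip_output.c is essentially:

    void ip_send_check(struct iphdr *iph)
    {
            /* zero the field, then checksum ihl 32-bit words */
            iph->check = 0;
            iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
    }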
btw, this multi-core pktgen injection into netif_receive_skb() exposed
all the spin_locks in tc actions. We need to convert them to RCU.
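
Roughly what that means on the fast path (a sketch only; the params
indirection and per-cpu stats fields below are hypothetical, not the
tc structs as they exist today):

    /* today: every packet through e.g. act_gact serializes on the
     * per-action lock just to bump stats and read the verdict */
    spin_lock(&gact->tcf_lock);
    gact->tcf_bstats.bytes += qdisc_pkt_len(skb);
    gact->tcf_bstats.packets++;
    action = gact->tcf_action;
    spin_unlock(&gact->tcf_lock);

    /* after an RCU conversion the read side is lock-free: config is
     * read via rcu_dereference(), stats move to per-cpu counters
     * (field names illustrative) */
    rcu_read_lock();
    p = rcu_dereference(gact->params);
    action = p->action;
    stats = this_cpu_ptr(gact->cpu_bstats);
    stats->bytes += qdisc_pkt_len(skb);
    stats->packets++;
    rcu_read_unlock();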
Thread overview: 4+ messages
2015-05-11 22:19 [PATCH net-next] pktgen: fix packet generation Alexei Starovoitov
2015-05-12 8:19 ` Jesper Dangaard Brouer
2015-05-12 15:49 ` Alexei Starovoitov [this message]
2015-05-13 3:10 ` David Miller