From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Andy Gospodarek <andy@greyhouse.net>
Cc: David Miller <davem@davemloft.net>,
	alexei.starovoitov@gmail.com, michael.chan@broadcom.com,
	netdev@vger.kernel.org, xdp-newbies@vger.kernel.org,
	brouer@redhat.com
Subject: Re: [PATCH v4 net-next RFC] net: Generic XDP
Date: Mon, 24 Apr 2017 15:18:34 +0200
Message-ID: <20170424151834.66fd0a43@redhat.com>
In-Reply-To: <20170420163034.053ec42c@redhat.com>


On Thu, 20 Apr 2017 16:30:34 +0200 Jesper Dangaard Brouer <brouer@redhat.com> wrote:

> On Wed, 19 Apr 2017 10:29:03 -0400
> Andy Gospodarek <andy@greyhouse.net> wrote:
> 
> > I ran this on top of a card that uses the bnxt_en driver on a desktop
> > class system with an i7-6700 CPU @ 3.40GHz, sending a single stream of
> > UDP traffic with flow control disabled and saw the following (all stats
> > in Million PPS).
> > 
> >                   xdp1                xdp2       xdp_tx_tunnel
> > Generic XDP        7.8    5.5 (1.3 actual)    4.6 (1.1 actual)
> > Optimized XDP     11.7                 9.7                 4.6
> > 
> > One thing to note is that the Generic XDP case shows a difference
> > between the rate reported by the application and the rate actually
> > seen on the wire.  I did not debug where the drops are happening or
> > which counter should be incremented to account for them -- I'll add
> > that to my TODO list.  The Optimized XDP case shows no difference
> > between reported and actual frames on the wire.
> 
> The gap between the application-reported and actual (seen on the wire)
> numbers sounds scary.  How do you evaluate/measure "seen on the wire"?
> 
> Perhaps you could check the ethtool -S statistics to see if anything
> is fishy?  I recommend using my tool[1] like this:
> 
>  ~/git/network-testing/bin/ethtool_stats.pl --dev mlx5p2 --sec 2
> 
> [1] https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
> 
> I'm evaluating this patch on an mlx5 NIC, and something is not right...
> I'm seeing:
> 
>  Ethtool(mlx5p2) stat:     349599 (        349,599) <= tx_multicast_phy /sec
>  Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= tx_packets /sec
>  Ethtool(mlx5p2) stat:     349596 (        349,596) <= tx_packets_phy /sec
>  [...]
>  Ethtool(mlx5p2) stat:      36898 (         36,898) <= rx_cache_busy /sec
>  Ethtool(mlx5p2) stat:      36898 (         36,898) <= rx_cache_full /sec
>  Ethtool(mlx5p2) stat:    4903287 (      4,903,287) <= rx_cache_reuse /sec
>  Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= rx_csum_complete /sec
>  Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= rx_packets /sec
> 
> Something is wrong... when I tcpdump on the generator machine, I see
> garbled packets with IPv6 multicast addresses.
> 
> And it looks like I'm only sending 349,596 tx_packets_phy/sec on the "wire".
> 

The packets missing on the TX wire were being dropped by the NIC HW,
because the Ethernet MAC addresses were not changed/swapped.

I fixed this XDP_TX bug in my test program xdp_bench01_mem_access_cost:
https://github.com/netoptimizer/prototype-kernel/commit/85f7ba2f0ea2

I also added a new --swapmac option, providing another test mode that
modifies the packet:
https://github.com/netoptimizer/prototype-kernel/commit/fe080e6f3ccf
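
For reference, here is a minimal sketch of the kind of MAC swap an
XDP_TX program needs, assuming libbpf-style headers; the function and
section names are illustrative, not the exact code from
xdp_bench01_mem_access_cost:

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_tx_swapmac(struct xdp_md *ctx)
  {
          void *data     = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          struct ethhdr *eth = data;
          unsigned char tmp[ETH_ALEN];

          /* Bounds check required by the verifier before any access */
          if (data + sizeof(*eth) > data_end)
                  return XDP_DROP;

          /* Swap src/dst MAC so the bounced frame is validly
           * addressed; otherwise the NIC HW may drop it on TX. */
          __builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
          __builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
          __builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

          return XDP_TX;
  }

  char _license[] SEC("license") = "GPL";

Such an object would typically be built with clang -O2 -target bpf and
attached with "ip link set dev <ifname> xdp obj <file.o>".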

I will shortly publish a full report of testing this patch.
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

