From: Rick Jones <rick.jones2@hp.com>
To: "Bruno Prémont" <bonbons@linux-vserver.org>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	netdev@vger.kernel.org
Subject: Re: Marvell Kirkwood - MV643XX: near 100% UDP RX packet loss
Date: Mon, 29 Dec 2014 15:25:15 -0800	[thread overview]
Message-ID: <54A1E2DB.2010309@hp.com> (raw)
In-Reply-To: <20141227121727.20fe52c8@neptune.home>

On 12/27/2014 03:17 AM, Bruno Prémont wrote:
> On Thu, 25 December 2014 Rick Jones <rick.jones2@hp.com> wrote:
>>> Why are so many packets being discarded?
>>
>> You should also check the netstat statistics, particularly UDP on the
>> receiving side.  Look before and after the test and see how they change,
>> if at all.
>
> Here they go.
>
> Summary of numbers:
> iperf UDP run, 5 seconds @ 1Gb/s
>   lost 71216/71776 packets
>
>                                    before      after      delta
> ethtool:
>   rx_packets:                      420001     688424     268423
>   rx_bytes:                     433251917  809803463  376551546
>   rx_errors:                            0          0          0
>   rx_dropped:                           0          0          0
>   bad_octets_received:                  0          0          0
>   bad_frames_received:                  0          0          0
>   rx_discard:                      159691     323123     163432
>   rx_overrun:                           0          0          0
> netstat, udp:
>   packets received                  15559      16137        578
>   packets to unknown port received     18         18          0
>   packet receive errors             41599      83890      42291
>   packets sent                      34697      34770         73
>   receive buffer errors                 0          0          0
>   send buffer errors                    0          0          0
>

Well, it certainly looks like a decent fraction of your lost traffic is 
UDP packet receive errors, presumably from overrunning the SO_RCVBUF on 
the receiving side.  You can either start walking down the transmission 
rate of the iperf client, or try a larger receive socket buffer size on 
the iperf server, though a larger buffer will only help if the receiving 
side is only occasionally slower than the sending side.
You might also want to check whether the UDP datagrams being sent are 
huge and therefore getting fragmented.  All it takes is losing one 
fragment of an IP datagram to render the entire datagram useless.
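
(Editor's aside, not part of the original exchange: a minimal sketch of a 
UDP receiver asking for a larger SO_RCVBUF.  The 4 MB size, the port, and 
the bind address are assumptions for illustration; the kernel silently 
caps the requested value at net.core.rmem_max, so check the effective 
value with getsockopt.)

    import socket

    # Hypothetical buffer size for illustration only; the kernel doubles
    # the request for bookkeeping and caps it at net.core.rmem_max.
    REQUESTED_RCVBUF = 4 * 1024 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED_RCVBUF)
    sock.bind(("0.0.0.0", 5001))   # port chosen arbitrarily for this sketch

    effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print("effective SO_RCVBUF:", effective)

    while True:
        # 65535 bytes is enough for any single UDP datagram
        data, peer = sock.recvfrom(65535)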

As for the rx_discard in the ethtool stats, someone more familiar with 
the hardware will have to describe the various reasons for that stat to 
be incremented.
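
(Editor's aside: the before/after deltas shown above can be collected 
mechanically.  The following sketch, not from the thread, snapshots 
"ethtool -S" before and after a run and prints only the counters that 
moved; the interface name eth0 is an assumption.)

    import subprocess

    def ethtool_stats(iface):
        """Parse `ethtool -S <iface>` into a {counter: value} dict."""
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True, check=True).stdout
        stats = {}
        for line in out.splitlines():
            if ":" in line:
                name, _, value = line.partition(":")
                try:
                    stats[name.strip()] = int(value.strip())
                except ValueError:
                    pass  # skip the "NIC statistics:" header and similar lines
        return stats

    iface = "eth0"   # assumed interface name
    before = ethtool_stats(iface)
    input("Run the iperf test, then press Enter...")
    after = ethtool_stats(iface)

    for name in sorted(before):
        delta = after.get(name, 0) - before[name]
        if delta:
            print(f"{name:30s} {before[name]:>12d} {after.get(name, 0):>12d} {delta:>12d}")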

rick jones


Thread overview: 4+ messages
2014-12-23 23:18 Marvell Kirkwood - MV643XX: near 100% UDP RX packet loss Bruno Prémont
2014-12-25 21:54 ` Rick Jones
2014-12-27 11:17   ` Bruno Prémont
2014-12-29 23:25     ` Rick Jones [this message]
