From: Rick Jones <rick.jones2@hp.com>
To: Sharat Masetty <sharat04@gmail.com>, netdev@vger.kernel.org
Subject: Re: Packet drops observed @ LINUX_MIB_TCPBACKLOGDROP
Date: Thu, 27 Feb 2014 08:42:16 -0800
Message-ID: <530F6AE8.1030307@hp.com>
In-Reply-To: <CAJzFV35=wUOA+pQCkAwV2+oF79VAmbsRY08gZUiFz-Vtw-898w@mail.gmail.com>

On 02/26/2014 06:00 PM, Sharat Masetty wrote:
> Hi,
>
> We are trying to achieve category 4 data rates on an ARM device.

Please forgive my ignorance, but what are "category 4 data rates?"

> We see that with an incoming TCP stream (IP packets coming in and
> ACKs going out) lots of packets are getting dropped when the backlog
> queue is full. This is impacting overall TCP throughput. I am
> trying to understand the full context of why this queue is getting
> full so often.
>
> From my brief look at the code, it looks to me like the user-space
> process is slow in pulling the data from the socket buffer, and the
> TCP stack is therefore using this backlog queue in the meantime.
> This queue is also charged against the main socket buffer allocation.
>
> Can you please explain this backlog queue, and possibly confirm
> whether my understanding of this matter is accurate?
> Also, can you suggest any ideas on how to mitigate these drops?

Well, there is always the question of why the user process is slow 
pulling the data out of the socket.  If it is unable to handle this 
"category 4 data rate" on a sustained basis, then something has got to 
give.  If it is only *sometimes* unable to keep up but is otherwise able 
to go as fast or faster (so it can clear out a backlog), then you could 
consider tweaking the size of the queue.  But it would be better still 
to find the cause of the occasional slowness and address it.
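
If the queue/buffer limits do turn out to be the thing to tweak, the 
knobs to look at are the socket receive buffer limits, since (as you 
noted) the backlog is charged against the socket buffer allocation.  A 
sketch, with purely illustrative values:

# watch the drop counter before and after a run
nstat -az TcpExtTCPBacklogDrop

# raise the ceiling on the receive buffer (values illustrative, not a recommendation)
sysctl -w net.core.rmem_max=4194304
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"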

If you run something that does no processing on the data (e.g. netperf), 
are you able to achieve the data rates you seek?  At what level of CPU 
utilization?  From a system you know can generate the desired data rate, 
something like:

netperf -H <yourARMsystem> -t TCP_STREAM -C -- -m <what your application sends each time>
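
For example, with a made-up address for the ARM system and an assumed 
16 KB application send size:

netperf -H 192.168.1.20 -t TCP_STREAM -C -- -m 16384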

If the ARM system is multi-core, I might go with

netperf -H <yourARMsystem> -t TCP_STREAM -C -- -m <sendsize> \
  -o throughput,remote_cpu_util,remote_cpu_peak_util,remote_cpu_peak_id,remote_sd

so netperf will tell you the ID and utilization of the most utilized CPU 
on the receiver in addition to the overall CPU utilization.
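
If you want to cross-check what netperf reports, running something like 
this on the ARM system while the test is going will show the per-CPU 
picture (assuming the sysstat package is installed there):

mpstat -P ALL 1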

There might be other netperf options worth using, depending on just what 
the sender is doing - knowing which would require knowing more about 
this stream of traffic.

happy benchmarking,

rick jones
