From: Jesper Dangaard Brouer <brouer@redhat.com>
To: David Miller <davem@davemloft.net>
Cc: alexander.h.duyck@intel.com, netdev@vger.kernel.org,
	jeffrey.t.kirsher@intel.com, dborkman@redhat.com, fw@strlen.de,
	shemminger@vyatta.com, paulmck@linux.vnet.ibm.com,
	robert@herjulf.se, greearb@candelatech.com,
	john.r.fastabend@intel.com, danieltt@kth.se,
	zhouzhouyi@gmail.com, brouer@redhat.com
Subject: Re: [net-next PATCH 2/5] ixgbe: increase default TX ring buffer to 1024
Date: Wed, 14 May 2014 21:09:35 +0200
Message-ID: <20140514210935.5fc80c79@redhat.com>
In-Reply-To: <20140514.134950.1208688313542719676.davem@davemloft.net>

On Wed, 14 May 2014 13:49:50 -0400 (EDT)
David Miller <davem@davemloft.net> wrote:

> From: Alexander Duyck <alexander.h.duyck@intel.com>
> Date: Wed, 14 May 2014 09:28:50 -0700
> 
> > I'd say that it might be better to just add a note to the documentation
> > folder indicating what configuration is optimal for pktgen rather than
> > changing everyone's defaults to support one specific test.
> 
> We could have drivers provide a pktgen config adjustment mechanism,
> so if someone starts pktgen then the device auto-adjusts to a pktgen
> optimal configuration (whatever that may entail).

That might be problematic, because changing the TX queue size causes
the ixgbe driver to reset the link.
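
For reference, here is a minimal userspace sketch of what such an
adjustment amounts to, via the same ETHTOOL_GRINGPARAM/ETHTOOL_SRINGPARAM
ioctls that "ethtool -g/-G" goes through.  The device name "eth0" is
just a placeholder, and the set step needs CAP_NET_ADMIN:

/* Sketch: query the TX ring size, then request a new one, the same
 * way "ethtool -g/-G" does.  On ixgbe the set path resets the
 * device, which is why the link bounces. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder dev */
    ifr.ifr_data = (void *)&ring;

    if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ETHTOOL_GRINGPARAM");
        return 1;
    }
    printf("tx ring: %u (max %u)\n", ring.tx_pending, ring.tx_max_pending);

    ring.cmd = ETHTOOL_SRINGPARAM;
    ring.tx_pending = 1024;     /* triggers a device reset on ixgbe */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        perror("ETHTOOL_SRINGPARAM");

    close(fd);
    return 0;
}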

Notice that pktgen ignores BQL.  I'm sort of hoping that BQL will
push back for real use-cases, to avoid the bad effects of increasing
the TX ring size.
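
As a side note, the BQL state can be watched per TX queue via sysfs;
a quick sketch, where "eth0" and queue index 0 are placeholders:

/* Sketch: dump the BQL (dql) state of one TX queue from sysfs. */
#include <stdio.h>

static void dump_bql(const char *attr)
{
    char path[256], buf[64];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/class/net/eth0/queues/tx-0/byte_queue_limits/%s",
             attr);
    f = fopen(path, "r");
    if (f) {
        if (fgets(buf, sizeof(buf), f))
            printf("%-10s %s", attr, buf);
        fclose(f);
    }
}

int main(void)
{
    dump_bql("limit");      /* current byte limit computed by BQL */
    dump_bql("limit_max");  /* admin-settable upper bound */
    dump_bql("inflight");   /* bytes currently outstanding on the ring */
    return 0;
}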

One of the bad effects I'm hoping BQL will mitigate is the case of
filling the TX queue with large frames.  Consider 9K jumbo frames;
how long will it take to empty 1024 of them on a 10G link:

(9000*8)/(10000*10^6)*1000*1024 = 7.37ms

But with 9K MTU and the current 512-entry ring, we already have:
 (9000*8)/(10000*10^6)*1000*512 = 3.69ms

I guess the more normal use-case would be 1500 MTU + 38 bytes of
Ethernet overhead:
 (1538*8)/(10000*10^6)*1000*1024 = 1.26ms

And then again, these per-queue numbers should in theory be
multiplied by the number of TX queues.
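
FWIW, a throwaway helper reproducing the numbers above (drain time
for a full ring of fixed-size frames at line rate):

/* Throwaway helper: time to drain a full TX ring of fixed-size
 * frames at line rate.  Reproduces the calculations above. */
#include <stdio.h>

static double drain_ms(double frame_bytes, unsigned ring_entries,
                       double link_bps)
{
    return frame_bytes * 8.0 / link_bps * 1000.0 * ring_entries;
}

int main(void)
{
    const double tenG = 10000e6;    /* 10G link, in bit/s */

    printf("9000B x 1024: %.2f ms\n", drain_ms(9000, 1024, tenG)); /* 7.37 */
    printf("9000B x  512: %.2f ms\n", drain_ms(9000,  512, tenG)); /* 3.69 */
    printf("1538B x 1024: %.2f ms\n", drain_ms(1538, 1024, tenG)); /* 1.26 */
    return 0;
}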


I know increasing these limits should not be taken lightly, but we
just have to be crystal clear that the current 512 limit is
artificially limiting the capabilities of your hardware.

We can postpone this increase, because I also observe a 2Mpps limit
when actually using (alloc/free) real SKBs.  The alloc/free cost is
currently just hiding this limitation.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
