From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Tariq Toukan <ttoukan.linux@gmail.com>,
	Eric Dumazet <edumazet@google.com>,
	"David S . Miller" <davem@davemloft.net>,
	netdev <netdev@vger.kernel.org>,
	Tariq Toukan <tariqt@mellanox.com>,
	Martin KaFai Lau <kafai@fb.com>,
	Willem de Bruijn <willemb@google.com>,
	Brenden Blanco <bblanco@plumgrid.com>,
	Alexei Starovoitov <ast@kernel.org>,
	brouer@redhat.com
Subject: Re: [PATCH v2 net-next 00/14] mlx4: order-0 allocations and page recycling
Date: Sun, 12 Feb 2017 23:38:53 +0100	[thread overview]
Message-ID: <20170212233853.193a2714@redhat.com> (raw)
In-Reply-To: <1486933066.8227.6.camel@edumazet-glaptop3.roam.corp.google.com>

On Sun, 12 Feb 2017 12:57:46 -0800
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Sun, 2017-02-12 at 18:31 +0200, Tariq Toukan wrote:
> > On 09/02/2017 6:56 PM, Eric Dumazet wrote:  
> > >> Default, out of box.  
> > > Well. Please report:
> > >
> > > ethtool  -l eth0
> > > ethtool -g eth0  
> > $ ethtool -g p1p1
> > Ring parameters for p1p1:
> > Pre-set maximums:
> > RX:             8192
> > RX Mini:        0
> > RX Jumbo:       0
> > TX:             8192
> > Current hardware settings:
> > RX:             1024
> > RX Mini:        0
> > RX Jumbo:       0
> > TX:             512  
> 
> We are using 4096 slots per RX queue; this is why I could not
> reproduce your results.

Just so others understand this: the number of RX queue slots is
indirectly the size of the page-recycle "cache" in this scheme (which
depends on refcnt tricks to see whether a page can be reused).
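
For the curious, here is a minimal sketch of that refcnt trick
(illustrative names, simplified from the recycling added in patch
09/14, not the driver's exact code): a page can only be handed back to
the NIC if the driver still holds the sole reference, i.e. the stack
has already freed every skb frag that pointed into it.

  #include <linux/mm.h>
  #include <linux/topology.h>

  /* Sketch only: decide whether an RX page can be recycled into the
   * ring, or whether a fresh order-0 page must be allocated.
   */
  static bool rx_page_reusable(struct page *page)
  {
          /* page_count() == 1: no skb still references the page, so
           * the driver owns it exclusively and may reuse it.
           */
          return page_count(page) == 1 &&
                 !page_is_pfmemalloc(page) &&        /* not from emergency reserves */
                 page_to_nid(page) == numa_mem_id(); /* stay NUMA-local */
  }

With a ring of N slots, at most N pages are sitting around waiting to
pass this test, which is why the ring size acts as the effective
recycle-cache size.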


> A single TCP flow can easily have more than 1024 MSS waiting in its
> receive queue (the typical receive window on Linux is 6MB/2).

So you do need to increase the page-"cache" size, and you need it for
real-life cases. Interesting.
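
Rough numbers to back that up (assuming a ~1500 byte MSS, my
arithmetic, not Eric's): a 3MB receive window can hold
3*1024*1024 / 1500 ~= 2000 segments in flight, roughly twice the 1024
default RX slots shown above, so the recycle test keeps missing and the
driver falls back to fresh page allocations. Bumping the ring, e.g.:

  $ ethtool -G p1p1 rx 4096

should let the recycle hit rate recover (p1p1 being the interface from
Tariq's output above).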


> I mentioned that having a slightly inflated skb->truesize might have
> an impact in some workloads (charging 2048 bytes per MSS instead of
> 1536), but this is not related to mlx4 and should be tweaked in the
> TCP stack instead, since this 2048-byte (half a page on x86) strategy
> is now widespread.
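
Back-of-envelope on that charge (again my numbers): truesize of 2048
vs 1536 bytes per MSS is a 4/3 inflation, so a receiver buffering
~2000 segments (the 3MB window above) gets charged ~4MB of rcvbuf
instead of ~3MB. Put differently, the window that fits under a given
sk_rcvbuf limit shrinks by about 25%.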


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 29+ messages
2017-02-09 13:58 [PATCH v2 net-next 00/14] mlx4: order-0 allocations and page recycling Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 01/14] mlx4: use __skb_fill_page_desc() Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 02/14] mlx4: dma_dir is a mlx4_en_priv attribute Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 03/14] mlx4: remove order field from mlx4_en_frag_info Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 04/14] mlx4: get rid of frag_prefix_size Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 05/14] mlx4: rx_headroom is a per port attribute Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 06/14] mlx4: reduce rx ring page_cache size Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 07/14] mlx4: removal of frag_sizes[] Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 08/14] mlx4: use order-0 pages for RX Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 09/14] mlx4: add page recycling in receive path Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 10/14] mlx4: add rx_alloc_pages counter in ethtool -S Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 11/14] mlx4: do not access rx_desc from mlx4_en_process_rx_cq() Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 12/14] mlx4: factorize page_address() calls Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 13/14] mlx4: make validate_loopback() more generic Eric Dumazet
2017-02-09 13:58 ` [PATCH v2 net-next 14/14] mlx4: remove duplicate code in mlx4_en_process_rx_cq() Eric Dumazet
2017-02-09 17:15   ` Saeed Mahameed
2017-02-09 17:26     ` Eric Dumazet
     [not found] ` <3c48eac5-0c4f-f43a-1d76-75399e5fc1b8@gmail.com>
2017-02-09 16:44   ` [PATCH v2 net-next 00/14] mlx4: order-0 allocations and page recycling Eric Dumazet
2017-02-09 16:49     ` Tariq Toukan
2017-02-09 16:56       ` Eric Dumazet
     [not found]         ` <8ffca63d-62f4-9d6b-fe06-20a0e28dc44d@gmail.com>
2017-02-12 15:32           ` Eric Dumazet
2017-02-12 17:24             ` Tariq Toukan
2017-02-12 16:31         ` Tariq Toukan
2017-02-12 20:57           ` Eric Dumazet
2017-02-12 22:38             ` Jesper Dangaard Brouer [this message]
2017-02-13  0:33               ` Eric Dumazet
     [not found] ` <9b098a3d-aec0-4085-2cd5-ea3819927071@mellanox.com>
     [not found]   ` <b7f2d119-3c84-b911-eeb4-880427299213@mellanox.com>
2017-02-12 16:48     ` Eric Dumazet
2017-02-13  8:50       ` Tariq Toukan
2017-02-13 19:33         ` Eric Dumazet
