From: Rick Jones <rick.jones2@hp.com>
To: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"leitao@linux.vnet.ibm.com" <leitao@linux.vnet.ibm.com>,
"amirv@mellanox.com" <amirv@mellanox.com>,
"yevgenyp@mellanox.co.il" <yevgenyp@mellanox.co.il>,
"klebers@linux.vnet.ibm.com" <klebers@linux.vnet.ibm.com>,
"anton@samba.org" <anton@samba.org>,
"brking@linux.vnet.ibm.com" <brking@linux.vnet.ibm.com>,
"ogerlitz@mellanox.com" <ogerlitz@mellanox.com>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
"davem@davemloft.net" <davem@davemloft.net>
Subject: Re: [PATCH] mlx4_en: map entire pages to increase throughput
Date: Mon, 16 Jul 2012 14:08:30 -0700
Message-ID: <500482CE.9000202@hp.com>
In-Reply-To: <20120716204717.GA16137@oc1711230544.ibm.com>
>> I was thinking more along the lines of an additional comparison,
>> explicitly using netperf TCP_RR or something like it, not just the
>> packets per second from a bulk transfer test.
>>
>> rick
> I used a uperf profile that is similar to TCP_RR. It writes, then reads
> some bytes. I kept the TCP_NODELAY flag.
>
> Without the patch, I saw the following:
>
> packet size    ops/s    Gb/s
>           1   337024   0.0027
>          90   276620   0.199
>         900   190455   1.37
>        4000    68863   2.20
>        9000    45638   3.29
>       60000     9409   4.52
>
> With the patch:
>
> packet size    ops/s    Gb/s
>           1   451738   0.0036
>          90   345682   0.248
>         900   272258   1.96
>        4000   127055   4.07
>        9000   106614   7.68
>       60000    30671   14.72
>
So, on the surface it looks like it did good things for PPS, though it
would be nice to know what the CPU utilizations/service demands were as
a sanity check - does uperf not have that sort of functionality?
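For reference, this is roughly the sort of additional comparison I had
in mind - a sketch only, with <remote> as a placeholder and netserver
assumed to be already running on the other host; -c/-C request local
and remote CPU utilization (and hence service demand), and the
test-specific -r sets the request/response sizes:
  netperf -H <remote> -t TCP_RR -l 30 -c -C -- -r 1,1
The transaction rate there should be broadly comparable to the uperf
ops/s, and the service demand figures show whether the extra PPS is
being bought with extra CPU.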
I'm guessing there were several writes at a time for the 1 byte packet
size (sic - that is payload, not packet, and without TCP_NODELAY not
even payload necessarily). How many writes does it have outstanding
before it does a read? And does it take care to build up to that number
of writes to avoid batching during slow start, even with TCP_NODELAY set?
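With netperf the number of outstanding transactions can be made
explicit - a sketch, assuming a netperf built with --enable-burst,
where the test-specific -b puts that many additional transactions in
flight beyond the usual single one and -D sets TCP_NODELAY:
  netperf -H <remote> -t TCP_RR -l 30 -c -C -- -r 1,1 -D -b 16
which is the rough analogue of having a pile of writes outstanding
before a read completes; it would be good to know what the uperf
profile does there.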
rick jones
Thread overview: 8+ messages
[not found] <1342458113-10384-1-git-send-email-cascardo@linux.vnet.ibm.com>
[not found] ` <50044F1D.6000703@hp.com>
2012-07-16 19:06 ` [PATCH] mlx4_en: map entire pages to increase throughput Thadeu Lima de Souza Cascardo
2012-07-16 19:42 ` Rick Jones
2012-07-16 20:36 ` Or Gerlitz
2012-07-16 20:43 ` Or Gerlitz
2012-07-16 20:57 ` Thadeu Lima de Souza Cascardo
2012-07-18 14:59 ` Or Gerlitz
2012-07-16 20:47 ` Thadeu Lima de Souza Cascardo
2012-07-16 21:08 ` Rick Jones [this message]