netdev.vger.kernel.org archive mirror
From: "J.Hwan Kim" <frog1120@gmail.com>
To: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: netdev <netdev@vger.kernel.org>
Subject: Re: intel 82599 multi-port performance
Date: Tue, 27 Sep 2011 09:45:02 +0900	[thread overview]
Message-ID: <4E811C8E.8020508@gmail.com> (raw)
In-Reply-To: <4E80A2AB.2040206@intel.com>

On 2011-09-27 01:04, Alexander Duyck wrote:
> On 09/26/2011 08:42 AM, J.Hwan.Kim wrote:
>>> On 2011-09-26 23:20, Chris Friesen wrote:
>>> On 09/26/2011 04:26 AM, J.Hwan Kim wrote:
>>>> Hi, everyone
>>>>
>>>> Now, I'm testing a network card including intel 82599.
>>>> In our experiment, with the driver modified with ixgbe and multi-port
>>>> enabled,
>>>
>>> What do you mean by "modified with ixgbe and multi-port enabled"? You
>>> shouldn't need to do anything special to use both ports.
>>>
>>>> rx performance of each port with 10Gbps of 64bytes frame is
>>>> a half than when only 1 port is used.
>>>
>>> Sounds like a cpu limitation. What is your cpu usage? How are your
>>> interrupts routed? Are you using multiple rx queues?
>>>
>>
>> Our server is a 2.4 GHz Xeon with 8 cores.
>> I'm using 4 RSS queues per port and distributed their interrupts
>> across different cores.
>> I checked the CPU utilization with top; I don't think it is a CPU
>> limitation problem.
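As an aside, the per-queue interrupt distribution described above is usually done by writing a CPU bitmask to /proc/irq/&lt;n&gt;/smp_affinity. A minimal sketch of computing those hex masks, assuming one core per queue with port 0's four queues on cores 0-3 and port 1's on cores 4-7 (the actual IRQ numbers and topology are system-specific):

```python
# Compute smp_affinity hex masks for pinning RSS queue IRQs to cores.
# Assumption: 8-core box, 4 queues per port, one dedicated core per queue.
def affinity_mask(core: int) -> str:
    """Hex bitmask with only the given core's bit set."""
    return format(1 << core, "x")

port0_masks = [affinity_mask(c) for c in range(0, 4)]
port1_masks = [affinity_mask(c) for c in range(4, 8)]
print(port0_masks)  # ['1', '2', '4', '8']
print(port1_masks)  # ['10', '20', '40', '80']
# Each mask would then be echoed into /proc/irq/<irq>/smp_affinity
# for the corresponding queue's interrupt.
```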
>
> What kind of rates are you seeing on a single port versus multiple 
> ports?  There are multiple possibilities in terms of what could be 
> limiting your performance.
>

I tested with 64-byte frames at 10 Gbps.
With the modified ixgbe driver, a single port received about 92% of the 
packets at the driver level; with 2 ports active, each port received only 
around 42%.
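For context, the packet rate those percentages are measured against can be derived from the Ethernet framing: each 64-byte frame carries 20 bytes of on-wire overhead (8-byte preamble plus 12-byte inter-frame gap), so 10GbE line rate for minimum-size frames works out as follows (standard framing assumed):

```python
# Theoretical 10GbE line rate for minimum-size (64-byte) frames.
LINE_RATE_BPS = 10_000_000_000
FRAME_BYTES = 64          # Ethernet frame including CRC
OVERHEAD_BYTES = 8 + 12   # preamble + inter-frame gap

pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(f"line rate: {pps / 1e6:.2f} Mpps per port")  # ~14.88 Mpps
```

So 92% of line rate corresponds to roughly 13.7 Mpps on one port, and 42% on each of two ports to roughly 12.5 Mpps total.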

> It sounds like you are using a single card, would that be correct?

Yes, I tested a single card with 2 ports.

> If you are running close to line rate on both ports this could be 
> causing you to saturate the PCIe x8 link.  If you have a second card 
> available you may want to try installing that in a secondary Gen2 PCIe 
> slot and seeing if you can improve the performance by using 2 PCIe 
> slots instead of one.

I tested that as well: with 2 cards, the performance of each port is 
almost the same as with a single port (i.e. maximum performance).
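That result is consistent with the PCIe-saturation explanation above. A rough budget, assuming a Gen2 x8 link (5 GT/s per lane, 8b/10b encoding), around 24 bytes of TLP header/framing per DMA write, and a 16-byte rx descriptor write-back per packet (real overheads vary by chipset, driver batching, and flow-control traffic):

```python
# Back-of-the-envelope PCIe Gen2 x8 budget vs. two 10GbE ports of
# 64-byte frames. Overhead figures below are illustrative assumptions.
lanes = 8
gt_per_lane = 5e9                      # Gen2: 5 GT/s per lane
raw_bps = lanes * gt_per_lane * 8 / 10 # 8b/10b encoding -> 32 Gb/s/direction

pps_per_port = 14.88e6                 # 10GbE 64-byte line rate
ports = 2
tlp_overhead = 24                      # per-TLP header/framing (assumed)
desc_bytes = 16                        # rx descriptor write-back (assumed)

# One TLP for the packet data, one for the descriptor write-back.
bytes_per_pkt = (64 + tlp_overhead) + (desc_bytes + tlp_overhead)
need_bps = ports * pps_per_port * bytes_per_pkt * 8
print(f"raw link: {raw_bps / 1e9:.1f} Gb/s, "
      f"estimated demand: {need_bps / 1e9:.1f} Gb/s")
```

Under these assumptions the estimated demand lands within a few percent of the raw link rate, so two line-rate ports of minimum-size frames can plausibly saturate a single x8 slot, while splitting the ports across two slots removes the bottleneck.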


Thread overview: 14+ messages
2011-09-26 10:26 intel 82599 multi-port performance J.Hwan Kim
2011-09-26 14:20 ` Chris Friesen
2011-09-26 15:42   ` J.Hwan.Kim
2011-09-26 16:04     ` Alexander Duyck
2011-09-26 16:40       ` Chris Friesen
2011-09-26 17:24         ` [E1000-devel] " Ben Greear
2011-09-26 17:46           ` Chris Friesen
2011-09-26 17:57             ` Ben Greear
2011-09-27  0:45       ` J.Hwan Kim [this message]
2011-09-27 15:30         ` Martin Millnert
2011-09-27 17:14         ` Alexander Duyck
2011-09-27 22:57           ` Chris Friesen
2011-09-26 18:16     ` Rick Jones
2011-09-27  0:39       ` J.Hwan Kim
