From: Brice Goglin <brice@myri.com>
To: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Cc: Jeff Garzik <jeff@garzik.org>,
netdev@vger.kernel.org, "David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH 0/3] myri10ge Large Receive Offload
Date: Sat, 30 Sep 2006 23:39:04 +0200
Message-ID: <451EE3F8.20202@myri.com>
In-Reply-To: <20060930093800.GA19549@2ka.mipt.ru>
Evgeniy Polyakov wrote:
> On Sat, Sep 30, 2006 at 12:16:44AM +0200, Brice Goglin (brice@myri.com) wrote:
>
>> Jeff Garzik wrote:
>>
>>> Brice Goglin wrote:
>>>
>>>> The complete driver code in our CVS actually also supports high-order
>>>> allocations instead of single physical pages, since it significantly
>>>> increases performance. Order=2 allows us to receive standard frames
>>>> at line rate even on low-end hardware such as an AMD Athlon(tm) 64 X2
>>>> Dual Core Processor 3800+ (2.0GHz). Some customers might not care much
>>>> about memory fragmentation if the performance is better.
>>>>
>>>> But, since high-order allocations are generally considered a bad idea,
>>>> we do not include the relevant code in the following patch for inclusion
>>>> in Linux. Here, we simply pass order=0 to all page allocation routines.
>>>> If necessary, I could drop the remaining references to high-order
>>>> allocations (in particular, replace alloc_pages() with alloc_page()),
>>>> but I'd rather keep it as is.
>>>>
>>>> If high-order allocations are ever considered OK under some
>>>> circumstances, we could send an additional patch (a module parameter
>>>> would be used to switch from single physical pages to high-order pages).
>>>>
>> Any comments about what I was saying about high-order allocations above?
>>
>
> It is quite strange that you see such a noticeable speed degradation after
> switching from higher-order to 0-order allocations; could you specify where
> the observed bottleneck in the network stack is?
>
The bottleneck is not in the network stack; it is simply related to the
number of page allocations that are required. Since we store multiple
fragments in the same page, with MTU=1500 we need one order-0 allocation
every 2 fragments, whereas we need one order-2 allocation every 8
fragments. IIRC, we observed about 20% higher throughput on the receive
side when switching from order=0 to order=2 (7.5 Gbit/s -> 9.3 Gbit/s with
roughly the same CPU usage).
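
To make the arithmetic concrete, here is a minimal, hypothetical sketch of
the refill logic described above. The names (myri10ge_rx_buf,
MYRI10GE_ALLOC_ORDER, myri10ge_get_frag) and the 2048-byte fragment size
are made up for illustration and are not taken from the actual driver;
DMA mapping and per-fragment page reference counting are omitted:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

#define MYRI10GE_ALLOC_ORDER	2	/* 0 in the submitted patch */
#define MYRI10GE_ALLOC_SIZE	(PAGE_SIZE << MYRI10GE_ALLOC_ORDER)
#define MYRI10GE_FRAG_SIZE	2048	/* large enough for an MTU=1500 frame */

struct myri10ge_rx_buf {
	struct page *page;	/* backing (possibly high-order) page */
	unsigned int offset;	/* next free fragment within that page */
};

/* Hand out one receive fragment, allocating a fresh page only when the
 * current one is exhausted.  In a real driver the exhausted page would
 * stay referenced by the skbs using its fragments; that refcounting is
 * left out here. */
static int myri10ge_get_frag(struct myri10ge_rx_buf *rx,
			     struct page **page, unsigned int *offset)
{
	if (!rx->page ||
	    rx->offset + MYRI10GE_FRAG_SIZE > MYRI10GE_ALLOC_SIZE) {
		rx->page = alloc_pages(GFP_ATOMIC, MYRI10GE_ALLOC_ORDER);
		if (!rx->page)
			return -ENOMEM;
		rx->offset = 0;
	}
	*page = rx->page;
	*offset = rx->offset;
	rx->offset += MYRI10GE_FRAG_SIZE;
	/* order=0: one alloc_pages() call per 2 fragments;
	 * order=2: one call per 8 fragments. */
	return 0;
}

With PAGE_SIZE=4096 and 2048-byte fragments, order=0 yields 2 fragments per
allocation while order=2 yields 8, hence the 4x reduction in calls to the
page allocator on the refill path.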
Brice