From: John Fastabend <john.fastabend@gmail.com>
To: "jsullivan@opensourcedevel.com" <jsullivan@opensourcedevel.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>, netdev@vger.kernel.org
Subject: Re: Drops in qdisc on ifb interface
Date: Thu, 28 May 2015 08:45:50 -0700 [thread overview]
Message-ID: <5567382E.4010706@gmail.com> (raw)
In-Reply-To: <746223891.124020.1432827038389.JavaMail.open-xchange@oxuslxltgw11.lxa.perfora.net>
On 05/28/2015 08:30 AM, jsullivan@opensourcedevel.com wrote:
>
>> On May 28, 2015 at 11:14 AM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>>
>>
>> On Thu, 2015-05-28 at 10:38 -0400, jsullivan@opensourcedevel.com wrote:
>>
> <snip>
>> IFB still has a long way to go before being efficient.
>>
>> In the meantime, you could play with the following patch, and
>> set /sys/class/net/eth0/gro_timeout to 20000.
>>
>> This way, GRO aggregation will work even at 1Gbps, and your IFB will
>> get big GRO packets instead of single-MSS segments.
>>
>> Both IFB and the IP/TCP stack will have less work to do,
>> and the receiver will send fewer ACK packets as well.
>>
>> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
>> index f287186192bb655ba2dc1a205fb251351d593e98..c37f6657c047d3eb9bd72b647572edd53b1881ac 100644
>> --- a/drivers/net/ethernet/intel/igb/igb_main.c
>> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
>> @@ -151,7 +151,7 @@ static void igb_setup_dca(struct igb_adapter *);
>>  #endif /* CONFIG_IGB_DCA */
> <snip>
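
Just to spell out Eric's suggestion above as a command (a sketch only; the
exact sysfs attribute name depends on his patch, so treat the path as an
assumption and substitute your ingress NIC for eth0):

  # per Eric's note: give GRO a window (20000 here) to aggregate segments
  echo 20000 > /sys/class/net/eth0/gro_timeout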
>
> Interesting, but this is destined to become a critical production system for a
> high-profile, internationally recognized product, so I am hesitant to patch. I
> doubt I can convince my company to do it, but is improving IFB the sort of
> development effort that could be sponsored and then executed in a moderately
> short period of time? Thanks - John
> --
If you're experimenting, one thing you could do is create many
ifb devices and load balance across them from tc. I'm not
sure whether this would be practical in your setup, but it might
be worth trying.
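
For example, a rough sketch of the kind of thing I mean (untested; eth0,
ifb0, and ifb1 are placeholders, and the split key is just an illustration --
pick whatever match makes sense for your traffic):

  # create and bring up two ifb devices
  modprobe ifb numifbs=2
  ip link set dev ifb0 up
  ip link set dev ifb1 up

  # attach an ingress qdisc to the physical interface
  tc qdisc add dev eth0 handle ffff: ingress

  # crude 2-way split on the top bit of the IP source address,
  # redirecting each half to a different ifb
  tc filter add dev eth0 parent ffff: protocol ip u32 \
      match ip src 0.0.0.0/1 \
      action mirred egress redirect dev ifb0
  tc filter add dev eth0 parent ffff: protocol ip u32 \
      match ip src 128.0.0.0/1 \
      action mirred egress redirect dev ifb1

Then you would hang your shaping setup off ifb0 and ifb1 instead of a
single ifb.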
One thing I've been debating adding is the ability to match
on the current cpu_id in tc, which would allow you to load balance by
CPU. I could send you a patch if you wanted to test it. I would
expect this to help somewhat with the 'single queue' issue, but sorry,
I haven't had time to test it out myself yet.
.John
--
John Fastabend, Intel Corporation
Thread overview: 16+ messages
2015-05-25 20:05 Drops in qdisc on ifb interface John A. Sullivan III
2015-05-25 22:31 ` Eric Dumazet
2015-05-26 2:52 ` John A. Sullivan III
2015-05-26 3:17 ` Eric Dumazet
2015-05-28 14:38 ` jsullivan
2015-05-28 15:14 ` Eric Dumazet
2015-05-28 15:30 ` jsullivan
2015-05-28 15:45 ` John Fastabend [this message]
2015-05-28 16:26 ` Eric Dumazet
2015-05-28 16:33 ` jsullivan
2015-05-28 17:17 ` Eric Dumazet
2015-05-28 17:31 ` jsullivan
2015-05-28 17:49 ` Eric Dumazet
2015-05-28 17:54 ` jsullivan
2015-05-28 16:28 ` jsullivan
2015-05-28 15:51 ` Eric Dumazet