netdev.vger.kernel.org archive mirror
From: John Fastabend <john.r.fastabend@intel.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Dave Taht <dave.taht@gmail.com>,
	Tom Herbert <therbert@google.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [PATCH v4 0/10] bql: Byte Queue Limits
Date: Tue, 29 Nov 2011 00:03:40 -0800	[thread overview]
Message-ID: <4ED491DC.8030304@intel.com> (raw)
In-Reply-To: <1322552733.2970.78.camel@edumazet-laptop>

On 11/28/2011 11:45 PM, Eric Dumazet wrote:
> On Monday, November 28, 2011 at 23:23 -0800, John Fastabend wrote:
> 
>> I wonder if we should consider enabling TSO/GSO per queue or per traffic
>> class on devices that support this. At least in devices that support
>> multiple traffic classes it seems to be a common usage case to put bulk
>> storage traffic (iSCSI) on a traffic class and low latency traffic on a
>> separate traffic class, VoIP for example.
>>
> 
> It all depends on how device itself is doing its mux from queues to
> ethernet wire. If queue 0 starts transmit of one 64KB 'super packet',
> will queue 1 be able to insert a little frame between the frames of queue
> 0 ?
> 

Yes, this works as you would hope, at least on the 82599 device
supported by ixgbe. 'Super packets' from one queue can and will be
interleaved with frames from other queues, possibly standard-sized
packets, depending on the currently configured arbitration scheme. So
with multiple traffic classes we can make a link-strict 'low latency'
class that transmits frames as soon as they are available.

I would also expect this to work correctly on any of the so-called
CNA (converged network adapter) devices, the bnx2x devices for
example. I'll probably see what can be done after finishing up some
other things first.
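For reference, the kind of traffic-class split described above can be
sketched with mqprio plus ethtool. This is illustrative only: the device
name, queue counts, and priority map are assumptions, and note that the
ethtool TSO/GSO knobs today apply to the whole device, not per class --
which is exactly what motivates the per-queue/per-class idea above.

```shell
# Illustrative sketch, assuming an 8-queue NIC exposed as eth0.
# Create two hardware traffic classes with mqprio: TC0 gets queues 0-3
# for bulk storage (iSCSI) traffic, TC1 gets queues 4-7 for low-latency
# traffic (e.g. VoIP marked with skb priority 5). 'hw 1' asks the
# driver to program the mapping into the hardware.
tc qdisc add dev eth0 root mqprio num_tc 2 \
    map 0 0 0 0 0 1 0 0 \
    queues 4@0 4@4 hw 1

# TSO/GSO can currently only be toggled device-wide:
ethtool -K eth0 tso off gso off
```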


Thread overview: 26+ messages
2011-11-29  2:32 [PATCH v4 0/10] bql: Byte Queue Limits Tom Herbert
2011-11-29  4:23 ` Dave Taht
2011-11-29  7:02   ` Eric Dumazet
2011-11-29  7:07     ` Eric Dumazet
2011-11-29  7:23     ` John Fastabend
2011-11-29  7:45       ` Eric Dumazet
2011-11-29  8:03         ` John Fastabend [this message]
2011-11-29  8:37       ` Dave Taht
2011-11-29  8:43         ` Eric Dumazet
2011-11-29  8:51           ` Dave Taht
2011-11-29 14:57             ` Eric Dumazet
2011-11-29 16:24               ` Dave Taht
2011-11-29 17:06                 ` David Laight
2011-11-29 14:24     ` Ben Hutchings
2011-11-29 14:29       ` Eric Dumazet
2011-11-29 16:06         ` Dave Taht
2011-11-29 16:41           ` Ben Hutchings
2011-11-29 17:28     ` Rick Jones
2011-11-29 16:46 ` Eric Dumazet
2011-11-29 17:47   ` David Miller
2011-11-29 18:31     ` Tom Herbert
2011-12-01 16:50       ` Kirill Smelkov
2011-12-01 18:00         ` David Miller
2011-12-02 11:22           ` Kirill Smelkov
2011-12-02 11:57             ` Eric Dumazet
2011-12-02 12:26               ` Kirill Smelkov
