* Latency difference between fifo and pfifo_fast
[not found] <8f05fdb0-6e4c-4adf-b8d1-bd67a0dc114f@jasiiieee>
@ 2011-12-06 4:10 ` John A. Sullivan III
2011-12-06 6:02 ` Eric Dumazet
0 siblings, 1 reply; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-06 4:10 UTC (permalink / raw)
To: netdev
Hello, all. We are trying to minimize latency on our iSCSI SAN. The network is entirely dedicated to the iSCSI traffic. Since all the traffic is the same, would it make sense to change the qdisc for that interface to fifo from the default pfifo_fast or is the latency difference between the two completely negligible? Thanks - John
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 4:10 ` Latency difference between fifo and pfifo_fast John A. Sullivan III
@ 2011-12-06 6:02 ` Eric Dumazet
2011-12-06 6:29 ` Eric Dumazet
0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2011-12-06 6:02 UTC (permalink / raw)
To: John A. Sullivan III; +Cc: netdev
On Monday, December 5, 2011 at 23:10 -0500, John A. Sullivan III wrote:
> Hello, all. We are trying to minimize latency on our iSCSI SAN. The
> network is entirely dedicated to the iSCSI traffic. Since all the
> traffic is the same, would it make sense to change the qdisc for that
> interface to fifo from the default pfifo_fast or is the latency
> difference between the two completely negligible? Thanks - John
A very small difference indeed. How many packets per second are
expected? What kind of NIC are you using?
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 6:02 ` Eric Dumazet
@ 2011-12-06 6:29 ` Eric Dumazet
2011-12-06 8:39 ` John A. Sullivan III
0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2011-12-06 6:29 UTC (permalink / raw)
To: John A. Sullivan III; +Cc: netdev
On Tuesday, December 6, 2011 at 07:02 +0100, Eric Dumazet wrote:
> On Monday, December 5, 2011 at 23:10 -0500, John A. Sullivan III wrote:
> > Hello, all. We are trying to minimize latency on our iSCSI SAN. The
> > network is entirely dedicated to the iSCSI traffic. Since all the
> > traffic is the same, would it make sense to change the qdisc for that
> > interface to fifo from the default pfifo_fast or is the latency
> > difference between the two completely negligible? Thanks - John
>
> A very small difference indeed. How many packets per second are
> expected? What kind of NIC are you using?
>
To really remove a possible source of latency, you could remove qdisc
layer...
ifconfig eth2 txqueuelen 0
tc qdisc add dev eth2 root pfifo
tc qdisc del dev eth2 root
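As a quick sanity check afterwards (a sketch only, assuming the same eth2 interface as above), the qdisc layer really should be gone:

tc -s qdisc show dev eth2

On most kernels this reports noqueue (or no root qdisc entry at all, depending on kernel and tc version) rather than pfifo_fast.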
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 6:29 ` Eric Dumazet
@ 2011-12-06 8:39 ` John A. Sullivan III
2011-12-06 8:51 ` Eric Dumazet
0 siblings, 1 reply; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-06 8:39 UTC (permalink / raw)
To: Eric Dumazet; +Cc: netdev
----- Original Message -----
> From: "Eric Dumazet" <eric.dumazet@gmail.com>
> To: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
> Cc: netdev@vger.kernel.org
> Sent: Tuesday, December 6, 2011 1:29:02 AM
> Subject: Re: Latency difference between fifo and pfifo_fast
>
> On Tuesday, December 6, 2011 at 07:02 +0100, Eric Dumazet wrote:
> > On Monday, December 5, 2011 at 23:10 -0500, John A. Sullivan III
> > wrote:
> > > Hello, all. We are trying to minimize latency on our iSCSI SAN.
> > > The
> > > network is entirely dedicated to the iSCSI traffic. Since all
> > > the
> > > traffic is the same, would it make sense to change the qdisc for
> > > that
> > > interface to fifo from the default pfifo_fast or is the latency
> > > difference between the two completely negligible? Thanks - John
> >
> > A very small difference indeed. How many packets per second are
> > expected? What kind of NIC are you using?
> >
>
> To really remove a possible source of latency, you could remove qdisc
> layer...
>
> ifconfig eth2 txqueuelen 0
> tc qdisc add dev eth2 root pfifo
> tc qdisc del dev eth2 root
>
>
>
>
Really? I didn't know one could do that. Thanks. However, with no queue length, do I have a significant risk of dropping packets? To answer your other response's question, these are Intel quad port e1000 cards. We are frequently pushing them to near line speed so 1,000,000,000 / 1534 / 8 = 81,486 pps - John
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 8:39 ` John A. Sullivan III
@ 2011-12-06 8:51 ` Eric Dumazet
2011-12-06 18:20 ` Rick Jones
0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2011-12-06 8:51 UTC (permalink / raw)
To: John A. Sullivan III; +Cc: netdev
On Tuesday, December 6, 2011 at 03:39 -0500, John A. Sullivan III wrote:
> > ifconfig eth2 txqueuelen 0
> > tc qdisc add dev eth2 root pfifo
> > tc qdisc del dev eth2 root
> >
> >
> >
> >
> Really? I didn't know one could do that. Thanks. However, with no
> queue length, do I have a significant risk of dropping packets? To
> answer your other response's question, these are Intel quad port e1000
> cards. We are frequently pushing them to near line speed so
> 1,000,000,000 / 1534 / 8 = 81,486 pps - John
You can remove qdisc layer, since NIC itself has a TX ring queue
(check exact value with ethtool -g ethX)
# ethtool -g eth2
Ring parameters for eth2:
Pre-set maximums:
RX: 4078
RX Mini: 0
RX Jumbo: 0
TX: 4078
Current hardware settings:
RX: 254
RX Mini: 0
RX Jumbo: 0
TX: 4078 ---- HERE ----
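If that 4078-entry TX ring itself looks too deep for your latency target, the ring sizes can usually be changed as well. A sketch, where eth2 and the value 256 are only illustrative and the driver will round or reject values outside its supported range:

ethtool -G eth2 tx 256
ethtool -g eth2

Re-running ethtool -g shows what the driver actually accepted.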
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 8:51 ` Eric Dumazet
@ 2011-12-06 18:20 ` Rick Jones
2011-12-06 18:39 ` Dave Taht
0 siblings, 1 reply; 16+ messages in thread
From: Rick Jones @ 2011-12-06 18:20 UTC (permalink / raw)
To: John A. Sullivan III; +Cc: Eric Dumazet, netdev
On 12/06/2011 12:51 AM, Eric Dumazet wrote:
> On Tuesday, December 6, 2011 at 03:39 -0500, John A. Sullivan III wrote:
>
>>> ifconfig eth2 txqueuelen 0
>>> tc qdisc add dev eth2 root pfifo
>>> tc qdisc del dev eth2 root
>>>
>>>
>>>
>>>
>> Really? I didn't know one could do that. Thanks. However, with no
>> queue length, do I have a significant risk of dropping packets? To
>> answer your other response's question, these are Intel quad port e1000
>> cards. We are frequently pushing them to near line speed so
>> 1,000,000,000 / 1534 / 8 = 81,486 pps - John
>
> You can remove qdisc layer, since NIC itself has a TX ring queue
>
> (check exact value with ethtool -g ethX)
>
> # ethtool -g eth2
> Ring parameters for eth2:
> Pre-set maximums:
> RX: 4078
> RX Mini: 0
> RX Jumbo: 0
> TX: 4078
> Current hardware settings:
> RX: 254
> RX Mini: 0
> RX Jumbo: 0
> TX: 4078 ---- HERE ----
And while you are down at the NIC, if every microsecond is precious (no
matter how close to epsilon compared to the latencies of spinning rust
:) you might consider disabling interrupt coalescing via ethtool -C.
rick jones
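For the e1000-class NICs mentioned earlier, that would look roughly like the following. A sketch only, since the set of coalescing parameters each driver accepts differs, so check what the read side reports first:

ethtool -c eth2
ethtool -C eth2 rx-usecs 0

Setting rx-usecs to 0 turns interrupt throttling off on many Intel drivers, trading extra CPU and interrupt load for lower per-packet latency.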
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 18:20 ` Rick Jones
@ 2011-12-06 18:39 ` Dave Taht
2011-12-06 19:44 ` John A. Sullivan III
0 siblings, 1 reply; 16+ messages in thread
From: Dave Taht @ 2011-12-06 18:39 UTC (permalink / raw)
To: Rick Jones; +Cc: John A. Sullivan III, Eric Dumazet, netdev
On Tue, Dec 6, 2011 at 7:20 PM, Rick Jones <rick.jones2@hp.com> wrote:
> On 12/06/2011 12:51 AM, Eric Dumazet wrote:
>>
>> On Tuesday, December 6, 2011 at 03:39 -0500, John A. Sullivan III wrote:
>>
>>>> ifconfig eth2 txqueuelen 0
>>>> tc qdisc add dev eth2 root pfifo
>>>> tc qdisc del dev eth2 root
>>>>
>>>>
>>>>
>>>>
>>> Really? I didn't know one could do that. Thanks. However, with no
>>> queue length, do I have a significant risk of dropping packets? To
>>> answer your other response's question, these are Intel quad port e1000
>>> cards. We are frequently pushing them to near line speed so
>>> 1,000,000,000 / 1534 / 8 = 81,486 pps - John
>>
>>
>> You can remove qdisc layer, since NIC itself has a TX ring queue
>>
>> (check exact value with ethtool -g ethX)
>>
>> # ethtool -g eth2
>> Ring parameters for eth2:
>> Pre-set maximums:
>> RX: 4078
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: 4078
>> Current hardware settings:
>> RX: 254
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: 4078 ---- HERE ----
>
>
> And while you are down at the NIC, if every microsecond is precious (no
> matter how close to epsilon compared to the latencies of spinning rust :)
> you might consider disabling interrupt coalescing via ethtool -C.
>
> rick jones
Ya know, me being me, and if latency is your real problem, I can't help
but think you'd do better by reducing those tx queues enormously;
applying QFQ and maybe something like RED on top would balance
out the differences between flows and result in a net benefit.
I realize that you are struggling to achieve line rate in the first place...
but from where I sit (with asbestos suit on), it would be an interesting
experiment. (I have no data on how much cpu this stuff uses at these
sort of speeds)
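In concrete terms that experiment might look roughly like the sketch below, assuming a kernel and tc recent enough to carry QFQ and the flow classifier; eth2, the four-class divisor and the RED numbers are purely illustrative and would need tuning for a GigE iSCSI link:

tc qdisc add dev eth2 root handle 1: qfq
for i in 1 2 3 4; do
    tc class add dev eth2 parent 1: classid 1:$i qfq weight 1 maxpkt 1514
    tc qdisc add dev eth2 parent 1:$i red limit 400000 min 30000 \
        max 90000 avpkt 1500 burst 34 probability 0.02 bandwidth 1000mbit
done
tc filter add dev eth2 parent 1: protocol ip prio 1 \
    flow hash keys src,dst,proto,proto-src,proto-dst divisor 4 baseclass 1:1

The flow filter hashes each 5-tuple into one of the four QFQ classes so competing iSCSI connections share the link fairly, and the per-class RED instances start dropping early instead of letting a deep FIFO build up. Combining this with a much smaller TX ring (ethtool -G, as shown earlier in the thread) keeps the latency from simply being pushed down into the NIC.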
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 18:39 ` Dave Taht
@ 2011-12-06 19:44 ` John A. Sullivan III
2011-12-07 13:04 ` Eric Dumazet
0 siblings, 1 reply; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-06 19:44 UTC (permalink / raw)
To: Dave Taht; +Cc: Eric Dumazet, netdev, Rick Jones
----- Original Message -----
> From: "Dave Taht" <dave.taht@gmail.com>
> To: "Rick Jones" <rick.jones2@hp.com>
> Cc: "John A. Sullivan III" <jsullivan@opensourcedevel.com>, "Eric Dumazet" <eric.dumazet@gmail.com>,
> netdev@vger.kernel.org
> Sent: Tuesday, December 6, 2011 1:39:13 PM
> Subject: Re: Latency difference between fifo and pfifo_fast
>
> On Tue, Dec 6, 2011 at 7:20 PM, Rick Jones <rick.jones2@hp.com>
> wrote:
> > On 12/06/2011 12:51 AM, Eric Dumazet wrote:
> >>
> >> On Tuesday, December 6, 2011 at 03:39 -0500, John A. Sullivan III
> >> wrote:
> >>
> >>>> ifconfig eth2 txqueuelen 0
> >>>> tc qdisc add dev eth2 root pfifo
> >>>> tc qdisc del dev eth2 root
> >>>>
> >>>>
> >>>>
> >>>>
> >>> Really? I didn't know one could do that. Thanks. However, with
> >>> no
> >>> queue length, do I have a significant risk of dropping packets?
> >>> To
> >>> answer your other response's question, these are Intel quad port
> >>> e1000
> >>> cards. We are frequently pushing them to near line speed so
> >>> 1,000,000,000 / 1534 / 8 = 81,486 pps - John
> >>
> >>
> >> You can remove qdisc layer, since NIC itself has a TX ring queue
> >>
> >> (check exact value with ethtool -g ethX)
> >>
> >> # ethtool -g eth2
> >> Ring parameters for eth2:
> >> Pre-set maximums:
> >> RX: 4078
> >> RX Mini: 0
> >> RX Jumbo: 0
> >> TX: 4078
> >> Current hardware settings:
> >> RX: 254
> >> RX Mini: 0
> >> RX Jumbo: 0
> >> TX: 4078 ---- HERE ----
> >
> >
> > And while you are down at the NIC, if every microsecond is precious
> > (no
> > matter how close to epsilon compared to the latencies of spinning
> > rust :)
> > you might consider disabling interrupt coalescing via ethtool -C.
> >
> > rick jones
>
> Ya know, me being me, and if latency is your real problem, I can't
> help
> but think you'd do better by reducing those tx queues enormously;
> applying QFQ and maybe something like RED on top would balance
> out the differences between flows and result in a net benefit.
>
> I realize that you are struggling to achieve line rate in the first
> place...
>
> but from where I sit (with asbestos suit on), it would be an
> interesting
> experiment. (I have no data on how much cpu this stuff uses at these
> sort of speeds)
> <snip>
>
Interesting. Would that still be true if all the traffic is the same, i.e., nothing but iSCSI packets on the network? Or would just dumping packets with minimal processing be fastest? Thanks - John
* Re: Latency difference between fifo and pfifo_fast
2011-12-06 19:44 ` John A. Sullivan III
@ 2011-12-07 13:04 ` Eric Dumazet
2011-12-07 13:27 ` Dave Taht
0 siblings, 1 reply; 16+ messages in thread
From: Eric Dumazet @ 2011-12-07 13:04 UTC (permalink / raw)
To: John A. Sullivan III; +Cc: Dave Taht, netdev, Rick Jones
On Tuesday, December 6, 2011 at 14:44 -0500, John A. Sullivan III wrote:
> Interesting. Would that still be true if all the traffic is the same,
> i.e., nothing but iSCSI packets on the network? Or would just dumping
> packets with minimal processing be fastest? Thanks - John
Dave focuses on fairness and latencies under ~20 ms (a typical (under)
provisioned ADSL (up)link shared by many (hostile) flows, with various
types of service).
I doubt this is your concern? You want high throughput more than low
latencies ...
Your workload probably sees latencies under _one_ ms, on a dedicated link
addressing a few targets.
If you have to use a Qdisc (and expensive packet classification), then
something is wrong in your iSCSI network connectivity :)
Please note that with BQL, the NIC TX ring size doesn’t matter, and you
could get "Virtual device ethX asks to queue packet!" warnings in your
message log.
So before removing Qdisc, you also want to make sure BQL is disabled for
your NIC device/queues.
(BQL is scheduled for linux-3.3)
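For reference, once BQL is in place it is controlled per TX queue through sysfs. A sketch of one commonly suggested way to effectively disable it, assuming a BQL-capable kernel and a single-queue eth2, is to raise limit_min so the dynamic byte limit can never throttle the queue:

cat /sys/class/net/eth2/queues/tx-0/byte_queue_limits/limit
echo 1000000000 > /sys/class/net/eth2/queues/tx-0/byte_queue_limits/limit_min

On a multiqueue NIC the same would have to be repeated for every tx-<n> directory.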
* Re: Latency difference between fifo and pfifo_fast
2011-12-07 13:04 ` Eric Dumazet
@ 2011-12-07 13:27 ` Dave Taht
2011-12-07 14:08 ` David Laight
0 siblings, 1 reply; 16+ messages in thread
From: Dave Taht @ 2011-12-07 13:27 UTC (permalink / raw)
To: Eric Dumazet; +Cc: John A. Sullivan III, netdev, Rick Jones
On Wed, Dec 7, 2011 at 2:04 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Tuesday, December 6, 2011 at 14:44 -0500, John A. Sullivan III wrote:
>> Interesting. Would that still be true if all the traffic is the same,
>> i.e., nothing but iSCSI packets on the network? Or would just dumping
>> packets with minimal processing be fastest? Thanks - John
>
> Dave focuses on fairness and latencies under ~20 ms (a typical (under)
> provisioned ADSL (up)link shared by many (hostile) flows, with various
> types of service).
True, that is my focus, but queuing theory applies at all time scales.
If it didn't, the universe, and not just the internet, would have melted
down long ago.
And I did ask specifically what sort of latencies he was trying to address.
If he's hovering at close to line rate (wow), and yet experiencing
serious delays on short traffic, perhaps what I describe below may apply.
> I doubt this is your concern? You want high throughput more than low
> latencies ...
My assumption is that your 'iSCSI' packets are TCP streams. If they aren't,
then some of what I say below does not apply, although I tend to be
a believer in FQ technologies for their effects on downstream buffering.
I freely confess to not grokking how iSCSI is deployed. My understanding
is that TCP is used to negotiate a virtual connection between two endpoints,
and there are usually very few endpoints - often just one.
1) TCP grabs all the bandwidth it can. If you have no packet loss,
it will eat more bandwidth, as rapidly as it can ramp up. Until it eventually
has packet loss.
Q) John indicated he didn't want any packet loss, so for starters I questioned
my assumption that he was using TCP; secondly, it was late and I was
feeling snarky. I honestly should stay in the .2 ms to 10000 ms range I'm
comfortable in.
2) Once you have one stream so completely dominating a connection,
it can starve other streams' attempts to ramp up.
> Your workload probably sees latencies under _one_ ms, on a dedicated link
> addressing a few targets.
That was my second question, basically, how many links are in use?
More than one introduces a head-of-line problem between flows.
> If you have to use a Qdisc (and expensive packet classification), then
> something is wrong in your iSCSI network connectivity :)
>
> Please note that with BQL, the NIC TX ring size doesn’t matter, and you
> could get "Virtual device ethX asks to queue packet!" warnings in your
> message log.
so his tx 4000 is 'about right', even without BQL?
>
> So before removing Qdisc, you also want to make sure BQL is disabled for
> your NIC device/queues.
> (BQL is scheduled for linux-3.3)
>
>
>
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
* RE: Latency difference between fifo and pfifo_fast
2011-12-07 13:27 ` Dave Taht
@ 2011-12-07 14:08 ` David Laight
2011-12-08 0:05 ` John A. Sullivan III
0 siblings, 1 reply; 16+ messages in thread
From: David Laight @ 2011-12-07 14:08 UTC (permalink / raw)
To: Dave Taht, Eric Dumazet; +Cc: John A. Sullivan III, netdev, Rick Jones
...
> If he's hovering at close to line rate (wow), and yet experiencing
> serious delays on short traffic, perhaps what I describe
> below may apply.
...
> 1) TCP grabs all the bandwidth it can. If you have no packet loss,
> it will eat more bandwidth, as rapidly as it can ramp up.
> Until it eventually has packet loss.
The 'ramp up' may be part of the problem!
At a guess iSCSI is using the TCP connection to carry
many, separate, commands and responses. As such Nagle
will cause serious grief and is likely to be disabled.
TCP 'slow start' will apply whenever there is no unacked
data - which might be after any slight lull in the traffic.
IIRC (from looking at traces) Linux TCP will only send 4 data
packets following 'slow start' until it has received an ack.
Linux (at least some versions) will also delay sending an
ack until the next clock tick - rather than the traditional
scheme of always acking every other packet.
So if there are no responses, the requests can be delayed.
This will increase latency.
My suspicion (as I've said before) is that slow start
is broken for very low-latency local networks.
Might be worth disabling it - but that is a massive system-wide
switch.
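The restart-after-idle behaviour does have a narrower switch than disabling slow start altogether. A minimal sketch, assuming a kernel recent enough to expose the sysctl:

sysctl -w net.ipv4.tcp_slow_start_after_idle=0

This stops the congestion window from being reset after a lull in the traffic, but does not touch the initial slow start of a new connection.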
Very high packet rates can cause packet loss, but buying better
network infrastructure should mitigate that. In any case, 'slow
start' doesn't limit packet rate.
David
* Re: Latency difference between fifo and pfifo_fast
2011-12-08 0:05 ` John A. Sullivan III
@ 2011-12-07 23:27 ` Stephen Hemminger
2011-12-08 0:34 ` John A. Sullivan III
0 siblings, 1 reply; 16+ messages in thread
From: Stephen Hemminger @ 2011-12-07 23:27 UTC (permalink / raw)
To: John A. Sullivan III
Cc: David Laight, netdev, Rick Jones, Dave Taht, Eric Dumazet
> <grin> Sorry to have kicked up a storm! We really don't have a problem - just trying to optimize our environment. We have been told by our SAN vendor that, because of the 4KB limit on block size in Linux file systems, iSCSI connections for Linux file services are latency bound and not bandwidth bound. I'm not sure if I believe that based upon our traces where tag queueing seems to coalesce SCSI commands into larger blocks and we are able to achieve network saturation. I was just wondering, since it is all the same traffic and hence no need to separate into bands, if I should change the qdisc on those connections from pfifo_fast (which I assume needs to look at the TOS bits, sort into bands, and poll the separate bands) to fifo which I assume simply dumps packets on the wire. Thanks - John
Is this a shared network? TOS won't matter if it is only your traffic.
There are a number of route metrics that you can tweak to reduce TCP slow
start effects, like increasing the initial cwnd, etc.
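As a concrete example of the route-metric approach (a sketch in which the subnet, device and window values are purely illustrative, and initcwnd/initrwnd need a reasonably recent kernel):

ip route change 192.168.50.0/24 dev eth2 initcwnd 10 initrwnd 10
ip route show dev eth2

This raises the initial congestion and receive windows for that route only, so new connections ramp up faster.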
* Re: Latency difference between fifo and pfifo_fast
2011-12-08 0:34 ` John A. Sullivan III
@ 2011-12-07 23:49 ` Stephen Hemminger
2011-12-08 3:20 ` John A. Sullivan III
0 siblings, 1 reply; 16+ messages in thread
From: Stephen Hemminger @ 2011-12-07 23:49 UTC (permalink / raw)
To: John A. Sullivan III
Cc: David Laight, netdev, Rick Jones, Dave Taht, Eric Dumazet
> > Is this a shared network? TOS won't matter if it is only your
> > traffic.
> >
> > There are a number of route metrics that you can tweak to reduce
> > TCP slow start effects, like increasing the initial cwnd, etc.
> >
> It is a private network dedicated only to SAN traffic - a couple of SAN devices and some virtualization hosts - John
Therefore unless your switch is shared, playing with queueing and TOS
won't help reduce absolute latency.
You may be able to prioritize one host or SAN over another, though.
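If prioritizing one SAN target ever did become useful, a minimal sketch with the prio qdisc and a u32 filter (the 192.168.50.10 address is purely illustrative):

tc qdisc add dev eth2 root handle 1: prio
tc filter add dev eth2 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.50.10/32 flowid 1:1

Traffic to that one target lands in the highest-priority band; everything else keeps following the default priomap, exactly as pfifo_fast would have handled it.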
* Re: Latency difference between fifo and pfifo_fast
2011-12-07 14:08 ` David Laight
@ 2011-12-08 0:05 ` John A. Sullivan III
2011-12-07 23:27 ` Stephen Hemminger
0 siblings, 1 reply; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-08 0:05 UTC (permalink / raw)
To: David Laight; +Cc: netdev, Rick Jones, Dave Taht, Eric Dumazet
----- Original Message -----
> From: "David Laight" <David.Laight@ACULAB.COM>
> To: "Dave Taht" <dave.taht@gmail.com>, "Eric Dumazet" <eric.dumazet@gmail.com>
> Cc: "John A. Sullivan III" <jsullivan@opensourcedevel.com>, netdev@vger.kernel.org, "Rick Jones" <rick.jones2@hp.com>
> Sent: Wednesday, December 7, 2011 9:08:24 AM
> Subject: RE: Latency difference between fifo and pfifo_fast
>
>
> ...
> > If he's hovering at close to line rate (wow), and yet experiencing
> > serious delays on short traffic, perhaps what I describe
> > below may apply.
> ...
> > 1) TCP grabs all the bandwidth it can. If you have no packet loss,
> > it will eat more bandwidth, as rapidly as it can ramp up.
> > Until it eventually has packet loss.
>
> The 'ramp up' may be part of the problem!
> At a guess iSCSI is using the TCP connection to carry
> many, separate, commands and responses. As such Nagle
> will cause serious grief and is likely to be disabled.
>
> TCP 'slow start' will apply whenever there is no unacked
> data - which might be after any slight lull in the traffic.
> IIRC (from looking at traces) Linux TCP will only send 4 data
> packets following 'slow start' until it has received an ack.
> Linux (at least some versions) will also delay sending an
> ack until the next clock tick - rather than the traditional
> scheme of always acking every other packet.
>
> So if there are no responses, the requests can be delayed.
> This will increase latency.
>
> My suspicion (as I've said before) is that slow start
> is broken for very low-latency local networks.
> Might be worth disabling it - but that is a massive system-wide
> switch.
>
> Very high packet rates can cause packet loss, but buying better
> network infrastructure should mitigate that. In any case, 'slow
> start' doesn't limit packet rate.
>
> David
>
>
>
<grin> Sorry to have kicked up a storm! We really don't have a problem - just trying to optimize our environment. We have been told by our SAN vendor that, because of the 4KB limit on block size in Linux file systems, iSCSI connections for Linux file services are latency bound and not bandwidth bound. I'm not sure if I believe that based upon our traces where tag queueing seems to coalesce SCSI commands into larger blocks and we are able to achieve network saturation. I was just wondering, since it is all the same traffic and hence no need to separate into bands, if I should change the qdisc on those connections from pfifo_fast (which I assume needs to look at the TOS bits, sort into bands, and poll the separate bands) to fifo which I assume simply dumps packets on the wire. Thanks - John
* Re: Latency difference between fifo and pfifo_fast
2011-12-07 23:27 ` Stephen Hemminger
@ 2011-12-08 0:34 ` John A. Sullivan III
2011-12-07 23:49 ` Stephen Hemminger
0 siblings, 1 reply; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-08 0:34 UTC (permalink / raw)
To: Stephen Hemminger
Cc: David Laight, netdev, Rick Jones, Dave Taht, Eric Dumazet
----- Original Message -----
> From: "Stephen Hemminger" <shemminger@vyatta.com>
> To: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
> Cc: "David Laight" <David.Laight@ACULAB.COM>, netdev@vger.kernel.org, "Rick Jones" <rick.jones2@hp.com>, "Dave Taht"
> <dave.taht@gmail.com>, "Eric Dumazet" <eric.dumazet@gmail.com>
> Sent: Wednesday, December 7, 2011 6:27:09 PM
> Subject: Re: Latency difference between fifo and pfifo_fast
>
>
> > <grin> Sorry to have kicked up a storm! We really don't have a
> > problem - just trying to optimize our environment. We have been
> > told by our SAN vendor that, because of the 4KB limit on block
> > size in Linux file systems, iSCSI connections for Linux file
> > services are latency bound and not bandwidth bound. I'm not sure
> > if I believe that based upon our traces where tag queueing seems
> > to coalesce SCSI commands into larger blocks and we are able to
> > achieve network saturation. I was just wondering, since it is all
> > the same traffic and hence no need to separate into bands, if I
> > should change the qdisc on those connections from pfifo_fast
> > (which I assume needs to look at the TOS bits, sort into bands,
> > and poll the separate bands) to fifo which I assume simply dumps
> > packets on the wire. Thanks - John
>
> Is this a shared network? TOS won't matter if it is only your
> traffic.
>
> There are number of route metrics that you can tweak to that can
> reduce TCP slow
> start effects, like increasing the initial cwnd, etc.
>
It is a private network dedicated only to SAN traffic - a couple of SAN devices and some virtualization hosts - John
* Re: Latency difference between fifo and pfifo_fast
2011-12-07 23:49 ` Stephen Hemminger
@ 2011-12-08 3:20 ` John A. Sullivan III
0 siblings, 0 replies; 16+ messages in thread
From: John A. Sullivan III @ 2011-12-08 3:20 UTC (permalink / raw)
To: Stephen Hemminger
Cc: David Laight, netdev, Rick Jones, Dave Taht, Eric Dumazet
----- Original Message -----
> From: "Stephen Hemminger" <shemminger@vyatta.com>
> To: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
> Cc: "David Laight" <David.Laight@ACULAB.COM>, netdev@vger.kernel.org, "Rick Jones" <rick.jones2@hp.com>, "Dave Taht"
> <dave.taht@gmail.com>, "Eric Dumazet" <eric.dumazet@gmail.com>
> Sent: Wednesday, December 7, 2011 6:49:07 PM
> Subject: Re: Latency difference between fifo and pfifo_fast
>
> > > Is this a shared network? TOS won't matter if it is only your
> > > traffic.
> > >
> > > There are a number of route metrics that you can tweak to reduce
> > > TCP slow start effects, like increasing the initial cwnd, etc.
> > >
> > It is a private network dedicated only to SAN traffic - a couple of
> > SAN devices and some virtualization hosts - John
>
> Therefore unless your switch is shared, playing with queueing and TOS
> won't help reduce absolute latency.
> You may be able to prioritize one host or SAN over another, though.
>
That's why I was wondering if we should switch to fifo from pfifo_fast since there is no prioritization of one host or SAN over another. Thanks - John