netdev.vger.kernel.org archive mirror
* Request to include ESFQ patch
@ 2008-01-02 16:10 Denys Fedoryshchenko
  2008-01-06 20:01 ` Andy Furniss
  0 siblings, 1 reply; 3+ messages in thread
From: Denys Fedoryshchenko @ 2008-01-02 16:10 UTC (permalink / raw)
  To: netdev; +Cc: bugfood-c

Hi

I took the risk and installed ESFQ on my main backbone QoS box. I found it
highly useful, and very much needed in setups where more than 128 flows are
passing, especially where NAT is in use.

Here are the results with an overloaded class for low-priority P2P traffic
customers:

pfifo, 128 Kbyte buffer, bandwidth 512 Kbit/s
... cut ...
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=27 ttl=51 
time=228 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=28 ttl=51 
time=247 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=29 ttl=51 
time=415 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=30 ttl=51 
time=198 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=31 ttl=51 
time=2274 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=32 ttl=51 
time=2237 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=33 ttl=51 
time=2235 ms
... cut ...
--- www.nuclearcat.com ping statistics ---
100 packets transmitted, 98 received, 2% packet loss, time 99006ms
rtt min/avg/max/mdev = 155.647/1022.177/2289.229/881.461 ms, pipe 3


Ping is very unstable, and there are drops as well, even though I am hardly
pushing any traffic of my own.

sfq
... cut ...
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=61 ttl=51 
time=1136 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=62 ttl=51 
time=930 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=63 ttl=51 
time=1057 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=64 ttl=51 
time=1055 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=65 ttl=51 
time=1012 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=66 ttl=51 
time=880 ms
... cut ...
--- www.nuclearcat.com ping statistics ---
100 packets transmitted, 95 received, 5% packet loss, time 98984ms
rtt min/avg/max/mdev = 157.328/479.812/1136.569/331.170 ms, pipe 2

Also not very stable. The buffer in SFQ is 128 packets; with an average packet
size of 500 bytes that is around 64000 bytes, which gives only about 1 second
of delay (while for this test I need 2 seconds). Packet loss is very high.
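(Spelling out that arithmetic against the 512 Kbit/s class rate from the pfifo
test above: 128 packets * 500 bytes = 64000 bytes = 512000 bits, and
512000 bits / 512 Kbit/s = about 1 second of queueing delay when the queue is
full.)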

esfq: perturb 30 depth 65536 divisor 14 limit 256 hash ctorigdst
... cut ...
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=12 ttl=51 
time=185 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=13 ttl=51 
time=238 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=14 ttl=51 
time=228 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=15 ttl=51 
time=377 ms
64 bytes from usa.nuclearcat.com (66.230.167.210): icmp_seq=16 ttl=51 
time=177 ms
... cut ...
--- www.nuclearcat.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99009ms
rtt min/avg/max/mdev = 154.254/208.048/553.740/58.716 ms

This excerpt shows the worst of the jitter; the rest looks fine.
The ping is just great: no packet loss, and even the jitter is acceptable.
This queue is exactly what I was dreaming of.
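
For reference, this is roughly how such an ESFQ instance would be attached
with the patched tc; the qdisc parameters are the ones above, but the device,
parent class and handle here are only placeholders, not my real ones:

  tc qdisc add dev eth0 parent 1:20 handle 120: esfq perturb 30 \
      depth 65536 divisor 14 limit 256 hash ctorigdst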

Conclusion:
There is currently no fair-queueing qdisc available for a "serious" setup.
ESFQ is the only real contender I have seen so far for inclusion in the
kernel, and it would probably be a highly useful feature.


--
Denys Fedoryshchenko
Technical Manager
Virtual ISP S.A.L.



* Re: Request to include ESFQ patch
  2008-01-02 16:10 Request to include ESFQ patch Denys Fedoryshchenko
@ 2008-01-06 20:01 ` Andy Furniss
  2008-01-07  4:46   ` Denys Fedoryshchenko
  0 siblings, 1 reply; 3+ messages in thread
From: Andy Furniss @ 2008-01-06 20:01 UTC (permalink / raw)
  To: Denys Fedoryshchenko; +Cc: netdev, bugfood-c

Denys Fedoryshchenko wrote:
> Hi
> 
> I took the risk and installed ESFQ on my main backbone QoS box. I found it
> highly useful, and very much needed in setups where more than 128 flows are
> passing, especially where NAT is in use.

I agree it will be good when it's in.

> 
> Here are the results with an overloaded class for low-priority P2P traffic
> customers:
> 

*sfq was never meant for interactive traffic as such. If you really want to
do QoS for it, you would need to (somehow) classify interactive traffic and
give it priority over bulk. I know this may not be practical for your setup,
but the ping times users get will vary depending on how many other active
users there are, the queue length, how many TCPs each user has open, etc.
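
Just to illustrate the idea (device, rates and class IDs below are made-up
placeholders), something along these lines, picking interactive traffic out
by ToS and giving it the higher-priority class:

  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:1 htb rate 512kbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 128kbit ceil 512kbit prio 0
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 384kbit ceil 512kbit prio 1
  tc filter add dev eth0 parent 1: protocol ip u32 \
      match ip tos 0x10 0xff flowid 1:10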

Andy.


* Re: Request to include ESFQ patch
  2008-01-06 20:01 ` Andy Furniss
@ 2008-01-07  4:46   ` Denys Fedoryshchenko
  0 siblings, 0 replies; 3+ messages in thread
From: Denys Fedoryshchenko @ 2008-01-07  4:46 UTC (permalink / raw)
  To: lists; +Cc: netdev, bugfood-c

On Sun, 06 Jan 2008 20:01:40 +0000, Andy Furniss wrote
> Denys Fedoryshchenko wrote:
> > Hi
> > 
> > I took the risk and installed ESFQ on my main backbone QoS box. I found it
> > highly useful, and very much needed in setups where more than 128 flows are
> > passing, especially where NAT is in use.
> 
> I agree it will be good when it's in.
> 
> > 
> > Here are the results with an overloaded class for low-priority P2P traffic
> > customers:
> >
> 
> *sfq was never meant for interactive traffic as such. If you really want
> to do QoS for it, you would need to (somehow) classify interactive traffic
> and give it priority over bulk. I know this may not be practical for your
> setup, but the ping times users get will vary depending on how many other
> active users there are, the queue length, how many TCPs each user has
> open, etc.

In this specific setup I have a lot of classes just for types of traffic,
each with its priority, organized in many levels of hierarchy (root, realtime
low-bandwidth apps (with further separation into a few levels and then into
specific applications), realtime high-bandwidth apps, low-priority
high-bandwidth P2P, and so on), plus at the root I have a separation by type
of customer.

But even with this I have a few thousand IPs, and if I created a class for
each customer I would not fit within the 65k class limit, plus I would face
performance issues even if I used hashed filters. Instead of creating
thousands of classes, I group traffic and customer types and inside each
group distribute traffic fairly with ESFQ. ESFQ does this job perfectly.
SFQ, for example, because of its 128-flow limit, will mess up VoIP and other
applications for which packet ordering is critical.

ESFQ does a good job of preventing one destination IP inside a group (for
example) from taking the prevailing share of the bandwidth, or preventing one
source from doing the same, or whatever else, depending on the hash type.
And with a non-classic hash type it does not corrupt packet ordering.
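
As a rough sketch of what I mean (device names and class IDs are
placeholders): each per-group class gets one ESFQ instance, with the hash
chosen per direction, e.g. the conntrack original destination on the download
side so it still works behind NAT, and the plain source on the upload side:

  # download side: fair share per customer IP, keyed on conntrack original dst
  tc qdisc add dev eth0 parent 1:30 esfq perturb 30 limit 256 divisor 14 hash ctorigdst
  # upload side: fair share per sending host
  tc qdisc add dev eth1 parent 1:30 esfq perturb 30 limit 256 divisor 14 hash src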

Btw, in these tests I used ping just to get statistics on how traffic passes
through a class and how a single user's bandwidth is affected by the
different qdiscs, etc. If I see a problem with a specific type of traffic, I
can steer a ping with a specific ToS into that class via filters, and then
compare the regular ping with the ToS'ed ping to see whether the issue is
caused by the shaper, the router, the backbone provider, or something else.
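
For illustration, the ToS'ed-ping trick could look something like this (the
class ID and ToS value are placeholders):

  tc filter add dev eth0 parent 1: protocol ip u32 \
      match ip tos 0x20 0xff flowid 1:20
  ping -Q 0x20 www.nuclearcat.com   # goes through the class under test
  ping www.nuclearcat.com           # regular ping for comparison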


> 
> Andy.


--
Denys Fedoryshchenko
Technical Manager
Virtual ISP S.A.L.


