netdev.vger.kernel.org archive mirror
* Bonding gigabit and fast?
@ 2008-12-16 19:39 Tvrtko A. Ursulin
  2008-12-16 19:54 ` Chris Snook
  2008-12-17  2:53 ` Bonding gigabit and fast? Trent Piepho
  0 siblings, 2 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-16 19:39 UTC (permalink / raw)
  To: netdev


Hi to all,

Does it make any sense, from a bandwidth point of view, to bond gigabit and fast
ethernet?

I wanted to use the adaptive-alb mode to load balance both the transmit and
receive directions of traffic, but it seems 8139too does not support it, so I
use balance-rr.
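
For reference, the bond is set up more or less like this (the interface names
match my box, the address is just a placeholder):

  modprobe bonding mode=balance-rr miimon=100
  ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
  ifenslave bond0 eth1 eth0    # eth1 = skge gigabit, eth0 = 8139too fast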

When serving data from the machine I get 13.7 MB/s aggregated while with a 
single slave (so bond still active) I get 5.6 MB/s for gigabit and 9.1 MB/s 
for fast. Yes, that's not a typo - fast ethernet is faster than gigabit.

That is actually another problem I have been trying to get to the bottom of for
some time. The gigabit adapter is an skge card in a PCI slot and its outgoing
bandwidth oscillates a lot during a transfer, much more than on the 8139too,
which is both stable and faster.

Unfortunately this machine only takes low-profile cards, and so far I have been
unable to find anything other than the skge card to test with.

Oh, and yes, the kernel is 2.6.27 (-9-generic, so an Ubuntu derivative of 2.6.27).

Tvrtko




* Re: Bonding gigabit and fast?
  2008-12-16 19:39 Bonding gigabit and fast? Tvrtko A. Ursulin
@ 2008-12-16 19:54 ` Chris Snook
  2008-12-16 20:12   ` Tvrtko A. Ursulin
  2008-12-17  2:53 ` Bonding gigabit and fast? Trent Piepho
  1 sibling, 1 reply; 11+ messages in thread
From: Chris Snook @ 2008-12-16 19:54 UTC (permalink / raw)
  To: Tvrtko A. Ursulin; +Cc: netdev

Tvrtko A. Ursulin wrote:
> Hi to all,
> 
> Does it make any sense, from a bandwidth point of view, to bond gigabit and fast
> ethernet?

Not unless there's something very wrong with your gigabit card.  People with 
this sort of hardware generally use the fast ethernet for management or cluster 
heartbeat, or maybe as a non-primary slave in an active-backup bond.
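
If you do keep both interfaces in a bond, an active-backup setup along these
lines (untested, the interface names are just examples) would at least keep the
fast ethernet out of the data path until the gigabit link fails:

  modprobe bonding mode=active-backup miimon=100 primary=eth1
  ifconfig bond0 up
  ifenslave bond0 eth1 eth0   # eth1 = gigabit (primary), eth0 = fast ethernet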

> I wanted to use the adaptive-alb mode to load balance both the transmit and
> receive directions of traffic, but it seems 8139too does not support it, so I
> use balance-rr.
> 
> When serving data from the machine I get 13.7 MB/s aggregated while with a 
> single slave (so bond still active) I get 5.6 MB/s for gigabit and 9.1 MB/s 
> for fast. Yes, that's not a typo - fast ethernet is faster than gigabit.

That would qualify as something very wrong with your gigabit card.  What do you 
get when bonding is completely disabled?
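
A raw TCP test takes Samba and the disks out of the picture; something like
iperf (the address below is a placeholder), run in both directions, would be a
useful data point:

  iperf -s                     # on the receiving box
  iperf -c 192.168.0.10 -t 30  # on the sending box; then swap roles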

> That is actually another problem I have been trying to get to the bottom of for
> some time. The gigabit adapter is an skge card in a PCI slot and its outgoing
> bandwidth oscillates a lot during a transfer, much more than on the 8139too,
> which is both stable and faster.

The gigabit card might be sharing a PCI bus with your disk controller, so 
swapping which slots the cards are in might make gigabit work faster, but it 
sounds more like the driver is doing something stupid with interrupt servicing.

> Unfortunately this machine only takes low-profile cards, and so far I have been
> unable to find anything other than the skge card to test with.
> 
> Oh, and yes, the kernel is 2.6.27 (-9-generic, so an Ubuntu derivative of 2.6.27).



* Re: Bonding gigabit and fast?
  2008-12-16 19:54 ` Chris Snook
@ 2008-12-16 20:12   ` Tvrtko A. Ursulin
  2008-12-16 20:37     ` Chris Snook
  0 siblings, 1 reply; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-16 20:12 UTC (permalink / raw)
  To: Chris Snook; +Cc: netdev

On Tuesday 16 December 2008 19:54:29 Chris Snook wrote:
> > When serving data from the machine I get 13.7 MB/s aggregated while with
> > a single slave (so bond still active) I get 5.6 MB/s for gigabit and 9.1
> > MB/s for fast. Yes, that's not a typo - fast ethernet is faster than
> > gigabit.
>
> That would qualify as something very wrong with your gigabit card.  What do
> you get when bonding is completely disabled?

With the same testing methodology (i.e. serving from Samba to a CIFS client) it
averages around 10 MB/s, so somewhat faster than when bonded, but still terribly
unstable. The problem is that I think it was much better under older kernels. I
wrote about it before:

http://lkml.org/lkml/2008/11/20/418
http://bugzilla.kernel.org/show_bug.cgi?id=6796

Stephen thinks it may be limited PCI bandwidth, but the fact that I get double
the speed in the opposite direction, and that the slow direction was previously
roughly double what it is now, makes me suspect there is a regression here
somewhere.

> > That is actually another problem I have been trying to get to the bottom of
> > for some time. The gigabit adapter is an skge card in a PCI slot and its
> > outgoing bandwidth oscillates a lot during a transfer, much more than on the
> > 8139too, which is both stable and faster.
>
> The gigabit card might be sharing a PCI bus with your disk controller, so
> swapping which slots the cards are in might make gigabit work faster, but
> it sounds more like the driver is doing something stupid with interrupt
> servicing.

Dang, you are right, they really do share the same interrupt. And I have
nowhere else to move that card since there is only a single PCI slot.
Interestingly, fast ethernet (eth0) generates double the number of interrupts
of gigabit (eth1) and SATA combined.

From powertop:

Top causes for wakeups:
  65.5% (11091.1)       <interrupt> : eth0
  32.9% (5570.5)       <interrupt> : sata_sil, eth1
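
The sharing also shows up directly in /proc/interrupts; something like

  grep -E 'eth|sata' /proc/interrupts

lists sata_sil and eth1 against the same IRQ number here.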

Tvrtko


* Re: Bonding gigabit and fast?
  2008-12-16 20:12   ` Tvrtko A. Ursulin
@ 2008-12-16 20:37     ` Chris Snook
  2008-12-16 22:55       ` Tvrtko A. Ursulin
  2008-12-17 20:18       ` skge performance sensitivity (WAS: Bonding gigabit and fast?) Tvrtko A. Ursulin
  0 siblings, 2 replies; 11+ messages in thread
From: Chris Snook @ 2008-12-16 20:37 UTC (permalink / raw)
  To: Tvrtko A. Ursulin; +Cc: netdev

Tvrtko A. Ursulin wrote:
> On Tuesday 16 December 2008 19:54:29 Chris Snook wrote:
>>> When serving data from the machine I get 13.7 MB/s aggregated while with
>>> a single slave (so bond still active) I get 5.6 MB/s for gigabit and 9.1
>>> MB/s for fast. Yes, that's not a typo - fast ethernet is faster than
>>> gigabit.
>> That would qualify as something very wrong with your gigabit card.  What do
>> you get when bonding is completely disabled?
> 
> With the same testing methodology (i.e. serving from Samba to a CIFS client) it
> averages around 10 MB/s, so somewhat faster than when bonded, but still terribly
> unstable. The problem is that I think it was much better under older kernels. I
> wrote about it before:
> 
> http://lkml.org/lkml/2008/11/20/418
> http://bugzilla.kernel.org/show_bug.cgi?id=6796
> 
> Stephen thinks it may be limited PCI bandwidth, but the fact that I get double
> the speed in the opposite direction, and that the slow direction was previously
> roughly double what it is now, makes me suspect there is a regression here
> somewhere.
> 
>>> That is actually another problem I have been trying to get to the bottom of
>>> for some time. The gigabit adapter is an skge card in a PCI slot and its
>>> outgoing bandwidth oscillates a lot during a transfer, much more than on the
>>> 8139too, which is both stable and faster.
>> The gigabit card might be sharing a PCI bus with your disk controller, so
>> swapping which slots the cards are in might make gigabit work faster, but
>> it sounds more like the driver is doing something stupid with interrupt
>> servicing.
> 
> Dang, you are right, they really do share the same interrupt. And I have
> nowhere else to move that card since there is only a single PCI slot.
> Interestingly, fast ethernet (eth0) generates double the number of interrupts
> of gigabit (eth1) and SATA combined.
> 
> From powertop:
> 
> Top causes for wakeups:
>   65.5% (11091.1)       <interrupt> : eth0
>   32.9% (5570.5)       <interrupt> : sata_sil, eth1
> 
> Tvrtko

Sharing an interrupt shouldn't be a problem, unless the other driver is doing 
bad things.  Sharing the bus limits PCI bandwidth though, and that can hurt.

The fact that you're getting more interrupts on the card moving more packets 
isn't surprising.

It occurred to me that the alb algorithm is not designed for asymmetric bonds, 
so part of the problem is likely the distribution of traffic.  You always end up 
with somewhat unbalanced distribution, and it happens to be favoring the slower 
card.

The real problem is that you get such lousy performance in unbonded gigabit 
mode.  Try oprofiling it to see where it's spending all that time.
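
Roughly something like this, if you haven't used oprofile before (the paths are
just an example; use --no-vmlinux if you don't have a vmlinux with symbols):

  opcontrol --vmlinux=/usr/src/linux/vmlinux
  opcontrol --start
  # ... run the transfer ...
  opcontrol --dump
  opreport -l | head -30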

-- Chris


* Re: Bonding gigabit and fast?
  2008-12-16 20:37     ` Chris Snook
@ 2008-12-16 22:55       ` Tvrtko A. Ursulin
  2008-12-17  4:48         ` Jay Vosburgh
  2008-12-17  7:37         ` Tvrtko A. Ursulin
  2008-12-17 20:18       ` skge performance sensitivity (WAS: Bonding gigabit and fast?) Tvrtko A. Ursulin
  1 sibling, 2 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-16 22:55 UTC (permalink / raw)
  To: Chris Snook; +Cc: netdev

On Tuesday 16 December 2008 20:37:47 Chris Snook wrote:
> >> The gigabit card might be sharing a PCI bus with your disk controller,
> >> so swapping which slots the cards are in might make gigabit work faster,
> >> but it sounds more like the driver is doing something stupid with
> >> interrupt servicing.
> >
> > Dang, you are right, they really do share the same interrupt. And I have
> > nowhere else to move that card since there is only a single PCI slot.
> > Interestingly, fast ethernet (eth0) generates double the number of interrupts
> > of gigabit (eth1) and SATA combined.
> >
> > From powertop:
> >
> > Top causes for wakeups:
> >   65.5% (11091.1)       <interrupt> : eth0
> >   32.9% (5570.5)       <interrupt> : sata_sil, eth1
> >
> > Tvrtko
>
> Sharing an interrupt shouldn't be a problem, unless the other driver is
> doing bad things.  Sharing the bus limits PCI bandwidth though, and that
> can hurt.
>
> The fact that you're getting more interrupts on the card moving more
> packets isn't surprising.
>
> It occurred to me that the alb algorithm is not designed for asymmetric
> bonds, so part of the problem is likely the distribution of traffic.  You
> always end up with somewhat unbalanced distribution, and it happens to be
> favoring the slower card.

I was using balance-rr, alb flavour does not seem to like 8139too.

> The real problem is that you get such lousy performance in unbonded gigabit
> mode.  Try oprofiling it to see where it's spending all that time.

Could it be something scheduling related? Or maybe CIFS on the client, which is
also running a flavour of 2.6.27? I had to put vanilla 2.6.27.9 on the server in
order to run oprofile, so maybe I'll have to do the same thing on the client...

In the meantime these are the latest test results. When serving over Samba I get
9.6 MB/s and the oprofile output looks like this:

Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name               app name                 symbol name
43810    11.2563  skge                     skge                     (no symbols)
36363     9.3429  vmlinux                  vmlinux                  handle_fasteoi_irq
32805     8.4287  vmlinux                  vmlinux                  __napi_schedule
30122     7.7394  vmlinux                  vmlinux                  handle_IRQ_event
22270     5.7219  vmlinux                  vmlinux                  copy_user_generic_string
13444     3.4542  vmlinux                  vmlinux                  native_read_tsc
7606      1.9542  smbd                     smbd                     (no symbols)
7492      1.9250  vmlinux                  vmlinux                  mcount
6014      1.5452  libmythui-0.21.so.0.21.0 libmythui-0.21.so.0.21.0 (no symbols)
5689      1.4617  vmlinux                  vmlinux                  memcpy_c
5090      1.3078  libc-2.8.90.so           libc-2.8.90.so           (no symbols)
4176      1.0730  vmlinux                  vmlinux                  native_safe_halt
3970      1.0200  vmlinux                  vmlinux                  ioread8

It is generally not very CPU intensive, but as I said it oscillates a lot. For
example, vmstat 1 output from the middle of this transfer (the usual columns:
r b swpd free buff cache si so bi bo in cs us sy id wa):

 1  0      0  33684  18908 532240    0    0  8832   128 8448 1605  0  9 86  5
 0  0      0  21040  18908 544768    0    0 12544     0 11615 1876  0 10 89  1
 0  0      0  17168  18908 548636    0    0  3840     0 3999  978  0  5 95  0
 0  0      0  10904  18972 554412    0    0  5772     0 5651 1050  1  7 86  6
 1  0      0   8976  18840 556312    0    0  3200     0 3573  891  0  4 96  0
 0  0      0   9948  18792 555716    0    0  7168     0 6776 1202  0  9 89  2

Or the bandwidth log (500 ms period):

1229466433;eth1;6448786.50;206129.75;6654916.50;103271;3230842;4483.03;2608.78;7091.82;1307;2246;0.00;0.00;0;0
1229466433;eth1;11794112.00;377258.00;12171370.00;188629;5897056;8186.00;4772.00;12958.00;2386;4093;0.00;0.00;0;0
1229466434;eth1;4417197.50;141690.62;4558888.50;70987;2213016;3069.86;1792.42;4862.28;898;1538;0.00;0.00;0;0
1229466434;eth1;6059886.00;194222.00;6254108.00;97111;3029943;4212.00;2458.00;6670.00;1229;2106;0.00;0.00;0;0
1229466435;eth1;9232362.00;295816.38;9528178.00;148204;4625413;6413.17;3742.52;10155.69;1875;3213;0.00;0.00;0;0
1229466435;eth1;20735192.00;663600.00;21398792.00;331800;10367596;14398.00;8400.00;22798.00;4200;7199;0.00;0.00;0;0
1229466436;eth1;12515441.00;399852.31;12915294.00;200326;6270236;8688.62;5063.87;13752.50;2537;4353;0.00;0.00;0;0

On the other hand, when I pulled the same file with scp I got a pretty stable
22.3 MB/s and this oprofile output:

samples  %        image name               app name                 symbol name
242779   48.4619  libcrypto.so.0.9.8       libcrypto.so.0.9.8       (no symbols)
30214     6.0311  skge                     skge                     (no symbols)
29276     5.8439  vmlinux                  vmlinux                  copy_user_generic_string
21052     4.2023  vmlinux                  vmlinux                  handle_fasteoi_irq
19124     3.8174  vmlinux                  vmlinux                  __napi_schedule
15394     3.0728  libc-2.8.90.so           libc-2.8.90.so           (no symbols)
14327     2.8599  vmlinux                  vmlinux                  handle_IRQ_event
5303      1.0585  vmlinux                  vmlinux                  native_read_tsc

Hm, let me do one more test with a network transport that does not tax the CPU,
like netcat. Nope, the same "fast" ~22 MB/s speed, or:

samples  %        image name               app name                 symbol name
29719    11.5280  vmlinux                  vmlinux                  copy_user_generic_string
28354    10.9985  skge                     skge                     (no symbols)
18259     7.0826  vmlinux                  vmlinux                  handle_fasteoi_irq
17359     6.7335  vmlinux                  vmlinux                  __napi_schedule
15095     5.8553  vmlinux                  vmlinux                  handle_IRQ_event
7422      2.8790  vmlinux                  vmlinux                  native_read_tsc
5619      2.1796  vmlinux                  vmlinux                  mcount
3966      1.5384  libmythui-0.21.so.0.21.0 libmythui-0.21.so.0.21.0 (no symbols)
3709      1.4387  libc-2.8.90.so           libc-2.8.90.so           (no symbols)
3510      1.3615  vmlinux                  vmlinux                  memcpy_c
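
(For reference, the netcat run was nothing fancier than something along these
lines, with the file name and address as placeholders:

  nc -l -p 5000 > /dev/null          # on the client
  nc 192.168.0.20 5000 < bigfile     # on the server

so there is no encryption or filesystem protocol in the path.)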

Maybe also NFS... no, also fast.

So this points to a Samba/scheduler/CIFS client regression, I think. I'll try to
do more testing in the following days. All this assumes that ~22 MB/s is the
best this machine can do and that I am only hunting for the slow and unstable
speed over Samba.

But I find it strange that iperf also couldn't do more, even though it does not
put any load on the shared interrupt line. Especially since it did 400 Mbps in
the other direction.

Thank you for your help, of course; I forgot to say it earlier!

Tvrtko


* Re: Bonding gigabit and fast?
  2008-12-16 19:39 Bonding gigabit and fast? Tvrtko A. Ursulin
  2008-12-16 19:54 ` Chris Snook
@ 2008-12-17  2:53 ` Trent Piepho
  2008-12-17  7:51   ` Tvrtko A. Ursulin
  1 sibling, 1 reply; 11+ messages in thread
From: Trent Piepho @ 2008-12-17  2:53 UTC (permalink / raw)
  To: Tvrtko A. Ursulin; +Cc: netdev

On Tue, 16 Dec 2008, Tvrtko A. Ursulin wrote:
> Does it make any sense, from a bandwidth point of view, to bond gigabit and fast
> ethernet?
>
> I wanted to use the adaptive-alb mode to load balance both the transmit and
> receive directions of traffic, but it seems 8139too does not support it, so I
> use balance-rr.

My experience from channel bonding in Beowulf clusters, which is now
somewhat dated wrt the current kernel, is that channel bonding with gigabit
didn't work well.

The problem is that at gigabit speeds, one IRQ per packet is very costly. 
So gigabit drivers have to use interrupt mitigation to transfer multiple
packets per interrupt (and/or use jumbo frames to move more data per
packet).
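
You can usually see (and sometimes tune) how aggressively a driver coalesces
interrupts through ethtool, e.g.:

  ethtool -c eth1                 # show current coalescing settings
  ethtool -C eth1 rx-usecs 100    # example: delay RX interrupts by up to 100 us

though not every driver supports those knobs, and the values here are only an
example.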

If the transmitter is sending out packets round-robin on two links, it's
sending the packets like this:

Link A:  1   3   5   7
Link B:    2   4   6   8

If the cards return two packets per interrupt, the receiver gets them like
this:

Link A:  1 3     5 7
Link B:      2 4     6 8

Well, the Linux kernel does not like getting the packets in the order (1 3
2 4 5 7 6 8).  It likes to get the packets in the correct order.  And so
performance suffers.  At the time, it suffered greatly.  Maybe it's better
or worse now?
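
One way to check whether reordering is hurting you is to watch the receiver's
TCP counters while a balance-rr transfer runs, e.g.:

  netstat -s | grep -iE 'reorder|retrans'

If those counters climb quickly during the transfer, out-of-order delivery is
at least part of the problem.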


* Re: Bonding gigabit and fast?
  2008-12-16 22:55       ` Tvrtko A. Ursulin
@ 2008-12-17  4:48         ` Jay Vosburgh
  2008-12-17  7:51           ` Tvrtko A. Ursulin
  2008-12-17  7:37         ` Tvrtko A. Ursulin
  1 sibling, 1 reply; 11+ messages in thread
From: Jay Vosburgh @ 2008-12-17  4:48 UTC (permalink / raw)
  To: Tvrtko A. Ursulin; +Cc: Chris Snook, netdev

Tvrtko A. Ursulin <tvrtko@ursulin.net> wrote:
[...]
>I was using balance-rr, alb flavour does not seem to like 8139too.

	The choice of balance-rr may be half of your problem.  Try
balance-xor with xmit_hash_policy=layer3+4, it may behave better.  That
mode doesn't know about dissimilar speed slaves, so it simply balances
by math, but that still may behave better than balance-rr because it
won't stripe single connections across slaves.

	The balance-alb mode would likely be better (it's smarter about
balancing across slaves of differing speeds), but requires that the
slaves be able to change MAC address while up, which not every device is
capable of doing.

	To elaborate a bit on balance-rr, it will usually cause out of
order delivery to varying degrees, which in turn causes TCP's congestion
control and/or fast retransmits to kick in.  The effect can be mitigated
to some degree (but not eliminated) by raising the value of the
net.ipv4.tcp_reordering sysctl.  If memory serves, values more than
about 125 don't make much additional difference.
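
	Concretely, that would be something along these lines (module options
shown for a modprobe.conf style setup; where they go depends on the distro):

	options bonding mode=balance-xor xmit_hash_policy=layer3+4 miimon=100

and, if you stay with balance-rr:

	sysctl -w net.ipv4.tcp_reordering=127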

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com


* Re: Bonding gigabit and fast?
  2008-12-16 22:55       ` Tvrtko A. Ursulin
  2008-12-17  4:48         ` Jay Vosburgh
@ 2008-12-17  7:37         ` Tvrtko A. Ursulin
  1 sibling, 0 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-17  7:37 UTC (permalink / raw)
  To: Chris Snook; +Cc: netdev

On Tuesday 16 December 2008 22:55:47 Tvrtko A. Ursulin wrote:
> So this points to a Samba/scheduler/CIFS client regression, I think. I'll try
> to do more testing in the following days. All this assumes that ~22 MB/s is the
> best this machine can do and that I am only hunting for the slow and unstable
> speed over Samba.

In the morning I am not so sure about this any more. The problem is that there
is no regression with fast ethernet, and since it is faster than gigabit, that
moves the blame back to networking/skge.

Do you have any ideas for further tests which could clarify this?

Thanks,

Tvrtko


* Re: Bonding gigabit and fast?
  2008-12-17  4:48         ` Jay Vosburgh
@ 2008-12-17  7:51           ` Tvrtko A. Ursulin
  0 siblings, 0 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-17  7:51 UTC (permalink / raw)
  To: Jay Vosburgh; +Cc: Chris Snook, netdev

On Wednesday 17 December 2008 04:48:55 Jay Vosburgh wrote:
> Tvrtko A. Ursulin <tvrtko@ursulin.net> wrote:
> [...]
>
> >I was using balance-rr, alb flavour does not seem to like 8139too.
>
> 	The choice of balance-rr may be half of your problem.  Try
> balance-xor with xmit_hash_policy=layer3+4, it may behave better.  That
> mode doesn't know about dissimilar speed slaves, so it simply balances
> by math, but that still may behave better than balance-rr because it
> won't stripe single connections across slaves.
>
> 	The balance-alb mode would likely be better (it's smarter about
> balancing across slaves of differing speeds), but requires that the
> slaves be able to change MAC address while up, which not every device is
> capable of doing.
>
> 	To elaborate a bit on balance-rr, it will usually cause out of
> order delivery to varying degrees, which in turn causes TCP's congestion
> control and/or fast retransmits to kick in.  The effect can be mitigated
> to some degree (but not eliminated) by raising the value of the
> net.ipv4.tcp_reordering sysctl.  If memory serves, values more than
> about 125 don't make much additional difference.

Yes, that makes sense and is in line with what is written in the bonding HOWTO.
My problem is that I actually wanted to stripe single connections in order to
work around the really slow gigabit (slower than fast ethernet) performance.
There is a single GbE link on the receiver side, so maybe there shouldn't be
that much out-of-order traffic, although something is causing the aggregated
bandwidth to be well below the hypothetical sum of the two (9+9 MB/s alone vs
~12 MB/s bonded). And without striping I couldn't measure the aggregated
bandwidth with a single client.

The subject I put on this thread is actually misleading...

Thanks,

Tvrtko


* Re: Bonding gigabit and fast?
  2008-12-17  2:53 ` Bonding gigabit and fast? Trent Piepho
@ 2008-12-17  7:51   ` Tvrtko A. Ursulin
  0 siblings, 0 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-17  7:51 UTC (permalink / raw)
  To: Trent Piepho; +Cc: netdev

On Wednesday 17 December 2008 02:53:10 Trent Piepho wrote:
> If the transmitter is sending out packets round-robin on two links, it's
> sending the packets like this:
>
> Link A:  1   3   5   7
> Link B:    2   4   6   8
>
> If the cards return two packets per interrupt, the receiver gets them like
> this:
>
> Link A:  1 3     5 7
> Link B:      2 4     6 8
>
> Well, the Linux kernel does not like getting the packets in the order (1 3
> 2 4 5 7 6 8).  It likes to get the packets in the correct order.  And so
> performance suffers.  At the time, it suffered greatly.  Maybe it's better
> or worse now?

I don't know, but to clarify, I was never aiming to get higher than gigabit
speed; even on the receiver side there is only a single gigabit link. From the
bonding HOWTO I understood that in this configuration packets may actually
arrive in order.

The point of my experiment was to see if I could work around the very slow
gigabit speeds (<10 MB/s) I was getting while serving data out with Samba. So I
was hoping to get a sustained 20 MB/s out of this slow gigabit plus the normally
fast fast ethernet. But yeah, bonding seemed to cause the aggregated speed to be
somewhat less than the sum of what each link manages alone.

Tvrtko


* skge performance sensitivity (WAS: Bonding gigabit and fast?)
  2008-12-16 20:37     ` Chris Snook
  2008-12-16 22:55       ` Tvrtko A. Ursulin
@ 2008-12-17 20:18       ` Tvrtko A. Ursulin
  1 sibling, 0 replies; 11+ messages in thread
From: Tvrtko A. Ursulin @ 2008-12-17 20:18 UTC (permalink / raw)
  To: Chris Snook; +Cc: netdev

On Tuesday 16 December 2008 20:37:47 Chris Snook wrote:

[snip]
> The real problem is that you get such lousy performance in unbonded gigabit
> mode.  Try oprofiling it to see where it's spending all that time.

Ha! Since I had put vanilla 2.6.27.9 on the server yesterday, but compiled it
with the distro configuration, I decided to try a minimal "optimised" config for
this machine today.

With the reconfigured and recompiled kernel, skge outbound performance went from
13 MB/s to 17 MB/s, so it seems there was something in there which was really
hampering it.

The kernel configs are radically different, so I am not sure there is any point
in sending them here. I am also not sure whether I will be able to spend any
more time trying to pinpoint which config option was to blame, but if I do I
will of course report my findings.
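
If anyone does want to compare them, something along these lines shows just the
differing options (the file names are placeholders for the two configs; the
kernel tree's scripts/diffconfig would also do, if it is there):

  diff <(grep '^CONFIG' config-distro | sort) <(grep '^CONFIG' config-minimal | sort)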

Tvrtko



Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
2008-12-16 19:39 Bonding gigabit and fast? Tvrtko A. Ursulin
2008-12-16 19:54 ` Chris Snook
2008-12-16 20:12   ` Tvrtko A. Ursulin
2008-12-16 20:37     ` Chris Snook
2008-12-16 22:55       ` Tvrtko A. Ursulin
2008-12-17  4:48         ` Jay Vosburgh
2008-12-17  7:51           ` Tvrtko A. Ursulin
2008-12-17  7:37         ` Tvrtko A. Ursulin
2008-12-17 20:18       ` skge performance sensitivity (WAS: Bonding gigabit and fast?) Tvrtko A. Ursulin
2008-12-17  2:53 ` Bonding gigabit and fast? Trent Piepho
2008-12-17  7:51   ` Tvrtko A. Ursulin
