Subject: jumbo frames and memory fragmentation
From: Chris Friesen @ 2006-06-29 18:54 UTC
  To: netdev


I'm running a system with multiple e1000 devices using 9KB jumbo 
frames, on a modified 2.6.10 kernel with e1000 driver 5.5.4-k2.

I'm a bit concerned about the behaviour of this driver with jumbo 
frames.  We ask for 9KB.  The driver rounds that up to a power of two 
and calls dev_alloc_skb(16384).  dev_alloc_skb() then adds a bit more 
for its own overhead, which pushes the allocation past 16KB, so it 
appears we end up asking for 32KB of physically contiguous memory for 
every packet coming in.  Ouch.

On top of that, this version of the driver doesn't do copybreak, so 
after we've been up for a few days it starts complaining that it can't 
allocate buffers.

Anyone have any suggestions on how to improve this?  Upgrading kernels 
isn't an option.  I could backport the copybreak stuff fairly easily.
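
If I do backport it, the piece that matters is basically this (the 
threshold and the function name are mine, not the real driver's -- 
just a sketch of the technique):

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/string.h>

#define RX_COPYBREAK 256        /* arbitrary; real drivers make this tunable */

/*
 * Copy small frames into a right-sized skb so the jumbo-sized receive
 * buffer can go straight back into the rx ring, instead of being handed
 * up the stack and replaced with another 32KB allocation.
 */
static struct sk_buff *maybe_copybreak(struct sk_buff *rx_skb, unsigned int len)
{
        struct sk_buff *small;

        if (len >= RX_COPYBREAK)
                return NULL;            /* big frame: hand rx_skb itself up */

        small = dev_alloc_skb(len + 2);
        if (!small)
                return NULL;            /* no memory: fall back to rx_skb */

        skb_reserve(small, 2);          /* keep the IP header aligned */
        memcpy(small->data, rx_skb->data, len);
        skb_put(small, len);
        return small;                   /* caller recycles rx_skb into the ring */
}

That way a steady stream of small packets stops consuming a fresh 32KB 
buffer per frame.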

Back in 2.4 some of the drivers used to retry buffer allocations using 
GFP_KERNEL once interrupts were reenabled.  I don't see many of them 
doing that anymore--would there be any benefit to that?
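
The pattern I remember looked roughly like the following (every name 
here is made up; it's just the shape of the idea): refill with 
GFP_ATOMIC from the interrupt path as usual, and when that fails, punt 
to a work item so the retry runs in process context where GFP_KERNEL 
is allowed to sleep and reclaim memory.

#include <linux/skbuff.h>
#include <linux/workqueue.h>
#include <linux/gfp.h>

struct my_adapter {                     /* stand-in for the driver's adapter struct */
        unsigned int rx_buffer_len;
        struct work_struct refill_work;
        /* ... rx ring state ... */
};

/* hypothetical ring helpers, standing in for the driver's own */
static int my_rx_ring_has_room(struct my_adapter *adapter);
static void my_rx_ring_give_buffer(struct my_adapter *adapter,
                                   struct sk_buff *skb);

static void my_rx_refill(struct my_adapter *adapter, unsigned int gfp_mask)
{
        while (my_rx_ring_has_room(adapter)) {
                struct sk_buff *skb = __dev_alloc_skb(adapter->rx_buffer_len,
                                                      gfp_mask);
                if (!skb) {
                        if (!(gfp_mask & __GFP_WAIT)) {
                                /* atomic attempt failed; retry later from
                                 * process context where we can sleep */
                                schedule_work(&adapter->refill_work);
                        }
                        break;
                }
                my_rx_ring_give_buffer(adapter, skb);
        }
}

/* work handler, runs in keventd with interrupts enabled */
static void my_refill_work(void *data)
{
        my_rx_refill((struct my_adapter *)data, GFP_KERNEL);
}

The work item would be set up once at probe time (with the old 
three-argument INIT_WORK()) and pointed at my_refill_work().  Whether 
that actually helps or just postpones the fragmentation problem is 
exactly what I'm wondering.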

Thanks,

Chris
