public inbox for linux-kernel@vger.kernel.org
* proposed optimization for network drivers
@ 2003-04-17  6:56 Don Cohen
  2003-04-18  8:36 ` David S. Miller
  0 siblings, 1 reply; 6+ messages in thread
From: Don Cohen @ 2003-04-17  6:56 UTC (permalink / raw)
  To: linux-kernel, linux-net


I notice that netif_rx (net/core/dev.c) discards packets when it's
in throttle mode.  However, by the time it does so the driver has
already spent time allocating the skb, calling eth_type_trans, etc.
That effort could be saved if the driver checked the throttle state first.


Sample diff:

--- /usr/src/linux-2.4.18-clean/drivers/net/8139too.c   Mon Feb 25 11:37:59 2002
+++ /usr/src/linux-2.4.18/drivers/net/8139too.c Wed Apr 16 23:30:30 2003
@@ -1908,6 +1908,11 @@
                        return;
                }

+               if (softnet_data[smp_processor_id()].throttle) {
+                       dev->last_rx = jiffies;
+                       netdev_rx_stat[smp_processor_id()].dropped++;
+               } else {
+
                /* Malloc up new buffer, compatible with net-2e. */
                /* Omit the four octet CRC from the length. */

@@ -1936,6 +1941,7 @@
                                dev->name);
                        tp->stats.rx_dropped++;
                }
+               }

                cur_rx = (cur_rx + rx_size + 4 + 3) & ~3;
                RTL_W16 (RxBufPtr, cur_rx - 16);

Hmm, now that I notice, there is one small change in semantics here.
Before, every packet counted by netdev_rx_stat[].dropped had also
been counted in stats.rx_bytes and rx_packets, and that is no longer
the case.  If that bothers you, then perhaps there should simply be
a separate counter for packets dropped this way.

In my limited experimentation this sort of change has significantly
increased the number of packets I could process when they were
arriving too fast. 

Please cc me on replies.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: proposed optimization for network drivers
  2003-04-17  6:56 proposed optimization for network drivers Don Cohen
@ 2003-04-18  8:36 ` David S. Miller
  2003-04-18 15:37   ` Don Cohen
  0 siblings, 1 reply; 6+ messages in thread
From: David S. Miller @ 2003-04-18  8:36 UTC (permalink / raw)
  To: don-linux; +Cc: linux-kernel, linux-net


What if we change the congestion implementation?  Then we'll
have to edit every single driver.  I don't think that's very
maintainable.

The whole idea is to abstract things out as far as possible so that
the device drivers are totally agnostic about the details of the
generic network queueing implementation.

I mean, it's an interesting idea, but it exposes details that
should not be exposed.


* Re: proposed optimization for network drivers
  2003-04-18  8:36 ` David S. Miller
@ 2003-04-18 15:37   ` Don Cohen
  2003-04-18 15:59     ` David S. Miller
  2003-04-18 17:18     ` Jeff Garzik
  0 siblings, 2 replies; 6+ messages in thread
From: Don Cohen @ 2003-04-18 15:37 UTC (permalink / raw)
  To: David S. Miller; +Cc: linux-kernel, linux-net


In part I agree.  I would have preferred to make my change in one
place instead of one driver at a time.  On the other hand, it seems to
me that some of these details are already spread around all the
drivers.  For instance, why does every driver have to call
eth_type_trans?  Could that be deferred to netif_rx?

I do think it's reasonable for a driver to test whether the upper
layers are ready to process another packet.  I suggest that this 
test be encapsulated into a new function that can be changed at the
cost of only recompiling all the drivers.

David S. Miller writes:
 > 
 > What if we change the congestion implementation?  Then we'll
 > have to edit every single driver.  I don't think that's very
 > maintainable.
 > 
 > The whole idea is to abstract things out as far as possible so that
 > the device drivers are totally agnostic about the details of the
 > generic network queueing implementation.
 > 
 > I mean, it's an interesting idea, but it exposes details that
 > should not be exposed.


* Re: proposed optimization for network drivers
  2003-04-18 15:37   ` Don Cohen
@ 2003-04-18 15:59     ` David S. Miller
  2003-04-18 16:22       ` Don Cohen
  2003-04-18 17:18     ` Jeff Garzik
  1 sibling, 1 reply; 6+ messages in thread
From: David S. Miller @ 2003-04-18 15:59 UTC (permalink / raw)
  To: don-linux; +Cc: linux-kernel, linux-net

   From: don-linux@isis.cs3-inc.com (Don Cohen)
   Date: Fri, 18 Apr 2003 08:37:01 -0700

   For instance, why does every driver have to call
   eth_type_trans?  Could that be deferred to netif_rx?

Silly example.  Ethernet-specific routines do not belong in the
device-generic netif_rx().



* Re: proposed optimization for network drivers
  2003-04-18 15:59     ` David S. Miller
@ 2003-04-18 16:22       ` Don Cohen
  0 siblings, 0 replies; 6+ messages in thread
From: Don Cohen @ 2003-04-18 16:22 UTC (permalink / raw)
  To: David S. Miller; +Cc: linux-kernel, linux-net

David S. Miller writes:
 >    From: don-linux@isis.cs3-inc.com (Don Cohen)
 >    Date: Fri, 18 Apr 2003 08:37:01 -0700
 > 
 >    For instance, why does every driver have to call
 >    eth_type_trans?  Could that be deferred to netif_rx?
 > 
 > Silly example.  Ethernet-specific routines do not belong in the
 > device-generic netif_rx().

Ok, fair enough.
How about the larger point, that it's reasonable to check early
whether the labor you're about to expend will be wasted, and the
suggestion that this be encapsulated in a function all drivers can
share?



* Re: proposed optimization for network drivers
  2003-04-18 15:37   ` Don Cohen
  2003-04-18 15:59     ` David S. Miller
@ 2003-04-18 17:18     ` Jeff Garzik
  1 sibling, 0 replies; 6+ messages in thread
From: Jeff Garzik @ 2003-04-18 17:18 UTC (permalink / raw)
  To: Don Cohen; +Cc: David S. Miller, linux-kernel, linux-net

Don Cohen wrote:
> In part I agree.  I would have preferred to make my change in one
> place instead of one driver at a time.  On the other hand, it seems to
> me that some of these details are already spread around all the
> drivers.  For instance, why does every driver have to call
> eth_type_trans?  Could that be delayed for netif_rx ?
> 
> I do think it's reasonable for a driver to test whether the upper
> layers are ready to process another packet.  I suggest that this 
> test be encapsulated into a new function that can be changed at the
> cost of only recompiling all the drivers.


Why not NAPI?  That is the existing mechanism provided to let the
upper layer feed congestion information back to the low-level driver.

	Jeff




