netdev.vger.kernel.org archive mirror
* r8169 :  always copying the rx buffer to new skb
@ 2011-04-18 17:08 John Lumby
  2011-04-18 17:27 ` Ben Hutchings
  2011-04-18 18:21 ` Francois Romieu
  0 siblings, 2 replies; 16+ messages in thread
From: John Lumby @ 2011-04-18 17:08 UTC (permalink / raw)
  To: netdev

Summary  -  current r8169 always memcpy's every received buffer to new skb,
             and I'd like to propose that it should not,
             which can improve throughput / reduce CPU utilization by 
around 10%

In the patch
   2010-10-09 Stanislaw Gruszka r8169: allocate with GFP_KERNEL flag when able to sleep
the code for making the decision
     "shall I copy the buffer to a new skb, or unhook it from the ring to 
pass up and allocate a new one"
based on a module param called rx_copybreak was removed,  and instead 
we now always allocate naked data buffers (i.e. not wrapped in an skb) and 
always memcpy each one into a new skb to pass up to netif.
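
For reference,  the old decision looked roughly like this  (a simplified 
sketch from memory,  not the exact pre-removal source;  rtl_alloc_rx_skb() 
stands in for the real ring-refill helper) :

    if (pkt_size < rx_copybreak) {
        /* small packet : copy it into a right-sized skb and leave the
         * ring skb in place for the asic to reuse */
        skb = netdev_alloc_skb_ip_align(dev, pkt_size);
        if (skb) {
            dma_sync_single_for_cpu(d, addr, pkt_size, DMA_FROM_DEVICE);
            memcpy(skb->data, ring_skb->data, pkt_size);
            dma_sync_single_for_device(d, addr, pkt_size, DMA_FROM_DEVICE);
            rtl8169_mark_to_asic(desc, rx_buf_sz);
        }
    } else {
        /* large packet : unhook the ring skb and pass it up, then try
         * to allocate a replacement for the ring slot - on failure the
         * slot is simply retried on a later interrupt */
        dma_unmap_single(d, addr, rx_buf_sz, DMA_FROM_DEVICE);
        skb = ring_skb;
        tp->Rx_skbuff[entry] = rtl_alloc_rx_skb(tp, desc);
    }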

I think (not sure) this was related to a bug
   Bug 629158 Network adapter "disappears" after resuming from acpi suspend
although the change from using GFP_ATOMIC to using GFP_KERNEL for the
initial allocation was made (I believe) in
   2010-04-30 Eric Dumazet r8169: Fix rtl8169_rx_interrupt()

So I am not entirely certain of the motivation for the removal of 
rx_copybreak and the "always memcpy".  But I believe it is not 
necessary,  at least not on all (most?) systems, and I have measured an 
11% increase in throughput (aggregate Mbits/sec) on a heavy network 
benchmark on my own machine,  on 2.6.39-rc2 with rps/rfs enabled, by 
patching the code back to a 2.6.39 equivalent of rx_copybreak.

oprofile shows the improvement is all or mostly obtained from avoiding 
the memcpy'ing.  And of course, since the memcpy'ing is done in the 
driver before netif/rps gets the packet,  all that memcpy'ing is done on 
CPU0    (true, I think, on any SMP machine with an rtl8169?)

The measurements are :
                              aggregate (rx+tx) Mbits/sec through the NIC
     2.6.39-rc2 unpatched                                  918
     2.6.39-rc2   patched,  rx_copybreak=64               1026

packet rates fluctuate around 60K - 70K pkts/sec rx plus 60K - 70K 
pkts/sec tx

I would like to propose that we :
   .  switch back to wrapping data buffers in skb's (so we have the 
option of copying or unhooking)
   .  re-introduce the rx_copybreak module param so each sysadmin can 
control it if wished.

That is my main proposal,    and I would be interested to hear thoughts 
on that.

As a second proposal (which I made before),  I'd like to suggest that 
the number of rx buffers allocated at open should be configurable by 
module param.     This is not needed for my other proposal but may help 
reduce the possibility of temporary shortage of buffers.     There is 
really no justification for assuming that 256 buffers is the correct 
number for all systems from a netbook to a 32-way server.     I ran my 
"patched" measurement with num_rx_buffs=128 and there were no alloc 
failures logged.     I would say that if there is any enthusiasm for the 
main avoid_memcpy proposal,   then it is worth the extra small effort to 
do the num_rx_buffs as well.

My current patch (including configurable num_rx_buffs) -
lines deleted   120
lines added     668

BTW if this proposal is acceptable,  I'm willing to do the patch work 
but I have only one machine with an r8169 (actually a RTL8168c) to test on.

John Lumby


* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-18 17:08 r8169 : always copying the rx buffer to new skb John Lumby
@ 2011-04-18 17:27 ` Ben Hutchings
  2011-04-18 21:26   ` John Lumby
  2011-04-18 18:21 ` Francois Romieu
  1 sibling, 1 reply; 16+ messages in thread
From: Ben Hutchings @ 2011-04-18 17:27 UTC (permalink / raw)
  To: John Lumby; +Cc: netdev

On Mon, 2011-04-18 at 13:08 -0400, John Lumby wrote:
> Summary  -  current r8169 always memcpy's every received buffer to new skb,
>              and I'd like to propose that it should not,
>              which can improve throughput / reduce CPU utilization by 
> around 10%

At least some variants of the hardware have a bug in RX DMA scatter that
can be exploited for denial-of-service.  The workaround is to allocate
huge RX buffers (16K).  Then, to avoid allocation failures later on (and
to save memory) the buffers must be copied rather than passed up the
stack and reallocated.

[...]
> As a second proposal (which I made before),  I'd like to suggest that 
> the number of rx buffers allocated at open should be configurable by 
> module param.
[...]

No, it should implement the ethtool set_ringparam operation.
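
Roughly (a skeleton only;  the real prototypes are in
include/linux/ethtool.h and the validation/resize logic is yours to
write):

    static void rtl8169_get_ringparam(struct net_device *dev,
                                      struct ethtool_ringparam *ring)
    {
        /* report current and maximum rx/tx ring sizes */
    }

    static int rtl8169_set_ringparam(struct net_device *dev,
                                     struct ethtool_ringparam *ring)
    {
        /* validate ring->rx_pending / ring->tx_pending and resize
         * (or record the request);  return 0 or -EINVAL */
        return 0;
    }

    /* and in rtl8169_ethtool_ops :
     *    .get_ringparam = rtl8169_get_ringparam,
     *    .set_ringparam = rtl8169_set_ringparam,
     */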

Ben.

-- 
Ben Hutchings, Senior Software Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.



* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-18 17:08 r8169 : always copying the rx buffer to new skb John Lumby
  2011-04-18 17:27 ` Ben Hutchings
@ 2011-04-18 18:21 ` Francois Romieu
  1 sibling, 0 replies; 16+ messages in thread
From: Francois Romieu @ 2011-04-18 18:21 UTC (permalink / raw)
  To: John Lumby; +Cc: netdev, nic_swsd

John Lumby <johnlumby@hotmail.com> :
> Summary  -  current r8169 always memcpy's every received buffer to new skb,
>             and I'd like to propose that it should not,
>             which can improve throughput / reduce CPU utilization by
> around 10%

Disclaimer: I'll read your suggestion more carefully when I am back home.
I am late and I could be off-topic.

> I think (not sure) this was related to a bug

Short answer: it's mostly related to CVE-2009-4537 (see git log).

I may resurrect some alternate fix - i.e. detect corrupted Tx descriptors
and reset before things get wrong - but it is not easy to prove it right
since it may be necessary to tailor it for each member of the 816x / 810x
family. Some input from Realtek may help though.

-- 
Ueimor


* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-18 17:27 ` Ben Hutchings
@ 2011-04-18 21:26   ` John Lumby
  2011-04-20 19:13     ` Francois Romieu
  0 siblings, 1 reply; 16+ messages in thread
From: John Lumby @ 2011-04-18 21:26 UTC (permalink / raw)
  To: netdev; +Cc: Ben Hutchings, Francois Romieu, nic_swsd

On 04/18/11 13:27, Ben Hutchings wrote:
> At least some variants of the hardware have a bug [...] avoid allocation failures later on (and
> to save memory) the buffers must be copied rather than passed up the
> stack and reallocated.

Yes,  I can see that the always-copy method eliminates all possibility 
of an allocation failure,   but an *occasional* allocation failure does 
no harm  -  the driver just retains ownership of that descriptor and 
tries again on the next rx_interrupt.     With an rx ring of N buffers,  
it would take something like N-(small_number) consecutive allocation 
failures before a failure is exposed up to the application.    
That's the way the code used to work and the way I've re-patched it to 
work,  and I've  verified that on my 8168c by simulating an allocation 
failure on 15 out of every 16 rx-Interrupts  (unhooking the current skb 
and then simply not allocating a new skb and not giving the 
corresponding descriptor to the asic) and everything works just fine,  
with just a slight drop in throughput (down to 987 Mbits/sec,  still 
well ahead of the always-copy).

So do we really need to be that concerned about occasional allocation 
failure?

And if someone is that concerned,   then,   with my proposal,  they can 
leave the rx_copybreak at its default of 16383,   when every packet is 
copied anyway.     (My patch takes a slightly different approach if the 
allocation of the new skb fails  -   current 2.6.39 drops the packet,   
I would propose to unhook and retain the descriptor because I can 
replenish later  -  but that is also debatable).     Also that's why I 
favour making the rx ring size configurable.

On 04/18/11 14:21, Francois Romieu wrote:
> Short answer: it's mostly related to CVE-2009-4537 (see git log).

I understand the need to make the rx_buf_size 16383 to defeat the DoS 
attacker,   no suggestion to alter that.     I'm just not sure I see why 
that has to imply the always_copy.
> I may resurrect some alternate fix - i.e. detect corrupted Tx descriptors
> and reset before things get wrong - but it is not easy to prove it right
> since it may be necessary to tailor it for each member of the 816x / 810x
> family. Some input from Realtek may help though.
>
Yes,  the more input the better,    and especially I recognize that I have 
tested only my RTL8168c and maybe other models behave differently.

On 04/18/11 13:27, Ben Hutchings wrote:
>> the number of rx buffers allocated at open should be configurable by
>> module param.
> [...]
>
> No, it should implement the ethtool set_ringparam operation.
>
Ah  ok    thanks.     I'll take a look at that.

John Lumby




* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-18 21:26   ` John Lumby
@ 2011-04-20 19:13     ` Francois Romieu
  2011-04-21  3:41       ` John Lumby
  2011-04-21  3:52       ` John Lumby
  0 siblings, 2 replies; 16+ messages in thread
From: Francois Romieu @ 2011-04-20 19:13 UTC (permalink / raw)
  To: John Lumby; +Cc: netdev, Ben Hutchings, nic_swsd

John Lumby <johnlumby@hotmail.com> :
[...]
> I've  verified that on my 8168c by simulating an allocation failure
> on 15 out of every 16 rx-Interrupts  (unhooking the current skb and
> then simply not allocating a new skb and not giving the
> corresponding descriptor to the asic) and everything works just
> fine,  with just a slight drop in throughput (down to 987 Mbits/sec,
> still well ahead of the always-copy).

Did your testing account for some memory pressure ?

> So do we really need to be that concerned about occasional
> allocation failure?

See $search_engine +r8169 high order memory allocation failure.

> And if someone is that concerned,   then,   with my proposal,  they
> can leave the rx_copybreak at its default of 16383,   when every
> packet is copied anyway.     (My patch takes a slightly different
> approach if the allocation of the new skb fails  -   current 2.6.39
> drops the packet,   I would propose to unhook and retain the
> descriptor because I can replenish later  -  but that is also
> debatable).     Also that's why I favour making the rx ring size
> configurable.

Why don't you send the patch through the mailing list ?

(hint, hint)

> On 04/18/11 14:21, Francois Romieu wrote:
> >Short answer: it's mostly related to CVE-2009-4537 (see git log).
> 
> I understand the need to make the rx_buf_size 16383 to defeat the
> DoS attacker,   no suggestion to alter that.     I'm just not sure I
> see why that has to imply the always_copy.

Because of high-order memory allocation failure under memory pressure and
memory wastage. Btw several 816x have limited jumbo frame abilities.

-- 
Ueimor


* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-20 19:13     ` Francois Romieu
@ 2011-04-21  3:41       ` John Lumby
  2011-04-21  3:52       ` John Lumby
  1 sibling, 0 replies; 16+ messages in thread
From: John Lumby @ 2011-04-21  3:41 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, Ben Hutchings, nic_swsd

On 04/20/11 15:13, Francois Romieu wrote:
> John Lumby<johnlumby@hotmail.com>  :
> [...]
>> I've  verified that [...] and everything works just
>> fine,
> Did your testing account for some memory pressure ?

No  -  that is something I need to do.     In my tests the paging rate 
was light.     I will try a squeeze test to see what happens.

>> So do we really need to be that concerned about occasional
>> allocation failure?
> See $search_engine +r8169 high order memory allocation failure.

The bug reports I see are all related to the problem that occurred at 
open in the days when init_ring() requested GFP_ATOMIC *and* 
insisted that *all* 256 buffers must be obtained or else failed the open.
I take your point,  but I think it is different in the interrupt case  
-   the rx_interrupt does not consider a single alloc failure to trigger 
any visible failure.   In the old code (and my patched code),  it just tries 
again next time.   In the current code,  it can't do that since it has 
no skb to pass up,  so it drops the packet.

Have I missed some other bug reports on alloc failures?

But I also realize that (I guess) few sysadmins may ever have set 
rx_copybreak down low in the days when the parameter was there,   so 
maybe we just don't really know how many problems would arise with 
it.     So I would propose leaving it to default to 16383 as before.

> Why don't you send the patch through the mailing list ?
>
> (hint, hint)

In my next post.   Still on 2.6.39-rc2 and too late for me to try merging 
to latest right now;  I hope it is still useful.

>> I'm just not sure I
>> see why that has to imply the always_copy.
>
> Because of high-order memory allocation failure under memory pressure and
> memory wastage.

I think memory allocation failure can occur regardless of code design  
-  old,  current and my patched code all do the same amount of dynamic 
alloc'ing of skbs.   It's just the initial set in the ring which is 
different.   In the current code,  alloc failure => we drop packets.    
Maybe dropping many packets causes more trouble,  owing to 
re-transmits,  than not replenishing the rx buffers temporarily.    
Although I guess the NIC stays up with the current code.   Not really sure.

> Btw several 816x have limited jumbo frame abilities.

Is the point that there is some special significance for memory 
allocation?   Not sure about that  -  I mean the total data rate per sec 
is limited to 1Gbit nominal in each direction regardless of the size of 
the data buffers  (I think).     So naively,  jumbo will not put more memory 
pressure than 1500 will it?    Or are you thinking about testing?

John Lumby


* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-20 19:13     ` Francois Romieu
  2011-04-21  3:41       ` John Lumby
@ 2011-04-21  3:52       ` John Lumby
  2011-04-27  2:18         ` John Lumby
  1 sibling, 1 reply; 16+ messages in thread
From: John Lumby @ 2011-04-21  3:52 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, Ben Hutchings, nic_swsd

On 04/20/11 15:13, Francois Romieu wrote:
>
> Why don't you send the patch through the mailing list ?
>
> (hint, hint)
>

Based on 2.6.39-rc2.

It also has changes for ethtool  -
    .    get and set ring parms (suggested by Ben)
    .    get and set rx_copybreak   -   not sure if this is a good idea or not,
           as it's a driver parm,  not a NIC setting,
           but there are 22 net drivers that have the parm so I thought it
           might be useful.

-------------------------------------------------------------------------------------
--- linux-2.6.39-rc2FCrtl/drivers/net/r8169.c.orig    2011-04-05 21:30:43.000000000 -0400
+++ linux-2.6.39-rc2FCrtl/drivers/net/r8169.c    2011-04-20 21:34:42.000000000 -0400
@@ -56,7 +56,7 @@
      (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN)

  #define TX_BUFFS_AVAIL(tp) \
-    (tp->dirty_tx + NUM_TX_DESC - tp->cur_tx - 1)
+    (tp->dirty_tx + tp->num_tx_allocd - tp->cur_tx - 1)

  /* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
     The RTL chips use a 64 element hash table based on the Ethernet CRC. */
@@ -74,11 +74,19 @@ static const int multicast_filter_limit

  #define R8169_REGS_SIZE        256
  #define R8169_NAPI_WEIGHT    64
-#define NUM_TX_DESC    64    /* Number of Tx descriptor registers */
-#define NUM_RX_DESC    256    /* Number of Rx descriptor registers */
-#define RX_BUF_SIZE    1536    /* Rx Buffer size */
-#define R8169_TX_RING_BYTES    (NUM_TX_DESC * sizeof(struct TxDesc))
-#define R8169_RX_RING_BYTES    (NUM_RX_DESC * sizeof(struct RxDesc))
+/*  #define NUM_TX_DESC    64    Number of Tx descriptor registers is now based on variable num_tx_allocd */
+/*  #define NUM_RX_DESC    256    Number of in-use Rx descriptor registers is now based on variable num_rx_allocd :
+                                  see comments attached to definition of that variable */
+#define MIN_NUM_RX_DESC 16    /*   minimum number of Rx descriptor registers with which the chip can operate ? */
+#define MAX_NUM_RX_DESC 256    /*   maximum number of Rx descriptor registers with which the chip can operate ? */
+#define MIN_NUM_TX_DESC 16    /*   minimum number of Tx descriptor registers with which the chip can operate ? */
+#define MAX_NUM_TX_DESC 64    /*   maximum number of Tx descriptor registers with which the chip can operate ? */
+
+                /*  number of in-use Rx descriptors is based on variable num_rx_allocd
+                 **  and num_rx_allocd is always <= num_rx_requested value
+                 */
+#define R8169_RX_RING_BYTES    (tp->num_rx_requested * sizeof(struct RxDesc))
+#define R8169_TX_RING_BYTES    (tp->num_tx_requested * sizeof(struct TxDesc))

  #define RTL8169_TX_TIMEOUT    (6*HZ)
  #define RTL8169_PHY_TIMEOUT    (10*HZ)
@@ -198,12 +206,23 @@ static DEFINE_PCI_DEVICE_TABLE(rtl8169_p

  MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);

-static int rx_buf_sz = 16383;
+static const int rx_buf_sz = 16383;
+/*
+ *  we set our default copybreak very high to eliminate
+ *  the possibility of running out of receive buffers.
+ *  HOWEVER lowering it will reduce memcpying
+ *  and may improve performance significantly.
+ */
+static int rx_copybreak = 16383;
  static int use_dac;
  static struct {
      u32 msg_enable;
-} debug = { -1 };
+} debug = {
+-1};

+#ifdef RTL8169_DEBUG
+static int simulate_alloc_fail = 0;    /*  set to (P-1) to fail alloc on all except every P attempts */
+#endif /* RTL8169_DEBUG */
  enum rtl_registers {
      MAC0        = 0,    /* Ethernet hardware address. */
      MAC4        = 4,
@@ -522,16 +541,50 @@ struct rtl8169_private {
      u32 msg_enable;
      int chipset;
      int mac_version;
-    u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
-    u32 cur_tx; /* Index into the Tx descriptor buffer of next Rx pkt. */
-    u32 dirty_rx;
-    u32 dirty_tx;
+
+    /*   Note - re number of Rx/Tx descriptor buffers allocated :
+     **    we maintain two values per ring  -   requested and allocd.
+     **    requested can be set by ethtool and defaults to the max permitted
+     **    allocd is the number actually obtained at open and may be less than
+     **    requested,  but provided it is at least the minimum required, we'll continue.
+     **    ethtool setting is asynchronous and takes effect at next open.
+     **    The num_xx_allocd count is used as modulus for
+     **    locating active entries in the array using logic like this snippet
+     **    in rtl8169_rx_interrupt  :
+     **               entry = cur_rx % num_rx_allocd;
+     **    The size of each array of per-ring-element thingy is always the maximum.
+     **
+     **    at present,  with the tx ring info embedded in private,
+     **    it is a bit silly pretending to provide a settable tx_requested,
+     **    but if desired,  at expense of extra ptr deref,
+     **    could change it to an array of pointers
+     */
+    u32 num_tx_requested;    /*   num Tx buffers requested */
+    u32 num_rx_requested;    /*   num Rx buffers requested */
+    u32 num_tx_allocd;    /*   num Tx descriptor buffers allocated */
+    u32 num_rx_allocd;    /*   num Rx descriptor buffers allocated */
+
+    /*    note - the following two counters are monotonically-ascending - can be thought of
+     **           as the count of number of buffers which the hardware has accessed.
+     */
+    u32 cur_rx;        /* Index of next Rx pkt. */
+    u32 cur_tx;        /* Index of next Tx pkt. */
+
+    u32 totl_rx_replenished;    /*  monotonically-ascending count of replenished buffers */
+    u32 replenish_rx_cursor;    /*  Index of next Rx pkt. to replenish (modulo,  not monotonic) */
+    /* the following counts pkts copied as opposed to uncopied (unhooked) */
+    /*  note  -  count of uncopied packets = cur_rx - copied_rx_pkt_count */
+    u32 copied_rx_pkt_count;    /*  total pkts copied to new skb  */
+    u32 totl_rx_alloc_fail;    /*  rx alloc failures */
+    u32 dirty_tx;        /*  monotonic count of transmitted packets (or fragments?) */
      struct TxDesc *TxDescArray;    /* 256-aligned Tx descriptor ring */
      struct RxDesc *RxDescArray;    /* 256-aligned Rx descriptor ring */
      dma_addr_t TxPhyAddr;
      dma_addr_t RxPhyAddr;
-    void *Rx_databuff[NUM_RX_DESC];    /* Rx data buffers */
-    struct ring_info tx_skb[NUM_TX_DESC];    /* Tx data buffers */
+    struct sk_buff *Rx_skbuff[MAX_NUM_RX_DESC];    /* Rx data buffers */
+    struct ring_info tx_skb[MAX_NUM_TX_DESC];    /* Tx data buffers */
+
+    unsigned align;
      struct timer_list timer;
      u16 cp_cmd;
      u16 intr_event;
@@ -569,6 +622,14 @@ struct rtl8169_private {

  MODULE_AUTHOR("Realtek and the Linux r8169 crew <netdev@vger.kernel.org>");
  MODULE_DESCRIPTION("RealTek RTL-8169 Gigabit Ethernet driver");
+module_param(rx_copybreak, int, 0);
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+#ifdef RTL8169_DEBUG
+module_param(simulate_alloc_fail, int, 0);
+MODULE_PARM_DESC(simulate_alloc_fail,
+         "set to (2**P - 1) eg 15, to fail alloc rx skb on all except 
every 2**P attempts");
+#endif /* RTL8169_DEBUG */
+
  module_param(use_dac, int, 0);
  MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
  module_param_named(debug, debug.msg_enable, int, 0);
@@ -583,7 +644,7 @@ static int rtl8169_open(struct net_devic
  static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
                        struct net_device *dev);
  static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance);
-static int rtl8169_init_ring(struct net_device *dev);
+static int rtl8169_init_ring(struct rtl8169_private *tp);
  static void rtl_hw_start(struct net_device *dev);
  static int rtl8169_close(struct net_device *dev);
  static void rtl_set_rx_mode(struct net_device *dev);
@@ -1242,6 +1303,15 @@ static int rtl8169_set_settings(struct n
      spin_lock_irqsave(&tp->lock, flags);
      ret = rtl8169_set_speed(dev,
          cmd->autoneg, cmd->speed, cmd->duplex, cmd->advertising);
+
+    /*   check that ethtool has set a copybreak value before accepting it */
+    if ( (cmd->supported & (SUPPORTED_cmd_extension |
+                   SUPPORTED_cmd_extension_rx_copybreak))
+         && (cmd->rx_copybreak <= rx_buf_sz) ) {
+        rx_copybreak = cmd->rx_copybreak;
+        netif_info(tp, drv, dev, "set rx_copybreak to %d\n",
+               rx_copybreak);
+    }
      spin_unlock_irqrestore(&tp->lock, flags);

      return ret;
@@ -1254,6 +1324,49 @@ static u32 rtl8169_get_rx_csum(struct ne
      return tp->cp_cmd & RxChkSum;
  }

+static void rtl8169_get_ringparam(struct net_device *netdev,
+                  struct ethtool_ringparam *ring)
+{
+    struct rtl8169_private *tp = netdev_priv(netdev);
+
+    ring->rx_max_pending = MAX_NUM_RX_DESC;
+    ring->tx_max_pending = MAX_NUM_TX_DESC;
+    ring->rx_mini_max_pending = 0;
+    ring->rx_jumbo_max_pending = 0;
+    ring->rx_pending = tp->num_rx_allocd;
+    ring->tx_pending = tp->num_tx_allocd;
+    ring->rx_mini_pending = 0;
+    ring->rx_jumbo_pending = 0;
+}
+
+static int rtl8169_set_ringparam(struct net_device *netdev,
+                 struct ethtool_ringparam *ring)
+{
+    struct rtl8169_private *tp = netdev_priv(netdev);
+
+    if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+        return -EINVAL;
+
+    /*  I am not sure about closing and opening the NIC here
+     *  so will leave the change pending for next open
+     */
+
+    tp->num_rx_requested = ((ring->rx_pending < MIN_NUM_RX_DESC) ?
+                MIN_NUM_RX_DESC :
+                ((ring->rx_pending > MAX_NUM_RX_DESC) ?
+                 MAX_NUM_RX_DESC : ring->rx_pending));
+    tp->num_tx_requested = ((ring->tx_pending < MIN_NUM_TX_DESC) ?
+                MIN_NUM_TX_DESC :
+                ((ring->tx_pending > MAX_NUM_TX_DESC) ?
+                 MAX_NUM_TX_DESC : ring->tx_pending));
+
+    netif_info(tp, drv, netdev,
+           "Ring sizes to be requested at next open: num rx: %d, num tx 
%d\n",
+           tp->num_rx_requested, tp->num_tx_requested);
+
+    return 0;
+}
+
  static int rtl8169_set_rx_csum(struct net_device *dev, u32 data)
  {
      struct rtl8169_private *tp = netdev_priv(dev);
@@ -1351,6 +1464,13 @@ static int rtl8169_get_settings(struct n

      rc = tp->get_settings(dev, cmd);

+    /* inform about returning extended info - rx_copybreak
+     * and initialize so we can detect if set to new val by ethtool
+         */
+    cmd->rx_copybreak = rx_copybreak;
+    cmd->supported |= SUPPORTED_cmd_extension;
+    cmd->supported &= ~SUPPORTED_cmd_extension_rx_copybreak;
+
      spin_unlock_irqrestore(&tp->lock, flags);
      return rc;
  }
@@ -1397,6 +1517,11 @@ static const char rtl8169_gstrings[][ETH
      "multicast",
      "tx_aborted",
      "tx_underrun",
+    /*  extras maintained in driver code */
+    "tot rx intrpts",
+    "tot rx copied",
+    "tot rx replenished",
+    "tot rx alloc_fail"
  };

  static int rtl8169_get_sset_count(struct net_device *dev, int sset)
@@ -1472,9 +1597,15 @@ static void rtl8169_get_ethtool_stats(st
      data[10] = le32_to_cpu(tp->counters.rx_multicast);
      data[11] = le16_to_cpu(tp->counters.tx_aborted);
      data[12] = le16_to_cpu(tp->counters.tx_underun);
+    /*  extras maintained in driver code */
+    data[13] = tp->cur_rx;
+    data[14] = tp->copied_rx_pkt_count;
+    data[15] = tp->totl_rx_replenished;
+    data[16] = tp->totl_rx_alloc_fail;
  }

-static void rtl8169_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+static void rtl8169_get_strings(struct net_device *dev, u32 stringset,
+                u8 * data)
  {
      switch(stringset) {
      case ETH_SS_STATS:
@@ -1516,6 +1647,8 @@ static const struct ethtool_ops rtl8169_
      .get_rx_csum        = rtl8169_get_rx_csum,
      .set_rx_csum        = rtl8169_set_rx_csum,
      .set_tx_csum        = ethtool_op_set_tx_csum,
+    .get_ringparam        = rtl8169_get_ringparam,
+    .set_ringparam        = rtl8169_set_ringparam,
      .set_sg            = ethtool_op_set_sg,
      .set_tso        = ethtool_op_set_tso,
      .get_regs        = rtl8169_get_regs,
@@ -3060,6 +3193,10 @@ rtl8169_init_one(struct pci_dev *pdev, c
      tp->pci_dev = pdev;
      tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);

+    tp->num_rx_allocd = tp->num_tx_allocd = 0;
+    tp->num_rx_requested = MAX_NUM_RX_DESC;
+    tp->num_tx_requested = MAX_NUM_TX_DESC;
+
      mii = &tp->mii;
      mii->dev = dev;
      mii->mdio_read = rtl_mdio_read;
@@ -3229,6 +3366,7 @@ rtl8169_init_one(struct pci_dev *pdev, c
      dev->features |= NETIF_F_HW_VLAN_TX_RX | NETIF_F_GRO;

      tp->intr_mask = 0xffff;
+    tp->align = cfg->align;
      tp->hw_start = cfg->hw_start;
      tp->intr_event = cfg->intr_event;
      tp->napi_event = cfg->napi_event;
@@ -3326,7 +3464,7 @@ static int rtl8169_open(struct net_devic
      if (!tp->RxDescArray)
          goto err_free_tx_0;

-    retval = rtl8169_init_ring(dev);
+    retval = rtl8169_init_ring(tp);
      if (retval < 0)
          goto err_free_rx_1;

@@ -4071,14 +4209,15 @@ static inline void rtl8169_make_unusable
      desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
  }

-static void rtl8169_free_rx_databuff(struct rtl8169_private *tp,
-                     void **data_buff, struct RxDesc *desc)
+static void rtl8169_free_rx_skb(struct rtl8169_private *tp,
+                struct sk_buff **sk_buff, struct RxDesc *desc)
  {
-    dma_unmap_single(&tp->pci_dev->dev, le64_to_cpu(desc->addr), rx_buf_sz,
-             DMA_FROM_DEVICE);
+    struct pci_dev *pdev = tp->pci_dev;

-    kfree(*data_buff);
-    *data_buff = NULL;
+    dma_unmap_single(&pdev->dev, le64_to_cpu(desc->addr), rx_buf_sz,
+             PCI_DMA_FROMDEVICE);
+    dev_kfree_skb(*sk_buff);    /* also frees the data buffer! */
+    *sk_buff = NULL;
      rtl8169_make_unusable_by_asic(desc);
  }

@@ -4102,28 +4241,25 @@ static inline void *rtl8169_align(void *
      return (void *)ALIGN((long)data, 16);
  }

-static struct sk_buff *rtl8169_alloc_rx_data(struct rtl8169_private *tp,
-                         struct RxDesc *desc)
+static struct sk_buff *rtl8169_alloc_rx_skb(struct rtl8169_private *tp,
+                        struct RxDesc *desc, gfp_t gfp)
  {
-    void *data;
+    struct sk_buff *skb;
      dma_addr_t mapping;
      struct device *d = &tp->pci_dev->dev;
      struct net_device *dev = tp->dev;
-    int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
+    unsigned int pad;

-    data = kmalloc_node(rx_buf_sz, GFP_KERNEL, node);
-    if (!data)
-        return NULL;
+    pad = tp->align ? tp->align : NET_IP_ALIGN;

-    if (rtl8169_align(data) != data) {
-        kfree(data);
-        data = kmalloc_node(rx_buf_sz + 15, GFP_KERNEL, node);
-        if (!data)
-            return NULL;
-    }
+    skb = __netdev_alloc_skb(dev, rx_buf_sz + pad, gfp);
+    if (!skb)
+        goto err_out;
+
+    skb_reserve(skb,
+            tp->align ? ((pad - 1) & (unsigned long)skb->data) : pad);

-    mapping = dma_map_single(d, rtl8169_align(data), rx_buf_sz,
-                 DMA_FROM_DEVICE);
+    mapping = dma_map_single(d, skb->data, rx_buf_sz, DMA_FROM_DEVICE);
      if (unlikely(dma_mapping_error(d, mapping))) {
          if (net_ratelimit())
              netif_err(tp, drv, tp->dev, "Failed to map RX DMA!\n");
@@ -4131,23 +4267,25 @@ static struct sk_buff *rtl8169_alloc_rx_
      }

      rtl8169_map_to_asic(desc, mapping, rx_buf_sz);
-    return data;
+out:
+    return skb;

  err_out:
-    kfree(data);
-    return NULL;
+    rtl8169_make_unusable_by_asic(desc);
+    goto out;
  }

  static void rtl8169_rx_clear(struct rtl8169_private *tp)
  {
      unsigned int i;

-    for (i = 0; i < NUM_RX_DESC; i++) {
-        if (tp->Rx_databuff[i]) {
-            rtl8169_free_rx_databuff(tp, tp->Rx_databuff + i,
+    for (i = 0; i < tp->num_rx_allocd; i++) {
+        if (tp->Rx_skbuff[i]) {
+            rtl8169_free_rx_skb(tp, tp->Rx_skbuff + i,
                          tp->RxDescArray + i);
          }
      }
+    tp->num_rx_allocd = 0;    /*  no rx descriptors allocated any more ! */
  }

  static inline void rtl8169_mark_as_last_descriptor(struct RxDesc *desc)
@@ -4155,47 +4293,92 @@ static inline void rtl8169_mark_as_last_
      desc->opts1 |= cpu_to_le32(RingEnd);
  }

-static int rtl8169_rx_fill(struct rtl8169_private *tp)
+/*   rtl8169_rx_fill : allocate num_to_alloc rx skb buffers to rx descriptors
+ *   starting with descriptor first_desc.
+ *   this function operates in one of two slightly different modes,
+ *   depending on whether the num_replenished parm is zero or not :
+ *      zero     -  traverse a fixed number of buffers specified by num_to_alloc,
+ *                  allocating those which are empty;
+ *      non-zero -  traverse as many buffers as needed
+ *                  to replenish num_replenished empty buffers,
+ *                  and update the parm with number actually replenished.
+ *   in each case,  stop if unable to allocate,
+ *   and in each case return number of buffers traversed.
+ */
+static u32 rtl8169_rx_fill(struct rtl8169_private *tp, u32 first_desc,
+               u32 num_to_alloc, u32 * num_replenished, gfp_t gfp)
  {
-    unsigned int i;
+    unsigned int this_desc_index;    /*   loop through on this */
+    u32 count_allocd;    /*   count allocd */
+    u32 num_traversed;    /*   count num traversed */
+
+    for (count_allocd = 0, num_traversed = 0, this_desc_index = first_desc;
+         ((num_replenished && (count_allocd < *num_replenished))
+          || (num_traversed < num_to_alloc)
+         ); num_traversed++) {
+        struct sk_buff *skb;

-    for (i = 0; i < NUM_RX_DESC; i++) {
-        void *data;
+        if (tp->Rx_skbuff[this_desc_index] == (struct sk_buff *)0) {    /* bypass if allocd */

-        if (tp->Rx_databuff[i])
-            continue;
+            skb =
+                rtl8169_alloc_rx_skb(tp,
+                         tp->RxDescArray +
+                         this_desc_index, gfp);
+            if (!skb)
+                break;

-        data = rtl8169_alloc_rx_data(tp, tp->RxDescArray + i);
-        if (!data) {
-            rtl8169_make_unusable_by_asic(tp->RxDescArray + i);
-            goto err_out;
-        }
-        tp->Rx_databuff[i] = data;
+            tp->Rx_skbuff[this_desc_index] = skb;
+            count_allocd++;
      }

-    rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
-    return 0;
+        /*  increment this_desc_index allowing for modulo num_rx_allocd if latter is > 0
+         *  also ensuring we stop after one complete circuit
+         */
+        this_desc_index++;
+        if (this_desc_index == tp->num_rx_allocd) {
+            this_desc_index = 0;
+        }
+        if (this_desc_index == first_desc) {
+            break;
+        }
+    }

-err_out:
-    rtl8169_rx_clear(tp);
-    return -ENOMEM;
+    if (num_replenished)
+        *num_replenished = count_allocd;
+    return num_traversed;
  }

  static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
  {
-    tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+    tp->dirty_tx = tp->totl_rx_replenished = tp->totl_rx_alloc_fail =
+        tp->cur_tx = tp->cur_rx = tp->replenish_rx_cursor = 0;
  }

-static int rtl8169_init_ring(struct net_device *dev)
+static int rtl8169_init_ring(struct rtl8169_private *tp)
  {
-    struct rtl8169_private *tp = netdev_priv(dev);

      rtl8169_init_ring_indexes(tp);

-    memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
-    memset(tp->Rx_databuff, 0x0, NUM_RX_DESC * sizeof(void *));
+    memset(tp->tx_skb, 0x0, MAX_NUM_TX_DESC * sizeof(struct ring_info));
+    memset(tp->Rx_skbuff, 0x0, MAX_NUM_RX_DESC * sizeof(struct sk_buff *));
+    tp->copied_rx_pkt_count = 0;
+
+    /*  see comment preceding defn of num_tx_requested */
+    tp->num_tx_allocd = tp->num_tx_requested;
+    tp->num_rx_allocd =
+        rtl8169_rx_fill(tp, 0, (u32) tp->num_rx_requested, 0, GFP_KERNEL);
+    printk(KERN_INFO "%s num_rx_requested= %d num_rx_allocd= %d\n",
+           MODULENAME, (u32) tp->num_rx_requested, tp->num_rx_allocd);
+    if (tp->num_rx_allocd < MIN_NUM_RX_DESC)
+        goto err_out;
+
+    rtl8169_mark_as_last_descriptor(tp->RxDescArray + tp->num_rx_allocd - 1);

-    return rtl8169_rx_fill(tp);
+    return 0;
+
+err_out:
+    rtl8169_rx_clear(tp);
+    return -ENOMEM;
  }

  static void rtl8169_unmap_tx_skb(struct device *d, struct ring_info *tx_skb,
@@ -4217,7 +4400,7 @@ static void rtl8169_tx_clear_range(struc
      unsigned int i;

      for (i = 0; i < n; i++) {
-        unsigned int entry = (start + i) % NUM_TX_DESC;
+        unsigned int entry = (start + i) % tp->num_tx_allocd;
          struct ring_info *tx_skb = tp->tx_skb + entry;
          unsigned int len = tx_skb->len;

@@ -4237,7 +4420,7 @@ static void rtl8169_tx_clear_range(struc

  static void rtl8169_tx_clear(struct rtl8169_private *tp)
  {
-    rtl8169_tx_clear_range(tp, tp->dirty_tx, NUM_TX_DESC);
+    rtl8169_tx_clear_range(tp, tp->dirty_tx, tp->num_tx_allocd);
      tp->cur_tx = tp->dirty_tx = 0;
  }

@@ -4310,7 +4493,7 @@ static void rtl8169_reset_task(struct wo
      rtl8169_rx_interrupt(dev, tp, tp->mmio_addr, ~(u32)0);
      rtl8169_tx_clear(tp);

-    if (tp->dirty_rx == tp->cur_rx) {
+    if (tp->totl_rx_replenished == tp->cur_rx) {
          rtl8169_init_ring_indexes(tp);
          rtl_hw_start(dev);
          netif_wake_queue(dev);
@@ -4350,7 +4533,7 @@ static int rtl8169_xmit_frags(struct rtl
          u32 status, len;
          void *addr;

-        entry = (entry + 1) % NUM_TX_DESC;
+        entry = (entry + 1) % tp->num_tx_allocd;

          txd = tp->TxDescArray + entry;
          len = frag->size;
@@ -4364,7 +4547,9 @@ static int rtl8169_xmit_frags(struct rtl
          }

          /* anti gcc 2.95.3 bugware (sic) */
-        status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+        status =
+            opts1 | len | (RingEnd *
+                   !((entry + 1) % tp->num_tx_allocd));

          txd->opts1 = cpu_to_le32(status);
          txd->addr = cpu_to_le64(mapping);
@@ -4408,7 +4593,7 @@ static netdev_tx_t rtl8169_start_xmit(st
                        struct net_device *dev)
  {
      struct rtl8169_private *tp = netdev_priv(dev);
-    unsigned int entry = tp->cur_tx % NUM_TX_DESC;
+    unsigned int entry = tp->cur_tx % tp->num_tx_allocd;
      struct TxDesc *txd = tp->TxDescArray + entry;
      void __iomem *ioaddr = tp->mmio_addr;
      struct device *d = &tp->pci_dev->dev;
@@ -4418,7 +4603,8 @@ static netdev_tx_t rtl8169_start_xmit(st
      int frags;

      if (unlikely(TX_BUFFS_AVAIL(tp) < skb_shinfo(skb)->nr_frags)) {
-        netif_err(tp, drv, dev, "BUG! Tx Ring full when queue awake!\n");
+        netif_err(tp, drv, dev,
+              "BUG! Tx Ring full when queue awake!\n");
          goto err_stop_0;
      }

@@ -4452,7 +4638,7 @@ static netdev_tx_t rtl8169_start_xmit(st
      wmb();

      /* anti gcc 2.95.3 bugware (sic) */
-    status = opts1 | len | (RingEnd * !((entry + 1) % NUM_TX_DESC));
+    status = opts1 | len | (RingEnd * !((entry + 1) % tp->num_tx_allocd));
      txd->opts1 = cpu_to_le32(status);

      tp->cur_tx += frags + 1;
@@ -4512,11 +4698,13 @@ static void rtl8169_pcierr_interrupt(str

      pci_write_config_word(pdev, PCI_STATUS,
          pci_status & (PCI_STATUS_DETECTED_PARITY |
-        PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT |
-        PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_SIG_TARGET_ABORT));
+                        PCI_STATUS_SIG_SYSTEM_ERROR |
+                        PCI_STATUS_REC_MASTER_ABORT |
+                        PCI_STATUS_REC_TARGET_ABORT |
+                        PCI_STATUS_SIG_TARGET_ABORT));

      /* The infamous DAC f*ckup only happens at boot time */
-    if ((tp->cp_cmd & PCIDAC) && !tp->dirty_rx && !tp->cur_rx) {
+    if ((tp->cp_cmd & PCIDAC) && !tp->totl_rx_replenished && !tp->cur_rx) {
          void __iomem *ioaddr = tp->mmio_addr;

          netif_info(tp, intr, dev, "disabling PCI DAC\n");
@@ -4541,7 +4729,7 @@ static void rtl8169_tx_interrupt(struct
      tx_left = tp->cur_tx - dirty_tx;

      while (tx_left > 0) {
-        unsigned int entry = dirty_tx % NUM_TX_DESC;
+        unsigned int entry = dirty_tx % tp->num_tx_allocd;
          struct ring_info *tx_skb = tp->tx_skb + entry;
          u32 status;

@@ -4597,29 +4785,110 @@ static inline void rtl8169_rx_csum(struc
          skb_checksum_none_assert(skb);
  }

-static struct sk_buff *rtl8169_try_rx_copy(void *data,
-                       struct rtl8169_private *tp,
-                       int pkt_size,
-                       dma_addr_t addr)
+/*   rtl8169_rx_deliver : delivers one rx skb up to higher netif layer
+ *   and copies or replenishes the skb as needed.
+ *   @tp        -> private cb for this NIC
+ *   @entry     == index of rx descriptor in ring
+ *   @polling   == whether polling or not (see comments for rx_interrupt)
+ *   we guarantee that the received packet will be passed up to the higher layer.
+ *   we also try to ensure that a buffer is available for next receive on this skb,
+ *   but do not guarantee that.
+ *   This function does not write or read to the asic registers
+ *   and does not return any return code -  work is reported via the descriptors.
+ *   "original" skb means the one previously in the ring
+ *   "returned" skb means the one passed up
+ *   these may be the same or different :
+ *       if packet size sufficiently small relative to rx_copybreak mod parm,
+ *       then try to copy the entire active skb to a new one,  and,
+ *       if successful,  return the new and leave the original as active.
+ *       otherwise,   return the original and try to replenish the ring.
+ */
+
+void rtl8169_rx_deliver(struct rtl8169_private *tp, unsigned int entry,
+              int polling)
  {
-    struct sk_buff *skb;
-    struct device *d = &tp->pci_dev->dev;
+    struct RxDesc *desc;
+    u32 opts1;
+    struct sk_buff *original_skb;
+    struct sk_buff *returned_skb;
+    dma_addr_t addr;
+    int pkt_size;
+    struct pci_dev *pdev;
+
+    desc = tp->RxDescArray + entry;
+    opts1 = le32_to_cpu(desc->opts1);
+    original_skb = tp->Rx_skbuff[entry];
+    addr = le64_to_cpu(desc->addr);
+    pkt_size = (opts1 & 0x00001FFF) - 4;
+    pdev = tp->pci_dev;
+
+    dprintk
+        ("rtl8169_rx_deliver entry= %d opts1= 0x%X pkt_size= %d polling= 0x%X\n",
+         entry, opts1, pkt_size, polling);
+
+    if (pkt_size < rx_copybreak) {
+        returned_skb = netdev_alloc_skb_ip_align(tp->dev, pkt_size);
+        if (returned_skb) {
+            dma_sync_single_for_cpu(&pdev->dev, addr, pkt_size,
+                        PCI_DMA_FROMDEVICE);
+            prefetch(original_skb->data);
+            memcpy(returned_skb->data, original_skb->data,
+                   pkt_size);
+            dma_sync_single_for_device(&pdev->dev, addr, pkt_size,
+                           PCI_DMA_FROMDEVICE);
+            rtl8169_mark_to_asic(desc, rx_buf_sz);
+            tp->totl_rx_replenished++;
+            tp->copied_rx_pkt_count++;
+        } else {
+            /*  can't replenish (out of storage ) */
+            rtl8169_make_unusable_by_asic(desc);
+            dma_unmap_single(&pdev->dev, addr, rx_buf_sz,
+                     PCI_DMA_FROMDEVICE);
+            tp->Rx_skbuff[entry] = NULL;
+            returned_skb = original_skb;
+            tp->totl_rx_alloc_fail++;
+        }
+    } else {
+        returned_skb = original_skb;
+        dma_unmap_single(&pdev->dev, addr, rx_buf_sz,
+                 PCI_DMA_FROMDEVICE);
+        /*  following may fail in which case it sets the skbuff ptr to 0 */
+#ifdef RTL8169_DEBUG
+        /*  to simulate alloc failure every n attempts  */
+        if (simulate_alloc_fail && ((simulate_alloc_fail & entry) != 0))
+            tp->Rx_skbuff[entry] = 0;
+        else
+#endif /* RTL8169_DEBUG */
+            tp->Rx_skbuff[entry] =
+                rtl8169_alloc_rx_skb(tp, desc, GFP_ATOMIC);
+        if (tp->Rx_skbuff[entry]) {
+            tp->totl_rx_replenished++;
+        } else {
+            rtl8169_make_unusable_by_asic(desc);
+            tp->totl_rx_alloc_fail++;
+        }
+    }

-    data = rtl8169_align(data);
-    dma_sync_single_for_cpu(d, addr, pkt_size, DMA_FROM_DEVICE);
-    prefetch(data);
-    skb = netdev_alloc_skb_ip_align(tp->dev, pkt_size);
-    if (skb)
-        memcpy(skb->data, data, pkt_size);
-    dma_sync_single_for_device(d, addr, pkt_size, DMA_FROM_DEVICE);
+    rtl8169_rx_csum(returned_skb, opts1);
+    skb_put(returned_skb, pkt_size);
+    returned_skb->protocol = eth_type_trans(returned_skb, tp->dev);
+
+    rtl8169_rx_vlan_tag(desc, returned_skb);
+
+    if (likely(polling)) {
+        napi_gro_receive(&tp->napi, returned_skb);
+        dprintk("rtl8169_rx_deliver explicit napi_gro_receive\n");
+    } else {
+        netif_rx(returned_skb);
+        dprintk("rtl8169_rx_deliver explicit netif_rx\n");
+    }

-    return skb;
  }

  /*
   * Warning : rtl8169_rx_interrupt() might be called :
   * 1) from NAPI (softirq) context
- *    (polling = 1 : we should call netif_receive_skb())
+ *    (polling = 1 : we should call napi_gro_receive())
   * 2) from process context (rtl8169_reset_task())
   *    (polling = 0 : we must call netif_rx() instead)
   */
@@ -4628,71 +4897,55 @@ static int rtl8169_rx_interrupt(struct n
                  void __iomem *ioaddr, u32 budget)
  {
      unsigned int cur_rx, rx_left;
-    unsigned int count;
+
+    unsigned int replenish_rx_cursor_delta;    /*  amount by which to advance cursor  */
+    unsigned int count;    /*  number of completed buffers handled in this call   */
+    unsigned int number_to_replenish; /* num buffers to replenish after delivering */
      int polling = (budget != ~(u32)0) ? 1 : 0;

      cur_rx = tp->cur_rx;
-    rx_left = NUM_RX_DESC + tp->dirty_rx - cur_rx;
+    rx_left = tp->num_rx_allocd + tp->totl_rx_replenished - cur_rx;
      rx_left = min(rx_left, budget);

      for (; rx_left > 0; rx_left--, cur_rx++) {
-        unsigned int entry = cur_rx % NUM_RX_DESC;
+        unsigned int entry = cur_rx % tp->num_rx_allocd;
          struct RxDesc *desc = tp->RxDescArray + entry;
-        u32 status;
+        u32 opts1;

          rmb();
-        status = le32_to_cpu(desc->opts1);
+        opts1 = le32_to_cpu(desc->opts1);

-        if (status & DescOwn)
+        if (opts1 & DescOwn)
              break;
-        if (unlikely(status & RxRES)) {
-            netif_info(tp, rx_err, dev, "Rx ERROR. status = %08x\n",
-                   status);
+        if (unlikely(opts1 & RxRES)) {
+            netif_info(tp, rx_err, dev, "Rx ERROR. opts1 = %08x\n",
+                   opts1);
              dev->stats.rx_errors++;
-            if (status & (RxRWT | RxRUNT))
+            if (opts1 & (RxRWT | RxRUNT))
                  dev->stats.rx_length_errors++;
-            if (status & RxCRC)
+            if (opts1 & RxCRC)
                  dev->stats.rx_crc_errors++;
-            if (status & RxFOVF) {
+            if (opts1 & RxFOVF) {
                  rtl8169_schedule_work(dev, rtl8169_reset_task);
                  dev->stats.rx_fifo_errors++;
              }
              rtl8169_mark_to_asic(desc, rx_buf_sz);
          } else {
-            struct sk_buff *skb;
-            dma_addr_t addr = le64_to_cpu(desc->addr);
-            int pkt_size = (status & 0x00001FFF) - 4;
+            int pkt_size = (opts1 & 0x00001FFF) - 4;

              /*
               * The driver does not support incoming fragmented
               * frames. They are seen as a symptom of over-mtu
               * sized frames.
               */
-            if (unlikely(rtl8169_fragmented_frame(status))) {
+            if (unlikely(rtl8169_fragmented_frame(opts1))) {
                  dev->stats.rx_dropped++;
                  dev->stats.rx_length_errors++;
                  rtl8169_mark_to_asic(desc, rx_buf_sz);
                  continue;
              }

-            skb = rtl8169_try_rx_copy(tp->Rx_databuff[entry],
-                          tp, pkt_size, addr);
-            rtl8169_mark_to_asic(desc, rx_buf_sz);
-            if (!skb) {
-                dev->stats.rx_dropped++;
-                continue;
-            }
-
-            rtl8169_rx_csum(skb, status);
-            skb_put(skb, pkt_size);
-            skb->protocol = eth_type_trans(skb, dev);
-
-            rtl8169_rx_vlan_tag(desc, skb);
-
-            if (likely(polling))
-                napi_gro_receive(&tp->napi, skb);
-            else
-                netif_rx(skb);
+            rtl8169_rx_deliver(tp, entry, polling);

              dev->stats.rx_bytes += pkt_size;
              dev->stats.rx_packets++;
@@ -4706,10 +4959,36 @@ static int rtl8169_rx_interrupt(struct n
          }
      }

-    count = cur_rx - tp->cur_rx;
+    replenish_rx_cursor_delta = count = cur_rx - tp->cur_rx;
      tp->cur_rx = cur_rx;

-    tp->dirty_rx += count;
+    /*   try to replenish buffers that any previous rtl8169_rx_deliver
+     *   failed to.   Note that these may not be contiguous  -
+     *   alloc success and fail may be interleaved.
+     *   replenish_rx_cursor marks the earliest unreplenished.
+     */
+
+    number_to_replenish = (tp->cur_rx - tp->totl_rx_replenished);
+
+    if (number_to_replenish > 0) {
+        replenish_rx_cursor_delta =
+            rtl8169_rx_fill(tp, tp->replenish_rx_cursor, 0,
+                            &number_to_replenish, GFP_ATOMIC);
+        if (!replenish_rx_cursor_delta)
+            netif_info(tp, intr, dev, "no Rx buffer allocated\n");
+        tp->totl_rx_replenished += number_to_replenish;
+    }
+    tp->replenish_rx_cursor =
+        ((tp->replenish_rx_cursor +
+          replenish_rx_cursor_delta) % tp->num_rx_allocd);
+
+    /*
+     * exhaustion of available buffers may kill the Rx process.
+     * the previous code tries to replenish but may fail. To prevent that,
+     * set or let default rx_copybreak to maximum value to copy every buffer.
+     */
+    if ((tp->totl_rx_replenished + tp->num_rx_allocd) == tp->cur_rx)
+        netif_emerg(tp, intr, dev, "Rx buffers exhausted\n");

      return count;
  }



* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-21  3:52       ` John Lumby
@ 2011-04-27  2:18         ` John Lumby
  2011-04-27  3:57           ` Eric Dumazet
                             ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: John Lumby @ 2011-04-27  2:18 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, Ben Hutchings, nic_swsd

Anyone have any further thoughts on the proposal to avoid memcpy'ing?  
(see earlier post)

I also have a question concerning NAPI.     I've found that much of the 
CPU saved from not memcpy'ing is burned in extra rx_interrupt'ing,  and 
much of that seems to be wasted (no new packets).    So the actual 
benefit is rather less than I think should be possible.

I've tried some tinkering with the napi weight but can't find any 
setting which really improves the ratio of rx packets to hard interrupts 
significantly.    The problem seems to be that each successive 
rtl8169_poll() is driven too soon after the last one   (in this 
particular workload).     The napi weight doesn't directly influence that.

So  -  question :
is there any way,   when returning from rtl8169_poll,  to tell napi 
something like :
    "   finish this interrupt context and let something else run on this 
CPU  (always CPU0 on my machine) BUT reschedule another napi poll on 
this same device at some time after that "
the point being that rtl8169_poll will,  for this case,  NOT re-enable 
the NIC's napi interrupts,  in the hope that maybe some user work can be 
dispatched,    so something else will have to schedule the next napi 
poll for it.    Conceptually,    if rtl8169_poll finds no rx work done 
on this call,   it wants to cause a yield() and then try again.     
Except it can't from within the interrupt.

I appreciate this could lead to delays in handling new work so might be 
dangerous,    but it seems to me to be in line with NAPI objectives so I 
wanted to try it .   But don't know how.     Any hints or thoughts 
appreciated.


John



* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-27  2:18         ` John Lumby
@ 2011-04-27  3:57           ` Eric Dumazet
  2011-04-27 20:35           ` Francois Romieu
  2011-05-02 19:04           ` Chris Friesen
  2 siblings, 0 replies; 16+ messages in thread
From: Eric Dumazet @ 2011-04-27  3:57 UTC (permalink / raw)
  To: John Lumby; +Cc: Francois Romieu, netdev, Ben Hutchings, nic_swsd

On Tuesday 26 April 2011 at 22:18 -0400, John Lumby wrote:
> Anyone have any further thoughts on the proposal to avoid memcpy'ing?  
> (see earlier post)
> 
> I also have a question concerning NAPI.     I've found that much of the 
> CPU saved from not memcpy'ing is burned in extra rx_interrupt'ing,  and 
> much of that seems to be wasted (no new packets).    So the actual 
> benefit is rather less than I think should be possible.
> 
> I've tried some tinkering with the napi weight but can't find any 
> setting which really improves the ratio of rx packets to hard interrupts 
> significantly.    The problem seems to be that each successive 
> rtl8169_poll() is driven too soon after the last one   (in this 
> particular workload).     The napi weight doesn't directly influence that.
> 
> So  -  question :
> is there any way,   when returning from rtl8169_poll,  to tell napi 
> something like :
>     "   finish this interrupt context and let something else run on this 
> CPU  (always CPU0 on my machine) BUT reschedule another napi poll on 
> this same device at some time after that "
> the point being that rtl8169_poll will,  for this case,  NOT re-enable 
> the NIC's napi interrupts,  in the hope that maybe some user work can be 
> dispatched,    so something else will have to schedule the next napi 
> poll for it.    Conceptually,    if rtl8169_poll finds no rx work done 
> on this call,   it wants to cause a yield() and then try again.     
> Except it can't from within the interrupt.
> 
> I appreciate this could lead to delays in handling new work so might be 
> dangerous,    but it seems to me to be in line with NAPI objectives so I 
> wanted to try it .   But don't know how.     Any hints or thoughts 
> appreciated.

Answer is no. There is no such facility in the NAPI infrastructure.

You want to introduce timer-based polling. Some old pre-NAPI drivers
were doing that. It's OK when you have one device to handle; it can be a
nightmare when you mix several devices.
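
If you want to experiment anyway, the shape of it is something like this
(untested sketch; the poll_timer field and its setup_timer() call are
yours to add):

    static void rtl8169_poll_timer(unsigned long data)
    {
        struct rtl8169_private *tp = (struct rtl8169_private *)data;

        /* run rtl8169_poll() again from softirq context */
        napi_schedule(&tp->napi);
    }

    static int rtl8169_poll(struct napi_struct *napi, int budget)
    {
        struct rtl8169_private *tp =
            container_of(napi, struct rtl8169_private, napi);
        int work_done = rtl8169_rx_interrupt(tp->dev, tp,
                                             tp->mmio_addr, (u32) budget);

        if (work_done < budget) {
            napi_complete(napi);
            /* instead of unmasking the chip's rx interrupt here,
             * leave it masked and poll again on the next tick */
            mod_timer(&tp->poll_timer, jiffies + 1);
        }
        return work_done;
    }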





* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-27  2:18         ` John Lumby
  2011-04-27  3:57           ` Eric Dumazet
@ 2011-04-27 20:35           ` Francois Romieu
  2011-04-29  1:55             ` John Lumby
  2011-05-02 19:04           ` Chris Friesen
  2 siblings, 1 reply; 16+ messages in thread
From: Francois Romieu @ 2011-04-27 20:35 UTC (permalink / raw)
  To: John Lumby; +Cc: netdev, Ben Hutchings, nic_swsd

John Lumby <johnlumby@hotmail.com> :
> Anyone have any further thoughts on the proposal to avoid
> memcpy'ing?  (see earlier post)

The patch mixes different changes. Please avoid it.

Your MUA damaged the patch. Documentation/SubmittingPatches
could help if you have not read it yet.

The patch makes some gratuitous changes which needlessly
increase the differences (dirty_xy rename for instance).

A set_ringparam() method which does nothing until open()
is used does not exactly ring like "least surprise behavior"
to me.

The behavior under memory pressure is still unknown.

I am mildly convinced by the implementation.

-- 
Ueimor


* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-27 20:35           ` Francois Romieu
@ 2011-04-29  1:55             ` John Lumby
  2011-04-29  4:54               ` Eric Dumazet
  0 siblings, 1 reply; 16+ messages in thread
From: John Lumby @ 2011-04-29  1:55 UTC (permalink / raw)
  To: Francois Romieu; +Cc: netdev, Ben Hutchings, nic_swsd

On 04/27/11 16:35, Francois Romieu wrote:
>
> The patch mixes different changes. Please avoid it.

Sorry about that,  I'll rewrite with only the changes absolutely needed 
for avoiding memcpy (and maybe the setting of num_rx_bufs ring param?)

> Your MUA damaged the patch. Documentation/SubmittingPatches
> could help if you have not read it yet.

I see some truncation happened;  I will fix that in the next submission.

> The patch makes some gratuitous changes which needlessly
> increase the differences (dirty_xy rename for instance).

will revert those

> A set_ringparam() method which does nothing until open()
> is used does not exactly ring like "least surprise behavior"
> to me.

Please see questions below

> The behavior under memory pressure is still unknown.

I have run some initial tests with memory pressure  -  the pressure 
provided by running n concurrent memory hogs,  each of which loops 
endlessly:  allocating 1024 blocks of 1MB each,  writing 
something into all bytes of each block,   then freeing each block,   
then repeating.   Results:

 copybreak   numhogs   workload throughput   swapping     alloc failures or        dropped
                                  (Mb/sec)                other NIC err reports?   packets?

     16383         0                  1043   none         no                       no
        64         0                  1086     |          no                       no

     16383         1                   935   moderate     no                       no
        64         1                   902     |          no                       no

     16383         2                   854   heavy        no                       no
        64         2                   851     |          yes, many                no

     16383         3                   817   very heavy   no                       no
        64         3       did not attempt     |

 ("|" = same as the row above)
   Conclusions :
     .  Setting copybreak to 16383 seems to be a valid way of avoiding
alloc failures under heavy memory pressure, although the alloc
failures don't seem to cause much trouble in these runs.
     .  But I am surprised to see how well the copybreak=16383 case runs
with no memory pressure - much better than I saw for the unpatched
2.6.39-rc2 earlier on - and I need to run some more tests to check
that.  I will also run the same tests on vanilla 2.6.39.
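For reference, the hog loop written out as a standalone program - my
reconstruction from the description above, with block count and size
as stated and everything else illustrative:

    #include <stdlib.h>
    #include <string.h>

    #define NBLOCKS 1024
    #define BLKSIZE (1024 * 1024)

    int main(void)
    {
            char *blocks[NBLOCKS];
            int i;

            for (;;) {
                    for (i = 0; i < NBLOCKS; i++) {
                            blocks[i] = malloc(BLKSIZE);
                            if (blocks[i] == NULL)
                                    break;  /* under pressure - free what we have and retry */
                            memset(blocks[i], 0xa5, BLKSIZE);  /* touch every byte */
                    }
                    while (--i >= 0)
                            free(blocks[i]);
            }
            return 0;
    }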

> I am mildly convinced by the implementation.
>

Thanks for all comments.

I do have a couple more questions:

    .  for my next patch submission - what should I base it on?  Is
there a git tree which has the "latest" version of r8169.c?  I think
it's not torvalds/linux-2.6.git, as fixes to r8169.c in that tree go
only to 2011-03-21.  Sorry if this is a dumb question.
    .  regarding setting the ring param - I understand your comment,
but is it safe to close and re-open the NIC when called from the
ethtool ioctl?  Would some locking be needed?
    .  and again on the ring params - what are the minimum and maximum
valid values for num rx bufs, and separately for num tx bufs, that the
r8169 supports?


Cheers,   John Lumby

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-29  1:55             ` John Lumby
@ 2011-04-29  4:54               ` Eric Dumazet
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Dumazet @ 2011-04-29  4:54 UTC (permalink / raw)
  To: John Lumby; +Cc: Francois Romieu, netdev, Ben Hutchings, nic_swsd

On Thursday 28 April 2011 at 21:55 -0400, John Lumby wrote:
>    Conclusions :
>      .  Setting copybreak to 16383 seems to be a valid way of avoiding
> alloc failures under heavy memory pressure, although the alloc
> failures don't seem to cause much trouble in these runs.
>      .  But I am surprised to see how well the copybreak=16383 case runs
> with no memory pressure - much better than I saw for the unpatched
> 2.6.39-rc2 earlier on - and I need to run some more tests to check
> that.  I will also run the same tests on vanilla 2.6.39.

Doing the copy of the data and building an exact-size skb has the
benefit of providing the 'right' skb->truesize (which might reduce
RCVBUF contention and avoid backlog drops) and already-cached data (hot
in CPU caches).  The next 'copy' is almost free (an L1 cache access).

It all depends on workload. If you want to receive a huge number of
small datagrams, [and feed them to several cpus], results might be
completely different.
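The trade-off being described here is the classic rx_copybreak pattern.
A driver-agnostic sketch - struct ring_entry and alloc_new_ring_skb()
are invented names, not the r8169 functions:

    #include <linux/skbuff.h>

    static struct sk_buff *rx_copybreak_or_unhook(struct ring_entry *re,
                                                  int pkt_len, int copybreak)
    {
            struct sk_buff *skb;

            if (pkt_len < copybreak) {
                    /* exact-size skb: right truesize, data now hot in cache */
                    skb = netdev_alloc_skb_ip_align(re->dev, pkt_len);
                    if (skb) {
                            skb_copy_to_linear_data(skb, re->data, pkt_len);
                            skb_put(skb, pkt_len);
                    }
                    return skb;     /* the ring buffer is reused as-is */
            }

            /* unhook: no memcpy, but truesize stays at the full buffer
             * size and the slot must be refilled with a new allocation */
            skb = re->skb;
            skb_put(skb, pkt_len);
            re->skb = alloc_new_ring_skb(re);   /* assumed refill helper */
            return skb;
    }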


>     .    for my next patch submission  -   what should I base it on?     
> Is there a git project which has the "latest" version of r8169.c?    I 
> think it's not  torvalds/linux-2.6.git as fixes to r8169.c in that 
> project go only to 2011-03-21.   Sorry if this is dumb question.

This one ?

http://git.kernel.org/?p=linux/kernel/git/davem/net-next-2.6.git




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: r8169 :  always copying the rx buffer to new skb
  2011-04-27  2:18         ` John Lumby
  2011-04-27  3:57           ` Eric Dumazet
  2011-04-27 20:35           ` Francois Romieu
@ 2011-05-02 19:04           ` Chris Friesen
  2011-05-03 11:59             ` hayeswang
  2 siblings, 1 reply; 16+ messages in thread
From: Chris Friesen @ 2011-05-02 19:04 UTC (permalink / raw)
  To: John Lumby; +Cc: Francois Romieu, netdev, Ben Hutchings, nic_swsd

On 04/26/2011 08:18 PM, John Lumby wrote:

> So - question :
> is there any way, when returning from rtl8169_poll, to tell napi
> something like :
> " finish this interrupt context and let something else run on this CPU
> (always CPU0 on my machine) BUT reschedule another napi poll on this
> same device at some time after that "

Does the hardware support any options for interrupt mitigation?  I've 
used some devices where you can specify a minimum time between 
interrupts, such that even if NAPI re-enabled interrupts and there were 
packets waiting, the hardware would wait a certain time before raising a 
new interrupt.

Chris

-- 
Chris Friesen
Software Developer
GENBAND
chris.friesen@genband.com
www.genband.com

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: r8169 :  always copying the rx buffer to new skb
  2011-05-02 19:04           ` Chris Friesen
@ 2011-05-03 11:59             ` hayeswang
  0 siblings, 0 replies; 16+ messages in thread
From: hayeswang @ 2011-05-03 11:59 UTC (permalink / raw)
  To: 'Chris Friesen', 'John Lumby'
  Cc: 'Francois Romieu', netdev, 'Ben Hutchings'

 

> -----Original Message-----
> From: Chris Friesen [mailto:chris.friesen@genband.com] 
> Sent: Tuesday, May 03, 2011 3:04 AM
> To: John Lumby
> Cc: Francois Romieu; netdev@vger.kernel.org; Ben Hutchings; nic_swsd
> Subject: Re: r8169 : always copying the rx buffer to new skb
> 
> On 04/26/2011 08:18 PM, John Lumby wrote:
> 
> > So - question :
> > is there any way, when returning from rtl8169_poll, to tell napi
> > something like :
> > " finish this interrupt context and let something else run on this CPU
> > (always CPU0 on my machine) BUT reschedule another napi poll on this
> > same device at some time after that "
> 
> Does the hardware support any options for interrupt mitigation?  I've
> used some devices where you can specify a minimum time between
> interrupts, such that even if NAPI re-enabled interrupts and there were
> packets waiting, the hardware would wait a certain time before raising
> a new interrupt.
> 

Yes, please refer to register 0xE2 of the datasheet.  The time is also
influenced by register 0xE0.
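A minimal sketch of programming that register - the 0xE2 offset comes
from this thread, while the helper name and the idea that a single
16-bit write suffices are illustrative assumptions; consult the
datasheet for the real field layout:

    #include <linux/io.h>

    #define IntrMitigate    0xe2    /* offset per the note above */

    static void set_intr_mitigation(void __iomem *ioaddr, u16 value)
    {
            /* packs the mitigation timer/threshold fields into one word */
            writew(value, ioaddr + IntrMitigate);
    }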

 
Best Regards,
Hayes


^ permalink raw reply	[flat|nested] 16+ messages in thread

* r8169 :  always copying the rx buffer to new skb
@ 2011-06-27 22:54 John Lumby
  2011-06-28  7:55 ` Francois Romieu
  0 siblings, 1 reply; 16+ messages in thread
From: John Lumby @ 2011-06-27 22:54 UTC (permalink / raw)
  To: netdev


Summary of some results since previous posts in April :

Previously I suggested re-introducing the rx_copybreak parameter to provide the option of un-hooking the receive buffer rather than copying it,  in order to save the overhead of the memcpy,   which shows as the highest tick-count in oprofile.  All buffer memcpy'ing is done on CPU0 on my system.

I then found that,  without the memcpy,  the driver and net stack consume other overhead elsewhere,  particularly in too-frequent polling/interrupting.

Eric D pointed out that :
            Doing the copy of the data and building an exact-size skb has the benefit of
            providing the 'right' skb->truesize (which might reduce RCVBUF contention and
            avoid backlog drops) and already-cached data (hot in CPU caches).
            The next 'copy' is almost free (an L1 cache access)

There was also some discussion off-line about using larger MTU size.

Since then,  I have explored some ideas for dealing with the too-frequent polling/interrupting and the cache aspect,  with some success on the first and none on the second.   In summary:
   .  With an MTU of 1500 and a "normal" workload,   I see an improvement of between 4% and 6% in throughput,  depending on kernel release and kernel .config.    Specifically,  with the heaviest workload and most tuned kernel .config:
       no changes  -  ~  1440 Megabits/sec bi-directional
     with changes  -  ~  1530 Megabits/sec bi-directional
   (same .config for each, of course)
      All 4 of my Atom 330 CPUs (2 physical cores x 2 SMT threads each) were at 100% both without and with the changes for this workload,  but with very different profiles.
      These throughput numbers are higher than I reported before,   and the % improvement lower,  because of the tuning to the base system and workload.

   .  With an MTU of 6144,  I see a more dramatic effect  -  the same workload runs at 1725 Megabits/sec on both kernels  (which may be a practical hardware limit on one of the adapters,  since it hits exactly this rate almost every time no matter what else I change),  but overall CPU utilization drops from ~80% without the changes to ~60% with them.    I feel this is significant,  but of course its use is limited to networks that can support this segment size everywhere.

Notes on the changes:

 Too-frequent polling/interrupting:
 These two are highly interrelated by NAPI.
     Too-frequent polling:
         The NAPI weight is a double-duty parameter,  controlling both the dynamic choice between continuing a NAPI polling session and leaving it to resume interrupts,  and also the maximum number of receive buffers to be passed up per poll.   It's also not configurable (set to 64).    I split it into two numbers,  one for each purpose,  made them configurable,  and tried tuning them.    A good value for the poll/interrupt choice was 16,   while the max-size number was best left at 64.    This helps a bit,  but polling is still too frequent.
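A sketch of the split - max_per_poll and poll_exit_thresh are the two
hypothetical module parameters, and rtl_rx() / rtl_irq_enable() stand
in for the driver's receive and irq-unmask routines:

    static int rtl8169_poll_sketch(struct napi_struct *napi, int budget)
    {
            struct rtl8169_private *tp =
                    container_of(napi, struct rtl8169_private, napi);
            /* role 1: cap the number of buffers passed up per poll */
            int done = rtl_rx(tp, min(budget, max_per_poll));

            /* role 2: stay in polling mode only while traffic is heavy */
            if (done < poll_exit_thresh) {
                    napi_complete(napi);
                    rtl_irq_enable(tp);  /* assumed helper: unmask irqs */
                    return done;
            }
            return budget;      /* tell the core to poll us again */
    }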
         I then made an interface up into the softirq daemon to let the driver tell it :
              "keep my napi session alive but sched_yield to other runnable processes before running another poll"
         I added a check to __do_softirq so that if the *only* pending softirq is NET_RX_SOFTIRQ and the rx_action routine requested this, then it exits and tells the daemon to yield.
         I borrowed a bit in local_softirq_pending for this.  This helped a lot for certain workloads:  I saw a considerable drop in system CPU% on CPU0 and higher user CPU% there.
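Very roughly, the check might sit in __do_softirq()'s restart path like
this - a reconstruction from the description above, not the actual
patch; rx_yield_requested() is an invented name for testing the
borrowed pending bit:

            pending = local_softirq_pending();
            if (pending == (1UL << NET_RX_SOFTIRQ) && rx_yield_requested()) {
                    /* only net-rx is pending and the driver asked us to
                     * back off: hand the work to ksoftirqd so that other
                     * runnable tasks get a turn on this CPU first */
                    wakeup_softirqd();
                    return;
            }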

     Too-frequent interrupting:
         I made use of the r8169's Interrupt Mitigation feature,   setting it to the maximum multiplied by a factor between 0 and 1 that varies inversely with tx queue size  (large queue size, short delay, and vice versa).     This also helped a lot.  The current driver sets these registers, but only once per "up" session,  during rtl8169_open of the NIC;  Hayes explained that the registers must be set on each enabling of interrupts.    This is the one case where (I think) I corrected a bug present in the current driver  -  harmless, but not doing what was intended.
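The scaling itself is simple; a sketch, with MAX_MITIGATE and the ring
accounting names as illustrative placeholders:

    #define MAX_MITIGATE    0xffff  /* illustrative maximum delay value */

    static u16 compute_mitigation(unsigned int tx_queued,
                                  unsigned int tx_ring_size)
    {
            /* factor between 0 and 1, inversely proportional to queue
             * fill: empty queue -> full delay, full queue -> no delay */
            unsigned int free_slots = tx_ring_size - tx_queued;

            return (u16)(MAX_MITIGATE * free_slots / tx_ring_size);
    }

The result would then be written back to the mitigation register each
time interrupts are re-enabled, per Hayes' earlier note.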

      The effect of these two changes was to reduce the rate of hardware interrupts to less than 1/20 of what it was before,  and also to hold the polling rate down (around 4-5 packets per poll on average on a typical run,  sometimes much higher).


  memory and caching:
  Here I failed to achieve anything.    Based on Eric's point about the memcpy giving a "free" next copy, I thought memory prefetching might provide something equivalent:   specifically,  prefetch the skb and its data buffer immediately after un-dma'ing.
  For example,   with my changes and no memcpy,  I see eth_type_trans() high in the oprofile tick score on CPU0.
  This small function does very little work but is the first (I think) to access a field in the skb->data buffer - the ethernet header.  Prefetching ought to do better than memcpy'ing, since only one copy of the data will enter L1,  not two.    But my attempts at this achieved nothing, or were negative.
  Note  -  the current driver does issue a prefetch of the original buffer prior to the memcpy.  But,  on my system (Atom CPUs),  gdb on the object file r8169.o indicates that no prefetch instructions are generated,   only an lea of the address to be prefetched.     I tried changing the prefetch call to an asm-generated prefetcht0/prefetchnta instruction, with disappointing results.   I noticed some discussion of memory prefetch on this list earlier, and maybe it is not useful.
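What I tried looks roughly like this - the hook point and helper name
are illustrative; prefetch() is the real <linux/prefetch.h> call, which
(as noted above) may compile to nothing on some arch/config
combinations, hence the asm experiment:

    #include <linux/cache.h>
    #include <linux/prefetch.h>
    #include <linux/skbuff.h>

    /* call immediately after un-dma'ing the rx buffer, so the headers
     * are (hopefully) in L1 by the time eth_type_trans() reads them */
    static inline void prefetch_rx_headers(struct sk_buff *skb)
    {
            prefetch(skb);                          /* the skb struct itself  */
            prefetch(skb->data);                    /* the ethernet header ... */
            prefetch(skb->data + L1_CACHE_BYTES);   /* ... and the next line   */
    }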

  I tried to explore Eric's other point about skb->truesize but ran out of time researching it.     I guess my current results are negatively impacted by the memory and skb issues that Eric mentions,  but I could not find an answer.


  There was a question of how this changed driver handles memory pressure:
  Along with the rx_copybreak change,  I made the number of rx and tx ring buffers configurable and dynamically replenishable.    The changed driver can tolerate occasional or even bursty alloc failures without exposing any effects outside itself,    whereas the current driver drops packets.   However,  under extreme consecutive failures the changed driver will eventually run too low and stop completely,  whereas the current driver will (I assume) stay up.     I was unable to cause either of these in my tests.    Measurements with the concurrent memory hogs confirmed this, but did show a heavy drop in throughput for the changed driver.
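The replenish idea in sketch form - all names here are illustrative and
the real patch details differ; the point is that empty rx slots are
skipped and re-tried on a later poll rather than forcing a packet drop:

    static void rx_ring_refill_sketch(struct rx_ring *r)
    {
            while (r->filled < r->size) {
                    struct sk_buff *skb = alloc_ring_skb(r);   /* assumed helper */

                    if (skb == NULL)
                            break;  /* alloc failed: retry on a later poll */
                    attach_to_free_slot(r, skb);
                    r->filled++;
            }
    }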


  I've tried these changes out on all kernel release levels from 2.6.36 to 3.0-rc3 and see roughly comparable deltas on all,  but with slightly different tuning required to hit the optimum,  and some variability on all levels after 2.6.37.     2.6.37 seemed slightly the "best";  not sure why, although I see some relevant changes to the scheduler between 2.6.37 and 2.6.38.    There is also a strange effect with the old RTC in 3.0  -  I had to remove it from the kernel to get good results,    whereas it was a module in the 2.6 levels (which I did not load for the tests);  I don't need it on my system except for one ancient utility.     I also found a major impact from iptables and cleared all tables for the tests  -  that is the one item, normally needed in a production setup, that I turned off.  The overhead of iptables is presumably highly dependent on how many rules are in the filter chains  (I have rather a lot in INPUT).


I don't plan to do any more on this but can provide my patch (currently one monolithic one based on DaveM 3.0.0-rc1netnext-110615) and detailed results if anyone wants them.

Cheers,   John Lumby

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: r8169 :  always copying the rx buffer to new skb
  2011-06-27 22:54 John Lumby
@ 2011-06-28  7:55 ` Francois Romieu
  0 siblings, 0 replies; 16+ messages in thread
From: Francois Romieu @ 2011-06-28  7:55 UTC (permalink / raw)
  To: John Lumby; +Cc: netdev

John, please ask your mail user agent to avoid really, really long lines.

John Lumby <johnlumby@hotmail.com> :
[...]
> I don't plan to do any more on this but can provide my patch (currently
> one monolithic one based on DaveM 3.0.0-rc1netnext-110615) and detailed
> results if anyone wants.

If you want your work to stand a chance of being of any use, you should
send it. Please read the "12) Sign your work" part of
"Documentation/SubmittingPatches" - even if it's a giant pile of mildly
standard code - so that I can store it at kernel.org.

Thanks.

-- 
Ueimor

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2011-06-28  8:08 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-04-18 17:08 r8169 : always copying the rx buffer to new skb John Lumby
2011-04-18 17:27 ` Ben Hutchings
2011-04-18 21:26   ` John Lumby
2011-04-20 19:13     ` Francois Romieu
2011-04-21  3:41       ` John Lumby
2011-04-21  3:52       ` John Lumby
2011-04-27  2:18         ` John Lumby
2011-04-27  3:57           ` Eric Dumazet
2011-04-27 20:35           ` Francois Romieu
2011-04-29  1:55             ` John Lumby
2011-04-29  4:54               ` Eric Dumazet
2011-05-02 19:04           ` Chris Friesen
2011-05-03 11:59             ` hayeswang
2011-04-18 18:21 ` Francois Romieu
  -- strict thread matches above, loose matches on Subject: below --
2011-06-27 22:54 John Lumby
2011-06-28  7:55 ` Francois Romieu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).