public inbox for linux-kernel@vger.kernel.org
* Large inlines in include/linux/skbuff.h
@ 2004-04-21 19:26 Denis Vlasenko
  2004-04-22  0:47 ` James Morris
  0 siblings, 1 reply; 6+ messages in thread
From: Denis Vlasenko @ 2004-04-21 19:26 UTC (permalink / raw)
  To: David S. Miller; +Cc: Jeff Garzik, linux-kernel

Let's skip stuff which is smaller than 100 bytes.

Size  Uses Wasted Name and definition
===== ==== ====== ================================================
   58   20    722 __skb_dequeue include/linux/skbuff.h
  122  119  12036 skb_dequeue   include/linux/skbuff.h
  103   10    747 __skb_queue_purge     include/linux/skbuff.h
  164   78  11088 skb_queue_purge       include/linux/skbuff.h
   46   26    650 __skb_queue_tail      include/linux/skbuff.h
  109  101   8900 skb_queue_tail        include/linux/skbuff.h
   47   11    270 __skb_queue_head      include/linux/skbuff.h
  110   44   3870 skb_queue_head        include/linux/skbuff.h
   51   16    465 __skb_unlink  include/linux/skbuff.h
  132    9    896 skb_unlink    include/linux/skbuff.h
   46    2     26 __skb_append  include/linux/skbuff.h
  114    5    376 skb_append    include/linux/skbuff.h

'Uses' was counted over .c files only; .h files were excluded, so
there is no double counting here.

As you can see, the non-locking functions are more or less sanely
sized (except __skb_queue_purge). Once the spin_lock/unlock code gets
added, we cross the 100-byte mark.

What should be done about this? I'll make a patch to move the locking
functions into net/core/skbuff.c unless there is some reason not to.

Just to give you a feel for what 100+ bytes looks like in assembly:
void inline_skb_dequeue_1(void) { skb_dequeue(0); }
gets compiled into:

00003513 <inline_skb_dequeue_1>:
    3513:       55                      push   %ebp
    3514:       89 e5                   mov    %esp,%ebp
    3516:       53                      push   %ebx
    3517:       9c                      pushf
    3518:       5b                      pop    %ebx
    3519:       fa                      cli
    351a:       b9 00 e0 ff ff          mov    $0xffffe000,%ecx
    351f:       21 e1                   and    %esp,%ecx
    3521:       ff 41 14                incl   0x14(%ecx)
    3524:       f0 fe 0d 0c 00 00 00    lock decb 0xc
    352b:       78 02                   js     352f <inline_skb_dequeue_1+0x1c>
    352d:       eb 0d                   jmp    353c <inline_skb_dequeue_1+0x29>
    352f:       f3 90                   repz nop
    3531:       80 3d 0c 00 00 00 00    cmpb   $0x0,0xc
    3538:       7e f5                   jle    352f <inline_skb_dequeue_1+0x1c>
    353a:       eb e8                   jmp    3524 <inline_skb_dequeue_1+0x11>
    353c:       8b 15 00 00 00 00       mov    0x0,%edx
    3542:       85 d2                   test   %edx,%edx
    3544:       74 2b                   je     3571 <inline_skb_dequeue_1+0x5e>
    3546:       89 d0                   mov    %edx,%eax
    3548:       8b 12                   mov    (%edx),%edx
    354a:       89 15 00 00 00 00       mov    %edx,0x0
    3550:       ff 0d 08 00 00 00       decl   0x8
    3556:       c7 42 04 00 00 00 00    movl   $0x0,0x4(%edx)
    355d:       c7 00 00 00 00 00       movl   $0x0,(%eax)
    3563:       c7 40 04 00 00 00 00    movl   $0x0,0x4(%eax)
    356a:       c7 40 08 00 00 00 00    movl   $0x0,0x8(%eax)
    3571:       b0 01                   mov    $0x1,%al
    3573:       86 05 0c 00 00 00       xchg   %al,0xc
    3579:       53                      push   %ebx
    357a:       9d                      popf
    357b:       8b 41 08                mov    0x8(%ecx),%eax
    357e:       ff 49 14                decl   0x14(%ecx)
    3581:       a8 08                   test   $0x8,%al
    3583:       74 05                   je     358a <inline_skb_dequeue_1+0x77>
    3585:       e8 fc ff ff ff          call   3586 <inline_skb_dequeue_1+0x73>
    358a:       5b                      pop    %ebx
    358b:       c9                      leave
    358c:       c3                      ret
--
vda


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Large inlines in include/linux/skbuff.h
  2004-04-21 19:26 Large inlines in include/linux/skbuff.h Denis Vlasenko
@ 2004-04-22  0:47 ` James Morris
       [not found]   ` <200404221756.46240.vda@port.imtp.ilyichevsk.odessa.ua>
  2004-04-27 21:21   ` [PATCH] eliminate large inline's in skbuff Stephen Hemminger
  0 siblings, 2 replies; 6+ messages in thread
From: James Morris @ 2004-04-22  0:47 UTC (permalink / raw)
  To: Denis Vlasenko; +Cc: David S. Miller, Jeff Garzik, linux-kernel

On Wed, 21 Apr 2004, Denis Vlasenko wrote:

> What should be done about this? I'll make a patch to move the locking
> functions into net/core/skbuff.c unless there is some reason not to.

How will these changes impact performance?  I asked this last time you 
posted about inlines and didn't see any response.



- James
-- 
James Morris
<jmorris@redhat.com>




* Re: Large inlines in include/linux/skbuff.h
       [not found]   ` <200404221756.46240.vda@port.imtp.ilyichevsk.odessa.ua>
@ 2004-04-22 18:36     ` David S. Miller
       [not found]       ` <200404222239.34760.vda@port.imtp.ilyichevsk.odessa.ua>
  0 siblings, 1 reply; 6+ messages in thread
From: David S. Miller @ 2004-04-22 18:36 UTC (permalink / raw)
  To: Denis Vlasenko; +Cc: jmorris, jgarzik, linux-kernel

On Thu, 22 Apr 2004 17:56:46 +0300
Denis Vlasenko <vda@port.imtp.ilyichevsk.odessa.ua> wrote:

> On Thursday 22 April 2004 03:47, James Morris wrote:
> > On Wed, 21 Apr 2004, Denis Vlasenko wrote:
> > > What should be done about this? I'll make a patch to move the locking
> > > functions into net/core/skbuff.c unless there is some reason not to.
> >
> > How will these changes impact performance?  I asked this last time you
> > posted about inlines and didn't see any response.
> 
> Hard to say. We will lose ~2% to the extra call/return instructions
> but will gain ~40 KB of icache space.

He's asking you to test and quantify the performance changes that
occur as a result of this patch, not to figure it out via calculations
in your head :-)


* Re: Large inlines in include/linux/skbuff.h
       [not found]       ` <200404222239.34760.vda@port.imtp.ilyichevsk.odessa.ua>
@ 2004-04-22 22:10         ` David S. Miller
  0 siblings, 0 replies; 6+ messages in thread
From: David S. Miller @ 2004-04-22 22:10 UTC (permalink / raw)
  To: Denis Vlasenko; +Cc: jmorris, jgarzik, linux-kernel

On Thu, 22 Apr 2004 22:39:34 +0300
Denis Vlasenko <vda@port.imtp.ilyichevsk.odessa.ua> wrote:

> Okay. I am willing to do it. Hmm...
> Which type of test will highlight the difference?
> I don't have a gigabit network to play with. Any hints on how I can
> measure it on 100 Mbit?

Get someone to run specweb before and after your changes.


* [PATCH] eliminate large inline's in skbuff
  2004-04-22  0:47 ` James Morris
       [not found]   ` <200404221756.46240.vda@port.imtp.ilyichevsk.odessa.ua>
@ 2004-04-27 21:21   ` Stephen Hemminger
  2004-04-27 23:30     ` David S. Miller
  1 sibling, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2004-04-27 21:21 UTC (permalink / raw)
  To: James Morris, David S. Miller; +Cc: linux-kernel

This takes the suggestion and makes all of the locked skb_* functions
non-inline. It showed a 3% performance improvement when doing a single TCP
stream over 1G Ethernet between SMP machines. The test was the average of
10 iterations of 30-second iperf runs, and the SUT was a 4-way Xeon. The
HTTP performance difference was not noticeable (less than the std.
deviation of the test runs).

diff -Nru a/include/linux/skbuff.h b/include/linux/skbuff.h
--- a/include/linux/skbuff.h	Tue Apr 27 10:40:45 2004
+++ b/include/linux/skbuff.h	Tue Apr 27 10:40:45 2004
@@ -511,6 +511,7 @@
  *
  *	A buffer cannot be placed on two lists at the same time.
  */
+extern void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk);
 static inline void __skb_queue_head(struct sk_buff_head *list,
 				    struct sk_buff *newsk)
 {
@@ -525,28 +526,6 @@
 	next->prev  = prev->next = newsk;
 }
 
-
-/**
- *	skb_queue_head - queue a buffer at the list head
- *	@list: list to use
- *	@newsk: buffer to queue
- *
- *	Queue a buffer at the start of the list. This function takes the
- *	list lock and can be used safely with other locking &sk_buff functions
- *	safely.
- *
- *	A buffer cannot be placed on two lists at the same time.
- */
-static inline void skb_queue_head(struct sk_buff_head *list,
-				  struct sk_buff *newsk)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&list->lock, flags);
-	__skb_queue_head(list, newsk);
-	spin_unlock_irqrestore(&list->lock, flags);
-}
-
 /**
  *	__skb_queue_tail - queue a buffer at the list tail
  *	@list: list to use
@@ -557,6 +536,7 @@
  *
  *	A buffer cannot be placed on two lists at the same time.
  */
+extern void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk);
 static inline void __skb_queue_tail(struct sk_buff_head *list,
 				   struct sk_buff *newsk)
 {
@@ -571,26 +551,6 @@
 	next->prev  = prev->next = newsk;
 }
 
-/**
- *	skb_queue_tail - queue a buffer at the list tail
- *	@list: list to use
- *	@newsk: buffer to queue
- *
- *	Queue a buffer at the tail of the list. This function takes the
- *	list lock and can be used safely with other locking &sk_buff functions
- *	safely.
- *
- *	A buffer cannot be placed on two lists at the same time.
- */
-static inline void skb_queue_tail(struct sk_buff_head *list,
-				  struct sk_buff *newsk)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&list->lock, flags);
-	__skb_queue_tail(list, newsk);
-	spin_unlock_irqrestore(&list->lock, flags);
-}
 
 /**
  *	__skb_dequeue - remove from the head of the queue
@@ -600,6 +560,7 @@
  *	so must be used with appropriate locks held only. The head item is
  *	returned or %NULL if the list is empty.
  */
+extern struct sk_buff *skb_dequeue(struct sk_buff_head *list);
 static inline struct sk_buff *__skb_dequeue(struct sk_buff_head *list)
 {
 	struct sk_buff *next, *prev, *result;
@@ -619,30 +580,11 @@
 	return result;
 }
 
-/**
- *	skb_dequeue - remove from the head of the queue
- *	@list: list to dequeue from
- *
- *	Remove the head of the list. The list lock is taken so the function
- *	may be used safely with other locking list functions. The head item is
- *	returned or %NULL if the list is empty.
- */
-
-static inline struct sk_buff *skb_dequeue(struct sk_buff_head *list)
-{
-	unsigned long flags;
-	struct sk_buff *result;
-
-	spin_lock_irqsave(&list->lock, flags);
-	result = __skb_dequeue(list);
-	spin_unlock_irqrestore(&list->lock, flags);
-	return result;
-}
 
 /*
  *	Insert a packet on a list.
  */
-
+extern void        skb_insert(struct sk_buff *old, struct sk_buff *newsk);
 static inline void __skb_insert(struct sk_buff *newsk,
 				struct sk_buff *prev, struct sk_buff *next,
 				struct sk_buff_head *list)
@@ -654,58 +596,20 @@
 	list->qlen++;
 }
 
-/**
- *	skb_insert	-	insert a buffer
- *	@old: buffer to insert before
- *	@newsk: buffer to insert
- *
- *	Place a packet before a given packet in a list. The list locks are taken
- *	and this function is atomic with respect to other list locked calls
- *	A buffer cannot be placed on two lists at the same time.
- */
-
-static inline void skb_insert(struct sk_buff *old, struct sk_buff *newsk)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&old->list->lock, flags);
-	__skb_insert(newsk, old->prev, old, old->list);
-	spin_unlock_irqrestore(&old->list->lock, flags);
-}
-
 /*
  *	Place a packet after a given packet in a list.
  */
-
+extern void	   skb_append(struct sk_buff *old, struct sk_buff *newsk);
 static inline void __skb_append(struct sk_buff *old, struct sk_buff *newsk)
 {
 	__skb_insert(newsk, old, old->next, old->list);
 }
 
-/**
- *	skb_append	-	append a buffer
- *	@old: buffer to insert after
- *	@newsk: buffer to insert
- *
- *	Place a packet after a given packet in a list. The list locks are taken
- *	and this function is atomic with respect to other list locked calls.
- *	A buffer cannot be placed on two lists at the same time.
- */
-
-
-static inline void skb_append(struct sk_buff *old, struct sk_buff *newsk)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&old->list->lock, flags);
-	__skb_append(old, newsk);
-	spin_unlock_irqrestore(&old->list->lock, flags);
-}
-
 /*
  * remove sk_buff from list. _Must_ be called atomically, and with
  * the list known..
  */
+extern void	   skb_unlink(struct sk_buff *skb);
 static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
 {
 	struct sk_buff *next, *prev;
@@ -719,31 +623,6 @@
 	prev->next = next;
 }
 
-/**
- *	skb_unlink	-	remove a buffer from a list
- *	@skb: buffer to remove
- *
- *	Place a packet after a given packet in a list. The list locks are taken
- *	and this function is atomic with respect to other list locked calls
- *
- *	Works even without knowing the list it is sitting on, which can be
- *	handy at times. It also means that THE LIST MUST EXIST when you
- *	unlink. Thus a list must have its contents unlinked before it is
- *	destroyed.
- */
-static inline void skb_unlink(struct sk_buff *skb)
-{
-	struct sk_buff_head *list = skb->list;
-
-	if (list) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&list->lock, flags);
-		if (skb->list == list)
-			__skb_unlink(skb, skb->list);
-		spin_unlock_irqrestore(&list->lock, flags);
-	}
-}
 
 /* XXX: more streamlined implementation */
 
@@ -755,6 +634,7 @@
  *	so must be used with appropriate locks held only. The tail item is
  *	returned or %NULL if the list is empty.
  */
+extern struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list);
 static inline struct sk_buff *__skb_dequeue_tail(struct sk_buff_head *list)
 {
 	struct sk_buff *skb = skb_peek_tail(list);
@@ -763,24 +643,6 @@
 	return skb;
 }
 
-/**
- *	skb_dequeue_tail - remove from the tail of the queue
- *	@list: list to dequeue from
- *
- *	Remove the tail of the list. The list lock is taken so the function
- *	may be used safely with other locking list functions. The tail item is
- *	returned or %NULL if the list is empty.
- */
-static inline struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
-{
-	unsigned long flags;
-	struct sk_buff *result;
-
-	spin_lock_irqsave(&list->lock, flags);
-	result = __skb_dequeue_tail(list);
-	spin_unlock_irqrestore(&list->lock, flags);
-	return result;
-}
 
 static inline int skb_is_nonlinear(const struct sk_buff *skb)
 {
@@ -1012,21 +874,6 @@
 }
 
 /**
- *	skb_queue_purge - empty a list
- *	@list: list to empty
- *
- *	Delete all buffers on an &sk_buff list. Each buffer is removed from
- *	the list and one reference dropped. This function takes the list
- *	lock and is atomic with respect to other list locking functions.
- */
-static inline void skb_queue_purge(struct sk_buff_head *list)
-{
-	struct sk_buff *skb;
-	while ((skb = skb_dequeue(list)) != NULL)
-		kfree_skb(skb);
-}
-
-/**
  *	__skb_queue_purge - empty a list
  *	@list: list to empty
  *
@@ -1034,6 +881,7 @@
  *	the list and one reference dropped. This function does not take the
  *	list lock and the caller must hold the relevant locks to use it.
  */
+extern void skb_queue_purge(struct sk_buff_head *list);
 static inline void __skb_queue_purge(struct sk_buff_head *list)
 {
 	struct sk_buff *skb;
diff -Nru a/net/core/skbuff.c b/net/core/skbuff.c
--- a/net/core/skbuff.c	Tue Apr 27 10:40:45 2004
+++ b/net/core/skbuff.c	Tue Apr 27 10:40:45 2004
@@ -1091,6 +1091,165 @@
 	}
 }
 
+/**
+ *	skb_dequeue - remove from the head of the queue
+ *	@list: list to dequeue from
+ *
+ *	Remove the head of the list. The list lock is taken so the function
+ *	may be used safely with other locking list functions. The head item is
+ *	returned or %NULL if the list is empty.
+ */
+
+struct sk_buff *skb_dequeue(struct sk_buff_head *list)
+{
+	unsigned long flags;
+	struct sk_buff *result;
+
+	spin_lock_irqsave(&list->lock, flags);
+	result = __skb_dequeue(list);
+	spin_unlock_irqrestore(&list->lock, flags);
+	return result;
+}
+
+/**
+ *	skb_dequeue_tail - remove from the tail of the queue
+ *	@list: list to dequeue from
+ *
+ *	Remove the tail of the list. The list lock is taken so the function
+ *	may be used safely with other locking list functions. The tail item is
+ *	returned or %NULL if the list is empty.
+ */
+struct sk_buff *skb_dequeue_tail(struct sk_buff_head *list)
+{
+	unsigned long flags;
+	struct sk_buff *result;
+
+	spin_lock_irqsave(&list->lock, flags);
+	result = __skb_dequeue_tail(list);
+	spin_unlock_irqrestore(&list->lock, flags);
+	return result;
+}
+
+/**
+ *	skb_queue_purge - empty a list
+ *	@list: list to empty
+ *
+ *	Delete all buffers on an &sk_buff list. Each buffer is removed from
+ *	the list and one reference dropped. This function takes the list
+ *	lock and is atomic with respect to other list locking functions.
+ */
+void skb_queue_purge(struct sk_buff_head *list)
+{
+	struct sk_buff *skb;
+	while ((skb = skb_dequeue(list)) != NULL)
+		kfree_skb(skb);
+}
+
+/**
+ *	skb_queue_head - queue a buffer at the list head
+ *	@list: list to use
+ *	@newsk: buffer to queue
+ *
+ *	Queue a buffer at the start of the list. This function takes the
+ *	list lock and can be used safely with other locking
+ *	&sk_buff functions.
+ *
+ *	A buffer cannot be placed on two lists at the same time.
+ */
+void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&list->lock, flags);
+	__skb_queue_head(list, newsk);
+	spin_unlock_irqrestore(&list->lock, flags);
+}
+
+/**
+ *	skb_queue_tail - queue a buffer at the list tail
+ *	@list: list to use
+ *	@newsk: buffer to queue
+ *
+ *	Queue a buffer at the tail of the list. This function takes the
+ *	list lock and can be used safely with other locking
+ *	&sk_buff functions.
+ *
+ *	A buffer cannot be placed on two lists at the same time.
+ */
+void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&list->lock, flags);
+	__skb_queue_tail(list, newsk);
+	spin_unlock_irqrestore(&list->lock, flags);
+}
+/**
+ *	skb_unlink	-	remove a buffer from a list
+ *	@skb: buffer to remove
+ *
+ *	Remove a packet from a list. The list locks are taken and this
+ *	function is atomic with respect to other list locked calls.
+ *
+ *	Works even without knowing the list it is sitting on, which can be
+ *	handy at times. It also means that THE LIST MUST EXIST when you
+ *	unlink. Thus a list must have its contents unlinked before it is
+ *	destroyed.
+ */
+void skb_unlink(struct sk_buff *skb)
+{
+	struct sk_buff_head *list = skb->list;
+
+	if (list) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&list->lock, flags);
+		if (skb->list == list)
+			__skb_unlink(skb, skb->list);
+		spin_unlock_irqrestore(&list->lock, flags);
+	}
+}
+
+
+/**
+ *	skb_append	-	append a buffer
+ *	@old: buffer to insert after
+ *	@newsk: buffer to insert
+ *
+ *	Place a packet after a given packet in a list. The list locks are taken
+ *	and this function is atomic with respect to other list locked calls.
+ *	A buffer cannot be placed on two lists at the same time.
+ */
+
+void skb_append(struct sk_buff *old, struct sk_buff *newsk)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&old->list->lock, flags);
+	__skb_append(old, newsk);
+	spin_unlock_irqrestore(&old->list->lock, flags);
+}
+
+
+/**
+ *	skb_insert	-	insert a buffer
+ *	@old: buffer to insert before
+ *	@newsk: buffer to insert
+ *
+ *	Place a packet before a given packet in a list. The list locks are taken
+ *	and this function is atomic with respect to other list locked calls.
+ *	A buffer cannot be placed on two lists at the same time.
+ */
+
+void skb_insert(struct sk_buff *old, struct sk_buff *newsk)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&old->list->lock, flags);
+	__skb_insert(newsk, old->prev, old, old->list);
+	spin_unlock_irqrestore(&old->list->lock, flags);
+}
+
 #if 0
 /*
  * 	Tune the memory allocator for a new MTU size.
@@ -1133,3 +1292,11 @@
 EXPORT_SYMBOL(skb_pad);
 EXPORT_SYMBOL(skb_realloc_headroom);
 EXPORT_SYMBOL(skb_under_panic);
+EXPORT_SYMBOL(skb_dequeue);
+EXPORT_SYMBOL(skb_dequeue_tail);
+EXPORT_SYMBOL(skb_insert);
+EXPORT_SYMBOL(skb_queue_purge);
+EXPORT_SYMBOL(skb_queue_head);
+EXPORT_SYMBOL(skb_queue_tail);
+EXPORT_SYMBOL(skb_unlink);
+EXPORT_SYMBOL(skb_append);


* Re: [PATCH] eliminate large inline's in skbuff
  2004-04-27 21:21   ` [PATCH] eliminate large inline's in skbuff Stephen Hemminger
@ 2004-04-27 23:30     ` David S. Miller
  0 siblings, 0 replies; 6+ messages in thread
From: David S. Miller @ 2004-04-27 23:30 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: jmorris, linux-kernel

On Tue, 27 Apr 2004 14:21:36 -0700
Stephen Hemminger <shemminger@osdl.org> wrote:

> This takes the suggestion and makes all the locked skb_ stuff, not inline.
> It showed a 3% performance improvement when doing single TCP stream over 1G
> Ethernet between SMP machines. Test was average of 10 iterations of
> iperf for 30 seconds and SUT was 4 way Xeon.  Http performance difference
> was not noticeable (less than the std. deviation of the test runs).

Ok, let's give this a spin.

Applied, thanks.


end of thread, other threads:[~2004-04-27 23:31 UTC | newest]

Thread overview: 6+ messages
2004-04-21 19:26 Large inlines in include/linux/skbuff.h Denis Vlasenko
2004-04-22  0:47 ` James Morris
     [not found]   ` <200404221756.46240.vda@port.imtp.ilyichevsk.odessa.ua>
2004-04-22 18:36     ` David S. Miller
     [not found]       ` <200404222239.34760.vda@port.imtp.ilyichevsk.odessa.ua>
2004-04-22 22:10         ` David S. Miller
2004-04-27 21:21   ` [PATCH] eliminate large inline's in skbuff Stephen Hemminger
2004-04-27 23:30     ` David S. Miller
