netdev.vger.kernel.org archive mirror
From: Evgeniy Polyakov <zbr@ioremap.net>
To: Peter Chacko <peterchacko35@gmail.com>
Cc: David Miller <davem@davemloft.net>,
	rick.jones2@hp.com, radhamohan_ch@yahoo.com,
	netdev@vger.kernel.org
Subject: Re: can we reuse an skb
Date: Sat, 20 Jun 2009 12:00:57 +0400	[thread overview]
Message-ID: <20090620080057.GA16143@ioremap.net> (raw)
In-Reply-To: <1f808b4a0906192054k6182ac2fh1618f0e630033518@mail.gmail.com>

On Sat, Jun 20, 2009 at 09:24:52AM +0530, Peter Chacko (peterchacko35@gmail.com) wrote:
> Assume that we have n cores capable of processing packets at the same
> time. If we trade memory for computation, why don't we pre-allocate
> "n" dedicated skb buffers, regardless of the size of each packet, each
> as big as the MTU itself? (Forget jumbo packets for now; today,
> dev_alloc_skb() allocates based on the packet length, which is
> optimized for memory usage.)
> 
> Each dedicated memory buffer is now a per-CPU/per-thread data structure.
> 
> In the IO stack world we have the buffer cache, identified by buffer
> headers. That design was conceived early on, when each block-level
> write was typically 512 bytes, the same for all blocks. Why don't we
> adapt that for the network stack?

And the page cache has ugly buffer_head structures to track partially
updated blocks within a page, while block IO has to perform
read-modify-write cycles when it updates less than a single block, which
is rather costly, especially with big block sizes.

Consider 100-byte writes and a 1500-byte MTU (or a 9k MTU for a moment):
this is huge overhead. If the socket queue is 10 MB, for example, you
will waste 140 MB of RAM, or have miserable performance, if the whole
1500 bytes are accounted into the socket buffer while only 100 bytes
are used.

For those who want to play with skb recycling: just implement your own
pool of skbs and provide a private destructor, which requeues the same
object into the private pool. I'm not sure this will be any faster than
the existing scheme, though, once proper memory accounting is
implemented.

-- 
	Evgeniy Polyakov


Thread overview: 18+ messages
2009-06-19  6:46 can we reuse an skb Radha Mohan
2009-06-19  6:51 ` jon_zhou
2009-06-19  7:10   ` Radha Mohan
2009-06-19  7:21   ` Peter Chacko
2009-06-19 10:37 ` Saikiran Madugula
2009-06-19 18:41   ` Neil Horman
2009-06-19 16:56 ` Rick Jones
2009-06-19 23:29   ` David Miller
2009-06-20  3:54     ` Peter Chacko
2009-06-20  8:00       ` Evgeniy Polyakov [this message]
2009-06-20 11:51       ` Ben Hutchings
2009-06-21  5:41     ` Peter Chacko
2009-06-21  5:49       ` David Miller
2009-06-21 11:46       ` [RFD] Pluggable code design (was: can we reuse an skb) Al Boldi
  -- strict thread matches above, loose matches on Subject: below --
2009-06-19 10:11 can we reuse an skb Nicholas Van Orton
2009-06-22 13:34 ` Philby John
2009-06-22 13:56   ` Peter Chacko
2009-06-22 14:33     ` Philby John
