public inbox for dev@dpdk.org
From: Bruce Richardson <bruce.richardson@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: <dev@dpdk.org>, Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Subject: Re: [PATCH] doc: remove obsolete mempool creation advice
Date: Thu, 19 Mar 2026 12:02:58 +0000	[thread overview]
Message-ID: <abvl8iM040DNc1pv@bricha3-mobl1.ger.corp.intel.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35F657A4@smartserver.smartshare.dk>

On Thu, Mar 19, 2026 at 11:55:11AM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Thursday, 19 March 2026 10.41
> > 
> > On Thu, Mar 19, 2026 at 09:13:00AM +0000, Morten Brørup wrote:
> > > The descriptions for the mempool creation functions contained
> > > advice for choosing the optimum (in terms of memory usage) number
> > > of elements and cache size. The advice was based on implementation
> > > details, which were changed long ago, making the advice completely
> > > irrelevant.
> > >
> > 
> > The comment is still correct in most cases, since the default
> > backing storage remains an rte_ring. If passing a power-of-2 size to
> > mempool create, one will get a backing rte_ring which is twice as
> > large as requested, leading to lots of ring slots being wasted. For
> > example, for a pool with 16k elements, the actual ring size
> > allocated will be 32k, wasting 128k of RAM, and potentially cache
> > too. The latter occurs because of the nature of the ring: iterating
> > through all mempool/ring entries means that even if only 16k of the
> > 32k slots will ever be used, all 32k slots will pass through the CPU
> > cache if the app works on the mempool directly and not just from the
> > per-core cache.
> 
> You are right about the waste of memory in the ring driver. And good point about the CPU cache!
> 
> However, only pointer entries (8 bytes each) are being wasted, not object entries (which are much larger). This is not 100% clear from the advice.
> 
> Furthermore, with 16k mbufs of 2368 bytes each, the mempool itself consumes 37 MB of memory, so do we really care about wasting 128 KB?
> 
> IMHO, removing the advice improves the quality of the documentation.
> I don't think a detail about saving 0.3% of the memory used by the mempool should be presented so prominently in the documentation.
> 

Ok, point taken. It would actually be the cache wastage that worries me
more, but again, the cache use from the extra ring space is probably small
compared to that from the buffers if we are cycling through the whole
mempool.

/Bruce


  reply	other threads:[~2026-03-19 12:03 UTC|newest]

Thread overview: 7+ messages
2026-03-19  9:13 [PATCH] doc: remove obsolete mempool creation advice Morten Brørup
2026-03-19  9:40 ` Bruce Richardson
2026-03-19 10:55   ` Morten Brørup
2026-03-19 12:02     ` Bruce Richardson [this message]
2026-03-19 12:03 ` Bruce Richardson
2026-03-20  5:44   ` Andrew Rybchenko
2026-03-25 22:39     ` Thomas Monjalon
