netfilter-devel.vger.kernel.org archive mirror
From: Pablo Neira Ayuso <pablo@netfilter.org>
To: Phil Sutter <phil@nwl.cc>, netfilter-devel@vger.kernel.org
Subject: Re: [nft PATCH] intervals: Do not sort cached set elements over and over again
Date: Thu, 16 Jun 2022 13:15:01 +0200	[thread overview]
Message-ID: <YqsQtYa8afgUdsDB@salvia> (raw)
In-Reply-To: <YqsFkwU/369O5vxQ@orbyte.nwl.cc>

On Thu, Jun 16, 2022 at 12:27:31PM +0200, Phil Sutter wrote:
> On Wed, Jun 15, 2022 at 09:36:11PM +0200, Pablo Neira Ayuso wrote:
> > On Wed, Jun 15, 2022 at 07:33:29PM +0200, Phil Sutter wrote:
> > > When adding element(s) to a non-empty set, code merged the two lists and
> > > sorted the result. With many individual 'add element' commands this
> > > causes substantial overhead. Make use of the fact that
> > > existing_set->init is sorted already, sort only the list of new elements
> > > and use list_splice_sorted() to merge the two sorted lists.
> > > 
> > > A test case adding ~25k elements in individual commands completes in
> > > about 1/4th of the time with this patch applied.
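> > > [Editorial note: the optimization described above can be modeled in
> > > hypothetical Python, standing in for nft's C list handling. The real
> > > code uses list_splice_sorted() on kernel-style linked lists; this
> > > sketch only illustrates the complexity argument: sort the new
> > > elements alone, then merge two sorted sequences in linear time.]
> > >
> > > ```python
> > > def splice_sorted(existing, new_elems):
> > >     """Merge new elements into an already-sorted element list.
> > >
> > >     `existing` is assumed sorted (as existing_set->init is); only
> > >     `new_elems` needs sorting, then a single O(n+m) merge pass
> > >     replaces re-sorting the combined list on every command.
> > >     """
> > >     new_elems = sorted(new_elems)   # sort only the new elements
> > >     merged, i, j = [], 0, 0
> > >     while i < len(existing) and j < len(new_elems):
> > >         if existing[i] <= new_elems[j]:
> > >             merged.append(existing[i]); i += 1
> > >         else:
> > >             merged.append(new_elems[j]); j += 1
> > >     merged.extend(existing[i:])     # splice the remaining tail
> > >     merged.extend(new_elems[j:])
> > >     return merged
> > > ```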
> > 
> > Good.
> > 
> > Do you still like the idea of coalescing set element commands whenever
> > possible?
> 
> Does it mess with error reporting? If not, I don't see a downside of
> doing it.
> 
> With regards to the problem at hand, it seems like a feature to escape
> the actual problem. Please keep in mind that my patch's improvement from
> ~4min down to ~1min is pretty lousy given that v1.0.1 completed the same
> task in 0.3s.

I ran this comparison between 1.0.1:

# nft -v
nftables v1.0.1 (Fearless Fosdick #3)
# nft -f dump_sep.nft

real    0m3,867s
user    0m3,651s
sys     0m0,219s

and current 1.0.4 plus pending patches in patchwork:

# nft -v
nftables v1.0.4 (Lester Gooch #3)
# nft -f dump_sep.nft

real    0m3,867s
user    0m3,677s
sys     0m0,190s

For the record, this dump_sep.nft (that you sent me) looks like this:

# cat dump_sep.nft
add table t
add set t s { type ipv4_addr; flags interval; }
add element t s { 1.0.1.0/24 }
add element t s { 1.0.2.0/23 }
[...] more single 'add element' commands [...]

> IMHO the whole overlap detection/auto merging should happen as commit
> preparation and not per command.

Then this needs to coalesce the commands that update a single set at a
later stage, in such a commit preparation phase.
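[Editorial note: coalescing would, in effect, rewrite a batch like
dump_sep.nft above into a single 'add element' command per set. A
hypothetical coalesced form (the elided elements stay elided):]

```
add table t
add set t s { type ipv4_addr; flags interval; }
add element t s { 1.0.1.0/24, 1.0.2.0/23, [...] }
```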

This code also has to deal with deletions coming in the same batch,
which might likewise arrive one per command, e.g. in a robot-generated
batch.

Userspace overlap detection is only required by kernels <= 5.7, so
this check could be removed.

For automerging, I don't think I can escape tracking each command to
update the userspace set cache and adjust the existing ranges
accordingly.


Thread overview: 4+ messages
2022-06-15 17:33 [nft PATCH] intervals: Do not sort cached set elements over and over again Phil Sutter
2022-06-15 19:36 ` Pablo Neira Ayuso
2022-06-16 10:27   ` Phil Sutter
2022-06-16 11:15     ` Pablo Neira Ayuso [this message]
