From: "Robin Jarry" <rjarry@redhat.com>
To: "Maxime Leroy" <maxime@leroys.fr>,
	"Medvedkin, Vladimir" <vladimir.medvedkin@intel.com>
Cc: <dev@dpdk.org>, <nsaxena16@gmail.com>, <mb@smartsharesystems.com>,
	<adwivedi@marvell.com>, <jerinjacobk@gmail.com>
Subject: Re: [RFC PATCH 0/4] VRF support in FIB library
Date: Mon, 23 Mar 2026 16:08:51 +0100	[thread overview]
Message-ID: <DHA98WR0DERF.T2XK820FGTF4@redhat.com> (raw)
In-Reply-To: <CAHHRULVvLpFpwA-Z8KMuCKmdM24OCeZDrzEMEMs88h2-iqapXw@mail.gmail.com>

Hey folks,

Maxime Leroy, Mar 23, 2026 at 15:53:
> Fair point on VLAN subinterfaces and MPLS VPN. SRv6 L3VPN (End.DT4/
> End.DT6) also fits that pattern after decap.
>
> I agree DPDK often pre-allocates for performance, but I wonder if the
> flat TBL24 actually helps here. Each VRF's working set is spread
> 128 MB apart in the flat table. Would regrouping packets by VRF and
> doing one bulk lookup per VRF with separate contiguous TBL24s be
> more cache-friendly than a single mixed-VRF gather? Do you have
> benchmarks comparing the two approaches?
>
> On the memory trade-off and VRF ID mapping: the API uses vrf_id as
> a direct index (0 to max_vrfs-1). With 256 VRFs and 8B nexthops,
> TBL24 alone costs 32 GB for IPv4 and 32 GB for IPv6 -- 64 GB total
> at startup. In grout, VRF IDs are interface IDs that can be any
> uint16_t, so we would also need to maintain a mapping between our
> VRF IDs and FIB slot indices. We would need to introduce a max_vrfs
> limit, which forces a bad trade-off: either set it low (e.g. 16)
> and limit deployments, or set it high (e.g. 256) and pay 64 GB at
> startup even with a single VRF. With separate FIB instances per VRF,
> we only allocate what we use.

I am also concerned about the global memory consumption. Taking grout as
a live example: we currently support up to 1024 VRFs (VRF IDs are
interface IDs, so the upper limit is simply the maximum number of
interfaces).

Pre-allocating 1024 rte_fib and rte_fib6 instances is virtually
impossible: at 128 MB of TBL24 per instance, that would be 256 GB
before a single route is added.

> On the IPv4/IPv6 TBL8 pool: I was not suggesting merging FIBs, just
> sharing the TBL8 block allocator between separate FIB instances.
> This is possible since dir24_8 and trie use the same TBL8 block
> format (256 entries, same encoding, same size).
>
> Would it be possible to pass a shared TBL8 pool at rte_fib_create()
> time? Each FIB keeps its own TBL24 and RIB, but TBL8 is shared
> across all FIBs and potentially across IPv4/IPv6. Users would no
> longer have to guess num_tbl8 per FIB.

+1 to this. A common tbl8 pool would help a lot: we could keep a single
VRF per fib/rib while sizing one global tbl8 pool for our use case.

Cheers,



Thread overview: 33+ messages
2026-03-22 15:42 [RFC PATCH 0/4] VRF support in FIB library Vladimir Medvedkin
2026-03-22 15:42 ` [RFC PATCH 1/4] fib: add multi-VRF support Vladimir Medvedkin
2026-03-23 15:48   ` Konstantin Ananyev
2026-03-23 19:06     ` Medvedkin, Vladimir
2026-03-23 22:22       ` Konstantin Ananyev
2026-03-25 14:09         ` Medvedkin, Vladimir
2026-03-26 10:13           ` Konstantin Ananyev
2026-03-27 18:32             ` Medvedkin, Vladimir
2026-03-22 15:42 ` [RFC PATCH 2/4] fib: add VRF functional and unit tests Vladimir Medvedkin
2026-03-22 16:40   ` Stephen Hemminger
2026-03-22 16:41   ` Stephen Hemminger
2026-03-22 15:42 ` [RFC PATCH 3/4] fib6: add multi-VRF support Vladimir Medvedkin
2026-03-22 15:42 ` [RFC PATCH 4/4] fib6: add VRF functional and unit tests Vladimir Medvedkin
2026-03-22 16:45   ` Stephen Hemminger
2026-03-22 16:43 ` [RFC PATCH 0/4] VRF support in FIB library Stephen Hemminger
2026-03-23  9:01   ` Morten Brørup
2026-03-23 11:32     ` Medvedkin, Vladimir
2026-03-23 11:16   ` Medvedkin, Vladimir
2026-03-23  9:54 ` Robin Jarry
2026-03-23 11:34   ` Medvedkin, Vladimir
2026-03-23 11:27 ` Maxime Leroy
2026-03-23 12:49   ` Medvedkin, Vladimir
2026-03-23 14:53     ` Maxime Leroy
2026-03-23 15:08       ` Robin Jarry [this message]
2026-03-23 15:27         ` Morten Brørup
2026-03-23 18:52           ` Medvedkin, Vladimir
2026-03-23 18:42       ` Medvedkin, Vladimir
2026-03-24  9:19         ` Maxime Leroy
2026-03-25 15:56           ` Medvedkin, Vladimir
2026-03-25 21:43             ` Maxime Leroy
2026-03-27 18:27               ` Medvedkin, Vladimir
2026-04-02 16:51                 ` Maxime Leroy
2026-03-23 19:05 ` Stephen Hemminger
