From: Simon Horman <horms@kernel.org>
To: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Tobias Brunner <tobias@strongswan.org>,
Antony Antony <antony.antony@secunet.com>,
Daniel Xu <dxu@dxuuu.xyz>, Paul Wouters <paul@nohats.ca>,
Sabrina Dubroca <sd@queasysnail.net>,
netdev@vger.kernel.org, devel@linux-ipsec.org
Subject: Re: [PATCH 1/4] xfrm: Add support for per cpu xfrm state handling.
Date: Tue, 8 Oct 2024 17:47:26 +0100 [thread overview]
Message-ID: <20241008164726.GD99782@kernel.org> (raw)
In-Reply-To: <20241007064453.2171933-2-steffen.klassert@secunet.com>
On Mon, Oct 07, 2024 at 08:44:50AM +0200, Steffen Klassert wrote:
> Currently all flows for a certain SA must be processed by the same
> cpu to avoid packet reordering and lock contention of the xfrm
> state lock.
>
> To get rid of this limitation, the IETF is about to standardize
> per cpu SAs. This patch implements the xfrm part of it:
>
> https://datatracker.ietf.org/doc/draft-ietf-ipsecme-multi-sa-performance/
>
> This adds the cpu as a lookup key for xfrm states and a config option
> to generate acquire messages for each cpu.
>
> With that, we can have on each cpu an SA with identical traffic
> selectors, so that flows can be processed in parallel on all cpus.
>
> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
...
> @@ -2521,6 +2547,7 @@ static inline unsigned int xfrm_aevent_msgsize(struct xfrm_state *x)
> + nla_total_size(4) /* XFRM_AE_RTHR */
> + nla_total_size(4) /* XFRM_AE_ETHR */
> + nla_total_size(sizeof(x->dir)); /* XFRMA_SA_DIR */
> + + nla_total_size(4); /* XFRMA_SA_PCPU */
Hi Steffen,
It looks like the ';' needs to be dropped from the x->dir line.
(Completely untested!)
+ nla_total_size(sizeof(x->dir)) /* XFRMA_SA_DIR */
+ nla_total_size(4); /* XFRMA_SA_PCPU */
Flagged by Smatch.
> }
>
> static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c)
...
Thread overview: 15+ messages
2024-10-07 6:44 [PATCH 0/4] xfrm: Add support for RFC 9611 per cpu xfrm states Steffen Klassert
2024-10-07 6:44 ` [PATCH 1/4] xfrm: Add support for per cpu xfrm state handling Steffen Klassert
2024-10-08 16:47 ` Simon Horman [this message]
2024-10-11 8:22 ` Steffen Klassert
2024-10-10 18:22 ` kernel test robot
2024-10-07 6:44 ` [PATCH 2/4] xfrm: Cache used outbound xfrm states at the policy Steffen Klassert
2024-10-07 14:26 ` Jakub Kicinski
2024-10-11 8:21 ` Steffen Klassert
2024-10-07 6:44 ` [PATCH 3/4] xfrm: Add an inbound percpu state cache Steffen Klassert
2024-10-07 6:44 ` [PATCH 4/4] xfrm: Restrict percpu SA attribute to specific netlink message types Steffen Klassert
2024-10-07 6:48 ` [PATCH 0/4] xfrm: Add support for RFC 9611 per cpu xfrm states Steffen Klassert
2024-11-11 20:42 [PATCH 1/4] xfrm: Add support for per cpu xfrm state handling Kees Bakker
2024-11-12 11:03 ` Steffen Klassert
2024-11-12 19:21 ` Kees Bakker
2024-11-14 10:59 ` Steffen Klassert