From: Nicholas Piggin <npiggin@gmail.com>
To: netdev@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>,
dev@openvswitch.org, Pravin B Shelar <pshelar@ovn.org>,
Aaron Conole <aconole@redhat.com>,
"Eelco Chaudron" <echaudro@redhat.com>,
"Ilya Maximets" <imaximet@redhat.com>,
"Flavio Leitner" <fbl@redhat.com>
Subject: [PATCH 0/7] net: openvswitch: Reduce stack usage
Date: Wed, 11 Oct 2023 13:43:37 +1000
Message-ID: <20231011034344.104398-1-npiggin@gmail.com>

Hi,
I'll post this out again to keep discussion going. Thanks all for the
testing and comments so far.
Changes since the RFC
https://lore.kernel.org/netdev/20230927001308.749910-1-npiggin@gmail.com/
- Replace slab allocations for flow keys with expanding the use
of the per-CPU key allocator to ovs_vport_receive.
- Drop patch 1 in favour of Ilya's patch, since they did the same
  thing (Ilya's is added as patch 3).
- Change push_nsh stack reduction from slab allocation to per-cpu
buffer.
- Drop the ovs_fragment stack usage reduction for now since it used
  slab and was a bit more complicated.
I posted an initial version of the per-cpu flow allocator patch in
the RFC thread. Since then I cleaned up some debug code and increased
the allocator size to accommodate the additional user of it.
Thanks,
Nick
Ilya Maximets (1):
openvswitch: reduce stack usage in do_execute_actions
Nicholas Piggin (6):
net: openvswitch: generalise the per-cpu flow key allocation stack
net: openvswitch: Use flow key allocator in ovs_vport_receive
net: openvswitch: Reduce push_nsh stack usage
net: openvswitch: uninline action execution
net: openvswitch: uninline ovs_fragment to control stack usage
net: openvswitch: Reduce stack usage in ovs_dp_process_packet
net/openvswitch/actions.c | 208 +++++++++++++++++++++++--------------
net/openvswitch/datapath.c | 56 +++++-----
net/openvswitch/flow.h | 3 +
net/openvswitch/vport.c | 27 +++--
4 files changed, 185 insertions(+), 109 deletions(-)
--
2.42.0
Thread overview: 15+ messages
2023-10-11 3:43 Nicholas Piggin [this message]
2023-10-11 3:43 ` [PATCH 1/7] net: openvswitch: generalise the per-cpu flow key allocation stack Nicholas Piggin
2023-10-11 3:43 ` [PATCH 2/7] net: openvswitch: Use flow key allocator in ovs_vport_receive Nicholas Piggin
2023-10-11 3:43 ` [PATCH 3/7] openvswitch: reduce stack usage in do_execute_actions Nicholas Piggin
2023-10-11 3:43 ` [PATCH 4/7] net: openvswitch: Reduce push_nsh stack usage Nicholas Piggin
2023-10-11 3:43 ` [PATCH 5/7] net: openvswitch: uninline action execution Nicholas Piggin
2023-10-11 3:43 ` [PATCH 6/7] net: openvswitch: uninline ovs_fragment to control stack usage Nicholas Piggin
2023-10-11 3:43 ` [PATCH 7/7] net: openvswitch: Reduce stack usage in ovs_dp_process_packet Nicholas Piggin
2023-10-11 12:22 ` [PATCH 0/7] net: openvswitch: Reduce stack usage Ilya Maximets
2023-10-12 0:08 ` Nicholas Piggin
2023-10-11 13:23 ` Aaron Conole
2023-10-12 1:19 ` Nicholas Piggin
2023-10-13 8:27 ` David Laight
2023-10-20 17:04 ` Aaron Conole
2023-10-25 4:06 ` Nicholas Piggin