netdev.vger.kernel.org archive mirror
From: Aaron Conole <aconole@redhat.com>
To: "Nicholas Piggin" <npiggin@gmail.com>
Cc: <netdev@vger.kernel.org>,  <dev@openvswitch.org>,
	 "Ilya Maximets" <imaximet@redhat.com>,
	 "Eelco Chaudron" <echaudro@redhat.com>,
	 "Flavio Leitner" <fbl@redhat.com>
Subject: Re: [ovs-dev] [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive reduce stack usage
Date: Wed, 04 Oct 2023 11:16:11 -0400	[thread overview]
Message-ID: <f7tr0mafmf8.fsf@redhat.com> (raw)
In-Reply-To: <CVZH8D7WMBDW.3D77G01KQ48R3@wheely> (Nicholas Piggin's message of "Wed, 04 Oct 2023 17:29:30 +1000")

"Nicholas Piggin" <npiggin@gmail.com> writes:

> On Fri Sep 29, 2023 at 1:26 AM AEST, Aaron Conole wrote:
>> Nicholas Piggin <npiggin@gmail.com> writes:
>>
>> > Dynamically allocating the sw_flow_key reduces stack usage of
>> > ovs_vport_receive from 544 bytes to 64 bytes at the cost of
>> > another GFP_ATOMIC allocation in the receive path.
>> >
>> > XXX: is this a problem with memory reserves if ovs is in a
>> > memory reclaim path, or since we have a skb allocated, is it
>> > okay to use some GFP_ATOMIC reserves?
>> >
>> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> > ---
>>
>> This represents a fairly large performance hit.  Just my own quick
>> testing on a system using two netns, iperf3, and simple forwarding rules
>> shows between 2.5% and 4% performance reduction on x86-64.  Note that it
>> is a simple case, and doesn't involve a more involved scenario like
>> multiple bridges, tunnels, and internal ports.  I suspect such cases
>> will see an even bigger hit.
>>
>> I don't know the impact of the other changes, but just an FYI that the
>> performance impact of this change is extremely noticeable on the x86
>> platform.
>>
>> ----
>> ip netns add left
>> ip netns add right
>>
>> ip link add eth0 type veth peer name l0
>> ip link set eth0 netns left
>> ip netns exec left ip addr add 172.31.110.1/24 dev eth0
>> ip netns exec left ip link set eth0 up
>> ip link set l0 up
>>
>> ip link add eth0 type veth peer name r0
>> ip link set eth0 netns right
>> ip netns exec right ip addr add 172.31.110.2/24 dev eth0
>> ip netns exec right ip link set eth0 up
>> ip link set r0 up
>>
>> python3 ovs-dpctl.py add-dp br0
>> python3 ovs-dpctl.py add-if br0 l0
>> python3 ovs-dpctl.py add-if br0 r0
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(1),eth(),eth_type(0x806),arp()' 2
>>   
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(2),eth(),eth_type(0x806),arp()' 1
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(1),eth(),eth_type(0x800),ipv4()' 2
>>
>> python3 ovs-dpctl.py add-flow \
>>   br0 'in_port(2),eth(),eth_type(0x800),ipv4()' 1
>>
>> ----
>>
>> ex results without this patch:
>> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
>> ...
>> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec    0             sender
>> [  5]   0.00-10.00  sec  46.7 GBytes  40.2 Gbits/sec                  receiver
>>
>>
>> ex results with this patch:
>> [root@wsfd-netdev60 ~]# ip netns exec right ./git/iperf/src/iperf3 -c 172.31.110.1
>> ...
>> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec    0             sender
>> [  5]   0.00-10.00  sec  44.9 GBytes  38.6 Gbits/sec                  receiver
>>
>> I did testing with udp at various bandwidths and this tcp testing.
>
> Thanks for the test case. It works perfectly in the end, but it took me
> days to get there because of a random conspiracy of problems I hit :(
> Sorry for the slow reply, but I was now able to test another idea for
> this. Performance seems to be within the noise with the full series, but
> my system is only getting ~half the rate of yours so you might see more
> movement.
>
> Instead of the slab allocator, it reuses the per-cpu actions key allocator here:
>
> https://github.com/torvalds/linux/commit/878f01f04ca858e445ff4b4c64351a25bb8399e3
>
> Pushed the series to kvm branch of https://github.com/npiggin/linux
>
> I can repost the series as a second RFC but will wait for thoughts on
> this approach.

Thanks - I'll take a look at it.

> Thanks,
> Nick



Thread overview: 27+ messages
2023-09-27  0:13 [RFC PATCH 0/7] net: openvswitch: Reduce stack usage Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 1/7] net: openvswitch: Move NSH buffer out of do_execute_actions Nicholas Piggin
2023-09-27  8:26   ` [ovs-dev] " Ilya Maximets
2023-09-27 10:03     ` Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 2/7] net: openvswitch: Reduce execute_push_nsh stack overhead Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 3/7] net: openvswitch: uninline action execution Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 4/7] net: openvswitch: ovs_vport_receive reduce stack usage Nicholas Piggin
2023-09-28 15:26   ` [ovs-dev] " Aaron Conole
2023-09-29  7:00     ` Nicholas Piggin
2023-09-29  8:38       ` Eelco Chaudron
2023-10-04  7:11         ` Nicholas Piggin
2023-10-04 15:15           ` Aaron Conole
2023-10-05  2:01         ` Nicholas Piggin
2023-10-11 13:34           ` Aaron Conole
2023-10-11 23:58             ` Nicholas Piggin
2023-10-04  7:29     ` Nicholas Piggin
2023-10-04 15:16       ` Aaron Conole [this message]
2023-09-27  0:13 ` [RFC PATCH 5/7] net: openvswitch: uninline ovs_fragment to control " Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 6/7] net: openvswitch: Reduce ovs_fragment " Nicholas Piggin
2023-09-27  0:13 ` [RFC PATCH 7/7] net: openvswitch: Reduce stack usage in ovs_dp_process_packet Nicholas Piggin
2023-09-27  8:36 ` [ovs-dev] [RFC PATCH 0/7] net: openvswitch: Reduce stack usage Ilya Maximets
2023-09-28  1:52   ` Nicholas Piggin
2023-10-02 11:54     ` Ilya Maximets
2023-10-04  9:56       ` Nicholas Piggin
2023-09-29  7:06   ` Nicholas Piggin
2023-10-02 11:56     ` Ilya Maximets
2023-10-03 13:31       ` Aaron Conole
