From: "Nicholas Piggin" <npiggin@gmail.com>
To: "Ilya Maximets" <i.maximets@ovn.org>, <netdev@vger.kernel.org>
Cc: <dev@openvswitch.org>
Subject: Re: [ovs-dev] [RFC PATCH 0/7] net: openvswitch: Reduce stack usage
Date: Fri, 29 Sep 2023 17:06:53 +1000
Message-ID: <CVV7MBT9C7JY.5PYBOXU9NUDR@wheely>
In-Reply-To: <a018e82f-5cce-fb81-b52c-901e106c16eb@ovn.org>
On Wed Sep 27, 2023 at 6:36 PM AEST, Ilya Maximets wrote:
> On 9/27/23 02:13, Nicholas Piggin wrote:
> > Hi,
> >
> > We've got a report of a stack overflow on ppc64le with a 16kB kernel
> > stack. Openvswitch is just one of many things in the stack, but it
> > does cause recursion and contributes to some usage.
> >
> > Here are a few patches for reducing stack overhead. I don't know the
> > code well, so consider them just ideas. The GFP_ATOMIC allocations
> > introduced in a couple of places might be controversial, but there
> > are still some savings to be had if you skip those.
> >
> > Here is one detected case where the stack reaches >14kB before
> > overflowing a little later. I massaged the output so it just shows
> > the stack frame address on the left.
>
> Hi, Nicholas. Thanks for the patches!
>
> Though it looks like OVS is not really playing a huge role in the
> stack trace below. How much of the stack does the patch set save
> in total? And how much do patches 2-7 contribute (I posted a patch
> similar to the first one last week, so we may not count it)?
Stack usage was tested for the same path (with the series backported
to the RHEL9 kernel), and the saving was 2080 bytes. That's enough
to get us out of trouble here, but a config that caused more
recursions might still be a problem.
>
> Also, most of the changes introduced here have a real chance to
> noticeably impact performance. Did you run any performance tests
> with this to assess the impact?
Some numbers were posted by Aaron, as you will have seen: 2-4% for
that patch, but I suspect the rest should have a much smaller impact.
Patch 2 might show up if you were doing a lot of push_nsh operations,
but that is probably less important since it's out of the recursive path.
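For anyone skimming, the trade-off being measured is roughly the
following pattern (a simplified sketch only; the identifiers, the buffer
size and do_push() are made up for illustration, not the actual patch
code): a scratch buffer that currently lives on the kernel stack of a
function on the recursive path is replaced by a GFP_ATOMIC allocation.

#define EXAMPLE_HDR_MAX 256		/* placeholder size for the sketch */

/* Before: the buffer costs ~256 bytes of stack on every recursion. */
static int push_example_before(struct sk_buff *skb, const void *hdr, int len)
{
	u8 buf[EXAMPLE_HDR_MAX];

	if (len > EXAMPLE_HDR_MAX)
		return -EINVAL;
	memcpy(buf, hdr, len);
	return do_push(skb, buf, len);		/* hypothetical helper */
}

/* After: the stack cost drops to a pointer, but every call now pays for
 * a GFP_ATOMIC allocation and has to handle the failure path.
 */
static int push_example_after(struct sk_buff *skb, const void *hdr, int len)
{
	u8 *buf;
	int err;

	if (len > EXAMPLE_HDR_MAX)
		return -EINVAL;
	buf = kmalloc(EXAMPLE_HDR_MAX, GFP_ATOMIC);
	if (!buf)
		return -ENOMEM;
	memcpy(buf, hdr, len);
	err = do_push(skb, buf, len);
	kfree(buf);
	return err;
}

The per-packet allocate/free on a hot path is the obvious place for a
few percent to go, which would be consistent with the numbers Aaron
posted.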
>
> One last thing is that at least some of the patches seem to change
> non-inlined, non-recursive functions, which seems unnecessary.
>
> Best regards, Ilya Maximets.
>
One thing I do notice in the trace:
> >
> > [c00000037d480b40] __kmalloc+0x8c/0x5e0
> > [c00000037d480bc0] virtqueue_add_outbuf+0x354/0xac0
> > [c00000037d480cc0] xmit_skb+0x1dc/0x350 [virtio_net]
> > [c00000037d480d50] start_xmit+0xd4/0x3b0 [virtio_net]
> > [c00000037d480e00] dev_hard_start_xmit+0x11c/0x280
> > [c00000037d480e80] sch_direct_xmit+0xec/0x330
> > [c00000037d480f20] __dev_xmit_skb+0x41c/0xa80
> > [c00000037d480f90] __dev_queue_xmit+0x414/0x950
> > [c00000037d481070] ovs_vport_send+0xb4/0x210 [openvswitch]
> > [c00000037d4810f0] do_output+0x7c/0x200 [openvswitch]
> > [c00000037d481140] do_execute_actions+0xe48/0xeb0 [openvswitch]
> > [c00000037d481300] ovs_execute_actions+0x78/0x1f0 [openvswitch]
> > [c00000037d481380] ovs_dp_process_packet+0xb4/0x2e0 [openvswitch]
> > [c00000037d481450] ovs_vport_receive+0x8c/0x130 [openvswitch]
> > [c00000037d481660] internal_dev_xmit+0x40/0xd0 [openvswitch]
> > [c00000037d481690] dev_hard_start_xmit+0x11c/0x280
> > [c00000037d481710] __dev_queue_xmit+0x634/0x950
> > [c00000037d4817f0] neigh_hh_output+0xd0/0x180
> > [c00000037d481840] ip_finish_output2+0x31c/0x5c0
> > [c00000037d4818e0] ip_local_out+0x64/0x90
> > [c00000037d481920] iptunnel_xmit+0x194/0x290
> > [c00000037d4819c0] udp_tunnel_xmit_skb+0x100/0x140 [udp_tunnel]
> > [c00000037d481a80] geneve_xmit_skb+0x34c/0x610 [geneve]
> > [c00000037d481bb0] geneve_xmit+0x94/0x1e8 [geneve]
> > [c00000037d481c30] dev_hard_start_xmit+0x11c/0x280
> > [c00000037d481cb0] __dev_queue_xmit+0x634/0x950
> > [c00000037d481d90] ovs_vport_send+0xb4/0x210 [openvswitch]
> > [c00000037d481e10] do_output+0x7c/0x200 [openvswitch]
> > [c00000037d481e60] do_execute_actions+0xe48/0xeb0 [openvswitch]
> > [c00000037d482020] ovs_execute_actions+0x78/0x1f0 [openvswitch]
> > [c00000037d4820a0] ovs_dp_process_packet+0xb4/0x2e0 [openvswitch]
> > [c00000037d482170] clone_execute+0x2c8/0x370 [openvswitch]
^^^^^
clone_execute is an action which can be deferred AFAICS, but it is
not deferred until several recursions deep.
If we always deferred when possible, we might avoid such a big
stack (at least for this config). Is it very costly to defer? And
would it help here, or would the deferred work just be processed
right away and cause basically the same call chain?
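For reference, the shape of what I'm asking about is something like the
following (a minimal sketch of depth-limited deferral; the names, the
threshold value and the helpers are assumptions for illustration, not
the actual clone_execute()/clone_key() code):

#define EXAMPLE_DEPTH_THRESHOLD	4	/* assumed recursion limit */

static int example_clone_execute(struct datapath *dp, struct sk_buff *skb,
				 struct sw_flow_key *key,
				 const struct nlattr *actions, int len,
				 int depth)
{
	if (depth <= EXAMPLE_DEPTH_THRESHOLD) {
		/* Below the limit: execute the clone immediately, so every
		 * level keeps its full frame (plus any cloned flow key)
		 * live on the kernel stack at the same time.
		 */
		return example_execute_actions(dp, skb, key, actions, len,
					       depth + 1);
	}

	/* At or over the limit: queue the clone on a per-CPU FIFO and let
	 * the outermost caller drain it later, which flattens the call
	 * chain instead of growing the stack.
	 */
	return example_defer_actions(dp, skb, key, actions, len);
}

Always deferring would correspond to a threshold of zero here: the stack
would stay flat, but every clone would then pay the enqueue/dequeue cost
even when we are nowhere near overflowing.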
Thanks,
Nick