From: Vlad Buslov <vladbu@nvidia.com>
To: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: <davem@davemloft.net>, <kuba@kernel.org>, <pabeni@redhat.com>,
<netdev@vger.kernel.org>, <netfilter-devel@vger.kernel.org>,
<jhs@mojatatu.com>, <xiyou.wangcong@gmail.com>,
<jiri@resnulli.us>, <ozsh@nvidia.com>,
<marcelo.leitner@gmail.com>, <simon.horman@corigine.com>
Subject: Re: [PATCH net-next v3 4/7] netfilter: flowtable: allow updating offloaded rules asynchronously
Date: Tue, 24 Jan 2023 11:19:31 +0200
Message-ID: <87sfg0cmnw.fsf@nvidia.com>
In-Reply-To: <Y8+Zny8S9BQm7asq@salvia>

On Tue 24 Jan 2023 at 09:41, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> Hi Vlad,
>
> On Tue, Jan 24, 2023 at 09:06:13AM +0200, Vlad Buslov wrote:
>>
>> On Fri 20 Jan 2023 at 12:41, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
>> > Hi Vlad,
>> >
>> > On Thu, Jan 19, 2023 at 08:51:01PM +0100, Vlad Buslov wrote:
>> >> Following patches in series need to update flowtable rule several times
>> >> during its lifetime in order to synchronize hardware offload with actual ct
>> >> status. However, reusing the existing 'refresh' logic in act_ct would cause
>> >> the data path to potentially schedule a significant number of spurious tasks
>> >> on the 'add' workqueue, since it is executed per-packet. Instead, introduce a new
>> >> flow 'update' flag and use it to schedule async flow refresh in flowtable
>> >> gc which will only be executed once per gc iteration.
>> >
>> > So the idea is to use a NF_FLOW_HW_UPDATE flag which triggers the
>> > update from the garbage collector. I understand the motivation here
>> > is to avoid adding more work to the workqueue, by simply letting the
>> > gc thread pick up the update.
>> >
>> > I already proposed alternative approaches last year to improve the
>> > workqueue logic, including cancellation of useless work. For example,
>> > cancel an in-flight "add" work if a "delete" arrives while the work
>> > is still sitting in the queue. The same approach could be used for
>> > this update logic, ie. cancel a unidirectional UDP "add", or upgrade
>> > it to bidirectional, if we see traffic in both directions while the
>> > work is still sitting in the queue.
>>
>> Thanks for the suggestion. I'll try to make this work over regular
>> workqueues without further extending the flow flags and/or putting more
>> stuff into gc.
>
> Let me make a second pass to sort out my thoughts on this.
>
> Either we use regular workqueues (without new flags) or we explore
> fully consolidating this hardware offload workqueue infrastructure
> around flags, ie. use flags not only for update events, but also for
> new and delete.
>
> This would go more in the direction of your _UPDATE flag idea:
>
> - Update the hardware offload workqueue to iterate over the
>   flowtable. The hardware offload workqueue would be "scanning" for
>   entries in the flowtable that require some sort of update in the
>   hardware. The flags would tell what kind of action is needed.
>
> - Add these flags:
>
>   NF_FLOW_HW_NEW
>   NF_FLOW_HW_UPDATE
>   NF_FLOW_HW_DELETE
>
>   and remove the work object (flow_offload_work) and the existing list.
>   If the workqueue finds an entry with:
>
>   NEW|DELETE means this is a short-lived flow, not worth wasting
>   cycles to offload.
>   NEW|UPDATE means this is a UDP flow that has become bidirectional.
>
> Then, there would be no more work allocation and no "flying" work
> objects for the hardware offload workqueue; instead, the workqueue
> would iterate over the flowtable.
>
> This approach would need _DONE flags to annotate if the offload
> updates have been applied to hardware already (similar to the
> conntrack _DONE flags).
>
> (Oh well, this proposal is adding even more flags. But I think flags
> are not the issue, but the mixture of the existing flow_offload_work
> approach with this new _UPDATE flag and the gc changes).
>
> If flow_offload_work is removed, we would also need to add a:
>
>   struct nf_flowtable *flowtable;
>
> field to the flow_offload entry, which is currently passed via
> flow_offload_work. So that is one extra field for each flow_offload
> entry.
>
> The other alternative is to use the existing nf_flow_offload_add_wq
> with UPDATE command, which might result in more flying objects in
> turn. I think this is what you are trying to avoid with the _UPDATE
> flag approach.

This looks interesting, but it is very ambitious and will probably be a
bigger change than this whole series. I have an idea for how we can
leverage the existing 'refresh' mechanism to update flow state without
large-scale refactoring of the existing offload infrastructure, which I
would prefer to try first. WDYT?