netdev.vger.kernel.org archive mirror
From: Pablo Neira Ayuso <pablo@netfilter.org>
To: Simon Horman <simon.horman@netronome.com>
Cc: wenxu <wenxu@ucloud.cn>,
	netdev@vger.kernel.org, davem@davemloft.net, vladbu@mellanox.com
Subject: Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb
Date: Tue, 16 Jun 2020 22:38:34 +0200
Message-ID: <20200616203834.GA27394@salvia> (raw)
In-Reply-To: <20200616154716.GA16382@netronome.com>

On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote:
> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote:
> > 
> > > On 2020/6/16 22:34, Simon Horman wrote:
> > > On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote:
> > >> On 2020/6/16 18:51, Simon Horman wrote:
> > >>> On Tue, Jun 16, 2020 at 11:19:38AM +0800, wenxu@ucloud.cn wrote:
> > >>>> From: wenxu <wenxu@ucloud.cn>
> > >>>>
> > >>>> In __flow_block_indr_cleanup(), the match statement
> > >>>> this->cb_priv == cb_priv is always false: flow_block_cb->cb_priv
> > >>>> holds entirely different data than flow_indr_dev->cb_priv.
> > >>>>
> > >>>> Instead, have the driver store the representor's cb_priv in
> > >>>> flow_block_cb->indr.cb_priv.
> > >>>>
> > >>>> Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block infrastructure")
> > >>>> Signed-off-by: wenxu <wenxu@ucloud.cn>
> > >>> Hi Wenxu,
> > >>>
> > >>> I wonder if this can be resolved by using the cb_ident field of struct
> > >>> flow_block_cb.
> > >>>
> > >>> I observe that mlx5e_rep_indr_setup_block() seems to be the only call-site
> > >>> where the value of the cb_ident parameter of flow_block_cb_alloc() is
> > >>> per-block rather than per-device. So part of my proposal is to change
> > >>> that.
> > >> I checked all of the drivers' *_indr_setup_block functions. The
> > >> cb_ident parameter of flow_block_cb_alloc() appears to be per-block
> > >> in all of them, in both nfp_flower_setup_indr_tc_block() and
> > >> bnxt_tc_setup_indr_block():
> > >>
> > >> nfp_flower_setup_indr_tc_block:
> > >>
> > >> struct nfp_flower_indr_block_cb_priv *cb_priv;
> > >>
> > >> block_cb = flow_block_cb_alloc(nfp_flower_setup_indr_block_cb,
> > >>                                cb_priv, cb_priv,
> > >>                                nfp_flower_setup_indr_tc_release);
> > >>
> > >> bnxt_tc_setup_indr_block:
> > >>
> > >> struct bnxt_flower_indr_block_cb_priv *cb_priv;
> > >>
> > >> block_cb = flow_block_cb_alloc(bnxt_tc_setup_indr_block_cb,
> > >>                                cb_priv, cb_priv,
> > >>                                bnxt_tc_setup_indr_rel);
> > >>
> > >> And in most places flow_block_cb_is_busy() is called with cb_priv,
> > >> not a separate cb_ident.
> > > Thanks, I see that now. But I still think it would be useful to understand
> > > the purpose of cb_ident. It feels like it would lead to a clean solution
> > > to the problem you have highlighted.
> > 
> > I think cb_ident means "identity": it is used to identify each
> > flow_block_cb. Both flow_block_cb_is_busy() and flow_block_cb_lookup()
> > check block_cb->cb_ident == cb_ident.
> 
> Thanks, I think that I now see what you mean about the different scope of
> cb_ident and your proposal to allow cleanup by flow_indr_dev_unregister().
> 
> I do, however, still wonder if there is a nicer way than reaching into
> the structure and manually setting block_cb->indr.cb_priv
> at each call-site.
> 
> Perhaps a variant of flow_block_cb_alloc() for indirect blocks
> would be nicer?

A follow-up patch to add this new variant would be good. With it,
__flow_block_indr_binding() can probably go away, since the variant
would set up the indirect flow block itself.


Thread overview: 24+ messages
2020-06-16  3:19 [PATCH net v3 0/4] several fixes for indirect flow_blocks offload wenxu
2020-06-16  3:19 ` [PATCH net v3 1/4] flow_offload: fix incorrect cleanup for flowtable indirect flow_blocks wenxu
2020-06-16 15:55   ` Simon Horman
2020-06-16 20:11   ` Pablo Neira Ayuso
2020-06-16  3:19 ` [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb wenxu
2020-06-16 10:51   ` Simon Horman
2020-06-16 14:20     ` wenxu
2020-06-16 14:34       ` Simon Horman
2020-06-16 15:18         ` wenxu
2020-06-16 15:47           ` Simon Horman
2020-06-16 20:38             ` Pablo Neira Ayuso [this message]
2020-06-17  3:36               ` wenxu
2020-06-17  8:38                 ` Pablo Neira Ayuso
2020-06-17 10:09                   ` wenxu
2020-06-17  2:47             ` wenxu
2020-06-16 20:13   ` Pablo Neira Ayuso
2020-06-17  2:42     ` wenxu
2020-06-17  9:03       ` Pablo Neira Ayuso
2020-06-16  3:19 ` [PATCH net v3 3/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log wenxu
2020-06-16 20:17   ` Pablo Neira Ayuso
2020-06-16 20:30     ` Pablo Neira Ayuso
2020-06-17  2:34       ` wenxu
2020-06-17  2:29     ` wenxu
2020-06-16  3:19 ` [PATCH net v3 4/4] flow_offload: fix the list_del corruption in the driver list wenxu
