From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 14 Feb 2019 21:34:02 +0100
From: Stefano Brivio
To: Vlad Buslov
Cc: netdev@vger.kernel.org, jhs@mojatatu.com, xiyou.wangcong@gmail.com,
        jiri@resnulli.us, davem@davemloft.net
Subject: Re: [PATCH net-next 02/12] net: sched: flower: refactor fl_change
Message-ID: <20190214213402.67919dea@redhat.com>
In-Reply-To: <20190214074712.17846-3-vladbu@mellanox.com>
References: <20190214074712.17846-1-vladbu@mellanox.com>
        <20190214074712.17846-3-vladbu@mellanox.com>
Organization: Red Hat

On Thu, 14 Feb 2019 09:47:02 +0200
Vlad Buslov wrote:

> As preparation for using the classifier spinlock instead of relying on
> the external rtnl lock, rearrange the code in fl_change(). The goal is
> to group the code that changes classifier state into a single block, so
> that the following commits in this set can protect it from parallel
> modification with tp->lock. The data structures that require tp->lock
> protection are the mask hashtable, the filters list, and the classifier
> handle_idr.
>
> fl_hw_replace_filter() is a sleeping function and cannot be called
> while holding a spinlock. In order to execute the whole sequence of
> changes to the shared classifier data structures atomically, call
> fl_hw_replace_filter() before modifying them.
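
The ordering described here boils down to something like the following
compressed sketch. This is not the actual flower code: struct my_filter,
my_hw_replace_filter(), my_insert_sw_state() and my_hw_destroy_filter()
are hypothetical placeholders, and tp->lock stands for the classifier
spinlock that later patches in this series introduce:

static int change_filter(struct tcf_proto *tp, struct my_filter *fnew)
{
	int err;

	/* Sleeping call: offload the new filter to hardware first,
	 * while no spinlock is held.
	 */
	err = my_hw_replace_filter(tp, fnew);
	if (err)
		return err;

	/* The mask hashtable, handle_idr and filters list are then
	 * updated as one contiguous block, so that a later patch can
	 * simply wrap the block in tp->lock.
	 */
	spin_lock(&tp->lock);
	err = my_insert_sw_state(tp, fnew);	/* GFP_ATOMIC territory */
	spin_unlock(&tp->lock);

	if (err)
		my_hw_destroy_filter(tp, fnew);	/* roll back the offload */
	return err;
}

The only sleeping call thus happens before the lock is taken, which is
also why the idr_alloc_u32() calls in the hunk below switch from
GFP_KERNEL to GFP_ATOMIC.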
>
> Signed-off-by: Vlad Buslov
> Acked-by: Jiri Pirko
> ---
>  net/sched/cls_flower.c | 85 ++++++++++++++++++++++++++------------------------
>  1 file changed, 44 insertions(+), 41 deletions(-)
>
> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> index 88d7af78ba7e..91596a6271f8 100644
> --- a/net/sched/cls_flower.c
> +++ b/net/sched/cls_flower.c
> @@ -1354,90 +1354,93 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>  	if (err < 0)
>  		goto errout;
>
> -	if (!handle) {
> -		handle = 1;
> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> -				    INT_MAX, GFP_KERNEL);
> -	} else if (!fold) {
> -		/* user specifies a handle and it doesn't exist */
> -		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> -				    handle, GFP_KERNEL);
> -	}
> -	if (err)
> -		goto errout;
> -	fnew->handle = handle;
> -
>
> [...]
>
>  	if (fold) {
> +		fnew->handle = handle;

I'm probably missing something, but what if fold is passed and the
handle isn't specified? That can still happen, right? In that case we
wouldn't be allocating the handle.

> +
> +		err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
> +					     fnew->mask->filter_ht_params);
> +		if (err)
> +			goto errout_hw;
> +
>  		rhashtable_remove_fast(&fold->mask->ht,
>  				       &fold->ht_node,
>  				       fold->mask->filter_ht_params);
> -		if (!tc_skip_hw(fold->flags))
> -			fl_hw_destroy_filter(tp, fold, NULL);
> -	}
> -
> -	*arg = fnew;
> -
> -	if (fold) {
>  		idr_replace(&head->handle_idr, fnew, fnew->handle);
>  		list_replace_rcu(&fold->list, &fnew->list);
> +
> +		if (!tc_skip_hw(fold->flags))
> +			fl_hw_destroy_filter(tp, fold, NULL);
>  		tcf_unbind_filter(tp, &fold->res);
>  		tcf_exts_get_net(&fold->exts);
>  		tcf_queue_work(&fold->rwork, fl_destroy_filter_work);
>  	} else {
> +		if (__fl_lookup(fnew->mask, &fnew->mkey)) {
> +			err = -EEXIST;
> +			goto errout_hw;
> +		}
> +
> +		if (handle) {
> +			/* user specifies a handle and it doesn't exist */
> +			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> +					    handle, GFP_ATOMIC);
> +		} else {
> +			handle = 1;
> +			err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
> +					    INT_MAX, GFP_ATOMIC);
> +		}
> +		if (err)
> +			goto errout_hw;

Just in case you respin: a newline here would be nice to have.

> +		fnew->handle = handle;
> +
> +		err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
> +					     fnew->mask->filter_ht_params);
> +		if (err)
> +			goto errout_idr;
> +
>  		list_add_tail_rcu(&fnew->list, &fnew->mask->filters);
>  	}
>
> +	*arg = fnew;
> +
>  	kfree(tb);
>  	kfree(mask);
>  	return 0;
>
> -errout_mask_ht:
> -	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
> -			       fnew->mask->filter_ht_params);
> -
> -errout_mask:
> -	fl_mask_put(head, fnew->mask, false);
> -
>  errout_idr:
>  	if (!fold)

This check could go away, I guess (not a strong preference though).

>  		idr_remove(&head->handle_idr, fnew->handle);
> +errout_hw:
> +	if (!tc_skip_hw(fnew->flags))
> +		fl_hw_destroy_filter(tp, fnew, NULL);
> +errout_mask:
> +	fl_mask_put(head, fnew->mask, false);
>  errout:
>  	tcf_exts_destroy(&fnew->exts);
>  	kfree(fnew);

-- 
Stefano
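
For reference, the handle allocation discussed above relies on
idr_alloc_u32() allocating an unused ID in the inclusive range
[*id, max] and writing the result back through *id. A minimal sketch of
the two call patterns from the hunk, with example_idr and alloc_handle()
as hypothetical stand-ins rather than actual flower code:

#include <linux/idr.h>

static DEFINE_IDR(example_idr);

static int alloc_handle(void *filter, u32 *handle)
{
	int err;

	if (*handle) {
		/* The user specified a handle: request exactly that ID
		 * by passing it as both the starting value and the
		 * inclusive maximum.
		 */
		err = idr_alloc_u32(&example_idr, filter, handle,
				    *handle, GFP_ATOMIC);
	} else {
		/* No handle given: take the first free ID in
		 * [1, INT_MAX]. GFP_ATOMIC because, once the series is
		 * complete, this runs under the tp->lock spinlock,
		 * where a GFP_KERNEL allocation could sleep.
		 */
		*handle = 1;
		err = idr_alloc_u32(&example_idr, filter, handle,
				    INT_MAX, GFP_ATOMIC);
	}
	return err;	/* 0 on success; *handle then holds the new ID */
}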