From: Cong Wang
Date: Wed, 27 Feb 2019 15:03:20 -0800
Subject: Re: [PATCH net-next] net: sched: don't release block->lock when dumping chains
To: Vlad Buslov
Cc: Linux Kernel Network Developers, Jamal Hadi Salim, Jiri Pirko, David Miller

On Tue, Feb 26, 2019 at 8:10 AM Vlad Buslov wrote:
>
> On Tue 26 Feb 2019 at 00:15, Cong Wang wrote:
> > On Mon, Feb 25, 2019 at 7:45 AM Vlad Buslov wrote:
> >>
> >> Function tc_dump_chain() obtains and releases block->lock on each
> >> iteration of its inner loop that dumps all chains on the block.
> >> Outputting chain template info is a fast operation, so locking and
> >> unlocking the mutex multiple times is an overhead when the lock is
> >> highly contended. Modify tc_dump_chain() to obtain block->lock once
> >> and dump all chains without releasing it.
> >>
> >> Signed-off-by: Vlad Buslov
> >> Suggested-by: Cong Wang
> >
> > Thanks for the follow-up!
> >
> > Isn't it similar for __tcf_get_next_proto() in tcf_chain_dump()?
> > And for tc_dump_tfilter()?
>
> Not really. These two dump all tp filters, not just a template, which is
> O(n) in the number of filters and can be slow because it calls the hw
> offload API for each of them. Our typical use case involves a periodic
> filter dump (to update stats) while multiple concurrent user-space
> threads are updating filters, so it is important for them to be able to
> execute in parallel.

Hmm, but if these are read-only, you probably don't even need a mutex:
you can just use the RCU read lock to protect the list iteration, and you
can still grab the refcnt in the same way.
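
For readers following the thread, here is a rough sketch of the two locking
shapes under discussion. The struct block/struct chain types and the
dump_one()/chain_put() helpers below are simplified stand-ins invented for
illustration, not the real tcf_block/tcf_chain definitions from
net/sched/cls_api.c:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>

/* Simplified stand-ins for the kernel's tcf_block/tcf_chain structures. */
struct chain {
	struct list_head list;
	refcount_t refcnt;
};

struct block {
	struct mutex lock;		/* plays the role of block->lock */
	struct list_head chain_list;
};

void dump_one(const struct chain *chain);	/* fast: emits template info only */
void chain_put(struct chain *chain);		/* hypothetical refcnt release */

/*
 * The pattern the patch moves to: take the mutex once and hold it across
 * the whole walk instead of relocking around every chain. Reasonable here
 * because dumping template info is fast and does not sleep.
 */
static void dump_chains_locked(struct block *block)
{
	struct chain *chain;

	mutex_lock(&block->lock);
	list_for_each_entry(chain, &block->chain_list, list)
		dump_one(chain);
	mutex_unlock(&block->lock);
}

/*
 * The lock-free alternative suggested at the end of the mail: for a
 * read-only walk, the RCU read lock keeps entries from being freed under
 * us, and refcount_inc_not_zero() skips chains that are concurrently
 * being destroyed. Assumes chains are unlinked with list_del_rcu() and
 * freed only after a grace period.
 */
static void dump_chains_rcu(struct block *block)
{
	struct chain *chain;

	rcu_read_lock();
	list_for_each_entry_rcu(chain, &block->chain_list, list) {
		if (!refcount_inc_not_zero(&chain->refcnt))
			continue;	/* refcnt already zero, chain is dying */
		dump_one(chain);	/* must not sleep under rcu_read_lock() */
		chain_put(chain);
	}
	rcu_read_unlock();
}

The real dump paths also have to resume from an offset across netlink
messages, which is why the in-tree helpers hand a referenced chain back to
the caller instead of walking inline; the sketch above only shows the
locking shape of the two approaches.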