From: Cong Wang
Date: Fri, 1 Mar 2019 16:08:03 -0800
Subject: Re: [PATCH net-next] net: sched: don't release block->lock when dumping chains
To: Vlad Buslov
Cc: Linux Kernel Network Developers, Jamal Hadi Salim, Jiri Pirko, David Miller
References: <20190225154544.10453-1-vladbu@mellanox.com>

On Thu, Feb 28, 2019 at 6:53 AM Vlad Buslov wrote:
>
> On Wed 27 Feb 2019 at 23:03, Cong Wang wrote:
> > On Tue, Feb 26, 2019 at 8:10 AM Vlad Buslov wrote:
> >>
> >> On Tue 26 Feb 2019 at 00:15, Cong Wang wrote:
> >> > On Mon, Feb 25, 2019 at 7:45 AM Vlad Buslov wrote:
> >> >>
> >> >> Function tc_dump_chain() obtains and releases block->lock on each
> >> >> iteration of its inner loop that dumps all chains on the block.
> >> >> Outputting chain template info is a fast operation, so locking and
> >> >> unlocking the mutex multiple times is pure overhead when the lock is
> >> >> highly contended. Modify tc_dump_chain() to obtain block->lock only
> >> >> once and dump all chains without releasing it.
> >> >>
> >> >> Signed-off-by: Vlad Buslov
> >> >> Suggested-by: Cong Wang
> >> >
> >> > Thanks for the followup!
> >> >
> >> > Isn't it similar for __tcf_get_next_proto() in tcf_chain_dump()?
> >> > And for tc_dump_tfilter()?
> >>
> >> Not really. These two dump all tp filters and not just a template, which
> >> is O(n) in the number of filters and can be slow because it calls the hw
> >> offload API for each of them. Our typical use case involves a periodic
> >> filter dump (to update stats) while multiple concurrent user-space
> >> threads are updating filters, so it is important for them to be able to
> >> execute in parallel.
> >
> > Hmm, but if these are read-only, you probably don't even need a
> > mutex; you can just use the RCU read lock to protect the list iteration
> > and still grab the refcnt in the same way.
>
> That is how it worked in my initial implementation. However, it doesn't
> work with hw offloads, because driver callbacks can sleep.

Hmm? You drop the RCU read lock after grabbing the refcnt, right? If so,
what's the problem with sleeping?
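
For readers following along, a rough sketch of the locking change the quoted
patch describes. This is not the actual net/sched code: the struct layouts and
the helpers get_next_chain() and dump_chain_template() are invented for the
example, and only the before/after locking pattern is meant to match the
discussion.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/types.h>

struct block_sketch {
	struct mutex lock;		/* stands in for block->lock */
	struct list_head chain_list;
};

struct chain_sketch {
	struct list_head list;		/* linked on block->chain_list */
	u32 index;
};

/* Hypothetical helpers, declared only so the sketch is complete. */
static struct chain_sketch *get_next_chain(struct block_sketch *block,
					   struct chain_sketch *prev);
static void dump_chain_template(struct chain_sketch *chain);

/*
 * Before: get_next_chain() takes block->lock, finds the next chain,
 * and drops the lock again, so the mutex is acquired and released
 * once per chain being dumped.
 */
static void dump_chains_before(struct block_sketch *block)
{
	struct chain_sketch *chain = NULL;

	while ((chain = get_next_chain(block, chain)) != NULL)
		dump_chain_template(chain);
}

/*
 * After: emitting a chain template is cheap and never sleeps, so the
 * mutex is taken once around the whole loop instead of being
 * re-acquired per iteration on a potentially contended lock.
 */
static void dump_chains_after(struct block_sketch *block)
{
	struct chain_sketch *chain;

	mutex_lock(&block->lock);
	list_for_each_entry(chain, &block->chain_list, list)
		dump_chain_template(chain);
	mutex_unlock(&block->lock);
}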
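
On the last point, a minimal sketch of the RCU pattern being asked about,
under the usual assumptions: an entry stays linked as long as a reference is
held, and it is only unlinked and freed (after a grace period, e.g. via
kfree_rcu()) once the last reference is dropped. The names item, dump_one()
and item_put() are invented for the example; this is not the actual tcf_*
code. The property in question is that the RCU read-side critical section
covers only the list walk and the refcount grab, so the per-entry work, such
as a driver offload callback, runs outside it and may sleep.

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>

struct item {
	struct list_head list;		/* linked on the dumped list */
	refcount_t refcnt;
	struct rcu_head rcu;		/* for deferred free on last put */
};

/* Hypothetical: emit one entry; may call into a driver and sleep. */
static void dump_one(struct item *it);
/* Hypothetical: drop a reference; the last put unlinks and frees via RCU. */
static void item_put(struct item *it);

/*
 * Find the entry after @prev (or the first entry if @prev is NULL) and
 * take a reference on it. The RCU read lock is held only across the
 * lookup and refcount_inc_not_zero(); it is dropped before returning.
 */
static struct item *item_get_next(struct list_head *head, struct item *prev)
{
	struct item *it;

	rcu_read_lock();
	it = prev ? list_next_or_null_rcu(head, &prev->list, struct item, list)
		  : list_first_or_null_rcu(head, struct item, list);
	/* Skip entries whose refcount already hit zero (being freed). */
	while (it && !refcount_inc_not_zero(&it->refcnt))
		it = list_next_or_null_rcu(head, &it->list, struct item, list);
	rcu_read_unlock();

	return it;
}

static void dump_all(struct list_head *head)
{
	struct item *it, *prev = NULL;

	while ((it = item_get_next(head, prev)) != NULL) {
		/* No RCU read lock held here, so sleeping is fine. */
		dump_one(it);
		if (prev)
			item_put(prev);
		prev = it;
	}
	if (prev)
		item_put(prev);
}

Under these assumptions the sleeping work happens entirely outside the RCU
read-side section, which is what the closing question is getting at.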