From: Jay Vosburgh
To: Saeed Mahameed
Cc: Vladimir Oltean, "David S. Miller", Jakub Kicinski, netdev@vger.kernel.org,
    Andrew Lunn, Florian Fainelli, Cong Wang, Stephen Hemminger, Eric Dumazet,
    George McCollister, Oleksij Rempel, Veaceslav Falico, Andy Gospodarek,
    Arnd Bergmann, Taehee Yoo, Jiri Pirko, Florian Westphal,
    Nikolay Aleksandrov, Pravin B Shelar, Sridhar Samudrala
Subject: Re: [PATCH v6 net-next 14/15] net: bonding: ensure .ndo_get_stats64 can sleep
In-Reply-To: <4c4c08e37aeff87f0dd2ea52037c32d07d2868d1.camel@kernel.org>
References: <20210109172624.2028156-1-olteanv@gmail.com>
 <20210109172624.2028156-15-olteanv@gmail.com>
 <20210112143710.nxpxnlcojhvqipw7@skbuf>
 <4c4c08e37aeff87f0dd2ea52037c32d07d2868d1.camel@kernel.org>
Date: Tue, 12 Jan 2021 18:00:46 -0800
Message-ID: <8201.1610503246@famine>

Saeed Mahameed wrote:

>On Tue, 2021-01-12 at 16:37 +0200, Vladimir Oltean wrote:
>> On Mon, Jan 11, 2021 at 03:38:49PM -0800, Saeed Mahameed wrote:
>> > GFP_ATOMIC is a little bit aggressive especially when user daemons
>> > are periodically reading stats. This can be avoided.
>> >
>> > You can pre-allocate with GFP_KERNEL an array with an "approximate"
>> > size, then fill the array up with whatever slaves the bond has at
>> > that moment; num_of_slaves can be less, equal or more than the
>> > array you just allocated, but we shouldn't care ..
>> >
>> > something like:
>> > rcu_read_lock()
>> > nslaves = bond_get_num_slaves();
>> > rcu_read_unlock()

	Can be nslaves = READ_ONCE(bond->slave_cnt), or, for just active
slaves:

	struct bond_up_slave *slaves;

	slaves = rcu_dereference(bond->slave_arr);
	nslaves = slaves ? READ_ONCE(slaves->count) : 0;

>> > sarray = kcalloc(nslaves, sizeof(struct bonding_slave_dev),
>> >                  GFP_KERNEL);
>> > rcu_read_lock();
>> > bond_fill_slaves_array(bond, sarray); // also do: dev_hold()
>> > rcu_read_unlock();
>> >
>> > bond_get_slaves_array_stats(sarray);
>> >
>> > bond_put_slaves_array(sarray);
>>
>> I don't know what to say about acquiring the RCU read lock twice and
>> traversing the list of interfaces three or four times.
>
>You can optimize this by tracking #num_slaves.

	I think that the set of active slaves changing between the two
calls will be a rare exception, and that the number of slaves is
generally small (more than 2 is uncommon in my experience).

>> On the other hand, what's the worst that can happen if the GFP_ATOMIC
>> memory allocation fails? It's not like there is any data loss.
>> User space will retry when there is less memory pressure.
>
>Anyway, up to you; I just don't like it when we use GFP_ATOMIC when it
>can be avoided, especially for periodic jobs, like stats polling..

	And, for the common case, I suspect that an array allocation will
have lower overhead than a loop that allocates once per slave.
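	In case it is useful, a rough, untested sketch of how I picture
that sequence fitting together; bond_example_get_stats64() is a made-up
name for illustration (not an existing helper), only a few counters are
summed, and error handling is minimal:

#include <linux/netdevice.h>
#include <linux/slab.h>
#include <net/bonding.h>

/* Untested sketch: snapshot the slave devices under RCU into a
 * GFP_KERNEL preallocated array, then collect per-slave stats in a
 * context that is allowed to sleep.
 */
static void bond_example_get_stats64(struct bonding *bond,
				     struct rtnl_link_stats64 *stats)
{
	struct net_device **devs;
	struct list_head *iter;
	struct slave *slave;
	unsigned int n, i = 0;

	/* Approximate count; the set of slaves may change before the
	 * array is filled, which is acceptable for stats purposes.
	 */
	n = READ_ONCE(bond->slave_cnt);

	devs = kcalloc(n, sizeof(*devs), GFP_KERNEL);
	if (!devs)
		return;

	/* Fill the array under RCU, taking a reference on each device. */
	rcu_read_lock();
	bond_for_each_slave_rcu(bond, slave, iter) {
		if (i == n)
			break;	/* more slaves now than we allocated for */
		dev_hold(slave->dev);
		devs[i++] = slave->dev;
	}
	rcu_read_unlock();

	/* Outside the RCU read side it is safe to sleep while the
	 * per-slave stats are gathered.
	 */
	while (i--) {
		struct rtnl_link_stats64 temp;
		const struct rtnl_link_stats64 *s;

		s = dev_get_stats(devs[i], &temp);
		stats->rx_packets += s->rx_packets;
		stats->tx_packets += s->tx_packets;
		stats->rx_bytes += s->rx_bytes;
		stats->tx_bytes += s->tx_bytes;
		/* remaining counters omitted for brevity */

		dev_put(devs[i]);
	}

	kfree(devs);
}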
	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com