From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 27 Nov 2023 10:17:20 -0800
From: Jakub Kicinski
To: Alexander Lobakin
Cc: Yunsheng Lin, Christoph Hellwig, Maciej Fijalkowski, Michal Kubiak,
 Larysa Zaremba, Alexander Duyck, David Christensen,
 Jesper Dangaard Brouer, Ilias Apalodimas, Paul Menzel,
 "David S. Miller", Eric Dumazet, Paolo Abeni
Subject: Re: [PATCH net-next v5 03/14] page_pool: avoid calling no-op externals when possible
Message-ID: <20231127101720.282862f6@kernel.org>
In-Reply-To: <6bd14aa9-fa65-e4f6-579c-3a1064b2a382@huawei.com>
References: <20231124154732.1623518-1-aleksander.lobakin@intel.com>
 <20231124154732.1623518-4-aleksander.lobakin@intel.com>
 <6bd14aa9-fa65-e4f6-579c-3a1064b2a382@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 27 Nov 2023 15:32:19 +0100 Alexander Lobakin wrote:
>
> Sorry for not remembering the suggestion :(
>
> In the previous versions of this change I used a global flag per whole
> page_pool, just like XSk does for the whole XSk buff pool, then you
> proposed to use the lowest bit of ::dma_addr and store it per page, so
> that it would be more granular/precise. I tested it and it doesn't
> perform worse than global, but in some cases may be beneficial.

FWIW I'd vote to stick to the per-page_pool flag. You seem to handle the
sizeof(dma_addr_t) > sizeof(long) case correctly, but the code is growing
in complexity while providing no known/measurable benefit. We can always
do this later; for now it seems like a premature optimization to me.
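
[Editor's note: a minimal userspace sketch of the low-bit trick being
debated, not the actual patch. Because page-sized DMA buffers are at
least page-aligned, bit 0 of the DMA address is always zero and can
carry a per-page "needs sync" flag. All names below (PP_DMA_NEED_SYNC,
the pp_dma_* helpers) are hypothetical; the complexity Jakub mentions
comes from 32-bit kernels where sizeof(dma_addr_t) > sizeof(long), so
the address cannot be stored in a single word and the masking must be
split, which this sketch does not attempt.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the kernel's dma_addr_t in this userspace sketch. */
typedef uint64_t dma_addr_t;

/* Hypothetical flag name: bit 0 is free on any >= 2-byte-aligned address. */
#define PP_DMA_NEED_SYNC ((dma_addr_t)0x1)

/* Store an address plus the per-page "needs DMA sync" flag in one word. */
static dma_addr_t pp_dma_addr_set(dma_addr_t addr, bool need_sync)
{
	return (addr & ~PP_DMA_NEED_SYNC) |
	       (need_sync ? PP_DMA_NEED_SYNC : 0);
}

/* Recover the real DMA address by masking the flag bit back off. */
static dma_addr_t pp_dma_addr_get(dma_addr_t stored)
{
	return stored & ~PP_DMA_NEED_SYNC;
}

/* Test the flag without disturbing the address. */
static bool pp_dma_need_sync(dma_addr_t stored)
{
	return stored & PP_DMA_NEED_SYNC;
}
```

The alternative Jakub favors keeps a single flag in the page_pool
struct instead, which avoids the per-word masking entirely at the cost
of per-page granularity.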