Date: Tue, 12 Feb 2019 14:49:38 +0100
From: Jesper Dangaard Brouer
To: Tariq Toukan
Cc: Eric Dumazet, Ilias Apalodimas, Matthew Wilcox, David Miller,
    "toke@redhat.com", "netdev@vger.kernel.org",
    "mgorman@techsingularity.net", "linux-mm@kvack.org", brouer@redhat.com
Subject: Re: [RFC, PATCH] net: page_pool: Don't use page->private to store dma_addr_t
Message-ID: <20190212144938.36dd45b4@carbon>
In-Reply-To: <27e97aac-f25b-d46c-3e70-7d0d44f784b5@mellanox.com>
References: <1549550196-25581-1-git-send-email-ilias.apalodimas@linaro.org>
 <20190207150745.GW21860@bombadil.infradead.org>
 <20190207152034.GA3295@apalos>
 <20190207.132519.1698007650891404763.davem@davemloft.net>
 <20190207213400.GA21860@bombadil.infradead.org>
 <20190207214237.GA10676@Iliass-MBP.lan>
 <64f7af75-e6df-7abc-c4ce-82e6ca51fafe@gmail.com>
 <27e97aac-f25b-d46c-3e70-7d0d44f784b5@mellanox.com>

On Tue, 12 Feb 2019 12:39:59 +0000
Tariq Toukan wrote:

> On 2/11/2019 7:14 PM, Eric Dumazet wrote:
> >
> > On 02/11/2019 12:53 AM, Tariq Toukan wrote:
> >>
> >> Hi,
> >>
> >> It's great to use the struct page to store its dma mapping, but I am
> >> worried about extensibility.
> >> page_pool is evolving, and it would need several more per-page fields.
> >> One of them would be pageref_bias, a planned optimization to reduce the
> >> number of the costly atomic pageref operations (and replace existing
> >> code in several drivers).
> >>
> >
> > But the point about pageref_bias is to place it in a different
> > cache line than "struct page"
>
> Yes, exactly.
>
> > The major cost is having a cache line bouncing between producer and
> > consumer.
>
> pageref_bias is meant to be dirtied only by the page requester, i.e. the
> NIC driver / page_pool.
> All other components (basically, the SKB release flow / put_page) should
> continue working with the atomic page_refcnt, and not dirty the
> pageref_bias.
>
> However, what bothers me more is another issue.
> The optimization doesn't cleanly combine with the new page_pool
> direction of maintaining a queue of "available" pages, as the put_page
> flow would need to read pageref_bias, asynchronously, and act accordingly.
>
> The suggested hook in put_page (to catch the 2 -> 1 "biased refcnt"
> transition) is a problem for the traditional pageref_bias idea, as it
> implies a new point at which the pageref_bias field is read
> *asynchronously*. This would risk missing this critical 2 -> 1
> transition! Unless pageref_bias is atomic...

I want to stop you here... It seems to me that you are trying to
shoehorn a refcount optimization into the page_pool.

The page_pool is optimized for the XDP case of one-frame-per-page, where
we can avoid changing the refcount, and trade memory usage for speed.
It is compatible with elevated refcount usage, but that is not the
optimization target.

If the case you are optimizing for is "packing" more frames into a page,
then the page_pool might be the wrong choice. To me it would make more
sense to create another enum xdp_mem_type [2] that generalizes the
pageref_bias trick [1] already used by some drivers.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
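
[1] A rough sketch of the pageref_bias/pagecnt_bias trick as several RX
    paths implement it today. The struct and function names below are
    illustrative only, not copied from any in-tree driver; the helpers
    used (alloc_page, page_ref_add, page_ref_count, __page_frag_cache_drain)
    are the regular kernel APIs:

	#include <linux/kernel.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Illustrative per-RX-slot, driver-private state */
	struct rx_page_info {
		struct page *page;
		unsigned int pagecnt_bias;	/* refs the driver still "owns" */
	};

	static bool rx_page_alloc(struct rx_page_info *info)
	{
		struct page *page = alloc_page(GFP_ATOMIC);

		if (!page)
			return false;
		/* Take a large batch of references up front; the RX hot
		 * path then hands refs to the stack by decrementing the
		 * local, non-atomic bias counter instead of touching
		 * page->_refcount.
		 */
		page_ref_add(page, USHRT_MAX - 1);
		info->page = page;
		info->pagecnt_bias = USHRT_MAX;
		return true;
	}

	/* Hot path: a fragment is handed to the stack -- no atomic op */
	static void rx_page_get_frag(struct rx_page_info *info)
	{
		info->pagecnt_bias--;
	}

	/* Recycle check: only the bias references may remain, i.e. the
	 * stack has put_page()'ed everything it was given.
	 */
	static bool rx_page_reusable(struct rx_page_info *info)
	{
		if (page_ref_count(info->page) != info->pagecnt_bias)
			return false;
		/* Re-arm the bias before it runs dry */
		if (unlikely(info->pagecnt_bias == 1)) {
			page_ref_add(info->page, USHRT_MAX - 1);
			info->pagecnt_bias = USHRT_MAX;
		}
		return true;
	}

	/* Final release: return all unused bias references in one go */
	static void rx_page_free(struct rx_page_info *info)
	{
		__page_frag_cache_drain(info->page, info->pagecnt_bias);
		info->page = NULL;
	}

    Note that pagecnt_bias is only ever written by the owning driver;
    an asynchronous reader (e.g. a put_page hook) cannot safely look at
    it, which is exactly the problem described above.

[2] For reference, enum xdp_mem_type in include/net/xdp.h currently
    looks roughly like this; a new entry would let such a pageref_bias
    scheme be registered as a memory model of its own, instead of being
    forced into page_pool:

	enum xdp_mem_type {
		MEM_TYPE_PAGE_SHARED = 0, /* Split-page refcnt based model */
		MEM_TYPE_PAGE_ORDER0,     /* Orig XDP full page model */
		MEM_TYPE_PAGE_POOL,
		MEM_TYPE_ZERO_COPY,
		MEM_TYPE_MAX,
	};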