From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org,
	linyunsheng@huawei.com
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, liangchen.linux@gmail.com
Subject: [PATCH net-next v3 1/3] page_pool: Rename pp_frag_count to pp_ref_count
Date: Fri, 24 Nov 2023 15:34:37 +0800
Message-Id: <20231124073439.52626-2-liangchen.linux@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231124073439.52626-1-liangchen.linux@gmail.com>
References: <20231124073439.52626-1-liangchen.linux@gmail.com>

To support multiple users referencing the same fragment, pp_frag_count
is renamed to pp_ref_count to better reflect its actual meaning, as
suggested in [1].

[1] http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +-
 include/linux/mm_types.h                      |  2 +-
 include/net/page_pool/helpers.h               | 45 ++++++++++---------
 include/net/page_pool/types.h                 |  2 +-
 net/core/page_pool.c                          | 12 ++---
 5 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 8d9743a5e42c..4454c750733e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
 	struct page *page = frag_page->page;
 
-	if (page_pool_defrag_page(page, drain_count) == 0)
-		page_pool_put_defragged_page(rq->page_pool, page, -1, true);
+	if (page_pool_deref_page(page, drain_count) == 0)
+		page_pool_put_derefed_page(rq->page_pool, page, -1, true);
 }
 
 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..64e4572ef06d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,7 +125,7 @@ struct page {
 			struct page_pool *pp;
 			unsigned long _pp_mapping_pad;
 			unsigned long dma_addr;
-			atomic_long_t pp_frag_count;
+			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4ebd544ae977..700f435292e7 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -29,7 +29,7 @@
  * page allocated from page pool. Page splitting enables memory saving and thus
  * avoids TLB/cache miss for data access, but there also is some cost to
  * implement page splitting, mainly some cache line dirtying/bouncing for
- * 'struct page' and atomic operation for page->pp_frag_count.
+ * 'struct page' and atomic operation for page->pp_ref_count.
  *
  * The API keeps track of in-flight pages, in order to let API users know when
  * it is safe to free a page_pool object, the API users must call
@@ -214,69 +214,74 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
-/* pp_frag_count represents the number of writers who can update the page
+/* pp_ref_count represents the number of writers who can update the page
  * either by updating skb->data or via DMA mappings for the device.
  * We can't rely on the page refcnt for that as we don't know who might be
  * holding page references and we can't reliably destroy or sync DMA mappings
  * of the fragments.
  *
- * When pp_frag_count reaches 0 we can either recycle the page if the page
+ * pp_ref_count initially corresponds to the number of fragments. However,
+ * when multiple users start to reference a single fragment, for example in
+ * skb_try_coalesce, the pp_ref_count will become greater than the number of
+ * fragments.
+ *
+ * When pp_ref_count reaches 0 we can either recycle the page if the page
  * refcnt is 1 or return it back to the memory allocator and destroy any
  * mappings we have.
  */
 static inline void page_pool_fragment_page(struct page *page, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&page->pp_ref_count, nr);
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
+static inline long page_pool_deref_page(struct page *page, long nr)
 {
 	long ret;
 
-	/* If nr == pp_frag_count then we have cleared all remaining
+	/* If nr == pp_ref_count then we have cleared all remaining
 	 * references to the page:
 	 * 1. 'n == 1': no need to actually overwrite it.
 	 * 2. 'n != 1': overwrite it with one, which is the rare case
-	 *              for pp_frag_count draining.
+	 *              for pp_ref_count draining.
 	 *
 	 * The main advantage to doing this is that not only we avoid a atomic
 	 * update, as an atomic_read is generally a much cheaper operation than
 	 * an atomic update, especially when dealing with a page that may be
-	 * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
+	 * referenced by only 2 or 3 users; but also unify the pp_ref_count
 	 * handling by ensuring all pages have partitioned into only 1 piece
 	 * initially, and only overwrite it when the page is partitioned into
 	 * more than one piece.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr) {
+	if (atomic_long_read(&page->pp_ref_count) == nr) {
 		/* As we have ensured nr is always one for constant case using
 		 * the BUILD_BUG_ON(), only need to handle the non-constant case
-		 * here for pp_frag_count draining, which is a rare case.
+		 * here for pp_ref_count draining, which is a rare case.
 		 */
 		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
 		if (!__builtin_constant_p(nr))
-			atomic_long_set(&page->pp_frag_count, 1);
+			atomic_long_set(&page->pp_ref_count, 1);
 
 		return 0;
 	}
 
-	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+	ret = atomic_long_sub_return(nr, &page->pp_ref_count);
 	WARN_ON(ret < 0);
 
-	/* We are the last user here too, reset pp_frag_count back to 1 to
+	/* We are the last user here too, reset pp_ref_count back to 1 to
 	 * ensure all pages have been partitioned into 1 piece initially,
 	 * this should be the rare case when the last two fragment users call
-	 * page_pool_defrag_page() currently.
+	 * page_pool_deref_page() currently.
 	 */
 	if (unlikely(!ret))
-		atomic_long_set(&page->pp_frag_count, 1);
+		atomic_long_set(&page->pp_ref_count, 1);
 
 	return ret;
 }
 
-static inline bool page_pool_is_last_frag(struct page *page)
+static inline bool page_pool_is_last_ref(struct page *page)
 {
-	/* If page_pool_defrag_page() returns 0, we were the last user */
-	return page_pool_defrag_page(page, 1) == 0;
+	/* If page_pool_deref_page() returns 0, we were the last user */
+	return page_pool_deref_page(page, 1) == 0;
 }
 
 /**
@@ -301,10 +306,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(page))
+	if (!page_pool_is_last_ref(page))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_derefed_page(pool, page, dma_sync_size, allow_direct);
 #endif
 }
 
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..1c82e87f2577 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -224,7 +224,7 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 #endif
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_derefed_page(struct page_pool *pool, struct page *page,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index df2a06d7da52..0c6c2b11aabe 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	return NULL;
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
-				  unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_derefed_page(struct page_pool *pool, struct page *page,
+				unsigned int dma_sync_size, bool allow_direct)
 {
 	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
 	if (page && !page_pool_recycle_in_ring(pool, page)) {
@@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		page_pool_return_page(pool, page);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_derefed_page);
 
 /**
  * page_pool_put_page_bulk() - release references on multiple pages
@@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		struct page *page = virt_to_head_page(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(page))
+		if (!page_pool_is_last_ref(page))
 			continue;
 
 		page = __page_pool_put_page(pool, page, -1, false);
@@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_deref_page(page, drain_count)))
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
@@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
 
 	pool->frag_page = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!page || page_pool_deref_page(page, drain_count))
 		return;
 
 	page_pool_return_page(pool, page);
-- 
2.31.1
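
[Editor's note, not part of the patch: for readers following the rename, the
sketch below models the pp_ref_count lifecycle documented in helpers.h as a
self-contained user-space C program. The names fake_page, fragment_page and
deref_page are invented for illustration, and C11 atomics stand in for the
kernel's atomic_long_t; this is a sketch of the scheme, not kernel code.]

/* Minimal user-space model of the pp_ref_count scheme (illustrative only). */
#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	atomic_long pp_ref_count;
};

/* Split one "page" into nr fragments: take nr references up front,
 * mirroring page_pool_fragment_page().
 */
static void fragment_page(struct fake_page *page, long nr)
{
	atomic_store(&page->pp_ref_count, nr);
}

/* Drop nr references; returns 0 when the caller was the last user.
 * The read-before-subtract fast path mirrors page_pool_deref_page():
 * it is safe only because a caller passing nr equal to the entire
 * remaining count must, by definition, hold every outstanding
 * reference, so no one can race with the plain store back to 1.
 */
static long deref_page(struct fake_page *page, long nr)
{
	long ret;

	if (atomic_load(&page->pp_ref_count) == nr) {
		/* Reset to 1 so every page starts out as one piece. */
		atomic_store(&page->pp_ref_count, 1);
		return 0;
	}

	/* atomic_fetch_sub returns the old value; compute the new one. */
	ret = atomic_fetch_sub(&page->pp_ref_count, nr) - nr;
	assert(ret >= 0);
	if (ret == 0)	/* last two users hit the slow path: reset to 1 */
		atomic_store(&page->pp_ref_count, 1);
	return ret;
}

int main(void)
{
	struct fake_page page;

	fragment_page(&page, 3);	/* page split into 3 fragments */
	deref_page(&page, 1);		/* user 1 done, 2 refs remain  */
	deref_page(&page, 1);		/* user 2 done, 1 ref remains  */
	if (deref_page(&page, 1) == 0)	/* last user: page recyclable  */
		printf("last ref dropped, page can be recycled\n");
	return 0;
}

In skb_try_coalesce()-style sharing, extra references are taken on an already
existing fragment, pushing the count above the number of fragments; that is
exactly why the old "frag" naming no longer described what the field counts.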