Date: Tue, 10 Jan 2023 11:58:03 +0200
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org,
 Shakeel Butt
Subject: Re: [PATCH v2 08/24] page_pool: Convert pp_alloc_cache to contain netmem
References: <20230105214631.3939268-1-willy@infradead.org>
 <20230105214631.3939268-9-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-9-willy@infradead.org>

On Thu, Jan 05, 2023 at 09:46:15PM +0000, Matthew Wilcox (Oracle) wrote:
> Change the type here from page to netmem.  It works out well to
> convert page_pool_refill_alloc_cache() to return a netmem instead
> of a page as part of this commit.
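
For anyone following the series: the helpers this patch leans on
(page_netmem(), netmem_page(), netmem_nid(), page_pool_return_netmem())
are introduced by earlier patches.  My mental model of the conversion
helpers, assuming netmem stays a typed overlay of struct page, is roughly:

        /* Sketch only -- the real definitions live earlier in the series.
         * Assumes struct netmem is laid out on top of struct page, so the
         * casts are free and this patch is a compile-time type change.
         */
        static inline struct netmem *page_netmem(struct page *page)
        {
                return (struct netmem *)page;
        }

        static inline struct page *netmem_page(struct netmem *nmem)
        {
                return (struct page *)nmem;
        }

        static inline int netmem_nid(const struct netmem *nmem)
        {
                return page_to_nid(netmem_page(nmem));
        }

With that reading, nothing below changes behaviour; it just moves the
page/netmem conversions out of the refill loop and to the cache boundary.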
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/net/page_pool.h |  2 +-
>  net/core/page_pool.c    | 52 ++++++++++++++++++++---------------------
>  2 files changed, 27 insertions(+), 27 deletions(-)
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 480baa22bc50..63aa530922de 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
>  #define PP_ALLOC_CACHE_REFILL 64
>  struct pp_alloc_cache {
>          u32 count;
> -        struct page *cache[PP_ALLOC_CACHE_SIZE];
> +        struct netmem *cache[PP_ALLOC_CACHE_SIZE];
>  };
>  
>  struct page_pool_params {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 8f3f7cc5a2d5..c54217ce6b77 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
>  }
>  
>  noinline
> -static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> +static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
>  {
>          struct ptr_ring *r = &pool->ring;
> -        struct page *page;
> +        struct netmem *nmem;
>          int pref_nid; /* preferred NUMA node */
>  
>          /* Quicker fallback, avoid locks when ring is empty */
> @@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
>  
>          /* Refill alloc array, but only if NUMA match */
>          do {
> -                page = __ptr_ring_consume(r);
> -                if (unlikely(!page))
> +                nmem = __ptr_ring_consume(r);
> +                if (unlikely(!nmem))
>                          break;
>  
> -                if (likely(page_to_nid(page) == pref_nid)) {
> -                        pool->alloc.cache[pool->alloc.count++] = page;
> +                if (likely(netmem_nid(nmem) == pref_nid)) {
> +                        pool->alloc.cache[pool->alloc.count++] = nmem;
>                  } else {
>                          /* NUMA mismatch;
>                           * (1) release 1 page to page-allocator and
>                           * (2) break out to fallthrough to alloc_pages_node.
>                           * This limit stress on page buddy alloactor.
>                           */
> -                        page_pool_return_page(pool, page);
> +                        page_pool_return_netmem(pool, nmem);
>                          alloc_stat_inc(pool, waive);
> -                        page = NULL;
> +                        nmem = NULL;
>                          break;
>                  }
>          } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
>  
>          /* Return last page */
>          if (likely(pool->alloc.count > 0)) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                nmem = pool->alloc.cache[--pool->alloc.count];
>                  alloc_stat_inc(pool, refill);
>          }
>  
> -        return page;
> +        return nmem;
>  }
>  
>  /* fast path */
>  static struct page *__page_pool_get_cached(struct page_pool *pool)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>  
>          /* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
>          if (likely(pool->alloc.count)) {
>                  /* Fast-path */
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                nmem = pool->alloc.cache[--pool->alloc.count];
>                  alloc_stat_inc(pool, fast);
>          } else {
> -                page = page_pool_refill_alloc_cache(pool);
> +                nmem = page_pool_refill_alloc_cache(pool);
>          }
>  
> -        return page;
> +        return netmem_page(nmem);
>  }
>  
>  static void page_pool_dma_sync_for_device(struct page_pool *pool,
> @@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  
>          /* Unnecessary as alloc cache is empty, but guarantees zero count */
>          if (unlikely(pool->alloc.count > 0))
> -                return pool->alloc.cache[--pool->alloc.count];
> +                return netmem_page(pool->alloc.cache[--pool->alloc.count]);
>  
>          /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>          memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
>  
>          nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
> -                                               pool->alloc.cache);
> +                                               (struct page **)pool->alloc.cache);
>          if (unlikely(!nr_pages))
>                  return NULL;
>  
> @@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>           * page element have not been (possibly) DMA mapped.
>           */
>          for (i = 0; i < nr_pages; i++) {
> -                struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
> +                struct netmem *nmem = pool->alloc.cache[i];
>                  if ((pp_flags & PP_FLAG_DMA_MAP) &&
>                      unlikely(!page_pool_dma_map(pool, nmem))) {
>                          netmem_put(nmem);
> @@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>                  }
>  
>                  page_pool_set_pp_info(pool, nmem);
> -                pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
> +                pool->alloc.cache[pool->alloc.count++] = nmem;
>                  /* Track how many pages are held 'in-flight' */
>                  pool->pages_state_hold_cnt++;
>                  trace_page_pool_state_hold(pool, nmem,
> @@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  
>          /* Return last page */
>          if (likely(pool->alloc.count > 0)) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> +                page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
>                  alloc_stat_inc(pool, slow);
>          } else {
>                  page = NULL;
> @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
>          }
>  
>          /* Caller MUST have verified/know (page_ref_count(page) == 1) */
> -        pool->alloc.cache[pool->alloc.count++] = page;
> +        pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
>          recycle_stat_inc(pool, cached);
>          return true;
>  }
> @@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool)
>  
>  static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>  
>          if (pool->destroy_cnt)
>                  return;
> @@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>           * call concurrently.
>           */
>          while (pool->alloc.count) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> -                page_pool_return_page(pool, page);
> +                nmem = pool->alloc.cache[--pool->alloc.count];
> +                page_pool_return_netmem(pool, nmem);
>          }
>  }
>  
> @@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy);
>  /* Caller must provide appropriate safe context, e.g. NAPI. */
>  void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  {
> -        struct page *page;
> +        struct netmem *nmem;
>  
>          trace_page_pool_update_nid(pool, new_nid);
>          pool->p.nid = new_nid;
>  
>          /* Flush pool alloc cache, as refill will check NUMA node */
>          while (pool->alloc.count) {
> -                page = pool->alloc.cache[--pool->alloc.count];
> -                page_pool_return_page(pool, page);
> +                nmem = pool->alloc.cache[--pool->alloc.count];
> +                page_pool_return_netmem(pool, nmem);
>          }
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
> -- 
> 2.35.1
> 

Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
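
One non-blocking note: the (struct page **) cast of pool->alloc.cache in
__page_pool_alloc_pages_slow() is only sound while a netmem pointer and a
page pointer are freely interchangeable.  If the overlay ever diverges,
that call site keeps compiling and breaks silently at runtime.  A
build-time check would make the assumption explicit -- a sketch only, the
exact form and location are up to you:

        /* Sketch: assert the overlay assumption the cast depends on. */
        static_assert(sizeof(struct netmem) <= sizeof(struct page),
                      "netmem must overlay struct page");

Nothing that needs to hold up this patch, though.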