From: Alexander H Duyck <alexander.duyck@gmail.com>
To: Yunsheng Lin <linyunsheng@huawei.com>, davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Andrew Morton, linux-mm@kvack.org
Subject: Re: [PATCH net-next v9 10/13] mm: page_frag: introduce prepare/probe/commit API
Date: Fri, 28 Jun 2024 15:35:04 -0700
Message-ID: <33c3c7fc00d2385e741dc6c9be0eade26c30bd12.camel@gmail.com>
In-Reply-To: <20240625135216.47007-11-linyunsheng@huawei.com>
References: <20240625135216.47007-1-linyunsheng@huawei.com> <20240625135216.47007-11-linyunsheng@huawei.com>
User-Agent: Evolution 3.48.4 (3.48.4-1.fc38)
MIME-Version: 1.0
On Tue, 2024-06-25 at 21:52 +0800, Yunsheng Lin wrote:
> There are many use cases that need minimum memory in order
> for forward progress, but are more performant if more memory
> is available, or need to probe the cache info to use any
> memory available for frag coalescing reasons.
>
> Currently the skb_page_frag_refill() API is used to solve the
> above use cases, but the caller needs to know about the internal
> detail and access the data field of 'struct page_frag' to
> meet the requirement of the above use cases, and its
> implementation is similar to the one in the mm subsystem.
>
> To unify those two page_frag implementations, introduce a
> prepare API to ensure minimum memory is satisfied and return
> how much memory is actually available to the caller, and a
> probe API to report the currently available memory to the caller
> without doing cache refilling. The caller needs to either call
> the commit API to report how much memory it actually uses, or
> not do so if deciding not to use any memory.
>
> As the next patch is about to replace 'struct page_frag' with
> 'struct page_frag_cache' in linux/sched.h, which is included
> by asm-offsets.s, using virt_to_page() in the inline helper of
> page_frag_cache.h causes a "'vmemmap' undeclared" compiling
> error for asm-offsets.s; use a macro for the probe API to
> avoid that compiling error.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  include/linux/page_frag_cache.h |  82 +++++++++++++++++++++++
>  mm/page_frag_cache.c            | 114 ++++++++++++++++++++++++++++++++
>  2 files changed, 196 insertions(+)
>
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index b33904d4494f..e95d44a36ec9 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -4,6 +4,7 @@
>  #define _LINUX_PAGE_FRAG_CACHE_H
>
>  #include
> +#include
>
>  #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
>  #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
> @@ -87,6 +88,9 @@ static inline unsigned int page_frag_cache_page_size(struct encoded_va *encoded_va)
>
>  void page_frag_cache_drain(struct page_frag_cache *nc);
>  void __page_frag_cache_drain(struct page *page, unsigned int count);
> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
> +				unsigned int *offset, unsigned int fragsz,
> +				gfp_t gfp);
>  void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
>  				 unsigned int fragsz, gfp_t gfp_mask,
>  				 unsigned int align_mask);
> @@ -99,12 +103,90 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
>  	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
>  }
>
> +static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
> +{
> +	return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
> +}
> +
>  static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
>  				       unsigned int fragsz, gfp_t gfp_mask)
>  {
>  	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
>  }
>
> +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
> +				 gfp_t gfp);
> +
> +static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
> +						     unsigned int *fragsz,
> +						     gfp_t gfp,
> +						     unsigned int align)
> +{
> +	WARN_ON_ONCE(!is_power_of_2(align) || align > PAGE_SIZE);
> +	nc->remaining = nc->remaining & -align;
> +	return page_frag_alloc_va_prepare(nc, fragsz, gfp);
> +}
> +
> +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
> +					unsigned int *offset,
> +					unsigned int *fragsz, gfp_t gfp);
> +
> +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
> +				     unsigned int *offset,
> +				     unsigned int *fragsz,
> +				     void **va, gfp_t gfp);
> +
> +static inline struct encoded_va *__page_frag_alloc_probe(struct page_frag_cache *nc,
> +							 unsigned int *offset,
> +							 unsigned int *fragsz,
> +							 void **va)
> +{
> +	struct encoded_va *encoded_va;
> +
> +	*fragsz = nc->remaining;
> +	encoded_va = nc->encoded_va;
> +	*offset = page_frag_cache_page_size(encoded_va) - *fragsz;
> +	*va = encoded_page_address(encoded_va) + *offset;
> +
> +	return encoded_va;
> +}
> +
> +#define page_frag_alloc_probe(nc, offset, fragsz, va)			\
> +({									\
> +	struct page *__page = NULL;					\
> +									\
> +	VM_BUG_ON(!*(fragsz));						\
> +	if (likely((nc)->remaining >= *(fragsz)))			\
> +		__page = virt_to_page(__page_frag_alloc_probe(nc,	\
> +							      offset,	\
> +							      fragsz,	\
> +							      va));	\
> +									\
> +	__page;								\
> +})
> +

Why is this a macro instead of just being an inline? Are you trying to
avoid having to include a header due to the virt_to_page()?