Message-ID: <634c885ccfb2e49e284aedc60e157bb12e5f3530.camel@redhat.com>
Subject: Re: [PATCH net-next v3 01/18] net: Copy slab data for sendmsg(MSG_SPLICE_PAGES)
From: Paolo Abeni <pabeni@redhat.com>
To: David Howells, netdev@vger.kernel.org
Cc: Alexander Duyck, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Willem de Bruijn, David Ahern, Matthew Wilcox, Jens Axboe,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Menglong Dong
Date: Fri, 23 Jun 2023 10:08:48 +0200
In-Reply-To: <20230620145338.1300897-2-dhowells@redhat.com>
References: <20230620145338.1300897-1-dhowells@redhat.com>
	 <20230620145338.1300897-2-dhowells@redhat.com>

Hi,

First things first: I'm sorry for the delayed feedback, I was traveling.

On Tue, 2023-06-20 at 15:53 +0100, David Howells wrote:
> If sendmsg() is passed MSG_SPLICE_PAGES and is given a buffer that contains
> some data that's resident in the slab, copy it rather than returning EIO.
> This can be made use of by a number of drivers in the kernel, including:
> iwarp, ceph/rds, dlm, nvme, ocfs2, drbd. It could also be used by iscsi,
> rxrpc, sunrpc, cifs and probably others.
> 
> skb_splice_from_iter() is given its own fragment allocator, as
> page_frag_alloc_align() can't be used because it does no locking to prevent
> parallel callers from racing. alloc_skb_frag() uses a separate folio for
> each cpu and locks to the cpu whilst allocating, reenabling cpu migration
> around folio allocation.
> 
> This could allocate a whole page instead for each fragment to be copied, as
> alloc_skb_with_frags() would do instead, but that would waste a lot of
> space (most of the fragments look like they're going to be small).
> 
> This allows an entire message that consists of, say, a protocol header or
> two, a number of pages of data and a protocol footer to be sent using a
> single call to sock_sendmsg().
> 
> The callers could be made to copy the data into fragments before calling
> sendmsg(), but that then penalises them if MSG_SPLICE_PAGES gets ignored.
> 
> Signed-off-by: David Howells
> cc: Alexander Duyck
> cc: Eric Dumazet
> cc: "David S. Miller"
> cc: David Ahern
> cc: Jakub Kicinski
> cc: Paolo Abeni
> cc: Jens Axboe
> cc: Matthew Wilcox
> cc: Menglong Dong
> cc: netdev@vger.kernel.org
> ---
> 
> Notes:
>     ver #3)
>      - Remove duplicate decl of skb_splice_from_iter().
> 
>     ver #2)
>      - Fix parameter to put_cpu_ptr() to have an '&'.
> 
>  include/linux/skbuff.h |   2 +
>  net/core/skbuff.c      | 171 ++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 170 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 91ed66952580..5f53bd5d375d 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -5037,6 +5037,8 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
>  #endif
>  }
>  
> +void *alloc_skb_frag(size_t fragsz, gfp_t gfp);
> +void *copy_skb_frag(const void *s, size_t len, gfp_t gfp);
>  ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
>  			     ssize_t maxsize, gfp_t gfp);
>  
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index fee2b1c105fe..d962c93a429d 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -6755,6 +6755,145 @@ nodefer:	__kfree_skb(skb);
>  		smp_call_function_single_async(cpu, &sd->defer_csd);
>  }
>  
> +struct skb_splice_frag_cache {
> +	struct folio	*folio;
> +	void		*virt;
> +	unsigned int	offset;
> +	/* we maintain a pagecount bias, so that we dont dirty cache line
> +	 * containing page->_refcount every time we allocate a fragment.
> +	 */
> +	unsigned int	pagecnt_bias;
> +	bool		pfmemalloc;
> +};
> +
> +static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
> +
> +/**
> + * alloc_skb_frag - Allocate a page fragment for using in a socket
> + * @fragsz: The size of fragment required
> + * @gfp: Allocation flags
> + */
> +void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
> +{
> +	struct skb_splice_frag_cache *cache;
> +	struct folio *folio, *spare = NULL;
> +	size_t offset, fsize;
> +	void *p;
> +
> +	if (WARN_ON_ONCE(fragsz == 0))
> +		fragsz = 1;
> +
> +	cache = get_cpu_ptr(&skb_splice_frag_cache);
> +reload:
> +	folio = cache->folio;
> +	offset = cache->offset;
> +try_again:
> +	if (fragsz > offset)
> +		goto insufficient_space;
> +
> +	/* Make the allocation. */
> +	cache->pagecnt_bias--;
> +	offset = ALIGN_DOWN(offset - fragsz, SMP_CACHE_BYTES);
> +	cache->offset = offset;
> +	p = cache->virt + offset;
> +	put_cpu_ptr(&skb_splice_frag_cache);
> +	if (spare)
> +		folio_put(spare);
> +	return p;
> +
> +insufficient_space:
> +	/* See if we can refurbish the current folio. */
> +	if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
> +		goto get_new_folio;
> +	if (unlikely(cache->pfmemalloc)) {
> +		__folio_put(folio);
> +		goto get_new_folio;
> +	}
> +
> +	fsize = folio_size(folio);
> +	if (unlikely(fragsz > fsize))
> +		goto frag_too_big;
> +
> +	/* OK, page count is 0, we can safely set it */
> +	folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
> +
> +	/* Reset page count bias and offset to start of new frag */
> +	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> +	offset = fsize;
> +	goto try_again;

IMHO this function uses a few too many labels and would be easier to
read if, e.g., the above chunk of code were moved into a conditional
branch.

Even without such a change, I think the above 'goto try_again;'
introduces an unneeded conditional: at this point we already know
'fragsz <= fsize'.
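Something like the following wild, completely untested sketch is what I
have in mind. 'get_new_folio:' and 'frag_too_big:' are left exactly as
in your patch (get_new_folio still ends with 'goto try_again;'); the
'insufficient_space' label goes away and the refurbished-folio path
falls through straight into the allocation:

	cache = get_cpu_ptr(&skb_splice_frag_cache);
reload:
	folio = cache->folio;
	offset = cache->offset;
try_again:
	if (fragsz > offset) {
		/* See if we can refurbish the current folio before falling
		 * back to allocating a new one.
		 */
		if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
			goto get_new_folio;
		if (unlikely(cache->pfmemalloc)) {
			__folio_put(folio);
			goto get_new_folio;
		}

		fsize = folio_size(folio);
		if (unlikely(fragsz > fsize))
			goto frag_too_big;

		/* OK, page count is 0, we can safely set it */
		folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);

		/* Reset page count bias and offset to start of new frag */
		cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
		offset = fsize;

		/* 'fragsz <= fsize' is already known here, just fall
		 * through to the allocation below without re-checking.
		 */
	}

	/* Make the allocation. */
	cache->pagecnt_bias--;
	offset = ALIGN_DOWN(offset - fragsz, SMP_CACHE_BYTES);
	cache->offset = offset;
	p = cache->virt + offset;
	put_cpu_ptr(&skb_splice_frag_cache);
	if (spare)
		folio_put(spare);
	return p;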
> +
> +get_new_folio:
> +	if (!spare) {
> +		cache->folio = NULL;
> +		put_cpu_ptr(&skb_splice_frag_cache);
> +
> +#if PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE
> +		spare = folio_alloc(gfp | __GFP_NOWARN | __GFP_NORETRY |
> +				    __GFP_NOMEMALLOC,
> +				    PAGE_FRAG_CACHE_MAX_ORDER);
> +		if (!spare)
> +#endif
> +			spare = folio_alloc(gfp, 0);
> +		if (!spare)
> +			return NULL;
> +
> +		cache = get_cpu_ptr(&skb_splice_frag_cache);
> +		/* We may now be on a different cpu and/or someone else may
> +		 * have refilled it
> +		 */
> +		cache->pfmemalloc = folio_is_pfmemalloc(spare);
> +		if (cache->folio)
> +			goto reload;

I think there is a problem with the above: if cache->folio is != NULL
and cache->folio was not pfmemalloc-ed while the spare one is, it looks
like the wrong policy will be used. It should be even worse the other
way around, with the cached folio pfmemalloc-ed and the spare one not.

I think moving the 'cache->pfmemalloc' initialization...

> +	}
> +

... here should fix the above.
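i.e. something along these lines (again, untested), so that
'pfmemalloc' always describes the folio that actually ends up in the
cache:

		cache = get_cpu_ptr(&skb_splice_frag_cache);
		/* We may now be on a different cpu and/or someone else may
		 * have refilled it.
		 */
		if (cache->folio)
			goto reload;
	}

	/* Record the pfmemalloc state of the folio we are about to
	 * install, not that of a spare we might end up discarding.
	 */
	cache->pfmemalloc = folio_is_pfmemalloc(spare);
	cache->folio = spare;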
> +	cache->folio = spare;
> +	cache->virt = folio_address(spare);
> +	folio = spare;
> +	spare = NULL;
> +
> +	/* Even if we own the page, we do not use atomic_set(). This would
> +	 * break get_page_unless_zero() users.
> +	 */
> +	folio_ref_add(folio, PAGE_FRAG_CACHE_MAX_SIZE);
> +
> +	/* Reset page count bias and offset to start of new frag */
> +	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> +	offset = folio_size(folio);
> +	goto try_again;

What if fragsz > PAGE_SIZE and we are consistently unable to allocate a
high-order page, but an order-0, pfmemalloc-ed page allocation
succeeds? Couldn't this become an unbounded loop?

> +
> +frag_too_big:
> +	/* The caller is trying to allocate a fragment with fragsz > PAGE_SIZE
> +	 * but the cache isn't big enough to satisfy the request, this may
> +	 * happen in low memory conditions. We don't release the cache page
> +	 * because it could make memory pressure worse so we simply return NULL
> +	 * here.
> +	 */
> +	cache->offset = offset;
> +	put_cpu_ptr(&skb_splice_frag_cache);
> +	if (spare)
> +		folio_put(spare);
> +	return NULL;
> +}
> +EXPORT_SYMBOL(alloc_skb_frag);
> +
> +/**
> + * copy_skb_frag - Copy data into a page fragment.
> + * @s: The data to copy
> + * @len: The size of the data
> + * @gfp: Allocation flags
> + */
> +void *copy_skb_frag(const void *s, size_t len, gfp_t gfp)
> +{
> +	void *p;
> +
> +	p = alloc_skb_frag(len, gfp);
> +	if (!p)
> +		return NULL;
> +
> +	return memcpy(p, s, len);
> +}
> +EXPORT_SYMBOL(copy_skb_frag);
> +
>  static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
>  				 size_t offset, size_t len)
>  {
> @@ -6808,17 +6947,43 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
>  			break;
>  		}
>  
> +		if (space == 0 &&
> +		    !skb_can_coalesce(skb, skb_shinfo(skb)->nr_frags,
> +				      pages[0], off)) {
> +			iov_iter_revert(iter, len);
> +			break;
> +		}
> +
>  		i = 0;
>  		do {
>  			struct page *page = pages[i++];
>  			size_t part = min_t(size_t, PAGE_SIZE - off, len);
> -
> -			ret = -EIO;
> -			if (WARN_ON_ONCE(!sendpage_ok(page)))
> +			bool put = false;
> +
> +			if (PageSlab(page)) {

I'm a bit concerned by the above. If I read it correctly, TCP zero-copy
will go through that check for every page, even though the expected
use-case is always !PageSlab(page). compound_head() could be costly if
the head page is not hot in cache, and I'm not sure whether that can be
ruled out for TCP zero-copy.

The bottom line is that I fear a possible regression here.

Given all the above, and the late stage of the current devel cycle,
would you consider slicing down this series to just kill
MSG_SENDPAGE_NOTLAST, as Jakub suggested?

Thanks!

Paolo