From: Paolo Abeni <pabeni@redhat.com>
To: David Howells <dhowells@redhat.com>
Cc: netdev@vger.kernel.org,
Alexander Duyck <alexander.duyck@gmail.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
David Ahern <dsahern@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
Jens Axboe <axboe@kernel.dk>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Menglong Dong <imagedong@tencent.com>
Subject: Re: [PATCH net-next v3 01/18] net: Copy slab data for sendmsg(MSG_SPLICE_PAGES)
Date: Fri, 23 Jun 2023 11:37:12 +0200
Message-ID: <6cf2ea121c4fdbd04682224c5acf6c73cc47f2f7.camel@redhat.com>
In-Reply-To: <1969720.1687511219@warthog.procyon.org.uk>
On Fri, 2023-06-23 at 10:06 +0100, David Howells wrote:
> Paolo Abeni <pabeni@redhat.com> wrote:
>
> > IMHO this function uses a few too many labels and would be easier to
> > read with, e.g., the above chunk of code moved into a conditional branch.
>
> Maybe. I was trying to put the fast path up at the top without the slow path
> bits in it, but I can put the "insufficient_space" bit there.
I *think* you could move insufficient_space into a separate helper;
that should achieve your goal with fewer labels and hopefully no
additional complexity. Roughly like the sketch below.
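A minimal, untested sketch of what I mean; the helper name, the struct
fields and the fast-path carve-out are my guesses from the quoted hunks,
not your actual patch:

	/* Hypothetical split: keep the fast path label-free and push
	 * everything else into a noinline helper.
	 */
	static noinline void *page_frag_alloc_slow(struct page_frag_cache *cache,
						   size_t fragsz, gfp_t gfp)
	{
		/* Refurbish the current folio or allocate a replacement,
		 * then hand out the first fragment; NULL on failure, or
		 * if fragsz cannot fit in any folio we can get.
		 */
		return NULL;	/* stands in for the current slow-path code */
	}

	void *page_frag_alloc(struct page_frag_cache *cache, size_t fragsz,
			      gfp_t gfp)
	{
		size_t offset = cache->offset;

		if (likely(cache->folio && fragsz <= offset)) {
			/* Fast path: carve downwards from the current offset. */
			offset -= fragsz;
			cache->offset = offset;
			cache->pagecnt_bias--;
			return folio_address(cache->folio) + offset;
		}
		return page_frag_alloc_slow(cache, fragsz, gfp);
	}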
>
> > Even without such change, I think the above 'goto try_again;'
> > introduces an unneeded conditional, as at this point we know 'fragsz <=
> > fsize'.
>
> Good point.
>
> > > +		cache->pfmemalloc = folio_is_pfmemalloc(spare);
> > > +		if (cache->folio)
> > > +			goto reload;
> >
> > I think there is some problem with the above.
> >
> > If cache->folio is != NULL, and cache->folio was not pfmemalloc-ed
> > while the spare one is, it looks like the wrong policy will be used.
> > It would be even worse if the folio was pfmemalloc-ed while the spare is not.
> >
> > I think moving 'cache->pfmemalloc' initialization...
> >
> > > +	}
> > > +
> >
> > ... here should fix the above.
>
> Yeah. We might have raced with someone else or been moved to another CPU and
> there might now be a folio we can allocate from.
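Exactly. To spell out the move I'm suggesting (untested, paraphrasing
the quoted hunk rather than the actual code):

	/* Record the spare's pfmemalloc state only at the point where
	 * the spare actually becomes cache->folio, so a folio installed
	 * by a racing refill keeps its own policy.
	 */
	if (cache->folio)
		goto reload;	/* raced: keep the current folio's policy */
	cache->folio = spare;
	cache->pfmemalloc = folio_is_pfmemalloc(spare);	/* moved here */
	spare = NULL;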
>
> > > +	/* Reset page count bias and offset to start of new frag */
> > > +	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> > > +	offset = folio_size(folio);
> > > +	goto try_again;
> >
> > What if fragsz > PAGE_SIZE, we are consistently unable to allocate a
> > high-order page, but an order-0, pfmemalloc-ed page allocation is
> > successful? It looks like this could become an unbounded loop?
>
> It shouldn't. It should go:
>
> 	try_again:
> 		if (fragsz > offset)
> 			goto insufficient_space;
> 	insufficient_space:
> 		/* See if we can refurbish the current folio. */
> 		...
I think the problematic path is the one for pfmemalloc-ed pages:

	if (unlikely(cache->pfmemalloc)) {
		__folio_put(folio);
		goto get_new_folio;
	}

just before the following.
> 		fsize = folio_size(folio);
> 		if (unlikely(fragsz > fsize))
> 			goto frag_too_big;
> 	frag_too_big:
> 		...
> 		return NULL;
>
> Though for safety's sake, it would make sense to put in a size check for the
> case where we fail to allocate a larger-order folio.
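Agreed. Something along these lines, perhaps (untested sketch; the
surrounding fallback structure is my guess):

	/* Once the allocation has fallen back to a smaller (eventually
	 * order-0) folio, fail a fragment that can never fit instead of
	 * looping back to get_new_folio forever.
	 */
	if (unlikely(fragsz > folio_size(spare))) {
		folio_put(spare);
		return NULL;
	}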
>
> > >  	do {
> > >  		struct page *page = pages[i++];
> > >  		size_t part = min_t(size_t, PAGE_SIZE - off, len);
> > > -
> > > -		ret = -EIO;
> > > -		if (WARN_ON_ONCE(!sendpage_ok(page)))
> > > +		bool put = false;
> > > +
> > > +		if (PageSlab(page)) {
> >
> > I'm a bit concerned from the above. If I read correctly, tcp 0-copy
>
> Well, splice()-to-tcp will; MSG_ZEROCOPY is unaffected.
Ah right! I got lost in some 'if' branch.
> > will go through that for every page, even if the expected use-case is
> > always !PageSlab(page). compound_head() could be costly if the head
> > page is not hot in cache and I'm not sure if that could be the case for
> > tcp 0-copy. The bottom line is that I fear a possible regression here.
>
> I can put the PageSlab() check inside the sendpage_ok() so the page flag is
> only checked once.
Perhaps I'm lost again, but AFAICS:

	__PAGEFLAG(Slab, slab, PF_NO_TAIL)
	// ...
	#define __PAGEFLAG(uname, lname, policy)				\
		TESTPAGEFLAG(uname, lname, policy)				\
	// ...
	#define TESTPAGEFLAG(uname, lname, policy)				\
	static __always_inline bool folio_test_##lname(struct folio *folio)	\
	{ return test_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
	static __always_inline int Page##uname(struct page *page)		\
	{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
	// ... 'policy' is PF_NO_TAIL here
	#define PF_NO_TAIL(page, enforce) ({					\
		VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);		\
		PF_POISONED_CHECK(compound_head(page)); })

So it does look at compound_head() in the end?!
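So, hand-expanding the above (and dropping the poison/VM_BUG_ON
checking for clarity), PageSlab() boils down to:

	static __always_inline int PageSlab(struct page *page)
	{
		return test_bit(PG_slab, &compound_head(page)->flags);
	}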
> But PageSlab() doesn't check the headpage, only the page
> it is given. sendpage_ok() is more the problem as it also calls
> page_count(). I could drop the check.
Once the head page is hot in cache due to the previous check, it should
be cheap?
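FWIW, sendpage_ok() as it stands is just the following (from
include/linux/net.h, quoting from memory):

	static inline bool sendpage_ok(struct page *page)
	{
		/* Both checks resolve the head page - page_count() goes
		 * through page_folio() - so the second lookup should be
		 * cache-hot after the first.
		 */
		return !PageSlab(page) && page_count(page) >= 1;
	}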
Cheers,
Paolo