public inbox for linux-kernel@vger.kernel.org
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
	Netdev <netdev@vger.kernel.org>,
	Alexander Duyck <alexanderduyck@fb.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexander Lobakin <alobakin@pm.me>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Guillaume Nault <gnault@redhat.com>,
	Cong Wang <cong.wang@bytedance.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Matteo Croce <mcroce@microsoft.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets
Date: Thu, 15 Jul 2021 18:02:53 +0300	[thread overview]
Message-ID: <YPBOHcx/sCEz/+wn@Iliass-MBP> (raw)
In-Reply-To: <CAKgT0UfOr7U-8T+Hr9NVPL7EMYaTzbx7w1-hUthjD9bXUFsqMw@mail.gmail.com>

On Thu, Jul 15, 2021 at 07:57:57AM -0700, Alexander Duyck wrote:
> On Thu, Jul 15, 2021 at 7:45 AM Ilias Apalodimas
> <ilias.apalodimas@linaro.org> wrote:
> >
> > > > >           atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
> >
> > [...]
> >
> > > > >                             &shinfo->dataref))
> > > > > -             return;
> > > > > +             goto exit;
> > > >
> > > > Is it possible this patch breaks the head frag page of the original skb,
> > > > supposing its head frag page comes from the page pool and the change below
> > > > clears pp_recycle for the original skb, causing a page leak in the page pool?
> > >
> > > I don't see how. The assumption here is that when atomic_sub_return
> > > gets down to 0 we will still have an skb with skb->pp_recycle set and
> > > it will flow down and encounter skb_free_head below. All we are doing
> > > is skipping those steps and clearing skb->pp_recycle for all but the
> > > last buffer and the last one to free it will trigger the recycling.
> >
> > I think the assumption here is that
> > 1. We clone an skb
> > 2. The original skb goes into pskb_expand_head()
> > 3. skb_release_data() will be called for the original skb
> >
> > But with the dataref bumped, we'll skip the recycling for it but we'll also
> > skip recycling or unmapping the current head (which is a page_pool mapped
> > buffer)
> 
> Right, but in that case it is the clone that is left holding the
> original head and the skb->pp_recycle flag is set on the clone as it
> was copied from the original when we cloned it. 

Ah yes, that's what I missed

> What we have
> essentially done is transferred the responsibility for freeing it from
> the original to the clone.
> 
> If you think about it the result is the same as if step 2 was to go
> into kfree_skb. We would still be calling skb_release_data and the
> dataref would be decremented without the original freeing the page. We
> have to wait until all the clones are freed and dataref reaches 0
> before the head can be recycled.

Yep sounds correct

Thanks
/Ilias


Thread overview: 15+ messages
2021-07-09  6:29 [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets Ilias Apalodimas
2021-07-09 14:34 ` Alexander Duyck
2021-07-09 17:29   ` Ilias Apalodimas
2021-07-12 11:52   ` Jesper Dangaard Brouer
2021-07-15  4:00 ` Yunsheng Lin
2021-07-15 10:00   ` Ilias Apalodimas
2021-07-15 10:38     ` Ilias Apalodimas
2021-07-15 10:47       ` Yunsheng Lin
2021-07-15 12:37         ` Ilias Apalodimas
2021-07-15 14:25   ` Alexander Duyck
2021-07-15 14:45     ` Ilias Apalodimas
2021-07-15 14:57       ` Alexander Duyck
2021-07-15 15:02         ` Ilias Apalodimas [this message]
2021-07-16  2:30           ` Yunsheng Lin
2021-07-16  6:43             ` Ilias Apalodimas
