From: Paul Durrant <Paul.Durrant@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
"davem@davemloft.net" <davem@davemloft.net>
Subject: RE: [PATCH] xen-netback: fix occasional leak of grant ref mappings under memory pressure
Date: Thu, 28 Feb 2019 11:21:57 +0000
Message-ID: <e5ef3c0291854b67841b6c532f7651b9@AMSPEX02CL02.citrite.net>
In-Reply-To: <20190228110136.somjads2f5ivqhju@zion.uk.xensource.com>
> -----Original Message-----
> From: Wei Liu [mailto:wei.liu2@citrix.com]
> Sent: 28 February 2019 11:02
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Igor Druzhinin <igor.druzhinin@citrix.com>; xen-devel@lists.xenproject.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Wei Liu <wei.liu2@citrix.com>;
> davem@davemloft.net
> Subject: Re: [PATCH] xen-netback: fix occasional leak of grant ref mappings under memory pressure
>
> On Thu, Feb 28, 2019 at 09:46:57AM +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
> > > Sent: 28 February 2019 02:03
> > > To: xen-devel@lists.xenproject.org; netdev@vger.kernel.org; linux-kernel@vger.kernel.org
> > > Cc: Wei Liu <wei.liu2@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; davem@davemloft.net; Igor Druzhinin <igor.druzhinin@citrix.com>
> > > Subject: [PATCH] xen-netback: fix occasional leak of grant ref mappings under memory pressure
> > >
> > > The zero-copy callback flag is not yet set on the frag list skb at
> > > the moment xenvif_handle_frag_list() returns -ENOMEM. This
> > > eventually results in leaked grant ref mappings, since
> > > xenvif_zerocopy_callback() is never called for those fragments. The
> > > leaked mappings build up and eventually cause Xen to kill Dom0 as
> > > the slots get reused for new mappings.
> > >
> > > That behavior is observed under certain workloads where sudden spikes
> > > of page cache usage for writes coexist with active atomic skb allocations.
> > >
> > > Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> > > ---
> > > drivers/net/xen-netback/netback.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > > index 80aae3a..2023317 100644
> > > --- a/drivers/net/xen-netback/netback.c
> > > +++ b/drivers/net/xen-netback/netback.c
> > > @@ -1146,9 +1146,12 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
> > >
> > >  		if (unlikely(skb_has_frag_list(skb))) {
> > >  			if (xenvif_handle_frag_list(queue, skb)) {
> > > +				struct sk_buff *nskb =
> > > +						skb_shinfo(skb)->frag_list;
> > >  				if (net_ratelimit())
> > >  					netdev_err(queue->vif->dev,
> > >  						   "Not enough memory to consolidate frag_list!\n");
> > > +				xenvif_skb_zerocopy_prepare(queue, nskb);
> > >  				xenvif_skb_zerocopy_prepare(queue, skb);
> > >  				kfree_skb(skb);
> > >  				continue;
> >
> > Whilst this fix will do the job, I think it would be better to get rid
> > of the kfree_skb() from inside xenvif_handle_frag_list() and always
> > deal with it here, rather than having it happen in two different
> > places. Something like the following...
>
> +1 for having only one place.
>
> >
> > ---8<---
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index 80aae3a32c2a..093c7b860772 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -1027,13 +1027,13 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
> >  /* Consolidate skb with a frag_list into a brand new one with local pages on
> >   * frags. Returns 0 or -ENOMEM if can't allocate new pages.
> >   */
> > -static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *skb)
> > +static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *skb,
> > +				   struct sk_buff *nskb)
> >  {
> >  	unsigned int offset = skb_headlen(skb);
> >  	skb_frag_t frags[MAX_SKB_FRAGS];
> >  	int i, f;
> >  	struct ubuf_info *uarg;
> > -	struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
> >
> >  	queue->stats.tx_zerocopy_sent += 2;
> >  	queue->stats.tx_frag_overflow++;
> > @@ -1072,11 +1072,6 @@ static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *s
> >  		skb_frag_size_set(&frags[i], len);
> >  	}
> >
> > -	/* Copied all the bits from the frag list -- free it. */
> > -	skb_frag_list_init(skb);
> > -	xenvif_skb_zerocopy_prepare(queue, nskb);
> > -	kfree_skb(nskb);
> > -
> >  	/* Release all the original (foreign) frags. */
> >  	for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
> >  		skb_frag_unref(skb, f);
> > @@ -1145,7 +1140,11 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
> >  		xenvif_fill_frags(queue, skb);
> > 
> >  		if (unlikely(skb_has_frag_list(skb))) {
> > -			if (xenvif_handle_frag_list(queue, skb)) {
> > +			struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
> > +
> > +			xenvif_skb_zerocopy_prepare(queue, nskb);
> > +
> > +			if (xenvif_handle_frag_list(queue, skb, nskb)) {
> >  				if (net_ratelimit())
> >  					netdev_err(queue->vif->dev,
> >  						   "Not enough memory to consolidate frag_list!\n");
> > @@ -1153,6 +1152,10 @@ static int xenvif_tx_submit(struct xenvif_queue *queue)
> >  				kfree_skb(skb);
> >  				continue;
> >  			}
> > +
> > +			/* Copied all the bits from the frag list. */
> > +			skb_frag_list_init(skb);
> > +			kfree(nskb);
>
> I think you want kfree_skb here?

No. nskb is the frag list... it is unlinked from skb by the call to
skb_frag_list_init() and then it can be freed on its own. The skb is
what we need to retain, because that now contains all the data.
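
To spell out the sequence I have in mind (just a sketch, not the
literal patch; error handling and the exact free call left aside):

	struct sk_buff *nskb = skb_shinfo(skb)->frag_list;

	/* xenvif_handle_frag_list() copies all the frag list data into
	 * skb's own, locally allocated, frags...
	 */

	skb_frag_list_init(skb);	/* skb no longer references nskb */

	/* ...so at this point nskb is independent of skb and can be
	 * freed on its own, while skb carries on with all the data.
	 */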
Cheers,
Paul
>
> Wei.
>
> >  		}
> > 
> >  		skb->dev = queue->vif->dev;
> > ---8<---
> >
> > What do you think?
> >
> > Paul
> >
> > > --
> > > 2.7.4
> >
Thread overview: 9+ messages
2019-02-28 2:03 [PATCH] xen-netback: fix occasional leak of grant ref mappings under memory pressure Igor Druzhinin
2019-02-28 9:46 ` Paul Durrant
2019-02-28 11:01 ` Wei Liu
2019-02-28 11:21 ` Paul Durrant [this message]
2019-02-28 11:43 ` Igor Druzhinin
2019-02-28 11:49 ` Paul Durrant
2019-02-28 12:07 ` Paul Durrant
2019-02-28 12:37 ` Wei Liu
2019-02-28 9:50 ` [Xen-devel] " Andrew Cooper