From: Rakesh Pandit <rakesh@tuxera.com>
To: "Javier González" <jg@lightnvm.io>
Cc: "Matias Bjørling" <mb@lightnvm.io>,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/6] lightnvm: pblk: free up mempool allocation for erases correctly
Date: Mon, 2 Oct 2017 20:18:42 +0300 [thread overview]
Message-ID: <20171002171842.GA6008@hercules.tuxera.com> (raw)
In-Reply-To: <20171002122510.GA3946@hercules.tuxera.com>
On Mon, Oct 02, 2017 at 03:25:10PM +0300, Rakesh Pandit wrote:
> On Mon, Oct 02, 2017 at 02:09:35PM +0200, Javier González wrote:
> > > On 1 Oct 2017, at 15.25, Rakesh Pandit <rakesh@tuxera.com> wrote:
> > >
> > > While separating read and erase mempools in 22da65a1b, pblk_g_rq_cache
> > > was used twice to set aside memory for both erase and read
> > > requests. Because the same kmem cache is used for both, a single call to
> > > kmem_cache_destroy doesn't deallocate everything, so repeatedly
> > > loading and unloading the pblk module would eventually leak memory.
> > >
> > > The fix is to use a truly separate kmem cache and track it
> > > appropriately.
> > >
> > > Fixes: 22da65a1b ("lightnvm: pblk: decouple read/erase mempools")
> > > Signed-off-by: Rakesh Pandit <rakesh@tuxera.com>
> > >
> >
> > I'm not sure I follow this logic. I assume that you're thinking of the
> > refcount on kmem_cache. During cache creation, all is good; if a
> > different cache creation fails, destruction is guaranteed, since the
> > refcount is 0. On tear down (pblk_core_free), we destroy the mempools
> > associated with the caches. In this case, the refcount goes to 0 too,
> > as we destroy the 2 mempools. So I don't see where the leak can
> > happen. Am I missing something?
> >
> > In any case, Jens reported some bugs in the mempools, where we did not
> > guarantee forward progress. You can find the original discussion and
> > the mempool audit at [1]. It would be good if you reviewed these.
> >
> > [1] https://www.spinics.net/lists/kernel/msg2602274.html
> >
>
> Thanks, yes, it makes sense to follow up in the patch thread. I will
> respond to the above questions there later today.
>
I wasn't reasoning about this correctly, and I was also looking at test
results from an incorrectly instrumented debug build.

I went through the series you pointed to and it all looks okay to me now.

Please drop this patch.
Regards,
Thread overview: 5+ messages
2017-10-01 13:25 [PATCH 5/6] lightnvm: pblk: free up mempool allocation for erases correctly Rakesh Pandit
2017-10-02 12:09 ` Javier González
2017-10-02 12:25 ` Rakesh Pandit
2017-10-02 17:18 ` Rakesh Pandit [this message]
2017-10-03 6:42 ` Javier González