From: Minchan Kim <minchan@kernel.org>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-kernel@vger.kernel.org, Jiang Liu <jiang.liu@huawei.com>,
Nitin Gupta <ngupta@vflare.org>,
stable@vger.kernel.org
Subject: Re: [PATCH] zram: bug fix: delay lock holding in zram_slot_free_noity
Date: Mon, 5 Aug 2013 17:27:00 +0900
Message-ID: <20130805082700.GP32486@bbox>
In-Reply-To: <20130805080422.GB15376@kroah.com>
Hello Greg,
On Mon, Aug 05, 2013 at 04:04:22PM +0800, Greg Kroah-Hartman wrote:
> On Mon, Aug 05, 2013 at 04:18:34PM +0900, Minchan Kim wrote:
> > I was preparing to promote zram, and it was almost done.
> > Before sending the patch, I ran a test and my eyebrows went up.
> >
> > [1] introduced down_write() in zram_slot_free_notify() to prevent a race
> > between zram_slot_free_notify() and zram_bvec_[read|write](). The race
> > can happen if somebody with permission to open the swap device reads it
> > directly while swap is using the device in parallel.
> >
> > However, zram_slot_free_notify() is called with the swap layer's
> > spin_lock held, so we must avoid taking the mutex there. Otherwise,
> > lockdep warns about it.
>
> As it should.
It's okay to call down_write_trylock() instead of down_write() under a
spinlock, since a trylock never sleeps. Is there any problem with that?
Maybe I just need to rewrite the description?
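
For readers following along, here is a minimal sketch of the trylock idea.
It is illustrative only, not the posted patch; the zram->lock rwsem and the
zram_free_page() helper are recalled from the staging driver of that era and
may differ in detail.

/*
 * Sketch: swap_slot_free_notify() callbacks run with the swap layer's
 * spinlock held, so nothing here may sleep.  A trylock never sleeps;
 * the cost is that the contended case has to be handled another way.
 */
static void zram_slot_free_notify(struct block_device *bdev,
				  unsigned long index)
{
	struct zram *zram = bdev->bd_disk->private_data;

	/* down_write(&zram->lock) could sleep here -> lockdep splat */
	if (!down_write_trylock(&zram->lock))
		return;	/* contended: the free would have to be deferred */

	zram_free_page(zram, index);	/* helper name as I recall it */
	up_write(&zram->lock);
}
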
>
> > I guess the best solution is to redesign the zram locking scheme
> > entirely, but we are on the verge of promoting zram, so it's not
> > desirable to change a lot of critical code, and such a big change isn't
> > in good shape for backporting to stable trees. So I think the simple
> > patch is best at the moment.
>
> What do you mean by "verge of promoting"? If it's wrong, it needs to be
> fixed properly, don't paper over something.
It seems you consider the patch a band-aid, probably because my description
was misleading; that is not what I meant. The ideal solution would be to
change the locking scheme entirely to improve concurrency, but others might
consider that overkill, since we have not seen any reports of parallel
workloads where the coarse-grained lock causes trouble. So the simple patch
below still looks reasonable to me. Let's wait for the other zram
developers' opinions.
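
As an aside, here is a rough sketch of what "delaying the lock" in
zram_slot_free_notify() could look like. The pending_free list, its
spinlock, and the struct name below are hypothetical illustrations, not the
fields from the posted patch.

/*
 * Hypothetical sketch: instead of taking the rwsem in atomic context,
 * record the freed slot under a plain spinlock and let a later
 * zram_bvec_rw() call, which already takes zram->lock, drain the list.
 */
struct zram_pending_free {
	struct zram_pending_free *next;
	unsigned long index;
};

static void zram_slot_free_notify(struct block_device *bdev,
				  unsigned long index)
{
	struct zram *zram = bdev->bd_disk->private_data;
	struct zram_pending_free *pf;

	pf = kmalloc(sizeof(*pf), GFP_ATOMIC);	/* we cannot sleep here */
	if (!pf)
		return;
	pf->index = index;

	spin_lock(&zram->pending_free_lock);	/* hypothetical field */
	pf->next = zram->pending_free;		/* hypothetical field */
	zram->pending_free = pf;
	spin_unlock(&zram->pending_free_lock);
}
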
>
> Please fix this correctly, I really don't care about staging drivers in
> stable kernels as lots of distros refuse to enable them (and rightly
> so.)
Redesigning the whole locking scheme would be a big change, so deciding on
it this early seems hasty.
Let's wait for others' opinions.
Nitin, could you post your opinion?
>
> thanks,
>
> greg k-h
--
Kind regards,
Minchan Kim