From: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
To: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Jerome Marchand <jmarchan@redhat.com>,
devel@driverdev.osuosl.org,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
linux-kernel@vger.kernel.org, Minchan Kim <minchan@kernel.org>
Subject: Re: [PATCH 1/2] staging: zram: minimize `slot_free_lock' usage
Date: Mon, 9 Sep 2013 12:06:49 +0300 [thread overview]
Message-ID: <20130909090649.GA2236@swordfish.minsk.epam.com> (raw)
In-Reply-To: <20130909083327.GX6329@mwanda>
On (09/09/13 11:33), Dan Carpenter wrote:
> On Fri, Sep 06, 2013 at 05:55:45PM +0300, Sergey Senozhatsky wrote:
> > On (09/06/13 16:42), Jerome Marchand wrote:
> > > On 09/06/2013 03:47 PM, Sergey Senozhatsky wrote:
> > > > Calling handle_pending_slot_free() for every RW operation may
> > > > cause unnecessary slot_free_lock locking, because most likely the
> > > > process will see a NULL slot_free_rq. Call handle_pending_slot_free()
> > > > only when the current process detects that slot_free_rq is not NULL.
> > > >
> > > > Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> > > >
> > > > ---
> > > >
> > > > drivers/staging/zram/zram_drv.c | 5 +++--
> > > > 1 file changed, 3 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
> > > > index 91d94b5..17386e2 100644
> > > > --- a/drivers/staging/zram/zram_drv.c
> > > > +++ b/drivers/staging/zram/zram_drv.c
> > > > @@ -532,14 +532,15 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
> > > > {
> > > > int ret;
> > > >
> > > > + if (zram->slot_free_rq)
> > > > + handle_pending_slot_free(zram);
> > > > +
> > >
> > > Calling handle_pending_slot_free() without holding zram->lock?
> > > That's racy.
> >
> > Sorry, my bad. It should take the down_write() lock.
> >
>
> Or down_read() on the read path. Should we leave the original as-is?
>
Hello,

down_write() for both READ and WRITE looks ok to me (plus down_write()
in zram_slot_free()). Is there something I'm missing?

down_read() for READ would, with N active readers, force N-1 processes
to spin on zram->slot_free_lock in handle_pending_slot_free().

It probably makes sense to add an extra zram->slot_free_rq check for
the case where a process slept on the rw lock while someone else was
freeing pages:
static void handle_pending_slot_free(struct zram *zram)
{
	struct zram_slot_free *free_rq;

	down_write(&zram->lock);
+	if (!zram->slot_free_rq)
+		goto out;
	spin_lock(&zram->slot_free_lock);
	while (zram->slot_free_rq) {
		free_rq = zram->slot_free_rq;
		zram->slot_free_rq = free_rq->next;
		zram_free_page(zram, free_rq->index);
		kfree(free_rq);
	}
	spin_unlock(&zram->slot_free_lock);
+out:
	up_write(&zram->lock);
}
-ss
> regards,
> dan carpenter
>
Thread overview: 5+ messages
2013-09-06 13:47 [PATCH 1/2] staging: zram: minimize `slot_free_lock' usage Sergey Senozhatsky
2013-09-06 14:42 ` Jerome Marchand
2013-09-06 14:55 ` Sergey Senozhatsky
2013-09-09 8:33 ` Dan Carpenter
2013-09-09 9:06 ` Sergey Senozhatsky [this message]