public inbox for linux-kernel@vger.kernel.org
From: Nitin Gupta <ngupta@vflare.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Greg KH <greg@kroah.com>,
	Seth Jennings <sjenning@linux.vnet.ibm.com>,
	Minchan Kim <minchan.kim@gmail.com>,
	Sam Hansen <solid.se7en@gmail.com>,
	Linux Driver Project <devel@linuxdriverproject.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages
Date: Tue, 09 Oct 2012 10:35:24 -0700	[thread overview]
Message-ID: <5074605C.3000301@vflare.org> (raw)
In-Reply-To: <20121009133128.GA3244@barrios>

Hi Minchan,

On 10/09/2012 06:31 AM, Minchan Kim wrote:
>
> On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
>> Change 130f315a introduced a bug in the handling of incompressible
>> pages which resulted in memory allocation failure for such pages.
>> The fix is to store the page as-is i.e. without compression if the
>> compressed size exceeds a threshold (max_zpage_size) and request
>> exactly PAGE_SIZE sized buffer from zsmalloc.
>
> It seems you found a bug and already fixed it with the helpers below.
> But unfortunately, the description isn't enough for me to understand the problem.
> Could you explain in detail?
> You said it results in memory allocation failure. What failure, exactly?
> Do you mean this code failing because a few pages are needed to build a zspage for the class size?
>
>          handle = zs_malloc(zram->mem_pool, clen);
>          if (!handle) {
>                  pr_info("Error allocating memory for compressed "
>                          "page: %u, size=%zu\n", index, clen);
>                  ret = -ENOMEM;
>                  goto out;
>          }
>
> So instead of allocating more pages for incompressible page to make zspage,
> just allocate a page for PAGE_SIZE class without compression?
>

When a page expands on compression, say from 4K to 4K+30, we were trying 
to do zsmalloc(pool, 4K+30). However, the maximum size zsmalloc can 
allocate is PAGE_SIZE (for obvious reasons), so such allocation requests 
always fail (zs_malloc() returns 0).

For a page whose compressed size is larger than the original (this may 
happen with already-compressed or random data), there is no point 
storing the compressed version: it would take more space and would also 
cost decompression time when the page is read again. So, the fix is to 
store any page whose compressed size exceeds a threshold 
(max_zpage_size) as-is, i.e. without compression.  The memory required 
for storing such an uncompressed page can then be requested from 
zsmalloc, which supports PAGE_SIZE sized allocations.

Lastly, the fix ensures that we do not attempt to "decompress" a page 
that was stored in uncompressed form -- we just memcpy() out such pages.
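For illustration only, the two paths can be sketched in userspace like this. The compressor, the 3/4-of-PAGE_SIZE threshold, and all function names here are mock stand-ins, not the driver's actual symbols; only the "stored length == PAGE_SIZE means raw" convention mirrors what the patch does:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
/* Illustrative threshold; the real max_zpage_size is a zram tunable. */
#define MAX_ZPAGE_SIZE (PAGE_SIZE * 3 / 4)

/*
 * Hypothetical stand-in for lzo1x_1_compress(): all-zero pages shrink
 * to one byte, anything else "expands" by a 30-byte header, mimicking
 * incompressible data.
 */
static size_t mock_compress(const unsigned char *in, size_t in_len,
			    unsigned char *out)
{
	size_t i;

	for (i = 0; i < in_len; i++)
		if (in[i])
			break;
	if (i == in_len) {		/* trivially compressible */
		out[0] = 0;
		return 1;
	}
	memcpy(out, in, in_len);
	return in_len + 30;		/* expanded: caller must not store this */
}

/*
 * Write path: if compression expands past the threshold, store the page
 * as-is, so we never request more than PAGE_SIZE from the allocator.
 * Returns the stored length; a return of exactly PAGE_SIZE marks "raw".
 */
static size_t store_page(const unsigned char *page, unsigned char *slot)
{
	unsigned char workmem[2 * PAGE_SIZE];
	size_t clen = mock_compress(page, PAGE_SIZE, workmem);

	if (clen > MAX_ZPAGE_SIZE) {
		memcpy(slot, page, PAGE_SIZE);
		return PAGE_SIZE;
	}
	memcpy(slot, workmem, clen);
	return clen;
}

/*
 * Read path: a stored length of exactly PAGE_SIZE means the page was
 * never compressed, so just copy it out instead of "decompressing".
 */
static void load_page(const unsigned char *slot, size_t stored_len,
		      unsigned char *page)
{
	if (stored_len == PAGE_SIZE) {
		memcpy(page, slot, PAGE_SIZE);
		return;
	}
	/* the real driver calls lzo1x_decompress_safe() here */
	memset(page, 0, PAGE_SIZE);	/* mock inverse: the zero page */
}
```

The size == PAGE_SIZE convention above is the same one the read-path hunk checks via zram->table[index].size.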

Thanks,
Nitin


>>
>> Signed-off-by: Nitin Gupta <ngupta@vflare.org>
>> Reported-by: viechweg@gmail.com
>> Reported-by: paerley@gmail.com
>> Reported-by: wu.tommy@gmail.com
>> Tested-by: wu.tommy@gmail.com
>> Tested-by: michael@zugelder.org
>> ---
>>   drivers/staging/zram/zram_drv.c |   12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
>> index 653b074..6edefde 100644
>> --- a/drivers/staging/zram/zram_drv.c
>> +++ b/drivers/staging/zram/zram_drv.c
>> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>>   	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
>>   				ZS_MM_RO);
>>
>> -	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>> +	if (zram->table[index].size == PAGE_SIZE) {
>> +		memcpy(uncmem, cmem, PAGE_SIZE);
>> +		ret = LZO_E_OK;
>> +	} else {
>> +		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>>   				    uncmem, &clen);
>> +	}
>>
>>   	if (is_partial_io(bvec)) {
>>   		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
>> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>>   		goto out;
>>   	}
>>
>> -	if (unlikely(clen > max_zpage_size))
>> +	if (unlikely(clen > max_zpage_size)) {
>>   		zram_stat_inc(&zram->stats.bad_compress);
>> +		src = uncmem;
>> +		clen = PAGE_SIZE;
>> +	}
>>
>>   	handle = zs_malloc(zram->mem_pool, clen);
>>   	if (!handle) {
>> --
>> 1.7.9.5
>>
>



Thread overview: 8+ messages
2012-10-09  1:32 [PATCH] [staging][zram] Fix handling of incompressible pages Nitin Gupta
2012-10-09  8:46 ` Dan Carpenter
2012-10-09 13:31 ` Minchan Kim
2012-10-09 17:35   ` Nitin Gupta [this message]
2012-10-09 23:36     ` Minchan Kim
  -- strict thread matches above, loose matches on Subject: below --
2012-10-11  0:42 Nitin Gupta
2012-10-22 20:43 ` Greg KH
2012-10-22 21:48   ` Nitin Gupta
