public inbox for linux-kernel@vger.kernel.org
* [PATCH] [staging][zram] Fix handling of incompressible pages
@ 2012-10-09  1:32 Nitin Gupta
  2012-10-09  8:46 ` Dan Carpenter
  2012-10-09 13:31 ` Minchan Kim
  0 siblings, 2 replies; 8+ messages in thread
From: Nitin Gupta @ 2012-10-09  1:32 UTC (permalink / raw)
  To: Greg KH
  Cc: Seth Jennings, Minchan Kim, Sam Hansen, Linux Driver Project,
	linux-kernel

Change 130f315a introduced a bug in the handling of incompressible
pages which resulted in memory allocation failure for such pages.
The fix is to store the page as-is i.e. without compression if the
compressed size exceeds a threshold (max_zpage_size) and request
exactly PAGE_SIZE sized buffer from zsmalloc.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: viechweg@gmail.com
Reported-by: paerley@gmail.com
Reported-by: wu.tommy@gmail.com
Tested-by: wu.tommy@gmail.com
Tested-by: michael@zugelder.org
---
 drivers/staging/zram/zram_drv.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 653b074..6edefde 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
 				ZS_MM_RO);
 
-	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
+	if (zram->table[index].size == PAGE_SIZE) {
+		memcpy(uncmem, cmem, PAGE_SIZE);
+		ret = LZO_E_OK;
+	} else {
+		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
 				    uncmem, &clen);
+	}
 
 	if (is_partial_io(bvec)) {
 		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
@@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		goto out;
 	}
 
-	if (unlikely(clen > max_zpage_size))
+	if (unlikely(clen > max_zpage_size)) {
 		zram_stat_inc(&zram->stats.bad_compress);
+		src = uncmem;
+		clen = PAGE_SIZE;
+	}
 
 	handle = zs_malloc(zram->mem_pool, clen);
 	if (!handle) {
-- 
1.7.9.5



* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-09  1:32 Nitin Gupta
@ 2012-10-09  8:46 ` Dan Carpenter
  2012-10-09 13:31 ` Minchan Kim
  1 sibling, 0 replies; 8+ messages in thread
From: Dan Carpenter @ 2012-10-09  8:46 UTC (permalink / raw)
  To: Nitin Gupta
  Cc: Greg KH, Linux Driver Project, Seth Jennings, Minchan Kim,
	linux-kernel

On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> Change 130f315a introduced a bug in the handling of incompressible
         ^^^^^^^^
When you mention a patch, please include the human readable patch
title as well as the hash.  "staging: zram: remove special handle of
uncompressed page".

regards,
dan carpenter



* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-09  1:32 Nitin Gupta
  2012-10-09  8:46 ` Dan Carpenter
@ 2012-10-09 13:31 ` Minchan Kim
  2012-10-09 17:35   ` Nitin Gupta
  1 sibling, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2012-10-09 13:31 UTC (permalink / raw)
  To: Nitin Gupta
  Cc: Greg KH, Seth Jennings, Minchan Kim, Sam Hansen,
	Linux Driver Project, linux-kernel

Hi Nitin,

On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> Change 130f315a introduced a bug in the handling of incompressible
> pages which resulted in memory allocation failure for such pages.
> The fix is to store the page as-is i.e. without compression if the
> compressed size exceeds a threshold (max_zpage_size) and request
> exactly PAGE_SIZE sized buffer from zsmalloc.

It seems you found a bug and already fixed it with the helpers below.
But unfortunately, the description isn't enough for me to understand the problem.
Could you explain in detail?
You said it results in memory allocation failure. What is the failure?
Do you mean this code fails because a few pages are needed for a zspage to meet the class size?

        handle = zs_malloc(zram->mem_pool, clen);
        if (!handle) {
                pr_info("Error allocating memory for compressed "
                        "page: %u, size=%zu\n", index, clen);
                ret = -ENOMEM;
                goto out;
        }

So instead of allocating more pages for an incompressible page to make a zspage,
just allocate a page from the PAGE_SIZE class without compression?

> 
> Signed-off-by: Nitin Gupta <ngupta@vflare.org>
> Reported-by: viechweg@gmail.com
> Reported-by: paerley@gmail.com
> Reported-by: wu.tommy@gmail.com
> Tested-by: wu.tommy@gmail.com
> Tested-by: michael@zugelder.org
> ---
>  drivers/staging/zram/zram_drv.c |   12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
> index 653b074..6edefde 100644
> --- a/drivers/staging/zram/zram_drv.c
> +++ b/drivers/staging/zram/zram_drv.c
> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>  	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
>  				ZS_MM_RO);
>  
> -	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
> +	if (zram->table[index].size == PAGE_SIZE) {
> +		memcpy(uncmem, cmem, PAGE_SIZE);
> +		ret = LZO_E_OK;
> +	} else {
> +		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>  				    uncmem, &clen);
> +	}
>  
>  	if (is_partial_io(bvec)) {
>  		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  		goto out;
>  	}
>  
> -	if (unlikely(clen > max_zpage_size))
> +	if (unlikely(clen > max_zpage_size)) {
>  		zram_stat_inc(&zram->stats.bad_compress);
> +		src = uncmem;
> +		clen = PAGE_SIZE;
> +	}
>  
>  	handle = zs_malloc(zram->mem_pool, clen);
>  	if (!handle) {
> -- 
> 1.7.9.5
> 

-- 
Kind Regards,
Minchan Kim


* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-09 13:31 ` Minchan Kim
@ 2012-10-09 17:35   ` Nitin Gupta
  2012-10-09 23:36     ` Minchan Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Nitin Gupta @ 2012-10-09 17:35 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Greg KH, Seth Jennings, Minchan Kim, Sam Hansen,
	Linux Driver Project, linux-kernel

Hi Minchan,

On 10/09/2012 06:31 AM, Minchan Kim wrote:
>
> On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
>> Change 130f315a introduced a bug in the handling of incompressible
>> pages which resulted in memory allocation failure for such pages.
>> The fix is to store the page as-is i.e. without compression if the
>> compressed size exceeds a threshold (max_zpage_size) and request
>> exactly PAGE_SIZE sized buffer from zsmalloc.
>
> It seems you found a bug and already fixed it with the helpers below.
> But unfortunately, the description isn't enough for me to understand the problem.
> Could you explain in detail?
> You said it results in memory allocation failure. What is the failure?
> Do you mean this code fails because a few pages are needed for a zspage to meet the class size?
>
>          handle = zs_malloc(zram->mem_pool, clen);
>          if (!handle) {
>                  pr_info("Error allocating memory for compressed "
>                          "page: %u, size=%zu\n", index, clen);
>                  ret = -ENOMEM;
>                  goto out;
>          }
>
> So instead of allocating more pages for an incompressible page to make a zspage,
> just allocate a page from the PAGE_SIZE class without compression?
>

When a page expands on compression, say from 4K to 4K+30, we were trying 
to do zsmalloc(pool, 4K+30). However, the maximum size which zsmalloc 
can allocate is PAGE_SIZE (for obvious reasons), so such allocation 
requests always return failure (0).
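The expansion is easy to reproduce in userspace. A minimal sketch, using Python's zlib as a stand-in for the kernel's LZO1X compressor (the names and the choice of zlib are illustrative, not part of the driver):

```python
# Sketch (not zram code): already-high-entropy data generally *expands*
# when compressed, which is exactly the case the patch handles.
# zlib stands in for lzo1x_1_compress().
import os
import zlib

PAGE_SIZE = 4096

page = os.urandom(PAGE_SIZE)      # an incompressible (random) "page"
clen = len(zlib.compress(page))   # compressed size, header overhead included

# clen comes out a little larger than PAGE_SIZE, so the old code's
# zs_malloc(pool, clen) request could never be satisfied.
print(clen > PAGE_SIZE)
```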

For a page that has compressed size larger than the original size (this 
may happen with already compressed or random data), there is no point 
storing the compressed version as that would take more space and would 
also require time for decompression when needed again. So, the fix is to 
store any page, whose compressed size exceeds a threshold 
(max_zpage_size), as-it-is i.e. without compression.  Memory required 
for storing this uncompressed page can then be requested from zsmalloc 
which supports PAGE_SIZE sized allocations.

Lastly, the fix checks that we do not attempt to "decompress" the page 
which we stored in the uncompressed form -- we just memcpy() out such pages.
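The store/load decision described above can be modeled in a few lines. This is a sketch only: zlib stands in for LZO, a dict for the zsmalloc pool, and the 3/4-page threshold mirrors the driver's default max_zpage_size; the function names are hypothetical, not the driver's.

```python
import os
import zlib

PAGE_SIZE = 4096
max_zpage_size = PAGE_SIZE // 4 * 3   # mirrors the driver's default threshold

def store_page(table, index, page):
    """Model of the fixed write path: fall back to storing as-is."""
    buf = zlib.compress(page)          # stand-in for lzo1x_1_compress()
    if len(buf) > max_zpage_size:      # incompressible: keep the raw page,
        buf = page                     # so the request is exactly PAGE_SIZE
    table[index] = buf                 # stand-in for zs_malloc() + copy-in

def load_page(table, index):
    """Model of the fixed read path: never 'decompress' a raw page."""
    buf = table[index]
    if len(buf) == PAGE_SIZE:          # stored uncompressed: just copy out
        return buf
    return zlib.decompress(buf)        # stand-in for lzo1x_decompress_safe()

table = {}
random_page = os.urandom(PAGE_SIZE)   # incompressible
zero_page = bytes(PAGE_SIZE)          # highly compressible
store_page(table, 0, random_page)
store_page(table, 1, zero_page)
```

Both pages round-trip, and the raw page is held at exactly PAGE_SIZE, which is why the read side can use the size == PAGE_SIZE check to tell the two cases apart.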

Thanks,
Nitin


>>
>> Signed-off-by: Nitin Gupta <ngupta@vflare.org>
>> Reported-by: viechweg@gmail.com
>> Reported-by: paerley@gmail.com
>> Reported-by: wu.tommy@gmail.com
>> Tested-by: wu.tommy@gmail.com
>> Tested-by: michael@zugelder.org
>> ---
>>   drivers/staging/zram/zram_drv.c |   12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
>> index 653b074..6edefde 100644
>> --- a/drivers/staging/zram/zram_drv.c
>> +++ b/drivers/staging/zram/zram_drv.c
>> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>>   	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
>>   				ZS_MM_RO);
>>
>> -	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>> +	if (zram->table[index].size == PAGE_SIZE) {
>> +		memcpy(uncmem, cmem, PAGE_SIZE);
>> +		ret = LZO_E_OK;
>> +	} else {
>> +		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>>   				    uncmem, &clen);
>> +	}
>>
>>   	if (is_partial_io(bvec)) {
>>   		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
>> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>>   		goto out;
>>   	}
>>
>> -	if (unlikely(clen > max_zpage_size))
>> +	if (unlikely(clen > max_zpage_size)) {
>>   		zram_stat_inc(&zram->stats.bad_compress);
>> +		src = uncmem;
>> +		clen = PAGE_SIZE;
>> +	}
>>
>>   	handle = zs_malloc(zram->mem_pool, clen);
>>   	if (!handle) {
>> --
>> 1.7.9.5
>>
>



* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-09 17:35   ` Nitin Gupta
@ 2012-10-09 23:36     ` Minchan Kim
  0 siblings, 0 replies; 8+ messages in thread
From: Minchan Kim @ 2012-10-09 23:36 UTC (permalink / raw)
  To: Nitin Gupta
  Cc: Greg KH, Seth Jennings, Sam Hansen, Linux Driver Project,
	linux-kernel

On Tue, Oct 09, 2012 at 10:35:24AM -0700, Nitin Gupta wrote:
> Hi Minchan,
> 
> On 10/09/2012 06:31 AM, Minchan Kim wrote:
> >
> >On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> >>Change 130f315a introduced a bug in the handling of incompressible
> >>pages which resulted in memory allocation failure for such pages.
> >>The fix is to store the page as-is i.e. without compression if the
> >>compressed size exceeds a threshold (max_zpage_size) and request
> >>exactly PAGE_SIZE sized buffer from zsmalloc.
> >
> >It seems you found a bug and already fixed it with the helpers below.
> >But unfortunately, the description isn't enough for me to understand the problem.
> >Could you explain in detail?
> >You said it results in memory allocation failure. What is the failure?
> >Do you mean this code fails because a few pages are needed for a zspage to meet the class size?
> >
> >         handle = zs_malloc(zram->mem_pool, clen);
> >         if (!handle) {
> >                 pr_info("Error allocating memory for compressed "
> >                         "page: %u, size=%zu\n", index, clen);
> >                 ret = -ENOMEM;
> >                 goto out;
> >         }
> >
> >So instead of allocating more pages for an incompressible page to make a zspage,
> >just allocate a page from the PAGE_SIZE class without compression?
> >
> 
> When a page expands on compression, say from 4K to 4K+30, we were
> trying to do zsmalloc(pool, 4K+30). However, the maximum size which
> zsmalloc can allocate is PAGE_SIZE (for obvious reasons), so such
> allocation requests always return failure (0).

Right.
I think it would be better to add this explanation to the description.

> 
> For a page that has compressed size larger than the original size
> (this may happen with already compressed or random data), there is
> no point storing the compressed version as that would take more
> space and would also require time for decompression when needed
> again. So, the fix is to store any page, whose compressed size
> exceeds a threshold (max_zpage_size), as-it-is i.e. without
> compression.  Memory required for storing this uncompressed page can

Yes. That's already the definition of max_zpage_size.

> then be requested from zsmalloc which supports PAGE_SIZE sized
> allocations.

> 
> Lastly, the fix checks that we do not attempt to "decompress" the
> page which we stored in the uncompressed form -- we just memcpy()
> out such pages.
> 
> Thanks,
> Nitin
> 
> 
> >>
> >>Signed-off-by: Nitin Gupta <ngupta@vflare.org>
> >>Reported-by: viechweg@gmail.com
> >>Reported-by: paerley@gmail.com
> >>Reported-by: wu.tommy@gmail.com
> >>Tested-by: wu.tommy@gmail.com
> >>Tested-by: michael@zugelder.org
Anyway, 
Acked-by: Minchan Kim <minchan@kernel.org>

Thanks, Nitin!
-- 
Kind regards,
Minchan Kim


* [PATCH] [staging][zram] Fix handling of incompressible pages
@ 2012-10-11  0:42 Nitin Gupta
  2012-10-22 20:43 ` Greg KH
  0 siblings, 1 reply; 8+ messages in thread
From: Nitin Gupta @ 2012-10-11  0:42 UTC (permalink / raw)
  To: Greg KH
  Cc: Seth Jennings, Minchan Kim, Dan Carpenter, Sam Hansen,
	Linux Driver Project, linux-kernel

Change 130f315a (staging: zram: remove special handle of uncompressed page)
introduced a bug in the handling of incompressible pages which resulted in
memory allocation failure for such pages.

When a page expands on compression, say from 4K to 4K+30, we were trying to
do zsmalloc(pool, 4K+30). However, the maximum size which zsmalloc can
allocate is PAGE_SIZE (for obvious reasons), so such allocation requests
always return failure (0).

For a page that has compressed size larger than the original size (this may
happen with already compressed or random data), there is no point storing
the compressed version as that would take more space and would also require
time for decompression when needed again. So, the fix is to store any page,
whose compressed size exceeds a threshold (max_zpage_size), as-it-is i.e.
without compression.  Memory required for storing this uncompressed page can
then be requested from zsmalloc which supports PAGE_SIZE sized allocations.

Lastly, the fix checks that we do not attempt to "decompress" the page which
we stored in the uncompressed form -- we just memcpy() out such pages.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: viechweg@gmail.com
Reported-by: paerley@gmail.com
Reported-by: wu.tommy@gmail.com
Acked-by: Minchan Kim <minchan@kernel.org>
---
 drivers/staging/zram/zram_drv.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 653b074..6edefde 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
 				ZS_MM_RO);
 
-	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
+	if (zram->table[index].size == PAGE_SIZE) {
+		memcpy(uncmem, cmem, PAGE_SIZE);
+		ret = LZO_E_OK;
+	} else {
+		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
 				    uncmem, &clen);
+	}
 
 	if (is_partial_io(bvec)) {
 		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
@@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		goto out;
 	}
 
-	if (unlikely(clen > max_zpage_size))
+	if (unlikely(clen > max_zpage_size)) {
 		zram_stat_inc(&zram->stats.bad_compress);
+		src = uncmem;
+		clen = PAGE_SIZE;
+	}
 
 	handle = zs_malloc(zram->mem_pool, clen);
 	if (!handle) {
-- 
1.7.9.5



* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-11  0:42 [PATCH] [staging][zram] Fix handling of incompressible pages Nitin Gupta
@ 2012-10-22 20:43 ` Greg KH
  2012-10-22 21:48   ` Nitin Gupta
  0 siblings, 1 reply; 8+ messages in thread
From: Greg KH @ 2012-10-22 20:43 UTC (permalink / raw)
  To: Nitin Gupta
  Cc: Seth Jennings, Minchan Kim, Dan Carpenter, Sam Hansen,
	Linux Driver Project, linux-kernel

On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
> Change 130f315a (staging: zram: remove special handle of uncompressed page)
> introduced a bug in the handling of incompressible pages which resulted in
> memory allocation failure for such pages.
> 
> When a page expands on compression, say from 4K to 4K+30, we were trying to
> do zsmalloc(pool, 4K+30). However, the maximum size which zsmalloc can
> allocate is PAGE_SIZE (for obvious reasons), so such allocation requests
> always return failure (0).
> 
> For a page that has compressed size larger than the original size (this may
> happen with already compressed or random data), there is no point storing
> the compressed version as that would take more space and would also require
> time for decompression when needed again. So, the fix is to store any page,
> whose compressed size exceeds a threshold (max_zpage_size), as-it-is i.e.
> without compression.  Memory required for storing this uncompressed page can
> then be requested from zsmalloc which supports PAGE_SIZE sized allocations.
> 
> Lastly, the fix checks that we do not attempt to "decompress" the page which
> we stored in the uncompressed form -- we just memcpy() out such pages.

So this fix needs to go to the stable 3.6 release also, right?

thanks,

greg k-h


* Re: [PATCH] [staging][zram] Fix handling of incompressible pages
  2012-10-22 20:43 ` Greg KH
@ 2012-10-22 21:48   ` Nitin Gupta
  0 siblings, 0 replies; 8+ messages in thread
From: Nitin Gupta @ 2012-10-22 21:48 UTC (permalink / raw)
  To: Greg KH
  Cc: Seth Jennings, Minchan Kim, Dan Carpenter, Sam Hansen,
	Linux Driver Project, linux-kernel

On Mon, Oct 22, 2012 at 1:43 PM, Greg KH <greg@kroah.com> wrote:
> On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
>> Change 130f315a (staging: zram: remove special handle of uncompressed page)
>> introduced a bug in the handling of incompressible pages which resulted in
>> memory allocation failure for such pages.
>>
>> When a page expands on compression, say from 4K to 4K+30, we were trying to
>> do zsmalloc(pool, 4K+30). However, the maximum size which zsmalloc can
>> allocate is PAGE_SIZE (for obvious reasons), so such allocation requests
>> always return failure (0).
>>
>> For a page that has compressed size larger than the original size (this may
>> happen with already compressed or random data), there is no point storing
>> the compressed version as that would take more space and would also require
>> time for decompression when needed again. So, the fix is to store any page,
>> whose compressed size exceeds a threshold (max_zpage_size), as-it-is i.e.
>> without compression.  Memory required for storing this uncompressed page can
>> then be requested from zsmalloc which supports PAGE_SIZE sized allocations.
>>
>> Lastly, the fix checks that we do not attempt to "decompress" the page which
>> we stored in the uncompressed form -- we just memcpy() out such pages.
>
> So this fix needs to go to the stable 3.6 release also, right?
>

Forgot to mention -- yes, this needs to be in 3.6 also.

Thanks,
Nitin

