linux-arm-kernel.lists.infradead.org archive mirror
From: tixy@linaro.org (Jon Medhurst (Tixy))
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] decompressors: fix "no limit" output buffer length
Date: Mon, 22 Jul 2013 19:08:01 +0100	[thread overview]
Message-ID: <1374516481.14712.3.camel@linaro1.home> (raw)
In-Reply-To: <1374476169-32194-1-git-send-email-acourbot@nvidia.com>

On Mon, 2013-07-22 at 15:56 +0900, Alexandre Courbot wrote:
> When decompressing into memory, the output buffer length is set to some
> arbitrarily high value (0x7fffffff) to indicate that the output is
> virtually unlimited in size.
> 
> The problem with this is that some platforms have their physical memory
> at high physical addresses (0x80000000 or more), and that the output
> buffer address and its "unlimited" length cannot be added without
> overflowing. An example of this can be found in inflate_fast():
> 
> /* next_out is the output buffer address */
> out = strm->next_out - OFF;
> /* avail_out is the output buffer size. end will overflow if the output
>  * address is >= 0x80000104 */
> end = out + (strm->avail_out - 257);
> 
> This has huge consequences for the performance of kernel decompression,
> since the continuation condition of the following loop in inflate_fast()
> will always be false:
> 
> } while (in < last && out < end);
> 
> Indeed, "end" has overflowed and is now always lower than "out". As a
> result, inflate_fast() will return after processing one single byte of
> input data, and will thus need to be called an unreasonably high number
> of times. This probably went unnoticed because kernel decompression is
> fast enough even with this issue.
> 
> Nonetheless, adjusting the output buffer length in such a way that the
> above pointer arithmetic never overflows results in a kernel
> decompression that is about 3 times faster on affected machines.
> 
> Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>

This speeds up booting of my Versatile Express TC2 by 15 seconds when
starting on the A7 cluster :-)

Tested-by: Jon Medhurst <tixy@linaro.org>
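
For anyone who wants to see the failure mode in isolation, here is a
minimal standalone sketch. The buffer address is made up and the 32-bit
arithmetic is emulated with uint32_t; this is an illustration, not the
actual zlib code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Output buffer placed at a high address, as on the affected
	 * boards; the exact value is made up for illustration. */
	uint32_t out       = 0x80008000u;
	uint32_t avail_out = 0x7fffffffu;	/* the old "no limit" value */

	/* 0x80008000 + 0x7ffffefe = 0x100007efe, which wraps to 0x00007efe
	 * in 32-bit arithmetic, so "end" lands below "out". */
	uint32_t end = out + (avail_out - 257);

	/* The loop condition "out < end" is therefore false, and
	 * inflate_fast() keeps bailing out after a tiny amount of work. */
	printf("out=%#x end=%#x out<end: %d\n",
	       (unsigned)out, (unsigned)end, out < end);
	return 0;
}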

> ---
>  lib/decompress_inflate.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/decompress_inflate.c b/lib/decompress_inflate.c
> index 19ff89e..d619b28 100644
> --- a/lib/decompress_inflate.c
> +++ b/lib/decompress_inflate.c
> @@ -48,7 +48,7 @@ STATIC int INIT gunzip(unsigned char *buf, int len,
>  		out_len = 0x8000; /* 32 K */
>  		out_buf = malloc(out_len);
>  	} else {
> -		out_len = 0x7fffffff; /* no limit */
> +		out_len = ((size_t)~0) - (size_t)out_buf; /* no limit */
>  	}
>  	if (!out_buf) {
>  		error("Out of memory while allocating output buffer");
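
For completeness, a similar sketch of why the replacement expression is
safe (again with a made-up buffer address; an illustration, not the
kernel code): the length is exactly the distance from out_buf to the top
of the address space, so out_buf + out_len can never wrap.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up output buffer address, for illustration only. */
	uintptr_t out_buf = 0x80008000u;

	/* New "no limit" length: out_buf + out_len == (size_t)~0 exactly,
	 * so later pointer arithmetic on the output buffer cannot wrap. */
	size_t out_len = ((size_t)~0) - (size_t)out_buf;

	printf("out_buf + out_len = %#zx ((size_t)~0 = %#zx)\n",
	       (size_t)out_buf + out_len, (size_t)~0);
	return 0;
}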

Thread overview: 5+ messages
2013-07-22  6:56 [PATCH] decompressors: fix "no limit" output buffer length Alexandre Courbot
2013-07-22 18:08 ` Jon Medhurst (Tixy) [this message]
2013-07-23  2:15   ` Alex Courbot
2013-07-23  3:32     ` Stephen Warren
2013-07-23  5:01       ` Alex Courbot
