public inbox for linux-kernel@vger.kernel.org
From: Pavel Machek <pavel@ucw.cz>
To: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Andrew Morton <akpm@osdl.org>, Andy Isaacson <adi@hexapodia.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH][mm][Fix] swsusp: fix counting of highmem pages
Date: Sat, 3 Dec 2005 22:40:20 +0100	[thread overview]
Message-ID: <20051203214020.GA5198@elf.ucw.cz> (raw)
In-Reply-To: <200512032140.15192.rjw@sisk.pl>

Hi!

> The following patch fixes a problem with swsusp that causes suspend to
> fail on systems with the highmem zone, if many highmem pages are in use.
> 
> It makes swsusp count the non-free highmem pages in a correct way
> and, consequently, release a sufficient amount of memory before suspend.
> 
> Please apply (Pavel, please ack if you think the patch is ok).

Please don't, it's way too complex in my eyes. Sorry, this is the
result of a miscommunication between me and Rafael.

> +static inline unsigned int get_kmalloc_size(void)
> +{
> +#define CACHE(x) \
> +	if (sizeof(struct highmem_page) <= x) \
> +		return x;
> +#include <linux/kmalloc_sizes.h>
> +#undef CACHE
> +	return sizeof(struct highmem_page);
> +}
> +

Can we get rid of this ugliness...

> @@ -437,8 +446,14 @@
>  
>  static int enough_free_mem(unsigned int nr_pages)
>  {
> -	pr_debug("swsusp: available memory: %u pages\n", nr_free_pages());
> -	return nr_free_pages() > (nr_pages + PAGES_FOR_IO +
> +	struct zone *zone;
> +	unsigned int n = 0;
> +
> +	for_each_zone (zone)
> +		if (!is_highmem(zone))
> +			n += zone->free_pages;
> +	pr_debug("swsusp: available memory: %u pages\n", n);
> +	return n > (nr_pages + PAGES_FOR_IO +
>  		(nr_pages + PBES_PER_PAGE - 1) / PBES_PER_PAGE);
>  }
>  

And just use the 2% approximation here, too?

> Index: linux-2.6.15-rc3-mm1/kernel/power/swsusp.c
> ===================================================================
> --- linux-2.6.15-rc3-mm1.orig/kernel/power/swsusp.c	2005-12-03 00:14:49.000000000 +0100
> +++ linux-2.6.15-rc3-mm1/kernel/power/swsusp.c	2005-12-03 21:25:07.000000000 +0100
> @@ -635,7 +635,8 @@
>  	printk("Shrinking memory...  ");
>  	do {
>  #ifdef FAST_FREE
> -		tmp = count_data_pages() + count_highmem_pages();
> +		tmp = 2 * count_highmem_pages();
> +		tmp += tmp / 50 + count_data_pages();
>  		tmp += (tmp + PBES_PER_PAGE - 1) / PBES_PER_PAGE +
>  			PAGES_FOR_IO;
>  		for_each_zone (zone)

This part is okay. Just make enough_free_mem use similar code. (If
possible, share the code; it is really computing the same thing.)

								Pavel
-- 
Thanks, Sharp!

Thread overview: 9+ messages
2005-12-03 20:40 [PATCH][mm][Fix] swsusp: fix counting of highmem pages Rafael J. Wysocki
2005-12-03 21:40 ` Pavel Machek [this message]
2005-12-03 23:11   ` Rafael J. Wysocki
2005-12-03 23:50     ` Pavel Machek
2005-12-04  0:02       ` Rafael J. Wysocki
2005-12-04  0:10         ` Pavel Machek
2005-12-04  0:26           ` Rafael J. Wysocki
2005-12-04  0:35             ` Pavel Machek
2005-12-04  0:57               ` Rafael J. Wysocki
