public inbox for linux-pm@vger.kernel.org
From: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
To: linux-pm@lists.linux-foundation.org
Subject: Memory consumption difference between in-kernel and userspace hibernation
Date: Thu, 12 Nov 2009 21:01:55 +0100
Message-ID: <20091112210155.0f9bbe5d@surf>

Hi,

Let me first introduce my question, and then give details about the
context.

Question: is there any difference in memory requirements between
in-kernel hibernation (echo disk > /sys/power/state) and the userspace
hibernation interface (through /dev/snapshot)? With exactly the same
userspace workload and applications running, in-kernel hibernation
works, but hibernation through the userspace interface fails because
not enough memory can be freed.

Now, the context.

I'm implementing hibernation on an embedded device, which has no swap
since the only storage available is NAND flash.

I started with the in-kernel hibernation mechanism, saving the resume
image directly into an MTD partition declared as swap just before
starting the hibernation process (swapon /dev/mtdblockX; echo
disk > /sys/power/state). This worked like a charm.

But writing the resume image directly to the MTD partition is not
satisfactory, since it handles neither bad erase blocks nor wear
leveling. Therefore, I wanted to save the resume image into a file
inside a JFFS2 or YAFFS2 filesystem. For this, I used the
/dev/snapshot userspace interface to swsusp. With a light workload, it
works perfectly (both suspend and resume). But with a workload similar
to the one tested with in-kernel hibernation, things fail at the
SNAPSHOT_ATOMIC_SNAPSHOT ioctl() step, which returns ENOMEM.

To get some details about the issue, I've added a few printk()s in
swsusp_shrink_memory(). Here is the patch:

==================================================================
--- foo.orig/kernel/power/swsusp.c
+++ foo/kernel/power/swsusp.c
@@ -226,15 +226,20 @@
                highmem_size = count_highmem_pages();
                size = count_data_pages() + PAGES_FOR_IO;
                tmp = size;
+               printk("size=%d\n", size);
                size += highmem_size;
                for_each_zone (zone)
                        if (populated_zone(zone)) {
                                if (is_highmem(zone)) {
                                        highmem_size -= zone->free_pages;
                                } else {
+                                       printk("1 tmp=%d\n", tmp);
                                        tmp -= zone->free_pages;
+                                       printk("2 tmp=%d\n", tmp);
                                        tmp += zone->lowmem_reserve[ZONE_NORMAL];
+                                       printk("3 tmp=%d\n", tmp);
                                        tmp += snapshot_additional_pages(zone);
+                                       printk("4 tmp=%d\n", tmp);
                                }
                        }
 
@@ -243,9 +248,12 @@
 
                tmp += highmem_size;
                if (tmp > 0) {
+                       printk("trying to free %d pages\n", tmp);
                        tmp = __shrink_memory(tmp);
-                       if (!tmp)
+                       if (!tmp) {
+                               printk("\bfailed, ENOMEM\n");
                                return -ENOMEM;
+                       }
                        pages += tmp;
                } else if (size > image_size / PAGE_SIZE) {
                        tmp = __shrink_memory(size - (image_size / PAGE_SIZE));
==================================================================

I get the following output:

==================================================================
Stopping tasks ... done.
Shrinking memory...  size=6967
1 tmp=6967
2 tmp=6639
3 tmp=6639
4 tmp=6643
trying to free 6643 pages
-size=4036
1 tmp=4036
2 tmp=777
3 tmp=777
4 tmp=781
trying to free 781 pages
failed, ENOMEM
Restarting tasks ... done.
==================================================================

Note 1: I've already reduced PAGES_FOR_IO from 1024 to 128.

Note 2: As usual in the embedded space, I'm stuck with an old 2.6.25
        kernel.

Any idea why it works with the in-kernel solution and not the
userspace one?

Thanks a lot for your inputs,

Thomas
-- 
Thomas Petazzoni, Free Electrons
Kernel, drivers and embedded Linux development,
consulting, training and support.
http://free-electrons.com

Thread overview: 7+ messages
2009-11-12 20:01 Thomas Petazzoni [this message]
2009-11-12 20:52 ` Memory consumption difference between in-kernel and userspace hibernation Rafael J. Wysocki
2009-11-12 21:12   ` Thomas Petazzoni
2009-11-13 20:05     ` Rafael J. Wysocki
2009-11-21  9:29       ` Pavel Machek
2009-11-13  9:52 ` Thomas Petazzoni
2009-11-13 16:28   ` Rafael J. Wysocki
