From: Askar Safin <safinaskar@gmail.com>
To: gmazyland@gmail.com
Cc: Dell.Client.Kernel@dell.com, dm-devel@lists.linux.dev,
linux-block@vger.kernel.org, linux-btrfs@vger.kernel.org,
linux-crypto@vger.kernel.org, linux-lvm@lists.linux.dev,
linux-mm@kvack.org, linux-pm@vger.kernel.org,
linux-raid@vger.kernel.org, lvm-devel@lists.linux.dev,
mpatocka@redhat.com, pavel@ucw.cz, rafael@kernel.org,
safinaskar@gmail.com
Subject: Re: [RFC PATCH 2/2] swsusp: make it possible to hibernate to device mapper devices
Date: Tue, 23 Dec 2025 09:33:55 +0300 [thread overview]
Message-ID: <20251223063355.2740782-1-safinaskar@gmail.com> (raw)
In-Reply-To: <86300955-72e4-42d5-892d-f49bdf14441e@gmail.com>
Milan Broz <gmazyland@gmail.com> wrote:
> Anyway, my understanding is that all device-mapper targets use mempools,
> which should ensure that they can process even under memory pressure.
Okay, I just read some more code and docs.
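For readers following along, the mempool pattern you refer to looks
roughly like this. This is a minimal sketch with made-up names
(example_ctr, io_pool), not actual dm target code:

    #include <linux/gfp.h>
    #include <linux/mempool.h>

    /* Illustrative sketch only: a dm target pre-reserves objects in a
     * mempool so that per-bio allocations can always make progress,
     * even under memory pressure. */
    static mempool_t *io_pool;

    static int example_ctr(void)
    {
            /* Reserve 16 objects of 256 bytes up front. */
            io_pool = mempool_create_kmalloc_pool(16, 256);
            return io_pool ? 0 : -ENOMEM;
    }

    static void *example_alloc_io(void)
    {
            /* With a gfp mask that allows sleeping, mempool_alloc()
             * waits for an element to be returned to the reserve
             * instead of failing, so the target never sees -ENOMEM
             * on this path. */
            return mempool_alloc(io_pool, GFP_NOIO);
    }
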
Fortunately, dm-integrity uses bufio only for its checksums.
Bufio allocates memory without __GFP_IO, so an allocation there should
not recurse back into the I/O path. Bufio also claims that "dm-bufio is
resistant to allocation failures":
https://elixir.bootlin.com/linux/v6.19-rc2/source/drivers/md/dm-bufio.c#L1603
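For illustration, the general no-recursion mechanism looks like this.
This is a sketch using the generic memalloc_noio scope API; it is not
necessarily how bufio masks __GFP_IO internally:

    #include <linux/sched/mm.h>
    #include <linux/slab.h>

    /* Why allocating without __GFP_IO avoids recursion: if an
     * allocation made inside the I/O path were allowed to start I/O
     * for reclaim, it could re-enter the same device.  The scope API
     * below makes every allocation in the region behave as GFP_NOIO. */
    static void *alloc_in_io_path(size_t size)
    {
            unsigned int noio_flags;
            void *buf;

            noio_flags = memalloc_noio_save();
            /* Reclaim triggered here will not issue new I/O. */
            buf = kmalloc(size, GFP_KERNEL);
            memalloc_noio_restore(noio_flags);
            return buf;
    }
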
Still, this seems fragile to me.
So I will switch dm-integrity to mode 'D' (direct writes, bypassing the
journal) and hope for the best. :)
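
For anyone who wants to do the same, the mode is the fifth parameter of
the integrity target line. This is an illustrative example only: the
device path, sector count, and target name are placeholders, and it
assumes the device was already formatted with matching integrity
parameters:

    # 'D' = direct writes: integrity metadata is updated in place,
    # bypassing the journal.
    dmsetup create swap-int --table \
      "0 8388608 integrity /dev/sdb1 0 4 D 1 internal_hash:crc32c"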
--
Askar Safin
Thread overview: 26+ messages
2025-10-23 11:29 dm bug: hibernate to swap located on dm-integrity doesn't work (how to get data redundancy for swap?) Askar Safin
2025-10-23 20:42 ` Milan Broz
2025-10-24 16:31 ` Askar Safin
2025-10-24 17:50 ` Milan Broz
2025-10-25 5:26 ` Askar Safin
2025-10-27 8:08 ` Askar Safin
[not found] ` <4cd2d217-f97d-4923-b852-4f8746456704@mazyland.cz>
2025-10-24 10:23 ` [PATCH] pm-hibernate: flush block device cache when hibernating Mikulas Patocka
2025-10-27 8:42 ` Askar Safin
2025-10-31 19:29 ` Mikulas Patocka
2025-10-31 19:33 ` [PATCH 1/2] pm-hibernate: flush disk cache when suspending Mikulas Patocka
2025-11-03 15:53 ` Askar Safin
2025-11-22 13:51 ` Milan Broz
2025-11-22 20:33 ` Askar Safin
2025-11-22 22:47 ` Askar Safin
2025-11-24 19:51 ` Mikulas Patocka
2025-10-31 19:35 ` [RFC PATCH 2/2] swsusp: make it possible to hibernate to device mapper devices Mikulas Patocka
2025-11-30 0:56 ` Askar Safin
2025-12-17 23:18 ` Askar Safin
2025-12-22 15:03 ` Milan Broz
2025-12-22 22:24 ` Askar Safin
2025-12-23 1:41 ` Askar Safin
2025-12-23 5:29 ` Askar Safin
2025-12-23 6:33 ` Askar Safin [this message]
2025-10-29 13:31 ` [PATCH] pm-hibernate: flush block device cache when hibernating Rafael J. Wysocki
2025-10-29 14:38 ` Christoph Hellwig
2025-10-29 16:31 ` Mikulas Patocka