From: Askar Safin <safinaskar@gmail.com>
To: gmazyland@gmail.com
Cc: Dell.Client.Kernel@dell.com, dm-devel@lists.linux.dev,
	linux-block@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-lvm@lists.linux.dev,
	linux-mm@kvack.org, linux-pm@vger.kernel.org,
	linux-raid@vger.kernel.org, lvm-devel@lists.linux.dev,
	mpatocka@redhat.com, pavel@ucw.cz, rafael@kernel.org
Subject: Re: [RFC PATCH 2/2] swsusp: make it possible to hibernate to device mapper devices
Date: Tue, 23 Dec 2025 08:29:52 +0300	[thread overview]
Message-ID: <20251223052952.2623971-1-safinaskar@gmail.com> (raw)
In-Reply-To: <86300955-72e4-42d5-892d-f49bdf14441e@gmail.com>

Milan Broz <gmazyland@gmail.com>:
> Anyway, my understanding is that all device-mapper targets use mempools,
> which should ensure that they can process even under memory pressure.

I have used journal mode so far, but, as far as I understand, direct mode is
fine for my use case.

Okay, I spent some time carefully reading the dm-integrity source code.

I read v6.12.48, because that is the kernel I use.

And I conclude that the dm-integrity code never allocates (not even from a mempool)...
...in the main code paths (as opposed to the initialization code paths)...
...in direct ('D') mode...
...as long as I/O doesn't fail and checksums match.

(As I said in a previous mail, mempools are problematic too, as far as I understand.)

I found exactly one place where we seem to allocate in a main code path:
https://elixir.bootlin.com/linux/v6.12.48/source/drivers/md/dm-integrity.c#L1789
(i.e. those two kmalloc's).

But I think this is okay, because:
- we pass GFP_NOIO, so, as far as I understand, this cannot lead to recursion
into our own I/O path
- we pass __GFP_NORETRY, so, as far as I understand, we will not block in
this kmalloc for too long
- we gracefully handle a possible allocation failure
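
To illustrate the pattern I mean, here is a simplified sketch (this is not a
quote of dm-integrity.c; the buffer name, the length and the fallback are made
up by me):

        void *buf;

        /*
         * GFP_NOIO: reclaim must not recurse into the I/O path, so the
         * allocation cannot deadlock against our own device.
         * __GFP_NORETRY: fail quickly instead of blocking and retrying.
         */
        buf = kmalloc(len, GFP_NOIO | __GFP_NORETRY);
        if (unlikely(!buf)) {
                /* graceful fallback, e.g. a small on-stack buffer */
                buf = onstack_buf;
        }

So even if the kmalloc fails under memory pressure, the I/O can still make
progress, which is what matters for my use case.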

The other suspicious place I found is this:
https://elixir.bootlin.com/linux/v6.12.48/source/drivers/md/dm-integrity.c#L1704 .

But I think this is okay, because:
- integrity_recheck is only ever called from here:
https://elixir.bootlin.com/linux/v6.12.48/source/drivers/md/dm-integrity.c#L1857
- that integrity_recheck call only ever happens if dm_integrity_rw_tag failed
- as far as I understand, dm_integrity_rw_tag can only fail if we got an actual
I/O error or a checksum mismatch
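
In other words, the call structure around that mempool_alloc is roughly this
(just a sketch of how I read it, with all arguments elided; this is not the
real code):

        r = dm_integrity_rw_tag(...);
        if (unlikely(r)) {
                /*
                 * Reached only on an actual I/O error or a checksum
                 * mismatch, never on the normal, successful path;
                 * this is where the mempool_alloc happens.
                 */
                integrity_recheck(...);
        }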

So, this mempool_alloc call is okay for my use case.

So: in 'D' mode everything should be okay for my use case.

Another note: I used a very naive way to find functions that allocate:
if a function has "alloc" in its name, I consider it allocating. :)

And a final note: there is an elephant in the room: bufio.

As far as I understand, when pages are swapped out in my use case, they will
first go to the dm-integrity bufio cache, and only after that will they
actually hit the disk.

This, of course, defeats the whole purpose of swap.

And it can possibly lead to deadlocks.

Is there a way to disable bufio?

Or maybe bufio is used for checksums and metadata only?

-- 
Askar Safin


Thread overview: 26+ messages
2025-10-23 11:29 dm bug: hibernate to swap located on dm-integrity doesn't work (how to get data redundancy for swap?) Askar Safin
2025-10-23 20:42 ` Milan Broz
2025-10-24 16:31   ` Askar Safin
2025-10-24 17:50     ` Milan Broz
2025-10-25  5:26     ` Askar Safin
2025-10-27  8:08   ` Askar Safin
     [not found] ` <4cd2d217-f97d-4923-b852-4f8746456704@mazyland.cz>
2025-10-24 10:23   ` [PATCH] pm-hibernate: flush block device cache when hibernating Mikulas Patocka
2025-10-27  8:42     ` Askar Safin
2025-10-31 19:29       ` Mikulas Patocka
2025-10-31 19:33         ` [PATCH 1/2] pm-hibernate: flush disk cache when suspending Mikulas Patocka
2025-11-03 15:53           ` Askar Safin
2025-11-22 13:51             ` Milan Broz
2025-11-22 20:33           ` Askar Safin
2025-11-22 22:47           ` Askar Safin
2025-11-24 19:51             ` Mikulas Patocka
2025-10-31 19:35         ` [RFC PATCH 2/2] swsusp: make it possible to hibernate to device mapper devices Mikulas Patocka
2025-11-30  0:56           ` Askar Safin
2025-12-17 23:18           ` Askar Safin
2025-12-22 15:03             ` Milan Broz
2025-12-22 22:24               ` Askar Safin
2025-12-23  1:41               ` Askar Safin
2025-12-23  5:29               ` Askar Safin [this message]
2025-12-23  6:33               ` Askar Safin
2025-10-29 13:31     ` [PATCH] pm-hibernate: flush block device cache when hibernating Rafael J. Wysocki
2025-10-29 14:38       ` Christoph Hellwig
2025-10-29 16:31         ` Mikulas Patocka
