From: Esteban Cerutti <esteban.cerutti@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [RFC] block/nvme: exploring asynchronous durability notification semantics
Date: Sun, 19 Apr 2026 18:35:53 -0300 [thread overview]
Message-ID: <aeVKuVLhpjkNxLtg@yanara> (raw)
Hi Hannes, Christoph,
Thank you both for the detailed replies. They were very helpful and
corrected several misconceptions on my side.
In particular, I now understand that:
- reliance on synchronous flushes is not universal and largely depends
on filesystem design (e.g. ext4 vs btrfs),
- FUA already provides a per-request durability guarantee when needed,
- and that power-backed guarantees at the system level are much harder
to make reliable than they may appear.
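The distinction in the second point above can be sketched from userspace. This is a minimal, illustrative Python sketch (not kernel code): O_DSYNC requests per-write durability, which the kernel can satisfy with a FUA write on devices that support it, while a plain write followed by fsync() typically implies a device-wide cache flush. The temporary path handling is incidental to the point.

```python
import os
import tempfile

path = tempfile.NamedTemporaryFile(delete=False).name

# Per-request durability: O_DSYNC asks that each write() return only
# once the data is stable. On hardware advertising FUA, the kernel can
# honor this with a single FUA write rather than a full cache flush.
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)
os.write(fd, b"journal record\n")
os.close(fd)

# Explicit flush: a plain write followed by fsync(), which in general
# forces the device's entire volatile write cache to be flushed.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"second record\n")
os.fsync(fd)
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
print(data)
```

Both writes end up durable; the difference is only in how much work the device must do per request.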
Based on your feedback, I took some time to read more about how this
problem is handled in practice, especially at the filesystem and
database level (e.g. journaling, WAL, group commit, batching of fsync).
It seems that the cost of durability in fsync-heavy workloads is indeed
well known, but is typically addressed at higher layers rather than in
the block layer or device interface.
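As a concrete example of such a higher-layer optimization, here is a hypothetical group-commit sketch in Python: many logical commits are staged, and a single fsync() makes them all durable at once. The class and method names are illustrative only; real WAL implementations add worker threads, timeouts, and error handling.

```python
import os
import tempfile

class GroupCommitLog:
    """Illustrative group commit: N commits share one fsync()."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_APPEND, 0o600)
        self.pending = 0   # records written but not yet durable
        self.flushes = 0   # how many fsync() calls were actually paid

    def commit(self, record: bytes):
        # Stage the record; durability is deferred to the next flush.
        os.write(self.fd, record)
        self.pending += 1

    def flush(self):
        # One fsync makes every pending commit durable at once.
        if self.pending:
            os.fsync(self.fd)
            self.flushes += 1
            self.pending = 0

path = tempfile.NamedTemporaryFile(delete=False).name
log = GroupCommitLog(path)
for i in range(100):
    log.commit(f"txn {i}\n".encode())
log.flush()
print(log.flushes)  # one fsync paid for 100 commits
os.close(log.fd)
os.unlink(path)
```

The point is that the batching decision lives entirely above the block layer: the kernel only ever sees one flush request.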
So rather than defending my original idea, I would like to refine the
question.
Is there a fundamental reason why these optimizations are handled
exclusively at the filesystem / application level, instead of being
exposed (or assisted) at a lower level such as the block layer or
device interface?
For example, is it due to:
- lack of reliable visibility into device-internal persistence state,
- difficulty in defining correct semantics across different hardware,
- or simply that higher layers already have enough information to
optimize safely (making lower-level support unnecessary)?
I realize that my earlier proposal likely underestimated the complexity
involved, but I am still trying to understand whether there is a
meaningful limitation at the lower layers, or if this is primarily a
question of where the problem is best solved.
Thanks again for your time and for the insights — this has already been
very useful for understanding how these pieces fit together.
Best regards,
Esteban
Thread overview: 4+ messages
2026-04-19 21:35 Esteban Cerutti [this message]
2026-04-02 21:22 [RFC] block/nvme: exploring asynchronous durability notification semantics Esteban Cerutti
2026-04-05 12:58 ` Hannes Reinecke
2026-04-07 5:48 ` Christoph Hellwig