public inbox for linux-kernel@vger.kernel.org
* [RFC] block/nvme: exploring asynchronous durability notification semantics
@ 2026-04-02 21:22 Esteban Cerutti
  2026-04-05 12:58 ` Hannes Reinecke
  2026-04-07  5:48 ` Christoph Hellwig
  0 siblings, 2 replies; 4+ messages in thread
From: Esteban Cerutti @ 2026-04-02 21:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-block, linux-nvme

Hi,

I would like to explore whether current NVMe completion semantics
unnecessarily conflate execution completion with durability, and
whether there is room for a more explicit, asynchronous durability
notification model between host and device.

Today, a successful write completion indicates that the command has
been executed, but not necessarily that the data has reached
non-volatile media, unless FUA or a Flush is used. This forces the
kernel and filesystems to assume worst-case durability behavior and
to rely on synchronous flushes and FUA writes for safety.
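
For reference, this is roughly the only durability lever the host has
today. A minimal kernel-side sketch, assuming the bio_alloc()
signature of v5.18 and later; the helper name and the endio callback
are placeholders, not existing interfaces:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Today's model in miniature: the only way to get a durability
 * guarantee for one write is to flush the volatile cache first
 * (REQ_PREFLUSH) and/or force this particular write to stable media
 * (REQ_FUA).  Completion of the bio then implies both execution and
 * persistence.
 */
static void submit_durable_write(struct block_device *bdev,
                                 struct page *page, sector_t sector,
                                 bio_end_io_t *endio)
{
        struct bio *bio = bio_alloc(bdev, 1,
                                    REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA,
                                    GFP_NOFS);

        bio->bi_iter.bi_sector = sector;
        __bio_add_page(bio, page, PAGE_SIZE, 0);
        bio->bi_end_io = endio;
        submit_bio(bio);
}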

The device internally knows when data is staged in volatile buffers
versus committed to NAND (or equivalent persistent media), but this
information is not exposed to the host.

This RFC explores a potential extension model with two components
(a purely illustrative host-side sketch follows the list):

1) Multi-phase completion semantics

   - Normal completion continues to signal execution.
   - The device assigns a persistence token ID.
   - When the data is physically committed to non-volatile media,
     the device emits an asynchronous durability confirmation
     referencing that token.

This would decouple execution throughput from durability
confirmation and potentially allow filesystems to close journal
transactions only upon confirmed persistence, without forcing
synchronous flush fences.

2) Advisory write intent classification

   - Host-provided hints such as EPHEMERAL, STANDARD, or CRITICAL.
   - CRITICAL writes would request immediate durability.
   - EPHEMERAL writes could tolerate extended volatile staging.
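
To make both components concrete, a purely illustrative host-side
sketch follows. Every identifier below is invented for this RFC and
does not correspond to any existing NVMe or kernel interface:

/* Hint classes from component 2, attached at write submission. */
enum write_durability_hint {
        WRITE_HINT_EPHEMERAL,   /* may stay in volatile buffers longer */
        WRITE_HINT_STANDARD,    /* today's behaviour */
        WRITE_HINT_CRITICAL,    /* wants immediate durability (FUA-like) */
};

/*
 * Component 1: the normal completion still signals execution and now
 * also carries a persistence token; later, an asynchronous event
 * referencing that token tells the host the data has reached
 * non-volatile media.
 */
struct durability_event {
        __u64   persistence_token;      /* token from the write completion */
        __u32   nsid;                   /* namespace the data belongs to */
        __u16   status;                 /* committed / discarded / error */
        __u16   rsvd;
};

A filesystem could then tag its journal writes, collect the tokens,
and close a transaction once durability events for all of them have
arrived, instead of issuing a flush.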

Additionally, I am curious whether host power-state awareness
could be relevant in such a model. For example, if the kernel
can detect battery-backed operation or confirmed UPS
infrastructure, it could advertise a bounded persistence
relaxation window (e.g. guaranteed power for N ms), allowing
the device to safely extend volatile staging within that
window. This would be advisory and revocable, not a mandatory
trust model.
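
As a purely invented sketch of such an advisory, revocable hint
(neither the structure nor the field names exist in NVMe or the
kernel today):

/* Invented for this RFC -- not an existing NVMe or kernel structure. */
struct host_power_hint {
        __u32   guaranteed_power_ms;    /* window the host vouches for;
                                           0 revokes any earlier hint */
        __u32   flags;                  /* e.g. source: battery, UPS, ... */
};

The host would refresh the hint periodically and set
guaranteed_power_ms to 0 as soon as it loses confidence, so the
device never treats it as a standing promise.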

Questions for discussion:

- Has asynchronous durability acknowledgment been previously
  explored in NVMe or block-layer discussions?

- Are there fundamental architectural reasons why separating
  execution completion from durability confirmation would not
  be viable?

- Would such semantics belong strictly in NVMe specification
  work, or is there room for experimentation in the Linux NVMe
  driver as a prototype?

- Are there known workloads where this model would clearly fail
  or provide no measurable benefit?

This is not a proposal for immediate implementation, but an
attempt to identify whether the current binary durability model
(completion vs flush) leaves performance or efficiency on the
table due to lack of explicit state sharing between host and
device.

Feedback, criticism, or pointers to prior art are very welcome.

Thanks,
Esteban Cerutti

* [RFC] block/nvme: exploring asynchronous durability notification semantics
@ 2026-04-19 21:35 Esteban Cerutti
  0 siblings, 0 replies; 4+ messages in thread
From: Esteban Cerutti @ 2026-04-19 21:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-block, linux-nvme

Hi Hannes, Christoph,

Thank you both for the detailed replies. They were very helpful and
corrected several misconceptions on my side.

In particular, I now understand that:

- reliance on synchronous flushes is not universal and largely depends
  on filesystem design (e.g. ext4 vs btrfs),
- FUA already provides a per-request durability guarantee when needed
  (see the userspace sketch after this list),
- and that power-backed guarantees at the system level are much harder
  to make reliable than they may appear.
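
To check my understanding of the FUA point, here is a small userspace
sketch of requesting per-write durability. It assumes a reasonably
recent kernel and glibc for pwritev2() and RWF_DSYNC, and the helper
name is my own:

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>

/*
 * Per-request durability from userspace: RWF_DSYNC gives this single
 * write O_DSYNC semantics, so only this request pays the durability
 * cost.  Depending on filesystem and device, the kernel can serve it
 * as an FUA write instead of a full cache flush.
 */
static ssize_t durable_pwrite(int fd, const void *buf, size_t len, off_t off)
{
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };

        return pwritev2(fd, &iov, 1, off, RWF_DSYNC);
}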

Based on your feedback, I took some time to read more about how this
problem is handled in practice, especially at the filesystem and
database level (e.g. journaling, WAL, group commit, batching of fsync).
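
As a toy example of the group-commit / fsync-batching pattern
(illustrative only; real databases add ordering, error handling and
per-client acknowledgements):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Group commit in miniature: several independent log records are
 * appended with cheap write()s, then one fsync() makes the whole
 * batch durable, amortizing a single cache flush over all of them.
 */
int main(void)
{
        const char *records[] = { "txn1\n", "txn2\n", "txn3\n" };
        int fd = open("wal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int i;

        if (fd < 0)
                return 1;

        for (i = 0; i < 3; i++)
                if (write(fd, records[i], strlen(records[i])) < 0)
                        return 1;

        if (fsync(fd))          /* one flush covers all three "transactions" */
                return 1;

        return close(fd) ? 1 : 0;
}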

It seems that the cost of durability in fsync-heavy workloads is indeed
well known, but is typically addressed at higher layers rather than in
the block layer or device interface.

So rather than defending my original idea, I would like to refine the
question.

Is there a fundamental reason why these optimizations are handled
exclusively at the filesystem / application level, instead of being
exposed (or assisted) at a lower level such as the block layer or
device interface?

For example, is it due to:

  - lack of reliable visibility into device-internal persistence state,
  - difficulty in defining correct semantics across different hardware,
  - or simply that higher layers already have enough information to
    optimize safely (making lower-level support unnecessary)?

I realize that my earlier proposal likely underestimated the complexity
involved, but I am still trying to understand whether there is a
meaningful limitation at the lower layers, or if this is primarily a
question of where the problem is best solved.

Thanks again for your time and for the insights — this has already been
very useful for understanding how these pieces fit together.

Best regards,
Esteban
