public inbox for linux-nvme@lists.infradead.org
* [RFC] block/nvme: exploring asynchronous durability notification semantics
@ 2026-04-02 21:22 Esteban Cerutti
  2026-04-05 12:58 ` Hannes Reinecke
  0 siblings, 1 reply; 2+ messages in thread
From: Esteban Cerutti @ 2026-04-02 21:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-block, linux-nvme

Hi,

I would like to explore whether current NVMe completion semantics
unnecessarily conflate execution completion with durability, and
whether there is room for a more explicit, asynchronous durability
notification model between host and device.

Today, a successful write completion indicates command execution,
but not necessarily physical persistence to non-volatile media unless
FUA or Flush is used. This forces the kernel and filesystems to assume
worst-case durability behavior and rely on synchronous flushes and
barriers for safety.
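
For reference, the two mechanisms the host has today, condensed
into a sketch against the current in-kernel bio API (simplified,
error handling omitted):

  #include <linux/bio.h>
  #include <linux/blkdev.h>

  /* Per-request durability: FUA makes completion imply that this
   * write, and only this write, is on stable media. */
  static void write_durable(struct block_device *bdev,
                            struct page *page, sector_t sector)
  {
          struct bio *bio = bio_alloc(bdev, 1,
                                      REQ_OP_WRITE | REQ_FUA, GFP_NOIO);

          bio->bi_iter.bi_sector = sector;
          __bio_add_page(bio, page, PAGE_SIZE, 0);
          submit_bio(bio);
  }

  /* Global durability: flush the device's entire volatile cache. */
  static int sync_everything(struct block_device *bdev)
  {
          return blkdev_issue_flush(bdev);
  }

Everything in between ("this will become durable eventually, but I
don't know when") is invisible to the host.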

The device internally knows when data is staged in volatile buffers
versus committed to NAND (or equivalent persistent media), but this
information is not exposed to the host.

This RFC explores a potential extension model with two components:

1) Multi-phase completion semantics

   - Normal completion continues to signal execution.
   - The device assigns a persistence token ID.
   - When the data is physically committed to non-volatile media,
     the device emits an asynchronous durability confirmation
     referencing that token.

This would decouple execution throughput from durability
confirmation and potentially allow filesystems to close journal
transactions only upon confirmed persistence, without forcing
synchronous flush fences.
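
To make the first component concrete, a purely hypothetical sketch
of what such a confirmation could carry (nothing like this exists
in the NVMe spec today; the layout is illustrative only):

  /* Hypothetical event payload, e.g. delivered through the
   * existing Asynchronous Event Request mechanism. */
  struct nvme_durability_event {
          __le64 token;      /* persistence token from the write CQE */
          __le16 status;     /* durable / failed / still pending */
          __le16 rsvd;
          __le32 latency_us; /* completion-to-persistence delay */
  };

A journaling filesystem could then hold a transaction open until
the tokens covering its log writes are confirmed, rather than
issuing a flush and waiting on it.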

2) Advisory write intent classification

   - Host-provided hints such as EPHEMERAL, STANDARD, or CRITICAL.
   - CRITICAL writes would request immediate durability.
   - EPHEMERAL writes could tolerate extended volatile staging.
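
As a strawman, the host-side classification could look something
like this (hypothetical names and values, similar in spirit to the
existing write-lifetime hints), with the value carried in a
currently unused field of the write command:

  enum nvme_durability_hint {
          NVME_DHINT_STANDARD  = 0, /* today's semantics */
          NVME_DHINT_EPHEMERAL = 1, /* may stay volatile longer */
          NVME_DHINT_CRITICAL  = 2, /* persist ASAP, confirm early */
  };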

Additionally, I am curious whether host power-state awareness
could be relevant in such a model. For example, if the kernel
can detect battery-backed operation or confirmed UPS
infrastructure, it could advertise a bounded persistence
relaxation window (e.g. guaranteed power for N ms), allowing
the device to safely extend volatile staging within that
window. This would be advisory and revocable, not a mandatory
trust model.
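
If that were modeled as a Set Features attribute, the host side
could be very small. A sketch against the driver's existing
nvme_set_features() helper, with a hypothetical feature ID from
the vendor-specific range:

  /* Hypothetical FID; not allocated in the NVMe specification. */
  #define NVME_FEAT_PERSIST_RELAX 0xC8

  /* Advertise a bounded window (in ms) during which the device
   * may keep acknowledged writes in volatile cache; a window of
   * 0 revokes the relaxation. */
  static int nvme_advertise_relax(struct nvme_ctrl *ctrl, u32 window_ms)
  {
          return nvme_set_features(ctrl, NVME_FEAT_PERSIST_RELAX,
                                   window_ms, NULL, 0, NULL);
  }

On AC loss the kernel would revoke the window (window_ms = 0) and
follow up with a conventional Flush.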

Questions for discussion:

- Has asynchronous durability acknowledgment been previously
  explored in NVMe or block-layer discussions?

- Are there fundamental architectural reasons why separating
  execution completion from durability confirmation would not
  be viable?

- Would such semantics belong strictly in NVMe specification
  work, or is there room for experimentation in the Linux NVMe
  driver as a prototype?

- Are there known workloads where this model would clearly fail
  or provide no measurable benefit?

This is not a proposal for immediate implementation, but an
attempt to identify whether the current binary durability model
(completion vs flush) leaves performance or efficiency on the
table due to lack of explicit state sharing between host and
device.

Feedback, criticism, or pointers to prior art are very welcome.

Thanks,
Esteban Cerutti



* Re: [RFC] block/nvme: exploring asynchronous durability notification semantics
  2026-04-02 21:22 [RFC] block/nvme: exploring asynchronous durability notification semantics Esteban Cerutti
@ 2026-04-05 12:58 ` Hannes Reinecke
  0 siblings, 0 replies; 2+ messages in thread
From: Hannes Reinecke @ 2026-04-05 12:58 UTC (permalink / raw)
  To: Esteban Cerutti, linux-kernel; +Cc: linux-block, linux-nvme

On 4/2/26 23:22, Esteban Cerutti wrote:
> Hi,
> 
> I would like to explore whether current NVMe completion semantics
> unnecessarily conflate execution completion with durability, and
> whether there is room for a more explicit, asynchronous durability
> notification model between host and device.
> 
> Today, a successful write completion indicates command execution,
> but not necessarily physical persistence to non-volatile media unless
> FUA or Flush is used. This forces the kernel and filesystems to assume
> worst-case durability behavior and rely on synchronous flushes and
> barriers for safety.
> 
Not necessarily. Synchronous flushes and barriers are really only
an issue for ext4 (which happens to rely on them). Other filesystems
like btrfs work happily without them.
It's more a design choice; explicit flushes guarantee global
visibility, something you need if you have things like superblocks
at fixed positions. But if you don't (like btrfs) then you are
perfectly fine using per-request flushes (e.g. to ensure the log
has been written), and you don't need a global flush.
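
Concretely, the per-request variant is just flags on the bio for
the log write, something like:

  /* flush older cached writes first, then write this log block
   * through to stable media; no device-wide flush required */
  bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;
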

> The device internally knows when data is staged in volatile buffers
> versus committed to NAND (or equivalent persistent media), but this
> information is not exposed to the host.
> 
... and not available to the host.
I'm not aware of any command (or I/O completion information) which
would tell you if and when an I/O cached internally in the drive is
written to disk.

> This RFC explores a potential extension model with two components:
> 
> 1) Multi-phase completion semantics
> 
>     - Normal completion continues to signal execution.
>     - The device assigns a persistence token ID.
>     - When the data is physically committed to non-volatile media,
>       the device emits an asynchronous durability confirmation
>       referencing that token.
> 
> This would decouple execution throughput from durability
> confirmation and potentially allow filesystems to close journal
> transactions only upon confirmed persistence, without forcing
> synchronous flush fences.
> 
... or redesign your filesystem to _not_ rely on synchronous flushes.

> 2) Advisory write intent classification
> 
>     - Host-provided hints such as EPHEMERAL, STANDARD, or CRITICAL.
>     - CRITICAL writes would request immediate durability.
>     - EPHEMERAL writes could tolerate extended volatile staging.
> 
> Additionally, I am curious whether host power-state awareness
> could be relevant in such a model. For example, if the kernel
> can detect battery-backed operation or confirmed UPS
> infrastructure, it could advertise a bounded persistence
> relaxation window (e.g. guaranteed power for N ms), allowing
> the device to safely extend volatile staging within that
> window. This would be advisory and revocable, not a mandatory
> trust model.
> 
Been there, done that. Really hard to guarantee.
Some RAID controllers try to do something here by not
advertising a volatile cache when battery backup is not present,
but that's a rather lame workaround, as you need to ensure that
you have enough power to keep the HDDs running after a
power outage until all data has been written out.
It gets even harder for on-board HBAs where you have
no distinct power rails for the HDDs; you'd need a
battery large enough to keep the entire system running,
but that would be massively overprovisioned 99% of the
time, and hence far too expensive.

Doubt it's worth it.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


