From: Hannes Reinecke <hare@suse.de>
To: Esteban Cerutti <esteban.cerutti@gmail.com>,
	linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [RFC] block/nvme: exploring asynchronous durability notification semantics
Date: Sun, 5 Apr 2026 14:58:06 +0200	[thread overview]
Message-ID: <4080e4d2-6a95-4d6e-ba53-e576855aac16@suse.de> (raw)
In-Reply-To: <ac7eHEnFQlLN-vDy@yanara>

On 4/2/26 23:22, Esteban Cerutti wrote:
> Hi,
> 
> I would like to explore whether current NVMe completion semantics
> unnecessarily conflate execution completion with durability, and
> whether there is room for a more explicit, asynchronous durability
> notification model between host and device.
> 
> Today, a successful write completion indicates command execution,
> but not necessarily physical persistence to non-volatile media unless
> FUA or Flush is used. This forces the kernel and filesystems to assume
> worst-case durability behavior and rely on synchronous flushes and
> barriers for safety.
> 
Not necessarily. Synchronous flushes and barriers are really only
an issue for ext4 (which happens to rely on them). Other filesystems
like btrfs work happily without them.
It's more of a design choice: explicit flushes guarantee global
visibility, which you need if you have things like superblocks at
fixed positions. But if you don't (like btrfs), you are perfectly
fine using per-request flushes (e.g. to ensure the log has been
written), and then you don't need a global flush.
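
To make that concrete in Linux terms: the "global" variant is a
cache flush (REQ_PREFLUSH on a write, or blkdev_issue_flush()),
while the per-request variant tags an individual write with
REQ_FUA. A rough sketch against the current bio API (not lifted
from any real filesystem; error handling trimmed):

#include <linux/bio.h>
#include <linux/blkdev.h>

static void log_write_endio(struct bio *bio)
{
	/* once this runs with !bio->bi_status, this one
	 * write (and only this one) is on stable media */
	bio_put(bio);
}

/* btrfs-style: make a single log write durable without
 * draining the whole device cache */
static void submit_log_write_fua(struct block_device *bdev,
				 struct page *page, sector_t sector)
{
	struct bio *bio = bio_alloc(bdev, 1,
			REQ_OP_WRITE | REQ_FUA | REQ_SYNC, GFP_NOFS);

	bio->bi_iter.bi_sector = sector;
	__bio_add_page(bio, page, PAGE_SIZE, 0);
	bio->bi_end_io = log_write_endio;
	submit_bio(bio);
}

/* ext4-style barrier: force everything written so far to media;
 * ends up as an NVMe Flush / SCSI SYNCHRONIZE CACHE */
static int flush_whole_cache(struct block_device *bdev)
{
	return blkdev_issue_flush(bdev);
}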

> The device internally knows when data is staged in volatile buffers
> versus committed to NAND (or equivalent persistent media), but this
> information is not exposed to the host.
> 
... and not available to the host.
I'm not aware of any command (or I/O completion information)
that would tell you if and when an I/O cached internally in the
drive has actually been written to media.
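
The only related thing the host can query is static: Identify
Controller reports whether a volatile write cache exists at all
(the VWC field, byte 525 of the identify data), and Set Features
06h can toggle it. Neither says anything about when individual
writes leave the cache. Quick userspace sketch via the passthrough
ioctl (assumes /dev/nvme0; minimal error handling):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
	unsigned char id[4096];
	struct nvme_admin_cmd cmd;
	int fd = open("/dev/nvme0", O_RDONLY);

	if (fd < 0)
		return 1;

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = 0x06;	/* Identify */
	cmd.addr = (uintptr_t)id;
	cmd.data_len = sizeof(id);
	cmd.cdw10 = 1;		/* CNS 01h: Identify Controller */

	if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
		return 1;

	printf("volatile write cache %s\n",
	       (id[525] & 1) ? "present" : "absent");
	return 0;
}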

> This RFC explores a potential extension model with two components:
> 
> 1) Multi-phase completion semantics
> 
>     - Normal completion continues to signal execution.
>     - The device assigns a persistence token ID.
>     - When the data is physically committed to non-volatile media,
>       the device emits an asynchronous durability confirmation
>       referencing that token.
> 
> This would decouple execution throughput from durability
> confirmation and potentially allow filesystems to close journal
> transactions only upon confirmed persistence, without forcing
> synchronous flush fences.
> 
... or redesign your filesystem to _not_ rely on synchronous flushes.
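
For what it's worth, spelled out the token scheme would amount to
roughly the following. This is a purely hypothetical sketch; none
of these structures, fields, or events exist in NVMe or Linux
today, and every name below is invented:

#include <stdint.h>

/* Phase 1: normal write completion, extended with a token
 * (invented: imagine DW0 of the CQE carrying it) */
struct hypo_write_completion {
	uint32_t persistence_token;	/* assigned by the device */
	uint16_t command_id;
	uint16_t status;	/* success means executed, NOT durable */
};

/* Phase 2: asynchronous event once the data hits NAND
 * (invented: modeled loosely on an NVMe async event) */
struct hypo_durability_event {
	uint32_t highest_durable_token;	/* all tokens <= this persisted */
	uint32_t flags;
};

/*
 * Host-side consequence: a journal commit would park on the
 * token instead of issuing a flush fence:
 *
 *	token = submit_write(journal_block);
 *	wait_for_durability(token);	// event-driven, no Flush
 *	close_transaction();
 */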

> 2) Advisory write intent classification
> 
>     - Host-provided hints such as EPHEMERAL, STANDARD, or CRITICAL.
>     - CRITICAL writes would request immediate durability.
>     - EPHEMERAL writes could tolerate extended volatile staging.
> 
> Additionally, I am curious whether host power-state awareness
> could be relevant in such a model. For example, if the kernel
> can detect battery-backed operation or confirmed UPS
> infrastructure, it could advertise a bounded persistence
> relaxation window (e.g. guaranteed power for N ms), allowing
> the device to safely extend volatile staging within that
> window. This would be advisory and revocable, not a mandatory
> trust model.
> 
Been there, done that. It's really hard to ensure.
Some RAID controllers try to do something here by not
advertising a volatile cache when battery backup is present,
but that's a really lame thing, as you need to ensure that
you have enough power to keep the HDDs running after a
power outage until all cached data has been written out.
It gets even harder for on-board HBAs, where you have no
distinct power rails for the HDDs; you'd need a battery
large enough to keep the entire system running, but that
would be massively overprovisioned 99% of the time. And
hence far too expensive.

Doubt it's worth it.
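
As an aside, on the intent classification in 2): the closest
existing kernel interface is the per-file write lifetime hint,
fcntl(F_SET_RW_HINT), originally wired up for NVMe streams. Those
are advisory placement hints, not durability classes, which is
exactly the gap you are pointing at. Sketch, assuming a kernel and
filesystem that honor the hint:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t hint = RWH_WRITE_LIFE_SHORT;	/* "EPHEMERAL-ish" */
	int fd = open("scratch.dat", O_CREAT | O_WRONLY, 0644);

	if (fd < 0 || fcntl(fd, F_SET_RW_HINT, &hint) < 0) {
		perror("write hint");
		return 1;
	}
	printf("hint set; subsequent writes on fd carry it\n");
	return 0;
}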

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
