linux-block.vger.kernel.org archive mirror
* Do we need an opt-in for file systems use of hw atomic writes?
@ 2025-07-14 13:17 Christoph Hellwig
  2025-07-14 13:24 ` Theodore Ts'o
                   ` (3 more replies)
  0 siblings, 4 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-14 13:17 UTC (permalink / raw)
  To: John Garry, Darrick J. Wong
  Cc: linux-fsdevel, linux-xfs, linux-block, linux-nvme

Hi all,

I'm currently trying to sort out the nvme atomics limits mess, and
between that, the lack of an atomic write command in nvme, and the
overall degrading quality of cheap consumer nvme devices, I'm starting
to feel really uneasy about XFS using hardware atomics by default without
an explicit opt-in, as broken atomics implementations will lead to
really subtle data corruption.

Is it just me, or would it be a good idea to require an explicit
opt-in to use hardware atomics?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:17 Do we need an opt-in for file systems use of hw atomic writes? Christoph Hellwig
@ 2025-07-14 13:24 ` Theodore Ts'o
  2025-07-14 13:30   ` Christoph Hellwig
  2025-07-14 13:39 ` John Garry
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 28+ messages in thread
From: Theodore Ts'o @ 2025-07-14 13:24 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Mon, Jul 14, 2025 at 03:17:13PM +0200, Christoph Hellwig wrote:
> Hi all,
> 
> I'm currently trying to sort out the nvme atomics limits mess, and
> between that, the lack of an atomic write command in nvme, and the
> overall degrading quality of cheap consumer nvme devices, I'm starting
> to feel really uneasy about XFS using hardware atomics by default without
> an explicit opt-in, as broken atomics implementations will lead to
> really subtle data corruption.
> 
> Is it just me, or would it be a good idea to require an explicit
> opt-in to use hardware atomics?

How common do we think broken atomics implementations are? Is this
something that we could solve using a blacklist of broken devices?

It used to be the case that broken flash devices would get bricked
when trim commands would get sent racing with write requests.  But
over time, the broken devices died out (in some cases, when the
companies selling the broken SSDs died out :-).  It would have been
annoying if we had to explicitly enable trim support in 2025 just
because some broken SSDs existed a decade or more ago.

	       	  	      	      - Ted

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:24 ` Theodore Ts'o
@ 2025-07-14 13:30   ` Christoph Hellwig
  2025-07-14 16:04     ` Darrick J. Wong
  2025-07-15  3:22     ` Martin K. Petersen
  0 siblings, 2 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-14 13:30 UTC (permalink / raw)
  To: Theodore Ts'o
  Cc: Christoph Hellwig, John Garry, Darrick J. Wong, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Mon, Jul 14, 2025 at 09:24:07AM -0400, Theodore Ts'o wrote:
> > Is it just me, or would it be a good idea to require an explicit
> > opt-in to use hardware atomics?
> 
> How common do we think broken atomics implementations are? Is this
> something that we could solve using a blacklist of broken devices?

I don't know.  But cheap consumer SSDs can basically exhibit any
brokenness you can imagine.  And claiming to support atomics basically
just means filling out a single field in identify with a non-zero
value.  So my hopes of only seeing it in a few devices are low;
moreover, we will only notice it is broken when people lose data.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:17 Do we need an opt-in for file systems use of hw atomic writes? Christoph Hellwig
  2025-07-14 13:24 ` Theodore Ts'o
@ 2025-07-14 13:39 ` John Garry
  2025-07-14 13:50   ` Christoph Hellwig
  2025-07-14 20:53 ` Dave Chinner
  2025-07-15 20:56 ` Keith Busch
  3 siblings, 1 reply; 28+ messages in thread
From: John Garry @ 2025-07-14 13:39 UTC (permalink / raw)
  To: Christoph Hellwig, Darrick J. Wong
  Cc: linux-fsdevel, linux-xfs, linux-block, linux-nvme

On 14/07/2025 14:17, Christoph Hellwig wrote:
> Hi all,
> 
> I'm currently trying to sort out the nvme atomics limits mess, and
> between that, the lack of an atomic write command in nvme, and the
> overall degrading quality of cheap consumer nvme devices, I'm starting
> to feel really uneasy about XFS using hardware atomics by default without
> an explicit opt-in, as broken atomics implementations will lead to
> really subtle data corruption.
> 
> Is it just me, or would it be a good idea to require an explicit
> opt-in to use hardware atomics?

But isn't this just an NVMe issue? I would assume that we would look at 
such an option in the NVMe driver (to opt in when we are concerned about 
the implementation), and not the FS. SCSI is ok AFAIK.

And we also have bdev fops supporting atomic writes, so any opt-in
method needs to cover that.
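
For illustration, here is a minimal sketch (not from any tree) of what the
bdev fops atomic write path looks like from userspace: query the advertised
limits with statx() and submit an O_DIRECT write with RWF_ATOMIC.  The device
path, offset and fill pattern are made up, and it needs v6.11+ kernel/uapi
headers for STATX_WRITE_ATOMIC and RWF_ATOMIC:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        struct statx stx;
        struct iovec iov;
        void *buf;
        size_t len;
        int fd;

        fd = open("/dev/nvme0n1", O_RDWR | O_DIRECT);
        if (fd < 0)
                return 1;

        /* Ask the block device which atomic write sizes it advertises. */
        if (statx(fd, "", AT_EMPTY_PATH, STATX_WRITE_ATOMIC, &stx) < 0 ||
            !(stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC))
                return 1;

        len = stx.stx_atomic_write_unit_min;    /* usually the logical block size */
        if (posix_memalign(&buf, len, len))
                return 1;
        memset(buf, 0xab, len);

        iov.iov_base = buf;
        iov.iov_len = len;
        /* RWF_ATOMIC: the whole write either lands untorn or the call fails. */
        if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) != (ssize_t)len)
                return 1;
        return 0;
}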




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:39 ` John Garry
@ 2025-07-14 13:50   ` Christoph Hellwig
  2025-07-14 15:53     ` John Garry
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-14 13:50 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Mon, Jul 14, 2025 at 02:39:54PM +0100, John Garry wrote:
> On 14/07/2025 14:17, Christoph Hellwig wrote:
> > Hi all,
> > 
> > I'm currently trying to sort out the nvme atomics limits mess, and
> > between that, the lack of an atomic write command in nvme, and the
> > overall degrading quality of cheap consumer nvme devices, I'm starting
> > to feel really uneasy about XFS using hardware atomics by default without
> > an explicit opt-in, as broken atomics implementations will lead to
> > really subtle data corruption.
> > 
> > Is it just me, or would it be a good idea to require an explicit
> > opt-in to use hardware atomics?
> 
> But isn't this just an NVMe issue? I would assume that we would look at such
> an option in the NVMe driver (to opt in when we are concerned about the
> implementation), and not the FS. SCSI is ok AFAIK.

SCSI is a better standard, and modulo USB devices doesn't have as much
of an issue with cheap consumer devices.

But from the file system POV we've spent the last decade or so hardening
file systems against hardware failures, so now suddenly using such a
high risk feature automatically feels a bit odd.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:50   ` Christoph Hellwig
@ 2025-07-14 15:53     ` John Garry
  2025-07-15  6:02       ` Christoph Hellwig
  0 siblings, 1 reply; 28+ messages in thread
From: John Garry @ 2025-07-14 15:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Christoph Hellwig, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On 14/07/2025 14:50, Christoph Hellwig wrote:
> On Mon, Jul 14, 2025 at 02:39:54PM +0100, John Garry wrote:
>> On 14/07/2025 14:17, Christoph Hellwig wrote:
>>> Hi all,
>>>
>>> I'm currently trying to sort out the nvme atomics limits mess, and
>>> between that, the lack of an atomic write command in nvme, and the
>>> overall degrading quality of cheap consumer nvme devices, I'm starting
>>> to feel really uneasy about XFS using hardware atomics by default without
>>> an explicit opt-in, as broken atomics implementations will lead to
>>> really subtle data corruption.
>>>
>>> Is it just me, or would it be a good idea to require an explicit
>>> opt-in to use hardware atomics?
>>
>> But isn't this just an NVMe issue? I would assume that we would look at such
>> an option in the NVMe driver (to opt in when we are concerned about the
>> implementation), and not the FS. SCSI is ok AFAIK.
> 
> SCSI is a better standard, and modulo USB devices doesn't have as much
> of an issue with cheap consumer devices.
> 
> But from the file system POV we've spent the last decade or so hardening
> file systems against hardware failures, so now suddenly using such a
> high risk feature automatically feels a bit odd.
> 

I see. I figure that something like a FS_XFLAG could be used for that. 
But we should still protect bdev fops users as well.

JFYI, I have done a good bit of HW and SW-based atomic powerfail testing 
with fio on a Linux dev board, so there is a decent method available for 
users to verify their HW atomics. But then testing power failures is not 
always practical. Crashing the kernel only tests AWUN, not AWUPF (for NVMe).


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:30   ` Christoph Hellwig
@ 2025-07-14 16:04     ` Darrick J. Wong
  2025-07-15  6:00       ` Christoph Hellwig
  2025-07-15  3:22     ` Martin K. Petersen
  1 sibling, 1 reply; 28+ messages in thread
From: Darrick J. Wong @ 2025-07-14 16:04 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Theodore Ts'o, John Garry, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Mon, Jul 14, 2025 at 03:30:14PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 14, 2025 at 09:24:07AM -0400, Theodore Ts'o wrote:
> > > Is it just me, or would it be a good idea to require an explicit
> > > opt-in to use hardware atomics?
> > 
> > How common do we think broken atomics implementations are? Is this
> > something that we could solve using a blacklist of broken devices?
> 
> I don't know.  But cheap consumer SSDs can basically exhibit any
> brokenness you can imagine.  And claiming to support atomics basically
> just means filling out a single field in identify with a non-zero
> value.  So my hopes of only seeing it in a few devices are low;
> moreover, we will only notice it is broken when people lose data.

Do you want to handle it the same way as we do discard-zeroes-data and
have a quirks list of devices we trust?  Though I can hardly talk,
knowing the severe limitations of allowlists vs. product managers trying
to win benchmarks with custom firmware. :(

--D

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:17 Do we need an opt-in for file systems use of hw atomic writes? Christoph Hellwig
  2025-07-14 13:24 ` Theodore Ts'o
  2025-07-14 13:39 ` John Garry
@ 2025-07-14 20:53 ` Dave Chinner
  2025-07-15  6:05   ` Christoph Hellwig
  2025-07-15 20:56 ` Keith Busch
  3 siblings, 1 reply; 28+ messages in thread
From: Dave Chinner @ 2025-07-14 20:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Mon, Jul 14, 2025 at 03:17:13PM +0200, Christoph Hellwig wrote:
> Hi all,
> 
> I'm currently trying to sort out the nvme atomics limits mess, and
> between that, the lack of an atomic write command in nvme, and the
> overall degrading quality of cheap consumer nvme devices, I'm starting
> to feel really uneasy about XFS using hardware atomics by default without
> an explicit opt-in, as broken atomics implementations will lead to
> really subtle data corruption.
> 
> Is it just me, or would it be a good idea to require an explicit
> opt-in to use hardware atomics?

This isn't a filesystem question - this is a question about what
features the block device should expose to the
user/filesystem by default.

Block device feature configuration is typically done at hotplug time
with udev rules.  Require the user to add a custom udev rule for the
block device to enable hardware atomics if you are concerned that
hardware atomic writes are problematic.

Once the user has opted in to having their bdev feature activated,
then the filesystem should be able to detect and use it
automatically.

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:30   ` Christoph Hellwig
  2025-07-14 16:04     ` Darrick J. Wong
@ 2025-07-15  3:22     ` Martin K. Petersen
  2025-07-15  6:00       ` Christoph Hellwig
  1 sibling, 1 reply; 28+ messages in thread
From: Martin K. Petersen @ 2025-07-15  3:22 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Theodore Ts'o, John Garry, Darrick J. Wong, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme


Christoph,

> I don't know. But cheap consumer SSDs can basically exhibit any
> brokenness you can imagine. And claiming to support atomics basically
> just means filling out a single field in identify with a non-zero
> value. So my hopes of only seeing it in a few devices are low; moreover,
> we will only notice it is broken when people lose data.

It's very unfortunate that AWUN/AWUPF are useless. Even when it comes to
something as simple as sanity checking the reported NAWUN/NAWUPF.

For PCIe transport devices maybe we could consider adding an additional
heuristic based on something like PLP or VWC?

-- 
Martin K. Petersen

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 16:04     ` Darrick J. Wong
@ 2025-07-15  6:00       ` Christoph Hellwig
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15  6:00 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, Theodore Ts'o, John Garry, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Mon, Jul 14, 2025 at 09:04:00AM -0700, Darrick J. Wong wrote:
> Do you want to handle it the same way as we do discard-zeroes-data and
> have a quirks list of devices we trust?  Though I can hardly talk,
> knowing the severe limitations of allowlists vs. product managers trying
> to win benchmarks with custom firmware. :(

I don't think whitelists are a good idea.  I'd expect the admin to
opt into it.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  3:22     ` Martin K. Petersen
@ 2025-07-15  6:00       ` Christoph Hellwig
  2025-07-15 12:45         ` Martin K. Petersen
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15  6:00 UTC (permalink / raw)
  To: Martin K. Petersen
  Cc: Christoph Hellwig, Theodore Ts'o, John Garry, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Mon, Jul 14, 2025 at 11:22:43PM -0400, Martin K. Petersen wrote:
> For PCIe transport devices maybe we could consider adding an additional
> heuristic based on something like PLP or VWC?

What would you check there?  Atomic writes work perfectly fine if not
better with volatile write caches.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 15:53     ` John Garry
@ 2025-07-15  6:02       ` Christoph Hellwig
  2025-07-15  8:42         ` John Garry
  2025-07-15 10:02         ` Christian Brauner
  0 siblings, 2 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15  6:02 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Mon, Jul 14, 2025 at 04:53:49PM +0100, John Garry wrote:
> I see. I figure that something like a FS_XFLAG could be used for that. But 
> we should still protect bdev fops users as well.

I'm not sure a XFLAG is all that useful.  It's not really a per-file
persistent thing.  It's more of a mount option, or better persistent
mount-option attr like we did for autofsck.

>
> JFYI, I have done a good bit of HW and SW-based atomic powerfail testing 
> with fio on a Linux dev board, so there is a decent method available for 
> users to verify their HW atomics. But then testing power failures is not 
> always practical. Crashing the kernel only tests AWUN, not AWUPF (for
> NVMe).

Yes.  There's some ways to emulate power fail for file system level
power fail testing using dm-log-writes and similar, but that doesn't
help at all with testing the power fail behavior of the device which
we rely on here.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 20:53 ` Dave Chinner
@ 2025-07-15  6:05   ` Christoph Hellwig
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15  6:05 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Christoph Hellwig, John Garry, Darrick J. Wong, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 06:53:29AM +1000, Dave Chinner wrote:
> This isn't a filesystem question - this is a question about what
> features the block device should expose to the
> user/filesystem by default.

It's both.  As I said before, we've spent a lot of time making the file
systems less reliant on hardware getting everything right by adding
checksums (for metadata everywhere, and in non-XFS file systems for data),
LSN verification, etc.  And now we go all in on trusting a new (and in the
case of nvme very much misdesigned) feature.

I'm perfectly fine offering it for use, but I'm a lot less excited about
automatically using it due to my deep mistrust of hardware.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  6:02       ` Christoph Hellwig
@ 2025-07-15  8:42         ` John Garry
  2025-07-15  9:03           ` Christoph Hellwig
  2025-07-15 10:02         ` Christian Brauner
  1 sibling, 1 reply; 28+ messages in thread
From: John Garry @ 2025-07-15  8:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Christoph Hellwig, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On 15/07/2025 07:02, Christoph Hellwig wrote:
> On Mon, Jul 14, 2025 at 04:53:49PM +0100, John Garry wrote:
>> I see. I figure that something like a FS_XFLAG could be used for that. But
>> we should still protect bdev fops users as well.
> 
> I'm not sure a XFLAG is all that useful.  It's not really a per-file
> persistent thing.  It's more of a mount option, or better persistent
> mount-option attr like we did for autofsck.

For all these options, the admin must know that the atomic behaviour of 
their disk is as advertised - I am not sure how realistic it is.

Apart from this, it would be nice to have an idea of how to handle the
NVMe driver side. About this:

" III.	 don't allow atomics on controllers that only report AWUPF and
  	 limit support to controllers that support that more sanely
	 defined NAWUPF"

Would it be possible to also have a driver opt-in for those controllers 
which don't support NAWUPF?

Thanks,
John

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  8:42         ` John Garry
@ 2025-07-15  9:03           ` Christoph Hellwig
  2025-08-19 11:42             ` John Garry
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15  9:03 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 09:42:33AM +0100, John Garry wrote:
>> I'm not sure a XFLAG is all that useful.  It's not really a per-file
>> persistent thing.  It's more of a mount option, or better persistent
>> mount-option attr like we did for autofsck.
>
> For all these options, the admin must know that the atomic behaviour of 
> their disk is as advertised - I am not sure how realistic it is.

Well, who else would know it, or rather who else can do the risk
calculation?

I'm not worried about Oracle cloud running databases on drives written
to their purchase spec and validated by them.

I'm worried about $RANDOMUSER running $APPLICATION who thinks
atomic write APIs are nice (they finally are), and while that works
fine with the software implementation and even reasonably high-end
consumer devices, they now get the $CHEAPO SSD off Alibaba, and while
things seem to work fine at first, their entire browsing history / ledger /
movie database or whatever is toast after a power failure and the file
system gets blamed.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  6:02       ` Christoph Hellwig
  2025-07-15  8:42         ` John Garry
@ 2025-07-15 10:02         ` Christian Brauner
  2025-07-15 11:29           ` Christoph Hellwig
  2025-07-15 11:58           ` Theodore Ts'o
  1 sibling, 2 replies; 28+ messages in thread
From: Christian Brauner @ 2025-07-15 10:02 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Christoph Hellwig, Darrick J. Wong, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 08:02:47AM +0200, Christoph Hellwig wrote:
> On Mon, Jul 14, 2025 at 04:53:49PM +0100, John Garry wrote:
> > I see. I figure that something like a FS_XFLAG could be used for that. But 
> > we should still protect bdev fops users as well.
> 
> I'm not sure a XFLAG is all that useful.  It's not really a per-file
> persistent thing.  It's more of a mount option, or better persistent
> mount-option attr like we did for autofsck.

If we were to make this a mount option it would be really really ugly.
Either it is a filesystem specific mount option and then we have the
problem that we're ending up with different mount option names
per-filesystem.

And for a VFS generic mount option this is way too specific. It would
be extremely misplaced if we start accumulating hardware opt-out/opt-in
mount options on the VFS layer.

It feels like this is something that needs to be done on the block
layer. IOW, maybe add generic block layer ioctls or a per-device sysfs
entry that allows to turn atomic writes on or off. That information
would then also potentially be available to the filesystem to, e.g.,
generate an info message during mount that hardware atomics are used or
aren't used. Because ultimately the block layer is where the decision
needs to be made.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15 10:02         ` Christian Brauner
@ 2025-07-15 11:29           ` Christoph Hellwig
  2025-07-15 12:20             ` Christian Brauner
  2025-07-15 11:58           ` Theodore Ts'o
  1 sibling, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-15 11:29 UTC (permalink / raw)
  To: Christian Brauner
  Cc: Christoph Hellwig, John Garry, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 12:02:06PM +0200, Christian Brauner wrote:
> > I'm not sure a XFLAG is all that useful.  It's not really a per-file
> > persistent thing.  It's more of a mount option, or better persistent
> > mount-option attr like we did for autofsck.
> 
> If we were to make this a mount option it would be really really ugly.
> Either it is a filesystem specific mount option and then we have the
> problem that we're ending up with different mount option names
> per-filesystem.

Not that I'm arguing for a mount option (this should be sticky), but
we've had plenty of fs parsed mount options with common semantics.

> It feels like this is something that needs to be done on the block
> layer. IOW, maybe add generic block layer ioctls or a per-device sysfs
> entry that allows to turn atomic writes on or off. That information
> would then also potentially be available to the filesystem to, e.g.,
> generate an info message during mount that hardware atomics are used or
> aren't used. Because ultimately the block layer is where the decision
> needs to be made.

The block layer just passes things through.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15 10:02         ` Christian Brauner
  2025-07-15 11:29           ` Christoph Hellwig
@ 2025-07-15 11:58           ` Theodore Ts'o
  1 sibling, 0 replies; 28+ messages in thread
From: Theodore Ts'o @ 2025-07-15 11:58 UTC (permalink / raw)
  To: Christian Brauner
  Cc: Christoph Hellwig, John Garry, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 12:02:06PM +0200, Christian Brauner wrote:
> 
> It feels like this is something that needs to be done on the block
> layer. IOW, maybe add generic block layer ioctls or a per-device sysfs
> entry that allows to turn atomic writes on or off. That information
> would then also potentially be available to the filesystem to, e.g.,
> generate an info message during mount that hardware atomics are used or
> aren't used. Because ultimately the block layer is where the decision
> needs to be made.

I'd really like it if we could edit the atomic write granularity by
writing to the sysfs file to make it easier to test the atomic write
codepaths in the file system.

So I'd suggest combining this with John Garry's suggestion to allow
atomic writes by default on NVMe devices that report NAWUPF, not to
ignore AWUPF.  If system administrators need to use atomic writes on
legacy devices that only report AWUPF, they can manually set the
atomic write granularity.  And if they screw up --- well, that's on
them.

And file system developers who don't care about data safety on power
failure (which we can't directly test via fstests anyway), but just
want to test the code paths, can manually write to the sysfs file
as well.

Cheers,

						- Ted

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15 11:29           ` Christoph Hellwig
@ 2025-07-15 12:20             ` Christian Brauner
  0 siblings, 0 replies; 28+ messages in thread
From: Christian Brauner @ 2025-07-15 12:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Christoph Hellwig, Darrick J. Wong, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Tue, Jul 15, 2025 at 01:29:52PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 15, 2025 at 12:02:06PM +0200, Christian Brauner wrote:
> > > I'm not sure a XFLAG is all that useful.  It's not really a per-file
> > > persistent thing.  It's more of a mount option, or better persistent
> > > mount-option attr like we did for autofsck.
> > 
> > If we were to make this a mount option it would be really really ugly.
> > Either it is a filesystem specific mount option and then we have the
> > problem that we're ending up with different mount option names
> > per-filesystem.
> 
> Not that I'm arguing for a mount option (this should be sticky), but
> we've had plenty of fs parsed mount options with common semantics.
> 
> > It feels like this is something that needs to be done on the block
> > layer. IOW, maybe add generic block layer ioctls or a per-device sysfs
> > entry that allows to turn atomic writes on or off. That information
> > would then also potentially be available to the filesystem to, e.g.,
> > generate an info message during mount that hardware atomics are used or
> > aren't used. Because ultimately the block layer is where the decision
> > needs to be made.
> 
> The block layer just passes things through.

We already have bdev_can_atomic_write() which checks whether the
underlying device is capable of hardware assisted atomic writes. If
that's the case the filesystem currently just uses them, fine.

So it is possible to implement an ioctl() that allows an administrator
to mark a device as untrusted for hardware assisted atomic writes.

This is also nice because it can be integrated with udev easily. If
a device is known to have broken hardware assisted atomic writes then add
the device to systemd-udev's hardware database (hwdb).

When systemd-udev sees that device show up during boot it will
automatically mark that device as having broken atomic write support and
any mount of that device will have the filesystem immediately see the
broken hardware assisted atomic write support in bdev_can_atomic_write()
and not use it.

Fwiw, this pattern is already used for other things, for example the
iocost settings that udev will auto-apply if known. The broken atomic write
handling would fit very well in there, either as an allowlist or a
denylist.

commit 6b8e90545e918a4653281b3672a873e948f12b65
Author:     Gustavo Noronha Silva <gustavo.noronha@collabora.com>
AuthorDate: Mon May 2 14:02:23 2022 -0300
Commit:     Lennart Poettering <lennart@poettering.net>
CommitDate: Thu Apr 20 16:45:57 2023 +0200

    Apply known iocost solutions to block devices

    Meta's resource control demo project[0] includes a benchmark tool that can
    be used to calculate the best iocost solutions for a given SSD.

      [0]: https://github.com/facebookexperimental/resctl-demo

    A project[1] has now been started to create a publicly available database
    of results that can be used to apply them automatically.

      [1]: https://github.com/iocost-benchmark/iocost-benchmarks

    This change adds a new tool that gets triggered by a udev rule for any
    block device and queries the hwdb for known solutions. The format for
    the hwdb file that is currently generated by the github action looks like
    this:

      # This file was auto-generated on Tue, 23 Aug 2022 13:03:57 +0000.
      # From the following commit:
      # https://github.com/iocost-benchmark/iocost-benchmarks/commit/ca82acfe93c40f21d3b513c055779f43f1126f88
      #
      # Match key format:
      # block:<devpath>:name:<model name>:

      # 12 points, MOF=[1.346,1.346], aMOF=[1.249,1.249]
      block:*:name:HFS256GD9TNG-62A0A:fwver:*:
        IOCOST_SOLUTIONS=isolation isolated-bandwidth bandwidth naive
        IOCOST_MODEL_ISOLATION=rbps=1091439492 rseqiops=52286 rrandiops=63784 wbps=192329466 wseqiops=12309 wrandiops=16119
        IOCOST_QOS_ISOLATION=rpct=0.00 rlat=8807 wpct=0.00 wlat=59023 min=100.00 max=100.00
        IOCOST_MODEL_ISOLATED_BANDWIDTH=rbps=1091439492 rseqiops=52286 rrandiops=63784 wbps=192329466 wseqiops=12309 wrandiops=16119
        IOCOST_QOS_ISOLATED_BANDWIDTH=rpct=0.00 rlat=8807 wpct=0.00 wlat=59023 min=100.00 max=100.00
        IOCOST_MODEL_BANDWIDTH=rbps=1091439492 rseqiops=52286 rrandiops=63784 wbps=192329466 wseqiops=12309 wrandiops=16119
        IOCOST_QOS_BANDWIDTH=rpct=0.00 rlat=8807 wpct=0.00 wlat=59023 min=100.00 max=100.00
        IOCOST_MODEL_NAIVE=rbps=1091439492 rseqiops=52286 rrandiops=63784 wbps=192329466 wseqiops=12309 wrandiops=16119
        IOCOST_QOS_NAIVE=rpct=99.00 rlat=8807 wpct=99.00 wlat=59023 min=75.00 max=100.00

    The IOCOST_SOLUTIONS key lists the solutions available for that device
    in the preferred order for higher isolation, which is a reasonable
    default for most client systems. This can be overriden to choose better
    defaults for custom use cases, like the various data center workloads.

    The tool can also be used to query the known solutions for a specific
    device or to apply a non-default solution (say, isolation or bandwidth).

    Co-authored-by: Santosh Mahto <santosh.mahto@collabora.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  6:00       ` Christoph Hellwig
@ 2025-07-15 12:45         ` Martin K. Petersen
  0 siblings, 0 replies; 28+ messages in thread
From: Martin K. Petersen @ 2025-07-15 12:45 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Martin K. Petersen, Theodore Ts'o, John Garry,
	Darrick J. Wong, linux-fsdevel, linux-xfs, linux-block,
	linux-nvme


Hi Christoph!

>> For PCIe transport devices maybe we could consider adding an additional
>> heuristic based on something like PLP or VWC?
>
> What would you check there?  Atomic writes work perfectly fine if not
> better with volatile write caches.

What I propose is making sure we only enable atomics when several
independent device-reported values line up and are mutually consistent.
Just like we do in SCSI.

Maybe something like this:

  if (NSFEAT & NSABP &&
      is_power_of_2(NAWUPF) &&
      NAWUPF <= NAWUN &&
      NAWUPF <= MDTS &&
      NAWUPF % NPWG == 0)

It would be good to have more Identify Controller data in there to
validate against. But since we want to fix up AWUN and AWUPF in the
spec, we shouldn't depend on those.
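
Purely as an illustration (this is not in-tree code and the helper name is
made up), those checks map onto the identify namespace fields roughly like
this; the NAWUPF <= MDTS comparison is left out because MDTS needs unit
conversion via the controller data:

#include <linux/log2.h>
#include <linux/nvme.h>

/* Hypothetical sketch: only advertise atomics when the 0's based
 * identify namespace values are mutually consistent. */
static bool nvme_atomic_limits_look_sane(const struct nvme_id_ns *id)
{
        u32 nawupf = le16_to_cpu(id->nawupf) + 1;
        u32 nawun  = le16_to_cpu(id->nawun) + 1;
        u32 npwg   = le16_to_cpu(id->npwg) + 1;

        if (!(id->nsfeat & NVME_NS_FEAT_ATOMICS))       /* NSABP not set */
                return false;

        return is_power_of_2(nawupf) &&
               nawupf <= nawun &&
               nawupf % npwg == 0;
}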

I just wonder if there is something else from either PCIe config space
or Identify Controller we could add to weed out rando consumer devices
that seed their Identify buffers with garbage. A heuristic like "you
wouldn't possibly want to enable atomics unless the drive also supported
feature XYZ"...

-- 
Martin K. Petersen

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-14 13:17 Do we need an opt-in for file systems use of hw atomic writes? Christoph Hellwig
                   ` (2 preceding siblings ...)
  2025-07-14 20:53 ` Dave Chinner
@ 2025-07-15 20:56 ` Keith Busch
  2025-07-16  5:50   ` Nilay Shroff
  3 siblings, 1 reply; 28+ messages in thread
From: Keith Busch @ 2025-07-15 20:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Mon, Jul 14, 2025 at 03:17:13PM +0200, Christoph Hellwig wrote:
> Is it just me, or would it be a good idea to require an explicit
> opt-in to use hardware atomics?

IMO, if the block device's limits report atomic capabilities, it's fair
game for any in-kernel use. These are used outside of filesystems too,
like through raw block fops.

We've already settled on discarding problematic nvme attributes from
consideration. Is there something beyond that you've really found? If
so, maybe we should continue down the path of splitting more queue
limits into "hardware" and "user" values, and make filesystems subscribe
to the udev value where it defaults to "unsupported" for untrusted
devices.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15 20:56 ` Keith Busch
@ 2025-07-16  5:50   ` Nilay Shroff
  0 siblings, 0 replies; 28+ messages in thread
From: Nilay Shroff @ 2025-07-16  5:50 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig
  Cc: John Garry, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme



On 7/16/25 2:26 AM, Keith Busch wrote:
> On Mon, Jul 14, 2025 at 03:17:13PM +0200, Christoph Hellwig wrote:
>> Is it just me, or would it be a good idea to require an explicit
>> opt-in to use hardware atomics?
> 
> IMO, if the block device's limits reports atomic capabilities, it's fair
> game for any in kernel use. These are used outside of filesystems too,
> like through raw block fops.
> 
> We've already settled on discarding problematic nvme attributes from
> consideration. Is there something beyond that you've really found? If
> so, maybe we should continue down the path of splitting more queue
> limits into "hardware" and "user" values, and make filesystems subscribe
> to the udev value where it defaults to "unsupported" for untrusted
> devices.
> 
If we're going down the path of disregarding atomic write support for 
NVMe devices that don't report NAWUPF, then we likely need an opt-in
mechanism for users who trust their device to have a sane AWUPF value.

For example, consider an NVMe disk that does not support NAWUPF, but 
does consistently support AWUPF across all namespaces and for different
LBA sizes. In such cases, I would still want to enable atomic writes on
this disk, even if the kernel driver marks it as "unsupported" due to
missing NAWUPF.

Having an explicit user opt-in mechanism in such scenarios would be very
useful, allowing advanced users to take advantage of hardware capabilities
they trust, despite conservative kernel defaults. 

Thanks,
--Nilay

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-07-15  9:03           ` Christoph Hellwig
@ 2025-08-19 11:42             ` John Garry
  2025-08-19 13:39               ` Christoph Hellwig
  2025-08-21 14:01               ` Keith Busch
  0 siblings, 2 replies; 28+ messages in thread
From: John Garry @ 2025-08-19 11:42 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Christoph Hellwig, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On 15/07/2025 10:03, Christoph Hellwig wrote:
> On Tue, Jul 15, 2025 at 09:42:33AM +0100, John Garry wrote:
>>> I'm not sure a XFLAG is all that useful.  It's not really a per-file
>>> persistent thing.  It's more of a mount option, or better persistent
>>> mount-option attr like we did for autofsck.
>>
>> For all these options, the admin must know that the atomic behaviour of
>> their disk is as advertised - I am not sure how realistic it is.
> 
> Well, who else would know it, or rather who else can do the risk
> calculation?
> 
> I'm not worried about Oracle cloud running databases on drives written
> to their purchase spec and validated by them.
> 
> I'm worried about $RANDOMUSER running $APPLICATION who thinks
> atomic write APIs are nice (they finally are), and while that works
> fine with the software implementation and even reasonably high-end
> consumer devices, they now get the $CHEAPO SSD off Alibaba, and while
> things seem to work fine at first, their entire browsing history / ledger /
> movie database or whatever is toast after a power failure and the file
> system gets blamed.
> 
> 
Hi Christoph,

nothing has been happening on this thread for a while. I figure that it 
is because we have no good or obvious options.

I think that it's better to deal with the NVMe driver handling of AWUPF
first, as this applies to block fops as well.

As for the suggestion to have an opt-in to use AWUPF, you wrote above 
that users may not know when to enable this opt-in or not.

It seems to me that we can give the option, but clearly label that it is 
potentially dangerous. Hopefully the $RANDOMUSER with the $CHEAPO SSD 
will be wise and steer clear.

If we always ignore AWUPF, I fear that lots of sound NVMe 
implementations will be excluded from HW atomics.

Thanks,
John

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-08-19 11:42             ` John Garry
@ 2025-08-19 13:39               ` Christoph Hellwig
  2025-08-19 14:36                 ` John Garry
  2025-08-21 14:01               ` Keith Busch
  1 sibling, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-08-19 13:39 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Tue, Aug 19, 2025 at 12:42:01PM +0100, John Garry wrote:
> nothing has been happening on this thread for a while. I figure that it is 
> because we have no good or obvious options.
>
> I think that it's better to deal with the NVMe driver handling of AWUPF first,
> as this applies to block fops as well.
>
> As for the suggestion to have an opt-in to use AWUPF, you wrote above that 
> users may not know when to enable this opt-in or not.
>
> It seems to me that we can give the option, but clearly label that it is 
> potentially dangerous. Hopefully the $RANDOMUSER with the $CHEAPO SSD will 
> be wise and steer clear.
>
> If we always ignore AWUPF, I fear that lots of sound NVMe implementations 
> will be excluded from HW atomics.

I think ignoring AWUPF is a good idea, but I've also heard some folks
not liking that.

The reason why I prefer a mount option is because we add that to fstab
and the kernel command line easily.  For block layer or driver options
we'd either need a sysfs file which is always annoying to apply at boot
time, or a module option which has the downside of applying to all
devices.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-08-19 13:39               ` Christoph Hellwig
@ 2025-08-19 14:36                 ` John Garry
  2025-08-19 14:43                   ` Darrick J. Wong
  0 siblings, 1 reply; 28+ messages in thread
From: John Garry @ 2025-08-19 14:36 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Christoph Hellwig, Darrick J. Wong, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On 19/08/2025 14:39, Christoph Hellwig wrote:
> On Tue, Aug 19, 2025 at 12:42:01PM +0100, John Garry wrote:
>> nothing has been happening on this thread for a while. I figure that it is
>> because we have no good or obvious options.
>>
>> I think that it's better to deal with the NVMe driver handling of AWUPF first,
>> as this applies to block fops as well.
>>
>> As for the suggestion to have an opt-in to use AWUPF, you wrote above that
>> users may not know when to enable this opt-in or not.
>>
>> It seems to me that we can give the option, but clearly label that it is
>> potentially dangerous. Hopefully the $RANDOMUSER with the $CHEAPO SSD will
>> be wise and steer clear.
>>
>> If we always ignore AWUPF, I fear that lots of sound NVMe implementations
>> will be excluded from HW atomics.
> 
> I think ignoring AWUPF is a good idea, but I've also heard some folks
> not liking that.

Disabling reading AWUPF would be the best way to know that for sure :)

> 
> The reason why I prefer a mount option is because we add that to fstab
> and the kernel command line easily.  For block layer or driver options
> we'd either need a sysfs file which is always annoying to apply at boot
> time, 

Could systemd-udev auto-enable it for us via a sysfs file or ioctl?

> or a module option which has the downside of applying to all
> devices.

About the mount option, I suppose that it won't do much harm - it's just 
a bit of extra work to configure.

I just fear that admins will miss enabling it or not enable it out of 
doubt and users won't see the benefit of HW atomics.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-08-19 14:36                 ` John Garry
@ 2025-08-19 14:43                   ` Darrick J. Wong
  2025-08-19 14:45                     ` Christoph Hellwig
  0 siblings, 1 reply; 28+ messages in thread
From: Darrick J. Wong @ 2025-08-19 14:43 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Christoph Hellwig, linux-fsdevel, linux-xfs,
	linux-block, linux-nvme

On Tue, Aug 19, 2025 at 03:36:33PM +0100, John Garry wrote:
> On 19/08/2025 14:39, Christoph Hellwig wrote:
> > On Tue, Aug 19, 2025 at 12:42:01PM +0100, John Garry wrote:
> > > nothing has been happening on this thread for a while. I figure that it is
> > > because we have no good or obvious options.
> > > 
> > > I think that it's better to deal with the NVMe driver handling of AWUPF first,
> > > as this applies to block fops as well.
> > > 
> > > As for the suggestion to have an opt-in to use AWUPF, you wrote above that
> > > users may not know when to enable this opt-in or not.
> > > 
> > > It seems to me that we can give the option, but clearly label that it is
> > > potentially dangerous. Hopefully the $RANDOMUSER with the $CHEAPO SSD will
> > > be wise and steer clear.
> > > 
> > > If we always ignore AWUPF, I fear that lots of sound NVMe implementations
> > > will be excluded from HW atomics.
> > 
> > I think ignoring AWUPF is a good idea, but I've also heard some folks
> > not liking that.
> 
> Disabling reading AWUPF would be the best way to know that for sure :)

What is the likelihood of convincing the nvme standards folks to add a
new command for write-untorn that doesn't just silently fail if you get
the parameters wrong?

> > The reason why I prefer a mount option is because we add that to fstab
> > and the kernel command line easily.  For block layer or driver options
> > we'd either need a sysfs file which is always annoying to apply at boot
> > time,

(Yuck, mount options, look how poorly that went for dax= ;))

> Could systemd-udev auto-enable it for us via a sysfs file or ioctl?

Userspace controllable sysfs configuration knobs like discard_max_bytes
and discard_max_hw_bytes work well with that model.  The nvme layer can
set atomic_write_bytes to zero by default, and a udev rule can change it
up to atomic_write_max_hw_bytes.

That's not /so/ bad if you can either get the udev rulefile merged into
systemd, or dropped in via cloud-init or something.

--D

> > or a module option which has the downside of applying to all
> > devices.
> 
> About the mount option, I suppose that it won't do much harm - it's just a
> bit of extra work to configure.
> 
> I just fear that admins will miss enabling it or not enable it out of doubt
> and users won't see the benefit of HW atomics.
> 
> 
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-08-19 14:43                   ` Darrick J. Wong
@ 2025-08-19 14:45                     ` Christoph Hellwig
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Hellwig @ 2025-08-19 14:45 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: John Garry, Christoph Hellwig, Christoph Hellwig, linux-fsdevel,
	linux-xfs, linux-block, linux-nvme

On Tue, Aug 19, 2025 at 07:43:47AM -0700, Darrick J. Wong wrote:
> What is the likelihood of convincing the nvme standards folks to add a
> new command for write-untorn that doesn't just silently fail if you get
> the parameters wrong?

In my experience that depends mostly on how much the big customers for
NVMe hardware are asking for it.  Hint:  while Oracle isn't in the
absolute top tier of the influence scale it probably is just below.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Do we need an opt-in for file systems use of hw atomic writes?
  2025-08-19 11:42             ` John Garry
  2025-08-19 13:39               ` Christoph Hellwig
@ 2025-08-21 14:01               ` Keith Busch
  1 sibling, 0 replies; 28+ messages in thread
From: Keith Busch @ 2025-08-21 14:01 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, Christoph Hellwig, Darrick J. Wong,
	linux-fsdevel, linux-xfs, linux-block, linux-nvme

On Tue, Aug 19, 2025 at 12:42:01PM +0100, John Garry wrote:
> 
> If we always ignore AWUPF, I fear that lots of sound NVMe implementations
> will be excluded from HW atomics.

It's not that they're excluded from HW atomics. They're just excluded
from the block layer's attributes and guard rails. People were using
NVMe atomics long before those block layer features, at least.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread (newest message: 2025-08-21 14:01 UTC)

Thread overview: 28+ messages
2025-07-14 13:17 Do we need an opt-in for file systems use of hw atomic writes? Christoph Hellwig
2025-07-14 13:24 ` Theodore Ts'o
2025-07-14 13:30   ` Christoph Hellwig
2025-07-14 16:04     ` Darrick J. Wong
2025-07-15  6:00       ` Christoph Hellwig
2025-07-15  3:22     ` Martin K. Petersen
2025-07-15  6:00       ` Christoph Hellwig
2025-07-15 12:45         ` Martin K. Petersen
2025-07-14 13:39 ` John Garry
2025-07-14 13:50   ` Christoph Hellwig
2025-07-14 15:53     ` John Garry
2025-07-15  6:02       ` Christoph Hellwig
2025-07-15  8:42         ` John Garry
2025-07-15  9:03           ` Christoph Hellwig
2025-08-19 11:42             ` John Garry
2025-08-19 13:39               ` Christoph Hellwig
2025-08-19 14:36                 ` John Garry
2025-08-19 14:43                   ` Darrick J. Wong
2025-08-19 14:45                     ` Christoph Hellwig
2025-08-21 14:01               ` Keith Busch
2025-07-15 10:02         ` Christian Brauner
2025-07-15 11:29           ` Christoph Hellwig
2025-07-15 12:20             ` Christian Brauner
2025-07-15 11:58           ` Theodore Ts'o
2025-07-14 20:53 ` Dave Chinner
2025-07-15  6:05   ` Christoph Hellwig
2025-07-15 20:56 ` Keith Busch
2025-07-16  5:50   ` Nilay Shroff
