* Fwd: default mount options
@ 2016-11-29 23:51 L.A. Walsh
2016-11-30 0:14 ` Eric Sandeen
From: L.A. Walsh @ 2016-11-29 23:51 UTC (permalink / raw)
To: linux-xfs
Is it possible for the 'mount' man page to be enhanced to show
what the defaults are? Or if that's not possible,
maybe the xfs(5) manpage?
Also, I'm again "unclear" on barriers.
The xfs(5) man page says:
"barrier|nobarrier - Enables/disables the use of block layer write
barriers... this allows for drive level write caching to be enabled.
Barriers are enabled by default."
This seems to say that barriers are enabled. Does that mean
that the barriers are implemented in the HW of the disk, or that
SW adds "barriers" for disks that don't have them implemented in HW?
It also says drives may enable write-caching -- but this should
only be done if they support write barriers. How is this "decided"?
I.e., is it done "automatically" in HW? In SW?
Or should the user "know"?
Is this related to whether or not the drives support "state" over
power interruptions? By having non-volatile "write-cache" memory,
battery-backed cache, or backed by a UPS? Wouldn't SSD's be
considered safe for this purpose (because their state is non-volatile)?
I seem to "get" this topic periodically, but after some time passes
and I reread the associated manpages, I realize I'm not
really clear which way is which -- and that the defaults still
aren't spelled out, nor whether SSD's and/or UPS-backed disks
are safe with or without barriers.
Thanks!
-linda
* Re: Fwd: default mount options
2016-11-29 23:51 Fwd: default mount options L.A. Walsh
@ 2016-11-30 0:14 ` Eric Sandeen
2016-11-30 19:27 ` L A Walsh
From: Eric Sandeen @ 2016-11-30 0:14 UTC (permalink / raw)
To: L.A. Walsh, linux-xfs
On 11/29/16 5:51 PM, L.A. Walsh wrote:
>
> Is it possible for the 'mount' man page to be enhanced to show
> what the defaults are? Or if that's not possible,
> maybe the xfs(5) manpage?
xfs(5) covers xfs mount options now. AFAIK, defaults are clearly
stated. Is something missing?
i.e. -
"Barriers are enabled by default."
"For this reason, nodiscard is the default."
"For kernel v3.7 and later, inode64 is the default."
"noikeep is the default."
etc.
If there's something missing, please let us know (or send a patch).
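(Aside, illustration only: one quick way to see which options a mounted
xfs filesystem actually ended up with is to walk /proc/mounts. A rough
C sketch using glibc's getmntent(3) follows; options left at their
compiled-in defaults are not necessarily listed there, so the manpage
is still the authority on what the defaults are.)

#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *mtab = setmntent("/proc/mounts", "r");
    struct mntent *ent;

    if (!mtab)
        return 1;
    /* print device, mountpoint and effective options of each xfs mount */
    while ((ent = getmntent(mtab)) != NULL) {
        if (strcmp(ent->mnt_type, "xfs") == 0)
            printf("%s on %s: %s\n",
                   ent->mnt_fsname, ent->mnt_dir, ent->mnt_opts);
    }
    endmntent(mtab);
    return 0;
}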
> Also, I'm again "unclear" on barriers.
>
> The xfs(5) man page says:
>
> "barrier|nobarrier - Enables/disables the use of block layer write
> barriers... this allows for drive level write caching to be enabled.
> Barriers are enabled by default.
>
> This seems to say that barriers are enabled. Does that mean
> the the barriers are implemented in the HW of the disk, or that
> SW adds "barriers" for disks that don't have them implemented in HW?
It means that the xfs "barrier" mount option is enabled by default. ;)
There is no "barrier implemented in hardware" - having barriers
turned on means that XFS will requeseet block device flushes at
appropriate times to maintain consistency, even with write caching.
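(Rough userspace analogy, not XFS internals: these are the same flushes
that give fsync(2) its meaning. A hedged sketch -- the helper name is
made up for the example:)

#include <fcntl.h>
#include <unistd.h>

/* Write a buffer to a file and wait until it is on stable storage.
 * With the barrier mount option enabled, the fsync() causes a cache
 * flush (and/or FUA write) to reach the underlying device; with
 * nobarrier, XFS skips that step, so the data may still sit in a
 * volatile drive cache when fsync() returns. */
int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}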
> It also says drives may enable write-caching -- but this should
> only be done if they support write barriers. How is this "decided"?
> I.e is it done "automatically" in HW? in SW? Or should the user "know"?
What this means is that if barriers are enabled, write-caching
on the drive can be safely enabled.
The user should leave barriers on. Devices which don't need them
should ignore them.
Simplified, if turning barriers /off/ made your workload go faster,
that means you should have left them on in the first place. If it
didn't, then there was no reason to turn the knob in the first place...
> Is this related to whether or not the drives support "state" over
> power interruptions? By having non-volatile "write-cache" memory,
> battery-backed cache, or backed by a UPS? Wouldn't SSD's be
> considers safe for this purpose (because their state is non-volatile?).
I'm not sure there is any universal answer for what SSDs may do
on a power loss, but I think it's certainly common for them to
have a volatile write cache as well.
> I seem to "get" this topic periodically, but after some time passes,
> and upon rereading the associated manpages, I realize I'm not
> real clear which way is what and realize the lack of defaults
> being specified and whether or not SSD's and/or UPS-backed disks
> were safe whether barriers were present or not was still vague.
Just leave the option at the default, and you'll be fine. There is
rarely, if ever, a reason to change it.
-Eric
>
> Thanks!
> -linda
* Re: default mount options
2016-11-30 0:14 ` Eric Sandeen
@ 2016-11-30 19:27 ` L A Walsh
2016-11-30 19:50 ` Eric Sandeen
From: L A Walsh @ 2016-11-30 19:27 UTC (permalink / raw)
To: Eric Sandeen; +Cc: linux-xfs
Eric Sandeen wrote:
> On 11/29/16 5:51 PM, L.A. Walsh wrote:
>> Is it possible for the 'mount' man page to be enhanced to show
>> what the defaults are? Or if that's not possible,
>> maybe the xfs(5) manpage?
>
> xfs(5) covers xfs mount options now. AFAIK, defaults are clearly
> stated. Is something missing?
> i.e. -
> "Barriers are enabled by default."
> "For this reason, nodiscard is the default."
> "For kernel v3.7 and later, inode64 is the default."
> "noikeep is the default."
> etc.
----
Most of the text is the same as on the manpage, with small
insertions that weren't obvious at the time I sent the email. However,
for logbsize the default for v2 logs is MAX(32768, log_sunit). What
does 'log' mean? As in the arithmetic function?
"noalign" is only relevant to filesystems created with
non-zero data alignment parms by mkfs. Does it apply if the container
that xfs is in is not zero-aligned? If the partitions weren't created
on boundaries, or the "volumes" on top of the partitions weren't created
on boundaries, how would one specify the overall file system alignment --
especially when, say, lvm's on-disk allocations at the beginning of a volume
may not be a multiple of a stripe size (I had 768K with a 3-stripe,
4-data-disk RAID5 (RAID50)).
"noquota" forces off all quota. If I don't specify
any quotas, is that the same as "noquota" -- i.e., is it
the default?
It seems one can figure things out if one makes certain
assumptions, but that makes me uneasy.
> If there's something missing, please let us know (or send a patch).
>
>> Also, I'm again "unclear" on barriers.
> It means that the xfs "barrier" mount option is enabled by default. ;)
---
Then why doesn't it say "the barrier option, telling
xfs to add barriers, is the default"?
> There is no "barrier implemented in hardware" - having barriers
> turned on means that XFS will requeseet block device flushes at
> appropriate times to maintain consistency, even with write caching.
---
"requeseet"? Is that "request"? (Seriously, I had
to google it to be sure).
>
>> It also says drives may enable write-caching -- but this should
>> only be done if they support write barriers. How is this "decided"?
>> I.e is it done "automatically" in HW? in SW? Or should the user "know"?
>
> What this means is that if barriers are enabled, write-caching
> on the drive can be safely enabled.
>
> The user should leave barriers on. Devices which don't need them
> should ignore them.
====
Not my experience. Devices with non-volatile cache
or UPS-backed cache, in the past, have been considered "not needing
barriers". Has this changed? Why?
>
> Simplified, if turning barriers /off/ made your workload go faster,
> that means you should have left them on in the first place. If it
> didn't, then there was no reason to turn the knob in the first place...
====
Not my experience. Devices with non-volatile cache
or UPS-backed cache, in the past, have been considered "not needing
barriers". But those systems also, sometimes, change runtime
behavior based on the UPS or battery state -- using write-back on
a full-healthy battery, or write-through when it wouldn't be safe.
In that case, it seems nobarrier would be a better choice
for those volumes -- letting the controller decide.
>> Is this related to whether or not the drives support "state" over
>> power interruptions? By having non-volatile "write-cache" memory,
>> battery-backed cache, or backed by a UPS? Wouldn't SSD's be
>> considers safe for this purpose (because their state is non-volatile?).
>
> I'm not sure there is any universal answer for what SSDs may do
> on a power loss, but I think it's certainly common for them to
> have a volatile write cache as well.
---
I've yet to see one that does. Not saying they couldn't
exist, but just that I've yet to see one -- with the behavior
being that if it accepts the write and returns, the data is
on the SSD.
> Just leave the option at the default, and you'll be fine. There is
> rarely, if ever, a reason to change it.
---
Fine isn't what I asked. I wanted to know if the switch
specified that xfs should add barriers or that barriers were already
handled in the backing store for those file systems. If the former,
then I would want nobarrier on some file systems; if the latter, I
might want the default. But it sounds like the switch applies
to the former -- meaning I don't want them for partitions that
don't need them.
* Re: default mount options
2016-11-30 19:27 ` L A Walsh
@ 2016-11-30 19:50 ` Eric Sandeen
2016-11-30 20:04 ` L A Walsh
From: Eric Sandeen @ 2016-11-30 19:50 UTC (permalink / raw)
To: L A Walsh; +Cc: linux-xfs
On 11/30/16 1:27 PM, L A Walsh wrote:
>
>
> Eric Sandeen wrote:
>> On 11/29/16 5:51 PM, L.A. Walsh wrote:
>>> Is it possible for the 'mount' man page to be enhanced to show
>>> what the defaults are? Or if that's not possible,
>>> maybe the xfs(5) manpage?
>>
>> xfs(5) covers xfs mount options now. AFAIK, defaults are clearly
>> stated. Is something missing?
>> i.e. -
>> "Barriers are enabled by default."
>> "For this reason, nodiscard is the default."
>> "For kernel v3.7 and later, inode64 is the default."
>> "noikeep is the default."
>> etc.
> ----
> Most of the text is the same as on the manpage,
Right, I was quoting from the manpage... (xfs(5)).
> with small
> insertions that weren't obvious at the time I sent the email, however
> logbsize -- default for v2 logs is MAX(32768, log_sunit). What
> does 'log' mean? As in the arithmetic function?
v2 logs are version 2 xfs logs - i.e. the metadata log.
log_sunit is the stripe unit of the log. I can see that that's not super
clear, as it's not used or defined anywhere else.
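(So the MAX() in the manpage is just the arithmetic maximum of two byte
counts: the 32 KiB floor and the log stripe unit. A throwaway
illustration -- names are made up, this is not kernel code:)

#include <stdio.h>

static unsigned int default_logbsize(unsigned int log_sunit_bytes)
{
    const unsigned int floor = 32768;   /* the 32 KiB minimum from xfs(5) */

    return log_sunit_bytes > floor ? log_sunit_bytes : floor;
}

int main(void)
{
    /* no log stripe unit -> 32768; a 256 KiB log stripe unit -> 262144 */
    printf("%u\n", default_logbsize(0));
    printf("%u\n", default_logbsize(262144));
    return 0;
}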
> "noalign" is only relevant to filesystems created with
> non-zero data alignment parms by mkfs. Does it apply if the container
> that xfs is in is not zero aligned? If the partitions weren't created
> on boundaries or the "volumes" on top of the partitions weren't created
> on boundaries, how would one specify the overall file system alignment --
> especially when, say, lvm's on-disk allocs at he beginning of a volume
> may not be a multiple of a strip-size (had 768K w/a 3-stripe, 4-data
> disk RAID5 (RAID50).
alignment only applies /within/ the filesystem; it has no view outside of
that. If you create an unaligned partition, there is no magical
re-alignment that can happen within the filesystem.
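(For what it's worth, you can read back the alignment the filesystem
itself recorded at mkfs time -- the thing noalign tells XFS to ignore --
with the geometry ioctl; xfs_info prints the same numbers. A sketch,
assuming the xfsprogs development headers are installed:)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>

int main(int argc, char **argv)
{
    struct xfs_fsop_geom geo;
    /* open any file or directory on the xfs filesystem in question */
    int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);

    if (fd < 0 || ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) != 0) {
        perror("XFS_IOC_FSGEOMETRY");
        return 1;
    }
    /* sunit/swidth are in filesystem blocks; 0 means no alignment was set */
    printf("blocksize=%u sunit=%u swidth=%u\n",
           geo.blocksize, geo.sunit, geo.swidth);
    close(fd);
    return 0;
}

The sunit/swidth mount options described in xfs(5) can be used to
override the recorded values at mount time if they were wrong when the
filesystem was made.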
> "noquota" to force off all quota. So I don't specify
> any quotas, is that the same as "noquota" -- does that mean it is
> the default? It seems one can figure things out if one makes certain
> assumptions, but that makes me uneasy.
I'm not actually sure why one would ever use "noquota."
>> If there's something missing, please let us know (or send a patch).
>>
>>> Also, I'm again "unclear" on barriers.
>> It means that the xfs "barrier" mount option is enabled by default. ;)
> ---
> Then why doesn't it say "the barrier option" telling
> xfs to add barriers, is the default".
I guess we assumed that it could be inferred readily from the phrase
"Barriers are enabled by default" in the barrier/nobarrier section.
>
>> There is no "barrier implemented in hardware" - having barriers
>> turned on means that XFS will requeseet block device flushes at
>> appropriate times to maintain consistency, even with write caching.
> ---
> "requeseet"? Is that "request"? (Seriously, I had
> to google it to be sure).
typo.
>
>>
>>> It also says drives may enable write-caching -- but this should
>>> only be done if they support write barriers. How is this "decided"?
>>> I.e is it done "automatically" in HW? in SW? Or should the user "know"?
>>
>> What this means is that if barriers are enabled, write-caching
>> on the drive can be safely enabled.
>>
>> The user should leave barriers on. Devices which don't need them
>> should ignore them.
> ====
> Not my experience. Devices with non-volatile cache
> or UPS-backed cache, in the past, have been considered "not needing
> barriers". Has this changed? Why?
Devices with UPS-backed cache will in general ignore barrier requests.
I didn't mean to say that they did need barriers, I meant to imply
that such devices will /ignore/ barriers.
>>
>> Simplified, if turning barriers /off/ made your workload go faster,
>> that means you should have left them on in the first place. If it
>> didn't, then there was no reason to turn the knob in the first place...
> ====
> Not my experience. Devices with non-volatile cache
> or UPS-backed cache, in the past, have been considered "not needing
> barriers".
This is true.
> But those systems also, sometimes, change runtime
> behavior based on the UPS or battery state -- using write-back on
> a full-healthy battery, or write-through when it wouldn't be safe.
>
> In that case, it seems nobarrier would be a better choice
> for those volumes -- letting the controller decide.
No. Because then xfs will /never/ send barrier requests, even
if the battery dies. So I think you have that backwards.
If you leave them at the default, i.e. barriers /enabled,/ then the
device is free to ignore the barrier operations if the battery is
healthy, or to honor them if it fails.
If you turn it off at mount time, xfs will /never/ send such
requests, and the storage will be unsafe if the battery fails,
and you will be at risk for corruption or data loss.
>
>>> Is this related to whether or not the drives support "state" over
>>> power interruptions? By having non-volatile "write-cache" memory,
>>> battery-backed cache, or backed by a UPS? Wouldn't SSD's be
>>> considers safe for this purpose (because their state is non-volatile?).
>>
>> I'm not sure there is any universal answer for what SSDs may do
>> on a power loss, but I think it's certainly common for them to
>> have a volatile write cache as well.
> ---
> I've yet to see one that does. Not saying they couldn't exist, but just that I've yet to see one -- with the behavior
> being that if it accepts the write and returns, the data is
> on the SSD.
*shrug* I'm not going to tell anyone to turn off barriers for
ssds. :)
>> Just leave the option at the default, and you'll be fine. There is
>> rarely, if ever, a reason to change it.
> ---
> Fine isn't what I asked. I wanted to know if the switch
> specified that xfs should add barriers or that barriers were already
> handled in the backing store for those file systems. If the prior
> then I would want nobarrier on some file systems, if the latter, I
> might want the default. But it sounds like the switch applies
> to the former -- meaning I don't want them for partitions that
> don't need them.
"barrier" means "the xfs filesystem will send barrier requests to the
storage." It does this at critical points during updates to ensure
that data is /permanently/ stored on disk when required - for metadata
consistency and/or for data permanence.
If the storage doesn't need barriers, they'll simply be ignored.
"partitions that don't need them" should be /unaffected/ by their
presence, so there's no use in turning them off.
Turning them off risks corruption.
-Eric
* Re: default mount options
2016-11-30 19:50 ` Eric Sandeen
@ 2016-11-30 20:04 ` L A Walsh
2016-11-30 20:13 ` Eric Sandeen
2016-11-30 22:18 ` Dave Chinner
From: L A Walsh @ 2016-11-30 20:04 UTC (permalink / raw)
To: Eric Sandeen; +Cc: linux-xfs
Eric Sandeen wrote:
>
>> But those systems also, sometimes, change runtime
>> behavior based on the UPS or battery state -- using write-back on
>> a full-healthy battery, or write-through when it wouldn't be safe.
>>
>> In that case, it seems nobarrier would be a better choice
>> for those volumes -- letting the controller decide.
>
> No. Because then xfs will /never/ send barriers requests, even
> if the battery dies. So I think you have that backwards.
---
If the battery dies, then the controller shifts
to write-through and no longer uses its write cache. This is
documented and observed behavior.
>
> If you leave them at the default, i.e. barriers /enabled,/ then the
> device is free to ignore the barrier operations if the battery is
> healthy, or to honor them if it fails.
>
> If you turn it off at mount time, xfs will /never/ send such
> requests, and the storage will be unsafe if the battery fails,
> and you will be at risk for corruption or data loss.
---
I know what the device does in regards to its battery.
I don't know that the device responds to xfs-drivers in a way
that xfs will know to change barrier usage.
>>> Just leave the option at the default, and you'll be fine. There is
>>> rarely, if ever, a reason to change it.
>> ---
>> Fine isn't what I asked. I wanted to know if the switch
>> specified that xfs should add barriers or that barriers were already
>> handled in the backing store for those file systems. If the prior
>> then I would want nobarrier on some file systems, if the latter, I
>> might want the default. But it sounds like the switch applies
>> to the former -- meaning I don't want them for partitions that
>> don't need them.
>
> "barrier" means "the xfs filesystem will send barrier requests to the
> storage." It does this at critical points during updates to ensure
> that data is /permanently/ stored on disk when required - for metadata
> consistency and/or for data permanence.
>
> If the storage doesn't need barriers, they'll simply be ignored.
---
How can that be determined? If xfs was able to determine
the need for barriers or not, then why can't it determine something
as simple as disk alignment and ensure writes are on optimal boundaries?
> "partitions that don't need them" should be /unaffected/ by their
> presence, so there's no use in turning them off.
>
> Turning them off risks corruption.
---
The only corrupt devices I've had w/xfs were ones
that had them turned on. Those were > 5 years ago. That
says to me that there are likely other risks with a greater
likelihood of causing corruption than the lack or
presence of barriers.
-l
* Re: default mount options
2016-11-30 20:04 ` L A Walsh
@ 2016-11-30 20:13 ` Eric Sandeen
2016-11-30 22:18 ` Dave Chinner
From: Eric Sandeen @ 2016-11-30 20:13 UTC (permalink / raw)
To: L A Walsh; +Cc: linux-xfs
On 11/30/16 2:04 PM, L A Walsh wrote:
>
>
> Eric Sandeen wrote:
>>
>>> But those systems also, sometimes, change runtime
>>> behavior based on the UPS or battery state -- using write-back on
>>> a full-healthy battery, or write-through when it wouldn't be safe.
>>>
>>> In that case, it seems nobarrier would be a better choice
>>> for those volumes -- letting the controller decide.
>>
>> No. Because then xfs will /never/ send barriers requests, even
>> if the battery dies. So I think you have that backwards.
> ---
> If the battery dies, then the controller shifts
> to write-through and no longer uses its write cache. This is
> documented and observed behavior.
Ok, right, sorry.
In that case barriers may not be /needed,/ but turning them
off should offer no benefit.
>>
>> If you leave them at the default, i.e. barriers /enabled,/ then the
>> device is free to ignore the barrier operations if the battery is
>> healthy, or to honor them if it fails.
>
>>
>> If you turn it off at mount time, xfs will /never/ send such
>> requests, and the storage will be unsafe if the battery fails,
>> and you will be at risk for corruption or data loss.
> ---
> I know what the device does in regards to its battery.
> I don't know that the device responds to xfs-drivers in a way
> that xfs will know to change barrier usage.
xfs does not change its barrier usage. It is determined solely
at mount time.
>
>>>> Just leave the option at the default, and you'll be fine. There is
>>>> rarely, if ever, a reason to change it.
>>> --- Fine isn't what I asked. I wanted to know if the switch
>>> specified that xfs should add barriers or that barriers were already
>>> handled in the backing store for those file systems. If the prior
>>> then I would want nobarrier on some file systems, if the latter, I
>>> might want the default. But it sounds like the switch applies
>>> to the former -- meaning I don't want them for partitions that
>>> don't need them.
>>
>> "barrier" means "the xfs filesystem will send barrier requests to the
>> storage." It does this at critical points during updates to ensure
>> that data is /permanently/ stored on disk when required - for metadata
>> consistency and/or for data permanence.
>>
>> If the storage doesn't need barriers, they'll simply be ignored.
> ---
> How can that be determined? If xfs was able to determine
> the need for barriers or not, then why can't it determine something
> as simple as disk alignment and ensure writes are on optimal boundaries?
xfs does not determine the need for barriers; whether they are sent is
governed solely by the specified mount option.
If they are sent by xfs, the device can choose to ignore them or not.
(ignoring the alignment non sequitur for now)
>> "partitions that don't need them" should be /unaffected/ by their
>> presence, so there's no use in turning them off.
>>
>> Turning them off risks corruption.
> ---
> The only corrupt devices I've had w/xfs were ones
> that had them turned on. Those were > 5 years ago. That
> says to me that there are likely other risks that have had a greater
> possibility for causing corruption than that caused by lack or
> presence of barriers.
*shrug*
You seem to really want to turn barriers off in some cases. I certainly
can't /make/ you leave it at the safe-and-harmless "on" default. :)
-Eric
* Re: default mount options
2016-11-30 20:04 ` L A Walsh
2016-11-30 20:13 ` Eric Sandeen
@ 2016-11-30 22:18 ` Dave Chinner
2016-12-01 4:04 ` L A Walsh
From: Dave Chinner @ 2016-11-30 22:18 UTC (permalink / raw)
To: L A Walsh; +Cc: Eric Sandeen, linux-xfs
On Wed, Nov 30, 2016 at 12:04:24PM -0800, L A Walsh wrote:
>
>
> Eric Sandeen wrote:
> >
> >> But those systems also, sometimes, change runtime
> >>behavior based on the UPS or battery state -- using write-back on
> >>a full-healthy battery, or write-through when it wouldn't be safe.
> >>
> >> In that case, it seems nobarrier would be a better choice
> >>for those volumes -- letting the controller decide.
> >
> >No. Because then xfs will /never/ send barriers requests, even
> >if the battery dies. So I think you have that backwards.
Let's just get something straight first - there is no "barrier"
operation that is sent to the storage, and Linux does not have
"barriers" anymore. What we now do is strictly order our IO at the
filesystem level and issue cache flush requests to ensure all IO
prior to the cache flush request is on stable storage. We also make
use of FUA writes, which guarantee that a specific write hits stable
storage before the filesystem is told that it is complete (FUA is
emulated with post-IO cache flush requests on devices that don't
support FUA).
This is why "barriers" no longer have a performance cost - we don't
need to empty the IO pipeline to guarantee integrity anymore. And it
should be clear why hardware that has non-volatile caches doesn't care
whether "barriers" are enabled or not: all writes are effectively FUA and
cache flushes are no-ops.
IOWs, "barriers" are an outdated concept and we only still have it
hanging around because we were stupid enough to name a mount option
after an implementation, rather than the feature it provided.
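(In userspace terms the pattern is: make everything written so far
stable, then make the commit record itself stable before claiming
success. At the block layer those two steps show up as a cache flush
and an FUA write -- REQ_PREFLUSH / REQ_FUA flags on kernels of roughly
this vintage. A hedged sketch, not XFS code; the function, fds and
record layout are invented for the example:)

#include <unistd.h>

int commit_transaction(int data_fd, int log_fd,
                       const void *data, size_t dlen,
                       const void *commit_rec, size_t clen)
{
    if (pwrite(data_fd, data, dlen, 0) != (ssize_t)dlen)
        return -1;
    if (fdatasync(data_fd) != 0)    /* "cache flush": data stable first */
        return -1;
    if (pwrite(log_fd, commit_rec, clen, 0) != (ssize_t)clen)
        return -1;
    if (fdatasync(log_fd) != 0)     /* "FUA" role: commit record stable */
        return -1;
    return 0;                       /* only now is the transaction durable */
}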
> ---
> If the battery dies, then the controller shifts
> to write-through and no longer uses its write cache. This is
> documented and observed behavior.
For /some/ RAID controllers in /some/ modes. For example, the
megaraid driver has been ignoring cache flushes for over 9
years because in RAID mode it doesn't need them. However, in JBOD
mode, that same controller requires cache flushes to be sent because
it turns off sane cache management behaviour:
commit 1e793f6fc0db920400574211c48f9157a37e3945
Author: Kashyap Desai <kashyap.desai@broadcom.com>
Date: Fri Oct 21 06:33:32 2016 -0700
scsi: megaraid_sas: Fix data integrity failure for JBOD (passthrough) devices
Commit 02b01e010afe ("megaraid_sas: return sync cache call with
success") modified the driver to successfully complete SYNCHRONIZE_CACHE
commands without passing them to the controller. Disk drive caches are
only explicitly managed by controller firmware when operating in RAID
mode. So this commit effectively disabled writeback cache flushing for
any drives used in JBOD mode, leading to data integrity failures.
This is a clear example of why "barriers" should always be on and
cache flushes always passed through to the storage - because we just
don't know WTF the storage is actually doing with its caches.
Quite frankly, I think it's time we marked the "barrier/nobarrier"
mount options as deprecated and simply always issue the required
cache flushes.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: default mount options
2016-11-30 22:18 ` Dave Chinner
@ 2016-12-01 4:04 ` L A Walsh
2016-12-01 10:50 ` Dave Chinner
From: L A Walsh @ 2016-12-01 4:04 UTC (permalink / raw)
To: Dave Chinner; +Cc: Eric Sandeen, linux-xfs
Dave Chinner wrote:
> For /some/ RAID controllers in /some/ modes. For example, the
> megaraid driver that has been ignoring cache flushes for over 9
> years because in RAID mode it doesn't need it. However, in JBOD
> mode, that same controller requires cache flushes to be sent because
> it turns off sane cache management behaviour in JBOD mode...
---
Lovely. For better or worse, none of my HW-based LSI-raid cards have
been able to do JBOD.
> This is a clear example of why "barriers" should always be on and
> cache flushes always passed through to the storage - because we just
> don't know WTF the storage is actually doing with it's caches.
----
When it comes to JBOD, it's not so clear where caching
helps the most.
Related -- wondering about how external journals would
affect need for barriers. Haven't thought about it much, but it seems
like one would get a lot of bang for the buck by putting journals on
SSD's. _If_ data written to SSD's becomes "safe" as soon as it is
accepted by SSDs (not saying it *is*, but _if_ it is), then how
"needed" is ordering of writes for rotating media -- apart from
special use/need cases like apps maintaining their own data-integrity
journals like DB's or such? Just wondering about how increasing use
of SSD's might affect the need for barriers.
Anyway, I have the illusion that I am mostly clear about
current params (at least until my next disillusioning)... ;-)
-l
* Re: default mount options
2016-12-01 4:04 ` L A Walsh
@ 2016-12-01 10:50 ` Dave Chinner
From: Dave Chinner @ 2016-12-01 10:50 UTC (permalink / raw)
To: L A Walsh; +Cc: Eric Sandeen, linux-xfs
On Wed, Nov 30, 2016 at 08:04:36PM -0800, L A Walsh wrote:
> Related -- wondering about how external journals would affect need
> for barriers.
XFS correctly orders the cache flushes for its different
devices to ensure integrity. e.g. it flushes the caches of the data
device which contains the filesystem metadata before it issues
the FUA write to the external log device.
And it does all the right cache flushes for both data and real-time
devices on fsync to ensure data updates are stable, too.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 9+ messages
2016-11-29 23:51 Fwd: default mount options L.A. Walsh
2016-11-30 0:14 ` Eric Sandeen
2016-11-30 19:27 ` L A Walsh
2016-11-30 19:50 ` Eric Sandeen
2016-11-30 20:04 ` L A Walsh
2016-11-30 20:13 ` Eric Sandeen
2016-11-30 22:18 ` Dave Chinner
2016-12-01 4:04 ` L A Walsh
2016-12-01 10:50 ` Dave Chinner