* Increase in XFS journal flushes with (direct_write;fdatasync)+
@ 2026-05-06 13:26 ` Andres Freund
2026-05-06 15:05 ` Carlos Maiolino
2026-05-07 20:34 ` Pankaj Raghav (Samsung)
0 siblings, 2 replies; 10+ messages in thread
From: Andres Freund @ 2026-05-06 13:26 UTC (permalink / raw)
To: linux-xfs, Carlos Maiolino, Christoph Hellwig, Christian Brauner; +Cc: Pankaj
Hi,
While looking at performance issues on Samsung client drives due to slow FUA,
I tried to reproduce older numbers on a recent kernel. And couldn't, at first
- but not because the problem went away, but because the fdatasync numbers
(which shouldn't use FUA) got *much* worse.
These drives have FUA writes that are slower than full flushes, making
O_DIRECT|O_DSYNC writes perform poorly and fdatasync() comparatively better.
What I'm seeing is that with recent kernels the fdatasync() performance is
roughly as bad as the O_DSYNC, whereas previously it was > 2x as
fast. blktrace showed that there are ongoing FUA writes during a workload
with just overwriting writes and an fdatasync after every write.
At first I thought it was a regression between 7.0..7.1-rc2, but that turned
out to be only because the 7.0 machine did not have lazytime enabled. After
fixing that discrepancy, the regression is also visible in 7.0. I have
confirmed it's not visible in 6.18.
Repro Workload:
fio --directory ${mountpoint}/fio/ --overwrite 1 --size=$((4096*123)) --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1 |grep IOPS
On v7.1-rc2-5-g6d35786de2811:
mounted with lazytime:
write: IOPS=158, BW=636KiB/s (651kB/s)(492KiB/774msec); 0 zone resets
mounted with nolazytime:
write: IOPS=594, BW=2377KiB/s (2434kB/s)(492KiB/207msec); 0 zone resets
Running it with perf stat and a few events [1] shows:
using lazytime
write: IOPS=174, BW=697KiB/s (714kB/s)(492KiB/706msec); 0 zone resets
Performance counter stats for 'fio --directory /srv/fio/ --overwrite 1 --size=503808 --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1':
123 syscalls:sys_enter_pwrite64
122 syscalls:sys_exit_fdatasync
121 xfs:xlog_iclog_write
121 xfs:xlog_iclog_sync
123 xfs:xfs_file_direct_write
122 xfs:xfs_update_time
122 xfs:xfs_log_reserve
123 xfs:xfs_trans_add_item
8 writeback:writeback_dirty_inode
122 xfs:xfs_trans_commit
1.170287744 seconds time elapsed
0.192673000 seconds user
0.054510000 seconds sys
using nolazytime
write: IOPS=672, BW=2689KiB/s (2753kB/s)(492KiB/183msec); 0 zone resets
Performance counter stats for 'fio --directory /srv/fio/ --overwrite 1 --size=503808 --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1':
123 syscalls:sys_enter_pwrite64
122 syscalls:sys_exit_fdatasync
1 xfs:xlog_iclog_write
1 xfs:xlog_iclog_sync
123 xfs:xfs_file_direct_write
55 xfs:xfs_update_time
55 xfs:xfs_log_reserve
55 xfs:xfs_trans_add_item
7 writeback:writeback_dirty_inode
55 xfs:xfs_trans_commit
0.667253953 seconds time elapsed
0.160385000 seconds user
0.061264000 seconds sys
The relevant difference presumably is that lazytime has a lot more log
flushes (xfs:xlog_iclog_sync).
ext4 does not show that behaviour.
Presumably this happened as part of
commit 74554251dfc9374ebf1a9dfc54d6745d56bb9265
Merge: 996812c453caf 77ef2c3ff5916
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: 2026-02-09 11:25:01 -0800
Merge tag 'vfs-7.0-rc1.nonblocking_timestamps' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs timestamp updates from Christian Brauner:
"This contains the changes to support non-blocking timestamp updates.
Or in one of the followup fixes. Hence CCing the folks involved in that.
Greetings,
Andres Freund
[1] mountpoint=/srv; for opt in lazytime nolazytime; do echo "using $opt"; mount $mountpoint -o remount,$opt && perf stat -e syscalls:sys_enter_pwrite64,syscalls:sys_exit_fdatasync,xfs:xlog_iclog_write,xfs:xlog_iclog_sync,xfs:xfs_file_direct_write,xfs:xfs_update_time,xfs:xfs_log_reserve,xfs:xfs_trans_add_item,writeback:writeback_dirty_inode,xfs:xfs_trans_commit fio --directory ${mountpoint}/fio/ --overwrite 1 --size=$((4096*123)) --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1 |grep IOPS || break;done
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-06 13:26 ` Increase in XFS journal flushes with (direct_write;fdatasync)+ Andres Freund
@ 2026-05-06 15:05 ` Carlos Maiolino
2026-05-07 20:34 ` Pankaj Raghav (Samsung)
1 sibling, 0 replies; 10+ messages in thread
From: Carlos Maiolino @ 2026-05-06 15:05 UTC (permalink / raw)
To: Andres Freund; +Cc: linux-xfs, Christoph Hellwig, Christian Brauner, Pankaj
On Wed, May 06, 2026 at 09:26:25AM -0400, Andres Freund wrote:
> Hi,
>
> While looking at performance issues on Samsung client drives due to slow FUA,
> I tried to reproduce older numbers on a recent kernel. And couldn't, at first
> - but not because the problem went away, but because the fdatasync numbers
> (which shouldn't use FUA) got *much* worse.
>
> These drives have FUA writes that are slower than full flushes, making
> O_DIRECT|O_DSYNC writes perform poorly and fdatasync() comparatively better.
>
>
> What I'm seeing is that with recent kernels the fdatasync() performance is
> roughly as bad as the O_DSYNC, whereas previously it was > 2x as
> fast. blktrace showed that there are ongoing FUA writes during a workload
> with just overwriting writes and an fdatasync after every write.
>
>
> At first I thought it was a regression between 7.0..7.1-rc2, but that turned
> out to be only because the 7.0 machine did not have lazytime enabled. After
> fixing that discrepancy, the regression is also visible in 7.0. I have
> confirmed it's not visible in 6.18.
>
> Repro Workload:
>
> fio --directory ${mountpoint}/fio/ --overwrite 1 --size=$((4096*123)) --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1 |grep IOPS
>
> On v7.1-rc2-5-g6d35786de2811:
>
> mounted with lazytime:
> write: IOPS=158, BW=636KiB/s (651kB/s)(492KiB/774msec); 0 zone resets
>
> mounted with nolazytime:
> write: IOPS=594, BW=2377KiB/s (2434kB/s)(492KiB/207msec); 0 zone resets
>
>
>
> Running it with perf stat and a few events [1] shows:
>
> using lazytime
> write: IOPS=174, BW=697KiB/s (714kB/s)(492KiB/706msec); 0 zone resets
>
> Performance counter stats for 'fio --directory /srv/fio/ --overwrite 1 --size=503808 --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1':
>
> 123 syscalls:sys_enter_pwrite64
> 122 syscalls:sys_exit_fdatasync
> 121 xfs:xlog_iclog_write
> 121 xfs:xlog_iclog_sync
> 123 xfs:xfs_file_direct_write
> 122 xfs:xfs_update_time
> 122 xfs:xfs_log_reserve
> 123 xfs:xfs_trans_add_item
> 8 writeback:writeback_dirty_inode
> 122 xfs:xfs_trans_commit
>
> 1.170287744 seconds time elapsed
>
> 0.192673000 seconds user
> 0.054510000 seconds sys
>
>
> using nolazytime
> write: IOPS=672, BW=2689KiB/s (2753kB/s)(492KiB/183msec); 0 zone resets
>
> Performance counter stats for 'fio --directory /srv/fio/ --overwrite 1 --size=503808 --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1':
>
> 123 syscalls:sys_enter_pwrite64
> 122 syscalls:sys_exit_fdatasync
> 1 xfs:xlog_iclog_write
> 1 xfs:xlog_iclog_sync
> 123 xfs:xfs_file_direct_write
> 55 xfs:xfs_update_time
> 55 xfs:xfs_log_reserve
> 55 xfs:xfs_trans_add_item
> 7 writeback:writeback_dirty_inode
> 55 xfs:xfs_trans_commit
>
> 0.667253953 seconds time elapsed
>
> 0.160385000 seconds user
> 0.061264000 seconds sys
>
>
> The relevant difference presumably is that lazytime has a lot more log
> flushes (xfs:xlog_iclog_sync).
>
>
> ext4 does not show that behaviour.
>
>
> Presumably this happened as part of
>
> commit 74554251dfc9374ebf1a9dfc54d6745d56bb9265
> Merge: 996812c453caf 77ef2c3ff5916
> Author: Linus Torvalds <torvalds@linux-foundation.org>
> Date: 2026-02-09 11:25:01 -0800
>
> Merge tag 'vfs-7.0-rc1.nonblocking_timestamps' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
>
> Pull vfs timestamp updates from Christian Brauner:
> "This contains the changes to support non-blocking timestamp updates.
>
> Or in one of the followup fixes. Hence CCing the folks involved in that.
Thanks for the info, I'll look into it next week assuming nobody looks
into it first.
>
>
> Greetings,
>
> Andres Freund
>
>
> [1] mountpoint=/srv; for opt in lazytime nolazytime; do echo "using $opt"; mount $mountpoint -o remount,$opt && perf stat -e syscalls:sys_enter_pwrite64,syscalls:sys_exit_fdatasync,xfs:xlog_iclog_write,xfs:xlog_iclog_sync,xfs:xfs_file_direct_write,xfs:xfs_update_time,xfs:xfs_log_reserve,xfs:xfs_trans_add_item,writeback:writeback_dirty_inode,xfs:xfs_trans_commit fio --directory ${mountpoint}/fio/ --overwrite 1 --size=$((4096*123)) --buffered 0 --bs=4096 --rw=write --name write-fdatasync --fdatasync=1 |grep IOPS || break;done
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-06 13:26 ` Increase in XFS journal flushes with (direct_write;fdatasync)+ Andres Freund
2026-05-06 15:05 ` Carlos Maiolino
@ 2026-05-07 20:34 ` Pankaj Raghav (Samsung)
2026-05-08 8:10 ` Christoph Hellwig
1 sibling, 1 reply; 10+ messages in thread
From: Pankaj Raghav (Samsung) @ 2026-05-07 20:34 UTC (permalink / raw)
To: Andres Freund
Cc: linux-xfs, Carlos Maiolino, Christoph Hellwig, Christian Brauner,
gost.dev, p.raghav
On Wed, May 06, 2026 at 09:26:25AM -0400, Andres Freund wrote:
> Hi,
>
> While looking at performance issues on Samsung client drives due to slow FUA,
> I tried to reproduce older numbers on a recent kernel. And couldn't, at first
> - but not because the problem went away, but because the fdatasync numbers
> (which shouldn't use FUA) got *much* worse.
>
> These drives have FUA writes that are slower than full flushes, making
> O_DIRECT|O_DSYNC writes perform poorly and fdatasync() comparatively better.
>
>
> What I'm seeing is that with recent kernels the fdatasync() performance is
> roughly as bad as the O_DSYNC, whereas previously it was > 2x as
> fast. blktrace showed that there are ongoing FUA writes during a workload
> with just overwriting writes and an fdatasync after every write.
>
>
> At first I thought it was a regression between 7.0..7.1-rc2, but that turned
> out to be only because the 7.0 machine did not have lazytime enabled. After
> fixing that discrepancy, the regression is also visible in 7.0. I have
> confirmed it's not visible in 6.18.
I was able to reproduce this issue. The commit causing the issue is
indeed from nonblocking timestamps series as you indicated
(fs: add support for non-blocking timestamp updates).
In inode_update_cmtime, we have the following changes as a part of the
series:
	...
	mtime_changed = !timespec64_equal(&now, &mtime);
	if (mtime_changed || !timespec64_equal(&now, &ctime))
		dirty = inode_time_dirty_flag(inode);		// #1

	/*
	 * Pure timestamp updates can be recorded in the inode without blocking
	 * by not dirtying the inode. But when the file system requires
	 * i_version updates, the update of i_version can still block.
	 * Error out if we'd actually have to update i_version or don't support
	 * lazytime.
	 */
	if (IS_I_VERSION(inode)) {
		if (flags & IOCB_NOWAIT) {
			if (!(inode->i_sb->s_flags & SB_LAZYTIME) ||
			    inode_iversion_need_inc(inode))
				return -EAGAIN;
		} else {
			if (inode_maybe_inc_iversion(inode, !!dirty))	// #2
				dirty |= I_DIRTY_SYNC;
		}
	}
	...
In the above snippet, in #1 we set dirty = I_DIRTY_TIME if SB_LAZYTIME
is set and in #2 we do a force increment on iversion for any non-zero
dirty values, including I_DIRTY_TIME alone.
I think the fix is to use "dirty != I_DIRTY_TIME" as the force parameter? This passes
false for pure lazytime updates (allowing the I_VERSION_QUERIED optimization
to work), while still forcing the increment when dirty contains other flags
indicating real changes.
diff --git a/fs/inode.c b/fs/inode.c
index 6a3cbc7dcd28..e9b3b2febb58 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2124,7 +2124,7 @@ static int inode_update_cmtime(struct inode *inode, unsigned int flags)
 			    inode_iversion_need_inc(inode))
 				return -EAGAIN;
 		} else {
-			if (inode_maybe_inc_iversion(inode, !!dirty))
+			if (inode_maybe_inc_iversion(inode, dirty != I_DIRTY_TIME))
 				dirty |= I_DIRTY_SYNC;
 		}
 	}
This fix seems to reduce the number of flush calls and to fix the regression.
@carlos and @hch let me know if this is the correct fix or I am just
suppressing the symptom and not fixing the root cause.
--
Pankaj
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-07 20:34 ` Pankaj Raghav (Samsung)
@ 2026-05-08 8:10 ` Christoph Hellwig
2026-05-08 8:29 ` Pankaj Raghav
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2026-05-08 8:10 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: Andres Freund, linux-xfs, Carlos Maiolino, Christoph Hellwig,
Christian Brauner, gost.dev, p.raghav
On Thu, May 07, 2026 at 10:34:43PM +0200, Pankaj Raghav (Samsung) wrote:
> This fix seems to reduce the number of flush calls and to fix the regression.
> @carlos and @hch let me know if this is the correct fix or I am just
> suppressing the symptom and not fixing the root cause.
This looks good from a very quick look. It'll need testing, a line
length fix and preferably a comment as a reminder and should be good
to go.
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-08 8:10 ` Christoph Hellwig
@ 2026-05-08 8:29 ` Pankaj Raghav
2026-05-08 8:43 ` Christoph Hellwig
0 siblings, 1 reply; 10+ messages in thread
From: Pankaj Raghav @ 2026-05-08 8:29 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Andres Freund, linux-xfs, Carlos Maiolino, Christian Brauner,
gost.dev, p.raghav
On 5/8/26 10:10, Christoph Hellwig wrote:
> On Thu, May 07, 2026 at 10:34:43PM +0200, Pankaj Raghav (Samsung) wrote:
>> This fix seems to reduce the number of flush calls and to fix the regression.
>> @carlos and @hch let me know if this is the correct fix or I am just
>> suppressing the symptom and not fixing the root cause.
>
> This looks good from a very quick look. It'll need testing, a line
> length fix and preferably a comment as a reminder and should be good
> to go.
>
What about this:
diff --git a/fs/inode.c b/fs/inode.c
index 6a3cbc7dcd28..1b373fe1100d 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2124,7 +2124,12 @@ static int inode_update_cmtime(struct inode *inode, unsigned int flags)
 			    inode_iversion_need_inc(inode))
 				return -EAGAIN;
 		} else {
-			if (inode_maybe_inc_iversion(inode, !!dirty))
+			/*
+			 * Don't force iversion increment for pure lazytime
+			 * updates (when dirty is set to I_DIRTY_TIME only).
+			 */
+			if (inode_maybe_inc_iversion(inode,
+						     dirty != I_DIRTY_TIME))
 				dirty |= I_DIRTY_SYNC;
 		}
 	}
If your tests are passing, then I can send a fix as a separate patch.
--
Pankaj
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-08 8:29 ` Pankaj Raghav
@ 2026-05-08 8:43 ` Christoph Hellwig
2026-05-08 11:42 ` Jeff Layton
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2026-05-08 8:43 UTC (permalink / raw)
To: Pankaj Raghav
Cc: Christoph Hellwig, Andres Freund, linux-xfs, Carlos Maiolino,
Christian Brauner, gost.dev, p.raghav, Jeff Layton
On Fri, May 08, 2026 at 10:29:38AM +0200, Pankaj Raghav wrote:
> 		} else {
> -			if (inode_maybe_inc_iversion(inode, !!dirty))
> +			/*
> +			 * Don't force iversion increment for pure lazytime
> +			 * updates (when dirty is set to I_DIRTY_TIME only).
> +			 */
> +			if (inode_maybe_inc_iversion(inode,
> +						     dirty != I_DIRTY_TIME))
> 				dirty |= I_DIRTY_SYNC;
> 		}
> 	}
>
> If your tests are passing, then I can send a fix as a separate patch.
The comment needs to explain the why and not the how. AFAICS the
why is that lazytime is not propagated to the disk at this mount,
so incrementing i_version should not happen, but I'm adding Jeff
for insights.
>
> --
> Pankaj
>
---end quoted text---
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-08 8:43 ` Christoph Hellwig
@ 2026-05-08 11:42 ` Jeff Layton
2026-05-08 11:47 ` Pankaj Raghav
0 siblings, 1 reply; 10+ messages in thread
From: Jeff Layton @ 2026-05-08 11:42 UTC (permalink / raw)
To: Christoph Hellwig, Pankaj Raghav
Cc: Andres Freund, linux-xfs, Carlos Maiolino, Christian Brauner,
gost.dev, p.raghav
On Fri, 2026-05-08 at 10:43 +0200, Christoph Hellwig wrote:
> On Fri, May 08, 2026 at 10:29:38AM +0200, Pankaj Raghav wrote:
> > 		} else {
> > -			if (inode_maybe_inc_iversion(inode, !!dirty))
> > +			/*
> > +			 * Don't force iversion increment for pure lazytime
> > +			 * updates (when dirty is set to I_DIRTY_TIME only).
> > +			 */
> > +			if (inode_maybe_inc_iversion(inode,
> > +						     dirty != I_DIRTY_TIME))
> > 				dirty |= I_DIRTY_SYNC;
> > 		}
> > 	}
> >
> > If your tests are passing, then I can send a fix as a separate patch.
>
> The comment needs to explain the why and not the how. AFAICS the
> why is that lazytime is not propagated to the disk at this mount,
> so incrementing i_version should not happen, but I'm adding Jeff
> for insights.
>
>
That looks correct to me. I think the logic here is:
If we're going to disk anyway, then we might as well force an i_version
update. In the case where we're not (dirty == I_DIRTY_TIME), then we
only want to do an i_version update if someone has queried it. If an
i_version update does occur, then we need to go to disk by setting
I_DIRTY_SYNC.
--
Jeff Layton <jlayton@kernel.org>
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-08 11:42 ` Jeff Layton
@ 2026-05-08 11:47 ` Pankaj Raghav
2026-05-11 8:56 ` Christoph Hellwig
0 siblings, 1 reply; 10+ messages in thread
From: Pankaj Raghav @ 2026-05-08 11:47 UTC (permalink / raw)
To: Jeff Layton, Christoph Hellwig
Cc: Andres Freund, linux-xfs, Carlos Maiolino, Christian Brauner,
gost.dev, p.raghav
>> The comment needs to explain the why and not the how. AFAICS the
>> why is that lazytime is not propagated to the disk at this mount,
>> so incrementing i_version should not happen, but I'm adding Jeff
>> for insights.
>>
>>
>
> That looks correct to me. I think the logic here is:
>
> If we're going to disk anyway, then we might as well force an i_version
> update. In the case where we're not (dirty == I_DIRTY_TIME), then we
> only want to do an i_version update if someone has queried it. If an
> i_version update does occur, then we need to go to disk by setting
> I_DIRTY_SYNC.
Does this reflect the why:
diff --git a/fs/inode.c b/fs/inode.c
index 6a3cbc7dcd28..62c579a0cf7d 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2124,7 +2124,13 @@ static int inode_update_cmtime(struct inode *inode, unsigned int flags)
 			    inode_iversion_need_inc(inode))
 				return -EAGAIN;
 		} else {
-			if (inode_maybe_inc_iversion(inode, !!dirty))
+			/*
+			 * Don't force iversion increment for pure lazytime
+			 * updates (I_DIRTY_TIME only), let I_VERSION_QUERIED
+			 * dictate whether the increment is needed.
+			 */
+			if (inode_maybe_inc_iversion(inode,
+						     dirty != I_DIRTY_TIME))
 				dirty |= I_DIRTY_SYNC;
 		}
 	}
--
Pankaj
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-08 11:47 ` Pankaj Raghav
@ 2026-05-11 8:56 ` Christoph Hellwig
2026-05-11 10:31 ` Pankaj Raghav
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:56 UTC (permalink / raw)
To: Pankaj Raghav
Cc: Jeff Layton, Christoph Hellwig, Andres Freund, linux-xfs,
Carlos Maiolino, Christian Brauner, gost.dev, p.raghav
On Fri, May 08, 2026 at 01:47:58PM +0200, Pankaj Raghav wrote:
> -			if (inode_maybe_inc_iversion(inode, !!dirty))
> +			/*
> +			 * Don't force iversion increment for pure lazytime
> +			 * updates (I_DIRTY_TIME only), let I_VERSION_QUERIED
> +			 * dictate whether the increment is needed.
> +			 */
> +			if (inode_maybe_inc_iversion(inode,
> +						     dirty != I_DIRTY_TIME))
Looks good, thanks!
* Re: Increase in XFS journal flushes with (direct_write;fdatasync)+
2026-05-11 8:56 ` Christoph Hellwig
@ 2026-05-11 10:31 ` Pankaj Raghav
0 siblings, 0 replies; 10+ messages in thread
From: Pankaj Raghav @ 2026-05-11 10:31 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jeff Layton, Andres Freund, linux-xfs, Carlos Maiolino,
Christian Brauner, gost.dev, p.raghav
On 5/11/26 10:56, Christoph Hellwig wrote:
> On Fri, May 08, 2026 at 01:47:58PM +0200, Pankaj Raghav wrote:
>> -			if (inode_maybe_inc_iversion(inode, !!dirty))
>> +			/*
>> +			 * Don't force iversion increment for pure lazytime
>> +			 * updates (I_DIRTY_TIME only), let I_VERSION_QUERIED
>> +			 * dictate whether the increment is needed.
>> +			 */
>> +			if (inode_maybe_inc_iversion(inode,
>> +						     dirty != I_DIRTY_TIME))
>
> Looks good, thanks!
Perfect. I will send a separate patch with the Fixes tag in it.
--
Pankaj