From: Christoph Hellwig <hch@lst.de>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org, hch@lst.de, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH] [RFC] iomap: Use FUA for pure data O_DSYNC DIO writes
Date: Fri, 2 Mar 2018 23:20:31 +0100 [thread overview]
Message-ID: <20180302222031.GA30818@lst.de> (raw)
In-Reply-To: <20180301014144.28892-1-david@fromorbit.com>
> @@ -760,8 +761,19 @@ static ssize_t iomap_dio_complete(struct iomap_dio *dio)
> }
>
> inode_dio_end(file_inode(iocb->ki_filp));
> - kfree(dio);
>
> + /*
> + * If a FUA write was done, then that is all we required for datasync
> + * semantics - we don't need to call generic_write_sync() to complete
> + * the write.
> + */
> + if (ret > 0 &&
> + (dio->flags & (IOMAP_DIO_WRITE|IOMAP_DIO_WRITE_FUA)) ==
> + IOMAP_DIO_WRITE) {
> + ret = generic_write_sync(iocb, ret);
> + }
> +
> + kfree(dio);
Can you please split the move of the generic_write_sync call into
a separate prep patch? It's enough of a logic change on its own that
it warrants a separate commit with a separate explanation.
Also I'd be tempted to invert the IOMAP_DIO_WRITE_FUA flag and replace
it with an IOMAP_DIO_WRITE_SYNC flag to indicate we need the
generic_write_sync call, as that should make the logic much more clear.
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 260ff5e5c264..81aa3b73471e 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -732,6 +732,11 @@ xfs_file_write_iter(
> ret = xfs_file_dio_aio_write(iocb, from);
> if (ret == -EREMCHG)
> goto buffered;
> + /*
> + * Direct IO handles sync type writes internally on I/O
> + * completion.
> + */
> + return ret;
> } else {
> buffered:
> ret = xfs_file_buffered_aio_write(iocb, from);
The else is not needed and you can now have a much more sensible
code flow here:
		ret = xfs_file_dio_aio_write(iocb, from);
		if (ret != -EREMCHG)
			return ret;
	}

	ret = xfs_file_buffered_aio_write(iocb, from);