From: Chris Mason <clm@fb.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Eric Sandeen <sandeen@redhat.com>, xfs@oss.sgi.com
Subject: Re: [PATCH RFC] xfs: use invalidate_inode_pages2_range for DIO writes
Date: Fri, 8 Aug 2014 22:42:24 -0400
Message-ID: <53E58A90.7060709@fb.com>
In-Reply-To: <20140809004857.GF26465@dastard>

On 08/08/2014 08:48 PM, Dave Chinner wrote:
> On Fri, Aug 08, 2014 at 12:04:40PM -0400, Chris Mason wrote:
>>
>> xfs is using truncate_pagecache_range to invalidate the page cache
>> during DIO writes. The other filesystems call
>> invalidate_inode_pages2_range().
>>
>> truncate_pagecache_range is meant to be used when we are freeing the
>> underlying data blocks on disk, so it zeros any partial pages in the
>> range instead of removing them. This means a DIO write can zero out
>> part of a page cache page, and that page may then stay in the cache.
>>
>> This one is an RFC because it is untested and because I don't
>> understand how XFS deals with pages the truncate was unable to clear
>> away. I haven't been able to actually trigger zeros by mixing DIO
>> writes with buffered reads.
>>
>> Signed-off-by: Chris Mason <clm@fb.com>
>>
>> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
>> index 8d25d98..c30c112 100644
>> --- a/fs/xfs/xfs_file.c
>> +++ b/fs/xfs/xfs_file.c
>> @@ -638,7 +638,10 @@ xfs_file_dio_aio_write(
>>  						   pos, -1);
>>  		if (ret)
>>  			goto out;
>> -		truncate_pagecache_range(VFS_I(ip), pos, -1);
>> +
>> +		/* what do we do if we can't invalidate the pages? */
>> +		invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
>> +				pos >> PAGE_CACHE_SHIFT, -1);
>
> I don't think it can on XFS.
>
> We're holding the XFS_IOLOCK_EXCL, so no other syscall-based IO can
> dirty pages, all the pages are clean, try_to_free_buffers() will
> never fail, no one can run a truncate operation concurrently, and
> so on.
>
> So, I'd just do:
>
> 	ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
> 				pos >> PAGE_CACHE_SHIFT, -1);
> 	WARN_ON_ONCE(ret);
> 	ret = 0;
>
Since pos is page aligned I agree this should be fine. I'll leave that
one to you though, since I don't have a great test case for/against it.
-chris