From: Matthew Wilcox <willy@infradead.org>
To: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Qian Cai <cai@redhat.com>,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
"Darrick J. Wong" <darrick.wong@oracle.com>,
Christoph Hellwig <hch@infradead.org>,
Brian Foster <bfoster@redhat.com>
Subject: Re: [PATCH] iomap: Set all uptodate bits for an Uptodate page
Date: Sun, 27 Sep 2020 14:54:21 +0100
Message-ID: <20200927135421.GD7714@casper.infradead.org>
In-Reply-To: <CA+icZUWSHf9YbkuEYeG4azSrPt=GYu-MmHxj3+uGvxPW-HHjjQ@mail.gmail.com>

On Sun, Sep 27, 2020 at 03:48:39PM +0200, Sedat Dilek wrote:
> With your patch and assertion diff I hit the same issue as with Ext4-FS...
>
> [So Sep 27 15:40:18 2020] run fstests generic/095 at 2020-09-27 15:40:19
> [So Sep 27 15:40:26 2020] XFS (sdb1): Mounting V5 Filesystem
> [So Sep 27 15:40:26 2020] XFS (sdb1): Ending clean mount
> [So Sep 27 15:40:26 2020] xfs filesystem being mounted at /mnt/scratch
> supports timestamps until 2038 (0x7fffffff)
> [So Sep 27 15:40:28 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:28 2020] File: /mnt/scratch/file1 PID: 12 Comm: kworker/0:1
> [So Sep 27 15:40:29 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:29 2020] File: /mnt/scratch/file1 PID: 73 Comm: kworker/0:2
> [So Sep 27 15:40:30 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:30 2020] File: /mnt/scratch/file2 PID: 12 Comm: kworker/0:1
> [So Sep 27 15:40:30 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:30 2020] File: /mnt/scratch/file2 PID: 3271 Comm: fio
> [So Sep 27 15:40:30 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:30 2020] File: /mnt/scratch/file2 PID: 3273 Comm: fio
> [So Sep 27 15:40:31 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:31 2020] File: /mnt/scratch/file1 PID: 3308 Comm: fio
> [So Sep 27 15:40:36 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:36 2020] File: /mnt/scratch/file1 PID: 73 Comm: kworker/0:2
> [So Sep 27 15:40:43 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:43 2020] File: /mnt/scratch/file1 PID: 73 Comm: kworker/0:2
> [So Sep 27 15:40:52 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:52 2020] File: /mnt/scratch/file2 PID: 73 Comm: kworker/0:2
> [So Sep 27 15:40:56 2020] Page cache invalidation failure on direct
> I/O. Possible data corruption due to collision with buffered I/O!
> [So Sep 27 15:40:56 2020] File: /mnt/scratch/file2 PID: 12 Comm: kworker/0:1
>
> Is that a different issue?

The test is expected to emit those messages; userspace has done something
so utterly bonkers (direct I/O to an mmapped, mlocked page) that we can't
provide the normal guarantees of data integrity.
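
For context, generic/095 uses fio to mix buffered, direct, and mmap I/O
on the same files.  A minimal sketch of the collision it provokes (my
own illustration with made-up file names, not the actual fio job) looks
roughly like this:

/*
 * Hypothetical sketch of the generic/095 collision, not the real test:
 * one mapping of the file is mlocked and dirtied while another fd
 * writes the same range with O_DIRECT.  The direct write path has to
 * invalidate the cached page afterwards; if the page is redirtied
 * through the mapping in the meantime, invalidation fails and the
 * kernel logs the "Page cache invalidation failure" warning above.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int dfd = open("file1", O_RDWR | O_CREAT | O_DIRECT, 0644);
	int bfd = open("file1", O_RDWR);
	void *buf;
	char *map;

	if (dfd < 0 || bfd < 0 || ftruncate(bfd, 4096))
		return 1;

	/* Buffered side: an mlocked, writable mapping of the page. */
	map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, bfd, 0);
	if (map == MAP_FAILED || mlock(map, 4096))
		return 1;
	map[0] = 'a';			/* dirty the page via the mapping */

	/* Direct side: aligned buffer, aligned offset, same range. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 'b', 4096);
	if (pwrite(dfd, buf, 4096, 0) != 4096)
		return 1;

	/*
	 * Run sequentially this may pass quietly; in the test, fio
	 * threads race the mmap stores against the direct writes, so
	 * the post-write invalidation finds a dirty page and fails.
	 */
	return 0;
}

Nothing here is broken in the kernel; the warning is exactly the
diagnostic we want for a workload that mixes the two paths.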