Date: Mon, 16 May 2016 06:55:32 -0700
Subject: Re: [PATCH] tmpfs: don't undo fallocate past its last page
From: Anthony Romano <anthony.romano@coreos.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins, linux-mm@kvack.org, linux-kernel@vger.kernel.org

The code for shmem_undo_range is very similar to truncate_inode_pages_range,
so I assume that's why it's using an inclusive range.

It appears the bug was introduced in commit
1635f6a74152f1dcd1b888231609d64875f0a81a

On Mon, May 16, 2016 at 4:59 AM, Vlastimil Babka <vbabka@suse.cz> wrote:
> On 05/08/2016 03:16 PM, Anthony Romano wrote:
>> When fallocate is interrupted it will undo a range that extends one byte
>> past its range of allocated pages. This can corrupt an in-use page by
>> zeroing out its first byte. Instead, undo using the inclusive byte range.
>
> Huh, good catch. So why is shmem_undo_range() adding +1 to the value in
> the first place? The only other caller is shmem_truncate_range() and all
> *its* callers do subtract 1 to avoid the same issue. So a nicer fix would
> be to remove all this +1/-1 madness. Or is there some subtle corner case
> I'm missing?
>
>> Signed-off-by: Anthony Romano <anthony.romano@coreos.com>
>
> Looks like a stable candidate patch. Can you point out the commit that
> introduced the bug, for the Fixes: tag?
>
> Thanks,
> Vlastimil
>
>> ---
>>  mm/shmem.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 719bd6b..f0f9405 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>>  			/* Remove the !PageUptodate pages we added */
>>  			shmem_undo_range(inode,
>>  					(loff_t)start << PAGE_SHIFT,
>> -					(loff_t)index << PAGE_SHIFT, true);
>> +					((loff_t)index << PAGE_SHIFT) - 1, true);
>>  			goto undone;
>>  		}