From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from szxga05-in.huawei.com ([45.249.212.191]:2252 "EHLO huawei.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1751846AbdLED0Q
	(ORCPT ); Mon, 4 Dec 2017 22:26:16 -0500
From: "zhangyi (F)"
To:
CC: , , , ,
Subject: [PATCH] dax: fix potential overflow on 32bit machine
Date: Tue, 5 Dec 2017 11:32:10 +0800
Message-ID: <20171205033210.38338-1-yi.zhang@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On a 32-bit machine, mmap2()ing a large enough file with a pgoff greater
than ULONG_MAX >> PAGE_SHIFT makes the left shift in
dax_insert_mapping_entry() overflow, so the wrong page gets unmapped.

Cast pgoff to a 64-bit type before shifting to prevent the overflow.

Signed-off-by: zhangyi (F)
---
 fs/dax.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 78b72c4..8e12848 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -539,10 +539,11 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 		/* we are replacing a zero page with block mapping */
 		if (dax_is_pmd_entry(entry))
 			unmap_mapping_range(mapping,
-					(vmf->pgoff << PAGE_SHIFT) & PMD_MASK,
+					((loff_t)vmf->pgoff << PAGE_SHIFT) & PMD_MASK,
 					PMD_SIZE, 0);
 		else /* pte entry */
-			unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
+			unmap_mapping_range(mapping,
+					(loff_t)vmf->pgoff << PAGE_SHIFT,
 					PAGE_SIZE, 0);
 	}
 
--
2.9.5
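
For reference (not part of the patch): a minimal user-space sketch of the
overflow the commit message describes. Here uint32_t stands in for the
32-bit unsigned long pgoff, int64_t stands in for loff_t, and PAGE_SHIFT
and the pgoff value are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	/* pgoff larger than ULONG_MAX >> PAGE_SHIFT on a 32-bit machine */
	uint32_t pgoff = 0x00200000;

	uint32_t wrong = pgoff << PAGE_SHIFT;          /* wraps to 0x0 in 32 bits */
	int64_t right = (int64_t)pgoff << PAGE_SHIFT;  /* 0x200000000, the intended offset */

	printf("without cast: 0x%08x\n", (unsigned)wrong);
	printf("with cast:    0x%llx\n", (unsigned long long)right);
	return 0;
}

Without the cast the shifted offset passed to unmap_mapping_range() wraps
modulo 2^32, so a different page's mapping is torn down; widening the
operand before the shift keeps the full byte offset.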