From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 8 Dec 2021 16:43:01 +0000
From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: Jan Kara, William Kucharski
Subject: Re: [PATCH 46/48] truncate,shmem: Handle truncates that split large folios
References: <20211208042256.1923824-1-willy@infradead.org>
 <20211208042256.1923824-47-willy@infradead.org>
In-Reply-To: <20211208042256.1923824-47-willy@infradead.org>

On Wed, Dec 08, 2021 at 04:22:54AM +0000, Matthew Wilcox (Oracle) wrote:
> @@ -917,13 +904,13 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	pgoff_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
>  	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
> -	unsigned int partial_start = lstart & (PAGE_SIZE - 1);
> -	unsigned int partial_end = (lend + 1) & (PAGE_SIZE - 1);
>  	struct folio_batch fbatch;
>  	pgoff_t indices[PAGEVEC_SIZE];
> +	struct folio *folio;

Adding this function-scope declaration turns a couple of later
block-scope definitions of struct folio in this function into shadowed
definitions.  We don't have -Wshadow turned on, so I didn't notice
until doing more patch review this morning.
I'm going to fold in this patch:

--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -919,7 +919,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end && find_lock_entries(mapping, index, end - 1,
 			&fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
-			struct folio *folio = fbatch.folios[i];
+			folio = fbatch.folios[i];
 
 			index = indices[i];
@@ -985,7 +985,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			continue;
 		}
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
-			struct folio *folio = fbatch.folios[i];
+			folio = fbatch.folios[i];
 
 			index = indices[i];
 			if (xa_is_value(folio)) {