From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, Matthew Wilcox,
    Peter Xu, Ralph Campbell, Wang Yugui, Will Deacon, Yang Shi, Zi Yan,
    Andrew Morton, Linus Torvalds, Greg Kroah-Hartman
Subject: [PATCH 5.4 63/71] mm: page_vma_mapped_walk(): get vma_address_end() earlier
Date: Mon, 28 Jun 2021 10:29:56 -0400
Message-Id: <20210628143004.32596-64-sashal@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210628143004.32596-1-sashal@kernel.org>
References: <20210628143004.32596-1-sashal@kernel.org>
X-KernelTest-Patch: http://kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.4.129-rc1.gz
X-KernelTest-Tree: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
X-KernelTest-Branch: linux-5.4.y
X-KernelTest-Patches: git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
X-KernelTest-Version: 5.4.129-rc1
X-KernelTest-Deadline: 2021-06-30T14:29+00:00
X-stable: review
X-Patchwork-Hint: Ignore
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Hugh Dickins

page_vma_mapped_walk() cleanup: get THP's vma_address_end() at the
start, rather than later at next_pte. It's a little unnecessary
overhead on the first call, but makes for a simpler loop in the
following commit.

Link: https://lkml.kernel.org/r/4542b34d-862f-7cb4-bb22-e0df6ce830a2@google.com
Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc: Alistair Popple
Cc: Matthew Wilcox
Cc: Peter Xu
Cc: Ralph Campbell
Cc: Wang Yugui
Cc: Will Deacon
Cc: Yang Shi
Cc: Zi Yan
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/page_vma_mapped.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index fc68ab2d3ea1..93dd79ba9969 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -166,6 +166,15 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		return true;
 	}
 
+	/*
+	 * Seek to next pte only makes sense for THP.
+	 * But more important than that optimization, is to filter out
+	 * any PageKsm page: whose page->index misleads vma_address()
+	 * and vma_address_end() to disaster.
+	 */
+	end = PageTransCompound(page) ?
+		vma_address_end(page, pvmw->vma) :
+		pvmw->address + PAGE_SIZE;
 	if (pvmw->pte)
 		goto next_pte;
 restart:
@@ -233,10 +242,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (check_pte(pvmw))
 			return true;
 next_pte:
-		/* Seek to next pte only makes sense for THP */
-		if (!PageTransHuge(page))
-			return not_found(pvmw);
-		end = vma_address_end(page, pvmw->vma);
 		do {
 			pvmw->address += PAGE_SIZE;
 			if (pvmw->address >= end)
-- 
2.30.2
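
Editorial sketch (not part of the patch): below is a minimal userspace
illustration of the bound computation this patch introduces. It is not
kernel source; "struct vma", the "trans_compound" flag, PAGE_SIZE as
4096, and the stub vma_address_end() are simplified stand-ins invented
here for the real mm/page_vma_mapped.c types and helpers.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Simplified stand-ins for the kernel's types (illustrative only). */
struct vma {
	unsigned long vm_start, vm_end;
};

struct page {
	bool trans_compound;	/* mocks PageTransCompound(page) */
};

/*
 * Stub for the kernel's vma_address_end(): the real helper derives
 * the end of a compound page's mapping from page->index; this mock
 * just returns the vma end so the example runs.
 */
static unsigned long vma_address_end(struct page *page, struct vma *vma)
{
	(void)page;
	return vma->vm_end;
}

/*
 * What the patch changes: the walk computes its upper bound once, on
 * entry, instead of recomputing it at next_pte.  Only a THP needs
 * vma_address_end(); a small page (including any KSM page, which is
 * never compound) is bounded by a single page, so a KSM page's
 * misleading page->index never reaches vma_address_end().
 */
static unsigned long walk_end(struct page *page, struct vma *vma,
			      unsigned long address)
{
	return page->trans_compound ? vma_address_end(page, vma)
				    : address + PAGE_SIZE;
}

int main(void)
{
	struct vma vma = { .vm_start = 0x1000, .vm_end = 0x5000 };
	struct page small_page = { .trans_compound = false };
	struct page thp = { .trans_compound = true };

	printf("small page: scan ends at %#lx\n",
	       walk_end(&small_page, &vma, 0x2000));	/* 0x3000 */
	printf("THP:        scan ends at %#lx\n",
	       walk_end(&thp, &vma, 0x2000));		/* 0x5000 */
	return 0;
}

The trade-off matches the commit message: computing end up front costs
one vma_address_end() call that the old code skipped whenever the walk
succeeded on its first pte, but it leaves the pte loop with a single
precomputed bound, which the following commit in the series builds on.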