From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz, Muchun Song, Andrew Morton
Subject: [PATCH 5.10 404/452] hugetlb: fix huge_pmd_unshare address update
Date: Tue, 7 Jun 2022 19:04:21 +0200
Message-Id: <20220607164920.599354616@linuxfoundation.org>
In-Reply-To: <20220607164908.521895282@linuxfoundation.org>
References: <20220607164908.521895282@linuxfoundation.org>

From: Mike Kravetz

commit 48381273f8734d28ef56a5bdf1966dd8530111bc upstream.

The routine huge_pmd_unshare() is passed a pointer to an address
associated with an area which may be unshared.  If unshare is successful,
this address is updated to 'optimize' callers iterating over huge page
addresses.  For the optimization to work correctly, the address should be
updated to the last huge page in the unmapped/unshared area.  However, in
the common case where the passed address is PUD_SIZE aligned, the address
is incorrectly updated to the address of the preceding huge page.  That
wastes CPU cycles, as the unmapped/unshared range is scanned twice.
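To make the off-by-one concrete, here is a minimal user-space sketch of
the two updates. It assumes x86-64 geometry (2 MiB PMD_SIZE, 1 GiB
PUD_SIZE) and an ALIGN() macro mirroring the kernel's round-up
semantics; it is not kernel code, it only models the arithmetic. For a
PUD_SIZE aligned address, ALIGN() is a no-op, so the old expression
steps back to the huge page preceding the cleared area, while the
bitwise OR lands on the last huge page inside it:

#include <stdio.h>

/* assumed x86-64 geometry: 2 MiB huge pages, 512 PMD entries per PUD */
#define PMD_SIZE	(2UL << 20)
#define PUD_SIZE	(1UL << 30)
/* round x up to a multiple of a, as the kernel's ALIGN() does */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long addr = 4 * PUD_SIZE; /* PUD_SIZE aligned: the common case */

	/* old update: ALIGN() of an already aligned address is a no-op,
	 * so this steps back to the huge page *preceding* the cleared area */
	unsigned long old_addr = ALIGN(addr, PUD_SIZE) - PMD_SIZE;

	/* new update: set every PMD-index bit, yielding the *last* huge
	 * page inside the cleared PUD_SIZE area */
	unsigned long new_addr = addr | (PUD_SIZE - PMD_SIZE);

	printf("addr %#lx  old %#lx  new %#lx\n", addr, old_addr, new_addr);
	return 0;
}

Running it prints old as addr - 2 MiB (one page too early) and new as
addr + 1 GiB - 2 MiB, the last huge page of the unmapped area.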
Link: https://lkml.kernel.org/r/20220524205003.126184-1-mike.kravetz@oracle.com
Fixes: 39dde65c9940 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/hugetlb.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5465,7 +5465,14 @@ int huge_pmd_unshare(struct mm_struct *m
 	pud_clear(pud);
 	put_page(virt_to_page(ptep));
 	mm_dec_nr_pmds(mm);
-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	/*
+	 * This update of passed address optimizes loops sequentially
+	 * processing addresses in increments of huge page size (PMD_SIZE
+	 * in this case).  By clearing the pud, a PUD_SIZE area is unmapped.
+	 * Update address to the 'last page' in the cleared area so that
+	 * calling loop can move to first page past this area.
+	 */
+	*addr |= PUD_SIZE - PMD_SIZE;
 	return 1;
 }
 #define want_pmd_share()	(1)
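The "scanned twice" cost is easiest to see from the caller's side. The
toy model below is not the kernel's unmap loop; unshare() is a
hypothetical stand-in for huge_pmd_unshare() that succeeds on the first
visit to each shared PUD region and fails afterwards, which is enough
to count loop iterations under both updates:

#include <stdio.h>

#define PMD_SIZE	(2UL << 20)
#define PUD_SIZE	(1UL << 30)
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* hypothetical stand-in for huge_pmd_unshare(): the first visit to a
 * shared PUD region "unshares" it and updates *addr, later visits fail */
static int unshare(unsigned long *addr, unsigned long base, int shared[],
		   int fixed)
{
	unsigned long idx = (*addr - base) / PUD_SIZE;

	if (!shared[idx])
		return 0;
	shared[idx] = 0;
	if (fixed)
		*addr |= PUD_SIZE - PMD_SIZE;		   /* patched update */
	else
		*addr = ALIGN(*addr, PUD_SIZE) - PMD_SIZE; /* buggy update */
	return 1;
}

int main(void)
{
	unsigned long base = 4 * PUD_SIZE;

	for (int fixed = 0; fixed <= 1; fixed++) {
		int shared[2] = { 1, 1 };	/* two shared PUD regions */
		long iters = 0;

		/* the caller pattern the update targets: walk the range
		 * in huge page (PMD_SIZE) steps, skipping unshared areas */
		for (unsigned long addr = base; addr < base + 2 * PUD_SIZE;
		     addr += PMD_SIZE) {
			iters++;
			if (unshare(&addr, base, shared, fixed))
				continue;
		}
		printf("%s: %ld iterations\n", fixed ? "fixed" : "buggy",
		       iters);
	}
	return 0;
}

With the buggy update the loop falls back to the start of each
just-cleared region and re-walks all 512 huge pages in it (1026
iterations for the two regions); with the fix it moves straight past
each region (2 iterations).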