From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Miaohe Lin, David Hildenbrand, Nadav Amit,
	peterx@redhat.com, Andrea Arcangeli, Jann Horn, John Hubbard,
	Mike Kravetz, James Houghton, Rik van Riel, Muchun Song
Subject: [PATCH v3 1/9] mm/hugetlb: Let vma_offset_start() to return start
Date: Fri, 9 Dec 2022 12:00:52 -0500
Message-Id: <20221209170100.973970-2-peterx@redhat.com>
In-Reply-To: <20221209170100.973970-1-peterx@redhat.com>
References: <20221209170100.973970-1-peterx@redhat.com>

Despite its name, vma_offset_start() does not return "the start address
of the range"; it returns the offset that should be added to
vma->vm_start.  Make it return the real start vaddr instead.  This also
simplifies every caller, because wherever the return value is used it is
ultimately added to vma->vm_start anyway.
Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
Signed-off-by: Peter Xu
---
 fs/hugetlbfs/inode.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 790d2727141a..fdb16246f46e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -412,10 +412,12 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
  */
 static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
 {
+	unsigned long offset = 0;
+
 	if (vma->vm_pgoff < start)
-		return (start - vma->vm_pgoff) << PAGE_SHIFT;
-	else
-		return 0;
+		offset = (start - vma->vm_pgoff) << PAGE_SHIFT;
+
+	return vma->vm_start + offset;
 }
 
 static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
@@ -457,7 +459,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		if (!hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
+		if (!hugetlb_vma_maps_page(vma, v_start, page))
 			continue;
 
 		if (!hugetlb_vma_trylock_write(vma)) {
@@ -473,8 +475,8 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 			break;
 		}
 
-		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
-				     NULL, ZAP_FLAG_DROP_MARKER);
+		unmap_hugepage_range(vma, v_start, v_end, NULL,
+				     ZAP_FLAG_DROP_MARKER);
 		hugetlb_vma_unlock_write(vma);
 	}
 
@@ -507,10 +509,9 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		 */
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		if (hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
-			unmap_hugepage_range(vma, vma->vm_start + v_start,
-					     v_end, NULL,
-					     ZAP_FLAG_DROP_MARKER);
+		if (hugetlb_vma_maps_page(vma, v_start, page))
+			unmap_hugepage_range(vma, v_start, v_end, NULL,
+					     ZAP_FLAG_DROP_MARKER);
 
 		kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
 		hugetlb_vma_unlock_write(vma);
@@ -540,8 +541,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
-				     NULL, zap_flags);
+		unmap_hugepage_range(vma, v_start, v_end, NULL, zap_flags);
 
 		/*
 		 * Note that vma lock only exists for shared/non-private
-- 
2.37.3