From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1f12f6f4-4d35-486d-b6cf-2672ae2c4979@linux.dev>
Date: Sat, 28 Feb 2026 12:44:10 +0800
Subject: Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
To: Dev Jain
Cc: ziy@nvidia.com, david@kernel.org, lorenzo.stoakes@oracle.com,
 akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
 Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
 baohua@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20260227143501.1488110-1-dev.jain@arm.com>
From: Lance Yang
In-Reply-To: <20260227143501.1488110-1-dev.jain@arm.com>

On 2026/2/27 22:35, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
>
>   start <= folio->index < start + HPAGE_PMD_NR		... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
>
>   folio_order == HPAGE_PMD_ORDER =>
>   IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true	... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the
> virtual offsets in the VMA are aligned to the order,
>
>   => IS_ALIGNED(start, HPAGE_PMD_NR) == true		... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain
> ---
> v1->v2:
>  - Remove the check instead of converting to VM_WARN_ON
>  - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
>  mm/khugepaged.c | 14 ++++----------
>  1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>  	 * we locked the first folio, then a THP might be there already.
>  	 * This will be discovered on the first iteration.
>  	 */
> -	if (folio_order(folio) == HPAGE_PMD_ORDER &&
> -	    folio->index == start) {
> -		/* Maybe PMD-mapped */
> +	if (folio_order(folio) == HPAGE_PMD_ORDER) {
>  		result = SCAN_PTE_MAPPED_HUGEPAGE;
>  		goto out_unlock;
>  	}
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
>  			continue;
>  		}
>
> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
> -		    folio->index == start) {
> -			/* Maybe PMD-mapped */
> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
>  			result = SCAN_PTE_MAPPED_HUGEPAGE;
>  			/*
> -			 * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> -			 * by the caller won't touch the page cache, and so
> -			 * it's safe to skip LRU and refcount checks before
> -			 * returning.
> +			 * PMD-sized THP implies that we can only try
> +			 * retracting the PTE table.
>  			 */
>  			folio_put(folio);
>  			break;

LGTM! The proof is sound: the combination of the loop invariant, natural
alignment, and the VMA alignment requirement indeed makes the index check
redundant :D

Reviewed-by: Lance Yang