Date: Fri, 30 Aug 2024 14:54:16 +0800
Subject: Re: [PATCH v2 07/14] mm: khugepaged: collapse_pte_mapped_thp() use pte_offset_map_rw_nolock()
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Muchun Song
Cc: david@redhat.com, hughd@google.com, willy@infradead.org, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org
In-Reply-To: <24be821f-a95f-47f1-879a-c392a79072cc@linux.dev>
References: <24be821f-a95f-47f1-879a-c392a79072cc@linux.dev>

On 2024/8/29 16:10, Muchun Song wrote:
>
>
> On 2024/8/22 15:13, Qi Zheng wrote:
>> In collapse_pte_mapped_thp(), we may modify the pte and pmd entry
>> after acquiring the ptl, so convert it to using
>> pte_offset_map_rw_nolock(). At this time, the write lock of mmap_lock
>> is not held, and the pte_same() check is not performed after the ptl
>> is held. So we should get pgt_pmd and do a pmd_same() check after the
>> ptl is held.
>>
>> For the case where the ptl is released first and then the pml is
>> acquired, the PTE page may have been freed, so we must do a pmd_same()
>> check before reacquiring the ptl.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>>   mm/khugepaged.c | 16 +++++++++++++++-
>>   1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 53bfa7f4b7f82..15d3f7f3c65f2 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1604,7 +1604,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>       if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
>>           pml = pmd_lock(mm, pmd);
>> -    start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
>> +    start_pte = pte_offset_map_rw_nolock(mm, pmd, haddr, &pgt_pmd, &ptl);
>>       if (!start_pte)        /* mmap_lock + page lock should prevent this */
>>           goto abort;
>>       if (!pml)
>> @@ -1612,6 +1612,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>       else if (ptl != pml)
>>           spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>> +    if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
>> +        goto abort;
>> +
>>       /* step 2: clear page table and adjust rmap */
>>       for (i = 0, addr = haddr, pte = start_pte;
>>            i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
>> @@ -1657,6 +1660,16 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>       /* step 4: remove empty page table */
>>       if (!pml) {
>>           pml = pmd_lock(mm, pmd);
>> +        /*
>> +         * We called pte_unmap() and released the ptl before acquiring
>> +         * the pml, which means we left the RCU critical section, so
>> +         * the PTE page may have been freed. Therefore we must do the
>> +         * pmd_same() check before reacquiring the ptl.
>> +         */
>> +        if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd)))) {
>> +            spin_unlock(pml);
>> +            goto pmd_change;
>
> Seems we forgot to flush the TLB, since we've cleared some pte entries?

See the comment above the ptep_clear():

	/*
	 * Must clear entry, or a racing truncate may re-remove it.
	 * TLB flush can be left until pmdp_collapse_flush() does it.
	 * PTE dirty? Shmem page is already dirty; file is read-only.
	 */

The TLB flush was handed over to pmdp_collapse_flush(). If a concurrent
thread frees the PTE page at this time, the TLB will also be flushed
after pmd_clear().

>
>> +        }
>>           if (ptl != pml)
>>               spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>>       }
>> @@ -1688,6 +1701,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>           pte_unmap_unlock(start_pte, ptl);
>>       if (pml && pml != ptl)
>>           spin_unlock(pml);
>> +pmd_change:
>>       if (notified)
>>           mmu_notifier_invalidate_range_end(&range);
>>   drop_folio:
>