Message-ID: <56bf9df5-febf-4bef-966f-d4d71365a18d@gmail.com>
Date: Mon, 6 Jan 2025 10:04:45 +0000
Subject: Re: [RFC PATCH 07/12] khugepaged: Scan PTEs order-wise
From: Usama Arif
To: Dev Jain, akpm@linux-foundation.org, david@redhat.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
 cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com,
 hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com,
 peterx@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com,
 ziy@nvidia.com, jglisse@google.com, surenb@google.com,
 vishal.moola@gmail.com, zokeefe@google.com, zhengqi.arch@bytedance.com,
 jhubbard@nvidia.com, 21cnbao@gmail.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Johannes Weiner
References: <20241216165105.56185-1-dev.jain@arm.com>
 <20241216165105.56185-8-dev.jain@arm.com>
In-Reply-To: <20241216165105.56185-8-dev.jain@arm.com>

On 16/12/2024 16:51, Dev Jain wrote:
> Scan the PTEs order-wise, using the mask of suitable orders for this VMA
> derived in conjunction with sysfs THP settings. Scale down the tunables; in
> case of collapse failure, we drop down to the next order. Otherwise, we try to
> jump to the highest possible order and then start a fresh scan. Note that
> madvise(MADV_COLLAPSE) has not been generalized.
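Before getting into the details, let me check my reading of the traversal:
on collapse failure you retry the same address at the next lower enabled
order, and on success you jump back up to the highest order that the new
address' alignment allows. A userspace model of that loop (my own sketch
with a made-up pick_order() helper, not the patch code) would be:

#include <stdio.h>

#define PMD_ORDER 9
#define MIN_ORDER 2

/* Highest enabled order <= limit; hypothetical helper, not the kernel's. */
static int pick_order(unsigned long orders, unsigned long pfn, int limit)
{
        /* pfn is the offset in base pages from the PMD-aligned start. */
        int align = pfn ? __builtin_ctzl(pfn) : PMD_ORDER;

        if (align < limit)
                limit = align;
        for (int o = limit; o >= MIN_ORDER; o--)
                if (orders & (1UL << o))
                        return o;
        return -1;
}

int main(void)
{
        unsigned long orders = (1UL << 9) | (1UL << 4) | (1UL << 2);
        unsigned long pfn = 0, end = 1UL << PMD_ORDER;
        int order = pick_order(orders, pfn, PMD_ORDER);

        while (order >= MIN_ORDER) {
                /* Pretend only sub-PMD collapses succeed, to show both paths. */
                int ok = order < PMD_ORDER;

                printf("scan at +%3lu pages, order %d: %s\n",
                       pfn, order, ok ? "collapsed" : "failed");
                if (!ok) {
                        /* Failure: same address, next lower enabled order. */
                        order = pick_order(orders, pfn, order - 1);
                        continue;
                }
                pfn += 1UL << order;
                if (pfn == end)
                        break;  /* PMD range exhausted */
                /* Success: jump back up as high as alignment allows. */
                order = pick_order(orders, pfn, PMD_ORDER);
        }
        return 0;
}

If that matches the intent, my question below is mainly about the tunables.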
> 
> Signed-off-by: Dev Jain
> ---
>  mm/khugepaged.c | 84 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 69 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 886c76816963..078794aa3335 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -20,6 +20,7 @@
>  #include
>  #include
>  #include
> +#include
> 
>  #include
>  #include
> @@ -1111,7 +1112,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>  }
> 
>  static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> -                              int referenced, int unmapped,
> +                              int referenced, int unmapped, int order,
>                                struct collapse_control *cc)
>  {
>          LIST_HEAD(compound_pagelist);
> @@ -1278,38 +1279,59 @@ static int hpage_collapse_scan_ptes(struct mm_struct *mm,
>                                      unsigned long address, bool *mmap_locked,
>                                      struct collapse_control *cc)
>  {
> -        pmd_t *pmd;
> -        pte_t *pte, *_pte;
> -        int result = SCAN_FAIL, referenced = 0;
> -        int none_or_zero = 0, shared = 0;
> -        struct page *page = NULL;
> +        unsigned int max_ptes_shared, max_ptes_none, max_ptes_swap;
> +        int referenced, shared, none_or_zero, unmapped;
> +        unsigned long _address, org_address = address;
>          struct folio *folio = NULL;
> -        unsigned long _address;
> -        spinlock_t *ptl;
> -        int node = NUMA_NO_NODE, unmapped = 0;
> +        struct page *page = NULL;
> +        int node = NUMA_NO_NODE;
> +        int result = SCAN_FAIL;
>          bool writable = false;
> +        unsigned long orders;
> +        pte_t *pte, *_pte;
> +        spinlock_t *ptl;
> +        pmd_t *pmd;
> +        int order;
> 
>          VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> 
> +        orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> +                        TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER + 1) - 1);
> +        orders = thp_vma_suitable_orders(vma, address, orders);
> +        order = highest_order(orders);
> +
> +        /* MADV_COLLAPSE needs to work irrespective of sysfs setting */
> +        if (!cc->is_khugepaged)
> +                order = HPAGE_PMD_ORDER;
> +
> +scan_pte_range:
> +
> +        max_ptes_shared = khugepaged_max_ptes_shared >> (HPAGE_PMD_ORDER - order);
> +        max_ptes_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
> +        max_ptes_swap = khugepaged_max_ptes_swap >> (HPAGE_PMD_ORDER - order);
> +        referenced = 0, shared = 0, none_or_zero = 0, unmapped = 0;
> +

Hi Dev,

Thanks for the patches. Looking at the above code, I imagine you are
planning to use the max_ptes_none, max_ptes_shared and max_ptes_swap
values that are used for PMD THPs for all mTHP sizes? I think this can be
a bit confusing for users who aren't familiar with kernel code, as the
default values are for PMD THPs; e.g. max_ptes_none is 511, and the user
might not know that it is going to be scaled down for lower-order THPs.
Another thing is, what if these parameters have different optimal values
for mTHP sizes than the scaled-down PMD ones?

The other option is to introduce these parameters as new sysfs entries
per mTHP size. These parameters can be very difficult to tune (and are
usually left at their default values), so I don't think it's a good idea
to introduce new sysfs parameters, but just something to think about.
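To make the scaling concrete (my arithmetic from the shifts above, using
the current defaults on 4K base pages, i.e. scaled = default >>
(HPAGE_PMD_ORDER - order)):

  order 9 (2M):   max_ptes_none = 511   max_ptes_shared = 256   max_ptes_swap = 64
  order 4 (64K):  max_ptes_none =  15   max_ptes_shared =   8   max_ptes_swap =  2
  order 2 (16K):  max_ptes_none =   3   max_ptes_shared =   2   max_ptes_swap =  0

Note that max_ptes_swap already reaches 0 at order 2, so a single
swapped-out PTE would stop a 16K collapse by khugepaged.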
Regards,
Usama

> +        /* Check pmd after taking mmap lock */
>          result = find_pmd_or_thp_or_none(mm, address, &pmd);
>          if (result != SCAN_SUCCEED)
>                  goto out;
> 
>          memset(cc->node_load, 0, sizeof(cc->node_load));
>          nodes_clear(cc->alloc_nmask);
> +
>          pte = pte_offset_map_lock(mm, pmd, address, &ptl);
>          if (!pte) {
>                  result = SCAN_PMD_NULL;
>                  goto out;
>          }
> 
> -        for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> +        for (_address = address, _pte = pte; _pte < pte + (1UL << order);
>               _pte++, _address += PAGE_SIZE) {
>                  pte_t pteval = ptep_get(_pte);
>                  if (is_swap_pte(pteval)) {
>                          ++unmapped;
>                          if (!cc->is_khugepaged ||
> -                            unmapped <= khugepaged_max_ptes_swap) {
> +                            unmapped <= max_ptes_swap) {
>                                  /*
>                                   * Always be strict with uffd-wp
>                                   * enabled swap entries. Please see
> @@ -1330,7 +1352,7 @@ static int hpage_collapse_scan_ptes(struct mm_struct *mm,
>                          ++none_or_zero;
>                          if (!userfaultfd_armed(vma) &&
>                              (!cc->is_khugepaged ||
> -                             none_or_zero <= khugepaged_max_ptes_none)) {
> +                             none_or_zero <= max_ptes_none)) {
>                                  continue;
>                          } else {
>                                  result = SCAN_EXCEED_NONE_PTE;
> @@ -1375,7 +1397,7 @@ static int hpage_collapse_scan_ptes(struct mm_struct *mm,
>                  if (folio_likely_mapped_shared(folio)) {
>                          ++shared;
>                          if (cc->is_khugepaged &&
> -                            shared > khugepaged_max_ptes_shared) {
> +                            shared > max_ptes_shared) {
>                                  result = SCAN_EXCEED_SHARED_PTE;
>                                  count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
>                                  goto out_unmap;
> @@ -1432,7 +1454,7 @@ static int hpage_collapse_scan_ptes(struct mm_struct *mm,
>                  result = SCAN_PAGE_RO;
>          } else if (cc->is_khugepaged &&
>                     (!referenced ||
> -                    (unmapped && referenced < HPAGE_PMD_NR / 2))) {
> +                    (unmapped && referenced < (1UL << order) / 2))) {
>                  result = SCAN_LACK_REFERENCED_PAGE;
>          } else {
>                  result = SCAN_SUCCEED;
> @@ -1441,9 +1463,41 @@ static int hpage_collapse_scan_ptes(struct mm_struct *mm,
>          pte_unmap_unlock(pte, ptl);
>          if (result == SCAN_SUCCEED) {
>                  result = collapse_huge_page(mm, address, referenced,
> -                                            unmapped, cc);
> +                                            unmapped, order, cc);
>                  /* collapse_huge_page will return with the mmap_lock released */
>                  *mmap_locked = false;
> +
> +                /* Immediately exit on exhaustion of range */
> +                if (_address == org_address + (PAGE_SIZE << HPAGE_PMD_ORDER))
> +                        goto out;
> +        }
> +        if (result != SCAN_SUCCEED) {
> +
> +                /* Go to the next order. */
> +                order = next_order(&orders, order);
> +                if (order < 2)
> +                        goto out;
> +                goto maybe_mmap_lock;
> +        } else {
> +                address = _address;
> +                pte = _pte;
> +
> +
> +                /* Get highest order possible starting from address */
> +                order = count_trailing_zeros(address >> PAGE_SHIFT);
> +
> +                /* This needs to be present in the mask too */
> +                if (!(orders & (1UL << order)))
> +                        order = next_order(&orders, order);
> +                if (order < 2)
> +                        goto out;
> +
> +maybe_mmap_lock:
> +                if (!(*mmap_locked)) {
> +                        mmap_read_lock(mm);
> +                        *mmap_locked = true;
> +                }
> +                goto scan_pte_range;
>          }
> out:
>          trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,