From: David Hildenbrand <david@redhat.com>
To: Dev Jain <dev.jain@arm.com>, akpm@linux-foundation.org
Cc: ziy@nvidia.com, baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com,
    Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
    baohua@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] khugepaged: Optimize __collapse_huge_page_copy_succeeded()
 for large folios by PTE batching
Date: Wed, 18 Jun 2025 18:14:22 +0200
Message-ID: <738669ec-a9e5-4ba1-85a7-605cb4132d05@redhat.com>
In-Reply-To: <20250618102607.10551-1-dev.jain@arm.com>
References: <20250618102607.10551-1-dev.jain@arm.com>

On 18.06.25 12:26, Dev Jain wrote:
> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>
> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
> Then, calling ptep_clear() for every pte will cause a TLB flush for every
> contpte block. Instead, clear_full_ptes() does a
> contpte_try_unfold_partial() which will flush the TLB only for the (if any)
> starting and ending contpte block, if they partially overlap with the range
> khugepaged is looking at.
>
> For all arches, there should be a benefit due to batching atomic operations
> on mapcounts due to folio_remove_rmap_ptes().
>
> No issues were observed with mm-selftests.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>  mm/khugepaged.c | 31 +++++++++++++++++++++++--------
>  1 file changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index d45d08b521f6..649ccb2670f8 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -700,12 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  						spinlock_t *ptl,
>  						struct list_head *compound_pagelist)
>  {
> +	unsigned long end = address + HPAGE_PMD_SIZE;
>  	struct folio *src, *tmp;
> -	pte_t *_pte;
> +	pte_t *_pte = pte;
>  	pte_t pteval;
> +	int nr_ptes;
>
> -	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> -	     _pte++, address += PAGE_SIZE) {
> +	do {
> +		nr_ptes = 1;
>  		pteval = ptep_get(_pte);
>  		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>  			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> @@ -719,23 +721,36 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  				ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>  			}
>  		} else {
> +			const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +			int max_nr_ptes;
> +			bool is_large;

folio_test_large() should be cheap, no need for the temporary variable
(the compiler will likely optimize this either way).

> +
>  			struct page *src_page = pte_page(pteval);
>
>  			src = page_folio(src_page);
> -			if (!folio_test_large(src))
> +			is_large = folio_test_large(src);
> +			if (!is_large)
>  				release_pte_folio(src);
> +
> +			max_nr_ptes = (end - address) >> PAGE_SHIFT;
> +			if (is_large && max_nr_ptes != 1)
> +				nr_ptes = folio_pte_batch(src, address, _pte,
> +							  pteval, max_nr_ptes,
> +							  flags, NULL, NULL, NULL);

Starting to wonder if we want a simplified, non-inlined version of
folio_pte_batch() in mm/util.c (e.g., without the 3 NULL parameters),
renaming existing folio_pte_batch to __folio_pte_batch() and only using
it where required (performance like in fork/zap, or because the other
parameters are relevant).

Let me see if I find time for a quick patch later. Have to look at what
other similar code needs.

> +
>  			/*
>  			 * ptl mostly unnecessary, but preempt has to
>  			 * be disabled to update the per-cpu stats
>  			 * inside folio_remove_rmap_pte().
>  			 */
>  			spin_lock(ptl);

Existing code: The PTL locking should just be moved outside of the loop.

> -			ptep_clear(vma->vm_mm, address, _pte);
> -			folio_remove_rmap_pte(src, src_page, vma);
> +			clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes, false);

Starting to wonder if we want a shortcut

#define clear_ptes(__mm, __addr, __pte, __nr_ptes) \
	clear_full_ptes(__mm, __addr, __pte, __nr_ptes, false)

> +			folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
>  			spin_unlock(ptl);
> -			free_folio_and_swap_cache(src);
> +			free_swap_cache(src);
> +			folio_put_refs(src, nr_ptes);
>  		}
> -	}
> +	} while (_pte += nr_ptes, address += nr_ptes * PAGE_SIZE, address != end);
>
>  	list_for_each_entry_safe(src, tmp, compound_pagelist, lru) {
>  		list_del(&src->lru);

I think this should just work.

-- 
Cheers,

David / dhildenb
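
For reference, a rough sketch of the simplified folio_pte_batch() wrapper
floated above, assuming the existing helper is renamed to __folio_pte_batch()
and the wrapper goes out-of-line into mm/util.c; the exact signature here
(including whether flags stays a parameter) is an illustrative guess, not
code from this thread:

	/* mm/util.c (sketch): drop the three output pointers most callers pass as NULL */
	int folio_pte_batch(struct folio *folio, unsigned long addr, pte_t *ptep,
			    pte_t pte, int max_nr, fpb_t flags)
	{
		return __folio_pte_batch(folio, addr, ptep, pte, max_nr, flags,
					 NULL, NULL, NULL);
	}

The khugepaged hunk above would then shrink to a single-line call:

	nr_ptes = folio_pte_batch(src, address, _pte, pteval, max_nr_ptes, flags);

while callers such as fork/zap, which do care about the extra outputs, would
keep using __folio_pte_batch() directly.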