From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 11 May 2026 04:53:29 -0400
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: "David Hildenbrand (Arm)", Jason Wang, Xuan Zhuo, Eugenio Pérez,
	Muchun Song, Oscar Salvador, Andrew Morton, Lorenzo Stoakes,
	"Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Zi Yan, Baolin Wang,
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
	Hugh Dickins, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Christoph Lameter,
	David Rientjes, Roman Gushchin, Harry Yoo, Axel Rasmussen,
	Yuanchu Xie, Wei Xu, Chris Li, Kairui Song, Kemeng Shi, Nhat Pham,
	Baoquan He, virtualization@lists.linux.dev, linux-mm@kvack.org,
	Andrea Arcangeli, Magnus Lindholm, Greg Ungerer, Geert Uytterhoeven,
	Richard Henderson, Matt Turner, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, "H. Peter Anvin", linux-alpha@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-s390@vger.kernel.org
Subject: [PATCH v6 08/30] mm: remove arch vma_alloc_zeroed_movable_folio overrides
Message-ID: <24a1b25f4f1cf31fc5bc053e475958ed5e1bf8bd.1778488966.git.mst@redhat.com>
Precedence: bulk
X-Mailing-List: virtualization@lists.linux.dev
MIME-Version: 1.0
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Now that the generic vma_alloc_zeroed_movable_folio() uses __GFP_ZERO, the
arch-specific macros on alpha, m68k, s390, and x86 that did the same thing
are redundant. Remove them.

arm64 is not affected: it has a real function override that handles MTE tag
zeroing, not just __GFP_ZERO.

Suggested-by: David Hildenbrand
Acked-by: Magnus Lindholm
Acked-by: Greg Ungerer
Acked-by: Geert Uytterhoeven # m68k
Signed-off-by: Michael S. Tsirkin
---
 arch/alpha/include/asm/page.h   | 3 ---
 arch/m68k/include/asm/page_no.h | 3 ---
 arch/s390/include/asm/page.h    | 3 ---
 arch/x86/include/asm/page.h     | 3 ---
 4 files changed, 12 deletions(-)

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 59d01f9b77f6..4327029cd660 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -12,9 +12,6 @@
 extern void clear_page(void *page);
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-	vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
-
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
index d2532bc407ef..f511b763a235 100644
--- a/arch/m68k/include/asm/page_no.h
+++ b/arch/m68k/include/asm/page_no.h
@@ -12,9 +12,6 @@
 extern unsigned long memory_end;
 
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-	vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
-
 #define __pa(vaddr)	((unsigned long)(vaddr))
 #define __va(paddr)	((void *)((unsigned long)(paddr)))
 
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index 56da819a79e6..e995d2a413f9 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -67,9 +67,6 @@ static inline void copy_page(void *to, void *from)
 
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-	vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
-
 #ifdef CONFIG_STRICT_MM_TYPECHECKS
 #define STRICT_MM_TYPECHECKS
 #endif
 
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 416dc88e35c1..92fa975b46f3 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -28,9 +28,6 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	copy_page(to, from);
 }
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-	vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
-
 #ifndef __pa
 #define __pa(x)	__phys_addr((unsigned long)(x))
 #endif
-- 
MST
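[Editor's note] For readers outside the kernel tree, the redundancy the patch removes can be sketched in userspace. This is a toy model, not kernel code: the flag values, PAGE_SIZE, and the malloc-based alloc_folio() stand-in are all illustrative assumptions. The point it demonstrates is that the generic helper already passes __GFP_ZERO, so an arch macro that expands to the exact same vma_alloc_folio() call adds nothing.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-ins for the kernel's gfp flags; the values are made up. */
#define GFP_HIGHUSER_MOVABLE 0x1u
#define __GFP_ZERO           0x2u

#define PAGE_SIZE 4096

/* Toy allocator: honors __GFP_ZERO the way the real page allocator does. */
static unsigned char *alloc_folio(unsigned int gfp)
{
	unsigned char *p = malloc(PAGE_SIZE);

	if (p && (gfp & __GFP_ZERO))
		memset(p, 0, PAGE_SIZE);
	return p;
}

/*
 * Shaped like the generic vma_alloc_zeroed_movable_folio():
 * __GFP_ZERO is set right here, so an arch macro that merely
 * repeats GFP_HIGHUSER_MOVABLE | __GFP_ZERO is redundant.
 */
static unsigned char *vma_alloc_zeroed_movable(void)
{
	return alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO);
}
```

Every byte of the returned buffer is zero without any per-arch help, which is why only arm64 (whose override also clears MTE tags, something __GFP_ZERO cannot express) keeps a real override.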