From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 03 May 2026 13:04:56 +0000
In-Reply-To: <20260503-x86-init-cleanup-v2-0-bb690bd2477c@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260503-x86-init-cleanup-v2-0-bb690bd2477c@google.com>
X-Mailer: b4 0.14.3
Message-ID:
 <20260503-x86-init-cleanup-v2-3-bb690bd2477c@google.com>
Subject: [PATCH v2 3/3] x86/mm: drop unused returns from direct map setup
 functions
From: Brendan Jackman
To: Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Nothing looks at these return values. Furthermore, as discussed in [0],
it seems like in the case of a pre-existing 4K mapping, the return value
of kernel_physical_mapping_init() is wrong anyway.

So, just stop returning a value.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/init_32.c     |  5 +--
 arch/x86/mm/init_64.c     | 96 ++++++++++++++++------------------------
 arch/x86/mm/mm_internal.h | 11 ++----
 3 files changed, 38 insertions(+), 74 deletions(-)

diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 0908c44d51e6f..05c456dc9855f 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -245,14 +245,13 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-unsigned long __init
+void __init
 kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask,
 			     pgprot_t prot)
 {
 	int use_pse = page_size_mask == (1<<PG_LEVEL_2M);

[...]

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c

[...]

 		set_pte_init(pte, pfn_pte(paddr >> PAGE_SHIFT, prot), init);
-		paddr_last = (paddr & PAGE_MASK) + PAGE_SIZE;
 	}
 
 	update_page_count(PG_LEVEL_4K, pages);
-
-	return paddr_last;
 }
 
 /*
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 
 	int i = pmd_index(paddr);
 
@@ -548,9 +539,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pmd_leaf(*pmd)) {
 			spin_lock(&init_mm.page_table_lock);
 			pte = (pte_t *)pmd_page_vaddr(*pmd);
-			paddr_last = phys_pte_init(pte, paddr,
-						   paddr_end, prot,
-						   init);
+			phys_pte_init(pte, paddr, paddr_end, prot, init);
 			spin_unlock(&init_mm.page_table_lock);
 			continue;
 		}
@@ -569,7 +558,6 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_2M)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			new_prot = pte_pgprot(pte_clrhuge(*(pte_t *)pmd));
@@ -582,33 +570,29 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 				     pfn_pmd(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 				     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pte = alloc_low_page();
-		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+		phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pmd_populate_kernel_init(&init_mm, pmd, pte, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 	update_page_count(PG_LEVEL_2M, pages);
-	return paddr_last;
 }
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 	unsigned long vaddr = (unsigned long)__va(paddr);
 	int i = pud_index(vaddr);
@@ -634,10 +618,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pud_none(*pud)) {
 			if (!pud_leaf(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				paddr_last = phys_pmd_init(pmd, paddr,
-							   paddr_end,
-							   page_size_mask,
-							   prot, init);
+				phys_pmd_init(pmd, paddr, paddr_end,
+					      page_size_mask, prot, init);
 				continue;
 			}
 			/*
@@ -655,7 +637,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_1G)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -668,13 +649,11 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 				     pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 				     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pmd = alloc_low_page();
-		paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
-					   page_size_mask, prot, init);
+		phys_pmd_init(pmd, paddr, paddr_end, page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate_init(&init_mm, pud, pmd, init);
@@ -682,23 +661,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	}
 
 	update_page_count(PG_LEVEL_1G, pages);
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
-	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_end = (unsigned long)__va(paddr_end);
 
-	if (!pgtable_l5_enabled())
-		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-				     page_size_mask, prot, init);
+	if (!pgtable_l5_enabled()) {
+		phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+			      page_size_mask, prot, init);
+		return;
+	}
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -720,33 +698,30 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
 		if (!p4d_none(*p4d)) {
 			pud = pud_offset(p4d, 0);
-			paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-						   page_size_mask, prot, init);
+			phys_pud_init(pud, paddr, __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		pud = alloc_low_page();
-		paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_pud_init(pud, paddr, __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		p4d_populate_init(&init_mm, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask,
 			       pgprot_t prot, bool init)
 {
 	bool pgd_changed = false;
-	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
 	vaddr_start = vaddr;
@@ -759,16 +734,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		if (pgd_val(*pgd)) {
 			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-			paddr_last = phys_p4d_init(p4d, __pa(vaddr),
-						   __pa(vaddr_end),
-						   page_size_mask,
-						   prot, init);
+			phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		p4d = alloc_low_page();
-		paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
@@ -783,8 +756,6 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
-
-	return paddr_last;
 }
 
@@ -792,15 +763,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
 			     unsigned long paddr_end,
 			     unsigned long page_size_mask, pgprot_t prot)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, prot, true);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, prot, true);
 }
 
 /*
@@ -809,14 +780,13 @@ kernel_physical_mapping_init(unsigned long paddr_start,
 * when updating the mapping. The caller is responsible to flush the TLBs after
 * the function returns.
 */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_change(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, PAGE_KERNEL,
-					      false);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, PAGE_KERNEL, false);
 }
 
 #ifndef CONFIG_NUMA
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 7c4a41235323b..dad8abe65ed03 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,13 +10,10 @@ static inline void *alloc_low_page(void)
 
 void early_ioremap_page_table_range_init(void);
 
-unsigned long kernel_physical_mapping_init(unsigned long start,
-					   unsigned long end,
-					   unsigned long page_size_mask,
-					   pgprot_t prot);
-unsigned long kernel_physical_mapping_change(unsigned long start,
-					     unsigned long end,
-					     unsigned long page_size_mask);
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+				  unsigned long page_size_mask, pgprot_t prot);
+void kernel_physical_mapping_change(unsigned long start, unsigned long end,
+				    unsigned long page_size_mask);
 
 extern int after_bootmem;

-- 
2.51.2