Message-ID: <42995ddc-01f6-4ff4-92e4-b4d1e9c3ea42@neon.tech>
Date: Fri, 11 Jul 2025 17:25:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v5 2/4] x86/mm: Allow error returns from phys_*_init()
From: Em Sharnoff
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-mm@kvack.org
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, "H. Peter Anvin", "Edgecombe, Rick P",
 Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
References: <4fe0984f-74dc-45fe-b2b6-bdd81ec15bac@neon.tech>
In-Reply-To: <4fe0984f-74dc-45fe-b2b6-bdd81ec15bac@neon.tech>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Preparation for returning errors when alloc_low_page() fails.

phys_pte_init() is excluded because it can't fail, and it's useful for it
to return 'paddr_last' instead.

This patch depends on the previous patch ("x86/mm: Update mapped addresses
in phys_{pmd,pud}_init()").

Signed-off-by: Em Sharnoff
---
Changelog:
- v2: Switch from special-casing zero value to using ERR_PTR()
- v3: Fix -Wint-conversion errors
- v4: Switch return type to int, split alloc handling into separate patch.
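
[Not part of the patch - an illustrative aside.] The error-handling shape
used throughout the diff below is: each level returns 0 or a negative errno,
and the caller installs the lower-level table into its entry *before*
checking the return value, so whatever was mapped before the failure is kept
and a later retry of init_memory_mapping() can pick up from there. A minimal
userspace sketch of that pattern follows; all names in it are invented for
the example and none of it is kernel code.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 4

static int alloc_budget;		/* failing-allocator stand-in */

/* Lower level: may fail partway through, like phys_pmd_init() and friends. */
static int fill_table(int *table)
{
	for (int i = 0; i < ENTRIES; i++) {
		if (table[i])
			continue;	/* already mapped by an earlier attempt */
		if (alloc_budget-- <= 0)
			return -ENOMEM;	/* stand-in for alloc_low_page() failing */
		table[i] = 1;
	}
	return 0;
}

/* Upper level: install the (possibly partial) table first, then report. */
static int upper_init(int **slot)
{
	int *table = *slot ? *slot : calloc(ENTRIES, sizeof(int));
	int ret;

	if (!table)
		return -ENOMEM;
	ret = fill_table(table);
	*slot = table;		/* keep progress across retries */
	return ret;		/* bail only after updating the entry */
}

int main(void)
{
	int *slot = NULL;

	alloc_budget = 2;
	printf("first attempt:  %d\n", upper_init(&slot));	/* -ENOMEM */
	alloc_budget = 8;
	printf("second attempt: %d\n", upper_init(&slot));	/* 0 */
	free(slot);
	return 0;
}

Built with something like "gcc -std=c99 sketch.c" (hypothetical file name),
the first attempt fails with -ENOMEM but keeps its progress, and the second
attempt finishes without redoing the entries the first one already filled.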
---
 arch/x86/include/asm/pgtable.h |   2 +-
 arch/x86/mm/init.c             |  14 +++--
 arch/x86/mm/init_32.c          |   4 +-
 arch/x86/mm/init_64.c          | 100 ++++++++++++++++++++++-----------
 arch/x86/mm/mem_encrypt_amd.c  |   8 ++-
 arch/x86/mm/mm_internal.h      |   8 +--
 6 files changed, 87 insertions(+), 49 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5d71cb192c57..f964f52327de 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1224,7 +1224,7 @@ extern int direct_gbpages;
 void init_mem_mapping(void);
 void early_alloc_pgt_buf(void);
 void __init poking_init(void);
-void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
+int init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
 
 #ifdef CONFIG_X86_64
 extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e87466489c66..474a7294016c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -540,11 +540,12 @@ void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr)
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
  */
-void __ref init_memory_mapping(unsigned long start,
+int __ref init_memory_mapping(unsigned long start,
 			       unsigned long end, pgprot_t prot)
 {
 	struct map_range mr[NR_RANGE_MR];
 	int nr_range, i;
+	int ret;
 
 	pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
 		 start, end - 1);
@@ -552,11 +553,14 @@ void __ref init_memory_mapping(unsigned long start,
 	memset(mr, 0, sizeof(mr));
 	nr_range = split_mem_range(mr, 0, start, end);
 
-	for (i = 0; i < nr_range; i++)
-		kernel_physical_mapping_init(mr[i].start, mr[i].end,
-					     mr[i].page_size_mask, prot);
+	for (i = 0; i < nr_range; i++) {
+		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
+						   mr[i].page_size_mask, prot);
+		if (ret)
+			return ret;
+	}
 
-	return;
+	return 0;
 }
 
 /*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index a9a16d3d0eb2..6e13685d7ced 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -246,7 +246,7 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-void __init
+int __init
 kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask,
@@ -385,7 +385,7 @@ kernel_physical_mapping_init(unsigned long start,
 	}
 
 	add_paddr_range_mapped(start, last_map_addr);
-	return;
+	return 0;
 }
 
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f0dc4a0e8cde..ca71eaec1db5 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -504,7 +504,7 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
  * Create PMD level page table mapping for physical addresses. The virtual
  * and physical address have to be aligned at this level.
  */
-static void __meminit
+static int __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
@@ -586,7 +586,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 	 * It is idempotent, so this is ok.
 	 */
 	add_paddr_range_mapped(paddr_first, paddr_last);
-	return;
+	return 0;
 }
 
 /*
@@ -594,12 +594,14 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
  */
-static void __meminit
+static int __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
 	unsigned long vaddr = (unsigned long)__va(paddr);
+	int ret;
+
 	int i = pud_index(vaddr);
 
 	for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
@@ -624,8 +626,10 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pud_none(*pud)) {
 			if (!pud_leaf(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				phys_pmd_init(pmd, paddr, paddr_end,
-					      page_size_mask, prot, init);
+				ret = phys_pmd_init(pmd, paddr, paddr_end,
+						    page_size_mask, prot, init);
+				if (ret)
+					return ret;
 				continue;
 			}
 			/*
@@ -661,33 +665,39 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pmd = alloc_low_page();
-		phys_pmd_init(pmd, paddr, paddr_end,
-			      page_size_mask, prot, init);
+		ret = phys_pmd_init(pmd, paddr, paddr_end,
+				    page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate_init(&init_mm, pud, pmd, init);
 		spin_unlock(&init_mm.page_table_lock);
+
+		/*
+		 * Bail only after updating pud to keep progress from pmd across
+		 * retries.
+		 */
+		if (ret)
+			return ret;
 	}
 
 	update_page_count(PG_LEVEL_1G, pages);
 
-	return;
+	return 0;
 }
 
-static void __meminit
+static int __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
 	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
+	int ret;
 
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_end = (unsigned long)__va(paddr_end);
 
-	if (!pgtable_l5_enabled()) {
-		phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-			      page_size_mask, prot, init);
-		return;
-	}
+	if (!pgtable_l5_enabled())
+		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+				     page_size_mask, prot, init);
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -709,24 +719,33 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
 		if (!p4d_none(*p4d)) {
 			pud = pud_offset(p4d, 0);
-			phys_pud_init(pud, paddr, __pa(vaddr_end),
-				      page_size_mask, prot, init);
+			ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
+					    page_size_mask, prot, init);
+			if (ret)
+				return ret;
 			continue;
 		}
 
 		pud = alloc_low_page();
-		phys_pud_init(pud, paddr, __pa(vaddr_end),
-			      page_size_mask, prot, init);
+		ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
+				    page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		p4d_populate_init(&init_mm, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
+
+		/*
+		 * Bail only after updating p4d to keep progress from pud across
+		 * retries.
+		 */
+		if (ret)
+			return ret;
 	}
 
-	return;
+	return 0;
 }
 
-static void __meminit
+static int __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask,
@@ -734,6 +753,7 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 {
 	bool pgd_changed = false;
 	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
+	int ret;
 
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
@@ -747,14 +767,16 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		if (pgd_val(*pgd)) {
 			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-			phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-				      page_size_mask, prot, init);
+			ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+					    page_size_mask, prot, init);
+			if (ret)
+				return ret;
 			continue;
 		}
 
 		p4d = alloc_low_page();
-		phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-			      page_size_mask, prot, init);
+		ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+				    page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
@@ -762,15 +784,22 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 		else
 			p4d_populate_init(&init_mm, p4d_offset(pgd, vaddr),
 					  (pud_t *) p4d, init);
-
 		spin_unlock(&init_mm.page_table_lock);
+
+		/*
+		 * Bail only after updating pgd/p4d to keep progress from p4d
+		 * across retries.
+		 */
+		if (ret)
+			return ret;
+
 		pgd_changed = true;
 	}
 
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
 
-	return;
+	return 0;
 }
 
 
@@ -780,13 +809,13 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * The virtual and physical addresses have to be aligned on PMD level
  * down.
  */
-void __meminit
+int __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
 			     unsigned long paddr_end,
 			     unsigned long page_size_mask, pgprot_t prot)
 {
-	__kernel_physical_mapping_init(paddr_start, paddr_end,
-				       page_size_mask, prot, true);
+	return __kernel_physical_mapping_init(paddr_start, paddr_end,
+					      page_size_mask, prot, true);
 }
 
 /*
@@ -795,14 +824,14 @@ kernel_physical_mapping_init(unsigned long paddr_start,
 * when updating the mapping. The caller is responsible to flush the TLBs after
 * the function returns.
 */
-void __meminit
+int __meminit
 kernel_physical_mapping_change(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask)
 {
-	__kernel_physical_mapping_init(paddr_start, paddr_end,
-				       page_size_mask, PAGE_KERNEL,
-				       false);
+	return __kernel_physical_mapping_init(paddr_start, paddr_end,
+					      page_size_mask, PAGE_KERNEL,
+					      false);
 }
 
 #ifndef CONFIG_NUMA
@@ -984,8 +1013,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
+	int ret;
 
-	init_memory_mapping(start, start + size, params->pgprot);
+	ret = init_memory_mapping(start, start + size, params->pgprot);
+	if (ret)
+		return ret;
 
 	return add_pages(nid, start_pfn, nr_pages, params);
 }
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index faf3a13fb6ba..15174940d218 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -446,9 +446,11 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 		 * kernel_physical_mapping_change() does not flush the TLBs, so
 		 * a TLB flush is required after we exit from the for loop.
 		 */
-		kernel_physical_mapping_change(__pa(vaddr & pmask),
-					       __pa((vaddr_end & pmask) + psize),
-					       split_page_size_mask);
+		ret = kernel_physical_mapping_change(__pa(vaddr & pmask),
+						     __pa((vaddr_end & pmask) + psize),
+						     split_page_size_mask);
+		if (ret)
+			return ret;
 	}
 
 	ret = 0;
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 5b873191c3c9..7f948d5377f0 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -12,10 +12,10 @@ void early_ioremap_page_table_range_init(void);
 void add_paddr_range_mapped(unsigned long start_paddr,
 			    unsigned long end_paddr);
 
-void kernel_physical_mapping_init(unsigned long start, unsigned long end,
-				  unsigned long page_size_mask, pgprot_t prot);
-void kernel_physical_mapping_change(unsigned long start, unsigned long end,
-				    unsigned long page_size_mask);
+int kernel_physical_mapping_init(unsigned long start, unsigned long end,
+				 unsigned long page_size_mask, pgprot_t prot);
+int kernel_physical_mapping_change(unsigned long start, unsigned long end,
+				   unsigned long page_size_mask);
 void zone_sizes_init(void);
 
 extern int after_bootmem;
-- 
2.39.5