From mboxrd@z Thu Jan 1 00:00:00 1970
From: steven chen <chenste@linux.microsoft.com>
To: zohar@linux.ibm.com, stefanb@linux.ibm.com, roberto.sassu@huaweicloud.com,
	roberto.sassu@huawei.com, eric.snowberg@oracle.com, ebiederm@xmission.com,
	paul@paul-moore.com, code@tyhicks.com, bauermann@kolabnow.com,
	linux-integrity@vger.kernel.org, kexec@lists.infradead.org,
	linux-security-module@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: madvenka@linux.microsoft.com, nramas@linux.microsoft.com,
	James.Bottomley@HansenPartnership.com, bhe@redhat.com, vgoyal@redhat.com,
	dyoung@redhat.com
Subject: [PATCH v8 2/7] kexec: define functions to map and unmap segments
Date: Tue, 18 Feb 2025 14:54:57 -0800
Message-Id: <20250218225502.747963-3-chenste@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
<20250218225502.747963-1-chenste@linux.microsoft.com>
References: <20250218225502.747963-1-chenste@linux.microsoft.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, the mechanism to map and unmap segments of the kimage structure
is not available to subsystems outside of kexec. This functionality is
needed when IMA allocates memory segments during the kexec 'load'
operation. Implement functions to map and unmap segments to the kimage.

Implement kimage_map_segment() to map IMA buffer source pages into the
kimage structure after the kexec 'load' operation. Given a kimage pointer,
an address, and a size, the function gathers the source pages that fall
within the specified address range, builds an array of page pointers, and
maps these pages into a contiguous virtual address range. It returns the
start of this range on success, or NULL on failure.

Implement kimage_unmap_segment() to unmap a segment previously mapped by
kimage_map_segment(), using vunmap().

From: Tushar Sugandhi

Signed-off-by: Tushar Sugandhi
Signed-off-by: steven chen
---
 include/linux/kexec.h | 5 ++++
 kernel/kexec_core.c   | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f0e9f8eda7a3..4dbf806bccef 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -467,6 +467,8 @@ extern bool kexec_file_dbg_print;
 #define kexec_dprintk(fmt, arg...) \
 	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)
+extern void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size);
+extern void kimage_unmap_segment(void *buffer);
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
@@ -474,6 +476,9 @@ static inline void __crash_kexec(struct pt_regs *regs) { }
 static inline void crash_kexec(struct pt_regs *regs) { }
 static inline int kexec_should_crash(struct task_struct *p) { return 0; }
 static inline int kexec_crash_loaded(void) { return 0; }
+static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size)
+{ return NULL; }
+static inline void kimage_unmap_segment(void *buffer) { }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0bdc1686154..63e4d16b6023 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -867,6 +867,60 @@ int kimage_load_segment(struct kimage *image,
 	return result;
 }
 
+void *kimage_map_segment(struct kimage *image,
+			 unsigned long addr, unsigned long size)
+{
+	unsigned long eaddr = addr + size;
+	unsigned long src_page_addr, dest_page_addr;
+	unsigned int npages;
+	struct page **src_pages;
+	int i;
+	kimage_entry_t *ptr, entry;
+	void *vaddr = NULL;
+
+	/*
+	 * Collect the source pages and map them in a contiguous VA range.
+	 */
+	npages = PFN_UP(eaddr) - PFN_DOWN(addr);
+	src_pages = kmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
+	if (!src_pages) {
+		pr_err("Could not allocate ima pages array.\n");
+		return NULL;
+	}
+
+	i = 0;
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION) {
+			dest_page_addr = entry & PAGE_MASK;
+		} else if (entry & IND_SOURCE) {
+			if (dest_page_addr >= addr && dest_page_addr < eaddr) {
+				src_page_addr = entry & PAGE_MASK;
+				src_pages[i++] =
+					virt_to_page(__va(src_page_addr));
+				if (i == npages)
+					break;
+				dest_page_addr += PAGE_SIZE;
+			}
+		}
+	}
+
+	/* Sanity check. */
+	WARN_ON(i < npages);
+
+	vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);
+	kfree(src_pages);
+
+	if (!vaddr)
+		pr_err("Could not map ima buffer.\n");
+
+	return vaddr;
+}
+
+void kimage_unmap_segment(void *segment_buffer)
+{
+	vunmap(segment_buffer);
+}
+
 struct kexec_load_limit {
 	/* Mutex protects the limit count. */
 	struct mutex mutex;
-- 
2.25.1
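
A note for readers following the series: the two helpers are intended to be
used as a bracketed pair around writes into an already-loaded segment. A
hypothetical caller could look roughly like the sketch below; the function
name, arguments, and error handling are illustrative assumptions, not code
from this series.

```
/*
 * Hypothetical caller sketch -- kernel context, not part of this patch.
 * Map the previously loaded segment, copy fresh data into it, then
 * release the temporary mapping again.
 */
static int update_loaded_segment(struct kimage *image, unsigned long addr,
				 unsigned long size, const void *data,
				 size_t len)
{
	void *buf;

	if (len > size)
		return -EINVAL;

	/* addr/size must describe a segment already loaded into image. */
	buf = kimage_map_segment(image, addr, size);
	if (!buf)
		return -ENOMEM;

	memcpy(buf, data, len);
	kimage_unmap_segment(buf);	/* vunmap()s the contiguous range */
	return 0;
}
```

Because kimage_map_segment() returns one contiguous kernel virtual range,
the caller does not need to know how the segment's source pages are
scattered physically.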