From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thiago Jung Bauermann
To: kexec@lists.infradead.org
Cc: linux-security-module@vger.kernel.org, linux-ima-devel@lists.sourceforge.net, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, x86@kernel.org, Eric Biederman, Dave Young, Vivek Goyal, Baoquan He, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Stewart Smith, Samuel Mendoza-Jonas, Mimi Zohar, Eric Richter, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Andrew Morton, Petko Manolov, David Laight, Balbir Singh, Thiago Jung Bauermann
Subject: [PATCH v2 1/6] kexec_file: Add buffer hand-over support for the next kernel
Date: Sat, 13 Aug 2016 00:18:20 -0300
In-Reply-To: <1471058305-30198-1-git-send-email-bauerman@linux.vnet.ibm.com>
References: <1471058305-30198-1-git-send-email-bauerman@linux.vnet.ibm.com>
Message-Id: <1471058305-30198-2-git-send-email-bauerman@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

The buffer hand-over mechanism allows the currently running kernel to
pass data to the kernel that will be kexec'd, via a kexec segment. The
second kernel can check whether the previous kernel sent data and
retrieve it.

This is the architecture-independent part of the feature.

Signed-off-by: Thiago Jung Bauermann
---
 include/linux/kexec.h | 29 ++++++++++++++++++++++
 kernel/kexec_file.c   | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index ceccc5856aab..4559a1a01b0a 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -388,6 +388,35 @@ static inline void *boot_phys_to_virt(unsigned long entry)
 	return phys_to_virt(boot_phys_to_phys(entry));
 }
 
+#ifdef CONFIG_KEXEC_FILE
+bool __weak kexec_can_hand_over_buffer(void);
+int __weak arch_kexec_add_handover_buffer(struct kimage *image,
+					  unsigned long load_addr,
+					  unsigned long size);
+int kexec_add_handover_buffer(struct kexec_buf *kbuf);
+int __weak kexec_get_handover_buffer(void **addr, unsigned long *size);
+int __weak kexec_free_handover_buffer(void);
+#else
+static inline bool kexec_can_hand_over_buffer(void)
+{
+	return false;
+}
+
+static inline int kexec_add_handover_buffer(struct kexec_buf *kbuf)
+{
+	return -ENOTSUPP;
+}
+
+static inline int kexec_get_handover_buffer(void **addr, unsigned long *size)
+{
+	return -ENOTSUPP;
+}
+
+static inline int kexec_free_handover_buffer(void)
+{
+	return -ENOTSUPP;
+}
+#endif /* CONFIG_KEXEC_FILE */
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 772cb491715e..c8418d62e2fc 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -135,6 +135,74 @@ int __weak arch_kexec_verify_buffer(enum kexec_file_type type, const void *buf,
 	return -EINVAL;
 }
 
+/**
+ * kexec_can_hand_over_buffer - can we pass data to the kexec'd kernel?
+ */
+bool __weak kexec_can_hand_over_buffer(void)
+{
+	return false;
+}
+
+/**
+ * arch_kexec_add_handover_buffer - do arch-specific steps to handover buffer
+ *
+ * Architectures should use this function to pass on the handover buffer
+ * information to the next kernel.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int __weak arch_kexec_add_handover_buffer(struct kimage *image,
+					  unsigned long load_addr,
+					  unsigned long size)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * kexec_add_handover_buffer - add buffer to be used by the next kernel
+ * @kbuf:	Buffer contents and memory parameters.
+ *
+ * This function assumes that kexec_mutex is held.
+ * On successful return, @kbuf->mem will have the physical address of
+ * the buffer in the next kernel.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int kexec_add_handover_buffer(struct kexec_buf *kbuf)
+{
+	int ret;
+
+	if (!kexec_can_hand_over_buffer())
+		return -ENOTSUPP;
+
+	ret = kexec_add_buffer(kbuf);
+	if (ret)
+		return ret;
+
+	return arch_kexec_add_handover_buffer(kbuf->image, kbuf->mem,
+					      kbuf->memsz);
+}
+
+/**
+ * kexec_get_handover_buffer - get the handover buffer from the previous kernel
+ * @addr:	On successful return, set to point to the buffer contents.
+ * @size:	On successful return, set to the buffer size.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int __weak kexec_get_handover_buffer(void **addr, unsigned long *size)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * kexec_free_handover_buffer - free memory used by the handover buffer
+ */
+int __weak kexec_free_handover_buffer(void)
+{
+	return -ENOTSUPP;
+}
+
 /*
  * In file mode list of segments is prepared by kernel. Copy relevant
  * data from user space, do error checking, prepare segment list
-- 
1.9.1