Date: Tue, 20 Jul 2021 16:37:21 +0100
From: Christoph Hellwig
To: Arnd Bergmann
Cc: linux-arch@vger.kernel.org, Arnd Bergmann, Christoph Hellwig,
	Alexander Viro, Andrew Morton, Borislav Petkov, Brian Gerst,
	Eric Biederman, Ingo Molnar, "H. Peter Anvin", Thomas Gleixner,
	Linux ARM, linux-kernel@vger.kernel.org, Linux-MM,
	kexec@lists.infradead.org
Subject: Re: [PATCH v4 1/4] kexec: avoid compat_alloc_user_space
References: <20210720150950.3669610-1-arnd@kernel.org>
 <20210720150950.3669610-2-arnd@kernel.org>
In-Reply-To: <20210720150950.3669610-2-arnd@kernel.org>
This can be simplified a little more by killing off copy_user_segment_list
entirely, using memdup_user, and dropping the _locked wrapper, which isn't
really required.  The locking move might make the most sense as a separate
prep patch.

diff --git a/kernel/kexec.c b/kernel/kexec.c
index c82c6c06f051..3140fe7af801 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -19,26 +19,9 @@
 
 #include "kexec_internal.h"
 
-static int copy_user_segment_list(struct kimage *image,
-				  unsigned long nr_segments,
-				  struct kexec_segment __user *segments)
-{
-	int ret;
-	size_t segment_bytes;
-
-	/* Read in the segments */
-	image->nr_segments = nr_segments;
-	segment_bytes = nr_segments * sizeof(*segments);
-	ret = copy_from_user(image->segment, segments, segment_bytes);
-	if (ret)
-		ret = -EFAULT;
-
-	return ret;
-}
-
 static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 			     unsigned long nr_segments,
-			     struct kexec_segment __user *segments,
+			     struct kexec_segment *segments,
 			     unsigned long flags)
 {
 	int ret;
@@ -58,10 +41,8 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 		return -ENOMEM;
 
 	image->start = entry;
-
-	ret = copy_user_segment_list(image, nr_segments, segments);
-	if (ret)
-		goto out_free_image;
+	image->nr_segments = nr_segments;
+	memcpy(image->segment, segments, nr_segments * sizeof(*segments));
 
 	if (kexec_on_panic) {
 		/* Enable special crash kernel control page alloc policy. */
@@ -104,11 +85,22 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 }
 
 static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
-		struct kexec_segment __user *segments, unsigned long flags)
+		struct kexec_segment *segments, unsigned long flags)
 {
 	struct kimage **dest_image, *image;
 	unsigned long i;
-	int ret;
+	int ret = 0;
+
+	/*
+	 * Because we write directly to the reserved memory region when loading
+	 * crash kernels we need a mutex here to prevent multiple crash kernels
+	 * from attempting to load simultaneously, and to prevent a crash kernel
+	 * from loading over the top of an in-use crash kernel.
+	 *
+	 * KISS: always take the mutex.
+	 */
+	if (!mutex_trylock(&kexec_mutex))
+		return -EBUSY;
 
 	if (flags & KEXEC_ON_CRASH) {
 		dest_image = &kexec_crash_image;
@@ -121,7 +113,7 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
 	if (nr_segments == 0) {
 		/* Uninstall image */
 		kimage_free(xchg(dest_image, NULL));
-		return 0;
+		goto out_unlock;
 	}
 	if (flags & KEXEC_ON_CRASH) {
 		/*
@@ -134,7 +126,7 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
 
 	ret = kimage_alloc_init(&image, entry, nr_segments, segments, flags);
 	if (ret)
-		return ret;
+		goto out_unlock;
 
 	if (flags & KEXEC_PRESERVE_CONTEXT)
 		image->preserve_context = 1;
@@ -171,6 +163,8 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
 		arch_kexec_protect_crashkres();
 
 	kimage_free(image);
+out_unlock:
+	mutex_unlock(&kexec_mutex);
 	return ret;
 }
 
@@ -236,7 +230,8 @@ static inline int kexec_load_check(unsigned long nr_segments,
 SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 		struct kexec_segment __user *, segments, unsigned long, flags)
 {
-	int result;
+	struct kexec_segment *ksegments;
+	unsigned long result;
 
 	result = kexec_load_check(nr_segments, flags);
 	if (result)
@@ -247,21 +242,11 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 	    ((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH_DEFAULT))
 		return -EINVAL;
 
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
-
-	result = do_kexec_load(entry, nr_segments, segments, flags);
-
-	mutex_unlock(&kexec_mutex);
-
+	ksegments = memdup_user(segments, nr_segments * sizeof(ksegments[0]));
+	if (IS_ERR(ksegments))
+		return PTR_ERR(ksegments);
+	result = do_kexec_load(entry, nr_segments, ksegments, flags);
+	kfree(ksegments);
 	return result;
 }
 
@@ -272,7 +257,7 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 		       compat_ulong_t, flags)
 {
 	struct compat_kexec_segment in;
-	struct kexec_segment out, __user *ksegments;
+	struct kexec_segment *ksegments;
 	unsigned long i, result;
 
 	result = kexec_load_check(nr_segments, flags);
@@ -285,37 +270,24 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 	if ((flags & KEXEC_ARCH_MASK) == KEXEC_ARCH_DEFAULT)
 		return -EINVAL;
 
-	ksegments = compat_alloc_user_space(nr_segments * sizeof(out));
+	ksegments = kmalloc_array(nr_segments, sizeof(ksegments[0]),
+			GFP_KERNEL);
+	if (!ksegments)
+		return -ENOMEM;
+
 	for (i = 0; i < nr_segments; i++) {
 		result = copy_from_user(&in, &segments[i], sizeof(in));
 		if (result)
-			return -EFAULT;
-
-		out.buf = compat_ptr(in.buf);
-		out.bufsz = in.bufsz;
-		out.mem = in.mem;
-		out.memsz = in.memsz;
-
-		result = copy_to_user(&ksegments[i], &out, sizeof(out));
-		if (result)
-			return -EFAULT;
+			goto fail;
+		ksegments[i].buf = compat_ptr(in.buf);
+		ksegments[i].bufsz = in.bufsz;
+		ksegments[i].mem = in.mem;
+		ksegments[i].memsz = in.memsz;
 	}
 
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
-
 	result = do_kexec_load(entry, nr_segments, ksegments, flags);
-
-	mutex_unlock(&kexec_mutex);
-
+fail:
+	kfree(ksegments);
 	return result;
 }
 #endif
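
For reference, since not everyone has run into it: memdup_user() bundles
the kmalloc() + copy_from_user() + error-unwind boilerplate into a single
call.  A minimal, untested sketch of the pattern the kexec_load hunk above
relies on (variable names taken from the patch):

	struct kexec_segment *ksegments;

	/*
	 * memdup_user() allocates a kernel buffer and copies the user
	 * data into it; allocation and copy failures are both reported
	 * through a single ERR_PTR() return value.
	 */
	ksegments = memdup_user(segments, nr_segments * sizeof(*ksegments));
	if (IS_ERR(ksegments))
		return PTR_ERR(ksegments);

	/* ... operate on the kernel-space copy ... */

	kfree(ksegments);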