Date: Mon, 10 Mar 2025 19:08:01 +0200
From: Mike Rapoport
To: Pratyush Yadav
Cc: linux-kernel@vger.kernel.org, Alexander Graf, Andrew Morton,
	Andy Lutomirski, Anthony Yznaga, Arnd Bergmann, Ashish Kalra,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Dave Hansen, David Woodhouse, Eric Biederman, Ingo Molnar,
	James Gowans, Jonathan Corbet, Krzysztof Kozlowski, Mark Rutland,
	Paolo Bonzini, Pasha Tatashin, "H. Peter Anvin", Peter Zijlstra,
	Rob Herring, Saravana Kannan, Stanislav Kinsburskii,
	Steven Rostedt, Thomas Gleixner, Tom Lendacky, Usama Arif,
	Will Deacon, devicetree@vger.kernel.org, kexec@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, x86@kernel.org
Subject: Re: [PATCH v4 06/14] kexec: Add KHO parsing support
References: <20250206132754.2596694-1-rppt@kernel.org>
	<20250206132754.2596694-7-rppt@kernel.org>

Hi Pratyush,

On Mon, Mar 10, 2025 at 04:20:01PM +0000, Pratyush Yadav wrote:
> Hi Mike,
> 
> On Thu, Feb 06 2025, Mike Rapoport wrote:
> [...]
> > @@ -444,7 +576,141 @@ static void kho_reserve_scratch(void)
> >  	kho_enable = false;
> >  }
> >  
> > +/*
> > + * Scan the DT for any memory ranges and make sure they are reserved in
> > + * memblock, otherwise they will end up in a weird state on free lists.
> > + */
> > +static void kho_init_reserved_pages(void)
> > +{
> > +	const void *fdt = kho_get_fdt();
> > +	int offset = 0, depth = 0, initial_depth = 0, len;
> > +
> > +	if (!fdt)
> > +		return;
> > +
> > +	/* Go through the mem list and add 1 for each reference */
> > +	for (offset = 0;
> > +	     offset >= 0 && depth >= initial_depth;
> > +	     offset = fdt_next_node(fdt, offset, &depth)) {
> > +		const struct kho_mem *mems;
> > +		u32 i;
> > +
> > +		mems = fdt_getprop(fdt, offset, "mem", &len);
> > +		if (!mems || len & (sizeof(*mems) - 1))
> > +			continue;
> > +
> > +		for (i = 0; i < len; i += sizeof(*mems)) {
> > +			const struct kho_mem *mem = &mems[i];
> 
> i goes from 0 to len in steps of 16, but you use it to dereference an
> array of type struct kho_mem. So you end up looking at only one of
> every 16 mems and doing an out-of-bounds access. I found this when
> testing the memfd patches: any time the file was more than one page, it
> started to crash randomly.

Thanks! Changyuan already pointed that out privately. But I'm going to
adopt the memory reservation scheme Jason proposed, so this code is going
to go away anyway :)

> Below patch should fix that:
> 
> ---- 8< ----
> diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
> index c26753d613cbc..40d1d8ac68d44 100644
> --- a/kernel/kexec_handover.c
> +++ b/kernel/kexec_handover.c
> @@ -685,13 +685,15 @@ static void kho_init_reserved_pages(void)
>  	     offset >= 0 && depth >= initial_depth;
>  	     offset = fdt_next_node(fdt, offset, &depth)) {
>  		const struct kho_mem *mems;
> -		u32 i;
> +		u32 i, nr_mems;
>  
>  		mems = fdt_getprop(fdt, offset, "mem", &len);
>  		if (!mems || len & (sizeof(*mems) - 1))
>  			continue;
>  
> -		for (i = 0; i < len; i += sizeof(*mems)) {
> +		nr_mems = len / sizeof(*mems);
> +
> +		for (i = 0; i < nr_mems; i++) {
>  			const struct kho_mem *mem = &mems[i];
>  
>  			memblock_reserve(mem->addr, mem->size);
> ---- >8 ----
> [...]
> 
> -- 
> Regards,
> Pratyush Yadav

-- 
Sincerely yours,
Mike.