Message-ID: <4C5A22B0.7050207@redhat.com>
Date: Thu, 05 Aug 2010 10:32:16 +0800
From: Cong Wang
To: Milton Miller
Cc: Neil Horman, huang ying, "Eric W. Biederman",
 linux-kernel@vger.kernel.org, kexec@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org
Subject: Re: [Patch v2] kexec: increase max of kexec segments and use dynamic allocation
In-Reply-To: <4C5122D1.8080203@redhat.com>

(Ping Milton...)

On 07/29/10 14:42, Cong Wang wrote:
> On 07/27/10 18:00, Milton Miller wrote:
>> [ Added kexec at lists.infradead.org and linuxppc-dev@lists.ozlabs.org ]
>>
>>> Currently KEXEC_SEGMENT_MAX is only 16, which is too small for
>>> machines with many memory ranges. When hibernating on a machine with
>>> disjoint memory, we need one segment for each memory region. Increase
>>> this hard limit to 16K, which is reasonably large.
>>>
>>> And change ->segment from a static array to dynamically allocated
>>> memory.
>>>
>>> Cc: Neil Horman
>>> Cc: huang ying
>>> Cc: Eric W. Biederman
>>> Signed-off-by: WANG Cong
>>>
>>> ---
>>> diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
>>> index ed31a29..f115585 100644
>>> --- a/arch/powerpc/kernel/machine_kexec_64.c
>>> +++ b/arch/powerpc/kernel/machine_kexec_64.c
>>> @@ -131,10 +131,7 @@ static void copy_segments(unsigned long ind)
>>>  void kexec_copy_flush(struct kimage *image)
>>>  {
>>>  	long i, nr_segments = image->nr_segments;
>>> -	struct kexec_segment ranges[KEXEC_SEGMENT_MAX];
>>> -
>>> -	/* save the ranges on the stack to efficiently flush the icache */
>>> -	memcpy(ranges, image->segment, sizeof(ranges));
>>> +	struct kexec_segment range;
>>
>> I'm glad you found our copy on the stack and removed the stack overflow
>> that comes with this bump, but ...
>>
>>>  	/*
>>>  	 * After this call we may not use anything allocated in dynamic
>>> @@ -148,9 +145,11 @@ void kexec_copy_flush(struct kimage *image)
>>>  	 * we need to clear the icache for all dest pages sometime,
>>>  	 * including ones that were in place on the original copy
>>>  	 */
>>> -	for (i = 0; i < nr_segments; i++)
>>> -		flush_icache_range((unsigned long)__va(ranges[i].mem),
>>> -			(unsigned long)__va(ranges[i].mem + ranges[i].memsz));
>>> +	for (i = 0; i < nr_segments; i++) {
>>> +		memcpy(&range, &image->segment[i], sizeof(range));
>>> +		flush_icache_range((unsigned long)__va(range.mem),
>>> +			(unsigned long)__va(range.mem + range.memsz));
>>> +	}
>>>  }
>>
>> This is executed after the copy, so, as the comment says,
>> "we may not use anything allocated in dynamic memory".
>>
>> We could allocate control pages to copy the segment list into.
>> Actually, ppc64 doesn't use the existing control page, but that
>> is only 4kB today.
>>
>> We need the list to icache-flush all the pages in all the segments,
>> as the indirect list doesn't have pages that were allocated at
>> their destination.
>>
>> Or maybe the icache flush should be done in the generic code,
>> like it is for crash load segments?
>>
>
> I don't get the point here; according to the comments,
> it is copied onto the stack for efficiency.
> --
> The opposite of love is not hate, it's indifference. - Elie Wiesel
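
To put numbers on the stack overflow Milton mentions, here is a minimal
user-space sketch (an illustration, not code from the thread). It assumes
struct kexec_segment is four pointer-sized fields, as in
include/linux/kexec.h, i.e. 32 bytes on a 64-bit build, and a 16 KiB
ppc64 kernel stack:

#include <stdio.h>
#include <stddef.h>

/* Mirrors the layout of the kernel's struct kexec_segment:
 * four pointer-sized fields, so 32 bytes on a 64-bit build. */
struct kexec_segment {
	void *buf;
	size_t bufsz;
	unsigned long mem;
	size_t memsz;
};

int main(void)
{
	size_t old_copy = 16 * sizeof(struct kexec_segment);
	size_t new_copy = 16384 * sizeof(struct kexec_segment);

	/* 16 * 32 = 512 bytes: fits comfortably on the stack. */
	printf("KEXEC_SEGMENT_MAX = 16:  %zu bytes\n", old_copy);

	/* 16384 * 32 = 524288 bytes (512 KiB): far more than a
	 * 16 KiB kernel stack, hence the overflow that the old
	 * on-stack ranges[] array in kexec_copy_flush() would hit. */
	printf("KEXEC_SEGMENT_MAX = 16K: %zu bytes\n", new_copy);
	return 0;
}

This is why the v2 patch copies one segment at a time into a single
stack-local `range` instead of mirroring the whole array on the stack.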