Date: Thu, 24 Nov 2016 11:04:58 +1100
From: Dave Chinner
To: Dan Williams
Cc: Kees Cook, Ingo Molnar, Dave Jiang, Thomas Gleixner, Ingo Molnar,
	"H. Peter Anvin", X86 ML, linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org
Subject: Re: [PATCH] x86: fix kaslr and memmap collision
Message-ID: <20161124000458.GS31101@dastard>
References: <147977413859.13657.2181994710415174471.stgit@djiang5-desk3.ch.intel.com>
	<20161122084754.GA25596@gmail.com>

On Tue, Nov 22, 2016 at 11:01:32AM -0800, Dan Williams wrote:
> On Tue, Nov 22, 2016 at 10:54 AM, Kees Cook wrote:
> > On Tue, Nov 22, 2016 at 9:26 AM, Dan Williams wrote:
> >> No, you're right, we need to handle multiple ranges. Since the
> >> mem_avoid array is statically allocated perhaps we can handle up to 4
> >> memmap= entries, but past that point disable kaslr for that boot?
> >
> > Yeah, that seems fine to me. I assume it's rare to have 4?
>
> It should be rare to have *one* since ACPI 6.0 added support for
> communicating persistent memory ranges.
> However there are legacy
> nvdimm users that I know are doing at least 2, but I have a hard time
> imagining they would ever do more than 4.

I doubt it's rare amongst the people using RAM to emulate pmem for
filesystem testing purposes. My "pmem" test VM always has at least 2
ranges set to give me two discrete pmem devices, and I have used 4 from
time to time to do things like test multi-volume scratch XFS filesystems
in xfstests (i.e. data, log and realtime volumes) so I didn't need to
play games with partitioning or DM...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
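[Editor's note: for readers unfamiliar with the setup being discussed, the
kind of configuration Dave describes uses the documented memmap=nn!ss kernel
parameter, which reserves a region of RAM as emulated persistent memory and
exposes it as a /dev/pmemN device. A sketch of such a command line might
look like the following; the sizes and start addresses are illustrative and
must land inside usable RAM on the actual machine:]

```
# Two discrete 2GiB pmem ranges -> /dev/pmem0 and /dev/pmem1
memmap=2G!16G memmap=2G!18G

# Four ranges, e.g. for multi-volume xfstests setups
# (data, log, realtime, plus a scratch device)
memmap=1G!16G memmap=1G!17G memmap=1G!18G memmap=1G!19G
```

[Each memmap= entry is a separate range the KASLR relocation code would
have to avoid, which is why the patch under discussion needs to handle
more than one.]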