From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5ca2583b-a873-fc5d-ece6-d4bdbd133a89@intel.com>
Date: Thu, 3 Mar 2022 09:09:37 -0800
From: Dave Hansen
To: Brijesh Singh, x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, linux-efi@vger.kernel.org,
	platform-driver-x86@vger.kernel.org, linux-coco@lists.linux.dev,
	linux-mm@kvack.org
Cc: Thomas Gleixner, Ingo Molnar, Joerg Roedel, Tom Lendacky,
	"H. Peter Anvin", Ard Biesheuvel, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Jim Mattson, Andy Lutomirski, Dave Hansen,
	Sergio Lopez, Peter Gonda, Peter Zijlstra, Srinivas Pandruvada,
	David Rientjes, Dov Murik, Tobin Feldman-Fitzthum, Borislav Petkov,
	Michael Roth, Vlastimil Babka, "Kirill A . Shutemov", Andi Kleen,
	"Dr . David Alan Gilbert", brijesh.ksingh@gmail.com, tony.luck@intel.com,
	marcorr@google.com, sathyanarayanan.kuppuswamy@linux.intel.com
X-Mailing-List: linux-coco@lists.linux.dev
Subject: Re: [PATCH v11 22/45] x86/sev: Use SEV-SNP AP creation to start secondary CPUs
In-Reply-To: <20220224165625.2175020-23-brijesh.singh@amd.com>
References: <20220224165625.2175020-1-brijesh.singh@amd.com>
	<20220224165625.2175020-23-brijesh.singh@amd.com>

On 2/24/22 08:56, Brijesh Singh wrote:
> +	/*
> +	 * Allocate VMSA page to work around the SNP erratum where the CPU will
> +	 * incorrectly signal an RMP violation #PF if a large page (2MB or 1GB)
> +	 * collides with the RMP entry of VMSA page. The recommended workaround
> +	 * is to not use a large page.
> +	 *
> +	 * Allocate one extra page, use a page which is not 2MB-aligned
> +	 * and free the other.
> +	 */
> +	p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
> +	if (!p)
> +		return NULL;
> +
> +	split_page(p, 1);
> +
> +	pfn = page_to_pfn(p);
> +	if (IS_ALIGNED(__pfn_to_phys(pfn), PMD_SIZE)) {
> +		pfn++;
> +		__free_page(p);
> +	} else {
> +		__free_page(pfn_to_page(pfn + 1));
> +	}
> +
> +	return page_address(pfn_to_page(pfn));
> +}

This can be simplified. There's no need for all the silly pfn_to_page()
conversions or even an alignment check. The second page (page[1]) of an
order-1 allocation is never 2M/1G aligned. Just use that:

// Allocate an 8k block which is also 8k-aligned:
p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
if (!p)
	return NULL;

split_page(p, 1);

// Free the first 4k page. It _may_ be 2M/1G aligned
// and cannot be used:
__free_page(p);

// Return the second, unaligned page:
return page_address(p + 1);
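
For reference, here is a minimal sketch of the suggested simplification folded
back into a complete allocation helper. The function name
snp_alloc_vmsa_page(), the includes, and the surrounding structure are
assumptions for illustration only; they are not taken from the quoted patch or
from this thread.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper name; the quoted hunk does not show the real one. */
static void *snp_alloc_vmsa_page(void)
{
	struct page *p;

	/*
	 * Allocate an order-1 (8k, 8k-aligned) block. Its second 4k page
	 * can never be 2M/1G aligned, which sidesteps the SNP erratum
	 * where a large-page mapping colliding with the VMSA's RMP entry
	 * triggers a spurious RMP-violation #PF.
	 */
	p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
	if (!p)
		return NULL;

	/* Split the order-1 block into two independently freeable pages. */
	split_page(p, 1);

	/* Free the first page; it may be 2M/1G aligned and cannot be used. */
	__free_page(p);

	/* Return the second, guaranteed-unaligned page. */
	return page_address(p + 1);
}

The split_page() call is what allows the two order-0 pages to be freed
separately, both here and later when the VMSA page itself is released.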