From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 7 Jan 2020 16:04:12 -0800
From: Sean Christopherson
To: Tom Lendacky
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Brijesh Singh
Subject: Re: [PATCH v2] KVM: SVM: Override default MMIO mask if memory encryption is enabled
Message-ID: <20200108000412.GE16987@linux.intel.com>
References: <20200106224931.GB12879@linux.intel.com> <20200106233846.GC12879@linux.intel.com> <20200107222813.GB16987@linux.intel.com>
<298352c6-7670-2929-9621-1124775bfaed@amd.com> <20200107233102.GC16987@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Tue, Jan 07, 2020 at 05:51:51PM -0600, Tom Lendacky wrote:
> On 1/7/20 5:31 PM, Sean Christopherson wrote:
> > AIUI, using phys_bits=48, then the standard scenario is Cbit=47 and some
> > additional bits 46:M are reserved.  Applying that logic to phys_bits=52,
> > then Cbit=51 and bits 50:M are reserved, so there's a collision but it's
>
> There's no requirement that the C-bit correspond to phys_bits.  So, for
> example, you can have C-bit=51 and phys_bits=48 and so 47:M are reserved.

But then blindly using x86_phys_bits would break if the PA bits aren't
reduced, e.g. C-bit=47 and phys_bits=47.  AFAICT, there's no requirement
that there be reduced PA bits when there is a C-bit.  I'm guessing there
aren't plans to ship such CPUs, but I don't see anything in the APM to
prevent such a scenario.

Maybe the least painful approach would be to go with a version of this
patch and add a check that there are indeed reserved/reduced bits?
Probably with a WARN_ON_ONCE if the check fails.