Date: Thu, 18 Nov 2021 16:12:15 +0000
From: Sean Christopherson
To: Juergen Gross
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
    Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen,
Peter Anvin" Subject: Re: [PATCH v3 3/4] x86/kvm: add max number of vcpus for hyperv emulation Message-ID: References: <20211116141054.17800-1-jgross@suse.com> <20211116141054.17800-4-jgross@suse.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Nov 18, 2021, Juergen Gross wrote: > On 18.11.21 15:49, Sean Christopherson wrote: > > On Thu, Nov 18, 2021, Juergen Gross wrote: > > > On 17.11.21 21:50, Sean Christopherson wrote: > > > > > @@ -166,7 +166,7 @@ static struct kvm_vcpu *get_vcpu_by_vpidx(struct kvm *kvm, u32 vpidx) > > > > > struct kvm_vcpu *vcpu = NULL; > > > > > int i; > > > > > - if (vpidx >= KVM_MAX_VCPUS) > > > > > + if (vpidx >= min(KVM_MAX_VCPUS, KVM_MAX_HYPERV_VCPUS)) > > > > > > > > IMO, this is conceptually wrong. KVM should refuse to allow Hyper-V to be enabled > > > > if the max number of vCPUs exceeds what can be supported, or should refuse to create > > > > > > TBH, I wasn't sure where to put this test. Is there a guaranteed > > > sequence of ioctl()s regarding vcpu creation (or setting the max > > > number of vcpus) and the Hyper-V enabling? > > > > For better or worse (mostly worse), like all other things CPUID, Hyper-V is a per-vCPU > > knob. If KVM can't detect the impossible condition at compile time, kvm_check_cpuid() > > is probably the right place to prevent enabling Hyper-V on an unreachable vCPU. > > With HYPERV_CPUID_IMPLEMENT_LIMITS already returning the > supported number of vcpus for the Hyper-V case I'm not sure > there is really more needed. Yep, that'll do nicely. > The problem I'm seeing is that the only thing I can do is to > let kvm_get_hv_cpuid() not adding the Hyper-V cpuid leaves for > vcpus > 64. I can't return a failure, because that would > probably let vcpu creation fail. And this is something we don't > want, as kvm_get_hv_cpuid() is called even in the case the guest > doesn't plan to use Hyper-V extensions. Argh, that thing is annoying. My vote is still to reject KVM_SET_CPUID{2} if userspace attempts to enable Hyper-V for a vCPU when the max number of vCPUs exceeds HYPERV_CPUID_IMPLEMENT_LIMITS. If userspace parrots back KVM_GET_SUPPORTED_CPUID, it will specify KVM as the hypervisor, i.e. enabling Hyper-V requires deliberate action from userspace. The non-vCPU version of KVM_GET_SUPPORTED_HV_CPUID is not an issue, e.g. the generic KVM_GET_SUPPORTED_CPUID also reports features that become unsupported if dependent CPUID features are not enabled by userspace. The discrepancy with the per-vCPU variant of kvm_get_hv_cpuid() would be unfortunate, but IMO that ship sailed when the per-vCPU variant was added by commit 2bc39970e932 ("x86/kvm/hyper-v: Introduce KVM_GET_SUPPORTED_HV_CPUID"). We can't retroactively yank that code out, but I don't think we should be overly concerned with keeping it 100% accurate. IMO it's perfectly fine for KVM to define the output of KVM_GET_SUPPORTED_HV_CPUID as being garbage if the vCPU cannot possibly support Hyper-V enlightments. That situation isn't possible today, so there's no backwards compatibility to worry about.