Date: Fri, 12 Dec 2025 17:07:06 -0800
Mime-Version: 1.0
References: <20251110222922.613224-1-yosry.ahmed@linux.dev> <20251110222922.613224-5-yosry.ahmed@linux.dev>
Subject: Re: [PATCH v2 04/13] KVM: nSVM: Fix consistency checks for NP_ENABLE
From: Sean Christopherson
To: Yosry Ahmed
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Content-Type: text/plain; charset="us-ascii"

On Fri, Dec 12, 2025, Yosry Ahmed wrote:
> On Fri, Dec 12, 2025 at 10:32:23AM -0800, Sean Christopherson wrote:
> > On Tue, Dec 09, 2025, Yosry Ahmed wrote:
> > > Do I keep that as-is, or do you prefer that I also sanitize these fields
> > > when copying to the cache in nested_copy_vmcb_control_to_cache()?
> >
> > I don't think I follow.  What would the sanitization look like?  Note, I don't
> > think we need to completely sanitize _every_ field.  The key fields are ones
> > where KVM consumes and/or acts on the field.
>
> Patch 12 currently sanitizes what is copied from VMCB12 to VMCB02 for
> int_vector, int_state, and event_inj in nested_vmcb02_prepare_control():
>
> @@ -890,9 +893,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
>  		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
>  		(vmcb01->control.int_ctl & int_ctl_vmcb01_bits);
>
> -	vmcb02->control.int_vector = svm->nested.ctl.int_vector;
> -	vmcb02->control.int_state = svm->nested.ctl.int_state;
> -	vmcb02->control.event_inj = svm->nested.ctl.event_inj;
> +	vmcb02->control.int_vector = svm->nested.ctl.int_vector & SVM_INT_VECTOR_MASK;
> +	vmcb02->control.int_state = svm->nested.ctl.int_state & SVM_INTERRUPT_SHADOW_MASK;
> +	vmcb02->control.event_inj = svm->nested.ctl.event_inj & ~SVM_EVTINJ_RESERVED_BITS;
>  	vmcb02->control.event_inj_err = svm->nested.ctl.event_inj_err;
>
> My question was: given this:
>
> > I want to solidify sanitizing the cache as standard behavior
>
> Do you prefer that I move this sanitization when copying from L1's
> VMCB12 to the cached VMCB12 in nested_copy_vmcb_control_to_cache()?

Hmm, good question.  Probably?  If the main motivation for sanitizing is to
guard against effectively exposing new features unintentionally via VMCB12,
then it seems like the safest option is to ensure the "bad" bits are _never_
set in KVM-controlled state.

> I initially made it part of nested_vmcb02_prepare_control() as it
> already filters what to pick from the VMCB12 for some other related
> fields like int_ctl based on what features are exposed to the guest.
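[For illustration only; this sketch is not part of the series or the thread.]
The option discussed above, masking at cache-fill time rather than in
nested_vmcb02_prepare_control(), could look roughly like the following. Only
the field names and the three masks come from the diff quoted in this message;
the helper name and its placement are assumed:

	/*
	 * Hypothetical sketch: mask reserved/unsupported bits while the guest's
	 * VMCB12 controls are copied into the cache, so "bad" bits never exist
	 * in KVM-controlled state.  A helper like this would be called from (or
	 * folded into) nested_copy_vmcb_control_to_cache(); the helper name and
	 * parameter names are made up for illustration.
	 */
	static void nested_cache_sanitize_intr_fields(struct vmcb_ctrl_area_cached *to,
						      const struct vmcb_control_area *from)
	{
		to->int_vector	  = from->int_vector & SVM_INT_VECTOR_MASK;
		to->int_state	  = from->int_state & SVM_INTERRUPT_SHADOW_MASK;
		to->event_inj	  = from->event_inj & ~SVM_EVTINJ_RESERVED_BITS;
		to->event_inj_err = from->event_inj_err;
	}

The intended effect is the one described above: if the masking happens when the
cache is filled, then nested_vmcb02_prepare_control() and every other consumer
of svm->nested.ctl only ever sees sanitized values.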