Date: Fri, 24 Oct 2025 09:57:20 -0700
From: Sean Christopherson
To: Yan Zhao
Cc: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
    Madhavan Srinivasan, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Paolo Bonzini,
    "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, kvm@vger.kernel.org, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    x86@kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org,
    Ira Weiny, Kai Huang, Michael Roth, Vishal Annapurve, Rick Edgecombe,
    Ackerley Tng, Binbin Wu
Subject: Re: [PATCH v3 24/25] KVM: TDX: Guard VM state transitions with "all" the locks
References: <20251017003244.186495-1-seanjc@google.com>
 <20251017003244.186495-25-seanjc@google.com>

On Fri, Oct 24, 2025, Yan Zhao wrote:
> On Thu, Oct 16, 2025 at 05:32:42PM -0700, Sean Christopherson wrote:
> > Acquire kvm->lock, kvm->slots_lock, and all vcpu->mutex locks when
> > servicing ioctls that (a) transition the TD to a new state, i.e. when
> > doing INIT or FINALIZE, or (b) are only valid if the TD is in a specific
> > state, i.e. when initializing a vCPU or memory region.  Acquiring "all"
> > the locks fixes several KVM_BUG_ON() situations where a SEAMCALL can fail
> > due to racing actions, e.g. if tdh_vp_create() contends with either
> > tdh_mr_extend() or tdh_mr_finalize().
> >
> > For all intents and purposes, the paths in question are fully serialized,
> > i.e. there's no reason to try and allow anything remotely interesting to
> > happen.  Smack 'em with a big hammer instead of trying to be "nice".
> >
> > Acquire kvm->lock to prevent VM-wide things from happening, slots_lock to
> > prevent kvm_mmu_zap_all_fast(), and _all_ vCPU mutexes to prevent vCPUs
> slots_lock to prevent kvm_mmu_zap_memslot()?
> kvm_mmu_zap_all_fast() does not operate on the mirror root.

Oh, right.

> We may have missed a zap in the guest_memfd punch hole path:
>
> The SEAMCALLs tdh_mem_range_block(), tdh_mem_track(), and tdh_mem_page_remove()
> in the guest_memfd punch hole path are only protected by the filemap invalidate
> lock and mmu_lock, so they could contend with the v1 version of tdh_vp_init().
>
> (I'm writing a selftest to verify this, but haven't been able to reproduce
> tdh_vp_init(v1) returning BUSY yet.  However, this race condition should be
> theoretically possible.)
>
> Resources                  SHARED users               EXCLUSIVE users
> ------------------------------------------------------------------------
>  (1) TDR                   tdh_mng_rdwr               tdh_mng_create
>                            tdh_vp_create              tdh_mng_add_cx
>                            tdh_vp_addcx               tdh_mng_init
>                            tdh_vp_init(v0)            tdh_mng_vpflushdone
>                            tdh_vp_enter               tdh_mng_key_config
>                            tdh_vp_flush               tdh_mng_key_freeid
>                            tdh_vp_rd_wr               tdh_mr_extend
>                            tdh_mem_sept_add           tdh_mr_finalize
>                            tdh_mem_sept_remove        tdh_vp_init(v1)
>                            tdh_mem_page_aug           tdh_mem_page_add
>                            tdh_mem_page_remove
>                            tdh_mem_range_block
>                            tdh_mem_track
>                            tdh_mem_range_unblock
>                            tdh_phymem_page_reclaim
>
> Do you think we can acquire the mmu_lock for cmd KVM_TDX_INIT_VCPU?

Ugh, I'd rather not?

Refresh me, what's the story with "v1"?  Are we now on v2?  If this is
effectively limited to deprecated TDX modules, my vote would be to ignore the
flaw and avoid even more complexity in KVM.

> > @@ -3155,12 +3198,13 @@ int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> >  	if (r)
> >  		return r;
> >
> > +	CLASS(tdx_vm_state_guard, guard)(kvm);
> Should we move the guard to inside each cmd? Then there's no need to acquire
> the locks in the default cases.

No, I don't think it's a good tradeoff.  We'd also need to move vcpu_{load,put}()
into the cmd helpers, and theoretically slowing down a bad ioctl invocation due
to taking extra locks is a complete non-issue.
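
For context, a minimal sketch of what a CLASS()-based "take all the locks"
guard like the one in the quoted hunk could look like, built on the kernel's
cleanup.h machinery.  This is an illustrative sketch only, not the code from
the series: the helper names (tdx_vm_state_lock/unlock), the open-coded vCPU
loop, and the absence of killable/nested-lock annotations are assumptions.

#include <linux/cleanup.h>
#include <linux/kvm_host.h>

/*
 * Sketch: serialize TD state transitions by taking kvm->lock, slots_lock,
 * and every vcpu->mutex, releasing them in reverse order on scope exit.
 * Real code would likely want mutex_lock_killable() and lockdep nesting
 * annotations for the per-vCPU mutexes.
 */
static inline struct kvm *tdx_vm_state_lock(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	unsigned long i;

	mutex_lock(&kvm->lock);
	mutex_lock(&kvm->slots_lock);
	kvm_for_each_vcpu(i, vcpu, kvm)
		mutex_lock(&vcpu->mutex);
	return kvm;
}

static inline void tdx_vm_state_unlock(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	unsigned long i;

	kvm_for_each_vcpu(i, vcpu, kvm)
		mutex_unlock(&vcpu->mutex);
	mutex_unlock(&kvm->slots_lock);
	mutex_unlock(&kvm->lock);
}

DEFINE_CLASS(tdx_vm_state_guard, struct kvm *,
	     tdx_vm_state_unlock(_T), tdx_vm_state_lock(kvm), struct kvm *kvm);

With a definition along these lines, the CLASS(tdx_vm_state_guard, guard)(kvm)
statement in the quoted hunk would hold all three levels of locks for the
remainder of tdx_vcpu_unlocked_ioctl() and drop them automatically on every
return path.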