Subject: Re: [PATCH] KVM: x86/mmu: Do not create SPTEs for GFNs that exceed host.MAXPHYADDR
From: Maxim Levitsky
To: Sean Christopherson, Paolo Bonzini
Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ben Gardon, David Matlack
Date: Sun, 01 May 2022 17:32:11 +0300
References: <20220428233416.2446833-1-seanjc@google.com>
	<337332ca-835c-087c-c99b-92c35ea8dcd3@redhat.com>
	<20e1e7b1-ece7-e9e7-9085-999f7a916ac2@redhat.com>

On Sun, 2022-05-01 at 17:28 +0300, Maxim Levitsky wrote:
> On Fri, 2022-04-29 at 16:01 +0000, Sean Christopherson wrote:
> > On Fri, Apr 29, 2022, Paolo Bonzini wrote:
> > > On 4/29/22 16:42, Sean Christopherson wrote:
> > > > On Fri, Apr 29, 2022, Paolo Bonzini wrote:
> > > > > On 4/29/22 16:24, Sean Christopherson wrote:
> > > > > > I don't love the divergent memslot behavior, but it's technically correct, so I
> > > > > > can't really argue.  Do we want to "officially" document the memslot behavior?
> > > > >
> > > > > I don't know what you mean by officially document,
> > > >
> > > > Something in kvm/api.rst under KVM_SET_USER_MEMORY_REGION.
> > >
> > > Not sure if the API documentation is the best place because userspace does
> > > not know whether shadow paging is on (except indirectly through other
> > > capabilities, perhaps)?
> >
> > Hrm, true, it's not like the userspace VMM can rewrite itself at runtime.
> >
> > > It could even be programmatic, such as returning 52 for CPUID[0x80000008].
> > > A nested KVM on L1 would not be able to use the #PF(RSVD) trick to detect
> > > MMIO faults.  That's not a big price to pay, however I'm not sure it's a
> > > good idea in general...
> >
> > Agreed, messing with CPUID is likely to end in tears.
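(For context, the CPUID[0x80000008] value discussed above is the physical
address width, MAXPHYADDR, reported in EAX bits 7:0 of that leaf; 52 is the
architectural maximum for x86-64, so a hypervisor that always reported 52
would leave no reserved physical-address bits available for the #PF(RSVD)
MMIO-detection trick. Below is a minimal, illustrative userspace sketch for
reading the value; it is not part of the patch under discussion.)

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Leaf 0x80000008: EAX bits 7:0 hold MAXPHYADDR. */
		if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
			return 1; /* leaf not supported on this CPU */

		printf("MAXPHYADDR = %u bits\n", eax & 0xff);
		return 0;
	}
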
> 
> Also I can reproduce it all the way to 5.14 kernel (last kernel I have installed in this VM).
> 
> I tested kvm/queue as of today, sadly I still see the warning.

Due to a race, the above statements are out of order ;-)

Best regards,
	Maxim Levitsky

> 
> [mlevitsk@fedora34 ~]$[ 35.205241] ------------[ cut here ]------------
> [ 35.207156] WARNING: CPU: 6 PID: 3236 at arch/x86/kvm/mmu/tdp_mmu.c:46 kvm_mmu_uninit_tdp_mmu+0x47/0x50 [kvm]
> [ 35.211468] Modules linked in: uinput snd_seq_dummy snd_hrtimer xt_MASQUERADE xt_conntrack ipt_REJECT ip6table_filter ip6_tables iptable_mangle iptable_nat nf_nat bridge rpcsec_gss_krb5 auth_rpcgss
> nfsv4 dns_resolver nfs lockd grace fscache netfs rfkill sunrpc vfat fat snd_hda_codec_generic snd_hda_intel snd_intel_dspcfg snd_hda_codec kvm_amd snd_hwdep ccp snd_hda_core rng_core snd_seq kvm
> snd_seq_device snd_pcm joydev irqbypass snd_timer input_leds snd lpc_ich virtio_input mfd_core pcspkr efi_pstore rtc_cmos button ext4 mbcache jbd2 hid_generic usbhid hid virtio_gpu virtio_dma_buf
> drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops cec virtio_net net_failover drm virtio_console failover i2c_core virtio_blk crc32_pclmul xhci_pci crc32c_intel xhci_hcd virtio_pci
> virtio_pci_modern_dev virtio_ring virtio dm_mirror dm_region_hash dm_log fuse ipv6 autofs4
> [ 35.248745] CPU: 6 PID: 3236 Comm: CPU 2/KVM Not tainted 5.14.0.stable #90
> [ 35.251559] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
> [ 35.255011] RIP: 0010:kvm_mmu_uninit_tdp_mmu+0x47/0x50 [kvm]
> [ 35.257531] Code: 48 89 e5 48 39 c2 75 21 48 8b 87 b0 91 00 00 48 81 c7 b0 91 00 00 48 39 f8 75 08 e8 b3 7c cd e0 5d c3 c3 90 0f 0b 90 eb f2 90 <0f> 0b 90 eb d9 0f 1f 40 00 0f 1f 44 00 00 55 b8 ff
> ff ff ff 48 89
> [ 35.265355] RSP: 0018:ffffc90001f6fc28 EFLAGS: 00010283
> [ 35.267659] RAX: ffffc90001f5a1c0 RBX: 0000000000000008 RCX: 0000000000000000
> [ 35.270823] RDX: ffff888114168958 RSI: ffff888115636ac0 RDI: ffffc90001f51000
> [ 35.273769] RBP: ffffc90001f6fc28 R08: 0000000000004802 R09: 0000000000000000
> [ 35.276595] R10: 00000000000001cd R11: 0000000000000018 R12: ffffc90001f51000
> [ 35.279470] R13: ffffc90001f51998 R14: ffff8881001d3060 R15: dead000000000100
> [ 35.282314] FS: 0000000000000000(0000) GS:ffff88846ef80000(0000) knlGS:0000000000000000
> [ 35.285594] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 35.287943] CR2: 0000000000000000 CR3: 0000000002a0b000 CR4: 0000000000350ee0
> [ 35.290979] Call Trace:
> [ 35.292082]  kvm_mmu_uninit_vm+0x22/0x30 [kvm]
> [ 35.293909]  kvm_arch_destroy_vm+0x18f/0x200 [kvm]
> [ 35.295884]  kvm_destroy_vm+0x164/0x250 [kvm]
> [ 35.297680]  kvm_put_kvm+0x26/0x40 [kvm]
> [ 35.299309]  kvm_vm_release+0x22/0x30 [kvm]
> [ 35.301088]  __fput+0x94/0x240
> [ 35.302338]  ____fput+0xe/0x10
> [ 35.303599]  task_work_run+0x63/0xa0
> [ 35.305083]  do_exit+0x353/0x9d0
> [ 35.306470]  do_group_exit+0x3b/0xa0
> [ 35.307882]  get_signal+0x163/0x850
> [ 35.309403]  arch_do_signal_or_restart+0xf3/0x7c0
> [ 35.311390]  exit_to_user_mode_prepare+0x112/0x1f0
> [ 35.313374]  syscall_exit_to_user_mode+0x18/0x40
> [ 35.315244]  do_syscall_64+0x44/0xb0
> [ 35.316819]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 35.318874] RIP: 0033:0x7f51e5f8b0ab
> [ 35.320395] Code: Unable to access opcode bytes at RIP 0x7f51e5f8b081.
> [ 35.322985] RSP: 002b:00007f50dbdfd5c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> [ 35.326015] RAX: fffffffffffffffc RBX: 000055df487dd0a0 RCX: 00007f51e5f8b0ab
> [ 35.328914] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000000e
> [ 35.332162] RBP: 00007f50dbdfd6c0 R08: 000055df46521e60 R09: 00007ffcd47ed080
> [ 35.335172] R10: 00007ffcd47ed090 R11: 0000000000000246 R12: 00007ffcd4653f2e
> [ 35.338302] R13: 00007ffcd4653f2f R14: 0000000000000000 R15: 00007f50dbdff640
> [ 35.341320] ---[ end trace fa01d10f9909874f ]---
> 
> Oh, well, I will now switch to vanilla L0 kernel, just in case, and see where to go from this point.
> 
> Best regards,
> 	Maxim Levitsky
> 
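P.S. For anyone decoding the splat: the check at arch/x86/kvm/mmu/tdp_mmu.c:46
fires during VM teardown when the TDP MMU still thinks some of its pages or
roots are live, i.e. something was leaked or freed out of order. The sketch
below is a from-memory paraphrase of the shape of that check in 5.14-era
kernels, not the verbatim source:

	void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
	{
		if (!kvm->arch.tdp_mmu_enabled)
			return;

		/*
		 * Every TDP MMU page and root must be freed before the VM
		 * itself is destroyed; anything still on these VM-wide
		 * lists was leaked, which is what trips the WARN above.
		 */
		WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
		WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
	}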