From: Avi Kivity <avi@redhat.com>
To: "Roedel, Joerg" <Joerg.Roedel@amd.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 16/22] KVM: MMU: Track page fault data in struct vcpu
Date: Tue, 04 May 2010 12:45:53 +0300 [thread overview]
Message-ID: <4BDFECD1.8040109@redhat.com> (raw)
In-Reply-To: <20100504093709.GE28950@amd.com>
On 05/04/2010 12:37 PM, Roedel, Joerg wrote:
>
> This is the lockdep warning I get when I start booting a Linux kernel.
> It is with the nested-npt patchset but the warning occurs without it too
> (slightly different backtraces then).
>
> [60390.953424] =======================================================
> [60390.954324] [ INFO: possible circular locking dependency detected ]
> [60390.954324] 2.6.34-rc5 #7
> [60390.954324] -------------------------------------------------------
> [60390.954324] qemu-system-x86/2506 is trying to acquire lock:
> [60390.954324] (&mm->mmap_sem){++++++}, at: [<c10ab0f4>] might_fault+0x4c/0x86
> [60390.954324]
> [60390.954324] but task is already holding lock:
> [60390.954324] (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<f8ec1b50>] spin_lock+0xd/0xf [kvm]
> [60390.954324]
> [60390.954324] which lock already depends on the new lock.
> [60390.954324]
> [60390.954324]
> [60390.954324] the existing dependency chain (in reverse order) is:
> [60390.954324]
> [60390.954324] -> #1 (&(&kvm->mmu_lock)->rlock){+.+...}:
> [60390.954324] [<c10575ad>] __lock_acquire+0x9fa/0xb6c
> [60390.954324] [<c10577b8>] lock_acquire+0x99/0xb8
> [60390.954324] [<c15afa2b>] _raw_spin_lock+0x20/0x2f
> [60390.954324] [<f8eafe19>] spin_lock+0xd/0xf [kvm]
> [60390.954324] [<f8eb104e>] kvm_mmu_notifier_invalidate_range_start+0x2f/0x71 [kvm]
> [60390.954324] [<c10bc994>] __mmu_notifier_invalidate_range_start+0x31/0x57
> [60390.954324] [<c10b1de3>] mprotect_fixup+0x153/0x3d5
> [60390.954324] [<c10b21ca>] sys_mprotect+0x165/0x1db
> [60390.954324] [<c10028cc>] sysenter_do_call+0x12/0x32
>
Unrelated; this path simply takes the lock and releases it. It only shows
up because we do memory ops inside the mmu_lock, which is deeply forbidden
(anything which touches user memory, including kmalloc(), can trigger
mmu notifiers and recursive locking).
> [60390.954324]
> [60390.954324] -> #0 (&mm->mmap_sem){++++++}:
> [60390.954324] [<c10574af>] __lock_acquire+0x8fc/0xb6c
> [60390.954324] [<c10577b8>] lock_acquire+0x99/0xb8
> [60390.954324] [<c10ab111>] might_fault+0x69/0x86
> [60390.954324] [<c11d5987>] _copy_from_user+0x36/0x119
> [60390.954324] [<f8eafcd9>] copy_from_user+0xd/0xf [kvm]
> [60390.954324] [<f8eb0ac0>] kvm_read_guest_page+0x24/0x33 [kvm]
> [60390.954324] [<f8ebb362>] kvm_read_guest_page_mmu+0x55/0x63 [kvm]
> [60390.954324] [<f8ebb397>] kvm_read_nested_guest_page+0x27/0x2e [kvm]
> [60390.954324] [<f8ebb3da>] load_pdptrs+0x3c/0x9e [kvm]
> [60390.954324] [<f84747ac>] svm_cache_reg+0x25/0x2b [kvm_amd]
> [60390.954324] [<f8ec7894>] kvm_mmu_load+0xf1/0x1fa [kvm]
> [60390.954324] [<f8ebbdfc>] kvm_arch_vcpu_ioctl_run+0x252/0x9c7 [kvm]
> [60390.954324] [<f8eb1fb5>] kvm_vcpu_ioctl+0xee/0x432 [kvm]
> [60390.954324] [<c10cf8e9>] vfs_ioctl+0x2c/0x96
> [60390.954324] [<c10cfe88>] do_vfs_ioctl+0x491/0x4cf
> [60390.954324] [<c10cff0c>] sys_ioctl+0x46/0x66
> [60390.954324] [<c10028cc>] sysenter_do_call+0x12/0x32
>
Just a silly bug. kvm_pdptr_read() can cause a guest memory read on
svm, in this case with the mmu lock taken. I'll post something to fix it.
> What makes me wondering about this is that the two traces to the locks seem to
> belong to different threads.
>
Ever increasing complexity...
--
error compiling committee.c: too many arguments to function
Thread overview: 57+ messages
2010-04-27 10:38 [PATCH 0/22] Nested Paging support for Nested SVM v2 Joerg Roedel
2010-04-27 10:38 ` [PATCH 01/22] KVM: MMU: Check for root_level instead of long mode Joerg Roedel
2010-04-27 10:38 ` [PATCH 02/22] KVM: MMU: Make tdp_enabled a mmu-context parameter Joerg Roedel
2010-04-27 12:06 ` Avi Kivity
2010-04-27 10:38 ` [PATCH 03/22] KVM: MMU: Make set_cr3 a function pointer in kvm_mmu Joerg Roedel
2010-04-27 10:38 ` [PATCH 04/22] KVM: X86: Introduce a tdp_set_cr3 function Joerg Roedel
2010-04-27 10:38 ` [PATCH 05/22] KVM: MMU: Introduce get_cr3 function pointer Joerg Roedel
2010-04-27 10:38 ` [PATCH 06/22] KVM: MMU: Introduce inject_page_fault " Joerg Roedel
2010-04-27 10:38 ` [PATCH 07/22] KVM: SVM: Implement MMU helper functions for Nested Nested Paging Joerg Roedel
2010-04-27 10:38 ` [PATCH 08/22] KVM: MMU: Change init_kvm_softmmu to take a context as parameter Joerg Roedel
2010-04-27 10:38 ` [PATCH 09/22] KVM: MMU: Let is_rsvd_bits_set take mmu context instead of vcpu Joerg Roedel
2010-04-27 10:38 ` [PATCH 10/22] KVM: MMU: Introduce generic walk_addr function Joerg Roedel
2010-04-27 10:38 ` [PATCH 11/22] KVM: MMU: Add infrastructure for two-level page walker Joerg Roedel
2010-04-27 12:34 ` Avi Kivity
2010-04-28 10:52 ` Joerg Roedel
2010-04-28 11:24 ` Avi Kivity
2010-04-28 11:03 ` Joerg Roedel
2010-04-28 11:09 ` Avi Kivity
2010-04-27 10:38 ` [PATCH 12/22] KVM: MMU: Implement nested gva_to_gpa functions Joerg Roedel
2010-04-27 12:37 ` Avi Kivity
2010-04-28 14:20 ` Joerg Roedel
2010-04-27 10:38 ` [PATCH 13/22] KVM: X86: Add kvm_read_guest_page_tdp function Joerg Roedel
2010-04-27 12:42 ` Avi Kivity
2010-04-27 13:10 ` Joerg Roedel
2010-04-27 13:40 ` Avi Kivity
2010-04-27 10:38 ` [PATCH 14/22] KVM: MMU: Make walk_addr_generic capable for two-level walking Joerg Roedel
2010-04-27 10:38 ` [PATCH 15/22] KVM: MMU: Introduce kvm_read_guest_page_x86() Joerg Roedel
2010-04-27 12:52 ` Avi Kivity
2010-04-27 13:20 ` Joerg Roedel
2010-04-27 13:35 ` Avi Kivity
2010-04-27 15:40 ` Joerg Roedel
2010-04-27 16:09 ` Avi Kivity
2010-04-27 16:27 ` Joerg Roedel
2010-04-28 15:31 ` Joerg Roedel
2010-04-27 10:38 ` [PATCH 16/22] KVM: MMU: Track page fault data in struct vcpu Joerg Roedel
2010-04-27 12:58 ` Avi Kivity
2010-04-27 13:28 ` Joerg Roedel
2010-04-27 13:37 ` Avi Kivity
2010-04-27 13:57 ` Joerg Roedel
2010-04-27 16:02 ` Avi Kivity
2010-05-03 16:32 ` Joerg Roedel
2010-05-04 7:53 ` Avi Kivity
2010-05-04 9:11 ` Roedel, Joerg
2010-05-04 9:20 ` Avi Kivity
2010-05-04 9:37 ` Roedel, Joerg
2010-05-04 9:45 ` Avi Kivity [this message]
2010-05-04 9:50 ` Avi Kivity
2010-05-04 12:00 ` Roedel, Joerg
2010-05-04 12:04 ` Avi Kivity
2010-04-27 10:38 ` [PATCH 17/22] KVM: MMU: Propagate the right fault back to the guest after gva_to_gpa Joerg Roedel
2010-04-27 10:38 ` [PATCH 18/22] KVM: X86: Propagate fetch faults Joerg Roedel
2010-04-27 10:38 ` [PATCH 19/22] KVM: MMU: Introduce init_kvm_nested_mmu() Joerg Roedel
2010-04-27 10:38 ` [PATCH 20/22] KVM: SVM: Initialize Nested Nested MMU context on VMRUN Joerg Roedel
2010-04-27 13:01 ` Avi Kivity
2010-04-27 10:38 ` [PATCH 21/22] KVM: SVM: Report Nested Paging support to userspace Joerg Roedel
2010-04-27 10:38 ` [PATCH 22/22] KVM: SVM: Expect two more candiates for exit_int_info Joerg Roedel
2010-04-27 13:03 ` [PATCH 0/22] Nested Paging support for Nested SVM v2 Avi Kivity