From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [patch 0/4] move irq protection role to separate lock v2
Date: Thu, 21 May 2009 01:50:15 -0300
Message-ID: <20090521045015.GA1104@amt.cnet>
References: <4A1413C3.4020606@redhat.com>
 <20090520184841.954066003@localhost.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, Christian Borntraeger
To: Avi Kivity
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:40120 "EHLO mx2.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750847AbZEUEuv
 (ORCPT ); Thu, 21 May 2009 00:50:51 -0400
Content-Disposition: inline
In-Reply-To: <20090520184841.954066003@localhost.localdomain>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Wed, May 20, 2009 at 03:48:41PM -0300, Marcelo Tosatti wrote:
> Addressing comment and covering irqfd (did not remove the deadlock avoidance
> there since it might be useful later).

I can't see any reason why serializing the vm ioctl is a bad idea.

There is one indication of disagreement here:

http://patchwork.kernel.org/patch/5661/

"The original intent was that kvm_arch_vcpu_create() not "link in" the
vcpu to any registers. That allows most of the vcpu creation to happen
outside a lock."

But I fail to see a case where vcpu creation is a fast path (unless
you're benchmarking cpu hotplug/hotunplug).

Note there is no risk of deadlock if a vcpu thread attempts to execute
a vm ioctl: vm_ioctl_lock is always the first lock grabbed in the
vm_ioctl path and is not used in any other path, so no lock-ordering
cycle can form (see the small standalone sketch after the patch below).

The reason for this change is that some sites, such as KVM_CREATE_PIT,
already handle concurrency properly (with custom locking that becomes
obsolete), but others don't.

Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -131,9 +131,9 @@ struct kvm {
 					  KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	struct list_head vm_list;
+	struct mutex vm_ioctl_lock; /* protects concurrent vm fd ioctls */
 	struct mutex lock; /*
 			    * - protects mmio_bus, pio_bus.
-			    * - protects a few concurrent ioctl's (FIXME).
 			    * - protects concurrent create_vcpu, but
 			    *   kvm->vcpus walkers do it locklessly (FIXME).
 			    */
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -988,6 +988,7 @@ static struct kvm *kvm_create_vm(void)
 	INIT_LIST_HEAD(&kvm->irqfds);
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
+	mutex_init(&kvm->vm_ioctl_lock);
 	kvm_io_bus_init(&kvm->mmio_bus);
 	init_rwsem(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
@@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
 
 	if (kvm->mm != current->mm)
 		return -EIO;
+
+	mutex_lock(&kvm->vm_ioctl_lock);
+
 	switch (ioctl) {
 	case KVM_CREATE_VCPU:
 		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
@@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
 	}
 out:
+	mutex_unlock(&kvm->vm_ioctl_lock);
 	return r;
 }
 
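
To make the lock-ordering argument above concrete, here is a small
standalone userspace sketch (plain pthreads, not kernel code; the names
ioctl_path/vcpu_path and the outer/inner mutexes are invented for
illustration). It shows the invariant the patch relies on: the outer
lock (the vm_ioctl_lock analogue) is only ever taken first, on a single
path, while every other path takes only the inner lock, so the orders
outer->inner and inner->outer can never both occur and no ABBA deadlock
is possible.

/* lock-order-sketch.c: build with "gcc -pthread lock-order-sketch.c".
 * Illustration only; not kernel code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t outer = PTHREAD_MUTEX_INITIALIZER; /* vm_ioctl_lock analogue */
static pthread_mutex_t inner = PTHREAD_MUTEX_INITIALIZER; /* kvm->lock analogue */

/* The "vm ioctl" path: outer is the first lock grabbed, inner nests inside it. */
static void *ioctl_path(void *arg)
{
	pthread_mutex_lock(&outer);
	pthread_mutex_lock(&inner);
	puts("ioctl path: outer -> inner");
	pthread_mutex_unlock(&inner);
	pthread_mutex_unlock(&outer);
	return NULL;
}

/* A "vcpu thread" path: it may take inner, but never touches outer,
 * so it can never hold inner while waiting for outer. */
static void *vcpu_path(void *arg)
{
	pthread_mutex_lock(&inner);
	puts("vcpu path: inner only");
	pthread_mutex_unlock(&inner);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, ioctl_path, NULL);
	pthread_create(&b, NULL, vcpu_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

The same reasoning is why taking vm_ioctl_lock unconditionally at the
top of kvm_vm_ioctl() is safe even though individual ioctl handlers
grab other kvm locks: vm_ioctl_lock stays strictly outermost.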