From: Asias He
Subject: Re: [PATCH 3/3] virtio-blk: Use block layer provided spinlock
Date: Fri, 25 May 2012 15:23:12 +0800
Message-ID: <4FBF3360.8040807@redhat.com>
References: <1337913289-23486-1-git-send-email-asias@redhat.com> <1337913289-23486-4-git-send-email-asias@redhat.com> <20120525070245.GE15474@redhat.com>
In-Reply-To: <20120525070245.GE15474@redhat.com>
To: "Michael S. Tsirkin"
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
List-Id: kvm.vger.kernel.org

On 05/25/2012 03:02 PM, Michael S. Tsirkin wrote:
> On Fri, May 25, 2012 at 10:34:49AM +0800, Asias He wrote:
>> The block layer will allocate a spinlock for the queue if the driver does
>> not provide one in blk_init_queue().
>>
>> The reason not to use a driver-provided spinlock is that
>> blk_cleanup_queue() will switch to the internal spinlock in the cleanup
>> code path:
>>
>>     if (q->queue_lock != &q->__queue_lock)
>>         q->queue_lock = &q->__queue_lock;
>>
>> However, processes which are in D state might have taken the driver
>> provided spinlock; when those processes wake up, they would release the
>> block layer provided spinlock instead.
>>
>> =====================================
>> [ BUG: bad unlock balance detected! ]
>> 3.4.0-rc7+ #238 Not tainted
>> -------------------------------------
>> fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
>> [] blk_queue_bio+0x2a2/0x380
>> but there are no more locks to release!
>>
>> other info that might help us debug this:
>> 1 lock held by fio/3587:
>>  #0:  (&(&vblk->lock)->rlock){......}, at:
>> [] get_request_wait+0x19a/0x250
>>
>> Other drivers use the block layer provided spinlock as well, e.g. SCSI. I
>> do not see any reason why we shouldn't,
>
> OK, but the commit log is all wrong then, it should look like this:
>
>   virtio uses an internal lock while the block layer provides
>   its own spinlock. Switching to the common lock saves
>   a bit of memory and does not seem to have any disadvantages:
>   this does not increase lock contention because .....
>   Performance tests show no real difference: before ... after ...

Hmm. Why would using the internal lock have any impact on performance?
Anyway, I will update the commit log.

>> even the lock unbalance issue can
>> be fixed by block layer.
>
> s/even/even if/ ?

Yes ;-)

> The lock unbalance issue wasn't yet discussed upstream, was it?

See the patch I sent this morning:
[PATCH] block: Fix lock unbalance caused by lock disconnect

> Looking at it from the other side, even if virtio can
> work around the issue, block layer should be fixed if
> it's buggy. Or maybe it's not buggy and this is just masking
> some other real issue?

Yes, I got your point. I am trying to fix the block layer as well.

> Does this mean it's inherently unsafe to use an internal spinlock?
> Aren't there other drivers doing this?

I think so.

>> Cc: Rusty Russell
>> Cc: "Michael S. Tsirkin"
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Signed-off-by: Asias He
>> ---
>>  drivers/block/virtio_blk.c |    9 +++------
>>  1 file changed, 3 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index b4fa2d7..774c31d 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -21,8 +21,6 @@ struct workqueue_struct *virtblk_wq;
>>
>>  struct virtio_blk
>>  {
>> -	spinlock_t lock;
>> -
>>  	struct virtio_device *vdev;
>>  	struct virtqueue *vq;
>>
>> @@ -65,7 +63,7 @@ static void blk_done(struct virtqueue *vq)
>>  	unsigned int len;
>>  	unsigned long flags;
>>
>> -	spin_lock_irqsave(&vblk->lock, flags);
>> +	spin_lock_irqsave(vblk->disk->queue->queue_lock, flags);
>>  	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
>>  		int error;
>>
>> @@ -99,7 +97,7 @@ static void blk_done(struct virtqueue *vq)
>>  	}
>>  	/* In case queue is stopped waiting for more buffers. */
>>  	blk_start_queue(vblk->disk->queue);
>> -	spin_unlock_irqrestore(&vblk->lock, flags);
>> +	spin_unlock_irqrestore(vblk->disk->queue->queue_lock, flags);
>>  }
>>
>>  static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
>> @@ -431,7 +429,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>  		goto out_free_index;
>>  	}
>>
>> -	spin_lock_init(&vblk->lock);
>>  	vblk->vdev = vdev;
>>  	vblk->sg_elems = sg_elems;
>>  	sg_init_table(vblk->sg, vblk->sg_elems);
>>
>> @@ -456,7 +453,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>  		goto out_mempool;
>>  	}
>>
>> -	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);
>> +	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);
>>  	if (!q) {
>>  		err = -ENOMEM;
>>  		goto out_put_disk;
>> --
>> 1.7.10.2

-- 
Asias