public inbox for kvm@vger.kernel.org
From: Asias He <asias@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 3/3] virtio-blk: Use block layer provided spinlock
Date: Fri, 25 May 2012 15:23:12 +0800	[thread overview]
Message-ID: <4FBF3360.8040807@redhat.com> (raw)
In-Reply-To: <20120525070245.GE15474@redhat.com>

On 05/25/2012 03:02 PM, Michael S. Tsirkin wrote:
> On Fri, May 25, 2012 at 10:34:49AM +0800, Asias He wrote:
>> The block layer will allocate a spinlock for the queue if the driver
>> does not provide one in blk_init_queue().
>>
>> The reason to use the internal spinlock is that blk_cleanup_queue()
>> switches back to the internal spinlock in its cleanup path:
>>          if (q->queue_lock != &q->__queue_lock)
>>                  q->queue_lock = &q->__queue_lock;
>> However, processes in the D state might have taken the driver-provided
>> spinlock; when those processes wake up, they would release the
>> block-layer-provided spinlock instead.
>>
>> =====================================
>> [ BUG: bad unlock balance detected! ]
>> 3.4.0-rc7+ #238 Not tainted
>> -------------------------------------
>> fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
>> [<ffffffff813274d2>] blk_queue_bio+0x2a2/0x380
>> but there are no more locks to release!
>>
>> other info that might help us debug this:
>> 1 lock held by fio/3587:
>>   #0:  (&(&vblk->lock)->rlock){......}, at:
>> [<ffffffff8132661a>] get_request_wait+0x19a/0x250
>>
>> Other drivers use the block-layer-provided spinlock as well, e.g.
>> SCSI. I do not see any reason why we shouldn't,
>
> OK, but the commit log is all wrong then, it should look like this:
>
> 	virtio uses an internal lock while block layer provides
> 	its own spinlock. Switching to the common lock saves
> 	a bit of memory and does not seem to have any disadvantages:
> 	this does not increase lock contention because .....
> 	Performance tests show no real difference: before ... after ...

Hmm. Why would using the internal lock have any impact on performance?
Anyway, I will update the commit log.
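The unlock-imbalance scenario from the commit message can be modeled in
plain user-space C. This is only a sketch: the struct and function names
below are illustrative stand-ins for blk_init_queue()/blk_cleanup_queue(),
not the real block layer code, and the "spinlock" does no actual locking —
it just tracks identity so the pointer switch is visible.

```c
/* Toy spinlock: tracks identity only, no real locking. */
struct spinlock { int held; };

struct request_queue {
	struct spinlock *queue_lock;   /* pointer the lock/unlock paths go through */
	struct spinlock __queue_lock;  /* block layer's own internal lock */
};

/* Models blk_init_queue(): use the driver's lock if given, else the internal one. */
void init_queue(struct request_queue *q, struct spinlock *driver_lock)
{
	q->__queue_lock.held = 0;
	q->queue_lock = driver_lock ? driver_lock : &q->__queue_lock;
}

/* Models the blk_cleanup_queue() snippet quoted in the commit message. */
void cleanup_queue(struct request_queue *q)
{
	if (q->queue_lock != &q->__queue_lock)
		q->queue_lock = &q->__queue_lock;
}
```

A sleeper that "acquired" the lock through q->queue_lock before
cleanup_queue() runs will, on wakeup, unlock through the same pointer and
hit the internal lock it never took — exactly the lockdep "bad unlock
balance" splat above. Passing NULL to init_queue(), as the patch does for
blk_init_queue(), makes both locks the same object, so the switch is a no-op.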

>
>> even the lock unbalance issue can
>> be fixed by the block layer.
>
> s/even/even if/ ?

yes ;-)

> The lock unbalance issue wasn't yet discussed upstream, was it?

See the patch I sent this morning.

[PATCH] block: Fix lock unbalance caused by lock disconnect


> Looking at it from the other side, even if virtio can
> work around the issue, block layer should be fixed if
> it's buggy. Or maybe it's not buggy and this is just masking
> some other real issue?

Yes, I see your point. I am trying to fix the block layer as well.

> Does this mean it's inherently unsafe to use an internal spinlock?
> Aren't there other drivers doing this?

I think so.

>> Cc: Rusty Russell <rusty@rustcorp.com.au>
>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Signed-off-by: Asias He <asias@redhat.com>
>> ---
>>   drivers/block/virtio_blk.c |    9 +++------
>>   1 file changed, 3 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index b4fa2d7..774c31d 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -21,8 +21,6 @@ struct workqueue_struct *virtblk_wq;
>>
>>   struct virtio_blk
>>   {
>> -	spinlock_t lock;
>> -
>>   	struct virtio_device *vdev;
>>   	struct virtqueue *vq;
>>
>> @@ -65,7 +63,7 @@ static void blk_done(struct virtqueue *vq)
>>   	unsigned int len;
>>   	unsigned long flags;
>>
>> -	spin_lock_irqsave(&vblk->lock, flags);
>> +	spin_lock_irqsave(vblk->disk->queue->queue_lock, flags);
>>   	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
>>   		int error;
>>
>> @@ -99,7 +97,7 @@ static void blk_done(struct virtqueue *vq)
>>   	}
>>   	/* In case queue is stopped waiting for more buffers. */
>>   	blk_start_queue(vblk->disk->queue);
>> -	spin_unlock_irqrestore(&vblk->lock, flags);
>> +	spin_unlock_irqrestore(vblk->disk->queue->queue_lock, flags);
>>   }
>>
>>   static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
>> @@ -431,7 +429,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>   		goto out_free_index;
>>   	}
>>
>> -	spin_lock_init(&vblk->lock);
>>   	vblk->vdev = vdev;
>>   	vblk->sg_elems = sg_elems;
>>   	sg_init_table(vblk->sg, vblk->sg_elems);
>> @@ -456,7 +453,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>   		goto out_mempool;
>>   	}
>>
>> -	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);
>> +	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);
>>   	if (!q) {
>>   		err = -ENOMEM;
>>   		goto out_put_disk;
>> --
>> 1.7.10.2


-- 
Asias

Thread overview: 15+ messages
2012-05-25  2:34 [PATCH 0/3] Fix hot-unplug race in virtio-blk Asias He
2012-05-25  2:34 ` [PATCH 1/3] virtio-blk: Call del_gendisk() before disable guest kick Asias He
2012-05-25  7:07   ` Michael S. Tsirkin
2012-05-25  2:34 ` [PATCH 2/3] virtio-blk: Reset device after blk_cleanup_queue() Asias He
2012-05-25  6:52   ` Michael S. Tsirkin
2012-05-25  7:03     ` Asias He
2012-05-25  7:06       ` Michael S. Tsirkin
2012-05-25  7:08   ` Michael S. Tsirkin
2012-05-25  2:34 ` [PATCH 3/3] virtio-blk: Use block layer provided spinlock Asias He
2012-05-25  7:02   ` Michael S. Tsirkin
2012-05-25  7:23     ` Asias He [this message]
2012-05-25  8:03     ` [PATCH v2 " Asias He
2012-05-25 13:10       ` Michael S. Tsirkin
2012-06-01  8:49       ` Michael S. Tsirkin
2012-06-04  0:41 ` [PATCH 0/3] Fix hot-unplug race in virtio-blk Rusty Russell
