From: Kevin Wolf
Date: Tue, 18 Oct 2011 11:56:01 +0200
Subject: Re: [Qemu-devel] [PATCH v8 1/4] block: add the block queue support
To: Zhi Yong Wu
Cc: Zhi Yong Wu, stefanha@linux.vnet.ibm.com, kvm@vger.kernel.org, qemu-devel@nongnu.org
Message-ID: <4E9D4D31.1070406@redhat.com>

On 18.10.2011 11:29, Zhi Yong Wu wrote:
>>>>>>> +void qemu_del_block_queue(BlockQueue *queue)
>>>>>>> +{
>>>>>>> +    BlockQueueAIOCB *request, *next;
>>>>>>> +
>>>>>>> +    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
>>>>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>>>>> +        qemu_aio_release(request);
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    g_free(queue);
>>>>>>> +}
>>>>>>
>>>>>> Can we be sure that no AIO requests are in flight that still use the
>>>>>> now released AIOCB?
>>>>> Yeah, since the QEMU core code runs serially, I think that when
>>>>> qemu_del_block_queue() is performed, no requests are in flight. Right?
>>>>
>>>> Patch 2 has this code:
>>>>
>>>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>>>> +{
>>>> +    bs->io_limits_enabled = false;
>>>> +
>>>> +    if (bs->block_queue) {
>>>> +        qemu_block_queue_flush(bs->block_queue);
>>>> +        qemu_del_block_queue(bs->block_queue);
>>>> +        bs->block_queue = NULL;
>>>> +    }
>>>>
>>>> Does this mean that you can't disable I/O limits while the VM is running?
>>> No, you can, even while the VM is running.
>>
>> Okay, in general qemu_block_queue_flush() empties the queue so that
>> there are no requests left that qemu_del_block_queue() could drop from
>> the queue. So in the common case it doesn't even enter the FOREACH loop.
> I think that we should use !QTAILQ_EMPTY(&queue->requests), not
> QTAILQ_FOREACH_SAFE, in qemu_del_block_queue(), right?

I think QTAILQ_FOREACH_SAFE is fine.

>>
>> I think problems start when requests have failed or exceeded the limit
>> again, then you have requests queued even after
>> qemu_block_queue_flush(). You must be aware of this, otherwise the code
>> in qemu_del_block_queue() wouldn't exist.
>>
>> But you can't release the ACBs without having called their callback,
>> otherwise the caller would still assume that its ACB pointer is valid.
>> Maybe calling the callback before releasing the ACB would be enough.
> Good, thanks.
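
For illustration, something like this untested sketch is what I have in
mind (it assumes BlockQueueAIOCB embeds a BlockDriverAIOCB as its
"common" member, which the qemu_aio_release() call suggests; -EIO is
just one choice of error code):

void qemu_del_block_queue(BlockQueue *queue)
{
    BlockQueueAIOCB *request, *next;

    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
        QTAILQ_REMOVE(&queue->requests, request, entry);
        /* Complete the request with an error first, so that the caller
         * learns that its ACB pointer must not be used any more... */
        request->common.cb(request->common.opaque, -EIO);
        /* ...and only then release the ACB. */
        qemu_aio_release(request);
    }

    g_free(queue);
}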
>>>>
>>>>>>> +            }
>>>>>>> +        }
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    queue->req_failed = true;
>>>>>>> +    queue->flushing = false;
>>>>>>> +}
>>>>>>> +
>>>>>>> +bool qemu_block_queue_has_pending(BlockQueue *queue)
>>>>>>> +{
>>>>>>> +    return !queue->flushing && !QTAILQ_EMPTY(&queue->requests);
>>>>>>> +}
>>>>>>
>>>>>> Why doesn't the queue have pending requests in the middle of a flush
>>>>>> operation? (That is, the flush hasn't completed yet)
>>>>> It is possible for the queue to have pending requests. if yes, how about?
>>>>
>>>> Sorry, can't parse this.
>>>>
>>>> I don't understand why the !queue->flushing part is correct.
>>
>> What about this?
> When bdrv_aio_readv/writev handles a request, it checks whether the
> block queue is not NULL and is not being flushed; if so, it assumes
> that the request is a new one from the upper layer, so it doesn't
> check whether the runtime I/O rate has exceeded the limits, but
> immediately inserts it into the block queue.

Hm, I think I understand what you're saying, but only after looking at
patch 3.

This is not really implementing a has_pending(), but
has_pending_and_caller_wasnt_called_during_flush(). I think it would be
better to handle the queue->flushing condition in the caller where its
use is more obvious.
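
That is, roughly something like this (untested;
qemu_block_queue_is_flushing() is a made-up name for a trivial accessor,
and the enqueue call only hints at whatever arguments patch 3 really
passes):

bool qemu_block_queue_has_pending(BlockQueue *queue)
{
    /* Only report what the queue itself knows: whether it is empty. */
    return !QTAILQ_EMPTY(&queue->requests);
}

/* In the bdrv_aio_readv/writev path of patch 3, roughly: */
if (bs->block_queue
    && !qemu_block_queue_is_flushing(bs->block_queue)
    && qemu_block_queue_has_pending(bs->block_queue)) {
    /* A new request from the upper layer arrived while older requests
     * are still queued: enqueue it directly to preserve the ordering,
     * without checking the I/O limits again. */
    return qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
                                    sector_num, qiov, nb_sectors,
                                    cb, opaque);
}

Kevin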