From: Paolo Bonzini
Subject: Re: [PATCH 0/5] Add vhost-blk support
Date: Tue, 17 Jul 2012 10:52:10 +0200
Message-ID: <500527BA.9000001@redhat.com>
References: <1342107302-28116-1-git-send-email-asias@redhat.com> <50052276.2080906@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Stefan Hajnoczi, linux-kernel@vger.kernel.org, linux-aio@kvack.org, kvm@vger.kernel.org, "Michael S. Tsirkin", virtualization@lists.linux-foundation.org, Benjamin LaHaise, Alexander Viro, linux-fsdevel@vger.kernel.org
To: Asias He
In-Reply-To: <50052276.2080906@redhat.com>
Sender: owner-linux-aio@kvack.org
List-Id: linux-fsdevel.vger.kernel.org

On 17/07/2012 10:29, Asias He wrote:
> So, vhost-blk at least saves ~6 syscalls for us in each request.

Are they really 6?  If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per
request.

Also, is there anything we can improve?  Perhaps we can modify epoll
and ask it to clear the eventfd for us (that would save 2 reads)?  Or
io_getevents (that would save 1)?

> I guess you mean qemu here.  Yes, in theory, qemu's block layer can be
> improved to achieve similar performance as vhost-blk or kvm tool's
> userspace virtio-blk has.  But I think it makes no sense to block one
> solution because another solution exists in theory: "we can do
> something similar in qemu".

It depends.  Like vhost-scsi, vhost-blk has the problem of a crippled
feature set: no support for block device formats, non-raw protocols,
etc.  This makes it different from vhost-net.  So it raises the
question: is it going to be used in production, or is it just a useful
reference tool?

Paolo
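[Editorial aside: the amortization argument above can be put in one line of arithmetic. This is only a toy model of the numbers quoted in the thread; the helper name `syscalls_per_request` is made up for illustration.]

```python
def syscalls_per_request(per_exit_syscalls, coalescing_factor):
    """Amortized syscall cost per request when a single guest exit
    services `coalescing_factor` queued requests."""
    return per_exit_syscalls / coalescing_factor

# ~6 syscalls paid once per exit, each exit processing 3 requests:
# the fixed cost is amortized down to 2 syscalls per request.
print(syscalls_per_request(6, 3))  # → 2.0
```

With no coalescing (a factor of 1) every request pays the full fixed cost, which is where the "~6 syscalls per request" figure comes from.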