From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrea Arcangeli
Subject: Re: [Qemu-devel] [RFC] Replace posix-aio with custom thread pool
Date: Thu, 11 Dec 2008 14:12:22 +0100
Message-ID: <20081211131222.GA14908@random.random>
References: <1228512061-25398-1-git-send-email-aliguori@us.ibm.com> <493E941D.4000608@redhat.com> <493E965E.5050701@us.ibm.com> <20081210164401.GF18814@random.random> <493FFAB6.2000106@codemonkey.ws> <493FFC8E.9080802@redhat.com> <49400F69.8080707@codemonkey.ws> <20081210190810.GG18814@random.random>
In-Reply-To: <20081210190810.GG18814@random.random>
To: Anthony Liguori
Cc: Gerd Hoffmann, kvm-devel, qemu-devel@nongnu.org

My current feeling is that this user thread aio thing will never satisfy enterprise usage, and kernel aio is mandatory in my view. I had the same feeling before too, but I thought clone aio was desirable as an intermediate step, because it could help whatever other unix host OS may not have native aio support.

But if there's a problem with opening the file multiple times (which btw limits the total number of bdevs to about a dozen on a default ulimit -n with 64 max threads, but that's probably ok), then we could as well stick to glibc aio, and perhaps wait for it to evolve with aio_readv/writev (probably backed by a preadv/pwritev). And we should concentrate on kernel aio and get rid of threads when the host OS is Linux. We can add a dependency where the dma api will not bounce and linearize the buffer only if the host backend supports native aio.

Does anybody have a patch implementing kernel aio that I can plug into the dma zerocopy api?
I'm not so sure clone aio is worth maintaining inside qemu, instead of evolving glibc and the kernel with preadv/pwritev for the long term. Thanks!