From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43) id 1LOwEQ-0002QT-8O for qemu-devel@nongnu.org; Mon, 19 Jan 2009 10:40:06 -0500
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43) id 1LOwEO-0002Pb-Fp for qemu-devel@nongnu.org; Mon, 19 Jan 2009 10:40:04 -0500
Received: from [199.232.76.173] (port=49698 helo=monty-python.gnu.org) by lists.gnu.org with esmtp (Exim 4.43) id 1LOwEO-0002PX-Cm for qemu-devel@nongnu.org; Mon, 19 Jan 2009 10:40:04 -0500
Received: from qw-out-1920.google.com ([74.125.92.147]:40565) by monty-python.gnu.org with esmtp (Exim 4.60) (envelope-from ) id 1LOwEN-0007ij-VJ for qemu-devel@nongnu.org; Mon, 19 Jan 2009 10:40:04 -0500
Received: by qw-out-1920.google.com with SMTP id 5so528821qwc.4 for ; Mon, 19 Jan 2009 07:40:02 -0800 (PST)
Message-ID: <49749EC5.3080704@codemonkey.ws>
Date: Mon, 19 Jan 2009 09:39:49 -0600
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
References: <1232308399-21679-1-git-send-email-avi@redhat.com> <1232308399-21679-2-git-send-email-avi@redhat.com> <18804.34053.211615.181730@mariner.uk.xensource.com> <4974943B.4020507@redhat.com>
In-Reply-To: <4974943B.4020507@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: Ian Jackson , Avi Kivity

Avi Kivity wrote:
>> The interface when cpu_physical_memory_map returns 0 is strange.
>> Normally everything in qemu is done with completion callbacks, but
>> here we have a kind of repeated polling.
>>
>
> I agree.  This was done at Anthony's request so I'll defer the
> response to him.

There are two distinct IO consumers of this API: streaming IO and packet IO.
For streaming IO, you have a model of:

  process_data:
      while (offset < size) {
          data = map(offset, &len);
          if (data) {
              do IO on (data, len);
              unmap(data);
              offset += len;
          } else
              break;
      }
      if (offset < size)
          add callback for mappability to process_data

I agree that this model could be formalized into something that took a
'do IO on (data, len)' actor.  In fact, since map() and unmap() are
pretty generic, they too could be actors.  This would then work for CPU
memory IO, PCI memory IO, etc.

The packet IO API is a bit different.  It looks like:

  while (offset < size) {
      data = map(offset, &len);
      if (data == NULL)
          break;
      sg[n_sg].iov_base = data;
      sg[n_sg].iov_len = len;
      n_sg++;
      offset += len;
  }

  if (offset < size) {
      for (i = 0; i < n_sg; i++)
          unmap(sg[i].iov_base);
      sg[0].iov_base = alloc_buffer(size);
      sg[0].iov_len = size;
      cpu_physical_memory_rw(addr, sg[0].iov_base, size, is_write);
      n_sg = 1;
  }

  do IO on (sg)

  if (we bounced) {
      free(sg[0].iov_base);
  }

In this case, it isn't useful to get a callback with only some of the
packet data.  You need to know up front whether you can map all of the
packet data.  In fact, a callback API doesn't really work here because
it implies that at the end of the callback you either release the data,
or that the next callback cannot be invoked until you unmap the
previous data.

So this is why I prefer the map() API: it accommodates two distinct
users in a way that a callback API wouldn't.  We can formalize these
idioms into an API, of course.

BTW, to support this model, we have to reserve at least one bounce
buffer for cpu_physical_memory_rw.

Regards,

Anthony Liguori