From: Anthony Liguori
Date: Mon, 19 Jan 2009 13:15:32 -0600
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
Message-ID: <4974D154.5010601@codemonkey.ws>
In-Reply-To: <4974ACD0.601@redhat.com>
References: <1232308399-21679-1-git-send-email-avi@redhat.com>
 <4974943B.4020507@redhat.com> <49749EC5.3080704@codemonkey.ws>
 <200901191618.10082.paul@codesourcery.com> <4974AB48.1070301@codemonkey.ws>
 <4974ACD0.601@redhat.com>
To: Avi Kivity
Cc: Ian Jackson, Paul Brook, qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org

Avi Kivity wrote:
> Anthony Liguori wrote:
>> Paul Brook wrote:
>>> It looks like what you're actually doing is pushing the bounce
>>> buffer allocation into the individual packet consumers.
>>>
>>> Maybe a solution to this is a 'do IO on IOVEC' actor, with an
>>> additional flag that says whether it is acceptable to split the
>>> allocation. That way both block and packet interfaces use the same
>>> API, and avoids proliferation of manual bounce buffers in packet
>>> devices.
>>
>> I think there may be utility in having packet devices provide the
>> bounce buffers, in which case you could probably unify both into a
>> single function with a flag. But why not just have two separate
>> functions?
>>
>> Those two functions can live in exec.c too. The nice thing about
>> using map() is that it's easily overridden and chained. So here is
>> what I'm proposing:
>>
>> cpu_physical_memory_map()
>> cpu_physical_memory_unmap()
>
> This should be the baseline API with the rest using it.

Yup.

>> do_streaming_IO(map, unmap, ioworker, opaque);
>
> Why pass map and unmap?

Because we'll eventually have:

pci_device_memory_map()
pci_device_memory_unmap()

In the simplest case, pci_device_memory_map() just calls
cpu_physical_memory_map(). But it may do other things.

> grant based devices needn't go through this at all, since you never
> mix grants and physical addresses, and since grants never need
> bouncing.

So the grant map/unmap function doesn't need to deal with calling
cpu_physical_memory_map/unmap. You could still use the above API or
not. It's hard to say.

>> do_packet_IO(map, unmap, buffer, size, ioworker, opaque);
>
> If you pass the buffer then the device needs to allocate large
> amounts of bounce memory.

If do_packet_IO took a buffer, then instead of calling
alloc_buffer(size) when map fails (because you've run out of bounce
memory), you simply use buffer. Otherwise, alloc_buffer() must be able
to allocate enough memory to satisfy any request. Since each packet
device knows its maximum packet size up front, it makes sense for the
device to allocate the buffer.

You could also not care and just trust that callers do the right
thing.

Regards,

Anthony Liguori
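[Editorial note: a minimal, self-contained sketch of the do_packet_IO shape discussed in this thread. Only the names do_packet_IO, map, unmap, and the bounce-buffer fallback come from the thread; everything else is a hypothetical stand-in — toy_map models cpu_physical_memory_map() (including its running out of bounce memory, via the map_should_fail flag), and the flat guest_ram array models guest physical memory.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical scaffolding: a flat "guest RAM" and a map() that can
 * fail, modeling cpu_physical_memory_map() running out of bounce
 * memory. */
static uint8_t guest_ram[4096];
static int map_should_fail;

typedef void *(*map_fn)(uint64_t addr, size_t *plen, int is_write);
typedef void (*unmap_fn)(void *p, size_t len, int is_write,
                         size_t access_len);
typedef void (*io_worker)(void *buf, size_t len, void *opaque);

static void *toy_map(uint64_t addr, size_t *plen, int is_write)
{
    (void)is_write;
    if (map_should_fail || addr + *plen > sizeof(guest_ram)) {
        return NULL;    /* no direct mapping / out of bounce memory */
    }
    return guest_ram + addr;
}

static void toy_unmap(void *p, size_t len, int is_write,
                      size_t access_len)
{
    /* nothing to do for the direct-mapped toy case */
    (void)p; (void)len; (void)is_write; (void)access_len;
}

/* do_packet_IO in the shape proposed above: the caller passes a bounce
 * buffer sized for its maximum packet, so this helper never allocates.
 * If map() fails, the worker runs on the bounce buffer and, for a
 * device-to-guest transfer, the result is copied into guest memory by
 * hand.  Returns 0 on the zero-copy path, 1 when it had to bounce. */
static int do_packet_IO(map_fn map, unmap_fn unmap,
                        uint64_t addr, void *bounce, size_t size,
                        int is_write, io_worker worker, void *opaque)
{
    size_t len = size;
    void *p = map(addr, &len, is_write);

    if (p && len == size) {
        worker(p, size, opaque);
        unmap(p, len, is_write, size);
        return 0;
    }
    worker(bounce, size, opaque);
    if (is_write && addr + size <= sizeof(guest_ram)) {
        memcpy(guest_ram + addr, bounce, size);
    }
    return 1;
}
```

Passing map/unmap as function pointers is what makes the indirection in the thread work: a hypothetical pci_device_memory_map()/pci_device_memory_unmap() pair could be substituted later without touching do_packet_IO itself.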