From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <497758D8.8010806@redhat.com>
Date: Wed, 21 Jan 2009 19:18:16 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
In-Reply-To: <18807.21106.292689.945702@mariner.uk.xensource.com>
To: qemu-devel@nongnu.org

Ian Jackson wrote:
> Avi Kivity writes ("Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API"):
>
>> Ian Jackson wrote:
>>
>>> Which devices ? All devices ever that want to do zero-copy DMA into
>>> the guest ?
>>>
>> IDE, scsi, virtio-blk, virtio-net, e1000, maybe a few more.
>>
>
> Yesterday I produced the example of a SCSI tape drive, which is
> vanishingly unlikely to result in writes past the actual transfer
> length since the drive definitely produces all of the data in order.
>

And I explained that it's very unlikely ever to be noticed by a guest, since
the DMA will happen into kernel memory (which will be clobbered), but the
subsequent copy into userspace will use the correct size. I also pointed out
that the holy kernel itself might use bounce buffers and disregard the
actually copied size.

If you're into accurate emulation but not into performance, use
cpu_physical_memory_rw(). This API is optional, for performance-minded
implementations.
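To make the trade-off concrete, here is a minimal sketch of the pattern I
have in mind, assuming the cpu_physical_memory_map()/cpu_physical_memory_unmap()
signatures from patch 1/5 of this series; dev_produce_data() is a
hypothetical device callback, not part of the patch:

/* Sketch only: assumes the map/unmap API proposed in patch 1/5, plus a
 * hypothetical dev_produce_data() callback that fills 'buf' with up to
 * 'len' bytes and returns how many bytes it actually produced. */
static void dma_write_to_guest(target_phys_addr_t addr, int len)
{
    target_phys_addr_t plen = len;
    uint8_t *buf = cpu_physical_memory_map(addr, &plen, 1 /* is_write */);

    if (buf) {
        /* Zero-copy path: the device writes straight into guest RAM.
         * Only the first 'done' bytes are meaningful; anything up to
         * 'plen' may be clobbered -- exactly the behaviour under
         * discussion.  (A real caller would loop if plen < len.) */
        int done = dev_produce_data(buf, plen);
        cpu_physical_memory_unmap(buf, plen, 1, done);
    } else {
        /* Bounce path: slower, but touches only the bytes produced. */
        uint8_t bounce[4096];
        int chunk = len < (int)sizeof(bounce) ? len : (int)sizeof(bounce);
        int done = dev_produce_data(bounce, chunk);
        cpu_physical_memory_rw(addr, bounce, done, 1);
    }
}

With cpu_physical_memory_rw() the guest never sees bytes past 'done'; with
the mapped path it may, and that is the cost of zero-copy.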
> As I have already pointed out, we won't discover that any guest
> depends on those promises in testing, because it's the kind of thing
> that will only happen in practice with reasonably obscure situations
> including some error conditions.
>
> So "let's only do this if we discover we need it" is not good enough.
> We won't know that we need it. What will probably happen is that some
> user somewhere who is already suffering from some kind of problem will
> experience additional apparently-random corruption. Naturally that's
> not going to result in a good bug report.
>
> Even from our point of view as the programmers this isn't a good
> approach because the proposed fix is an API change. What
> you're suggesting is that we introduce a bug, and wait and see if it
> bites anyone, in the full knowledge that by then fixing the bug will
> involve either widespread changes to all of the DMA API users or
> changing a particular driver to be much slower.
>

That's because I estimate the probability of such a change being required
as zero.

>> Framebuffers? Those are RAM. USB webcams? These can't be interrupted
>> by SIGINT. Are you saying a guest depends on an O_DIRECT USB transfer
>> not affecting memory when a USB cable is pulled out?
>>
>
> No, as I said earlier, and as you appeared to accept, it is quite
> possible that in some uses of the qemu code - including some uses of
> Xen - _all_ DMA will go through bounce buffers.
>

Right now Xen doesn't bounce DMAs; it uses the map cache. I am not coding
for all possible uses of qemu. I am coding for what's in upstream qemu, and
allowing for reasonable implementations in Xen.

>> I'm suggesting we do that unconditionally (as my patch does) and only
>> add that complexity when we know it's needed for certain.
>>
>
> At the moment there are no such devices (your claims about IDE
> notwithstanding) but I think it will be easier to argue about the
> specific case after we have agreed on a non-deficient API.
>

I don't think we'll ever reach agreement.

--
error compiling committee.c: too many arguments to function