Date: Thu, 2 Sep 2010 12:12:00 +0300
From: Eduard - Gabriel Munteanu
To: "Michael S. Tsirkin"
Cc: kvm@vger.kernel.org, joro@8bytes.org, qemu-devel@nongnu.org, blauwirbel@gmail.com, yamahata@valinux.co.jp, paul@codesourcery.com, avi@redhat.com
Subject: [Qemu-devel] Re: [PATCH 4/7] ide: use the PCI memory access interface
Message-ID: <20100902091200.GB7319@localhost>
In-Reply-To: <20100902051911.GA4273@redhat.com>
References: <1283007298-10942-1-git-send-email-eduard.munteanu@linux360.ro> <1283007298-10942-5-git-send-email-eduard.munteanu@linux360.ro> <20100902051911.GA4273@redhat.com>

On Thu, Sep 02, 2010 at 08:19:11AM +0300, Michael S. Tsirkin wrote:
> On Sat, Aug 28, 2010 at 05:54:55PM +0300, Eduard - Gabriel Munteanu wrote:
> > Emulated PCI IDE controllers now use the memory access interface. This
> > also allows an emulated IOMMU to translate and check accesses.
> >
> > Map invalidation results in cancelling DMA transfers. Since the guest OS
> > can't properly recover the DMA results in case the mapping is changed,
> > this is a fairly good approximation.
> >
> > Signed-off-by: Eduard - Gabriel Munteanu
> > ---

[snip]

> > +static inline void bmdma_memory_read(BMDMAState *bm,
> > +                                     target_phys_addr_t addr,
> > +                                     uint8_t *buf,
> > +                                     target_phys_addr_t len)
> > +{
> > +    bm->rw(bm->opaque, addr, buf, len, 0);
> > +}
> > +
> > +static inline void bmdma_memory_write(BMDMAState *bm,
> > +                                      target_phys_addr_t addr,
> > +                                      uint8_t *buf,
> > +                                      target_phys_addr_t len)
> > +{
> > +    bm->rw(bm->opaque, addr, buf, len, 1);
> > +}
> > +
>
> Here again, I am concerned about indirection and pointer chasing on the
> data path. Can we have an iommu pointer in the device, and do a fast
> path in case there is no iommu?
>

See my other reply.

> >  static inline IDEState *idebus_active_if(IDEBus *bus)
> >  {
> >      return bus->ifs + bus->unit;
> > diff --git a/hw/ide/macio.c b/hw/ide/macio.c
> > index bd1c73e..962ae13 100644
> > --- a/hw/ide/macio.c
> > +++ b/hw/ide/macio.c
> > @@ -79,7 +79,7 @@ static void pmac_ide_atapi_transfer_cb(void *opaque, int ret)
> >
> >      s->io_buffer_size = io->len;
> >
> > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> >      qemu_sglist_add(&s->sg, io->addr, io->len);
> >      io->addr += io->len;
> >      io->len = 0;
> > @@ -141,7 +141,7 @@ static void pmac_ide_transfer_cb(void *opaque, int ret)
> >      s->io_buffer_index = 0;
> >      s->io_buffer_size = io->len;
> >
> > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> >      qemu_sglist_add(&s->sg, io->addr, io->len);
> >      io->addr += io->len;
> >      io->len = 0;
> > diff --git a/hw/ide/pci.c b/hw/ide/pci.c
> > index 4d95cc5..5879044 100644
> > --- a/hw/ide/pci.c
> > +++ b/hw/ide/pci.c
> > @@ -183,4 +183,11 @@ void pci_ide_create_devs(PCIDevice *dev, DriveInfo **hd_table)
> >              continue;
> >          ide_create_drive(d->bus+bus[i], unit[i], hd_table[i]);
> >      }
> > +
> > +    for (i = 0; i < 2; i++) {
> > +        d->bmdma[i].rw = (void *) pci_memory_rw;
> > +        d->bmdma[i].map = (void *) pci_memory_map;
> > +        d->bmdma[i].unmap = (void *) pci_memory_unmap;
> > +        d->bmdma[i].opaque = dev;
> > +    }
> > +}
>
> These casts show something is wrong with the API, IMO.
>

Hm, here's an oversight on my part: I think I should provide explicit
bmdma hooks, since pcibus_t is a uint64_t and target_phys_addr_t is a
uint{32,64}_t depending on the guest machine, so the casts might be
buggy on 32-bit targets wrt calling conventions. But that introduces
yet another non-inlined function call :-(. It would drop the (void *)
casts, though.


	Eduard
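
[Editorial note: the fast path Michael asks for above, a NULL-checked
iommu pointer in the device so the common no-IOMMU case skips the
indirect call, could be sketched roughly as below. All type and
function names here (Device, Iommu, dma_rw, etc.) are illustrative
stand-ins, not QEMU's actual API, and guest RAM is a toy byte array.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t dma_addr_t;

/* Stand-in IOMMU: a translate callback mapping device addresses to
 * machine addresses. */
typedef struct Iommu {
    dma_addr_t (*translate)(struct Iommu *iommu, dma_addr_t addr);
} Iommu;

typedef struct Device {
    Iommu *iommu;       /* NULL when the machine has no IOMMU */
    uint8_t ram[256];   /* toy stand-in for guest memory */
} Device;

/* Slow path: translate through the IOMMU before touching memory. */
static void dma_rw_slow(Device *d, dma_addr_t addr, uint8_t *buf,
                        size_t len, int is_write)
{
    dma_addr_t xlat = d->iommu->translate(d->iommu, addr);

    if (is_write) {
        memcpy(d->ram + xlat, buf, len);
    } else {
        memcpy(buf, d->ram + xlat, len);
    }
}

static void dma_rw(Device *d, dma_addr_t addr, uint8_t *buf,
                   size_t len, int is_write)
{
    if (d->iommu) {
        dma_rw_slow(d, addr, buf, len, is_write);
        return;
    }
    /* Fast path: no IOMMU configured, so access memory directly with
     * no per-transfer indirect call. */
    if (is_write) {
        memcpy(d->ram + addr, buf, len);
    } else {
        memcpy(buf, d->ram + addr, len);
    }
}
```

The one pointer test per transfer is cheap and well-predicted, so
devices on machines without an IOMMU pay essentially nothing.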
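
[Editorial note: the explicit-hook fix Eduard describes, replacing the
(void *) function-pointer casts with a wrapper whose prototype exactly
matches what the bmdma caller expects, could look like the sketch
below. The pci_memory_rw signature and the 32-bit target_phys_addr_t
are assumptions made for illustration; calling a function through a
pointer of an incompatible type is undefined behaviour in C, which is
the bug the wrapper avoids.]

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins: pcibus_t is always 64-bit, while
 * target_phys_addr_t may be only 32 bits wide on some targets. */
typedef uint64_t pcibus_t;
typedef uint32_t target_phys_addr_t;

typedef struct PCIDevice {
    uint8_t mem[64];    /* toy address space for the sketch */
} PCIDevice;

/* The wide interface (this signature is an assumption). */
static void pci_memory_rw(PCIDevice *dev, pcibus_t addr,
                          uint8_t *buf, pcibus_t len, int is_write)
{
    if (is_write) {
        memcpy(dev->mem + addr, buf, len);
    } else {
        memcpy(buf, dev->mem + addr, len);
    }
}

/* Instead of casting pci_memory_rw through (void *), provide a
 * wrapper with the exact prototype the bmdma hook expects. The
 * narrowing/widening between target_phys_addr_t and pcibus_t then
 * happens as well-defined argument conversions, at the cost of one
 * extra non-inlined call. */
static void bmdma_rw(void *opaque, target_phys_addr_t addr,
                     uint8_t *buf, target_phys_addr_t len, int is_write)
{
    pci_memory_rw(opaque, addr, buf, len, is_write);
}
```

Wiring `d->bmdma[i].rw = bmdma_rw;` would then need no cast at all,
and the compiler can diagnose any future prototype mismatch.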