From: Joerg Roedel
To: Andy Lutomirski
Cc: Andy Lutomirski, "linux-kernel@vger.kernel.org", Christian Borntraeger, Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig, Benjamin Herrenschmidt, KVM, David Woodhouse, Martin Schwidefsky, linux-s390
Subject: Re: [PATCH 2/3] virtio_ring: Support DMA APIs
Date: Wed, 28 Oct 2015 11:21:48 +0900
Message-ID: <20151028022148.GD18467@suse.de>
References: <6b42014d04258c706c7c43ae739efb30e32496b9.1445994839.git.luto@kernel.org> <20151028020650.GA18467@suse.de>

On Tue, Oct 27, 2015 at 07:13:56PM -0700, Andy Lutomirski wrote:
> On Tue, Oct 27, 2015 at 7:06 PM, Joerg Roedel wrote:
> > Hi Andy,
> >
> > On Tue, Oct 27, 2015 at 06:17:09PM -0700, Andy Lutomirski wrote:
> >> From: Andy Lutomirski
> >>
> >> virtio_ring currently sends the device (usually a hypervisor)
> >> physical addresses of its I/O buffers.  This is okay when DMA
> >> addresses and physical addresses are the same thing, but this isn't
> >> always the case.  For example, this never works on Xen guests, and
> >> it is likely to fail if a physical "virtio" device ever ends up
> >> behind an IOMMU or swiotlb.
> >
> > The overall code looks good, but I haven't seen any dma_sync* calls.
> > When swiotlb=force is in use this would break.
> >
> >> +		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, vring_map_single(
> >> +			vq,
> >> +			desc, total_sg * sizeof(struct vring_desc),
> >> +			DMA_TO_DEVICE));
>
> Are you talking about a dma_sync call on the descriptor ring itself?
> Isn't dma_alloc_coherent supposed to make that unnecessary?  I should
> move the allocation into the virtqueue code.
>
> The docs suggest that I might need to "flush the processor's write
> buffers before telling devices to read that memory".  I'm not sure how
> to do that.

The write buffers should be flushed by the DMA-API functions if
necessary. You don't need to call dma_sync* for dma_alloc_coherent
allocations, but you do for buffers mapped with
dma_map_single/dma_map_page/dma_map_sg, as these might be
bounce-buffered.


	Joerg
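
For reference, the streaming-DMA pattern described above looks roughly
like the minimal sketch below. The example_send() helper, its device
pointer and buffer are made-up placeholders for illustration; this is
not code from the patch or from virtio_ring.

#include <linux/dma-mapping.h>

/*
 * Illustrative only: a buffer mapped with dma_map_single() may live in a
 * bounce buffer (e.g. with swiotlb=force), so CPU writes made after the
 * mapping is set up must be copied to the device-visible buffer with
 * dma_sync_single_for_device() before the device is told to read it.
 */
static int example_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr;

	/* Map the buffer for device reads; may allocate a bounce buffer. */
	addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* ... CPU fills in or modifies buf here ... */

	/* Push the CPU's writes to the (possibly bounced) DMA buffer. */
	dma_sync_single_for_device(dev, addr, len, DMA_TO_DEVICE);

	/* ... now kick the device to read the buffer, then tear down: */
	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);

	return 0;
}

Memory from dma_alloc_coherent(), by contrast, is kept consistent
between CPU and device, so no dma_sync* calls are needed for it.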