Date: Tue, 27 Nov 2018 13:04:08 -0500
From: "Michael S. Tsirkin"
To: Jean-Philippe Brucker
Cc: mark.rutland@arm.com, virtio-dev@lists.oasis-open.org,
    lorenzo.pieralisi@arm.com, tnowicki@caviumnetworks.com,
    devicetree@vger.kernel.org, marc.zyngier@arm.com,
    linux-pci@vger.kernel.org, joro@8bytes.org, will.deacon@arm.com,
    virtualization@lists.linux-foundation.org, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, robh+dt@kernel.org,
    bhelgaas@google.com, robin.murphy@arm.com,
    kvmarm@lists.cs.columbia.edu
Subject: Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver
Message-ID: <20181127125424-mutt-send-email-mst@kernel.org>
References: <20181122193801.50510-1-jean-philippe.brucker@arm.com>
 <20181122193801.50510-6-jean-philippe.brucker@arm.com>
 <20181123165742-mutt-send-email-mst@kernel.org>

On Tue, Nov 27, 2018 at 05:50:50PM +0000, Jean-Philippe Brucker wrote:
> On 23/11/2018 22:02, Michael S. Tsirkin wrote:
> >> +/*
> >> + * __viommu_sync_req - Complete all in-flight requests
> >> + *
> >> + * Wait for all added requests to complete. When this function returns, all
> >> + * requests that were in-flight at the time of the call have completed.
> >> + */
> >> +static int __viommu_sync_req(struct viommu_dev *viommu)
> >> +{
> >> +	int ret = 0;
> >> +	unsigned int len;
> >> +	size_t write_len;
> >> +	struct viommu_request *req;
> >> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> >> +
> >> +	assert_spin_locked(&viommu->request_lock);
> >> +
> >> +	virtqueue_kick(vq);
> >> +
> >> +	while (!list_empty(&viommu->requests)) {
> >> +		len = 0;
> >> +		req = virtqueue_get_buf(vq, &len);
> >> +		if (!req)
> >> +			continue;
> >> +
> >> +		if (!len)
> >> +			viommu_set_req_status(req->buf, req->len,
> >> +					      VIRTIO_IOMMU_S_IOERR);
> >> +
> >> +		write_len = req->len - req->write_offset;
> >> +		if (req->writeback && len == write_len)
> >> +			memcpy(req->writeback, req->buf + req->write_offset,
> >> +			       write_len);
> >> +
> >> +		list_del(&req->list);
> >> +		kfree(req);
> >> +	}
> >
> > I didn't notice this in the past, but it seems this will spin with
> > interrupts disabled until the host handles the request. Please do not
> > do this - host execution can be another task that needs the same host
> > CPU. This will then disable interrupts for a very, very long time.
>
> In the guest yes, but that doesn't prevent the host from running
> another task, right?

It doesn't prevent it, but it will delay it significantly until the
scheduler decides to kick the vCPU task out.

> My tests run fine when QEMU is bound to a single CPU, even though the
> vcpu and viommu run in different threads.
>
> > What to do then? Queue in software and wake up a task.
>
> Unfortunately I can't do anything here, because IOMMU drivers can't
> sleep in the iommu_map() or iommu_unmap() path.
>
> The problem is the same for all IOMMU drivers. That's because the DMA
> API allows drivers to call some functions with interrupts disabled. For
> example, Documentation/DMA-API-HOWTO.txt allows dma_alloc_coherent()
> and dma_unmap_single() to be called in interrupt context.

In fact I don't really understand how it's supposed to work at all: you
only sync when the ring is full, so the host may not have seen your map
request if the ring is not full. Why is it safe to use the address with
a device then?

> > As a kick is a VM exit, kicking with interrupts disabled is
> > discouraged too: better to prepare for the kick, enable interrupts,
> > then kick.
>
> That was on my list of things to look at, because it could relax things
> for device drivers that don't call us with interrupts disabled. I just
> tried it and I can see some performance improvement (7% and 4% on
> tcp_stream and tcp_maerts respectively, +/-2.5%).
>
> Since it's an optimization I'll leave it for later (ACPI and module
> support is higher on my list). The resulting change is complicated
> because we now need to deal with threads adding new requests while
> sync() is running. With my current prototype one thread could end up
> staying in sync() while other threads add new async requests, so I need
> to find a way to bound it.
>
> Thanks,
> Jean
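
For reference, a minimal sketch of the "prepare the kick under the lock,
notify after dropping it" pattern discussed above, reusing struct
viommu_dev, request_lock and VIOMMU_REQUEST_VQ from the quoted patch;
the helper name viommu_kick_deferred() and its placement are made up
here, not part of the driver:

/*
 * Sketch only: publish the avail index while the request lock is held
 * (virtqueue_kick_prepare() is cheap), then perform the expensive
 * notification (a VM exit) after the lock is released.
 */
static void viommu_kick_deferred(struct viommu_dev *viommu)
{
	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
	unsigned long flags;
	bool notify;

	spin_lock_irqsave(&viommu->request_lock, flags);
	/* Update the avail index and check whether the host needs a kick. */
	notify = virtqueue_kick_prepare(vq);
	spin_unlock_irqrestore(&viommu->request_lock, flags);

	/* The costly vmexit now runs outside the critical section. */
	if (notify)
		virtqueue_notify(vq);
}

As Jean notes, this only relaxes things for callers that enter with
interrupts enabled; if the DMA API caller already disabled interrupts,
the notify still runs with them off.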