Date: Mon, 6 Aug 2018 11:05:21 -0600
From: Alex Williamson
To: "Raj, Ashok"
Cc: Kenneth Lee, kvm@vger.kernel.org, linux-doc@vger.kernel.org, Zaibo Xu,
 sanjay.k.kumar@intel.com, Hao Fang, Herbert Xu, Jonathan Corbet,
 Joerg Roedel, Zhou Wang, "Tian, Kevin", linuxarm@huawei.com,
 Thomas Gleixner, Greg Kroah-Hartman, Cornelia Huck,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 linux-crypto@vger.kernel.org, Philippe Ombredanne, "David S. Miller",
 linux-accelerators@lists.ozlabs.org, Lu Baolu
Subject: Re: [RFC PATCH 3/7] vfio: add spimdev support
Message-ID: <20180806110521.0b708e0b@t450s.home>
In-Reply-To: <20180806163428.GB32409@otc-nc-03>

On Mon, 6 Aug 2018 09:34:28 -0700
"Raj, Ashok" wrote:

> On Mon, Aug 06, 2018 at 09:49:40AM -0600, Alex Williamson wrote:
> > On Mon, 6 Aug 2018 09:40:04 +0800
> > Kenneth Lee wrote:
> > >
> > > 1. It supports thousands of processes. Take the zip accelerator as
> > > an example: any application that needs data compression or
> > > decompression will need to interact with the accelerator. To support
> > > that, you would have to create tens of thousands of mdevs for their
> > > usage. I don't think it is a good idea to have so many devices in
> > > the system.
> >
> > Each mdev is a device, regardless of whether there are hardware
> > resources committed to the device, so I don't understand this
> > argument.
> >
> > > 2. The application does not want to own the mdev for long. It just
> > > needs an access point for the hardware service. If it has to
> > > interact with a management agent for allocation and release, this
> > > makes the problem complex.
> >
> > I don't see how the length of the usage plays a role here either.
> > Are you concerned that the time it takes to create and remove an mdev
> > is significant compared to the usage time? Userspace is certainly
> > welcome to create a pool of devices, but why should it be the kernel's
> > responsibility to dynamically assign resources to an mdev? What's the
> > usage model when resources are unavailable? It seems there's
> > complexity in either case, but it's generally userspace's
> > responsibility to impose a policy.
>
> Can the vfio devs created to represent an mdev be shared between
> several processes? It doesn't need to be exclusive.
>
> The path to hardware is established by the processes binding to SVM and
> the IOMMU ensuring that the PASID is plumbed properly. One can think of
> the same hardware as being shared between several processes; the
> hardware knows that the isolation is via the PASID.
>
> For these cases it isn't required to create a dev per process.

The iommu group is the unit of ownership; a vfio group mirrors an iommu
group, therefore a vfio group only allows a single open(2). A group also
represents the minimum isolation set of devices, therefore devices
within a group are not considered isolated from each other and must
share the same address space, represented by the vfio container. Beyond
that, it is possible to share devices among processes, but (I think) it
generally implies a hierarchical rather than peer relationship between
the processes. Thanks,

Alex
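For readers following the mdev side of this thread, the create/remove
cost being debated above goes through sysfs. The sketch below is
illustrative only: the parent device path and the `zip-type1` type name
are hypothetical placeholders, not values from the patch set, and real
paths depend on the hardware and driver.

```shell
#!/bin/sh
# Sketch of the mdev lifecycle under discussion. PARENT and TYPE are
# hypothetical placeholders; substitute the values for your hardware.
PARENT=/sys/class/mdev_bus/0000:75:00.0
TYPE=zip-type1

# Generate a UUID to name the mdev instance (fall back to the kernel's
# UUID source if uuidgen is not installed).
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

CREATE="$PARENT/mdev_supported_types/$TYPE/create"
if [ -e "$CREATE" ]; then
    # Writing a UUID to 'create' instantiates the mdev; a process then
    # opens the device's vfio group -- one open(2) per group, per the
    # point above about the iommu group being the unit of ownership.
    echo "$UUID" > "$CREATE"
    # ... use the device through VFIO ...
    echo 1 > "/sys/bus/mdev/devices/$UUID/remove"
else
    echo "mdev type not present: $CREATE"
fi
```

This is roughly the per-use churn Kenneth objects to; Alex's counter is
that userspace could keep a pool of such instances instead of creating
one per request.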