From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Gunthorpe
Subject: Re: [RFC PATCH 15/18] cgroup: Introduce ioasids controller
Date: Thu, 4 Mar 2021 13:54:02 -0400
Message-ID: <20210304175402.GG4247@nvidia.com>
References: <1614463286-97618-1-git-send-email-jacob.jun.pan@linux.intel.com>
 <1614463286-97618-16-git-send-email-jacob.jun.pan@linux.intel.com>
 <20210303131726.7a8cb169@jacob-builder>
 <20210303160205.151d114e@jacob-builder>
 <20210304094603.4ab6c1c4@jacob-builder>
In-Reply-To: <20210304094603.4ab6c1c4@jacob-builder>
To: Jacob Pan
Cc: Jean-Philippe Brucker, "Tian, Kevin", Alex Williamson, Raj Ashok,
 Jonathan Corbet, LKML, Dave Jiang,
 iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
 Johannes Weiner, Tejun Heo, cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 Wu Hao, David Woodhouse

On Thu, Mar 04, 2021 at 09:46:03AM -0800, Jacob Pan wrote:
> Right, I was assuming we have three use cases of IOASIDs:
> 1. host supervisor SVA (not a concern, just one init_mm to bind)
> 2. host user SVA, either one IOASID per process or perhaps some private
>    IOASID for a private address space
> 3. VM use for guest SVA, each IOASID is bound to a guest process
>
> My current cgroup proposal applies to #3 with IOASID_SET_TYPE_MM, which is
> allocated by the new /dev/ioasid interface.
>
> For #2, I was thinking you can limit the host process via the PIDs cgroup,
> i.e. limit fork. So the host IOASIDs are currently allocated from the system
> pool with a quota chosen by iommu_sva_init() in my patch; 0 means unlimited,
> use whatever is available.
>
> https://lkml.org/lkml/2021/2/28/18

Why do we need two pools? If PASIDs are limited then why does it matter how
the PASID was allocated? Either the thing requesting it is below the limit,
or it isn't.

For something like qemu I'd expect to put the qemu process in a cgroup with
1 PASID. Who cares what qemu uses the PASID for, or how it was allocated?

Jason