From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: Re: [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server
Date: Mon, 6 Jul 2015 14:28:14 +0100
Message-ID: <559A826E.1000905@eu.citrix.com>
References: <1436163912-1506-1-git-send-email-yu.c.zhang@linux.intel.com>
 <1436163912-1506-2-git-send-email-yu.c.zhang@linux.intel.com>
 <9AAE0902D5BC7E449B7C8E4E778ABCD02598AEEF@AMSPEX01CL02.citrite.net>
 <9AAE0902D5BC7E449B7C8E4E778ABCD02598B096@AMSPEX01CL02.citrite.net>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD02598B096@AMSPEX01CL02.citrite.net>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Paul Durrant , George Dunlap
Cc: Kevin Tian , "Keir (Xen.org)" , Andrew Cooper ,
 "xen-devel@lists.xen.org" , Yu Zhang , "zhiyuan.lv@intel.com" ,
 Jan Beulich
List-Id: xen-devel@lists.xenproject.org

On 07/06/2015 02:09 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>> George Dunlap
>> Sent: 06 July 2015 13:50
>> To: Paul Durrant
>> Cc: Yu Zhang; xen-devel@lists.xen.org; Keir (Xen.org); Jan Beulich; Andrew
>> Cooper; Kevin Tian; zhiyuan.lv@intel.com
>> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for
>> ioreq server
>>
>> On Mon, Jul 6, 2015 at 1:38 PM, Paul Durrant
>> wrote:
>>>> -----Original Message-----
>>>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>>>> George Dunlap
>>>> Sent: 06 July 2015 13:36
>>>> To: Yu Zhang
>>>> Cc: xen-devel@lists.xen.org; Keir (Xen.org); Jan Beulich; Andrew Cooper;
>>>> Paul Durrant; Kevin Tian; zhiyuan.lv@intel.com
>>>> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES
>> for
>>>> ioreq server
>>>>
>>>> On Mon, Jul 6, 2015 at 7:25 AM, Yu Zhang
>>>> wrote:
>>>>> MAX_NR_IO_RANGES is used by ioreq server as the maximum
>>>>> number of discrete ranges to be tracked. This patch changes
>>>>> its value to 8k, so that more ranges can be tracked on next
>>>>> generation of Intel platforms in XenGT. Future patches can
>>>>> extend the limit to be toolstack tunable, and MAX_NR_IO_RANGES
>>>>> can serve as a default limit.
>>>>>
>>>>> Signed-off-by: Yu Zhang
>>>>
>>>> I said this at the Hackathon, and I'll say it here: I think this is
>>>> the wrong approach.
>>>>
>>>> The problem here is not that you don't have enough memory ranges. The
>>>> problem is that you are not tracking memory ranges, but individual
>>>> pages.
>>>>
>>>> You need to make a new interface that allows you to tag individual
>>>> gfns as p2m_mmio_write_dm, and then allow one ioreq server to get
>>>> notifications for all such writes.
>>>>
>>>
>>> I think that is conflating things. It's quite conceivable that more than one
>> ioreq server will handle write_dm pages. If we had enough types to have
>> two page types per server then I'd agree with you, but we don't.
>>
>> What's conflating things is using an interface designed for *device
>> memory ranges* to instead *track writes to gfns*.
>
> What's the difference? Are you asserting that all device memory ranges have read side effects and therefore write_dm is not a reasonable optimization to use? I would not want to make that assertion.

Using write_dm is not the problem; it's having thousands of memory
"ranges" of 4k each that I object to.
Which is why I suggested adding an interface to request updates to gfns
(by marking them write_dm), rather than abusing the io range interface.

 -George
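
For concreteness, the sketch below shows one purely hypothetical shape such
an interface could take. The op name, number, struct and fields are invented
for illustration only and are not existing Xen API; the point is just that an
ioreq server claims the write_dm type once, and individual gfns are then
tagged through the normal p2m path instead of being added as thousands of
4k entries in the server's rangesets.

/*
 * Hypothetical sketch only: the op, number, struct and fields below are
 * invented to show the shape of the proposed interface; they are not
 * part of the existing Xen public headers.
 */
#include <stdint.h>

typedef uint16_t domid_t;      /* as in xen/include/public/xen.h */
typedef uint16_t ioservid_t;   /* as in xen/include/public/hvm/hvm_op.h */

/* Route write faults on the write_dm p2m type to a single ioreq server. */
#define HVMOP_map_mem_type_to_ioreq_server 99   /* op number invented */

struct xen_hvm_map_mem_type_to_ioreq_server {
    domid_t    domid;   /* IN - domain to be serviced */
    ioservid_t id;      /* IN - ioreq server id */
    uint16_t   type;    /* IN - the write_dm memory type */
    uint16_t   pad;
    uint32_t   flags;   /* IN - which accesses to forward (writes only) */
};

/*
 * Gfns would then be (un)tagged via the existing HVMOP_set_mem_type
 * path, so the number of write-protected pages is bounded by the p2m
 * rather than by MAX_NR_IO_RANGES.
 */

Whether such a claim should be strictly one server per type per domain, or
something more flexible, is of course open for discussion; the sketch is
only meant to illustrate the direction.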