Subject: Re: [PATCH] drivers/base: export gpl (un)register_memory_notifier
From: Dave Hansen
To: Jan-Bernd Themann
Cc: Thomas Q Klein, ossthema@linux.vnet.ibm.com, Jan-Bernd Themann,
	Greg KH, apw, linux-kernel, linuxppc-dev@ozlabs.org,
	Christoph Raisch, Badari Pulavarty, netdev, tklein@linux.ibm.com
Date: Wed, 20 Feb 2008 10:14:02 -0800
Message-Id: <1203531242.15017.20.camel@nimitz.home.sr71.net>
In-Reply-To: <200802181100.12995.ossthema@de.ibm.com>
References: <200802111724.12416.ossthema@de.ibm.com>
	<1203094538.8142.23.camel@nimitz.home.sr71.net>
	<200802181100.12995.ossthema@de.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

On Mon, 2008-02-18 at 11:00 +0100, Jan-Bernd Themann wrote:
> Dave Hansen wrote on 15.02.2008 17:55:38:
>
> > I've been thinking about that, and I don't think you really *need* to
> > keep a comprehensive map like that.
> >
> > When the memory is in a particular configuration (range of memory
> > present along with a unique set of holes), you get a unique ehea_bmap
> > configuration.  That layout is completely predictable.
> >
> > So, if at any time you want to figure out what the ehea_bmap address
> > for a particular *Linux* virtual address is, you just need to pretend
> > that you're creating the entire ehea_bmap, use the same algorithm to
> > figure out where you would have placed things, and use that result.
> >
> > Now, that's going to be a slow, crappy linear search (but maybe not as
> > slow as recreating the silly thing).  So, you might eventually run
> > into some scalability problems with a lot of packets going around.
> > But, I'd be curious if you do in practice.
>
> Up to 14 address translations per packet (sg_list) might be required on
> the transmit side; on the receive side it is only 1.  Most packets need
> only very few translations (1, sometimes more).  However, at more than
> 700,000 packets per second this approach does not seem reasonable from
> a performance perspective when memory is fragmented as you described.

OK, but let's see the data.  *SHOW* me that it's slow.

If the algorithm works, then perhaps we can simply speed it up with a
little caching and *MUCH* less memory overhead.

-- Dave
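To make the recompute-on-demand idea concrete, here is a minimal
userspace model of the lookup being discussed.  The ranges, the names
(present[], addr_to_bmap()) and the one-entry cache are purely
illustrative, not the ehea driver's actual structures or interfaces:
the bmap offset of an address is rebuilt on demand by replaying the
same deterministic packing of the present ranges, and a trivial cache
lets repeated hits in the same range skip the linear walk.

/*
 * Illustrative model only -- not the ehea driver's real code or data
 * structures.  "present[]" stands in for whatever list of populated
 * memory ranges the kernel exposes; holes are simply absent entries.
 */
#include <stdio.h>
#include <stdint.h>

struct mem_range {
	uint64_t start;	/* first byte of a populated range */
	uint64_t size;	/* length in bytes */
};

/* Example layout: three populated ranges separated by holes. */
static const struct mem_range present[] = {
	{ 0x00000000ULL, 0x10000000ULL },
	{ 0x20000000ULL, 0x08000000ULL },
	{ 0x40000000ULL, 0x10000000ULL },
};
#define NR_RANGES (sizeof(present) / sizeof(present[0]))

/* One-entry cache: the range that satisfied the last lookup, and the
 * bmap offset at which that range starts. */
static int last_idx = -1;
static uint64_t last_base;

/* Translate an address to its packed bmap offset, or -1 for a hole. */
static int64_t addr_to_bmap(uint64_t addr)
{
	uint64_t base = 0;
	unsigned int i;

	/* Fast path: repeated hits in the cached range skip the walk. */
	if (last_idx >= 0 &&
	    addr >= present[last_idx].start &&
	    addr <  present[last_idx].start + present[last_idx].size)
		return last_base + (addr - present[last_idx].start);

	/* Slow path: replay the same deterministic packing that would
	 * have been used to build the full map in the first place. */
	for (i = 0; i < NR_RANGES; i++) {
		if (addr >= present[i].start &&
		    addr <  present[i].start + present[i].size) {
			last_idx = i;
			last_base = base;
			return base + (addr - present[i].start);
		}
		base += present[i].size;
	}
	return -1;
}

int main(void)
{
	/* 0x20000100 sits in the second range -> offset 0x10000100. */
	printf("0x20000100 -> 0x%llx\n",
	       (unsigned long long)addr_to_bmap(0x20000100ULL));
	/* 0x18000000 falls in a hole -> -1. */
	printf("0x18000000 -> %lld\n",
	       (long long)addr_to_bmap(0x18000000ULL));
	return 0;
}

With the example layout above this prints 0x10000100 and -1.  The
interesting measurement would be the cache hit rate and average walk
length under the 700,000 packets/second load Jan-Bernd describes, since
that is what decides whether the slow path matters in practice.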