From: Benjamin Herrenschmidt
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
Date: Fri, 02 Mar 2018 10:26:07 +1100
Message-ID: <1519946767.4592.49.camel@kernel.crashing.org>
To: Logan Gunthorpe, Dan Williams
Cc: Jens Axboe, linux-block, Oliver OHalloran, linux-nvdimm,
 linux-rdma, linux-pci, Linux Kernel Mailing List, linux-nvme,
 Keith Busch, Alex Williamson, Jason Gunthorpe, Jérôme Glisse,
 Bjorn Helgaas, Max Gurtovoy, Christoph Hellwig

On Thu, 2018-03-01 at 16:19 -0700, Logan Gunthorpe wrote:

(Switching back to my non-IBM address ...)

> On 01/03/18 04:00 PM, Benjamin Herrenschmidt wrote:
> > We use only 52 in practice but yes.
> >
> > > That's 64PB. If you need a sparse vmemmap for the entire space it
> > > will take 16TB, which leaves you with 63.98PB of address space
> > > left. (Similar calculations for other numbers of address bits.)
> >
> > We only have 52 bits of virtual space for the kernel with the radix
> > MMU.
>
> Ok, assuming you only have 52 bits of physical address space: the
> sparse vmemmap takes 1TB and you're left with 3.9PB of address space
> for other things. So, again, why doesn't that work? Is my math wrong?

The big problem is not the vmemmap, it's the linear mapping.

Cheers,
Ben.
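
A quick back-of-the-envelope on the vmemmap figures being debated above.
This is a minimal sketch, assuming a 64-byte struct page and 64KiB pages;
both constants are assumptions for illustration, not values stated in the
thread, which is why the result differs from the quoted 1TB figure.

/*
 * Back-of-the-envelope vmemmap sizing for a given physical address
 * width.  The 64-byte struct page and 64KiB page size are assumed
 * here for illustration; they are not taken from the thread above.
 */
#include <stdio.h>

int main(void)
{
	const unsigned int phys_bits = 52;      /* physical address bits */
	const unsigned int page_shift = 16;     /* 64KiB pages */
	const unsigned long long struct_page_size = 64;

	/* number of page frames the physical address space can hold */
	unsigned long long nr_pages = 1ULL << (phys_bits - page_shift);

	/* sparse vmemmap needs one struct page per frame */
	unsigned long long vmemmap_bytes = nr_pages * struct_page_size;

	printf("phys space: %llu PiB, vmemmap: %llu TiB\n",
	       (1ULL << phys_bits) >> 50, vmemmap_bytes >> 40);
	return 0;
}

Whatever constants one picks, the vmemmap cost scales linearly with the
size of the physical address space, which is why Logan's totals look
affordable. Ben's objection is elsewhere: with the radix MMU the kernel
has only 52 bits of virtual space in total, and the linear mapping of the
whole physical address space must fit in it too, so the linear map, not
the vmemmap, is the constraint.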