From mboxrd@z Thu Jan 1 00:00:00 1970
From: benh@au1.ibm.com (Benjamin Herrenschmidt)
Date: Fri, 02 Mar 2018 10:25:34 +1100
Subject: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
In-Reply-To: <595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>
References: <20180228234006.21093-1-logang@deltatee.com>
	<1519876489.4592.3.camel@kernel.crashing.org>
	<1519876569.4592.4.camel@au1.ibm.com>
	<1519936477.4592.23.camel@au1.ibm.com>
	<2079ba48-5ae5-5b44-cce1-8175712dd395@deltatee.com>
	<43ba615f-a6e1-9444-65e1-494169cb415d@deltatee.com>
	<1519945204.4592.45.camel@au1.ibm.com>
	<595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>
Message-ID: <1519946734.4592.48.camel@au1.ibm.com>

On Thu, 2018-03-01 at 16:19 -0700, Logan Gunthorpe wrote:
> 
> On 01/03/18 04:00 PM, Benjamin Herrenschmidt wrote:
> > We use only 52 in practice but yes.
> > 
> > > That's 64PB. If you need
> > > a sparse vmemmap for the entire space it will take 16TB which leaves you
> > > with 63.98PB of address space left. (Similar calculations for other
> > > numbers of address bits.)
> > We only have 52 bits of virtual space for the kernel with the radix
> > MMU.
> 
> Ok, assuming you only have 52 bits of physical address space: the sparse
> vmemmap takes 1TB and you're left with 3.9PB of address space for other
> things. So, again, why doesn't that work? Is my math wrong?

The big problem is not the vmemmap, it's the linear mapping.

Cheers,
Ben.
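
P.S. To make the accounting concrete, here is a quick userspace sketch
of the arithmetic above. It is illustration only, not kernel code: the
52-bit space, 64K page size and 64-byte struct page are assumed values
(all config-dependent), so the exact numbers differ from Logan's. The
point it shows is why the two structures scale differently: the vmemmap
can be populated sparsely, so only ranges that exist cost anything,
while the linear map is a 1:1 mapping whose virtual span is set by the
highest physical address it must reach. Device memory placed high in a
52-bit bus range therefore wants the entire 52-bit virtual space for
the linear map alone.

#include <stdio.h>

int main(void)
{
	/* Assumed for illustration: 52-bit physical/bus space,
	 * 64K pages, 64-byte struct page. */
	unsigned long long space       = 1ULL << 52;   /* 4PB  */
	unsigned long long page_size   = 64 * 1024;    /* 64KB */
	unsigned long long sizeof_page = 64;           /* bytes */

	/* vmemmap: one struct page per page frame; sparse population
	 * means only the ranges that actually exist are backed. */
	unsigned long long vmemmap = space / page_size * sizeof_page;

	/* Linear map: a 1:1 virtual mapping of physical addresses, so
	 * its virtual span equals the highest address mapped, not the
	 * amount of memory actually present. */
	unsigned long long linear = space;

	printf("vmemmap worst case: %lluTB\n", vmemmap >> 40);
	printf("linear map span:    %lluPB\n", linear >> 50);
	return 0;
}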