Subject: Re: [Qemu-devel] [Qemu-block] [PATCH] Added iopmem device emulation
From: Logan Gunthorpe
Date: Fri, 4 Nov 2016 09:47:33 -0600
To: Stefan Hajnoczi
Cc: Kevin Wolf, Max Reitz, "Michael S. Tsirkin", Marcel Apfelbaum, qemu-devel@nongnu.org, qemu-block@nongnu.org, Stephen Bates
In-Reply-To: <20161104104921.GD9817@stefanha-x1.localdomain>
References: <1476827711-20758-1-git-send-email-logang@deltatee.com> <20161104104921.GD9817@stefanha-x1.localdomain>

Hi Stefan,

On 04/11/16 04:49 AM, Stefan Hajnoczi wrote:
> QEMU already has NVDIMM support (https://pmem.io/). It can be used both
> for passthrough and fake non-volatile memory:
>
>   qemu-system-x86_64 \
>     -M pc,nvdimm=on \
>     -m 1024,maxmem=$((4096 * 1024 * 1024)),slots=2 \
>     -object memory-backend-file,id=mem0,mem-path=/tmp/foo,size=$((64 * 1024 * 1024)) \
>     -device nvdimm,memdev=mem0
>
> Please explain where iopmem comes from, where the hardware spec is, etc?

Yes, we are aware of nvdimm and, yes, there are quite a few commonalities.
The difference between nvdimm and iopmem is that the memory backing iopmem
sits on a PCI device rather than being attached through system memory. We
are currently working with prototype hardware, so there is no open spec
that I'm aware of, but the concept is very simple: a single PCI BAR
directly maps volatile or non-volatile memory.

One of the primary motivations behind iopmem is to enable peer-to-peer
transactions between PCI devices so that, for example, an RDMA NIC can
transfer data directly to storage and bypass the system memory bus
altogether.

> Perhaps you could use nvdimm instead of adding a new device?

I'm afraid not. The main purpose of this patch is to enable us to test
kernel drivers for this type of hardware. With nvdimm there is no PCI
device for our driver to enumerate, and the existing (and quite different)
NVDIMM drivers would be used instead.

Thanks for the consideration,

Logan
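
[Editor's illustration of the "single BAR that directly maps memory" idea
described above.  This is a rough sketch, not the code from the patch under
review: it only uses QEMU's standard PCI and memory-region APIs
(memory_region_init_ram(), pci_register_bar()), and the type name
"iopmem-sketch", the device ID and the BAR size are made-up placeholders.
A real device would presumably back the region with a memory-backend-file
rather than plain RAM, as the nvdimm example does.]

    /* Hypothetical sketch only -- not the code from this patch.  A minimal
     * PCI device whose single 64-bit prefetchable BAR is backed by RAM. */

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "hw/pci/pci.h"
    #include "hw/pci/pci_ids.h"

    #define TYPE_IOPMEM_SKETCH "iopmem-sketch"          /* placeholder name */
    #define IOPMEM_SKETCH(obj) \
        OBJECT_CHECK(IopmemSketchState, (obj), TYPE_IOPMEM_SKETCH)

    #define IOPMEM_SKETCH_BAR_SIZE (16 * 1024 * 1024)   /* arbitrary 16 MB */

    typedef struct IopmemSketchState {
        PCIDevice parent_obj;
        MemoryRegion bar;          /* memory exposed through BAR 0 */
    } IopmemSketchState;

    static void iopmem_sketch_realize(PCIDevice *pdev, Error **errp)
    {
        IopmemSketchState *s = IOPMEM_SKETCH(pdev);

        /* Back the BAR with guest RAM.  A real device would likely take a
         * memory-backend-file so the contents can persist. */
        memory_region_init_ram(&s->bar, OBJECT(pdev), "iopmem-sketch.bar",
                               IOPMEM_SKETCH_BAR_SIZE, errp);

        /* One 64-bit, prefetchable memory BAR mapping the whole region. */
        pci_register_bar(pdev, 0,
                         PCI_BASE_ADDRESS_SPACE_MEMORY |
                         PCI_BASE_ADDRESS_MEM_TYPE_64 |
                         PCI_BASE_ADDRESS_MEM_PREFETCH,
                         &s->bar);
    }

    static void iopmem_sketch_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

        k->realize = iopmem_sketch_realize;
        k->vendor_id = PCI_VENDOR_ID_REDHAT;   /* placeholder IDs */
        k->device_id = 0x0042;                 /* placeholder */
        k->class_id = PCI_CLASS_MEMORY_RAM;
        dc->desc = "sketch of a PCI memory (iopmem-style) device";
    }

    static const TypeInfo iopmem_sketch_info = {
        .name          = TYPE_IOPMEM_SKETCH,
        .parent        = TYPE_PCI_DEVICE,
        .instance_size = sizeof(IopmemSketchState),
        .class_init    = iopmem_sketch_class_init,
    };

    static void iopmem_sketch_register_types(void)
    {
        type_register_static(&iopmem_sketch_info);
    }

    type_init(iopmem_sketch_register_types)

[With something along these lines built in, the guest would see an ordinary
PCI function with one large memory BAR (added via "-device iopmem-sketch",
a hypothetical option), which is exactly the shape of device a kernel
driver would need to enumerate, unlike nvdimm.]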