From: Wei Wang
Date: Fri, 02 Sep 2016 09:29:48 +0800
To: Marc-André Lureau
Cc: virtio-comment@lists.oasis-open.org, stefanha@redhat.com,
 qemu-devel@nongnu.org, pbonzini@redhat.com, mst@redhat.com
Subject: Re: [Qemu-devel] [virtio-comment] Re: [PATCH] *** Vhost-pci RFC v2 ***
Message-ID: <57C8D60C.4070802@intel.com>

On 09/01/2016 09:05 PM, Marc-André Lureau wrote:
> On Thu, Sep 1, 2016 at 4:13 PM Wei Wang wrote:
>> On 09/01/2016 04:49 PM, Marc-André Lureau wrote:
>>> Hi
>>>
>>> On Thu, Sep 1, 2016 at 12:19 PM Wei Wang wrote:
>>>> On 08/31/2016 08:30 PM, Marc-André Lureau wrote:
>>>>> - If it could be made not pci-specific, a better name for the
>>>>> device could be simply "driver": the driver of a virtio device.
>>>>> Or the "slave" in vhost-user terminology - consumer of virtq. I
>>>>> think you prefer to call it "backend" in general, but I find it
>>>>> more confusing.
>>>>
>>>> Not really. A virtio device has its own driver (e.g. a virtio-net
>>>> driver for a virtio-net device). A vhost-pci device plays the
>>>> role of a backend (just like vhost_net or vhost-user) for a
>>>> virtio device. If we use the "device/driver" naming convention,
>>>> the vhost-pci device is part of the "device". But I actually
>>>> prefer to use "frontend/backend" :) QEMU's
>>>> docs/specs/vhost-user.txt also uses "backend" in its
>>>> descriptions.
>>>
>>> Yes, but it uses "backend" loosely, without any definition, and
>>> eventually to name different things. (At least "slave" is defined
>>> as the consumer of virtq, though I think some people don't like to
>>> use that word.)
>>
>> I think most people know the frontend/backend concept; that's
>> probably the reason why it usually isn't explicitly explained in a
>> doc. If you guys don't have an objection, I suggest we use it in
>> the discussion :) The goal here is to get the design finalized
>> first. When it comes to the final spec wording phase, we can decide
>> which description is more proper.
>
> "backend" is too broad for me. Instead I would stick to something
> closer to what we want to name and define. If it's the consumer of
> virtq, then why not call it that way.

OK. Let me get used to it (provider VM - frontend, consumer VM -
backend).

>>> Have you thought about making the device not pci-specific? I don't
>>> know much about mmio devices nor s/390, but if devices can hotplug
>>> their own memory (I believe mmio can), then it should be possible
>>> to define a device that is generic enough.
>>
>> Not yet. I think the main difference would be the way to map the
>> frontend VM's memory (in our case, we use a BAR). Other things
>> should be generic.
>
> I hope some more knowledgeable people will chime in.

That would be great.
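To make the BAR point a bit more concrete: the idea is that the
frontend (provider) VM's memory shows up in the consumer guest as an
ordinary memory BAR of the vhost-pci device, so mapping it needs no
new mechanism. Below is a rough user-space illustration only; the
device address and BAR number are made up, and the real consumer
driver would do the equivalent in the kernel with ioremap():

/* Illustration only: map a memory BAR of a hypothetical vhost-pci
 * device from guest user space via sysfs.  The device address
 * (0000:00:05.0) and BAR index (2) are placeholders.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *bar = "/sys/bus/pci/devices/0000:00:05.0/resource2";
    int fd = open(bar, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return 1;
    }

    /* One mapping covers the whole exposed frontend VM memory. */
    uint8_t *mem = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("mapped %lld bytes of frontend VM memory at %p\n",
           (long long)st.st_size, (void *)mem);

    munmap(mem, st.st_size);
    close(fd);
    return 0;
}

That BAR-based exposure is the only PCI-specific step I see; the rest
of the design should be transport-agnostic.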
>>>>> - Why is it required or beneficial to support multiple
>>>>> "frontend" devices over the same "vhost-pci" device? It could
>>>>> simplify things if it was a single device. If necessary, that
>>>>> could also be interesting as a vhost-user extension.
>>>>
>>>> We call it "multiple backend functionalities" (e.g.
>>>> vhost-pci-net, vhost-pci-scsi, ...). A vhost-pci driver contains
>>>> multiple such backend functionalities, because in this way they
>>>> can reuse (share) the same memory mapping. To be more precise, a
>>>> vhost-pci device supplies the memory of a frontend VM, and all
>>>> the backend functionalities need to access the same frontend VM
>>>> memory, so we consolidate them into one vhost-pci driver that
>>>> uses one vhost-pci device.
>>>
>>> That's what I imagined. Do you have a use case for that?
>>
>> Currently, we only have the network use cases. I think we can
>> design it that way (multiple backend functionalities), which is
>> more generic (not just limited to network usages). When
>> implementing it, we can first have the network backend
>> functionality (i.e. vhost-pci-net) implemented. In the future, if
>> people are interested in other backend functionalities, I think it
>> should be easy to add them.
>
> My question is not about the support of various kinds of devices
> (that is clearly a worthy goal to me) but about supporting several
> frontend/provider devices simultaneously on the same vhost-pci
> device: is this required or just beneficial? I think it would
> simplify things if it were 1-1 instead; I would like to understand
> why you propose a different design.

It is not strictly required, but I think it is beneficial. As
mentioned above, those consumer-side functionalities basically access
the same provider VM's memory, so one vhost-pci device is enough to
hold that memory. When it comes to the consumer guest kernel, we only
need to ioremap that memory once. Also, a single pair of controlqs is
enough to handle the control-path messages between all those
functionalities and QEMU. I think the design also looks more compact
this way. What do you think? If we make it an N-N model (each
functionality has its own vhost-pci device), then QEMU and the guest
kernel need to repeat that memory setup N times.

Best,
Wei
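P.S. In case a sketch helps the discussion, here is a very rough
illustration of the 1-device model described above. All of the names
below are made up; nothing here is from the RFC or from existing
code:

#include <stddef.h>

struct vhost_pci_dev;

/* One consumer-side functionality (vhost-pci-net, vhost-pci-scsi, ...). */
struct vhost_pci_func {
    const char *name;
    int (*setup)(struct vhost_pci_dev *dev, struct vhost_pci_func *func);
};

/* One vhost-pci device: the provider VM's memory is mapped once and a
 * single controlq pair is shared by every registered functionality.
 */
struct vhost_pci_dev {
    void   *provider_mem;         /* provider VM memory, ioremapped once */
    size_t  provider_len;
    void   *ctrl_rxq, *ctrl_txq;  /* one controlq pair for all functionalities */
    struct vhost_pci_func *funcs[8];
    int     nr_funcs;
};

/* Adding a functionality reuses the existing mapping and controlqs;
 * in an N-N model this setup would be repeated for every device.
 */
int vhost_pci_add_func(struct vhost_pci_dev *dev,
                       struct vhost_pci_func *func)
{
    if (dev->nr_funcs >= 8)
        return -1;
    dev->funcs[dev->nr_funcs++] = func;
    return func->setup(dev, func);
}

The point is just that the expensive state (the mapped memory and the
controlq pair) lives in one place, and each functionality hooks into
it instead of setting it up again.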