From: Wei Wang <wei.w.wang@intel.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com, zhiyong.yang@intel.com,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	jasowang@redhat.com
Subject: Re: [Qemu-devel] [RFC 0/2] virtio-vhost-user: add virtio-vhost-user device
Date: Tue, 30 Jan 2018 20:09:19 +0800
Message-ID: <5A70606F.3030307@intel.com>
In-Reply-To: <20180126144429.GD17788@stefanha-x1.localdomain>

On 01/26/2018 10:44 PM, Stefan Hajnoczi wrote:
> On Thu, Jan 25, 2018 at 06:19:13PM +0800, Wei Wang wrote:
>> On 01/24/2018 07:40 PM, Stefan Hajnoczi wrote:
>>> On Tue, Jan 23, 2018 at 09:06:49PM +0800, Wei Wang wrote:
>>>> On 01/23/2018 07:12 PM, Stefan Hajnoczi wrote:
>>>>> On Mon, Jan 22, 2018 at 07:09:06PM +0800, Wei Wang wrote:
>>>>>> On 01/19/2018 09:06 PM, Stefan Hajnoczi wrote:
>>>>>>
>>>>>>
>>>>>>     - Suppose that in the future there is also a kernel virtio-vhost-user
>>>>>> driver, as with other PCI devices. Can we unbind the kernel driver first and
>>>>>> then bind the device to the DPDK driver? A normal PCI device should be able
>>>>>> to switch smoothly between the kernel driver and the DPDK driver.
>>>>> It depends what you mean by "smoothly switch".
>>>>>
>>>>> If you mean whether it's possible to go from a kernel driver to
>>>>> vfio-pci, then the answer is yes.
>>>>>
>>>>> But if the kernel driver has an established vhost-user connection then
>>>>> it will be closed.  This is the same as reconnecting with AF_UNIX
>>>>> vhost-user.
>>>>>
>>>> It's not only the case of switching to testpmd after the kernel establishes
>>>> the connection, but also that of several runs of testpmd. That is, if we run
>>>> testpmd and then exit it, I think a second run of testpmd won't work.
>>> The vhost-user master must reconnect and initialize again (SET_FEATURES,
>>> SET_MEM_TABLE, etc).  Is your master reconnecting after the AF_UNIX
>>> connection is closed?
>> Is this an explicit qmp operation to make the master re-connect?
> I haven't tested it myself but I'm aware of two modes of operation:
>
> 1. -chardev socket,id=chardev0,...,server
>     -netdev vhost-user,chardev=chardev0
>
>     When the vhost-user socket is disconnected the peer needs to
>     reconnect.  In this case no special commands are necessary.
>
>     Here we're relying on DPDK librte_vhost's reconnection behavior.
>
> Or
>
> 2. -chardev socket,id=chardev0,...,reconnect=3
>     -netdev vhost-user,chardev=chardev0
>
>     When the vhost-user socket is disconnected a new connection attempt
>     will be made after 3 seconds.
>
> In both cases vhost-user negotiation will resume when the new connection
> is established.
>
> Stefan
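
(For reference, the second mode corresponds to a master-side command line
roughly like the one below. The socket path, memory size and IDs are only
placeholders, and vhost-user also needs the guest memory to come from a
shareable memory backend.)

    qemu-system-x86_64 ... \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=chardev0,path=/tmp/vhost-user0.sock,reconnect=3 \
        -netdev type=vhost-user,id=netdev0,chardev=chardev0 \
        -device virtio-net-pci,netdev=netdev0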

I've been thinking about these issues, and it looks like vhost-pci does
better in this respect.
Vhost-pci is like using a mailbox: messages are simply dropped into the box,
and whenever the vhost-pci pmd boots it can always pick the messages up from
the box, so the negotiation between the vhost-pci pmd and virtio-net is
asynchronous.
Virtio-vhost-user is like a phone call, which is synchronous communication:
if one side is absent, the other side either hangs on without knowing when
it will get connected, or hangs up with the messages never delivered (lost).

I also think the above solutions won't help. Please see below:

Background:
The vhost-user negotiation is currently split into two phases. The first
phase happens when the connection is established; what it does can be seen
in vhost_user_init(). The second phase happens when the master driver is
loaded (e.g. a run of the virtio-net pmd) and sets the status on the device;
what it does can be seen in vhost_dev_start(), which includes sending the
memory info and the virtqueue info. The socket stays connected until one of
the QEMU devices exits, so a pmd exiting does not end the QEMU-side socket
connection.
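
For reference, here is a minimal sketch (plain C, not QEMU/DPDK code) of
what the master sends in each phase. The send_msg() helper and the exact
message list/ordering are only illustrative; the request names follow the
vhost-user protocol spec.

    #include <stdio.h>

    /* Hypothetical helper: stands in for writing one vhost-user request
     * on the slave socket. */
    static void send_msg(int fd, const char *name)
    {
        printf("fd %d -> %s\n", fd, name);
    }

    /* Phase 1: runs when the socket connection is established
     * (roughly what vhost_user_init()/vhost_dev_init() do). */
    static void phase1(int fd)
    {
        send_msg(fd, "VHOST_USER_GET_FEATURES");
        send_msg(fd, "VHOST_USER_GET_PROTOCOL_FEATURES");
        send_msg(fd, "VHOST_USER_SET_PROTOCOL_FEATURES");
        send_msg(fd, "VHOST_USER_GET_QUEUE_NUM");
        send_msg(fd, "VHOST_USER_SET_OWNER");
    }

    /* Phase 2: runs when the master driver loads and sets the device
     * status (roughly what vhost_dev_start() does). */
    static void phase2(int fd)
    {
        send_msg(fd, "VHOST_USER_SET_FEATURES");
        send_msg(fd, "VHOST_USER_SET_MEM_TABLE");
        send_msg(fd, "VHOST_USER_SET_VRING_NUM");
        send_msg(fd, "VHOST_USER_SET_VRING_ADDR");
        send_msg(fd, "VHOST_USER_SET_VRING_BASE");
        send_msg(fd, "VHOST_USER_SET_VRING_KICK");
        send_msg(fd, "VHOST_USER_SET_VRING_CALL");
        send_msg(fd, "VHOST_USER_SET_VRING_ENABLE");
    }

    int main(void)
    {
        int fd = 3;   /* placeholder for the slave socket fd */
        phase1(fd);
        phase2(fd);
        return 0;
    }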

Issues:
Suppose we have both the vhost and the virtio-net side set up, and the vhost
pmd <-> virtio-net pmd communication works well. Now the vhost pmd exits
(the virtio-net pmd is still there). Some time later we re-run the vhost
pmd, but it doesn't know the virtqueue addresses of the virtio-net pmd
unless the virtio-net pmd is reloaded to run the second phase of the
vhost-user protocol again. So the second run of the vhost pmd won't work.
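
To make the failure sequence concrete (the timeline below is only an
illustration):

    t0: virtio-net pmd loads    -> the master runs phase 2 and sends the
                                   memory regions and vring addresses
    t1: vhost pmd exits         -> the AF_UNIX connection between the two
                                   QEMUs stays up, so nothing is re-sent
    t2: vhost pmd runs again    -> phase 2 is not replayed, so the new
                                   vhost pmd never learns the vring
                                   addresses and the datapath can't be
                                   set up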

Any thoughts?

Best,
Wei
