From: <ning.bo9@zte.com.cn>
To: <stefanha@gmail.com>
Cc: mst@redhat.com, qemu-devel@nongnu.org, armbru@redhat.com
Subject: Re: [Qemu-devel] [PATCH v2] vhost-vsock: report QMP event when set running
Date: Thu, 28 Nov 2019 19:26:47 +0800 (CST)
Message-ID: <201911281926474453744@zte.com.cn>
In-Reply-To: <20190809134134.GA8594@stefanha-x1.localdomain>



Let me describe the issue with an example via `nc-vsock`:

Assume the Guest cid is 3.
Execute 'rmmod vmw_vsock_virtio_transport' in the Guest,
then execute 'while true; do ./nc-vsock 3 1234; done' in the Host.

Host                             Guest
                                 # rmmod vmw_vsock_virtio_transport

# while true; do ./nc-vsock 3 1234; done
(after 2 seconds)
connect: Connection timed out
(after 2 seconds)
connect: Connection timed out
...

                                 # modprobe vmw_vsock_virtio_transport

connect: Connection reset by peer
connect: Connection reset by peer
connect: Connection reset by peer
...

                                 # nc-vsock -l 1234
                                 Connection from cid 2 port ***...
(stop printing)


The above session simulates the communication between `kata-runtime`
and `kata-agent` after the Guest starts. In order to connect to
`kata-agent` as soon as possible, `kata-runtime` keeps retrying the
connection in a loop; see
https://github.com/kata-containers/runtime/blob/d054556f60f092335a22a288011fa29539ad4ccc/vendor/github.com/kata-containers/agent/protocols/client/client.go#L327
But while the vsock device in the Guest is not yet ready, each
connection attempt blocks for 2 seconds, which slows down the entire
startup of `kata-runtime`.
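
For reference, here is a minimal C sketch of such a retry loop
(illustrative only; the real kata-runtime client linked above is
written in Go, and the cid/port values are the ones from the example):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = 3,	/* Guest cid from the example */
		.svm_port   = 1234,
	};

	for (;;) {
		int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

		if (fd < 0) {
			perror("socket");
			return 1;
		}

		/* Blocks for ~2 seconds while the guest driver is unloaded */
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			printf("connected\n");
			close(fd);
			return 0;
		}

		/* ETIMEDOUT before modprobe, ECONNRESET right after */
		fprintf(stderr, "connect: %s\n", strerror(errno));
		close(fd);
	}
}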


> I think that adding a QMP event is working around the issue rather than
> fixing the root cause.  This is probably a vhost_vsock.ko problem and
> should be fixed there.

After looking at the source code of vhost_vsock.ko,
I think the logic can be optimized there as well.
A simple patch follows; do you think this modification is appropriate?

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 9f57736f..8fad67be 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -51,6 +51,7 @@ struct vhost_vsock {
 	atomic_t queued_replies;

 	u32 guest_cid;
+	u32 state;	/* 1 while the device is running, 0 otherwise */
 };

 static u32 vhost_transport_get_local_cid(void)
@@ -497,6 +541,7 @@ static int vhost_vsock_start(struct vhost_vsock *vsock)

 		mutex_unlock(&vq->mutex);
 	}
+	vsock->state = 1;

 	mutex_unlock(&vsock->dev.mutex);
 	return 0;
@@ -535,6 +580,7 @@ static int vhost_vsock_stop(struct vhost_vsock *vsock)
 		vq->private_data = NULL;
 		mutex_unlock(&vq->mutex);
 	}
+	vsock->state = 0;

 err:
 	mutex_unlock(&vsock->dev.mutex);
@@ -786,6 +832,31 @@ static struct miscdevice vhost_vsock_misc = {
 	.fops = &vhost_vsock_fops,
 };

+static int vhost_transport_connect(struct vsock_sock *vsk)
+{
+	struct vhost_vsock *vsock;
+	u32 state;
+
+	rcu_read_lock();
+
+	/* Find the vhost_vsock according to the guest context id */
+	vsock = vhost_vsock_get(vsk->remote_addr.svm_cid);
+	if (!vsock) {
+		rcu_read_unlock();
+		return -ENODEV;
+	}
+
+	/* Read the state before dropping the RCU read lock */
+	state = vsock->state;
+
+	rcu_read_unlock();
+
+	if (state == 1)
+		return virtio_transport_connect(vsk);
+
+	return -ECONNRESET;
+}
+
 static struct virtio_transport vhost_transport = {
 	.transport = {
 		.get_local_cid            = vhost_transport_get_local_cid,
@@ -793,7 +860,7 @@ static struct virtio_transport vhost_transport = {
 		.init                     = virtio_transport_do_socket_init,
 		.destruct                 = virtio_transport_destruct,
 		.release                  = virtio_transport_release,
-		.connect                  = virtio_transport_connect,
+		.connect                  = vhost_transport_connect,
 		.shutdown                 = virtio_transport_shutdown,
 		.cancel_pkt               = vhost_transport_cancel_pkt,
