From: "Michael S. Tsirkin" <mst@redhat.com>
To: Antonios Motakis <a.motakis@virtualopensystems.com>
Cc: snabb-devel@googlegroups.com,
Anthony Liguori <aliguori@amazon.com>,
Jason Wang <jasowang@redhat.com>,
qemu-devel qemu-devel <qemu-devel@nongnu.org>,
Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Luke Gorrie <lukego@gmail.com>,
Paolo Bonzini <pbonzini@redhat.com>,
VirtualOpenSystems Technical Team <tech@virtualopensystems.com>
Subject: Re: [Qemu-devel] [PATCH v5 7/7] Add vhost-user reconnection
Date: Fri, 10 Jan 2014 14:29:17 +0200
Message-ID: <20140110122917.GB10700@redhat.com>
In-Reply-To: <CAG8rG2xvNTn=USTzFG02X0p7z9hcr7B76m8igh9wF6Dv147t2A@mail.gmail.com>
On Fri, Jan 10, 2014 at 11:59:24AM +0100, Antonios Motakis wrote:
>
>
>
> On Thu, Jan 9, 2014 at 5:16 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jan 09, 2014 at 04:00:01PM +0100, Antonios Motakis wrote:
> > At runtime vhost-user netdev will detect if the vhost backend is up or down.
> > Upon disconnection it will set link_down accordingly and notify virtio-net.
>
> And then what happens?
>
>
> The virtio-net interface goes down. On the next polling cycle the connection
> will be re-attempted (see vhost_user_timer_handler).
>
I'm guessing the user should have control over how often to retry then;
it's a policy thing.
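Something along these lines would do it -- only a sketch, mind you: the
"reconnect_ms" parameter and the helper name are made up, not part of this
series:

static void vhost_user_schedule_retry(VhostUserState *s, int64_t reconnect_ms)
{
    /* reconnect_ms would come from a new netdev property (name TBD);
     * 0 or a negative value means "do not try to reconnect at all". */
    if (reconnect_ms <= 0) {
        return;
    }
    /* Same clock and timer as in the patch, only the interval changes. */
    timer_mod(vhost_user_timer,
              qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + reconnect_ms);
}

That keeps the reconnection machinery in the backend while leaving the
interval (and whether to retry at all) up to whoever starts QEMU.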
>
> >
> > Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
> > Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> > ---
> >  hw/net/vhost_net.c                | 16 +++++++++++
> >  hw/virtio/vhost-backend.c         | 16 +++++++++++
> >  include/hw/virtio/vhost-backend.h |  2 ++
> >  include/net/vhost_net.h           |  1 +
> >  net/vhost-user.c                  | 56 +++++++++++++++++++++++++++++++++++++++
> >  5 files changed, 91 insertions(+)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index e42f4d6..56c218e 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -304,6 +304,17 @@ void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
> > vhost_virtqueue_mask(&net->dev, dev, idx, mask);
> > }
> >
> > +int vhost_net_link_status(VHostNetState *net)
> > +{
> > + int r = 0;
> > +
> > + if (net->dev.vhost_ops->vhost_status) {
> > + r = net->dev.vhost_ops->vhost_status(&net->dev);
> > + }
> > +
> > + return r;
> > +}
> > +
> > VHostNetState *get_vhost_net(NetClientState *nc)
> > {
> > VHostNetState *vhost_net = 0;
> > @@ -372,6 +383,11 @@ void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
> > {
> > }
> >
> > +int vhost_net_link_status(VHostNetState *net)
> > +{
> > + return 0;
> > +}
> > +
> > VHostNetState *get_vhost_net(NetClientState *nc)
> > {
> > return 0;
> > diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
> > index 50ea307..fcd274f 100644
> > --- a/hw/virtio/vhost-backend.c
> > +++ b/hw/virtio/vhost-backend.c
> > @@ -350,9 +350,23 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> > }
> > }
> >
> > + /* mark the backend non operational */
> > + if (result < 0) {
> > + error_report("%s: Connection break detected\n", __func__);
> > + vhost_user_cleanup(dev);
> > + return 0;
> > + }
> > +
> > return result;
> > }
> >
> > +static int vhost_user_status(struct vhost_dev *dev)
> > +{
> > + vhost_user_echo(dev);
> > +
> > + return (dev->control >= 0);
> > +}
> > +
> > static int vhost_user_init(struct vhost_dev *dev, const char *devpath)
> > {
> > int fd = -1;
> > @@ -432,6 +446,7 @@ static int vhost_user_cleanup(struct vhost_dev *dev)
> > static const VhostOps user_ops = {
> > .backend_type = VHOST_BACKEND_TYPE_USER,
> > .vhost_call = vhost_user_call,
> > + .vhost_status = vhost_user_status,
> > .vhost_backend_init = vhost_user_init,
> > .vhost_backend_cleanup = vhost_user_cleanup
> > };
> > @@ -464,6 +479,7 @@ static int vhost_kernel_cleanup(struct vhost_dev *dev)
> > static const VhostOps kernel_ops = {
> > .backend_type = VHOST_BACKEND_TYPE_KERNEL,
> > .vhost_call = vhost_kernel_call,
> > + .vhost_status = 0,
> > .vhost_backend_init = vhost_kernel_init,
> > .vhost_backend_cleanup = vhost_kernel_cleanup
> > };
> > diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> > index ef87ffa..f2b4a6c 100644
> > --- a/include/hw/virtio/vhost-backend.h
> > +++ b/include/hw/virtio/vhost-backend.h
> > @@ -22,12 +22,14 @@ struct vhost_dev;
> >
> > typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
> > void *arg);
> > +typedef int (*vhost_status)(struct vhost_dev *dev);
> > typedef int (*vhost_backend_init)(struct vhost_dev *dev, const char *devpath);
> > typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
> >
> > typedef struct VhostOps {
> > VhostBackendType backend_type;
> > vhost_call vhost_call;
> > + vhost_status vhost_status;
> > vhost_backend_init vhost_backend_init;
> > vhost_backend_cleanup vhost_backend_cleanup;
> > } VhostOps;
> > diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> > index abd3d0b..6390907 100644
> > --- a/include/net/vhost_net.h
> > +++ b/include/net/vhost_net.h
> > @@ -31,5 +31,6 @@ void vhost_net_ack_features(VHostNetState *net, unsigned features);
> > bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
> > void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
> > int idx, bool mask);
> > +int vhost_net_link_status(VHostNetState *net);
> > VHostNetState *get_vhost_net(NetClientState *nc);
> > #endif
> > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > index 6fd5afc..56f7dd4 100644
> > --- a/net/vhost-user.c
> > +++ b/net/vhost-user.c
> > @@ -12,6 +12,7 @@
> > #include "net/vhost_net.h"
> > #include "net/vhost-user.h"
> > #include "qemu/error-report.h"
> > +#include "qemu/timer.h"
> >
> > typedef struct VhostUserState {
> > NetClientState nc;
> > @@ -19,6 +20,9 @@ typedef struct VhostUserState {
> > char *devpath;
> > } VhostUserState;
> >
> > +static QEMUTimer *vhost_user_timer;
> > +#define VHOST_USER_TIMEOUT (1*1000)
> > +
> > VHostNetState *vhost_user_get_vhost_net(NetClientState *nc)
> > {
> > VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
> > @@ -31,6 +35,11 @@ static int vhost_user_running(VhostUserState *s)
> > return (s->vhost_net) ? 1 : 0;
> > }
> >
> > +static int vhost_user_link_status(VhostUserState *s)
> > +{
> > + return (!s->nc.link_down) && vhost_net_link_status(s->vhost_net);
> > +}
> > +
> > static int vhost_user_start(VhostUserState *s)
> > {
> > VhostNetOptions options;
> > @@ -59,6 +68,48 @@ static void vhost_user_stop(VhostUserState *s)
> > s->vhost_net = 0;
> > }
> >
> > +static void vhost_user_timer_handler(void *opaque)
> > +{
> > + VhostUserState *s = opaque;
> > + int link_down = 0;
> > +
> > + if (vhost_user_running(s)) {
> > + if (!vhost_user_link_status(s)) {
> > + link_down = 1;
> > + }
> > + } else {
> > + vhost_user_start(s);
> > + if (!vhost_user_running(s)) {
> > + link_down = 1;
> > + }
> > + }
> > +
> > + if (link_down != s->nc.link_down) {
> > +
> > + s->nc.link_down = link_down;
> > +
> > + if (s->nc.peer) {
> > + s->nc.peer->link_down = link_down;
> > + }
> > +
> > + if (s->nc.info->link_status_changed) {
> > + s->nc.info->link_status_changed(&s->nc);
> > + }
> > +
> > + if (s->nc.peer && s->nc.peer->info->link_status_changed) {
> > + s->nc.peer->info->link_status_changed(s->nc.peer);
> > + }
> > +
> > + if (link_down) {
> > + vhost_user_stop(s);
> > + }
> > + }
> > +
> > + /* reschedule */
> > + timer_mod(vhost_user_timer,
> > + qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + VHOST_USER_TIMEOUT);
> > +}
> > +
> > static void vhost_user_cleanup(NetClientState *nc)
> > {
> > VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
> > @@ -93,6 +144,11 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
> >
> > r = vhost_user_start(s);
> >
> > + vhost_user_timer = timer_new_ms(QEMU_CLOCK_REALTIME,
> > + vhost_user_timer_handler, s);
> > + timer_mod(vhost_user_timer,
> > + qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + VHOST_USER_TIMEOUT);
> > +
> > return r;
> > }
> >
> > --
> > 1.8.3.2
> >
>
>