qemu-devel.nongnu.org archive mirror
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: geoff@hostfission.com
Cc: Peter Maydell <peter.maydell@linaro.org>,
	QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: RFC: New device for zero-copy VM memory access
Date: Thu, 31 Oct 2019 15:52:04 +0000	[thread overview]
Message-ID: <20191031155204.GD3128@work-vm> (raw)
In-Reply-To: <b87d5b2fb84ac0a3c98a62dcc0c19077@hostfission.com>

* geoff@hostfission.com (geoff@hostfission.com) wrote:
> 
> 
> On 2019-11-01 01:52, Peter Maydell wrote:
> > On Thu, 31 Oct 2019 at 14:26, <geoff@hostfission.com> wrote:
> > > As the author of Looking Glass, I also have to consider the
> > > maintenance burden and the complexity of implementing the vhost
> > > protocol in the project. At this time a complete Porthole client
> > > can be implemented in 150 lines of C without external
> > > dependencies, and most of that is boilerplate socket code. This
> > > IMO is a major factor in deciding to avoid vhost-user.
> > 
> > This is essentially a proposal that we should make our project and
> > code more complicated so that your project and code can be simpler.
> > I hope you can see why this isn't necessarily an argument that will hold
> > very much weight for us :-)
> 
> Certainly, I do, which is why I am still going to see about using vhost;
> however, a device that uses vhost is likely more complex than the device
> as it stands right now, and as such more maintenance would be involved on
> your end also. Or have I missed something in that vhost-user can be used
> directly as a device?
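
For a concrete sense of the "boilerplate socket code" mentioned above: a
client of this kind essentially connects to a unix socket, receives the
guest-RAM file descriptor as SCM_RIGHTS ancillary data, and mmaps it. A
minimal sketch follows; the socket path, the recv_fd helper, and the
fixed region size are invented for illustration, and the real Porthole
wire format is whatever the proposed device defines.

  /* Minimal sketch of a zero-copy memory client: connect to a unix
   * socket, receive a shared-memory fd via SCM_RIGHTS, mmap it.  The
   * socket path and region size are placeholders; a real client would
   * learn them from the device's protocol messages. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <sys/un.h>
  #include <sys/mman.h>

  static int recv_fd(int sock)
  {
      char data;                        /* one byte of real payload */
      struct iovec iov = { .iov_base = &data, .iov_len = 1 };
      union {
          struct cmsghdr align;
          char buf[CMSG_SPACE(sizeof(int))];
      } u;
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };
      if (recvmsg(sock, &msg, 0) <= 0)
          return -1;
      struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
      if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
          return -1;
      int fd;
      memcpy(&fd, CMSG_DATA(c), sizeof(fd));
      return fd;
  }

  int main(void)
  {
      struct sockaddr_un addr = { .sun_family = AF_UNIX };
      strcpy(addr.sun_path, "/tmp/porthole.sock");   /* placeholder */

      int sock = socket(AF_UNIX, SOCK_STREAM, 0);
      if (sock < 0 ||
          connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("connect");
          return 1;
      }

      int memfd = recv_fd(sock);        /* guest RAM arrives as an fd */
      size_t len = 16u << 20;           /* placeholder region size */
      void *ram = memfd < 0 ? MAP_FAILED :
                  mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                       memfd, 0);
      if (ram == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      /* ... read/write guest memory through `ram`, zero copies ... */
      munmap(ram, len);
      close(memfd);
      close(sock);
      return 0;
  }

In a real client the region count, sizes, and offsets would arrive as
protocol messages alongside the fds; the sketch hardcodes one region
purely to keep the shape visible.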

The basic vhost-user stuff isn't actually that hard; if you aren't
shuffling commands over the queues you should find it pretty simple -
so I think your assumption that avoiding it keeps things simpler may be
wrong.  It might be easier if you use it!
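
For reference, the "basic vhost-user stuff" is a unix-socket protocol in
which every message is a 12-byte header (request, flags, payload size)
followed by a payload, and guest memory arrives as fds in SCM_RIGHTS
ancillary data on VHOST_USER_SET_MEM_TABLE. Below is a hedged sketch of
the receive side; the message codes and struct layouts should be
verified against QEMU's vhost-user specification (docs/interop in the
QEMU tree), and error handling is pared down.

  /* Hedged sketch of a vhost-user backend's receive path.  Constants
   * and layouts should be checked against the vhost-user spec; short
   * reads and most error handling are omitted for brevity. */
  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <sys/mman.h>

  #define VHOST_USER_GET_FEATURES   1
  #define VHOST_USER_SET_MEM_TABLE  5
  #define VHOST_MEMORY_MAX_NREGIONS 8

  struct vu_header {                /* 12 bytes on the wire */
      uint32_t request;
      uint32_t flags;
      uint32_t size;                /* payload bytes that follow */
  };

  struct vu_mem_region {
      uint64_t guest_phys_addr;
      uint64_t memory_size;
      uint64_t userspace_addr;
      uint64_t mmap_offset;
  };

  struct vu_mem_table {
      uint32_t nregions;
      uint32_t padding;
      struct vu_mem_region regions[VHOST_MEMORY_MAX_NREGIONS];
  };

  /* Read one message header plus any SCM_RIGHTS fds, then the payload. */
  static int vu_recv(int sock, struct vu_header *hdr,
                     void *payload, size_t max, int *fds, int *nfds)
  {
      union {
          struct cmsghdr align;
          char buf[CMSG_SPACE(VHOST_MEMORY_MAX_NREGIONS * sizeof(int))];
      } u;
      struct iovec iov = { .iov_base = hdr, .iov_len = sizeof(*hdr) };
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };

      *nfds = 0;
      if (recvmsg(sock, &msg, 0) != sizeof(*hdr) || hdr->size > max)
          return -1;
      for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
           c = CMSG_NXTHDR(&msg, c)) {
          if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
              *nfds = (c->cmsg_len - CMSG_LEN(0)) / sizeof(int);
              memcpy(fds, CMSG_DATA(c), *nfds * sizeof(int));
          }
      }
      if (hdr->size == 0)
          return 0;
      return read(sock, payload, hdr->size) == (ssize_t)hdr->size ? 0 : -1;
  }

  /* Map the guest RAM regions carried by a SET_MEM_TABLE message. */
  static void vu_map_regions(struct vu_mem_table *t, int *fds, int nfds)
  {
      for (uint32_t i = 0; i < t->nregions && (int)i < nfds; i++) {
          void *p = mmap(NULL,
                         t->regions[i].memory_size +
                         t->regions[i].mmap_offset,
                         PROT_READ | PROT_WRITE, MAP_SHARED, fds[i], 0);
          /* the zero-copy view of guest RAM starts at p + mmap_offset */
          (void)p;
      }
  }

A backend that only wants zero-copy access to guest RAM can handle
SET_MEM_TABLE as above and leave the vring setup messages as no-ops,
which is the "not shuffling commands over the queues" case.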

Dave

> > 
> > thanks
> > -- PMM
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



Thread overview: 22+ messages
2019-10-29 14:31 RFC: New device for zero-copy VM memory access geoff
2019-10-29 22:53 ` geoff
2019-10-30  8:10   ` geoff
2019-10-30 18:52 ` Dr. David Alan Gilbert
2019-10-31  2:55   ` geoff
2019-10-31 11:52     ` geoff
2019-10-31 12:36     ` Peter Maydell
2019-10-31 13:24     ` Dr. David Alan Gilbert
2019-10-31 14:18       ` geoff
2019-10-31 14:52         ` Peter Maydell
2019-10-31 15:21           ` geoff
2019-10-31 15:52             ` Dr. David Alan Gilbert [this message]
2019-11-03 10:10               ` geoff
2019-11-03 11:03                 ` geoff
2019-11-04 11:55                   ` Dr. David Alan Gilbert
2019-11-04 12:05                     ` geoff
2019-11-04 16:35                       ` Dr. David Alan Gilbert
2019-11-05 10:05                       ` Marc-André Lureau
2019-11-26 18:25                         ` Dr. David Alan Gilbert
2019-11-04 10:26 ` Gerd Hoffmann
2019-11-04 10:31   ` geoff
2019-11-05  9:38     ` Gerd Hoffmann
