From: "Michael S. Tsirkin" <mst@redhat.com>
To: Gregory Haskins <gregory.haskins@gmail.com>
Cc: Anthony Liguori <anthony@codemonkey.ws>,
Ingo Molnar <mingo@elte.hu>,
kvm@vger.kernel.org, Avi Kivity <avi@redhat.com>,
alacrityvm-devel@lists.sourceforge.net,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for vbus_driver objects
Date: Tue, 18 Aug 2009 19:25:14 +0300
Message-ID: <20090818162514.GA19846@redhat.com>
In-Reply-To: <4A8AC68C.6040308@gmail.com>

On Tue, Aug 18, 2009 at 11:19:40AM -0400, Gregory Haskins wrote:
> >>>> OTOH, Michael's patch is purely targeted at improving virtio-net on kvm,
> >>>> and it's likewise constrained by various limitations of that decision
> >>>> (such as its reliance on the PCI model, and the kvm memory scheme).
> >>> vhost is actually not related to PCI in any way. It simply leaves all
> >>> setup for userspace to do. And the memory scheme was intentionally
> >>> separated from kvm so that it can easily support e.g. lguest.
> >>>
> >> I think you have missed my point. I mean that vhost requires a separate
> >> bus model (a la qemu-pci).
> >
> > So? That can be in userspace, and can be anything including vbus.
>
> -ENOPARSE
>
> Can you elaborate?
Write a device that signals an eventfd on virtio kick, and polls an
eventfd for notifications, and you can use vhost-net. vbus, surely, can
do this?
> >
> >> And no, your memory scheme is not separated,
> >> at least, not very well. It still assumes memory-regions and
> >> copy_to_user(), which is very kvm-esque.
> >
> > I don't think so: it works for lguest, kvm, UML, and containers
>
> kvm-_esque_, meaning anything that follows the region+copy_to_user
> model. Not all things do.
Pretty much all things where it makes sense to share code with
vhost-net. If there's hardware that wants direct access to descriptor
rings, it just needs a driver.
> >> Vbus has people using things
> >> like userspace containers (no regions),
> >
> > vhost by default works without regions
>
> That's a start, but not good enough if you were trying to achieve the
> same thing as vbus. As I said before, I've never said you had to
> achieve the same thing, but do note that they are distinctly different,
> with different goals. You are solving a directed problem. I am solving
> a general problem, and trying to solve it once.
Heh. A good demonstration of vbus generality would be a solution that
speeds up virtio in guests. What venet seems to illustrate instead is
that one has to rework all of host, guest and hypervisor to use vbus.
Maybe it does not need to be that way - it just seems so.
> >> and physical hardware (dma
> >> controllers, so no regions or copy_to_user) so your scheme quickly falls
> >> apart once you get away from KVM.
> >
> > Someone took a driver and is building hardware for it ... so what?
>
> What is your point?
OK, can we forget about that physical hardware then?
> >> Don't get me wrong: That design may have its place. Perhaps you only
> >> care about fixing KVM, which is a perfectly acceptable strategy.
> >> It's just not a strategy that I think is the best approach. Essentially you
> >> are promoting the proliferation of competing backends, and I am trying
> >> to unify them (which is ironic, given that this thread started with
> >> concerns that I was fragmenting things ;).
> >
> > So, you don't see how venet fragments things? It's pretty obvious ...
>
> I never said it doesn't. venet started as a test harness, but now it is
> inadvertently fragmenting the virtio-net effort. I admit it. It wasn't
> intentional, but just worked out that way. Until your vhost idea is
> vetted and benchmarked, it's not even in the running.
>
> Venet is currently
> the highest performing 802.x acceleration for KVM that I am aware of, so
> it will continue to garner interest from users concerned with performance.
>
> But likewise, vhost has the potential to fragment the back-end model.
> That was my point.
You don't see the difference? Long term, vhost-net can just be enabled by
default whenever it is present, and there is a single guest driver to
support. OTOH, venet means that we have to support two guest drivers,
virtio and venet, for a long time.
> >
> >> The bottom line is, you have a simpler solution that is more finely
> >> targeted at KVM and virtio-networking. It fixes probably a lot of
> >> problems with the existing implementation, but it still has limitations.
> >>
> >> OTOH, what I am promoting is more complex, but more flexible. That is
> >> the tradeoff. You can't have both ;)
> >
> > We can. connect eventfds to hypercalls, and vhost will work with vbus.
>
> -ENOPARSE
>
> vbus doesn't use hypercalls, and I do not see why or how you would
> connect two backend models together like this. Can you elaborate?
I think some older version did. But whatever: signal the eventfd on guest
kick, poll the eventfd to notify the guest, and you can use vhost-net with
vbus.
> >
> >> So do not for one second think
> >> that what you implemented is equivalent, because they are not.
> >>
> >> In fact, I believe I warned you about this potential problem when you
> >> decided to implement your own version. I think I said something to the
> >> effect of "you will either have a subset of functionality, or you will
> >> ultimately reinvent what I did". Right now you are in the subset phase.
> >
> > No. Unlike vbus, vhost supports unmodified guests and live migration.
>
> By "subset", I am referring to your interfaces and the scope of their
> applicability. The things you need to do to make vhost work and a vbus
> device work, from a memory and signaling abstraction POV, are going to
> be extremely similar.
>
> The difference in how the guest sees these backends is all
> contained in the vbus-connector. Therefore, what you *could* have done
> is simply write a connector that only supports
> "virtio" backends, and surfaces them as regular PCI devices to the
> guest. Then you could have reused all the abstraction features in vbus,
> instead of reinventing them (case in point, your region+copy_to_user
> code). And likewise, anyone using vbus could use your virtio-net backend.
>
> Instead, I am still left with no virtio-net backend implemented, and you
> were left designing, writing, and testing facilities that I've already
> completed. So it was duplicative effort.
>
> Kind Regards,
> -Greg
>
As I said, I couldn't reuse your code the way it's written. But happily
you can reuse vhost - it's just a library, link with it - or even
vhost-net, as I explained above.
--
MST