xen-devel.lists.xenproject.org archive mirror
From: David Vrabel <david.vrabel@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Cc: Vincent Hanquez <vincent.hanquez@citrix.com>,
	Tim Deegan <tim@xen.org>,
	Xen-devel@lists.xen.org
Subject: Re: Inter-domain Communication using Virtual Sockets (high-level design)
Date: Thu, 20 Jun 2013 12:05:56 +0100	[thread overview]
Message-ID: <51C2E214.8090004@citrix.com> (raw)
In-Reply-To: <51BF555E.1040306@citrix.com>

On 17/06/13 19:28, Ross Philipson wrote:
> On 06/13/2013 12:27 PM, Tim Deegan wrote:
>> Hi,
>>
>> At 19:07 +0100 on 11 Jun (1370977636), David Vrabel wrote:
>>> This is a high-level design document for an inter-domain communication
>>> system under the virtual sockets API (AF_VSOCK) recently added to Linux.
>>
>> I'd be very interested to hear the v4v authors' opinions on this VSOCK
>> draft, btw -- in particular if it (or something similar) can provide all
>> v4v's features without new hypervisor code, I'd very much prefer it.
> 
> I guess I cannot be 100% sure just by reading the part of the spec on
> the low-level transport mechanism. We originally tried to use a
> grant-based model and ran into issues. Two of the most pronounced were:
> 
>  - Failure of grantees to release grants would cause hung domains under
> certain situations. This was discussed early in the V4V RFC work that
> Jean G. did. I am not sure if this has been fixed and if so, how. There
> was a suggestion about a fix in a reply from Daniel a while back.

The use of grants that only permit copying (i.e., no map/unmap) should
avoid any issues like these.  Since the grantee can never hold a
mapping, the granter can safely revoke a copy-only grant at any time.

>  - Synchronization between guests was very complicated without a central
> arbitrator like the hypervisor.

I'm not sure what you mean here.  What are you synchronizing?

> Also this solution may have some scaling issues. If I understand the
> model being proposed here, each ring (which I take to be a connection)
> consumes an event channel. Is this not a scaling problem when the
> number of connections is large? I may not fully understand the
> proposed low-level transport spec.

If there are N bits of work to do (N messages to resend, for example),
then it doesn't matter whether we get N notifications via N event
channels, or one notification plus a data structure listing the N peers
that need work -- the amount of work is the same.

The hard limit that the number of event channels places on scalability
will be removed in Xen 4.4 (using one of the two proposals for an
extended event channel ABI).

David

Thread overview: 10+ messages
2013-06-11 18:07 Inter-domain Communication using Virtual Sockets (high-level design) David Vrabel
2013-06-11 18:54 ` Andrew Cooper
2013-06-13 16:27 ` Tim Deegan
2013-06-17 16:19   ` David Vrabel
2013-06-20 11:15     ` Tim Deegan
2013-06-17 18:28   ` Ross Philipson
2013-06-20 11:05     ` David Vrabel [this message]
2013-06-20 11:30     ` Tim Deegan
2013-06-20 14:11       ` Ross Philipson
2013-10-30 14:51 ` David Vrabel
