public inbox for kvm@vger.kernel.org
From: Cam Macdonell <cam@cs.ualberta.ca>
To: subbu kl <subbukl@gmail.com>
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH] Add shared memory PCI device that shares a memory object between VMs
Date: Thu, 23 Apr 2009 10:28:00 -0600	[thread overview]
Message-ID: <49F09710.5000405@cs.ualberta.ca> (raw)
In-Reply-To: <f3b32c250904222355y687c39ecl11c52267a1ea7386@mail.gmail.com>

subbu kl wrote:
> Cam,
> 
> just a wild thought about an alternative approach. 

Ideas are always good.

> Once a specific address range of one guest is visible to the other 
> guest, it's just a matter of a DMA or a single memcpy to transfer the 
> data across.

My idea is to eliminate unnecessary copying.  This introduces one.

> Usually, non-transparent PCIe bridges (NTBs) are used for 
> inter-processor data communication. A physical PCIe NTB between two 
> processors just sets up a PCIe data channel with some address translation.
> 
> So I was just wondering: if we can write this non-transparent bridge 
> (as a qemu PCI device) with address translation capability, then guests 
> can just mmap and start accessing each other's memory :)

I think your concept is similar to what Anthony suggested: using virtio 
to export and import other VMs' memory.  However, RAM and shared memory 
are not the same thing, and having one guest access another's RAM could 
confuse the guest.  With the approach of mapping a BAR, the shared 
memory is separate from the guest RAM, but it can still be mapped by 
guest processes.
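As a sketch of what that buys us: a guest process can map the device's BAR through its sysfs resource file the same way it would map any file.  The sysfs path, BAR index, and region size below are assumptions for illustration, not part of the patch:

```c
/* Sketch: map a shared-memory PCI BAR into a guest process.
 * On a real guest the path would be the resource file of the
 * shared-memory device, e.g.
 * /sys/bus/pci/devices/0000:00:04.0/resource2 (hypothetical). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map `len` bytes of the file at `path` read-write and shared.
 * Returns the mapping, or NULL on failure. */
void *map_shared_bar(const char *path, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return NULL;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}
```

Because every VM's BAR is backed by the same host memory object, stores made through one guest's mapping become visible through the others' mappings, with no copy in between.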

Cam

> ~subbu
> 
> On Thu, Apr 23, 2009 at 4:11 AM, Cam Macdonell <cam@cs.ualberta.ca> wrote:
> 
>     subbu kl wrote:
> 
>         correct me if I'm wrong:
>         can we do the sharing by writing a non-transparent qemu
>         PCI device on the host so that guests can access each other's
>         address space?
> 
> 
>     Hi Subbu,
> 
>     I'm a bit confused by your question.  Are you asking how this device
>     works or suggesting an alternative approach?  I'm not sure what you
>     mean by a non-transparent qemu device.
> 
>     Cam
> 
> 
>         ~subbu
> 
> 
>         On Sun, Apr 19, 2009 at 3:56 PM, Avi Kivity <avi@redhat.com> wrote:
> 
>            Cameron Macdonell wrote:
> 
> 
>                Hi Avi and Anthony,
> 
>                Sorry for the top-reply, but we haven't discussed this aspect
>                here before.
> 
>                I've been thinking about how to implement interrupts.
>                As far as I can tell, unix domain sockets in Qemu/KVM
>                are used point-to-point with one VM being the server by
>                specifying "server" along with the unix: option.  This
>                works simply for two VMs, but I'm unsure how this can
>                extend to multiple VMs.  How would a server VM know how
>                many clients to wait for?  How can messages then be
>                multicast or broadcast?  Is a separate "interrupt
>                server" necessary?
> 
> 
> 
>            I don't think unix provides a reliable multicast RPC.  So yes, an
>            interrupt server seems necessary.
> 
>            You could expand its role and make it a "shared memory PCI
>            card server", and have it also be responsible for providing
>            the backing file using an SCM_RIGHTS fd.  That would reduce
>            setup headaches for users (setting up a file for which all
>            VMs have permissions).
> 
>            --
>            Do not meddle in the internals of kernels, for they are
>            subtle and quick to panic.
> 
> 
>            --
>            To unsubscribe from this list: send the line "unsubscribe kvm" in
>            the body of a message to majordomo@vger.kernel.org
>            More majordomo info at http://vger.kernel.org/majordomo-info.html
> 
> 
> 
> 
>         -- 
>         ~subbu
> 
> 
> 
> 
> -- 
> ~subbu
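
As a sketch of the SCM_RIGHTS handoff Avi describes in the quoted message: a "shared memory server" can pass the open fd of the common backing file to each VM over its unix socket, so no VM needs filesystem access to the file itself.  The helper names below are illustrative, not from the patch:

```c
/* Sketch: pass an open file descriptor over a unix-domain socket
 * with SCM_RIGHTS, the way a shared-memory server could hand each
 * VM the fd of the common backing file. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send `fd` over unix socket `sock`.  Returns 0 on success. */
int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    /* Union guarantees alignment of the control buffer. */
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type = SCM_RIGHTS;
    c->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));
    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive an fd sent with send_fd().  Returns the fd, or -1. */
int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };
    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(c), sizeof(int));
    return fd;
}
```

The received descriptor refers to the same open file, so every VM that mmaps it shares one memory object; the server only needs to hand out the fd once per connecting VM.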


Thread overview: 12+ messages
2009-04-01 15:43 [PATCH] Add shared memory PCI device that shares a memory object between VMs Cam Macdonell
2009-04-01 16:29 ` Anthony Liguori
2009-04-01 18:07   ` Avi Kivity
2009-04-01 18:52     ` Anthony Liguori
2009-04-01 20:32       ` Cam Macdonell
2009-04-02  7:07         ` Avi Kivity
2009-04-03 16:54           ` Cam Macdonell
2009-04-02  7:05       ` Avi Kivity
2009-04-19  5:22       ` Cameron Macdonell
2009-04-19 10:26         ` Avi Kivity
     [not found]           ` <f3b32c250904202348t6514d3efjc691b48c4dafe76a@mail.gmail.com>
2009-04-22 22:41             ` Cam Macdonell
     [not found]               ` <f3b32c250904222355y687c39ecl11c52267a1ea7386@mail.gmail.com>
2009-04-23 16:28                 ` Cam Macdonell [this message]
