From: Ian Campbell
Subject: Re: Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
Date: Wed, 16 Mar 2011 08:03:02 +0000
Message-ID: <1300262582.15812.18.camel@localhost.localdomain>
References: <4D78A1ED.8040403@enseeiht.fr>
 <20110316031714.GB7905@dumpdata.com>
 <20110316075847.GS32595@reaktio.net>
In-Reply-To: <20110316075847.GS32595@reaktio.net>
To: Pasi Kärkkäinen
Cc: "xen-users@lists.xensource.com", "xen-devel@lists.xensource.com",
 MAYAP christine, Konrad Rzeszutek Wilk
List-Id: xen-devel@lists.xenproject.org

On Wed, 2011-03-16 at 07:58 +0000, Pasi Kärkkäinen wrote:
> On Tue, Mar 15, 2011 at 11:17:14PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Thu, Mar 10, 2011 at 11:03:25AM +0100, MAYAP christine wrote:
> > > Hello,
> > >
> > > I'm still reading documents about inter-VM communication via shared memory.
> > > I have found many references to IVC.
> >
> > The IVC is, I think, in the XCP source code. Google for the XCP source
> > code and you should find it.
> >
> There's also V4V in Citrix XenClient; the sources are available in the
> source iso. It includes Xen patches, Linux kernel patches, and user-space
> libraries providing a socket-like API (for Linux and Windows).

I wasn't aware of an IVC mechanism in XCP, so I suspect this is what
Konrad was thinking of.

Ian.

>
> -- Pasi
>
> > > Is it already present in Xen? How can I use it?
> >
> > You could also look at the gntalloc driver.
> > Look in the git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git #master tree.
> >
> > I think it pretty much does what you want. Here is test code that
> > you can use to communicate between domains using mmap-ed memory.
> >
> > Courtesy of Daniel De Graaf.
> >
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <stdint.h>
> > #include <string.h>
> > #include <unistd.h>
> > #include <fcntl.h>
> > #include <errno.h>
> > #include <sys/ioctl.h>
> > #include <sys/mman.h>
> >
> > struct ioctl_gntdev_grant_ref {
> > 	/* The domain ID of the grant to be mapped. */
> > 	uint32_t domid;
> > 	/* The grant reference of the grant to be mapped. */
> > 	uint32_t ref;
> > };
> >
> > /*
> >  * Allocates a new page and creates a new grant reference.
> >  */
> > #define IOCTL_GNTALLOC_ALLOC_GREF \
> > _IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_gntalloc_alloc_gref))
> > struct ioctl_gntalloc_alloc_gref {
> > 	/* IN parameters */
> > 	/* The ID of the domain to be given access to the grants. */
> > 	uint16_t domid;
> > 	/* Flags for this mapping */
> > 	uint16_t flags;
> > 	/* Number of pages to map */
> > 	uint32_t count;
> > 	/* OUT parameters */
> > 	/* The offset to be used on a subsequent call to mmap(). */
> > 	uint64_t index;
> > 	/* The grant references of the newly created grant, one per page */
> > 	/* Variable size, depending on count */
> > 	uint32_t gref_ids[1];
> > };
> >
> > #define GNTALLOC_FLAG_WRITABLE 1
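> > /*
> >  * Illustrative helper (an editorial sketch, not part of the original
> >  * program; the interactive sa() command further down does the same
> >  * thing): grant one writable page to peer_domid and return its local
> >  * mapping, with the grant reference in *gref_out.
> >  */
> > static void *grant_one_page(int gntalloc_fd, uint16_t peer_domid,
> > 			    uint32_t *gref_out)
> > {
> > 	struct ioctl_gntalloc_alloc_gref arg = {
> > 		.domid = peer_domid,
> > 		.flags = GNTALLOC_FLAG_WRITABLE,
> > 		.count = 1,
> > 	};
> > 	if (ioctl(gntalloc_fd, IOCTL_GNTALLOC_ALLOC_GREF, &arg))
> > 		return NULL;	/* errno holds the reason */
> > 	/* Pass this reference to the peer out of band (e.g. via xenstore). */
> > 	*gref_out = arg.gref_ids[0];
> > 	/* The page appears in our address space only after mmap()ing the
> > 	 * gntalloc device at the offset returned in arg.index. */
> > 	void *p = mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
> > 			gntalloc_fd, arg.index);
> > 	return p == MAP_FAILED ? NULL : p;
> > }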
> > /*
> >  * Deallocates the grant reference, allowing the associated page to be
> >  * freed if no other domains are using it.
> >  */
> > #define IOCTL_GNTALLOC_DEALLOC_GREF \
> > _IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_gntalloc_dealloc_gref))
> > struct ioctl_gntalloc_dealloc_gref {
> > 	/* IN parameters */
> > 	/* The offset returned in the map operation */
> > 	uint64_t index;
> > 	/* Number of references to unmap */
> > 	uint32_t count;
> > };
> >
> > #define IOCTL_GNTDEV_MAP_GRANT_REF \
> > _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_gntdev_map_grant_ref))
> > struct ioctl_gntdev_map_grant_ref {
> > 	/* IN parameters */
> > 	/* The number of grants to be mapped. */
> > 	uint32_t count;
> > 	uint32_t pad;
> > 	/* OUT parameters */
> > 	/* The offset to be used on a subsequent call to mmap(). */
> > 	uint64_t index;
> > 	/* Variable IN parameter. */
> > 	/* Array of grant references, of size @count. */
> > 	struct ioctl_gntdev_grant_ref refs[1];
> > };
> > #define GNTDEV_MAP_WRITABLE 0x1
> >
> > #define IOCTL_GNTDEV_UNMAP_GRANT_REF \
> > _IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_gntdev_unmap_grant_ref))
> > struct ioctl_gntdev_unmap_grant_ref {
> > 	/* IN parameters */
> > 	/* The offset returned by the corresponding map operation. */
> > 	uint64_t index;
> > 	/* The number of pages to be unmapped. */
> > 	uint32_t count;
> > 	uint32_t pad;
> > };
> >
> > /*
> >  * Sets up an unmap notification within the page, so that the other side
> >  * can do cleanup if this side crashes. Required to implement cross-domain
> >  * robust mutexes or close notification on communication channels.
> >  *
> >  * Each mapped page only supports one notification; multiple calls
> >  * referring to the same page overwrite the previous notification. You
> >  * must clear the notification prior to the IOCTL_GNTALLOC_DEALLOC_GREF
> >  * if you do not want it to occur.
> >  */
> > #define IOCTL_GNTDEV_SET_UNMAP_NOTIFY \
> > _IOC(_IOC_NONE, 'G', 7, sizeof(struct ioctl_gntdev_unmap_notify))
> > struct ioctl_gntdev_unmap_notify {
> > 	/* IN parameters */
> > 	/* Index of a byte in the page */
> > 	uint64_t index;
> > 	/* Action(s) to take on unmap */
> > 	uint32_t action;
> > 	/* Event channel to notify */
> > 	uint32_t event_channel_port;
> > };
> >
> > /* Clear (set to zero) the byte specified by index */
> > #define UNMAP_NOTIFY_CLEAR_BYTE 0x1
> > /* Send an interrupt on the indicated event channel */
> > #define UNMAP_NOTIFY_SEND_EVENT 0x2
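> > /*
> >  * Usage sketch (an illustration added here, not from the original mail;
> >  * it mirrors what sa() and mm() further down do): ask the kernel to
> >  * clear one byte of the shared page when this side unmaps or dies, so
> >  * the peer can notice:
> >  *
> >  *	struct ioctl_gntdev_unmap_notify n = {
> >  *		.index  = map_index + byte_offset_in_page,
> >  *		.action = UNMAP_NOTIFY_CLEAR_BYTE,
> >  *	};
> >  *	ioctl(gntdev_fd, IOCTL_GNTDEV_SET_UNMAP_NOTIFY, &n);
> >  *
> >  * Note that .index is an offset into the device's mmap space (the index
> >  * returned by the map ioctl plus the byte's offset within the page),
> >  * not a userspace pointer. map_index, byte_offset_in_page and gntdev_fd
> >  * are placeholder names.
> >  */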
> > /*
> >  * The gntalloc analogue of IOCTL_GNTDEV_SET_UNMAP_NOTIFY above; the same
> >  * unmap-notification semantics apply to pages created with
> >  * IOCTL_GNTALLOC_ALLOC_GREF.
> >  */
> > #define IOCTL_GNTALLOC_SET_UNMAP_NOTIFY \
> > _IOC(_IOC_NONE, 'G', 7, sizeof(struct ioctl_gntalloc_unmap_notify))
> > struct ioctl_gntalloc_unmap_notify {
> > 	/* IN parameters */
> > 	/* Index of a byte in the page */
> > 	uint64_t index;
> > 	/* Action(s) to take on unmap */
> > 	uint32_t action;
> > 	/* Event channel to notify */
> > 	uint32_t event_channel_port;
> > };
> >
> > #ifndef offsetof
> > #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
> > #endif
> >
> > int a_fd;
> > int d_fd;
> >
> > struct shr_page {
> > 	uint64_t id;
> > 	char buffer[64];
> > 	uint8_t notifies[8];
> > };
> >
> > struct data {
> > 	struct shr_page* mem;
> > 	int handle;
> > } items[128];
> >
> > void sa(int id)
> > {
> > 	struct ioctl_gntalloc_alloc_gref arg = {
> > 		.domid = id,
> > 		.flags = GNTALLOC_FLAG_WRITABLE,
> > 		.count = 1
> > 	};
> > 	int rv = ioctl(a_fd, IOCTL_GNTALLOC_ALLOC_GREF, &arg);
> > 	if (rv) {
> > 		printf("src-add error: %s (rv=%d)\n", strerror(errno), rv);
> > 		return;
> > 	}
> > 	int i = 0;
> > 	while (items[i].mem) i++;
> > 	items[i].mem = mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
> > 			a_fd, arg.index);
> > 	if (items[i].mem == MAP_FAILED) {
> > 		items[i].mem = 0;
> > 		printf("mmap failed: SHOULD NOT HAPPEN\n");
> > 		return;
> > 	}
> > 	items[i].handle = arg.index;
> > 	printf("Created shared page with domain %d, grant #%d. Mapped locally at %llu=%p\n",
> > 		id, arg.gref_ids[0], (unsigned long long)arg.index, items[i].mem);
> >
> > 	/* Tag the page and mark ourselves alive in notify slot 0. */
> > 	items[i].mem->id = rand() | ((uint64_t)getpid() << 32);
> > 	items[i].mem->notifies[0] = 1;
> > 	struct ioctl_gntalloc_unmap_notify uarg = {
> > 		.index = arg.index + offsetof(struct shr_page, notifies[0]),
> > 		.action = UNMAP_NOTIFY_CLEAR_BYTE
> > 	};
> > 	rv = ioctl(a_fd, IOCTL_GNTALLOC_SET_UNMAP_NOTIFY, &uarg);
> > 	if (rv)
> > 		printf("gntalloc unmap notify error: %s (rv=%d)\n", strerror(errno), rv);
> > }
> >
> > void sd(int ref) {
> > 	struct ioctl_gntalloc_dealloc_gref arg = {
> > 		.index = ref,
> > 		.count = 1
> > 	};
> >
> > 	int rv = ioctl(a_fd, IOCTL_GNTALLOC_DEALLOC_GREF, &arg);
> > 	if (rv)
> > 		printf("src-del error: %s (rv=%d)\n", strerror(errno), rv);
> > 	else
> > 		printf("Stopped offering grant at offset %d\n", ref);
> > }
> >
> > void mm(int domid, int refid) {
> > 	struct ioctl_gntdev_map_grant_ref arg = {
> > 		.count = 1,
> > 		.refs[0].domid = domid,
> > 		.refs[0].ref = refid,
> > 	};
> > 	int rv = ioctl(d_fd, IOCTL_GNTDEV_MAP_GRANT_REF, &arg);
> > 	if (rv) {
> > 		printf("Could not map grant %d.%d: %s (rv=%d)\n", domid, refid,
> > 			strerror(errno), rv);
> > 		return;
> > 	}
> > 	int i = 0, j = 1;
> > 	while (items[i].mem) i++;
> > 	items[i].mem = mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
> > 			d_fd, arg.index);
> > 	if (items[i].mem == MAP_FAILED) {
> > 		items[i].mem = 0;
> > 		printf("Could not map grant %d.%d: %s (map failed)\n", domid,
> > 			refid, strerror(errno));
> > 		return;
> > 	}
> > 	items[i].handle = arg.index;
> > 	printf("Mapped grant %d.%d as %llu=%p\n", domid, refid,
> > 		(unsigned long long)arg.index, items[i].mem);
> >
> > 	/* Claim the first free notify slot and arm our own unmap notification. */
> > 	while (items[i].mem->notifies[j]) j++;
> > 	items[i].mem->notifies[j] = 1;
> > 	struct ioctl_gntdev_unmap_notify uarg = {
> > 		.index = arg.index + offsetof(struct shr_page, notifies[j]),
> > 		.action = UNMAP_NOTIFY_CLEAR_BYTE
> > 	};
> > 	rv = ioctl(d_fd, IOCTL_GNTDEV_SET_UNMAP_NOTIFY, &uarg);
> > 	if (rv)
> > 		printf("gntdev unmap notify error: %s (rv=%d)\n", strerror(errno), rv);
> > }
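> > /*
> >  * How the notify bytes are used above: slot 0 is taken by the granting
> >  * side in sa(); each mapper in mm() claims the first free slot j >= 1.
> >  * Every participant arms UNMAP_NOTIFY_CLEAR_BYTE on its own slot, so a
> >  * slot reading 0 again in show() output means that participant has
> >  * unmapped the page or exited/crashed.
> >  */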
> > void gu(int index) {
> > 	struct ioctl_gntdev_unmap_grant_ref arg = {
> > 		.index = index,
> > 		.count = 1,
> > 	};
> > 	int rv = ioctl(d_fd, IOCTL_GNTDEV_UNMAP_GRANT_REF, &arg);
> > 	if (rv)
> > 		printf("gu error: %s (rv=%d)\n", strerror(errno), rv);
> > 	else
> > 		printf("Unhooked mapped grant at offset %d\n", index);
> > }
> >
> > void mu(void* addr) {
> > 	int i = 0;
> > 	munmap(addr, 4096);
> > 	while (i < 128) {
> > 		if (items[i].mem == addr)
> > 			items[i].mem = 0;
> > 		i++;
> > 	}
> > 	printf("Unmapped page at %p\n", addr);
> > }
> >
> > void show(char* word) {
> > 	int i;
> > 	int wlen = strlen(word);
> > 	if (wlen > 63)
> > 		wlen = 63;	/* the shared buffer holds at most 63 bytes */
> > 	for (i = 0; i < 128; i++) {
> > 		if (!items[i].mem)
> > 			continue;
> > 		/* Prepend the word to the shared buffer, then dump the page. */
> > 		memmove(items[i].mem->buffer + wlen, items[i].mem->buffer, 63 - wlen);
> > 		memcpy(items[i].mem->buffer, word, wlen);
> > 		printf("%02d(%p,%d): id %16llx n=%d%d%d%d%d%d%d%d b=%s\n",
> > 			i, (void*)items[i].mem, items[i].handle,
> > 			(unsigned long long)items[i].mem->id,
> > 			items[i].mem->notifies[0], items[i].mem->notifies[1],
> > 			items[i].mem->notifies[2], items[i].mem->notifies[3],
> > 			items[i].mem->notifies[4], items[i].mem->notifies[5],
> > 			items[i].mem->notifies[6], items[i].mem->notifies[7],
> > 			items[i].mem->buffer);
> > 	}
> > 	printf("END\n");
> > }
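> > /*
> >  * Example session (an illustrative sketch; domain IDs and grant numbers
> >  * will differ, and both domains need the gntalloc/gntdev devices):
> >  *
> >  *   dom0> add 1        (grant a page to domid 1; prints the grant ref,
> >  *                       say #42, and the local mapping address)
> >  *   domU> map 0 42     (map grant ref 42 offered by domid 0)
> >  *   domU> hello        (prepends "hello" to every mapped buffer)
> >  *   dom0> show         ("hello" is now visible in dom0's mapping)
> >  */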
> > int main(int argc, char** argv) {
> > 	a_fd = open("/dev/xen/gntalloc", O_RDWR);
> > 	d_fd = open("/dev/xen/gntdev", O_RDWR);
> > 	printf(
> > 		"add <domid>           return gntref, address\n"
> > 		"map <domid> <gref>    return index, address\n"
> > 		"adel <offset>         delete internal\n"
> > 		"ddel <offset>         delete internal\n"
> > 		"unmap <address>       unmap memory\n"
> > 		"show                  show all pages\n"
> > 		"<word>                append word to all mapped pages, show\n"
> > 		"PID %x\n", getpid()
> > 	);
> > 	while (1) {
> > 		char line[80];
> > 		char word[80] = "";
> > 		long a = 0, b = 0;
> > 		printf("\n> ");
> > 		fflush(stdout);
> > 		if (!fgets(line, 80, stdin))
> > 			break;
> > 		sscanf(line, "%s %ld %ld", word, &a, &b);
> > 		if (!strcmp(word, "add")) {
> > 			sa(a);
> > 		} else if (!strcmp(word, "map")) {
> > 			mm(a, b);
> > 		} else if (!strcmp(word, "adel")) {
> > 			sd(a);
> > 		} else if (!strcmp(word, "ddel")) {
> > 			gu(a);
> > 		} else if (!strcmp(word, "unmap")) {
> > 			mu((void*)a);
> > 		} else if (!strcmp(word, "show")) {
> > 			show("");
> > 		} else {
> > 			show(word);
> > 		}
> > 	}
> > }
> >
> > > I really need some explanations.
> > > Thanks!
> > >
> > > -------- Original Message --------
> > > Subject: Fwd: [Xen-users] Shared memory between Dom0 and DomU
> > > Date: Wed, 09 Mar 2011 15:12:06 +0100
> > > From: MAYAP christine
> > > To: xen-devel@lists.xensource.com, xen-users@lists.xensource.com
> > >
> > > Hello,
> > >
> > > While waiting for some help, I have continued to google about this topic.
> > > In fact, I want two distinct processes (one on the Dom0 and the second
> > > one on the DomU) to be able to read from or write to a shared memory
> > > region.
> > >
> > > Many posts talk about granting permissions or the grant table.
> > >
> > > I'm sorry if my question is a bit stupid. Where would I start in order
> > > to test those grant permissions?
> > > Should I write an external program, or should I modify some of the
> > > Xen source code?
> > >
> > > I really have no idea where to start, and I'll sincerely appreciate
> > > any pointer, even one just stating where to begin.
> > >
> > > Thanks in advance!
> > >
> > > -------- Original Message --------
> > > Subject: [Xen-users] Shared memory between Dom0 and DomU
> > > Date: Wed, 09 Mar 2011 13:07:19 +0100
> > > From: MAYAP christine
> > > To: xen-devel@lists.xensource.com, xen-users@lists.xensource.com
> > >
> > > Hi,
> > >
> > > I'm a newbie at using shared memory with Xen. I have usually used it
> > > between processes on the same computer.
> > >
> > > Please, may I have some useful links about sharing memory, first
> > > between Dom0 and DomU, and then between DomUs?
> > >
> > > I'm able to give more information if needed!
> > >
> > > Cheers!
> > >
> > > --
> > > MAYAP Christine
> > > IRIT/ENSEEIHT
> > > 2 rue Charles Camichel - BP 7122
> > > 31071 Toulouse cedex 7