* [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
@ 2007-06-24 13:05 Shahar Livne
2007-06-24 14:07 ` Anthony Liguori
2007-06-24 19:16 ` Paul Brook
0 siblings, 2 replies; 14+ messages in thread
From: Shahar Livne @ 2007-06-24 13:05 UTC (permalink / raw)
To: qemu-devel
Hi,
I am working on a project that runs many concurrent qemu sessions with VNC.
Handling the VNC ports for the different sessions is an issue that
projects like qemudo also try to solve.
The solution chosen there was to maintain a pool of ports and manage them
internally.
Such a solution ignores the fact that these ports may be occupied by
other processes on the same OS, and it duplicates a task the OS already
performs.
Instead, I suggest delegating port handling to an external facility
(such as the OS's native free-port selection) by passing in a
pre-allocated port.
Currently there is the following vnc option:
-vnc display [start a VNC server on display]
Adding the following option:
-vnc-socket sd [force VNC server on an already opened Socket Descriptor]
which skips opening a new socket for VNC on port 5900+display and uses
the given socket descriptor sd instead.
In this way, one can create a socket, bind it to any specific port (e.g.
an arbitrary free port chosen by the OS), and only then start qemu with
the socket descriptor. Done this way, all port accounting is handled by
the OS.
The patch is against cvs 2007-06-21, but I think nothing relevant has
changed since.
Comments are welcome,
Shahar
[-- Attachment #2: qemu_vnc_forced_socket.patch --]
[-- Type: text/plain, Size: 6958 bytes --]
Index: vnc.c
===================================================================
--- vnc.c (revision 6)
+++ vnc.c (revision 8)
@@ -59,6 +59,7 @@
QEMUTimer *timer;
int lsock;
int csock;
+ int fsock;
DisplayState *ds;
int need_update;
int width;
@@ -99,9 +100,14 @@
if (vnc_state == NULL)
term_printf("VNC server disabled\n");
else {
- term_printf("VNC server active on: ");
- term_print_filename(vnc_state->display);
- term_printf("\n");
+ if (vnc_state->fsock == -1) {
+ term_printf("VNC server active on: ");
+ term_print_filename(vnc_state->display);
+ term_printf("\n");
+ } else {
+ term_printf("VNC server active on socket descriptor: %d", vnc_state->fsock);
+ term_printf("\n");
+ }
if (vnc_state->csock == -1)
term_printf("No client connected\n");
@@ -1169,7 +1175,7 @@
extern int parse_host_port(struct sockaddr_in *saddr, const char *str);
-void vnc_display_init(DisplayState *ds, const char *arg)
+void vnc_display_init(DisplayState *ds, const char *arg, int forced_vnc_socket)
{
struct sockaddr *addr;
struct sockaddr_in iaddr;
@@ -1191,6 +1197,7 @@
vs->lsock = -1;
vs->csock = -1;
+ vs->fsock = forced_vnc_socket;
vs->depth = 4;
vs->last_x = -1;
vs->last_y = -1;
@@ -1213,60 +1220,67 @@
vnc_dpy_resize(vs->ds, 640, 400);
-#ifndef _WIN32
- if (strstart(arg, "unix:", &p)) {
- addr = (struct sockaddr *)&uaddr;
- addrlen = sizeof(uaddr);
-
- vs->lsock = socket(PF_UNIX, SOCK_STREAM, 0);
- if (vs->lsock == -1) {
- fprintf(stderr, "Could not create socket\n");
- exit(1);
- }
-
- uaddr.sun_family = AF_UNIX;
- memset(uaddr.sun_path, 0, 108);
- snprintf(uaddr.sun_path, 108, "%s", p);
- unlink(uaddr.sun_path);
- } else
+ if (vs->fsock == -1) {
+#ifndef _WIN32
+ if (strstart(arg, "unix:", &p)) {
+ addr = (struct sockaddr *)&uaddr;
+ addrlen = sizeof(uaddr);
+
+ vs->lsock = socket(PF_UNIX, SOCK_STREAM, 0);
+ if (vs->lsock == -1) {
+ fprintf(stderr, "Could not create socket\n");
+ exit(1);
+ }
+
+ uaddr.sun_family = AF_UNIX;
+ memset(uaddr.sun_path, 0, 108);
+ snprintf(uaddr.sun_path, 108, "%s", p);
+
+ unlink(uaddr.sun_path);
+ } else
#endif
- {
- addr = (struct sockaddr *)&iaddr;
- addrlen = sizeof(iaddr);
-
- vs->lsock = socket(PF_INET, SOCK_STREAM, 0);
- if (vs->lsock == -1) {
- fprintf(stderr, "Could not create socket\n");
- exit(1);
- }
-
- if (parse_host_port(&iaddr, arg) < 0) {
- fprintf(stderr, "Could not parse VNC address\n");
- exit(1);
- }
-
- iaddr.sin_port = htons(ntohs(iaddr.sin_port) + 5900);
-
- reuse_addr = 1;
- ret = setsockopt(vs->lsock, SOL_SOCKET, SO_REUSEADDR,
- (const char *)&reuse_addr, sizeof(reuse_addr));
- if (ret == -1) {
- fprintf(stderr, "setsockopt() failed\n");
- exit(1);
- }
- }
-
- if (bind(vs->lsock, addr, addrlen) == -1) {
- fprintf(stderr, "bind() failed\n");
- exit(1);
- }
+ {
+ addr = (struct sockaddr *)&iaddr;
+ addrlen = sizeof(iaddr);
+
+ vs->lsock = socket(PF_INET, SOCK_STREAM, 0);
+ if (vs->lsock == -1) {
+ fprintf(stderr, "Could not create socket\n");
+ exit(1);
+ }
+
+ if (parse_host_port(&iaddr, arg) < 0) {
+ fprintf(stderr, "Could not parse VNC address\n");
+ exit(1);
+ }
+
+ iaddr.sin_port = htons(ntohs(iaddr.sin_port) + 5900);
+
+ reuse_addr = 1;
+ ret = setsockopt(vs->lsock, SOL_SOCKET, SO_REUSEADDR,
+ (const char *)&reuse_addr, sizeof(reuse_addr));
+ if (ret == -1) {
+ fprintf(stderr, "setsockopt() failed\n");
+ exit(1);
+ }
+ }
+
+ if (bind(vs->lsock, addr, addrlen) == -1) {
+ fprintf(stderr, "bind() failed\n");
+ exit(1);
+ }
+
+ if (listen(vs->lsock, 1) == -1) {
+ fprintf(stderr, "listen() failed\n");
+ exit(1);
+ }
- if (listen(vs->lsock, 1) == -1) {
- fprintf(stderr, "listen() failed\n");
- exit(1);
+ } else {
+ vs->lsock = vs->fsock;
+ fprintf(stdout, "using forced socket %d\n",vs->fsock);
}
-
+
ret = qemu_set_fd_handler2(vs->lsock, vnc_listen_poll, vnc_listen_read, NULL, vs);
if (ret == -1) {
exit(1);
Index: vl.c
===================================================================
--- vl.c (revision 6)
+++ vl.c (revision 8)
@@ -178,6 +178,7 @@
static VLANState *first_vlan;
int smp_cpus = 1;
const char *vnc_display;
+int vnc_socket = -1;
#if defined(TARGET_SPARC)
#define MAX_CPUS 16
#elif defined(TARGET_I386)
@@ -6651,6 +6652,7 @@
"-no-reboot exit instead of rebooting\n"
"-loadvm file start right away with a saved state (loadvm in monitor)\n"
"-vnc display start a VNC server on display\n"
+ "-vnc-socket sd force VNC server on an already opened Socket Descriptor\n"
#ifndef _WIN32
"-daemonize daemonize QEMU after initializing\n"
#endif
@@ -6745,6 +6747,7 @@
QEMU_OPTION_usbdevice,
QEMU_OPTION_smp,
QEMU_OPTION_vnc,
+ QEMU_OPTION_vnc_socket,
QEMU_OPTION_no_acpi,
QEMU_OPTION_no_reboot,
QEMU_OPTION_show_cursor,
@@ -6836,6 +6839,7 @@
{ "usbdevice", HAS_ARG, QEMU_OPTION_usbdevice },
{ "smp", HAS_ARG, QEMU_OPTION_smp },
{ "vnc", HAS_ARG, QEMU_OPTION_vnc },
+ { "vnc-socket", HAS_ARG, QEMU_OPTION_vnc_socket },
/* temporary options */
{ "usb", 0, QEMU_OPTION_usb },
@@ -7588,6 +7592,18 @@
case QEMU_OPTION_vnc:
vnc_display = optarg;
break;
+ case QEMU_OPTION_vnc_socket:
+ {
+ int sd;
+ sd = atoi(optarg);
+ if (sd < 0) {
+ fprintf(stderr, "Bad argument to vnc socket descriptor\n");
+ exit(1);
+ } else {
+ vnc_socket = sd;
+ }
+ break;
+ }
case QEMU_OPTION_no_acpi:
acpi_enabled = 0;
break;
@@ -7879,7 +7895,7 @@
if (nographic) {
/* nothing to do */
} else if (vnc_display != NULL) {
- vnc_display_init(ds, vnc_display);
+ vnc_display_init(ds, vnc_display, vnc_socket);
} else {
#if defined(CONFIG_SDL)
sdl_display_init(ds, full_screen, no_frame);
Index: vl.h
===================================================================
--- vl.h (revision 6)
+++ vl.h (revision 8)
@@ -965,7 +965,7 @@
void cocoa_display_init(DisplayState *ds, int full_screen);
/* vnc.c */
-void vnc_display_init(DisplayState *ds, const char *display);
+void vnc_display_init(DisplayState *ds, const char *display, int forced_vnc_socket);
void do_info_vnc(void);
/* x_keymap.c */
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 13:05 [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port Shahar Livne
@ 2007-06-24 14:07 ` Anthony Liguori
2007-06-24 21:45 ` Shahar Livne
2007-06-24 19:16 ` Paul Brook
1 sibling, 1 reply; 14+ messages in thread
From: Anthony Liguori @ 2007-06-24 14:07 UTC (permalink / raw)
To: qemu-devel
Shahar Livne wrote:
> Hi,
>
> I am working on a project that runs many concurrent qemu sessions with
> vnc.
> Handling the vnc ports for the different sessions is an issue that
> also project like qemudo tries to solve.
> The solution chosen there was to handle a pool of ports and manage
> them internally.
> Such a solution ignores the fact that these ports may be occupied by
> other processes on the same OS, and it actually duplicates an OS task.
> A solution that uses external port handling facility (like a native OS
> free ports selection), by using a pre-allocated port is suggested.
>
> Currently there is the following vnc option:
> -vnc display [start a VNC server on display]
>
> Adding the following option:
> -vnc-socket sd [force VNC server on an already opened Socket Descriptor]
Just redirect each port to a unique unix domain socket and then you can
forward traffic to TCP sockets to your heart's content.
Regards,
Anthony Liguori
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 14:07 ` Anthony Liguori
@ 2007-06-24 21:45 ` Shahar Livne
2007-06-24 22:40 ` Anthony Liguori
0 siblings, 1 reply; 14+ messages in thread
From: Shahar Livne @ 2007-06-24 21:45 UTC (permalink / raw)
To: qemu-devel
Anthony Liguori wrote:
> Shahar Livne wrote:
>> Hi,
>>
>> I am working on a project that runs many concurrent qemu sessions
>> with vnc.
[..]
>> Adding the following option:
>> -vnc-socket sd [force VNC server on an already opened Socket
>> Descriptor]
>
> Just redirect each port to a unique unix domain socket and then you
> can forward traffic to TCP sockets to your heart's content.
>
> Regards,
>
> Anthony Liguori
Hi Anthony,
Thanks for your comment.
The problem with the solution you suggest is that all VNC traffic will
first be sent to the unix domain socket and then copied to the TCP
socket. This double work may be acceptable for a single instance of
qemu, but as I said, I run many concurrent sessions, which creates too
much load. In the solution I suggest, this extra copying is not needed.
Regards,
Shahar
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 21:45 ` Shahar Livne
@ 2007-06-24 22:40 ` Anthony Liguori
2007-06-25 8:28 ` Gilad Ben-Yossef
0 siblings, 1 reply; 14+ messages in thread
From: Anthony Liguori @ 2007-06-24 22:40 UTC (permalink / raw)
To: qemu-devel
Shahar Livne wrote:
> Hi Anthony,
>
> Thanks for your comment.
>
> The problem with the solution you suggest is that all VNC traffic will
> be first sent to the unix domain socket, and then copied to the TCP
> socket. This double work may be acceptable if we're talking about one
> instance of qemu, but as I said, I run many concurrent sessions which
> create too much load. In the solution I suggest, this extra copying is
> not needed.
You're optimizing prematurely. The overhead of the copy is negligible
for something like VNC. Under normal circumstances, we're talking about
30-100k/s. During idle usage, the bandwidth drops to almost nothing.
Regards,
Anthony Liguori
> Regards,
> Shahar
>
>
>
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 22:40 ` Anthony Liguori
@ 2007-06-25 8:28 ` Gilad Ben-Yossef
2007-06-25 11:55 ` Anthony Liguori
0 siblings, 1 reply; 14+ messages in thread
From: Gilad Ben-Yossef @ 2007-06-25 8:28 UTC (permalink / raw)
To: qemu-devel
Hi Anthony,
Thanks for the feedback.
I'm afraid I'm to blame for the idea behind this patch (but Shahar was
the one who actually did the real work; I'm just bothering him).
Anthony Liguori wrote:
>>
>> The problem with the solution you suggest is that all VNC traffic will
>> be first sent to the unix domain socket, and then copied to the TCP
>> socket. This double work may be acceptable if we're talking about one
>> instance of qemu, but as I said, I run many concurrent sessions which
>> create too much load. In the solution I suggest, this extra copying is
>> not needed.
>
>
> You're optimizing prematurely. The overhead of the copy is negligible
> for something like VNC. Under normal circumstances, we're talking about
> 30-100k/s. During idle usage, the bandwidth drops to almost nothing.
>
There are also the double context switches, the extra file descriptors
and the extra process to handle the copy, but you are absolutely right -
we have no indication whatsoever that this really has any measurable
impact on performance.
I guess it's easier to resort to performance as an excuse, since it
involves things you can measure (even if they are meaningless), rather
than trying to justify a design decision because it simply looks better. ;-)
I'll try to do just that, anyway:
Using Unix domain sockets would require adding extra code in some other
process to handle the socket-to-socket transfer: about 15 lines of code
that must keep running for as long as qemu does. That code still needs
to be maintained, separately from qemu, by anyone trying to do something
similar (so we have sync problems, etc.).
On the other hand, the change to qemu is ~5 lines (option parsing not
included ;-). It is initialization code only (no surprises mid-run) and
is maintained as part of qemu with the exact same functionality.
If you think users other than us will use this patch (and we believe
they will), we think it would be useful for it to be included in the
qemu mainline.
Anyway, thanks for reading this long email and for qemu VNC support in general :-)
Cheers,
Gilad
--
Gilad Ben-Yossef <gilad@codefidence.com>
Codefidence. A name you can trust(tm)
http://www.codefidence.com
Phone: +972.3.7515563 ext. 201 | Cellular: +972.52.8260388
SIP: gilad@pbx.codefidence.com | Fax: +972.3.7515503
Lacking fins or tail
the gefilte fish swims with
great difficulty.
-- A Jewish Haiku
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-25 8:28 ` Gilad Ben-Yossef
@ 2007-06-25 11:55 ` Anthony Liguori
2007-06-26 10:17 ` Gilad Ben-Yossef
0 siblings, 1 reply; 14+ messages in thread
From: Anthony Liguori @ 2007-06-25 11:55 UTC (permalink / raw)
To: qemu-devel
Gilad Ben-Yossef wrote:
> Hi Anthony,
>
> Thanks for the feedback.
>
> [..]
Here are the reasons why the Unix domain socket approach is superior:
Sharing a file descriptor implies a parent/child relationship. It also
implies that the daemon will be running for the entire lifetime of the
VM. Since VM's are meant to run for very long periods of time, this is
quite limiting. By utilizing a domain socket, you gain the ability to
record on disk the state of the daemon and then restart. The layer of
redirection also allows you to let your uses change the VNC server
properties while the VM is running (so you change the listening vnc
display from localhost:3 to :22 without restarting the VM).
Plus, live migration has no hope of working if you're passing file
descriptors on the command line as they're meaningless once you've migrated.
Regards,
Anthony Liguori
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-25 11:55 ` Anthony Liguori
@ 2007-06-26 10:17 ` Gilad Ben-Yossef
2007-06-26 10:48 ` Jannes Faber
2007-06-28 15:25 ` Anthony Liguori
0 siblings, 2 replies; 14+ messages in thread
From: Gilad Ben-Yossef @ 2007-06-26 10:17 UTC (permalink / raw)
To: qemu-devel
Anthony Liguori wrote:
> Here are the reasons why the Unix domain socket approach is superior:
>
> Sharing a file descriptor implies a parent/child relationship.
True.
> It also
> implies that the daemon will be running for the entire lifetime of the
> VM.
No. In fact, running an extra daemon for the entire lifetime of the
VM is exactly what I'm trying to avoid (one of the things, anyway).
Now I see why you think the unix domain socket option already solves the
problem. Our use case is actually a little different. Let me explain:
The machine running qemu has a web-based interface to start VMs.
A user asks for a new VM by browsing to a URL. The CGI implementing
that URL starts a new qemu instance, sends the user's web browser an
HTML page with an embedded Java VNC viewer, and terminates.
Here is the problem: the HTML page needs to have embedded in it the
port number for the Java VNC viewer to connect to.
Of course, the CGI can pick a free port and ask qemu to start the VNC
server on it, but that means the CGI needs to maintain a list of
free/used port ranges in some shared data structure, track the qemu
instance to know when it has terminated and the port is free again, and
of course hope that no unrelated process snatches a port in the range -
in general, duplicating the information the operating system already has
on free/in-use ports.
In our suggested solution, the CGI simply opens a listening socket on an
ephemeral port (letting the OS do the allocation), hands the file
descriptor to qemu to use, and *terminates* (after sending the HTML
page). No long-running daemons.
Having a daemon sit around just to shove the data from the Unix domain
socket to the TCP socket and back, and needing to track it and all,
really puts an ugly dent in the whole idea. More importantly, I think
what we are doing is a rather general concept, certainly not unique to
us (just look at qemudo - only, of course, they got it wrong... :-)
Hope this explains things a little better.
> Since VM's are meant to run for very long periods of time, this is
> quite limiting. By utilizing a domain socket, you gain the ability to
> record on disk the state of the daemon and then restart. The layer of
> redirection also allows you to let your uses change the VNC server
> properties while the VM is running (so you change the listening vnc
> display from localhost:3 to :22 without restarting the VM).
All of the above are really nice to have, but not at the cost of the
extra management overhead explained above.
Also, our VM lifetime is typically 15 minutes... :-)
> Plus, live migration has no hope of working if you're passing file
> descriptors on the command line as they're meaningless once you've
> migrated.
That, I have no answer for. What do you do with the Unix domain socket?
Open it by path/filename on the new machine?
Gilad
--
Gilad Ben-Yossef <gilad@codefidence.com>
Codefidence. A name you can trust(tm)
http://www.codefidence.com
Phone: +972.3.7515563 ext. 201 | Cellular: +972.52.8260388
SIP: gilad@pbx.codefidence.com | Fax: +972.3.7515503
Lacking fins or tail
the gefilte fish swims with
great difficulty.
-- A Jewish Haiku
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-26 10:17 ` Gilad Ben-Yossef
@ 2007-06-26 10:48 ` Jannes Faber
2007-06-28 15:25 ` Anthony Liguori
1 sibling, 0 replies; 14+ messages in thread
From: Jannes Faber @ 2007-06-26 10:48 UTC (permalink / raw)
To: qemu-devel
On 6/26/07, Gilad Ben-Yossef <gilad@codefidence.com> wrote:
>
> [..]
> In our suggested solution, our CGI simply opens a listening socket on an
> ephemeral port, letting the OS do the allocation, hands the file descriptor
> to qemu to use and *terminates* (after sending the HTML page).
> No long running daemons.
>
> Having a daemon sit around just to shove the data from the Unix domain
> socket
> to the TCP socket and back and needing to track it and all really puts an
> ugly
> dent on the whole idea and, more important - I think what we are doing is
> a rather general concept, certainly not unique to us (just look at qemudo,
> only of course, they got it wrong... :-)
>
> Hope this explains things a little better.
Isn't your suggestion also how xinetd works? I guess starting qemu from
xinetd could be useful as well in some use cases: a client simply
connects via VNC to the server (on the standard port), xinetd listens on
that port and starts a qemu instance, passing it the connection handle.
I guess the disadvantage would be that as soon as you lose the VNC
connection, you can't get it back anymore.
--
Jannes Faber
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-26 10:17 ` Gilad Ben-Yossef
2007-06-26 10:48 ` Jannes Faber
@ 2007-06-28 15:25 ` Anthony Liguori
2007-06-30 21:15 ` Gilad Ben-Yossef
1 sibling, 1 reply; 14+ messages in thread
From: Anthony Liguori @ 2007-06-28 15:25 UTC (permalink / raw)
To: qemu-devel
Gilad Ben-Yossef wrote:
>> It also implies that the daemon will be running for the entire
>> lifetime of the VM.
>
> No. In fact, running an extra daemon for the entire life time of the
> VM is exactly what I'm trying to avoid (one of the things, anyway).
>
> Now I see why you think the unix domain socket option solves the problem
> already. Our use case is actully a little different. Let me explain:
>
> The machine running qemu has a web based interface to start VMs.
> A user asks for a new VM to start by browsing to a URL. The CGI
> implmenting that URL will start a new qemu instance, send to the user
> web browser an HTML page with a JAVA VNC viewer embedded and terminate.
Passing an fd is still the wrong solution due to the problems with
save/restore/migrate.
It may be interesting to have something like -vnc
tcp://0.0.0.0:5900-6000 to let QEMU try to find an unused port in the
given range. Combined with -daemonize and having the monitor on a Unix
socket, you could:
1) create a vm with qemu -vnc tcp://0.0.0.0:5900-6000 -monitor
unix:/path/to/socket -daemonize
2) *wait* for qemu to finish running and daemonize properly
3) connect to /path/to/socket and issue a 'info vnc' command to discover
which port it's actually using
4) render that port with your HTML.
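A front-end driving those four steps might look roughly like the Python
sketch below. Note the tcp://host:low-high range syntax is only proposed in
this thread, and the exact output format of 'info vnc' is an assumption:

```python
import re
import socket
import subprocess

def start_vm(monitor_path="/path/to/socket"):
    # 1) Launch QEMU; with -daemonize the parent only returns once
    #    setup (including binding the VNC socket) is complete, which
    #    is also step 2) -- the "wait" happens implicitly here.
    subprocess.run(
        ["qemu", "-vnc", "tcp://0.0.0.0:5900-6000",
         "-monitor", "unix:" + monitor_path, "-daemonize"],
        check=True,
    )
    # 3) Connect to the monitor socket and ask which port was chosen.
    with socket.socket(socket.AF_UNIX) as mon:
        mon.connect(monitor_path)
        mon.sendall(b"info vnc\n")
        reply = mon.recv(4096).decode(errors="replace")
    # 4) The caller embeds the returned port in the generated HTML page.
    return parse_vnc_port(reply)

def parse_vnc_port(info_vnc_output):
    # Pull the first host:port out of the monitor's reply; the reply
    # format shown in the test below is assumed, not guaranteed.
    m = re.search(r":(\d+)\b", info_vnc_output)
    return int(m.group(1)) if m else None
```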
The nice thing about this is that it not only continues to work with
save/restore/migrate, it's smart enough to allocate a new port to ensure
that you always tend to succeed. Choosing :3 might be okay on machine
A, but there's no guarantee that it's okay on machine B so you have to
allow QEMU to try and find a new port after restore/migrate.
I prefer this syntax over Xen's -vncunused since you can restrict the
allocated ports to a particular region.
Regards,
Anthony Liguori
> Here is the problem: the HTML page needs to have the port number
> for the JAVA VNC viewer to connect to embedded in it.
>
> Of course, the CGI can pick a free port and ask qemu to start the VNC
> server on it, but that means the CGI needs to maintain a list of free/used
> port ranges in some shared data structure, track the qemu instance to know
> when it has terminated and the port is free again, and of course hope that
> no unrelated process will snatch a port in the port range - in short,
> generally duplicate the information the operating system already has on
> free/in-use ports.
>
> In our suggested solution, our CGI simply opens a listening socket on an
> ephemeral port, letting the OS do the allocation, hands the file
> descriptor
> to qemu to use and *terminates* (after sending the HTML page).
> No long running daemons.
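The CGI side described above can be sketched in a few lines of Python.
The -vnc-socket flag is the option proposed by this patch, not an existing
qemu option, so spawn_qemu is illustrative only:

```python
import os
import socket

def open_ephemeral_listener():
    # Bind to port 0 so the kernel picks any free port: the OS, not
    # the CGI, does the port accounting.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", 0))
    s.listen(1)
    return s, s.getsockname()[1]

def spawn_qemu(sock):
    # Hand the listening socket to qemu and return immediately;
    # -vnc-socket is the flag proposed in this thread (hypothetical).
    os.set_inheritable(sock.fileno(), True)
    pid = os.fork()
    if pid == 0:
        os.execvp("qemu", ["qemu", "-vnc-socket", str(sock.fileno())])
    return pid
```

After open_ephemeral_listener() returns, the CGI can embed the OS-assigned
port in the HTML page it sends back, call spawn_qemu(), and exit.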
>
> Having a daemon sit around just to shove the data from the Unix domain
> socket
> to the TCP socket and back, and needing to track it and all, really puts
> an ugly
> dent in the whole idea and, more importantly - I think what we are doing is
> a rather general concept, certainly not unique to us (just look at
> qemudo,
> though of course, they got it wrong... :-)
>
> Hope this explains things a little better.
>
>
>> Since VM's are meant to run for very long periods of time, this is
>> quite limiting. By utilizing a domain socket, you gain the ability
>> to record on disk the state of the daemon and then restart. The
>> layer of redirection also allows you to let your users change the VNC
>> server properties while the VM is running (so you change the
>> listening vnc display from localhost:3 to :22 without restarting the
>> VM).
>
> All the above are really nice to have, but not at the cost of
> extra management overhead, as explained above.
>
> Also, our VM lifetime is typically 15 minutes... :-)
>
>> Plus, live migration has no hope of working if you're passing file
>> descriptors on the command line as they're meaningless once you've
>> migrated.
>
> That, I have no answer for. What do you do with the Unix domain socket?
> Open it by path/filename on the new machine?
>
> Gilad
>
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-28 15:25 ` Anthony Liguori
@ 2007-06-30 21:15 ` Gilad Ben-Yossef
2007-07-01 15:33 ` Anthony Liguori
0 siblings, 1 reply; 14+ messages in thread
From: Gilad Ben-Yossef @ 2007-06-30 21:15 UTC (permalink / raw)
To: qemu-devel
Anthony Liguori wrote:
>> The machine running qemu has a web based interface to start VMs.
>> A user asks for a new VM to start by browsing to a URL. The CGI
>> implmenting that URL will start a new qemu instance, send to the user
>> web browser an HTML page with a JAVA VNC viewer embedded and terminate.
>
> Passing an fd is still the wrong solution due to the problems with
> save/restore/migrate.
There is no problem with save/restore or migration.
For save/restore, the fd is not saved in the saved state. You need to
specify a (new?) fd when you restore, if that is what you want to do,
that is.
It's just like how I can run a VM with SDL, save the state and restore it
with VNC - whether the restored session should use the same fd (for
whatever definition of "the same" you want) or not is left to the user.
We should be providing mechanism, not policy.
The same applies to migration - when you migrate your VM, it's your call
and responsibility to do the right thing. Just spawn a wrapper on the
target machine that opens a new fd and execs into qemu, and use *that* as
your migrate command-line argument instead of plain qemu, as an obvious
example.
As someone wrote in their blog (this is why I write my blog in a
language that's been dead for approximately 2000 years... :-):
"Instead of just saying 'migrate hostname' you now have to construct a
rather long command like 'migrate "ssh hostname qemu -loadvm -"'. A nice
side effect though is that you can completely change the command line
arguments in case you have NFS mounts at different locations."
I couldn't agree more. :-)
>
> It may be interesting to have something like -vnc
> tcp://0.0.0.0:5900-6000 to let QEMU try to find an unused port in the
> given range. Combined with -daemonize and having the monitor on a Unix
> socket, you could:
>
> 1) create a vm with qemu -vnc tcp://0.0.0.0:5900-6000 -monitor
> unix:/path/to/socket -daemonize
> 2) *wait* for qemu to finish running and daemonize properly
> 3) connect to /path/to/socket and issue a 'info vnc' command to discover
> which port it's actually using
> 4) render that port with your HTML.
What you're saying is the same as claiming that "ls" needs to implement
sorting, compression (in both gz and bzip algorithms), regexp searching
and transferring its output over an SSL connection to another machine.
We don't do that. We have pipes. VNC socket fd passing is just the old
pipe concept (adapted to the situation) - nothing more.
Or in other words: you can easily implement the VNC over Unix domain
socket feature (if it were not in qemu already) using the fd-passing
method, but you can't do it the other way around (without an extra
long-running process, that is).
>
> The nice thing about this is that it not only continues to work with
> save/restore/migrate, it's smart enough to allocate a new port to ensure
> that you always tend to succeed. Choosing :3 might be okay on machine
> A, but there's no guarantee that it's okay on machine B so you have to
> allow QEMU to try and find a new port after restore/migrate.
You're assuming we know what the correct behavior is. We really don't
know. Maybe we want to use the same fd number on the new machine, maybe
we don't. The user somehow provided a good fd on the original machine,
trust him to provide a good one on the new one (via a qemu wrapper for
migrate as explained above, for example).
Again, it's that mechanism vs. policy thing.
Gilad
PS. This is not some ego trip about whether you'll take the patch or
not, so just tell me if this back and forth gets annoying and I'll shut
up :-)
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-30 21:15 ` Gilad Ben-Yossef
@ 2007-07-01 15:33 ` Anthony Liguori
0 siblings, 0 replies; 14+ messages in thread
From: Anthony Liguori @ 2007-07-01 15:33 UTC (permalink / raw)
To: qemu-devel
Gilad Ben-Yossef wrote:
> Anthony Liguori wrote:
>
>>> The machine running qemu has a web-based interface to start VMs.
>>> A user asks for a new VM to start by browsing to a URL. The CGI
>>> implementing that URL will start a new qemu instance, send the user's
>>> web browser an HTML page with an embedded Java VNC viewer, and terminate.
>>
>> Passing an fd is still the wrong solution due to the problems with
>> save/restore/migrate.
>
> There is no problem with save/restore or migration.
>
> For save/restore, the fd is not saved in the saved state. You need to
> specify a (new?) fd when you restore, if that is what you want to do,
> that is.
>
> It's just like how I can run a VM with SDL, save the state and restore it
> with VNC - whether the restored session should use the same fd
> (for whatever definition of "the same" you want) or not is left to
> the user. We should be providing mechanism, not policy.
>
> The same applies to migration - when you migrate your VM, it's your
> call and responsibility to do the right thing. Just spawn a wrapper on
> the target machine that opens a new fd and execs into qemu, and use
> *that* as your migrate command-line argument instead of plain qemu, as
> an obvious example.
>
> As someone wrote in their blog (this is why I write my blog in a
> language that's been dead for approximately 2000 years... :-):
>
> "Instead of just saying 'migrate hostname' you now have to construct a
> rather long command like 'migrate "ssh hostname qemu -loadvm -"'. A
> nice side effect though is that you can completely change the command
> line arguments in case you have NFS mounts at different locations."
>
> I couldn't agree more. :-)
Except that the syntax has changed since then. It is now 'migrate
ssh://hostname', in which case the command line qemu was launched with
is used to relaunch on the new machine. This is where passing an fd
fails. The syntax I proposed addresses this issue.
>
>>
>> It may be interesting to have something like -vnc
>> tcp://0.0.0.0:5900-6000 to let QEMU try to find an unused port in the
>> given range. Combined with -daemonize and having the monitor on a
>> Unix socket, you could:
>>
>> 1) create a vm with qemu -vnc tcp://0.0.0.0:5900-6000 -monitor
>> unix:/path/to/socket -daemonize
>> 2) *wait* for qemu to finish running and daemonize properly
>> 3) connect to /path/to/socket and issue a 'info vnc' command to
>> discover which port it's actually using
>> 4) render that port with your HTML.
>
> What you're saying is the same as claiming that "ls" needs to
> implement sorting, compression (in both gz and bzip algorithms),
> regexp searching and transferring its output over an SSL connection
> to another machine.
>
> We don't do that. We have pipes. VNC socket fd passing is just the
> old pipe concept (adapted to the situation) - nothing more.
>
> Or in other words: you can easily implement the VNC over unix domain
> socket feature (if it was not in qemu already) using the fd passing
> method, but you can't do it the other way around (without an extra
> long running process, that is)
There is such a thing as too many options. Adding new options requires
weighing the utility that the option brings versus the added
complexity. In this case, the option would not be useful to the vast
majority of users, only those writing front-ends. I proposed an option
that would still be useful to those writing front-ends but would also be
useful to general users.
Regards,
Anthony Liguori
>>
>> The nice thing about this is that it not only continues to work with
>> save/restore/migrate, it's smart enough to allocate a new port to
>> ensure that you always tend to succeed. Choosing :3 might be okay on
>> machine A, but there's no guarantee that it's okay on machine B so
>> you have to allow QEMU to try and find a new port after restore/migrate.
>
> You're assuming we know what the correct behavior is. We really don't
> know. Maybe we want to use the same fd number on the new machine,
> maybe we don't. The user somehow provided a good fd on the original
> machine, trust him to provide a good one on the new one (via a qemu
> wrapper for migrate as explained above, for example).
>
> Again, it's that mechanism vs. policy thing.
>
> Gilad
>
> PS. This is not some ego trip about whether you'll take the patch or
> not, so just tell me if this back and forth gets annoying and I'll
> shut up :-)
>
>
>
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 13:05 [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port Shahar Livne
2007-06-24 14:07 ` Anthony Liguori
@ 2007-06-24 19:16 ` Paul Brook
2007-06-24 20:12 ` Anthony Liguori
1 sibling, 1 reply; 14+ messages in thread
From: Paul Brook @ 2007-06-24 19:16 UTC (permalink / raw)
To: qemu-devel; +Cc: Shahar Livne
> Currently there is the following vnc option:
> -vnc display [start a VNC server on display]
>
> Adding the following option:
> -vnc-socket sd [force VNC server on an already opened Socket Descriptor]
>
> overrides the new socket opening for the vnc on 5900+display port, and
> uses the given sd socket descriptor instead.
Better would be to make -vnc accept the standard serial device syntax.
Paul
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
2007-06-24 19:16 ` Paul Brook
@ 2007-06-24 20:12 ` Anthony Liguori
0 siblings, 0 replies; 14+ messages in thread
From: Anthony Liguori @ 2007-06-24 20:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Shahar Livne
Paul Brook wrote:
>> Currently there is the following vnc option:
>> -vnc display [start a VNC server on display]
>>
>> Adding the following option:
>> -vnc-socket sd [force VNC server on an already opened Socket Descriptor]
>>
>> overrides the new socket opening for the vnc on 5900+display port, and
>> uses the given sd socket descriptor instead.
>>
>
> Better would be to make -vnc accept the standard serial device syntax.
>
The current CharDriver infrastructure isn't good enough yet. It doesn't
really support the idea of connections that can be accepted/closed.
Regards,
Anthony Liguori
> Paul
>
>
>
>
* Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
@ 2007-06-25 14:58 n schembr
0 siblings, 0 replies; 14+ messages in thread
From: n schembr @ 2007-06-25 14:58 UTC (permalink / raw)
To: qemu-devel
Gilad Ben-Yossef wrote:
>If you think users other than us will use the patch (and we believe they will),
>we think it'll be useful for this to be included in qemu mainline.
I have more than 100 images on one box. It would make it much easier to
write my startup scripts. :)
Background
10 hosts with 10 guests. I have it set up so that any guest can run on any host.
I would love to use port xx200 for the host running on IP 10.200.1.200.
It is not intuitive to start xvncviewer 10.200.1.xxx:6100, but it works.
Nicholas A. Schembri
State College PA USA
end of thread, other threads:[~2007-07-01 15:33 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-06-24 13:05 [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port Shahar Livne
2007-06-24 14:07 ` Anthony Liguori
2007-06-24 21:45 ` Shahar Livne
2007-06-24 22:40 ` Anthony Liguori
2007-06-25 8:28 ` Gilad Ben-Yossef
2007-06-25 11:55 ` Anthony Liguori
2007-06-26 10:17 ` Gilad Ben-Yossef
2007-06-26 10:48 ` Jannes Faber
2007-06-28 15:25 ` Anthony Liguori
2007-06-30 21:15 ` Gilad Ben-Yossef
2007-07-01 15:33 ` Anthony Liguori
2007-06-24 19:16 ` Paul Brook
2007-06-24 20:12 ` Anthony Liguori
-- strict thread matches above, loose matches on Subject: below --
2007-06-25 14:58 n schembr