public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* sharing memory
@ 2007-09-03 12:04 Francesco Cipollone
  2007-09-04  8:50 ` Dor Laor
  2007-09-05 20:09 ` Avi Kivity
  0 siblings, 2 replies; 7+ messages in thread
From: Francesco Cipollone @ 2007-09-03 12:04 UTC (permalink / raw)
  To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f


[-- Attachment #1.1: Type: text/plain, Size: 824 bytes --]

Hi,
I'm writing my thesis on virtualization and its applications to security.
Now for the Xen hypervisor there are many applications already developed (like XenRim, XenFit, XenKimono) and very nice ideas...
I want to transfer these ideas to KVM... but it is a little harder than I thought.
So the first step was to write an application on the "host" machine that communicates in some way with another application in the "guest" machine (the VM).
I've tried to use the libvirt functions... but they're designed principally to work with Xen...
So I wonder: how can I read the memory of a VM?
Must I interface my application on the host machine directly with QEMU?
Is there a better solution supported by KVM to implement shared memory between VMs, or between VMs and the "host" system?

Thank you for your time
Francesco

[-- Attachment #1.2: Type: text/html, Size: 1628 bytes --]

[-- Attachment #2: Type: text/plain, Size: 315 bytes --]

-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >>  http://get.splunk.com/

[-- Attachment #3: Type: text/plain, Size: 186 bytes --]

_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
  2007-09-03 12:04 sharing memory Francesco Cipollone
@ 2007-09-04  8:50 ` Dor Laor
       [not found]   ` <64F9B87B6B770947A9F8391472E032160D706627-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
  2007-09-05 20:09 ` Avi Kivity
  1 sibling, 1 reply; 7+ messages in thread
From: Dor Laor @ 2007-09-04  8:50 UTC (permalink / raw)
  To: Francesco Cipollone, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f


[-- Attachment #1.1: Type: text/plain, Size: 1349 bytes --]

Hi,

I'm writing my thesis on virtualization and its applications to
security.

Now for the Xen hypervisor there are many applications already developed
(like XenRim, XenFit, XenKimono) and very nice ideas...

I want to transfer these ideas to KVM... but it is a little harder than
I thought.

So the first step was to write an application on the "host" machine that
communicates in some way with another application in the "guest" machine
(the VM).

I've tried to use the libvirt functions... but they're designed
principally to work with Xen...

So I wonder: how can I read the memory of a VM?

Must I interface my application on the host machine directly with
QEMU?

Is there a better solution supported by KVM to implement shared memory
between VMs, or between VMs and the "host" system?

 

You have the following options:

1. A plain TCP/IP connection from guest to host (you probably already
considered that).

2. We have a virtual device in qemu called vmchannel; it is visible as a
PCI device in the guest.

    Currently they communicate using port I/O; this will soon change to
the virtio interface (shared memory).

    It is good for guest-host communication.

3. A 9p interface is also being developed by Eric Van Hensbergen; it is
a work in progress.

 

You're free to add your own.
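Option 1 above is just ordinary sockets. As a hedged sketch (the port
number, the function name `tcp_channel_demo`, and the loopback setup are
all inventions for this example; inside a real VM the guest would connect
to the host's address on the virtual network), both ends of such a
channel could look like:

```c
/*
 * Sketch of option 1: a plain TCP channel between a host-side listener
 * and a guest-side client.  Inside a VM the guest would connect to the
 * host's address on the virtual network; here both ends run on loopback
 * purely to show the pattern.  CHAN_PORT and tcp_channel_demo() are
 * names invented for this example.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHAN_PORT 4555		/* arbitrary port for the demo */

/* Fork a "guest" that connects and sends a greeting while the "host"
 * accepts and copies the message into buf.  Returns 0 on success. */
int tcp_channel_demo(char *buf, size_t buflen)
{
	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(CHAN_PORT);
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	int srv = socket(AF_INET, SOCK_STREAM, 0);
	int one = 1;
	setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(srv, 1) < 0)
		return -1;

	if (fork() == 0) {	/* "guest" side: connect and send */
		int cli = socket(AF_INET, SOCK_STREAM, 0);
		if (cli >= 0 &&
		    connect(cli, (struct sockaddr *)&addr, sizeof(addr)) == 0)
			write(cli, "hello host", sizeof("hello host"));
		_exit(0);
	}

	/* "host" side: accept the connection and read the message */
	int conn = accept(srv, NULL, NULL);
	if (conn < 0)
		return -1;
	ssize_t n = read(conn, buf, buflen - 1);
	buf[n > 0 ? n : 0] = '\0';
	close(conn);
	close(srv);
	wait(NULL);
	return 0;
}
```

Option 2 (vmchannel) replaces the TCP transport with a PCI device and
port I/O, but the host/guest division of labor stays the same.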

 

Thank you for your time

Francesco


[-- Attachment #1.2: Type: text/html, Size: 5529 bytes --]


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
       [not found]   ` <64F9B87B6B770947A9F8391472E032160D706627-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
@ 2007-09-04 20:03     ` Cam Macdonell
       [not found]       ` <46DDB9FC.4030408-edFDblaTWIyXbbII50Afww@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Cam Macdonell @ 2007-09-04 20:03 UTC (permalink / raw)
  To: Dor Laor; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Francesco Cipollone

Dor Laor wrote:

> You have the following options:
> 
> 1. A plain TCP/IP connection from guest to host (you probably already considered that).
> 2. We have a virtual device in qemu called vmchannel; it is visible as 
> a PCI device in the guest.
> 
>     Currently they communicate using port I/O; this will soon change to 
> the virtio interface (shared memory).
> 
>     It is good for guest-host communication.
> 

Hi Dor,

How is VMChannel set up to be used?  Does it require using the KVM kernel 
in the guest, or is there a less intrusive way to use it?

Thanks,
Cam


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
       [not found]       ` <46DDB9FC.4030408-edFDblaTWIyXbbII50Afww@public.gmane.org>
@ 2007-09-05  7:33         ` Dor Laor
       [not found]           ` <64F9B87B6B770947A9F8391472E032160D7E927C-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Dor Laor @ 2007-09-05  7:33 UTC (permalink / raw)
  To: Cam Macdonell
  Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Francesco Cipollone

[-- Attachment #1: Type: text/plain, Size: 987 bytes --]

>> You have the following options:
>>
>> 1. A plain TCP/IP connection from guest to host (you probably already
>> considered that).
>> 2. We have a virtual device in qemu called vmchannel; it is visible as
>> a PCI device in the guest.
>>
>>     Currently they communicate using port I/O; this will soon change to
>> the virtio interface (shared memory).
>>
>>     It is good for guest-host communication.
>>
>
>Hi Dor,
>
>How is VMChannel set up to be used?  Does it require using the KVM
>kernel in the guest, or is there a less intrusive way to use it?
>

You don't need a special kernel in the guest. You need to use the
vmchannel in qemu, by adding the -vmchannel parameter. It takes the
format di:[PCI_VENDOR_ID],QEMU_DEVICE, where QEMU_DEVICE is the standard
qemu device format, e.g. file/socket/...

In the guest you need the matching PCI driver. Currently you can use the
attached one; I'm not sure if it is up to date. Soon we'll post a device
that uses virtio for the vmchannel.
-Dor

[-- Attachment #2: hypercall.c --]
[-- Type: application/octet-stream, Size: 13654 bytes --]

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/compiler.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/completion.h>
#include <linux/interrupt.h>
#include <asm/io.h>
#include <asm/uaccess.h>
#include <asm/irq.h>

#define HYPERCALL_DRIVER_NAME "Qumranet_hypercall_driver"
#define HYPERCALL_DRIVER_VERSION "1"
#define PCI_VENDOR_ID_HYPERCALL	0x5002
#define PCI_DEVICE_ID_HYPERCALL 0x2258

MODULE_AUTHOR ("Dor Laor <dor.laor@qumranet.com>");
MODULE_DESCRIPTION (HYPERCALL_DRIVER_NAME);
MODULE_LICENSE("GPL");
MODULE_VERSION(HYPERCALL_DRIVER_VERSION);

static int debug = 0;
module_param(debug, int, 0);
MODULE_PARM_DESC (debug, "toggle debug flag");

#define HYPERCALL_DEBUG 1
#if HYPERCALL_DEBUG
#  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __FUNCTION__ , ## args)
#  define assert(expr) \
        if(unlikely(!(expr))) {				        \
        printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n",	\
        #expr,__FILE__,__FUNCTION__,__LINE__);		        \
        }
#else
#  define DPRINTK(fmt, args...)
#  define assert(expr) do {} while (0)
#endif

static struct pci_device_id hypercall_pci_tbl[] = {
	{PCI_VENDOR_ID_HYPERCALL, PCI_DEVICE_ID_HYPERCALL, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
	{0,}
};
MODULE_DEVICE_TABLE (pci, hypercall_pci_tbl);



/****** Hypercall device definitions ***************/
#include <qemu/hw/hypercall.h>

/* read PIO/MMIO register */
#define HIO_READ8(reg, ioaddr)		ioread8(ioaddr + (reg))
#define HIO_READ16(reg, ioaddr)		ioread16(ioaddr + (reg))
#define HIO_READ32(reg, ioaddr)		ioread32(ioaddr + (reg))

/* write PIO/MMIO register */
#define HIO_WRITE8(reg, val8, ioaddr)	iowrite8((val8), ioaddr + (reg))
#define HIO_WRITE16(reg, val16, ioaddr)	iowrite16((val16), ioaddr + (reg))
#define HIO_WRITE32(reg, val32, ioaddr)	iowrite32((val32), ioaddr + (reg))

/* cyclic recv data buffer */
#define HYPERCALL_DATA_BUFFER_SIZE	PAGE_SIZE
unsigned long hypercall_buffer = 0;
volatile unsigned long hypercall_pending_data;
DECLARE_WAIT_QUEUE_HEAD(hypercall_queue);
void hypercall_do_tasklet(unsigned long);
DECLARE_TASKLET (hypercall_tasklet, hypercall_do_tasklet, 0);

struct hypercall_dev {
	struct pci_dev  *pci_dev;
	struct kobject	kobject;
	u32 		state;
	spinlock_t	lock;
	u8		name[128];
	u16		irq;
	u32		regs_len;
	void __iomem 	*io_addr;
	unsigned long	base_addr;	/* device I/O address	*/
	unsigned long 	cmd;
};


static int hypercall_close(struct hypercall_dev* dev);
static int hypercall_open(struct hypercall_dev *dev);
static void hypercall_cleanup_dev(struct hypercall_dev *dev);
static irqreturn_t hypercall_interrupt(int irq, void *dev_instance,
				       struct pt_regs *regs);
static int hypercall_tx(struct hypercall_dev *dev, unsigned char *buf,
			size_t len);

static void __exit hypercall_sysfs_remove(struct hypercall_dev *dev);
static int hypercall_sysfs_add(struct hypercall_dev *dev);


static int __devinit hypercall_init_board(struct pci_dev *pdev,
					  struct hypercall_dev **dev_out)
{
	unsigned long ioaddr;
	struct hypercall_dev *dev;
	int rc;
	u32 disable_dev_on_err = 0;
	unsigned long pio_start, pio_end, pio_flags, pio_len;
	unsigned long mmio_start, mmio_end, mmio_flags, mmio_len;

	assert(pdev != NULL);

	*dev_out = NULL;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (dev == NULL) {
		printk (KERN_ERR "%s: Unable to alloc hypercall device\n", pci_name(pdev));
		return -ENOMEM;
	}
	dev->pci_dev = pdev;
	rc = pci_enable_device(pdev);
	if (rc)
		goto err_out;
	disable_dev_on_err = 1;

	pio_start = pci_resource_start (pdev, 0);
	pio_end = pci_resource_end (pdev, 0);
	pio_flags = pci_resource_flags (pdev, 0);
	pio_len = pci_resource_len (pdev, 0);

	mmio_start = pci_resource_start (pdev, 1);
	mmio_end = pci_resource_end (pdev, 1);
	mmio_flags = pci_resource_flags (pdev, 1);
	mmio_len = pci_resource_len (pdev, 1);

	DPRINTK("PIO region size == 0x%02lX\n", pio_len);
	DPRINTK("MMIO region size == 0x%02lX\n", mmio_len);

	rc = pci_request_regions (pdev, "hypercall");
	if (rc)
		goto err_out;

#define USE_IO_OPS 1
#ifdef USE_IO_OPS
	ioaddr = (unsigned long)pci_iomap(pdev, 0, 0);
	//ioaddr = ioport_map(pio_start, pio_len);
	if (!ioaddr) {
		printk(KERN_ERR "%s: cannot map PIO, aborting\n", pci_name(pdev));
		rc = -EIO;
		goto err_out;
	}
	dev->base_addr = (unsigned long)pio_start;
	dev->io_addr = (void*)ioaddr;
	dev->regs_len = pio_len;
#else
	ioaddr = (unsigned long)pci_iomap(pdev, 1, 0);
	if (!ioaddr) {
		printk(KERN_ERR "%s: cannot remap MMIO, aborting\n", pci_name(pdev));
		rc = -EIO;
		goto err_out;
	}
	dev->base_addr = ioaddr;
	dev->io_addr = (void*)ioaddr;
	dev->regs_len = mmio_len;
#endif /* USE_IO_OPS */

	*dev_out = dev;
	return 0;

err_out:
	hypercall_cleanup_dev(dev);
	if (disable_dev_on_err)
		pci_disable_device(pdev);
	return rc;
}

static int __devinit hypercall_init_one(struct pci_dev *pdev,
				        const struct pci_device_id *ent)
{
	struct hypercall_dev *dev;
	u8 pci_rev;

	assert(pdev != NULL);
	assert(ent != NULL);

	pci_read_config_byte(pdev, PCI_REVISION_ID, &pci_rev);

	if (pdev->vendor == PCI_VENDOR_ID_HYPERCALL &&
	    pdev->device == PCI_DEVICE_ID_HYPERCALL) {
		printk(KERN_INFO "pci dev %s (id %04x:%04x rev %02x) is a guest hypercall device\n",
		       pci_name(pdev), pdev->vendor, pdev->device, pci_rev);
	}

	if (hypercall_init_board(pdev, &dev) != 0)
		return -1;
	
	assert(dev != NULL);
                    
	dev->irq = pdev->irq;

	spin_lock_init(&dev->lock);
        pci_set_drvdata(pdev, dev);

	printk (KERN_INFO "name=%s: base_addr=0x%lx, io_addr=0x%lx, IRQ=%d\n",
		dev->name, dev->base_addr, (unsigned long)dev->io_addr, dev->irq);
	hypercall_open(dev);

	if (hypercall_sysfs_add(dev) != 0)
		return -1;

	return 0;
}

static void __devexit hypercall_remove_one(struct pci_dev *pdev)
{
	struct hypercall_dev *dev = pci_get_drvdata(pdev);

	assert(dev != NULL);

	hypercall_close(dev);
	hypercall_sysfs_remove(dev);
	hypercall_cleanup_dev(dev);
	pci_disable_device(pdev);
}

void hypercall_do_tasklet(unsigned long p)
{
	int len;
	struct hypercall_dev *dev = (struct hypercall_dev*)p;

	do {
		DPRINTK("In main loop\n");

		wait_event_interruptible(hypercall_queue, hypercall_pending_data);
		len = *(int *)hypercall_buffer;
		DPRINTK("Got buffer %s\n", (u8*)hypercall_buffer + sizeof(int));
		hypercall_pending_data = 0;

		hypercall_tx(dev, (unsigned char *)"hello host", sizeof("hello host"));
	} while (1);
}

static int hypercall_tx(struct hypercall_dev *dev, unsigned char *buf, size_t len)
{
	void __iomem *ioaddr = (void __iomem*)dev->io_addr;
	int i;

	if (len > HP_MEM_SIZE)
		return -EINVAL;

	spin_lock(&dev->lock);
	HIO_WRITE8(HP_TXSIZE, len, ioaddr);
	for (i=0; i< len; i++)
		HIO_WRITE8(HP_TXBUFF, buf[i], ioaddr);
	spin_unlock(&dev->lock);

	return 0;
}

/* 
 * The interrupt handler does all of the rx  work and cleans up
 * after the tx
 */
static irqreturn_t hypercall_interrupt(int irq, void *dev_instance,
				       struct pt_regs *regs)
{
	struct hypercall_dev *dev = (struct hypercall_dev *)dev_instance;
	void __iomem *ioaddr = (void __iomem*)dev->io_addr;
	u32 status;
	int irq_handled = IRQ_NONE;
	int rx_buf_size = 0;
	int i;
	u8 buffer[HP_MEM_SIZE];
	u8 *pbuf;

	DPRINTK("base addr is 0x%lx, io_addr=0x%lx\n", dev->base_addr, (long)dev->io_addr);
	
	spin_lock(&dev->lock);
	status = HIO_READ8(HSR_REGISTER, ioaddr);
	DPRINTK("irq status is 0x%x\n", status);

	/* shared irq? */
	if (unlikely((status & HSR_VDR) == 0)) {
		DPRINTK("not handling irq, not ours\n");
		goto out;
	}
	
	/* Disable device interrupts */
	HIO_WRITE8(HCR_REGISTER, HCR_DI, ioaddr);
	DPRINTK("disable device interrupts\n");

	rx_buf_size = HIO_READ8(HP_RXSIZE, ioaddr);
	DPRINTK("Rx buffer size is %d\n", rx_buf_size);

	if (rx_buf_size > HP_MEM_SIZE)
		rx_buf_size = HP_MEM_SIZE;

	for (i=0, pbuf=buffer; i<rx_buf_size; i++, pbuf++) {
		*pbuf = HIO_READ8(HP_RXBUFF, ioaddr + i);
		DPRINTK("Read 0x%x as byte %d\n", *pbuf, i);
	}
	*pbuf = '\0';
	DPRINTK("Read buffer %s", (char*)buffer);

	HIO_WRITE8(HCR_REGISTER, HCR_EI, ioaddr);
	DPRINTK("Enable interrupt\n");
	irq_handled = IRQ_HANDLED;

	*(int *)hypercall_buffer = rx_buf_size;
	memcpy((void *)(hypercall_buffer + sizeof(rx_buf_size)), buffer, rx_buf_size);
	hypercall_pending_data = 1;
 out:
	spin_unlock(&dev->lock);

	if (rx_buf_size) {
		wake_up_interruptible(&hypercall_queue);
		//tasklet_schedule(&hypercall_tasklet);
	}
	
	return irq_handled;
}


static int hypercall_open(struct hypercall_dev *dev)
{
	int rc;

	rc = request_irq(dev->irq, &hypercall_interrupt,
			 SA_SHIRQ, dev->name, dev);
	if (rc) {
		printk(KERN_ERR "%s failed to request an irq\n", __FUNCTION__);
		return rc;
	}

	hypercall_buffer = __get_free_page(GFP_KERNEL);

	//hypercall_task.routine = (void (*)(void *))hypercall_do_tasklet;
	//hypercall_task.data = dev;
	//hypercall_thread_start(dev);

	return 0;
}

static int hypercall_close(struct hypercall_dev* dev)
{
	//hypercall_thread_stop(dev);
	synchronize_irq(dev->irq);
	free_irq(dev->irq, dev);
	if (hypercall_buffer)
		free_page(hypercall_buffer);

	return 0;
}

#ifdef CONFIG_PM

static int hypercall_suspend(struct pci_dev *pdev, pm_message_t state)
{
	pci_save_state(pdev);
	pci_set_power_state(pdev, PCI_D3hot);
	DPRINTK("Power mgmt suspend, set power state to PCI_D3hot\n");

	return 0;
}

static int hypercall_resume(struct pci_dev *pdev)
{
	pci_restore_state(pdev);
	pci_set_power_state(pdev, PCI_D0);
	DPRINTK("Power mgmt resume, set power state to PCI_D0\n");

	return 0;
}

#endif /* CONFIG_PM */

static void hypercall_cleanup_dev(struct hypercall_dev *dev)
{
	DPRINTK("cleaning up\n");
        pci_release_regions(dev->pci_dev);
	pci_iounmap(dev->pci_dev, (void*)dev->io_addr);
	pci_set_drvdata (dev->pci_dev, NULL);
	kfree(dev);
}

static struct pci_driver hypercall_pci_driver = {
	.name		= HYPERCALL_DRIVER_NAME,
	.id_table	= hypercall_pci_tbl,
	.probe		= hypercall_init_one,
	.remove		= __devexit_p(hypercall_remove_one),
#ifdef CONFIG_PM
	.suspend	= hypercall_suspend,
	.resume		= hypercall_resume,
#endif /* CONFIG_PM */
};

static int __init hypercall_init_module(void)
{
	printk (KERN_INFO HYPERCALL_DRIVER_NAME "\n");
	return pci_module_init(&hypercall_pci_driver);
}

static void __exit hypercall_cleanup_module(void)
{
	pci_unregister_driver(&hypercall_pci_driver);
}

/*
 * sysfs support
 */

struct hypercall_attribute {
	struct attribute attr;
	ssize_t (*show)(struct hypercall_dev*, char *buf);
	ssize_t (*store)(struct hypercall_dev*, unsigned long val);
};

static ssize_t hypercall_attribute_show(struct kobject *kobj,
		struct attribute *attr, char *buf)
{
	struct hypercall_attribute *hypercall_attr;
	struct hypercall_dev *hdev;

	hypercall_attr = container_of(attr, struct hypercall_attribute, attr);
	hdev = container_of(kobj, struct hypercall_dev, kobject);

	if (!hypercall_attr->show)
		return -EIO;

	return hypercall_attr->show(hdev, buf);
}

static ssize_t hypercall_attribute_store(struct kobject *kobj,
		struct attribute *attr, const char *buf, size_t count)
{
	struct hypercall_attribute *hypercall_attr;
	struct hypercall_dev *hdev;
	char *endp;
	unsigned long val;
	int rc;

	val = simple_strtoul(buf, &endp, 0);

	hypercall_attr = container_of(attr, struct hypercall_attribute, attr);
	hdev = container_of(kobj, struct hypercall_dev, kobject);

	if (!hypercall_attr->store)
		return -EIO;

	rc = hypercall_attr->store(hdev, val);
	if (!rc)
		rc = count;
	return rc;
}

#define MAKE_HYPERCALL_R_ATTR(_name)					\
static ssize_t _name##_show(struct hypercall_dev *hdev, char *buf)	\
{									\
	return sprintf(buf, "%lu\n", (unsigned long)hdev->_name);	\
}									\
struct hypercall_attribute hypercall_attr_##_name = __ATTR_RO(_name)

#define MAKE_HYPERCALL_WR_ATTR(_name)					\
static int _name##_store(struct hypercall_dev *hdev, unsigned long val)	\
{									\
	hdev->_name = (typeof(hdev->_name))val;				\
	return 0;							\
}									\
static ssize_t _name##_show(struct hypercall_dev *hdev, char *buf)	\
{									\
	return sprintf(buf, "%lu\n", (unsigned long)hdev->_name);	\
}									\
struct hypercall_attribute hypercall_attr_##_name = 			\
	__ATTR(_name,S_IRUGO|S_IWUGO,_name##_show,_name##_store)

MAKE_HYPERCALL_R_ATTR(base_addr);
MAKE_HYPERCALL_R_ATTR(irq);
MAKE_HYPERCALL_WR_ATTR(cmd);

#define GET_HYPERCALL_ATTR(_name)	(&hypercall_attr_##_name.attr)

static struct attribute *hypercall_default_attrs[] = {
	GET_HYPERCALL_ATTR(base_addr),
	GET_HYPERCALL_ATTR(irq),
	GET_HYPERCALL_ATTR(cmd),
	NULL
};

static struct sysfs_ops hypercall_sysfs_ops = {
	.show = hypercall_attribute_show,
	.store = hypercall_attribute_store,
};

static void hypercall_sysfs_release(struct kobject *kobj)
{
	DPRINTK(" called for obj name %s\n", kobj->name);
}

static struct kobj_type hypercall_ktype = {
	.release	= hypercall_sysfs_release,
	.sysfs_ops	= &hypercall_sysfs_ops,
	.default_attrs	= hypercall_default_attrs
};


static int hypercall_sysfs_add(struct hypercall_dev *dev)
{
	int rc;

	kobject_init(&dev->kobject);
	dev->kobject.ktype = &hypercall_ktype;
	rc = kobject_set_name(&dev->kobject, "%s", HYPERCALL_DRIVER_NAME);
	if (rc != 0) {
		printk("%s: kobject_set_name failed, err=%d\n", __FUNCTION__, rc);
		return rc;
	}

        rc = kobject_add(&dev->kobject);
	if (rc != 0) {
		printk("%s: kobject_add failed, err=%d\n", __FUNCTION__, rc);
		return rc;
	}

        rc = sysfs_create_link(&dev->pci_dev->dev.kobj, &dev->kobject,
			       HYPERCALL_DRIVER_NAME);
	if (rc != 0) {
		printk("%s: sysfs_create_link failed, err=%d\n", __FUNCTION__, rc);
		kobject_del(&dev->kobject);
	}
        
	return rc;
}

static void hypercall_sysfs_remove(struct hypercall_dev *dev)
{
	sysfs_remove_link(&dev->pci_dev->dev.kobj, HYPERCALL_DRIVER_NAME);
	kobject_del(&dev->kobject);
}

module_init(hypercall_init_module);
module_exit(hypercall_cleanup_module);


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
  2007-09-03 12:04 sharing memory Francesco Cipollone
  2007-09-04  8:50 ` Dor Laor
@ 2007-09-05 20:09 ` Avi Kivity
  1 sibling, 0 replies; 7+ messages in thread
From: Avi Kivity @ 2007-09-05 20:09 UTC (permalink / raw)
  To: Francesco Cipollone; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Francesco Cipollone wrote:
> Hi,
> I'm writing my thesis on virtualization and its applications to security.
> Now for the Xen hypervisor there are many applications already 
> developed (like XenRim, XenFit, XenKimono) and very nice ideas...
> I want to transfer these ideas to KVM... but it is a little harder 
> than I thought.
> So the first step was to write an application on the "host" machine that 
> communicates in some way with another application in the "guest" machine 
> (the VM).
> I've tried to use the libvirt functions... but they're designed 
> principally to work with Xen...
> So I wonder: how can I read the memory of a VM?
> Must I interface my application on the host machine directly with 
> QEMU?
> Is there a better solution supported by KVM to implement shared memory 
> between VMs, or between VMs and the "host" system?

kvm will soon support an interface to mmap() a file to a VM.  You could 
then mmap one file to several virtual machines and thus achieve shared 
memory.  It would also work with System V shared memory.
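The mmap() mechanism described above boils down to two contexts sharing
one MAP_SHARED mapping. A minimal sketch, with two ordinary processes
standing in for the VMs (the function name `shared_page_demo` and the
0x2258 marker value are inventions for this example):

```c
/*
 * Sketch of the mechanism described above: a MAP_SHARED mapping visible
 * to two contexts.  kvm would map a file into each VM's physical memory;
 * here two ordinary processes stand in for the VMs.  shared_page_demo()
 * and the 0x2258 marker are inventions for this example.
 */
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Create a shared page, let a child ("VM 1") write a marker into it,
 * and return what the parent ("VM 2") reads back; -1 on error. */
long shared_page_demo(void)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);

	/* MAP_ANONYMOUS|MAP_SHARED gives one page both processes see;
	 * with a real file you would mmap the same fd in each VM. */
	long *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return -1;

	pid_t pid = fork();
	if (pid == 0) {		/* "VM 1": store through the shared page */
		page[0] = 0x2258;
		_exit(0);
	}
	waitpid(pid, NULL, 0);	/* "VM 2": read after the writer is done */

	long seen = page[0];
	munmap(page, len);
	return seen;
}
```

With a file-backed mapping instead of MAP_ANONYMOUS, any number of
processes (or VMs) mapping the same file see the same bytes.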

-- 
Any sufficiently difficult bug is indistinguishable from a feature.



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
       [not found]           ` <64F9B87B6B770947A9F8391472E032160D7E927C-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
@ 2007-09-07 18:11             ` Cam Macdonell
       [not found]               ` <46E19448.9020803-edFDblaTWIyXbbII50Afww@public.gmane.org>
  0 siblings, 1 reply; 7+ messages in thread
From: Cam Macdonell @ 2007-09-07 18:11 UTC (permalink / raw)
  To: Dor Laor; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Dor Laor wrote:
> 
> In the guest you need the matching PCI driver. Currently you can use
> the attached one; I'm not sure if it is up to date. Soon we'll post a
> device that uses virtio for the vmchannel.
> -Dor

Would the existing hypercall.c that is in kvm-userspace/drivers/ work as 
well?  What is the difference?

Thanks,
Cam


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: sharing memory
       [not found]               ` <46E19448.9020803-edFDblaTWIyXbbII50Afww@public.gmane.org>
@ 2007-09-08 22:09                 ` Dor Laor
  0 siblings, 0 replies; 7+ messages in thread
From: Dor Laor @ 2007-09-08 22:09 UTC (permalink / raw)
  To: Cam Macdonell; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Cam Macdonell wrote:
> Dor Laor wrote:
>>
>> In the guest you need the matching PCI driver. Currently you can use
>> the attached one; I'm not sure if it is up to date. Soon we'll post a
>> device that uses virtio for the vmchannel.
>> -Dor
>
> Would the existing hypercall.c that is in kvm-userspace/drivers/ work 
> as well.  What is the difference?
>
> Thanks,
> Cam
>

The existing hypercall driver should do the job too. Actually, I forgot 
about it when I posted the new one.
The one I sent might be half-baked; you might be better off with 
drivers/hypercall.c.
Anyway, we will soon rebase it over virtio.
-Dor


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2007-09-08 22:09 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-09-03 12:04 sharing memory Francesco Cipollone
2007-09-04  8:50 ` Dor Laor
     [not found]   ` <64F9B87B6B770947A9F8391472E032160D706627-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
2007-09-04 20:03     ` Cam Macdonell
     [not found]       ` <46DDB9FC.4030408-edFDblaTWIyXbbII50Afww@public.gmane.org>
2007-09-05  7:33         ` Dor Laor
     [not found]           ` <64F9B87B6B770947A9F8391472E032160D7E927C-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
2007-09-07 18:11             ` Cam Macdonell
     [not found]               ` <46E19448.9020803-edFDblaTWIyXbbII50Afww@public.gmane.org>
2007-09-08 22:09                 ` Dor Laor
2007-09-05 20:09 ` Avi Kivity

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox