* [PATCH 1/6] kvm tools: Prevent duplicate definitions of ALIGN
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
bios.h defines its own ALIGN macro, which clashes with the definition added to <linux/kernel.h> later in this series. Guard it with #ifndef so that both headers can be included together.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/include/kvm/bios.h | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/tools/kvm/include/kvm/bios.h b/tools/kvm/include/kvm/bios.h
index dd70c44..914720b 100644
--- a/tools/kvm/include/kvm/bios.h
+++ b/tools/kvm/include/kvm/bios.h
@@ -51,8 +51,10 @@
#define MB_BIOS_SS 0xfff7
#define MB_BIOS_SP 0x40
+#ifndef ALIGN
#define ALIGN(x, a) \
(((x) + ((a) - 1)) & ~((a) - 1))
+#endif
/*
* note we use 16 bytes alignment which makes segment based
--
1.7.5.rc3
* [PATCH 2/6] kvm tools: Add kernel headers required for using list
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
Add kernel headers so that <linux/list.h> (and others) can be included directly.
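As a rough illustration (a hypothetical snippet, not part of this patch; struct request and the function names are made up), these compatibility headers are what let tool-side code use the kernel's list implementation directly:

	#include <linux/list.h>

	struct request {
		int			id;
		struct list_head	link;	/* struct list_head from linux/types.h */
	};

	static LIST_HEAD(pending);

	static struct request *pop_request(void)
	{
		struct request *req;

		if (list_empty(&pending))
			return NULL;

		/* list_first_entry() expands to container_of(),
		 * which linux/kernel.h now provides */
		req = list_first_entry(&pending, struct request, link);
		list_del(&req->link);

		return req;
	}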
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/include/linux/kernel.h | 26 ++++++++++++++++++++++++++
tools/kvm/include/linux/prefetch.h | 6 ++++++
tools/kvm/include/linux/types.h | 12 ++++++++++++
3 files changed, 44 insertions(+), 0 deletions(-)
create mode 100644 tools/kvm/include/linux/kernel.h
create mode 100644 tools/kvm/include/linux/prefetch.h
diff --git a/tools/kvm/include/linux/kernel.h b/tools/kvm/include/linux/kernel.h
new file mode 100644
index 0000000..8d83037
--- /dev/null
+++ b/tools/kvm/include/linux/kernel.h
@@ -0,0 +1,26 @@
+#ifndef KVM__LINUX_KERNEL_H_
+#define KVM__LINUX_KERNEL_H_
+
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+
+#define ALIGN(x,a) __ALIGN_MASK(x,(typeof(x))(a)-1)
+#define __ALIGN_MASK(x,mask) (((x)+(mask))&~(mask))
+
+#ifndef offsetof
+#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
+#endif
+
+#ifndef container_of
+/**
+ * container_of - cast a member of a structure out to the containing structure
+ * @ptr: the pointer to the member.
+ * @type: the type of the container struct this is embedded in.
+ * @member: the name of the member within the struct.
+ *
+ */
+#define container_of(ptr, type, member) ({ \
+ const typeof(((type *)0)->member) * __mptr = (ptr); \
+ (type *)((char *)__mptr - offsetof(type, member)); })
+#endif
+
+#endif
diff --git a/tools/kvm/include/linux/prefetch.h b/tools/kvm/include/linux/prefetch.h
new file mode 100644
index 0000000..62f6788
--- /dev/null
+++ b/tools/kvm/include/linux/prefetch.h
@@ -0,0 +1,6 @@
+#ifndef KVM__LINUX_PREFETCH_H
+#define KVM__LINUX_PREFETCH_H
+
+static inline void prefetch(void *a __attribute__((unused))) { }
+
+#endif
diff --git a/tools/kvm/include/linux/types.h b/tools/kvm/include/linux/types.h
index efd8519..c7c444e 100644
--- a/tools/kvm/include/linux/types.h
+++ b/tools/kvm/include/linux/types.h
@@ -46,4 +46,16 @@ typedef __u32 __bitwise __be32;
typedef __u64 __bitwise __le64;
typedef __u64 __bitwise __be64;
+struct list_head {
+ struct list_head *next, *prev;
+};
+
+struct hlist_head {
+ struct hlist_node *first;
+};
+
+struct hlist_node {
+ struct hlist_node *next, **pprev;
+};
+
#endif /* LINUX_TYPES_H */
--
1.7.5.rc3
* [PATCH 3/6] kvm tools: Introduce generic IO threadpool
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
This patch adds a generic thread pool to create a common interface for working with threads within the kvm tool.
The main idea is to use this threadpool for all I/O threads instead of having every I/O module write its own thread code.
The process of working with the thread pool is meant to be very simple.
During initialization, each module that is interested in working with the threadpool calls thread_pool__add_jobtype with a callback function and a void * parameter. For example, the virtio modules register every virt_queue as a new job type.
During operation, when there is work to do for a specific job, the module signals it to the queue and expects the callback to be invoked with the proper parameters. It is assured that the callback will be called once for every signal action, and that only one instance of each callback will run at a time (i.e. callback functions themselves don't need to handle threading).
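For example (an illustrative sketch only; the virtio patches later in this series are the real consumers, and the my_dev_* names are made up), a device module would use the API roughly like this:

	#include "kvm/threadpool.h"

	static void *job;

	/* Runs on a pool thread; invocations of the same job are
	 * serialized, so no locking against ourselves is needed. */
	static void my_dev_io(struct kvm *kvm, void *data)
	{
		struct virt_queue *vq = data;

		while (virt_queue__available(vq)) {
			/* ... pop and handle one request ... */
		}
	}

	static void my_dev_init(struct kvm *kvm, struct virt_queue *vq)
	{
		/* register the callback once, at init time */
		job = thread_pool__add_jobtype(kvm, my_dev_io, vq);
	}

	static void my_dev_notify(void)
	{
		/* one callback run is guaranteed per signal */
		thread_pool__signal_work(job);
	}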
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/Makefile | 1 +
tools/kvm/include/kvm/threadpool.h | 16 ++++
tools/kvm/kvm-run.c | 5 +
tools/kvm/threadpool.c | 171 ++++++++++++++++++++++++++++++++++++
4 files changed, 193 insertions(+), 0 deletions(-)
create mode 100644 tools/kvm/include/kvm/threadpool.h
create mode 100644 tools/kvm/threadpool.c
diff --git a/tools/kvm/Makefile b/tools/kvm/Makefile
index 1b0c76e..fbce14d 100644
--- a/tools/kvm/Makefile
+++ b/tools/kvm/Makefile
@@ -36,6 +36,7 @@ OBJS += kvm-cmd.o
OBJS += kvm-run.o
OBJS += qcow.o
OBJS += mptable.o
+OBJS += threadpool.o
DEPS := $(patsubst %.o,%.d,$(OBJS))
diff --git a/tools/kvm/include/kvm/threadpool.h b/tools/kvm/include/kvm/threadpool.h
new file mode 100644
index 0000000..25b5eb8
--- /dev/null
+++ b/tools/kvm/include/kvm/threadpool.h
@@ -0,0 +1,16 @@
+#ifndef KVM__THREADPOOL_H
+#define KVM__THREADPOOL_H
+
+#include <stdint.h>
+
+struct kvm;
+
+typedef void (*kvm_thread_callback_fn_t)(struct kvm *kvm, void *data);
+
+int thread_pool__init(unsigned long thread_count);
+
+void *thread_pool__add_jobtype(struct kvm *kvm, kvm_thread_callback_fn_t callback, void *data);
+
+void thread_pool__signal_work(void *job);
+
+#endif
diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
index 071157a..97a17dd 100644
--- a/tools/kvm/kvm-run.c
+++ b/tools/kvm/kvm-run.c
@@ -24,6 +24,7 @@
#include <kvm/pci.h>
#include <kvm/term.h>
#include <kvm/ioport.h>
+#include <kvm/threadpool.h>
/* header files for gitish interface */
#include <kvm/kvm-run.h>
@@ -312,6 +313,7 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
int i;
struct virtio_net_parameters net_params;
char *hi;
+ unsigned int nr_online_cpus;
signal(SIGALRM, handle_sigalrm);
signal(SIGQUIT, handle_sigquit);
@@ -457,6 +459,9 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
kvm__init_ram(kvm);
+ nr_online_cpus = sysconf(_SC_NPROCESSORS_ONLN);
+ thread_pool__init(nr_online_cpus);
+
for (i = 0; i < nrcpus; i++) {
if (pthread_create(&kvm_cpus[i]->thread, NULL, kvm_cpu_thread, kvm_cpus[i]) != 0)
die("unable to create KVM VCPU thread");
diff --git a/tools/kvm/threadpool.c b/tools/kvm/threadpool.c
new file mode 100644
index 0000000..e78db3a
--- /dev/null
+++ b/tools/kvm/threadpool.c
@@ -0,0 +1,171 @@
+#include "kvm/threadpool.h"
+#include "kvm/mutex.h"
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <pthread.h>
+#include <stdbool.h>
+
+struct thread_pool__job_info {
+ kvm_thread_callback_fn_t callback;
+ struct kvm *kvm;
+ void *data;
+
+ int signalcount;
+ pthread_mutex_t mutex;
+
+ struct list_head queue;
+};
+
+static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_mutex_t thread_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_cond_t job_cond = PTHREAD_COND_INITIALIZER;
+
+static LIST_HEAD(head);
+
+static pthread_t *threads;
+static long threadcount;
+
+static struct thread_pool__job_info *thread_pool__job_info_pop(void)
+{
+ struct thread_pool__job_info *job;
+
+ if (list_empty(&head))
+ return NULL;
+
+ job = list_first_entry(&head, struct thread_pool__job_info, queue);
+ list_del(&job->queue);
+
+ return job;
+}
+
+static void thread_pool__job_info_push(struct thread_pool__job_info *job)
+{
+ list_add_tail(&job->queue, &head);
+}
+
+static struct thread_pool__job_info *thread_pool__job_info_pop_locked(void)
+{
+ struct thread_pool__job_info *job;
+
+ mutex_lock(&job_mutex);
+ job = thread_pool__job_info_pop();
+ mutex_unlock(&job_mutex);
+ return job;
+}
+
+static void thread_pool__job_info_push_locked(struct thread_pool__job_info *job)
+{
+ mutex_lock(&job_mutex);
+ thread_pool__job_info_push(job);
+ mutex_unlock(&job_mutex);
+}
+
+static void thread_pool__handle_job(struct thread_pool__job_info *job)
+{
+ while (job) {
+ job->callback(job->kvm, job->data);
+
+ mutex_lock(&job->mutex);
+
+ if (--job->signalcount > 0)
+ /* If the job was signaled again while we were working */
+ thread_pool__job_info_push_locked(job);
+
+ mutex_unlock(&job->mutex);
+
+ job = thread_pool__job_info_pop_locked();
+ }
+}
+
+static void thread_pool__threadfunc_cleanup(void *param)
+{
+ mutex_unlock(&job_mutex);
+}
+
+static void *thread_pool__threadfunc(void *param)
+{
+ pthread_cleanup_push(thread_pool__threadfunc_cleanup, NULL);
+
+ for (;;) {
+ struct thread_pool__job_info *curjob;
+
+ mutex_lock(&job_mutex);
+ pthread_cond_wait(&job_cond, &job_mutex);
+ curjob = thread_pool__job_info_pop();
+ mutex_unlock(&job_mutex);
+
+ if (curjob)
+ thread_pool__handle_job(curjob);
+ }
+
+ pthread_cleanup_pop(0);
+
+ return NULL;
+}
+
+static int thread_pool__addthread(void)
+{
+ int res;
+ void *newthreads;
+
+ mutex_lock(&thread_mutex);
+ newthreads = realloc(threads, (threadcount + 1) * sizeof(pthread_t));
+ if (newthreads == NULL) {
+ mutex_unlock(&thread_mutex);
+ return -1;
+ }
+
+ threads = newthreads;
+
+ res = pthread_create(threads + threadcount, NULL,
+ thread_pool__threadfunc, NULL);
+
+ if (res == 0)
+ threadcount++;
+ mutex_unlock(&thread_mutex);
+
+ return res;
+}
+
+int thread_pool__init(unsigned long thread_count)
+{
+ unsigned long i;
+
+ for (i = 0 ; i < thread_count ; i++)
+ if (thread_pool__addthread() < 0)
+ return i;
+
+ return i;
+}
+
+void *thread_pool__add_jobtype(struct kvm *kvm,
+ kvm_thread_callback_fn_t callback,
+ void *data)
+{
+ struct thread_pool__job_info *job = calloc(1, sizeof(*job));
+
+ *job = (struct thread_pool__job_info) {
+ .kvm = kvm,
+ .data = data,
+ .callback = callback,
+ .mutex = PTHREAD_MUTEX_INITIALIZER
+ };
+
+ return job;
+}
+
+void thread_pool__signal_work(void *job)
+{
+ struct thread_pool__job_info *jobinfo = job;
+
+ if (jobinfo == NULL)
+ return;
+
+ mutex_lock(&jobinfo->mutex);
+ if (jobinfo->signalcount++ == 0)
+ thread_pool__job_info_push_locked(job);
+ mutex_unlock(&jobinfo->mutex);
+
+ pthread_cond_signal(&job_cond);
+}
--
1.7.5.rc3
* [PATCH 4/6] kvm tools: Use threadpool for virtio-blk
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
virtio-blk has been converted to use the threadpool.
All the threading code has been removed, leaving only simple callback handling code.
New threadpool job types are created within the VIRTIO_PCI_QUEUE_PFN handler for every queue (just one in the case of virtio-blk).
The module signals work after receiving VIRTIO_PCI_QUEUE_NOTIFY and expects the threadpool to call virtio_blk_do_io to handle the I/O.
It is possible that the module will signal work several times while virtio_blk_do_io is already running, but there is no need to handle multithreading there since the threadpool runs each job serially, not in parallel.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/virtio-blk.c | 86 +++++++-----------------------------------------
1 files changed, 12 insertions(+), 74 deletions(-)
diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 3516b1c..3feabd0 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -9,6 +9,7 @@
#include "kvm/util.h"
#include "kvm/kvm.h"
#include "kvm/pci.h"
+#include "kvm/threadpool.h"
#include <linux/virtio_ring.h>
#include <linux/virtio_blk.h>
@@ -31,15 +32,13 @@ struct blk_device {
uint32_t guest_features;
uint16_t config_vector;
uint8_t status;
- pthread_t io_thread;
- pthread_mutex_t io_mutex;
- pthread_cond_t io_cond;
/* virtio queue */
uint16_t queue_selector;
- uint64_t virtio_blk_queue_set_flags;
struct virt_queue vqs[NUM_VIRT_QUEUES];
+
+ void *jobs[NUM_VIRT_QUEUES];
};
#define DISK_SEG_MAX 126
@@ -57,9 +56,6 @@ static struct blk_device blk_device = {
* same applies to VIRTIO_BLK_F_BLK_SIZE
*/
.host_features = (1UL << VIRTIO_BLK_F_SEG_MAX),
-
- .io_mutex = PTHREAD_MUTEX_INITIALIZER,
- .io_cond = PTHREAD_COND_INITIALIZER
};
static bool virtio_blk_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
@@ -156,73 +152,14 @@ static bool virtio_blk_do_io_request(struct kvm *self, struct virt_queue *queue)
return true;
}
-
-
-static int virtio_blk_get_selected_queue(struct blk_device *dev)
-{
- int i;
-
- for (i = 0 ; i < NUM_VIRT_QUEUES ; i++) {
- if (dev->virtio_blk_queue_set_flags & (1 << i)) {
- dev->virtio_blk_queue_set_flags &= ~(1 << i);
- return i;
- }
- }
-
- return -1;
-}
-
-static void virtio_blk_do_io(struct kvm *kvm, struct blk_device *dev)
+static void virtio_blk_do_io(struct kvm *kvm, void *param)
{
- for (;;) {
- struct virt_queue *vq;
- int queue_index;
-
- mutex_lock(&dev->io_mutex);
- queue_index = virtio_blk_get_selected_queue(dev);
- mutex_unlock(&dev->io_mutex);
-
- if (queue_index < 0)
- break;
+ struct virt_queue *vq = param;
- vq = &dev->vqs[queue_index];
+ while (virt_queue__available(vq))
+ virtio_blk_do_io_request(kvm, vq);
- while (virt_queue__available(vq))
- virtio_blk_do_io_request(kvm, vq);
-
- kvm__irq_line(kvm, VIRTIO_BLK_IRQ, 1);
- }
-}
-
-static void *virtio_blk_io_thread(void *ptr)
-{
- struct kvm *self = ptr;
-
- for (;;) {
- int ret;
-
- mutex_lock(&blk_device.io_mutex);
- ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
- mutex_unlock(&blk_device.io_mutex);
-
- if (ret != 0)
- break;
-
- virtio_blk_do_io(self, &blk_device);
- }
-
- return NULL;
-}
-
-static void virtio_blk_handle_callback(struct blk_device *dev, uint16_t queue_index)
-{
- mutex_lock(&dev->io_mutex);
-
- dev->virtio_blk_queue_set_flags |= (1 << queue_index);
-
- mutex_unlock(&dev->io_mutex);
-
- pthread_cond_signal(&dev->io_cond);
+ kvm__irq_line(kvm, VIRTIO_BLK_IRQ, 1);
}
static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -250,6 +187,9 @@ static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, i
vring_init(&queue->vring, VIRTIO_BLK_QUEUE_SIZE, p, 4096);
+ blk_device.jobs[blk_device.queue_selector] =
+ thread_pool__add_jobtype(self, virtio_blk_do_io, queue);
+
break;
}
case VIRTIO_PCI_QUEUE_SEL:
@@ -258,7 +198,7 @@ static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, i
case VIRTIO_PCI_QUEUE_NOTIFY: {
uint16_t queue_index;
queue_index = ioport__read16(data);
- virtio_blk_handle_callback(&blk_device, queue_index);
+ thread_pool__signal_work(blk_device.jobs[queue_index]);
break;
}
case VIRTIO_PCI_STATUS:
@@ -308,8 +248,6 @@ void virtio_blk__init(struct kvm *self)
if (!self->disk_image)
return;
- pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
-
blk_device.blk_config.capacity = self->disk_image->size / SECTOR_SIZE;
pci__register(&virtio_blk_pci_device, PCI_VIRTIO_BLK_DEVNUM);
--
1.7.5.rc3
* [PATCH 5/6] kvm tools: Use threadpool for virtio-console.
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
This is very similar to the change done in virtio-net later in this series.
Notice that one signal here comes from outside the module (the actual terminal) while the other is generated by the virtio module itself.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/virtio-console.c | 40 ++++++++++++++++++++++++++--------------
1 files changed, 26 insertions(+), 14 deletions(-)
diff --git a/tools/kvm/virtio-console.c b/tools/kvm/virtio-console.c
index f11ce4e..e66d198 100644
--- a/tools/kvm/virtio-console.c
+++ b/tools/kvm/virtio-console.c
@@ -8,6 +8,7 @@
#include "kvm/mutex.h"
#include "kvm/kvm.h"
#include "kvm/pci.h"
+#include "kvm/threadpool.h"
#include <linux/virtio_console.h>
#include <linux/virtio_ring.h>
@@ -41,6 +42,8 @@ struct console_device {
uint16_t config_vector;
uint8_t status;
uint16_t queue_selector;
+
+ void *jobs[VIRTIO_CONSOLE_NUM_QUEUES];
};
static struct console_device console_device = {
@@ -58,7 +61,7 @@ static struct console_device console_device = {
/*
* Interrupts are injected for hvc0 only.
*/
-void virtio_console__inject_interrupt(struct kvm *self)
+static void virtio_console__inject_interrupt_callback(struct kvm *self, void *param)
{
struct iovec iov[VIRTIO_CONSOLE_QUEUE_SIZE];
struct virt_queue *vq;
@@ -68,7 +71,7 @@ void virtio_console__inject_interrupt(struct kvm *self)
mutex_lock(&console_device.mutex);
- vq = &console_device.vqs[VIRTIO_CONSOLE_RX_QUEUE];
+ vq = param;
if (term_readable(CONSOLE_VIRTIO) && virt_queue__available(vq)) {
head = virt_queue__get_iov(vq, iov, &out, &in, self);
@@ -80,6 +83,11 @@ void virtio_console__inject_interrupt(struct kvm *self)
mutex_unlock(&console_device.mutex);
}
+void virtio_console__inject_interrupt(struct kvm *self)
+{
+ thread_pool__signal_work(console_device.jobs[VIRTIO_CONSOLE_RX_QUEUE]);
+}
+
static bool virtio_console_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
{
uint8_t *config_space = (uint8_t *) &console_device.console_config;
@@ -138,7 +146,7 @@ static bool virtio_console_pci_io_in(struct kvm *self, uint16_t port, void *data
return ret;
}
-static void virtio_console_handle_callback(struct kvm *self, uint16_t queue_index)
+static void virtio_console_handle_callback(struct kvm *self, void *param)
{
struct iovec iov[VIRTIO_CONSOLE_QUEUE_SIZE];
struct virt_queue *vq;
@@ -146,18 +154,15 @@ static void virtio_console_handle_callback(struct kvm *self, uint16_t queue_inde
uint16_t head;
uint32_t len;
- vq = &console_device.vqs[queue_index];
-
- if (queue_index == VIRTIO_CONSOLE_TX_QUEUE) {
+ vq = param;
- while (virt_queue__available(vq)) {
- head = virt_queue__get_iov(vq, iov, &out, &in, self);
- len = term_putc_iov(CONSOLE_VIRTIO, iov, out);
- virt_queue__set_used_elem(vq, head, len);
- }
-
- kvm__irq_line(self, VIRTIO_CONSOLE_IRQ, 1);
+ while (virt_queue__available(vq)) {
+ head = virt_queue__get_iov(vq, iov, &out, &in, self);
+ len = term_putc_iov(CONSOLE_VIRTIO, iov, out);
+ virt_queue__set_used_elem(vq, head, len);
}
+
+ kvm__irq_line(self, VIRTIO_CONSOLE_IRQ, 1);
}
static bool virtio_console_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -183,6 +188,13 @@ static bool virtio_console_pci_io_out(struct kvm *self, uint16_t port, void *dat
vring_init(&queue->vring, VIRTIO_CONSOLE_QUEUE_SIZE, p, 4096);
+ if (console_device.queue_selector == VIRTIO_CONSOLE_TX_QUEUE)
+ console_device.jobs[console_device.queue_selector] =
+ thread_pool__add_jobtype(self, virtio_console_handle_callback, queue);
+ else if (console_device.queue_selector == VIRTIO_CONSOLE_RX_QUEUE)
+ console_device.jobs[console_device.queue_selector] =
+ thread_pool__add_jobtype(self, virtio_console__inject_interrupt_callback, queue);
+
break;
}
case VIRTIO_PCI_QUEUE_SEL:
@@ -191,7 +203,7 @@ static bool virtio_console_pci_io_out(struct kvm *self, uint16_t port, void *dat
case VIRTIO_PCI_QUEUE_NOTIFY: {
uint16_t queue_index;
queue_index = ioport__read16(data);
- virtio_console_handle_callback(self, queue_index);
+ thread_pool__signal_work(console_device.jobs[queue_index]);
break;
}
case VIRTIO_PCI_STATUS:
--
1.7.5.rc3
* [PATCH 6/6] kvm tools: Use threadpool for virtio-net
From: Sasha Levin @ 2011-04-28 13:40 UTC
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
virtio-net has been converted to use the threadpool.
This is very similar to the change done in virtio-blk, only here there are two queues to handle.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/virtio-net.c | 101 ++++++++++++------------------------------------
1 files changed, 25 insertions(+), 76 deletions(-)
diff --git a/tools/kvm/virtio-net.c b/tools/kvm/virtio-net.c
index 3e13429..58b3de4 100644
--- a/tools/kvm/virtio-net.c
+++ b/tools/kvm/virtio-net.c
@@ -7,6 +7,7 @@
#include "kvm/util.h"
#include "kvm/kvm.h"
#include "kvm/pci.h"
+#include "kvm/threadpool.h"
#include <linux/virtio_net.h>
#include <linux/if_tun.h>
@@ -40,16 +41,9 @@ struct net_device {
uint8_t status;
uint16_t queue_selector;
- pthread_t io_rx_thread;
- pthread_mutex_t io_rx_mutex;
- pthread_cond_t io_rx_cond;
-
- pthread_t io_tx_thread;
- pthread_mutex_t io_tx_mutex;
- pthread_cond_t io_tx_cond;
-
int tap_fd;
char tap_name[IFNAMSIZ];
+ void *jobs[VIRTIO_NET_NUM_QUEUES];
};
static struct net_device net_device = {
@@ -69,70 +63,44 @@ static struct net_device net_device = {
1UL << VIRTIO_NET_F_GUEST_TSO6,
};
-static void *virtio_net_rx_thread(void *p)
+static void virtio_net_rx_callback(struct kvm *self, void *param)
{
struct iovec iov[VIRTIO_NET_QUEUE_SIZE];
struct virt_queue *vq;
- struct kvm *self;
uint16_t out, in;
uint16_t head;
int len;
- self = p;
- vq = &net_device.vqs[VIRTIO_NET_RX_QUEUE];
-
- while (1) {
- mutex_lock(&net_device.io_rx_mutex);
- if (!virt_queue__available(vq))
- pthread_cond_wait(&net_device.io_rx_cond, &net_device.io_rx_mutex);
- mutex_unlock(&net_device.io_rx_mutex);
-
- while (virt_queue__available(vq)) {
- head = virt_queue__get_iov(vq, iov, &out, &in, self);
- len = readv(net_device.tap_fd, iov, in);
- virt_queue__set_used_elem(vq, head, len);
- /* We should interrupt guest right now, otherwise latency is huge. */
- kvm__irq_line(self, VIRTIO_NET_IRQ, 1);
- }
+ vq = param;
+ while (virt_queue__available(vq)) {
+ head = virt_queue__get_iov(vq, iov, &out, &in, self);
+ len = readv(net_device.tap_fd, iov, in);
+ virt_queue__set_used_elem(vq, head, len);
}
- pthread_exit(NULL);
- return NULL;
-
+ kvm__irq_line(self, VIRTIO_NET_IRQ, 1);
}
-static void *virtio_net_tx_thread(void *p)
+static void virtio_net_tx_callback(struct kvm *self, void *param)
{
struct iovec iov[VIRTIO_NET_QUEUE_SIZE];
struct virt_queue *vq;
- struct kvm *self;
uint16_t out, in;
uint16_t head;
int len;
- self = p;
- vq = &net_device.vqs[VIRTIO_NET_TX_QUEUE];
-
- while (1) {
- mutex_lock(&net_device.io_tx_mutex);
- if (!virt_queue__available(vq))
- pthread_cond_wait(&net_device.io_tx_cond, &net_device.io_tx_mutex);
- mutex_unlock(&net_device.io_tx_mutex);
+ vq = param;
- while (virt_queue__available(vq)) {
- head = virt_queue__get_iov(vq, iov, &out, &in, self);
- len = writev(net_device.tap_fd, iov, out);
- virt_queue__set_used_elem(vq, head, len);
- }
-
- kvm__irq_line(self, VIRTIO_NET_IRQ, 1);
+ while (virt_queue__available(vq)) {
+ head = virt_queue__get_iov(vq, iov, &out, &in, self);
+ len = writev(net_device.tap_fd, iov, out);
+ virt_queue__set_used_elem(vq, head, len);
}
- pthread_exit(NULL);
- return NULL;
-
+ kvm__irq_line(self, VIRTIO_NET_IRQ, 1);
}
+
static bool virtio_net_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
{
uint8_t *config_space = (uint8_t *) &net_device.net_config;
@@ -193,19 +161,7 @@ static bool virtio_net_pci_io_in(struct kvm *self, uint16_t port, void *data, in
static void virtio_net_handle_callback(struct kvm *self, uint16_t queue_index)
{
- if (queue_index == VIRTIO_NET_TX_QUEUE) {
-
- mutex_lock(&net_device.io_tx_mutex);
- pthread_cond_signal(&net_device.io_tx_cond);
- mutex_unlock(&net_device.io_tx_mutex);
-
- } else if (queue_index == VIRTIO_NET_RX_QUEUE) {
-
- mutex_lock(&net_device.io_rx_mutex);
- pthread_cond_signal(&net_device.io_rx_cond);
- mutex_unlock(&net_device.io_rx_mutex);
-
- }
+ thread_pool__signal_work(net_device.jobs[queue_index]);
}
static bool virtio_net_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -231,6 +187,13 @@ static bool virtio_net_pci_io_out(struct kvm *self, uint16_t port, void *data, i
vring_init(&queue->vring, VIRTIO_NET_QUEUE_SIZE, p, 4096);
+ if (net_device.queue_selector == VIRTIO_NET_TX_QUEUE)
+ net_device.jobs[net_device.queue_selector] =
+ thread_pool__add_jobtype(self, virtio_net_tx_callback, queue);
+ else if (net_device.queue_selector == VIRTIO_NET_RX_QUEUE)
+ net_device.jobs[net_device.queue_selector] =
+ thread_pool__add_jobtype(self, virtio_net_rx_callback, queue);
+
break;
}
case VIRTIO_PCI_QUEUE_SEL:
@@ -367,24 +330,10 @@ fail:
return 0;
}
-static void virtio_net__io_thread_init(struct kvm *self)
-{
- pthread_mutex_init(&net_device.io_rx_mutex, NULL);
- pthread_cond_init(&net_device.io_tx_cond, NULL);
-
- pthread_mutex_init(&net_device.io_rx_mutex, NULL);
- pthread_cond_init(&net_device.io_tx_cond, NULL);
-
- pthread_create(&net_device.io_rx_thread, NULL, virtio_net_rx_thread, (void *)self);
- pthread_create(&net_device.io_tx_thread, NULL, virtio_net_tx_thread, (void *)self);
-}
-
void virtio_net__init(const struct virtio_net_parameters *params)
{
if (virtio_net__tap_init(params)) {
pci__register(&virtio_net_pci_device, PCI_VIRTIO_NET_DEVNUM);
ioport__register(IOPORT_VIRTIO_NET, &virtio_net_io_ops, IOPORT_VIRTIO_NET_SIZE);
-
- virtio_net__io_thread_init(params->self);
}
}
--
1.7.5.rc3
* Re: [PATCH 3/6] kvm tools: Introduce generic IO threadpool
From: Asias He @ 2011-04-29 7:08 UTC
To: Sasha Levin; +Cc: penberg, mingo, gorcunov, prasadjoshi124, kvm
On 04/28/2011 09:40 PM, Sasha Levin wrote:
> This patch adds a generic thread pool to create a common interface for working with threads within the kvm tool.
> The main idea is to use this threadpool for all I/O threads instead of having every I/O module write its own thread code.
>
> The process of working with the thread pool is meant to be very simple.
> During initialization, each module that is interested in working with the threadpool calls thread_pool__add_jobtype with a callback function and a void * parameter. For example, the virtio modules register every virt_queue as a new job type.
> During operation, when there is work to do for a specific job, the module signals it to the queue and expects the callback to be invoked with the proper parameters. It is assured that the callback will be called once for every signal action, and that only one instance of each callback will run at a time (i.e. callback functions themselves don't need to handle threading).
>
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
> tools/kvm/Makefile | 1 +
> tools/kvm/include/kvm/threadpool.h | 16 ++++
> tools/kvm/kvm-run.c | 5 +
> tools/kvm/threadpool.c | 171 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 193 insertions(+), 0 deletions(-)
> create mode 100644 tools/kvm/include/kvm/threadpool.h
> create mode 100644 tools/kvm/threadpool.c
>
> diff --git a/tools/kvm/Makefile b/tools/kvm/Makefile
> index 1b0c76e..fbce14d 100644
> --- a/tools/kvm/Makefile
> +++ b/tools/kvm/Makefile
> @@ -36,6 +36,7 @@ OBJS += kvm-cmd.o
> OBJS += kvm-run.o
> OBJS += qcow.o
> OBJS += mptable.o
> +OBJS += threadpool.o
>
> DEPS := $(patsubst %.o,%.d,$(OBJS))
>
> diff --git a/tools/kvm/include/kvm/threadpool.h b/tools/kvm/include/kvm/threadpool.h
> new file mode 100644
> index 0000000..25b5eb8
> --- /dev/null
> +++ b/tools/kvm/include/kvm/threadpool.h
> @@ -0,0 +1,16 @@
> +#ifndef KVM__THREADPOOL_H
> +#define KVM__THREADPOOL_H
> +
> +#include <stdint.h>
> +
> +struct kvm;
> +
> +typedef void (*kvm_thread_callback_fn_t)(struct kvm *kvm, void *data);
> +
> +int thread_pool__init(unsigned long thread_count);
> +
> +void *thread_pool__add_jobtype(struct kvm *kvm, kvm_thread_callback_fn_t callback, void *data);
> +
> +void thread_pool__signal_work(void *job);
> +
> +#endif
> diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
> index 071157a..97a17dd 100644
> --- a/tools/kvm/kvm-run.c
> +++ b/tools/kvm/kvm-run.c
> @@ -24,6 +24,7 @@
> #include <kvm/pci.h>
> #include <kvm/term.h>
> #include <kvm/ioport.h>
> +#include <kvm/threadpool.h>
>
> /* header files for gitish interface */
> #include <kvm/kvm-run.h>
> @@ -312,6 +313,7 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
> int i;
> struct virtio_net_parameters net_params;
> char *hi;
> + unsigned int nr_online_cpus;
>
> signal(SIGALRM, handle_sigalrm);
> signal(SIGQUIT, handle_sigquit);
> @@ -457,6 +459,9 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
>
> kvm__init_ram(kvm);
>
> + nr_online_cpus = sysconf(_SC_NPROCESSORS_ONLN);
> + thread_pool__init(nr_online_cpus);
We may benefit from more threads than the number of hardware threads we
have. Currently, virtio_console consumes two, virtio_net consumes two,
and virtio_blk consumes one. Can we adjust the thread pool size when
devices register to use the thread pool?
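Something along these lines, perhaps (just a sketch of the idea, not
tested code):

	void *thread_pool__add_jobtype(struct kvm *kvm,
				       kvm_thread_callback_fn_t callback,
				       void *data)
	{
		struct thread_pool__job_info *job = calloc(1, sizeof(*job));

		/* grow the pool by one thread for every job that
		 * registers, instead of sizing it once at startup */
		thread_pool__addthread();

		*job = (struct thread_pool__job_info) {
			.kvm		= kvm,
			.data		= data,
			.callback	= callback,
			.mutex		= PTHREAD_MUTEX_INITIALIZER
		};

		return job;
	}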
> +
> for (i = 0; i < nrcpus; i++) {
> if (pthread_create(&kvm_cpus[i]->thread, NULL, kvm_cpu_thread, kvm_cpus[i]) != 0)
> die("unable to create KVM VCPU thread");
> diff --git a/tools/kvm/threadpool.c b/tools/kvm/threadpool.c
> new file mode 100644
> index 0000000..e78db3a
> --- /dev/null
> +++ b/tools/kvm/threadpool.c
> @@ -0,0 +1,171 @@
> +#include "kvm/threadpool.h"
> +#include "kvm/mutex.h"
> +
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <pthread.h>
> +#include <stdbool.h>
> +
> +struct thread_pool__job_info {
> + kvm_thread_callback_fn_t callback;
> + struct kvm *kvm;
> + void *data;
> +
> + int signalcount;
> + pthread_mutex_t mutex;
> +
> + struct list_head queue;
> +};
Does 'struct thread_pool__job' sound better?
> +static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
> +static pthread_mutex_t thread_mutex = PTHREAD_MUTEX_INITIALIZER;
> +static pthread_cond_t job_cond = PTHREAD_COND_INITIALIZER;
These mutexes and the condition variable are global. As the number of
threads/jobs grows, there may be a lot of contention.
> +
> +static LIST_HEAD(head);
> +
> +static pthread_t *threads;
> +static long threadcount;
> +
> +static struct thread_pool__job_info *thread_pool__job_info_pop(void)
> +{
> + struct thread_pool__job_info *job;
> +
> + if (list_empty(&head))
> + return NULL;
> +
> + job = list_first_entry(&head, struct thread_pool__job_info, queue);
> + list_del(&job->queue);
> +
> + return job;
> +}
> +
> +static void thread_pool__job_info_push(struct thread_pool__job_info *job)
> +{
> + list_add_tail(&job->queue, &head);
> +}
> +
> +static struct thread_pool__job_info *thread_pool__job_info_pop_locked(void)
> +{
> + struct thread_pool__job_info *job;
> +
> + mutex_lock(&job_mutex);
> + job = thread_pool__job_info_pop();
> + mutex_unlock(&job_mutex);
> + return job;
> +}
> +
> +static void thread_pool__job_info_push_locked(struct thread_pool__job_info *job)
> +{
> + mutex_lock(&job_mutex);
> + thread_pool__job_info_push(job);
> + mutex_unlock(&job_mutex);
> +}
> +
> +static void thread_pool__handle_job(struct thread_pool__job_info *job)
> +{
> + while (job) {
> + job->callback(job->kvm, job->data);
> +
> + mutex_lock(&job->mutex);
> +
> + if (--job->signalcount > 0)
> + /* If the job was signaled again while we were working */
> + thread_pool__job_info_push_locked(job);
> +
> + mutex_unlock(&job->mutex);
> +
> + job = thread_pool__job_info_pop_locked();
> + }
> +}
> +
> +static void thread_pool__threadfunc_cleanup(void *param)
> +{
> + mutex_unlock(&job_mutex);
> +}
> +
> +static void *thread_pool__threadfunc(void *param)
> +{
> + pthread_cleanup_push(thread_pool__threadfunc_cleanup, NULL);
> +
> + for (;;) {
> + struct thread_pool__job_info *curjob;
> +
> + mutex_lock(&job_mutex);
> + pthread_cond_wait(&job_cond, &job_mutex);
> + curjob = thread_pool__job_info_pop();
> + mutex_unlock(&job_mutex);
> +
> + if (curjob)
> + thread_pool__handle_job(curjob);
> + }
> +
> + pthread_cleanup_pop(0);
> +
> + return NULL;
> +}
> +
> +static int thread_pool__addthread(void)
> +{
> + int res;
> + void *newthreads;
> +
> + mutex_lock(&thread_mutex);
> + newthreads = realloc(threads, (threadcount + 1) * sizeof(pthread_t));
> + if (newthreads == NULL) {
> + mutex_unlock(&thread_mutex);
> + return -1;
> + }
> +
> + threads = newthreads;
> +
> + res = pthread_create(threads + threadcount, NULL,
> + thread_pool__threadfunc, NULL);
> +
> + if (res == 0)
> + threadcount++;
> + mutex_unlock(&thread_mutex);
> +
> + return res;
> +}
> +
> +int thread_pool__init(unsigned long thread_count)
> +{
> + unsigned long i;
> +
> + for (i = 0 ; i < thread_count ; i++)
> + if (thread_pool__addthread() < 0)
> + return i;
> +
> + return i;
> +}
> +
> +void *thread_pool__add_jobtype(struct kvm *kvm,
> + kvm_thread_callback_fn_t callback,
> + void *data)
Is thread_pool__add_job() better?
> +{
> + struct thread_pool__job_info *job = calloc(1, sizeof(*job));
> +
> + *job = (struct thread_pool__job_info) {
> + .kvm = kvm,
> + .data = data,
> + .callback = callback,
> + .mutex = PTHREAD_MUTEX_INITIALIZER
> + };
> +
> + return job;
> +}
> +
> +void thread_pool__signal_work(void *job)
I think thread_pool__signal_job() or thread_pool__do_job()
would be more consistent.
Consumers of this API could then simply use thread_pool__{add,do}_job().
> +{
> + struct thread_pool__job_info *jobinfo = job;
> +
> + if (jobinfo == NULL)
> + return;
> +
> + mutex_lock(&jobinfo->mutex);
> + if (jobinfo->signalcount++ == 0)
> + thread_pool__job_info_push_locked(job);
> + mutex_unlock(&jobinfo->mutex);
> +
> + pthread_cond_signal(&job_cond);
> +}
--
Best Regards,
Asias He
* Re: [PATCH 3/6] kvm tools: Introduce generic IO threadpool
From: Sasha Levin @ 2011-04-29 11:12 UTC
To: Pekka Enberg; +Cc: Asias He, mingo, gorcunov, prasadjoshi124, kvm
On Fri, 2011-04-29 at 15:08 +0800, Asias He wrote:
> On 04/28/2011 09:40 PM, Sasha Levin wrote:
> > + nr_online_cpus = sysconf(_SC_NPROCESSORS_ONLN);
> > + thread_pool__init(nr_online_cpus);
>
> We may benefit from more threads than the number of hardware threads we
> have. Currently, virtio_console consumes two, virtio_net consumes two,
> and virtio_blk consumes one. Can we adjust the thread pool size when
> devices register to use the thread pool?
How many threads do we want the threadpool to use?
Currently there's a thread allocated for each VCPU, plus _SC_NPROCESSORS_ONLN
I/O threads.
Should we just allocate another thread for each device that registers
with the threadpool? This number can grow to be pretty big once we
start adding multiple devices.
--
Sasha.