* [PATCH 1/4] kvm tools: Thread virtio-blk
@ 2011-04-18 13:02 Sasha Levin
2011-04-18 13:02 ` [PATCH 2/4] kvm tools: Use virtio_blk_parameters to configure virtio-blk Sasha Levin
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Sasha Levin @ 2011-04-18 13:02 UTC (permalink / raw)
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
Add I/O thread to handle I/O operations in virtio-blk.
There is currently support for multiple virtio queues, but the kernel side supports only one. The performance impact of this is minimal and the ABI does support multiple queues, so I've preferred to do it this way to keep it flexible.
I/O performance itself doesn't increase much with this patch; what changes is system responsiveness during I/O operations.
On an unthreaded system, the VCPU is frozen until the I/O request completes. On a threaded system, the VCPU is free to do other work or queue more I/O while waiting for the original request to complete.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/virtio-blk.c | 61 ++++++++++++++++++++++++++++++++++++++++++++----
1 files changed, 56 insertions(+), 5 deletions(-)
diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 124ce95..029f753 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -30,9 +30,13 @@ struct blk_device {
uint32_t guest_features;
uint16_t config_vector;
uint8_t status;
+ pthread_t io_thread;
+ pthread_mutex_t io_mutex;
+ pthread_cond_t io_cond;
/* virtio queue */
uint16_t queue_selector;
+ uint64_t virtio_blk_queue_set_flags;
struct virt_queue vqs[NUM_VIRT_QUEUES];
};
@@ -52,6 +56,9 @@ static struct blk_device blk_device = {
* same applies to VIRTIO_BLK_F_BLK_SIZE
*/
.host_features = (1UL << VIRTIO_BLK_F_SEG_MAX),
+
+ .io_mutex = PTHREAD_MUTEX_INITIALIZER,
+ .io_cond = PTHREAD_COND_INITIALIZER
};
static bool virtio_blk_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
@@ -148,15 +155,57 @@ static bool virtio_blk_do_io_request(struct kvm *self, struct virt_queue *queue)
return true;
}
-static void virtio_blk_handle_callback(struct kvm *self, uint16_t queue_index)
+
+
+static int virtio_blk_get_selected_queue(void)
{
- struct virt_queue *vq = &blk_device.vqs[queue_index];
+ int i;
- while (virt_queue__available(vq))
- virtio_blk_do_io_request(self, vq);
+ for (i = 0 ; i < NUM_VIRT_QUEUES ; i++) {
+ if (blk_device.virtio_blk_queue_set_flags & (1 << i)) {
+ blk_device.virtio_blk_queue_set_flags &= ~(1 << i);
+ return i;
+ }
+ }
- kvm__irq_line(self, VIRTIO_BLK_IRQ, 1);
+ return -1;
+}
+static void *virtio_blk_io_thread(void *ptr)
+{
+ struct kvm *self = ptr;
+ int ret;
+ mutex_lock(&blk_device.io_mutex);
+ ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
+ while (ret == 0) {
+ int queue_index = virtio_blk_get_selected_queue();
+ mutex_unlock(&blk_device.io_mutex);
+ while (queue_index >= 0) {
+ struct virt_queue *vq = &blk_device.vqs[queue_index];
+
+ while (virt_queue__available(vq))
+ virtio_blk_do_io_request(self, vq);
+
+ kvm__irq_line(self, VIRTIO_BLK_IRQ, 1);
+
+ mutex_lock(&blk_device.io_mutex);
+ queue_index = virtio_blk_get_selected_queue();
+ mutex_unlock(&blk_device.io_mutex);
+ }
+ mutex_lock(&blk_device.io_mutex);
+ ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
+ }
+
+ return NULL;
+}
+
+static void virtio_blk_handle_callback(struct kvm *self, uint16_t queue_index)
+{
+ mutex_lock(&blk_device.io_mutex);
+ blk_device.virtio_blk_queue_set_flags |= (1 << queue_index);
+ mutex_unlock(&blk_device.io_mutex);
+
+ pthread_cond_signal(&blk_device.io_cond);
}
static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -242,6 +291,8 @@ void virtio_blk__init(struct kvm *self)
if (!self->disk_image)
return;
+ pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
+
blk_device.blk_config.capacity = self->disk_image->size / SECTOR_SIZE;
pci__register(&virtio_blk_pci_device, PCI_VIRTIO_BLK_DEVNUM);
--
1.7.5.rc1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 2/4] kvm tools: Use virtio_blk_parameters to configure virtio-blk
2011-04-18 13:02 [PATCH 1/4] kvm tools: Thread virtio-blk Sasha Levin
@ 2011-04-18 13:02 ` Sasha Levin
2011-04-18 13:02 ` [PATCH 3/4] kvm tools: Add debug feature to test the IO thread Sasha Levin
2011-04-18 13:02 ` [PATCH 4/4] kvm tools: Complete missing segments in an iov op using regular op Sasha Levin
2 siblings, 0 replies; 10+ messages in thread
From: Sasha Levin @ 2011-04-18 13:02 UTC (permalink / raw)
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
Like in virtio-net, use virtio_blk_parameters to pass configuration parameters
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/include/kvm/virtio-blk.h | 6 +++++-
tools/kvm/kvm-run.c | 7 ++++++-
tools/kvm/virtio-blk.c | 4 +++-
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/tools/kvm/include/kvm/virtio-blk.h b/tools/kvm/include/kvm/virtio-blk.h
index f82bbc7..9e91035 100644
--- a/tools/kvm/include/kvm/virtio-blk.h
+++ b/tools/kvm/include/kvm/virtio-blk.h
@@ -3,6 +3,10 @@
struct kvm;
-void virtio_blk__init(struct kvm *self);
+struct virtio_blk_parameters {
+ struct kvm *self;
+};
+
+void virtio_blk__init(struct virtio_blk_parameters *params);
#endif /* KVM__BLK_VIRTIO_H */
diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
index 17fef20..5b71fb4 100644
--- a/tools/kvm/kvm-run.c
+++ b/tools/kvm/kvm-run.c
@@ -220,6 +220,7 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
int exit_code = 0;
int i;
struct virtio_net_parameters net_params;
+ struct virtio_blk_parameters blk_params;
signal(SIGALRM, handle_sigalrm);
signal(SIGQUIT, handle_sigquit);
@@ -317,7 +318,11 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
pci__init();
- virtio_blk__init(kvm);
+ blk_params = (struct virtio_blk_parameters) {
+ .self = kvm
+ };
+
+ virtio_blk__init(&blk_params);
virtio_console__init(kvm);
diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 029f753..2470583 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -286,8 +286,10 @@ static struct pci_device_header virtio_blk_pci_device = {
#define PCI_VIRTIO_BLK_DEVNUM 1
-void virtio_blk__init(struct kvm *self)
+void virtio_blk__init(struct virtio_blk_parameters *params)
{
+ struct kvm *self = params->self;
+
if (!self->disk_image)
return;
--
1.7.5.rc1
* [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-18 13:02 [PATCH 1/4] kvm tools: Thread virtio-blk Sasha Levin
2011-04-18 13:02 ` [PATCH 2/4] kvm tools: Use virtio_blk_parameters to configure virtio-blk Sasha Levin
@ 2011-04-18 13:02 ` Sasha Levin
2011-04-19 16:52 ` Pekka Enberg
2011-04-18 13:02 ` [PATCH 4/4] kvm tools: Complete missing segments in an iov op using regular op Sasha Levin
2 siblings, 1 reply; 10+ messages in thread
From: Sasha Levin @ 2011-04-18 13:02 UTC (permalink / raw)
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the completion of IO requests within virtio-blk.
This feature makes it possible to verify and debug the threading within virtio-blk.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/include/kvm/virtio-blk.h | 6 +++++-
tools/kvm/kvm-run.c | 10 +++++++++-
tools/kvm/virtio-blk.c | 11 +++++++++++
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/tools/kvm/include/kvm/virtio-blk.h b/tools/kvm/include/kvm/virtio-blk.h
index 9e91035..c0211a0 100644
--- a/tools/kvm/include/kvm/virtio-blk.h
+++ b/tools/kvm/include/kvm/virtio-blk.h
@@ -1,10 +1,14 @@
#ifndef KVM__BLK_VIRTIO_H
#define KVM__BLK_VIRTIO_H
+#include <stdint.h>
+
struct kvm;
struct virtio_blk_parameters {
- struct kvm *self;
+ struct kvm *self;
+ uint64_t debug_delay_cycles;
+ uint64_t debug_delay_amount;
};
void virtio_blk__init(struct virtio_blk_parameters *params);
diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
index 5b71fb4..3392bfa 100644
--- a/tools/kvm/kvm-run.c
+++ b/tools/kvm/kvm-run.c
@@ -57,6 +57,8 @@ static void handle_sigalrm(int sig)
}
static u64 ram_size = MIN_RAM_SIZE_MB;
+static u64 virtio_blk_delay_cycles = -1;
+static u64 virtio_blk_delay_amount;
static const char *kernel_cmdline;
static const char *kernel_filename;
static const char *initrd_filename;
@@ -112,6 +114,10 @@ static const struct option options[] = {
"Enable single stepping"),
OPT_BOOLEAN('g', "ioport-debug", &ioport_debug,
"Enable ioport debugging"),
+ OPT_U64('\0', "debug-io-delay-cycles", &virtio_blk_delay_cycles,
+ "Wait this amount of cycles before delay"),
+ OPT_U64('\0', "debug-io-delay-amount", &virtio_blk_delay_amount,
+ "Delay each I/O request by this amount (usec)"),
OPT_END()
};
@@ -319,7 +325,9 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
pci__init();
blk_params = (struct virtio_blk_parameters) {
- .self = kvm
+ .self = kvm,
+ .debug_delay_cycles = virtio_blk_delay_cycles,
+ .debug_delay_amount = virtio_blk_delay_amount
};
virtio_blk__init(&blk_params);
diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 2470583..ea8c4e7 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -38,6 +38,9 @@ struct blk_device {
uint16_t queue_selector;
uint64_t virtio_blk_queue_set_flags;
+ uint64_t debug_delay_cycles;
+ uint64_t debug_delay_amount;
+
struct virt_queue vqs[NUM_VIRT_QUEUES];
};
@@ -174,6 +177,7 @@ static int virtio_blk_get_selected_queue(void)
static void *virtio_blk_io_thread(void *ptr)
{
struct kvm *self = ptr;
+ uint64_t io_cycles = 0;
int ret;
mutex_lock(&blk_device.io_mutex);
ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
@@ -183,6 +187,10 @@ static void *virtio_blk_io_thread(void *ptr)
while (queue_index >= 0) {
struct virt_queue *vq = &blk_device.vqs[queue_index];
+ if (blk_device.debug_delay_cycles != (uint64_t)-1 &&
+ ++io_cycles > blk_device.debug_delay_cycles)
+ usleep(blk_device.debug_delay_amount);
+
while (virt_queue__available(vq))
virtio_blk_do_io_request(self, vq);
@@ -293,6 +301,9 @@ void virtio_blk__init(struct virtio_blk_parameters *params)
if (!self->disk_image)
return;
+ blk_device.debug_delay_amount = params->debug_delay_amount;
+ blk_device.debug_delay_cycles = params->debug_delay_cycles;
+
pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
blk_device.blk_config.capacity = self->disk_image->size / SECTOR_SIZE;
--
1.7.5.rc1
* [PATCH 4/4] kvm tools: Complete missing segments in an iov op using regular op
2011-04-18 13:02 [PATCH 1/4] kvm tools: Thread virtio-blk Sasha Levin
2011-04-18 13:02 ` [PATCH 2/4] kvm tools: Use virtio_blk_parameters to configure virtio-blk Sasha Levin
2011-04-18 13:02 ` [PATCH 3/4] kvm tools: Add debug feature to test the IO thread Sasha Levin
@ 2011-04-18 13:02 ` Sasha Levin
2 siblings, 0 replies; 10+ messages in thread
From: Sasha Levin @ 2011-04-18 13:02 UTC (permalink / raw)
To: penberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm, Sasha Levin
If any of the iov operations returns mid-block, complete the current block using a regular op, then continue with iov ops.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
tools/kvm/read-write.c | 58 ++++++++++++++++++++++++++++++++++++++++++-----
1 files changed, 51 insertions(+), 7 deletions(-)
diff --git a/tools/kvm/read-write.c b/tools/kvm/read-write.c
index 0c995c8..bf2e4a0 100644
--- a/tools/kvm/read-write.c
+++ b/tools/kvm/read-write.c
@@ -189,10 +189,10 @@ static inline ssize_t get_iov_size(const struct iovec *iov, int iovcnt)
}
static inline void shift_iovec(const struct iovec **iov, int *iovcnt,
- size_t nr, ssize_t *total, size_t *count, off_t *offset)
+ ssize_t *nr, ssize_t *total, size_t *count, off_t *offset)
{
- while (nr >= (*iov)->iov_len) {
- nr -= (*iov)->iov_len;
+ while ((size_t)*nr >= (*iov)->iov_len) {
+ *nr -= (*iov)->iov_len;
*total += (*iov)->iov_len;
*count -= (*iov)->iov_len;
if (offset)
@@ -218,7 +218,18 @@ ssize_t readv_in_full(int fd, const struct iovec *iov, int iovcnt)
return -1;
}
- shift_iovec(&iov, &iovcnt, nr, &total, &count, NULL);
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, NULL);
+
+ while (nr > 0) {
+ ssize_t nr_readagain;
+ nr_readagain = xread(fd, iov->iov_base + nr,
+ iov->iov_len - nr);
+ if (nr_readagain <= 0)
+ return total;
+
+ nr += nr_readagain;
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, NULL);
+ }
}
return total;
@@ -240,7 +251,18 @@ ssize_t writev_in_full(int fd, const struct iovec *iov, int iovcnt)
return -1;
}
- shift_iovec(&iov, &iovcnt, nr, &total, &count, NULL);
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, NULL);
+
+ while (nr > 0) {
+ ssize_t nr_writeagain;
+ nr_writeagain = xwrite(fd, iov->iov_base + nr,
+ iov->iov_len - nr);
+ if (nr_writeagain <= 0)
+ return total;
+
+ nr += nr_writeagain;
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, NULL);
+ }
}
return total;
@@ -288,7 +310,18 @@ ssize_t preadv_in_full(int fd, const struct iovec *iov, int iovcnt, off_t offset
return -1;
}
- shift_iovec(&iov, &iovcnt, nr, &total, &count, &offset);
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, &offset);
+
+ while (nr > 0) {
+ ssize_t nr_readagain;
+ nr_readagain = xpread(fd, iov->iov_base + nr,
+ iov->iov_len - nr, offset + nr);
+ if (nr_readagain <= 0)
+ return total;
+
+ nr += nr_readagain;
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, &offset);
+ }
}
return total;
@@ -310,7 +343,18 @@ ssize_t pwritev_in_full(int fd, const struct iovec *iov, int iovcnt, off_t offse
return -1;
}
- shift_iovec(&iov, &iovcnt, nr, &total, &count, &offset);
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, &offset);
+
+ while (nr > 0) {
+ ssize_t nr_writeagain;
+ nr_writeagain = xpwrite(fd, iov->iov_base + nr,
+ iov->iov_len - nr, offset + nr);
+ if (nr_writeagain <= 0)
+ return total;
+
+ nr += nr_writeagain;
+ shift_iovec(&iov, &iovcnt, &nr, &total, &count, &offset);
+ }
}
return total;
--
1.7.5.rc1
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-18 13:02 ` [PATCH 3/4] kvm tools: Add debug feature to test the IO thread Sasha Levin
@ 2011-04-19 16:52 ` Pekka Enberg
2011-04-19 17:04 ` Sasha Levin
0 siblings, 1 reply; 10+ messages in thread
From: Pekka Enberg @ 2011-04-19 16:52 UTC (permalink / raw)
To: Sasha Levin; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm
On Mon, Apr 18, 2011 at 4:02 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
> Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the completion of IO requests within virtio-blk.
> This feature allows to verify and debug the threading within virtio-blk.
>
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Well, to be honest, I'm not convinced we need both of these. Isn't
--debug-io-delay=<msec> enough for our use?
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-19 16:52 ` Pekka Enberg
@ 2011-04-19 17:04 ` Sasha Levin
2011-04-19 17:11 ` Ingo Molnar
0 siblings, 1 reply; 10+ messages in thread
From: Sasha Levin @ 2011-04-19 17:04 UTC (permalink / raw)
To: Pekka Enberg; +Cc: mingo, asias.hejun, gorcunov, prasadjoshi124, kvm
On Tue, 2011-04-19 at 19:52 +0300, Pekka Enberg wrote:
> On Mon, Apr 18, 2011 at 4:02 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
> > Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the completion of IO requests within virtio-blk.
> > This feature allows to verify and debug the threading within virtio-blk.
> >
> > Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
>
> Well, to be honest, I'm not convinced we need both of these. Isn't
> --debug-io-delay=<msec> enough for our use?
This came up during our testing.
Ingo suggested a large delay so we could easily see the results of
threading. The problem we encountered was that having a delay right from
the beginning would make the guest kernel take a rather long time to boot
and would make actually testing the threading impossible.
I've added a delay before the activation of the I/O request completion
delay to give the tester/debugger enough time to boot into the guest and
prepare anything needed for the test.
Making it a constant is also rather hard because different kernels can
take a very different number of I/O requests to boot. Take the simple
example of whether or not fsck ran during boot.
--
Sasha.
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-19 17:04 ` Sasha Levin
@ 2011-04-19 17:11 ` Ingo Molnar
2011-04-19 23:10 ` Asias He
0 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2011-04-19 17:11 UTC (permalink / raw)
To: Sasha Levin; +Cc: Pekka Enberg, asias.hejun, gorcunov, prasadjoshi124, kvm
* Sasha Levin <levinsasha928@gmail.com> wrote:
> On Tue, 2011-04-19 at 19:52 +0300, Pekka Enberg wrote:
> > On Mon, Apr 18, 2011 at 4:02 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
> > > Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the completion of IO requests within virtio-blk.
> > > This feature allows to verify and debug the threading within virtio-blk.
> > >
> > > Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> >
> > Well, to be honest, I'm not convinced we need both of these. Isn't
> > --debug-io-delay=<msec> enough for our use?
>
> This came up during our testing.
>
> Ingo suggested a large delay so we could easily see the results of
> threading. The problem we encountered was that having a delay right from
> the beginning will make the guest kernel take a rather long time to boot
> and would make actually testing the threading impossible.
>
> I've added a delay before the activation of the I/O request completion
> delay to give the tester/debugger enough time to boot into the guest and
> prepare anything needed for the test.
>
> Making it a constant is also rather hard because different kernels can
> take a very different amount of I/O requests to boot. Take the simple
> example of a whether fsck was running during the boot or not.
I suspect we'll eventually want to have some sort of 'kvm set' subcommand that
can modify attributes of running instances? Setting the debug delay would be
one of the options:
kvm set MyInstance-1 --debug-io-delay-ms 10000
That way the delay can be activated at a suitable moment - and disabled again
after testing:
kvm set MyInstance-1 --debug-io-delay-ms 0
?
Thanks,
Ingo
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-19 17:11 ` Ingo Molnar
@ 2011-04-19 23:10 ` Asias He
2011-04-20 5:41 ` Pekka Enberg
0 siblings, 1 reply; 10+ messages in thread
From: Asias He @ 2011-04-19 23:10 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Sasha Levin, Pekka Enberg, gorcunov, prasadjoshi124, kvm
On 04/20/2011 01:11 AM, Ingo Molnar wrote:
>
> * Sasha Levin <levinsasha928@gmail.com> wrote:
>
>> On Tue, 2011-04-19 at 19:52 +0300, Pekka Enberg wrote:
>>> On Mon, Apr 18, 2011 at 4:02 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
>>>> Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the completion of IO requests within virtio-blk.
>>>> This feature allows to verify and debug the threading within virtio-blk.
>>>>
>>>> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
>>>
>>> Well, to be honest, I'm not convinced we need both of these. Isn't
>>> --debug-io-delay=<msec> enough for our use?
>>
>> This came up during our testing.
>>
>> Ingo suggested a large delay so we could easily see the results of
>> threading. The problem we encountered was that having a delay right from
>> the beginning will make the guest kernel take a rather long time to boot
>> and would make actually testing the threading impossible.
>>
>> I've added a delay before the activation of the I/O request completion
>> delay to give the tester/debugger enough time to boot into the guest and
>> prepare anything needed for the test.
>>
>> Making it a constant is also rather hard because different kernels can
>> take a very different amount of I/O requests to boot. Take the simple
>> example of a whether fsck was running during the boot or not.
>
> I suspect we'll eventually want to have some sort of 'kvm set' subcommand that
> can modify attributes of running instances? Setting the debug delay would be
> one of the options:
>
> kvm set MyInstance-1 --debug-io-delay-ms 10000
>
> That way the delay can be activated in a suitable moment - and disabled again
> after testing:
>
> kvm set MyInstance-1 --debug-io-delay-ms 0
>
It's a very good idea.
Do we need:
kvm get MyInstance-1
which reports all the attributes, or
kvm get MyInstance-1 --debug-io-delay-ms
which reports only the attribute of interest.
--
Best Regards,
Asias He
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-19 23:10 ` Asias He
@ 2011-04-20 5:41 ` Pekka Enberg
2011-04-20 8:54 ` Ingo Molnar
0 siblings, 1 reply; 10+ messages in thread
From: Pekka Enberg @ 2011-04-20 5:41 UTC (permalink / raw)
To: Asias He; +Cc: Ingo Molnar, Sasha Levin, gorcunov, prasadjoshi124, kvm
On Wed, Apr 20, 2011 at 2:10 AM, Asias He <asias.hejun@gmail.com> wrote:
>>> This came up during our testing.
>>>
>>> Ingo suggested a large delay so we could easily see the results of
>>> threading. The problem we encountered was that having a delay right from
>>> the beginning will make the guest kernel take a rather long time to boot
>>> and would make actually testing the threading impossible.
>>>
>>> I've added a delay before the activation of the I/O request completion
>>> delay to give the tester/debugger enough time to boot into the guest and
>>> prepare anything needed for the test.
>>>
>>> Making it a constant is also rather hard because different kernels can
>>> take a very different amount of I/O requests to boot. Take the simple
>>> example of a whether fsck was running during the boot or not.
>>
>> I suspect we'll eventually want to have some sort of 'kvm set' subcommand that
>> can modify attributes of running instances? Setting the debug delay would be
>> one of the options:
>>
>> kvm set MyInstance-1 --debug-io-delay-ms 10000
>>
>> That way the delay can be activated in a suitable moment - and disabled again
>> after testing:
>>
>> kvm set MyInstance-1 --debug-io-delay-ms 0
>>
>
> It's a very good idea.
>
> Do we need:
>
> kvm get MyInstance-1
>
> which reports all the attributes, or
>
> kvm get MyInstance-1 --debug-io-delay-ms
>
> which only reports the interested attribute.
Sorry for the bikeshedding but wouldn't it be better to follow Git's
lead and have something like
kvm config MyInstance-1 --set debug.io.delay.ms 100
and
kvm config MyInstance-1 --list
Pekka
* Re: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
2011-04-20 5:41 ` Pekka Enberg
@ 2011-04-20 8:54 ` Ingo Molnar
0 siblings, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2011-04-20 8:54 UTC (permalink / raw)
To: Pekka Enberg; +Cc: Asias He, Sasha Levin, gorcunov, prasadjoshi124, kvm
* Pekka Enberg <penberg@kernel.org> wrote:
> Sorry for the bikeshedding but wouldn't it be better to follow Git's lead and
> have something like
>
> kvm config MyInstance-1 --set debug.io.delay.ms 100
>
> and
>
> kvm config MyInstance-1 --list
Yeah, agreed - 'kvm config' is intuitive. I tried to think of something better
than 'kvm set' but failed.
( And no, being super diligent with high level, very user visible changes and
names is not bikeshed painting. )
Note that 'git config' touches the .gitconfig IIRC - while here we really also
want to include runtime, dynamic configuration - but i think that distinction
is fine.
Now the whole 'kvm config' thing needs more thought and the whole enumeration
of KVM instances needs to be well thought out as well. How do we list instances
- 'kvm list' - or should perhaps 'kvm config' list all the currently running
instances?
Thanks,
Ingo