* [Qemu-devel] [PATCH v3 1/4] really fix -icount in the iothread case
From: Paolo Bonzini @ 2011-04-12 8:44 UTC
To: qemu-devel
The correct fix for -icount is to consider the biggest difference
between iothread and non-iothread modes. In the traditional model,
CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
for Win32). In the iothread model, CPUs run while the iothread
isn't holding the mutex, i.e. _during_ those same calls.
So, the iothread should always block as long as possible to let
the CPUs run smoothly---the timeout might as well be infinite---and
either the OS or the CPU thread itself will let the iothread know
when something happens. At this point, the iothread wakes up and
interrupts the CPU.
This is exactly the approach that this patch takes: when cpu_exec_all
returns in -icount mode because a vm_clock deadline has been met, the
CPU thread wakes up the iothread to process the timers. This is
really the "bulk" of fixing icount.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/cpus.c b/cpus.c
index 41bec7c..c72fbb7 100644
--- a/cpus.c
+++ b/cpus.c
@@ -830,6 +830,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
while (1) {
cpu_exec_all();
+ if (use_icount && qemu_next_deadline() <= 0) {
+ qemu_notify_event();
+ }
qemu_tcg_wait_io_event();
}
--
1.7.4
* [Qemu-devel] [PATCH v3 2/4] enable vm_clock to "warp" in the iothread+icount case
From: Paolo Bonzini @ 2011-04-12 8:44 UTC
To: qemu-devel
The previous patch, however, is not enough: if the virtual CPU goes
to sleep waiting for a future timer interrupt to wake it up, QEMU
deadlocks. The timer interrupt never comes, because time is driven
by icount but the vCPU is not running any insns.
One could argue that VCPUs should never go to sleep in icount mode
if there is a pending vm_clock timer; rather, time should just warp
to the next vm_clock event, with no sleep ever taking place. Even
better, the sleep can last for some time related to the time left
until the next event, so that the warps are not too visible
externally; without this, a guest that is supposed to send a network
packet every 100 ms could end up sending packets continuously.
This is what this patch implements. qemu_clock_warp is called: 1)
whenever a vm_clock timer is adjusted, to ensure the warp_timer is
synchronized; 2) at strategic points in the CPU thread, to make sure
the insn counter is synchronized before the CPU starts running.
In either case, the warp_timer is disabled while the CPU is running,
because the insn counter will then be making progress on its own.
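In pseudo-C, the warp protocol amounts to the following (a condensed
sketch of the qemu-timer.c hunk below; locking and corner cases
omitted):

    /* All CPU threads went idle: remember the current real time and
     * arm warp_timer (an rt_clock timer) for the next vm_clock
     * deadline.  */
    vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
    qemu_mod_timer(warp_timer, vm_clock_warp_start + qemu_next_deadline());

    /* warp_timer fired, or a CPU is about to run again: fold the real
     * time actually slept into the icount bias, so that vm_clock
     * "warps" forward and the pending timer can fire.  */
    warp_delta = qemu_get_clock_ns(rt_clock) - vm_clock_warp_start;
    qemu_icount_bias += warp_delta;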
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 8 +++++-
qemu-common.h | 1 +
qemu-timer.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
qemu-timer.h | 1 +
4 files changed, 78 insertions(+), 2 deletions(-)
diff --git a/cpus.c b/cpus.c
index c72fbb7..2ac2e9d 100644
--- a/cpus.c
+++ b/cpus.c
@@ -155,7 +155,7 @@ static bool cpu_thread_is_idle(CPUState *env)
return true;
}
-static bool all_cpu_threads_idle(void)
+bool all_cpu_threads_idle(void)
{
CPUState *env;
@@ -739,6 +739,9 @@ static void qemu_tcg_wait_io_event(void)
CPUState *env;
while (all_cpu_threads_idle()) {
+ /* Start accounting real time to the virtual clock if the CPUs
+ are idle. */
+ qemu_clock_warp(vm_clock);
qemu_cond_wait(tcg_halt_cond, &qemu_global_mutex);
}
@@ -1073,6 +1076,9 @@ bool cpu_exec_all(void)
{
int r;
+ /* Account partial waits to the vm_clock. */
+ qemu_clock_warp(vm_clock);
+
if (next_cpu == NULL) {
next_cpu = first_cpu;
}
diff --git a/qemu-common.h b/qemu-common.h
index 82e27c1..4f6037b 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -298,6 +298,7 @@ void qemu_notify_event(void);
void qemu_cpu_kick(void *env);
void qemu_cpu_kick_self(void);
int qemu_cpu_is_self(void *env);
+bool all_cpu_threads_idle(void);
/* work queue */
struct qemu_work_item {
diff --git a/qemu-timer.c b/qemu-timer.c
index 50f1943..4658134 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -153,6 +153,8 @@ void cpu_disable_ticks(void)
struct QEMUClock {
int type;
int enabled;
+
+ QEMUTimer *warp_timer;
};
struct QEMUTimer {
@@ -386,6 +388,66 @@ void qemu_clock_enable(QEMUClock *clock, int enabled)
clock->enabled = enabled;
}
+static int64_t vm_clock_warp_start;
+
+static void icount_warp_rt(void *opaque)
+{
+ if (vm_clock_warp_start == -1) {
+ return;
+ }
+
+ if (vm_running) {
+ int64_t clock = qemu_get_clock_ns(rt_clock);
+ int64_t warp_delta = clock - vm_clock_warp_start;
+ if (use_icount == 1) {
+ qemu_icount_bias += warp_delta;
+ } else {
+ /*
+ * In adaptive mode, do not let the vm_clock run too
+ * far ahead of real time.
+ */
+ int64_t cur_time = cpu_get_clock();
+ int64_t cur_icount = qemu_get_clock_ns(vm_clock);
+ int64_t delta = cur_time - cur_icount;
+ qemu_icount_bias += MIN(warp_delta, delta);
+ }
+ if (qemu_timer_expired(active_timers[QEMU_CLOCK_VIRTUAL],
+ qemu_get_clock_ns(vm_clock))) {
+ qemu_notify_event();
+ }
+ }
+ vm_clock_warp_start = -1;
+}
+
+void qemu_clock_warp(QEMUClock *clock)
+{
+ int64_t deadline;
+
+ if (!clock->warp_timer) {
+ return;
+ }
+
+ /*
+ * There are too many global variables to make the "warp" behavior
+ * applicable to other clocks. But a clock argument removes the
+ * need for if statements all over the place.
+ */
+ assert (clock == vm_clock);
+ icount_warp_rt(NULL);
+ if (!all_cpu_threads_idle() || !active_timers[clock->type]) {
+ qemu_del_timer(clock->warp_timer);
+ return;
+ }
+
+ vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
+ deadline = qemu_next_deadline();
+ if (deadline > 0) {
+ qemu_mod_timer(clock->warp_timer, vm_clock_warp_start + deadline);
+ } else {
+ qemu_notify_event();
+ }
+}
+
QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
QEMUTimerCB *cb, void *opaque)
{
@@ -454,8 +516,10 @@ static void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
qemu_rearm_alarm_timer(alarm_timer);
}
/* Interrupt execution to force deadline recalculation. */
- if (use_icount)
+ qemu_clock_warp(ts->clock);
+ if (use_icount) {
qemu_notify_event();
+ }
}
}
@@ -576,6 +640,10 @@ void configure_icount(const char *option)
if (!option)
return;
+#ifdef CONFIG_IOTHREAD
+ vm_clock->warp_timer = qemu_new_timer_ns(rt_clock, icount_warp_rt, NULL);
+#endif
+
if (strcmp(option, "auto") != 0) {
icount_time_shift = strtol(option, NULL, 0);
use_icount = 1;
diff --git a/qemu-timer.h b/qemu-timer.h
index 75d5675..c01bcab 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -39,6 +39,7 @@ extern QEMUClock *host_clock;
int64_t qemu_get_clock_ns(QEMUClock *clock);
void qemu_clock_enable(QEMUClock *clock, int enabled);
+void qemu_clock_warp(QEMUClock *clock);
QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
QEMUTimerCB *cb, void *opaque);
--
1.7.4
* [Qemu-devel] [PATCH v3 3/4] Revert wrong fixes for -icount in the iothread case
From: Paolo Bonzini @ 2011-04-12 8:44 UTC
To: qemu-devel
This reverts commits 225d02cd and c9f7383c. While some parts of
the latter could be saved, I preferred a smooth, complete revert.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
qemu-timer.c | 66 +++++++++++++++++++++++++++++++--------------------------
1 files changed, 36 insertions(+), 30 deletions(-)
diff --git a/qemu-timer.c b/qemu-timer.c
index 4658134..95f2251 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -110,9 +110,12 @@ static int64_t cpu_get_clock(void)
}
}
+#ifndef CONFIG_IOTHREAD
static int64_t qemu_icount_delta(void)
{
- if (use_icount == 1) {
+ if (!use_icount) {
+ return 5000 * (int64_t) 1000000;
+ } else if (use_icount == 1) {
/* When not using an adaptive execution frequency
we tend to get badly out of sync with real time,
so just delay for a reasonable amount of time. */
@@ -121,6 +124,7 @@ static int64_t qemu_icount_delta(void)
return cpu_get_icount() - cpu_get_clock();
}
}
+#endif
/* enable cpu_get_ticks() */
void cpu_enable_ticks(void)
@@ -1123,39 +1127,41 @@ void quit_timers(void)
int qemu_calculate_timeout(void)
{
+#ifndef CONFIG_IOTHREAD
int timeout;
- int64_t add;
- int64_t delta;
- /* When using icount, making forward progress with qemu_icount when the
- guest CPU is idle is critical. We only use the static io-thread timeout
- for non icount runs. */
- if (!use_icount || !vm_running) {
- return 5000;
- }
-
- /* Advance virtual time to the next event. */
- delta = qemu_icount_delta();
- if (delta > 0) {
- /* If virtual time is ahead of real time then just
- wait for IO. */
- timeout = (delta + 999999) / 1000000;
- } else {
- /* Wait for either IO to occur or the next
- timer event. */
- add = qemu_next_deadline();
- /* We advance the timer before checking for IO.
- Limit the amount we advance so that early IO
- activity won't get the guest too far ahead. */
- if (add > 10000000)
- add = 10000000;
- delta += add;
- qemu_icount += qemu_icount_round (add);
- timeout = delta / 1000000;
- if (timeout < 0)
- timeout = 0;
+ if (!vm_running)
+ timeout = 5000;
+ else {
+ /* XXX: use timeout computed from timers */
+ int64_t add;
+ int64_t delta;
+ /* Advance virtual time to the next event. */
+ delta = qemu_icount_delta();
+ if (delta > 0) {
+ /* If virtual time is ahead of real time then just
+ wait for IO. */
+ timeout = (delta + 999999) / 1000000;
+ } else {
+ /* Wait for either IO to occur or the next
+ timer event. */
+ add = qemu_next_deadline();
+ /* We advance the timer before checking for IO.
+ Limit the amount we advance so that early IO
+ activity won't get the guest too far ahead. */
+ if (add > 10000000)
+ add = 10000000;
+ delta += add;
+ qemu_icount += qemu_icount_round (add);
+ timeout = delta / 1000000;
+ if (timeout < 0)
+ timeout = 0;
+ }
}
return timeout;
+#else /* CONFIG_IOTHREAD */
+ return 1000;
+#endif
}
--
1.7.4
* [Qemu-devel] [PATCH v3 4/4] qemu_next_deadline should not consider host-time timers
From: Paolo Bonzini @ 2011-04-12 8:44 UTC
To: qemu-devel
qemu_next_deadline is used purely for icount-based virtual timers, so
it should not look at host-time timers. Now that the code is right,
rename the function to qemu_next_icount_deadline to clarify the
intended scope.
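For reference, after the rename the function reduces to this
(condensed from the qemu-timer.c hunk below):

    int64_t qemu_next_icount_deadline(void)
    {
        /* To avoid problems with overflow limit this to 2^32.  */
        int64_t delta = INT32_MAX;

        assert(use_icount);
        if (active_timers[QEMU_CLOCK_VIRTUAL]) {
            delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
                    qemu_get_clock_ns(vm_clock);
        }
        return delta < 0 ? 0 : delta;
    }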
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 4 ++--
qemu-timer.c | 13 ++++---------
qemu-timer.h | 2 +-
3 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/cpus.c b/cpus.c
index 2ac2e9d..4e8e386 100644
--- a/cpus.c
+++ b/cpus.c
@@ -833,7 +833,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
while (1) {
cpu_exec_all();
- if (use_icount && qemu_next_deadline() <= 0) {
+ if (use_icount && qemu_next_icount_deadline() <= 0) {
qemu_notify_event();
}
qemu_tcg_wait_io_event();
@@ -1050,7 +1050,7 @@ static int tcg_cpu_exec(CPUState *env)
qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
env->icount_decr.u16.low = 0;
env->icount_extra = 0;
- count = qemu_icount_round (qemu_next_deadline());
+ count = qemu_icount_round (qemu_next_icount_deadline());
qemu_icount += count;
decr = (count > 0xffff) ? 0xffff : count;
count -= decr;
diff --git a/qemu-timer.c b/qemu-timer.c
index 95f2251..e27bec1 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -444,7 +444,7 @@ void qemu_clock_warp(QEMUClock *clock)
}
vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
- deadline = qemu_next_deadline();
+ deadline = qemu_next_icount_deadline();
if (deadline > 0) {
qemu_mod_timer(clock->warp_timer, vm_clock_warp_start + deadline);
} else {
@@ -741,21 +741,16 @@ static void host_alarm_handler(int host_signum)
}
}
-int64_t qemu_next_deadline(void)
+int64_t qemu_next_icount_deadline(void)
{
/* To avoid problems with overflow limit this to 2^32. */
int64_t delta = INT32_MAX;
+ assert(use_icount);
if (active_timers[QEMU_CLOCK_VIRTUAL]) {
delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
qemu_get_clock_ns(vm_clock);
}
- if (active_timers[QEMU_CLOCK_HOST]) {
- int64_t hdelta = active_timers[QEMU_CLOCK_HOST]->expire_time -
- qemu_get_clock_ns(host_clock);
- if (hdelta < delta)
- delta = hdelta;
- }
if (delta < 0)
delta = 0;
@@ -1145,7 +1140,7 @@ int qemu_calculate_timeout(void)
} else {
/* Wait for either IO to occur or the next
timer event. */
- add = qemu_next_deadline();
+ add = qemu_next_icount_deadline();
/* We advance the timer before checking for IO.
Limit the amount we advance so that early IO
activity won't get the guest too far ahead. */
diff --git a/qemu-timer.h b/qemu-timer.h
index c01bcab..3a9228f 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -51,7 +51,7 @@ int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
void qemu_run_all_timers(void);
int qemu_alarm_pending(void);
-int64_t qemu_next_deadline(void);
+int64_t qemu_next_icount_deadline(void);
void configure_alarms(char const *opt);
void configure_icount(const char *option);
int qemu_calculate_timeout(void);
--
1.7.4
* Re: [Qemu-devel] [PATCH v3 0/4] Fix -icount with iothread
From: Jan Kiszka @ 2011-04-12 9:26 UTC
To: Paolo Bonzini; +Cc: qemu-devel
On 2011-04-12 10:44, Paolo Bonzini wrote:
> This series finally fixes -icount with iothread and avoids deadlocks
> due to the vm_clock not making progress when the VM is stopped.
> The crux of the fix is in patch 1, while patch 2 implements the
> "clock warping" that fixes deadlocks in v2. Clock warping uses
> the nanosecond resolution rt_clock timers introduced by my previous
> series.
>
> With this in place, patch 3 can revert the previous attempt(s).
> Finally, patch 4 makes the icount code clearer by finishing the
> bugfix/reorganization of qemu_next_deadline vs. qemu_next_alarm_deadline.
>
> v1->v2:
> reordered patches, renamed qemu_next_deadline
>
> v2->v3:
> introduced warp timer
>
> Paolo Bonzini (4):
> really fix -icount in the iothread case
> enable vm_clock to "warp" in the iothread+icount case
> Revert wrong fixes for -icount in the iothread case
> qemu_next_deadline should not consider host-time timers
>
> cpus.c | 13 ++++-
> qemu-common.h | 1 +
> qemu-timer.c | 145 ++++++++++++++++++++++++++++++++++++++++++---------------
> qemu-timer.h | 3 +-
> 4 files changed, 121 insertions(+), 41 deletions(-)
>
At first glance, I've spotted a few coding style issues. Try checkpatch
(maybe excluding patch 3).
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH v3 0/4] Fix -icount with iothread
From: Paolo Bonzini @ 2011-04-12 13:26 UTC
To: Jan Kiszka; +Cc: Edgar E. Iglesias, qemu-devel
On 04/12/2011 11:26 AM, Jan Kiszka wrote:
> On 2011-04-12 10:44, Paolo Bonzini wrote:
>> [...]
>
> At first glance, I've spotted a few coding style issues. Try checkpatch
> (maybe excluding patch 3).
Will repost, testing is welcome in the meanwhile!
Paolo
* Re: [Qemu-devel] [PATCH v3 0/4] Fix -icount with iothread
From: Edgar E. Iglesias @ 2011-04-12 21:50 UTC
To: Paolo Bonzini; +Cc: Jan Kiszka, qemu-devel
On Tue, Apr 12, 2011 at 03:26:39PM +0200, Paolo Bonzini wrote:
> On 04/12/2011 11:26 AM, Jan Kiszka wrote:
> > On 2011-04-12 10:44, Paolo Bonzini wrote:
> >> [...]
> >
> > At first glance, I've spotted a few coding style issues. Try checkpatch
> > (maybe excluding patch 3).
>
> Will repost, testing is welcome in the meanwhile!
The logic of the patches looks good to me, but it would be nice if you
could add a comment in the code regarding why we do the "warping". I think
parts of it could be based on the commit message.
I also tested the code and it works beautifully for my testcases!
iothread & icount ran faster than icount without iothread.
Thanks a lot, cheers