qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services
@ 2012-04-05 10:59 Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
                   ` (9 more replies)
  0 siblings, 10 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Michael S. Tsirkin

Patches 1-3 are unmodified from the previous round. Patches 4-10 aim at
consolidating our event abstractions into a single QemuEvent object.
This is heavily based on Paolo's work, just slightly refactored, plugged
into the qemu-thread-* infrastructure, and applied to more consumers.

CC: Michael S. Tsirkin <mst@redhat.com>

Jan Kiszka (10):
  Introduce qemu_cond_timedwait for POSIX
  Switch POSIX compat AIO to QEMU abstractions
  Switch compatfd to QEMU thread
  qemu-thread: Factor out qemu_error_exit
  Introduce QemuEvent abstraction
  Use QemuEvent in main loop
  Drop unused qemu_eventfd
  Use QemuEvent for POSIX AIO
  virtio: Switch to QemuEvent
  Remove EventNotifier

 Makefile            |    2 +-
 Makefile.objs       |    7 ++-
 compatfd.c          |   16 +----
 event_notifier.c    |   61 -----------------
 event_notifier.h    |   28 --------
 hw/vhost.c          |    4 +-
 hw/virtio-pci.c     |   61 +++++++----------
 hw/virtio.c         |   12 ++--
 hw/virtio.h         |    6 +-
 main-loop.c         |  104 ++++--------------------------
 oslib-posix.c       |   31 ---------
 posix-aio-compat.c  |  180 ++++++++++++---------------------------------------
 qemu-common.h       |    1 -
 qemu-event-posix.c  |  110 +++++++++++++++++++++++++++++++
 qemu-event-win32.c  |   65 ++++++++++++++++++
 qemu-thread-posix.c |   51 +++++++++++----
 qemu-thread-posix.h |   10 +++
 qemu-thread-win32.c |   18 +++---
 qemu-thread-win32.h |    4 +
 qemu-thread.h       |   14 ++++
 20 files changed, 349 insertions(+), 436 deletions(-)
 delete mode 100644 event_notifier.c
 delete mode 100644 event_notifier.h
 create mode 100644 qemu-event-posix.c
 create mode 100644 qemu-event-win32.c

-- 
1.7.3.4


* [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 11:19   ` Peter Maydell
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 02/10] Switch POSIX compat AIO to QEMU abstractions Jan Kiszka
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

The first user will be POSIX compat AIO. Windows use cases aren't in
sight, so this remains a POSIX-only service for now.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 qemu-thread-posix.c |   23 +++++++++++++++++++++++
 qemu-thread-posix.h |    5 +++++
 2 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/qemu-thread-posix.c b/qemu-thread-posix.c
index 9e1b5fb..cd65df2 100644
--- a/qemu-thread-posix.c
+++ b/qemu-thread-posix.c
@@ -17,6 +17,7 @@
 #include <signal.h>
 #include <stdint.h>
 #include <string.h>
+#include <sys/time.h>
 #include "qemu-thread.h"
 
 static void error_exit(int err, const char *msg)
@@ -115,6 +116,28 @@ void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
         error_exit(err, __func__);
 }
 
+/* Returns true if condition was signaled, false if timed out. */
+bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
+                         unsigned int timeout_ms)
+{
+    struct timespec ts;
+    struct timeval tv;
+    int err;
+
+    gettimeofday(&tv, NULL);
+    ts.tv_sec = tv.tv_sec + timeout_ms / 1000;
+    ts.tv_nsec = tv.tv_usec * 1000 + (timeout_ms % 1000) * 1000000;
+    if (ts.tv_nsec >= 1000000000) {
+        ts.tv_sec++;
+        ts.tv_nsec -= 1000000000;
+    }
+    err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);
+    if (err && err != ETIMEDOUT) {
+        error_exit(err, __func__);
+    }
+    return err == 0;
+}
+
 void qemu_thread_create(QemuThread *thread,
                        void *(*start_routine)(void*),
                        void *arg, int mode)
diff --git a/qemu-thread-posix.h b/qemu-thread-posix.h
index ee4618e..9f00524 100644
--- a/qemu-thread-posix.h
+++ b/qemu-thread-posix.h
@@ -1,5 +1,6 @@
 #ifndef __QEMU_THREAD_POSIX_H
 #define __QEMU_THREAD_POSIX_H 1
+#include <stdbool.h>
 #include "pthread.h"
 
 struct QemuMutex {
@@ -14,4 +15,8 @@ struct QemuThread {
     pthread_t thread;
 };
 
+/* Only provided for POSIX so far. */
+bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
+                         unsigned int timeout_ms);
+
 #endif
-- 
1.7.3.4


* [Qemu-devel] [PATCH v3 02/10] Switch POSIX compat AIO to QEMU abstractions
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 03/10] Switch compatfd to QEMU thread Jan Kiszka
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

Although there is nothing to wrap for non-POSIX here, redirecting thread
and synchronization services to our core simplifies management jobs such
as scheduling parameter adjustment. It also frees compat AIO from some
code duplicated with qemu-thread.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 posix-aio-compat.c |  118 +++++++++++++++------------------------------------
 1 files changed, 35 insertions(+), 83 deletions(-)

diff --git a/posix-aio-compat.c b/posix-aio-compat.c
index d311d13..c9b8ebf 100644
--- a/posix-aio-compat.c
+++ b/posix-aio-compat.c
@@ -15,7 +15,6 @@
 
 #include <sys/ioctl.h>
 #include <sys/types.h>
-#include <pthread.h>
 #include <unistd.h>
 #include <errno.h>
 #include <time.h>
@@ -29,9 +28,12 @@
 #include "qemu-common.h"
 #include "trace.h"
 #include "block_int.h"
+#include "qemu-thread.h"
 
 #include "block/raw-posix-aio.h"
 
+#define AIO_THREAD_IDLE_TIMEOUT 10000 /* 10 s */
+
 static void do_spawn_thread(void);
 
 struct qemu_paiocb {
@@ -59,10 +61,9 @@ typedef struct PosixAioState {
 } PosixAioState;
 
 
-static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
-static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
-static pthread_t thread_id;
-static pthread_attr_t attr;
+static QemuMutex lock;
+static QemuCond cond;
+static QemuThread thread;
 static int max_threads = 64;
 static int cur_threads = 0;
 static int idle_threads = 0;
@@ -88,39 +89,6 @@ static void die(const char *what)
     die2(errno, what);
 }
 
-static void mutex_lock(pthread_mutex_t *mutex)
-{
-    int ret = pthread_mutex_lock(mutex);
-    if (ret) die2(ret, "pthread_mutex_lock");
-}
-
-static void mutex_unlock(pthread_mutex_t *mutex)
-{
-    int ret = pthread_mutex_unlock(mutex);
-    if (ret) die2(ret, "pthread_mutex_unlock");
-}
-
-static int cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
-                           struct timespec *ts)
-{
-    int ret = pthread_cond_timedwait(cond, mutex, ts);
-    if (ret && ret != ETIMEDOUT) die2(ret, "pthread_cond_timedwait");
-    return ret;
-}
-
-static void cond_signal(pthread_cond_t *cond)
-{
-    int ret = pthread_cond_signal(cond);
-    if (ret) die2(ret, "pthread_cond_signal");
-}
-
-static void thread_create(pthread_t *thread, pthread_attr_t *attr,
-                          void *(*start_routine)(void*), void *arg)
-{
-    int ret = pthread_create(thread, attr, start_routine, arg);
-    if (ret) die2(ret, "pthread_create");
-}
-
 static ssize_t handle_aiocb_ioctl(struct qemu_paiocb *aiocb)
 {
     int ret;
@@ -313,28 +281,26 @@ static void posix_aio_notify_event(void);
 
 static void *aio_thread(void *unused)
 {
-    mutex_lock(&lock);
+    qemu_mutex_lock(&lock);
     pending_threads--;
-    mutex_unlock(&lock);
+    qemu_mutex_unlock(&lock);
     do_spawn_thread();
 
     while (1) {
         struct qemu_paiocb *aiocb;
-        ssize_t ret = 0;
-        qemu_timeval tv;
-        struct timespec ts;
-
-        qemu_gettimeofday(&tv);
-        ts.tv_sec = tv.tv_sec + 10;
-        ts.tv_nsec = 0;
+        bool timed_out;
+        ssize_t ret;
 
-        mutex_lock(&lock);
+        qemu_mutex_lock(&lock);
 
-        while (QTAILQ_EMPTY(&request_list) &&
-               !(ret == ETIMEDOUT)) {
+        while (QTAILQ_EMPTY(&request_list)) {
             idle_threads++;
-            ret = cond_timedwait(&cond, &lock, &ts);
+            timed_out = !qemu_cond_timedwait(&cond, &lock,
+                                             AIO_THREAD_IDLE_TIMEOUT);
             idle_threads--;
+            if (timed_out) {
+                break;
+            }
         }
 
         if (QTAILQ_EMPTY(&request_list))
@@ -343,7 +309,7 @@ static void *aio_thread(void *unused)
         aiocb = QTAILQ_FIRST(&request_list);
         QTAILQ_REMOVE(&request_list, aiocb, node);
         aiocb->active = 1;
-        mutex_unlock(&lock);
+        qemu_mutex_unlock(&lock);
 
         switch (aiocb->aio_type & QEMU_AIO_TYPE_MASK) {
         case QEMU_AIO_READ:
@@ -375,41 +341,33 @@ static void *aio_thread(void *unused)
             break;
         }
 
-        mutex_lock(&lock);
+        qemu_mutex_lock(&lock);
         aiocb->ret = ret;
-        mutex_unlock(&lock);
+        qemu_mutex_unlock(&lock);
 
         posix_aio_notify_event();
     }
 
     cur_threads--;
-    mutex_unlock(&lock);
+    qemu_mutex_unlock(&lock);
 
     return NULL;
 }
 
 static void do_spawn_thread(void)
 {
-    sigset_t set, oldset;
-
-    mutex_lock(&lock);
+    qemu_mutex_lock(&lock);
     if (!new_threads) {
-        mutex_unlock(&lock);
+        qemu_mutex_unlock(&lock);
         return;
     }
 
     new_threads--;
     pending_threads++;
 
-    mutex_unlock(&lock);
+    qemu_mutex_unlock(&lock);
 
-    /* block all signals */
-    if (sigfillset(&set)) die("sigfillset");
-    if (sigprocmask(SIG_SETMASK, &set, &oldset)) die("sigprocmask");
-
-    thread_create(&thread_id, &attr, aio_thread, NULL);
-
-    if (sigprocmask(SIG_SETMASK, &oldset, NULL)) die("sigprocmask restore");
+    qemu_thread_create(&thread, aio_thread, NULL, QEMU_THREAD_DETACHED);
 }
 
 static void spawn_thread_bh_fn(void *opaque)
@@ -437,21 +395,21 @@ static void qemu_paio_submit(struct qemu_paiocb *aiocb)
 {
     aiocb->ret = -EINPROGRESS;
     aiocb->active = 0;
-    mutex_lock(&lock);
+    qemu_mutex_lock(&lock);
     if (idle_threads == 0 && cur_threads < max_threads)
         spawn_thread();
     QTAILQ_INSERT_TAIL(&request_list, aiocb, node);
-    mutex_unlock(&lock);
-    cond_signal(&cond);
+    qemu_mutex_unlock(&lock);
+    qemu_cond_signal(&cond);
 }
 
 static ssize_t qemu_paio_return(struct qemu_paiocb *aiocb)
 {
     ssize_t ret;
 
-    mutex_lock(&lock);
+    qemu_mutex_lock(&lock);
     ret = aiocb->ret;
-    mutex_unlock(&lock);
+    qemu_mutex_unlock(&lock);
 
     return ret;
 }
@@ -582,14 +540,14 @@ static void paio_cancel(BlockDriverAIOCB *blockacb)
 
     trace_paio_cancel(acb, acb->common.opaque);
 
-    mutex_lock(&lock);
+    qemu_mutex_lock(&lock);
     if (!acb->active) {
         QTAILQ_REMOVE(&request_list, acb, node);
         acb->ret = -ECANCELED;
     } else if (acb->ret == -EINPROGRESS) {
         active = 1;
     }
-    mutex_unlock(&lock);
+    qemu_mutex_unlock(&lock);
 
     if (active) {
         /* fail safe: if the aio could not be canceled, we wait for
@@ -655,11 +613,13 @@ int paio_init(void)
 {
     PosixAioState *s;
     int fds[2];
-    int ret;
 
     if (posix_aio_state)
         return 0;
 
+    qemu_mutex_init(&lock);
+    qemu_cond_init(&cond);
+
     s = g_malloc(sizeof(PosixAioState));
 
     s->first_aio = NULL;
@@ -678,14 +638,6 @@ int paio_init(void)
     qemu_aio_set_fd_handler(s->rfd, posix_aio_read, NULL, posix_aio_flush,
         posix_aio_process_queue, s);
 
-    ret = pthread_attr_init(&attr);
-    if (ret)
-        die2(ret, "pthread_attr_init");
-
-    ret = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
-    if (ret)
-        die2(ret, "pthread_attr_setdetachstate");
-
     QTAILQ_INIT(&request_list);
     new_thread_bh = qemu_bh_new(spawn_thread_bh_fn, NULL);
 
-- 
1.7.3.4


* [Qemu-devel] [PATCH v3 03/10] Switch compatfd to QEMU thread
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 02/10] Switch POSIX compat AIO to QEMU abstractions Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 04/10] qemu-thread: Factor out qemu_error_exit Jan Kiszka
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

qemu_thread_create already does signal blocking and detaching for us.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 compatfd.c |   16 +++-------------
 1 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/compatfd.c b/compatfd.c
index 42f81ca..8d5a63f 100644
--- a/compatfd.c
+++ b/compatfd.c
@@ -14,10 +14,10 @@
  */
 
 #include "qemu-common.h"
+#include "qemu-thread.h"
 #include "compatfd.h"
 
 #include <sys/syscall.h>
-#include <pthread.h>
 
 struct sigfd_compat_info
 {
@@ -28,10 +28,6 @@ struct sigfd_compat_info
 static void *sigwait_compat(void *opaque)
 {
     struct sigfd_compat_info *info = opaque;
-    sigset_t all;
-
-    sigfillset(&all);
-    pthread_sigmask(SIG_BLOCK, &all, NULL);
 
     while (1) {
         int sig;
@@ -71,9 +67,8 @@ static void *sigwait_compat(void *opaque)
 
 static int qemu_signalfd_compat(const sigset_t *mask)
 {
-    pthread_attr_t attr;
-    pthread_t tid;
     struct sigfd_compat_info *info;
+    QemuThread thread;
     int fds[2];
 
     info = malloc(sizeof(*info));
@@ -93,12 +88,7 @@ static int qemu_signalfd_compat(const sigset_t *mask)
     memcpy(&info->mask, mask, sizeof(*mask));
     info->fd = fds[1];
 
-    pthread_attr_init(&attr);
-    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
-
-    pthread_create(&tid, &attr, sigwait_compat, info);
-
-    pthread_attr_destroy(&attr);
+    qemu_thread_create(&thread, sigwait_compat, info, QEMU_THREAD_DETACHED);
 
     return fds[0];
 }
-- 
1.7.3.4


* [Qemu-devel] [PATCH v3 04/10] qemu-thread: Factor out qemu_error_exit
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (2 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 03/10] Switch compatfd to QEMU thread Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction Jan Kiszka
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

Will be used in another wrapper module soon.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 qemu-thread-posix.c |   30 +++++++++++++++---------------
 qemu-thread-win32.c |   18 +++++++++---------
 qemu-thread.h       |    2 ++
 3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/qemu-thread-posix.c b/qemu-thread-posix.c
index cd65df2..6b687f0 100644
--- a/qemu-thread-posix.c
+++ b/qemu-thread-posix.c
@@ -20,7 +20,7 @@
 #include <sys/time.h>
 #include "qemu-thread.h"
 
-static void error_exit(int err, const char *msg)
+void qemu_error_exit(int err, const char *msg)
 {
     fprintf(stderr, "qemu: %s: %s\n", msg, strerror(err));
     abort();
@@ -36,7 +36,7 @@ void qemu_mutex_init(QemuMutex *mutex)
     err = pthread_mutex_init(&mutex->lock, &mutexattr);
     pthread_mutexattr_destroy(&mutexattr);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_mutex_destroy(QemuMutex *mutex)
@@ -45,7 +45,7 @@ void qemu_mutex_destroy(QemuMutex *mutex)
 
     err = pthread_mutex_destroy(&mutex->lock);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_mutex_lock(QemuMutex *mutex)
@@ -54,7 +54,7 @@ void qemu_mutex_lock(QemuMutex *mutex)
 
     err = pthread_mutex_lock(&mutex->lock);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 int qemu_mutex_trylock(QemuMutex *mutex)
@@ -68,7 +68,7 @@ void qemu_mutex_unlock(QemuMutex *mutex)
 
     err = pthread_mutex_unlock(&mutex->lock);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_cond_init(QemuCond *cond)
@@ -77,7 +77,7 @@ void qemu_cond_init(QemuCond *cond)
 
     err = pthread_cond_init(&cond->cond, NULL);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_cond_destroy(QemuCond *cond)
@@ -86,7 +86,7 @@ void qemu_cond_destroy(QemuCond *cond)
 
     err = pthread_cond_destroy(&cond->cond);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_cond_signal(QemuCond *cond)
@@ -95,7 +95,7 @@ void qemu_cond_signal(QemuCond *cond)
 
     err = pthread_cond_signal(&cond->cond);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_cond_broadcast(QemuCond *cond)
@@ -104,7 +104,7 @@ void qemu_cond_broadcast(QemuCond *cond)
 
     err = pthread_cond_broadcast(&cond->cond);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
@@ -113,7 +113,7 @@ void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
 
     err = pthread_cond_wait(&cond->cond, &mutex->lock);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 }
 
 /* Returns true if condition was signaled, false if timed out. */
@@ -133,7 +133,7 @@ bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
     }
     err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);
     if (err && err != ETIMEDOUT) {
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
     }
     return err == 0;
 }
@@ -148,12 +148,12 @@ void qemu_thread_create(QemuThread *thread,
 
     err = pthread_attr_init(&attr);
     if (err) {
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
     }
     if (mode == QEMU_THREAD_DETACHED) {
         err = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
         if (err) {
-            error_exit(err, __func__);
+            qemu_error_exit(err, __func__);
         }
     }
 
@@ -162,7 +162,7 @@ void qemu_thread_create(QemuThread *thread,
     pthread_sigmask(SIG_SETMASK, &set, &oldset);
     err = pthread_create(&thread->thread, &attr, start_routine, arg);
     if (err)
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
 
     pthread_sigmask(SIG_SETMASK, &oldset, NULL);
 
@@ -191,7 +191,7 @@ void *qemu_thread_join(QemuThread *thread)
 
     err = pthread_join(thread->thread, &ret);
     if (err) {
-        error_exit(err, __func__);
+        qemu_error_exit(err, __func__);
     }
     return ret;
 }
diff --git a/qemu-thread-win32.c b/qemu-thread-win32.c
index 3524c8b..01664fb 100644
--- a/qemu-thread-win32.c
+++ b/qemu-thread-win32.c
@@ -16,7 +16,7 @@
 #include <assert.h>
 #include <limits.h>
 
-static void error_exit(int err, const char *msg)
+void qemu_error_exit(int err, const char *msg)
 {
     char *pstr;
 
@@ -75,14 +75,14 @@ void qemu_cond_init(QemuCond *cond)
 
     cond->sema = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
     if (!cond->sema) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
     cond->continue_event = CreateEvent(NULL,    /* security */
                                        FALSE,   /* auto-reset */
                                        FALSE,   /* not signaled */
                                        NULL);   /* name */
     if (!cond->continue_event) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
 }
 
@@ -91,12 +91,12 @@ void qemu_cond_destroy(QemuCond *cond)
     BOOL result;
     result = CloseHandle(cond->continue_event);
     if (!result) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
     cond->continue_event = 0;
     result = CloseHandle(cond->sema);
     if (!result) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
     cond->sema = 0;
 }
@@ -124,7 +124,7 @@ void qemu_cond_signal(QemuCond *cond)
     result = SignalObjectAndWait(cond->sema, cond->continue_event,
                                  INFINITE, FALSE);
     if (result == WAIT_ABANDONED || result == WAIT_FAILED) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
 }
 
@@ -142,7 +142,7 @@ void qemu_cond_broadcast(QemuCond *cond)
     cond->target = 0;
     result = ReleaseSemaphore(cond->sema, cond->waiters, NULL);
     if (!result) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
 
     /*
@@ -268,7 +268,7 @@ static inline void qemu_thread_init(void)
     if (qemu_thread_tls_index == TLS_OUT_OF_INDEXES) {
         qemu_thread_tls_index = TlsAlloc();
         if (qemu_thread_tls_index == TLS_OUT_OF_INDEXES) {
-            error_exit(ERROR_NO_SYSTEM_RESOURCES, __func__);
+            qemu_error_exit(ERROR_NO_SYSTEM_RESOURCES, __func__);
         }
     }
 }
@@ -295,7 +295,7 @@ void qemu_thread_create(QemuThread *thread,
     hThread = (HANDLE) _beginthreadex(NULL, 0, win32_start_routine,
                                       data, 0, &thread->tid);
     if (!hThread) {
-        error_exit(GetLastError(), __func__);
+        qemu_error_exit(GetLastError(), __func__);
     }
     CloseHandle(hThread);
     thread->data = (mode == QEMU_THREAD_DETACHED) ? NULL : data;
diff --git a/qemu-thread.h b/qemu-thread.h
index a78a8f2..10f2c51 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -45,4 +45,6 @@ void qemu_thread_get_self(QemuThread *thread);
 int qemu_thread_is_self(QemuThread *thread);
 void qemu_thread_exit(void *retval);
 
+/* internal helper */
+void qemu_error_exit(int err, const char *msg);
 #endif
-- 
1.7.3.4


* [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (3 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 04/10] qemu-thread: Factor out qemu_error_exit Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 11:23   ` Paolo Bonzini
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 06/10] Use QemuEvent in main loop Jan Kiszka
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

Provide generic services for binary events. Blocking wait would be
feasible but is not included yet as there are no users.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 Makefile            |    2 +-
 Makefile.objs       |    5 ++-
 qemu-event-posix.c  |  110 +++++++++++++++++++++++++++++++++++++++++++++++++++
 qemu-event-win32.c  |   65 ++++++++++++++++++++++++++++++
 qemu-thread-posix.h |    5 ++
 qemu-thread-win32.h |    4 ++
 qemu-thread.h       |   12 ++++++
 7 files changed, 201 insertions(+), 2 deletions(-)
 create mode 100644 qemu-event-posix.c
 create mode 100644 qemu-event-win32.c

diff --git a/Makefile b/Makefile
index 35c7a2a..ea127f9 100644
--- a/Makefile
+++ b/Makefile
@@ -153,7 +153,7 @@ endif
 qemu-img.o: qemu-img-cmds.h
 qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
 
-tools-obj-y = $(oslib-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
+tools-obj-y = $(oslib-obj-y) $(event-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
 	qemu-timer-common.o main-loop.o notify.o iohandler.o cutils.o async.o
 tools-obj-$(CONFIG_POSIX) += compatfd.o
 
diff --git a/Makefile.objs b/Makefile.objs
index f308b57..377bfe2 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -24,6 +24,9 @@ oslib-obj-y = osdep.o
 oslib-obj-$(CONFIG_WIN32) += oslib-win32.o qemu-thread-win32.o
 oslib-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o
 
+event-obj-$(CONFIG_POSIX) = qemu-event-posix.o
+event-obj-$(CONFIG_WIN32) = qemu-event-win32.o
+
 #######################################################################
 # coroutines
 coroutine-obj-y = qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
@@ -97,7 +100,7 @@ common-obj-y += $(net-obj-y)
 common-obj-y += $(qom-obj-twice-y)
 common-obj-$(CONFIG_LINUX) += $(fsdev-obj-$(CONFIG_LINUX))
 common-obj-y += readline.o console.o cursor.o
-common-obj-y += $(oslib-obj-y)
+common-obj-y += $(oslib-obj-y) $(event-obj-y)
 common-obj-$(CONFIG_WIN32) += os-win32.o
 common-obj-$(CONFIG_POSIX) += os-posix.o
 
diff --git a/qemu-event-posix.c b/qemu-event-posix.c
new file mode 100644
index 0000000..6138168
--- /dev/null
+++ b/qemu-event-posix.c
@@ -0,0 +1,110 @@
+/*
+ * POSIX implementation of the event signaling service
+ *
+ * Copyright Red Hat, Inc. 2012
+ * Copyright Siemens AG 2012
+ *
+ * Authors:
+ *  Paolo Bonzini <pbonzini@redhat.com>
+ *  Jan Kiszka <jan.kiszka@siemens.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include "qemu-thread.h"
+#include "qemu-common.h"
+#include "main-loop.h"
+
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif
+
+void qemu_event_init(QemuEvent *event, bool signaled)
+{
+    int fds[2];
+    int ret;
+
+#ifdef CONFIG_EVENTFD
+    ret = eventfd(signaled, EFD_NONBLOCK | EFD_CLOEXEC);
+    if (ret >= 0) {
+        event->rfd = ret;
+        event->wfd = dup(ret);
+        if (event->wfd < 0) {
+            qemu_error_exit(errno, __func__);
+        }
+        qemu_set_cloexec(event->wfd);
+        return;
+    }
+    if (errno != ENOSYS) {
+        qemu_error_exit(errno, __func__);
+    }
+    /* fall back to pipe-based support */
+#endif
+
+    ret = qemu_pipe(fds);
+    if (ret < 0) {
+        qemu_error_exit(errno, __func__);
+    }
+    event->rfd = fds[0];
+    event->wfd = fds[1];
+    if (signaled) {
+        qemu_event_signal(event);
+    }
+}
+
+void qemu_event_destroy(QemuEvent *event)
+{
+    close(event->rfd);
+    close(event->wfd);
+}
+
+int qemu_event_get_signal_fd(QemuEvent *event)
+{
+    return event->wfd;
+}
+
+int qemu_event_get_poll_fd(QemuEvent *event)
+{
+    return event->rfd;
+}
+
+void qemu_event_set_handler(QemuEvent *event, QemuEventHandler *handler,
+                            void *opaque)
+{
+    qemu_set_fd_handler2(event->rfd, NULL, (IOHandler *)handler, NULL, opaque);
+}
+
+void qemu_event_signal(QemuEvent *event)
+{
+    /* Write 8 bytes to be compatible with eventfd. */
+    static const uint64_t val = 1;
+    ssize_t ret;
+
+    do {
+        ret = write(event->wfd, &val, sizeof(val));
+    } while (ret < 0 && errno == EINTR);
+
+    if (ret < 0 && errno != EAGAIN) {
+        qemu_error_exit(errno, __func__);
+    }
+}
+
+bool qemu_event_consume(QemuEvent *event)
+{
+    bool was_set = false;
+    uint64_t val;
+    int ret;
+
+    while (1) {
+        ret = read(event->rfd, &val, sizeof(val));
+        if (ret == sizeof(val)) {
+            was_set = true;
+            continue;
+        }
+        if (ret < 0 && errno == EAGAIN) {
+            return was_set;
+        }
+        qemu_error_exit(errno, __func__);
+    }
+}
diff --git a/qemu-event-win32.c b/qemu-event-win32.c
new file mode 100644
index 0000000..38fe9ae
--- /dev/null
+++ b/qemu-event-win32.c
@@ -0,0 +1,65 @@
+/*
+ * Win32 implementation of the event signaling service
+ *
+ * Copyright Red Hat, Inc. 2012
+ * Copyright Siemens AG 2012
+ *
+ * Authors:
+ *  Paolo Bonzini <pbonzini@redhat.com>
+ *  Jan Kiszka <jan.kiszka@siemens.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include <stdbool.h>
+#include "qemu-thread.h"
+#include "qemu-common.h"
+#include "main-loop.h"
+
+void qemu_event_init(QemuEvent *event, bool signaled)
+{
+    event->event = CreateEvent(NULL, FALSE, signaled, NULL);
+    if (!event->event) {
+        qemu_error_exit(GetLastError(), __func__);
+    }
+}
+
+void qemu_event_destroy(QemuEvent *event)
+{
+    CloseHandle(event->event);
+}
+
+int qemu_event_get_signal_fd(QemuEvent *event)
+{
+    /* unsupported on Win32 */
+    abort();
+}
+
+void qemu_event_set_handler(QemuEvent *event, QemuEventHandler *handler,
+                            void *opaque)
+{
+    if (handler) {
+        qemu_add_wait_object(event->event, (IOHandler *)handler, opaque);
+    } else {
+        qemu_del_wait_object(event->event, (IOHandler *)handler, opaque);
+    }
+}
+
+void qemu_event_signal(QemuEvent *event)
+{
+    if (!SetEvent(event->event)) {
+        qemu_error_exit(GetLastError(), __func__);
+    }
+}
+
+bool qemu_event_consume(QemuEvent *event)
+{
+    DWORD ret;
+
+    ret = WaitForSingleObject(event->event, 0);
+    if (ret == WAIT_FAILED) {
+        qemu_error_exit(GetLastError(), __func__);
+    }
+    return ret == WAIT_OBJECT_0;
+}
diff --git a/qemu-thread-posix.h b/qemu-thread-posix.h
index 9f00524..95fbb45 100644
--- a/qemu-thread-posix.h
+++ b/qemu-thread-posix.h
@@ -11,6 +11,11 @@ struct QemuCond {
     pthread_cond_t cond;
 };
 
+struct QemuEvent {
+    int rfd;
+    int wfd;
+};
+
 struct QemuThread {
     pthread_t thread;
 };
diff --git a/qemu-thread-win32.h b/qemu-thread-win32.h
index b9d1be8..d1b7631 100644
--- a/qemu-thread-win32.h
+++ b/qemu-thread-win32.h
@@ -13,6 +13,10 @@ struct QemuCond {
     HANDLE continue_event;
 };
 
+struct QemuEvent {
+    HANDLE event;
+};
+
 typedef struct QemuThreadData QemuThreadData;
 struct QemuThread {
     QemuThreadData *data;
diff --git a/qemu-thread.h b/qemu-thread.h
index 10f2c51..a53384f 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -5,8 +5,11 @@
 
 typedef struct QemuMutex QemuMutex;
 typedef struct QemuCond QemuCond;
+typedef struct QemuEvent QemuEvent;
 typedef struct QemuThread QemuThread;
 
+typedef void QemuEventHandler(void *opaque);
+
 #ifdef _WIN32
 #include "qemu-thread-win32.h"
 #else
@@ -28,6 +31,15 @@ void qemu_mutex_unlock(QemuMutex *mutex);
 void qemu_cond_init(QemuCond *cond);
 void qemu_cond_destroy(QemuCond *cond);
 
+void qemu_event_init(QemuEvent *event, bool signaled);
+void qemu_event_destroy(QemuEvent *event);
+int qemu_event_get_signal_fd(QemuEvent *event);
+int qemu_event_get_poll_fd(QemuEvent *event);
+void qemu_event_set_handler(QemuEvent *event, QemuEventHandler *handler,
+                            void *opaque);
+void qemu_event_signal(QemuEvent *event);
+bool qemu_event_consume(QemuEvent *event);
+
 /*
  * IMPORTANT: The implementation does not guarantee that pthread_cond_signal
  * and pthread_cond_broadcast can be called except while the same mutex is
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [Qemu-devel] [PATCH v3 06/10] Use QemuEvent in main loop
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (4 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 07/10] Drop unused qemu_eventfd Jan Kiszka
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

QemuEvent provides the same service, just in a central place.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 main-loop.c |  104 +++++++---------------------------------------------------
 1 files changed, 13 insertions(+), 91 deletions(-)

diff --git a/main-loop.c b/main-loop.c
index db23de0..998d291 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -23,78 +23,18 @@
  */
 
 #include "qemu-common.h"
+#include "qemu-thread.h"
 #include "qemu-timer.h"
 #include "slirp/slirp.h"
 #include "main-loop.h"
 
+static bool io_event_initialized;
+static QemuEvent io_event;
+
 #ifndef _WIN32
 
 #include "compatfd.h"
 
-static int io_thread_fd = -1;
-
-void qemu_notify_event(void)
-{
-    /* Write 8 bytes to be compatible with eventfd.  */
-    static const uint64_t val = 1;
-    ssize_t ret;
-
-    if (io_thread_fd == -1) {
-        return;
-    }
-    do {
-        ret = write(io_thread_fd, &val, sizeof(val));
-    } while (ret < 0 && errno == EINTR);
-
-    /* EAGAIN is fine, a read must be pending.  */
-    if (ret < 0 && errno != EAGAIN) {
-        fprintf(stderr, "qemu_notify_event: write() failed: %s\n",
-                strerror(errno));
-        exit(1);
-    }
-}
-
-static void qemu_event_read(void *opaque)
-{
-    int fd = (intptr_t)opaque;
-    ssize_t len;
-    char buffer[512];
-
-    /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
-    do {
-        len = read(fd, buffer, sizeof(buffer));
-    } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
-}
-
-static int qemu_event_init(void)
-{
-    int err;
-    int fds[2];
-
-    err = qemu_eventfd(fds);
-    if (err == -1) {
-        return -errno;
-    }
-    err = fcntl_setfl(fds[0], O_NONBLOCK);
-    if (err < 0) {
-        goto fail;
-    }
-    err = fcntl_setfl(fds[1], O_NONBLOCK);
-    if (err < 0) {
-        goto fail;
-    }
-    qemu_set_fd_handler2(fds[0], NULL, qemu_event_read, NULL,
-                         (void *)(intptr_t)fds[0]);
-
-    io_thread_fd = fds[1];
-    return 0;
-
-fail:
-    close(fds[0]);
-    close(fds[1]);
-    return err;
-}
-
 /* If we have signalfd, we mask out the signals we want to handle and then
  * use signalfd to listen for them.  We rely on whatever the current signal
  * handler is to dispatch the signals when we receive them.
@@ -164,40 +104,23 @@ static int qemu_signal_init(void)
 
 #else /* _WIN32 */
 
-HANDLE qemu_event_handle = NULL;
-
-static void dummy_event_handler(void *opaque)
-{
-}
-
-static int qemu_event_init(void)
+static int qemu_signal_init(void)
 {
-    qemu_event_handle = CreateEvent(NULL, FALSE, FALSE, NULL);
-    if (!qemu_event_handle) {
-        fprintf(stderr, "Failed CreateEvent: %ld\n", GetLastError());
-        return -1;
-    }
-    qemu_add_wait_object(qemu_event_handle, dummy_event_handler, NULL);
     return 0;
 }
+#endif
 
 void qemu_notify_event(void)
 {
-    if (!qemu_event_handle) {
-        return;
-    }
-    if (!SetEvent(qemu_event_handle)) {
-        fprintf(stderr, "qemu_notify_event: SetEvent failed: %ld\n",
-                GetLastError());
-        exit(1);
+    if (io_event_initialized) {
+        qemu_event_signal(&io_event);
     }
 }
 
-static int qemu_signal_init(void)
+static void drain_io_event(void *opaque)
 {
-    return 0;
+    qemu_event_consume(&io_event);
 }
-#endif
 
 int main_loop_init(void)
 {
@@ -210,10 +133,9 @@ int main_loop_init(void)
     }
 
     /* Note eventfd must be drained before signalfd handlers run */
-    ret = qemu_event_init();
-    if (ret) {
-        return ret;
-    }
+    qemu_event_init(&io_event, false);
+    qemu_event_set_handler(&io_event, drain_io_event, NULL);
+    io_event_initialized = true;
 
     return 0;
 }
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread
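The pipe-backed POSIX fallback that the converted main loop now relies on can be sketched as a standalone program. The type and function names below are illustrative stand-ins, not the patch's actual code; the point is the binary-event semantics: signals coalesce while pending, and a single consume drains everything.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

/* Illustrative stand-in for the pipe-based QemuEvent fallback. */
typedef struct {
    int rfd;  /* polled by the main loop */
    int wfd;  /* written by signalers */
} SketchEvent;

static void sketch_event_init(SketchEvent *ev)
{
    int fds[2];

    assert(pipe(fds) == 0);
    ev->rfd = fds[0];
    ev->wfd = fds[1];
    /* Non-blocking so a flood of signals cannot stall the writer. */
    fcntl(ev->rfd, F_SETFL, O_NONBLOCK);
    fcntl(ev->wfd, F_SETFL, O_NONBLOCK);
}

static void sketch_event_signal(SketchEvent *ev)
{
    char byte = 0;
    ssize_t ret;

    do {
        ret = write(ev->wfd, &byte, sizeof(byte));
    } while (ret < 0 && errno == EINTR);
    /* EAGAIN means the pipe is full: a wakeup is already pending. */
}

/* Returns true if the event was pending; drains all queued signals. */
static bool sketch_event_consume(SketchEvent *ev)
{
    char buf[64];
    ssize_t len;
    bool pending = false;

    for (;;) {
        len = read(ev->rfd, buf, sizeof(buf));
        if (len > 0) {
            pending = true;
        }
        if (len == sizeof(buf)) {
            continue;        /* possibly more queued */
        }
        if (len < 0 && errno == EINTR) {
            continue;        /* interrupted, retry */
        }
        return pending;      /* EAGAIN or short read: fully drained */
    }
}

static void sketch_event_destroy(SketchEvent *ev)
{
    close(ev->rfd);
    close(ev->wfd);
}
```

However many times the event is signaled before the handler runs, consume() reports "pending" exactly once and then reverts to "idle" — the behavior qemu_notify_event()/drain_io_event() above depend on.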

* [Qemu-devel] [PATCH v3 07/10] Drop unused qemu_eventfd
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (5 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 06/10] Use QemuEvent in main loop Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 08/10] Use QemuEvent for POSIX AIO Jan Kiszka
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

The only user was the main loop, which now relies on QemuEvent.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 oslib-posix.c |   31 -------------------------------
 qemu-common.h |    1 -
 2 files changed, 0 insertions(+), 32 deletions(-)

diff --git a/oslib-posix.c b/oslib-posix.c
index b6a3c7f..7928a0d 100644
--- a/oslib-posix.c
+++ b/oslib-posix.c
@@ -58,9 +58,6 @@ static int running_on_valgrind = -1;
 #ifdef CONFIG_LINUX
 #include <sys/syscall.h>
 #endif
-#ifdef CONFIG_EVENTFD
-#include <sys/eventfd.h>
-#endif
 
 int qemu_get_thread_id(void)
 {
@@ -177,34 +174,6 @@ int qemu_pipe(int pipefd[2])
     return ret;
 }
 
-/*
- * Creates an eventfd that looks like a pipe and has EFD_CLOEXEC set.
- */
-int qemu_eventfd(int fds[2])
-{
-#ifdef CONFIG_EVENTFD
-    int ret;
-
-    ret = eventfd(0, 0);
-    if (ret >= 0) {
-        fds[0] = ret;
-        fds[1] = dup(ret);
-        if (fds[1] == -1) {
-            close(ret);
-            return -1;
-        }
-        qemu_set_cloexec(ret);
-        qemu_set_cloexec(fds[1]);
-        return 0;
-    }
-    if (errno != ENOSYS) {
-        return -1;
-    }
-#endif
-
-    return qemu_pipe(fds);
-}
-
 int qemu_utimens(const char *path, const struct timespec *times)
 {
     struct timeval tv[2], tv_now;
diff --git a/qemu-common.h b/qemu-common.h
index 4647dd9..732be5d 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -192,7 +192,6 @@ ssize_t qemu_recv_full(int fd, void *buf, size_t count, int flags)
     QEMU_WARN_UNUSED_RESULT;
 
 #ifndef _WIN32
-int qemu_eventfd(int pipefd[2]);
 int qemu_pipe(int pipefd[2]);
 #endif
 
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [Qemu-devel] [PATCH v3 08/10] Use QemuEvent for POSIX AIO
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (6 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 07/10] Drop unused qemu_eventfd Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 09/10] virtio: Switch to QemuEvent Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 10/10] Remove EventNotifier Jan Kiszka
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

Use QemuEvent to signal POSIX AIO completions. When native eventfd
support is available, this avoids having to issue multiple read calls
to drain several pending signals.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 posix-aio-compat.c |   62 ++++++----------------------------------------------
 1 files changed, 7 insertions(+), 55 deletions(-)

diff --git a/posix-aio-compat.c b/posix-aio-compat.c
index c9b8ebf..4581972 100644
--- a/posix-aio-compat.c
+++ b/posix-aio-compat.c
@@ -56,7 +56,7 @@ struct qemu_paiocb {
 };
 
 typedef struct PosixAioState {
-    int rfd, wfd;
+    QemuEvent event;
     struct qemu_paiocb *first_aio;
 } PosixAioState;
 
@@ -71,6 +71,7 @@ static int new_threads = 0;     /* backlog of threads we need to create */
 static int pending_threads = 0; /* threads created but not running yet */
 static QEMUBH *new_thread_bh;
 static QTAILQ_HEAD(, qemu_paiocb) request_list;
+static PosixAioState *posix_aio_state;
 
 #ifdef CONFIG_PREADV
 static int preadv_present = 1;
@@ -78,17 +79,6 @@ static int preadv_present = 1;
 static int preadv_present = 0;
 #endif
 
-static void die2(int err, const char *what)
-{
-    fprintf(stderr, "%s failed: %s\n", what, strerror(err));
-    abort();
-}
-
-static void die(const char *what)
-{
-    die2(errno, what);
-}
-
 static ssize_t handle_aiocb_ioctl(struct qemu_paiocb *aiocb)
 {
     int ret;
@@ -277,8 +267,6 @@ static ssize_t handle_aiocb_rw(struct qemu_paiocb *aiocb)
     return nbytes;
 }
 
-static void posix_aio_notify_event(void);
-
 static void *aio_thread(void *unused)
 {
     qemu_mutex_lock(&lock);
@@ -345,7 +333,7 @@ static void *aio_thread(void *unused)
         aiocb->ret = ret;
         qemu_mutex_unlock(&lock);
 
-        posix_aio_notify_event();
+        qemu_event_signal(&posix_aio_state->event);
     }
 
     cur_threads--;
@@ -479,20 +467,8 @@ static int posix_aio_process_queue(void *opaque)
 static void posix_aio_read(void *opaque)
 {
     PosixAioState *s = opaque;
-    ssize_t len;
-
-    /* read all bytes from signal pipe */
-    for (;;) {
-        char bytes[16];
-
-        len = read(s->rfd, bytes, sizeof(bytes));
-        if (len == -1 && errno == EINTR)
-            continue; /* try again */
-        if (len == sizeof(bytes))
-            continue; /* more to read */
-        break;
-    }
 
+    qemu_event_consume(&s->event);
     posix_aio_process_queue(s);
 }
 
@@ -502,18 +478,6 @@ static int posix_aio_flush(void *opaque)
     return !!s->first_aio;
 }
 
-static PosixAioState *posix_aio_state;
-
-static void posix_aio_notify_event(void)
-{
-    char byte = 0;
-    ssize_t ret;
-
-    ret = write(posix_aio_state->wfd, &byte, sizeof(byte));
-    if (ret < 0 && errno != EAGAIN)
-        die("write()");
-}
-
 static void paio_remove(struct qemu_paiocb *acb)
 {
     struct qemu_paiocb **pacb;
@@ -612,7 +576,6 @@ BlockDriverAIOCB *paio_ioctl(BlockDriverState *bs, int fd,
 int paio_init(void)
 {
     PosixAioState *s;
-    int fds[2];
 
     if (posix_aio_state)
         return 0;
@@ -623,20 +586,9 @@ int paio_init(void)
     s = g_malloc(sizeof(PosixAioState));
 
     s->first_aio = NULL;
-    if (qemu_pipe(fds) == -1) {
-        fprintf(stderr, "failed to create pipe\n");
-        g_free(s);
-        return -1;
-    }
-
-    s->rfd = fds[0];
-    s->wfd = fds[1];
-
-    fcntl(s->rfd, F_SETFL, O_NONBLOCK);
-    fcntl(s->wfd, F_SETFL, O_NONBLOCK);
-
-    qemu_aio_set_fd_handler(s->rfd, posix_aio_read, NULL, posix_aio_flush,
-        posix_aio_process_queue, s);
+    qemu_event_init(&s->event, false);
+    qemu_aio_set_fd_handler(qemu_event_get_poll_fd(&s->event), posix_aio_read,
+                            NULL, posix_aio_flush, posix_aio_process_queue, s);
 
     QTAILQ_INIT(&request_list);
     new_thread_bh = qemu_bh_new(spawn_thread_bh_fn, NULL);
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread
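The commit message's point about read accesses can be illustrated directly: an eventfd aggregates any number of signals into a 64-bit counter that one 8-byte read returns and resets, whereas a pipe queues a byte per signal and may need a drain loop. A minimal Linux-specific sketch (helper name is mine, not QEMU's):

```c
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Signal an eventfd-backed event n times, then drain it with a single
 * read. This mirrors why the eventfd path needs no drain loop. */
static uint64_t signal_and_drain(int efd, int n)
{
    uint64_t one = 1, total = 0;
    ssize_t r;
    int i;

    for (i = 0; i < n; i++) {
        r = write(efd, &one, sizeof(one));  /* kernel adds to counter */
        (void)r;
    }
    /* One 8-byte read returns the accumulated count and resets it. */
    r = read(efd, &total, sizeof(total));
    (void)r;
    return total;
}
```

After the read, the counter is zero again, so a non-blocking follow-up read fails with EAGAIN — no further draining required.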

* [Qemu-devel] [PATCH v3 09/10] virtio: Switch to QemuEvent
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (7 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 08/10] Use QemuEvent for POSIX AIO Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 10/10] Remove EventNotifier Jan Kiszka
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Michael S. Tsirkin

Replace EventNotifier with the new generic QemuEvent service.

CC: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 hw/vhost.c      |    4 +-
 hw/virtio-pci.c |   61 ++++++++++++++++++++++--------------------------------
 hw/virtio.c     |   12 +++++-----
 hw/virtio.h     |    6 ++--
 4 files changed, 36 insertions(+), 47 deletions(-)

diff --git a/hw/vhost.c b/hw/vhost.c
index 8d3ba5b..c7bd8b7 100644
--- a/hw/vhost.c
+++ b/hw/vhost.c
@@ -673,14 +673,14 @@ static int vhost_virtqueue_init(struct vhost_dev *dev,
         r = -errno;
         goto fail_alloc;
     }
-    file.fd = event_notifier_get_fd(virtio_queue_get_host_notifier(vvq));
+    file.fd = qemu_event_get_signal_fd(virtio_queue_get_host_event(vvq));
     r = ioctl(dev->control, VHOST_SET_VRING_KICK, &file);
     if (r) {
         r = -errno;
         goto fail_kick;
     }
 
-    file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
+    file.fd = qemu_event_get_signal_fd(virtio_queue_get_guest_event(vvq));
     r = ioctl(dev->control, VHOST_SET_VRING_CALL, &file);
     if (r) {
         r = -errno;
diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a0fb7c1..9b9e69a 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -162,53 +162,47 @@ static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
                                                  int n, bool assign)
 {
     VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
-    EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
+    QemuEvent *event = virtio_queue_get_host_event(vq);
     int r = 0;
 
     if (assign) {
-        r = event_notifier_init(notifier, 1);
-        if (r < 0) {
-            error_report("%s: unable to init event notifier: %d",
-                         __func__, r);
-            return r;
-        }
+        qemu_event_init(event, true);
         memory_region_add_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
-                                  true, n, event_notifier_get_fd(notifier));
+                                  true, n, qemu_event_get_signal_fd(event));
     } else {
         memory_region_del_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
-                                  true, n, event_notifier_get_fd(notifier));
+                                  true, n, qemu_event_get_signal_fd(event));
         /* Handle the race condition where the guest kicked and we deassigned
          * before we got around to handling the kick.
          */
-        if (event_notifier_test_and_clear(notifier)) {
+        if (qemu_event_consume(event)) {
             virtio_queue_notify_vq(vq);
         }
 
-        event_notifier_cleanup(notifier);
+        qemu_event_destroy(event);
     }
     return r;
 }
 
-static void virtio_pci_host_notifier_read(void *opaque)
+static void virtio_pci_host_event(void *opaque)
 {
     VirtQueue *vq = opaque;
-    EventNotifier *n = virtio_queue_get_host_notifier(vq);
-    if (event_notifier_test_and_clear(n)) {
-        virtio_queue_notify_vq(vq);
-    }
+    QemuEvent *event = virtio_queue_get_host_event(vq);
+
+    qemu_event_consume(event);
+    virtio_queue_notify_vq(vq);
 }
 
 static void virtio_pci_set_host_notifier_fd_handler(VirtIOPCIProxy *proxy,
                                                     int n, bool assign)
 {
     VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
-    EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
+    QemuEvent *event = virtio_queue_get_host_event(vq);
+
     if (assign) {
-        qemu_set_fd_handler(event_notifier_get_fd(notifier),
-                            virtio_pci_host_notifier_read, NULL, vq);
+        qemu_event_set_handler(event, virtio_pci_host_event, vq);
     } else {
-        qemu_set_fd_handler(event_notifier_get_fd(notifier),
-                            NULL, NULL, NULL);
+        qemu_event_set_handler(event, NULL, NULL);
     }
 }
 
@@ -530,32 +524,27 @@ static unsigned virtio_pci_get_features(void *opaque)
     return proxy->host_features;
 }
 
-static void virtio_pci_guest_notifier_read(void *opaque)
+static void virtio_pci_guest_event(void *opaque)
 {
     VirtQueue *vq = opaque;
-    EventNotifier *n = virtio_queue_get_guest_notifier(vq);
-    if (event_notifier_test_and_clear(n)) {
-        virtio_irq(vq);
-    }
+    QemuEvent *event = virtio_queue_get_guest_event(vq);
+
+    qemu_event_consume(event);
+    virtio_irq(vq);
 }
 
 static int virtio_pci_set_guest_notifier(void *opaque, int n, bool assign)
 {
     VirtIOPCIProxy *proxy = opaque;
     VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
-    EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
+    QemuEvent *event = virtio_queue_get_guest_event(vq);
 
     if (assign) {
-        int r = event_notifier_init(notifier, 0);
-        if (r < 0) {
-            return r;
-        }
-        qemu_set_fd_handler(event_notifier_get_fd(notifier),
-                            virtio_pci_guest_notifier_read, NULL, vq);
+        qemu_event_init(event, false);
+        qemu_event_set_handler(event, virtio_pci_guest_event, vq);
     } else {
-        qemu_set_fd_handler(event_notifier_get_fd(notifier),
-                            NULL, NULL, NULL);
-        event_notifier_cleanup(notifier);
+        qemu_event_set_handler(event, NULL, NULL);
+        qemu_event_destroy(event);
     }
 
     return 0;
diff --git a/hw/virtio.c b/hw/virtio.c
index 064aecf..7f43663 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -76,9 +76,9 @@ struct VirtQueue
 
     uint16_t vector;
     void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+    QemuEvent guest_event;
+    QemuEvent host_event;
     VirtIODevice *vdev;
-    EventNotifier guest_notifier;
-    EventNotifier host_notifier;
 };
 
 /* virt queue functions */
@@ -966,11 +966,11 @@ VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n)
     return vdev->vq + n;
 }
 
-EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
+QemuEvent *virtio_queue_get_guest_event(VirtQueue *vq)
 {
-    return &vq->guest_notifier;
+    return &vq->guest_event;
 }
-EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
+QemuEvent *virtio_queue_get_host_event(VirtQueue *vq)
 {
-    return &vq->host_notifier;
+    return &vq->host_event;
 }
diff --git a/hw/virtio.h b/hw/virtio.h
index 400c092..199b0d8 100644
--- a/hw/virtio.h
+++ b/hw/virtio.h
@@ -19,7 +19,7 @@
 #include "qdev.h"
 #include "sysemu.h"
 #include "block.h"
-#include "event_notifier.h"
+#include "qemu-thread.h"
 #ifdef CONFIG_LINUX
 #include "9p.h"
 #endif
@@ -229,8 +229,8 @@ target_phys_addr_t virtio_queue_get_ring_size(VirtIODevice *vdev, int n);
 uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n);
 void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx);
 VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n);
-EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
-EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
+QemuEvent *virtio_queue_get_guest_event(VirtQueue *vq);
+QemuEvent *virtio_queue_get_host_event(VirtQueue *vq);
 void virtio_queue_notify_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
 #endif
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [Qemu-devel] [PATCH v3 10/10] Remove EventNotifier
  2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
                   ` (8 preceding siblings ...)
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 09/10] virtio: Switch to QemuEvent Jan Kiszka
@ 2012-04-05 10:59 ` Jan Kiszka
  9 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 10:59 UTC (permalink / raw)
  To: Anthony Liguori, qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini

Obsoleted by QemuEvent.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 Makefile.objs    |    2 +-
 event_notifier.c |   61 ------------------------------------------------------
 event_notifier.h |   28 ------------------------
 3 files changed, 1 insertions(+), 90 deletions(-)
 delete mode 100644 event_notifier.c
 delete mode 100644 event_notifier.h

diff --git a/Makefile.objs b/Makefile.objs
index 377bfe2..755b672 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -183,7 +183,7 @@ common-obj-$(CONFIG_VNC) += $(addprefix ui/, $(vnc-obj-y))
 
 common-obj-y += iov.o acl.o
 common-obj-$(CONFIG_POSIX) += compatfd.o
-common-obj-y += notify.o event_notifier.o
+common-obj-y += notify.o
 common-obj-y += qemu-timer.o qemu-timer-common.o
 
 slirp-obj-y = cksum.o if.o ip_icmp.o ip_input.o ip_output.o
diff --git a/event_notifier.c b/event_notifier.c
deleted file mode 100644
index 0b82981..0000000
--- a/event_notifier.c
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * event notifier support
- *
- * Copyright Red Hat, Inc. 2010
- *
- * Authors:
- *  Michael S. Tsirkin <mst@redhat.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#include "event_notifier.h"
-#ifdef CONFIG_EVENTFD
-#include <sys/eventfd.h>
-#endif
-
-int event_notifier_init(EventNotifier *e, int active)
-{
-#ifdef CONFIG_EVENTFD
-    int fd = eventfd(!!active, EFD_NONBLOCK | EFD_CLOEXEC);
-    if (fd < 0)
-        return -errno;
-    e->fd = fd;
-    return 0;
-#else
-    return -ENOSYS;
-#endif
-}
-
-void event_notifier_cleanup(EventNotifier *e)
-{
-    close(e->fd);
-}
-
-int event_notifier_get_fd(EventNotifier *e)
-{
-    return e->fd;
-}
-
-int event_notifier_test_and_clear(EventNotifier *e)
-{
-    uint64_t value;
-    int r = read(e->fd, &value, sizeof(value));
-    return r == sizeof(value);
-}
-
-int event_notifier_test(EventNotifier *e)
-{
-    uint64_t value;
-    int r = read(e->fd, &value, sizeof(value));
-    if (r == sizeof(value)) {
-        /* restore previous value. */
-        int s = write(e->fd, &value, sizeof(value));
-        /* never blocks because we use EFD_SEMAPHORE.
-         * If we didn't we'd get EAGAIN on overflow
-         * and we'd have to write code to ignore it. */
-        assert(s == sizeof(value));
-    }
-    return r == sizeof(value);
-}
diff --git a/event_notifier.h b/event_notifier.h
deleted file mode 100644
index 886222c..0000000
--- a/event_notifier.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/*
- * event notifier support
- *
- * Copyright Red Hat, Inc. 2010
- *
- * Authors:
- *  Michael S. Tsirkin <mst@redhat.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef QEMU_EVENT_NOTIFIER_H
-#define QEMU_EVENT_NOTIFIER_H
-
-#include "qemu-common.h"
-
-struct EventNotifier {
-	int fd;
-};
-
-int event_notifier_init(EventNotifier *, int active);
-void event_notifier_cleanup(EventNotifier *);
-int event_notifier_get_fd(EventNotifier *);
-int event_notifier_test_and_clear(EventNotifier *);
-int event_notifier_test(EventNotifier *);
-
-#endif
-- 
1.7.3.4

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
@ 2012-04-05 11:19   ` Peter Maydell
  2012-04-05 11:56     ` Jan Kiszka
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Maydell @ 2012-04-05 11:19 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel

On 5 April 2012 11:59, Jan Kiszka <jan.kiszka@siemens.com> wrote:
> +/* Returns true if condition was signaled, false if timed out. */
> +bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
> +                         unsigned int timeout_ms)
> +{
> +    struct timespec ts;
> +    struct timeval tv;
> +    int err;
> +
> +    gettimeofday(&tv, NULL);
> +    ts.tv_sec = tv.tv_sec + timeout_ms / 1000;
> +    ts.tv_nsec = tv.tv_usec * 1000 + timeout_ms % 1000;
> +    if (ts.tv_nsec > 1000000000) {
> +        ts.tv_sec++;
> +        ts.tv_nsec -= 1000000000;
> +    }
> +    err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);

Use clock_gettime() and avoid the need to convert a struct timeval
to a struct timespec ?

-- PMM

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction
  2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction Jan Kiszka
@ 2012-04-05 11:23   ` Paolo Bonzini
  2012-04-05 12:20     ` Jan Kiszka
  0 siblings, 1 reply; 27+ messages in thread
From: Paolo Bonzini @ 2012-04-05 11:23 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Kevin Wolf, Anthony Liguori, qemu-devel, Michael S. Tsirkin

Il 05/04/2012 12:59, Jan Kiszka ha scritto:
> Provide generic services for binary events. Blocking wait would be
> feasible but is not included yet as there are no users.
> 
> diff --git a/qemu-event-posix.c b/qemu-event-posix.c
> new file mode 100644
> index 0000000..6138168
> --- /dev/null
> +++ b/qemu-event-posix.c
> @@ -0,0 +1,110 @@
> +/*
> + * Posix implementations of event signaling service
> + *
> + * Copyright Red Hat, Inc. 2012
> + * Copyright Siemens AG 2012
> + *
> + * Author:
> + *  Paolo Bonzini <pbonzini@redhat.com>
> + *  Jan Kiszka <jan.kiszka@siemens.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +#include "qemu-thread.h"
> +#include "qemu-common.h"
> +#include "main-loop.h"
> +
> +#ifdef CONFIG_EVENTFD
> +#include <sys/eventfd.h>
> +#endif
> +
> +void qemu_event_init(QemuEvent *event, bool signaled)
> +{
> +    int fds[2];
> +    int ret;
> +
> +#ifdef CONFIG_EVENTFD
> +    ret = eventfd(signaled, EFD_NONBLOCK | EFD_CLOEXEC);
> +    if (ret >= 0) {
> +        event->rfd = ret;
> +        event->wfd = dup(ret);
> +        if (event->wfd < 0) {
> +            qemu_error_exit(errno, __func__);
> +        }
> +        qemu_set_cloexec(event->wfd);
> +        return;
> +    }
> +    if (errno != ENOSYS) {
> +        qemu_error_exit(errno, __func__);
> +    }
> +    /* fall back to pipe-based support */
> +#endif
> +
> +    ret = qemu_pipe(fds);
> +    if (ret < 0) {
> +        qemu_error_exit(errno, __func__);
> +    }
> +    event->rfd = fds[0];
> +    event->wfd = fds[1];
> +    if (signaled) {
> +        qemu_event_signal(event);
> +    }
> +}
> +
> +void qemu_event_destroy(QemuEvent *event)
> +{
> +    close(event->rfd);
> +    close(event->wfd);
> +}
> +
> +int qemu_event_get_signal_fd(QemuEvent *event)
> +{
> +    return event->wfd;
> +}

How would you use it?  Since qemu_event_signal ignores EAGAIN, polling
for writeability of the fd is useless.

This is really little more than search-and-replace from what gets out of
event-notifier.c after my patches, and the commit message does not
explain the differences in the two APIs.  Having separate commits as I
had would make the steps obvious.  Is it really worthwhile to do this
and introduce the need for patches 9+10 (plus IIRC conflicts in qemu-kvm)?

(In fact, qemu_event_get_poll_fd is a layering violation too.  In a
perfect world, aio.c would simply get a list of EventNotifiers and no
other types of fd).

Paolo

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 11:19   ` Peter Maydell
@ 2012-04-05 11:56     ` Jan Kiszka
  2012-04-05 12:15       ` Peter Maydell
  2012-04-05 12:30       ` malc
  0 siblings, 2 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 11:56 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org

On 2012-04-05 13:19, Peter Maydell wrote:
> On 5 April 2012 11:59, Jan Kiszka <jan.kiszka@siemens.com> wrote:
>> +/* Returns true if condition was signaled, false if timed out. */
>> +bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
>> +                         unsigned int timeout_ms)
>> +{
>> +    struct timespec ts;
>> +    struct timeval tv;
>> +    int err;
>> +
>> +    gettimeofday(&tv, NULL);
>> +    ts.tv_sec = tv.tv_sec + timeout_ms / 1000;
>> +    ts.tv_nsec = tv.tv_usec * 1000 + timeout_ms % 1000;
>> +    if (ts.tv_nsec > 1000000000) {
>> +        ts.tv_sec++;
>> +        ts.tv_nsec -= 1000000000;
>> +    }
>> +    err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);
> 
> Use clock_gettime() and avoid the need to convert a struct timeval
> to a struct timespec ?

Would save that "* 1000". I just wondered why we do not use it elsewhere
in QEMU and was reluctant to risk some BSD breakage.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 11:56     ` Jan Kiszka
@ 2012-04-05 12:15       ` Peter Maydell
  2012-04-05 12:30       ` malc
  1 sibling, 0 replies; 27+ messages in thread
From: Peter Maydell @ 2012-04-05 12:15 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org

On 5 April 2012 12:56, Jan Kiszka <jan.kiszka@siemens.com> wrote:
> On 2012-04-05 13:19, Peter Maydell wrote:
>> Use clock_gettime() and avoid the need to convert a struct timeval
>> to a struct timespec ?
>
> Would save that "* 1000". I just wondered why we do not use it elsewhere
> in QEMU and was reluctant to risk some BSD breakage.

Well, we use it in qemu-timer-common.c for linux/FreeBSD/DragonFly/
OpenBSD, so I think it should be OK... (I think it's reasonable
to go ahead and use POSIX specified functions unless we *know*
there's some portability issue with them on some platform we care
about -- default to assuming sanity rather than insanity, if you
like :-))

-- PMM

^ permalink raw reply	[flat|nested] 27+ messages in thread
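The absolute-deadline computation under discussion is easy to get wrong: the millisecond remainder must be scaled to nanoseconds before normalizing tv_nsec. A hedged sketch of the clock_gettime()-based variant Peter suggests (the helper name is mine, not QEMU's):

```c
#include <time.h>

/* Add a relative timeout in milliseconds to an absolute timespec,
 * normalizing tv_nsec into [0, 1e9). */
static void timespec_add_ms(struct timespec *ts, unsigned int ms)
{
    ts->tv_sec += ms / 1000;
    ts->tv_nsec += (long)(ms % 1000) * 1000000L;  /* ms -> ns */
    if (ts->tv_nsec >= 1000000000L) {
        ts->tv_sec++;
        ts->tv_nsec -= 1000000000L;
    }
}

/* Intended use with pthread_cond_timedwait(), which by default
 * measures the deadline against CLOCK_REALTIME:
 *
 *   struct timespec ts;
 *   clock_gettime(CLOCK_REALTIME, &ts);
 *   timespec_add_ms(&ts, timeout_ms);
 *   err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);
 */
```

Since ms % 1000 contributes at most 999000000 ns, one normalization pass is sufficient.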

* Re: [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction
  2012-04-05 11:23   ` Paolo Bonzini
@ 2012-04-05 12:20     ` Jan Kiszka
  0 siblings, 0 replies; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 12:20 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Anthony Liguori, qemu-devel@nongnu.org,
	Michael S. Tsirkin

On 2012-04-05 13:23, Paolo Bonzini wrote:
> Il 05/04/2012 12:59, Jan Kiszka ha scritto:
>> Provide generic services for binary events. Blocking wait would be
>> feasible but is not included yet as there are no users.
>>
>> diff --git a/qemu-event-posix.c b/qemu-event-posix.c
>> new file mode 100644
>> index 0000000..6138168
>> --- /dev/null
>> +++ b/qemu-event-posix.c
>> @@ -0,0 +1,110 @@
>> +/*
>> + * Posix implementations of event signaling service
>> + *
>> + * Copyright Red Hat, Inc. 2012
>> + * Copyright Siemens AG 2012
>> + *
>> + * Author:
>> + *  Paolo Bonzini <pbonzini@redhat.com>
>> + *  Jan Kiszka <jan.kiszka@siemens.com>
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
>> + * See the COPYING file in the top-level directory.
>> + *
>> + */
>> +#include "qemu-thread.h"
>> +#include "qemu-common.h"
>> +#include "main-loop.h"
>> +
>> +#ifdef CONFIG_EVENTFD
>> +#include <sys/eventfd.h>
>> +#endif
>> +
>> +void qemu_event_init(QemuEvent *event, bool signaled)
>> +{
>> +    int fds[2];
>> +    int ret;
>> +
>> +#ifdef CONFIG_EVENTFD
>> +    ret = eventfd(signaled, EFD_NONBLOCK | EFD_CLOEXEC);
>> +    if (ret >= 0) {
>> +        event->rfd = ret;
>> +        event->wfd = dup(ret);
>> +        if (event->wfd < 0) {
>> +            qemu_error_exit(errno, __func__);
>> +        }
>> +        qemu_set_cloexec(event->wfd);
>> +        return;
>> +    }
>> +    if (errno != ENOSYS) {
>> +        qemu_error_exit(errno, __func__);
>> +    }
>> +    /* fall back to pipe-based support */
>> +#endif
>> +
>> +    ret = qemu_pipe(fds);
>> +    if (ret < 0) {
>> +        qemu_error_exit(errno, __func__);
>> +    }
>> +    event->rfd = fds[0];
>> +    event->wfd = fds[1];
>> +    if (signaled) {
>> +        qemu_event_signal(event);
>> +    }
>> +}
>> +
>> +void qemu_event_destroy(QemuEvent *event)
>> +{
>> +    close(event->rfd);
>> +    close(event->wfd);
>> +}
>> +
>> +int qemu_event_get_signal_fd(QemuEvent *event)
>> +{
>> +    return event->wfd;
>> +}
> 
> How would you use it?  Since qemu_event_signal ignores EAGAIN, polling
> for writeability of the fd is useless.

vhost, see patch 9.

> 
> This is really little more than search-and-replace from what gets out of
> event-notifier.c after my patches, and the commit message does not
> explain the differences in the two APIs.  Having separate commits as I
> had would make the steps obvious.  Is it really worthwhile to do this
> and introduce the need for patches 9+10 (plus IIRC conflicts in qemu-kvm)?

Will reorganize the patches to morph & split event-notifier.c. The
conflict with qemu-kvm is minimal, though.

> 
> (In fact, qemu_event_get_poll_fd is a layering violation too.  In a
> perfect world, aio.c would simply get a list of EventNotifiers and no
> other types of fd).

The actual problem is that we do not have an abstraction for a handle
that can be registered with a generic polling service. So we have
qemu_aio_set_fd_handler, qemu_set_fd_handler2, qemu_add_wait_object, etc.
That requires the introduction of qemu_event_set_handler, which makes the
service dependent on the main loop. I'm not happy about this either;
maybe it's worth rethinking this.
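For reference, stripped of the QEMU plumbing, the eventfd-or-pipe pattern from the patch looks roughly like this (a simplified sketch with hypothetical names, not the actual QEMU helpers; error handling is reduced to a bool):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>
#ifdef __linux__
#include <sys/eventfd.h>
#endif

typedef struct {
    int rfd;    /* poll/read side */
    int wfd;    /* signal side */
} Event;

/* Create a binary event: prefer eventfd, fall back to a pipe. */
static bool event_init(Event *ev)
{
#ifdef __linux__
    int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    if (fd >= 0) {
        ev->rfd = fd;
        ev->wfd = dup(fd);  /* separate fd so both sides can be closed */
        return ev->wfd >= 0;
    }
    if (errno != ENOSYS) {
        return false;
    }
    /* ENOSYS: kernel without eventfd, fall through to the pipe */
#endif
    int fds[2];
    if (pipe(fds) < 0) {
        return false;
    }
    fcntl(fds[0], F_SETFL, O_NONBLOCK);
    ev->rfd = fds[0];
    ev->wfd = fds[1];
    return true;
}

/* Signal the event; EAGAIN means it is already pending, which is fine. */
static void event_signal(Event *ev)
{
    uint64_t value = 1;
    ssize_t ret;

    do {
        ret = write(ev->wfd, &value, sizeof(value));
    } while (ret < 0 && errno == EINTR);
}

/* Consume a pending signal; returns true if one was pending. */
static bool event_consume(Event *ev)
{
    uint64_t value;

    return read(ev->rfd, &value, sizeof(value)) > 0;
}
```

The sketch makes the API difference visible: the consumer polls rfd and the producer writes wfd, regardless of which backend was picked at init time.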

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 11:56     ` Jan Kiszka
  2012-04-05 12:15       ` Peter Maydell
@ 2012-04-05 12:30       ` malc
  2012-04-05 12:37         ` Paolo Bonzini
  1 sibling, 1 reply; 27+ messages in thread
From: malc @ 2012-04-05 12:30 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Kevin Wolf, Peter Maydell, Anthony Liguori, qemu-devel@nongnu.org,
	Paolo Bonzini

On Thu, 5 Apr 2012, Jan Kiszka wrote:

> On 2012-04-05 13:19, Peter Maydell wrote:
> > On 5 April 2012 11:59, Jan Kiszka <jan.kiszka@siemens.com> wrote:
> >> +/* Returns true if condition was signaled, false if timed out. */
> >> +bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex,
> >> +                         unsigned int timeout_ms)
> >> +{
> >> +    struct timespec ts;
> >> +    struct timeval tv;
> >> +    int err;
> >> +
> >> +    gettimeofday(&tv, NULL);
> >> +    ts.tv_sec = tv.tv_sec + timeout_ms / 1000;
> >> +    ts.tv_nsec = tv.tv_usec * 1000 + (timeout_ms % 1000) * 1000000;
> >> +    if (ts.tv_nsec >= 1000000000) {
> >> +        ts.tv_sec++;
> >> +        ts.tv_nsec -= 1000000000;
> >> +    }
> >> +    err = pthread_cond_timedwait(&cond->cond, &mutex->lock, &ts);
> > 
> > Use clock_gettime() and avoid the need to convert a struct timeval
> > to a struct timespec ?
> 
> Would save that "* 1000". I just wondered why we do not use it elsewhere
> in QEMU and was reluctant to risk some BSD breakage.
> 

It's probably worth mentioning that using anything other than
clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
clock attribute on the condition variable) is prone to surprises (such
as NTP corrections and daylight-saving changes).

-- 
mailto:av1474@comtv.ru

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:30       ` malc
@ 2012-04-05 12:37         ` Paolo Bonzini
  2012-04-05 12:53           ` malc
  0 siblings, 1 reply; 27+ messages in thread
From: Paolo Bonzini @ 2012-04-05 12:37 UTC (permalink / raw)
  To: malc
  Cc: Kevin Wolf, Jan Kiszka, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

Il 05/04/2012 14:30, malc ha scritto:
>> > Would save that "* 1000". I just wondered why we do not use it elsewhere
>> > in QEMU and was reluctant to risk some BSD breakage.
>> > 
> It's probably worth mentioning that using anything other than
> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> clock attribute on the condition variable) is prone to surprises (such
> as NTP corrections and daylight-saving changes).

I was about to suggest the same, but how widespread is support for
pthread_condattr_setclock?

Paolo

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:37         ` Paolo Bonzini
@ 2012-04-05 12:53           ` malc
  2012-04-05 12:56             ` Paolo Bonzini
  0 siblings, 1 reply; 27+ messages in thread
From: malc @ 2012-04-05 12:53 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Jan Kiszka, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On Thu, 5 Apr 2012, Paolo Bonzini wrote:

> Il 05/04/2012 14:30, malc ha scritto:
> >> > Would save that "* 1000". I just wondered why we do not use it elsewhere
> >> > in QEMU and was reluctant to risk some BSD breakage.
> >> > 
> > It's probably worth mentioning that using anything other than
> > clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> > clock attribute on the condition variable) is prone to surprises (such
> > as NTP corrections and daylight-saving changes).
> 
> I was about to suggest the same, but how widespread is support for
> pthread_condattr_setclock?

If it's not, all is lost anyway.

-- 
mailto:av1474@comtv.ru

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:53           ` malc
@ 2012-04-05 12:56             ` Paolo Bonzini
  2012-04-05 12:59               ` Jan Kiszka
  2012-04-05 12:59               ` malc
  0 siblings, 2 replies; 27+ messages in thread
From: Paolo Bonzini @ 2012-04-05 12:56 UTC (permalink / raw)
  To: malc
  Cc: Kevin Wolf, Jan Kiszka, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

Il 05/04/2012 14:53, malc ha scritto:
> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
> 
>> Il 05/04/2012 14:30, malc ha scritto:
>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
>>>>> in QEMU and was reluctant to risk some BSD breakage.
>>>>>
>>> It's probably worth mentioning that using anything other than
>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
>>> clock attribute on the condition variable) is prone to surprises (such
>>> as NTP corrections and daylight-saving changes).
>>
>> I was about to suggest the same, but how widespread is support for
>> pthread_condattr_setclock?
> 
> If it's not all is lost anyway.

Only once every year. :)

Paolo

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:56             ` Paolo Bonzini
@ 2012-04-05 12:59               ` Jan Kiszka
  2012-04-05 13:00                 ` malc
  2012-04-05 12:59               ` malc
  1 sibling, 1 reply; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 12:59 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Peter Maydell, Anthony Liguori, qemu-devel@nongnu.org

On 2012-04-05 14:56, Paolo Bonzini wrote:
> Il 05/04/2012 14:53, malc ha scritto:
>> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
>>
>>> Il 05/04/2012 14:30, malc ha scritto:
>>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
>>>>>> in QEMU and was reluctant to risk some BSD breakage.
>>>>>>
>>>> It's probably worth mentioning that using anything other than
>>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
>>>> clock attribute on the condition variable) is prone to surprises (such
>>>> as NTP corrections and daylight-saving changes).
>>>
>>> I was about to suggest the same, but how widespread is support for
>>> pthread_condattr_setclock?
>>
>> If it's not all is lost anyway.
> 
> Only once every year. :)

...and not for the current users of this service, which do not care that
much about the timeout and a potential delay or early wakeup.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:56             ` Paolo Bonzini
  2012-04-05 12:59               ` Jan Kiszka
@ 2012-04-05 12:59               ` malc
  1 sibling, 0 replies; 27+ messages in thread
From: malc @ 2012-04-05 12:59 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Jan Kiszka, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On Thu, 5 Apr 2012, Paolo Bonzini wrote:

> Il 05/04/2012 14:53, malc ha scritto:
> > On Thu, 5 Apr 2012, Paolo Bonzini wrote:
> > 
> >> Il 05/04/2012 14:30, malc ha scritto:
> >>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
> >>>>> in QEMU and was reluctant to risk some BSD breakage.
> >>>>>
> >>> It's probably worth mentioning that using anything other than
> >>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> >>> clock attribute on the condition variable) is prone to surprises (such
> >>> as NTP corrections and daylight-saving changes).
> >>
> >> I was about to suggest the same, but how widespread is support for
> >> pthread_condattr_setclock?
> > 
> > If it's not all is lost anyway.
> 
> Only once every year. :)

DST changes happen twice a year, if I'm not mistaken, and NTP is at
liberty to change things more than once at any rate.

-- 
mailto:av1474@comtv.ru

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 12:59               ` Jan Kiszka
@ 2012-04-05 13:00                 ` malc
  2012-04-05 13:03                   ` Jan Kiszka
  0 siblings, 1 reply; 27+ messages in thread
From: malc @ 2012-04-05 13:00 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On Thu, 5 Apr 2012, Jan Kiszka wrote:

> On 2012-04-05 14:56, Paolo Bonzini wrote:
> > Il 05/04/2012 14:53, malc ha scritto:
> >> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
> >>
> >>> Il 05/04/2012 14:30, malc ha scritto:
> >>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
> >>>>>> in QEMU and was reluctant to risk some BSD breakage.
> >>>>>>
> >>>> It's probably worth mentioning that using anything other than
> >>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> >>>> clock attribute on the condition variable) is prone to surprises (such
> >>>> as NTP corrections and daylight-saving changes).
> >>>
> >>> I was about to suggest the same, but how widespread is support for
> >>> pthread_condattr_setclock?
> >>
> >> If it's not all is lost anyway.
> > 
> > Only once every year. :)
> 
> ...and not for the current users of this service, which do not care that
> much about the timeout and a potential delay or early wakeup.
> 

An hour of potential delay, mind you.

-- 
mailto:av1474@comtv.ru

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 13:00                 ` malc
@ 2012-04-05 13:03                   ` Jan Kiszka
  2012-04-05 13:20                     ` malc
  0 siblings, 1 reply; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 13:03 UTC (permalink / raw)
  To: malc
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On 2012-04-05 15:00, malc wrote:
> On Thu, 5 Apr 2012, Jan Kiszka wrote:
> 
>> On 2012-04-05 14:56, Paolo Bonzini wrote:
>>> Il 05/04/2012 14:53, malc ha scritto:
>>>> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
>>>>
>>>>> Il 05/04/2012 14:30, malc ha scritto:
>>>>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
>>>>>>>> in QEMU and was reluctant to risk some BSD breakage.
>>>>>>>>
>>>>>> It's probably worth mentioning that using anything other than
>>>>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
>>>>>> clock attribute on the condition variable) is prone to surprises (such
>>>>>> as NTP corrections and daylight-saving changes).
>>>>>
>>>>> I was about to suggest the same, but how widespread is support for
>>>>> pthread_condattr_setclock?
>>>>
>>>> If it's not all is lost anyway.
>>>
>>> Only once every year. :)
>>
>> ...and not for the current users of this service, which do not care that
>> much about the timeout and a potential delay or early wakeup.
>>
> 
> An hour of potential delay mind you.

Nope, look at posix-aio-compat. It's an optimization to keep the number
of worker threads under control.

Granted, time adjustments can make qemu_cond_timedwait in this primitive
(but easily portable) form less useful for other purposes.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 13:03                   ` Jan Kiszka
@ 2012-04-05 13:20                     ` malc
  2012-04-05 13:24                       ` Jan Kiszka
  0 siblings, 1 reply; 27+ messages in thread
From: malc @ 2012-04-05 13:20 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On Thu, 5 Apr 2012, Jan Kiszka wrote:

> On 2012-04-05 15:00, malc wrote:
> > On Thu, 5 Apr 2012, Jan Kiszka wrote:
> > 
> >> On 2012-04-05 14:56, Paolo Bonzini wrote:
> >>> Il 05/04/2012 14:53, malc ha scritto:
> >>>> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
> >>>>
> >>>>> Il 05/04/2012 14:30, malc ha scritto:
> >>>>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
> >>>>>>>> in QEMU and was reluctant to risk some BSD breakage.
> >>>>>>>>
> >>>>>> It's probably worth mentioning that using anything other than
> >>>>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> >>>>>> clock attribute on the condition variable) is prone to surprises (such
> >>>>>> as NTP corrections and daylight-saving changes).
> >>>>>
> >>>>> I was about to suggest the same, but how widespread is support for
> >>>>> pthread_condattr_setclock?
> >>>>
> >>>> If it's not all is lost anyway.
> >>>
> >>> Only once every year. :)
> >>
> >> ...and not for the current users of this service, which do not care that
> >> much about the timeout and a potential delay or early wakeup.
> >>
> > 
> > An hour of potential delay mind you.
> 
> >> Nope, look at posix-aio-compat. It's an optimization to keep the number
> >> of worker threads under control.

The code attempts to sleep for ten seconds, which can turn into an hour
and ten seconds if the conditions are right.

> 
> Granted, time adjustments can make qemu_cond_timedwait in this primitive
> (but easily portable) form less useful for other purposes.
> 
> Jan
> 
> 

-- 
mailto:av1474@comtv.ru

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 13:20                     ` malc
@ 2012-04-05 13:24                       ` Jan Kiszka
  2012-04-05 13:37                         ` malc
  0 siblings, 1 reply; 27+ messages in thread
From: Jan Kiszka @ 2012-04-05 13:24 UTC (permalink / raw)
  To: malc
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On 2012-04-05 15:20, malc wrote:
> On Thu, 5 Apr 2012, Jan Kiszka wrote:
> 
>> On 2012-04-05 15:00, malc wrote:
>>> On Thu, 5 Apr 2012, Jan Kiszka wrote:
>>>
>>>> On 2012-04-05 14:56, Paolo Bonzini wrote:
>>>>> Il 05/04/2012 14:53, malc ha scritto:
>>>>>> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
>>>>>>
>>>>>>> Il 05/04/2012 14:30, malc ha scritto:
>>>>>>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
>>>>>>>>>> in QEMU and was reluctant to risk some BSD breakage.
>>>>>>>>>>
>>>>>>>> It's probably worth mentioning that using anything other than
>>>>>>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
>>>>>>>> clock attribute on the condition variable) is prone to surprises (such
>>>>>>>> as NTP corrections and daylight-saving changes).
>>>>>>>
>>>>>>> I was about to suggest the same, but how widespread is support for
>>>>>>> pthread_condattr_setclock?
>>>>>>
>>>>>> If it's not all is lost anyway.
>>>>>
>>>>> Only once every year. :)
>>>>
>>>> ...and not for the current users of this service, which do not care that
>>>> much about the timeout and a potential delay or early wakeup.
>>>>
>>>
>>> An hour of potential delay mind you.
>>
>> Nope, look at posix-aio-compat. It's an optimization to keep the number
>> of worker threads under control.
> 
> The code attempts to sleep for ten seconds, which can turn into an hour
> and ten seconds if the conditions are right.

Yes, but look at what happens then: it is unlikely that the thread will
stay idle so long on a busy system (some request will wake it up earlier
again), and even if that happens, the thread will simply consume a few
resources "a bit" longer.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

* Re: [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX
  2012-04-05 13:24                       ` Jan Kiszka
@ 2012-04-05 13:37                         ` malc
  0 siblings, 0 replies; 27+ messages in thread
From: malc @ 2012-04-05 13:37 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, qemu-devel@nongnu.org,
	Peter Maydell

On Thu, 5 Apr 2012, Jan Kiszka wrote:

> On 2012-04-05 15:20, malc wrote:
> > On Thu, 5 Apr 2012, Jan Kiszka wrote:
> > 
> >> On 2012-04-05 15:00, malc wrote:
> >>> On Thu, 5 Apr 2012, Jan Kiszka wrote:
> >>>
> >>>> On 2012-04-05 14:56, Paolo Bonzini wrote:
> >>>>> Il 05/04/2012 14:53, malc ha scritto:
> >>>>>> On Thu, 5 Apr 2012, Paolo Bonzini wrote:
> >>>>>>
> >>>>>>> Il 05/04/2012 14:30, malc ha scritto:
> >>>>>>>>>> Would save that "* 1000". I just wondered why we do not use it elsewhere
> >>>>>>>>>> in QEMU and was reluctant to risk some BSD breakage.
> >>>>>>>>>>
> >>>>>>>> It's probably worth mentioning that using anything other than
> >>>>>>>> clock_gettime with CLOCK_MONOTONIC (as well as setting the proper pthread
> >>>>>>>> clock attribute on the condition variable) is prone to surprises (such
> >>>>>>>> as NTP corrections and daylight-saving changes).
> >>>>>>>
> >>>>>>> I was about to suggest the same, but how widespread is support for
> >>>>>>> pthread_condattr_setclock?
> >>>>>>
> >>>>>> If it's not all is lost anyway.
> >>>>>
> >>>>> Only once every year. :)
> >>>>
> >>>> ...and not for the current users of this service, which do not care that
> >>>> much about the timeout and a potential delay or early wakeup.
> >>>>
> >>>
> >>> An hour of potential delay mind you.
> >>
> >> Nope, look at posix-aio-compat. It's an optimization to keep the number

     ^^^^ This is what I have issues with...

> >> of worker threads under control.
> > 
> > The code attempts to sleep for ten seconds, which can turn into an hour
> > and ten seconds if the conditions are right.
> 
> Yes, but look at what happens then: it is unlikely that the thread will
> stay idle so long on a busy system (some request will wake it up earlier
> again), and even if that happens, the thread will simply consume a few
> resources "a bit" longer.

not the practicality...

-- 
mailto:av1474@comtv.ru

end of thread (newest message: 2012-04-05 13:37 UTC)

Thread overview: 27+ messages
2012-04-05 10:59 [Qemu-devel] [PATCH v3 00/10] Use more central threading and synchronization services Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 01/10] Introduce qemu_cond_timedwait for POSIX Jan Kiszka
2012-04-05 11:19   ` Peter Maydell
2012-04-05 11:56     ` Jan Kiszka
2012-04-05 12:15       ` Peter Maydell
2012-04-05 12:30       ` malc
2012-04-05 12:37         ` Paolo Bonzini
2012-04-05 12:53           ` malc
2012-04-05 12:56             ` Paolo Bonzini
2012-04-05 12:59               ` Jan Kiszka
2012-04-05 13:00                 ` malc
2012-04-05 13:03                   ` Jan Kiszka
2012-04-05 13:20                     ` malc
2012-04-05 13:24                       ` Jan Kiszka
2012-04-05 13:37                         ` malc
2012-04-05 12:59               ` malc
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 02/10] Switch POSIX compat AIO to QEMU abstractions Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 03/10] Switch compatfd to QEMU thread Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 04/10] qemu-thread: Factor out qemu_error_exit Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 05/10] Introduce QemuEvent abstraction Jan Kiszka
2012-04-05 11:23   ` Paolo Bonzini
2012-04-05 12:20     ` Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 06/10] Use QemuEvent in main loop Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 07/10] Drop unused qemu_eventfd Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 08/10] Use QemuEvent for POSIX AIO Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 09/10] virtio: Switch to QemuEvent Jan Kiszka
2012-04-05 10:59 ` [Qemu-devel] [PATCH v3 10/10] Remove EventNotifier Jan Kiszka
