qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock
@ 2011-10-03 11:23 Harsh Prateek Bora
  2011-10-03 11:23 ` [Qemu-devel] [PATCH 1/2] " Harsh Prateek Bora
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Harsh Prateek Bora @ 2011-10-03 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, aneesh.kumar

SynthFS uses rwlocks, which raises the need for a generic QemuRWLock API.
This patchset introduces one and makes the necessary changes to the relevant code.

Harsh Prateek Bora (2):
  Introduce QemuRWLock
  Use qemu_rwlock_* interface instead of pthread_rwlock_*

 hw/9pfs/virtio-9p-synth.c |   23 ++++++++++-------------
 qemu-thread-posix.c       |   26 ++++++++++++++++++++++++++
 qemu-thread-posix.h       |    4 ++++
 qemu-thread.h             |    6 ++++++
 4 files changed, 46 insertions(+), 13 deletions(-)

-- 
1.7.4.1

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [Qemu-devel] [PATCH 1/2] Introduce QemuRWLock
  2011-10-03 11:23 [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Harsh Prateek Bora
@ 2011-10-03 11:23 ` Harsh Prateek Bora
  2011-10-03 11:23 ` [Qemu-devel] [PATCH 2/2] Use qemu_rwlock_* interface instead of pthread_rwlock_* Harsh Prateek Bora
  2011-10-03 12:16 ` [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Jan Kiszka
  2 siblings, 0 replies; 6+ messages in thread
From: Harsh Prateek Bora @ 2011-10-03 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, aneesh.kumar

SynthFS, introduced in
http://lists.gnu.org/archive/html/qemu-devel/2011-09/msg01206.html,
uses the pthread_rwlock_* APIs for rwlocks, which raises the need for a
generic, QEMU-specific, OS-independent rwlock API. This patch introduces
one; a follow-up patch switches the pthread_rwlock_* calls over to
qemu_rwlock_*.

Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 qemu-thread-posix.c |   26 ++++++++++++++++++++++++++
 qemu-thread-posix.h |    4 ++++
 qemu-thread.h       |    6 ++++++
 3 files changed, 36 insertions(+), 0 deletions(-)

diff --git a/qemu-thread-posix.c b/qemu-thread-posix.c
index ac3c0c9..9ae7f44 100644
--- a/qemu-thread-posix.c
+++ b/qemu-thread-posix.c
@@ -147,3 +147,29 @@ void qemu_thread_exit(void *retval)
 {
     pthread_exit(retval);
 }
+
+int qemu_rwlock_wrlock(QemuRWLock *rwlock)
+{
+    return pthread_rwlock_wrlock(&rwlock->rwl);
+}
+
+int qemu_rwlock_unlock(QemuRWLock *rwlock)
+{
+    return pthread_rwlock_unlock(&rwlock->rwl);
+}
+
+int qemu_rwlock_rdlock(QemuRWLock *rwlock)
+{
+    return pthread_rwlock_rdlock(&rwlock->rwl);
+}
+
+void qemu_rwlock_init(QemuRWLock *rwlock)
+{
+    int err;
+    pthread_rwlockattr_t rwlockattr;
+
+    pthread_rwlockattr_init(&rwlockattr);
+    err = pthread_rwlock_init(&rwlock->rwl, &rwlockattr);
+    if (err)
+        error_exit(err, __func__);
+}
diff --git a/qemu-thread-posix.h b/qemu-thread-posix.h
index ee4618e..8c1e4f6 100644
--- a/qemu-thread-posix.h
+++ b/qemu-thread-posix.h
@@ -14,4 +14,8 @@ struct QemuThread {
     pthread_t thread;
 };
 
+struct QemuRWLock {
+    pthread_rwlock_t rwl;
+};
+
 #endif
diff --git a/qemu-thread.h b/qemu-thread.h
index 0a73d50..44321fb 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -6,6 +6,7 @@
 typedef struct QemuMutex QemuMutex;
 typedef struct QemuCond QemuCond;
 typedef struct QemuThread QemuThread;
+typedef struct QemuRWLock QemuRWLock;
 
 #ifdef _WIN32
 #include "qemu-thread-win32.h"
@@ -31,6 +32,11 @@ void qemu_cond_signal(QemuCond *cond);
 void qemu_cond_broadcast(QemuCond *cond);
 void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex);
 
+void qemu_rwlock_init(QemuRWLock *rwlock);
+int qemu_rwlock_rdlock(QemuRWLock *rwlock);
+int qemu_rwlock_wrlock(QemuRWLock *rwlock);
+int qemu_rwlock_unlock(QemuRWLock *rwlock);
+
 void qemu_thread_create(QemuThread *thread,
                        void *(*start_routine)(void*),
                        void *arg);
-- 
1.7.4.1


* [Qemu-devel] [PATCH 2/2] Use qemu_rwlock_* interface instead of pthread_rwlock_*
  2011-10-03 11:23 [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Harsh Prateek Bora
  2011-10-03 11:23 ` [Qemu-devel] [PATCH 1/2] " Harsh Prateek Bora
@ 2011-10-03 11:23 ` Harsh Prateek Bora
  2011-10-03 12:16 ` [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Jan Kiszka
  2 siblings, 0 replies; 6+ messages in thread
From: Harsh Prateek Bora @ 2011-10-03 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, aneesh.kumar


Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
---
 hw/9pfs/virtio-9p-synth.c |   23 ++++++++++-------------
 1 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/hw/9pfs/virtio-9p-synth.c b/hw/9pfs/virtio-9p-synth.c
index cbf74e4..9867bba 100644
--- a/hw/9pfs/virtio-9p-synth.c
+++ b/hw/9pfs/virtio-9p-synth.c
@@ -30,8 +30,7 @@ V9fsSynthNode v9fs_synth_root = {
     .attr = &v9fs_synth_root.actual_attr,
 };
 
-/*FIXME!! should be converted to qemu_rwlock_t */
-static pthread_rwlock_t v9fs_synth_mutex;
+static QemuRWLock v9fs_synth_mutex;
 static int v9fs_synth_node_count;
 /* set to 1 when the synth fs is ready */
 static int v9fs_synth_fs;
@@ -79,7 +78,7 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
     if (!parent) {
         parent = &v9fs_synth_root;
     }
-    pthread_rwlock_wrlock(&v9fs_synth_mutex);
+    qemu_rwlock_wrlock(&v9fs_synth_mutex);
     QLIST_FOREACH(tmp, &parent->child, sibling) {
         if (!strcmp(tmp->name, name)) {
             ret = EEXIST;
@@ -95,7 +94,7 @@ int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,
     *result = node;
     ret = 0;
 err_out:
-    pthread_rwlock_unlock(&v9fs_synth_mutex);
+    qemu_rwlock_unlock(&v9fs_synth_mutex);
     return ret;
 }
 
@@ -116,7 +115,7 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
         parent = &v9fs_synth_root;
     }
 
-    pthread_rwlock_wrlock(&v9fs_synth_mutex);
+    qemu_rwlock_wrlock(&v9fs_synth_mutex);
     QLIST_FOREACH(tmp, &parent->child, sibling) {
         if (!strcmp(tmp->name, name)) {
             ret = EEXIST;
@@ -137,7 +136,7 @@ int qemu_v9fs_synth_add_file(V9fsSynthNode *parent, int mode,
     QLIST_INSERT_HEAD(&parent->child, node, sibling);
     ret = 0;
 err_out:
-    pthread_rwlock_unlock(&v9fs_synth_mutex);
+    qemu_rwlock_unlock(&v9fs_synth_mutex);
     return ret;
 }
 
@@ -230,7 +229,7 @@ static int v9fs_synth_get_dentry(V9fsSynthNode *dir, struct dirent *entry,
     int i = 0;
     V9fsSynthNode *node;
 
-    pthread_rwlock_rdlock(&v9fs_synth_mutex);
+    qemu_rwlock_rdlock(&v9fs_synth_mutex);
     QLIST_FOREACH(node, &dir->child, sibling) {
         /* This is the off child of the directory */
         if (i == off) {
@@ -238,7 +237,7 @@ static int v9fs_synth_get_dentry(V9fsSynthNode *dir, struct dirent *entry,
         }
         i++;
     }
-    pthread_rwlock_unlock(&v9fs_synth_mutex);
+    qemu_rwlock_unlock(&v9fs_synth_mutex);
     if (!node) {
         /* end of directory */
         *result = NULL;
@@ -483,13 +482,13 @@ static int v9fs_synth_name_to_path(FsContext *ctx, V9fsPath *dir_path,
         goto out;
     }
     /* search for the name in the childern */
-    pthread_rwlock_rdlock(&v9fs_synth_mutex);
+    qemu_rwlock_rdlock(&v9fs_synth_mutex);
     QLIST_FOREACH(node, &dir_node->child, sibling) {
         if (!strcmp(node->name, name)) {
             break;
         }
     }
-    pthread_rwlock_unlock(&v9fs_synth_mutex);
+    qemu_rwlock_unlock(&v9fs_synth_mutex);
     if (!node) {
         errno = ENOENT;
         return -1;
@@ -532,11 +531,9 @@ static ssize_t my_test_read(void *in_buf, int len, off_t offset, void *arg)
 static int v9fs_synth_init(FsContext *ctx)
 {
     V9fsSynthNode *node;
-    pthread_rwlockattr_t rwlockattr;
 
     QLIST_INIT(&v9fs_synth_root.child);
-    pthread_rwlockattr_init(&rwlockattr);
-    pthread_rwlock_init(&v9fs_synth_mutex, &rwlockattr);
+    qemu_rwlock_init(&v9fs_synth_mutex);
 
     /* Add "." and ".." entries for root */
     v9fs_add_dir_node(&v9fs_synth_root, v9fs_synth_root.attr->mode,
-- 
1.7.4.1


* Re: [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock
  2011-10-03 11:23 [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Harsh Prateek Bora
  2011-10-03 11:23 ` [Qemu-devel] [PATCH 1/2] " Harsh Prateek Bora
  2011-10-03 11:23 ` [Qemu-devel] [PATCH 2/2] Use qemu_rwlock_* interface instead of pthread_rwlock_* Harsh Prateek Bora
@ 2011-10-03 12:16 ` Jan Kiszka
  2011-10-03 17:30   ` Aneesh Kumar K.V
  2 siblings, 1 reply; 6+ messages in thread
From: Jan Kiszka @ 2011-10-03 12:16 UTC (permalink / raw)
  To: Harsh Prateek Bora; +Cc: stefanha, qemu-devel, aneesh.kumar


On 2011-10-03 13:23, Harsh Prateek Bora wrote:
> SynthFS uses rwlocks, which raises the need for a generic QemuRWLock API.
> This patchset introduces one and makes the necessary changes to the relevant code.

Is the impact of using a plain mutex measurable with 9pfs? Usually it
takes very heavy write sections or highly concurrent read sections to
actually make a difference. Looking at the cited code, I would dare to
rule out the former (even more if the malloc was moved out of the
critical section). But I cannot assess the latter.

If it does matter, I would vote for introducing RCU directly.

Jan




* Re: [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock
  2011-10-03 12:16 ` [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock Jan Kiszka
@ 2011-10-03 17:30   ` Aneesh Kumar K.V
  2011-10-03 17:50     ` Jan Kiszka
  0 siblings, 1 reply; 6+ messages in thread
From: Aneesh Kumar K.V @ 2011-10-03 17:30 UTC (permalink / raw)
  To: Jan Kiszka, Harsh Prateek Bora; +Cc: stefanha, qemu-devel

On Mon, 03 Oct 2011 14:16:09 +0200, Jan Kiszka <jan.kiszka@web.de> wrote:
> On 2011-10-03 13:23, Harsh Prateek Bora wrote:
> > SynthFS uses rwlocks, which raises the need for a generic QemuRWLock API.
> > This patchset introduces one and makes the necessary changes to the relevant code.
> 
> Is the impact of using a plain mutex measurable with 9pfs? Usually it
> takes very heavy write sections or highly concurrent read sections to
> actually make a difference. Looking at the cited code, I would dare to
> rule out the former (even more if the malloc was moved out of the
> critical section). But I cannot assess the latter.
> 
> If it does matter, I would vote for introducing RCU directly.

I haven't done any measurements. The lock is taken in write mode when
creating a new file system object, and in read mode during lookup (walk)
and readdir. Considering we allow creation of objects only during init,
it will mostly be taken in read mode. Currently there is no deletion of
objects. We didn't want those parallel reads to be mutually exclusive.

For RCU, are you suggesting we work with the userspace RCU implementation
at http://lttng.org/urcu?

-aneesh


* Re: [Qemu-devel] [PATCH 0/2] Introduce QemuRWLock
  2011-10-03 17:30   ` Aneesh Kumar K.V
@ 2011-10-03 17:50     ` Jan Kiszka
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Kiszka @ 2011-10-03 17:50 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: stefanha, Harsh Prateek Bora, qemu-devel


On 2011-10-03 19:30, Aneesh Kumar K.V wrote:
> On Mon, 03 Oct 2011 14:16:09 +0200, Jan Kiszka <jan.kiszka@web.de> wrote:
>> On 2011-10-03 13:23, Harsh Prateek Bora wrote:
>>> SynthFS uses rwlocks, which raises the need for a generic QemuRWLock API.
>>> This patchset introduces one and makes the necessary changes to the relevant code.
>>
>> Is the impact of using a plain mutex measurable with 9pfs? Usually it
>> takes very heavy write sections or highly concurrent read sections to
>> actually make a difference. Looking at the cited code, I would dare to
>> rule out the former (even more if the malloc was moved out of the
>> critical section). But I cannot assess the latter.
>>
>> If it does matter, I would vote for introducing RCU directly.
> 
> I haven't done any measurements. The lock is taken in write mode when
> creating a new file system object, and in read mode during lookup (walk)
> and readdir. Considering we allow creation of objects only during init,
> it will mostly be taken in read mode. Currently there is no deletion of
> objects. We didn't want those parallel reads to be mutually exclusive.

That still doesn't answer whether it justifies a new locking mechanism. RW
locks are rarely useful, a nightmare for RT, and widely obsolete when
RCU is available.

> 
> For RCU, are you suggesting we work with the userspace RCU implementation
> at http://lttng.org/urcu?

See http://thread.gmane.org/gmane.comp.emulators.qemu/113529. That would
also help my TCG locking optimization where I had to hack away some
ram_list changes for lock-less read access
(http://thread.gmane.org/gmane.comp.emulators.qemu/1188079).

Jan




