* [Qemu-devel] [PATCH 0/2] dataplane: add query-iothreads QMP command
@ 2014-02-21 14:51 Stefan Hajnoczi
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away Stefan Hajnoczi
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command Stefan Hajnoczi
  0 siblings, 2 replies; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-21 14:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shergill, Gurinder, Paolo Bonzini, Vinod, Chegu, Luiz Capitulino

This series applies on top of "[PATCH v4 0/6] dataplane: switch to N:M
devices-per-thread model".

The new "query-iothreads" command allows QMP clients to fetch the thread
IDs for dataplane threads.  This will allow libvirt and friends to bind
dataplane threads.

The approach is similar to "query-cpus" which is also used for vcpu
thread binding (among other things).

Stefan Hajnoczi (2):
  iothread: stash thread ID away
  qmp: add query-iothreads command

 iothread.c       | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 qapi-schema.json | 29 +++++++++++++++++++++++
 qmp-commands.hx  | 39 +++++++++++++++++++++++++++++++
 3 files changed, 136 insertions(+), 2 deletions(-)

-- 
1.8.5.3
* [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-21 14:51 [Qemu-devel] [PATCH 0/2] dataplane: add query-iothreads QMP command Stefan Hajnoczi
@ 2014-02-21 14:51 ` Stefan Hajnoczi
  2014-02-21 15:18   ` Paolo Bonzini
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command Stefan Hajnoczi
  1 sibling, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-21 14:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shergill, Gurinder, Paolo Bonzini, Vinod, Chegu, Luiz Capitulino

Keep the thread ID around so we can report it via QMP.

There's only one problem: qemu_get_thread_id() (gettid() wrapper on
Linux) must be called from the thread itself.  There is no way to get
the thread ID from outside the thread.

This patch uses a condvar to wait for iothread_run() to populate the
thread_id inside the thread.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 iothread.c | 34 ++++++++++++++++++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/iothread.c b/iothread.c
index 033de7f..1a0d1ec 100644
--- a/iothread.c
+++ b/iothread.c
@@ -26,8 +26,15 @@ struct IOThread {
     QemuThread thread;
     AioContext *ctx;
     bool stopping;
+    int thread_id;
 };
 
+typedef struct {
+    QemuMutex init_done_lock;
+    QemuCond init_done_cond;    /* is thread initialization done? */
+    IOThread *iothread;
+} ThreadInitInfo;
+
 #define IOTHREAD_GET_CLASS(obj) \
    OBJECT_GET_CLASS(IOThreadClass, obj, TYPE_IOTHREAD)
 #define IOTHREAD_CLASS(klass) \
@@ -35,7 +42,15 @@ struct IOThread {
 
 static void *iothread_run(void *opaque)
 {
-    IOThread *iothread = opaque;
+    ThreadInitInfo *init_info = opaque;
+    IOThread *iothread = init_info->iothread;
+
+    iothread->thread_id = qemu_get_thread_id();
+
+    /* Signal that initialization is done */
+    qemu_mutex_lock(&init_info->init_done_lock);
+    qemu_cond_signal(&init_info->init_done_cond);
+    qemu_mutex_unlock(&init_info->init_done_lock);
 
     while (!iothread->stopping) {
         aio_context_acquire(iothread->ctx);
@@ -60,15 +75,30 @@ static void iothread_instance_finalize(Object *obj)
 static void iothread_complete(UserCreatable *obj, Error **errp)
 {
     IOThread *iothread = IOTHREAD(obj);
+    ThreadInitInfo init_info = {
+        .iothread = iothread,
+    };
 
     iothread->stopping = false;
     iothread->ctx = aio_context_new();
 
+    qemu_mutex_init(&init_info.init_done_lock);
+    qemu_cond_init(&init_info.init_done_cond);
+
     /* This assumes we are called from a thread with useful CPU affinity for us
      * to inherit.
      */
     qemu_thread_create(&iothread->thread, iothread_run,
-                       iothread, QEMU_THREAD_JOINABLE);
+                       &init_info, QEMU_THREAD_JOINABLE);
+
+    /* Wait for initialization to complete */
+    qemu_mutex_lock(&init_info.init_done_lock);
+    qemu_cond_wait(&init_info.init_done_cond,
+                   &init_info.init_done_lock);
+    qemu_mutex_unlock(&init_info.init_done_lock);
+
+    qemu_cond_destroy(&init_info.init_done_cond);
+    qemu_mutex_destroy(&init_info.init_done_lock);
 }
 
 static void iothread_class_init(ObjectClass *klass, void *class_data)

-- 
1.8.5.3
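For readers following along, the handshake in this patch distills to the
standalone pthreads program below.  This is a sketch, not QEMU code: the
pthread_* calls stand in for QEMU's qemu_mutex/qemu_cond wrappers, the
names are reused from the patch only for readability, and the explicit
init_done flag is an addition that also guards against the child thread
signaling before the parent reaches the wait:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Illustrative stand-in for the patch's ThreadInitInfo */
    typedef struct {
        pthread_mutex_t init_done_lock;
        pthread_cond_t init_done_cond;
        int init_done;                  /* predicate guarding the wait */
        pid_t thread_id;
    } ThreadInitInfo;

    static void *iothread_run(void *opaque)
    {
        ThreadInitInfo *init_info = opaque;

        pthread_mutex_lock(&init_info->init_done_lock);
        init_info->thread_id = syscall(SYS_gettid); /* like qemu_get_thread_id() */
        init_info->init_done = 1;
        pthread_cond_signal(&init_info->init_done_cond);
        pthread_mutex_unlock(&init_info->init_done_lock);
        return NULL;
    }

    int main(void)
    {
        ThreadInitInfo init_info = { .init_done = 0 };
        pthread_t thread;

        pthread_mutex_init(&init_info.init_done_lock, NULL);
        pthread_cond_init(&init_info.init_done_cond, NULL);
        pthread_create(&thread, NULL, iothread_run, &init_info);

        /* Waiting on a predicate covers the case where the child signals
         * before the parent reaches pthread_cond_wait() */
        pthread_mutex_lock(&init_info.init_done_lock);
        while (!init_info.init_done) {
            pthread_cond_wait(&init_info.init_done_cond,
                              &init_info.init_done_lock);
        }
        pthread_mutex_unlock(&init_info.init_done_lock);

        printf("thread id: %d\n", (int)init_info.thread_id);
        pthread_join(thread, NULL);
        return 0;
    }

Where and when the lock and condvar may be destroyed is exactly the
question debated in the replies that follow.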
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away Stefan Hajnoczi
@ 2014-02-21 15:18   ` Paolo Bonzini
  2014-02-24 15:53     ` Stefan Hajnoczi
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2014-02-21 15:18 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Vinod, Chegu, Shergill, Gurinder, Luiz Capitulino

On 21/02/2014 15:51, Stefan Hajnoczi wrote:
> Keep the thread ID around so we can report it via QMP.
> [...]
> +    /* Wait for initialization to complete */
> +    qemu_mutex_lock(&init_info.init_done_lock);
> +    qemu_cond_wait(&init_info.init_done_cond,
> +                   &init_info.init_done_lock);
> +    qemu_mutex_unlock(&init_info.init_done_lock);
> +
> +    qemu_cond_destroy(&init_info.init_done_cond);
> +    qemu_mutex_destroy(&init_info.init_done_lock);

Destroying the mutex here is racy.  You need to keep it until the
iothread is destroyed.

Paolo
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-21 15:18   ` Paolo Bonzini
@ 2014-02-24 15:53     ` Stefan Hajnoczi
  2014-02-24 16:48       ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-24 15:53 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Shergill, Gurinder, Vinod, Chegu, qemu-devel, Stefan Hajnoczi,
      Luiz Capitulino

On Fri, Feb 21, 2014 at 04:18:30PM +0100, Paolo Bonzini wrote:
> On 21/02/2014 15:51, Stefan Hajnoczi wrote:
> [...]
> > +    qemu_cond_destroy(&init_info.init_done_cond);
> > +    qemu_mutex_destroy(&init_info.init_done_lock);
>
> Destroying the mutex here is racy.  You need to keep it until the
> iothread is destroyed.

I don't think so:

qemu_cond_signal() is called with the mutex held.  Therefore, our
qemu_cond_wait() followed by qemu_mutex_unlock() will only complete once
the thread has released the mutex.

The thread will never touch the mutex again so it is safe to destroy it.
There is no race condition.

Stefan
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-24 15:53     ` Stefan Hajnoczi
@ 2014-02-24 16:48       ` Paolo Bonzini
  2014-02-25 15:42         ` Stefan Hajnoczi
  2014-02-25 16:17         ` Stefan Hajnoczi
  0 siblings, 2 replies; 12+ messages in thread
From: Paolo Bonzini @ 2014-02-24 16:48 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Shergill, Gurinder, Vinod, Chegu, qemu-devel, Stefan Hajnoczi,
      Luiz Capitulino

On 24/02/2014 16:53, Stefan Hajnoczi wrote:
> > > +    qemu_cond_destroy(&init_info.init_done_cond);
> > > +    qemu_mutex_destroy(&init_info.init_done_lock);
> >
> > Destroying the mutex here is racy.  You need to keep it until the
> > iothread is destroyed.
>
> I don't think so:
>
> qemu_cond_signal() is called with the mutex held.  Therefore, our
> qemu_cond_wait() followed by qemu_mutex_unlock() will only complete once
> the thread has released the mutex.
>
> The thread will never touch the mutex again so it is safe to destroy it.
> There is no race condition.

Could qemu_mutex_destroy run while the other thread has already
released the main thread, but before it returns?  As far as I know, the
only time when it is safe to destroy the "last" synchronization object
(in this case the mutex is the last, the condvar is not) is after
pthread_join.

Paolo
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-24 16:48       ` Paolo Bonzini
@ 2014-02-25 15:42         ` Stefan Hajnoczi
  2014-02-25 16:10           ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-25 15:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Stefan Hajnoczi, Shergill, Gurinder, Vinod, Chegu, qemu-devel,
      Luiz Capitulino

On Mon, Feb 24, 2014 at 05:48:13PM +0100, Paolo Bonzini wrote:
> [...]
> Could qemu_mutex_destroy run while the other thread has already
> released the main thread, but before it returns?  As far as I know,
> the only time when it is safe to destroy the "last" synchronization
> object (in this case the mutex is the last, the condvar is not) is
> after pthread_join.

I guess you're saying that while unlocking the mutex is atomic, that
doesn't guarantee pthread won't access the mutex internal state some
more after it has unlocked it.  Therefore it's not safe for another
thread to destroy the mutex even after it has acquired it.

POSIX does say that:

  "It shall be safe to destroy an initialized mutex that is unlocked."

But maybe I am reading too much into that?

After poking around glibc a little I think you are right.  I can't say
for sure but it seems even after a futex call glibc might still mess
with internal state.  But if anyone knows for certain, please speak up.

Stefan
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-25 15:42         ` Stefan Hajnoczi
@ 2014-02-25 16:10           ` Paolo Bonzini
  0 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2014-02-25 16:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Shergill, Gurinder, Vinod, Chegu, qemu-devel,
      Luiz Capitulino

On 25/02/2014 16:42, Stefan Hajnoczi wrote:
> I guess you're saying that while unlocking the mutex is atomic, that
> doesn't guarantee pthread won't access the mutex internal state some
> more after it has unlocked it.  Therefore it's not safe for another
> thread to destroy the mutex even after it has acquired it.

Yes.

> POSIX does say that:
>
>   "It shall be safe to destroy an initialized mutex that is unlocked."

The question is what "unlocked" means... :)

> But maybe I am reading too much into that?
>
> After poking around glibc a little I think you are right.  I can't say
> for sure but it seems even after a futex call glibc might still mess
> with internal state.  But if anyone knows for certain, please speak up.

I think other races are possible.  Let's look at the simple lock in
nptl/lowlevellock.h:

  /* Mutex lock counter:
     bit 31 clear means unlocked;
     bit 31 set means locked.

     All code that looks at bit 31 first increases the 'number of
     interested threads' usage counter, which is in bits 0-30.

The comment is wrong, there is a fast path that does not do that; I'm
not sure if this is why the problem can happen, I'm just pointing this
out because it contradicts the code I'm posting now.

The file uses C code, but it's simpler to look at it in assembly.
Unlocking is very simple:

    lock; btcl $31, futex
    jz 2f
    ... do futex wake ...
  2:

Locking has a fast path followed by preparing the slow path,
re-checking the fastpath condition, and waiting if it fails still:

    lock; btsl $31, futex
    jnc 9f
    lock; incl futex
  1:
    lock; btsl $31, futex
    jnc 8f
    ... do futex wait ...
    jmp 1b
  8:
    lock; decl futex
  9:

It's possible, if futex is locked by CPU 0 and CPU 1 tries to grab it,
that the following happens:

  CPU 0                        CPU 1
                               lock; btsl $31, futex   (fails)
                               lock; incl futex
  lock; btcl %0   (not zero)
                               lock; btsl $31, futex   (succeeds)
                               lock; decl futex
                               destroy lock
                               free(lock)
  futex wake

If you get an EFAULT from the futex wakeup, this could be a problem.

Paolo
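The pattern Paolo is warning about can be distilled into the following
self-contained C sketch (illustrative only, not QEMU code; the struct and
function names are invented for this example).  Whether the destroy/free
pair at the end is safe is exactly the question in this subthread:

    #include <pthread.h>
    #include <stdlib.h>

    struct handshake {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        int done;
    };

    static void *signaler(void *opaque)
    {
        struct handshake *h = opaque;

        pthread_mutex_lock(&h->lock);
        h->done = 1;
        pthread_cond_signal(&h->cond);
        pthread_mutex_unlock(&h->lock); /* glibc may still touch h->lock's
                                         * internals (the futex wake) after
                                         * the lock word itself is clear */
        return NULL;
    }

    int main(void)
    {
        struct handshake *h = malloc(sizeof(*h));
        pthread_t t;

        pthread_mutex_init(&h->lock, NULL);
        pthread_cond_init(&h->cond, NULL);
        h->done = 0;
        pthread_create(&t, NULL, signaler, h);

        pthread_mutex_lock(&h->lock);
        while (!h->done) {
            pthread_cond_wait(&h->cond, &h->lock);
        }
        pthread_mutex_unlock(&h->lock);

        /* The contested step: destroy and free before joining */
        pthread_mutex_destroy(&h->lock);
        free(h);                        /* signaler's unlock may still be in
                                         * flight: potential use after free
                                         * inside the C library */

        pthread_join(t, NULL);
        return 0;
    }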
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-24 16:48       ` Paolo Bonzini
  2014-02-25 15:42         ` Stefan Hajnoczi
@ 2014-02-25 16:17         ` Stefan Hajnoczi
  2014-02-25 16:27           ` Paolo Bonzini
  1 sibling, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-25 16:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Stefan Hajnoczi, Shergill, Gurinder, Vinod, Chegu, qemu-devel,
      Luiz Capitulino

On Mon, Feb 24, 2014 at 05:48:13PM +0100, Paolo Bonzini wrote:
> [...]
> Could qemu_mutex_destroy run while the other thread has already
> released the main thread, but before it returns?  As far as I know,
> the only time when it is safe to destroy the "last" synchronization
> object (in this case the mutex is the last, the condvar is not) is
> after pthread_join.

For the default mutex type (PTHREAD_MUTEX_TIMED_NP) glibc looks safe to
me.  The other mutex types are trickier and I haven't audited them.

Anyway, I can just move the mutex into the IOThread object and destroy
it after the thread is joined :).

Stefan
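A sketch of that fix, expressed as a delta to the earlier pthread example
(illustrative; the real v2 patch would operate on IOThread and QEMU's
wrappers instead): the synchronization objects live in the long-lived
structure and the tail of main() becomes:

    /* Join first: afterwards the signaler cannot still be inside
     * pthread_mutex_unlock(), so destroying the mutex is unambiguously
     * safe per POSIX -- no thread is using or waiting on it. */
    pthread_join(t, NULL);
    pthread_cond_destroy(&h->cond);
    pthread_mutex_destroy(&h->lock);
    free(h);
    return 0;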
* Re: [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away
  2014-02-25 16:17         ` Stefan Hajnoczi
@ 2014-02-25 16:27           ` Paolo Bonzini
  0 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2014-02-25 16:27 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stefan Hajnoczi, Shergill, Gurinder, Vinod, Chegu, qemu-devel,
      Luiz Capitulino

On 25/02/2014 17:17, Stefan Hajnoczi wrote:
> For the default mutex type (PTHREAD_MUTEX_TIMED_NP) glibc looks safe to
> me.  The other mutex types are trickier and I haven't audited them.

It also depends on the low-level lock implementation.  I looked at the
C one and it's not safe; the x86-optimized one is hard to follow.

I think I was looking at a different race, namely cases where the
locking thread uses a fast path, while the unlocking thread plays it
safe and uses the slow path instead.  Then the slow path can run
"asynchronously" from the locking thread, and the locking thread has
time to unlock and destroy the mutex.  See the other message.

Paolo
* [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command
  2014-02-21 14:51 [Qemu-devel] [PATCH 0/2] dataplane: add query-iothreads QMP command Stefan Hajnoczi
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 1/2] iothread: stash thread ID away Stefan Hajnoczi
@ 2014-02-21 14:51 ` Stefan Hajnoczi
  2014-02-21 15:27   ` Eric Blake
  1 sibling, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-21 14:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shergill, Gurinder, Paolo Bonzini, Vinod, Chegu, Luiz Capitulino

The "query-iothreads" command returns a list of information about
iothreads.  See the patch for API documentation.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 iothread.c       | 36 ++++++++++++++++++++++++++++++++++++
 qapi-schema.json | 29 +++++++++++++++++++++++++++++
 qmp-commands.hx  | 39 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 104 insertions(+)

diff --git a/iothread.c b/iothread.c
index 1a0d1ec..7aa8463 100644
--- a/iothread.c
+++ b/iothread.c
@@ -17,6 +17,7 @@
 #include "qemu/thread.h"
 #include "block/aio.h"
 #include "sysemu/iothread.h"
+#include "qmp-commands.h"
 
 #define IOTHREADS_PATH "/objects"
 
@@ -151,3 +152,38 @@ AioContext *iothread_get_aio_context(IOThread *iothread)
 {
     return iothread->ctx;
 }
+
+static int query_one_iothread(Object *object, void *opaque)
+{
+    IOThreadInfoList ***prev = opaque;
+    IOThreadInfoList *elem;
+    IOThreadInfo *info;
+    IOThread *iothread;
+
+    iothread = (IOThread *)object_dynamic_cast(object, TYPE_IOTHREAD);
+    if (!iothread) {
+        return 0;
+    }
+
+    info = g_new0(IOThreadInfo, 1);
+    info->id = iothread_get_id(iothread);
+    info->thread_id = iothread->thread_id;
+
+    elem = g_new0(IOThreadInfoList, 1);
+    elem->value = info;
+    elem->next = NULL;
+
+    **prev = elem;
+    *prev = &elem->next;
+    return 0;
+}
+
+IOThreadInfoList *qmp_query_iothreads(Error **errp)
+{
+    IOThreadInfoList *head = NULL;
+    IOThreadInfoList **prev = &head;
+    Object *container = container_get(object_get_root(), IOTHREADS_PATH);
+
+    object_child_foreach(container, query_one_iothread, &prev);
+    return head;
+}
diff --git a/qapi-schema.json b/qapi-schema.json
index 473c096..6795a01 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -884,6 +884,35 @@
 { 'command': 'query-cpus', 'returns': ['CpuInfo'] }
 
 ##
+# @IOThreadInfo:
+#
+# Information about an iothread
+#
+# @id: the identifier of the iothread
+#
+# @thread_id: ID of the underlying host thread
+#
+# Since: 2.0
+##
+{ 'type': 'IOThreadInfo',
+  'data': {'id': 'str', 'thread_id': 'int'} }
+
+##
+# @query-iothreads:
+#
+# Returns a list of information about each iothread.
+#
+# Note this list excludes the QEMU main loop thread, which is not declared
+# using the -object iothread command-line option.  It is always the main
+# thread of the process.
+#
+# Returns: a list of @IOThreadInfo for each iothread
+#
+# Since: 2.0
+##
+{ 'command': 'query-iothreads', 'returns': ['IOThreadInfo'] }
+
+##
 # @BlockDeviceInfo:
 #
 # Information about the backing device for a block device.
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 8a0e832..ba16b61 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2304,6 +2304,45 @@ EQMP
     },
 
 SQMP
+query-iothreads
+---------------
+
+Returns a list of information about each iothread.
+
+Note this list excludes the QEMU main loop thread, which is not declared
+using the -object iothread command-line option.  It is always the main
+thread of the process.
+
+Return a json-array.  Each iothread is represented by a json-object, which contains:
+
+- "id": name of iothread (json-str)
+- "thread_id": ID of the underlying host thread (json-int)
+
+Example:
+
+-> { "execute": "query-iothreads" }
+<- {
+      "return":[
+         {
+            "id":"iothread0",
+            "thread_id":3134
+         },
+         {
+            "id":"iothread1",
+            "thread_id":3135
+         }
+      ]
+   }
+
+EQMP
+
+    {
+        .name       = "query-iothreads",
+        .args_type  = "",
+        .mhandler.cmd_new = qmp_marshal_input_query_iothreads,
+    },
+
+SQMP
 query-pci
 ---------

-- 
1.8.5.3
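For context, a full session against the monitor would look roughly like
this, assuming QEMU is started with an iothread object and a QMP socket
(the socket path, object IDs, and thread IDs below are illustrative):

    $ qemu-system-x86_64 \
        -object iothread,id=iothread0 \
        -qmp unix:/tmp/qmp.sock,server,nowait \
        ...

    <- { "QMP": { "version": { ... }, "capabilities": [] } }
    -> { "execute": "qmp_capabilities" }
    <- { "return": {} }
    -> { "execute": "query-iothreads" }
    <- { "return": [ { "id": "iothread0", "thread_id": 3134 } ] }

A management tool such as libvirt can then feed the returned thread_id
values to sched_setaffinity() or taskset to pin each dataplane thread,
which is the use case given in the cover letter.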
* Re: [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command
  2014-02-21 14:51 ` [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command Stefan Hajnoczi
@ 2014-02-21 15:27   ` Eric Blake
  2014-02-24 15:54     ` Stefan Hajnoczi
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Blake @ 2014-02-21 15:27 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Paolo Bonzini, Shergill, Gurinder, Vinod, Chegu, Luiz Capitulino

On 02/21/2014 07:51 AM, Stefan Hajnoczi wrote:
> The "query-iothreads" command returns a list of information about
> iothreads.  See the patch for API documentation.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---

> +++ b/qapi-schema.json
> @@ -884,6 +884,35 @@
>  { 'command': 'query-cpus', 'returns': ['CpuInfo'] }
>
>  ##
> +# @IOThreadInfo:
> +#
> +# Information about an iothread
> +#
> +# @id: the identifier of the iothread
> +#
> +# @thread_id: ID of the underlying host thread

For new interfaces, we prefer thread-id (dash, not underscore).  But
the idea definitely makes sense.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] [PATCH 2/2] qmp: add query-iothreads command
  2014-02-21 15:27   ` Eric Blake
@ 2014-02-24 15:54     ` Stefan Hajnoczi
  0 siblings, 0 replies; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-02-24 15:54 UTC (permalink / raw)
  To: Eric Blake
  Cc: Shergill, Gurinder, qemu-devel, Luiz Capitulino, Stefan Hajnoczi,
      Paolo Bonzini, Vinod, Chegu

On Fri, Feb 21, 2014 at 08:27:22AM -0700, Eric Blake wrote:
> On 02/21/2014 07:51 AM, Stefan Hajnoczi wrote:
> [...]
> > +# @thread_id: ID of the underlying host thread
>
> For new interfaces, we prefer thread-id (dash, not underscore).

Will fix in v2.

Stefan