* [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
2010-05-29 7:38 ` [Qemu-devel] [PATCH 3/3] vnc: threaded VNC server Corentin Chary
@ 2010-06-03 7:55 ` Paolo Bonzini
2010-06-03 8:26 ` Corentin Chary
0 siblings, 1 reply; 3+ messages in thread
From: Paolo Bonzini @ 2010-06-03 7:55 UTC (permalink / raw)
To: Corentin Chary
Cc: Anthony Liguori, Alexander Graf, Adam Litke, qemu-devel,
Gautham R Shenoy
On 05/29/2010 09:38 AM, Corentin Chary wrote:
> Implement a threaded VNC server using the producer-consumer model.
> The main thread will push encoding jobs (a list of rectangles to update)
> in a queue, and the VNC worker thread will consume that queue and send
> framebuffer updates to the output buffer.
>
> There are three levels of locking:
> - jobs queue lock: for each operation on the queue (push, pop, isEmpty?)
> - VncState global lock: mainly used for framebuffer updates to avoid
> screen corruption if the framebuffer is updated
> while the worker thread is doing something.
> - VncState::output lock: used to make sure the output buffer is not corrupted
> if two threads try to write to it at the same time
>
> While the VNC worker thread is working, the VncState global lock is held
> to avoid screen corruption (this blocks vnc_refresh() for a short time), but the
> output lock is not held because the thread works on its own output buffer. When
> the encoding job is done, the worker thread will take the output lock and copy
> its output buffer into vs->output.
This belongs in a comment in the code, not in the commit message (or in
both).
> +void vnc_job_push(VncJob *job)
> +{
> +    vnc_lock_queue(queue);
> +    if (QLIST_EMPTY(&job->rectangles)) {
> +        qemu_free(job);
No need to lock if you get into the "then" block.
> +    } else {
> +        QTAILQ_INSERT_TAIL(&queue->jobs, job, next);
> +        qemu_cond_broadcast(&queue->cond);
> +    }
> +    vnc_unlock_queue(queue);
> +}
...
> +static int vnc_worker_thread_loop(VncJobQueue *queue)
> +{
> +    VncJob *job;
> +    VncRectEntry *entry, *tmp;
> +    VncState vs;
> +    int n_rectangles;
> +    int saved_offset;
> +
> +    vnc_lock_queue(queue);
> +    if (QTAILQ_EMPTY(&queue->jobs)) {
> +        qemu_cond_wait(&queue->cond, &queue->mutex);
> +    }
> +
> +    /* If the queue is empty, it's an exit order */
> +    if (QTAILQ_EMPTY(&queue->jobs)) {
> +        vnc_unlock_queue(queue);
> +        return -1;
> +    }
This is not safe. It might work with a single consumer, but something
like this is better:
    vnc_lock_queue(queue);
    while (!queue->exit && QTAILQ_EMPTY(&queue->jobs)) {
        qemu_cond_wait(&queue->cond, &queue->mutex);
    }
    if (queue->exit) {
        vnc_unlock_queue(queue);
        return -1;
    }
(It occurred to me now that maybe you can reuse ->aborting. Not sure
though).
> +    qemu_mutex_unlock(&job->vs->output_mutex);
> +
> +    if (job->vs->csock != -1 && job->vs->abording != true) {
> +        vnc_flush(job->vs);
> +    }
> +
> +
You're accessing the abort flag outside the mutex here. Also, you are
not using vnc_{,un}lock_output.
> +    job = QTAILQ_FIRST(&queue->jobs);
> +    vnc_unlock_queue(queue);
...
> +static void vnc_abord_display_jobs(VncDisplay *vd)
> +{
> +    VncState *vs;
> +
> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
> +        vnc_lock_output(vs);
> +        vs->abording = true;
> +        vnc_unlock_output(vs);
> +    }
> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
> +        vnc_jobs_join(vs);
> +    }
> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
> +        vnc_lock_output(vs);
> +        vs->abording = false;
> +        vnc_unlock_output(vs);
> +    }
> +}
It's "abort" not "abord". :-)
...
> static void vnc_disconnect_finish(VncState *vs)
> {
> +    vnc_jobs_join(vs); /* Wait encoding jobs */
> +    vnc_lock(vs);
Possibly racy? Maybe you have to set the aforementioned new flag
queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it
is set.
Also, if anything waits on the same vs in vnc_refresh while you own it
in vnc_disconnect_finish, as soon as you unlock they'll have a dangling
pointer. (After you unlock the mutex the OS wakes the thread, but then
pthread_mutex_lock has to check again that no one got the lock in the
meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you). Probably it's
better to use a single lock on vd->clients instead of one lock per VncState.
> +void vnc_client_write(void *opaque)
> +{
> +    VncState *vs = opaque;
> +
> +    vnc_lock_output(vs);
> +    if (vs->output.offset) {
> +        vnc_client_write_locked(opaque);
> +    } else {
> +        qemu_set_fd_handler2(vs->csock, NULL, vnc_client_read, NULL, vs);
> +    }
Why the if? The "else" branch is already done by vnc_client_write_plain.
This may be a good time to port qemu-threads to Windows too. The IO thread
has no hope of working under Windows, at least without major hacks (because
Windows has no asynchronous interrupts; the only way I can imagine to
emulate them is a breakpoint), but the threaded VNC server should work.
Paolo
* Re: [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
2010-06-03 7:55 ` [Qemu-devel] " Paolo Bonzini
@ 2010-06-03 8:26 ` Corentin Chary
0 siblings, 0 replies; 3+ messages in thread
From: Corentin Chary @ 2010-06-03 8:26 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Anthony Liguori, Adam Litke, Gautham R Shenoy, qemu-devel,
Alexander Graf
On Thu, Jun 3, 2010 at 9:55 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 05/29/2010 09:38 AM, Corentin Chary wrote:
>>
>> Implement a threaded VNC server using the producer-consumer model.
>> The main thread will push encoding jobs (a list of rectangles to update)
>> in a queue, and the VNC worker thread will consume that queue and send
>> framebuffer updates to the output buffer.
>>
>> There are three levels of locking:
>> - jobs queue lock: for each operation on the queue (push, pop, isEmpty?)
>> - VncState global lock: mainly used for framebuffer updates to avoid
>> screen corruption if the framebuffer is updated
>> while the worker thread is doing something.
>> - VncState::output lock: used to make sure the output buffer is not
>> corrupted if two threads try to write to it at the same time
>>
>> While the VNC worker thread is working, the VncState global lock is held
>> to avoid screen corruption (this blocks vnc_refresh() for a short time),
>> but the output lock is not held because the thread works on its own
>> output buffer. When the encoding job is done, the worker thread will take
>> the output lock and copy its output buffer into vs->output.
>
> This belongs in a comment in the code, not in the commit message (or in
> both).
Right
>> +void vnc_job_push(VncJob *job)
>> +{
>> +    vnc_lock_queue(queue);
>> +    if (QLIST_EMPTY(&job->rectangles)) {
>> +        qemu_free(job);
>
> No need to lock if you get into the "then" block.
I locked it because the main thread can try to push a job while a
consumer is removing one, so I can't call QLIST_EMPTY() without
locking the queue.
>> +    } else {
>> +        QTAILQ_INSERT_TAIL(&queue->jobs, job, next);
>> +        qemu_cond_broadcast(&queue->cond);
>> +    }
>> +    vnc_unlock_queue(queue);
>> +}
>
> ...
>
>> +static int vnc_worker_thread_loop(VncJobQueue *queue)
>> +{
>> +    VncJob *job;
>> +    VncRectEntry *entry, *tmp;
>> +    VncState vs;
>> +    int n_rectangles;
>> +    int saved_offset;
>> +
>> +    vnc_lock_queue(queue);
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        qemu_cond_wait(&queue->cond, &queue->mutex);
>> +    }
>> +
>> +    /* If the queue is empty, it's an exit order */
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        vnc_unlock_queue(queue);
>> +        return -1;
>> +    }
>
> This is not safe. It might work with a single consumer, but something like
> this is better:
>
>    vnc_lock_queue(queue);
>    while (!queue->exit && QTAILQ_EMPTY(&queue->jobs)) {
>        qemu_cond_wait(&queue->cond, &queue->mutex);
>    }
>    if (queue->exit) {
>        vnc_unlock_queue(queue);
>        return -1;
>    }
Right.
> (It occurred to me now that maybe you can reuse ->aborting. Not sure
> though).
>
>> +    qemu_mutex_unlock(&job->vs->output_mutex);
>> +
>> +    if (job->vs->csock != -1 && job->vs->abording != true) {
>> +        vnc_flush(job->vs);
>> +    }
>> +
>
> You're accessing the abort flag outside the mutex here. Also, you are not
> using vnc_{,un}lock_output.
I assumed that bool (int) accesses were atomic... but you're right, I should lock that.
>> +    job = QTAILQ_FIRST(&queue->jobs);
>> +    vnc_unlock_queue(queue);
>
> ...
>
>> +static void vnc_abord_display_jobs(VncDisplay *vd)
>> +{
>> +    VncState *vs;
>> +
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = true;
>> +        vnc_unlock_output(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_jobs_join(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = false;
>> +        vnc_unlock_output(vs);
>> +    }
>> +}
>
> It's "abort" not "abord". :-)
Ooops ...
> ...
>
>> static void vnc_disconnect_finish(VncState *vs)
>> {
>> +    vnc_jobs_join(vs); /* Wait encoding jobs */
>> +    vnc_lock(vs);
>
> Possibly racy? Maybe you have to set the aforementioned new flag
> queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it is
> set.
>
> Also, if anything waits on the same vs in vnc_refresh while you own it in
> vnc_disconnect_finish, as soon as you unlock they'll have a dangling
> pointer. (After you unlock the mutex the OS wakes the thread, but then
> pthread_mutex_lock has to check again that no one got the lock in the
> meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you). Probably it's
> better to use a single lock on vd->clients instead of one lock per VncState.
vnc_disconnect_finish can only be called by the main thread, so I don't
see how this could be racy. Any hint? Am I missing something?
>> +void vnc_client_write(void *opaque)
>> +{
>> +    VncState *vs = opaque;
>> +
>> +    vnc_lock_output(vs);
>> +    if (vs->output.offset) {
>> +        vnc_client_write_locked(opaque);
>> +    } else {
>> +        qemu_set_fd_handler2(vs->csock, NULL, vnc_client_read, NULL, vs);
>> +    }
>
> Why the if? The "else" branch is already done by vnc_client_write_plain.
This is because the vnc_write fd handler can be set by the thread, which
can end up calling vnc_client_write_plain with vs->output.offset == 0
and disconnecting the client.
> This may be a good time to port qemu-threads to Windows too. The IO thread
> has no hope of working under Windows, at least without major hacks (because
> Windows has no asynchronous interrupts; the only way I can imagine to emulate
> them is a breakpoint), but the threaded VNC server should work.
Right, but I won't do that since I don't have Windows :).
--
Corentin Chary
http://xf.iksaif.net
* Re: [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
[not found] <1152486922.2330781275561618453.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
@ 2010-06-03 10:46 ` Paolo Bonzini
0 siblings, 0 replies; 3+ messages in thread
From: Paolo Bonzini @ 2010-06-03 10:46 UTC (permalink / raw)
To: Corentin Chary
Cc: Anthony Liguori, Adam Litke, Gautham R Shenoy, qemu-devel,
Alexander Graf
> >> +void vnc_job_push(VncJob *job)
> >> +{
> >> +    vnc_lock_queue(queue);
> >> +    if (QLIST_EMPTY(&job->rectangles)) {
> >> +        qemu_free(job);
> >
> > No need to lock if you get into the "then" block.
>
> I locked it because the main thread can try to push a job while a
> consumer is removing one, so I can't call QLIST_EMPTY() without
> locking the queue.
You're obviously right.
>>> +    qemu_mutex_unlock(&job->vs->output_mutex);
>>> +
>>> +    if (job->vs->csock != -1 && job->vs->abording != true) {
>>> +        vnc_flush(job->vs);
>>> +    }
>>> +
>>
>> You're accessing the abort flag outside the mutex here. Also, you are not
>> using vnc_{,un}lock_output.
>
> I assumed that bool (int) accesses were atomic... but you're right, I should lock that.
They are, but: 1) if you access them outside a mutex you have to think about
whether you need memory barriers and whether there are races; 2) since you
already own the mutex anyway and are just keeping it a bit longer, holding it
costs basically nothing.
BTW, with the same reasoning you could avoid taking the mutex altogether in
vnc_abort_display_jobs (but I think it's better to keep it).
Also, I took a look at the code again and I noticed this:
>>> +    if (job->vs->csock == -1) {
>>> +        goto disconnected;
>>> +    }
The "goto" is jumping over the unlocking of &job->vs->mutex. You only want
a "break;" I think.
>> static void vnc_disconnect_finish(VncState *vs)
>> {
>> +    vnc_jobs_join(vs); /* Wait encoding jobs */
>> +    vnc_lock(vs);
>
> Possibly racy? Maybe you have to set the aforementioned new flag
> queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it is
> set.
>
> vnc_disconnect_finish can only be called by the main thread, I don't
> see how this could be racy, any hint ?
I was thinking of someone queuing jobs between the end of vnc_jobs_join
and the time the vnc_lock is taken. But indeed jobs cannot be queued
at this time because vnc_refresh can only be called from the same
thread. So this is correct.
>> Also, if anything waits on the same vs in vnc_refresh while you own it in
>> vnc_disconnect_finish, as soon as you unlock they'll have a dangling
>> pointer. (After you unlock the mutex the OS wakes the thread, but then
>> pthread_mutex_lock has to check again that no one got the lock in the
>> meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you). Probably it's
>> better to use a single lock on vd->clients instead of one lock per VncState.
Same here. No race because everything happens in the main thread.
Paolo