From: Corentin Chary <corentin.chary@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Anthony Liguori <aliguori@linux.vnet.ibm.com>,
	Adam Litke <agl@us.ibm.com>, Gautham R Shenoy <ego@in.ibm.com>,
	qemu-devel@nongnu.org, Alexander Graf <agraf@suse.de>
Subject: Re: [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
Date: Thu, 3 Jun 2010 10:26:12 +0200
Message-ID: <AANLkTiljPLUAx_m26HfFNI98GOoXFYMUnfCusC13OzZ5@mail.gmail.com>
In-Reply-To: <4C075FE0.80704@redhat.com>

On Thu, Jun 3, 2010 at 9:55 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 05/29/2010 09:38 AM, Corentin Chary wrote:
>>
>> Implement a threaded VNC server using the producer-consumer model.
>> The main thread will push encoding jobs (a list of rectangles to update)
>> into a queue, and the VNC worker thread will consume that queue and send
>> framebuffer updates to the output buffer.
>>
>> There are three levels of locking:
>> - jobs queue lock: for each operation on the queue (push, pop, isEmpty?)
>> - VncState global lock: mainly used for framebuffer updates, to avoid
>>   screen corruption if the framebuffer is updated while the worker
>>   thread is doing something.
>> - VncState::output lock: used to make sure the output buffer is not
>>   corrupted if two threads try to write to it at the same time.
>>
>> While the VNC worker thread is working, the VncState global lock is held
>> to avoid screen corruption (this blocks vnc_refresh() for a short time),
>> but the output lock is not held because the thread works on its own
>> output buffer. When the encoding job is done, the worker thread takes
>> the output lock and copies its output buffer into vs->output.
>
> This belongs in a comment in the code, not in the commit message (or in
> both).

Right.
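
I'll fold it into a comment at the top of the threaded-VNC code,
something along these lines (just the commit message restated as a
comment):

/*
 * Locking in the threaded VNC server (producer-consumer model):
 *
 * - jobs queue lock: taken for each operation on the queue
 *   (push, pop, isEmpty).
 * - VncState global lock: mainly used for framebuffer updates, to
 *   avoid screen corruption if the framebuffer is updated while the
 *   worker thread is encoding.
 * - VncState::output lock: makes sure the output buffer is not
 *   corrupted if two threads try to write to it at the same time.
 *
 * While the worker thread is encoding, it holds the VncState global
 * lock (this blocks vnc_refresh() for a short time) but not the
 * output lock, because it works on its own output buffer.  When the
 * job is done, it takes the output lock and copies its buffer into
 * vs->output.
 */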

>> +void vnc_job_push(VncJob *job)
>> +{
>> +    vnc_lock_queue(queue);
>> +    if (QLIST_EMPTY(&job->rectangles)) {
>> +        qemu_free(job);
>
> No need to lock if you get into the "then" block.

I locked it because the main thread can try to push a job while a
consumer is removing one, so I can't call QLIST_EMPTY() without
locking the queue.
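
For reference, the consumer removes jobs under the same lock; a rough
sketch of what that side looks like (the pop helper below is only an
illustration, not the exact code from the patch):

static VncJob *vnc_job_pop(VncJobQueue *queue)
{
    VncJob *job;

    vnc_lock_queue(queue);
    /* Both the producer's push and this removal run under
     * queue->mutex, so neither side sees a half-updated jobs list. */
    job = QTAILQ_FIRST(&queue->jobs);
    if (job != NULL) {
        QTAILQ_REMOVE(&queue->jobs, job, next);
    }
    vnc_unlock_queue(queue);
    return job;
}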

>> +    } else {
>> +        QTAILQ_INSERT_TAIL(&queue->jobs, job, next);
>> +        qemu_cond_broadcast(&queue->cond);
>> +    }
>> +    vnc_unlock_queue(queue);
>> +}
>
> ...
>
>> +static int vnc_worker_thread_loop(VncJobQueue *queue)
>> +{
>> +    VncJob *job;
>> +    VncRectEntry *entry, *tmp;
>> +    VncState vs;
>> +    int n_rectangles;
>> +    int saved_offset;
>> +
>> +    vnc_lock_queue(queue);
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        qemu_cond_wait(&queue->cond,&queue->mutex);
>> +    }
>> +
>> +    /* If the queue is empty, it's an exit order */
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        vnc_unlock_queue(queue);
>> +        return -1;
>> +    }
>
> This is not safe.  It might work with a single consumer, but something like
> this is better:
>
>   vnc_lock_queue(queue);
>   while (!queue->exit && QTAILQ_EMPTY(&queue->jobs)) {
>        qemu_cond_wait(&queue->cond,&queue->mutex);
>   }
>   if (queue->exit) {
>       vnc_unlock_queue(queue);
>       return -1;
>   }

Right.
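
The shutdown side would then just set the flag and wake the worker,
something like this (rough sketch; queue->exit is the new flag you
suggested, and the helper name here is made up):

static void vnc_queue_request_exit(VncJobQueue *queue)
{
    vnc_lock_queue(queue);
    queue->exit = true;
    /* Wake the worker even though the queue is empty, so it can see
     * the flag and return -1 from vnc_worker_thread_loop(). */
    qemu_cond_broadcast(&queue->cond);
    vnc_unlock_queue(queue);
}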

> (It occurred to me now that maybe you can reuse ->aborting.  Not sure
> though).
>
>> +    qemu_mutex_unlock(&job->vs->output_mutex);
>> +
>> +    if (job->vs->csock != -1 && job->vs->abording != true) {
>> +        vnc_flush(job->vs);
>> +    }
>> +
>
> You're accessing the abort flag outside the mutex here.  Also, you are not
> using vnc_{,un}lock_output.

I assumed that bool (int) accesses were atomic... but you're right, I
should lock that.
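
Something along these lines, maybe (rough sketch; I'm renaming the flag
to "abort" here, and I only sample it under the output lock, flushing
after dropping the lock, in case vnc_flush() wants to take the output
lock itself):

bool flush;

vnc_lock_output(job->vs);
/* ... copy the worker's output buffer into job->vs->output ... */
flush = (job->vs->csock != -1 && job->vs->abort != true);
vnc_unlock_output(job->vs);

if (flush) {
    vnc_flush(job->vs);
}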

>> +    job = QTAILQ_FIRST(&queue->jobs);
>> +    vnc_unlock_queue(queue);
>
> ...
>
>> +static void vnc_abord_display_jobs(VncDisplay *vd)
>> +{
>> +    VncState *vs;
>> +
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = true;
>> +        vnc_unlock_output(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_jobs_join(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = false;
>> +        vnc_unlock_output(vs);
>> +    }
>> +}
>
> It's "abort" not "abord". :-)

Ooops ...

> ...
>
>>  static void vnc_disconnect_finish(VncState *vs)
>>  {
>> +    vnc_jobs_join(vs); /* Wait encoding jobs */
>> +    vnc_lock(vs);
>
> Possibly racy?  Maybe you have to set the aforementioned new flag
> queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it is
> set.
>
> Also, if anything waits on the same vs in vnc_refresh while you own it in
> vnc_disconnect_finish, as soon as you unlock they'll have a dangling
> pointer.  (After you unlock the mutex the OS wakes the thread, but then
> pthread_mutex_lock has to check again that no one got the lock in the
> meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you).  Probably it's
> better to use a single lock on vd->clients instead of one lock per VncState.

vnc_disconnect_finish() can only be called by the main thread, so I
don't see how this could be racy. Any hint? Am I missing something?

>> +void vnc_client_write(void *opaque)
>> +{
>> +    VncState *vs = opaque;
>> +
>> +    vnc_lock_output(vs);
>> +    if (vs->output.offset) {
>> +        vnc_client_write_locked(opaque);
>> +    } else {
>> +        qemu_set_fd_handler2(vs->csock, NULL, vnc_client_read, NULL, vs);
>> +    }
>
> Why the if?  The "else" branch is already done by vnc_client_write_plain.

This is because the vnc_write fd handler can be set by the worker
thread, and that can end up calling vnc_client_write_plain() with
vs->output.offset == 0, which disconnects the client.

> This may be a good time to port qemu-threads to Windows too.  The IO
> thread has no hope of working under Windows, at least not without major
> hacks (because Windows has no asynchronous interrupts; the only way I can
> imagine to emulate them is a breakpoint), but threaded VNC should work.

Right, but I won't do that since I don't have Windows :).


-- 
Corentin Chary
http://xf.iksaif.net
