From: Gerd Hoffmann <kraxel@redhat.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] vnc: shared buffer: skip some optimizations.
Date: Mon, 16 Mar 2009 09:35:28 +0100
Message-ID: <49BE0F50.5000009@redhat.com>
In-Reply-To: <49BA4B9B.7040208@eu.citrix.com>

Stefano Stabellini wrote:
> Gerd Hoffmann wrote:
> 
>> Sending screen updates to the client will be done using the server
>> surface + dirty map only, so the guest updating the screen in parallel
>> can't cause trouble here.
> 
> Could you please give an example of the trouble caused by such situations?

Sure.

> Given A, B and C three different points in time with A < B < C,
> currently what we are doing is:
> 
> A) getting the guest dirty map
> 
> B) filtering the guest dirty map using memcmp with the server surface
> (called old_data in the current code), however the dirty map could be
> inconsistent with the framebuffer because it was set in A
> 
> C) sending updates to the client from the guest surface based on the
> dirty map, however the dirty map is inconsistent because it was set in A
> and filtered in B

The dirty bitmap inconsistency isn't a problem at all.  The next update
round will fix that.

The trouble spot is here:  The guest might have updated the screen
between (b) and (c).  This will cause the vnc client's view of the
screen content to become inconsistent with old_data.  On the next
update round, (b) will stop working correctly because of that.
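
To make the failure mode concrete, here is a tiny, self-contained C
simulation of that race (names made up for illustration, not taken from
the actual vnc code): the guest repaints a row from Y to X between (b)
and (c) and back to Y afterwards; the memcmp filter in the next round
then wrongly drops the update, leaving the client stuck on X while the
guest shows Y.

#include <stdio.h>
#include <string.h>

enum { ROW = 16 };

static char guest_fb[ROW];   /* guest-visible framebuffer              */
static char old_data[ROW];   /* what we *think* the client has         */
static char client[ROW];     /* what the client actually shows         */
static int  dirty;           /* dirty bit for our single row           */

/* step (b): filter the dirty bit with memcmp and refresh old_data */
static void filter_dirty(void)
{
    if (dirty && memcmp(guest_fb, old_data, ROW) == 0) {
        dirty = 0;                       /* looks unchanged -> skip it */
    } else if (dirty) {
        memcpy(old_data, guest_fb, ROW);
    }
}

/* step (c): encode from the guest framebuffer (the old behaviour) */
static void send_to_client(void)
{
    if (dirty) {
        memcpy(client, guest_fb, ROW);   /* "encode + send"            */
        dirty = 0;
    }
}

int main(void)
{
    memset(guest_fb, 'Y', ROW); dirty = 1;
    filter_dirty();                         /* old_data = Y            */
    memset(guest_fb, 'X', ROW); dirty = 1;  /* guest write between     */
                                            /* (b) and (c)             */
    send_to_client();                       /* client = X, old_data = Y*/

    memset(guest_fb, 'Y', ROW); dirty = 1;  /* next round: back to Y   */
    filter_dirty();                         /* memcmp says "unchanged" */
    send_to_client();                       /* -> nothing gets sent    */

    printf("client shows %c, guest shows %c\n", client[0], guest_fb[0]);
    return 0;                               /* client shows X, guest Y */
}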

What my patch does in terms of the old code is make step (c) use
old_data instead of the guest-visible framebuffer.  I didn't like
stacking hacks on hacks like this though, so I decided to explicitly
name the two surfaces.
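
Roughly, the flow with the two explicitly named surfaces looks like this
(again a sketch with made-up names, not the real qemu structures):
(b) copies each still-dirty row from the guest surface into the server
surface, and (c) encodes from the server surface only, so a guest write
landing between (b) and (c) can only show up in the next round and the
client can never get ahead of the server surface.

#include <string.h>

enum { WIDTH_BYTES = 1024, HEIGHT = 768 };

struct vnc_state {                           /* illustrative only      */
    char guest_row[HEIGHT][WIDTH_BYTES];     /* guest-visible surface  */
    char server_row[HEIGHT][WIDTH_BYTES];    /* server surface         */
    unsigned char guest_dirty[HEIGHT];       /* dirty vs. the guest    */
    unsigned char server_dirty[HEIGHT];      /* dirty vs. the client   */
};

/* step (b): sync guest surface -> server surface, set server_dirty */
static void vnc_sync_surfaces(struct vnc_state *vs)
{
    for (int y = 0; y < HEIGHT; y++) {
        if (!vs->guest_dirty[y]) {
            continue;
        }
        vs->guest_dirty[y] = 0;
        if (memcmp(vs->guest_row[y], vs->server_row[y], WIDTH_BYTES)) {
            memcpy(vs->server_row[y], vs->guest_row[y], WIDTH_BYTES);
            vs->server_dirty[y] = 1;
        }
    }
}

/* step (c): encode + send from the server surface only */
static void vnc_send_updates(struct vnc_state *vs,
                             void (*send_row)(int y, const char *row))
{
    for (int y = 0; y < HEIGHT; y++) {
        if (vs->server_dirty[y]) {
            vs->server_dirty[y] = 0;
            send_row(y, vs->server_row[y]);  /* never reads guest_row  */
        }
    }
}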

You could still get away with just one dirty bitmap for both surfaces.
Having two bitmaps IMHO makes the code more readable though.  It also
allows killing the memset(old_data,42,size) hack for forced client
updates (see framebuffer_update_request()).  And -- in preparation for
the future -- it makes the vnc code a bit more thread-friendly.  Guest
screen updating (aka vnc_update()) can run in parallel with encoding
updates for the client (i.e. step (c)).
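
The forced-update case then boils down to marking everything dirty in
the client-side bitmap instead of clobbering old_data (sketch again,
reusing the hypothetical struct vnc_state from above):

/* what framebuffer_update_request() needs for a non-incremental
 * request: just flag every row; the server surface stays valid, so
 * later incremental updates keep working */
static void vnc_force_full_update(struct vnc_state *vs)
{
    memset(vs->server_dirty, 1, sizeof(vs->server_dirty));
}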

> If your goal is to send the most up-to-date version of the framebuffer
> to the client ASAP, the current code is better at that.

But *that* is exactly where the bug is ;)

cheers,
  Gerd
