qemu-devel.nongnu.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
Date: Fri, 29 Apr 2011 09:38:28 -0600	[thread overview]
Message-ID: <1304091508.3418.11.camel@x201> (raw)
In-Reply-To: <4DBAD942.6080001@siemens.com>

On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >> When we're trying to get a newly registered phys memory client updated
> >> with the current page mappings, we end up passing the region offset
> >> (a ram_addr_t) as the start address rather than the actual guest
> >> physical memory address (target_phys_addr_t).  If your guest has less
> >> than 3.5G of memory, these are coincidentally the same thing.  If
> 
> I think this broke even with < 3.5G, as phys_offset also encodes the
> memory type while region_offset does not. So everything became RAM this
> way; no MMIO was announced.
> 
> >> there's more, the region offset for the memory above 4G starts over
> >> at 0, so the set_memory client will overwrite its lower memory entries.
> >>
> >> Instead, keep track of the guest physical address as we're walking the
> >> tables and pass that to the set_memory client.
> >>
> >> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > 
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> > 
> > Given all this, can you tell how much time it takes
> > to hotplug a device with, say, a 40G RAM guest?
> 
> Why not collect pages of identical types and report them as one chunk
> once the type changes?

Good idea, I'll see if I can code that up.  I don't have a terribly
large system to test with, but with an 8G guest, it's surprisingly not
very noticeable.  For vfio, I intend to only have one memory client, so
adding additional devices won't have to rescan everything.  The memory
overhead of keeping the list that the memory client creates is probably
also low enough that it isn't worthwhile to tear it all down if all the
devices are removed.  Thanks,

Alex
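
A minimal sketch of the idea in the patch description above (not the actual
QEMU change; names such as walk_l1, page_desc, mem_client and the two-level
layout here are simplified stand-ins for the structures in exec.c): the start
address handed to set_memory() is accumulated from the page-table indices,
i.e. the guest physical address, rather than taken from the per-page
region offset.

#include <stdint.h>

#define PAGE_BITS 12
#define L2_BITS   10
#define L2_SIZE   (1 << L2_BITS)

typedef uint64_t hwaddr_t;   /* guest physical address (target_phys_addr_t) */
typedef uint64_t ramaddr_t;  /* ram_addr_t-style offset, type bits included */

typedef struct {
    ramaddr_t phys_offset;   /* memory type + offset for one guest page */
} page_desc;

typedef struct mem_client {
    void (*set_memory)(struct mem_client *c, hwaddr_t start,
                       hwaddr_t size, ramaddr_t phys_offset);
} mem_client;

/* Report every mapped page to a freshly registered client.  The guest
 * physical address is built from the l1/l2 indices as we walk, which is
 * the point of the fix; the buggy version passed a ram-offset-derived
 * value as 'start' instead. */
static void walk_l2(mem_client *c, page_desc *pd, hwaddr_t base)
{
    for (int i = 0; i < L2_SIZE; i++) {
        hwaddr_t gpa = base | ((hwaddr_t)i << PAGE_BITS);
        c->set_memory(c, gpa, 1 << PAGE_BITS, pd[i].phys_offset);
    }
}

static void walk_l1(mem_client *c, page_desc **l1, int l1_size)
{
    for (int i = 0; i < l1_size; i++) {
        if (l1[i]) {
            walk_l2(c, l1[i], (hwaddr_t)i << (L2_BITS + PAGE_BITS));
        }
    }
}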
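And a sketch of the chunking Jan suggests, reusing the simplified types
above (again hypothetical, not what was eventually merged): adjacent pages
whose phys_offset advances in lockstep with the guest address are collapsed
into a single set_memory() call, so a large RAM region is reported once
instead of page by page.

static void walk_l2_coalesced(mem_client *c, page_desc *pd, hwaddr_t base)
{
    hwaddr_t run_start = base;                 /* guest physical start of run */
    ramaddr_t run_offset = pd[0].phys_offset;  /* phys_offset at run start    */
    hwaddr_t run_len = 0;                      /* bytes accumulated so far    */

    for (int i = 0; i < L2_SIZE; i++) {
        hwaddr_t gpa = base | ((hwaddr_t)i << PAGE_BITS);
        /* Same chunk if the page continues the current run: type unchanged
         * and offset contiguous (both folded into phys_offset here). */
        if (gpa == run_start + run_len &&
            pd[i].phys_offset == run_offset + run_len) {
            run_len += 1 << PAGE_BITS;
            continue;
        }
        c->set_memory(c, run_start, run_len, run_offset);
        run_start = gpa;
        run_offset = pd[i].phys_offset;
        run_len = 1 << PAGE_BITS;
    }
    c->set_memory(c, run_start, run_len, run_offset);     /* flush last run */
}

A real implementation would also need a different "same chunk" test for
MMIO pages, whose phys_offset does not advance per page the way RAM's does.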


Thread overview: 15+ messages
2011-04-29  3:15 [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset Alex Williamson
2011-04-29 15:06 ` Michael S. Tsirkin
2011-04-29 15:29   ` Jan Kiszka
2011-04-29 15:34     ` Michael S. Tsirkin
2011-04-29 15:41       ` Alex Williamson
2011-04-29 15:38     ` Alex Williamson [this message]
2011-04-29 15:45       ` Jan Kiszka
2011-04-29 15:55         ` Alex Williamson
2011-04-29 16:07           ` Jan Kiszka
2011-04-29 16:20             ` Alex Williamson
2011-04-29 16:31               ` Jan Kiszka
2011-05-01 10:29                 ` Michael S. Tsirkin
2011-04-29 16:52       ` Alex Williamson
2011-05-03 13:15 ` Markus Armbruster
2011-05-03 14:20   ` Alex Williamson
