From: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
To: Avi Kivity <avi@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
OHMURA Kei <ohmura.kei@lab.ntt.co.jp>,
qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: [Qemu-devel] Re: [PATCH 1/3] qemu-kvm: Wrap phys_ram_dirty with additional inline functions.
Date: Mon, 15 Feb 2010 14:05:15 +0900
Message-ID: <4B78D60B.8030309@lab.ntt.co.jp>
In-Reply-To: <4B7655C4.1050507@redhat.com>
Avi Kivity wrote:
> On 02/12/2010 04:08 AM, OHMURA Kei wrote:
>>> Why do you need a counter? It may be sufficient to set a single bit.
>>> This reduces the memory overhead and perhaps cache thrashing.
>> Thanks for looking into this. I agree with your opinion.
>>
>> Our motivation here is to skip traversing the dirty bitmap when it is
>> really sparse or dense, so either setting a bit or counting up would be fine.
>>
>> One advantage of the counter approach is that it makes this coarse
>> traversal granularity flexible. With the bit approach, the maximum
>> granularity is limited to HOST_LONG_BITS. If you think this flexibility
>> is useless, we will take the bit approach.
>
> The bit approach can be used for any packing ratio; for example, you can
> pack 64 pages into a single bit. The rule is that if one or more pages are
> dirty, the bit is set; otherwise it is clear. This makes clearing a
> single page expensive (you have to examine the state of the 63 other
> pages), but IIRC we always clear in ranges, so apart from the edges you
> can use a memset.
Sounds good. If we could extend the packing ratio to kvm (in the kernel),
it would be even more efficient.
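
To make sure we are on the same page, here is a minimal sketch of the
packed-bit approach as we understand it. It is illustrative only, not the
actual qemu code; PAGES_PER_BIT, packed_set_dirty() and packed_clear_range()
are names invented for this example. One bit summarizes a group of 64 pages,
and clearing a range of groups only needs mask operations at the edge words,
while the interior is wiped with memset(), as you describe.

/*
 * Sketch only: one summary bit per group of PAGES_PER_BIT pages.
 * All names here are made up for illustration.
 */
#include <string.h>

#define PAGES_PER_BIT  64                          /* packing ratio: 64 pages per bit */
#define BITS_PER_LONG  (8 * sizeof(unsigned long))
#define PACKED_WORDS   1024                        /* arbitrary size for the sketch */

static unsigned long packed_dirty[PACKED_WORDS];

/* Mark the group containing 'page' as possibly dirty. */
static void packed_set_dirty(unsigned long page)
{
    unsigned long group = page / PAGES_PER_BIT;

    packed_dirty[group / BITS_PER_LONG] |= 1UL << (group % BITS_PER_LONG);
}

/* Non-zero if any page in 'page's group may be dirty; zero means all clean. */
static int packed_group_dirty(unsigned long page)
{
    unsigned long group = page / PAGES_PER_BIT;

    return !!(packed_dirty[group / BITS_PER_LONG] & (1UL << (group % BITS_PER_LONG)));
}

/*
 * Clear the groups [start, end) -- assumes start < end.  The partial words
 * at the edges are handled with masks; the whole words in between are
 * wiped with a single memset().
 */
static void packed_clear_range(unsigned long start, unsigned long end)
{
    unsigned long first_word = start / BITS_PER_LONG;
    unsigned long last_word  = end / BITS_PER_LONG;

    if (first_word == last_word) {
        packed_dirty[first_word] &= ~((1UL << (end % BITS_PER_LONG)) -
                                      (1UL << (start % BITS_PER_LONG)));
        return;
    }
    /* leading partial word: keep only the bits below 'start' */
    packed_dirty[first_word] &= (1UL << (start % BITS_PER_LONG)) - 1;
    /* interior words: every group is in range, so a plain memset() */
    if (last_word > first_word + 1) {
        memset(&packed_dirty[first_word + 1], 0,
               (last_word - first_word - 1) * sizeof(unsigned long));
    }
    /* trailing partial word: clear the bits below 'end' */
    if (end % BITS_PER_LONG) {
        packed_dirty[last_word] &= ~((1UL << (end % BITS_PER_LONG)) - 1);
    }
}

With this layout a mostly-clean (or mostly-dirty) bitmap can be skipped one
group, or one whole word of groups, at a time.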
>> By the way, this is about closing the gap between kvm's and qemu's dirty
>> bitmap management. Do you think we should set a bit when qemu's
>> phys_ram_dirty is 0xff, or when it is non-zero?
>>
>> More radically, if we had a bit-based phys_ram_dirty_by_word, we could
>> simply OR kvm's dirty bitmap into qemu's in kvm_get_dirty_pages_log_range()...
>
> The problem is that qemu uses the dirty information for at least
> three different purposes: live migration, vga updates, and tcg
> self-modifying code. But I think that's solvable: keep a separate bitmap
> for each purpose, and OR the kvm bitmap into every in-use qemu bitmap
> whenever we get it from the kernel.
>
> That has many advantages; foremost, when vnc is not connected and we
> aren't live migrating, we can drop all of the bitmaps and save some
> memory. If you can make that work I think that's best.
We would be happy to do it. We were also considering that approach
originally, but hesitated because the changes might be invasive. I hope
this plan is fine for upstream qemu too.
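
For discussion, here is a rough sketch of how we picture the per-client
bitmaps. Again, this is only our assumption, not the real qemu API;
dirty_client_enable(), dirty_sync_from_kvm() and the enum names are
invented. Each user of dirty logging keeps its own bitmap, allocated only
while that user is active, and the bitmap fetched from the kernel with
KVM_GET_DIRTY_LOG is OR-ed into every active one.

/*
 * Sketch only: one dirty bitmap per client (migration, vga, tcg code),
 * allocated on demand.  Names are made up for illustration.
 */
#include <stdlib.h>

enum dirty_client {
    DIRTY_MIGRATION,
    DIRTY_VGA,
    DIRTY_CODE,
    DIRTY_CLIENT_MAX
};

static unsigned long *client_bitmap[DIRTY_CLIENT_MAX]; /* NULL => client inactive */
static size_t bitmap_longs;                            /* words per bitmap */

/* Allocate a bitmap when a client (e.g. live migration) starts.
 * Guest RAM size is assumed fixed, so all clients share bitmap_longs. */
static int dirty_client_enable(enum dirty_client c, size_t nb_pages)
{
    bitmap_longs = (nb_pages + 8 * sizeof(unsigned long) - 1) /
                   (8 * sizeof(unsigned long));
    client_bitmap[c] = calloc(bitmap_longs, sizeof(unsigned long));
    return client_bitmap[c] ? 0 : -1;
}

/* Free the bitmap when the client stops (e.g. vnc disconnects). */
static void dirty_client_disable(enum dirty_client c)
{
    free(client_bitmap[c]);
    client_bitmap[c] = NULL;
}

/*
 * Called with the bitmap just fetched from the kernel: OR it into every
 * active client bitmap, so each client sees every page dirtied since its
 * own last sync.
 */
static void dirty_sync_from_kvm(const unsigned long *kvm_bitmap)
{
    int c;
    size_t i;

    for (c = 0; c < DIRTY_CLIENT_MAX; c++) {
        if (!client_bitmap[c]) {
            continue;               /* inactive client: no memory, no work */
        }
        for (i = 0; i < bitmap_longs; i++) {
            client_bitmap[c][i] |= kvm_bitmap[i];
        }
    }
}

When vnc is not connected and we are not live migrating, no bitmap exists
at all, which gives the memory saving you mention.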
Thread overview: 5+ messages
2010-02-08 10:22 [Qemu-devel] [PATCH 1/3] qemu-kvm: Wrap phys_ram_dirty with additional inline functions OHMURA Kei
2010-02-11 12:42 ` [Qemu-devel] " Avi Kivity
2010-02-12 2:08 ` OHMURA Kei
2010-02-13 7:33 ` Avi Kivity
2010-02-15 5:05 ` Yoshiaki Tamura [this message]