From: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
To: quintela@redhat.com
Cc: xuquan8@huawei.com, qemu-devel@nongnu.org, dgilbert@redhat.com,
	zhangchen.fnst@cn.fujitsu.com
Subject: Re: [Qemu-devel] [PATCH RESEND v2 08/18] ram/COLO: Record the dirty pages that SVM received
Date: Tue, 25 Apr 2017 19:19:03 +0800
Message-ID: <58FF30A7.2020706@huawei.com>
In-Reply-To: <87lgqpoe0d.fsf@secure.mitica>

On 2017/4/25 2:29, Juan Quintela wrote:
> zhanghailiang <zhang.zhanghailiang@huawei.com> wrote:
>> We record the addresses of the dirty pages as they are received;
>> this will help when flushing the cached pages into the SVM.
>>
>> The trick here is that we record dirty pages by re-using the
>> migration dirty bitmap. A later patch starts dirty logging for
>> the SVM, just as migration does; this way we can record the dirty
>> pages caused by both the PVM and the SVM, and flush only those
>> dirty pages from the RAM cache when taking a checkpoint.
>>
>> Cc: Juan Quintela <quintela@redhat.com>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> ---
>>   migration/ram.c | 29 +++++++++++++++++++++++++++++
>>   1 file changed, 29 insertions(+)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 05d1b06..0653a24 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -2268,6 +2268,9 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
>>   static inline void *colo_cache_from_block_offset(RAMBlock *block,
>>                                                    ram_addr_t offset)
>>   {
>> +    unsigned long *bitmap;
>> +    long k;
>> +
>>       if (!offset_in_ramblock(block, offset)) {
>>           return NULL;
>>       }
>> @@ -2276,6 +2279,17 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
>>                        __func__, block->idstr);
>>           return NULL;
>>       }
>> +
>> +    k = (memory_region_get_ram_addr(block->mr) + offset) >> TARGET_PAGE_BITS;
>> +    bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
>> +    /*
>> +     * During a COLO checkpoint, we need the bitmap of these migrated
>> +     * pages. It helps us decide which pages in the RAM cache should
>> +     * be flushed into the VM's RAM later.
>> +     */
>> +    if (!test_and_set_bit(k, bitmap)) {
>> +        ram_state.migration_dirty_pages++;
>> +    }
>>       return block->colo_cache + offset;
>>   }
>>   
>> @@ -2752,6 +2766,15 @@ int colo_init_ram_cache(void)
>>           memcpy(block->colo_cache, block->host, block->used_length);
>>       }
>>       rcu_read_unlock();
>> +    /*
>> +     * Record the dirty pages sent by the PVM; we use this dirty bitmap,
>> +     * together with the SVM's dirty log, to decide which pages in the
>> +     * cache should be flushed into the SVM's RAM. Here we use the same
>> +     * name 'ram_bitmap' as for migration.
>> +     */
>> +    ram_state.ram_bitmap = g_new0(RAMBitmap, 1);
>> +    ram_state.ram_bitmap->bmap = bitmap_new(last_ram_page());
>> +    ram_state.migration_dirty_pages = 0;
>> +
>>       return 0;
>>   
>>   out_locked:
>> @@ -2770,6 +2793,12 @@ out_locked:
>>   void colo_release_ram_cache(void)
>>   {
>>       RAMBlock *block;
>> +    RAMBitmap *bitmap = ram_state.ram_bitmap;
>> +
>> +    atomic_rcu_set(&ram_state.ram_bitmap, NULL);
>> +    if (bitmap) {
>> +        call_rcu(bitmap, migration_bitmap_free, rcu);
>> +    }
>>   
>>       rcu_read_lock();
>>       QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
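
(For orientation: the bitmap recorded above is consumed at checkpoint
time, and the actual flush arrives in a later patch of this series. A
minimal sketch of that idea, reusing only the data structures visible
in this patch; the iteration details and locking are assumptions, not
the final implementation:

static void colo_flush_ram_cache_sketch(void)
{
    unsigned long *bitmap = atomic_rcu_read(&ram_state.ram_bitmap)->bmap;
    RAMBlock *block;

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        unsigned long first = memory_region_get_ram_addr(block->mr)
                              >> TARGET_PAGE_BITS;
        unsigned long last = first + (block->used_length >> TARGET_PAGE_BITS);
        unsigned long k = first;

        /* Visit only the pages recorded by colo_cache_from_block_offset(). */
        while ((k = find_next_bit(bitmap, last, k)) < last) {
            ram_addr_t offset = (ram_addr_t)(k - first) << TARGET_PAGE_BITS;

            /* Flush the cached copy into the SVM's actual RAM. */
            memcpy(block->host + offset, block->colo_cache + offset,
                   TARGET_PAGE_SIZE);
            clear_bit(k, bitmap);
            ram_state.migration_dirty_pages--;
            k++;
        }
    }
    rcu_read_unlock();
}
)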
> You can look at my split bitmap patches; I am splitting the dirty
> bitmap per RAMBlock. I think it shouldn't make your life more
> difficult, but please take a look.

OK, I'll look at it.
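
(With a per-RAMBlock bitmap as in that series, the recording step in
colo_cache_from_block_offset() could index pages relative to the block,
dropping the global ram_addr_t arithmetic. A minimal sketch, assuming a
'bmap' field on RAMBlock; the names are illustrative, not taken from
the split-bitmap series:

/*
 * Illustrative only: with a block-local bitmap, the page index is
 * simply the offset within the block, so there is no need to call
 * memory_region_get_ram_addr().
 */
static inline void colo_record_dirty_page(RAMBlock *block, ram_addr_t offset)
{
    unsigned long page = offset >> TARGET_PAGE_BITS;

    if (!test_and_set_bit(page, block->bmap)) {
        ram_state.migration_dirty_pages++;
    }
}
)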

> I am wondering whether it would be faster/easier to use the
> page_cache.c that xbzrle uses to store the dirty pages, instead of
> copying the whole RAMBlocks, but I don't really know.

Hmm, yes, it takes a long time (depending on the VM's memory size) to back up
the whole VM's memory into the cache. We can reduce that time by backing up
pages one by one as they are loaded during the first live-migration round,
because we know whether the user has enabled COLO at the beginning of the
first migration stage. I'd like to send that optimization later in another
series...
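
(A minimal sketch of that optimization, assuming the incoming side can
tell that COLO is enabled when the first round starts; the helper name
and the predicate are hypothetical, not part of this series:

/*
 * Hypothetical helper, called from the incoming page-load path during
 * the first migration round: back up each page right after it is
 * loaded, so colo_init_ram_cache() no longer has to memcpy() every
 * RAMBlock in one large pass.
 */
static void colo_backup_incoming_page(RAMBlock *block, ram_addr_t offset)
{
    if (migration_incoming_colo_enabled()) { /* assumed predicate */
        memcpy(block->colo_cache + offset, block->host + offset,
               TARGET_PAGE_SIZE);
    }
}
)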

Thanks,
Hailiang

>
> Thanks, Juan.


Thread overview: 28+ messages
2017-04-22  8:35 [Qemu-devel] [PATCH RESEND v2 00/18] COLO: integrate colo frame with block replication and net compare zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 01/18] net/colo: Add notifier/callback related helpers for filter zhanghailiang
2017-04-25 11:40   ` Jason Wang
2017-04-26  8:14     ` Hailiang Zhang
2017-04-26  9:14       ` Jason Wang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 02/18] colo-compare: implement the process of checkpoint zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 03/18] colo-compare: use notifier to notify packets comparing result zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 04/18] COLO: integrate colo compare with colo frame zhanghailiang
2017-04-24 18:18   ` Juan Quintela
2017-04-25 11:03     ` Hailiang Zhang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 05/18] COLO: Handle shutdown command for VM in COLO state zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 06/18] COLO: Add block replication into colo process zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 07/18] COLO: Load dirty pages into SVM's RAM cache firstly zhanghailiang
2017-04-24 18:27   ` Juan Quintela
2017-04-25 11:06     ` Hailiang Zhang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 08/18] ram/COLO: Record the dirty pages that SVM received zhanghailiang
2017-04-24 18:29   ` Juan Quintela
2017-04-25 11:19     ` Hailiang Zhang [this message]
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 09/18] COLO: Flush memory data from ram cache zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 10/18] qmp event: Add COLO_EXIT event to notify users while exited COLO zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 11/18] savevm: split save/find loadvm_handlers entry into two helper functions zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 12/18] savevm: split the process of different stages for loadvm/savevm zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 13/18] COLO: Separate the process of saving/loading ram and device state zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 14/18] COLO: Split qemu_savevm_state_begin out of checkpoint process zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 15/18] COLO: flush host dirty ram from cache zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 16/18] filter: Add handle_event method for NetFilterClass zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 17/18] filter-rewriter: handle checkpoint and failover event zhanghailiang
2017-04-22  8:35 ` [Qemu-devel] [PATCH RESEND v2 18/18] COLO: notify net filters about checkpoint/failover event zhanghailiang
