From: Xiao Guangrong
Subject: Re: [PATCH v2 6/8] migration: move handle of zero page to the thread
Date: Mon, 23 Jul 2018 15:56:33 +0800
To: Peter Xu
Cc: kvm@vger.kernel.org, mst@redhat.com, mtosatti@redhat.com,
    Xiao Guangrong, dgilbert@redhat.com, qemu-devel@nongnu.org,
    wei.w.wang@intel.com, jiang.biao2@zte.com.cn, pbonzini@redhat.com

On 07/23/2018 01:03 PM, Peter Xu wrote:
> On Thu, Jul 19, 2018 at 08:15:18PM +0800, guangrong.xiao@gmail.com wrote:
>
> [...]
>
>> @@ -1950,12 +1971,16 @@ retry:
>>              set_compress_params(&comp_param[idx], block, offset);
>>              qemu_cond_signal(&comp_param[idx].cond);
>>              qemu_mutex_unlock(&comp_param[idx].mutex);
>> -            pages = 1;
>> -            /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
>> -            compression_counters.reduced_size += TARGET_PAGE_SIZE -
>> -                                                 bytes_xmit + 8;
>> -            compression_counters.pages++;
>>              ram_counters.transferred += bytes_xmit;
>> +            pages = 1;
>
> (moving of this line seems irrelevant; meanwhile there is more
> duplicated code, so even better to have a helper now)

Good to me. :)

>> +            if (comp_param[idx].zero_page) {
>> +                ram_counters.duplicate++;
>> +            } else {
>> +                /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
>> +                compression_counters.reduced_size += TARGET_PAGE_SIZE -
>> +                                                     bytes_xmit + 8;
>> +                compression_counters.pages++;
>> +            }
>>              break;
>>          }
>>      }
>
> [...]
>
>> @@ -2249,15 +2308,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>>          return res;
>>      }
>>
>> -    /*
>> -     * When starting the process of a new block, the first page of
>> -     * the block should be sent out before other pages in the same
>> -     * block, and all the pages in last block should have been sent
>> -     * out, keeping this order is important, because the 'cont' flag
>> -     * is used to avoid resending the block name.
>> -     */
>> -    if (block != rs->last_sent_block && save_page_use_compression(rs)) {
>> -        flush_compressed_data(rs);
>> +    if (save_compress_page(rs, block, offset)) {
>> +        return 1;
>
> It's a bit tricky (though it seems to be a good idea too) to move the
> zero detect into the compression thread, though I noticed that we also
> do something else for zero pages:
>
>     res = save_zero_page(rs, block, offset);
>     if (res > 0) {
>         /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>          * page would be stale
>          */
>         if (!save_page_use_compression(rs)) {
>             XBZRLE_cache_lock();
>             xbzrle_cache_zero_page(rs, block->offset + offset);
>             XBZRLE_cache_unlock();
>         }
>         ram_release_pages(block->idstr, offset, res);
>         return res;
>     }
>
> I'd guess that the xbzrle update of the zero page is not needed for
> compression since after all xbzrle is not enabled when compression is

Yup. If they are both enabled, compression works only for the first
iteration (i.e., ram_bulk_stage); at that point, nothing is cached in
xbzrle's cache yet, in other words, xbzrle has sent nothing to the
destination.
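For reference, that behavior comes from the gating logic in
save_page_use_compression(); roughly this (paraphrased from ram.c, the
exact code may differ slightly):

    static bool save_page_use_compression(RAMState *rs)
    {
        if (!migrate_use_compression()) {
            return false;
        }

        /*
         * If xbzrle is on, stop using compression after the first
         * round of migration even if compression is enabled; in
         * theory, xbzrle can do better than compression.
         */
        if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
            return true;
        }

        return false;
    }

So once the bulk stage is over, the compression path is skipped
entirely whenever xbzrle is enabled.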
> enabled, however do we need to do ram_release_pages() somehow?

We have done it in the thread:

+static bool do_compress_ram_page(QEMUFile *f, z_stream *stream,
+                                 RAMBlock *block, ram_addr_t offset,
+                                 uint8_t *source_buf)
 {
+    if (save_zero_page_to_file(rs, f, block, offset)) {
+        zero_page = true;
+        goto exit;
+    }

......

+exit:
     ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1);
+    return zero_page;
 }

However, it is not safe to call ram_release_pages() in the compression
thread, as it is not protected against concurrent access from multiple
threads. Fortunately, compression is disabled if migration switches to
post-copy, so I prefer to keep the current behavior and defer the fix
until after this patchset has been merged.
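(To illustrate one possible shape of that later fix, purely as a
hypothetical sketch: release_mutex and ram_release_page_locked() are
invented names, not part of this series. The simplest interim option
would be to serialize the release behind a dedicated lock, initialized
with qemu_mutex_init() during migration setup:

    /* Hypothetical: a lock serializing calls to ram_release_pages(). */
    static QemuMutex release_mutex;

    /* Hypothetical helper for use by the compression threads. */
    static void ram_release_page_locked(const char *rbname,
                                        ram_addr_t offset)
    {
        qemu_mutex_lock(&release_mutex);
        ram_release_pages(rbname, offset & TARGET_PAGE_MASK, 1);
        qemu_mutex_unlock(&release_mutex);
    }

Whether the extra lock on this hot path is worth it is part of why the
fix is better done on top of this series.)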