From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Wei Yang <richardw.yang@linux.intel.com>
Cc: qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [Qemu-devel] [PATCH] migration/ram.c: fix typos in comments
Date: Tue, 14 May 2019 17:16:09 +0100
Message-ID: <20190514161608.GP2753@work-vm>
In-Reply-To: <20190510233729.15554-1-richardw.yang@linux.intel.com>
* Wei Yang (richardw.yang@linux.intel.com) wrote:
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> ---
> migration/ram.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
and queued.
> diff --git a/migration/ram.c b/migration/ram.c
> index 1def8122e9..720c2b73ca 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -888,7 +888,7 @@ struct {
> * - to make easier to know what to free at the end of migration
> *
> * This way we always know who is the owner of each "pages" struct,
> - * and we don't need any loocking. It belongs to the migration thread
> + * and we don't need any locking. It belongs to the migration thread
> * or to the channel thread. Switching is safe because the migration
> * thread is using the channel mutex when changing it, and the channel
> * have to had finish with its own, otherwise pending_job can't be
> @@ -1594,7 +1594,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
> *
> * Called with rcu_read_lock() to protect migration_bitmap
> *
> - * Returns the byte offset within memory region of the start of a dirty page
> + * Returns the page offset within memory region of the start of a dirty page
> *
> * @rs: current RAM state
> * @rb: RAMBlock where to search for dirty pages
> @@ -2108,7 +2108,7 @@ retry:
> * find_dirty_block: find the next dirty page and update any state
> * associated with the search process.
> *
> - * Returns if a page is found
> + * Returns true if a page is found
> *
> * @rs: current RAM state
> * @pss: data about the state of the current dirty page scan
> @@ -2204,7 +2204,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
> *
> * Skips pages that are already sent (!dirty)
> *
> - * Returns if a queued page is found
> + * Returns true if a queued page is found
> *
> * @rs: current RAM state
> * @pss: data about the state of the current dirty page scan
> @@ -3411,7 +3411,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>
> /* we want to check in the 1st loop, just in case it was the 1st time
> and we had to sync the dirty bitmap.
> - qemu_get_clock_ns() is a bit expensive, so we only check each some
> + qemu_clock_get_ns() is a bit expensive, so we only check each some
> iterations
> */
> if ((i & 63) == 0) {
> --
> 2.19.1
>
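
The last hunk's comment explains the pattern around the `(i & 63) == 0` check: calling the clock function is relatively expensive, so the loop only consults it once every 64 iterations, using a cheap bit mask instead of a modulo. A minimal standalone sketch of that masking pattern (the function and counter names are illustrative, not QEMU identifiers):

```c
/* Rate-limiting sketch: count how often a hot loop of `iterations`
 * steps would consult the clock if it checks only when the low six
 * bits of the index are zero, i.e. every 64th iteration. */
static int count_clock_checks(int iterations)
{
    int checks = 0;
    for (int i = 0; i < iterations; i++) {
        if ((i & 63) == 0) {   /* true exactly when i is a multiple of 64 */
            checks++;          /* stands in for the expensive clock call */
        }
    }
    return checks;
}
```

For a loop of 64 iterations this checks the clock once (at i == 0); a 65th iteration triggers a second check at i == 64, so the cost of the clock call is amortized over the batch.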
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 2+ messages
2019-05-10 23:37 [Qemu-devel] [PATCH] migration/ram.c: fix typos in comments Wei Yang
2019-05-14 16:16 ` Dr. David Alan Gilbert [this message]