From mboxrd@z Thu Jan 1 00:00:00 1970
Sender: Paolo Bonzini
From: Paolo Bonzini
Date: Mon, 8 Feb 2016 18:02:52 +0100
Message-Id: <1454950999-64128-2-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1454950999-64128-1-git-send-email-pbonzini@redhat.com>
References: <1454950999-64128-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PULL 01/28] memory: add early bail out from cpu_physical_memory_set_dirty_range
To: qemu-devel@nongnu.org

This condition is true in the common case, so we can cut out the body of
the function.  In addition, this makes it easier for the compiler to do
at least partial inlining, even if it decides that fully inlining the
function is unreasonable.

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
 include/exec/ram_addr.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 606e277..f2e872d 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -165,6 +165,10 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
     unsigned long end, page;
     unsigned long **d = ram_list.dirty_memory;
 
+    if (!mask && !xen_enabled()) {
+        return;
+    }
+
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
     if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
-- 
1.8.3.1
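
[Editor's note: the following is an illustrative sketch, not part of the patch.]

To see why an early bail-out helps with partial inlining, consider a minimal
stand-alone C sketch of the same pattern.  The names set_dirty_range() and
set_dirty_range_slow() are hypothetical stand-ins, and the real function also
checks xen_enabled(), which is omitted here.  When a cheap guard is the first
statement of a small inline function, the compiler can inline just that
comparison at each call site and keep the heavy bitmap-update code out of line.

#include <stdint.h>

/* Out-of-line "slow path": stands in for the real bitmap-update body. */
void set_dirty_range_slow(uint64_t start, uint64_t length, uint8_t mask);

static inline void set_dirty_range(uint64_t start, uint64_t length,
                                   uint8_t mask)
{
    /* Common case: no dirty bits requested, so there is nothing to do.
     * Because this test comes first, the compiler can inline at least
     * this comparison even if it keeps the rest of the work out of line. */
    if (!mask) {
        return;
    }
    set_dirty_range_slow(start, length, mask);
}

A call such as set_dirty_range(addr, len, 0) then reduces to a single
test-and-branch at the call site, which is the effect the commit message
describes for cpu_physical_memory_set_dirty_range().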