From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin", Nicholas Piggin, Elena Ufimtseva, qemu-arm@nongnu.org,
	Jagannathan Raman, David Hildenbrand, Philippe Mathieu-Daudé,
	Halil Pasic, Eric Farman, Thomas Huth, Matthew Rosato, Jason Herne,
	Cédric Le Goater, kvm@vger.kernel.org, Christian Borntraeger,
	Ilya Leoshkevich, Peter Maydell, qemu-ppc@nongnu.org,
	Harsh Prateek Bora, Fabiano Rosas, Richard Henderson,
	Alex Williamson, Paolo Bonzini, qemu-s390x@nongnu.org, Peter Xu
Subject: [PATCH 23/25] system/physmem: Reduce cpu_physical_memory_sync_dirty_bitmap() scope
Date: Wed, 1 Oct 2025 10:21:23 +0200
Message-ID: <20251001082127.65741-24-philmd@linaro.org>
In-Reply-To: <20251001082127.65741-1-philmd@linaro.org>
References: <20251001082127.65741-1-philmd@linaro.org>

cpu_physical_memory_sync_dirty_bitmap() is now only called within
migration/ram.c, by ramblock_sync_dirty_bitmap(). Reduce its scope by
making it internal to that file. Since it doesn't involve any CPU,
remove the 'cpu_' prefix.

Remove the now unneeded "qemu/rcu.h" and "system/memory.h" headers.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/system/ram_addr.h | 79 ---------------------------------------
 migration/ram.c           | 78 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 77 insertions(+), 80 deletions(-)

diff --git a/include/system/ram_addr.h b/include/system/ram_addr.h
index c55e3849b0e..24dba1fba1c 100644
--- a/include/system/ram_addr.h
+++ b/include/system/ram_addr.h
@@ -21,10 +21,7 @@
 
 #include "exec/ramlist.h"
 #include "system/ramblock.h"
-#include "system/memory.h"
 #include "exec/target_page.h"
-#include "qemu/rcu.h"
-
 #include "exec/hwaddr.h"
 
 extern uint64_t total_dirty_pages;
@@ -177,80 +174,4 @@ bool cpu_physical_memory_snapshot_get_dirty(DirtyBitmapSnapshot *snap,
                                             ram_addr_t start,
                                             ram_addr_t length);
 
-/* Called with RCU critical section */
-static inline
-uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
-                                               ram_addr_t start,
-                                               ram_addr_t length)
-{
-    ram_addr_t addr;
-    unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
-    uint64_t num_dirty = 0;
-    unsigned long *dest = rb->bmap;
-
-    /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
-        (start + rb->offset) &&
-        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
-        int k;
-        int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
-        unsigned long * const *src;
-        unsigned long idx = (word * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
-        unsigned long offset = BIT_WORD((word * BITS_PER_LONG) %
-                                        DIRTY_MEMORY_BLOCK_SIZE);
-        unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
-
-        src = qatomic_rcu_read(
-                &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks;
-
-        for (k = page; k < page + nr; k++) {
-            if (src[idx][offset]) {
-                unsigned long bits = qatomic_xchg(&src[idx][offset], 0);
-                unsigned long new_dirty;
-                new_dirty = ~dest[k];
-                dest[k] |= bits;
-                new_dirty &= bits;
-                num_dirty += ctpopl(new_dirty);
-            }
-
-            if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
-                offset = 0;
-                idx++;
-            }
-        }
-        if (num_dirty) {
-            cpu_physical_memory_dirty_bits_cleared(start, length);
-        }
-
-        if (rb->clear_bmap) {
-            /*
-             * Postpone the dirty bitmap clear to the point before we
-             * really send the pages, also we will split the clear
-             * dirty procedure into smaller chunks.
-             */
-            clear_bmap_set(rb, start >> TARGET_PAGE_BITS,
-                           length >> TARGET_PAGE_BITS);
-        } else {
-            /* Slow path - still do that in a huge chunk */
-            memory_region_clear_dirty_bitmap(rb->mr, start, length);
-        }
-    } else {
-        ram_addr_t offset = rb->offset;
-
-        for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
-            if (cpu_physical_memory_test_and_clear_dirty(
-                        start + addr + offset,
-                        TARGET_PAGE_SIZE,
-                        DIRTY_MEMORY_MIGRATION)) {
-                long k = (start + addr) >> TARGET_PAGE_BITS;
-                if (!test_and_set_bit(k, dest)) {
-                    num_dirty++;
-                }
-            }
-        }
-    }
-
-    return num_dirty;
-}
-
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 91e65be83d8..e797c0710d3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -935,11 +935,87 @@ bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start)
     return false;
 }
 
+/* Called with RCU critical section */
+static inline
+uint64_t physical_memory_sync_dirty_bitmap(RAMBlock *rb,
+                                           ram_addr_t start,
+                                           ram_addr_t length)
+{
+    ram_addr_t addr;
+    unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
+    uint64_t num_dirty = 0;
+    unsigned long *dest = rb->bmap;
+
+    /* start address and length is aligned at the start of a word? */
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
+        (start + rb->offset) &&
+        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
+        int k;
+        int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
+        unsigned long * const *src;
+        unsigned long idx = (word * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
+        unsigned long offset = BIT_WORD((word * BITS_PER_LONG) %
+                                        DIRTY_MEMORY_BLOCK_SIZE);
+        unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
+
+        src = qatomic_rcu_read(
+                &ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION])->blocks;
+
+        for (k = page; k < page + nr; k++) {
+            if (src[idx][offset]) {
+                unsigned long bits = qatomic_xchg(&src[idx][offset], 0);
+                unsigned long new_dirty;
+                new_dirty = ~dest[k];
+                dest[k] |= bits;
+                new_dirty &= bits;
+                num_dirty += ctpopl(new_dirty);
+            }
+
+            if (++offset >= BITS_TO_LONGS(DIRTY_MEMORY_BLOCK_SIZE)) {
+                offset = 0;
+                idx++;
+            }
+        }
+        if (num_dirty) {
+            cpu_physical_memory_dirty_bits_cleared(start, length);
+        }
+
+        if (rb->clear_bmap) {
+            /*
+             * Postpone the dirty bitmap clear to the point before we
+             * really send the pages, also we will split the clear
+             * dirty procedure into smaller chunks.
+             */
+            clear_bmap_set(rb, start >> TARGET_PAGE_BITS,
+                           length >> TARGET_PAGE_BITS);
+        } else {
+            /* Slow path - still do that in a huge chunk */
+            memory_region_clear_dirty_bitmap(rb->mr, start, length);
+        }
+    } else {
+        ram_addr_t offset = rb->offset;
+
+        for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
+            if (cpu_physical_memory_test_and_clear_dirty(
+                        start + addr + offset,
+                        TARGET_PAGE_SIZE,
+                        DIRTY_MEMORY_MIGRATION)) {
+                long k = (start + addr) >> TARGET_PAGE_BITS;
+                if (!test_and_set_bit(k, dest)) {
+                    num_dirty++;
+                }
+            }
+        }
+    }
+
+    return num_dirty;
+}
+
 /* Called with RCU critical section */
 static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
 {
     uint64_t new_dirty_pages =
-        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
+        physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
 
     rs->migration_dirty_pages += new_dirty_pages;
     rs->num_dirty_pages_period += new_dirty_pages;
-- 
2.51.0
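
[Editor's note] For readers tracing the moved function: the word-alignment
test at its top only admits regions that start on a dirty-bitmap word
boundary and span a whole number of bitmap words; everything else falls
through to the page-at-a-time slow path. Below is a minimal standalone
sketch of that eligibility check, not part of the patch. The concrete
TARGET_PAGE_BITS and BITS_PER_LONG values are assumptions (4 KiB target
pages on a 64-bit host); in QEMU they come from exec/target_page.h and
the host ABI.

/*
 * Standalone sketch (assumed values, see note above) of the fast-path
 * eligibility test in physical_memory_sync_dirty_bitmap().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12   /* assumed: 4 KiB target pages */
#define BITS_PER_LONG    64   /* assumed: 64-bit host */
#define BIT_WORD(nr)     ((nr) / BITS_PER_LONG)

typedef uint64_t ram_addr_t;

/*
 * One bitmap word covers BITS_PER_LONG pages, so the fast path needs
 * the region to start on a word boundary of the dirty bitmap and to
 * be a whole number of words long.
 */
static bool sync_fast_path_ok(ram_addr_t start, ram_addr_t rb_offset,
                              ram_addr_t length)
{
    unsigned long word = BIT_WORD((start + rb_offset) >> TARGET_PAGE_BITS);

    return (((ram_addr_t)word * BITS_PER_LONG) << TARGET_PAGE_BITS)
               == (start + rb_offset) &&
           !(length & (((ram_addr_t)BITS_PER_LONG << TARGET_PAGE_BITS) - 1));
}

int main(void)
{
    printf("%d\n", sync_fast_path_ok(0, 0, 256 * 1024));    /* 1: aligned  */
    printf("%d\n", sync_fast_path_ok(4096, 0, 256 * 1024)); /* 0: start off a word */
    printf("%d\n", sync_fast_path_ok(0, 0, 260 * 1024));    /* 0: ragged length */
    return 0;
}

With these assumed values, the fast path operates at a 256 KiB
granularity (64 pages per bitmap word), which is why the slow path is
still needed for ragged starts and lengths.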