qemu-devel.nongnu.org archive mirror
* [PATCH v2 0/2] hw/vfio: Improve vfio_get_dirty_bitmap() tracepoint
@ 2023-05-25 11:43 Joao Martins
  2023-05-25 11:43 ` [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap() Joao Martins
  2023-05-25 11:43 ` [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint Joao Martins
  0 siblings, 2 replies; 9+ messages in thread
From: Joao Martins @ 2023-05-25 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Williamson, Cedric Le Goater, Paolo Bonzini, Peter Xu,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon,
	Joao Martins

Hey,

This tiny series changes the vfio_get_dirty_bitmap() tracepoint to
include the number of dirty pages. I find it useful in general for
observability to understand the number of dirty pages in an IOVA range.
With dirty tracking supported by the device or IOMMU, it is especially
relevant data to include in the tracepoint.

The first patch changes the helper function that sets dirty bits to
return the number of dirty pages, and the second patch expands the
VFIO tracepoint to include that count.

Thanks,
	Joao

Changes since v1[1]:
* Make the nr of dirty pages a retval similar to sync variant of helper in
  patch 1 (Cedric)
* Stash number of bits set in bitmap quad in a variable and reuse in
  GLOBAL_DIRTY_DIRTY_RATE in patch 1
* Drop init to 0 given that we always initialize the @dirty used in the
  tracepoint

[1] https://lore.kernel.org/qemu-devel/20230523151217.46427-1-joao.m.martins@oracle.com/

Joao Martins (2):
  exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint

 hw/vfio/common.c        |  7 ++++---
 hw/vfio/trace-events    |  2 +-
 include/exec/ram_addr.h | 21 +++++++++++++++------
 3 files changed, 20 insertions(+), 10 deletions(-)

-- 
2.31.1



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  2023-05-25 11:43 [PATCH v2 0/2] hw/vfio: Improve vfio_get_dirty_bitmap() tracepoint Joao Martins
@ 2023-05-25 11:43 ` Joao Martins
  2023-05-25 13:16   ` Peter Xu
  2023-05-25 11:43 ` [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint Joao Martins
  1 sibling, 1 reply; 9+ messages in thread
From: Joao Martins @ 2023-05-25 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Williamson, Cedric Le Goater, Paolo Bonzini, Peter Xu,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon,
	Joao Martins

In preparation for including the number of dirty pages in the
vfio_get_dirty_bitmap() tracepoint, return the number of dirty pages in
cpu_physical_memory_set_dirty_lebitmap() similar to
cpu_physical_memory_sync_dirty_bitmap().

To avoid counting twice when GLOBAL_DIRTY_DIRTY_RATE is enabled, stash
the number of bits set per bitmap quad in a variable (@nbits) and reuse
it there.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
 include/exec/ram_addr.h | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index f4fb6a211175..8b8f271d0731 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -333,14 +333,16 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
 }
 
 #if !defined(_WIN32)
-static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
-                                                          ram_addr_t start,
-                                                          ram_addr_t pages)
+static inline
+uint64_t cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
+                                                ram_addr_t start,
+                                                ram_addr_t pages)
 {
     unsigned long i, j;
-    unsigned long page_number, c;
+    unsigned long page_number, c, nbits;
     hwaddr addr;
     ram_addr_t ram_addr;
+    uint64_t num_dirty = 0;
     unsigned long len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
     unsigned long hpratio = qemu_real_host_page_size() / TARGET_PAGE_SIZE;
     unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
@@ -368,6 +370,7 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
                 if (bitmap[k]) {
                     unsigned long temp = leul_to_cpu(bitmap[k]);
 
+                    nbits = ctpopl(temp);
                     qatomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
 
                     if (global_dirty_tracking) {
@@ -376,10 +379,12 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
                                 temp);
                         if (unlikely(
                             global_dirty_tracking & GLOBAL_DIRTY_DIRTY_RATE)) {
-                            total_dirty_pages += ctpopl(temp);
+                            total_dirty_pages += nbits;
                         }
                     }
 
+                    num_dirty += nbits;
+
                     if (tcg_enabled()) {
                         qatomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset],
                                    temp);
@@ -408,9 +413,11 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
         for (i = 0; i < len; i++) {
             if (bitmap[i] != 0) {
                 c = leul_to_cpu(bitmap[i]);
+                nbits = ctpopl(c);
                 if (unlikely(global_dirty_tracking & GLOBAL_DIRTY_DIRTY_RATE)) {
-                    total_dirty_pages += ctpopl(c);
+                    total_dirty_pages += nbits;
                 }
+                num_dirty += nbits;
                 do {
                     j = ctzl(c);
                     c &= ~(1ul << j);
@@ -423,6 +430,8 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
             }
         }
     }
+
+    return num_dirty;
 }
 #endif /* not _WIN32 */
 
-- 
2.31.1




* [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint
  2023-05-25 11:43 [PATCH v2 0/2] hw/vfio: Improve vfio_get_dirty_bitmap() tracepoint Joao Martins
  2023-05-25 11:43 ` [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap() Joao Martins
@ 2023-05-25 11:43 ` Joao Martins
  2023-05-25 11:52   ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 9+ messages in thread
From: Joao Martins @ 2023-05-25 11:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Williamson, Cedric Le Goater, Paolo Bonzini, Peter Xu,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon,
	Joao Martins

Include the number of dirty pages in the vfio_get_dirty_bitmap
tracepoint. The count is fetched from the newly added return value of
cpu_physical_memory_set_dirty_lebitmap().

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
 hw/vfio/common.c     | 7 ++++---
 hw/vfio/trace-events | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 4d01ea351515..3c9af2fed1b1 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1747,6 +1747,7 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
     bool all_device_dirty_tracking =
         vfio_devices_all_device_dirty_tracking(container);
     VFIOBitmap vbmap;
+    uint64_t dirty;
     int ret;
 
     if (!container->dirty_pages_supported && !all_device_dirty_tracking) {
@@ -1771,11 +1772,11 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
         goto out;
     }
 
-    cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
-                                           vbmap.pages);
+    dirty = cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
+                                                   vbmap.pages);
 
     trace_vfio_get_dirty_bitmap(container->fd, iova, size, vbmap.size,
-                                ram_addr);
+                                ram_addr, dirty);
 out:
     g_free(vbmap.bitmap);
 
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 646e42fd27f9..9265a406eda1 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -120,7 +120,7 @@ vfio_region_sparse_mmap_header(const char *name, int index, int nr_areas) "Devic
 vfio_region_sparse_mmap_entry(int i, unsigned long start, unsigned long end) "sparse entry %d [0x%lx - 0x%lx]"
 vfio_get_dev_region(const char *name, int index, uint32_t type, uint32_t subtype) "%s index %d, %08x/%08x"
 vfio_dma_unmap_overflow_workaround(void) ""
-vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
+vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start, uint64_t dirty) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64" dirty=%"PRIu64
 vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
 
 # platform.c
-- 
2.31.1




* Re: [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint
  2023-05-25 11:43 ` [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint Joao Martins
@ 2023-05-25 11:52   ` Philippe Mathieu-Daudé
  2023-05-25 13:36     ` Joao Martins
  0 siblings, 1 reply; 9+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-25 11:52 UTC (permalink / raw)
  To: Joao Martins, qemu-devel
  Cc: Alex Williamson, Cedric Le Goater, Paolo Bonzini, Peter Xu,
	David Hildenbrand, Avihai Horon

Hi Joao,

On 25/5/23 13:43, Joao Martins wrote:
> Include the number of dirty pages in the vfio_get_dirty_bitmap
> tracepoint. The count is fetched from the newly added return value of
> cpu_physical_memory_set_dirty_lebitmap().
> 
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>   hw/vfio/common.c     | 7 ++++---
>   hw/vfio/trace-events | 2 +-
>   2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 4d01ea351515..3c9af2fed1b1 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -1747,6 +1747,7 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
>       bool all_device_dirty_tracking =
>           vfio_devices_all_device_dirty_tracking(container);
>       VFIOBitmap vbmap;
> +    uint64_t dirty;

Could we rename this 'dirty_pages'?

>       int ret;
>   
>       if (!container->dirty_pages_supported && !all_device_dirty_tracking) {
> @@ -1771,11 +1772,11 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
>           goto out;
>       }
>   
> -    cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
> -                                           vbmap.pages);
> +    dirty = cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
> +                                                   vbmap.pages);
>   
>       trace_vfio_get_dirty_bitmap(container->fd, iova, size, vbmap.size,
> -                                ram_addr);
> +                                ram_addr, dirty);
>   out:
>       g_free(vbmap.bitmap);
>   
> diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
> index 646e42fd27f9..9265a406eda1 100644
> --- a/hw/vfio/trace-events
> +++ b/hw/vfio/trace-events
> @@ -120,7 +120,7 @@ vfio_region_sparse_mmap_header(const char *name, int index, int nr_areas) "Devic
>   vfio_region_sparse_mmap_entry(int i, unsigned long start, unsigned long end) "sparse entry %d [0x%lx - 0x%lx]"
>   vfio_get_dev_region(const char *name, int index, uint32_t type, uint32_t subtype) "%s index %d, %08x/%08x"
>   vfio_dma_unmap_overflow_workaround(void) ""
> -vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
> +vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start, uint64_t dirty) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64" dirty=%"PRIu64

Ditto.

>   vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
>   
>   # platform.c




* Re: [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  2023-05-25 11:43 ` [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap() Joao Martins
@ 2023-05-25 13:16   ` Peter Xu
  2023-05-25 13:47     ` Joao Martins
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2023-05-25 13:16 UTC (permalink / raw)
  To: Joao Martins
  Cc: qemu-devel, Alex Williamson, Cedric Le Goater, Paolo Bonzini,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon

On Thu, May 25, 2023 at 12:43:20PM +0100, Joao Martins wrote:
> In preparation for including the number of dirty pages in the
> vfio_get_dirty_bitmap() tracepoint, return the number of dirty pages in
> cpu_physical_memory_set_dirty_lebitmap() similar to
> cpu_physical_memory_sync_dirty_bitmap().

The patch itself looks good to me, but it's slightly different from the
sync version: that one acts only on the MIGRATION bitmap and counts
newly dirtied pages (so it excludes pages that were already dirty in the
MIGRATION bitmap, even if re-dirtied), while this one counts any dirty
bits in *bitmap.

Shall we perhaps state that somewhere explicitly?  A comment on the
return value above the function might be suitable.

Thanks,

> 
> To avoid counting twice when GLOBAL_DIRTY_DIRTY_RATE is enabled, stash
> the number of bits set per bitmap quad in a variable (@nbits) and reuse
> it there.
> 
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  include/exec/ram_addr.h | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index f4fb6a211175..8b8f271d0731 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -333,14 +333,16 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>  }
>  
>  #if !defined(_WIN32)
> -static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
> -                                                          ram_addr_t start,
> -                                                          ram_addr_t pages)
> +static inline
> +uint64_t cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
> +                                                ram_addr_t start,
> +                                                ram_addr_t pages)
>  {
>      unsigned long i, j;
> -    unsigned long page_number, c;
> +    unsigned long page_number, c, nbits;
>      hwaddr addr;
>      ram_addr_t ram_addr;
> +    uint64_t num_dirty = 0;
>      unsigned long len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
>      unsigned long hpratio = qemu_real_host_page_size() / TARGET_PAGE_SIZE;
>      unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
> @@ -368,6 +370,7 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>                  if (bitmap[k]) {
>                      unsigned long temp = leul_to_cpu(bitmap[k]);
>  
> +                    nbits = ctpopl(temp);
>                      qatomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
>  
>                      if (global_dirty_tracking) {
> @@ -376,10 +379,12 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>                                  temp);
>                          if (unlikely(
>                              global_dirty_tracking & GLOBAL_DIRTY_DIRTY_RATE)) {
> -                            total_dirty_pages += ctpopl(temp);
> +                            total_dirty_pages += nbits;
>                          }
>                      }
>  
> +                    num_dirty += nbits;
> +
>                      if (tcg_enabled()) {
>                          qatomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset],
>                                     temp);
> @@ -408,9 +413,11 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>          for (i = 0; i < len; i++) {
>              if (bitmap[i] != 0) {
>                  c = leul_to_cpu(bitmap[i]);
> +                nbits = ctpopl(c);
>                  if (unlikely(global_dirty_tracking & GLOBAL_DIRTY_DIRTY_RATE)) {
> -                    total_dirty_pages += ctpopl(c);
> +                    total_dirty_pages += nbits;
>                  }
> +                num_dirty += nbits;
>                  do {
>                      j = ctzl(c);
>                      c &= ~(1ul << j);
> @@ -423,6 +430,8 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>              }
>          }
>      }
> +
> +    return num_dirty;
>  }
>  #endif /* not _WIN32 */
>  
> -- 
> 2.31.1
> 

-- 
Peter Xu




* Re: [PATCH v2 2/2] hw/vfio: Add nr of dirty pages to vfio_get_dirty_bitmap tracepoint
  2023-05-25 11:52   ` Philippe Mathieu-Daudé
@ 2023-05-25 13:36     ` Joao Martins
  0 siblings, 0 replies; 9+ messages in thread
From: Joao Martins @ 2023-05-25 13:36 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Alex Williamson, Cedric Le Goater, Paolo Bonzini, Peter Xu,
	David Hildenbrand, Avihai Horon, qemu-devel

On 25/05/2023 12:52, Philippe Mathieu-Daudé wrote:
> Hi Joao,
> 
> On 25/5/23 13:43, Joao Martins wrote:
>> Include the number of dirty pages in the vfio_get_dirty_bitmap
>> tracepoint. The count is fetched from the newly added return value of
>> cpu_physical_memory_set_dirty_lebitmap().
>>
>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
>> ---
>>   hw/vfio/common.c     | 7 ++++---
>>   hw/vfio/trace-events | 2 +-
>>   2 files changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 4d01ea351515..3c9af2fed1b1 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -1747,6 +1747,7 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
>>       bool all_device_dirty_tracking =
>>           vfio_devices_all_device_dirty_tracking(container);
>>       VFIOBitmap vbmap;
>> +    uint64_t dirty;
> 
> Could we rename this 'dirty_pages'?
> 
Yeap, will do.

>>       int ret;
>>         if (!container->dirty_pages_supported && !all_device_dirty_tracking) {
>> @@ -1771,11 +1772,11 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
>>           goto out;
>>       }
>>   -    cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
>> -                                           vbmap.pages);
>> +    dirty = cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
>> +                                                   vbmap.pages);
>>         trace_vfio_get_dirty_bitmap(container->fd, iova, size, vbmap.size,
>> -                                ram_addr);
>> +                                ram_addr, dirty);
>>   out:
>>       g_free(vbmap.bitmap);
>>   diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
>> index 646e42fd27f9..9265a406eda1 100644
>> --- a/hw/vfio/trace-events
>> +++ b/hw/vfio/trace-events
>> @@ -120,7 +120,7 @@ vfio_region_sparse_mmap_header(const char *name, int index, int nr_areas) "Devic
>>   vfio_region_sparse_mmap_entry(int i, unsigned long start, unsigned long end) "sparse entry %d [0x%lx - 0x%lx]"
>>   vfio_get_dev_region(const char *name, int index, uint32_t type, uint32_t subtype) "%s index %d, %08x/%08x"
>>   vfio_dma_unmap_overflow_workaround(void) ""
>> -vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
>> +vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start, uint64_t dirty) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64" dirty=%"PRIu64
> 
> Ditto.
> 
/me nods



* Re: [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  2023-05-25 13:16   ` Peter Xu
@ 2023-05-25 13:47     ` Joao Martins
  2023-05-25 13:54       ` Peter Xu
  0 siblings, 1 reply; 9+ messages in thread
From: Joao Martins @ 2023-05-25 13:47 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alex Williamson, Cedric Le Goater, Paolo Bonzini,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon

On 25/05/2023 14:16, Peter Xu wrote:
> On Thu, May 25, 2023 at 12:43:20PM +0100, Joao Martins wrote:
>> In preparation for including the number of dirty pages in the
>> vfio_get_dirty_bitmap() tracepoint, return the number of dirty pages in
>> cpu_physical_memory_set_dirty_lebitmap() similar to
>> cpu_physical_memory_sync_dirty_bitmap().
> 
> The patch itself looks good to me, but it's slightly different from the
> sync version: that one acts only on the MIGRATION bitmap and counts
> newly dirtied pages (so it excludes pages that were already dirty in the
> MIGRATION bitmap, even if re-dirtied), while this one counts any dirty
> bits in *bitmap.
> 
Good callout.

> Shall we perhaps state that somewhere explicitly?  A comment on the
> return value above the function might be suitable.
> 

Yeap, something like this?

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 8b8f271d0731..deaf746421da 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -333,6 +333,13 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
 }

 #if !defined(_WIN32)
+
+/*
+ * Contrary to cpu_physical_memory_sync_dirty_bitmap() this function returns
+ * the number of dirty pages in @bitmap passed as argument. On the other hand,
+ * cpu_physical_memory_sync_dirty_bitmap() returns newly dirtied pages that
+ * weren't set in the global migration bitmap.
+ */
 static inline
 uint64_t cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
                                                 ram_addr_t start,



* Re: [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  2023-05-25 13:47     ` Joao Martins
@ 2023-05-25 13:54       ` Peter Xu
  2023-05-25 15:02         ` Joao Martins
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Xu @ 2023-05-25 13:54 UTC (permalink / raw)
  To: Joao Martins
  Cc: qemu-devel, Alex Williamson, Cedric Le Goater, Paolo Bonzini,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon

On Thu, May 25, 2023 at 02:47:26PM +0100, Joao Martins wrote:
> Yeap, Something like this?
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 8b8f271d0731..deaf746421da 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -333,6 +333,13 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>  }
> 
>  #if !defined(_WIN32)
> +
> +/*
> + * Contrary to cpu_physical_memory_sync_dirty_bitmap() this function returns
> + * the number of dirty pages in @bitmap passed as argument. On the other hand,
> + * cpu_physical_memory_sync_dirty_bitmap() returns newly dirtied pages that
> + * weren't set in the global migration bitmap.
> + */
>  static inline
>  uint64_t cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>                                                  ram_addr_t start,
> 

Good enough for me. :)  With that, feel free to add:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu




* Re: [PATCH v2 1/2] exec/ram_addr: return nr of dirty pages in cpu_physical_memory_set_dirty_lebitmap()
  2023-05-25 13:54       ` Peter Xu
@ 2023-05-25 15:02         ` Joao Martins
  0 siblings, 0 replies; 9+ messages in thread
From: Joao Martins @ 2023-05-25 15:02 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alex Williamson, Cedric Le Goater, Paolo Bonzini,
	David Hildenbrand, Philippe Mathieu-Daude, Avihai Horon

On 25/05/2023 14:54, Peter Xu wrote:
> On Thu, May 25, 2023 at 02:47:26PM +0100, Joao Martins wrote:
>> Yeap, Something like this?
>>
>> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
>> index 8b8f271d0731..deaf746421da 100644
>> --- a/include/exec/ram_addr.h
>> +++ b/include/exec/ram_addr.h
>> @@ -333,6 +333,13 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>>  }
>>
>>  #if !defined(_WIN32)
>> +
>> +/*
>> + * Contrary to cpu_physical_memory_sync_dirty_bitmap() this function returns
>> + * the number of dirty pages in @bitmap passed as argument. On the other hand,
>> + * cpu_physical_memory_sync_dirty_bitmap() returns newly dirtied pages that
>> + * weren't set in the global migration bitmap.
>> + */
>>  static inline
>>  uint64_t cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>>                                                  ram_addr_t start,
>>
> 
> Good enough to me. :)  With that, feel free to add:
> 
> Reviewed-by: Peter Xu <peterx@redhat.com>
> 
Thank you!


